Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,592
| 4,187,686,555
|
IssuesEvent
|
2016-06-23 18:16:47
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Some identifiers from EUCTR are wrong
|
bug Data cleaning Processors
|
For example, http://explorer.opentrials.net/trials/0aebad21-70e9-4b91-b2a8-9fc81a5fd253. Its primary identifier is EUCTR2015-005843-15**-3rd**, however it should be EUCTR2015-005843-15. The `-3rd` appears to come from the source URL (https://www.clinicaltrialsregister.eu/ctr-search/trial/2015-005843-15/3rd)
|
1.0
|
Some identifiers from EUCTR are wrong - For example, http://explorer.opentrials.net/trials/0aebad21-70e9-4b91-b2a8-9fc81a5fd253. Its primary identifier is EUCTR2015-005843-15**-3rd**, however it should be EUCTR2015-005843-15. The `-3rd` appears to come from the source URL (https://www.clinicaltrialsregister.eu/ctr-search/trial/2015-005843-15/3rd)
|
process
|
some identifiers from euctr are wrong for example its primary identifier is however it should be the appears to come from the source url
| 1
|
3,319
| 6,429,389,580
|
IssuesEvent
|
2017-08-10 01:20:51
|
LibreHealthIO/LibreEHR
|
https://api.github.com/repos/LibreHealthIO/LibreEHR
|
closed
|
Possible Error in Flow Board with Double booking.
|
bug Changes Requested enhancement Work in Process
|
This is 2 providers with 11:00 appointments . Should this be showing Double booked?


|
1.0
|
Possible Error in Flow Board with Double booking. - This is 2 providers with 11:00 appointments . Should this be showing Double booked?


|
process
|
possible error in flow board with double booking this is providers with appointments should this be showing double booked
| 1
|
618,549
| 19,474,313,901
|
IssuesEvent
|
2021-12-24 09:08:49
|
literakl/mezinamiridici
|
https://api.github.com/repos/literakl/mezinamiridici
|
closed
|
Mongo findOneAndUpdate is deprecated
|
type: bug priority: P2
|
[MONGODB DRIVER] DeprecationWarning: collection.findOneAndUpdate option [returnOriginal] is deprecated and will be removed in a later version.
voteComment.js:74
|
1.0
|
Mongo findOneAndUpdate is deprecated - [MONGODB DRIVER] DeprecationWarning: collection.findOneAndUpdate option [returnOriginal] is deprecated and will be removed in a later version.
voteComment.js:74
|
non_process
|
mongo findoneandupdate is deprecated deprecationwarning collection findoneandupdate option is deprecated and will be removed in a later version votecomment js
| 0
|
588,916
| 17,685,905,553
|
IssuesEvent
|
2021-08-24 01:28:46
|
gw2efficiency/issues
|
https://api.github.com/repos/gw2efficiency/issues
|
closed
|
Add day/price heatmap to TP
|
1-Type: Feature 2-Priority: B 3-Complexity: Medium 5-Area: Tradingpost 9-Status: For next release 4-Impact: Low
|
2. Another toggle for items in the tradingpost like the "market depth graph" but then the toggle swaps the graph to a heatmap like this which has been shamelessly stolen from WoW.
https://i.imgur.com/Y5LT9Mj.png
|
1.0
|
Add day/price heatmap to TP - 2. Another toggle for items in the tradingpost like the "market depth graph" but then the toggle swaps the graph to a heatmap like this which has been shamelessly stolen from WoW.
https://i.imgur.com/Y5LT9Mj.png
|
non_process
|
add day price heatmap to tp another toggle for items in the tradingpost like the market depth graph but then the toggle swaps the graph to a heatmap like this which has been shamelessly stolen from wow
| 0
|
115,279
| 4,662,265,736
|
IssuesEvent
|
2016-10-05 02:37:23
|
EvgeniyGor/StudentRecords
|
https://api.github.com/repos/EvgeniyGor/StudentRecords
|
closed
|
Вопрос для чего нужна страница импорт/экспорт?
|
help wanted Priority: HIGH
|
Какой функционал включает в себя данная страница и почему её нельзя заменить просто кнопкой?
|
1.0
|
Вопрос для чего нужна страница импорт/экспорт? - Какой функционал включает в себя данная страница и почему её нельзя заменить просто кнопкой?
|
non_process
|
вопрос для чего нужна страница импорт экспорт какой функционал включает в себя данная страница и почему её нельзя заменить просто кнопкой
| 0
|
121,918
| 17,671,183,755
|
IssuesEvent
|
2021-08-23 06:23:47
|
AlexRogalskiy/screenshots
|
https://api.github.com/repos/AlexRogalskiy/screenshots
|
opened
|
CVE-2015-9251 (Medium) detected in jquery-1.8.1.min.js, jquery-1.9.1.js
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: screenshots/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: screenshots/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: /node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/screenshots/commit/b5589f657792bcb8b68c618fb63df7fab0d2c73b">b5589f657792bcb8b68c618fb63df7fab0d2c73b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-9251 (Medium) detected in jquery-1.8.1.min.js, jquery-1.9.1.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: screenshots/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: screenshots/node_modules/tinygradient/bower_components/tinycolor/index.html</p>
<p>Path to vulnerable library: /node_modules/tinygradient/bower_components/tinycolor/demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/screenshots/commit/b5589f657792bcb8b68c618fb63df7fab0d2c73b">b5589f657792bcb8b68c618fb63df7fab0d2c73b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js jquery js cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js javascript library for dom operations library home page a href path to dependency file screenshots node modules redeyed examples browser index html path to vulnerable library node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file screenshots node modules tinygradient bower components tinycolor index html path to vulnerable library node modules tinygradient bower components tinycolor demo jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
2,701
| 5,556,805,444
|
IssuesEvent
|
2017-03-24 10:11:57
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
opened
|
PutElasticsearch throws UnsupportedOperationException when duplicate document is found
|
bug processor
|
# Expected behavior and actual behavior.
Must filter duplicate document and not crash
Job aborted due to stage failure: Task 60 in stage 486.0 failed 8 times, most recent failure: Lost task 60.7 in stage 486.0 (TID 68192, dlpe17206.prod.fdj.fr): java.lang.UnsupportedOperationException
at scala.collection.convert.Wrappers$IteratorWrapper.remove(Wrappers.scala:33)
at scala.collection.convert.Wrappers$IteratorWrapper.remove(Wrappers.scala:28)
at com.hurence.logisland.processor.elasticsearch.PutElasticsearch.process(PutElasticsearch.java:246)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1$$anonfun$apply$1.apply(KafkaRecordStreamParallelProcessing.scala:160)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1$$anonfun$apply$1.apply(KafkaRecordStreamParallelProcessing.scala:128)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1.apply(KafkaRecordStreamParallelProcessing.scala:128)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1.apply(KafkaRecordStreamParallelProcessing.scala:96)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
# Steps to reproduce the problem.
sends a set of duplicate records
# Specifications like the version of the project, operating system, or hardware.
|
1.0
|
PutElasticsearch throws UnsupportedOperationException when duplicate document is found - # Expected behavior and actual behavior.
Must filter duplicate document and not crash
Job aborted due to stage failure: Task 60 in stage 486.0 failed 8 times, most recent failure: Lost task 60.7 in stage 486.0 (TID 68192, dlpe17206.prod.fdj.fr): java.lang.UnsupportedOperationException
at scala.collection.convert.Wrappers$IteratorWrapper.remove(Wrappers.scala:33)
at scala.collection.convert.Wrappers$IteratorWrapper.remove(Wrappers.scala:28)
at com.hurence.logisland.processor.elasticsearch.PutElasticsearch.process(PutElasticsearch.java:246)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1$$anonfun$apply$1.apply(KafkaRecordStreamParallelProcessing.scala:160)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1$$anonfun$apply$1.apply(KafkaRecordStreamParallelProcessing.scala:128)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1.apply(KafkaRecordStreamParallelProcessing.scala:128)
at com.hurence.logisland.stream.spark.KafkaRecordStreamParallelProcessing$$anonfun$process$1.apply(KafkaRecordStreamParallelProcessing.scala:96)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
# Steps to reproduce the problem.
sends a set of duplicate records
# Specifications like the version of the project, operating system, or hardware.
|
process
|
putelasticsearch throws unsupportedoperationexception when duplicate document is found expected behavior and actual behavior must filter duplicate document and not crash job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid prod fdj fr java lang unsupportedoperationexception at scala collection convert wrappers iteratorwrapper remove wrappers scala at scala collection convert wrappers iteratorwrapper remove wrappers scala at com hurence logisland processor elasticsearch putelasticsearch process putelasticsearch java at com hurence logisland stream spark kafkarecordstreamparallelprocessing anonfun process anonfun apply apply kafkarecordstreamparallelprocessing scala at com hurence logisland stream spark kafkarecordstreamparallelprocessing anonfun process anonfun apply apply kafkarecordstreamparallelprocessing scala at scala collection iterator class foreach iterator scala at scala collection abstractiterator foreach iterator scala at scala collection iterablelike class foreach iterablelike scala at scala collection abstractiterable foreach iterable scala at com hurence logisland stream spark kafkarecordstreamparallelprocessing anonfun process apply kafkarecordstreamparallelprocessing scala at com hurence logisland stream spark kafkarecordstreamparallelprocessing anonfun process apply kafkarecordstreamparallelprocessing scala at org apache spark rdd rdd anonfun foreachpartition anonfun apply apply rdd scala at org apache spark rdd rdd anonfun foreachpartition anonfun apply apply rdd scala at org apache spark sparkcontext anonfun runjob apply sparkcontext scala at org apache spark sparkcontext anonfun runjob apply sparkcontext scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java steps to reproduce the problem sends a set of duplicate records specifications like the version of the project operating system or hardware
| 1
|
6,180
| 9,087,801,946
|
IssuesEvent
|
2019-02-18 14:38:58
|
kubetenancy/tenant-integrator
|
https://api.github.com/repos/kubetenancy/tenant-integrator
|
opened
|
Gitlab Integrator
|
enhancement in process
|
One of the first integrators to be built is an integrator for Gitlab. The integrator uses the generic integrator library internally.
|
1.0
|
Gitlab Integrator - One of the first integrators to be built is an integrator for Gitlab. The integrator uses the generic integrator library internally.
|
process
|
gitlab integrator one of the first integrators to be built is an integrator for gitlab the integrator uses the generic integrator library internally
| 1
|
5,608
| 8,468,914,078
|
IssuesEvent
|
2018-10-23 21:07:11
|
carloseduardov8/Viajato
|
https://api.github.com/repos/carloseduardov8/Viajato
|
closed
|
Criar base de dados de seguros
|
Priority:Normal Process: Setup Environment
|
Definir seguradoras e contratos para apólices de seguro de viagem.
|
1.0
|
Criar base de dados de seguros - Definir seguradoras e contratos para apólices de seguro de viagem.
|
process
|
criar base de dados de seguros definir seguradoras e contratos para apólices de seguro de viagem
| 1
|
20,284
| 26,915,218,209
|
IssuesEvent
|
2023-02-07 05:33:32
|
MikaylaFischler/cc-mek-scada
|
https://api.github.com/repos/MikaylaFischler/cc-mek-scada
|
closed
|
Process Induction Matrix Charge Self Limiting
|
coordinator safety process control
|
The system should monitor induction matrix charge level and slow/stop the reactors as it nears high charge percentage.
- [x] SCRAM at limit
- [x] Hold until threshold before re-enabling to prevent rapid enable/disable
- [x] High charge state to wait in, returning from it would re-init process controllers so this is preferable
|
1.0
|
Process Induction Matrix Charge Self Limiting - The system should monitor induction matrix charge level and slow/stop the reactors as it nears high charge percentage.
- [x] SCRAM at limit
- [x] Hold until threshold before re-enabling to prevent rapid enable/disable
- [x] High charge state to wait in, returning from it would re-init process controllers so this is preferable
|
process
|
process induction matrix charge self limiting the system should monitor induction matrix charge level and slow stop the reactors as it nears high charge percentage scram at limit hold until threshold before re enabling to prevent rapid enable disable high charge state to wait in returning from it would re init process controllers so this is preferable
| 1
|
828,678
| 31,838,780,006
|
IssuesEvent
|
2023-09-14 14:59:59
|
sourcegraph/about
|
https://api.github.com/repos/sourcegraph/about
|
closed
|
Add Raman and Erika to About Sourcegraph + Update Yegge
|
Medium priority
|
https://about.sourcegraph.com/about
1. Erika Rice Scherpelz
Head of Engineering (Search and Platform)
LinkedIn: https://www.linkedin.com/in/erikars/
Github: https://github.com/erikars
No Twitter
Add her after Steve Yegge (where Dan Adler is).

2. Raman Sharma
Chief Marketing Officer
LinkedIn: https://www.linkedin.com/in/ramansharma
Twitter: https://twitter.com/rasharm_
Github: to share soon

3. Update Steve Yegge's role
Head of Engineering (Cody and AI)
|
1.0
|
Add Raman and Erika to About Sourcegraph + Update Yegge - https://about.sourcegraph.com/about
1. Erika Rice Scherpelz
Head of Engineering (Search and Platform)
LinkedIn: https://www.linkedin.com/in/erikars/
Github: https://github.com/erikars
No Twitter
Add her after Steve Yegge (where Dan Adler is).

2. Raman Sharma
Chief Marketing Officer
LinkedIn: https://www.linkedin.com/in/ramansharma
Twitter: https://twitter.com/rasharm_
Github: to share soon

3. Update Steve Yegge's role
Head of Engineering (Cody and AI)
|
non_process
|
add raman and erika to about sourcegraph update yegge erika rice scherpelz head of engineering search and platform linkedin github no twitter add her after steve yegge where dan adler is raman sharma chief marketing officer linkedin twitter github to share soon update steve yegge s role head of engineering cody and ai
| 0
|
13,243
| 15,715,515,252
|
IssuesEvent
|
2021-03-28 01:45:09
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
FreeBSD recognized as Linux
|
change log-processing
|
````
echo '192.168.1.1 - - [23/Mar/2021:04:01:15 +0100] "GET example.com HTTP/2.0" 200 3606 "https://duckduckgo.com/" "Mozilla/5.0 (X11; FreeBSD amd64; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"' | goaccess --log-format='COMBINED'
````
navigate to `6 - Operating Systems`, it shows `Linux`, the expected value would have been `FreeBSD`
|
1.0
|
FreeBSD recognized as Linux - ````
echo '192.168.1.1 - - [23/Mar/2021:04:01:15 +0100] "GET example.com HTTP/2.0" 200 3606 "https://duckduckgo.com/" "Mozilla/5.0 (X11; FreeBSD amd64; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"' | goaccess --log-format='COMBINED'
````
navigate to `6 - Operating Systems`, it shows `Linux`, the expected value would have been `FreeBSD`
|
process
|
freebsd recognized as linux echo get example com http mozilla freebsd linux applewebkit khtml like gecko chrome safari goaccess log format combined navigate to operating systems it shows linux the expected value would have been freebsd
| 1
|
19,991
| 26,466,029,300
|
IssuesEvent
|
2023-01-16 23:43:18
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Interger overflow error creating raster layer unique values report
|
Processing Bug
|
### What is the bug or the crash?
Feature could not be written to C:/path/output.shp: Error converting value (21345402912) for attribute field count: Value "21345402912" is too large for integer field. Could not write feature into OUTPUT_TABLE
Execution failed after 586.70 seconds (9 minutes 47 seconds)
### Steps to reproduce the issue
1. Take the age of secondary vegetation for entire Brazil [here](https://onewri-my.sharepoint.com/:i:/g/personal/jefferson_ferreira_wri_org/EeV-vW1GsJ9FucKaruN7qXIB3wEzyYK-Wn2ubEPod7ryog?e=iAVNQf)
2. open Processing Toolbox > Raster Analysis > Raster layer unique values report
3. in the `unique values table [optional]` enter a file output
4. run
### Versions
QGIS version
3.24.1-Tisler
QGIS code revision
5709b824
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.2-CAPI-1.16.0
SQLite version
3.37.2
PDAL version
2.3.0
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
ana_data_acquisition
1.0
cluz
2020.3.18
dados_censo
0.40
mapbiomascollection
1.4
OpenTopography-DEM-Downloader
1.0
OSMDownloader
1.0.3
pg_raster_import
1.0.10
PLUGIN
1.2.1
quick_map_services
0.19.29
searchlayers
3.0.7
SemiAutomaticClassificationPlugin
7.10.6
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
sagaprovider
2.12.99
ALSO TESTED WITH
QGIS version
3.22.4-Białowieża
QGIS code revision
ce8e65e9
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.2-CAPI-1.16.0
SQLite version
3.37.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
ana_data_acquisition
1.0
cluz
2020.3.18
dados_censo
0.40
mapbiomascollection
1.4
OpenTopography-DEM-Downloader
1.0
OSMDownloader
1.0.3
pg_raster_import
1.0.10
PLUGIN
1.2.1
quick_map_services
0.19.29
searchlayers
3.0.7
SemiAutomaticClassificationPlugin
7.10.6
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
It seems to be related to #40503
|
1.0
|
Interger overflow error creating raster layer unique values report - ### What is the bug or the crash?
Feature could not be written to C:/path/output.shp: Error converting value (21345402912) for attribute field count: Value "21345402912" is too large for integer field. Could not write feature into OUTPUT_TABLE
Execution failed after 586.70 seconds (9 minutes 47 seconds)
### Steps to reproduce the issue
1. Take the age of secondary vegetation for entire Brazil [here](https://onewri-my.sharepoint.com/:i:/g/personal/jefferson_ferreira_wri_org/EeV-vW1GsJ9FucKaruN7qXIB3wEzyYK-Wn2ubEPod7ryog?e=iAVNQf)
2. open Processing Toolbox > Raster Analysis > Raster layer unique values report
3. in the `unique values table [optional]` enter a file output
4. run
### Versions
QGIS version
3.24.1-Tisler
QGIS code revision
5709b824
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.2-CAPI-1.16.0
SQLite version
3.37.2
PDAL version
2.3.0
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
ana_data_acquisition
1.0
cluz
2020.3.18
dados_censo
0.40
mapbiomascollection
1.4
OpenTopography-DEM-Downloader
1.0
OSMDownloader
1.0.3
pg_raster_import
1.0.10
PLUGIN
1.2.1
quick_map_services
0.19.29
searchlayers
3.0.7
SemiAutomaticClassificationPlugin
7.10.6
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
sagaprovider
2.12.99
ALSO TESTED WITH
QGIS version
3.22.4-Białowieża
QGIS code revision
ce8e65e9
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
GEOS version
3.10.2-CAPI-1.16.0
SQLite version
3.37.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
ana_data_acquisition
1.0
cluz
2020.3.18
dados_censo
0.40
mapbiomascollection
1.4
OpenTopography-DEM-Downloader
1.0
OSMDownloader
1.0.3
pg_raster_import
1.0.10
PLUGIN
1.2.1
quick_map_services
0.19.29
searchlayers
3.0.7
SemiAutomaticClassificationPlugin
7.10.6
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
It seems to be related to #40503
|
process
|
interger overflow error creating raster layer unique values report what is the bug or the crash feature could not be written to c path output shp error converting value for attribute field count value is too large for integer field could not write feature into output table execution failed after seconds minutes seconds steps to reproduce the issue take the age of secondary vegetation for entire brazil open processing toolbox raster analysis raster layer unique values report in the unique values table enter a file output run versions qgis version tisler qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version windows version active python plugins ana data acquisition cluz dados censo mapbiomascollection opentopography dem downloader osmdownloader pg raster import plugin quick map services searchlayers semiautomaticclassificationplugin db manager grassprovider metasearch processing sagaprovider also tested with qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins ana data acquisition cluz dados censo mapbiomascollection opentopography dem downloader osmdownloader pg raster import plugin quick map services searchlayers semiautomaticclassificationplugin db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context it seems to be related to
| 1
|
15,641
| 19,826,002,505
|
IssuesEvent
|
2022-01-20 06:34:27
|
varabyte/kobweb
|
https://api.github.com/repos/varabyte/kobweb
|
opened
|
Publish all artifacts on mavenCentral
|
process
|
Right now we're hosting artifacts on Google cloud in central US but we'll have potential users all over the world, so probably maven central is doing a better job at supporting everyone than my setup is
|
1.0
|
Publish all artifacts on mavenCentral - Right now we're hosting artifacts on Google cloud in central US but we'll have potential users all over the world, so probably maven central is doing a better job at supporting everyone than my setup is
|
process
|
publish all artifacts on mavencentral right now we re hosting artifacts on google cloud in central us but we ll have potential users all over the world so probably maven central is doing a better job at supporting everyone than my setup is
| 1
|
17,016
| 9,574,995,038
|
IssuesEvent
|
2019-05-07 04:25:05
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
[tflite] tflite file with single ADD op produces duplicated outputs
|
comp:lite type:bug/performance
|
<em>Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): r1.13
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
You can collect some of this information using our environment capture
[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)
You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import
tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c
"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
**Describe the current behavior**
I have created `.tflite` with single `ADD` op. It has two inputs and one output.
When reading this `.tflite` with interpreter(e.g. `tensorflow.lite.python`)
```py
import sys
import numpy as np
from tensorflow.lite.python import interpreter as interpreter_wrapper
interpreter = interpreter_wrapper.Interpreter(model_path=sys.argv[1])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
```
```
[{'name': 'input0', 'index': 0, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}, {'name': 'input1', 'index': 1, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}, {'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
```
Code using C++ interpreter also reports duplicated outputs(2 2), even though outout of ADD(builtin code 0) shows one output.
```
Interpreter has 3 tensors and 1 nodes
Inputs: 0 1
Outputs: 2 2
Tensor 0 input0 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Tensor 1 input1 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Tensor 2 output0 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Node 0 Operator Builtin Code 0
Inputs: 0 1
Outputs: 2
```
**Describe the expected behavior**
`get_output_details()` returns unique list of outputs.
**Code to reproduce the issue**
Use attached `.tflite` file to reproduce the issue.
[add.tflite.zip](https://github.com/tensorflow/tensorflow/files/3124805/add.tflite.zip)
|
True
|
[tflite] tflite file with single ADD op produces duplicated outputs - <em>Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): r1.13
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
You can collect some of this information using our environment capture
[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)
You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import
tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c
"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
**Describe the current behavior**
I have created `.tflite` with single `ADD` op. It has two inputs and one output.
When reading this `.tflite` with interpreter(e.g. `tensorflow.lite.python`)
```py
import sys
import numpy as np
from tensorflow.lite.python import interpreter as interpreter_wrapper
interpreter = interpreter_wrapper.Interpreter(model_path=sys.argv[1])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
```
```
[{'name': 'input0', 'index': 0, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}, {'name': 'input1', 'index': 1, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
[{'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}, {'name': 'output0', 'index': 2, 'shape': array([2, 5], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
```
Code using C++ interpreter also reports duplicated outputs(2 2), even though outout of ADD(builtin code 0) shows one output.
```
Interpreter has 3 tensors and 1 nodes
Inputs: 0 1
Outputs: 2 2
Tensor 0 input0 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Tensor 1 input1 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Tensor 2 output0 kTfLiteFloat32 kTfLiteArenaRw 40 bytes ( 0.0 MB) 2 5
Node 0 Operator Builtin Code 0
Inputs: 0 1
Outputs: 2
```
**Describe the expected behavior**
`get_output_details()` returns unique list of outputs.
**Code to reproduce the issue**
Use attached `.tflite` file to reproduce the issue.
[add.tflite.zip](https://github.com/tensorflow/tensorflow/files/3124805/add.tflite.zip)
|
non_process
|
tflite file with single add op produces duplicated outputs please make sure that this is a bug as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag bug template system information have i written custom code as opposed to using a stock example script provided in tensorflow no os platform and distribution e g linux ubuntu linux ubuntu mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device tensorflow installed from source or binary binary tensorflow version use command below python version bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory you can collect some of this information using our environment capture you can also obtain the tensorflow version with tf python c import tensorflow as tf print tf git version tf version tf python c import tensorflow as tf print tf version git version tf version version describe the current behavior i have created tflite with single add op it has two inputs and one output when reading this tflite with interpreter e g tensorflow lite python py import sys import numpy as np from tensorflow lite python import interpreter as interpreter wrapper interpreter interpreter wrapper interpreter model path sys argv interpreter allocate tensors input details interpreter get input details output details interpreter get output details print input details print output details dtype dtype quantization name index shape array dtype dtype quantization dtype dtype quantization name index shape array dtype dtype quantization code using c interpreter also reports duplicated outputs even though outout of add builtin code shows one output interpreter has tensors and nodes inputs outputs tensor ktflitearenarw bytes mb tensor ktflitearenarw bytes mb tensor ktflitearenarw bytes mb node operator builtin code inputs outputs describe the expected behavior get output details returns unique list of outputs code to reproduce the issue use attached tflite file to reproduce the issue
| 0
|
5,818
| 8,653,149,860
|
IssuesEvent
|
2018-11-27 10:05:01
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
<Stack/>: missing dataTest prop
|
Bug Processing
|
**Is your feature request related to a problem? Please describe.**
With changing MMB into Orbit, Cypress tests are failing because of deleting old divs with classnames and replacing them with Orbit components with dataTest. It would be great if dataTest would be available in Stack too, mainly because of Cypress tests.
**Describe the solution you'd like**
add prop `dataTest` into `<Stack />`
|
1.0
|
<Stack/>: missing dataTest prop - **Is your feature request related to a problem? Please describe.**
With changing MMB into Orbit, Cypress tests are failing because of deleting old divs with classnames and replacing them with Orbit components with dataTest. It would be great if dataTest would be available in Stack too, mainly because of Cypress tests.
**Describe the solution you'd like**
add prop `dataTest` into `<Stack />`
|
process
|
missing datatest prop is your feature request related to a problem please describe with changing mmb into orbit cypress tests are failing because of deleting old divs with classnames and replacing them with orbit components with datatest it would be great if datatest would be available in stack too mainly because of cypress tests describe the solution you d like add prop datatest into
| 1
|
13,094
| 15,441,869,430
|
IssuesEvent
|
2021-03-08 06:45:06
|
GoogleCloudPlatform/cloud-sql-jdbc-socket-factory
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory
|
closed
|
Update build badge for new CI builds
|
priority: p3 type: process
|
## Feature Description
Update the badge build in the README to point to the new CI builds in master.
@jsimonweb
|
1.0
|
Update build badge for new CI builds -
## Feature Description
Update the badge build in the README to point to the new CI builds in master.
@jsimonweb
|
process
|
update build badge for new ci builds feature description update the badge build in the readme to point to the new ci builds in master jsimonweb
| 1
|
247,377
| 20,976,036,593
|
IssuesEvent
|
2022-03-28 15:17:38
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: node-postgres failed
|
C-test-failure O-robot O-roachtest T-sql-experience branch-release-21.2
|
roachtest.node-postgres [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583017&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583017&tab=artifacts#/node-postgres) on release-21.2 @ [4caeb8b64a1bc37ba6d95641e982732a89ae2c3a](https://github.com/cockroachdb/cockroach/commits/4caeb8b64a1bc37ba6d95641e982732a89ae2c3a):
```
The test failed on branch=release-21.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/node-postgres/run_1
test_impl.go:274,assertions.go:262,assertions.go:1332,require.go:1231,nodejs_postgres.go:120,nodejs_postgres.go:156,test_runner.go:777:
Error Trace: nodejs_postgres.go:120
nodejs_postgres.go:156
test_runner.go:777
asm_amd64.s:1581
Error: Received unexpected error:
all attempts failed for building node-postgres due to error: output in run_102608.357964513_n1_cd: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-4583017-1647411420-27-n1cpu4:1 -- cd /mnt/data1/node-postgres/ && sudo yarn && sudo yarn lerna bootstrap returned: exit status 20
Test: node-postgres
```
<details><summary>Reproduce</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*node-postgres.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-13848
|
2.0
|
roachtest: node-postgres failed - roachtest.node-postgres [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583017&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4583017&tab=artifacts#/node-postgres) on release-21.2 @ [4caeb8b64a1bc37ba6d95641e982732a89ae2c3a](https://github.com/cockroachdb/cockroach/commits/4caeb8b64a1bc37ba6d95641e982732a89ae2c3a):
```
The test failed on branch=release-21.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/node-postgres/run_1
test_impl.go:274,assertions.go:262,assertions.go:1332,require.go:1231,nodejs_postgres.go:120,nodejs_postgres.go:156,test_runner.go:777:
Error Trace: nodejs_postgres.go:120
nodejs_postgres.go:156
test_runner.go:777
asm_amd64.s:1581
Error: Received unexpected error:
all attempts failed for building node-postgres due to error: output in run_102608.357964513_n1_cd: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-4583017-1647411420-27-n1cpu4:1 -- cd /mnt/data1/node-postgres/ && sudo yarn && sudo yarn lerna bootstrap returned: exit status 20
Test: node-postgres
```
<details><summary>Reproduce</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*node-postgres.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-13848
|
non_process
|
roachtest node postgres failed roachtest node postgres with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts node postgres run test impl go assertions go assertions go require go nodejs postgres go nodejs postgres go test runner go error trace nodejs postgres go nodejs postgres go test runner go asm s error received unexpected error all attempts failed for building node postgres due to error output in run cd home agent work go src github com cockroachdb cockroach bin roachprod run teamcity cd mnt node postgres sudo yarn sudo yarn lerna bootstrap returned exit status test node postgres reproduce see cc cockroachdb sql experience jira issue crdb
| 0
|
569,074
| 16,993,929,700
|
IssuesEvent
|
2021-07-01 02:14:25
|
googleapis/python-automl
|
https://api.github.com/repos/googleapis/python-automl
|
closed
|
tests.system.gapic.v1beta1.test_system_tables_client_v1.TestSystemTablesClient: test_import_data failed
|
api: automl flakybot: issue priority: p1 type: bug
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 160a7adad3f2d53ca6f733a21e72bfe866a5ebc1
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/acec8dfd-5715-4d1c-a03e-cd6d983db6be), [Sponge](http://sponge2/acec8dfd-5715-4d1c-a03e-cd6d983db6be)
status: failed
<details><summary>Test output</summary><br><pre>self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0188e6b490>
@vpcsc_config.skip_if_inside_vpcsc
def test_import_data(self):
client = automl_v1beta1.TablesClient(project=PROJECT, region=REGION)
display_name = _id("t_import")
dataset = client.create_dataset(display_name)
op = client.import_data(
dataset=dataset,
gcs_input_uris="gs://cloud-ml-tables-data/bank-marketing.csv",
)
> self.cancel_and_wait(op)
tests/system/gapic/v1beta1/test_system_tables_client_v1.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0188e6b490>
op = <google.api_core.operation.Operation object at 0x7f01875d4310>
def cancel_and_wait(self, op):
op.cancel()
start = time.time()
sleep_time = 1
while time.time() - start < MAX_WAIT_TIME_SECONDS:
if op.cancelled():
return
time.sleep(sleep_time)
sleep_time = min(sleep_time * 2, MAX_SLEEP_TIME_SECONDS)
> assert op.cancelled()
E assert False
E + where False = <bound method Operation.cancelled of <google.api_core.operation.Operation object at 0x7f01875d4310>>()
E + where <bound method Operation.cancelled of <google.api_core.operation.Operation object at 0x7f01875d4310>> = <google.api_core.operation.Operation object at 0x7f01875d4310>.cancelled
tests/system/gapic/v1beta1/test_system_tables_client_v1.py:59: AssertionError</pre></details>
|
1.0
|
tests.system.gapic.v1beta1.test_system_tables_client_v1.TestSystemTablesClient: test_import_data failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 160a7adad3f2d53ca6f733a21e72bfe866a5ebc1
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/acec8dfd-5715-4d1c-a03e-cd6d983db6be), [Sponge](http://sponge2/acec8dfd-5715-4d1c-a03e-cd6d983db6be)
status: failed
<details><summary>Test output</summary><br><pre>self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0188e6b490>
@vpcsc_config.skip_if_inside_vpcsc
def test_import_data(self):
client = automl_v1beta1.TablesClient(project=PROJECT, region=REGION)
display_name = _id("t_import")
dataset = client.create_dataset(display_name)
op = client.import_data(
dataset=dataset,
gcs_input_uris="gs://cloud-ml-tables-data/bank-marketing.csv",
)
> self.cancel_and_wait(op)
tests/system/gapic/v1beta1/test_system_tables_client_v1.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_system_tables_client_v1.TestSystemTablesClient object at 0x7f0188e6b490>
op = <google.api_core.operation.Operation object at 0x7f01875d4310>
def cancel_and_wait(self, op):
op.cancel()
start = time.time()
sleep_time = 1
while time.time() - start < MAX_WAIT_TIME_SECONDS:
if op.cancelled():
return
time.sleep(sleep_time)
sleep_time = min(sleep_time * 2, MAX_SLEEP_TIME_SECONDS)
> assert op.cancelled()
E assert False
E + where False = <bound method Operation.cancelled of <google.api_core.operation.Operation object at 0x7f01875d4310>>()
E + where <bound method Operation.cancelled of <google.api_core.operation.Operation object at 0x7f01875d4310>> = <google.api_core.operation.Operation object at 0x7f01875d4310>.cancelled
tests/system/gapic/v1beta1/test_system_tables_client_v1.py:59: AssertionError</pre></details>
|
non_process
|
tests system gapic test system tables client testsystemtablesclient test import data failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output self vpcsc config skip if inside vpcsc def test import data self client automl tablesclient project project region region display name id t import dataset client create dataset display name op client import data dataset dataset gcs input uris gs cloud ml tables data bank marketing csv self cancel and wait op tests system gapic test system tables client py self op def cancel and wait self op op cancel start time time sleep time while time time start max wait time seconds if op cancelled return time sleep sleep time sleep time min sleep time max sleep time seconds assert op cancelled e assert false e where false e where cancelled tests system gapic test system tables client py assertionerror
| 0
|
6,135
| 8,998,570,039
|
IssuesEvent
|
2019-02-02 23:06:46
|
jasonblais/mattermost-community
|
https://api.github.com/repos/jasonblais/mattermost-community
|
opened
|
New labelling system for Help Wanted tickets
|
Contributor Journey Process
|
Below is the proposed labelling system for Help Wanted tickets.
Key changes to existing process
- Use labels recommended by GitHub so that contributors can more easily find help wanted and good first issues: https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- Rename difficulty levels to easy/medium/hard
- Break languages (Go, JavaScript) and frameworks (React Native, Redux) to separate categories
- Add new category for repositories
- General clean-up for a more obvious, consistent labelling system
1. Ticket status:
- **Help Wanted**: Recommended by GitHub so that new contributors can more easily find tickets to help with https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- **Up for Grabs**: Signifies whether ticket is still available or taken by a community member.
- Automate this via slash commands, e.g. /taken
- **PR Submitted**: Signifies when a contribution is made.
- Automate based on keywords, e.g. when GH ticket is linked in the help wanted ticket
2. Difficulty:
- **First Good Issue**: Recommended by GitHub so that new contributors can more easily find starter tickets to get them introduced to the codebase https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- **Difficulty/1:easy**: Easy tickets
- **Difficulty/2:medium**: Medium tickets
- **Difficulty/3:hard**: Hard tickets
3. Repository: Mattermost has a lot of repositories, but all help wanted tickets are created in the mattermost-server repo. Thus, it's important to be clear which repository changes are submitted to (see an example where contributor was confused and submitted their changes to the mattermost-redux repo instead of mattermost-webapp: https://github.com/mattermost/mattermost-redux/pull/761)
- **Repository/mattermost-server**
- **Repository/mattermost-webapp**
- **Repository/mattermost-redux**
- **Repository/mattermost-mobile**
- **Repository/mattermost-plugin-jira**
- etc..
4. Language: Contributors can then filter based on their skill sets, or find easy tickets for a language they want to learn
- **Language/Go**
- **Language/JavaScript**
5. (Optional) Framework: Contributors interesting in a specific framework such as React Native or Redux can filter for these tickets
- **Framework/Redux**
- **Framework/React Native**
- **Framework/ReactJS**
6. (Optional) Area: Which area the feature is related to. This is helpful for larger campaigns contributors are interested in. Below are some examples, some of which are already in use (e.g. `Add E2E Tests`
- **Area/APIv4**
- **Area/Add E2E Tests**
- **Area/Plugins**
- **Area/Build**
- **Area/Dev Tools**
- **Area/Code Quality**
7. When PR submitted:
- See contribution process [to be moved to Dev Docs] for the three PR stages https://github.com/mattermost/mattermost-server/blob/master/CONTRIBUTING.md
- **1: PM Review**
- **2: Dev Review**
- **3: Ready to Merge**
- See process for inactive contributions https://developers.mattermost.com/contribute/getting-started/inactive-contributions/
- **Lifecycle/1:stale**
- **Lifecycle/2:inactive**
- **Lifecycle/3:orphaned**
- **Lifecycle/frozen**
|
1.0
|
New labelling system for Help Wanted tickets - Below is the proposed labelling system for Help Wanted tickets.
Key changes to existing process
- Use labels recommended by GitHub so that contributors can more easily find help wanted and good first issues: https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- Rename difficulty levels to easy/medium/hard
- Break languages (Go, JavaScript) and frameworks (React Native, Redux) to separate categories
- Add new category for repositories
- General clean-up for a more obvious, consistent labelling system
1. Ticket status:
- **Help Wanted**: Recommended by GitHub so that new contributors can more easily find tickets to help with https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- **Up for Grabs**: Signifies whether ticket is still available or taken by a community member.
- Automate this via slash commands, e.g. /taken
- **PR Submitted**: Signifies when a contribution is made.
- Automate based on keywords, e.g. when GH ticket is linked in the help wanted ticket
2. Difficulty:
- **First Good Issue**: Recommended by GitHub so that new contributors can more easily find starter tickets to get them introduced to the codebase https://help.github.com/articles/helping-new-contributors-find-your-project-with-labels/
- **Difficulty/1:easy**: Easy tickets
- **Difficulty/2:medium**: Medium tickets
- **Difficulty/3:hard**: Hard tickets
3. Repository: Mattermost has a lot of repositories, but all help wanted tickets are created in the mattermost-server repo. Thus, it's important to be clear which repository changes are submitted to (see an example where contributor was confused and submitted their changes to the mattermost-redux repo instead of mattermost-webapp: https://github.com/mattermost/mattermost-redux/pull/761)
- **Repository/mattermost-server**
- **Repository/mattermost-webapp**
- **Repository/mattermost-redux**
- **Repository/mattermost-mobile**
- **Repository/mattermost-plugin-jira**
- etc..
4. Language: Contributors can then filter based on their skill sets, or find easy tickets for a language they want to learn
- **Language/Go**
- **Language/JavaScript**
5. (Optional) Framework: Contributors interesting in a specific framework such as React Native or Redux can filter for these tickets
- **Framework/Redux**
- **Framework/React Native**
- **Framework/ReactJS**
6. (Optional) Area: Which area the feature is related to. This is helpful for larger campaigns contributors are interested in. Below are some examples, some of which are already in use (e.g. `Add E2E Tests`
- **Area/APIv4**
- **Area/Add E2E Tests**
- **Area/Plugins**
- **Area/Build**
- **Area/Dev Tools**
- **Area/Code Quality**
7. When PR submitted:
- See contribution process [to be moved to Dev Docs] for the three PR stages https://github.com/mattermost/mattermost-server/blob/master/CONTRIBUTING.md
- **1: PM Review**
- **2: Dev Review**
- **3: Ready to Merge**
- See process for inactive contributions https://developers.mattermost.com/contribute/getting-started/inactive-contributions/
- **Lifecycle/1:stale**
- **Lifecycle/2:inactive**
- **Lifecycle/3:orphaned**
- **Lifecycle/frozen**
|
process
|
new labelling system for help wanted tickets below is the proposed labelling system for help wanted tickets key changes to existing process use labels recommended by github so that contributors can more easily find help wanted and good first issues rename difficulty levels to easy medium hard break languages go javascript and frameworks react native redux to separate categories add new category for repositories general clean up for a more obvious consistent labelling system ticket status help wanted recommended by github so that new contributors can more easily find tickets to help with up for grabs signifies whether ticket is still available or taken by a community member automate this via slash commands e g taken pr submitted signifies when a contribution is made automate based on keywords e g when gh ticket is linked in the help wanted ticket difficulty first good issue recommended by github so that new contributors can more easily find starter tickets to get them introduced to the codebase difficulty easy easy tickets difficulty medium medium tickets difficulty hard hard tickets repository mattermost has a lot of repositories but all help wanted tickets are created in the mattermost server repo thus it s important to be clear which repository changes are submitted to see an example where contributor was confused and submitted their changes to the mattermost redux repo instead of mattermost webapp repository mattermost server repository mattermost webapp repository mattermost redux repository mattermost mobile repository mattermost plugin jira etc language contributors can then filter based on their skill sets or find easy tickets for a language they want to learn language go language javascript optional framework contributors interesting in a specific framework such as react native or redux can filter for these tickets framework redux framework react native framework reactjs optional area which area the feature is related to this is helpful for larger campaigns contributors are interested in below are some examples some of which are already in use e g add tests area area add tests area plugins area build area dev tools area code quality when pr submitted see contribution process for the three pr stages pm review dev review ready to merge see process for inactive contributions lifecycle stale lifecycle inactive lifecycle orphaned lifecycle frozen
| 1
|
2,388
| 5,187,642,415
|
IssuesEvent
|
2017-01-20 17:24:52
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Display task name in completed start event
|
browser: all bug comp: activiti-processList
|
We should display task name within a completed start event
1. Go to processes
2. Go to completed filter
3. Click on completed start event
**Completed start event**

**Completed user task**

|
1.0
|
Display task name in completed start event - We should display task name within a completed start event
1. Go to processes
2. Go to completed filter
3. Click on completed start event
**Completed start event**

**Completed user task**

|
process
|
display task name in completed start event we should display task name within a completed start event go to processes go to completed filter click on completed start event completed start event completed user task
| 1
|
7,080
| 10,229,387,221
|
IssuesEvent
|
2019-08-17 12:13:56
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
`verdi process watch` error
|
priority/nice-to-have topic/processes topic/verdi type/bug
|
There seems to be a problem with the command `verdi process watch` where the following exception was reached:
```
07/04/2019 01:08:40 PM <19245> kiwipy.rmq.communicator: [ERROR] Exception in broadcast receiver
Traceback (most recent call last):
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/kiwipy/rmq/communicator.py", line 237, in _on_broadcast
msg[messages.BroadcastMessage.CORRELATION_ID])
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/gen.py", line 1055, in run
value = future.result()
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/gen.py", line 292, in wrapper
result = func(*args, **kwargs)
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/kiwipy/filters.py", line 33, in __call__
self._subscriber(communicator, body, sender, subject, correlation_id)
TypeError: _print() takes exactly 4 arguments (5 given)
```
The `_print` function takes 4 arguments:
https://github.com/aiidateam/aiida_core/blob/e232f946e2b5c1c55c8f8b2c903f05355fe9deea/aiida/cmdline/commands/cmd_process.py#L262-L273
But the callback gives 5 arguments in `kiwipy` here:
https://github.com/aiidateam/kiwipy/blob/f0b3ea3c6cd1ec9653586ee5ca5bdb6dcc84e565/kiwipy/filters.py#L9-L33
The `communicator` argument was not included in the `_print` function.
BTW I am not quite sure what exactly this command does and it appears to hang most of the time....
|
1.0
|
`verdi process watch` error - There seems to be a problem with the command `verdi process watch` where the following exception was reached:
```
07/04/2019 01:08:40 PM <19245> kiwipy.rmq.communicator: [ERROR] Exception in broadcast receiver
Traceback (most recent call last):
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/kiwipy/rmq/communicator.py", line 237, in _on_broadcast
msg[messages.BroadcastMessage.CORRELATION_ID])
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/gen.py", line 1055, in run
value = future.result()
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/tornado/gen.py", line 292, in wrapper
result = func(*args, **kwargs)
File "/home/bonan/miniconda3/envs/aiida-1.0-main/lib/python2.7/site-packages/kiwipy/filters.py", line 33, in __call__
self._subscriber(communicator, body, sender, subject, correlation_id)
TypeError: _print() takes exactly 4 arguments (5 given)
```
The `_print` function takes 4 arguments:
https://github.com/aiidateam/aiida_core/blob/e232f946e2b5c1c55c8f8b2c903f05355fe9deea/aiida/cmdline/commands/cmd_process.py#L262-L273
But the callback gives 5 arguments in `kiwipy` here:
https://github.com/aiidateam/kiwipy/blob/f0b3ea3c6cd1ec9653586ee5ca5bdb6dcc84e565/kiwipy/filters.py#L9-L33
The `communicator` argument was not included in the `_print` function.
BTW I am not quite sure what exactly this command does and it appears to hang most of the time....
|
process
|
verdi process watch error there seems to be a problem with the command verdi process watch where the following exception was reached pm kiwipy rmq communicator exception in broadcast receiver traceback most recent call last file home bonan envs aiida main lib site packages kiwipy rmq communicator py line in on broadcast msg file home bonan envs aiida main lib site packages tornado gen py line in run value future result file home bonan envs aiida main lib site packages tornado concurrent py line in result raise exc info self exc info file home bonan envs aiida main lib site packages tornado gen py line in wrapper result func args kwargs file home bonan envs aiida main lib site packages kiwipy filters py line in call self subscriber communicator body sender subject correlation id typeerror print takes exactly arguments given the print function takes arguments but the callback gives arguments in kiwipy here the communicator argument was not included in the print function btw i am not quite sure what exactly this command does and it appears to hang most of the time
| 1
|
11,005
| 13,793,019,353
|
IssuesEvent
|
2020-10-09 14:23:42
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Create tables in Glue asynchronously
|
p0 story team:data processing
|
### Description
Currently, Panther creates new Glue tables when a user onboards a source. Panther creates one table (and Athena view) for every log type added to a new source.
We should modify this behavior so that we create a table only upon receiving data from a given log type.
This has several advantages:
1. Faster updates in log sources. Updating a lot source currently several seconds, mainly because we try to create/update the underlying tables as part of the log source creation/update process
2. We no longer risk timeouts during log source creation/update https://github.com/panther-labs/panther/issues/1023
### Acceptance criteria
1. panther-data-catalog-updater is modified to create Glue table and update Athena views when it receives data from a specific log type (if the table doesn't exist)
1. Code should handle the race condition where multiple panther-data-catalog-updater instances are trying to create the table at the same time
1. sources-api no longer tries to create/update Glue tables and Athena views upon log source onboarding
|
1.0
|
Create tables in Glue asynchronously - ### Description
Currently, Panther creates new Glue tables when a user onboards a source. Panther creates one table (and Athena view) for every log type added to a new source.
We should modify this behavior so that we create a table only upon receiving data from a given log type.
This has several advantages:
1. Faster updates in log sources. Updating a lot source currently several seconds, mainly because we try to create/update the underlying tables as part of the log source creation/update process
2. We no longer risk timeouts during log source creation/update https://github.com/panther-labs/panther/issues/1023
### Acceptance criteria
1. panther-data-catalog-updater is modified to create Glue table and update Athena views when it receives data from a specific log type (if the table doesn't exist)
1. Code should handle the race condition where multiple panther-data-catalog-updater instances are trying to create the table at the same time
1. sources-api no longer tries to create/update Glue tables and Athena views upon log source onboarding
|
process
|
create tables in glue asynchronously description currently panther creates new glue tables when a user onboards a source panther creates one table and athena view for every log type added to a new source we should modify this behavior so that we create a table only upon receiving data from a given log type this has several advantages faster updates in log sources updating a lot source currently several seconds mainly because we try to create update the underlying tables as part of the log source creation update process we no longer risk timeouts during log source creation update acceptance criteria panther data catalog updater is modified to create glue table and update athena views when it receives data from a specific log type if the table doesn t exist code should handle the race condition where multiple panther data catalog updater instances are trying to create the table at the same time sources api no longer tries to create update glue tables and athena views upon log source onboarding
| 1
|
229,258
| 25,313,154,595
|
IssuesEvent
|
2022-11-17 19:11:04
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
opened
|
CVE-2022-41917 (Medium) detected in opensearch-1.3.5.jar
|
security vulnerability
|
## CVE-2022-41917 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensearch-1.3.5.jar</b></p></summary>
<p>OpenSearch subproject :server</p>
<p>Path to dependency file: /e2e-test/peerforwarder/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar</p>
<p>
Dependency Hierarchy:
- opensearch-rest-high-level-client-1.3.5.jar (Root Library)
- :x: **opensearch-1.3.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OpenSearch is a community-driven, open source fork of Elasticsearch and Kibana. OpenSearch allows users to specify a local file when defining text analyzers to process data for text analysis. An issue in the implementation of this feature allows certain specially crafted queries to return a response containing the first line of text from arbitrary files. The list of potentially impacted files is limited to text files with read permissions allowed in the Java Security Manager policy configuration. OpenSearch version 1.3.7 and 2.4.0 contain a fix for this issue. Users are advised to upgrade. There are no known workarounds for this issue.
<p>Publish Date: 2022-11-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41917>CVE-2022-41917</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/opensearch-project/OpenSearch/security/advisories/GHSA-w3rx-m34v-wrqx">https://github.com/opensearch-project/OpenSearch/security/advisories/GHSA-w3rx-m34v-wrqx</a></p>
<p>Release Date: 2022-11-16</p>
<p>Fix Resolution: org.opensearch:opensearch:2.4.0,org.opensearch.plugin:analysis-nori:2.4.0,org.opensearch.plugin:analysis-kuromoji:2.4.0,org.opensearch.plugin:analysis-icu-client:2.4.0,org.opensearch.plugin:analysis-common:2.4.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-41917 (Medium) detected in opensearch-1.3.5.jar - ## CVE-2022-41917 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensearch-1.3.5.jar</b></p></summary>
<p>OpenSearch subproject :server</p>
<p>Path to dependency file: /e2e-test/peerforwarder/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.opensearch/opensearch/1.3.5/b1c5b9898939fd6b42d6d3bbeda632142f9cef9d/opensearch-1.3.5.jar</p>
<p>
Dependency Hierarchy:
- opensearch-rest-high-level-client-1.3.5.jar (Root Library)
- :x: **opensearch-1.3.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/opensearch-project/data-prepper/commit/90bdaa7e7833bdd504c817e49d4434b4d8880f56">90bdaa7e7833bdd504c817e49d4434b4d8880f56</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
OpenSearch is a community-driven, open source fork of Elasticsearch and Kibana. OpenSearch allows users to specify a local file when defining text analyzers to process data for text analysis. An issue in the implementation of this feature allows certain specially crafted queries to return a response containing the first line of text from arbitrary files. The list of potentially impacted files is limited to text files with read permissions allowed in the Java Security Manager policy configuration. OpenSearch version 1.3.7 and 2.4.0 contain a fix for this issue. Users are advised to upgrade. There are no known workarounds for this issue.
<p>Publish Date: 2022-11-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41917>CVE-2022-41917</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/opensearch-project/OpenSearch/security/advisories/GHSA-w3rx-m34v-wrqx">https://github.com/opensearch-project/OpenSearch/security/advisories/GHSA-w3rx-m34v-wrqx</a></p>
<p>Release Date: 2022-11-16</p>
<p>Fix Resolution: org.opensearch:opensearch:2.4.0,org.opensearch.plugin:analysis-nori:2.4.0,org.opensearch.plugin:analysis-kuromoji:2.4.0,org.opensearch.plugin:analysis-icu-client:2.4.0,org.opensearch.plugin:analysis-common:2.4.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in opensearch jar cve medium severity vulnerability vulnerable library opensearch jar opensearch subproject server path to dependency file test peerforwarder build gradle path to vulnerable library home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar home wss scanner gradle caches modules files org opensearch opensearch opensearch jar dependency hierarchy opensearch rest high level client jar root library x opensearch jar vulnerable library found in head commit a href found in base branch main vulnerability details opensearch is a community driven open source fork of elasticsearch and kibana opensearch allows users to specify a local file when defining text analyzers to process data for text analysis an issue in the implementation of this feature allows certain specially crafted queries to return a response containing the first line of text from arbitrary files the list of potentially impacted files is limited to text files with read permissions allowed in the java security manager policy configuration opensearch version and contain a fix for this issue users are advised to upgrade there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org opensearch opensearch org opensearch plugin analysis nori org opensearch plugin analysis kuromoji org opensearch plugin analysis icu client org opensearch plugin analysis common
| 0
|
41,006
| 12,812,505,351
|
IssuesEvent
|
2020-07-04 06:53:08
|
shrivastava-prateek/angularjs-es6-webpack
|
https://api.github.com/repos/shrivastava-prateek/angularjs-es6-webpack
|
opened
|
CVE-2019-1010266 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/accord/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.13.22.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/gulp-jshint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- plato-1.2.2.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-1010266 (Medium) detected in multiple libraries - ## CVE-2019-1010266 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-1.0.2.tgz</b>, <b>lodash-3.10.1.tgz</b>, <b>lodash-2.4.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-3.10.1.tgz</b></p></summary>
<p>The modern build of lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz">https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/accord/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-0.13.22.tgz (Root Library)
- :x: **lodash-3.10.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-2.4.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, & extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz">https://registry.npmjs.org/lodash/-/lodash-2.4.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/gulp-jshint/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- plato-1.2.2.tgz (Root Library)
- :x: **lodash-2.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash prior to 4.17.11 is affected by: CWE-400: Uncontrolled Resource Consumption. The impact is: Denial of service. The component is: Date handler. The attack vector is: Attacker provides very long strings, which the library attempts to match using a regular expression. The fixed version is: 4.17.11.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010266>CVE-2019-1010266</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010266</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 4.17.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules globule node modules lodash package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x lodash tgz vulnerable library lodash tgz the modern build of lodash modular utilities library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules accord node modules lodash package json dependency hierarchy karma tgz root library x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance extras library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules gulp jshint node modules lodash package json dependency hierarchy plato tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details lodash prior to is affected by cwe uncontrolled resource consumption the impact is denial of service the component is date handler the attack vector is attacker provides very long strings which the library attempts to match using a regular expression the fixed version is publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
244,358
| 18,755,523,621
|
IssuesEvent
|
2021-11-05 10:14:08
|
Punzaman/Team07-BSCS-3AB
|
https://api.github.com/repos/Punzaman/Team07-BSCS-3AB
|
opened
|
Week 1
|
documentation
|
Meeting 1 == 11/05/21 == 5:25PM - 5:45PM
Scrum Poker Done.
Team agreed to make PHP Server and CRUD for Customer Details a priority.
|
1.0
|
Week 1 - Meeting 1 == 11/05/21 == 5:25PM - 5:45PM
Scrum Poker Done.
Team agreed to make PHP Server and CRUD for Customer Details a priority.
|
non_process
|
week meeting scrum poker done team agreed to make php server and crud for customer details a priority
| 0
|
15,260
| 19,411,244,676
|
IssuesEvent
|
2021-12-20 09:50:43
|
AdguardTeam/AdguardForWindows
|
https://api.github.com/repos/AdguardTeam/AdguardForWindows
|
closed
|
Connection error with Norton 360
|
bug compatibility P3: Medium Status: In Progress Version: AdGuard v7.9
|
После чистой установки `AdGuard 7.8 Beta 1` стало периодически выдавать сообщение: "Ошибка подключения!"
Логи отправил через аварийное окно в AdGuard с текстом "Norton 360 с VPN". Вчера примерно в 19:59, сегодня в 19:27.

|
True
|
Connection error with Norton 360 - После чистой установки `AdGuard 7.8 Beta 1` стало периодически выдавать сообщение: "Ошибка подключения!"
Логи отправил через аварийное окно в AdGuard с текстом "Norton 360 с VPN". Вчера примерно в 19:59, сегодня в 19:27.

|
non_process
|
connection error with norton после чистой установки adguard beta стало периодически выдавать сообщение ошибка подключения логи отправил через аварийное окно в adguard с текстом norton с vpn вчера примерно в сегодня в
| 0
|
214,578
| 24,077,733,661
|
IssuesEvent
|
2022-09-19 01:05:31
|
ChoeMinji/xStream_1_4_17
|
https://api.github.com/repos/ChoeMinji/xStream_1_4_17
|
opened
|
CVE-2022-40150 (Medium) detected in jettison-1.2.jar
|
security vulnerability
|
## CVE-2022-40150 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jettison-1.2.jar</b></p></summary>
<p>A StAX implementation for JSON.</p>
<p>Path to dependency file: /xstream/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/codehaus/jettison/jettison/1.2/jettison-1.2.jar,/sitory/org/codehaus/jettison/jettison/1.2/jettison-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jettison-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/xStream_1_4_17/commit/91b0fd5ebba59bc3d610a838562e863b18b7bb67">91b0fd5ebba59bc3d610a838562e863b18b7bb67</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Jettison to parse untrusted XML or JSON data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by Out of memory. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40150>CVE-2022-40150</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-40150 (Medium) detected in jettison-1.2.jar - ## CVE-2022-40150 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jettison-1.2.jar</b></p></summary>
<p>A StAX implementation for JSON.</p>
<p>Path to dependency file: /xstream/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/codehaus/jettison/jettison/1.2/jettison-1.2.jar,/sitory/org/codehaus/jettison/jettison/1.2/jettison-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jettison-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/xStream_1_4_17/commit/91b0fd5ebba59bc3d610a838562e863b18b7bb67">91b0fd5ebba59bc3d610a838562e863b18b7bb67</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Jettison to parse untrusted XML or JSON data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by Out of memory. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40150>CVE-2022-40150</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jettison jar cve medium severity vulnerability vulnerable library jettison jar a stax implementation for json path to dependency file xstream pom xml path to vulnerable library sitory org codehaus jettison jettison jettison jar sitory org codehaus jettison jettison jettison jar dependency hierarchy x jettison jar vulnerable library found in head commit a href found in base branch master vulnerability details those using jettison to parse untrusted xml or json data may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by out of memory this effect may support a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
| 0
|
20,455
| 27,122,431,451
|
IssuesEvent
|
2023-02-16 00:35:56
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Honey SQL 2 `InlineValue` behavior for `clojure.lang.Ratio` is busted
|
Type:Bug Priority:P2 Querying/Processor .Backend
|
We're relying on this for a few things. We need to add a mapping so it doesn't do something dumb.
```clj
;; current behavior
(honey.sql/format {:select [[[:metabase.driver.postgres/pg-conversion
[:inline (/ 1 2)]
"double"] :ratio]]})
=> ["SELECT 1/2::double AS ratio"]
```
This should actually be
```clj
["SELECT (1/2)::double AS ratio"]
```
|
1.0
|
Honey SQL 2 `InlineValue` behavior for `clojure.lang.Ratio` is busted - We're relying on this for a few things. We need to add a mapping so it doesn't do something dumb.
```clj
;; current behavior
(honey.sql/format {:select [[[:metabase.driver.postgres/pg-conversion
[:inline (/ 1 2)]
"double"] :ratio]]})
=> ["SELECT 1/2::double AS ratio"]
```
This should actually be
```clj
["SELECT (1/2)::double AS ratio"]
```
|
process
|
honey sql inlinevalue behavior for clojure lang ratio is busted we re relying on this for a few things we need to add a mapping so it doesn t do something dumb clj current behavior honey sql format select metabase driver postgres pg conversion double ratio this should actually be clj
| 1
|
12,472
| 8,683,335,379
|
IssuesEvent
|
2018-12-02 17:13:45
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
reopened
|
Allow limiting adding/removing finalizers
|
area/apiserver area/security kind/feature lifecycle/rotten sig/api-machinery sig/auth
|
/kind feature
@kubernetes/sig-auth-feature-requests
@kubernetes/sig-api-machinery-feature-requests
**What happened**:
As a namespace-constrained user, I am able to manually add/remove finalizers added by system components:
* garbage collection finalizers
* pv/pvc protection finalizers
* service catalog deprovisioning finalizers
* etc...
**What you expected to happen**:
As a cluster admin, I expected to be able to control what finalizers can be added/removed by end users, so they can be relied on by system components and controllers for gating deletion
|
True
|
Allow limiting adding/removing finalizers - /kind feature
@kubernetes/sig-auth-feature-requests
@kubernetes/sig-api-machinery-feature-requests
**What happened**:
As a namespace-constrained user, I am able to manually add/remove finalizers added by system components:
* garbage collection finalizers
* pv/pvc protection finalizers
* service catalog deprovisioning finalizers
* etc...
**What you expected to happen**:
As a cluster admin, I expected to be able to control what finalizers can be added/removed by end users, so they can be relied on by system components and controllers for gating deletion
|
non_process
|
allow limiting adding removing finalizers kind feature kubernetes sig auth feature requests kubernetes sig api machinery feature requests what happened as a namespace constrained user i am able to manually add remove finalizers added by system components garbage collection finalizers pv pvc protection finalizers service catalog deprovisioning finalizers etc what you expected to happen as a cluster admin i expected to be able to control what finalizers can be added removed by end users so they can be relied on by system components and controllers for gating deletion
| 0
|
13,120
| 15,504,884,828
|
IssuesEvent
|
2021-03-11 14:46:25
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Scenario 3 doesn't work
|
Pri1 assigned-to-author automation/svc process-automation/subsvc product-question triaged
|
Scenario 3 „Autostop VMs based on low CPU usage“ doesn’t work on subscription that already has been migrated to use new metric alert. This function uses classic alert rules. Classic alerts in Azure Monitor have been retired in September 2019.
I have the following error message in the runbook log „AutoStop_CreateAlert_Child“: Add-AzureRmMetricAlertRule : Exception type: ErrorResponseException, Message: You cannot create or modify classic metric alerts for this subscription as this subscription is being migrated or has been migrated to use new metric alerts.
Is there anybody with the same problem? How can I solve this? I don’t want to use 3rd party tools to stop VMs based on low CPU usage. Thanks a lot!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
Scenario 3 doesn't work - Scenario 3 „Autostop VMs based on low CPU usage“ doesn’t work on subscription that already has been migrated to use new metric alert. This function uses classic alert rules. Classic alerts in Azure Monitor have been retired in September 2019.
I have the following error message in the runbook log „AutoStop_CreateAlert_Child“: Add-AzureRmMetricAlertRule : Exception type: ErrorResponseException, Message: You cannot create or modify classic metric alerts for this subscription as this subscription is being migrated or has been migrated to use new metric alerts.
Is there anybody with the same problem? How can I solve this? I don’t want to use 3rd party tools to stop VMs based on low CPU usage. Thanks a lot!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/Stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
scenario doesn t work scenario „autostop vms based on low cpu usage“ doesn’t work on subscription that already has been migrated to use new metric alert this function uses classic alert rules classic alerts in azure monitor have been retired in september i have the following error message in the runbook log „autostop createalert child“ add azurermmetricalertrule exception type errorresponseexception message you cannot create or modify classic metric alerts for this subscription as this subscription is being migrated or has been migrated to use new metric alerts is there anybody with the same problem how can i solve this i don’t want to use party tools to stop vms based on low cpu usage thanks a lot document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
22,038
| 30,554,499,423
|
IssuesEvent
|
2023-07-20 10:43:49
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Incorrect initialization of `GaussianMixture` from `precisions_init` in the `_initialize` method
|
Bug module:gaussian_process
|
### Describe the bug
When passing `precisions_init` to a `GaussianMixture` model, a user expects to resume training the model from the provided precision matrices, which is done by calculating the `precisions_cholesky_` from `precisions_init` in the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method and continuing EM iterations from there. However, the code is not correct in the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method when the `covariance_type` is `full` or `tied`.
In an [_m_step](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L742), the `precisions_cholesky_` is calculated from the `covariances_` $\Sigma$ by the [_compute_precision_cholesky](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L301) method. In particular, the `precisions_` $\Lambda$ can decomposed as:
$$\Lambda=\Sigma^{-1}=(LL^{T})^{-1}=(L^{-1})^{T}L^{-1}=UU^{T}$$
Given the covariance matrix $\Sigma$, applying the Cholesky decomposition to it gives rise to a lower-triangular matrix $L$, and then we use back-substitution to calculate $L^{-1}$ from $L$, and finally the `precisions_cholesky_` can be calculated from $U=(L^{-1})^{T}$, which is an upper-triangular matrix $U$. This is correct for calculating `precisions_cholesky_` from `covariances_`.
However, when resuming training, the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method calculates `precisions_cholesky_` from `precisions_init` by directly conducting the Cholesky decomposition of `precisions_init`, which is not correct. The error can be simply verified by the fact that the resultant `precisions_cholesky_` is a lower-triangular matrix which should be an upper-triangular matrix. In fact, what we need is a $UU^{T}$ decomposition. The correct math to do so is to first apply a similarity transformation to the `precisions_init` $\Lambda$ by an [exchange matrix](https://en.wikipedia.org/wiki/Exchange_matrix) $J$ and then the Cholesky decomposition to the transformed $\Lambda$. In particular, the decomposition can be expressed as:
$$J\Lambda J=\tilde{L}\tilde{L}^{T}=JUJ(JUJ)^{T}$$
Finally, the `precisions_cholesky_` $U$ can be calculated as $J\tilde{L}J$. It is noted that we've taken advantage of the property of $J$ that $J=J^{-1}=J^{T}$.
### Steps/Code to Reproduce
```py
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.mixture._gaussian_mixture import (
_estimate_gaussian_parameters,
_compute_precision_cholesky,
)
from sklearn.utils._testing import assert_allclose
def test_gaussian_mixture_precisions_init():
def _generate_data(n_samples, n_features, n_components):
"""Randomly generate samples and responsibilities"""
rs = np.random.RandomState(12345)
X = rs.random_sample((n_samples, n_features))
resp = rs.random_sample((n_samples, n_components))
resp /= resp.sum(axis=1)[:, np.newaxis]
return X, resp
def _calculate_precisions(X, resp, covariance_type):
"""Calculate precision matrix and its Cholesky decomposition"""
reg_covar = 1e-6
weights, means, covariances = _estimate_gaussian_parameters(
X, resp, reg_covar, covariance_type
)
precisions_cholesky = _compute_precision_cholesky(covariances, covariance_type)
_, n_components = resp.shape
# Instantiate a `GaussianMixture` model in order to use its
# `_set_parameters` method to compute `precisions_` from
# `precisions_cholesky_`
gmm = GaussianMixture(
n_components=n_components, covariance_type=covariance_type
)
params = (weights, means, covariances, precisions_cholesky)
# pylint: disable-next=protected-access
gmm._set_parameters(params)
return gmm.precisions_, gmm.precisions_cholesky_
X, resp = _generate_data(n_samples=100, n_features=3, n_components=4)
for covariance_type in ("full", "tied", "diag", "spherical"):
# Arrange
precisions_init, precisions_cholesky = _calculate_precisions(
X, resp, covariance_type
)
desired_precisions_cholesky = precisions_cholesky
# Act
gmm = GaussianMixture(
covariance_type=covariance_type, precisions_init=precisions_init
)
# pylint: disable-next=protected-access
gmm._initialize(X, resp)
actual_precisions_cholesky = gmm.precisions_cholesky_
# Assert
assert_allclose(actual_precisions_cholesky, desired_precisions_cholesky)
```
### Expected Results
```
=================================================================== test session starts ===================================================================
platform win32 -- Python 3.10.1, pytest-7.3.1, pluggy-1.0.0 -- C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Documents\VSCode\scikit-learn
configfile: setup.cfg
plugins: cov-4.0.0
collected 1 item
sklearn/mixture/tests/test_gaussian_mixture.py::test_gaussian_mixture_precisions_init PASSED [100%]
==================================================================== 1 passed in 0.15s ====================================================================
```
### Actual Results
```
=================================================================== test session starts ===================================================================
platform win32 -- Python 3.10.1, pytest-7.3.1, pluggy-1.0.0 -- C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Documents\VSCode\scikit-learn
configfile: setup.cfg
plugins: cov-4.0.0
collected 1 item
sklearn/mixture/tests/test_gaussian_mixture.py::test_gaussian_mixture_precisions_init FAILED [100%]
======================================================================== FAILURES =========================================================================
__________________________________________________________ test_gaussian_mixture_precisions_init __________________________________________________________
def test_gaussian_mixture_precisions_init():
def _generate_data(n_samples, n_features, n_components):
"""Randomly generate samples and responsibilities"""
rs = np.random.RandomState(12345)
X = rs.random_sample((n_samples, n_features))
resp = rs.random_sample((n_samples, n_components))
resp /= resp.sum(axis=1)[:, np.newaxis]
return X, resp
def _calculate_precisions(X, resp, covariance_type):
"""Calculate precision matrix and its Cholesky decomposition"""
reg_covar = 1e-6
weights, means, covariances = _estimate_gaussian_parameters(
X, resp, reg_covar, covariance_type
)
precisions_cholesky = _compute_precision_cholesky(covariances, covariance_type)
_, n_components = resp.shape
# Instantiate a `GaussianMixture` model in order to use its
# `_set_parameters` method to compute `precisions_` from
# `precisions_cholesky_`
gmm = GaussianMixture(
n_components=n_components, covariance_type=covariance_type
)
params = (weights, means, covariances, precisions_cholesky)
# pylint: disable-next=protected-access
gmm._set_parameters(params)
return gmm.precisions_, gmm.precisions_cholesky_
X, resp = _generate_data(n_samples=100, n_features=3, n_components=4)
for covariance_type in ("full", "tied", "diag", "spherical"):
# Arrange
precisions_init, precisions_cholesky = _calculate_precisions(
X, resp, covariance_type
)
desired_precisions_cholesky = precisions_cholesky
# Act
gmm = GaussianMixture(
covariance_type=covariance_type, precisions_init=precisions_init
)
# pylint: disable-next=protected-access
gmm._initialize(X, resp)
actual_precisions_cholesky = gmm.precisions_cholesky_
# Assert
> assert_allclose(actual_precisions_cholesky, desired_precisions_cholesky)
sklearn\mixture\tests\test_gaussian_mixture.py:1376:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sklearn\utils\_testing.py:323: in assert_allclose
np_assert_allclose(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<function assert_allclose.<locals>.compare at 0x000002AD2C937400>, array([[[ 3.72597723, 0. , 0. ],
...521, 0.65580101],
[ 0. , 3.59562675, -0.25208462],
[ 0. , 0. , 3.53699227]]]))
kwds = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=1e-07, atol=0', 'verbose': True}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0
E
E Mismatched elements: 36 / 36 (100%)
E Max absolute difference: 0.88302534
E Max relative difference: 1.
E x: array([[[ 3.725977, 0. , 0. ],
E [-0.595192, 3.614806, 0. ],
E [ 0.841309, -0.033688, 3.448654]],...
E y: array([[[ 3.575666, -0.563724, 0.883025],
E [ 0. , 3.659279, -0.175359],
E [ 0. , 0. , 3.549951]],...
C:\Program Files\Python310\lib\contextlib.py:79: AssertionError
==================================================================== 1 failed in 0.44s ====================================================================
```
### Versions
```shell
System:
python: 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)]
executable: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
machine: Windows-10-10.0.19044-SP0
Python dependencies:
sklearn: 1.3.dev0
pip: 21.2.4
setuptools: 58.1.0
numpy: 1.24.3
scipy: 1.10.1
Cython: 0.29.34
pandas: None
matplotlib: None
joblib: 1.2.0
threadpoolctl: 3.1.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Lib\site-packages\numpy\.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
version: 0.3.21
threading_layer: pthreads
architecture: SkylakeX
num_threads: 16
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Lib\site-packages\scipy.libs\libopenblas-802f9ed1179cb9c9b03d67ff79f48187.dll
version: 0.3.18
threading_layer: pthreads
architecture: Prescott
num_threads: 16
user_api: openmp
internal_api: openmp
prefix: vcomp
filepath: C:\Windows\System32\vcomp140.dll
version: None
num_threads: 16
```
|
1.0
|
Incorrect initialization of `GaussianMixture` from `precisions_init` in the `_initialize` method - ### Describe the bug
When passing `precisions_init` to a `GaussianMixture` model, a user expects to resume training the model from the provided precision matrices, which is done by calculating the `precisions_cholesky_` from `precisions_init` in the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method and continuing EM iterations from there. However, the code is not correct in the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method when the `covariance_type` is `full` or `tied`.
In an [_m_step](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L742), the `precisions_cholesky_` is calculated from the `covariances_` $\Sigma$ by the [_compute_precision_cholesky](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L301) method. In particular, the `precisions_` $\Lambda$ can decomposed as:
$$\Lambda=\Sigma^{-1}=(LL^{T})^{-1}=(L^{-1})^{T}L^{-1}=UU^{T}$$
Given the covariance matrix $\Sigma$, applying the Cholesky decomposition to it gives rise to a lower-triangular matrix $L$, and then we use back-substitution to calculate $L^{-1}$ from $L$, and finally the `precisions_cholesky_` can be calculated from $U=(L^{-1})^{T}$, which is an upper-triangular matrix $U$. This is correct for calculating `precisions_cholesky_` from `covariances_`.
However, when resuming training, the [_initialize](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/mixture/_gaussian_mixture.py#L704) method calculates `precisions_cholesky_` from `precisions_init` by directly conducting the Cholesky decomposition of `precisions_init`, which is not correct. The error can be simply verified by the fact that the resultant `precisions_cholesky_` is a lower-triangular matrix which should be an upper-triangular matrix. In fact, what we need is a $UU^{T}$ decomposition. The correct math to do so is to first apply a similarity transformation to the `precisions_init` $\Lambda$ by an [exchange matrix](https://en.wikipedia.org/wiki/Exchange_matrix) $J$ and then the Cholesky decomposition to the transformed $\Lambda$. In particular, the decomposition can be expressed as:
$$J\Lambda J=\tilde{L}\tilde{L}^{T}=JUJ(JUJ)^{T}$$
Finally, the `precisions_cholesky_` $U$ can be calculated as $J\tilde{L}J$. It is noted that we've taken advantage of the property of $J$ that $J=J^{-1}=J^{T}$.
### Steps/Code to Reproduce
```py
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.mixture._gaussian_mixture import (
_estimate_gaussian_parameters,
_compute_precision_cholesky,
)
from sklearn.utils._testing import assert_allclose
def test_gaussian_mixture_precisions_init():
def _generate_data(n_samples, n_features, n_components):
"""Randomly generate samples and responsibilities"""
rs = np.random.RandomState(12345)
X = rs.random_sample((n_samples, n_features))
resp = rs.random_sample((n_samples, n_components))
resp /= resp.sum(axis=1)[:, np.newaxis]
return X, resp
def _calculate_precisions(X, resp, covariance_type):
"""Calculate precision matrix and its Cholesky decomposition"""
reg_covar = 1e-6
weights, means, covariances = _estimate_gaussian_parameters(
X, resp, reg_covar, covariance_type
)
precisions_cholesky = _compute_precision_cholesky(covariances, covariance_type)
_, n_components = resp.shape
# Instantiate a `GaussianMixture` model in order to use its
# `_set_parameters` method to compute `precisions_` from
# `precisions_cholesky_`
gmm = GaussianMixture(
n_components=n_components, covariance_type=covariance_type
)
params = (weights, means, covariances, precisions_cholesky)
# pylint: disable-next=protected-access
gmm._set_parameters(params)
return gmm.precisions_, gmm.precisions_cholesky_
X, resp = _generate_data(n_samples=100, n_features=3, n_components=4)
for covariance_type in ("full", "tied", "diag", "spherical"):
# Arrange
precisions_init, precisions_cholesky = _calculate_precisions(
X, resp, covariance_type
)
desired_precisions_cholesky = precisions_cholesky
# Act
gmm = GaussianMixture(
covariance_type=covariance_type, precisions_init=precisions_init
)
# pylint: disable-next=protected-access
gmm._initialize(X, resp)
actual_precisions_cholesky = gmm.precisions_cholesky_
# Assert
assert_allclose(actual_precisions_cholesky, desired_precisions_cholesky)
```
### Expected Results
```
=================================================================== test session starts ===================================================================
platform win32 -- Python 3.10.1, pytest-7.3.1, pluggy-1.0.0 -- C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Documents\VSCode\scikit-learn
configfile: setup.cfg
plugins: cov-4.0.0
collected 1 item
sklearn/mixture/tests/test_gaussian_mixture.py::test_gaussian_mixture_precisions_init PASSED [100%]
==================================================================== 1 passed in 0.15s ====================================================================
```
### Actual Results
```
=================================================================== test session starts ===================================================================
platform win32 -- Python 3.10.1, pytest-7.3.1, pluggy-1.0.0 -- C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Documents\VSCode\scikit-learn
configfile: setup.cfg
plugins: cov-4.0.0
collected 1 item
sklearn/mixture/tests/test_gaussian_mixture.py::test_gaussian_mixture_precisions_init FAILED [100%]
======================================================================== FAILURES =========================================================================
__________________________________________________________ test_gaussian_mixture_precisions_init __________________________________________________________
def test_gaussian_mixture_precisions_init():
def _generate_data(n_samples, n_features, n_components):
"""Randomly generate samples and responsibilities"""
rs = np.random.RandomState(12345)
X = rs.random_sample((n_samples, n_features))
resp = rs.random_sample((n_samples, n_components))
resp /= resp.sum(axis=1)[:, np.newaxis]
return X, resp
def _calculate_precisions(X, resp, covariance_type):
"""Calculate precision matrix and its Cholesky decomposition"""
reg_covar = 1e-6
weights, means, covariances = _estimate_gaussian_parameters(
X, resp, reg_covar, covariance_type
)
precisions_cholesky = _compute_precision_cholesky(covariances, covariance_type)
_, n_components = resp.shape
# Instantiate a `GaussianMixture` model in order to use its
# `_set_parameters` method to compute `precisions_` from
# `precisions_cholesky_`
gmm = GaussianMixture(
n_components=n_components, covariance_type=covariance_type
)
params = (weights, means, covariances, precisions_cholesky)
# pylint: disable-next=protected-access
gmm._set_parameters(params)
return gmm.precisions_, gmm.precisions_cholesky_
X, resp = _generate_data(n_samples=100, n_features=3, n_components=4)
for covariance_type in ("full", "tied", "diag", "spherical"):
# Arrange
precisions_init, precisions_cholesky = _calculate_precisions(
X, resp, covariance_type
)
desired_precisions_cholesky = precisions_cholesky
# Act
gmm = GaussianMixture(
covariance_type=covariance_type, precisions_init=precisions_init
)
# pylint: disable-next=protected-access
gmm._initialize(X, resp)
actual_precisions_cholesky = gmm.precisions_cholesky_
# Assert
> assert_allclose(actual_precisions_cholesky, desired_precisions_cholesky)
sklearn\mixture\tests\test_gaussian_mixture.py:1376:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sklearn\utils\_testing.py:323: in assert_allclose
np_assert_allclose(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<function assert_allclose.<locals>.compare at 0x000002AD2C937400>, array([[[ 3.72597723, 0. , 0. ],
...521, 0.65580101],
[ 0. , 3.59562675, -0.25208462],
[ 0. , 0. , 3.53699227]]]))
kwds = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=1e-07, atol=0', 'verbose': True}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0
E
E Mismatched elements: 36 / 36 (100%)
E Max absolute difference: 0.88302534
E Max relative difference: 1.
E x: array([[[ 3.725977, 0. , 0. ],
E [-0.595192, 3.614806, 0. ],
E [ 0.841309, -0.033688, 3.448654]],...
E y: array([[[ 3.575666, -0.563724, 0.883025],
E [ 0. , 3.659279, -0.175359],
E [ 0. , 0. , 3.549951]],...
C:\Program Files\Python310\lib\contextlib.py:79: AssertionError
==================================================================== 1 failed in 0.44s ====================================================================
```
### Versions
```shell
System:
python: 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)]
executable: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Scripts\python.exe
machine: Windows-10-10.0.19044-SP0
Python dependencies:
sklearn: 1.3.dev0
pip: 21.2.4
setuptools: 58.1.0
numpy: 1.24.3
scipy: 1.10.1
Cython: 0.29.34
pandas: None
matplotlib: None
joblib: 1.2.0
threadpoolctl: 3.1.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Lib\site-packages\numpy\.libs\libopenblas64__v0.3.21-gcc_10_3_0.dll
version: 0.3.21
threading_layer: pthreads
architecture: SkylakeX
num_threads: 16
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: C:\Users\Documents\VSCode\scikit-learn\sklearn-env\Lib\site-packages\scipy.libs\libopenblas-802f9ed1179cb9c9b03d67ff79f48187.dll
version: 0.3.18
threading_layer: pthreads
architecture: Prescott
num_threads: 16
user_api: openmp
internal_api: openmp
prefix: vcomp
filepath: C:\Windows\System32\vcomp140.dll
version: None
num_threads: 16
```
|
process
|
incorrect initialization of gaussianmixture from precisions init in the initialize method describe the bug when passing precisions init to a gaussianmixture model a user expects to resume training the model from the provided precision matrices which is done by calculating the precisions cholesky from precisions init in the method and continuing em iterations from there however the code is not correct in the method when the covariance type is full or tied in an the precisions cholesky is calculated from the covariances sigma by the method in particular the precisions lambda can decomposed as lambda sigma ll t l t l uu t given the covariance matrix sigma applying the cholesky decomposition to it gives rise to a lower triangular matrix l and then we use back substitution to calculate l from l and finally the precisions cholesky can be calculated from u l t which is an upper triangular matrix u this is correct for calculating precisions cholesky from covariances however when resuming training the method calculates precisions cholesky from precisions init by directly conducting the cholesky decomposition of precisions init which is not correct the error can be simply verified by the fact that the resultant precisions cholesky is a lower triangular matrix which should be an upper triangular matrix in fact what we need is a uu t decomposition the correct math to do so is to first apply a similarity transformation to the precisions init lambda by an j and then the cholesky decomposition to the transformed lambda in particular the decomposition can be expressed as j lambda j tilde l tilde l t juj juj t finally the precisions cholesky u can be calculated as j tilde l j it is noted that we ve taken advantage of the property of j that j j j t steps code to reproduce py import numpy as np from sklearn mixture import gaussianmixture from sklearn mixture gaussian mixture import estimate gaussian parameters compute precision cholesky from sklearn utils testing import assert allclose def test gaussian mixture precisions init def generate data n samples n features n components randomly generate samples and responsibilities rs np random randomstate x rs random sample n samples n features resp rs random sample n samples n components resp resp sum axis return x resp def calculate precisions x resp covariance type calculate precision matrix and its cholesky decomposition reg covar weights means covariances estimate gaussian parameters x resp reg covar covariance type precisions cholesky compute precision cholesky covariances covariance type n components resp shape instantiate a gaussianmixture model in order to use its set parameters method to compute precisions from precisions cholesky gmm gaussianmixture n components n components covariance type covariance type params weights means covariances precisions cholesky pylint disable next protected access gmm set parameters params return gmm precisions gmm precisions cholesky x resp generate data n samples n features n components for covariance type in full tied diag spherical arrange precisions init precisions cholesky calculate precisions x resp covariance type desired precisions cholesky precisions cholesky act gmm gaussianmixture covariance type covariance type precisions init precisions init pylint disable next protected access gmm initialize x resp actual precisions cholesky gmm precisions cholesky assert assert allclose actual precisions cholesky desired precisions cholesky expected results test session starts platform python pytest pluggy c users documents vscode scikit learn sklearn env scripts python exe cachedir pytest cache rootdir c users documents vscode scikit learn configfile setup cfg plugins cov collected item sklearn mixture tests test gaussian mixture py test gaussian mixture precisions init passed passed in actual results test session starts platform python pytest pluggy c users documents vscode scikit learn sklearn env scripts python exe cachedir pytest cache rootdir c users documents vscode scikit learn configfile setup cfg plugins cov collected item sklearn mixture tests test gaussian mixture py test gaussian mixture precisions init failed failures test gaussian mixture precisions init def test gaussian mixture precisions init def generate data n samples n features n components randomly generate samples and responsibilities rs np random randomstate x rs random sample n samples n features resp rs random sample n samples n components resp resp sum axis return x resp def calculate precisions x resp covariance type calculate precision matrix and its cholesky decomposition reg covar weights means covariances estimate gaussian parameters x resp reg covar covariance type precisions cholesky compute precision cholesky covariances covariance type n components resp shape instantiate a gaussianmixture model in order to use its set parameters method to compute precisions from precisions cholesky gmm gaussianmixture n components n components covariance type covariance type params weights means covariances precisions cholesky pylint disable next protected access gmm set parameters params return gmm precisions gmm precisions cholesky x resp generate data n samples n features n components for covariance type in full tied diag spherical arrange precisions init precisions cholesky calculate precisions x resp covariance type desired precisions cholesky precisions cholesky act gmm gaussianmixture covariance type covariance type precisions init precisions init pylint disable next protected access gmm initialize x resp actual precisions cholesky gmm precisions cholesky assert assert allclose actual precisions cholesky desired precisions cholesky sklearn mixture tests test gaussian mixture py sklearn utils testing py in assert allclose np assert allclose args compare at array kwds equal nan true err msg header not equal to tolerance rtol atol verbose true wraps func def inner args kwds with self recreate cm return func args kwds e assertionerror e not equal to tolerance rtol atol e e mismatched elements e max absolute difference e max relative difference e x array e e e y array e e c program files lib contextlib py assertionerror failed in versions shell system python tags dec executable c users documents vscode scikit learn sklearn env scripts python exe machine windows python dependencies sklearn pip setuptools numpy scipy cython pandas none matplotlib none joblib threadpoolctl built with openmp true threadpoolctl info user api blas internal api openblas prefix libopenblas filepath c users documents vscode scikit learn sklearn env lib site packages numpy libs gcc dll version threading layer pthreads architecture skylakex num threads user api blas internal api openblas prefix libopenblas filepath c users documents vscode scikit learn sklearn env lib site packages scipy libs libopenblas dll version threading layer pthreads architecture prescott num threads user api openmp internal api openmp prefix vcomp filepath c windows dll version none num threads
| 1
|
15,342
| 2,850,642,837
|
IssuesEvent
|
2015-05-31 19:04:08
|
damonkohler/sl4a
|
https://api.github.com/repos/damonkohler/sl4a
|
opened
|
problem with setResultInteger
|
auto-migrated Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on May 31, 2015 11:29_
```
device : Samsung S GT-I9000
firmware version : 2.2
the values sent with setResultInteger in script pyhton isn't received
in intent of onActivityResult(int requestCode, int resultCode, Intent
data) in my app. (data = null)
My app :
public class TestScripting extends Activity {
/** Called when the activity is first created. */
static private final int RESULT = 3;
public static final String EXTRA_RESULT = "SCRIPT_RESULT";
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Intent intent = new Intent("com.googlecode.android_scripting.action.LAUNCH_BACKGROUND_SCRIPT");
intent.setClassName("com.googlecode.android_scripting", "com.googlecode.android_scripting.activity.ScriptingLayerServiceLauncher");
intent.putExtra("com.googlecode.android_scripting.extra.SCRIPT_PATH", "/sdcard/hello_world.py");
startActivityForResult(intent, RESULT);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
switch (requestCode) {
case RESULT:
int resultIntegerScript = data.getIntExtra(EXTRA_RESULT, 0);
if(resultIntegerScript == 6)
{
Toast.makeText(TestScripting.this, "END SCRIPT", 2000);
}
return;
}
}
}
My script :
import android
droid = android.Android()
droid.makeToast('Hello, Android!')
print 'Hello, Android!'
droid.setResultInteger(0, 6)
log cat :
03-24 11:06:21.708:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380):
Interpreter discovered: com.googlecode.pythonforandroid
03-24 11:06:21.708:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380): Binary:
/data/data/com.googlecode.pythonforandroid/files/python/bin/python
03-24 11:06:21.712:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380):
Interpreter discovered: com.googlecode.pythonforandroid
03-24 11:06:21.712:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380): Binary:
/data/data/com.googlecode.pythonforandroid/files/python/bin/python
03-24 11:06:55.165: VERBOSE/sl4a.SimpleServer:197(25380): Bound to
127.0.0.1:36199
03-24 11:06:55.196: VERBOSE/sl4a.Process:115(25380): Executing
/data/data/com.googlecode.pythonforandroid/files/python/bin/python with
arguments [/sdcard/hello_world.py] and with environment
{ANDROID_SOCKET_zygote=10, AP_HANDSHAKE=caffc2f2-207d-4811-922f-f500bc3b3603,
TMPDIR=/data/local/tmp, ANDROID_BOOTLOGO=1,
EXTERNAL_STORAGE=/mnt/sdcard/external_sd, ANDROID_ASSETS=/system/app,
PY4A_EXTRAS=/mnt/sdcard/com.googlecode.pythonforandroid/extras/,
PATH=/sbin:/system/sbin:/system/bin:/system/xbin, ASEC_MOUNTPOINT=/mnt/asec,
PYTHONPATH=/mnt/sdcard/com.googlecode.pythonforandroid/extras/python:/data/data/
com.googlecode.pythonforandroid/files/python/lib/python2.6/lib-dynload:/data/dat
a/com.googlecode.pythonforandroid/files/python/lib/python2.6,
AP_HOST=127.0.0.1,
TEMP=/mnt/sdcard/com.googlecode.pythonforandroid/extras/python/tmp,
BOOTCLASSPATH=/system/framework/core.jar:/system/framework/ext.jar:/system/frame
work/framework.jar:/system/framework/android.policy.jar:/system/framework/servic
es.jar, AP_PORT=36199, INTERNAL_STORAGE=/mnt/sdcard, ANDROID_DATA=/data,
PYTHONHOME=/data/data/com.googlecode.pythonforandroid/files/python,
LD_LIBRARY_PATH=/data/data/com.googlecode.pythonforandroid/files/python/lib,
ANDROID_ROOT=/system, ANDROID_PROPERTY_WORKSPACE=9,32768}
03-24 11:06:56.536: VERBOSE/sl4a.SimpleServer$ConnectionThread:88(25380):
Server thread 16 started.
03-24 11:06:56.673: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
["caffc2f2-207d-4811-922f-f500bc3b3603"], "id": 0, "method": "_authenticate"}
03-24 11:06:56.677: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":0,"result":true}
03-24 11:06:56.677: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
["Hello, Android!"], "id": 1, "method": "makeToast"}
03-24 11:06:56.708: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":1,"result":null}
03-24 11:06:56.712: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
[0, 6], "id": 2, "method": "setResultInteger"}
03-24 11:06:56.723: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":2,"result":null}
03-24 11:06:56.735: VERBOSE/sl4a.SimpleServer$ConnectionThread:101(25380):
Server thread 16 died.
03-24 11:06:56.743: VERBOSE/sl4a.Process$1:135(25380): Process 25440 exited
with result code 0.
```
Original issue reported on code.google.com by `SofianeM...@gmail.com` on 24 Mar 2011 at 10:15
_Copied from original issue: damonkohler/android-scripting#537_
|
1.0
|
problem with setResultInteger - _From @GoogleCodeExporter on May 31, 2015 11:29_
```
device : Samsung S GT-I9000
firmware version : 2.2
the values sent with setResultInteger in script pyhton isn't received
in intent of onActivityResult(int requestCode, int resultCode, Intent
data) in my app. (data = null)
My app :
public class TestScripting extends Activity {
/** Called when the activity is first created. */
static private final int RESULT = 3;
public static final String EXTRA_RESULT = "SCRIPT_RESULT";
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Intent intent = new Intent("com.googlecode.android_scripting.action.LAUNCH_BACKGROUND_SCRIPT");
intent.setClassName("com.googlecode.android_scripting", "com.googlecode.android_scripting.activity.ScriptingLayerServiceLauncher");
intent.putExtra("com.googlecode.android_scripting.extra.SCRIPT_PATH", "/sdcard/hello_world.py");
startActivityForResult(intent, RESULT);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
switch (requestCode) {
case RESULT:
int resultIntegerScript = data.getIntExtra(EXTRA_RESULT, 0);
if(resultIntegerScript == 6)
{
Toast.makeText(TestScripting.this, "END SCRIPT", 2000);
}
return;
}
}
}
My script :
import android
droid = android.Android()
droid.makeToast('Hello, Android!')
print 'Hello, Android!'
droid.setResultInteger(0, 6)
log cat :
03-24 11:06:21.708:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380):
Interpreter discovered: com.googlecode.pythonforandroid
03-24 11:06:21.708:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380): Binary:
/data/data/com.googlecode.pythonforandroid/files/python/bin/python
03-24 11:06:21.712:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380):
Interpreter discovered: com.googlecode.pythonforandroid
03-24 11:06:21.712:
VERBOSE/sl4a.InterpreterConfiguration$InterpreterListener:127(25380): Binary:
/data/data/com.googlecode.pythonforandroid/files/python/bin/python
03-24 11:06:55.165: VERBOSE/sl4a.SimpleServer:197(25380): Bound to
127.0.0.1:36199
03-24 11:06:55.196: VERBOSE/sl4a.Process:115(25380): Executing
/data/data/com.googlecode.pythonforandroid/files/python/bin/python with
arguments [/sdcard/hello_world.py] and with environment
{ANDROID_SOCKET_zygote=10, AP_HANDSHAKE=caffc2f2-207d-4811-922f-f500bc3b3603,
TMPDIR=/data/local/tmp, ANDROID_BOOTLOGO=1,
EXTERNAL_STORAGE=/mnt/sdcard/external_sd, ANDROID_ASSETS=/system/app,
PY4A_EXTRAS=/mnt/sdcard/com.googlecode.pythonforandroid/extras/,
PATH=/sbin:/system/sbin:/system/bin:/system/xbin, ASEC_MOUNTPOINT=/mnt/asec,
PYTHONPATH=/mnt/sdcard/com.googlecode.pythonforandroid/extras/python:/data/data/
com.googlecode.pythonforandroid/files/python/lib/python2.6/lib-dynload:/data/dat
a/com.googlecode.pythonforandroid/files/python/lib/python2.6,
AP_HOST=127.0.0.1,
TEMP=/mnt/sdcard/com.googlecode.pythonforandroid/extras/python/tmp,
BOOTCLASSPATH=/system/framework/core.jar:/system/framework/ext.jar:/system/frame
work/framework.jar:/system/framework/android.policy.jar:/system/framework/servic
es.jar, AP_PORT=36199, INTERNAL_STORAGE=/mnt/sdcard, ANDROID_DATA=/data,
PYTHONHOME=/data/data/com.googlecode.pythonforandroid/files/python,
LD_LIBRARY_PATH=/data/data/com.googlecode.pythonforandroid/files/python/lib,
ANDROID_ROOT=/system, ANDROID_PROPERTY_WORKSPACE=9,32768}
03-24 11:06:56.536: VERBOSE/sl4a.SimpleServer$ConnectionThread:88(25380):
Server thread 16 started.
03-24 11:06:56.673: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
["caffc2f2-207d-4811-922f-f500bc3b3603"], "id": 0, "method": "_authenticate"}
03-24 11:06:56.677: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":0,"result":true}
03-24 11:06:56.677: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
["Hello, Android!"], "id": 1, "method": "makeToast"}
03-24 11:06:56.708: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":1,"result":null}
03-24 11:06:56.712: VERBOSE/sl4a.JsonRpcServer:74(25380): Received: {"params":
[0, 6], "id": 2, "method": "setResultInteger"}
03-24 11:06:56.723: VERBOSE/sl4a.JsonRpcServer:117(25380): Sent:
{"error":null,"id":2,"result":null}
03-24 11:06:56.735: VERBOSE/sl4a.SimpleServer$ConnectionThread:101(25380):
Server thread 16 died.
03-24 11:06:56.743: VERBOSE/sl4a.Process$1:135(25380): Process 25440 exited
with result code 0.
```
Original issue reported on code.google.com by `SofianeM...@gmail.com` on 24 Mar 2011 at 10:15
_Copied from original issue: damonkohler/android-scripting#537_
|
non_process
|
problem with setresultinteger from googlecodeexporter on may device samsung s gt firmware version the values sent with setresultinteger in script pyhton isn t received in intent of onactivityresult int requestcode int resultcode intent data in my app data null my app public class testscripting extends activity called when the activity is first created static private final int result public static final string extra result script result override public void oncreate bundle savedinstancestate super oncreate savedinstancestate intent intent new intent com googlecode android scripting action launch background script intent setclassname com googlecode android scripting com googlecode android scripting activity scriptinglayerservicelauncher intent putextra com googlecode android scripting extra script path sdcard hello world py startactivityforresult intent result override protected void onactivityresult int requestcode int resultcode intent data super onactivityresult requestcode resultcode data switch requestcode case result int resultintegerscript data getintextra extra result if resultintegerscript toast maketext testscripting this end script return my script import android droid android android droid maketoast hello android print hello android droid setresultinteger log cat verbose interpreterconfiguration interpreterlistener interpreter discovered com googlecode pythonforandroid verbose interpreterconfiguration interpreterlistener binary data data com googlecode pythonforandroid files python bin python verbose interpreterconfiguration interpreterlistener interpreter discovered com googlecode pythonforandroid verbose interpreterconfiguration interpreterlistener binary data data com googlecode pythonforandroid files python bin python verbose simpleserver bound to verbose process executing data data com googlecode pythonforandroid files python bin python with arguments and with environment android socket zygote ap handshake tmpdir data local tmp android bootlogo external storage mnt sdcard external sd android assets system app extras mnt sdcard com googlecode pythonforandroid extras path sbin system sbin system bin system xbin asec mountpoint mnt asec pythonpath mnt sdcard com googlecode pythonforandroid extras python data data com googlecode pythonforandroid files python lib lib dynload data dat a com googlecode pythonforandroid files python lib ap host temp mnt sdcard com googlecode pythonforandroid extras python tmp bootclasspath system framework core jar system framework ext jar system frame work framework jar system framework android policy jar system framework servic es jar ap port internal storage mnt sdcard android data data pythonhome data data com googlecode pythonforandroid files python ld library path data data com googlecode pythonforandroid files python lib android root system android property workspace verbose simpleserver connectionthread server thread started verbose jsonrpcserver received params id method authenticate verbose jsonrpcserver sent error null id result true verbose jsonrpcserver received params id method maketoast verbose jsonrpcserver sent error null id result null verbose jsonrpcserver received params id method setresultinteger verbose jsonrpcserver sent error null id result null verbose simpleserver connectionthread server thread died verbose process process exited with result code original issue reported on code google com by sofianem gmail com on mar at copied from original issue damonkohler android scripting
| 0
|
102,901
| 11,309,586,891
|
IssuesEvent
|
2020-01-19 14:12:11
|
TheCraiggers/Pathfinder-Discord-Bot
|
https://api.github.com/repos/TheCraiggers/Pathfinder-Discord-Bot
|
opened
|
Need a quickstart guide for GMs and Players
|
documentation
|
Provide a list of commands to use to get started adding the bot and their characters, and then show some examples on how to roll for init, heal, damage, etc.
|
1.0
|
Need a quickstart guide for GMs and Players - Provide a list of commands to use to get started adding the bot and their characters, and then show some examples on how to roll for init, heal, damage, etc.
|
non_process
|
need a quickstart guide for gms and players provide a list of commands to use to get started adding the bot and their characters and then show some examples on how to roll for init heal damage etc
| 0
|
7,637
| 10,735,484,927
|
IssuesEvent
|
2019-10-29 08:52:45
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
closed
|
MemoryError for 3D vars in preprocessor function daily_statistics
|
preprocessor
|
According to CMIP tables, the 3D variables of ERA-Interim with daily frequency are in 8 pressure levels and with monthly frequency are in 19 levels. For example, when working with preprocessor functions like daily_statistics on daily Geopotential (its CMIP name is zg), it gives the error as:
MemoryError: Unable to allocate array with shape (1464, 8, 241, 480) and data type float32
During handling of the above exception, another exception occurred:
MemoryError: Failed to realise the lazy data as there was not enough memory available.
The data shape would have been (1464, 8, 241, 480) with dtype('float32').
Consider freeing up variables or indexing the data before trying again.
|
1.0
|
MemoryError for 3D vars in preprocessor function daily_statistics - According to CMIP tables, the 3D variables of ERA-Interim with daily frequency are in 8 pressure levels and with monthly frequency are in 19 levels. For example, when working with preprocessor functions like daily_statistics on daily Geopotential (its CMIP name is zg), it gives the error as:
MemoryError: Unable to allocate array with shape (1464, 8, 241, 480) and data type float32
During handling of the above exception, another exception occurred:
MemoryError: Failed to realise the lazy data as there was not enough memory available.
The data shape would have been (1464, 8, 241, 480) with dtype('float32').
Consider freeing up variables or indexing the data before trying again.
|
process
|
memoryerror for vars in preprocessor function daily statistics according to cmip tables the variables of era interim with daily frequency are in pressure levels and with monthly frequency are in levels for example when working with preprocessor functions like daily statistics on daily geopotential its cmip name is zg it gives the error as memoryerror unable to allocate array with shape and data type during handling of the above exception another exception occurred memoryerror failed to realise the lazy data as there was not enough memory available the data shape would have been with dtype consider freeing up variables or indexing the data before trying again
| 1
|
5,930
| 4,075,321,771
|
IssuesEvent
|
2016-05-29 04:27:50
|
d-ronin/dRonin
|
https://api.github.com/repos/d-ronin/dRonin
|
closed
|
Main page in GCS should have link to the documentation (dronin.readme.io)
|
enhancement gcs usability
|
The webpage also doesn't link to the Wiki (anymore?).
|
True
|
Main page in GCS should have link to the documentation (dronin.readme.io) - The webpage also doesn't link to the Wiki (anymore?).
|
non_process
|
main page in gcs should have link to the documentation dronin readme io the webpage also doesn t link to the wiki anymore
| 0
|
6,104
| 8,961,906,901
|
IssuesEvent
|
2019-01-28 10:54:37
|
Madek/madek
|
https://api.github.com/repos/Madek/madek
|
closed
|
Analyse: More than 36 entities should be selectable for any batch process
|
Batch process enhancement
|
**Analyse and check further solutions.**
A general solution for the functions of the batch processing (see printscreen) needs to be found. It shall be possible to process more than 36 media entries at once. Analyse if there is another solution than pulling all UUIDs together - in this solution the browser has a max string amount.
When using the page "Stapelverarbeitung", more than 4000 media entries can be processed at once.

********************
Infos from Refinement:
Stapelverarbeitung:
Mehrere 1000 können in der Stapelverarbeitung geändert werden (erfolgreich getestet)
Einzelauswahl:
Mögliche Lösung für editieren der Metadaten: Ähnliche Lösung wie bei leihs - Metadaten werden vorbestimmt und anschliessen werden diese Felder auf einzelne Einträge auf Listenelemente angewendet.
Für die URL-Lösung mit max. 36 Einträgen muss eine neue Lsg. gesucht werden.
|
1.0
|
Analyse: More than 36 entities should be selectable for any batch process - **Analyse and check further solutions.**
A general solution for the functions of the batch processing (see printscreen) needs to be found. It shall be possible to process more than 36 media entries at once. Analyse if there is another solution than pulling all UUIDs together - in this solution the browser has a max string amount.
When using the page "Stapelverarbeitung", more than 4000 media entries can be processed at once.

********************
Infos from Refinement:
Stapelverarbeitung:
Mehrere 1000 können in der Stapelverarbeitung geändert werden (erfolgreich getestet)
Einzelauswahl:
Mögliche Lösung für editieren der Metadaten: Ähnliche Lösung wie bei leihs - Metadaten werden vorbestimmt und anschliessen werden diese Felder auf einzelne Einträge auf Listenelemente angewendet.
Für die URL-Lösung mit max. 36 Einträgen muss eine neue Lsg. gesucht werden.
|
process
|
analyse more than entities should be selectable for any batch process analyse and check further solutions a general solution for the functions of the batch processing see printscreen needs to be found it shall be possible to process more than media entries at once analyse if there is another solution than pulling all uuids together in this solution the browser has a max string amount when using the page stapelverarbeitung more than media entries can be processed at once infos from refinement stapelverarbeitung mehrere können in der stapelverarbeitung geändert werden erfolgreich getestet einzelauswahl mögliche lösung für editieren der metadaten ähnliche lösung wie bei leihs metadaten werden vorbestimmt und anschliessen werden diese felder auf einzelne einträge auf listenelemente angewendet für die url lösung mit max einträgen muss eine neue lsg gesucht werden
| 1
|
231
| 2,658,658,776
|
IssuesEvent
|
2015-03-18 16:42:42
|
ChelseaStats/issues
|
https://api.github.com/repos/ChelseaStats/issues
|
closed
|
OptaJean March 16 2015 at 11:17AM
|
process
|
<blockquote class="twitter-tweet">
<p>50% - Chelsea have opened the scoring in every single competitive game in 2015, but have only won half of them (8 out of 16). Frail.</p>
— OptaJean (@OptaJean) <a href="http://u.thechels.uk/1wPGn1b">March 16, 2015</a>
</blockquote>
<br><br>
March 16, 2015 at 11:17AM<br>
via Twitter
|
1.0
|
OptaJean March 16 2015 at 11:17AM - <blockquote class="twitter-tweet">
<p>50% - Chelsea have opened the scoring in every single competitive game in 2015, but have only won half of them (8 out of 16). Frail.</p>
— OptaJean (@OptaJean) <a href="http://u.thechels.uk/1wPGn1b">March 16, 2015</a>
</blockquote>
<br><br>
March 16, 2015 at 11:17AM<br>
via Twitter
|
process
|
optajean march at chelsea have opened the scoring in every single competitive game in but have only won half of them out of frail mdash optajean optajean march at via twitter
| 1
|
172,015
| 27,221,866,770
|
IssuesEvent
|
2023-02-21 06:23:00
|
apache/superset
|
https://api.github.com/repos/apache/superset
|
closed
|
[native_filter] Inexistent filter value creation can be confusing to novice
|
inactive dashboard:native-filters need:design-review enhancement:committed
|
In the video below, there are 1000+ filter values in the underlying dataset's `name` column.
With `Search all filter options` box in native filter modal - Advanced section checked, users are able to search and grab filter values beyond the 1000 values display limit. for examples, dropdown list only contains names that start w letter A to J(Julia). name "Max" is a name value that doesn't show in the dropdown, but I can type to search/grab "Max" as a valid filter.
In this case, "junlin" is not a valid value in the selected column, new name value was "created" in the dropdown list, charts that are mapped to the name filter responded but no data returned. It's incredibly confusing. Since the action enables use to type to select, novice users may accidentally _create_ invalid values by typing, either a typo or inexistent value in the field and mistakenly believe that charts are being filtered by their input.
**Proposed change:**
If we have to keep the "type to search/select" functionality in native filter for the 'Search all filter options' feature, we should at least add a validator to distinguish valid and invalid inputs.
https://user-images.githubusercontent.com/67837651/134428765-51a118c8-3349-4d1e-9282-e3ad6fafd09a.mov
|
1.0
|
[native_filter] Inexistent filter value creation can be confusing to novice - In the video below, there are 1000+ filter values in the underlying dataset's `name` column.
With `Search all filter options` box in native filter modal - Advanced section checked, users are able to search and grab filter values beyond the 1000 values display limit. for examples, dropdown list only contains names that start w letter A to J(Julia). name "Max" is a name value that doesn't show in the dropdown, but I can type to search/grab "Max" as a valid filter.
In this case, "junlin" is not a valid value in the selected column, new name value was "created" in the dropdown list, charts that are mapped to the name filter responded but no data returned. It's incredibly confusing. Since the action enables use to type to select, novice users may accidentally _create_ invalid values by typing, either a typo or inexistent value in the field and mistakenly believe that charts are being filtered by their input.
**Proposed change:**
If we have to keep the "type to search/select" functionality in native filter for the 'Search all filter options' feature, we should at least add a validator to distinguish valid and invalid inputs.
https://user-images.githubusercontent.com/67837651/134428765-51a118c8-3349-4d1e-9282-e3ad6fafd09a.mov
|
non_process
|
inexistent filter value creation can be confusing to novice in the video below there are filter values in the underlying dataset s name column with search all filter options box in native filter modal advanced section checked users are able to search and grab filter values beyond the values display limit for examples dropdown list only contains names that start w letter a to j julia name max is a name value that doesn t show in the dropdown but i can type to search grab max as a valid filter in this case junlin is not a valid value in the selected column new name value was created in the dropdown list charts that are mapped to the name filter responded but no data returned it s incredibly confusing since the action enables use to type to select novice users may accidentally create invalid values by typing either a typo or inexistent value in the field and mistakenly believe that charts are being filtered by their input proposed change if we have to keep the type to search select functionality in native filter for the search all filter options feature we should at least add a validator to distinguish valid and invalid inputs
| 0
|
189,700
| 14,518,419,034
|
IssuesEvent
|
2020-12-13 23:34:21
|
DynamoRIO/drmemory
|
https://api.github.com/repos/DynamoRIO/drmemory
|
closed
|
Migrate ASAP from Travis CI as it no longer offers free OSS CI and has stopped running
|
Component-Tests Hotlist-ContinuousIntegration Priority-High
|
See the full explanation in the corresponding DR issue:
https://github.com/DynamoRIO/dynamorio/issues/4549
The Travis account seems to be shared as the DrM Travis is now blocked even though it certainly hasn't used 10K credits on its own:
https://travis-ci.com/github/DynamoRIO/drmemory/pull_requests
> Builds have been temporarily disabled for public repositories due to a negative credit balance. Please go to the Plan page to replenish your credit balance or alter your Consume paid credits for OSS setting.
The plan is to move to Github Actions, just like for DR.
|
1.0
|
Migrate ASAP from Travis CI as it no longer offers free OSS CI and has stopped running - See the full explanation in the corresponding DR issue:
https://github.com/DynamoRIO/dynamorio/issues/4549
The Travis account seems to be shared as the DrM Travis is now blocked even though it certainly hasn't used 10K credits on its own:
https://travis-ci.com/github/DynamoRIO/drmemory/pull_requests
> Builds have been temporarily disabled for public repositories due to a negative credit balance. Please go to the Plan page to replenish your credit balance or alter your Consume paid credits for OSS setting.
The plan is to move to Github Actions, just like for DR.
|
non_process
|
migrate asap from travis ci as it no longer offers free oss ci and has stopped running see the full explanation in the corresponding dr issue the travis account seems to be shared as the drm travis is now blocked even though it certainly hasn t used credits on its own builds have been temporarily disabled for public repositories due to a negative credit balance please go to the plan page to replenish your credit balance or alter your consume paid credits for oss setting the plan is to move to github actions just like for dr
| 0
|
49,494
| 13,453,559,196
|
IssuesEvent
|
2020-09-09 01:18:41
|
fufunoyu/shop
|
https://api.github.com/repos/fufunoyu/shop
|
opened
|
CVE-2020-11113 (High) detected in jackson-databind-2.9.9.jar
|
security vulnerability
|
## CVE-2020-11113 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /shop/target/shop/WEB-INF/lib/jackson-databind-2.9.9.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5">96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-11113 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-11113 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /shop/target/shop/WEB-INF/lib/jackson-databind-2.9.9.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5">96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library shop target shop web inf lib jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache openjpa ee wasregistrymanagedruntime aka openjpa publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind
| 0
|
45,879
| 7,207,868,083
|
IssuesEvent
|
2018-02-07 00:00:03
|
containous/traefik
|
https://api.github.com/repos/containous/traefik
|
closed
|
Better Docs (e.g Rancher)
|
area/documentation area/provider/rancher kind/question
|
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
### Do you want to request a *feature* or report a *bug*?
Docs are the best feature we could ask for - so we do not have to look for support without needing taking time from you.
PS. Sorry for the huge text, but what I have to say can not be summed up. Please address as feedback please.
For example, I'm testing absolutely all possibilities. From third party to "official" and always encounter different configurations. Sometimes I have to cause failure to know if I'm doing something right. Applying "science" of "right and wrong".
About Docs, I believe all other docker-related documentation or any other (k8) documentation is well written. However, there is a gap as far as documentation related to Rancher is concerned. I have over 15 tabs of Chrome open each with a different solution / issue of the same product (Traefik) and in none of them I had 100% success. I did, but always with something small that I did not understand or there is a bug and I can not see or talk about it.
Continuing. There is a video on Youtube that a member of Team (Traefik) explains how to configure TK using "add service" in Rancher and adding all parameters in a command line only. I did not find what he did in the documentation. So That would be a "problem". Someone publicly sell a product with a technique that is not completely clear and consolidated in the official documentation. And this form (as service) does not allow you to use Labels for this Tk services it sellf (only command option) and some commands seemed to have no effect. At least I tested (use labels) and it did not work.
One thing I'm wondering is - Should I install Traefik configured to docker (sock) off the "Rancher" or are there no limitations keeping it as a service inside a "Rancher" stack? There are no technical explanations about this. Which is better. To spend time on.
A step by step of each configuration technique would be very good to have in the documentation. I think that only in Rancher is there more than one way to "install" and configure Traefik and I do not know which one is the best.
One detail is, in my case, I was able to install Traefik through the catalog and the "video technique". Although it is working, there are flaws in HTML, CSS and etc of the services - spite I've tested it on another public PORT and it's perfect. Only at Traefik that route wrong. When there are only responses like "404" or a blank page that in "dashboard / health" appears with negative status. See the video - By the way this issue isn't to support me. I'm already requesting this support in Slack. Here are just references.
Anyway, I think I ramified the writing a bit. But the very focus is on the documentation related to Rancher. I really wanted to see Traefik running on my project along with Rancher. If I had not been stubborn I would have given up. Please put all possibilities in your documentation. After all you sell that there is indeed support for Rancher.
https://youtu.be/0zVwqWwHo5w?t=14m53s "video technique".
https://youtu.be/7ZO3lB5K8aA - My video.
https://docs.traefik.io/configuration/backends/rancher/ - The Rancher TK Doc.
|
1.0
|
Better Docs (e.g Rancher) - <!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://traefik.herokuapp.com
-->
### Do you want to request a *feature* or report a *bug*?
Docs are the best feature we could ask for - so we do not have to look for support without needing taking time from you.
PS. Sorry for the huge text, but what I have to say can not be summed up. Please address as feedback please.
For example, I'm testing absolutely all possibilities. From third party to "official" and always encounter different configurations. Sometimes I have to cause failure to know if I'm doing something right. Applying "science" of "right and wrong".
About Docs, I believe all other docker-related documentation or any other (k8) documentation is well written. However, there is a gap as far as documentation related to Rancher is concerned. I have over 15 tabs of Chrome open each with a different solution / issue of the same product (Traefik) and in none of them I had 100% success. I did, but always with something small that I did not understand or there is a bug and I can not see or talk about it.
Continuing. There is a video on Youtube that a member of Team (Traefik) explains how to configure TK using "add service" in Rancher and adding all parameters in a command line only. I did not find what he did in the documentation. So That would be a "problem". Someone publicly sell a product with a technique that is not completely clear and consolidated in the official documentation. And this form (as service) does not allow you to use Labels for this Tk services it sellf (only command option) and some commands seemed to have no effect. At least I tested (use labels) and it did not work.
One thing I'm wondering is - Should I install Traefik configured to docker (sock) off the "Rancher" or are there no limitations keeping it as a service inside a "Rancher" stack? There are no technical explanations about this. Which is better. To spend time on.
A step by step of each configuration technique would be very good to have in the documentation. I think that only in Rancher is there more than one way to "install" and configure Traefik and I do not know which one is the best.
One detail is, in my case, I was able to install Traefik through the catalog and the "video technique". Although it is working, there are flaws in HTML, CSS and etc of the services - spite I've tested it on another public PORT and it's perfect. Only at Traefik that route wrong. When there are only responses like "404" or a blank page that in "dashboard / health" appears with negative status. See the video - By the way this issue isn't to support me. I'm already requesting this support in Slack. Here are just references.
Anyway, I think I ramified the writing a bit. But the very focus is on the documentation related to Rancher. I really wanted to see Traefik running on my project along with Rancher. If I had not been stubborn I would have given up. Please put all possibilities in your documentation. After all you sell that there is indeed support for Rancher.
https://youtu.be/0zVwqWwHo5w?t=14m53s "video technique".
https://youtu.be/7ZO3lB5K8aA - My video.
https://docs.traefik.io/configuration/backends/rancher/ - The Rancher TK Doc.
|
non_process
|
better docs e g rancher do not file issues for general support questions the issue tracker is for reporting bugs and feature requests only for end user related support questions refer to one of the following stack overflow using the traefik tag the traefik community slack channel do you want to request a feature or report a bug docs are the best feature we could ask for so we do not have to look for support without needing taking time from you ps sorry for the huge text but what i have to say can not be summed up please address as feedback please for example i m testing absolutely all possibilities from third party to official and always encounter different configurations sometimes i have to cause failure to know if i m doing something right applying science of right and wrong about docs i believe all other docker related documentation or any other documentation is well written however there is a gap as far as documentation related to rancher is concerned i have over tabs of chrome open each with a different solution issue of the same product traefik and in none of them i had success i did but always with something small that i did not understand or there is a bug and i can not see or talk about it continuing there is a video on youtube that a member of team traefik explains how to configure tk using add service in rancher and adding all parameters in a command line only i did not find what he did in the documentation so that would be a problem someone publicly sell a product with a technique that is not completely clear and consolidated in the official documentation and this form as service does not allow you to use labels for this tk services it sellf only command option and some commands seemed to have no effect at least i tested use labels and it did not work one thing i m wondering is should i install traefik configured to docker sock off the rancher or are there no limitations keeping it as a service inside a rancher stack there are no technical explanations about this which is better to spend time on a step by step of each configuration technique would be very good to have in the documentation i think that only in rancher is there more than one way to install and configure traefik and i do not know which one is the best one detail is in my case i was able to install traefik through the catalog and the video technique although it is working there are flaws in html css and etc of the services spite i ve tested it on another public port and it s perfect only at traefik that route wrong when there are only responses like or a blank page that in dashboard health appears with negative status see the video by the way this issue isn t to support me i m already requesting this support in slack here are just references anyway i think i ramified the writing a bit but the very focus is on the documentation related to rancher i really wanted to see traefik running on my project along with rancher if i had not been stubborn i would have given up please put all possibilities in your documentation after all you sell that there is indeed support for rancher video technique my video the rancher tk doc
| 0
|
26,487
| 2,684,557,024
|
IssuesEvent
|
2015-03-29 03:32:54
|
gtcasl/gpuocelot
|
https://api.github.com/repos/gtcasl/gpuocelot
|
opened
|
LLVM error when using glut: Assertion "Option already exists!" failed.
|
bug imported Priority-Medium
|
_From [max.m...@dameweb.de](https://code.google.com/u/116782405542037073817/) on June 19, 2013 09:59:17_
What steps will reproduce the problem? 1. Take the following code:
\#include \<GL/glut.h>
\#include \<cstdio>
void display(void){
}
int main(int argc, char** argv){
printf("a\n");
glutInit(&argc, argv);
printf("b\n");
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
printf("c\n");
glutInitWindowSize(100, 100);
printf("d\n");
glutInitWindowPosition(100, 100);
printf("e\n");
glutCreateWindow("test");
printf("f\n");
glutDisplayFunc(display);
printf("g\n");
glutMainLoop();
return 0;
}
2. Compile and run it once without and once with linking to ocelot:
gcc -c main.cpp && gcc -o main main.o -lglut -lstdc++ && ./main
gcc -c main.cpp && gcc -o main main.o -locelot -lglut -lstdc++ && ./main
3. Look at the console output. What is the expected output? What do you see instead? When I run the program not linked to ocelot, it outputs the following:
./main
a
b
c
d
e
f
g
When I run the program linked to ocelot, it breaks:
./main
a
b
c
d
e
main: /build/src/llvm- ce7bbb8b46abd1aef80dff50bd73315719e1f8bb /include/llvm/Support/CommandLine.h:646: void llvm::cl::parser<DataType>::addLiteralOption(const char*, const DT&, const char*) [with DT = llvm::FunctionPass* (*)(); DataType = llvm::FunctionPass* (*)()]: Assertion `findOption(Name) == Values.size() && "Option already exists!"' failed.
make: *** [run] Aborted (core dumped) What version of the product are you using? On what operating system? The program was compiled under Arch Linux with all current upgrades, LLVM 3.3-1, gpuocelot r2235 , freeglut 2.8.1 and mesa 9.1.3. Please provide any additional information below. When compiling with nvcc or when testing the CUDA code samples (e.g. simpleGL), the same error appears.
The same error appears as well under Ubuntu 11.04 with completely different library versions (gpuocelot was r2235 as well).
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=86_
|
1.0
|
LLVM error when using glut: Assertion "Option already exists!" failed. - _From [max.m...@dameweb.de](https://code.google.com/u/116782405542037073817/) on June 19, 2013 09:59:17_
What steps will reproduce the problem? 1. Take the following code:
\#include \<GL/glut.h>
\#include \<cstdio>
void display(void){
}
int main(int argc, char** argv){
printf("a\n");
glutInit(&argc, argv);
printf("b\n");
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
printf("c\n");
glutInitWindowSize(100, 100);
printf("d\n");
glutInitWindowPosition(100, 100);
printf("e\n");
glutCreateWindow("test");
printf("f\n");
glutDisplayFunc(display);
printf("g\n");
glutMainLoop();
return 0;
}
2. Compile and run it once without and once with linking to ocelot:
gcc -c main.cpp && gcc -o main main.o -lglut -lstdc++ && ./main
gcc -c main.cpp && gcc -o main main.o -locelot -lglut -lstdc++ && ./main
3. Look at the console output. What is the expected output? What do you see instead? When I run the program not linked to ocelot, it outputs the following:
./main
a
b
c
d
e
f
g
When I run the program linked to ocelot, it breaks:
./main
a
b
c
d
e
main: /build/src/llvm- ce7bbb8b46abd1aef80dff50bd73315719e1f8bb /include/llvm/Support/CommandLine.h:646: void llvm::cl::parser<DataType>::addLiteralOption(const char*, const DT&, const char*) [with DT = llvm::FunctionPass* (*)(); DataType = llvm::FunctionPass* (*)()]: Assertion `findOption(Name) == Values.size() && "Option already exists!"' failed.
make: *** [run] Aborted (core dumped) What version of the product are you using? On what operating system? The program was compiled under Arch Linux with all current upgrades, LLVM 3.3-1, gpuocelot r2235 , freeglut 2.8.1 and mesa 9.1.3. Please provide any additional information below. When compiling with nvcc or when testing the CUDA code samples (e.g. simpleGL), the same error appears.
The same error appears as well under Ubuntu 11.04 with completely different library versions (gpuocelot was r2235 as well).
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=86_
|
non_process
|
llvm error when using glut assertion option already exists failed from on june what steps will reproduce the problem take the following code include include void display void int main int argc char argv printf a n glutinit argc argv printf b n glutinitdisplaymode glut single glut rgb printf c n glutinitwindowsize printf d n glutinitwindowposition printf e n glutcreatewindow test printf f n glutdisplayfunc display printf g n glutmainloop return compile and run it once without and once with linking to ocelot gcc c main cpp gcc o main main o lglut lstdc main gcc c main cpp gcc o main main o locelot lglut lstdc main look at the console output what is the expected output what do you see instead when i run the program not linked to ocelot it outputs the following main a b c d e f g when i run the program linked to ocelot it breaks main a b c d e main build src llvm include llvm support commandline h void llvm cl parser addliteraloption const char const dt const char assertion findoption name values size option already exists failed make aborted core dumped what version of the product are you using on what operating system the program was compiled under arch linux with all current upgrades llvm gpuocelot freeglut and mesa please provide any additional information below when compiling with nvcc or when testing the cuda code samples e g simplegl the same error appears the same error appears as well under ubuntu with completely different library versions gpuocelot was as well original issue
| 0
|
278,523
| 8,643,322,663
|
IssuesEvent
|
2018-11-25 16:50:27
|
buttercup/buttercup-browser-extension
|
https://api.github.com/repos/buttercup/buttercup-browser-extension
|
opened
|
Bypass CORS restrictions on WebDAV services
|
Priority: Medium Status: Available Type: Enhancement
|
Perhaps [modify response headers](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onHeadersReceived) to always include `Access-Control-Allow-Origin: *`.
|
1.0
|
Bypass CORS restrictions on WebDAV services - Perhaps [modify response headers](https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onHeadersReceived) to always include `Access-Control-Allow-Origin: *`.
|
non_process
|
bypass cors restrictions on webdav services perhaps to always include access control allow origin
| 0
|
18,003
| 24,022,623,648
|
IssuesEvent
|
2022-09-15 08:54:54
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
reopened
|
能否新增一个日历功能?
|
enhancement Stale in process
|
### 这个功能解决了什么问题
能否新增一个日历功能? 这个功能有利于安装预约模块,师傅接到用户需求时,经常会说周几,而不是几号,有利于师傅听取意见的时候快速定位操作
### 你建议的方案是什么
新增日历组件

|
1.0
|
能否新增一个日历功能? - ### 这个功能解决了什么问题
能否新增一个日历功能? 这个功能有利于安装预约模块,师傅接到用户需求时,经常会说周几,而不是几号,有利于师傅听取意见的时候快速定位操作
### 你建议的方案是什么
新增日历组件

|
process
|
能否新增一个日历功能 这个功能解决了什么问题 能否新增一个日历功能 这个功能有利于安装预约模块,师傅接到用户需求时,经常会说周几,而不是几号,有利于师傅听取意见的时候快速定位操作 你建议的方案是什么 新增日历组件
| 1
|
12,127
| 14,740,841,755
|
IssuesEvent
|
2021-01-07 09:42:40
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Upload Usage Files
|
anc-process anp-not prioritized ant-enhancement
|
In GitLab by @kdjstudios on Dec 5, 2018, 14:01
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** All
**Client/Site:** All
**Account:** All
**Issue:**
Over the last year we have had multiple times where we have received HD tickets regarding the usage on accounts. It seems most of the time these are due to issues with the file that is uploaded and not SA Billing Processing the file.
I would just like to confirm that we store both the original unaltered uploaded file, and we also store the altered version of the file where we add the calculated fields. The latter is the one that is available via the "Download" button correct?
If we do not store both files, What would it take to update the upload process and store both versions of the file?
|
1.0
|
Upload Usage Files - In GitLab by @kdjstudios on Dec 5, 2018, 14:01
**Submitted by:** Kyle
**Helpdesk:** NA
**Server:** All
**Client/Site:** All
**Account:** All
**Issue:**
Over the last year we have had multiple times where we have received HD tickets regarding the usage on accounts. It seems most of the time these are due to issues with the file that is uploaded and not SA Billing Processing the file.
I would just like to confirm that we store both the original unaltered uploaded file, and we also store the altered version of the file where we add the calculated fields. The latter is the one that is available via the "Download" button correct?
If we do not store both files, What would it take to update the upload process and store both versions of the file?
|
process
|
upload usage files in gitlab by kdjstudios on dec submitted by kyle helpdesk na server all client site all account all issue over the last year we have had multiple times where we have received hd tickets regarding the usage on accounts it seems most of the time these are due to issues with the file that is uploaded and not sa billing processing the file i would just like to confirm that we store both the original unaltered uploaded file and we also store the altered version of the file where we add the calculated fields the latter is the one that is available via the download button correct if we do not store both files what would it take to update the upload process and store both versions of the file
| 1
|
53,160
| 13,261,066,301
|
IssuesEvent
|
2020-08-20 19:14:56
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
gotoblas2 port doesnt add symlinks for the bulldozer variant (Trac #860)
|
Migrated from Trac defect tools/ports
|
the cmake `tooldef()` macro gets confused because it cant "see" the libgoto*_nehalim.* libraries.
symlinks need to be created.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/860">https://code.icecube.wisc.edu/projects/icecube/ticket/860</a>, reported by negaand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T20:47:35",
"_ts": "1423687655687171",
"description": "the cmake `tooldef()` macro gets confused because it cant \"see\" the libgoto*_nehalim.* libraries.\nsymlinks need to be created.",
"reporter": "nega",
"cc": "briedel",
"resolution": "fixed",
"time": "2015-01-14T23:22:19",
"component": "tools/ports",
"summary": "gotoblas2 port doesnt add symlinks for the bulldozer variant",
"priority": "normal",
"keywords": "blas",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
gotoblas2 port doesnt add symlinks for the bulldozer variant (Trac #860) - the cmake `tooldef()` macro gets confused because it cant "see" the libgoto*_nehalim.* libraries.
symlinks need to be created.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/860">https://code.icecube.wisc.edu/projects/icecube/ticket/860</a>, reported by negaand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T20:47:35",
"_ts": "1423687655687171",
"description": "the cmake `tooldef()` macro gets confused because it cant \"see\" the libgoto*_nehalim.* libraries.\nsymlinks need to be created.",
"reporter": "nega",
"cc": "briedel",
"resolution": "fixed",
"time": "2015-01-14T23:22:19",
"component": "tools/ports",
"summary": "gotoblas2 port doesnt add symlinks for the bulldozer variant",
"priority": "normal",
"keywords": "blas",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
port doesnt add symlinks for the bulldozer variant trac the cmake tooldef macro gets confused because it cant see the libgoto nehalim libraries symlinks need to be created migrated from json status closed changetime ts description the cmake tooldef macro gets confused because it cant see the libgoto nehalim libraries nsymlinks need to be created reporter nega cc briedel resolution fixed time component tools ports summary port doesnt add symlinks for the bulldozer variant priority normal keywords blas milestone owner nega type defect
| 0
|
1,577
| 4,167,538,946
|
IssuesEvent
|
2016-06-20 09:54:29
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Коростень: раскриття послуги - Видача копій, витягів з розпоряджень міського голови, рішень, прийнятих міською радою та виконавчим комітетом -
|
In process of testing in work test
|
В сервисДате раскрыто на тестирование, а ишью не было. Ввел для контроля
Прошу обратить внимание @ezhikus
|
1.0
|
Коростень: раскриття послуги - Видача копій, витягів з розпоряджень міського голови, рішень, прийнятих міською радою та виконавчим комітетом - - В сервисДате раскрыто на тестирование, а ишью не было. Ввел для контроля
Прошу обратить внимание @ezhikus
|
process
|
коростень раскриття послуги видача копій витягів з розпоряджень міського голови рішень прийнятих міською радою та виконавчим комітетом в сервисдате раскрыто на тестирование а ишью не было ввел для контроля прошу обратить внимание ezhikus
| 1
|
325,142
| 24,036,917,586
|
IssuesEvent
|
2022-09-15 20:06:40
|
kurkle/chartjs-plugin-autocolors
|
https://api.github.com/repos/kurkle/chartjs-plugin-autocolors
|
closed
|
Why `--save-dev`?
|
documentation
|
This in the README confused me:
https://github.com/kurkle/chartjs-plugin-autocolors/blob/4a19c43a44c844b85c8e648db1229b114b64c122/README.md?plain=1#L20
Wouldn't you need the package in production, not just development?
|
1.0
|
Why `--save-dev`? - This in the README confused me:
https://github.com/kurkle/chartjs-plugin-autocolors/blob/4a19c43a44c844b85c8e648db1229b114b64c122/README.md?plain=1#L20
Wouldn't you need the package in production, not just development?
|
non_process
|
why save dev this in the readme confused me wouldn t you need the package in production not just development
| 0
|
321,604
| 23,863,281,613
|
IssuesEvent
|
2022-09-07 08:54:49
|
solidusio/solidus
|
https://api.github.com/repos/solidusio/solidus
|
closed
|
Shared Examples for Spree::Event::Subscriber
|
Documentation
|
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
When writing specs for a subscriber which includes the `Spree::Event::Subscriber`, it would be nice to have `shared_examples`.
For example:
```ruby
module MySubscriber
include Spree::Event::Subscription
end
RSpec.describe MySubscriber do
it_behaves_like "a spree event subscriber"
# Rest of specs
end
```
|
1.0
|
Shared Examples for Spree::Event::Subscriber - **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
When writing specs for a subscriber which includes the `Spree::Event::Subscriber`, it would be nice to have `shared_examples`.
For example:
```ruby
module MySubscriber
include Spree::Event::Subscription
end
RSpec.describe MySubscriber do
it_behaves_like "a spree event subscriber"
# Rest of specs
end
```
|
non_process
|
shared examples for spree event subscriber is your feature request related to a problem please describe when writing specs for a subscriber which includes the spree event subscriber it would be nice to have shared examples for example ruby module mysubscriber include spree event subscription end rspec describe mysubscriber do it behaves like a spree event subscriber rest of specs end
| 0
|
122,003
| 17,685,632,954
|
IssuesEvent
|
2021-08-24 00:50:05
|
ghc-dev/Stefanie-Johnson
|
https://api.github.com/repos/ghc-dev/Stefanie-Johnson
|
opened
|
CVE-2017-16119 (High) detected in fresh-0.2.4.tgz
|
security vulnerability
|
## CVE-2017-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>fresh-0.2.4.tgz</b></p></summary>
<p>HTTP response freshness testing</p>
<p>Library home page: <a href="https://registry.npmjs.org/fresh/-/fresh-0.2.4.tgz">https://registry.npmjs.org/fresh/-/fresh-0.2.4.tgz</a></p>
<p>Path to dependency file: Stefanie-Johnson/package.json</p>
<p>Path to vulnerable library: Stefanie-Johnson/node_modules/fresh/package.json</p>
<p>
Dependency Hierarchy:
- send-0.11.1.tgz (Root Library)
- :x: **fresh-0.2.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Stefanie-Johnson/commit/c3b55c261b066a7bc55a0b0000b534a8212da4b2">c3b55c261b066a7bc55a0b0000b534a8212da4b2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Fresh is a module used by the Express.js framework for HTTP response freshness testing. It is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse. This causes the event loop to be blocked causing a denial of service condition.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16119>CVE-2017-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/526">https://www.npmjs.com/advisories/526</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: fresh - 0.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"fresh","packageVersion":"0.2.4","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"send:0.11.1;fresh:0.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"fresh - 0.5.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16119","vulnerabilityDetails":"Fresh is a module used by the Express.js framework for HTTP response freshness testing. It is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse. This causes the event loop to be blocked causing a denial of service condition.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16119","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-16119 (High) detected in fresh-0.2.4.tgz - ## CVE-2017-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>fresh-0.2.4.tgz</b></p></summary>
<p>HTTP response freshness testing</p>
<p>Library home page: <a href="https://registry.npmjs.org/fresh/-/fresh-0.2.4.tgz">https://registry.npmjs.org/fresh/-/fresh-0.2.4.tgz</a></p>
<p>Path to dependency file: Stefanie-Johnson/package.json</p>
<p>Path to vulnerable library: Stefanie-Johnson/node_modules/fresh/package.json</p>
<p>
Dependency Hierarchy:
- send-0.11.1.tgz (Root Library)
- :x: **fresh-0.2.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Stefanie-Johnson/commit/c3b55c261b066a7bc55a0b0000b534a8212da4b2">c3b55c261b066a7bc55a0b0000b534a8212da4b2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Fresh is a module used by the Express.js framework for HTTP response freshness testing. It is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse. This causes the event loop to be blocked causing a denial of service condition.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16119>CVE-2017-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/526">https://www.npmjs.com/advisories/526</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: fresh - 0.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"fresh","packageVersion":"0.2.4","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"send:0.11.1;fresh:0.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"fresh - 0.5.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16119","vulnerabilityDetails":"Fresh is a module used by the Express.js framework for HTTP response freshness testing. It is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse. This causes the event loop to be blocked causing a denial of service condition.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16119","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in fresh tgz cve high severity vulnerability vulnerable library fresh tgz http response freshness testing library home page a href path to dependency file stefanie johnson package json path to vulnerable library stefanie johnson node modules fresh package json dependency hierarchy send tgz root library x fresh tgz vulnerable library found in head commit a href found in base branch master vulnerability details fresh is a module used by the express js framework for http response freshness testing it is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse this causes the event loop to be blocked causing a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution fresh isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree send fresh isminimumfixversionavailable true minimumfixversion fresh basebranches vulnerabilityidentifier cve vulnerabilitydetails fresh is a module used by the express js framework for http response freshness testing it is vulnerable to a regular expression denial of service when it is passed specially crafted input to parse this causes the event loop to be blocked causing a denial of service condition vulnerabilityurl
| 0
|
772,772
| 27,134,826,692
|
IssuesEvent
|
2023-02-16 12:28:28
|
MystenLabs/sui
|
https://api.github.com/repos/MystenLabs/sui
|
opened
|
[Move.lock] Avoid package resolution if manifest unchanged
|
Type: Enhancement Priority: Low devx move
|
**Depends on #8342**
Once lock files are enabled for Sui, it will be possible to rely on them, instead of the manifest, as a complete picture of a package's transitive dependency graph. This means that if a package's manifest has not been modified since a lock file was last generated from it, it does not need to be re-generated.
Generating a lock file from a manifest can be a costly process, because it requires fetching each dependency in turn, and discovering from that dependency's manifest what further packages to fetch, and so on, recursively, which is difficult to parallelise.
If this step can be avoided, there are further gains to be had, by parallelising transitive dependency fetching.
|
1.0
|
[Move.lock] Avoid package resolution if manifest unchanged - **Depends on #8342**
Once lock files are enabled for Sui, it will be possible to rely on them, instead of the manifest, as a complete picture of a package's transitive dependency graph. This means that if a package's manifest has not been modified since a lock file was last generated from it, it does not need to be re-generated.
Generating a lock file from a manifest can be a costly process, because it requires fetching each dependency in turn, and discovering from that dependency's manifest what further packages to fetch, and so on, recursively, which is difficult to parallelise.
If this step can be avoided, there are further gains to be had, by parallelising transitive dependency fetching.
|
non_process
|
avoid package resolution if manifest unchanged depends on once lock files are enabled for sui it will be possible to rely on them instead of the manifest as a complete picture of a package s transitive dependency graph this means that if a package s manifest has not been modified since a lock file was last generated from it it does not need to be re generated generating a lock file from a manifest can be a costly process because it requires fetching each dependency in turn and discovering from that dependency s manifest what further packages to fetch and so on recursively which is difficult to parallelise if this step can be avoided there are further gains to be had by parallelising transitive dependency fetching
| 0
|
6,807
| 3,462,699,733
|
IssuesEvent
|
2015-12-21 02:56:55
|
LloydMontgomery/clash_tool
|
https://api.github.com/repos/LloydMontgomery/clash_tool
|
opened
|
Decrease Database Load
|
bad code
|
Currently, when an update is made to a war, the entire war is re-written to the database. This is bad code. I need to change it so Angular tracks what has actually changed on the page, and then submits that information to the server so it only writes the new information. I imagine this can be done through a $watch Angular application
|
1.0
|
Decrease Database Load - Currently, when an update is made to a war, the entire war is re-written to the database. This is bad code. I need to change it so Angular tracks what has actually changed on the page, and then submits that information to the server so it only writes the new information. I imagine this can be done through a $watch Angular application
|
non_process
|
decrease database load currently when an update is made to a war the entire war is re written to the database this is bad code i need to change it so angular tracks what has actually changed on the page and then submits that information to the server so it only writes the new information i imagine this can be done through a watch angular application
| 0
|
15,573
| 19,703,506,497
|
IssuesEvent
|
2022-01-12 19:08:11
|
googleapis/nodejs-asset
|
https://api.github.com/repos/googleapis/nodejs-asset
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'asset' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'asset' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname asset invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
2,917
| 5,914,308,000
|
IssuesEvent
|
2017-05-22 02:00:54
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Parse having trouble with line feed
|
bug duplicate parse-tree-processing
|
When my code parses, it flags up a line feed. When I remove the line feed, it moves on until it gets to the next line feed.

The code is correct and runs correctly, so i am assuming this is a glitch. The code explorer doesn't raise any issues in the code
|
1.0
|
Parse having trouble with line feed - When my code parses, it flags up a line feed. When I remove the line feed, it moves on until it gets to the next line feed.

The code is correct and runs correctly, so i am assuming this is a glitch. The code explorer doesn't raise any issues in the code
|
process
|
parse having trouble with line feed when my code parses it flags up a line feed when i remove the line feed it moves on until it gets to the next line feed the code is correct and runs correctly so i am assuming this is a glitch the code explorer doesn t raise any issues in the code
| 1
|
499,503
| 14,448,929,820
|
IssuesEvent
|
2020-12-08 07:12:37
|
longhorn/longhorn
|
https://api.github.com/repos/longhorn/longhorn
|
opened
|
[BUG] 'last-applied-tolerations' annotation is missing in shareManager pods.
|
bug priority/2
|
**Describe the bug**
'last-applied-tolerations' annotation is missing in the Yaml of shareManager pods.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy longhorn v1.1.0-rc1 in a cluster having 4 nodes (1 etcd/control plane and 3 worker).
2. Create a RWX volume.
3. Check the Yaml of ShareManager pod.
4. `longhorn.io/last-applied-tolerations` is missing from the YAML.
```
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
creationTimestamp: "2020-12-08T07:02:24Z"
```
**Expected behavior**
```
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
longhorn.io/last-applied-tolerations: '[{"key":"persistence","operator":"Equal","value":"true","effect":"NoExecute"}]'
creationTimestamp: "2020-12-08T07:01:06Z"
```
**Environment:**
- Longhorn version: v1.1.0-rc1
- Kubernetes version: 18.12
|
1.0
|
[BUG] 'last-applied-tolerations' annotation is missing in shareManager pods. - **Describe the bug**
'last-applied-tolerations' annotation is missing in the Yaml of shareManager pods.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy longhorn v1.1.0-rc1 in a cluster having 4 nodes (1 etcd/control plane and 3 worker).
2. Create a RWX volume.
3. Check the Yaml of ShareManager pod.
4. `longhorn.io/last-applied-tolerations` is missing from the YAML.
```
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
creationTimestamp: "2020-12-08T07:02:24Z"
```
**Expected behavior**
```
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP:
cni.projectcalico.org/podIPs:
longhorn.io/last-applied-tolerations: '[{"key":"persistence","operator":"Equal","value":"true","effect":"NoExecute"}]'
creationTimestamp: "2020-12-08T07:01:06Z"
```
**Environment:**
- Longhorn version: v1.1.0-rc1
- Kubernetes version: 18.12
|
non_process
|
last applied tolerations annotation is missing in sharemanager pods describe the bug last applied tolerations annotation is missing in the yaml of sharemanager pods to reproduce steps to reproduce the behavior deploy longhorn in a cluster having nodes etcd control plane and worker create a rwx volume check the yaml of sharemanager pod longhorn io last applied tolerations is missing from the yaml apiversion kind pod metadata annotations cni projectcalico org podip cni projectcalico org podips creationtimestamp expected behavior apiversion kind pod metadata annotations cni projectcalico org podip cni projectcalico org podips longhorn io last applied tolerations creationtimestamp environment longhorn version kubernetes version
| 0
|
7,314
| 10,451,586,844
|
IssuesEvent
|
2019-09-19 13:09:29
|
EthVM/EthVM
|
https://api.github.com/repos/EthVM/EthVM
|
closed
|
Re-introduce historical hash rate
|
enhancement priority:medium project:processing
|
This was removed as part of some other work and needs re-introduced.
|
1.0
|
Re-introduce historical hash rate - This was removed as part of some other work and needs re-introduced.
|
process
|
re introduce historical hash rate this was removed as part of some other work and needs re introduced
| 1
|
11,788
| 14,617,711,767
|
IssuesEvent
|
2020-12-22 15:10:18
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
opened
|
[PM] [Dev] getting 500 error when site permission is not given
|
Bug P1 Participant manager Process: Dev
|
Getting 500 error when site permission is not given
AR : Displaying general error message
ER : 'Site(s) not found' (EC_0004) should be displayed

|
1.0
|
[PM] [Dev] getting 500 error when site permission is not given - Getting 500 error when site permission is not given
AR : Displaying general error message
ER : 'Site(s) not found' (EC_0004) should be displayed

|
process
|
getting error when site permission is not given getting error when site permission is not given ar displaying general error message er site s not found ec should be displayed
| 1
|
16,152
| 20,509,211,064
|
IssuesEvent
|
2022-03-01 03:20:24
|
googleapis/java-analytics-admin
|
https://api.github.com/repos/googleapis/java-analytics-admin
|
closed
|
Your .repo-metadata.json file has a problem 🤒
|
type: process api: analyticsadmin repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'analytics-admin' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'analytics-admin' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname analytics admin invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
16,090
| 20,256,944,487
|
IssuesEvent
|
2022-02-15 00:49:12
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Panic in u32div operation processor
|
bug good first issue processor
|
`u32div` [operation processor](https://github.com/maticnetwork/miden/blob/next/processor/src/operations/u32_ops.rs#L133) currently incorrectly handles division by zero. It panics, but instead should return `ExecutionError::DivideByZero`.
We should also update relevant [u32tests](https://github.com/maticnetwork/miden/blob/next/processor/src/tests/u32_ops.rs) to expect an error instead of a panic.
|
1.0
|
Panic in u32div operation processor - `u32div` [operation processor](https://github.com/maticnetwork/miden/blob/next/processor/src/operations/u32_ops.rs#L133) currently incorrectly handles division by zero. It panics, but instead should return `ExecutionError::DivideByZero`.
We should also update relevant [u32tests](https://github.com/maticnetwork/miden/blob/next/processor/src/tests/u32_ops.rs) to expect an error instead of a panic.
|
process
|
panic in operation processor currently incorrectly handles division by zero it panics but instead should return executionerror dividebyzero we should also update relevant to expect an error instead of a panic
| 1
|
279,833
| 21,186,005,382
|
IssuesEvent
|
2022-04-08 12:49:30
|
Esri/arcgis-python-api
|
https://api.github.com/repos/Esri/arcgis-python-api
|
closed
|
Sample for Publishing Hosted Table in ArcGIS Online
|
enhancement under consideration documentation
|
I am trying to automate a process to do the following:
1. Add an empty hosted table with pre-defined schema to ArcGIS Online
2. Generate an ArcGIS Online credits report
3. Iterate over the generated report (CSV), and populate the empty table
I have been able to figure out no. 2 and no. 3. However, after much searching, I can't find any samples or clear guidance on how to create/add/publish a hosted table in ArcGIS Online using the Python API. The closest thing I found was [ArcGIS Pro help](https://pro.arcgis.com/en/pro-app/latest/help/sharing/overview/share-standalone-table.htm) for a manual process.
Having a sample for automating creating hosted tables would be very beneficial. Thanks for the consideration.
|
1.0
|
Sample for Publishing Hosted Table in ArcGIS Online - I am trying to automate a process to do the following:
1. Add an empty hosted table with pre-defined schema to ArcGIS Online
2. Generate an ArcGIS Online credits report
3. Iterate over the generated report (CSV), and populate the empty table
I have been able to figure out no. 2 and no. 3. However, after much searching, I can't find any samples or clear guidance on how to create/add/publish a hosted table in ArcGIS Online using the Python API. The closest thing I found was [ArcGIS Pro help](https://pro.arcgis.com/en/pro-app/latest/help/sharing/overview/share-standalone-table.htm) for a manual process.
Having a sample for automating creating hosted tables would be very beneficial. Thanks for the consideration.
|
non_process
|
sample for publishing hosted table in arcgis online i am trying to automate a process to do the following add an empty hosted table with pre defined schema to arcgis online generate an arcgis online credits report iterate over the generated report csv and populate the empty table i have been able to figure out no and no however after much searching i can t find any samples or clear guidance on how to create add publish a hosted table in arcgis online using the python api the closest thing i found was for a manual process having a sample for automating creating hosted tables would be very beneficial thanks for the consideration
| 0
|
14,144
| 17,035,122,250
|
IssuesEvent
|
2021-07-05 05:42:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issue in the Participant details page
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Participant details page > UI issue

|
3.0
|
[PM] Responsive issue in the Participant details page - Participant details page > UI issue

|
process
|
responsive issue in the participant details page participant details page ui issue
| 1
|
268,583
| 8,408,693,694
|
IssuesEvent
|
2018-10-12 03:08:32
|
ashtonkbailey/wheel-of-fortune-m2
|
https://api.github.com/repos/ashtonkbailey/wheel-of-fortune-m2
|
closed
|
Start page
|
high priority
|
Should show when new game is started/on page load. Should include short list of instructions and space for players to enter their names.
|
1.0
|
Start page - Should show when new game is started/on page load. Should include short list of instructions and space for players to enter their names.
|
non_process
|
start page should show when new game is started on page load should include short list of instructions and space for players to enter their names
| 0
|
4,367
| 7,260,514,899
|
IssuesEvent
|
2018-02-18 10:53:55
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] New algorithm for offsetting lines
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/82f4a82c66865e3d0b4af6c9410821c740eb3232 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[FEATURE][processing] New algorithm for offsetting lines - Original commit: https://github.com/qgis/QGIS/commit/82f4a82c66865e3d0b4af6c9410821c740eb3232 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
process
|
new algorithm for offsetting lines original commit by nyalldawson unfortunately this naughty coder did not write a description
| 1
|
6,734
| 9,799,688,702
|
IssuesEvent
|
2019-06-11 14:52:27
|
ISISScientificComputing/autoreduce
|
https://api.github.com/repos/ISISScientificComputing/autoreduce
|
closed
|
GEM: Update reduction script
|
:bar_chart: GEM :bust_in_silhouette: User requirement :clock1: High Priority :key: External
|
Issue raised by: [user: Ivan da Silva]
### What?
The current script running on autoreduction for GEM is most likely out of date. Ivan will send a new script, calibration files and cycle mapping file which we can use to update between cycles.
### Where?
GEM reduction script and ISIS archive autoreduction directory
### How?
GEM initial requirements gathering meeting
### How to test the issue is resolved
* Ensure that after the script has been updated, GEM can still successfully run on autoreduction (This can be done with the Manual submission script using any GEM run)
|
1.0
|
GEM: Update reduction script - Issue raised by: [user: Ivan da Silva]
### What?
The current script running on autoreduction for GEM is most likely out of date. Ivan will send a new script, calibration files and cycle mapping file which we can use to update between cycles.
### Where?
GEM reduction script and ISIS archive autoreduction directory
### How?
GEM initial requirements gathering meeting
### How to test the issue is resolved
* Ensure that after the script has been updated, GEM can still successfully run on autoreduction (This can be done with the Manual submission script using any GEM run)
|
non_process
|
gem update reduction script issue raised by what the current script running on autoreduction for gem is most likely out of date ivan will send a new script calibration files and cycle mapping file which we can use to update between cycles where gem reduction script and isis archive autoreduction directory how gem initial requirements gathering meeting how to test the issue is resolved ensure that after the script has been updated gem can still successfully run on autoreduction this can be done with the manual submission script using any gem run
| 0
|
40,108
| 8,729,100,932
|
IssuesEvent
|
2018-12-10 19:16:39
|
CDCgov/MicrobeTrace
|
https://api.github.com/repos/CDCgov/MicrobeTrace
|
closed
|
Warning Message for IE Users
|
[effort] small [issue-type] enhancement [skill-level] beginner code.gov help-wanted
|
**Background**
Internet Explorer has not been actively supported by Microsoft for a number of years. It has hobbled along beyond its life cycle and lingers on as a relic of the past. That being said, MicrobeTrace does not currently warn Internet Explorer users of its incompatibility. We require a banner that detects Internet Explorer and warns users that they should join the 21st century.
**Open Task Description**
We should really add a banner warning people that their [terrible](https://www.wired.com/2016/01/the-sorry-legacy-of-microsoft-internet-explorer/), [unsupported](https://www.microsoft.com/en-us/windowsforbusiness/end-of-ie-support), non-standards-compliant browser is rubbish and they should switch to _literally anything else_. Not because I have an axe to grind, mind you, but because [MicrobeTrace does not and will never work on Internet Explorer](https://github.com/CDCgov/WebMicrobeTrace/wiki/Internet-Explorer).
|
1.0
|
Warning Message for IE Users - **Background**
Internet Explorer has not been actively supported by Microsoft for a number of years. It has hobbled along beyond its life cycle and lingers on as a relic of the past. That being said, MicrobeTrace does not currently warn Internet Explorer users of its incompatibility. We require a banner that detects Internet Explorer and warns users that they should join the 21st century.
**Open Task Description**
We should really add a banner warning people that their [terrible](https://www.wired.com/2016/01/the-sorry-legacy-of-microsoft-internet-explorer/), [unsupported](https://www.microsoft.com/en-us/windowsforbusiness/end-of-ie-support), non-standards-compliant browser is rubbish and they should switch to _literally anything else_. Not because I have an axe to grind, mind you, but because [MicrobeTrace does not and will never work on Internet Explorer](https://github.com/CDCgov/WebMicrobeTrace/wiki/Internet-Explorer).
|
non_process
|
warning message for ie users background internet explorer has not been actively supported by microsoft for a number of years it has hobbled along beyond its life cycle and lingers on as a relic of the past that being said microbetrace does not currently warn internet explorer users of its incompatibility we require a banner that detects internet explorer and warns users that they should join the century open task description we should really add a banner warning people that their non standards compliant browser is rubbish and they should switch to literally anything else not because i have an axe to grind mind you but because
| 0
|
10,172
| 13,044,162,749
|
IssuesEvent
|
2020-07-29 03:47:35
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `ValuesDuration` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `ValuesDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `ValuesDuration` from TiDB -
## Description
Port the scalar function `ValuesDuration` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function valuesduration from tidb description port the scalar function valuesduration from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
20,115
| 26,654,431,359
|
IssuesEvent
|
2023-01-25 15:51:43
|
oasis-tcs/csaf
|
https://api.github.com/repos/oasis-tcs/csaf
|
closed
|
Comment Resolution Log CS02 to CS03
|
csaf 2.0 oasis_tc_process non_material CS03
|
# Comment Resolution Log
The table summarizes the comments that were received for the committee specification "[Common Security Advisory Framework Version 2.0](https://docs.oasis-open.org/csaf/csaf/v2.0/cs02/csaf-v2.0-cs02.html)" and their resolution. Comments came to editors directly from OASIS admins and through Github PRs.
A status of "Completed" in the Disposition column indicates that the editors have implemented the changes on which the TC decided, which are outlined in the Resolution column. It also is a hyperlink to the GitHub commit notice.
The item number is a hyperlink to the issue number in this OASIS TC CASF repository at https://github.com/oasis-tcs/csaf/.
| Item # | Date | Commenter | Description | Date acknowledged | Resolution | Disposition |
|----------------------------------------------------|------------|----------------|----------------------------------------|-------------------|----------------------|---------------------------------------------------------|
| [1](https://github.com/oasis-tcs/csaf/pull/568/commits/97873fda3bca57e2dee91955f4281e8ff7378893) | 2022-07-06 | Paul Knight (OASIS Staff) | Several instances of improperly formed key words (non-compliance to RFC 2119/8174) | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [2](https://github.com/oasis-tcs/csaf/pull/566/files) | 2022-07-14 | Denny Page | Example 129 uses quotes inconsistent and has a trailing space after commas | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/566) |
| [3](https://github.com/oasis-tcs/csaf/pull/568/commits/6c0de2e7f9168c7872db46f4b680ec0b725f53c3) | 2022-07-15 | Thomas Schmidt | Several spelling mistakes have been spotted | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [4](https://github.com/oasis-tcs/csaf/pull/568/commits/6c0de2e7f9168c7872db46f4b680ec0b725f53c3) | 2022-07-15 | Thomas Schmidt | Several spelling mistakes have been spotted | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [5](https://github.com/oasis-tcs/csaf/pull/570) | 2022-07-15 | Thomas Schmidt | Correct usage of plural for "examples" | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/570) |
| [6](https://github.com/oasis-tcs/csaf/issues/572) | 2022-07-20 | Thomas Schmidt | Add `earlier` to informative comment in 6.1.31 | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/573) |
## Evaluation of Feedback
The editors consider above public comments as well as other more editorial feedback documented in issue(s) ... and classified/considered per pull request ... as **Non-Material** per OASIS TC process.
[A motion has been issued during the TC meeting by Thomas Schmidt on 2022-07-20](https://github.com/oasis-tcs/csaf/blob/master/meeting_minutes/2022-07-20.md) to promote the resulting revised work products to CS03 including non-material changes only.
To ease verification by anyone and to support the administration a separate [release candidate archive containing the 4 standards track work products has been created](https://github.com/oasis-tcs/csaf/releases/tag/cs-03-20220720-rc2) and linked to this issue as well as noted in the motion as annotation in the minutes of meeting.
|
1.0
|
Comment Resolution Log CS02 to CS03 - # Comment Resolution Log
The table summarizes the comments that were received for the committee specification "[Common Security Advisory Framework Version 2.0](https://docs.oasis-open.org/csaf/csaf/v2.0/cs02/csaf-v2.0-cs02.html)" and their resolution. Comments came to editors directly from OASIS admins and through Github PRs.
A status of "Completed" in the Disposition column indicates that the editors have implemented the changes on which the TC decided, which are outlined in the Resolution column. It also is a hyperlink to the GitHub commit notice.
The item number is a hyperlink to the issue number in this OASIS TC CASF repository at https://github.com/oasis-tcs/csaf/.
| Item # | Date | Commenter | Description | Date acknowledged | Resolution | Disposition |
|----------------------------------------------------|------------|----------------|----------------------------------------|-------------------|----------------------|---------------------------------------------------------|
| [1](https://github.com/oasis-tcs/csaf/pull/568/commits/97873fda3bca57e2dee91955f4281e8ff7378893) | 2022-07-06 | Paul Knight (OASIS Staff) | Several instances of improperly formed key words (non-compliance to RFC 2119/8174) | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [2](https://github.com/oasis-tcs/csaf/pull/566/files) | 2022-07-14 | Denny Page | Example 129 uses quotes inconsistent and has a trailing space after commas | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/566) |
| [3](https://github.com/oasis-tcs/csaf/pull/568/commits/6c0de2e7f9168c7872db46f4b680ec0b725f53c3) | 2022-07-15 | Thomas Schmidt | Several spelling mistakes have been spotted | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [4](https://github.com/oasis-tcs/csaf/pull/568/commits/6c0de2e7f9168c7872db46f4b680ec0b725f53c3) | 2022-07-15 | Thomas Schmidt | Several spelling mistakes have been spotted | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/568) |
| [5](https://github.com/oasis-tcs/csaf/pull/570) | 2022-07-15 | Thomas Schmidt | Correct usage of plural for "examples" | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/570) |
| [6](https://github.com/oasis-tcs/csaf/issues/572) | 2022-07-20 | Thomas Schmidt | Add `earlier` to informative comment in 6.1.31 | 2022-07-20 | Discussed at TC call | [TC agrees. Editors added clarifications and explanations as needed.](https://github.com/oasis-tcs/csaf/pull/573) |
## Evaluation of Feedback
The editors consider above public comments as well as other more editorial feedback documented in issue(s) ... and classified/considered per pull request ... as **Non-Material** per OASIS TC process.
[A motion has been issued during the TC meeting by Thomas Schmidt on 2022-07-20](https://github.com/oasis-tcs/csaf/blob/master/meeting_minutes/2022-07-20.md) to promote the resulting revised work products to CS03 including non-material changes only.
To ease verification by anyone and to support the administration a separate [release candidate archive containing the 4 standards track work products has been created](https://github.com/oasis-tcs/csaf/releases/tag/cs-03-20220720-rc2) and linked to this issue as well as noted in the motion as annotation in the minutes of meeting.
|
process
|
comment resolution log to comment resolution log the table summarizes the comments that were received for the committee specification and their resolution comments came to editors directly from oasis admins and through github prs a status of completed in the disposition column indicates that the editors have implemented the changes on which the tc decided which are outlined in the resolution column it also is a hyperlink to the github commit notice the item number is a hyperlink to the issue number in this oasis tc casf repository at item date commenter description date acknowledged resolution disposition paul knight oasis staff several instances of improperly formed key words non compliance to rfc discussed at tc call denny page example uses quotes inconsistent and has a trailing space after commas discussed at tc call thomas schmidt several spelling mistakes have been spotted discussed at tc call thomas schmidt several spelling mistakes have been spotted discussed at tc call thomas schmidt correct usage of plural for examples discussed at tc call thomas schmidt add earlier to informative comment in discussed at tc call evaluation of feedback the editors consider above public comments as well as other more editorial feedback documented in issue s and classified considered per pull request as non material per oasis tc process to promote the resulting revised work products to including non material changes only to ease verification by anyone and to support the administration a separate and linked to this issue as well as noted in the motion as annotation in the minutes of meeting
| 1
|
779,074
| 27,338,302,390
|
IssuesEvent
|
2023-02-26 13:43:29
|
SariItani/BAU-Engineering-Day
|
https://api.github.com/repos/SariItani/BAU-Engineering-Day
|
reopened
|
Polishing phase
|
bug enhancement help wanted medium priority
|
We need to finish the two available issues to get to the Polishing phase of the game
|
1.0
|
Polishing phase - We need to finish the two available issues to get to the Polishing phase of the game
|
non_process
|
polishing phase we need to finish the two available issues to get to the polishing phase of the game
| 0
|
5,226
| 8,029,426,362
|
IssuesEvent
|
2018-07-27 15:57:33
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Bigtable system tests fail creating tables with 503
|
api: bigtable flaky testing type: process
|
CI failures for changes unrelated to Bigtable:
- https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/6267
- https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/6268
|
1.0
|
Bigtable system tests fail creating tables with 503 - CI failures for changes unrelated to Bigtable:
- https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/6267
- https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/6268
|
process
|
bigtable system tests fail creating tables with ci failures for changes unrelated to bigtable
| 1
|
20,730
| 10,549,218,088
|
IssuesEvent
|
2019-10-03 08:12:21
|
ChetanSankhala/LoadGenerator
|
https://api.github.com/repos/ChetanSankhala/LoadGenerator
|
closed
|
CVE-2019-12814 (Medium) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /LoadGenerator/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChetanSankhala/LoadGenerator/commit/9a0b34926a7572173616a2c328cf3cf18c48391c">9a0b34926a7572173616a2c328cf3cf18c48391c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/5f7c69bba07a7155adde130d9dee2e54a54f1fa5">https://github.com/FasterXML/jackson-databind/commit/5f7c69bba07a7155adde130d9dee2e54a54f1fa5</a></p>
<p>Release Date: 2019-06-14</p>
<p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, VERSION</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-12814 (Medium) detected in jackson-databind-2.9.8.jar - ## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /LoadGenerator/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChetanSankhala/LoadGenerator/commit/9a0b34926a7572173616a2c328cf3cf18c48391c">9a0b34926a7572173616a2c328cf3cf18c48391c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/commit/5f7c69bba07a7155adde130d9dee2e54a54f1fa5">https://github.com/FasterXML/jackson-databind/commit/5f7c69bba07a7155adde130d9dee2e54a54f1fa5</a></p>
<p>Release Date: 2019-06-14</p>
<p>Fix Resolution: Replace or update the following files: SubTypeValidator.java, VERSION</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file loadgenerator build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files subtypevalidator java version step up your open source security game with whitesource
| 0
|
345,281
| 30,796,592,719
|
IssuesEvent
|
2023-07-31 20:24:32
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
: failed
|
C-test-failure O-robot branch-master T-testeng
|
. [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11130618?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11130618?buildTab=artifacts#/) on master @ [b27094b0ded0d37a56d3e8dd31e2e02514ee0eff](https://github.com/cockroachdb/cockroach/commits/b27094b0ded0d37a56d3e8dd31e2e02514ee0eff):
```
stdout:
, stderr:
```
<p>Parameters: <code>TAGS=bazel,gss</code>
, <code>stress=true</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #107852 : failed [C-test-failure O-robot T-testeng branch-release-23.1]
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-30256
|
2.0
|
: failed - . [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11130618?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11130618?buildTab=artifacts#/) on master @ [b27094b0ded0d37a56d3e8dd31e2e02514ee0eff](https://github.com/cockroachdb/cockroach/commits/b27094b0ded0d37a56d3e8dd31e2e02514ee0eff):
```
stdout:
, stderr:
```
<p>Parameters: <code>TAGS=bazel,gss</code>
, <code>stress=true</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #107852 : failed [C-test-failure O-robot T-testeng branch-release-23.1]
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-30256
|
non_process
|
failed with on master stdout stderr parameters tags bazel gss stress true help see also same failure on other branches failed cc cockroachdb test eng jira issue crdb
| 0
|
19,138
| 25,198,878,907
|
IssuesEvent
|
2022-11-12 21:52:45
|
emily-writes-poems/emily-writes-poems-processing
|
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
|
closed
|
refresh data for features
|
processing refinement
|
features table should update after changes are made:
- [x] - new feature created
- [x] - current feature is edited (set/unset)
|
1.0
|
refresh data for features - features table should update after changes are made:
- [x] - new feature created
- [x] - current feature is edited (set/unset)
|
process
|
refresh data for features features table should update after changes are made new feature created current feature is edited set unset
| 1
|
14,965
| 18,458,957,315
|
IssuesEvent
|
2021-10-15 20:48:15
|
bpython/bpython
|
https://api.github.com/repos/bpython/bpython
|
closed
|
werkzeug thread locals does not work under bpython
|
bug requires-separate-process
|
It seems that werkzeug's thread locals can't be used under bpython ?
`thread.get_indent()` seems coherent tough.
```
$ bpython
bpython version 0.15 on top of Python 2.7.12+ /usr/bin/python
>>> from werkzeug.local import Local
>>> test = Local()
>>> test.foo = 1
>>> test.foo
Traceback (most recent call last):
File "<input>", line 1, in <module>
test.foo
File "/usr/lib/python2.7/dist-packages/werkzeug/local.py", line 72, in __getattr__
raise AttributeError(name)
AttributeError: foo
```
expected behaviour under ipython:
```
$ ipython
Python 2.7.12+ (default, Sep 1 2016, 20:27:38)
IPython 4.2.1 -- An enhanced Interactive Python.
In [1]: from werkzeug.local import Local
In [2]: test = Local()
In [3]: test.foo = 1
In [4]: test.foo
Out[4]: 1
```
I could not find the time to dig deeper unfortunately so I'm just reporting.
|
1.0
|
werkzeug thread locals does not work under bpython - It seems that werkzeug's thread locals can't be used under bpython ?
`thread.get_indent()` seems coherent tough.
```
$ bpython
bpython version 0.15 on top of Python 2.7.12+ /usr/bin/python
>>> from werkzeug.local import Local
>>> test = Local()
>>> test.foo = 1
>>> test.foo
Traceback (most recent call last):
File "<input>", line 1, in <module>
test.foo
File "/usr/lib/python2.7/dist-packages/werkzeug/local.py", line 72, in __getattr__
raise AttributeError(name)
AttributeError: foo
```
expected behaviour under ipython:
```
$ ipython
Python 2.7.12+ (default, Sep 1 2016, 20:27:38)
IPython 4.2.1 -- An enhanced Interactive Python.
In [1]: from werkzeug.local import Local
In [2]: test = Local()
In [3]: test.foo = 1
In [4]: test.foo
Out[4]: 1
```
I could not find the time to dig deeper unfortunately so I'm just reporting.
|
process
|
werkzeug thread locals does not work under bpython it seems that werkzeug s thread locals can t be used under bpython thread get indent seems coherent tough bpython bpython version on top of python usr bin python from werkzeug local import local test local test foo test foo traceback most recent call last file line in test foo file usr lib dist packages werkzeug local py line in getattr raise attributeerror name attributeerror foo expected behaviour under ipython ipython python default sep ipython an enhanced interactive python in from werkzeug local import local in test local in test foo in test foo out i could not find the time to dig deeper unfortunately so i m just reporting
| 1
|
888
| 3,351,766,211
|
IssuesEvent
|
2015-11-17 19:54:28
|
tc39/Array.prototype.includes
|
https://api.github.com/repos/tc39/Array.prototype.includes
|
closed
|
Advance to stage 4
|
process
|
**Criteria:**
> - [x] Those from stage 3
This is #12.
> - [x] Test 262 acceptance tests have been written for mainline usage scenarios.
This is #1.
> - [x] Two compatible implementations which pass the acceptance tests.
This requires completion of two out of #7, #8, #9 plus also #27 and #28.
> - [x] The ECMAScript editor has signed off on the current spec text.
This seems like a dupe of a requirement from stage 3. In any case, tracked as #11.
**Implementation types expected during this stage:**
> - [x] Shipping
This requires completion of two out of #7, #8, #9 plus also #27 and #28.
Finally:
- [x] Get TC39 to agree that we have advanced to stage 4, after meeting all the above requirements.
|
1.0
|
Advance to stage 4 - **Criteria:**
> - [x] Those from stage 3
This is #12.
> - [x] Test 262 acceptance tests have been written for mainline usage scenarios.
This is #1.
> - [x] Two compatible implementations which pass the acceptance tests.
This requires completion of two out of #7, #8, #9 plus also #27 and #28.
> - [x] The ECMAScript editor has signed off on the current spec text.
This seems like a dupe of a requirement from stage 3. In any case, tracked as #11.
**Implementation types expected during this stage:**
> - [x] Shipping
This requires completion of two out of #7, #8, #9 plus also #27 and #28.
Finally:
- [x] Get TC39 to agree that we have advanced to stage 4, after meeting all the above requirements.
|
process
|
advance to stage criteria those from stage this is test acceptance tests have been written for mainline usage scenarios this is two compatible implementations which pass the acceptance tests this requires completion of two out of plus also and the ecmascript editor has signed off on the current spec text this seems like a dupe of a requirement from stage in any case tracked as implementation types expected during this stage shipping this requires completion of two out of plus also and finally get to agree that we have advanced to stage after meeting all the above requirements
| 1
|
5,090
| 7,876,583,936
|
IssuesEvent
|
2018-06-26 01:58:24
|
uccser/verto
|
https://api.github.com/repos/uccser/verto
|
opened
|
Set interactive 'text' value to be within block
|
processor implementation update
|
Similar to the caption of images, to enable easy translation.
|
1.0
|
Set interactive 'text' value to be within block - Similar to the caption of images, to enable easy translation.
|
process
|
set interactive text value to be within block similar to the caption of images to enable easy translation
| 1
|
21,221
| 28,306,211,553
|
IssuesEvent
|
2023-04-10 11:16:03
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Add a compatibility table for VM vs Automation Account combinations
|
automation/svc triaged cxp doc-enhancement process-automation/subsvc Pri2
|
Hi Team, great article here
Given that there are 9 different combinations of using system-assigned managed identity with VM/AA or user-assigned managed identity with VM/AA, it would be much clearer if you could include a table detailling the combination and result.
Not quite sure exactly how you'd intend it to look, but I was thinking something like:

---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://learn.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks?tabs=win-extn-hrw%2CLin-extn-hrw%2Csa-mi#runbook-auth-managed-identities)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
1.0
|
Add a compatibility table for VM vs Automation Account combinations - Hi Team, great article here
Given that there are 9 different combinations of using system-assigned managed identity with VM/AA or user-assigned managed identity with VM/AA, it would be much clearer if you could include a table detailling the combination and result.
Not quite sure exactly how you'd intend it to look, but I was thinking something like:

---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run Azure Automation runbooks on a Hybrid Runbook Worker](https://learn.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks?tabs=win-extn-hrw%2CLin-extn-hrw%2Csa-mi#runbook-auth-managed-identities)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SnehaSudhirG
* Microsoft Alias: **sudhirsneha**
|
process
|
add a compatibility table for vm vs automation account combinations hi team great article here given that there are different combinations of using system assigned managed identity with vm aa or user assigned managed identity with vm aa it would be much clearer if you could include a table detailling the combination and result not quite sure exactly how you d intend it to look but i was thinking something like document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login snehasudhirg microsoft alias sudhirsneha
| 1
|
16,911
| 22,239,790,262
|
IssuesEvent
|
2022-06-09 03:09:07
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
add grid values to points gives wrong results
|
Feedback stale Processing Bug
|
**Describe the bug**
I used from the processing toolbox the 'add grid values to points' function with as resampling option 'nearest neighbor', and with as input a point layer and a raster layer with integer values. The resulting column with raster values shows decimal numbers, not the exact integer raster value. When I run the function in SAGA directly, the column contains the exact raster values.
**QGIS and OS versions**
3.12.1 on Windows (installed with Osgeo4W
**Additional context**
Log of function
----
C:\OSGEO4~1\bin>call saga_cmd shapes_grid "Add Grid Values to Points"
-SHAPES "C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6601327d6d9a43199caeead18a394363/SHAPES.shp"
-GRIDS "C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6207aad29fad45aa94554d79463c8593/bio12.sgrd"
-RESAMPLING 0 -RESULT
"C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/306d704010524140aed1ef0e445fc93a/RESULT.shp"
____________________________
SAGA Version: 2.3.2 (64 bit)
____________________________
library path: C:\OSGEO4~1\apps\saga-ltr\modules\
library name: shapes_grid
library : Grid Tools
tool : Add Grid Values to Points
author : O.Conrad (c) 2003
processors : 8 [8]
________________________
Load shapes: C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6601327d6d9a43199caeead18a394363/SHAPES.shp...
Load grid: C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6207aad29fad45aa94554d79463c8593/bio12.sgrd...
Parameters
Points: SHAPES
Grids: 1 object (bio12)
Result: Result
Resampling: Nearest Neighbour
|
1.0
|
add grid values to points gives wrong results - **Describe the bug**
I used from the processing toolbox the 'add grid values to points' function with as resampling option 'nearest neighbor', and with as input a point layer and a raster layer with integer values. The resulting column with raster values shows decimal numbers, not the exact integer raster value. When I run the function in SAGA directly, the column contains the exact raster values.
**QGIS and OS versions**
3.12.1 on Windows (installed with Osgeo4W
**Additional context**
Log of function
----
C:\OSGEO4~1\bin>call saga_cmd shapes_grid "Add Grid Values to Points"
-SHAPES "C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6601327d6d9a43199caeead18a394363/SHAPES.shp"
-GRIDS "C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6207aad29fad45aa94554d79463c8593/bio12.sgrd"
-RESAMPLING 0 -RESULT
"C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/306d704010524140aed1ef0e445fc93a/RESULT.shp"
____________________________
SAGA Version: 2.3.2 (64 bit)
____________________________
library path: C:\OSGEO4~1\apps\saga-ltr\modules\
library name: shapes_grid
library : Grid Tools
tool : Add Grid Values to Points
author : O.Conrad (c) 2003
processors : 8 [8]
________________________
Load shapes: C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6601327d6d9a43199caeead18a394363/SHAPES.shp...
Load grid: C:/Users/brp/AppData/Local/Temp/processing_wjqTfe/6207aad29fad45aa94554d79463c8593/bio12.sgrd...
Parameters
Points: SHAPES
Grids: 1 object (bio12)
Result: Result
Resampling: Nearest Neighbour
|
process
|
add grid values to points gives wrong results describe the bug i used from the processing toolbox the add grid values to points function with as resampling option nearest neighbor and with as input a point layer and a raster layer with integer values the resulting column with raster values shows decimal numbers not the exact integer raster value when i run the function in saga directly the column contains the exact raster values qgis and os versions on windows installed with additional context log of function c bin call saga cmd shapes grid add grid values to points shapes c users brp appdata local temp processing wjqtfe shapes shp grids c users brp appdata local temp processing wjqtfe sgrd resampling result c users brp appdata local temp processing wjqtfe result shp saga version bit library path c apps saga ltr modules library name shapes grid library grid tools tool add grid values to points author o conrad c processors load shapes c users brp appdata local temp processing wjqtfe shapes shp load grid c users brp appdata local temp processing wjqtfe sgrd parameters points shapes grids object result result resampling nearest neighbour
| 1
|
12,135
| 14,740,981,280
|
IssuesEvent
|
2021-01-07 09:55:11
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
FW: Cron <root@answernet> /opt/sabilling/rf
|
anc-process anp-important ant-bug
|
In GitLab by @kdjstudios on Dec 19, 2018, 15:29
**Submitted by:** "Tim Traylor" <tim.traylor@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6339155
**Server:** External
**Client/Site:** NA
**Account:** NA
**Issue:**
Hi Sumeet,
Was the user changed back on the hosted server?
Thx,
Tim
-----Original Message-----
From: Cron Daemon [mailto:root@answernet.sabilling.com]
Sent: Wednesday, December 19, 2018 3:00 AM
To: apperrors@sahosted.com
Subject: Cron <root@answernet> /opt/sabilling/rf
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
|
1.0
|
FW: Cron <root@answernet> /opt/sabilling/rf - In GitLab by @kdjstudios on Dec 19, 2018, 15:29
**Submitted by:** "Tim Traylor" <tim.traylor@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6339155
**Server:** External
**Client/Site:** NA
**Account:** NA
**Issue:**
Hi Sumeet,
Was the user changed back on the hosted server?
Thx,
Tim
-----Original Message-----
From: Cron Daemon [mailto:root@answernet.sabilling.com]
Sent: Wednesday, December 19, 2018 3:00 AM
To: apperrors@sahosted.com
Subject: Cron <root@answernet> /opt/sabilling/rf
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
|
process
|
fw cron opt sabilling rf in gitlab by kdjstudios on dec submitted by tim traylor helpdesk server external client site na account na issue hi sumeet was the user changed back on the hosted server thx tim original message from cron daemon sent wednesday december am to apperrors sahosted com subject cron opt sabilling rf sudo sorry you must have a tty to run sudo sudo sorry you must have a tty to run sudo sudo sorry you must have a tty to run sudo
| 1
|
8,606
| 11,761,893,433
|
IssuesEvent
|
2020-03-13 23:09:21
|
cncf/cnf-conformance
|
https://api.github.com/repos/cncf/cnf-conformance
|
closed
|
[Process] semantic versioning of releases
|
1 pt process
|
### [Process] semantic versioning of releases
Tasks:
- [x] Add a quick overview of the process you are researching
- [x] Investigate potential process for implementation and document findings => https://hackmd.io/f7Op9FgJQqW2QQXb-tVRgg?view
- [x] Select a process to use, minimal/least effort, and add selection to ticket
- [x] Add comment suggesting updates as needed for:
- [ ] the [test categories markdown](https://github.com/cncf/cnf-conformance/blob/master/TEST-CATEGORIES.md)
- [ ] the [pseudo code markdown](https://github.com/cncf/cnf-conformance/blob/master/PSEUDO-CODE.md)
- [ ] slide content updates, LINK_TO_UPDATES
- [ ] the [README](https://github.com/cncf/cnf-conformance/blob/master/README.md)
- [x] Tag 1 or more people to peer review
|
1.0
|
[Process] semantic versioning of releases - ### [Process] semantic versioning of releases
Tasks:
- [x] Add a quick overview of the process you are researching
- [x] Investigate potential process for implementation and document findings => https://hackmd.io/f7Op9FgJQqW2QQXb-tVRgg?view
- [x] Select a process to use, minimal/least effort, and add selection to ticket
- [x] Add comment suggesting updates as needed for:
- [ ] the [test categories markdown](https://github.com/cncf/cnf-conformance/blob/master/TEST-CATEGORIES.md)
- [ ] the [pseudo code markdown](https://github.com/cncf/cnf-conformance/blob/master/PSEUDO-CODE.md)
- [ ] slide content updates, LINK_TO_UPDATES
- [ ] the [README](https://github.com/cncf/cnf-conformance/blob/master/README.md)
- [x] Tag 1 or more people to peer review
|
process
|
semantic versioning of releases semantic versioning of releases tasks add a quick overview of the process you are researching investigate potential process for implementation and document findings select a process to use minimal least effort and add selection to ticket add comment suggesting updates as needed for the the slide content updates link to updates the tag or more people to peer review
| 1
|
498,990
| 14,436,963,237
|
IssuesEvent
|
2020-12-07 10:51:18
|
graknlabs/grakn
|
https://api.github.com/repos/graknlabs/grakn
|
opened
|
AttributeTypeImpl.Boolean.'put' and 'get' are not overridden, and may produce incorrect outcomes
|
priority: low type: bug
|
## Description
AttributeTypeImpl.Boolean.'put' and 'get' are not overridden, and may produce incorrect outcomes.
We should ensure that 'put' throws a ROOT_TYPE_MUTATION error and 'get' returns null immediately.
|
1.0
|
AttributeTypeImpl.Boolean.'put' and 'get' are not overridden, and may produce incorrect outcomes - ## Description
AttributeTypeImpl.Boolean.'put' and 'get' are not overridden, and may produce incorrect outcomes.
We should ensure that 'put' throws a ROOT_TYPE_MUTATION error and 'get' returns null immediately.
|
non_process
|
attributetypeimpl boolean put and get are not overridden and may produce incorrect outcomes description attributetypeimpl boolean put and get are not overridden and may produce incorrect outcomes we should ensure that put throws a root type mutation error and get returns null immediately
| 0
|
258,196
| 8,166,025,499
|
IssuesEvent
|
2018-08-25 03:11:46
|
HabitRPG/habitica
|
https://api.github.com/repos/HabitRPG/habitica
|
closed
|
API rate limiter
|
help wanted priority: medium section: other
|
In v2 we had a middleware to throttle API requests, it's been removed in v3 because it was not supported anymore but we should add something back to limit the number of requests an IP can make.
The last used version can be found here https://github.com/HabitRPG/habitrpg/blob/47f6f2febecb3aea2b805ba77a0c8e1c8b095389/website/src/middlewares/apiThrottle.js
|
1.0
|
API rate limiter - In v2 we had a middleware to throttle API requests, it's been removed in v3 because it was not supported anymore but we should add something back to limit the number of requests an IP can make.
The last used version can be found here https://github.com/HabitRPG/habitrpg/blob/47f6f2febecb3aea2b805ba77a0c8e1c8b095389/website/src/middlewares/apiThrottle.js
|
non_process
|
api rate limiter in we had a middleware to throttle api requests it s been removed in because it was not supported anymore but we should add something back to limit the number of requests an ip can make the last used version can be found here
| 0
|
434,815
| 12,528,178,774
|
IssuesEvent
|
2020-06-04 09:09:11
|
geosolutions-it/MapStore2-C028
|
https://api.github.com/repos/geosolutions-it/MapStore2-C028
|
closed
|
Project Update
|
Epic Priority: Medium Project: C028 deploy needed
|
As part of this work the developments provided for the styles localization support will be provided on MapStore master branch and backported to the closest stable branch, so the MapStore revision on C028 MS project will be updated accordingly.
After the revision update (and related config files/project review) the following custom plugins need to be checked to verify the involved functionalities with the new MS version:
- [Road Accident](http://sit.comune.bolzano.it/mapstore2//#/roadAccidents/openlayers/incidentiMap) Plugin
- Search plugin for cadastral parcel (#54): this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer:
For a manual test by using the search tool, do searches similar to the following:
_Search in the search bar "Gries / Gries" (Building Particle) then search .4442 at this point you have to see the particle on the map, test desktop, mobile_
For a general test, put in the URL of a map the following params:
_?particella=.4442&comCat=669&tipoPart=partedif_
- Catalog Plugin: localization of layer titles (this customization works with specific keywords defined GeoServer side inside the layer configuration
There are also some relevant issues to consider during the revision update: #86, #87, #69
The current production instance is available here:
http://sit.comune.bolzano.it/mapstore2/#/
That instance can be useful to double check the updated MS project comparing it with the existing production instance.
|
1.0
|
Project Update - As part of this work the developments provided for the styles localization support will be provided on MapStore master branch and backported to the closest stable branch, so the MapStore revision on C028 MS project will be updated accordingly.
After the revision update (and related config files/project review) the following custom plugins need to be checked to verify the involved functionalities with the new MS version:
- [Road Accident](http://sit.comune.bolzano.it/mapstore2//#/roadAccidents/openlayers/incidentiMap) Plugin
- Search plugin for cadastral parcel (#54): this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer:
For a manual test by using the search tool, do searches similar to the following:
_Search in the search bar "Gries / Gries" (Building Particle) then search .4442 at this point you have to see the particle on the map, test desktop, mobile_
For a general test, put in the URL of a map the following params:
_?particella=.4442&comCat=669&tipoPart=partedif_
- Catalog Plugin: localization of layer titles (this customization works with specific keywords defined GeoServer side inside the layer configuration
There are also some relevant issues to consider during the revision update: #86, #87, #69
The current production instance is available here:
http://sit.comune.bolzano.it/mapstore2/#/
That instance can be useful to double check the updated MS project comparing it with the existing production instance.
|
non_process
|
project update as part of this work the developments provided for the styles localization support will be provided on mapstore master branch and backported to the closest stable branch so the mapstore revision on ms project will be updated accordingly after the revision update and related config files project review the following custom plugins need to be checked to verify the involved functionalities with the new ms version plugin search plugin for cadastral parcel this also includes the most recent implementation of custom logic related to the support of query parameters for searching parcels on the fly during the first loading of the viewer for a manual test by using the search tool do searches similar to the following search in the search bar gries gries building particle then search at this point you have to see the particle on the map test desktop mobile for a general test put in the url of a map the following params particella comcat tipopart partedif catalog plugin localization of layer titles this customization works with specific keywords defined geoserver side inside the layer configuration there are also some relevant issues to consider during the revision update the current production instance is available here that instance can be useful to double check the updated ms project comparing it with the existing production instance
| 0
|
2,914
| 10,391,741,219
|
IssuesEvent
|
2019-09-11 08:12:08
|
precice/precice
|
https://api.github.com/repos/precice/precice
|
opened
|
Generalize Mesh adding and filtering
|
maintainability
|
We currently have slight variations of the same code that handles adding one mesh to another.
1. `void Mesh::addMesh(Mesh const& diff);` adds the `diff` to the current mesh.
2. `void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB)`
This adds the internal mesh to `filteredMesh` and filters vertices based on a predicate: tagged vertices, or vertices inside a bounding-box.
Both functions can be generalized to:
```cpp
// Generalized version filtering vertices based on a given unary predicate.
template<typename UnaryPredicate>
void Mesh::addMesh(Mesh const& other, UnaryPredicate p);
// Version that simply adds the Mesh
void Mesh::addMesh(Mesh const& other) {
addMesh(other, [](mesh::Vertex const &) { return true; });
}
// The new possible implementation.
void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB) {
if (filterByBB) {
filteredMesh.addMesh(_mesh,
[this](mesh::Vertex const & v){ return this->isVertexinBB(v);});
} else {
filteredMesh.addMesh(_mesh,
[](mesh::Vertex const & v){ return v.isTagged();});
}
}
```
This makes the code DRY, easier to maintain and allows us to optimize a single function.
|
True
|
Generalize Mesh adding and filtering - We currently have slight variations of the same code that handles adding one mesh to another.
1. `void Mesh::addMesh(Mesh const& diff);` adds the `diff` to the current mesh.
2. `void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB)`
This adds the internal mesh to `filteredMesh` and filters vertices based on a predicate: tagged vertices, or vertices inside a bounding-box.
Both functions can be generalized to:
```cpp
// Generalized version filtering vertices based on a given unary predicate.
template<typename UnaryPredicate>
void Mesh::addMesh(Mesh const& other, UnaryPredicate p);
// Version that simply adds the Mesh
void Mesh::addMesh(Mesh const& other) {
addMesh(other, [](mesh::Vertex const &) { return true; });
}
// The new possible implementation.
void ReceivedPartition::filterMesh(mesh::Mesh &filteredMesh, const bool filterByBB) {
if (filterByBB) {
filteredMesh.addMesh(_mesh,
[this](mesh::Vertex const & v){ return this->isVertexinBB(v);});
} else {
filteredMesh.addMesh(_mesh,
[](mesh::Vertex const & v){ return v.isTagged();});
}
}
```
This makes the code DRY, easier to maintain and allows us to optimize a single function.
|
non_process
|
generalize mesh adding and filtering we currently have slight variations of the same code that handles adding one mesh to another void mesh addmesh mesh const diff adds the diff to the current mesh void receivedpartition filtermesh mesh mesh filteredmesh const bool filterbybb this adds the internal mesh to filteredmesh and filters vertices based on a predicate tagged vertices or vertices inside a bounding box both functions can be generalized to cpp generalized version filtering vertices based on a given unary predicate template void mesh addmesh mesh const other unarypredicate p version that simply adds the mesh void mesh addmesh mesh const other addmesh other mesh vertex const return true the new possible implementation void receivedpartition filtermesh mesh mesh filteredmesh const bool filterbybb if filterbybb filteredmesh addmesh mesh mesh vertex const v return this isvertexinbb v else filteredmesh addmesh mesh mesh vertex const v return v istagged this makes the code dry easier to maintain and allows us to optimize a single function
| 0
|
6,312
| 9,312,358,314
|
IssuesEvent
|
2019-03-26 00:51:48
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Consider Linux packages / binaries.
|
type: feature request type: process
|
Consider creating binary packages for Linux. We can host them in GCS or bintray. We can have CMake create them, but it might be better to manually create them to spell out the build dependencies and install dependencies.
At least the following should be considered, and if rejected say why:
- [ ] Docker image with all dependencies pre-installed
- [ ] tar.gz: for Ubuntu, Fedora, **and** CentOS
- [ ] Linuxbrew: https://linuxbrew.sh
- [ ] Conan: https://conan.io
- [ ] Hunter: https://github.com/ruslo/hunter
- [ ] cget: http://cget.readthedocs.io/en/latest/
- [ ] .rpm: http://rpm.org/
- [ ] .deb: https://wiki.debian.org/Packaging/Intro?action=show&redirect=IntroDebianPackaging
- [ ] Spack: https://www.spack.io/
- [ ] Buckaroo: https://buckaroo.pm/
|
1.0
|
Consider Linux packages / binaries. - Consider creating binary packages for Linux. We can host them in GCS or bintray. We can have CMake create them, but it might be better to manually create them to spell out the build dependencies and install dependencies.
At least the following should be considered, and if rejected say why:
- [ ] Docker image with all dependencies pre-installed
- [ ] tar.gz: for Ubuntu, Fedora, **and** CentOS
- [ ] Linuxbrew: https://linuxbrew.sh
- [ ] Conan: https://conan.io
- [ ] Hunter: https://github.com/ruslo/hunter
- [ ] cget: http://cget.readthedocs.io/en/latest/
- [ ] .rpm: http://rpm.org/
- [ ] .deb: https://wiki.debian.org/Packaging/Intro?action=show&redirect=IntroDebianPackaging
- [ ] Spack: https://www.spack.io/
- [ ] Buckaroo: https://buckaroo.pm/
|
process
|
consider linux packages binaries consider creating binary packages for linux we can host them in gcs or bintray we can have cmake create them but it might be better to manually create them to spell out the build dependencies and install dependencies at least the following should be considered and if rejected say why docker image with all dependencies pre installed tar gz for ubuntu fedora and centos linuxbrew conan hunter cget rpm deb spack buckaroo
| 1
|
286,780
| 21,608,904,002
|
IssuesEvent
|
2022-05-04 08:01:41
|
TheFoundryVisionmongers/OpenAssetIO
|
https://api.github.com/repos/TheFoundryVisionmongers/OpenAssetIO
|
opened
|
Re-visit `ManagerInterface` et. al. documentation
|
documentation
|
## What
Check that constraints described in `ManagerInterface` and `Manager` documentation, and then `HostInterface`/`Host` etc. match reality and amend/update/ensure tested accordingly.
## Why
There are several [claims in the documentation](https://github.com/TheFoundryVisionmongers/OpenAssetIO/blob/146a8a988517520f9dab603244b7c282c0df32d9/src/openassetio-core/include/openassetio/managerAPI/ManagerInterface.hpp#L175), that are either incorrect (as we've been more lenient), or untested in the `test.manager` API compliance suite.
This needs to be consistent one way or another, as everyone will be reading these docs and basing their implementation on them.
|
1.0
|
Re-visit `ManagerInterface` et. al. documentation - ## What
Check that constraints described in `ManagerInterface` and `Manager` documentation, and then `HostInterface`/`Host` etc. match reality and amend/update/ensure tested accordingly.
## Why
There are several [claims in the documentation](https://github.com/TheFoundryVisionmongers/OpenAssetIO/blob/146a8a988517520f9dab603244b7c282c0df32d9/src/openassetio-core/include/openassetio/managerAPI/ManagerInterface.hpp#L175), that are either incorrect (as we've been more lenient), or untested in the `test.manager` API compliance suite.
This needs to be consistent one way or another, as everyone will be reading these docs and basing their implementation on them.
|
non_process
|
re visit managerinterface et al documentation what check that constraints described in managerinterface and manager documentation and then hostinterface host etc match reality and amend update ensure tested accordingly why there are several that are either incorrect as we ve been more lenient or untested in the test manager api compliance suite this needs to be consistent one way or another as everyone will be reading these docs and basing their implementation on them
| 0
|
196,658
| 14,883,406,795
|
IssuesEvent
|
2021-01-20 13:17:45
|
tracim/tracim
|
https://api.github.com/repos/tracim/tracim
|
closed
|
Feat: Optional storage encryption for previews
|
add to changelog docker manually tested
|
## Feature description and goals
<!-- Explain why we want this feature and describe it. -->
Tracim can generate previews for its contents. Those previews are stored as files on the disk which could be used to retrieve information (for example backup leakage/…).
As the preview generation mechanism is very linked to writing/reading *files* a first step in the direction of securing the previews would be to store them encrypted.
Several file-based encryption solutions exists, a good candidate for it would be [gocryptfs](https://nuetzlich.net/gocryptfs/) as it is straightforward to use and make available in the official docker image.
- [x] integrate an option in the docker image which allows to encrypt the preview cache dir (`preview_cache_dir` )
<!-- ## Required sections, if relevant ## -->
<!-- - To be discussed before development -->
<!-- - Interface -->
<!-- - Translations -->
<!-- - Workaround -->
<!-- - Extra information -->
## Implemented solution
- Add copy of docker image in `Debian_New_Uwsgi`
- Adapt `Debian_New_Uwsgi` to handle encrypting depot local storage and preview_cache_dir with gocryptfs.
- Add documentation about using new docker image image with encryption.
|
1.0
|
Feat: Optional storage encryption for previews - ## Feature description and goals
<!-- Explain why we want this feature and describe it. -->
Tracim can generate previews for its contents. Those previews are stored as files on the disk which could be used to retrieve information (for example backup leakage/…).
As the preview generation mechanism is very linked to writing/reading *files* a first step in the direction of securing the previews would be to store them encrypted.
Several file-based encryption solutions exists, a good candidate for it would be [gocryptfs](https://nuetzlich.net/gocryptfs/) as it is straightforward to use and make available in the official docker image.
- [x] integrate an option in the docker image which allows to encrypt the preview cache dir (`preview_cache_dir` )
<!-- ## Required sections, if relevant ## -->
<!-- - To be discussed before development -->
<!-- - Interface -->
<!-- - Translations -->
<!-- - Workaround -->
<!-- - Extra information -->
## Implemented solution
- Add copy of docker image in `Debian_New_Uwsgi`
- Adapt `Debian_New_Uwsgi` to handle encrypting depot local storage and preview_cache_dir with gocryptfs.
- Add documentation about using new docker image image with encryption.
|
non_process
|
feat optional storage encryption for previews feature description and goals tracim can generate previews for its contents those previews are stored as files on the disk which could be used to retrieve information for example backup leakage … as the preview generation mechanism is very linked to writing reading files a first step in the direction of securing the previews would be to store them encrypted several file based encryption solutions exists a good candidate for it would be as it is straightforward to use and make available in the official docker image integrate an option in the docker image which allows to encrypt the preview cache dir preview cache dir implemented solution add copy of docker image in debian new uwsgi adapt debian new uwsgi to handle encrypting depot local storage and preview cache dir with gocryptfs add documentation about using new docker image image with encryption
| 0
|
17,713
| 23,609,610,993
|
IssuesEvent
|
2022-08-24 11:15:17
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
opened
|
pkg_libhydrogen tests fail / update libhydrogen
|
Type: bug Area: pkg Process: needs backport
|
#### Description
The libhydrogen tests fail with scary errors on recent GCC:
```
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: error: ‘hydro_x25519_core’ accessing 160 bytes in a region of size 32 [-Werror=stringop-overflow=]
104 | hydro_x25519_core(&xs[2], sig, hydro_x25519_BASE_POINT, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: note: referencing argument 1 of type ‘hydro_x25519_limb_t[5][8]’
{aka ‘unsigned int[5][8]’}
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: note: referencing argument 2 of type ‘const uint8_t[32]’ {aka ‘const unsigned char[32]’}
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/x25519.h:251:1: note: in a call to function ‘hydro_x25519_core’
251 | hydro_x25519_core(hydro_x25519_fe xs[5], const uint8_t scalar[hydro_x25519_BYTES],
| ^~~~~~~~~~~~~~~~~
```
#### Steps to reproduce the issue
* Use recent GCC
* `make -C tests/pkg_libhydrogen all`
#### Issue references
This is fixed upstream in https://github.com/jedisct1/libhydrogen/issues/123 -- and the patch that closes it indicates that it's "purely cosmetic" (ie. ignoring the errors, as an old compiler would do it, does no harm).
#### Backport status
will need to be evaluated based on the amount of upstream changes; if excessive, a patch could be backported (but that'd need to happen before the update hits the master branch). For now I plan to list that as known issues; tagging as "needs backport" still to keep track.
|
1.0
|
pkg_libhydrogen tests fail / update libhydrogen - #### Description
The libhydrogen tests fail with scary errors on recent GCC:
```
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: error: ‘hydro_x25519_core’ accessing 160 bytes in a region of size 32 [-Werror=stringop-overflow=]
104 | hydro_x25519_core(&xs[2], sig, hydro_x25519_BASE_POINT, 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: note: referencing argument 1 of type ‘hydro_x25519_limb_t[5][8]’
{aka ‘unsigned int[5][8]’}
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/sign.h:104:5: note: referencing argument 2 of type ‘const uint8_t[32]’ {aka ‘const unsigned char[32]’}
/home/chrysn/git/RIOT/build/pkg/libhydrogen/impl/x25519.h:251:1: note: in a call to function ‘hydro_x25519_core’
251 | hydro_x25519_core(hydro_x25519_fe xs[5], const uint8_t scalar[hydro_x25519_BYTES],
| ^~~~~~~~~~~~~~~~~
```
#### Steps to reproduce the issue
* Use recent GCC
* `make -C tests/pkg_libhydrogen all`
#### Issue references
This is fixed upstream in https://github.com/jedisct1/libhydrogen/issues/123 -- and the patch that closes it indicates that it's "purely cosmetic" (ie. ignoring the errors, as an old compiler would do it, does no harm).
#### Backport status
will need to be evaluated based on the amount of upstream changes; if excessive, a patch could be backported (but that'd need to happen before the update hits the master branch). For now I plan to list that as known issues; tagging as "needs backport" still to keep track.
|
process
|
pkg libhydrogen tests fail update libhydrogen description the libhydrogen tests fail with scary errors on recent gcc home chrysn git riot build pkg libhydrogen impl sign h error ‘hydro core’ accessing bytes in a region of size hydro core xs sig hydro base point home chrysn git riot build pkg libhydrogen impl sign h note referencing argument of type ‘hydro limb t ’ aka ‘unsigned int ’ home chrysn git riot build pkg libhydrogen impl sign h note referencing argument of type ‘const t ’ aka ‘const unsigned char ’ home chrysn git riot build pkg libhydrogen impl h note in a call to function ‘hydro core’ hydro core hydro fe xs const t scalar steps to reproduce the issue use recent gcc make c tests pkg libhydrogen all issue references this is fixed upstream in and the patch that closes it indicates that it s purely cosmetic ie ignoring the errors as an old compiler would do it does no harm backport status will need to be evaluated based on the amount of upstream changes if excessive a patch could be backported but that d need to happen before the update hits the master branch for now i plan to list that as known issues tagging as needs backport still to keep track
| 1
|
6,734
| 9,856,935,128
|
IssuesEvent
|
2019-06-20 00:16:40
|
natario1/CameraView
|
https://api.github.com/repos/natario1/CameraView
|
closed
|
Size.getWidth() NullPointerException
|
about:frame processing is:question needs:info status:stale
|
Device: AVD Nexus 6 API 26
CameraView version: 2.0.0-beta04
I've been getting the error below with a high frequency. It's something that I cannot simulate or force to happen, it simply 'happens' without any change in the code.
Error log:
```
2019-04-27 10:18:26.011 24639-24659/com.smartnsens.opencvapp D/Camera: app passed NULL surface
2019-04-27 10:18:26.041 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.075 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.174 24639-24660/com.smartnsens.opencvapp D/skia: onFlyCompress
2019-04-27 10:18:26.231 24639-24660/com.smartnsens.opencvapp E/AndroidRuntime: FATAL EXCEPTION: FrameProcessorsWorker
Process: com.smartnsens.opencvapp, PID: 24639
java.lang.NullPointerException: Attempt to invoke virtual method 'int com.otaliastudios.cameraview.Size.getWidth()' on a null object reference
at com.smartnsens.opencvapp.MainActivity$1.process(MainActivity.java:134)
at com.otaliastudios.cameraview.CameraView$Callbacks$11.run(CameraView.java:1809)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
2019-04-27 10:18:26.255 24639-24659/com.smartnsens.opencvapp D/Camera: app passed NULL surface
2019-04-27 10:18:26.501 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.793 24639-24639/com.smartnsens.opencvapp E/libEGL: call to OpenGL ES API with no current context (logged once per thread)
```
And my FrameProcessor code:
```java
camera.addFrameProcessor(new FrameProcessor() {
@Override
@WorkerThread
public void process(Frame frame) {
// Get all the frame's data
byte[] data = frame.getData();
int rotation = frame.getRotation();
long time = frame.getTime();
Size size = frame.getSize();
int format = frame.getFormat();
// Preview started (first)
if (mBitmap == null) {
mFrameSize = size.getWidth() * size.getHeight();
mRGBA = new int[mFrameSize];
mBitmap = Bitmap.createBitmap(size.getWidth(), size.getHeight(), Bitmap.Config.ARGB_8888);
}
// Process the new frame only if the previous DoPreviewFrame() has already finished
// the last frame processing and its rendering.
if ( !bProcessing ) {
int[] rgba = mRGBA;
// FPS counter (uses a 20frames trailing window to compute)
thisTime = System.currentTimeMillis();
mLastFramesTimes.add(thisTime);
// Hold the first 30 frames timestamps before start to process the frames
if (mLastFramesTimes.size() <= 30)
return;
// Encode the raw data frame in a Bitmap image
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.getWidth(), size.getHeight(), null);
yuvImage.compressToJpeg(new Rect(0, 0, size.getWidth(), size.getHeight()), 90, out);
byte[] imageBytes = out.toByteArray();
bmpImage = CameraUtils.decodeBitmap(imageBytes);
mHandler.post(DoPreviewFrame);
}
}
});
}
```
Best Regards.
Kleyson Rios.
|
1.0
|
Size.getWidth() NullPointerException - Device: AVD Nexus 6 API 26
CameraView version: 2.0.0-beta04
I've been getting the error below with a high frequency. It's something that I cannot simulate or force to happen, it simply 'happens' without any change in the code.
Error log:
```
2019-04-27 10:18:26.011 24639-24659/com.smartnsens.opencvapp D/Camera: app passed NULL surface
2019-04-27 10:18:26.041 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.075 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.174 24639-24660/com.smartnsens.opencvapp D/skia: onFlyCompress
2019-04-27 10:18:26.231 24639-24660/com.smartnsens.opencvapp E/AndroidRuntime: FATAL EXCEPTION: FrameProcessorsWorker
Process: com.smartnsens.opencvapp, PID: 24639
java.lang.NullPointerException: Attempt to invoke virtual method 'int com.otaliastudios.cameraview.Size.getWidth()' on a null object reference
at com.smartnsens.opencvapp.MainActivity$1.process(MainActivity.java:134)
at com.otaliastudios.cameraview.CameraView$Callbacks$11.run(CameraView.java:1809)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
2019-04-27 10:18:26.255 24639-24659/com.smartnsens.opencvapp D/Camera: app passed NULL surface
2019-04-27 10:18:26.501 24639-24663/com.smartnsens.opencvapp D/EGL_emulation: eglMakeCurrent: 0xa86052a0: ver 2 0 (tinfo 0xa8603380)
2019-04-27 10:18:26.793 24639-24639/com.smartnsens.opencvapp E/libEGL: call to OpenGL ES API with no current context (logged once per thread)
```
And my FrameProcessor code:
```java
camera.addFrameProcessor(new FrameProcessor() {
@Override
@WorkerThread
public void process(Frame frame) {
// Get all the frame's data
byte[] data = frame.getData();
int rotation = frame.getRotation();
long time = frame.getTime();
Size size = frame.getSize();
int format = frame.getFormat();
// Preview started (first)
if (mBitmap == null) {
mFrameSize = size.getWidth() * size.getHeight();
mRGBA = new int[mFrameSize];
mBitmap = Bitmap.createBitmap(size.getWidth(), size.getHeight(), Bitmap.Config.ARGB_8888);
}
// Process the new frame only if the previous DoPreviewFrame() has already finished
// the last frame processing and its rendering.
if ( !bProcessing ) {
int[] rgba = mRGBA;
// FPS counter (uses a 20frames trailing window to compute)
thisTime = System.currentTimeMillis();
mLastFramesTimes.add(thisTime);
// Hold the first 30 frames timestamps before start to process the frames
if (mLastFramesTimes.size() <= 30)
return;
// Encode the raw data frame in a Bitmap image
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.getWidth(), size.getHeight(), null);
yuvImage.compressToJpeg(new Rect(0, 0, size.getWidth(), size.getHeight()), 90, out);
byte[] imageBytes = out.toByteArray();
bmpImage = CameraUtils.decodeBitmap(imageBytes);
mHandler.post(DoPreviewFrame);
}
}
});
}
```
Best Regards.
Kleyson Rios.
|
process
|
size getwidth nullpointerexception device avd nexus api cameraview version i ve been getting the error below with a high frequency it s something that i cannot simulate or force to happen it simply happens without any change in the code error log com smartnsens opencvapp d camera app passed null surface com smartnsens opencvapp d egl emulation eglmakecurrent ver tinfo com smartnsens opencvapp d egl emulation eglmakecurrent ver tinfo com smartnsens opencvapp d skia onflycompress com smartnsens opencvapp e androidruntime fatal exception frameprocessorsworker process com smartnsens opencvapp pid java lang nullpointerexception attempt to invoke virtual method int com otaliastudios cameraview size getwidth on a null object reference at com smartnsens opencvapp mainactivity process mainactivity java at com otaliastudios cameraview cameraview callbacks run cameraview java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android os handlerthread run handlerthread java com smartnsens opencvapp d camera app passed null surface com smartnsens opencvapp d egl emulation eglmakecurrent ver tinfo com smartnsens opencvapp e libegl call to opengl es api with no current context logged once per thread and my frameprocessor code java camera addframeprocessor new frameprocessor override workerthread public void process frame frame get all the frame s data byte data frame getdata int rotation frame getrotation long time frame gettime size size frame getsize int format frame getformat preview started first if mbitmap null mframesize size getwidth size getheight mrgba new int mbitmap bitmap createbitmap size getwidth size getheight bitmap config argb process the new frame only if the previous dopreviewframe has already finished the last frame processing and its rendering if bprocessing int rgba mrgba fps counter uses a trailing window to compute thistime system currenttimemillis mlastframestimes add thistime hold the first frames timestamps before start to process the frames if mlastframestimes size return encode the raw data frame in a bitmap image bytearrayoutputstream out new bytearrayoutputstream yuvimage yuvimage new yuvimage data imageformat size getwidth size getheight null yuvimage compresstojpeg new rect size getwidth size getheight out byte imagebytes out tobytearray bmpimage camerautils decodebitmap imagebytes mhandler post dopreviewframe best regards kleyson rios
| 1
|
2,874
| 5,831,717,515
|
IssuesEvent
|
2017-05-08 20:02:46
|
whosonfirst/whosonfirst-www-boundaryissues
|
https://api.github.com/repos/whosonfirst/whosonfirst-www-boundaryissues
|
opened
|
Ensure all records have a "wof:parent_id" property
|
pipeline pipeline-preprocess
|
For example:
```
$> less data/110/880/489/1/1108804891.geojson
...
"wof:geomhash":"fcbd8adee77e425762af4766b534c4e3",
"wof:hierarchy":[
{
"country_id":85632761,
"region_id":1108804891
}
],
"wof:id":1108804891,
"wof:lastmodified":1494271642,
"wof:name":"Osh City",
"wof:placetype":"region",
"wof:repo":"whosonfirst-data"
...
```
|
1.0
|
Ensure all records have a "wof:parent_id" property - For example:
```
$> less data/110/880/489/1/1108804891.geojson
...
"wof:geomhash":"fcbd8adee77e425762af4766b534c4e3",
"wof:hierarchy":[
{
"country_id":85632761,
"region_id":1108804891
}
],
"wof:id":1108804891,
"wof:lastmodified":1494271642,
"wof:name":"Osh City",
"wof:placetype":"region",
"wof:repo":"whosonfirst-data"
...
```
|
process
|
ensure all records have a wof parent id property for example less data geojson wof geomhash wof hierarchy country id region id wof id wof lastmodified wof name osh city wof placetype region wof repo whosonfirst data
| 1
|
8,359
| 11,515,283,646
|
IssuesEvent
|
2020-02-14 00:37:17
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Token '23/Nov/2019' doesn't match specifier '%d'
|
log-processing log/date/time format question
|
## nginx: 1.16.1
## goaccess: 1.3_1

## nginx access log format:
```
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
```
## nginx access log example:
```
127.0.0.1 - - [23/Nov/2019:11:45:04 +0800] "GET /hadoop-project-dist/hadoop-hdfs/WebHDFS.html HTTP/1.0" 200 125430 "http://localhost:9000/hadoop-hdfs-httpfs/index.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" "127.0.0.1"
```
## error message
```sh
$ /usr/local/Cellar/goaccess/1.3_1/bin/goaccess logs/access.log -o /Users/destiny/dev/nginx/logs/report.html --time-format='%T' --date-format='%d/%b/%Y' --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^"'
Token '23/Nov/2019' doesn't match specifier '%d'
```
|
1.0
|
Token '23/Nov/2019' doesn't match specifier '%d' - ## nginx: 1.16.1
## goaccess: 1.3_1

## nginx access log format:
```
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
```
## nginx access log example:
```
127.0.0.1 - - [23/Nov/2019:11:45:04 +0800] "GET /hadoop-project-dist/hadoop-hdfs/WebHDFS.html HTTP/1.0" 200 125430 "http://localhost:9000/hadoop-hdfs-httpfs/index.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" "127.0.0.1"
```
## error message
```sh
$ /usr/local/Cellar/goaccess/1.3_1/bin/goaccess logs/access.log -o /Users/destiny/dev/nginx/logs/report.html --time-format='%T' --date-format='%d/%b/%Y' --log-format='%h - %^ [%d:%t %^] "%r" %s %b "%R" "%u" "%^"'
Token '23/Nov/2019' doesn't match specifier '%d'
```
|
process
|
token nov doesn t match specifier d nginx goaccess nginx access log format log format main remote addr remote user request status body bytes sent http referer http user agent http x forwarded for nginx access log example get hadoop project dist hadoop hdfs webhdfs html http mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari error message sh usr local cellar goaccess bin goaccess logs access log o users destiny dev nginx logs report html time format t date format d b y log format h r s b r u token nov doesn t match specifier d
| 1
|
764,408
| 26,798,943,704
|
IssuesEvent
|
2023-02-01 13:54:04
|
prysmaticlabs/prysm
|
https://api.github.com/repos/prysmaticlabs/prysm
|
closed
|
Make Running a Prysm Local Devnet Trivial
|
Enhancement Help Wanted Priority: Low
|
# 🚀 Feature Request
### Description
Running a local Prysm "devnet" meaning mock beacon node + validators is non-trivial and requires some commands and flags that are unintuitive to users. Being able to launch a dev setup is important and is done extensively by our team when testing code, however, no instructions are written anywhere and it is not simple to do so.
### Describe the solution you'd like
Ideally, something like `prysm.sh beacon-chain --dev` and `prysm.sh validator --dev` would spin up some simple configuration with N validators, allowing easy testing of changes at runtime with minimal overhead. If certain parameters wish to be changed, they can be specified as additional flags. Note this will only launch a single beacon node. For more complex setups, such as peering multiple nodes to this dev environment, we should also have a simple approach to accomplish this. Perhaps --dev mode can launch a very primitive bootnode that makes spinning up a second beacon node with --dev peer with the first.
These changes should be accompanied by a new page in our documentation portal on how to run a local devnet.
### Current approach
The current way to launch a single node, local devnet is to do the following
**Generate genesis state**
`bazel run //tools/genesis-state-gen -- --num-validators=1024 --output-ssz=/tmp/genesis.ssz --mainnet-config`
**Run beacon node**
```
bazel run //beacon-chain -- --datadir /tmp/chaindata --force-clear-db --interop-genesis-state /tmp/genesis.ssz --interop-eth1data-votes --min-sync-peers=0 --http-web3provider=https://goerli.prylabs.net/ --deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 --bootstrap-node= --pprof
```
**Run validator client with keys**
```
bazel run //validator -- --beacon-rpc-provider localhost:4000 --interop-num-validators=1024 --interop-start-index=0 --force-clear-db
```
Note the approach above uses deterministic private keys for the sake of testing, so perhaps our solution can allow users to also specify their own keys easily.
|
1.0
|
Make Running a Prysm Local Devnet Trivial - # 🚀 Feature Request
### Description
Running a local Prysm "devnet" meaning mock beacon node + validators is non-trivial and requires some commands and flags that are unintuitive to users. Being able to launch a dev setup is important and is done extensively by our team when testing code, however, no instructions are written anywhere and it is not simple to do so.
### Describe the solution you'd like
Ideally, something like `prysm.sh beacon-chain --dev` and `prysm.sh validator --dev` would spin up some simple configuration with N validators, allowing easy testing of changes at runtime with minimal overhead. If certain parameters wish to be changed, they can be specified as additional flags. Note this will only launch a single beacon node. For more complex setups, such as peering multiple nodes to this dev environment, we should also have a simple approach to accomplish this. Perhaps --dev mode can launch a very primitive bootnode that makes spinning up a second beacon node with --dev peer with the first.
These changes should be accompanied by a new page in our documentation portal on how to run a local devnet.
### Current approach
The current way to launch a single node, local devnet is to do the following
**Generate genesis state**
`bazel run //tools/genesis-state-gen -- --num-validators=1024 --output-ssz=/tmp/genesis.ssz --mainnet-config`
**Run beacon node**
```
bazel run //beacon-chain -- --datadir /tmp/chaindata --force-clear-db --interop-genesis-state /tmp/genesis.ssz --interop-eth1data-votes --min-sync-peers=0 --http-web3provider=https://goerli.prylabs.net/ --deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 --bootstrap-node= --pprof
```
**Run validator client with keys**
```
bazel run //validator -- --beacon-rpc-provider localhost:4000 --interop-num-validators=1024 --interop-start-index=0 --force-clear-db
```
Note the approach above uses deterministic private keys for the sake of testing, so perhaps our solution can allow users to also specify their own keys easily.
|
non_process
|
make running a prysm local devnet trivial 🚀 feature request description running a local prysm devnet meaning mock beacon node validators is non trivial and requires some commands and flags that are unintuitive to users being able to launch a dev setup is important and is done extensively by our team when testing code however no instructions are written anywhere and it is not simple to do so describe the solution you d like ideally something like prysm sh beacon chain dev and prysm sh validator dev would spin up some simple configuration with n validators allowing easy testing of changes at runtime with minimal overhead if certain parameters wish to be changed they can be specified as additional flags note this will only launch a single beacon node for more complex setups such as peering multiple nodes to this dev environment we should also have a simple approach to accomplish this perhaps dev mode can launch a very primitive bootnode that makes spinning up a second beacon node with dev peer with the first these changes should be accompanied by a new page in our documentation portal on how to run a local devnet current approach the current way to launch a single node local devnet is to do the following generate genesis state bazel run tools genesis state gen num validators output ssz tmp genesis ssz mainnet config run beacon node bazel run beacon chain datadir tmp chaindata force clear db interop genesis state tmp genesis ssz interop votes min sync peers http deposit contract bootstrap node pprof run validator client with keys bazel run validator beacon rpc provider localhost interop num validators interop start index force clear db note the approach above uses deterministic private keys for the sake of testing so perhaps our solution can allow users to also specify their own keys easily
| 0
|
11,188
| 13,957,697,973
|
IssuesEvent
|
2020-10-24 08:12:16
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
RO: The Romanian Geoportal is down
|
Geoportal Harvesting process RO - Romania
|
Dear Angelo,
We kindly request do not start a new harvesting from the Romanian Discovery Service.
We have a technical problem with our national geoportal. I will announce when it works.
Best regards,
Simona Bunea
|
1.0
|
RO: The Romanian Geoportal is down - Dear Angelo,
We kindly request do not start a new harvesting from the Romanian Discovery Service.
We have a technical problem with our national geoportal. I will announce when it works.
Best regards,
Simona Bunea
|
process
|
ro the romanian geoportal is down dear angelo we kindly request do not start a new harvesting from the romanian discovery service we have a technical problem with our national geoportal i will announce when it works best regards simona bunea
| 1
|
22,116
| 30,646,206,169
|
IssuesEvent
|
2023-07-25 05:03:18
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Add .readthedocs.yaml for Readthedocs
|
issue-processing-state-03
|
[This official Readthedocs announcement](https://blog.readthedocs.com/migrate-configuration-v2/) calls for a configuration file `.readthedocs.yaml` to be added to compile the document properly in the future.
|
1.0
|
Add .readthedocs.yaml for Readthedocs - [This official Readthedocs announcement](https://blog.readthedocs.com/migrate-configuration-v2/) calls for a configuration file `.readthedocs.yaml` to be added to compile the document properly in the future.
|
process
|
add readthedocs yaml for readthedocs calls for a configuration file readthedocs yaml to be added to compile the document properly in the future
| 1
|
17,601
| 23,425,324,400
|
IssuesEvent
|
2022-08-14 09:55:01
|
Battle-s/battle-school-backend
|
https://api.github.com/repos/Battle-s/battle-school-backend
|
opened
|
[FEAT] 종목 생성 및 조회
|
feature :computer: processing :hourglass_flowing_sand:
|
## 설명
> 종목 생성 및 조회
## 체크사항
- [ ] 카테고리(종목) 엔티티 및 repo 생성
- [ ] 종목 서비스 - crud
## 참고자료
## 관련 논의
|
1.0
|
[FEAT] 종목 생성 및 조회 - ## 설명
> 종목 생성 및 조회
## 체크사항
- [ ] 카테고리(종목) 엔티티 및 repo 생성
- [ ] 종목 서비스 - crud
## 참고자료
## 관련 논의
|
process
|
종목 생성 및 조회 설명 종목 생성 및 조회 체크사항 카테고리 종목 엔티티 및 repo 생성 종목 서비스 crud 참고자료 관련 논의
| 1
|
8,355
| 11,503,329,111
|
IssuesEvent
|
2020-02-12 20:52:55
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Tidying up multi-organism processes: Children of GO:0098630 aggregation of unicellular organisms
|
multi-species process
|
Move under 'intraspecies interaction between organisms':
* GO:0000752 agglutination involved in conjugation with cellular fusion
* GO:0000758 agglutination involved in conjugation with mutual genetic exchange
* GO:0031152 aggregation involved in sorocarp development
* GO:0000128 flocculation
Move under 'interspecies interaction between organisms':
* GO:0036281 coflocculation
Merge
* GO:0052000 type IV pili-dependent aggregation 0 annotations
-> merge into GO:0044010 single-species biofilm formation (see PMID:25852657)
|
1.0
|
Tidying up multi-organism processes: Children of GO:0098630 aggregation of unicellular organisms - Move under 'intraspecies interaction between organisms':
* GO:0000752 agglutination involved in conjugation with cellular fusion
* GO:0000758 agglutination involved in conjugation with mutual genetic exchange
* GO:0031152 aggregation involved in sorocarp development
* GO:0000128 flocculation
Move under 'interspecies interaction between organisms':
* GO:0036281 coflocculation
Merge
* GO:0052000 type IV pili-dependent aggregation 0 annotations
-> merge into GO:0044010 single-species biofilm formation (see PMID:25852657)
|
process
|
tidying up multi organism processes children of go aggregation of unicellular organisms move under intraspecies interaction between organisms go agglutination involved in conjugation with cellular fusion go agglutination involved in conjugation with mutual genetic exchange go aggregation involved in sorocarp development go flocculation move under interspecies interaction between organisms go coflocculation merge go type iv pili dependent aggregation annotations merge into go single species biofilm formation see pmid
| 1
|
347,392
| 24,888,198,803
|
IssuesEvent
|
2022-10-28 09:35:51
|
Ugholaf/ped
|
https://api.github.com/repos/Ugholaf/ped
|
opened
|
UG - Duplicate feature
|
type.DocumentationBug severity.Medium
|
### Feature 2.2.6

### Feature 2.4.2 (Under 2.4 Advanced Features)

Not really sure if this is a bug, but it seems to me that they are the same. Feature 2.4.2 is basically the same edit feature in 2.2.6 as they have the same command format but 2.4.2 just edits the tag name (in this case a shorter tag name).
I feel like this can include as a tip rather than an advanced feature.
Unless the command word `events` in 2.4.2 means a different command from the `event` in 2.2.6, but the command `events -e 1 t/tut` is not valid, so I am not sure if the command with `events` is not implemented or is just typo.
<!--session: 1666944684439-be126bf8-a38c-4dd9-9781-8ffaa0d02547-->
<!--Version: Web v3.4.4-->
|
1.0
|
UG - Duplicate feature - ### Feature 2.2.6

### Feature 2.4.2 (Under 2.4 Advanced Features)

Not really sure if this is a bug, but it seems to me that they are the same. Feature 2.4.2 is basically the same edit feature in 2.2.6 as they have the same command format but 2.4.2 just edits the tag name (in this case a shorter tag name).
I feel like this can include as a tip rather than an advanced feature.
Unless the command word `events` in 2.4.2 means a different command from the `event` in 2.2.6, but the command `events -e 1 t/tut` is not valid, so I am not sure if the command with `events` is not implemented or is just typo.
<!--session: 1666944684439-be126bf8-a38c-4dd9-9781-8ffaa0d02547-->
<!--Version: Web v3.4.4-->
|
non_process
|
ug duplicate feature feature feature under advanced features not really sure if this is a bug but it seems to me that they are the same feature is basically the same edit feature in as they have the same command format but just edits the tag name in this case a shorter tag name i feel like this can include as a tip rather than an advanced feature unless the command word events in means a different command from the event in but the command events e t tut is not valid so i am not sure if the command with events is not implemented or is just typo
| 0
|
66,601
| 12,805,844,517
|
IssuesEvent
|
2020-07-03 08:20:06
|
danglotb/skillful_network
|
https://api.github.com/repos/danglotb/skillful_network
|
closed
|
%3.2 FEAT: Refonte register
|
code enhancement
|
* Tempory code becomes current password
* Add template form registration-confirmation
* Add new feature in registration-confirmation to get more informations directly: firstName, lastName, and role.
|
1.0
|
%3.2 FEAT: Refonte register - * Tempory code becomes current password
* Add template form registration-confirmation
* Add new feature in registration-confirmation to get more informations directly: firstName, lastName, and role.
|
non_process
|
feat refonte register tempory code becomes current password add template form registration confirmation add new feature in registration confirmation to get more informations directly firstname lastname and role
| 0
|
141,912
| 21,639,545,293
|
IssuesEvent
|
2022-05-05 17:17:31
|
Joystream/atlas
|
https://api.github.com/repos/Joystream/atlas
|
opened
|
Update wording for cases when bid withdrawn automatically (when higher bid is placed)
|
enhancement design NFT
|
- [ ] Purchase view
- [ ] Notification when higher bid was placed by someone else
|
1.0
|
Update wording for cases when bid withdrawn automatically (when higher bid is placed) - - [ ] Purchase view
- [ ] Notification when higher bid was placed by someone else
|
non_process
|
update wording for cases when bid withdrawn automatically when higher bid is placed purchase view notification when higher bid was placed by someone else
| 0
|
3,520
| 6,562,270,721
|
IssuesEvent
|
2017-09-07 15:58:25
|
amaster507/ifbmt
|
https://api.github.com/repos/amaster507/ifbmt
|
opened
|
User Rights and Privileges
|
idea process
|
In regards to Updating Church Information #5, there needs to be made a user point system where user earn points for good practice using ifbmt and the higher points allow the user more rights and privileges. I imagine this similar to how stackoverflow.com manages user points. What needs to be decided is:
- What actions to reward points to
- How many points to award for certain actions
- What privileges are granted at what point levels
Here are some initial ideas:
## Actions and Points Rewarded
- 30 Days Active User **+20 Points**
- 90 Days Active User **+20 Points**
- 120 Days Active User **+30 Points**
- Suggest Church Edit **+2 Points**
- Receive +1 on Suggested Edit **+10 Points**
- Receive -1 on Suggested Edit **-10 Points**
- Receive total of 30 _+1_'s on Suggested Edits **+100 Point Bonus**
- Receive total of 300 _+1_'s on Suggested Edits **+1k Point Bonus**
- Receive +1 on Public Note **+10 Points**
- Receive -1 on Public Note **-10 Points**
- Refer a user that signs up and reaches 50 points **+20 Points**
- Downvote on Public Note **-1 Point** _this prevents excessive downvotes_
- Downvote on Suggested Edit **-1 Point** _this prevents excessive downvotes_
## Privilege Levels
- Ability to Downvote **20 Points**
- Ability to Confirm Suggested Church Edit **100 Points**
- An upvote from a user with this ability automatically counts as a confirm
- _3 Confirmations needed before the edit is saved as permanent_
- _suggested edits can continue to receive upvotes after confirmed from anyone who finds it useful_
- _a user may only upvote or downvote an item once, but may change their own vote_
- Ability to Edit Public Notes **200 Points**
- a user may always edit their own public note within the first 30 minutes
- Ability to Suggest Deletion of Public Notes **300 Points**
- A Downvote from a user with this ability automatically counts as a delete suggestion
- _3 Delete Suggestions are needed before the note is made invisible to the public_
- Deleted public notes no longer can be downvoted
This is just my rough draft idea.
|
1.0
|
User Rights and Privileges - In regards to Updating Church Information #5, there needs to be made a user point system where user earn points for good practice using ifbmt and the higher points allow the user more rights and privileges. I imagine this similar to how stackoverflow.com manages user points. What needs to be decided is:
- What actions to reward points to
- How many points to award for certain actions
- What privileges are granted at what point levels
Here are some initial ideas:
## Actions and Points Rewarded
- 30 Days Active User **+20 Points**
- 90 Days Active User **+20 Points**
- 120 Days Active User **+30 Points**
- Suggest Church Edit **+2 Points**
- Receive +1 on Suggested Edit **+10 Points**
- Receive -1 on Suggested Edit **-10 Points**
- Receive total of 30 _+1_'s on Suggested Edits **+100 Point Bonus**
- Receive total of 300 _+1_'s on Suggested Edits **+1k Point Bonus**
- Receive +1 on Public Note **+10 Points**
- Receive -1 on Public Note **-10 Points**
- Refer a user that signs up and reaches 50 points **+20 Points**
- Downvote on Public Note **-1 Point** _this prevents excessive downvotes_
- Downvote on Suggested Edit **-1 Point** _this prevents excessive downvotes_
## Privilege Levels
- Ability to Downvote **20 Points**
- Ability to Confirm Suggested Church Edit **100 Points**
- An upvote from a user with this ability automatically counts as a confirm
- _3 Confirmations needed before the edit is saved as permanent_
- _suggested edits can continue to receive upvotes after confirmed from anyone who finds it useful_
- _a user may only upvote or downvote an item once, but may change their own vote_
- Ability to Edit Public Notes **200 Points**
- a user may always edit their own public note within the first 30 minutes
- Ability to Suggest Deletion of Public Notes **300 Points**
- A Downvote from a user with this ability automatically counts as a delete suggestion
- _3 Delete Suggestions are needed before the note is made invisible to the public_
- Deleted public notes no longer can be downvoted
This is just my rough draft idea.
|
process
|
user rights and privileges in regards to updating church information there needs to be made a user point system where user earn points for good practice using ifbmt and the higher points allow the user more rights and privileges i imagine this similar to how stackoverflow com manages user points what needs to be decided is what actions to reward points to how many points to award for certain actions what privileges are granted at what point levels here are some initial ideas actions and points rewarded days active user points days active user points days active user points suggest church edit points receive on suggested edit points receive on suggested edit points receive total of s on suggested edits point bonus receive total of s on suggested edits point bonus receive on public note points receive on public note points refer a user that signs up and reaches points points downvote on public note point this prevents excessive downvotes downvote on suggested edit point this prevents excessive downvotes privilege levels ability to downvote points ability to confirm suggested church edit points an upvote from a user with this ability automatically counts as a confirm confirmations needed before the edit is saved as permanent suggested edits can continue to receive upvotes after confirmed from anyone who finds it useful a user may only upvote or downvote an item once but may change their own vote ability to edit public notes points a user may always edit their own public note within the first minutes ability to suggest deletion of public notes points a downvote from a user with this ability automatically counts as a delete suggestion delete suggestions are needed before the note is made invisible to the public deleted public notes no longer can be downvoted this is just my rough draft idea
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.