Column schema of the sample (dtype, observed range or number of distinct values):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 (row index) | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
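For orientation, here is a minimal sketch of loading a dump with this schema and checking the columns; it assumes pandas and a hypothetical file name `issues_sample.csv`, neither of which is stated by the source.

```python
import pandas as pd

# Hypothetical path -- point this at wherever the exported sample actually lives.
df = pd.read_csv("issues_sample.csv")

# Columns and dtypes should match the schema table above.
print(df.dtypes)

# The two target columns seen in the rows below.
print(df["label"].value_counts())         # process / non_process
print(df["binary_label"].value_counts())  # 1 / 0
```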
row 28,834 | id: 12,974,217,847 | type: IssuesEvent | created_at: 2020-07-21 15:07:14 | action: opened
repo: terraform-providers/terraform-provider-aws | repo_url: https://api.github.com/repos/terraform-providers/terraform-provider-aws
title: tests/resource/aws_rds_cluster: TestAccAWSRDSCluster_Port Consistently Failing Since Postgres 11
labels: good first issue service/rds tests
body:
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Latest codebase
### Affected Resource(s)
* aws_rds_cluster
### Expected Behavior
Test should pass consistently. 😄
### Actual Behavior
Consistent failures:
```
--- FAIL: TestAccAWSRDSCluster_Port (4.47s)
testing.go:684: Step 0 error: errors during apply:
Error: error creating RDS cluster: InvalidParameterCombination: The Parameter Group default.aurora-postgresql10 with DBParameterGroupFamily aurora-postgresql10 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11
```
### Steps to Reproduce
1. `make testacc TESTARGS='-run=TestAccAWSRDSCluster_Port'`
index: 1.0
text_combine:
tests/resource/aws_rds_cluster: TestAccAWSRDSCluster_Port Consistently Failing Since Postgres 11 - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Latest codebase
### Affected Resource(s)
* aws_rds_cluster
### Expected Behavior
Test should pass consistently. 😄
### Actual Behavior
Consistent failures:
```
--- FAIL: TestAccAWSRDSCluster_Port (4.47s)
testing.go:684: Step 0 error: errors during apply:
Error: error creating RDS cluster: InvalidParameterCombination: The Parameter Group default.aurora-postgresql10 with DBParameterGroupFamily aurora-postgresql10 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11
```
### Steps to Reproduce
1. `make testacc TESTARGS='-run=TestAccAWSRDSCluster_Port'`
label: non_process
text:
tests resource aws rds cluster testaccawsrdscluster port consistently failing since postgres please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version latest codebase affected resource s aws rds cluster expected behavior test should pass consistently 😄 actual behavior consistent failures fail testaccawsrdscluster port testing go step error errors during apply error error creating rds cluster invalidparametercombination the parameter group default aurora with dbparametergroupfamily aurora cannot be used for this instance please use a parameter group with dbparametergroupfamily aurora steps to reproduce make testacc testargs run testaccawsrdscluster port
binary_label: 0
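Comparing `body`/`text_combine` with `text` in the row above, the `text` column looks like a lower-cased copy with HTML, URLs, digits and ASCII punctuation stripped and whitespace collapsed. The sketch below is an assumed reconstruction of that cleaning step, inferred only from this sample; it will not necessarily reproduce the column exactly.

```python
import re
import string

def clean_text(raw: str) -> str:
    """Assumed approximation of the text_combine -> text preprocessing."""
    s = raw.lower()
    s = re.sub(r"<[^>]+>", " ", s)          # drop HTML tags and comments
    s = re.sub(r"https?://\S+", " ", s)     # drop URLs
    s = re.sub(r"\d+", " ", s)              # drop digits (e.g. "postgres 11" -> "postgres")
    s = s.translate(str.maketrans({c: " " for c in string.punctuation}))  # drop ASCII punctuation
    return re.sub(r"\s+", " ", s).strip()   # collapse whitespace
```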
row 18,566 | id: 24,555,883,634 | type: IssuesEvent | created_at: 2022-10-12 15:50:00 | action: closed
repo: GoogleCloudPlatform/fda-mystudies | repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
title: [Android] [Offline Indicator] Offline error message is not getting displayed in the following scenario
labels: Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
body:
**Steps:**
1. Open the installed app
2. Sign in and complete the passcode process
3. Navigate to 'Study list' screen
4. Click on the study which has updated consent flow
5. Turn off the internet/mobile data
6. Complete all the steps in consent flow and navigate to 'Consent Signature' screen
7. Click on 'Next' button and Verify
**AR:** Offline error message is not getting displayed in the following scenario
**ER:** Offline error message should get displayed in the following scenario
https://user-images.githubusercontent.com/86007179/178954827-1fddee65-7117-406d-b86b-1e30cde9800b.mp4
index: 3.0
text_combine:
[Android] [Offline Indicator] Offline error message is not getting displayed in the following scenario - **Steps:**
1. Open the installed app
2. Sign in and complete the passcode process
3. Navigate to 'Study list' screen
4. Click on the study which has updated consent flow
5. Turn off the internet/mobile data
6. Complete all the steps in consent flow and navigate to 'Consent Signature' screen
7. Click on 'Next' button and Verify
**AR:** Offline error message is not getting displayed in the following scenario
**ER:** Offline error message should get displayed in the following scenario
https://user-images.githubusercontent.com/86007179/178954827-1fddee65-7117-406d-b86b-1e30cde9800b.mp4
label: process
text:
offline error message is not getting displayed in the following scenario steps open the installed app sign in and complete the passcode process navigate to study list screen click on the study which has updated consent flow turn off the internet mobile data complete all the steps in consent flow and navigate to consent signature screen click on next button and verify ar offline error message is not getting displayed in the following scenario er offline error message should get displayed in the following scenario
binary_label: 1
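The two rows above also show the encoding of the target: `label` = non_process maps to `binary_label` = 0 and `label` = process maps to 1. Assuming the frame from the earlier sketch, the mapping is a one-liner:

```python
# process -> 1, non_process -> 0 (as seen in the sample rows)
df["binary_label"] = (df["label"] == "process").astype(int)
```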
row 181,073 | id: 30,616,535,052 | type: IssuesEvent | created_at: 2023-07-24 04:02:37 | action: closed
repo: MozillaFoundation/foundation.mozilla.org | repo_url: https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org
title: [RCC] Update Figma deck with recap/reflections
labels: design RCC
body:
Update [Figma deck](https://www.figma.com/file/dPGD3acNbT75dvEGzNaGUb/Responsible-Computing?type=design&node-id=1%3A15&mode=design&t=NqMbdtCj3LOlHQw9-1):
- [x] Make sure deck is up to date
- [x] Reflections section with reflections and next steps/upcoming work
index: 1.0
text_combine:
[RCC] Update Figma deck with recap/reflections - Update [Figma deck](https://www.figma.com/file/dPGD3acNbT75dvEGzNaGUb/Responsible-Computing?type=design&node-id=1%3A15&mode=design&t=NqMbdtCj3LOlHQw9-1):
- [x] Make sure deck is up to date
- [x] Reflections section with reflections and next steps/upcoming work
label: non_process
text:
update figma deck with recap reflections update make sure deck is up to date reflections section with reflections and next steps upcoming work
binary_label: 0
row 10,305 | id: 13,155,326,986 | type: IssuesEvent | created_at: 2020-08-10 08:41:04 | action: closed
repo: didi/mpx | repo_url: https://api.github.com/repos/didi/mpx
title: 公共组件webpack打包问题 (shared component webpack bundling problem)
labels: processing
body:
**Problem description**
When multiple subpackages reference the same component written in .mpx and its js|ts is imported from an external file, only the first subpackage to load the component works correctly. For example, with pages a and b in different subpackages, if I enter page a first and then page b, page b throws an error; in the reverse order, page a throws the error.
The list.mpx file is as follows:
```
<template>
<view class="list">
<view wx:for="{{listData}}" wx:key="index">{{item}}</view>
</view>
</template>
<script lang="ts" src="./list.ts"></script>
<style lang="stylus">
.list
background-color red
</style>
<script type="application/json">
{
"component": true
}
</script>
```
list.ts
```
import { createComponent } from '@mpxjs/core'
createComponent({
data: {
listData: ['手机', '电视', '电脑']
}
})
```
The error reported is:
```
jsEnginScriptError
Component is not found in path "packCenter/components/list5b238f42/list" (using by "packCenter/pages/orderDetail/index");onAppRoute
Error: Component is not found in path "packCenter/components/list5b238f42/list" (using by "packCenter/pages/orderDetail/index")
at K (WAService.js:1:1214064)
at K (WAService.js:1:1214268)
at WAService.js:1:1232006
at Module.Fe (WAService.js:1:1232585)
at Function.value (WAService.js:1:1260806)
at It (WAService.js:1:1276660)
at xt (WAService.js:1:1279059)
at Function.<anonymous> (WAService.js:1:1284536)
at i.<anonymous> (WAService.js:1:1253492)
at i.emit (WAService.js:1:412028)
```
After building, the dist directory contains a copy of the list component's code in both subpackages.
<img width="375" alt="屏幕快照 2020-08-10 上午11 49 31" src="https://user-images.githubusercontent.com/22835834/89750758-eadba400-daff-11ea-8efe-fb0ac738eee1.png">
<img width="252" alt="屏幕快照 2020-08-10 上午11 52 48" src="https://user-images.githubusercontent.com/22835834/89750817-1fe7f680-db00-11ea-8050-f9f1774287a4.png">
The generated .wxml and .wxss files are fine; the content of the js file is as follows:
```
var window = window || {};
window["webpackJsonp"] = require("../../../bundle.js");
(window["webpackJsonp"] = window["webpackJsonp"] || []).push([["packShop/components/list5b238f42/list"],{
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=json&index=0!./node_modules/@mpxjs/webpack-plugin/lib/json-compiler/index.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=json&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=styles&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxss/loader.js?root=&importLoaders=1&extract=true!./node_modules/@mpxjs/webpack-plugin/lib/style-compiler/index.js?{\"moduleId\":\"m11703cf3\",\"scoped\":false,\"sourceMap\":false}!./node_modules/stylus-loader/index.js!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=styles&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=template&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxml/wxml-loader.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/template-compiler/index.js?{\"usingComponents\":[],\"hasScoped\":false,\"isNative\":false,\"moduleId\":\"m11703cf3\",\"root\":\"\"}!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=template&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, __webpack_exports__, __webpack_require__) {
"use strict";
__webpack_require__.r(__webpack_exports__);
/* WEBPACK VAR INJECTION */(function(global) {/* mpx inject */ global.currentModuleId = "m11703cf3"
global.currentResource = "/Users/litao/work/GM/gome/src/components/list/list.mpx"
global.currentCtor = Component
global.currentCtorType = "component"
global.currentSrcMode = "wx"
/* mpx inject */ global.currentInject = {
moduleId: "m11703cf3",
render: function () {
this._c("mpxShow", this.mpxShow) || this._c("mpxShow", this.mpxShow) === undefined ? '' : 'display:none;';
this._i(this._c("listData", this.listData), function (item, index) {
item;
});
this._r();
}
};
/* harmony import */ var _list_ts_resourcePath_Users_litao_work_GM_gome_src_components_list_list_mpx__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__("./src/components/list/list.ts?resourcePath=/Users/litao/work/GM/gome/src/components/list/list.mpx");
/* empty/unused harmony star reexport */global.currentModuleId
/* script */
/* styles */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=styles&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxss/loader.js?root=&importLoaders=1&extract=true!./node_modules/@mpxjs/webpack-plugin/lib/style-compiler/index.js?{\"moduleId\":\"m11703cf3\",\"scoped\":false,\"sourceMap\":false}!./node_modules/stylus-loader/index.js!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=styles&index=0!./src/components/list/list.mpx?packageName=packShop")
/* json */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=json&index=0!./node_modules/@mpxjs/webpack-plugin/lib/json-compiler/index.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=json&index=0!./src/components/list/list.mpx?packageName=packShop")
/* template */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=template&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxml/wxml-loader.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/template-compiler/index.js?{\"usingComponents\":[],\"hasScoped\":false,\"isNative\":false,\"moduleId\":\"m11703cf3\",\"root\":\"\"}!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=template&index=0!./src/components/list/list.mpx?packageName=packShop")
/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__("./node_modules/webpack/buildin/global.js")))
/***/ })
},[["./src/components/list/list.mpx?packageName=packShop","bundle"]]]);
```
createComponent is not actually called here: the createComponent call ended up bundled into bundle.js.
Screenshot of the relevant code:
<img width="767" alt="屏幕快照 2020-08-10 上午11 57 31" src="https://user-images.githubusercontent.com/22835834/89750953-bb796700-db00-11ea-9371-1ad900bb1dcd.png">
Note: if the external js|ts reference in the .mpx file is changed to the inline form
```
<script>
//....
</script>
```
then this problem does not occur (in that case the code fragments that were bundled into bundle.js end up in each subpackage's own list.js).
index: 1.0
text_combine:
公共组件webpack打包问题 (shared component webpack bundling problem) - **Problem description**
When multiple subpackages reference the same component written in .mpx and its js|ts is imported from an external file, only the first subpackage to load the component works correctly. For example, with pages a and b in different subpackages, if I enter page a first and then page b, page b throws an error; in the reverse order, page a throws the error.
The list.mpx file is as follows:
```
<template>
<view class="list">
<view wx:for="{{listData}}" wx:key="index">{{item}}</view>
</view>
</template>
<script lang="ts" src="./list.ts"></script>
<style lang="stylus">
.list
background-color red
</style>
<script type="application/json">
{
"component": true
}
</script>
```
list.ts
```
import { createComponent } from '@mpxjs/core'
createComponent({
data: {
listData: ['手机', '电视', '电脑']
}
})
```
The error reported is:
```
jsEnginScriptError
Component is not found in path "packCenter/components/list5b238f42/list" (using by "packCenter/pages/orderDetail/index");onAppRoute
Error: Component is not found in path "packCenter/components/list5b238f42/list" (using by "packCenter/pages/orderDetail/index")
at K (WAService.js:1:1214064)
at K (WAService.js:1:1214268)
at WAService.js:1:1232006
at Module.Fe (WAService.js:1:1232585)
at Function.value (WAService.js:1:1260806)
at It (WAService.js:1:1276660)
at xt (WAService.js:1:1279059)
at Function.<anonymous> (WAService.js:1:1284536)
at i.<anonymous> (WAService.js:1:1253492)
at i.emit (WAService.js:1:412028)
```
After building, the dist directory contains a copy of the list component's code in both subpackages.
<img width="375" alt="屏幕快照 2020-08-10 上午11 49 31" src="https://user-images.githubusercontent.com/22835834/89750758-eadba400-daff-11ea-8efe-fb0ac738eee1.png">
<img width="252" alt="屏幕快照 2020-08-10 上午11 52 48" src="https://user-images.githubusercontent.com/22835834/89750817-1fe7f680-db00-11ea-8050-f9f1774287a4.png">
The generated .wxml and .wxss files are fine; the content of the js file is as follows:
```
var window = window || {};
window["webpackJsonp"] = require("../../../bundle.js");
(window["webpackJsonp"] = window["webpackJsonp"] || []).push([["packShop/components/list5b238f42/list"],{
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=json&index=0!./node_modules/@mpxjs/webpack-plugin/lib/json-compiler/index.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=json&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=styles&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxss/loader.js?root=&importLoaders=1&extract=true!./node_modules/@mpxjs/webpack-plugin/lib/style-compiler/index.js?{\"moduleId\":\"m11703cf3\",\"scoped\":false,\"sourceMap\":false}!./node_modules/stylus-loader/index.js!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=styles&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=template&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxml/wxml-loader.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/template-compiler/index.js?{\"usingComponents\":[],\"hasScoped\":false,\"isNative\":false,\"moduleId\":\"m11703cf3\",\"root\":\"\"}!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=template&index=0!./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, exports) {
// removed by extractor
/***/ }),
/***/ "./src/components/list/list.mpx?packageName=packShop":
/***/ (function(module, __webpack_exports__, __webpack_require__) {
"use strict";
__webpack_require__.r(__webpack_exports__);
/* WEBPACK VAR INJECTION */(function(global) {/* mpx inject */ global.currentModuleId = "m11703cf3"
global.currentResource = "/Users/litao/work/GM/gome/src/components/list/list.mpx"
global.currentCtor = Component
global.currentCtorType = "component"
global.currentSrcMode = "wx"
/* mpx inject */ global.currentInject = {
moduleId: "m11703cf3",
render: function () {
this._c("mpxShow", this.mpxShow) || this._c("mpxShow", this.mpxShow) === undefined ? '' : 'display:none;';
this._i(this._c("listData", this.listData), function (item, index) {
item;
});
this._r();
}
};
/* harmony import */ var _list_ts_resourcePath_Users_litao_work_GM_gome_src_components_list_list_mpx__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__("./src/components/list/list.ts?resourcePath=/Users/litao/work/GM/gome/src/components/list/list.mpx");
/* empty/unused harmony star reexport */global.currentModuleId
/* script */
/* styles */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=styles&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxss/loader.js?root=&importLoaders=1&extract=true!./node_modules/@mpxjs/webpack-plugin/lib/style-compiler/index.js?{\"moduleId\":\"m11703cf3\",\"scoped\":false,\"sourceMap\":false}!./node_modules/stylus-loader/index.js!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=styles&index=0!./src/components/list/list.mpx?packageName=packShop")
/* json */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=json&index=0!./node_modules/@mpxjs/webpack-plugin/lib/json-compiler/index.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=json&index=0!./src/components/list/list.mpx?packageName=packShop")
/* template */
__webpack_require__("./node_modules/@mpxjs/webpack-plugin/lib/extractor.js?type=template&index=0!./node_modules/@mpxjs/webpack-plugin/lib/wxml/wxml-loader.js?root=!./node_modules/@mpxjs/webpack-plugin/lib/template-compiler/index.js?{\"usingComponents\":[],\"hasScoped\":false,\"isNative\":false,\"moduleId\":\"m11703cf3\",\"root\":\"\"}!./node_modules/@mpxjs/webpack-plugin/lib/selector.js?type=template&index=0!./src/components/list/list.mpx?packageName=packShop")
/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__("./node_modules/webpack/buildin/global.js")))
/***/ })
},[["./src/components/list/list.mpx?packageName=packShop","bundle"]]]);
```
createComponent is not actually called here: the createComponent call ended up bundled into bundle.js.
Screenshot of the relevant code:
<img width="767" alt="屏幕快照 2020-08-10 上午11 57 31" src="https://user-images.githubusercontent.com/22835834/89750953-bb796700-db00-11ea-9371-1ad900bb1dcd.png">
Note: if the external js|ts reference in the .mpx file is changed to the inline form
```
<script>
//....
</script>
```
then this problem does not occur (in that case the code fragments that were bundled into bundle.js end up in each subpackage's own list.js).
label: process
text:
公共组件webpack打包问题 问题描述 多个分包引用同一 mpx编写组件且js ts 为外部引入时只有第一个加载该组件的分包能正常运行。例如,a b两个页面位于不同分包下,我先进入a页面在进入b页面b页面会报错,反之a页面报错 list mpx 文件代码如下 item list background color red component true list ts import createcomponent from mpxjs core createcomponent data listdata 报错如下: jsenginscripterror component is not found in path packcenter components list using by packcenter pages orderdetail index onapproute error component is not found in path packcenter components list using by packcenter pages orderdetail index at k waservice js at k waservice js at waservice js at module fe waservice js at function value waservice js at it waservice js at xt waservice js at function waservice js at i waservice js at i emit waservice js 打包出来dist目下在两个分包分都有一份list组件的代码 img width alt 屏幕快照 src img width alt 屏幕快照 src 打包出来 wxml和wxss文件没问题js文件内容如下 var window window window require bundle js window window push node modules mpxjs webpack plugin lib extractor js type json index node modules mpxjs webpack plugin lib json compiler index js root node modules mpxjs webpack plugin lib selector js type json index src components list list mpx packagename packshop function module exports removed by extractor node modules mpxjs webpack plugin lib extractor js type styles index node modules mpxjs webpack plugin lib wxss loader js root importloaders extract true node modules mpxjs webpack plugin lib style compiler index js moduleid scoped false sourcemap false node modules stylus loader index js node modules mpxjs webpack plugin lib selector js type styles index src components list list mpx packagename packshop function module exports removed by extractor node modules mpxjs webpack plugin lib extractor js type template index node modules mpxjs webpack plugin lib wxml wxml loader js root node modules mpxjs webpack plugin lib template compiler index js usingcomponents hasscoped false isnative false moduleid root node modules mpxjs webpack plugin lib selector js type template index src components list list mpx packagename packshop function module exports removed by extractor src components list list mpx packagename packshop function module webpack exports webpack require use strict webpack require r webpack exports webpack var injection function global mpx inject global currentmoduleid global currentresource users litao work gm gome src components list list mpx global currentctor component global currentctortype component global currentsrcmode wx mpx inject global currentinject moduleid render function this c mpxshow this mpxshow this c mpxshow this mpxshow undefined display none this i this c listdata this listdata function item index item this r harmony import var list ts resourcepath users litao work gm gome src components list list mpx webpack imported module webpack require src components list list ts resourcepath users litao work gm gome src components list list mpx empty unused harmony star reexport global currentmoduleid script styles webpack require node modules mpxjs webpack plugin lib extractor js type styles index node modules mpxjs webpack plugin lib wxss loader js root importloaders extract true node modules mpxjs webpack plugin lib style compiler index js moduleid scoped false sourcemap false node modules stylus loader index js node modules mpxjs webpack plugin lib selector js type styles index src components list list mpx packagename packshop json webpack require node modules mpxjs webpack plugin lib extractor js type json index node modules mpxjs webpack plugin lib json compiler index js root node modules mpxjs webpack plugin lib 
selector js type json index src components list list mpx packagename packshop template webpack require node modules mpxjs webpack plugin lib extractor js type template index node modules mpxjs webpack plugin lib wxml wxml loader js root node modules mpxjs webpack plugin lib template compiler index js usingcomponents hasscoped false isnative false moduleid root node modules mpxjs webpack plugin lib selector js type template index src components list list mpx packagename packshop webpack var injection call this webpack require node modules webpack buildin global js 这里并有调用createcomponnet方法createcomponent方法被打包到了bundle js中 相关代码截图 img width alt 屏幕快照 src 注如果将 mpx文件js ts外部引入改为 的形式则不会有这个问题(这时打包到bundle js中的代码片段会被打到各自分包下的list js中)
binary_label: 1
row 3,940 | id: 6,885,215,753 | type: IssuesEvent | created_at: 2017-11-21 15:29:14 | action: closed
repo: geneontology/go-ontology | repo_url: https://api.github.com/repos/geneontology/go-ontology
title: acetylcholine receptor impairing toxin
labels: auto-migrated curator-request multiorganism processes New term request toxin UniProt
body:
Hi,
I will need these 2 new GO terms:
envenomation resulting in positive regulation of G-protein coupled acetylcholine receptor activity in other organism; GO:new
def: A process that begins with venom being forced into an organism by the bite or sting of another organism, and ends with the resultant modulation of G-protein coupled acetylcholine receptor activity in the bitten organism.
- is the child of GO:0044513; envenomation resulting in modulation of G-protein coupled receptor activity in other organism
and
envenomation resulting in modulation of acetylcholine receptor activity in other organism
def: A process that begins with venom being forced into an organism by the bite or sting of another organism, and ends with the resultant modulation of nicotinic acetylcholine receptor activity in the bitten organism.
synonym: envenomation resulting in modulation of nicotinic acetylcholine receptor activity in other organism
- is the child of GO:0044511; envenomation resulting in modulation of receptor activity in other organism
many thanks
Florence
Reported by: fjungo
Original Ticket: [geneontology/ontology-requests/9983](https://sourceforge.net/p/geneontology/ontology-requests/9983)
index: 1.0
text_combine:
acetylcholine receptor impairing toxin - Hi,
I will need these 2 new GO terms:
envenomation resulting in positive regulation of G-protein coupled acetylcholine receptor activity in other organism; GO:new
def: A process that begins with venom being forced into an organism by the bite or sting of another organism, and ends with the resultant modulation of G-protein coupled acetylcholine receptor activity in the bitten organism.
- is the child of GO:0044513; envenomation resulting in modulation of G-protein coupled receptor activity in other organism
and
envenomation resulting in modulation of acetylcholine receptor activity in other organism
def: A process that begins with venom being forced into an organism by the bite or sting of another organism, and ends with the resultant modulation of nicotinic acetylcholine receptor activity in the bitten organism.
synonym: envenomation resulting in modulation of nicotinic acetylcholine receptor activity in other organism
- is the child of GO:0044511; envenomation resulting in modulation of receptor activity in other organism
many thanks
Florence
Reported by: fjungo
Original Ticket: [geneontology/ontology-requests/9983](https://sourceforge.net/p/geneontology/ontology-requests/9983)
label: process
text:
acetylcholine receptor impairing toxin hi i will need these new go terms envenomation resulting in positive regulation of g protein coupled acetylcholine receptor activity in other organism go new def a process that begins with venom being forced into an organism by the bite or sting of another organism and ends with the resultant modulation of g protein coupled acetylcholine receptor activity in the bitten organism is the child of go envenomation resulting in modulation of g protein coupled receptor activity in other organism and envenomation resulting in modulation of acetylcholine receptor activity in other organism def a process that begins with venom being forced into an organism by the bite or sting of another organism and ends with the resultant modulation of nicotinic acetylcholine receptor activity in the bitten organism synonym envenomation resulting in modulation of nicotinic acetylcholine receptor activity in other organism is the child of go envenomation resulting in modulation of receptor activity in other organism many thanks florence reported by fjungo original ticket
binary_label: 1
row 150,411 | id: 5,766,283,661 | type: IssuesEvent | created_at: 2017-04-27 06:35:27 | action: closed
repo: Komodo/KomodoEdit | repo_url: https://api.github.com/repos/Komodo/KomodoEdit
title: Can't use collaboration feature
labels: Bug Bug: New Pending: Response Severity: Priority
body:
### Short Summary
I can't use collaboration feature. In the Collaboration tab I see the message "Collaboration encountered a communication error and will recover automatically"
When I try to reconnect - it doesn't work.
### Steps to Reproduce
1. Open Komodo IDE
2. Open collaboration tab
3. Click to "Force reconnect now"
### Expected results
Ability to create new session
### Actual results
501 error
### Platform Information
*Komodo IDE
*Version 10.2.1
*Ubuntu 15.10
### Additional Information
If I use HTTP Inspector I see 501 error:
Request:
URL: https://collaboration-push-v3.activestate.com:443
connection: keep-alive
host: collaboration-push-v3.activestate.com:443
proxy-connection: keep-alive
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:35.0) Gecko/20100101 KomodoIDE/
Response:
status: 501
content-length: 177
content-type: text/html
index: 1.0
text_combine:
Can't use collaboration feature - ### Short Summary
I can't use collaboration feature. In the Collaboration tab I see the message "Collaboration encountered a communication error and will recover automatically"
When I try to reconnect - it doesn't work.
### Steps to Reproduce
1. Open Komodo IDE
2. Open collaboration tab
3. Click to "Force reconnect now"
### Expected results
Ability to create new session
### Actual results
501 error
### Platform Information
*Komodo IDE
*Version 10.2.1
*Ubuntu 15.10
### Additional Information
If I use HTTP Inspector I see 501 error:
Request:
URL: https://collaboration-push-v3.activestate.com:443
connection: keep-alive
host: collaboration-push-v3.activestate.com:443
proxy-connection: keep-alive
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:35.0) Gecko/20100101 KomodoIDE/
Response:
status: 501
content-length: 177
content-type: text/html
label: non_process
text:
can t use collaboration feature short summary i can t use collaboration feature in the collaboration tab i see the message collaboration encountered a communication error and will recover automatically when i try to reconnect it doesn t work steps to reproduce open komodo ide open collaboration tab click to force reconnect now expected results ability to create new session actual results error platform information komodo ide version ubuntu additional information if i use http inspector i see error request url connection keep alive host collaboration push activestate com proxy connection keep alive user agent mozilla linux rv gecko komodoide response status content length content type text html
binary_label: 0
row 1,403 | id: 3,967,880,289 | type: IssuesEvent | created_at: 2016-05-03 17:44:46 | action: opened
repo: opentrials/opentrials | repo_url: https://api.github.com/repos/opentrials/opentrials
title: Setup continuous processing
labels: Processors
body:
- [ ] add strategy to process only updated data in `warehouse`
- [ ] update `processors` stack to use this strategy and `make-initial-processing` to do not use
index: 1.0
text_combine:
Setup continuous processing - - [ ] add strategy to process only updated data in `warehouse`
- [ ] update `processors` stack to use this strategy and `make-initial-processing` to do not use
label: process
text:
setup continuous processing add strategy to process only updated data in warehouse update processors stack to use this strategy and make initial processing to do not use
binary_label: 1
row 416,102 | id: 28,066,846,265 | type: IssuesEvent | created_at: 2023-03-29 16:00:48 | action: closed
repo: dockstore/dockstore | repo_url: https://api.github.com/repos/dockstore/dockstore
title: Validate BioData Catalyst logos
labels: enhancement documentation gui review
body:
**Is your feature request related to a problem? Please describe.**
BioData Catalyst® has recently updated its logo and name usage.
https://www.biodatacatalyst.org/consortium-resources/bdcatalyst-branding/
**Describe the solution you'd like**
Validate that Dockstore is complying with the new requirements.
**Describe alternatives you've considered**
NA
**Additional context**
NA
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2291)
┆Attachments: <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10887">BDC-Logos.zip</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10885">BioDataCatalyst-StyleGuide.pdf</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10886">Color Icons.zip</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10888">Gray Icons.zip</a>
┆Fix Versions: Dockstore 1.14
┆Issue Number: DOCK-2291
┆Sprint: 104 - Blue Nile
┆Issue Type: Story
index: 1.0
text_combine:
Validate BioData Catalyst logos - **Is your feature request related to a problem? Please describe.**
BioData Catalyst® has recently updated its logo and name usage.
https://www.biodatacatalyst.org/consortium-resources/bdcatalyst-branding/
**Describe the solution you'd like**
Validate that Dockstore is complying with the new requirements.
**Describe alternatives you've considered**
NA
**Additional context**
NA
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2291)
┆Attachments: <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10887">BDC-Logos.zip</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10885">BioDataCatalyst-StyleGuide.pdf</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10886">Color Icons.zip</a> | <a href="https://api.atlassian.com/ex/jira/9ff674c1-1cd9-4ee6-9138-6e2cdc7b3740/rest/api/2/attachment/content/10888">Gray Icons.zip</a>
┆Fix Versions: Dockstore 1.14
┆Issue Number: DOCK-2291
┆Sprint: 104 - Blue Nile
┆Issue Type: Story
label: non_process
text:
validate biodata catalyst logos is your feature request related to a problem please describe biodata catalyst® has recently updated its logo and name usage describe the solution you d like validate that dockstore is complying with the new requirements describe alternatives you ve considered na additional context na ┆issue is synchronized with this ┆attachments ┆fix versions dockstore ┆issue number dock ┆sprint blue nile ┆issue type story
binary_label: 0
row 20,633 | id: 27,314,606,860 | type: IssuesEvent | created_at: 2023-02-24 14:48:07 | action: closed
repo: microsoft/vscode | repo_url: https://api.github.com/repos/microsoft/vscode
title: Adopt utility process for shared process, file watchers and terminal host and set `app.enableSandbox()`
labels: plan-item on-testplan shared-process sandbox
body:
This is a follow up from https://github.com/microsoft/vscode/issues/92164 and covers remaining work to eventually enable sandboxed renderers fully in Electron via `app.enableSandbox()`.
This means that our shared process has to move away from a node.js enabled browser window to the new utility process.
**Breaking down the usages today:**
* extension management
* settings sync
* profiles
* terminals
* file watcher
**Some initial thoughts:**
* the shared process should probably just change to be a utility process as a first step
* however, any code that relies on the browser window network stack instead has to leverage Electrons [`net`](https://www.electronjs.org/docs/latest/api/net) APIs from the `electron-main` process to not loose proxy support
* this can probably be done by implementing some kind of `IRequestService` that is backed by a main process service implementation
* any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process
//cc @alexdima
index: 1.0
text_combine:
Adopt utility process for shared process, file watchers and terminal host and set `app.enableSandbox()` - This is a follow up from https://github.com/microsoft/vscode/issues/92164 and covers remaining work to eventually enable sandboxed renderers fully in Electron via `app.enableSandbox()`.
This means that our shared process has to move away from a node.js enabled browser window to the new utility process.
**Breaking down the usages today:**
* extension management
* settings sync
* profiles
* terminals
* file watcher
**Some initial thoughts:**
* the shared process should probably just change to be a utility process as a first step
* however, any code that relies on the browser window network stack instead has to leverage Electrons [`net`](https://www.electronjs.org/docs/latest/api/net) APIs from the `electron-main` process to not loose proxy support
* this can probably be done by implementing some kind of `IRequestService` that is backed by a main process service implementation
* any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process
//cc @alexdima
label: process
text:
adopt utility process for shared process file watchers and terminal host and set app enablesandbox this is a follow up from and covers remaining work to eventually enable sandboxed renderers fully in electron via app enablesandbox this means that our shared process has to move away from a node js enabled browser window to the new utility process breaking down the usages today extension management settings sync profiles terminals file watcher some initial thoughts the shared process should probably just change to be a utility process as a first step however any code that relies on the browser window network stack instead has to leverage electrons apis from the electron main process to not loose proxy support this can probably be done by implementing some kind of irequestservice that is backed by a main process service implementation any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process cc alexdima
binary_label: 1
row 96,419 | id: 16,129,635,934 | type: IssuesEvent | created_at: 2021-04-29 01:07:02 | action: opened
repo: RG4421/ampere-centos-kernel | repo_url: https://api.github.com/repos/RG4421/ampere-centos-kernel
title: CVE-2020-27777 (Medium) detected in linuxv5.2
labels: security vulnerability
body:
## CVE-2020-27777 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/arch/powerpc/kernel/rtas.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/arch/powerpc/kernel/rtas.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the way RTAS handled memory accesses in userspace to kernel communication. On a locked down (usually due to Secure Boot) guest system running on top of PowerVM or KVM hypervisors (pseries platform) a root like local user could use this flaw to further increase their privileges to that of a running kernel.
<p>Publish Date: 2020-12-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27777>CVE-2020-27777</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-27777">https://www.linuxkernelcves.com/cves/CVE-2020-27777</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution: v4.14.204, v4.19.155, v5.4.75, v5.9.5,v5.10-rc1</p>
</p>
</details>
<p></p>
index: True
text_combine:
CVE-2020-27777 (Medium) detected in linuxv5.2 - ## CVE-2020-27777 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/arch/powerpc/kernel/rtas.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/arch/powerpc/kernel/rtas.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the way RTAS handled memory accesses in userspace to kernel communication. On a locked down (usually due to Secure Boot) guest system running on top of PowerVM or KVM hypervisors (pseries platform) a root like local user could use this flaw to further increase their privileges to that of a running kernel.
<p>Publish Date: 2020-12-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27777>CVE-2020-27777</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-27777">https://www.linuxkernelcves.com/cves/CVE-2020-27777</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution: v4.14.204, v4.19.155, v5.4.75, v5.9.5,v5.10-rc1</p>
</p>
</details>
<p></p>
label: non_process
text:
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files ampere centos kernel arch powerpc kernel rtas c ampere centos kernel arch powerpc kernel rtas c vulnerability details a flaw was found in the way rtas handled memory accesses in userspace to kernel communication on a locked down usually due to secure boot guest system running on top of powervm or kvm hypervisors pseries platform a root like local user could use this flaw to further increase their privileges to that of a running kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
binary_label: 0
row 22,664 | id: 31,895,996,039 | type: IssuesEvent | created_at: 2023-09-18 01:47:43 | action: closed
repo: Significant-Gravitas/Auto-GPT | repo_url: https://api.github.com/repos/Significant-Gravitas/Auto-GPT
title: `read_file` does not find file that exists
labels: AI model limitation needs investigation function: workspace function: process text Stale
body:
### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Windows
### Which version of Auto-GPT are you using?
Stable (branch)
### GPT-3 or GPT-4?
GPT-3.5
### Steps to reproduce 🕹
when the command read_file occurs, it can't find any file, because there is no path
although it wrote a file in auto_gpt_workspace folder
### Current behavior 😯
SYSTEM: Command read_file returned: Error: [Errno 2] No such file or directory: 'jokes.txt'
### Expected behavior 🤔
To read the file
### Your prompt 📝
```yaml
ai_goals:
- write a file with 3 jokes
- read from the file
ai_name: a
ai_role: wr
```
### Your Logs 📒
```log Use the 'write_to_file' command to write 3 jokes to a file.
2023-04-26 23:33:34,112 INFO CRITICISM: I need to make sure that the jokes are appropriate and not offensive.
2023-04-26 23:33:34,576 INFO NEXT ACTION: COMMAND = write_to_file ARGUMENTS = {'file': 'jokes.txt', 'text': "Why did the tomato turn red? Because it saw the salad dressing!\nWhat do you call a fake noodle? An impasta!\nWhy don't scientists trust atoms? Because they make up everything!"}
2023-04-26 23:33:59,289 INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
2023-04-26 23:34:00,049 INFO SYSTEM: Command write_to_file returned: File written to successfully.
2023-04-26 23:34:19,001 INFO THOUGHTS: Now that we have written some jokes to a file, I think I will use the 'read_file' command to make sure the jokes were saved correctly.
2023-04-26 23:34:19,686 INFO REASONING: I want to make sure that the jokes were saved correctly to the file, so I will use the 'read_file' command to check the contents of the file.
2023-04-26 23:34:20,463 INFO PLAN:
2023-04-26 23:34:20,569 INFO - Use the 'read_file' command to check the contents of the file
2023-04-26 23:34:20,918 INFO CRITICISM: I need to make sure that I am using the correct syntax for the 'read_file' command and that I am checking the correct file.
2023-04-26 23:34:21,551 INFO NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'jokes.txt'}
2023-04-26 23:35:16,410 INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
2023-04-26 23:35:17,241 INFO SYSTEM: Command read_file returned: Error: [Errno 2] No such file or directory: 'jokes.txt'
2023-04-26 23:35:34,998 INFO THOUGHTS: Now that we have written some jokes to a file, I think I will use the 'read_file' command to make sure the jokes were saved correctly.
2023-04-26 23:35:35,649 INFO REASONING: I want to make sure that the jokes were saved correctly to the file, so I will use the 'read_file' command to check the contents of the file.
2023-04-26 23:35:36,499 INFO PLAN:
2023-04-26 23:35:36,582 INFO - Use the 'read_file' command to check the contents of the file
2023-04-26 23:35:36,983 INFO CRITICISM: I need to make sure that I am using the correct syntax for the 'read_file' command and that I am checking the correct file.
2023-04-26 23:35:37,601 INFO NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'jokes.txt'}
```
index: 1.0
text_combine:
`read_file` does not find file that exists - ### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Windows
### Which version of Auto-GPT are you using?
Stable (branch)
### GPT-3 or GPT-4?
GPT-3.5
### Steps to reproduce 🕹
when the command read_file occurs, it can't find any file, because there is no path
although it wrote a file in auto_gpt_workspace folder
### Current behavior 😯
SYSTEM: Command read_file returned: Error: [Errno 2] No such file or directory: 'jokes.txt'
### Expected behavior 🤔
To read the file
### Your prompt 📝
```yaml
ai_goals:
- write a file with 3 jokes
- read from the file
ai_name: a
ai_role: wr
```
### Your Logs 📒
```log Use the 'write_to_file' command to write 3 jokes to a file.
2023-04-26 23:33:34,112 INFO CRITICISM: I need to make sure that the jokes are appropriate and not offensive.
2023-04-26 23:33:34,576 INFO NEXT ACTION: COMMAND = write_to_file ARGUMENTS = {'file': 'jokes.txt', 'text': "Why did the tomato turn red? Because it saw the salad dressing!\nWhat do you call a fake noodle? An impasta!\nWhy don't scientists trust atoms? Because they make up everything!"}
2023-04-26 23:33:59,289 INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
2023-04-26 23:34:00,049 INFO SYSTEM: Command write_to_file returned: File written to successfully.
2023-04-26 23:34:19,001 INFO THOUGHTS: Now that we have written some jokes to a file, I think I will use the 'read_file' command to make sure the jokes were saved correctly.
2023-04-26 23:34:19,686 INFO REASONING: I want to make sure that the jokes were saved correctly to the file, so I will use the 'read_file' command to check the contents of the file.
2023-04-26 23:34:20,463 INFO PLAN:
2023-04-26 23:34:20,569 INFO - Use the 'read_file' command to check the contents of the file
2023-04-26 23:34:20,918 INFO CRITICISM: I need to make sure that I am using the correct syntax for the 'read_file' command and that I am checking the correct file.
2023-04-26 23:34:21,551 INFO NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'jokes.txt'}
2023-04-26 23:35:16,410 INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
2023-04-26 23:35:17,241 INFO SYSTEM: Command read_file returned: Error: [Errno 2] No such file or directory: 'jokes.txt'
2023-04-26 23:35:34,998 INFO THOUGHTS: Now that we have written some jokes to a file, I think I will use the 'read_file' command to make sure the jokes were saved correctly.
2023-04-26 23:35:35,649 INFO REASONING: I want to make sure that the jokes were saved correctly to the file, so I will use the 'read_file' command to check the contents of the file.
2023-04-26 23:35:36,499 INFO PLAN:
2023-04-26 23:35:36,582 INFO - Use the 'read_file' command to check the contents of the file
2023-04-26 23:35:36,983 INFO CRITICISM: I need to make sure that I am using the correct syntax for the 'read_file' command and that I am checking the correct file.
2023-04-26 23:35:37,601 INFO NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'jokes.txt'}
```
label: process
text:
read file does not find file that exists ⚠️ search for existing issues first ⚠️ i have searched the existing issues and there is no existing issue for my problem which operating system are you using windows which version of auto gpt are you using stable branch gpt or gpt gpt steps to reproduce 🕹 when the command read file occurs it can t find any file because there is no path although it wrote a file in auto gpt workspace folder current behavior 😯 system command read file returned error no such file or directory jokes txt expected behavior 🤔 to read the file your prompt 📝 yaml ai goals write a file with jokes read from the file ai name a ai role wr your logs 📒 log use the write to file command to write jokes to a file info criticism i need to make sure that the jokes are appropriate and not offensive info next action command write to file arguments file jokes txt text why did the tomato turn red because it saw the salad dressing nwhat do you call a fake noodle an impasta nwhy don t scientists trust atoms because they make up everything info command authorised by user info system command write to file returned file written to successfully info thoughts now that we have written some jokes to a file i think i will use the read file command to make sure the jokes were saved correctly info reasoning i want to make sure that the jokes were saved correctly to the file so i will use the read file command to check the contents of the file info plan info use the read file command to check the contents of the file info criticism i need to make sure that i am using the correct syntax for the read file command and that i am checking the correct file info next action command read file arguments file jokes txt info command authorised by user info system command read file returned error no such file or directory jokes txt info thoughts now that we have written some jokes to a file i think i will use the read file command to make sure the jokes were saved correctly info reasoning i want to make sure that the jokes were saved correctly to the file so i will use the read file command to check the contents of the file info plan info use the read file command to check the contents of the file info criticism i need to make sure that i am using the correct syntax for the read file command and that i am checking the correct file info next action command read file arguments file jokes txt
binary_label: 1
row 14,998 | id: 18,677,192,222 | type: IssuesEvent | created_at: 2021-10-31 19:06:12 | action: opened
repo: varabyte/kobweb | repo_url: https://api.github.com/repos/varabyte/kobweb
title: Revisit ComponentModifiers API after JB fixes their bug upstream
labels: process
body:
See also: https://github.com/JetBrains/compose-jb/issues/1333
Currently, it seems like generics in composables are causing the compose compiler to trip up, so I'm simplifying the ComponentStyles API for now. However, this requires passing in a `data: Any?` parameter, which is ugly.
Revisit this if / when the upstream bug is resolved.
index: 1.0
text_combine:
Revisit ComponentModifiers API after JB fixes their bug upstream - See also: https://github.com/JetBrains/compose-jb/issues/1333
Currently, it seems like generics in composables are causing the compose compiler to trip up, so I'm simplifying the ComponentStyles API for now. However, this requires passing in a `data: Any?` parameter, which is ugly.
Revisit this if / when the upstream bug is resolved.
label: process
text:
revisit componentmodifiers api after jb fixes their bug upstream see also currently it seems like generics in composables are causing the compose compiler to trip up so i m simplifying the componentstyles api for now however this requires passing in a data any parameter which is ugly revisit this if when the upstream bug is resolved
binary_label: 1
row 428,648 | id: 30,004,444,493 | type: IssuesEvent | created_at: 2023-06-26 11:30:04 | action: opened
repo: appsmithorg/appsmith-docs | repo_url: https://api.github.com/repos/appsmithorg/appsmith-docs
title: [Task]: Add GitHub action for Algolia index updates
labels: Documentation medium User Education Pod
body:
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Engineering Ticket Link
-
### Release Date
-
### Release Number
-
### First Draft
_No response_
### Loom video
_No response_
### Discord/slack/intercom Link if needed
_No response_
### PRD
_No response_
### Test plan/cases
_No response_
### Use cases or user requests
Add a Github action to wait when an index update is already in progress
|
1.0
|
[Task]: Add GitHub action for Algolia index updates - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Engineering Ticket Link
-
### Release Date
-
### Release Number
-
### First Draft
_No response_
### Loom video
_No response_
### Discord/slack/intercom Link if needed
_No response_
### PRD
_No response_
### Test plan/cases
_No response_
### Use cases or user requests
Add a Github action to wait when an index update is already in progress
|
non_process
|
add github action for algolia index updates is there an existing issue for this i have searched the existing issues engineering ticket link release date release number first draft no response loom video no response discord slack intercom link if needed no response prd no response test plan cases no response use cases or user requests add a github action to wait when an index update is already in progress
| 0
|
3,786
| 4,566,989,689
|
IssuesEvent
|
2016-09-15 09:25:40
|
camptocamp/c2cgeoportal
|
https://api.github.com/repos/camptocamp/c2cgeoportal
|
opened
|
Update the migration notes for an old cgxp interface
|
Infrastructure Ready
|
- update the changelog file
- remove the automation about the index.html - viewer.json
- document everything related to the interface
- all the file needed for an ngeo/cgxp interface
- the make variable: CGXP_INTERFACES/NGEO_INTERFACES
- the vars variable: default_interface
|
1.0
|
Update the migration notes for an old cgxp interface - - update the changelog file
- remove the automation about the index.html - viewer.json
- document everything related to the interface
- all the file needed for an ngeo/cgxp interface
- the make variable: CGXP_INTERFACES/NGEO_INTERFACES
- the vars variable: default_interface
|
non_process
|
update the migration notes for an old cgxp interface update the changelog file remove the automation about the index html viewer json document everything related to the interface all the file needed for an ngeo cgxp interface the make variable cgxp interfaces ngeo interfaces the vars variable default interface
| 0
|
243,034
| 7,852,546,931
|
IssuesEvent
|
2018-06-20 14:52:40
|
opentargets/webapp
|
https://api.github.com/repos/opentargets/webapp
|
closed
|
Greying out tabs when there is no data for a given target
|
area/usability area/web/target enhancement priority/backlog
|
We do grey out the tabs in the Evidence page (e.g. https://www.targetvalidation.org/evidence/ENSG00000065883/EFO_0000756).
I think it would be useful to have the same pattern for the Target profile page as well. E.g. Drugs here have no data, but I need to click on the tab to find that out. If it were grey, I wouldn't have bothered clicking on it.
https://www.targetvalidation.org/target/ENSG00000065883
|
1.0
|
Greying out tabs when there is no data for a given target - We do grey out the tabs in the Evidence page (e.g. https://www.targetvalidation.org/evidence/ENSG00000065883/EFO_0000756).
I think it would be useful to have the same pattern for the Target profile page as well. E.g. Drugs here have no data, but I need to click on the tab to find that out. If it were grey, I wouldn't have bothered clicking on it.
https://www.targetvalidation.org/target/ENSG00000065883
|
non_process
|
greying out tabs when there is no data for a given target we do grey out the tabs in the evidence page e g i think it would be useful to have the same pattern for the target profile page as well e g drugs here have no data but i need to click on the tab to find that out if it was grey i d have not bothered clicking on the tab
| 0
|
9,143
| 12,203,191,576
|
IssuesEvent
|
2020-04-30 10:10:45
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
Delete service treats every error like a job error
|
BUG :bug: EPIC - Auto Batch Process :oncoming_automobile:
|
**Describe the bug**
The doc-index-updater create service, sensibly, retries when it encounters unknown errors.
Delete throws a hissy fit, and cancels the job forever.
**To Reproduce**
Set any key or name used by the Delete service to an incorrect value, or go offline for a minute or two.
**Expected behavior**
Delete jobs are retried until the dead letter count is hit.
**Actual behaviour**
Watch as every delete job immediately becomes an error.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
# QA information
unit tests : make test
**For doc not found in index**
1. scripts/delete.sh 123123123123
2. (assumes that id doesn't exist)
3. wait for the lease lock to expire
4. see that it is not retried
**For other errors**
1. find a way to create another error - maybe wrong search index api key?
2. scripts/delete.sh REAL_ID_HERE
3. wait for the lease lock to expire
4. see that it is retried
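As a side note, here is a minimal Python sketch of the retry policy the expected behaviour above describes: transient failures are retried until a dead-letter count is reached, while a genuinely missing document fails the job immediately. This is purely illustrative; the names `DocNotFound`, `process_delete_job`, and the count are assumptions, not the doc-index-updater's actual implementation.
```python
class DocNotFound(Exception):
    """Permanent error: the document is not in the index, so retrying is pointless."""

def run_delete_job(job, process_delete_job, max_dead_letter_count=5):
    """Retry transient errors until the dead-letter count is hit;
    only a missing document cancels the job immediately."""
    attempts = 0
    while attempts < max_dead_letter_count:
        try:
            return process_delete_job(job)
        except DocNotFound:
            raise  # permanent error: do not retry
        except Exception as exc:
            attempts += 1
            print(f"transient error ({exc}); retry {attempts}/{max_dead_letter_count}")
    raise RuntimeError("dead-letter count reached; giving up on the job")
```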
|
1.0
|
Delete service treats every error like a job error - **Describe the bug**
The doc-index-updater create service, sensibly, retries when it encounters unknown errors.
Delete throws a hissy fit, and cancels the job forever.
**To Reproduce**
Set any key or name used by the Delete service to an incorrect value, or go offline for a minute or two.
**Expected behavior**
Delete jobs are retried until the dead letter count is hit.
**Actual behaviour**
Watch as every delete job immediately becomes an error.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
# QA information
unit tests : make test
**For doc not found in index**
1. scripts/delete.sh 123123123123
2. (assumes that id doesn't exist)
3. wait for the lease lock to expire
4. see that it is not retried
**For other errors**
1. find a way to create another error - maybe wrong search index api key?
2. scripts/delete.sh REAL_ID_HERE
3. wait for the lease lock to expire
4. see that it is retried
|
process
|
delete service treats every error like a job error describe the bug the doc index updater create service sensibly retries when it encounters unknown errors delete throws a hissy fit and cancels the job forever to reproduce set any key or name used by the delete service to an incorrect value or go offline for a minute or two expected behavior delete jobs are retried until the dead letter count is hit actual behaviour watch as every delete job immediately becomes an error screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here qa information unit tests make test for doc not found in index scripts delete sh assumes that id doesn t exist wait for the lease lock to expire see that it is not retried for other errors find a way to create another error maybe wrong search index api key scripts delete sh real id here wait for the lease lock to expire see that it is retried
| 1
|
307,836
| 26,567,229,634
|
IssuesEvent
|
2023-01-20 21:37:07
|
cicirello/Chips-n-Salsa
|
https://api.github.com/repos/cicirello/Chips-n-Salsa
|
closed
|
Refactor test cases for timed parallel multistarters
|
testing refactor
|
## Summary
Refactor test cases for timed parallel multistarters (suggested by RefactorFirst scan).
|
1.0
|
Refactor test cases for timed parallel multistarters - ## Summary
Refactor test cases for timed parallel multistarters (suggested by RefactorFirst scan).
|
non_process
|
refactor test cases for timed parallel multistarters summary refactor test cases for timed parallel multistarters suggested by refactorfirst scan
| 0
|
2,474
| 2,603,549,712
|
IssuesEvent
|
2015-02-24 16:38:37
|
jakobkroeker/test_singular
|
https://api.github.com/repos/jakobkroeker/test_singular
|
opened
|
missing test for 'brillnoether.lib'
|
missingTest
|
```
LIB("brillnoether.lib");
example RiemannRochBN;
```
|
1.0
|
missing test for 'brillnoether.lib' - ```
LIB("brillnoether.lib");
example RiemannRochBN;
```
|
non_process
|
missing test for brillnoether lib lib brillnoether lib example riemannrochbn
| 0
|
16,844
| 22,095,028,825
|
IssuesEvent
|
2022-06-01 09:14:28
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
closed
|
[DateTimePicker] disableDate does not work correctly with {before: x, after: y}
|
bug processing
|
### tdesign-miniprogram version
0.10.0
### Reproduction link
_No response_
### Steps to reproduce
```
<t-date-time-picker
title="选择日期和时间"
visible="{{isPickerVisible}}"
mode="{{['minute']}}"
value="{{datetimeText}}"
format="YYYY-MM-DD HH:mm"
bindconfirm="onPickerConfirm"
bindcancel="onPickerCancel"
disable-date="{{disableDate}}"
></t-date-time-picker>
onLoad(options) {
const today = dayjs().format('YYYY-MM-DD HH:mm')
const befor30 = dayjs().set('day', -30).format('YYYY-MM-DD HH:mm')
const disableDate = {
before: befor30,
after: today
}
console.log(disableDate)
this.setData({
datetimeText: today,
disableDate
})
}
```
### Expected result
Only dates within the last 30 days should be selectable.
### Actual result
[screenshot: 企业微信20220402-111526@2x]
[screenshot: 企业微信20220402-111526@2x]
By rights, April should still offer 28 selectable days.
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
1.0
|
[DateTimePicker] disableDate does not work correctly with {before: x, after: y} - ### tdesign-miniprogram version
0.10.0
### Reproduction link
_No response_
### Steps to reproduce
```
<t-date-time-picker
title="选择日期和时间"
visible="{{isPickerVisible}}"
mode="{{['minute']}}"
value="{{datetimeText}}"
format="YYYY-MM-DD HH:mm"
bindconfirm="onPickerConfirm"
bindcancel="onPickerCancel"
disable-date="{{disableDate}}"
></t-date-time-picker>
onLoad(options) {
const today = dayjs().format('YYYY-MM-DD HH:mm')
const befor30 = dayjs().set('day', -30).format('YYYY-MM-DD HH:mm')
const disableDate = {
before: befor30,
after: today
}
console.log(disableDate)
this.setData({
datetimeText: today,
disableDate
})
}
```
### Expected result
Only dates within the last 30 days should be selectable.
### Actual result
[screenshot: 企业微信20220402-111526@2x]
[screenshot: 企业微信20220402-111526@2x]
By rights, April should still offer 28 selectable days.
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
process
|
disabledate 在 before x after y 不能正常工作 tdesign miniprogram 版本 重现链接 no response 重现步骤 t date time picker title 选择日期和时间 visible ispickervisible mode value datetimetext format yyyy mm dd hh mm bindconfirm onpickerconfirm bindcancel onpickercancel disable date disabledate onload options const today dayjs format yyyy mm dd hh mm const dayjs set day format yyyy mm dd hh mm const disabledate before after today console log disabledate this setdata datetimetext today disabledate 期望结果 期望是, 实际结果 框架版本 no response 浏览器版本 no response 系统版本 no response node版本 no response 补充说明 no response
| 1
|
12,940
| 15,305,065,837
|
IssuesEvent
|
2021-02-24 17:38:59
|
nion-software/nionswift
|
https://api.github.com/repos/nion-software/nionswift
|
opened
|
Track processing history in data item metadata.
|
f - acquisition f - computations f - processing feature stage - planning type - enhancement
|
Acquisition and computations would both produce a data provenance object that could be tracked in either the data item or the underlying data-metadata item.
This requires thinking a bit more about what "metadata" means. There are two uses of metadata in use within Swift right now:
- Formal metadata that requires domain specific methods to access, e.g. calibrations or data type.
- Custom metadata that is just a dict.
Processing functions should have a uniform way of handling formal metadata.
It's not clear how custom metadata should be handled during processing - for instance, what should happen when adding two images with different custom metadata?
Also, consider whether provenance could also be attached to other project items, such as displays or graphics which can also be controlled by computations.
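To make the idea concrete, here is a hedged Python sketch of what a minimal provenance record produced by an acquisition or computation could look like. The field names are illustrative assumptions, not the actual Nion Swift data model.
```python
import datetime
import uuid

def make_provenance_record(operation, source_item_ids, parameters=None):
    """Build a small, serializable provenance entry that a data item
    (or the underlying data-metadata item) could accumulate in a list."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "operation": operation,            # e.g. "acquisition" or "gaussian_blur"
        "sources": list(source_item_ids),  # items this result was derived from
        "parameters": dict(parameters or {}),
    }

# Usage: append one record per processing step alongside the formal metadata.
history = [make_provenance_record("acquisition", [], {"exposure_s": 0.1})]
history.append(make_provenance_record("gaussian_blur", [history[0]["id"]], {"sigma": 2.0}))
```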
|
1.0
|
Track processing history in data item metadata. - Acquisition and computations would both produce a data provenance object that could be tracked in either the data item or the underlying data-metadata item.
This requires thinking a bit more about what "metadata" means. There are two uses of metadata in use within Swift right now:
- Formal metadata that requires domain specific methods to access, e.g. calibrations or data type.
- Custom metadata that is just a dict.
Processing functions should have a uniform way of handling formal metadata.
It's not clear how custom metadata should be handled during processing - for instance, what should happen when adding two images with different custom metadata?
Also, consider whether provenance could also be attached to other project items, such as displays or graphics which can also be controlled by computations.
|
process
|
track processing history in data item metadata acquisition and computations would both produce a data provenance object that could be tracked in either the data item or the underlying data metadata item this requires thinking a bit more about what metadata means there are two uses of metadata in use within swift right now formal metadata that requires domain specific methods to access e g calibrations or data type custom metadata that is just a dict processing functions should have a uniform way of handling formal metadata it s not clear how custom metadata should be handled during processing for instance what should happen when adding two images with different custom metadata also consider whether provenance could also be attached to other project items such as displays or graphics which can also be controlled by computations
| 1
|
5,543
| 8,392,607,105
|
IssuesEvent
|
2018-10-09 18:07:00
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Logging: 'test_update_sink' flakes with '503 Service Unavailable'
|
api: logging flaky testing type: process
|
From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/8002
```python
_________________________ TestLogging.test_update_sink _________________________
args = (parent: "projects/precise-truck-742"
sink {
name: "test-update-sink-8002-1536695748"
destination: "storage.googleapis.com/g-c-python-testing-8002-1536695748"
filter: "logName:syslog AND severity>=INFO"
}
,)
kwargs = {'metadata': [('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.4.0 gapic/1.6.0 gccl/1.6.0')], 'timeout': 30.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f6a4e6604e0>
request = parent: "projects/precise-truck-742"
sink {
name: "test-update-sink-8002-1536695748"
destination: "storage.googleapis.com/g-c-python-testing-8002-1536695748"
filter: "logName:syslog AND severity>=INFO"
}
timeout = 30.0
metadata = [('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.4.0 gapic/1.6.0 gccl/1.6.0')]
credentials = None
def __call__(self, request, timeout=None, metadata=None, credentials=None):
state, call, = self._blocking(request, timeout, metadata, credentials)
> return _end_unary_response_blocking(state, call, False, None)
../.nox/sys-3-6/lib/python3.6/site-packages/grpc/_channel.py:532:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f6a4e511828>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f6a4e49d248>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _Rendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _Rendezvous(state, None, None, deadline)
E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "The service is currently unavailable."
E debug_error_string = "{"created":"@1536695824.773475357","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
E >
../.nox/sys-3-6/lib/python3.6/site-packages/grpc/_channel.py:466: _Rendezvous
The above exception was the direct cause of the following exception:
self = <test_system.TestLogging testMethod=test_update_sink>
def test_update_sink(self):
SINK_NAME = 'test-update-sink%s' % (_RESOURCE_ID,)
retry = RetryErrors(Conflict, max_tries=10)
bucket_uri = self._init_storage_bucket()
dataset_uri = self._init_bigquery_dataset()
UPDATED_FILTER = 'logName:syslog'
sink = Config.CLIENT.sink(SINK_NAME, DEFAULT_FILTER, bucket_uri)
self.assertFalse(sink.exists())
> retry(sink.create)()
tests/system/test_system.py:507:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../test_utils/test_utils/retry.py:95: in wrapped_function
return to_wrap(*args, **kwargs)
google/cloud/logging/sink.py:136: in create
unique_writer_identity=unique_writer_identity,
google/cloud/logging/_gapic.py:217: in sink_create
unique_writer_identity=unique_writer_identity
google/cloud/logging_v2/gapic/config_service_v2_client.py:424: in create_sink
request, retry=retry, timeout=timeout, metadata=metadata)
../api_core/google/api_core/gapic_v1/method.py:139: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:260: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:177: in retry_target
return target()
../api_core/google/api_core/timeout.py:206: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:61: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "The service is currently unavai.../core/lib/surface/call.cc","file_line":1099,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
>
> ???
E google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.
<string>:3: ServiceUnavailable
```
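One way such flakes are commonly tolerated is to treat the transient 503 as retryable alongside `Conflict` in the system test. A hedged sketch follows; it assumes the `RetryErrors` helper checks exceptions with `isinstance()`, so a tuple of exception classes is accepted, and it is not the repository's actual fix.
```python
# Hedged sketch only: retry ServiceUnavailable as well as Conflict.
from google.api_core.exceptions import Conflict, ServiceUnavailable
from test_utils.retry import RetryErrors

def create_sink_with_retry(sink, max_tries=10):
    retry = RetryErrors((Conflict, ServiceUnavailable), max_tries=max_tries)
    retry(sink.create)()  # sink built as in test_update_sink above
```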
|
1.0
|
Logging: 'test_update_sink' flakes with '503 Service Unavailable' - From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/8002
```python
_________________________ TestLogging.test_update_sink _________________________
args = (parent: "projects/precise-truck-742"
sink {
name: "test-update-sink-8002-1536695748"
destination: "storage.googleapis.com/g-c-python-testing-8002-1536695748"
filter: "logName:syslog AND severity>=INFO"
}
,)
kwargs = {'metadata': [('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.4.0 gapic/1.6.0 gccl/1.6.0')], 'timeout': 30.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f6a4e6604e0>
request = parent: "projects/precise-truck-742"
sink {
name: "test-update-sink-8002-1536695748"
destination: "storage.googleapis.com/g-c-python-testing-8002-1536695748"
filter: "logName:syslog AND severity>=INFO"
}
timeout = 30.0
metadata = [('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.4.0 gapic/1.6.0 gccl/1.6.0')]
credentials = None
def __call__(self, request, timeout=None, metadata=None, credentials=None):
state, call, = self._blocking(request, timeout, metadata, credentials)
> return _end_unary_response_blocking(state, call, False, None)
../.nox/sys-3-6/lib/python3.6/site-packages/grpc/_channel.py:532:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f6a4e511828>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f6a4e49d248>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _Rendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _Rendezvous(state, None, None, deadline)
E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "The service is currently unavailable."
E debug_error_string = "{"created":"@1536695824.773475357","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
E >
../.nox/sys-3-6/lib/python3.6/site-packages/grpc/_channel.py:466: _Rendezvous
The above exception was the direct cause of the following exception:
self = <test_system.TestLogging testMethod=test_update_sink>
def test_update_sink(self):
SINK_NAME = 'test-update-sink%s' % (_RESOURCE_ID,)
retry = RetryErrors(Conflict, max_tries=10)
bucket_uri = self._init_storage_bucket()
dataset_uri = self._init_bigquery_dataset()
UPDATED_FILTER = 'logName:syslog'
sink = Config.CLIENT.sink(SINK_NAME, DEFAULT_FILTER, bucket_uri)
self.assertFalse(sink.exists())
> retry(sink.create)()
tests/system/test_system.py:507:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../test_utils/test_utils/retry.py:95: in wrapped_function
return to_wrap(*args, **kwargs)
google/cloud/logging/sink.py:136: in create
unique_writer_identity=unique_writer_identity,
google/cloud/logging/_gapic.py:217: in sink_create
unique_writer_identity=unique_writer_identity
google/cloud/logging_v2/gapic/config_service_v2_client.py:424: in create_sink
request, retry=retry, timeout=timeout, metadata=metadata)
../api_core/google/api_core/gapic_v1/method.py:139: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:260: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:177: in retry_target
return target()
../api_core/google/api_core/timeout.py:206: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:61: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "The service is currently unavai.../core/lib/surface/call.cc","file_line":1099,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
>
> ???
E google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.
<string>:3: ServiceUnavailable
```
|
process
|
logging test update sink flakes with service unavailable from python testlogging test update sink args parent projects precise truck sink name test update sink destination storage googleapis com g c python testing filter logname syslog and severity info kwargs metadata timeout six wraps callable def error remapped callable args kwargs try return callable args kwargs api core google api core grpc helpers py self request parent projects precise truck sink name test update sink destination storage googleapis com g c python testing filter logname syslog and severity info timeout metadata credentials none def call self request timeout none metadata none credentials none state call self blocking request timeout metadata credentials return end unary response blocking state call false none nox sys lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous rendezvous state call none deadline return state response rendezvous else return state response else raise rendezvous state none none deadline e grpc channel rendezvous rendezvous of rpc that terminated with e status statuscode unavailable e details the service is currently unavailable e debug error string created description error received from peer file src core lib surface call cc file line grpc message the service is currently unavailable grpc status e nox sys lib site packages grpc channel py rendezvous the above exception was the direct cause of the following exception self def test update sink self sink name test update sink s resource id retry retryerrors conflict max tries bucket uri self init storage bucket dataset uri self init bigquery dataset updated filter logname syslog sink config client sink sink name default filter bucket uri self assertfalse sink exists retry sink create tests system test system py test utils test utils retry py in wrapped function return to wrap args kwargs google cloud logging sink py in create unique writer identity unique writer identity google cloud logging gapic py in sink create unique writer identity unique writer identity google cloud logging gapic config service client py in create sink request retry retry timeout timeout metadata metadata api core google api core gapic method py in call return wrapped func args kwargs api core google api core retry py in retry wrapped func on error on error api core google api core retry py in retry target return target api core google api core timeout py in func with timeout return func args kwargs api core google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value rendezvous of rpc that terminated with status statuscode unavailable details the service is currently unavai core lib surface call cc file line grpc message the service is currently unavailable grpc status e google api core exceptions serviceunavailable the service is currently unavailable serviceunavailable
| 1
|
225,555
| 7,488,401,256
|
IssuesEvent
|
2018-04-06 01:01:53
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
CNAME records aren't properly created with unhealthy service in OpenStack cluster
|
kind/bug lifecycle/rotten priority/important-soon sig/multicluster
|
I have a Federation with control plane running in a GKE cluster, and joined a local OpenStack cluster. When I create a federated LoadBalancer service, the LBs are created in both clusters and properly registered in my Google CloudDNS zone. However, when my OpenStack cluster has no healthy endpoints (I scaled the deploy to 1 replica, that stayed in GKE cluster), the A records in DNS aren't updated to CNAMEs.
My OS cluster is the "europe-west", while the GKE one is the "asia-east".
With both cluster healthy:
```
$ gcloud dns record-sets list --zone=fed
NAME TYPE TTL DATA
microbot.default.fed.svc.asia-east1-a.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.mydomain.com. A 180 10.11.4.90,104.199.160.255,150.165.85.44
```
When I scale to 1 replica and the OS endpoints become unhealthy:
```
$ gcloud dns record-sets list --zone=fed
NAME TYPE TTL DATA
microbot.default.fed.svc.asia-east1-a.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.mydomain.com. A 180 104.199.160.255
```
Note that the upper register is correctly updated to have just the Google endpoint (104.199.160.255) but the "europe" records stay as A instead of CNAMEs.
/kind bug
**What you expected to happen**:
That `microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com.` had CNAME pointing to `microbot.default.fed.svc.europe-west-1.mydomain.com.` and `microbot.default.fed.svc.europe-west-1.mydomain.com.` pointing to `microbot.default.fed.svc.mydomain.com.`
**Environment**:
- Kubernetes version (use `kubectl version`): 1.7.3
/sig openstack
/sig federation
|
1.0
|
CNAME records aren't properly created with unhealthy service in OpenStack cluster - I have a Federation with control plane running in a GKE cluster, and joined a local OpenStack cluster. When I create a federated LoadBalancer service, the LBs are created in both clusters and properly registered in my Google CloudDNS zone. However, when my OpenStack cluster has no healthy endpoints (I scaled the deploy to 1 replica, that stayed in GKE cluster), the A records in DNS aren't updated to CNAMEs.
My OS cluster is the "europe-west", while the GKE one is the "asia-east".
With both cluster healthy:
```
$ gcloud dns record-sets list --zone=fed
NAME TYPE TTL DATA
microbot.default.fed.svc.asia-east1-a.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.mydomain.com. A 180 10.11.4.90,104.199.160.255,150.165.85.44
```
When I scale to 1 replica and the OS endpoints become unhealthy:
```
$ gcloud dns record-sets list --zone=fed
NAME TYPE TTL DATA
microbot.default.fed.svc.asia-east1-a.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.asia-east1.mydomain.com. A 180 104.199.160.255
microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.europe-west-1.mydomain.com. A 180 10.11.4.90,150.165.85.44
microbot.default.fed.svc.mydomain.com. A 180 104.199.160.255
```
Note that the upper register is correctly updated to have just the Google endpoint (104.199.160.255) but the "europe" records stay as A instead of CNAMEs.
/kind bug
**What you expected to happen**:
That `microbot.default.fed.svc.europe-west-1b.europe-west-1.mydomain.com.` had CNAME pointing to `microbot.default.fed.svc.europe-west-1.mydomain.com.` and `microbot.default.fed.svc.europe-west-1.mydomain.com.` pointing to `microbot.default.fed.svc.mydomain.com.`
**Environment**:
- Kubernetes version (use `kubectl version`): 1.7.3
/sig openstack
/sig federation
|
non_process
|
cname records aren t properly created with unhealthy service in openstack cluster i have a federation with control plane running in a gke cluster and joined a local openstack cluster when i create a federated loadbalancer service the lbs are created in both clusters and properly registered in my google clouddns zone however when my openstack cluster has no healthy endpoints i scaled the deploy to replica that stayed in gke cluster the a records in dns aren t updated to cnames my os cluster is the europe west while the gke one is the asia east with both cluster healthy gcloud dns record sets list zone fed name type ttl data microbot default fed svc asia a asia mydomain com a microbot default fed svc asia mydomain com a microbot default fed svc europe west europe west mydomain com a microbot default fed svc europe west mydomain com a microbot default fed svc mydomain com a when i scale to replica and the os endpoints become unhealthy gcloud dns record sets list zone fed name type ttl data microbot default fed svc asia a asia mydomain com a microbot default fed svc asia mydomain com a microbot default fed svc europe west europe west mydomain com a microbot default fed svc europe west mydomain com a microbot default fed svc mydomain com a note that the upper register is correctly updated to have just the google endpoint but the europe records stay as a instead of cnames kind bug what you expected to happen that microbot default fed svc europe west europe west mydomain com had cname pointing to microbot default fed svc europe west mydomain com and microbot default fed svc europe west mydomain com pointing to microbot default fed svc mydomain com environment kubernetes version use kubectl version sig openstack sig federation
| 0
|
69,442
| 7,134,425,786
|
IssuesEvent
|
2018-01-22 20:49:57
|
Azure/azure-webjobs-sdk-script
|
https://api.github.com/repos/Azure/azure-webjobs-sdk-script
|
closed
|
Enable KeysController tests
|
v2-testgaps
|
The keys controller tests [here](https://github.com/fabiocav/azure-webjobs-sdk-script/blob/5867f55820109c14d73ba94994abb92986b65118/test/WebJobs.Script.Tests/Controllers/Admin/KeysControllerTests.cs) have been disabled during the .NET Core migration. The controller has been fully migrated and the tests should be re-enabled.
|
1.0
|
Enable KeysController tests - The keys controller tests [here](https://github.com/fabiocav/azure-webjobs-sdk-script/blob/5867f55820109c14d73ba94994abb92986b65118/test/WebJobs.Script.Tests/Controllers/Admin/KeysControllerTests.cs) have been disabled during the .NET Core migration. The controller has been fully migrated and the tests should be re-enabled.
|
non_process
|
enable keyscontroller tests the keys controller tests have been disabled during the net core migration the controller has been fully migrated and the tests should be re enabled
| 0
|
1,937
| 4,764,030,537
|
IssuesEvent
|
2016-10-25 15:53:37
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
closed
|
Proxy: (Invalid_argument "index out of bounds")
|
priority_urgent process_wontfix SRP type_bug
|
```
2016-08-11 09:24:04 300931 +0200 - cmp03 - 15924/0 - alba/proxy - 7897 - info - server: exception occurred in client connection (172.19.12.142,49799): (Invalid_argument "index out of bounds")
```
this is triggered during a write call from the voldrv:
```
2016-08-11 09:24:03 211011 +0200 - cmp03 - 15928/0x00007f0e56330700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000001038e - error - ~Logger: Exiting write for 01756be4-f8ae-45b0-80a9-4ee206b070ea sco_access_data with exception
```
|
1.0
|
Proxy: (Invalid_argument "index out of bounds") - ```
2016-08-11 09:24:04 300931 +0200 - cmp03 - 15924/0 - alba/proxy - 7897 - info - server: exception occurred in client connection (172.19.12.142,49799): (Invalid_argument "index out of bounds")
```
this is triggered during a write call from the voldrv:
```
2016-08-11 09:24:03 211011 +0200 - cmp03 - 15928/0x00007f0e56330700 - volumedriverfs/BackendConnectionInterfaceLogger - 000000000001038e - error - ~Logger: Exiting write for 01756be4-f8ae-45b0-80a9-4ee206b070ea sco_access_data with exception
```
|
process
|
proxy invalid argument index out of bounds alba proxy info server exception occurred in client connection invalid argument index out of bounds this is triggered during a write call from the voldrv volumedriverfs backendconnectioninterfacelogger error logger exiting write for sco access data with exception
| 1
|
16,369
| 21,075,040,570
|
IssuesEvent
|
2022-04-02 02:49:11
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
No (more) warning when trying to run tool with inputs with different CRSs
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
According to the option "Warn before executing if parameter CRS's do not match", a warning should be displayed if different CRSs are processed in external tools.
[screenshot: grafik]
[screenshot: grafik]
**Yet no warning is displayed!**
### Steps to reproduce the issue
Load two different layers and execute tool v.what.vect
For an actual example and detailed steps see https://github.com/qgis/QGIS/issues/46916
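For reference, a rough PyQGIS-style sketch of the kind of pre-execution check the option implies: compare the CRS of the two inputs and warn when they differ. The layer names and warning text are placeholders, and this is not the Processing framework's actual implementation.
```python
# Rough sketch only: warn when two loaded layers use different CRSs.
from qgis.core import QgsProject

def crs_mismatch(layer_name_a, layer_name_b):
    project = QgsProject.instance()
    layer_a = project.mapLayersByName(layer_name_a)[0]
    layer_b = project.mapLayersByName(layer_name_b)[0]
    return layer_a.crs().authid() != layer_b.crs().authid()

if crs_mismatch("points_layer", "polygons_layer"):  # placeholder layer names
    print("Warning: input layers use different CRSs; v.what.vect may misbehave.")
```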
### Versions
QGIS version
3.22.3-Białowieża
QGIS code revision
1628765ec7
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
Compiled against GEOS
3.10.0-CAPI-1.16.0
Running against GEOS
3.10.2-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
GeoCoding
2.18
GroupStats
2.2.5
HCMGIS
21.8.28
mmqgis
2021.9.10
Qgis2threejs
2.6
QuickOSM
2.0.0
quick_map_services
0.19.27
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
No (more) warning when trying to run tool with inputs with different CRSs - ### What is the bug or the crash?
According to the option "Warn before executing if parameter CRS's do not match", a warning should be displayed if different CRSs are processed in external tools.
[screenshot: grafik]
[screenshot: grafik]
**Yet no warning is displayed!**
### Steps to reproduce the issue
Load two different layers and execute tool v.what.vect
For an actual example and detailed steps see https://github.com/qgis/QGIS/issues/46916
### Versions
QGIS version
3.22.3-Białowieża
QGIS code revision
1628765ec7
Qt version
5.15.2
Python version
3.9.5
GDAL/OGR version
3.4.1
PROJ version
8.2.1
EPSG Registry database version
v10.041 (2021-12-03)
Compiled against GEOS
3.10.0-CAPI-1.16.0
Running against GEOS
3.10.2-CAPI-1.16.0
SQLite version
3.35.2
PDAL version
2.3.0
PostgreSQL client version
13.0
SpatiaLite version
5.0.1
QWT version
6.1.3
QScintilla2 version
2.11.5
OS version
Windows 10 Version 2009
Active Python plugins
GeoCoding
2.18
GroupStats
2.2.5
HCMGIS
21.8.28
mmqgis
2021.9.10
Qgis2threejs
2.6
QuickOSM
2.0.0
quick_map_services
0.19.27
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.5
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
no more warning when trying to run tool with inputs with different crss what is the bug or the crash according to the option warn before before executing if parameter crs s do not match a warning should be displayed if different crss are processed in external tools yet no warning is displayed steps to reproduce the issue load two different layers and execute tool v what vect for an actual example and detailed steps see versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version compiled against geos capi running against geos capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins geocoding groupstats hcmgis mmqgis quickosm quick map services db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
3,563
| 6,599,726,590
|
IssuesEvent
|
2017-09-17 00:08:11
|
jfgossage/UnderstandLanguage
|
https://api.github.com/repos/jfgossage/UnderstandLanguage
|
opened
|
Change documentation process
|
High Priority Process change
|
Change the documentation process to use the Github wiki for documentation instead of Readthedocs to make it easier for contributors and users to change or augment the documentation.
|
1.0
|
Change documentation process - Change the documentation process to use the Github wiki for documentation instead of Readthedocs to make it easier for contributors and users to change or augment the documentation.
|
process
|
change documentation process change the documentation process to use the github wiki for documentation instead of readthedocs to make it easier for contributors and users to change or augment the documentation
| 1
|
9,681
| 12,682,972,476
|
IssuesEvent
|
2020-06-19 18:37:05
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
IPC with fd>2 doesn't work
|
child_process windows
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: master
* **Platform**: Windows 10 64-bit
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
index.js
```js
const spawn = require('child_process').spawn
const path = require('path')
let dir = path.join(__dirname, 'child.js')
let server = spawn(process.argv0, [`"${dir}"`], {
stdio: [0, 1, 2, 'ipc'],
shell: true,
})
```
child.js
```js
```
Output:
```
child_process.js:107
p.open(fd);
^
Error: EBADF: bad file descriptor, uv_pipe_open
at Object.exports._forkChild (child_process.js:107:5)
at Object.setupChannel (internal/process.js:237:8)
at startup (bootstrap_node.js:73:16)
at bootstrap_node.js:613:3
```
Before I start digging further into this, I'd like someone to confirm that this code is valid and should work.
|
1.0
|
IPC with fd>2 doesn't work - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: master
* **Platform**: Windows 10 64-bit
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
index.js
```js
const spawn = require('child_process').spawn
const path = require('path')
let dir = path.join(__dirname, 'child.js')
let server = spawn(process.argv0, [`"${dir}"`], {
stdio: [0, 1, 2, 'ipc'],
shell: true,
})
```
child.js
```js
```
Output:
```
child_process.js:107
p.open(fd);
^
Error: EBADF: bad file descriptor, uv_pipe_open
at Object.exports._forkChild (child_process.js:107:5)
at Object.setupChannel (internal/process.js:237:8)
at startup (bootstrap_node.js:73:16)
at bootstrap_node.js:613:3
```
Before I start digging further into this, I'd like someone to confirm that this code is valid and should work.
|
process
|
ipc with fd doesn t work thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version master platform windows bit subsystem child process index js js const spawn require child process spawn const path require path let dir path join dirname child js let server spawn process stdio shell true child js js output child process js p open fd error ebadf bad file descriptor uv pipe open at object exports forkchild child process js at object setupchannel internal process js at startup bootstrap node js at bootstrap node js before i start digging further into this i d like someone to confirm that this code is valid and should work
| 1
|
34,062
| 6,287,984,252
|
IssuesEvent
|
2017-07-19 15:59:37
|
CuBoulder/express
|
https://api.github.com/repos/CuBoulder/express
|
closed
|
Improve Embed UI
|
deploy:Bundle evaluate-2:Documentation Needs evaluate-2:Usability improvement:UX qa:pre-merge request:Feature Branch type:Task
|
- [x] Add icon to Embed tab
- [x] Format filter options to minimize form
- [x] Add inline documentation links
|
1.0
|
Improve Embed UI - - [x] Add icon to Embed tab
- [x] Format filter options to minimize form
- [x] Add inline documentation links
|
non_process
|
improve embed ui add icon to embed tab format filter options to minimize form add inline documentation links
| 0
|
15,145
| 18,901,477,666
|
IssuesEvent
|
2021-11-16 01:46:54
|
DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
https://api.github.com/repos/DSE511-Project3-Team/DSE511-Project-3-Code-Repo
|
closed
|
Preprocess Infrastructure
|
Preprocess
|
Infrastructure analysis will be conducted in this issue to preprocess the following variables:
'Traffic_Signal', 'Crossing', 'Station', 'Amenity', 'Bump', 'Give_Way', 'Junction', 'No_Exit', 'Railway', 'Roundabout', 'Stop', 'Traffic_Calming', 'Turning_Loop'.
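As an illustration, a small pandas sketch of one plausible way to preprocess these boolean infrastructure flags; the column names come from the list above, while the input file name is an assumption.
```python
import pandas as pd

INFRA_COLS = [
    "Traffic_Signal", "Crossing", "Station", "Amenity", "Bump", "Give_Way",
    "Junction", "No_Exit", "Railway", "Roundabout", "Stop",
    "Traffic_Calming", "Turning_Loop",
]

df = pd.read_csv("accidents.csv")  # assumed input file
# Encode the True/False infrastructure flags as 0/1 and fill missing values.
df[INFRA_COLS] = df[INFRA_COLS].fillna(False).astype(bool).astype(int)
print(df[INFRA_COLS].sum().sort_values(ascending=False))  # quick prevalence check
```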
|
1.0
|
Preprocess Infrastructure - Infrastructure analysis will be conducted in this issue to preprocess the following variables:
'Traffic_Signal', 'Crossing', 'Station', 'Amenity', 'Bump', 'Give_Way', 'Junction', 'No_Exit', 'Railway', 'Roundabout', 'Stop', 'Traffic_Calming', 'Turning_Loop'.
|
process
|
preprocess infrastructure infrastructure analysis will conducted in this issues to preprocess the following variables traffic signal crossing station amenity bump give way junction no exit railway roundabout stop traffic calming turning loop
| 1
|
68,755
| 21,878,659,511
|
IssuesEvent
|
2022-05-19 12:36:15
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
Host dead after caches seem to break possibly due to multiple room delete commands in short period of time
|
P2 S-Major T-Defect
|
### Description
From the logs it looks like a host admin deleted a room multiple times in a row within a few minutes. Initially this caused 403's for message sending for a bot user, which makes sense, but this soon turned to 500's until restarted, after which a 403 was again returned. Also some time after the room delete activity, a sync started to throw long stacktraces with 500 error. This is unconfirmed to have been fixed after restart, but it seems likely given the stack mentions caches.
From logs:
* Multiple events of the following events within a few minutes:
* `synapse.handlers.room - 1339 - INFO - DELETE-6639- Shutting down room '!roomid:domain.tld'`
* `synapse.handlers.room - 1348 - INFO - DELETE-6639- Kicking '@botuser:domain.tld' from '!roomid:domain.tld'...`
* `@botuser:domain.tld` starts to see 403
* Purge events seems to run multiple times for the rooms within a short period, with various events like this
* `synapse.storage.databases.main.purge_events - 412 - INFO - DELETE-6639- [purge] removing !roomid:domain.tld from event_push_summary`
* Room send 403 turns into 500:
```
> 2021-09-09 18:02:40,129 - synapse.http.server - 93 - ERROR - PUT-6693- Failed handle request via 'RoomSendEventRestServlet': <XForwardedForRequest at 0x7f978dff48b0 method='PUT' uri='/_matrix/client/r0/rooms/!roomid:domain.tld/send/m.room.message/m1627686538.10576?access_token=<redacted>' clientproto='HTTP/1.1' site='8008'>
> Sep 9, 2021 @ 18:02:40.132synapse
Traceback (most recent call last):
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1661, in _inlineCallbacks
Sep 9, 2021 @ 18:02:40.132synapse
result = current_context.run(gen.send, result)
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/synapse/rest/client/room.py", line 242, in on_POST
Sep 9, 2021 @ 18:02:40.132synapse
) = await self.event_creation_handler.create_and_send_nonmember_event(
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 863, in create_and_send_nonmember_event
Sep 9, 2021 @ 18:02:40.132synapse
event, context = await self.create_event(
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 623, in create_event
Sep 9, 2021 @ 18:02:40.132synapse
event, context = await self.create_new_client_event(
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/synapse/util/metrics.py", line 91, in measured_func
Sep 9, 2021 @ 18:02:40.132synapse
r = await func(self, *args, **kwargs)
Sep 9, 2021 @ 18:02:40.132synapse
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 943, in create_new_client_event
Sep 9, 2021 @ 18:02:40.132synapse
assert (
Sep 9, 2021 @ 18:02:40.132synapse
AssertionError: Attempting to create an event with no prev_events
```
The admin continues to create and delete rooms (scripted based on number of API calls and them happening over a short period of time for the same rooms). Later sync starts failing for the admin operating on behalf of another user:
```
2021-09-10 00:20:08,523 - synapse.access.http.8009 - 389 - INFO - GET-27763- 76.104.170.72 - 8009 - {@admin:domain.tld.@someuser:domain.tld} Processed request: 0.088sec/-0.000sec (0.064sec, 0.008sec) (0.045sec/0.162sec/14) 55B 500 "GET /_matrix/client/r0/sync?filter=1&timeout=0&_cacheBuster=1631233209635&access_token=<redacted> HTTP/1.1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" [0 dbevts]
2021-09-10 00:20:15,977 - synapse.http.server - 93 - ERROR - GET-27765- Failed handle request via 'SyncRestServlet': <XForwardedForRequest at 0x7fcd04c930c0 method='GET' uri='/_matrix/client/r0/sync?filter=1&timeout=0&_cacheBuster=1631233217123&access_token=<redacted>' clientproto='HTTP/1.1' site=8009>
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
Traceback (most recent call last):
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1657, in _inlineCallbacks
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
result = current_context.run(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/twisted/python/failure.py", line 500, in throwExceptionIntoGenerator
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
return g.throw(self.type, self.value, self.tb)
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 362, in _wait_for_sync_for_user
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
result: SyncResult = await self.current_sync_for_user(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 406, in current_sync_for_user
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
sync_result = await self.generate_sync_result(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1072, in generate_sync_result
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
res = await self._generate_sync_entry_for_rooms(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1524, in _generate_sync_entry_for_rooms
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
await concurrently_execute(handle_room_entries, room_entries, 10)
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1661, in _inlineCallbacks
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
result = current_context.run(gen.send, result)
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/async_helpers.py", line 191, in _concurrently_execute_inner
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
await maybe_awaitable(func(value))
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1512, in handle_room_entries
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
res = await self._generate_room_entry(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1936, in _generate_room_entry
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
batch = await self._load_filtered_recents(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 583, in _load_filtered_recents
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
loaded_recents = await filter_events_for_client(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/visibility.py", line 84, in filter_events_for_client
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
event_id_to_state = await storage.state.get_state_for_events(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/storage/state.py", line 466, in get_state_for_events
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
group_to_state = await self.stores.state._get_state_for_groups(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 237, in _get_state_for_groups
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
(member_state, incomplete_groups_m,) = self._get_state_for_groups_using_cache(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 301, in _get_state_for_groups_using_cache
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
state_dict_ids, got_all = self._get_state_for_group_using_cache(
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 185, in _get_state_for_group_using_cache
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
cache_entry = cache.get(group)
Sep 10, 2021 @ 00:20:15.979synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/dictionary_cache.py", line 96, in get
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
return f(*args, **kwargs)
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 483, in cache_get
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
move_node_to_front(node)
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 439, in move_node_to_front
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
node.move_to_front(real_clock, list_root)
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 294, in move_to_front
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
self._list_node.move_after(cache_list_root)
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
File "/usr/local/lib/python3.8/site-packages/synapse/util/linked_list.py", line 93, in move_after
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
assert self.prev_node
Sep 10, 2021 @ 00:20:15.980synapse-synchrotron
AssertionError
```
Until Synapse is restarted, the event send graph looks completely dead.

Restart seems to have made everything happy again (no more 500, a good old 403 for event sending as we should).
Not sure if the admin would still see a sync error, but looking at stacktrace it's probably fixed by nuking the caches on restart?
### Version information
- **Homeserver**: EMS customer, please ping for host name and other details
- **Version**: v1.42.0
- **Install method**: EMS docker images
- **Workers**: Synchrotron + initial synchrotron
|
1.0
|
Host dead after caches seem to break possibly due to multiple room delete commands in short period of time - ### Description
From the logs it looks like a host admin deleted a room multiple times in a row within a few minutes. Initially this caused 403's for message sending for a bot user, which makes sense, but this soon turned to 500's until restarted, after which a 403 was again returned. Also some time after the room delete activity, a sync started to throw long stacktraces with 500 error. This is unconfirmed to have been fixed after restart, but it seems likely given the stack mentions caches.
From logs:
* Multiple events of the following events within a few minutes:
* `synapse.handlers.room - 1339 - INFO - DELETE-6639- Shutting down room '!roomid:domain.tld'`
* `synapse.handlers.room - 1348 - INFO - DELETE-6639- Kicking '@botuser:domain.tld' from '!roomid:domain.tld'...`
* `@botuser:domain.tld` starts to see 403
* Purge events seems to run multiple times for the rooms within a short period, with various events like this
* `synapse.storage.databases.main.purge_events - 412 - INFO - DELETE-6639- [purge] removing !roomid:domain.tld from event_push_summary`
* Room send 403 turns into 500:
```
> 2021-09-09 18:02:40,129 - synapse.http.server - 93 - ERROR - PUT-6693- Failed handle request via 'RoomSendEventRestServlet': <XForwardedForRequest at 0x7f978dff48b0 method='PUT' uri='/_matrix/client/r0/rooms/!roomid:domain.tld/send/m.room.message/m1627686538.10576?access_token=<redacted>' clientproto='HTTP/1.1' site='8008'>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1661, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "/usr/local/lib/python3.8/site-packages/synapse/rest/client/room.py", line 242, in on_POST
    ) = await self.event_creation_handler.create_and_send_nonmember_event(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 863, in create_and_send_nonmember_event
    event, context = await self.create_event(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 623, in create_event
    event, context = await self.create_new_client_event(
  File "/usr/local/lib/python3.8/site-packages/synapse/util/metrics.py", line 91, in measured_func
    r = await func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/message.py", line 943, in create_new_client_event
    assert (
AssertionError: Attempting to create an event with no prev_events
```
The admin continues to create and delete rooms (scripted, judging by the number of API calls and how quickly they recur for the same rooms). Later, sync starts failing for the admin operating on behalf of another user:
```
2021-09-10 00:20:08,523 - synapse.access.http.8009 - 389 - INFO - GET-27763- 76.104.170.72 - 8009 - {@admin:domain.tld.@someuser:domain.tld} Processed request: 0.088sec/-0.000sec (0.064sec, 0.008sec) (0.045sec/0.162sec/14) 55B 500 "GET /_matrix/client/r0/sync?filter=1&timeout=0&_cacheBuster=1631233209635&access_token=<redacted> HTTP/1.1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" [0 dbevts]
2021-09-10 00:20:15,977 - synapse.http.server - 93 - ERROR - GET-27765- Failed handle request via 'SyncRestServlet': <XForwardedForRequest at 0x7fcd04c930c0 method='GET' uri='/_matrix/client/r0/sync?filter=1&timeout=0&_cacheBuster=1631233217123&access_token=<redacted>' clientproto='HTTP/1.1' site=8009>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1657, in _inlineCallbacks
    result = current_context.run(
  File "/usr/local/lib/python3.8/site-packages/twisted/python/failure.py", line 500, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 362, in _wait_for_sync_for_user
    result: SyncResult = await self.current_sync_for_user(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 406, in current_sync_for_user
    sync_result = await self.generate_sync_result(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1072, in generate_sync_result
    res = await self._generate_sync_entry_for_rooms(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1524, in _generate_sync_entry_for_rooms
    await concurrently_execute(handle_room_entries, room_entries, 10)
  File "/usr/local/lib/python3.8/site-packages/twisted/internet/defer.py", line 1661, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/async_helpers.py", line 191, in _concurrently_execute_inner
    await maybe_awaitable(func(value))
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1512, in handle_room_entries
    res = await self._generate_room_entry(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 1936, in _generate_room_entry
    batch = await self._load_filtered_recents(
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/sync.py", line 583, in _load_filtered_recents
    loaded_recents = await filter_events_for_client(
  File "/usr/local/lib/python3.8/site-packages/synapse/visibility.py", line 84, in filter_events_for_client
    event_id_to_state = await storage.state.get_state_for_events(
  File "/usr/local/lib/python3.8/site-packages/synapse/storage/state.py", line 466, in get_state_for_events
    group_to_state = await self.stores.state._get_state_for_groups(
  File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 237, in _get_state_for_groups
    (member_state, incomplete_groups_m,) = self._get_state_for_groups_using_cache(
  File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 301, in _get_state_for_groups_using_cache
    state_dict_ids, got_all = self._get_state_for_group_using_cache(
  File "/usr/local/lib/python3.8/site-packages/synapse/storage/databases/state/store.py", line 185, in _get_state_for_group_using_cache
    cache_entry = cache.get(group)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/dictionary_cache.py", line 96, in get
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 483, in cache_get
    move_node_to_front(node)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 439, in move_node_to_front
    node.move_to_front(real_clock, list_root)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/caches/lrucache.py", line 294, in move_to_front
    self._list_node.move_after(cache_list_root)
  File "/usr/local/lib/python3.8/site-packages/synapse/util/linked_list.py", line 93, in move_after
    assert self.prev_node
AssertionError
```
Until Synapse is restarted, the event send graph looks completely dead.

Restart seems to have made everything happy again (no more 500s, just a good old 403 for event sending, as it should be).
Not sure if the admin would still see a sync error, but looking at the stacktrace it's probably fixed by nuking the caches on restart?
### Version information
- **Homeserver**: EMS customer, please ping for host name and other details
- **Version**: v1.42.0
- **Install method**: EMS docker images
- **Workers**: Synchrotron + initial synchrotron
|
non_process
|
host dead after caches seem to break possibly due to multiple room delete commands in short period of time description from the logs it looks like a host admin deleted a room multiple times in a row within a few minutes initially this caused s for message sending for a bot user which makes sense but this soon turned to s until restarted after which a was again returned also some time after the room delete activity a sync started to throw long stacktraces with error this is unconfirmed to have been fixed after restart but it seems likely given the stack mentions caches from logs multiple events of the following events within a few minutes synapse handlers room info delete shutting down room roomid domain tld synapse handlers room info delete kicking botuser domain tld from roomid domain tld botuser domain tld starts to see purge events seems to run multiple times for the rooms within a short period with various events like this synapse storage databases main purge events info delete removing roomid domain tld from event push summary room send turns into synapse http server error put failed handle request via roomsendeventrestservlet clientproto http site sep traceback most recent call last sep file usr local lib site packages twisted internet defer py line in inlinecallbacks sep result current context run gen send result sep file usr local lib site packages synapse rest client room py line in on post sep await self event creation handler create and send nonmember event sep file usr local lib site packages synapse handlers message py line in create and send nonmember event sep event context await self create event sep file usr local lib site packages synapse handlers message py line in create event sep event context await self create new client event sep file usr local lib site packages synapse util metrics py line in measured func sep r await func self args kwargs sep file usr local lib site packages synapse handlers message py line in create new client event sep assert sep assertionerror attempting to create an event with no prev events the admin continues to create and delete rooms scripted based on number of api calls and them happening over a short period of time for the same rooms later sync starts failing for the admin operating on behalf of another user synapse access http info get admin domain tld someuser domain tld processed request get matrix client sync filter timeout cachebuster access token http mozilla windows nt applewebkit khtml like gecko chrome safari synapse http server error get failed handle request via syncrestservlet clientproto http site sep synchrotron traceback most recent call last sep synchrotron file usr local lib site packages twisted internet defer py line in inlinecallbacks sep synchrotron result current context run sep synchrotron file usr local lib site packages twisted python failure py line in throwexceptionintogenerator sep synchrotron return g throw self type self value self tb sep synchrotron file usr local lib site packages synapse handlers sync py line in wait for sync for user sep synchrotron result syncresult await self current sync for user sep synchrotron file usr local lib site packages synapse handlers sync py line in current sync for user sep synchrotron sync result await self generate sync result sep synchrotron file usr local lib site packages synapse handlers sync py line in generate sync result sep synchrotron res await self generate sync entry for rooms sep synchrotron file usr local lib site packages synapse handlers sync py line in 
generate sync entry for rooms sep synchrotron await concurrently execute handle room entries room entries sep synchrotron file usr local lib site packages twisted internet defer py line in inlinecallbacks sep synchrotron result current context run gen send result sep synchrotron file usr local lib site packages synapse util async helpers py line in concurrently execute inner sep synchrotron await maybe awaitable func value sep synchrotron file usr local lib site packages synapse handlers sync py line in handle room entries sep synchrotron res await self generate room entry sep synchrotron file usr local lib site packages synapse handlers sync py line in generate room entry sep synchrotron batch await self load filtered recents sep synchrotron file usr local lib site packages synapse handlers sync py line in load filtered recents sep synchrotron loaded recents await filter events for client sep synchrotron file usr local lib site packages synapse visibility py line in filter events for client sep synchrotron event id to state await storage state get state for events sep synchrotron file usr local lib site packages synapse storage state py line in get state for events sep synchrotron group to state await self stores state get state for groups sep synchrotron file usr local lib site packages synapse storage databases state store py line in get state for groups sep synchrotron member state incomplete groups m self get state for groups using cache sep synchrotron file usr local lib site packages synapse storage databases state store py line in get state for groups using cache sep synchrotron state dict ids got all self get state for group using cache sep synchrotron file usr local lib site packages synapse storage databases state store py line in get state for group using cache sep synchrotron cache entry cache get group sep synchrotron file usr local lib site packages synapse util caches dictionary cache py line in get sep synchrotron return f args kwargs sep synchrotron file usr local lib site packages synapse util caches lrucache py line in cache get sep synchrotron move node to front node sep synchrotron file usr local lib site packages synapse util caches lrucache py line in move node to front sep synchrotron node move to front real clock list root sep synchrotron file usr local lib site packages synapse util caches lrucache py line in move to front sep synchrotron self list node move after cache list root sep synchrotron file usr local lib site packages synapse util linked list py line in move after sep synchrotron assert self prev node sep synchrotron assertionerror until synapse is restarted the event send graph looks completely dead restart seems to have made everything happy again no more a good old for event sending as we should not sure if the admin would still see a sync error but looking at stacktrace it s probably fixed by nuking the caches on restart version information homeserver ems customer please ping for host name and other details version install method ems docker images workers synchrotron initial synchrotron
| 0
|
256,543
| 27,561,687,798
|
IssuesEvent
|
2023-03-07 22:40:08
|
samqws-marketing/pinterest_orion
|
https://api.github.com/repos/samqws-marketing/pinterest_orion
|
closed
|
CVE-2022-0691 (High) detected in url-parse-1.5.3.tgz - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.3.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz</a></p>
<p>Path to dependency file: /orion-server/src/main/resources/webapp/package.json</p>
<p>Path to vulnerable library: /orion-server/src/main/resources/webapp/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/pinterest_orion/commit/f713a1acc7accd46b2232cbbabae1990941bc416">f713a1acc7accd46b2232cbbabae1990941bc416</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2022-0691 (High) detected in url-parse-1.5.3.tgz - autoclosed - ## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.3.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz</a></p>
<p>Path to dependency file: /orion-server/src/main/resources/webapp/package.json</p>
<p>Path to vulnerable library: /orion-server/src/main/resources/webapp/node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.4.tgz (Root Library)
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/pinterest_orion/commit/f713a1acc7accd46b2232cbbabae1990941bc416">f713a1acc7accd46b2232cbbabae1990941bc416</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (react-scripts): 4.0.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve high detected in url parse tgz autoclosed cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file orion server src main resources webapp package json path to vulnerable library orion server src main resources webapp node modules url parse package json dependency hierarchy react scripts tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution react scripts check this box to open an automated fix pr
| 0
|
14,612
| 17,754,674,339
|
IssuesEvent
|
2021-08-28 14:13:09
|
Leviatan-Analytics/LA-data-processing
|
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
|
closed
|
Delete this task
|
Data Processing Week 4 Sprint 3
|
Test processing server endpoints and look for possible improvements. Gather some performance metrics such as response time and processing time.
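As a rough illustration of the kind of measurement described above, here is a minimal sketch; the endpoint URL and the `processing_time` response field are assumptions for illustration only, not taken from this project. It records client-side response time with `time.perf_counter()` and, if the server happens to report its own processing time in the JSON body, collects that as well.

```python
import statistics
import time

import requests

# Hypothetical endpoint; replace with the real processing-server URL.
ENDPOINT = "http://localhost:8000/process"


def measure(n_requests: int = 20) -> None:
    """Issue n_requests GETs and summarise response/processing times."""
    response_times = []
    processing_times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        resp = requests.get(ENDPOINT, timeout=10)
        response_times.append(time.perf_counter() - start)
        # Only collected if the server exposes it; the field name is assumed.
        try:
            body = resp.json()
        except ValueError:
            body = {}
        if isinstance(body, dict) and "processing_time" in body:
            processing_times.append(float(body["processing_time"]))
    print(f"response time:   mean={statistics.mean(response_times):.3f}s "
          f"max={max(response_times):.3f}s over {n_requests} requests")
    if processing_times:
        print(f"processing time: mean={statistics.mean(processing_times):.3f}s "
              f"max={max(processing_times):.3f}s")


if __name__ == "__main__":
    measure()
```

Repeating the run before and after a change gives a simple baseline for judging whether an improvement actually helped.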
|
1.0
|
Delete this task - Test processing server endpoints and look for possible improvements. Gather some performance metrics such as response time and processing time.
|
process
|
delete this task test processing server endpoints and look for possible improvements gather some performance metrics such as response time and processing time
| 1
|
16,898
| 22,198,466,988
|
IssuesEvent
|
2022-06-07 09:04:09
|
quark-engine/quark-rules
|
https://api.github.com/repos/quark-engine/quark-rules
|
closed
|
Add rules for screen access and gesture simulation
|
issue-processing-state-06
|
Add 3 new quark rules (numbers 205-207) for the screen access and gesture simulation in the Pixstealer malware.
|
1.0
|
Add rules for screen access and gesture simulation - Add 3 new quark rules (numbers 205-207) for the screen access and gesture simulation in the Pixstealer malware.
|
process
|
add rules for screen access and gesture simulation add new quark rules number for the screen access and gesture simulation in the pixstealer malware
| 1
|
4,593
| 7,432,105,972
|
IssuesEvent
|
2018-03-25 21:13:57
|
brucemiller/LaTeXML
|
https://api.github.com/repos/brucemiller/LaTeXML
|
closed
|
Latexmlc epub generation fails on Windows
|
bug portability postprocessing
|
Hello,
I tried to convert some arxiv.org material from source to epub but failed on Windows 10:
```
C:\Users\Klaus\Eigene Dokumente\Anne\HERUS A CO Atlas from SPIRE Spectroscopy of local ULIRGs~>latexmlc herusfts_ajs_forastroph.tex --destination=book.epub
latexmlc (LaTeXML version 0.8.2) : these path directories do not exist: C:/Users/Klaus/Eigene\ Dokumente/Anne/HERUS\ A\ CO\ Atlas\ from\ SPIRE\ Spectroscopy\ of\ local\ ULIRGs~
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\TeX.pool.ltxml...
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\eTeX.pool.ltxml... 0.00 sec)
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\pdfTeX.pool.ltxml... 0.01 sec) 0.20 sec)
latexmlc (LaTeXML version 0.8.2)
processing started Sat Oct 22 20:24:47 2016...
Calibre cannot open the file:
Traceback (most recent call last):
File "site-packages\calibre\utils\ipc\simple_worker.py", line 286, in main
File "site-packages\calibre\ebooks\oeb\iterator\book.py", line 64, in extract_book
File "site-packages\calibre\customize\conversion.py", line 245, in __call__
File "site-packages\calibre\ebooks\conversion\plugins\epub_input.py", line 238, in convert
File "site-packages\calibre\utils\localunzip.py", line 231, in extractall
File "site-packages\calibre\utils\localunzip.py", line 187, in _extractall
AttributeError: 'NoneType' object has no attribute 'replace'
```
Trying with Ubuntu 16.04, I succeed.
```
charlyms@ubuntu:~/Dokumente/HERUS A CO Atlas from SPIRE Spectroscopy of local ULIRGs~$ latexmlc herusfts_ajs_forastroph.tex --destination=book.epub
latexmlc (LaTeXML version 0.8.1) : these path directories do not exist: /home/charlyms/Dokumente/HERUS\ A\ CO\ Atlas\ from\ SPIRE\ Spectroscopy\ of\ local\ ULIRGs~
(Loading /usr/share/perl5/LaTeXML/Package/TeX.pool.ltxml...
(Loading /usr/share/perl5/LaTeXML/Package/eTeX.pool.ltxml... 0.04 sec)
(Loading /usr/share/perl5/LaTeXML/Package/pdfTeX.pool.ltxml... 0.01 sec) 0.20 sec)
latexmlc (LaTeXML version 0.8.1)
processing started Sat Oct 22 13:36:22 2016...
```
Does this depend on the different versions of LaTeXML?
Regards from Germany
Charlyms
|
1.0
|
Latexmlc epub generation fails on Windows - Hello,
I tried to convert some arxiv.org material from source to epub but failed on Windows 10:
```
C:\Users\Klaus\Eigene Dokumente\Anne\HERUS A CO Atlas from SPIRE Spectroscopy of local ULIRGs~>latexmlc herusfts_ajs_forastroph.tex --destination=book.epub
latexmlc (LaTeXML version 0.8.2) : these path directories do not exist: C:/Users/Klaus/Eigene\ Dokumente/Anne/HERUS\ A\ CO\ Atlas\ from\ SPIRE\ Spectroscopy\ of\ local\ ULIRGs~
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\TeX.pool.ltxml...
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\eTeX.pool.ltxml... 0.00 sec)
(Loading C:\Strawberry\perl\site\lib\LaTeXML\Package\pdfTeX.pool.ltxml... 0.01 sec) 0.20 sec)
latexmlc (LaTeXML version 0.8.2)
processing started Sat Oct 22 20:24:47 2016...
Calibre cannot open the file:
Traceback (most recent call last):
File "site-packages\calibre\utils\ipc\simple_worker.py", line 286, in main
File "site-packages\calibre\ebooks\oeb\iterator\book.py", line 64, in extract_book
File "site-packages\calibre\customize\conversion.py", line 245, in __call__
File "site-packages\calibre\ebooks\conversion\plugins\epub_input.py", line 238, in convert
File "site-packages\calibre\utils\localunzip.py", line 231, in extractall
File "site-packages\calibre\utils\localunzip.py", line 187, in _extractall
AttributeError: 'NoneType' object has no attribute 'replace'
```
Trying with Ubuntu 16.04, I succeed.
```
charlyms@ubuntu:~/Dokumente/HERUS A CO Atlas from SPIRE Spectroscopy of local ULIRGs~$ latexmlc herusfts_ajs_forastroph.tex --destination=book.epub
latexmlc (LaTeXML version 0.8.1) : these path directories do not exist: /home/charlyms/Dokumente/HERUS\ A\ CO\ Atlas\ from\ SPIRE\ Spectroscopy\ of\ local\ ULIRGs~
(Loading /usr/share/perl5/LaTeXML/Package/TeX.pool.ltxml...
(Loading /usr/share/perl5/LaTeXML/Package/eTeX.pool.ltxml... 0.04 sec)
(Loading /usr/share/perl5/LaTeXML/Package/pdfTeX.pool.ltxml... 0.01 sec) 0.20 sec)
latexmlc (LaTeXML version 0.8.1)
processing started Sat Oct 22 13:36:22 2016...
```
Does this depend on the different versions of LaTeXML?
Regards from Germany
Charlyms
|
process
|
latexmlc epub generation fails on windows hello i tried to convert some arxiv org stuff from source to epub but failed with windows c users klaus eigene dokumente anne herus a co atlas from spire spectroscopy of local ulirgs latexmlc herusfts ajs forastroph tex destination book epub latexmlc latexml version these path directories do not exist c users klaus eigene dokumente anne herus a co atlas from spire spectroscopy of local ulirgs loading c strawberry perl site lib latexml package tex pool ltxml loading c strawberry perl site lib latexml package etex pool ltxml sec loading c strawberry perl site lib latexml package pdftex pool ltxml sec sec latexmlc latexml version processing started sat oct calibre cannot open the file traceback most recent call last file site packages calibre utils ipc simple worker py line in main file site packages calibre ebooks oeb iterator book py line in extract book file site packages calibre customize conversion py line in call file site packages calibre ebooks conversion plugins epub input py line in convert file site packages calibre utils localunzip py line in extractall file site packages calibre utils localunzip py line in extractall attributeerror nonetype object has no attribute replace trying with ubuntu i succeed charlyms ubuntu dokumente herus a co atlas from spire spectroscopy of local ulirgs latexmlc herusfts ajs forastroph tex destination book epub latexmlc latexml version these path directories do not exist home charlyms dokumente herus a co atlas from spire spectroscopy of local ulirgs loading usr share latexml package tex pool ltxml loading usr share latexml package etex pool ltxml sec loading usr share latexml package pdftex pool ltxml sec sec latexmlc latexml version processing started sat oct do this depend on the different versions of latexml regards from germany charlyms
| 1
|
61,980
| 17,023,824,490
|
IssuesEvent
|
2021-07-03 04:02:40
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Display preferred language(s) in user's profile
|
Component: website Priority: minor Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 8.24pm, Wednesday, 19th September 2012]**
OSM attracts more and more users... and it's a good thing.
To keep those users active, and to attract new users, it's nice to have more "social" or community features inside the osm website.
Sometimes, you detect a new user in your area, and you want to invite them to your community event or to make some remarks about the mapping... but you don't know the language of the user... so you start with English plus one or two other languages depending on the country.
I live in Belgium, a small country where you can find French, Dutch and German as official languages...
so please, could we add an indication of the user's language(s) in his/her profile.
Something like :
----
%Username% prefers to be contacted in %LANG%. You can also use %LANG%, %LANG%.
|
1.0
|
Display preferred language(s) in user's profile - **[Submitted to the original trac issue database at 8.24pm, Wednesday, 19th September 2012]**
OSM attracts more and more users... and it's a good thing.
To keep those users active, and to attract new users, it's nice to have more "social" or community features inside the osm website.
Sometimes, you detect a new user in your area, and you want to invite them to your community event or to make some remarks about the mapping... but you don't know the language of the user... so you start with English plus one or two other languages depending on the country.
I live in Belgium, a small country where you can find French, Dutch and German as official languages...
so please, could we add an indication of the user's language(s) in his/her profile.
Something like :
----
%Username% prefers to be contacted in %LANG%. You can also use %LANG%, %LANG%.
|
non_process
|
display prefered language s in user s profile osm attract more and more users and it s a good thing to keep those users active and to attract new users it s nice to have more social or community features inside the osm website sometimes you detect new user in your area and you want to invite them to your community event or to do some remarks about the mapping but you don t know the language of the user so you start by english with one or to other languages depending of the country i live in belgium where in small country you can find french dutch and german as official language so please could we add an indication about user s language s in his her profile something like username prefer to be contacted in lang you can also use lang lang
| 0
|
20,092
| 26,612,394,643
|
IssuesEvent
|
2023-01-24 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Tue, 24 Jan 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Impact of PCA-based preprocessing and different CNN structures on deformable registration of sonograms
- **Authors:** Christian Schmidt, Heinrich Martin Overhoff
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08802
- **Pdf link:** https://arxiv.org/pdf/2301.08802
- **Abstract**
Central venous catheters (CVC) are commonly inserted into the large veins of the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels. Placing a CVC under sonographic guidance is an appropriate method to reduce such adverse events, if anatomical landmarks like venous and arterial vessels can be detected reliably. This task shall be solved by registration of patient individual images vs. an anatomically labelled reference image. In this work, a linear, affine transformation is performed on cervical sonograms, followed by a non-linear transformation to achieve a more precise registration. Voxelmorph (VM), a learning-based library for deformable image registration using a convolutional neural network (CNN) with U-Net structure was used for non-linear transformation. The impact of principal component analysis (PCA)-based pre-denoising of patient individual images, as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively, the latter using metrics for deformation and image similarity. Using the PCA-approximated cervical sonograms resulted in decreased mean deformation lengths between 18% and 66% compared to their original image counterparts, depending on net structure. In addition, reducing the number of convolutional layers led to improved image similarity with PCA images, while worsening in original images. Despite a large reduction of network parameters, no overall decrease in registration quality was observed, leading to the conclusion that the original net structure is oversized for the task at hand.
### Toward Foundation Models for Earth Monitoring: Generalizable Deep Learning Models for Natural Hazard Segmentation
- **Authors:** Johannes Jakubik, Michal Muszynski, Michael Vössing, Niklas Kühl, Thomas Brunschwiler
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.09318
- **Pdf link:** https://arxiv.org/pdf/2301.09318
- **Abstract**
Climate change results in an increased probability of extreme weather events that put societies and businesses at risk on a global scale. Therefore, near real-time mapping of natural hazards is an emerging priority for the support of natural disaster relief, risk management, and informing governmental policy decisions. Recent methods to achieve near real-time mapping increasingly leverage deep learning (DL). However, DL-based approaches are designed for one specific task in a single geographic region based on specific frequency bands of satellite data. Therefore, DL models used to map specific natural hazards struggle with their generalization to other types of natural hazards in unseen regions. In this work, we propose a methodology to significantly improve the generalizability of DL natural hazards mappers based on pre-training on a suitable pre-task. Without access to any data from the target domain, we demonstrate this improved generalizability across four U-Net architectures for the segmentation of unseen natural hazards. Importantly, our method is invariant to geographic differences and differences in the type of frequency bands of satellite data. By leveraging characteristics of unlabeled images from the target domain that are publicly available, our approach is able to further improve the generalization behavior without fine-tuning. Thereby, our approach supports the development of foundation models for earth monitoring with the objective of directly segmenting unseen natural hazards across novel geographic regions given different sources of satellite imagery.
### Contracting Skeletal Kinematic Embeddings for Anomaly Detection
- **Authors:** Alessandro Flaborea, Guido Maria D'Amely di Melendugno, Stefano D'arrigo, Marco Aurelio Sterpa, Alessio Sampieri, Fabio Galasso
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09489
- **Pdf link:** https://arxiv.org/pdf/2301.09489
- **Abstract**
Detecting the anomaly of human behavior is paramount to timely recognizing endangering situations, such as street fights or elderly falls. However, anomaly detection is complex, since anomalous events are rare and because it is an open set recognition task, i.e., what is anomalous at inference has not been observed at training. We propose COSKAD, a novel model which encodes skeletal human motion by an efficient graph convolutional network and learns to COntract SKeletal kinematic embeddings onto a latent hypersphere of minimum volume for Anomaly Detection. We propose and analyze three latent space designs for COSKAD: the commonly-adopted Euclidean, and the new spherical-radial and hyperbolic volumes. All three variants outperform the state-of-the-art, including video-based techniques, on the ShangaiTechCampus, the Avenue, and on the most recent UBnormal dataset, for which we contribute novel skeleton annotations and the selection of human-related videos. The source code and dataset will be released upon acceptance.
## Keyword: event camera
### An Asynchronous Intensity Representation for Framed and Event Video Sources
- **Authors:** Andrew C. Freeman, Montek Singh, Ketan Mayer-Patel
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2301.08783
- **Pdf link:** https://arxiv.org/pdf/2301.08783
- **Abstract**
Neuromorphic "event" cameras, designed to mimic the human vision system with asynchronous sensing, unlock a new realm of high-speed and high dynamic range applications. However, researchers often either revert to a framed representation of event data for applications, or build bespoke applications for a particular camera's event data type. To usher in the next era of video systems, accommodate new event camera designs, and explore the benefits to asynchronous video in classical applications, we argue that there is a need for an asynchronous, source-agnostic video representation. In this paper, we introduce a novel, asynchronous intensity representation for both framed and non-framed data sources. We show that our representation can increase intensity precision and greatly reduce the number of samples per pixel compared to grid-based representations. With framed sources, we demonstrate that by permitting a small amount of loss through the temporal averaging of similar pixel values, we can reduce our representational sample rate by more than half, while incurring a drop in VMAF quality score of only 4.5. We also demonstrate lower latency than the state-of-the-art method for fusing and transcoding framed and event camera data to an intensity representation, while maintaining $2000\times$ the temporal resolution. We argue that our method provides the computational efficiency and temporal granularity necessary to build real-time intensity-based applications for event cameras.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### In-situ Water quality monitoring in Oil and Gas operations
- **Authors:** Satish Kumar, Rui Kou, Henry Hill, Jake Lempges, Eric Qian, Vikram Jayaram
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Applications (stat.AP); Computation (stat.CO); Methodology (stat.ME)
- **Arxiv link:** https://arxiv.org/abs/2301.08800
- **Pdf link:** https://arxiv.org/pdf/2301.08800
- **Abstract**
From agriculture to mining, to energy, surface water quality monitoring is an essential task. As oil and gas operators work to reduce the consumption of freshwater, it is increasingly important to actively manage fresh and non-fresh water resources over the long term. For large-scale monitoring, manual sampling at many sites has become too time-consuming and unsustainable, given the sheer number of dispersed ponds, small lakes, playas, and wetlands over a large area. Therefore, satellite-based environmental monitoring presents great potential. Many existing satellite-based monitoring studies utilize index-based methods to monitor large water bodies such as rivers and oceans. However, these existing methods fail when monitoring small ponds-the reflectance signal received from small water bodies is too weak to detect. To address this challenge, we propose a new Water Quality Enhanced Index (WQEI) Model, which is designed to enable users to determine contamination levels in water bodies with weak reflectance patterns. Our results show that 1) WQEI is a good indicator of water turbidity validated with 1200 water samples measured in the laboratory, and 2) by applying our method to commonly available satellite data (e.g. LandSat8), one can achieve high accuracy water quality monitoring efficiently in large regions. This provides a tool for operators to optimize the quality of water stored within surface storage ponds and increasing the readiness and availability of non-fresh water.
### Impact of PCA-based preprocessing and different CNN structures on deformable registration of sonograms
- **Authors:** Christian Schmidt, Heinrich Martin Overhoff
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08802
- **Pdf link:** https://arxiv.org/pdf/2301.08802
- **Abstract**
Central venous catheters (CVC) are commonly inserted into the large veins of the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels. Placing a CVC under sonographic guidance is an appropriate method to reduce such adverse events, if anatomical landmarks like venous and arterial vessels can be detected reliably. This task shall be solved by registration of patient individual images vs. an anatomically labelled reference image. In this work, a linear, affine transformation is performed on cervical sonograms, followed by a non-linear transformation to achieve a more precise registration. Voxelmorph (VM), a learning-based library for deformable image registration using a convolutional neural network (CNN) with U-Net structure was used for non-linear transformation. The impact of principal component analysis (PCA)-based pre-denoising of patient individual images, as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively, the latter using metrics for deformation and image similarity. Using the PCA-approximated cervical sonograms resulted in decreased mean deformation lengths between 18% and 66% compared to their original image counterparts, depending on net structure. In addition, reducing the number of convolutional layers led to improved image similarity with PCA images, while worsening in original images. Despite a large reduction of network parameters, no overall decrease in registration quality was observed, leading to the conclusion that the original net structure is oversized for the task at hand.
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
### Improving Presentation Attack Detection for ID Cards on Remote Verification Systems
- **Authors:** Sebastian Gonzalez, Juan Tapia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09542
- **Pdf link:** https://arxiv.org/pdf/2301.09542
- **Abstract**
In this paper, an updated two-stage, end-to-end Presentation Attack Detection method for remote biometric verification systems of ID cards, based on MobileNetV2, is presented. Several presentation attack species such as printed, display, composite (based on cropped and spliced areas), plastic (PVC), and synthetic ID card images using different capture sources are used. This proposal was developed using a database consisting of 190.000 real case Chilean ID card images with the support of a third-party company. Also, a new framework called PyPAD, used to estimate multi-class metrics compliant with the ISO/IEC 30107-3 standard was developed, and will be made available for research purposes. Our method is trained on two convolutional neural networks separately, reaching BPCER\textsubscript{100} scores on ID cards attacks of 1.69\% and 2.36\% respectively. The two-stage method using both models together can reach a BPCER\textsubscript{100} score of 0.92\%.
## Keyword: image signal processing
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
## Keyword: image signal process
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
## Keyword: compression
There is no result
## Keyword: RAW
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
### Learning Open-vocabulary Semantic Segmentation Models From Natural Language Supervision
- **Authors:** Jilan Xu, Junlin Hou, Yuejie Zhang, Rui Feng, Yi Wang, Yu Qiao, Weidi Xie
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09121
- **Pdf link:** https://arxiv.org/pdf/2301.09121
- **Abstract**
In this paper, we consider the problem of open-vocabulary semantic segmentation (OVS), which aims to segment objects of arbitrary classes instead of pre-defined, closed-set categories. The main contributions are as follows: First, we propose a transformer-based model for OVS, termed as OVSegmentor, which only exploits web-crawled image-text pairs for pre-training without using any mask annotations. OVSegmentor assembles the image pixels into a set of learnable group tokens via a slot-attention based binding module, and aligns the group tokens to the corresponding caption embedding. Second, we propose two proxy tasks for training, namely masked entity completion and cross-image mask consistency. The former aims to infer all masked entities in the caption given the group tokens, that enables the model to learn fine-grained alignment between visual groups and text entities. The latter enforces consistent mask predictions between images that contain shared entities, which encourages the model to learn visual invariance. Third, we construct CC4M dataset for pre-training by filtering CC12M with frequently appeared entities, which significantly improves training efficiency. Fourth, we perform zero-shot transfer on three benchmark datasets, PASCAL VOC 2012, PASCAL Context, and COCO Object. Our model achieves superior segmentation results over the state-of-the-art method by using only 3\% data (4M vs 134M) for pre-training. Code and pre-trained models will be released for future research.
### Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer
- **Authors:** Teru Nagamori, Hitoshi Kiya
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.09255
- **Pdf link:** https://arxiv.org/pdf/2301.09255
- **Abstract**
In recent years, privacy-preserving methods for deep learning have become an urgent problem. Accordingly, we propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification under the use of the vision transformer (ViT). The proposed method allows us not only to train models over multiple participants without directly sharing their raw data but to also protect the privacy of test (query) images for the first time. In addition, it can also maintain the same accuracy as normally trained models. In an experiment, the proposed method was demonstrated to well work without any performance degradation on the CIFAR-10 and CIFAR-100 datasets.
## Keyword: raw image
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
|
2.0
|
New submissions for Tue, 24 Jan 23 - ## Keyword: events
### Impact of PCA-based preprocessing and different CNN structures on deformable registration of sonograms
- **Authors:** Christian Schmidt, Heinrich Martin Overhoff
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08802
- **Pdf link:** https://arxiv.org/pdf/2301.08802
- **Abstract**
Central venous catheters (CVC) are commonly inserted into the large veins of the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels. Placing a CVC under sonographic guidance is an appropriate method to reduce such adverse events, if anatomical landmarks like venous and arterial vessels can be detected reliably. This task shall be solved by registration of patient individual images vs. an anatomically labelled reference image. In this work, a linear, affine transformation is performed on cervical sonograms, followed by a non-linear transformation to achieve a more precise registration. Voxelmorph (VM), a learning-based library for deformable image registration using a convolutional neural network (CNN) with U-Net structure was used for non-linear transformation. The impact of principal component analysis (PCA)-based pre-denoising of patient individual images, as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively, the latter using metrics for deformation and image similarity. Using the PCA-approximated cervical sonograms resulted in decreased mean deformation lengths between 18% and 66% compared to their original image counterparts, depending on net structure. In addition, reducing the number of convolutional layers led to improved image similarity with PCA images, while worsening in original images. Despite a large reduction of network parameters, no overall decrease in registration quality was observed, leading to the conclusion that the original net structure is oversized for the task at hand.
### Toward Foundation Models for Earth Monitoring: Generalizable Deep Learning Models for Natural Hazard Segmentation
- **Authors:** Johannes Jakubik, Michal Muszynski, Michael Vössing, Niklas Kühl, Thomas Brunschwiler
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.09318
- **Pdf link:** https://arxiv.org/pdf/2301.09318
- **Abstract**
Climate change results in an increased probability of extreme weather events that put societies and businesses at risk on a global scale. Therefore, near real-time mapping of natural hazards is an emerging priority for the support of natural disaster relief, risk management, and informing governmental policy decisions. Recent methods to achieve near real-time mapping increasingly leverage deep learning (DL). However, DL-based approaches are designed for one specific task in a single geographic region based on specific frequency bands of satellite data. Therefore, DL models used to map specific natural hazards struggle with their generalization to other types of natural hazards in unseen regions. In this work, we propose a methodology to significantly improve the generalizability of DL natural hazards mappers based on pre-training on a suitable pre-task. Without access to any data from the target domain, we demonstrate this improved generalizability across four U-Net architectures for the segmentation of unseen natural hazards. Importantly, our method is invariant to geographic differences and differences in the type of frequency bands of satellite data. By leveraging characteristics of unlabeled images from the target domain that are publicly available, our approach is able to further improve the generalization behavior without fine-tuning. Thereby, our approach supports the development of foundation models for earth monitoring with the objective of directly segmenting unseen natural hazards across novel geographic regions given different sources of satellite imagery.
### Contracting Skeletal Kinematic Embeddings for Anomaly Detection
- **Authors:** Alessandro Flaborea, Guido Maria D'Amely di Melendugno, Stefano D'arrigo, Marco Aurelio Sterpa, Alessio Sampieri, Fabio Galasso
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09489
- **Pdf link:** https://arxiv.org/pdf/2301.09489
- **Abstract**
Detecting the anomaly of human behavior is paramount to timely recognizing endangering situations, such as street fights or elderly falls. However, anomaly detection is complex, since anomalous events are rare and because it is an open set recognition task, i.e., what is anomalous at inference has not been observed at training. We propose COSKAD, a novel model which encodes skeletal human motion by an efficient graph convolutional network and learns to COntract SKeletal kinematic embeddings onto a latent hypersphere of minimum volume for Anomaly Detection. We propose and analyze three latent space designs for COSKAD: the commonly-adopted Euclidean, and the new spherical-radial and hyperbolic volumes. All three variants outperform the state-of-the-art, including video-based techniques, on the ShanghaiTech Campus, the Avenue, and on the most recent UBnormal dataset, for which we contribute novel skeleton annotations and the selection of human-related videos. The source code and dataset will be released upon acceptance.
## Keyword: event camera
### An Asynchronous Intensity Representation for Framed and Event Video Sources
- **Authors:** Andrew C. Freeman, Montek Singh, Ketan Mayer-Patel
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2301.08783
- **Pdf link:** https://arxiv.org/pdf/2301.08783
- **Abstract**
Neuromorphic "event" cameras, designed to mimic the human vision system with asynchronous sensing, unlock a new realm of high-speed and high dynamic range applications. However, researchers often either revert to a framed representation of event data for applications, or build bespoke applications for a particular camera's event data type. To usher in the next era of video systems, accommodate new event camera designs, and explore the benefits to asynchronous video in classical applications, we argue that there is a need for an asynchronous, source-agnostic video representation. In this paper, we introduce a novel, asynchronous intensity representation for both framed and non-framed data sources. We show that our representation can increase intensity precision and greatly reduce the number of samples per pixel compared to grid-based representations. With framed sources, we demonstrate that by permitting a small amount of loss through the temporal averaging of similar pixel values, we can reduce our representational sample rate by more than half, while incurring a drop in VMAF quality score of only 4.5. We also demonstrate lower latency than the state-of-the-art method for fusing and transcoding framed and event camera data to an intensity representation, while maintaining $2000\times$ the temporal resolution. We argue that our method provides the computational efficiency and temporal granularity necessary to build real-time intensity-based applications for event cameras.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### In-situ Water quality monitoring in Oil and Gas operations
- **Authors:** Satish Kumar, Rui Kou, Henry Hill, Jake Lempges, Eric Qian, Vikram Jayaram
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Applications (stat.AP); Computation (stat.CO); Methodology (stat.ME)
- **Arxiv link:** https://arxiv.org/abs/2301.08800
- **Pdf link:** https://arxiv.org/pdf/2301.08800
- **Abstract**
From agriculture to mining, to energy, surface water quality monitoring is an essential task. As oil and gas operators work to reduce the consumption of freshwater, it is increasingly important to actively manage fresh and non-fresh water resources over the long term. For large-scale monitoring, manual sampling at many sites has become too time-consuming and unsustainable, given the sheer number of dispersed ponds, small lakes, playas, and wetlands over a large area. Therefore, satellite-based environmental monitoring presents great potential. Many existing satellite-based monitoring studies utilize index-based methods to monitor large water bodies such as rivers and oceans. However, these existing methods fail when monitoring small ponds: the reflectance signal received from small water bodies is too weak to detect. To address this challenge, we propose a new Water Quality Enhanced Index (WQEI) Model, which is designed to enable users to determine contamination levels in water bodies with weak reflectance patterns. Our results show that 1) WQEI is a good indicator of water turbidity validated with 1200 water samples measured in the laboratory, and 2) by applying our method to commonly available satellite data (e.g. LandSat8), one can achieve high accuracy water quality monitoring efficiently in large regions. This provides a tool for operators to optimize the quality of water stored within surface storage ponds and to increase the readiness and availability of non-fresh water.
### Impact of PCA-based preprocessing and different CNN structures on deformable registration of sonograms
- **Authors:** Christian Schmidt, Heinrich Martin Overhoff
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08802
- **Pdf link:** https://arxiv.org/pdf/2301.08802
- **Abstract**
Central venous catheters (CVC) are commonly inserted into the large veins of the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels. Placing a CVC under sonographic guidance is an appropriate method to reduce such adverse events, if anatomical landmarks like venous and arterial vessels can be detected reliably. This task shall be solved by registration of patient individual images vs. an anatomically labelled reference image. In this work, a linear, affine transformation is performed on cervical sonograms, followed by a non-linear transformation to achieve a more precise registration. Voxelmorph (VM), a learning-based library for deformable image registration using a convolutional neural network (CNN) with U-Net structure was used for non-linear transformation. The impact of principal component analysis (PCA)-based pre-denoising of patient individual images, as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively, the latter using metrics for deformation and image similarity. Using the PCA-approximated cervical sonograms resulted in decreased mean deformation lengths between 18% and 66% compared to their original image counterparts, depending on net structure. In addition, reducing the number of convolutional layers led to improved image similarity with PCA images, while worsening in original images. Despite a large reduction of network parameters, no overall decrease in registration quality was observed, leading to the conclusion that the original net structure is oversized for the task at hand.
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
### Improving Presentation Attack Detection for ID Cards on Remote Verification Systems
- **Authors:** Sebastian Gonzalez, Juan Tapia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09542
- **Pdf link:** https://arxiv.org/pdf/2301.09542
- **Abstract**
In this paper, an updated two-stage, end-to-end Presentation Attack Detection method for remote biometric verification systems of ID cards, based on MobileNetV2, is presented. Several presentation attack species such as printed, display, composite (based on cropped and spliced areas), plastic (PVC), and synthetic ID card images using different capture sources are used. This proposal was developed using a database consisting of 190,000 real-case Chilean ID card images with the support of a third-party company. Also, a new framework called PyPAD, used to estimate multi-class metrics compliant with the ISO/IEC 30107-3 standard was developed, and will be made available for research purposes. Our method is trained on two convolutional neural networks separately, reaching BPCER\textsubscript{100} scores on ID card attacks of 1.69\% and 2.36\% respectively. The two-stage method using both models together can reach a BPCER\textsubscript{100} score of 0.92\%.
## Keyword: image signal processing
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
## Keyword: image signal process
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
## Keyword: compression
There is no result
## Keyword: RAW
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
### Learning Open-vocabulary Semantic Segmentation Models From Natural Language Supervision
- **Authors:** Jilan Xu, Junlin Hou, Yuejie Zhang, Rui Feng, Yi Wang, Yu Qiao, Weidi Xie
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2301.09121
- **Pdf link:** https://arxiv.org/pdf/2301.09121
- **Abstract**
In this paper, we consider the problem of open-vocabulary semantic segmentation (OVS), which aims to segment objects of arbitrary classes instead of pre-defined, closed-set categories. The main contributions are as follows: First, we propose a transformer-based model for OVS, termed as OVSegmentor, which only exploits web-crawled image-text pairs for pre-training without using any mask annotations. OVSegmentor assembles the image pixels into a set of learnable group tokens via a slot-attention based binding module, and aligns the group tokens to the corresponding caption embedding. Second, we propose two proxy tasks for training, namely masked entity completion and cross-image mask consistency. The former aims to infer all masked entities in the caption given the group tokens, that enables the model to learn fine-grained alignment between visual groups and text entities. The latter enforces consistent mask predictions between images that contain shared entities, which encourages the model to learn visual invariance. Third, we construct CC4M dataset for pre-training by filtering CC12M with frequently appeared entities, which significantly improves training efficiency. Fourth, we perform zero-shot transfer on three benchmark datasets, PASCAL VOC 2012, PASCAL Context, and COCO Object. Our model achieves superior segmentation results over the state-of-the-art method by using only 3\% data (4M vs 134M) for pre-training. Code and pre-trained models will be released for future research.
### Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer
- **Authors:** Teru Nagamori, Hitoshi Kiya
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.09255
- **Pdf link:** https://arxiv.org/pdf/2301.09255
- **Abstract**
In recent years, privacy-preserving methods for deep learning have become an urgent problem. Accordingly, we propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification under the use of the vision transformer (ViT). The proposed method allows us not only to train models over multiple participants without directly sharing their raw data but also to protect the privacy of test (query) images for the first time. In addition, it can also maintain the same accuracy as normally trained models. In an experiment, the proposed method was demonstrated to work well without any performance degradation on the CIFAR-10 and CIFAR-100 datasets.
## Keyword: raw image
### Raw or Cooked? Object Detection on RAW Images
- **Authors:** William Ljungbergh, Joakim Johnander, Christoffer Petersson, Michael Felsberg
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2301.08965
- **Pdf link:** https://arxiv.org/pdf/2301.08965
- **Abstract**
Images fed to a deep neural network have in general undergone several handcrafted image signal processing (ISP) operations, all of which have been optimized to produce visually pleasing images. In this work, we investigate the hypothesis that the intermediate representation of visually pleasing images is sub-optimal for downstream computer vision tasks compared to the RAW image representation. We suggest that the operations of the ISP instead should be optimized towards the end task, by learning the parameters of the operations jointly during training. We extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional RGB images. In experiments on the open PASCALRAW dataset, we empirically confirm our hypothesis.
|
process
|
new submissions for tue jan keyword events impact of pca based preprocessing and different cnn structures on deformable registration of sonograms authors christian schmidt heinrich martin overhoff subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract central venous catheters cvc are commonly inserted into the large veins of the neck e g the internal jugular vein ijv cvc insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels placing a cvc under sonographic guidance is an appropriate method to reduce such adverse events if anatomical landmarks like venous and arterial vessels can be detected reliably this task shall be solved by registration of patient individual images vs an anatomically labelled reference image in this work a linear affine transformation is performed on cervical sonograms followed by a non linear transformation to achieve a more precise registration voxelmorph vm a learning based library for deformable image registration using a convolutional neural network cnn with u net structure was used for non linear transformation the impact of principal component analysis pca based pre denoising of patient individual images as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively the latter using metrics for deformation and image similarity using the pca approximated cervical sonograms resulted in decreased mean deformation lengths between and compared to their original image counterparts depending on net structure in addition reducing the number of convolutional layers led to improved image similarity with pca images while worsening in original images despite a large reduction of network parameters no overall decrease in registration quality was observed leading to the conclusion that the original net structure is oversized for the task at hand toward foundation models for earth monitoring generalizable deep learning models for natural hazard segmentation authors johannes jakubik michal muszynski michael vössing niklas kühl thomas brunschwiler subjects computer vision and pattern recognition cs cv computational engineering finance and science cs ce machine learning cs lg arxiv link pdf link abstract climate change results in an increased probability of extreme weather events that put societies and businesses at risk on a global scale therefore near real time mapping of natural hazards is an emerging priority for the support of natural disaster relief risk management and informing governmental policy decisions recent methods to achieve near real time mapping increasingly leverage deep learning dl however dl based approaches are designed for one specific task in a single geographic region based on specific frequency bands of satellite data therefore dl models used to map specific natural hazards struggle with their generalization to other types of natural hazards in unseen regions in this work we propose a methodology to significantly improve the generalizability of dl natural hazards mappers based on pre training on a suitable pre task without access to any data from the target domain we demonstrate this improved generalizability across four u net architectures for the segmentation of unseen natural hazards importantly our method is invariant to geographic differences and differences in the type of frequency bands of satellite data by leveraging characteristics of unlabeled images from the 
target domain that are publicly available our approach is able to further improve the generalization behavior without fine tuning thereby our approach supports the development of foundation models for earth monitoring with the objective of directly segmenting unseen natural hazards across novel geographic regions given different sources of satellite imagery contracting skeletal kinematic embeddings for anomaly detection authors alessandro flaborea guido maria d amely di melendugno stefano d arrigo marco aurelio sterpa alessio sampieri fabio galasso subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract detecting the anomaly of human behavior is paramount to timely recognizing endangering situations such as street fights or elderly falls however anomaly detection is complex since anomalous events are rare and because it is an open set recognition task i e what is anomalous at inference has not been observed at training we propose coskad a novel model which encodes skeletal human motion by an efficient graph convolutional network and learns to contract skeletal kinematic embeddings onto a latent hypersphere of minimum volume for anomaly detection we propose and analyze three latent space designs for coskad the commonly adopted euclidean and the new spherical radial and hyperbolic volumes all three variants outperform the state of the art including video based techniques on the shangaitechcampus the avenue and on the most recent ubnormal dataset for which we contribute novel skeleton annotations and the selection of human related videos the source code and dataset will be released upon acceptance keyword event camera an asynchronous intensity representation for framed and event video sources authors andrew c freeman montek singh ketan mayer patel subjects computer vision and pattern recognition cs cv multimedia cs mm arxiv link pdf link abstract neuromorphic event cameras designed to mimic the human vision system with asynchronous sensing unlock a new realm of high speed and high dynamic range applications however researchers often either revert to a framed representation of event data for applications or build bespoke applications for a particular camera s event data type to usher in the next era of video systems accommodate new event camera designs and explore the benefits to asynchronous video in classical applications we argue that there is a need for an asynchronous source agnostic video representation in this paper we introduce a novel asynchronous intensity representation for both framed and non framed data sources we show that our representation can increase intensity precision and greatly reduce the number of samples per pixel compared to grid based representations with framed sources we demonstrate that by permitting a small amount of loss through the temporal averaging of similar pixel values we can reduce our representational sample rate by more than half while incurring a drop in vmaf quality score of only we also demonstrate lower latency than the state of the art method for fusing and transcoding framed and event camera data to an intensity representation while maintaining times the temporal resolution we argue that our method provides the computational efficiency and temporal granularity necessary to build real time intensity based applications for event cameras keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp in situ water quality 
monitoring in oil and gas operations authors satish kumar rui kou henry hill jake lempges eric qian vikram jayaram subjects computer vision and pattern recognition cs cv applications stat ap computation stat co methodology stat me arxiv link pdf link abstract from agriculture to mining to energy surface water quality monitoring is an essential task as oil and gas operators work to reduce the consumption of freshwater it is increasingly important to actively manage fresh and non fresh water resources over the long term for large scale monitoring manual sampling at many sites has become too time consuming and unsustainable given the sheer number of dispersed ponds small lakes playas and wetlands over a large area therefore satellite based environmental monitoring presents great potential many existing satellite based monitoring studies utilize index based methods to monitor large water bodies such as rivers and oceans however these existing methods fail when monitoring small ponds the reflectance signal received from small water bodies is too weak to detect to address this challenge we propose a new water quality enhanced index wqei model which is designed to enable users to determine contamination levels in water bodies with weak reflectance patterns our results show that wqei is a good indicator of water turbidity validated with water samples measured in the laboratory and by applying our method to commonly available satellite data e g one can achieve high accuracy water quality monitoring efficiently in large regions this provides a tool for operators to optimize the quality of water stored within surface storage ponds and increasing the readiness and availability of non fresh water impact of pca based preprocessing and different cnn structures on deformable registration of sonograms authors christian schmidt heinrich martin overhoff subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract central venous catheters cvc are commonly inserted into the large veins of the neck e g the internal jugular vein ijv cvc insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels placing a cvc under sonographic guidance is an appropriate method to reduce such adverse events if anatomical landmarks like venous and arterial vessels can be detected reliably this task shall be solved by registration of patient individual images vs an anatomically labelled reference image in this work a linear affine transformation is performed on cervical sonograms followed by a non linear transformation to achieve a more precise registration voxelmorph vm a learning based library for deformable image registration using a convolutional neural network cnn with u net structure was used for non linear transformation the impact of principal component analysis pca based pre denoising of patient individual images as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively the latter using metrics for deformation and image similarity using the pca approximated cervical sonograms resulted in decreased mean deformation lengths between and compared to their original image counterparts depending on net structure in addition reducing the number of convolutional layers led to improved image similarity with pca images while worsening in original images despite a large reduction of network parameters no overall decrease in registration quality was observed 
leading to the conclusion that the original net structure is oversized for the task at hand raw or cooked object detection on raw images authors william ljungbergh joakim johnander christoffer petersson michael felsberg subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract images fed to a deep neural network have in general undergone several handcrafted image signal processing isp operations all of which have been optimized to produce visually pleasing images in this work we investigate the hypothesis that the intermediate representation of visually pleasing images is sub optimal for downstream computer vision tasks compared to the raw image representation we suggest that the operations of the isp instead should be optimized towards the end task by learning the parameters of the operations jointly during training we extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional rgb images in experiments on the open pascalraw dataset we empirically confirm our hypothesis improving presentation attack detection for id cards on remote verification systems authors sebastian gonzalez juan tapia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this paper an updated two stage end to end presentation attack detection method for remote biometric verification systems of id cards based on is presented several presentation attack species such as printed display composite based on cropped and spliced areas plastic pvc and synthetic id card images using different capture sources are used this proposal was developed using a database consisting of real case chilean id card images with the support of a third party company also a new framework called pypad used to estimate multi class metrics compliant with the iso iec standard was developed and will be made available for research purposes our method is trained on two convolutional neural networks separately reaching bpcer textsubscript scores on id cards attacks of and respectively the two stage method using both models together can reach a bpcer textsubscript score of keyword image signal processing raw or cooked object detection on raw images authors william ljungbergh joakim johnander christoffer petersson michael felsberg subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract images fed to a deep neural network have in general undergone several handcrafted image signal processing isp operations all of which have been optimized to produce visually pleasing images in this work we investigate the hypothesis that the intermediate representation of visually pleasing images is sub optimal for downstream computer vision tasks compared to the raw image representation we suggest that the operations of the isp instead should be optimized towards the end task by learning the parameters of the operations jointly during training we extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional rgb images in experiments on the open pascalraw dataset we empirically confirm our hypothesis keyword image signal process raw or cooked object detection on raw images authors william ljungbergh joakim johnander christoffer petersson michael felsberg subjects computer vision and 
pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract images fed to a deep neural network have in general undergone several handcrafted image signal processing isp operations all of which have been optimized to produce visually pleasing images in this work we investigate the hypothesis that the intermediate representation of visually pleasing images is sub optimal for downstream computer vision tasks compared to the raw image representation we suggest that the operations of the isp instead should be optimized towards the end task by learning the parameters of the operations jointly during training we extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional rgb images in experiments on the open pascalraw dataset we empirically confirm our hypothesis keyword compression there is no result keyword raw raw or cooked object detection on raw images authors william ljungbergh joakim johnander christoffer petersson michael felsberg subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract images fed to a deep neural network have in general undergone several handcrafted image signal processing isp operations all of which have been optimized to produce visually pleasing images in this work we investigate the hypothesis that the intermediate representation of visually pleasing images is sub optimal for downstream computer vision tasks compared to the raw image representation we suggest that the operations of the isp instead should be optimized towards the end task by learning the parameters of the operations jointly during training we extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional rgb images in experiments on the open pascalraw dataset we empirically confirm our hypothesis learning open vocabulary semantic segmentation models from natural language supervision authors jilan xu junlin hou yuejie zhang rui feng yi wang yu qiao weidi xie subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this paper we consider the problem of open vocabulary semantic segmentation ovs which aims to segment objects of arbitrary classes instead of pre defined closed set categories the main contributions are as follows first we propose a transformer based model for ovs termed as ovsegmentor which only exploits web crawled image text pairs for pre training without using any mask annotations ovsegmentor assembles the image pixels into a set of learnable group tokens via a slot attention based binding module and aligns the group tokens to the corresponding caption embedding second we propose two proxy tasks for training namely masked entity completion and cross image mask consistency the former aims to infer all masked entities in the caption given the group tokens that enables the model to learn fine grained alignment between visual groups and text entities the latter enforces consistent mask predictions between images that contain shared entities which encourages the model to learn visual invariance third we construct dataset for pre training by filtering with frequently appeared entities which significantly improves training efficiency fourth we perform zero shot transfer on three benchmark datasets pascal voc pascal context and coco object our 
model achieves superior segmentation results over the state of the art method by using only data vs for pre training code and pre trained models will be released for future research combined use of federated learning and image encryption for privacy preserving image classification with vision transformer authors teru nagamori hitoshi kiya subjects computer vision and pattern recognition cs cv cryptography and security cs cr machine learning cs lg arxiv link pdf link abstract in recent years privacy preserving methods for deep learning have become an urgent problem accordingly we propose the combined use of federated learning fl and encrypted images for privacy preserving image classification under the use of the vision transformer vit the proposed method allows us not only to train models over multiple participants without directly sharing their raw data but to also protect the privacy of test query images for the first time in addition it can also maintain the same accuracy as normally trained models in an experiment the proposed method was demonstrated to well work without any performance degradation on the cifar and cifar datasets keyword raw image raw or cooked object detection on raw images authors william ljungbergh joakim johnander christoffer petersson michael felsberg subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract images fed to a deep neural network have in general undergone several handcrafted image signal processing isp operations all of which have been optimized to produce visually pleasing images in this work we investigate the hypothesis that the intermediate representation of visually pleasing images is sub optimal for downstream computer vision tasks compared to the raw image representation we suggest that the operations of the isp instead should be optimized towards the end task by learning the parameters of the operations jointly during training we extend previous works on this topic and propose a new learnable operation that enables an object detector to achieve superior performance when compared to both previous works and traditional rgb images in experiments on the open pascalraw dataset we empirically confirm our hypothesis
| 1
|
247,646
| 26,726,197,010
|
IssuesEvent
|
2023-01-29 18:44:49
|
snowdensb/nibrs
|
https://api.github.com/repos/snowdensb/nibrs
|
closed
|
CVE-2022-40153 (High) detected in woodstox-core-5.0.3.jar - autoclosed
|
security vulnerability
|
## CVE-2022-40153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>woodstox-core-5.0.3.jar</b></p></summary>
<p>Woodstox is a high-performance XML processor that
implements Stax (JSR-173), SAX2 and Stax2 APIs</p>
<p>Library home page: <a href="https://github.com/FasterXML/woodstox">https://github.com/FasterXML/woodstox</a></p>
<p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar,/home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/woodstox-core-5.0.3.jar,/home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **woodstox-core-5.0.3.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialize XML data may be vulnerable to Denial of Service attacks (DoS). If the parser is running on user-supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40153>CVE-2022-40153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
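As a cross-check on the 7.5 base score shown above, the sketch below plugs the listed metrics (Attack Vector: Network, Attack Complexity: Low, Privileges Required: None, User Interaction: None, Scope: Unchanged, C/I/A impact: None/None/High) into the standard CVSS v3.0 base-score equations. The weights and formula are taken from the public CVSS v3.0 specification, not from this report, so treat it as an illustrative sketch rather than the scanner's own computation.

```python
import math

# CVSS v3.0 numeric weights for the vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
C, I, A = 0.0, 0.0, 0.56                  # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest one-decimal value that is >= x."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # impact sub-score = 0.56
impact = 6.42 * iss                        # Scope Unchanged form, ~3.60
exploitability = 8.22 * AV * AC * PR * UI  # ~3.89

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5, matching the score reported above
```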
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: 5.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2022-40153 (High) detected in woodstox-core-5.0.3.jar - autoclosed - ## CVE-2022-40153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>woodstox-core-5.0.3.jar</b></p></summary>
<p>Woodstox is a high-performance XML processor that
implements Stax (JSR-173), SAX2 and Stax2 APIs</p>
<p>Library home page: <a href="https://github.com/FasterXML/woodstox">https://github.com/FasterXML/woodstox</a></p>
<p>Path to dependency file: /tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar,/home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar,/web/nibrs-web/target/nibrs-web/WEB-INF/lib/woodstox-core-5.0.3.jar,/home/wss-scanner/.m2/repository/com/fasterxml/woodstox/woodstox-core/5.0.3/woodstox-core-5.0.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **woodstox-core-5.0.3.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Those using Xstream to serialize XML data may be vulnerable to Denial of Service attacks (DoS). If the parser is running on user-supplied input, an attacker may supply content that causes the parser to crash by stack overflow. This effect may support a denial of service attack.
<p>Publish Date: 2022-09-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40153>CVE-2022-40153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-16</p>
<p>Fix Resolution: 5.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in woodstox core jar autoclosed cve high severity vulnerability vulnerable library woodstox core jar woodstox is a high performance xml processor that implements stax jsr and apis library home page a href path to dependency file tools nibrs fbi service pom xml path to vulnerable library home wss scanner repository com fasterxml woodstox woodstox core woodstox core jar home wss scanner repository com fasterxml woodstox woodstox core woodstox core jar web nibrs web target nibrs web web inf lib woodstox core jar home wss scanner repository com fasterxml woodstox woodstox core woodstox core jar dependency hierarchy x woodstox core jar vulnerable library found in base branch master vulnerability details those using xstream to seralize xml data may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow this effect may support a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution rescue worker helmet automatic remediation is available for this issue
| 0
|
260,203
| 19,661,611,735
|
IssuesEvent
|
2022-01-10 17:35:18
|
KUCC-1997/KUCC-Discord-Bot
|
https://api.github.com/repos/KUCC-1997/KUCC-Discord-Bot
|
closed
|
Update Readme.md
|
documentation
|
Create a fully descriptive readme file for KUCC Discord Bot including documentation and contributions guide.
|
1.0
|
Update Readme.md - Create a fully descriptive readme file for KUCC Discord Bot including documentation and contributions guide.
|
non_process
|
update readme md create a fully descriptive readme file for kucc discord bot including documentation and contributions guide
| 0
|
327,641
| 24,145,953,200
|
IssuesEvent
|
2022-09-21 18:45:48
|
alexjohn7516/reddit-data-visualization
|
https://api.github.com/repos/alexjohn7516/reddit-data-visualization
|
closed
|
Project Necessities
|
documentation enhancement
|
Linters, Technologies, Git, SDLC write up, PR format, Commit formats, Issue Format, Readme Format
|
1.0
|
Project Necessities - Linters, Technologies, Git, SDLC write up, PR format, Commit formats, Issue Format, Readme Format
|
non_process
|
project necessities linters technologies git sdlc write up pr format commit formats issue format readme format
| 0
|
1,402
| 3,967,867,683
|
IssuesEvent
|
2016-05-03 17:42:12
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Improve data processing
|
Processors
|
- [ ] review all `TODO` in `processors` repo and make appropriate changes
|
1.0
|
Improve data processing - - [ ] review all `TODO` in `processors` repo and make appropriate changes
|
process
|
improve data processing review all todo in processors repo and make appropriate changes
| 1
|
16,814
| 22,060,918,529
|
IssuesEvent
|
2022-05-30 17:41:51
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
closed
|
Introduce spy-only-flag
|
enhancement kmock-processor kmock-gradle
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
KMock primarily aims at an intrusive approach. While it supports spying, there is no possibility to deactivate the `kmock` completely. As discussed elsewhere, KMock should also support this kind of approach.
Acceptance criteria:
1. Add a feature flag `spyOnly`, which is false on default.
2. If this flag is true, the processor should not produce `kmock`, but enable all captured interfaces for spying.
|
1.0
|
Introduce spy-only-flag - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
KMock primarily aims at an intrusive approach. While it supports spying, there is no possibility to deactivate the `kmock` completely. As discussed elsewhere, KMock should also support this kind of approach.
Acceptance criteria:
1. Add a feature flag `spyOnly`, which is false on default.
2. If this flag is true, the processor should not produce `kmock`, but enable all captured interfaces for spying.
|
process
|
introduce spy only flag description kmock primarily aims on an intrusive approach while it supports spying there is no possibility to deactivate the kmock completely as discussed elsewhere kmock should also support this kind of approach acceptance criteria add a feature flag spyonly which is false on default if this flag is true the processor should not produce kmock but enable all captured interfaces for spying
| 1
|
401,474
| 11,790,682,162
|
IssuesEvent
|
2020-03-17 19:27:31
|
risd/risd-congratulations-v2-ph2
|
https://api.github.com/repos/risd/risd-congratulations-v2-ph2
|
closed
|
adding covid banner to homepage
|
priority
|
@rubillionaire per the slack conversation in #admissions we need to please add a covid banner to the top of the site:
so initial modal moment stays the same, it would just be whenever you aren't seeing the modal that you'd see the covid banner above the top, yellow welcome strip
We will need to add this to graduatestudy.risd.edu also. And can we add it to the homepage content types on both sites like the module in the homepage content type of www.risd.edu?:
<img width="896" alt="Screen Shot 2020-03-16 at 11 53 39 AM" src="https://user-images.githubusercontent.com/14020234/76776280-eabbc000-677c-11ea-8ddd-3d54aa1dfce0.png">
lastly, i think we'd like the banner background color to be changeable, so if we could input a hex color in a field in the cms that would be great.
@mmarol did I miss anything here?
|
1.0
|
adding covid banner to homepage - @rubillionaire per the slack conversation in #admissions we need to please add a covid banner to the top of the site:
so initial modal moment stays the same, it would just be whenever you aren't seeing the modal that you'd see the covid banner above the top, yellow welcome strip
We will need to add this to graduatestudy.risd.edu also. And can we add it to the homepage content types on both sites like the module in the homepage content type of www.risd.edu?:
<img width="896" alt="Screen Shot 2020-03-16 at 11 53 39 AM" src="https://user-images.githubusercontent.com/14020234/76776280-eabbc000-677c-11ea-8ddd-3d54aa1dfce0.png">
lastly, i think we'd like the banner background color to be changeable, so if we could input a hex color in a field in the cms that would be great.
@mmarol did I miss anything here?
|
non_process
|
adding covid banner to homepage rubillionaire per the slack conversation in admissions we need to please add a covid banner to the top of the site so initial modal moment stays the same it would just be whenever you aren t seeing the modal that you d see the covid banner above the top yellow welcome strip we will need to add this to graduatestudy risd edu also and can we add it to the homepage content types on both sites like the module in the homepage content type of img width alt screen shot at am src lastly i think we d like the banner background color to be changeable so if we could input a hex color in a field in the cms that would be great mmarol did i miss anything here
| 0
|
16,917
| 22,266,104,362
|
IssuesEvent
|
2022-06-10 07:36:43
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
opened
|
bitwise AND process
|
new process
|
**Proposed Process ID:** bitwise_and
## Context
A possible scenario came up in this issue, when considering quality flags of Sentinel-3 https://github.com/Open-EO/openeo-processes-python/issues/179
## Summary
Currently the `and` process works only with boolean inputs (true or false, 0 or 1). A `bitwise_and` operator defined similarly to [what numpy does](https://numpy.org/doc/stable/reference/generated/numpy.bitwise_and.html) would be necessary to accomplish this use case.
## Description
Computes the bit-wise AND of the underlying binary representation of the input numbers.
## Parameters
### x
**Optional:** no
#### Description
A number
#### Data Type
number/null
### y
**Optional:** no
#### Description
A number
#### Data Type
number/null
## Return Value
### Description
boolean value resulting from the bit-wise AND of the underlying binary representation of the input numbers.
### Data Type
boolean/null
## Links to additional resources (optional)
* https://numpy.org/doc/stable/reference/generated/numpy.bitwise_and.html
## Examples (optional)
* https://github.com/Open-EO/openeo-processes-python/issues/179#issuecomment-1151974639
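Since the proposal is explicitly modelled on NumPy's behaviour, a minimal illustration of the intended semantics is sketched below using `numpy.bitwise_and` directly. The openEO process itself is only proposed in this issue, so this is not its implementation, and the quality-flag bit used in the array example is a hypothetical value chosen purely for illustration.

```python
import numpy as np

# Bit-wise AND of the underlying binary representations of two numbers.
print(np.bitwise_and(13, 17))   # 1   (0b01101 & 0b10001 -> 0b00001)
print(np.bitwise_and(14, 13))   # 12  (0b1110  & 0b1101  -> 0b1100)

# Element-wise over arrays, e.g. testing a single quality-flag bit:
flags = np.array([0b0001, 0b0110, 0b0101])
mask_bit = 0b0100                                # hypothetical flag bit
is_set = np.bitwise_and(flags, mask_bit) != 0
print(is_set)                                    # [False  True  True]
```

Note that `numpy.bitwise_and` returns integers; comparing the result against 0, as in the last line, is one way to obtain the boolean flag test described in the Sentinel-3 use case.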
|
1.0
|
bitwise AND process - **Proposed Process ID:** bitwise_and
## Context
A possible scenario came up in this issue, when considering quality flags of Sentinel-3 https://github.com/Open-EO/openeo-processes-python/issues/179
## Summary
Currently the `and` process works only with boolean inputs (true or false, 0 or 1). A `bitwise_and` operator defined similarly to [what numpy does](https://numpy.org/doc/stable/reference/generated/numpy.bitwise_and.html) would be necessary to accomplish this use case.
## Description
Computes the bit-wise AND of the underlying binary representation of the input numbers.
## Parameters
### x
**Optional:** no
#### Description
A number
#### Data Type
number/null
### y
**Optional:** no
#### Description
A number
#### Data Type
number/null
## Return Value
### Description
boolean value resulting from the bit-wise AND of the underlying binary representation of the input numbers.
### Data Type
boolean/null
## Links to additional resources (optional)
* https://numpy.org/doc/stable/reference/generated/numpy.bitwise_and.html
## Examples (optional)
* https://github.com/Open-EO/openeo-processes-python/issues/179#issuecomment-1151974639
|
process
|
bitwise and process proposed process id bitwise and context a possible scenario came up in this issue when considering quality flags of sentinel summary currently the and process works only with boolean inputs true or false or a bitwise and operator defined similarly to would be necessary to accomplish this use case description computes the bit wise and of the underlying binary representation of the input numbers parameters x optional no description a number data type number null y optional no description a number data type number null return value description boolean value resulting from the bit wise and of the underlying binary representation of the input numbers data type boolean null links to additional resources optional examples optional
| 1
|
18,384
| 24,515,154,836
|
IssuesEvent
|
2022-10-11 03:52:29
|
medic/cht-core
|
https://api.github.com/repos/medic/cht-core
|
closed
|
Release 3.17.0
|
Type: Internal process
|
# Planning - Product Manager
- [X] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (e.g. CouchDB, Node, minimum browser versions), broken backwards compatibility in an API, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes.
- [x] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo
- [x] Assign an engineer as Release Engineer for this release.
# Development - Release Engineer
When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Write an update in the weekly [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) summarising development and acceptance testing progress and identifying any blockers (the [milestone-status](https://github.com/medic/support-scripts/tree/master/milestone-status) script can be used to get a breakdown of the issues). The release Engineer is to update this every week until the version is released.
# Releasing - Release Engineer
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `3.16.0` named `<major>.<minor>.x` in `cht-core`, cherry pick in all commits from issues in milestone (**NB - different than normal release process!!**). Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [ ] Announce the start of release testing on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*Release testing has started for {{version}} of {{product}}*
To get a sneak peek at this upcoming release, you can install `<major>.<minor>.<patch>-beta.1` on your testing environment. We suggest you test your forms and workflows with this release candidate version and raise any issues that you experience. This helps to discover any potential regressions that wouldn't otherwise be caught during release testing.
Keep an eye on the forum for the release announcement in the next couple of weeks!
```
- [x] Add release notes to the [Core Framework Releases](https://docs.communityhealthtoolkit.org/core/releases/) page:
- [x] Create a new document for the release in the [releases folder](https://github.com/medic/cht-docs/tree/main/content/en/core/releases).
- [x] Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions.
- [x] Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format.
- [x] Manually document any known migration steps and known issues.
- [x] Provide description, screenshots, videos, and anything else to help communicate particularly important changes.
- [x] Document any required or recommended upgrades to our other products (eg: cht-conf, cht-gateway, cht-android).
- [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions) and update the EOL date and status of previous releases. Also add a link in the `Release Notes` section to the new release page.
- [x] Assign the PR to:
- The Director of Technology
- An SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Execute the scalability testing suite on the final build and download the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. More info in the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md).
- [x] Upgrade the `demo-cht.dev` instance to this version.
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*We're excited to announce the release of {{version}} of {{product}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the [release notes]({{url}}) for full details.
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our [software support documentation](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions).
Check out our [roadmap](https://github.com/orgs/medic/projects/112) to see what we're working on next.
```
- [x] Add one last update to the [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time.
- [x] Mark this issue "done" and close the Milestone.
|
1.0
|
Release 3.17.0 - # Planning - Product Manager
- [X] Create a GH Milestone for the release. We use [semver](http://semver.org) so if there are breaking changes increment the major, otherwise if there are new features increment the minor, otherwise increment the service pack. Breaking changes in our case relate to updated software requirements (e.g. CouchDB, Node, minimum browser versions), broken backwards compatibility in an API, or a major visual update that requires user retraining.
- [x] Add all the issues to be worked on to the Milestone. Ideally each minor release will have one or two features, a handful of improvements, and plenty of bug fixes.
- [x] Identify any features and improvements in the release that need end-user documentation (beyond eng team documentation improvements) and create corresponding issues in the cht-docs repo
- [x] Assign an engineer as Release Engineer for this release.
# Development - Release Engineer
When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them.
- [x] Set the version number in `package.json` and `package-lock.json` and submit a PR. The easiest way to do this is to use `npm --no-git-tag-version version <major|minor>`.
- [ ] Raise a new issue called `Update dependencies for <version>` with a description that links to [the documentation](https://docs.communityhealthtoolkit.org/core/guides/update-dependencies/). This should be done early in the release cycle so find a volunteer to take this on and assign it to them.
- [x] Write an update in the weekly [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) summarising development and acceptance testing progress and identifying any blockers (the [milestone-status](https://github.com/medic/support-scripts/tree/master/milestone-status) script can be used to get a breakdown of the issues). The release Engineer is to update this every week until the version is released.
# Releasing - Release Engineer
Once all issues have passed acceptance testing and have been merged into `master` release testing can begin.
- [x] Create a new release branch from `3.16.0` named `<major>.<minor>.x` in `cht-core`, cherry pick in all commits from issues in milestone (**NB - different than normal release process!!**). Post a message to #development using this template:
```
@core_devs I've just created the `<major>.<minor>.x` release branch. Please be aware that any further changes intended for this release will have to be merged to `master` then backported. Thanks!
```
- [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing.
- [ ] Announce the start of release testing on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*Release testing has started for {{version}} of {{product}}*
To get a sneak peek at this upcoming release, you can install `<major>.<minor>.<patch>-beta.1` on your testing environment. We suggest you test your forms and workflows with this release candidate version and raise any issues that you experience. This helps to discover any potential regressions that wouldn't otherwise be caught during release testing.
Keep an eye on the forum for the release announcement in the next couple of weeks!
```
- [x] Add release notes to the [Core Framework Releases](https://docs.communityhealthtoolkit.org/core/releases/) page:
- [x] Create a new document for the release in the [releases folder](https://github.com/medic/cht-docs/tree/main/content/en/core/releases).
- [x] Ensure all issues are in the GH Milestone, that they're correctly labelled (in particular: they have the right Type, "UI/UX" if they change the UI, and "Breaking change" if appropriate), and have human readable descriptions.
- [x] Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes) to export the issues into our release note format.
- [x] Manually document any known migration steps and known issues.
- [x] Provide description, screenshots, videos, and anything else to help communicate particularly important changes.
- [x] Document any required or recommended upgrades to our other products (eg: cht-conf, cht-gateway, cht-android).
- [x] Add the release to the [Supported versions](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions) and update the EOL date and status of previous releases. Also add a link in the `Release Notes` section to the new release page.
- [x] Assign the PR to:
- The Director of Technology
- An SRE to review and confirm the documentation on upgrade instructions and breaking changes is sufficient
- [x] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta.
- [x] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release.
- [x] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>`
- [ ] Execute the scalability testing suite on the final build and download the scalability results on S3 at medic-e2e/scalability/$TAG_NAME. Add the release `.jtl` file to `cht-core/tests/scalability/previous_results`. More info in the [scalability documentation](https://github.com/medic/cht-core/blob/master/tests/scalability/README.md).
- [x] Upgrade the `demo-cht.dev` instance to this version.
- [x] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/c/product/releases/26), under the "Product - Releases" category using this template:
```
*We're excited to announce the release of {{version}} of {{product}}*
New features include {{key_features}}. We've also implemented loads of other improvements and fixed a heap of bugs.
Read the [release notes]({{url}}) for full details.
Following our support policy, versions {{versions}} are no longer supported. Projects running these versions should start planning to upgrade in the near future. For more details read our [software support documentation](https://docs.communityhealthtoolkit.org/core/releases/#supported-versions).
Check out our [roadmap](https://github.com/orgs/medic/projects/112) to see what we're working on next.
```
- [x] Add one last update to the [Medic Product Team call agenda](https://docs.google.com/document/d/14AuJ7SerLuOPESBjQlJqpBtzwSAoVf5ykTT7fjyJBT0/edit) and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time.
- [x] Mark this issue "done" and close the Milestone.
|
process
|
release planning product manager create a gh milestone for the release we use so if there are breaking changes increment the major otherwise if there are new features increment the minor otherwise increment the service pack breaking changes in our case relate to updated software requirements egs couchdb node minimum browser versions broken backwards compatibility in an api or a major visual update that requires user retraining add all the issues to be worked on to the milestone ideally each minor release will have one or two features a handful of improvements and plenty of bug fixes identify any features and improvements in the release that need end user documentation beyond eng team documentation improvements and create corresponding issues in the cht docs repo assign an engineer as release engineer for this release development release engineer when development is ready to begin one of the engineers should be nominated as a release engineer they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr the easiest way to do this is to use npm no git tag version version raise a new issue called update dependencies for with a description that links to this should be done early in the release cycle so find a volunteer to take this on and assign it to them write an update in the weekly summarising development and acceptance testing progress and identifying any blockers the script can be used to get a breakdown of the issues the release engineer is to update this every week until the version is released releasing release engineer once all issues have passed acceptance testing and have been merged into master release testing can begin create a new release branch from named x in cht core cherry pick in all commits from issues in milestone nb different than normal release process post a message to development using this template core devs i ve just created the x release branch please be aware that any further changes intended for this release will have to be merged to master then backported thanks build a beta named beta by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing announce the start of release testing on the under the product releases category using this template release testing has started for version of product to get a sneak peak at this upcoming release you can install beta on your testing environment we suggest you test your forms and workflows with this release candidate version and raise any issues that you experience this helps to to discover any potential regressions that wouldn t otherwise be caught during release testing keep an eye on the forum for the release announcement in the next couple of weeks add release notes to the page create a new document for the release in the ensure all issues are in the gh milestone that they re correctly labelled in particular they have the right type ui ux if they change the ui and breaking change if appropriate and have human readable descriptions use to export the issues into our release note format manually document any known migration steps and known issues provide description screenshots videos and anything else to help communicate particularly important changes document any required or recommended upgrades to our other products eg cht conf cht gateway cht android add the release to the and update the eol date and status of previous releases also add a 
link in the release notes section to the new release page assign the pr to the director of technology an sre to review and confirm the documentation on upgrade instructions and breaking changes is sufficient until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic execute the scalability testing suite on the final build and download the scalability results on at medic scalability tag name add the release jtl file to cht core tests scalability previous results more info in the upgrade the demo cht dev instance to this version announce the release on the under the product releases category using this template we re excited to announce the release of version of product new features include key features we ve also implemented loads of other improvements and fixed a heap of bugs read the url for full details following our support policy versions versions are no longer supported projects running these versions should start planning to upgrade in the near future for more details read our check out our to see what we re working on next add one last update to the and use this meeting to lead an internal release retrospective covering what went well and areas to improve for next time mark this issue done and close the milestone
| 1
|
1,091
| 3,560,375,281
|
IssuesEvent
|
2016-01-23 01:59:12
|
tsilvers/GTEST
|
https://api.github.com/repos/tsilvers/GTEST
|
closed
|
Background processing of credit card payments
|
Payment Processing Priority - High
|
Currently, credit card payments require that the user's browser navigate multiple pages as each processor is attempted. This opens opportunities for errors or interruption. Communication with the various processors should not involve the users' browser.
|
1.0
|
Background processing of credit card payments - Currently, credit card payments require that the user's browser navigate multiple pages as each processor is attempted. This opens opportunities for errors or interruption. Communication with the various processors should not involve the users' browser.
|
process
|
background processing of credit card payments currently credit card payments require that the user s browser navigate multiple pages as each processor is attempted this opens opportunities for errors or interruption communication with the various processors should not involve the users browser
| 1
|
5,877
| 8,700,266,777
|
IssuesEvent
|
2018-12-05 08:09:43
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
error on click "Enter" on the last element in task list
|
Process bug bug hotfix
|
error message: cannot find property "._id" of undefined
|
1.0
|
error on click "Enter" on the last element in task list - error message: cannot find property "._id" of undefined
|
process
|
error on click enter on the last element in task list error message cannot find property id of undefined
| 1
|
397,435
| 11,728,376,849
|
IssuesEvent
|
2020-03-10 17:26:54
|
kubernetes-sigs/cluster-api-provider-aws
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
|
closed
|
Evaluate conversion webhook behavior based on liggitt's review of Cluster API
|
lifecycle/active priority/important-soon
|
- Take into account the feedback Jordan Liggitt gave during the API review session for Cluster API and ensure that we address the same issues here as well
Cluster API review feedback notes: https://docs.google.com/document/d/1fQNlqsDkvEggWFi51GVxOglL2P1Bvo2JhZlMhm2d-Co/edit#heading=h.2zz7is11zbh5
/priority important-soon
/milestone v0.5.0
|
1.0
|
Evaluate conversion webhook behavior based on liggitt's review of Cluster API - - Take into account the feedback Jordan Liggitt gave during the API review session for Cluster API and ensure that we address the same issues here as well
Cluster API review feedback notes: https://docs.google.com/document/d/1fQNlqsDkvEggWFi51GVxOglL2P1Bvo2JhZlMhm2d-Co/edit#heading=h.2zz7is11zbh5
/priority important-soon
/milestone v0.5.0
|
non_process
|
evaluate conversion webhook behavior based on liggitt s review of cluster api take into account the feedback jordan liggitt gave during the api review session for cluster api and ensure that we address the same issues here as well cluster api review feedback notes priority important soon milestone
| 0
|
44,103
| 5,584,067,165
|
IssuesEvent
|
2017-03-29 03:11:27
|
SickBoySB/cecommpatch
|
https://api.github.com/repos/SickBoySB/cecommpatch
|
closed
|
Gravestones are misaligned in graveyards
|
bug fixed tested
|
Maybe change the fixed width a bit (in commands.xml)? Not sure how that will impact rotation... need to test it.
|
1.0
|
Gravestones are misaligned in graveyards - Maybe change the fixed width a bit (in commands.xml)? Not sure how that will impact rotation... need to test it.
|
non_process
|
gravestones are misaligned in graveyards maybe change the fixed width a bit in commands xml not sure how that will impact rotation need to test it
| 0
|
17,975
| 23,986,131,857
|
IssuesEvent
|
2022-09-13 19:14:28
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Fix flake test: "experimentalStudio is not a valid config for component testing"
|
process: flaky test topic: flake ❄️ pkg/launchpad stage: review routed-to-ct
|
### Current behavior
Fails sometimes on CI, sometimes on Linux, usually on Windows. I think it is a detached-DOM-related issue (maybe).
### Desired behavior
Does not fail.
### Test code to reproduce
This is an example: https://app.circleci.com/pipelines/github/cypress-io/cypress/43088/workflows/d2b0ef06-de0a-4040-bd99-b40c726ff83e/jobs/1796860
Just run it a few times locally - it should fail eventually, even on Linux/Mac.
### Cypress Version
10.7
### Node version
16.16
### Operating System
Linux, Mac, Windows
### Debug Logs
_No response_
### Other
_No response_
|
1.0
|
Fix flake test: "experimentalStudio is not a valid config for component testing" - ### Current behavior
Fails sometimes on CI, sometimes on Linux, usually on Windows. I think it is a detached-DOM-related issue (maybe).
### Desired behavior
Does not fail.
### Test code to reproduce
This is an example: https://app.circleci.com/pipelines/github/cypress-io/cypress/43088/workflows/d2b0ef06-de0a-4040-bd99-b40c726ff83e/jobs/1796860
Just run it a few times locally - it should fail eventually, even on Linux/Mac.
### Cypress Version
10.7
### Node version
16.16
### Operating System
Linux, Mac, Windows
### Debug Logs
_No response_
### Other
_No response_
|
process
|
fix flake test experimentalstudio is not a valid config for component testing current behavior fails sometimes on ci sometimes linux usually windows i think it is a detached dom related issue maybe desired behavior does not fail test code to reproduce this is an example just run it a few times locally it should fail eventually even on linux mac cypress version node version operating system linux mac windows debug logs no response other no response
| 1
|
1,851
| 4,651,131,334
|
IssuesEvent
|
2016-10-03 08:53:21
|
openvstorage/alba-asdmanager
|
https://api.github.com/repos/openvstorage/alba-asdmanager
|
closed
|
etcd proxy should be set up through asd-manager-setup during install of hyperscale/geoscale
|
process_wontfix type_feature
|
It is very strange that during the hyperscale/geoscale setup the storage nodes still need a manual setup for a local etcd proxy.
Current situation for the asd-manager setup unattended install:
```
{
"asdmanager":
{
"api_ip": "1.2.3.5"
}
}
```
Possible future situation:
```
{
"asdmanager":
{
"api_ip": "1.2.3.5",
"etcd_proxy": {
"node1": { "storagerouterid": "KLdJMbcFI6advVlE", "etcd_settings": "http://172.19.12.31:2380" },
"node2": { "storagerouterid": "7UWCEYyOGShJdTop", "etcd_settings": "http://172.19.12.32:2380" }
}
}
}
```
|
1.0
|
etcd proxy should be set up through asd-manager-setup during install of hyperscale/geoscale - It is very strange that during the hyperscale/geoscale setup the storage nodes still need a manual setup for a local etcd proxy.
Current situation for the asd-manager setup unattended install:
```
{
"asdmanager":
{
"api_ip": "1.2.3.5"
}
}
```
Possible future situation:
```
{
"asdmanager":
{
"api_ip": "1.2.3.5",
"etcd_proxy": {
"node1": { "storagerouterid": "KLdJMbcFI6advVlE", "etcd_settings": "http://172.19.12.31:2380" },
"node2": { "storagerouterid": "7UWCEYyOGShJdTop", "etcd_settings": "http://172.19.12.32:2380" }
}
}
}
```
|
process
|
etcd proxy should be set up through asd manager setup during install of hyperscale geoscale it is very strange that during the hyperscale geoscale setup the storagenodes still need a manual setup for a local etcd proxy currenty situation for asd manager setup unattended install asdmanager api ip possible future situation asdmanager api ip etcd proxy storagerouterid etcd settings storagerouterid etcd settings
| 1
|
7,258
| 10,420,033,317
|
IssuesEvent
|
2019-09-15 20:59:49
|
MikePopoloski/slang
|
https://api.github.com/repos/MikePopoloski/slang
|
closed
|
Add the concept of "controlling macros" to include files
|
area-preprocessor medium
|
Compilers like GCC and Clang notice the common `ifndef`/`define` include-guard pattern at the top of include files and use it as a marker to skip lexing the entire file if it's already been seen. As an optimization we should do the same.
This is low priority.
|
1.0
|
Add the concept of "controlling macros" to include files - Compilers like GCC and Clang notice the common `ifndef`/`define` include-guard pattern at the top of include files and use it as a marker to skip lexing the entire file if it's already been seen. As an optimization we should do the same.
This is low priority.
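For context, here is a minimal sketch of the bookkeeping such an optimization could use; this is illustrative only, not slang's implementation, and every name in it is hypothetical.
```python
# Illustrative sketch only -- not slang's implementation; all names are hypothetical.
defined_macros = set()     # macros currently defined by the preprocessor
controlling_macro = {}     # include path -> guard macro observed on the first lex

def include_file(path, lex_file):
    """Lex an include file unless its controlling macro is already defined."""
    guard = controlling_macro.get(path)
    if guard is not None and guard in defined_macros:
        return False  # whole file skipped, no lexing performed
    guard = lex_file(path)  # lexing reports the ifndef/define guard macro, if any
    if guard is not None:
        controlling_macro[path] = guard
        defined_macros.add(guard)  # folded in here; normally done by the define directive
    return True

# Toy usage: the second inclusion of "foo.svh" is skipped without lexing it again.
print(include_file("foo.svh", lambda p: "FOO_SVH"))  # True  (lexed)
print(include_file("foo.svh", lambda p: "FOO_SVH"))  # False (skipped)
```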
|
process
|
add the concept of controlling macros to include files compilers like gcc and clang notice the common ifndef define pattern at the top of include files and use it as a marker to skip lexing the entire file if it s already been seen as an optimization we should do the same this is low priority
| 1
|
35,254
| 6,425,771,519
|
IssuesEvent
|
2017-08-09 16:01:44
|
Nick-Lucas/EntryPoint
|
https://api.github.com/repos/Nick-Lucas/EntryPoint
|
closed
|
Include 'Best Practice' section in the documentation
|
documentation
|
Right now a user may have to read the entire documentation to get an idea of a complete implementation
|
1.0
|
Include 'Best Practice' section in the documentation - Right now a user may have to read the entire documentation to get an idea of a complete implementation
|
non_process
|
include best practice section in the documentation right now a user may have to read the entire documentation to get an idea of a complete implementation
| 0
|
8,159
| 11,364,214,349
|
IssuesEvent
|
2020-01-27 07:36:39
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
BigQuery: the test for load partitioned table sample fails
|
api: bigquery testing type: process
|
Running the snippet tests on the current `master` fails:
```
$ nox -f noxfile.py -s snippets-3.7
```
The test in question is `test_client_load_partitioned_table`.
|
1.0
|
BigQuery: the test for load partitioned table sample fails - Running the snippet tests on the current `master` fails:
```
$ nox -f noxfile.py -s snippets-3.7
```
The test in question is `test_client_load_partitioned_table`.
|
process
|
bigquery the test for load partitioned table sample fails running the snippet tests on the current master fails nox f noxfile py s snippets the test in question is test client load partitioned table
| 1
|
557,076
| 16,499,522,725
|
IssuesEvent
|
2021-05-25 13:29:39
|
Inter-Actief/amelie
|
https://api.github.com/repos/Inter-Actief/amelie
|
closed
|
Logging in to mailing@inter-actief.net Gmail-account leads to SAML2-error
|
Priority bug
|
STR:
- Enter mailing@inter-actief.net while trying to sign in to Gmail (or, presumably, the email address of any account without an actual IA user attached to it).
- Get redirected to the Inter-Actief authentication flow.
- Log in with the correct Inter-Actief account credentials.
Results:
- Expected: get redirected to Gmail, logged in as the correct Inter-Actief user.
- Actual: the following SAML2-error at the URL [https://www.inter-actief.utwente.nl/saml2idp/login/process/](https://www.inter-actief.utwente.nl/saml2idp/login/process/)
```
Error during SAML2 authentication
NameError
name 'user' is not defined
```
This issue is of high importance, as it prevents us from sending company mailings.
|
1.0
|
Logging in to mailing@inter-actief.net Gmail-account leads to SAML2-error - STR:
- Enter mailing@inter-actief.net while trying to sign in to Gmail (or, presumably, the email address of any account without an actual IA user attached to it).
- Get redirected to the Inter-Actief authentication flow.
- Log in with the correct Inter-Actief account credentials.
Results:
- Expected: get redirected to Gmail, logged in as the correct Inter-Actief user.
- Actual: the following SAML2-error at the URL [https://www.inter-actief.utwente.nl/saml2idp/login/process/](https://www.inter-actief.utwente.nl/saml2idp/login/process/)
```
Error during SAML2 authentication
NameError
name 'user' is not defined
```
This issue is of high importance, as it prevents us from sending company mailings.
|
non_process
|
logging in to mailing inter actief net gmail account leads to error str enter mailing inter actief net while trying to sign in to gmail or enter the email address of supposedly any account without an actual ia user attached to it get redirected to the inter actief authentication flow log in with the correct inter actief account credentials results expected get redirected to gmail logged in as the correct inter actief user actual the following error at the url error during authentication nameerror name user is not defined this issue is of high importance as it prevents us from sending company mailings
| 0
|
9,046
| 12,130,108,028
|
IssuesEvent
|
2020-04-23 00:30:41
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from appengine/standard/mailjet/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from appengine/standard/mailjet/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from appengine/standard/mailjet/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/mailjet/requirements-test.txt
|
process
|
remove gcp devrel py tools from appengine standard mailjet requirements test txt remove gcp devrel py tools from appengine standard mailjet requirements test txt
| 1
|
22,006
| 30,512,381,370
|
IssuesEvent
|
2023-07-18 22:08:44
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
procpath 1.7.0 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/procpath
https://inspector.pypi.io/project/procpath
```{
"dependency": "procpath",
"version": "1.7.0",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "Procpath-1.7.0/procpath/utility.py:27",
"code": " env = subprocess.check_output('\\n'.join(script), shell=True, encoding='utf-8')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "Procpath-1.7.0/procpath/test.py:2280",
"code": " p = subprocess.Popen(\n ['timeout', '0.25', 'tail', '---disable-inotify', '-f', f'{f.name}', f.name],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subp... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp4qj9tv9b/procpath"
}
}```
|
1.0
|
procpath 1.7.0 has 2 GuardDog issues - https://pypi.org/project/procpath
https://inspector.pypi.io/project/procpath
```{
"dependency": "procpath",
"version": "1.7.0",
"result": {
"issues": 2,
"errors": {},
"results": {
"exec-base64": [
{
"location": "Procpath-1.7.0/procpath/utility.py:27",
"code": " env = subprocess.check_output('\\n'.join(script), shell=True, encoding='utf-8')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
],
"silent-process-execution": [
{
"location": "Procpath-1.7.0/procpath/test.py:2280",
"code": " p = subprocess.Popen(\n ['timeout', '0.25', 'tail', '---disable-inotify', '-f', f'{f.name}', f.name],\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subp... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp4qj9tv9b/procpath"
}
}```
|
process
|
procpath has guarddog issues dependency procpath version result issues errors results exec location procpath procpath utility py code env subprocess check output n join script shell true encoding utf message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n silent process execution location procpath procpath test py code p subprocess popen n n stdin subprocess devnull n stdout subprocess devnull n stderr subp message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp procpath
| 1
|
10,738
| 12,700,597,949
|
IssuesEvent
|
2020-06-22 16:36:00
|
jenkinsci/dark-theme-plugin
|
https://api.github.com/repos/jenkinsci/dark-theme-plugin
|
closed
|
Transparent buttons are not adapted for dark mode
|
core-compatibility
|
The hover & active effects for transparent buttons are too bright as they have a hardcoded value:
<img width="540" alt="Captura de pantalla 2020-06-22 a las 15 25 22" src="https://user-images.githubusercontent.com/5738588/85292894-ddf5e900-b49c-11ea-909d-685bce5ee941.png">
|
True
|
Transparent buttons are not adapted for dark mode - The hover & active effects for transparent buttons are too bright as they have a hardcoded value:
<img width="540" alt="Captura de pantalla 2020-06-22 a las 15 25 22" src="https://user-images.githubusercontent.com/5738588/85292894-ddf5e900-b49c-11ea-909d-685bce5ee941.png">
|
non_process
|
transparent buttons are not adapted for dark mode the hover active effects for transparent buttons are too bright as they have a hardcoded value img width alt captura de pantalla a las src
| 0
|
318,334
| 27,297,691,684
|
IssuesEvent
|
2023-02-23 21:57:19
|
nucleus-security/Test-repo
|
https://api.github.com/repos/nucleus-security/Test-repo
|
opened
|
Nucleus - [High] - 440057
|
Test
|
Source: QUALYS
Finding Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: Successful exploitation allows attacker to compromise the system.
Target(s): Asset name: 192.168.56.147
IP: 192.168.56.147
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2016-July/021977.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2016:1406: centos 6 (https://lists.centos.org/pipermail/centos-announce/2016-July/021977.html)
References:
QID:440057
CVE:CVE-2016-4565
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2016:1406 centos 6
Bugtraq IDs:90301
Severity: High
Date Discovered: 2022-11-12 08:04:44
Nucleus Notification Rules Triggered: Rule GitHub
Project Name: 6716
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDU3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false
|
1.0
|
Nucleus - [High] - 440057 - Source: QUALYS
Finding Description: CentOS has released security update for kernel to fix the vulnerabilities. Affected Products: centos 6
Impact: Successful exploitation allows attacker to compromise the system.
Target(s): Asset name: 192.168.56.147
IP: 192.168.56.147
Solution: To resolve this issue, upgrade to the latest packages which contain a patch. Refer to CentOS advisory centos 6 (https://lists.centos.org/pipermail/centos-announce/2016-July/021977.html) for updates and patch information.
Patch:
Following are links for downloading patches to fix the vulnerabilities:
CESA-2016:1406: centos 6 (https://lists.centos.org/pipermail/centos-announce/2016-July/021977.html)
References:
QID:440057
CVE:CVE-2016-4565
Category:CentOS
PCI Flagged:yes
Vendor References:CESA-2016:1406 centos 6
Bugtraq IDs:90301
Severity: High
Date Discovered: 2022-11-12 08:04:44
Nucleus Notification Rules Triggered: Rule GitHub
Project Name: 6716
Please see Nucleus for more information on these vulnerabilities:https://192.168.56.101/nucleus/public/app/index.html#vuln/201000007/NDQwMDU3/UVVBTFlT/VnVsbg--/false/MjAxMDAwMDA3/c3VtbWFyeQ--/false
|
non_process
|
nucleus source qualys finding description centos has released security update for kernel to fix the vulnerabilities affected products centos impact successful exploitation allows attacker to compromise the system target s asset name ip solution to resolve this issue upgrade to the latest packages which contain a patch refer to centos advisory centos for updates and patch information patch following are links for downloading patches to fix the vulnerabilities cesa centos references qid cve cve category centos pci flagged yes vendor references cesa centos bugtraq ids severity high date discovered nucleus notification rules triggered rule github project name please see nucleus for more information on these vulnerabilities
| 0
|
62,690
| 14,656,587,570
|
IssuesEvent
|
2020-12-28 13:45:29
|
fu1771695yongxie/gitbook
|
https://api.github.com/repos/fu1771695yongxie/gitbook
|
opened
|
CVE-2018-3737 (High) detected in sshpk-1.7.4.tgz
|
security vulnerability
|
## CVE-2018-3737 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sshpk-1.7.4.tgz</b></p></summary>
<p>A library for finding and using SSH public keys</p>
<p>Library home page: <a href="https://registry.npmjs.org/sshpk/-/sshpk-1.7.4.tgz">https://registry.npmjs.org/sshpk/-/sshpk-1.7.4.tgz</a></p>
<p>Path to dependency file: gitbook/package.json</p>
<p>Path to vulnerable library: gitbook/node_modules/npm/node_modules/request/node_modules/http-signature/node_modules/sshpk/package.json</p>
<p>
Dependency Hierarchy:
- npm-3.9.2.tgz (Root Library)
- request-2.72.0.tgz
- http-signature-1.1.1.tgz
- :x: **sshpk-1.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/gitbook/commit/004dead9ea0900f68966817c7a0134682f0a3d5c">004dead9ea0900f68966817c7a0134682f0a3d5c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
sshpk is vulnerable to ReDoS when parsing crafted invalid public keys.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3737>CVE-2018-3737</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hackerone.com/reports/319593">https://hackerone.com/reports/319593</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 1.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-3737 (High) detected in sshpk-1.7.4.tgz - ## CVE-2018-3737 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sshpk-1.7.4.tgz</b></p></summary>
<p>A library for finding and using SSH public keys</p>
<p>Library home page: <a href="https://registry.npmjs.org/sshpk/-/sshpk-1.7.4.tgz">https://registry.npmjs.org/sshpk/-/sshpk-1.7.4.tgz</a></p>
<p>Path to dependency file: gitbook/package.json</p>
<p>Path to vulnerable library: gitbook/node_modules/npm/node_modules/request/node_modules/http-signature/node_modules/sshpk/package.json</p>
<p>
Dependency Hierarchy:
- npm-3.9.2.tgz (Root Library)
- request-2.72.0.tgz
- http-signature-1.1.1.tgz
- :x: **sshpk-1.7.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/gitbook/commit/004dead9ea0900f68966817c7a0134682f0a3d5c">004dead9ea0900f68966817c7a0134682f0a3d5c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
sshpk is vulnerable to ReDoS when parsing crafted invalid public keys.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3737>CVE-2018-3737</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://hackerone.com/reports/319593">https://hackerone.com/reports/319593</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 1.13.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in sshpk tgz cve high severity vulnerability vulnerable library sshpk tgz a library for finding and using ssh public keys library home page a href path to dependency file gitbook package json path to vulnerable library gitbook node modules npm node modules request node modules http signature node modules sshpk package json dependency hierarchy npm tgz root library request tgz http signature tgz x sshpk tgz vulnerable library found in head commit a href found in base branch master vulnerability details sshpk is vulnerable to redos when parsing crafted invalid public keys publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
149,897
| 11,939,280,373
|
IssuesEvent
|
2020-04-02 14:59:28
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: scaledata/filesystem_simulator/nodes=3 failed
|
C-test-failure O-roachtest O-robot branch-release-19.2 release-blocker
|
[(roachtest).scaledata/filesystem_simulator/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1817002&tab=buildLog) on [release-19.2@a50439b7808533873305957259bb6d1d48965ed3](https://github.com/cockroachdb/cockroach/commits/a50439b7808533873305957259bb6d1d48965ed3):
```
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2010
main.runSqlapp.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/scaledata.go:108
main.(*monitor).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2344
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
- error with embedded safe details: output in %s
-- arg 1: <string>
- output in run_064603.363_n4_filesystemsimulator_:
- error with attached stack trace:
main.execCmd
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:406
main.(*cluster).RunL
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2019
main.(*cluster).RunE
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2000
main.runSqlapp.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/scaledata.go:108
main.(*monitor).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2344
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
- error with embedded safe details: %s returned:
stderr:
%s
stdout:
%s
-- arg 1: <string>
-- arg 2: <string>
-- arg 3: <string>
- /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1817002-1584600112-03-n4cpu4:4 -- ./filesystem_simulator --duration_secs=600 --num_workers=16 --cockroach_ip_addresses_csv='10.128.0.222:26257,10.128.0.229:26257,10.128.0.221:26257' returned:
stderr:
with error dial tcp 10.128.0.229:26257: connect: connection refused: ... Retrying after sleeping 10ns
2020/03/19 06:53:38 Deleted &{5994b736-b28d-4270-8610-b10fdece2938 1 0 98 default}
2020/03/19 06:53:38 RobustDB.RandomDB chose DB at index 1
2020/03/19 06:53:38 ExecuteTx retry attempt 3 failed, started at 2020-03-19 06:53:38.945193739 +0000 UTC m=+454.846799571, now = 2020-03-19 06:53:38.946391341 +0000 UTC m=+454.847997235, took 1.197664ms
2020/03/19 06:53:38 Attempt failed with error dial tcp 10.128.0.229:26257: connect: connection refused: ... Retrying after sleeping 20ns
2020/03/19 06:53:38 RobustDB.RandomDB chose DB at index 0
2020/03/19 06:53:38 ExecuteTx retry attempt 1 failed, started at 2020-03-19 06:53:37.896513976 +0000 UTC m=+453.798119813, now = 2020-03-19 06:53:38.969886835 +0000 UTC m=+454.871492747, took 1.073372934s
2020/03/19 06:53:38 Aborting Retries because this error of type *errors.errorString is not retryable : unexpected EOF
2020/03/19 06:53:38 unexpected EOF
Error: exit status 255
stdout::
- exit status 1
```
<details><summary>More</summary><p>
Artifacts: [/scaledata/filesystem_simulator/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=1817002&tab=artifacts#/scaledata/filesystem_simulator/nodes=3)
Related:
- #46291 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #46282 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003181957_v20.1.0-beta.3](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003181957_v20.1.0-beta.3) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #43273 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ascaledata%2Ffilesystem_simulator%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: scaledata/filesystem_simulator/nodes=3 failed - [(roachtest).scaledata/filesystem_simulator/nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1817002&tab=buildLog) on [release-19.2@a50439b7808533873305957259bb6d1d48965ed3](https://github.com/cockroachdb/cockroach/commits/a50439b7808533873305957259bb6d1d48965ed3):
```
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2010
main.runSqlapp.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/scaledata.go:108
main.(*monitor).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2344
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
- error with embedded safe details: output in %s
-- arg 1: <string>
- output in run_064603.363_n4_filesystemsimulator_:
- error with attached stack trace:
main.execCmd
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:406
main.(*cluster).RunL
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2019
main.(*cluster).RunE
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2000
main.runSqlapp.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/scaledata.go:108
main.(*monitor).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2344
github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
- error with embedded safe details: %s returned:
stderr:
%s
stdout:
%s
-- arg 1: <string>
-- arg 2: <string>
-- arg 3: <string>
- /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1817002-1584600112-03-n4cpu4:4 -- ./filesystem_simulator --duration_secs=600 --num_workers=16 --cockroach_ip_addresses_csv='10.128.0.222:26257,10.128.0.229:26257,10.128.0.221:26257' returned:
stderr:
with error dial tcp 10.128.0.229:26257: connect: connection refused: ... Retrying after sleeping 10ns
2020/03/19 06:53:38 Deleted &{5994b736-b28d-4270-8610-b10fdece2938 1 0 98 default}
2020/03/19 06:53:38 RobustDB.RandomDB chose DB at index 1
2020/03/19 06:53:38 ExecuteTx retry attempt 3 failed, started at 2020-03-19 06:53:38.945193739 +0000 UTC m=+454.846799571, now = 2020-03-19 06:53:38.946391341 +0000 UTC m=+454.847997235, took 1.197664ms
2020/03/19 06:53:38 Attempt failed with error dial tcp 10.128.0.229:26257: connect: connection refused: ... Retrying after sleeping 20ns
2020/03/19 06:53:38 RobustDB.RandomDB chose DB at index 0
2020/03/19 06:53:38 ExecuteTx retry attempt 1 failed, started at 2020-03-19 06:53:37.896513976 +0000 UTC m=+453.798119813, now = 2020-03-19 06:53:38.969886835 +0000 UTC m=+454.871492747, took 1.073372934s
2020/03/19 06:53:38 Aborting Retries because this error of type *errors.errorString is not retryable : unexpected EOF
2020/03/19 06:53:38 unexpected EOF
Error: exit status 255
stdout::
- exit status 1
```
<details><summary>More</summary><p>
Artifacts: [/scaledata/filesystem_simulator/nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=1817002&tab=artifacts#/scaledata/filesystem_simulator/nodes=3)
Related:
- #46291 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #46282 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202003181957_v20.1.0-beta.3](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202003181957_v20.1.0-beta.3) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #43273 roachtest: scaledata/filesystem_simulator/nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ascaledata%2Ffilesystem_simulator%2Fnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest scaledata filesystem simulator nodes failed on home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runsqlapp home agent work go src github com cockroachdb cockroach pkg cmd roachtest scaledata go main monitor go home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go github com cockroachdb cockroach vendor golang org x sync errgroup group go home agent work go src github com cockroachdb cockroach vendor golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s error with embedded safe details output in s arg output in run filesystemsimulator error with attached stack trace main execcmd home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster runl home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster rune home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runsqlapp home agent work go src github com cockroachdb cockroach pkg cmd roachtest scaledata go main monitor go home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go github com cockroachdb cockroach vendor golang org x sync errgroup group go home agent work go src github com cockroachdb cockroach vendor golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s error with embedded safe details s returned stderr s stdout s arg arg arg home agent work go src github com cockroachdb cockroach bin roachprod run teamcity filesystem simulator duration secs num workers cockroach ip addresses csv returned stderr with error dial tcp connect connection refused retrying after sleeping deleted default robustdb randomdb chose db at index executetx retry attempt failed started at utc m now utc m took attempt failed with error dial tcp connect connection refused retrying after sleeping robustdb randomdb chose db at index executetx retry attempt failed started at utc m now utc m took aborting retries because this error of type errors errorstring is not retryable unexpected eof unexpected eof error exit status stdout exit status more artifacts related roachtest scaledata filesystem simulator nodes failed roachtest scaledata filesystem simulator nodes failed roachtest scaledata filesystem simulator nodes failed powered by
| 0
|
423,020
| 12,289,598,949
|
IssuesEvent
|
2020-05-09 22:23:13
|
googleapis/release-please
|
https://api.github.com/repos/googleapis/release-please
|
opened
|
first release is listed as "commits since undefined"
|
priority: p2 type: bug
|
If there have been no commits since the last release, we should instead have a message along the lines of "this was the first release".
|
1.0
|
first release is listed as "commits since undefined" - If there have been no commits since the last release, we should instead have a message along the lines of "this was the first release".
|
non_process
|
first release is listed as commits since undefined if there have been no commits since the last release we should instead have a message along the lines of this was the first release
| 0
|
603,248
| 18,535,099,731
|
IssuesEvent
|
2021-10-21 10:35:14
|
AY2122S1-CS2113T-W13-2/tp
|
https://api.github.com/repos/AY2122S1-CS2113T-W13-2/tp
|
closed
|
'Tagging' functionality misc.
|
priority.Medium type.Misc
|
~1. Upgrade `find /tag tag1 + tag2 + ...` function, so that it is sorted lexically~
~2. Refactor code to make it more lean / understandable~
- Organise printing messages
- Port I/O tests to JUnit
|
1.0
|
'Tagging' functionality misc. - ~1. Upgrade `find /tag tag1 + tag2 + ...` function, so that it is sorted lexically~
~2. Refactor code to make it more lean / understandable~
- Organise printing messages
- Port I/O tests to JUnit
|
non_process
|
tagging functionality misc upgrade find tag function so that it is sorted lexically refactor code to make it more lean understandable organise printing messages port i o tests to junit
| 0
|
130,438
| 27,697,500,678
|
IssuesEvent
|
2023-03-14 04:16:27
|
CarsOk/tienda_ropa
|
https://api.github.com/repos/CarsOk/tienda_ropa
|
opened
|
Create Logos
|
Code
|
## Card
**As** a Sena apprentice
**I want** To create logos in different aspect ratios
**So that** I can add them to the main menu
## Acceptance criteria
-[ ] Quality above 768p
-[ ] Colors matching the client's order
|
1.0
|
Create Logos - ## Card
**As** a Sena apprentice
**I want** To create logos in different aspect ratios
**So that** I can add them to the main menu
## Acceptance criteria
-[ ] Quality above 768p
-[ ] Colors matching the client's order
|
non_process
|
create logos card as a sena apprentice i want to create logos in different aspect ratios so that i can add them to the main menu acceptance criteria quality above colors matching the client s order
| 0
|
103,707
| 16,604,678,270
|
IssuesEvent
|
2021-06-02 01:17:56
|
ikonovalov/spring-distributed-tracing
|
https://api.github.com/repos/ikonovalov/spring-distributed-tracing
|
opened
|
CVE-2020-1757 (High) detected in undertow-core-2.0.19.Final.jar
|
security vulnerability
|
## CVE-2020-1757 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undertow-core-2.0.19.Final.jar</b></p></summary>
<p>Undertow</p>
<p>Path to dependency file: /spring-distributed-tracing/service-reservation/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/io/undertow/undertow-core/2.0.19.Final/undertow-core-2.0.19.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.1.4.RELEASE.jar (Root Library)
- :x: **undertow-core-2.0.19.Final.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in all undertow-2.x.x SP1 versions prior to undertow-2.0.30.SP1, all undertow-1.x.x and undertow-2.x.x versions prior to undertow-2.1.0.Final, where the Servlet container causes servletPath to normalize incorrectly by truncating the path after semicolon which may lead to an application mapping resulting in the security bypass.
<p>Publish Date: 2020-04-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1757>CVE-2020-1757</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1757">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1757</a></p>
<p>Release Date: 2020-04-30</p>
<p>Fix Resolution: io.undertow:undertow-core:2.0.30.Final, io.undertow:undertow-examples:2.0.30.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-1757 (High) detected in undertow-core-2.0.19.Final.jar - ## CVE-2020-1757 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undertow-core-2.0.19.Final.jar</b></p></summary>
<p>Undertow</p>
<p>Path to dependency file: /spring-distributed-tracing/service-reservation/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/io/undertow/undertow-core/2.0.19.Final/undertow-core-2.0.19.Final.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.1.4.RELEASE.jar (Root Library)
- :x: **undertow-core-2.0.19.Final.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in all undertow-2.x.x SP1 versions prior to undertow-2.0.30.SP1, all undertow-1.x.x and undertow-2.x.x versions prior to undertow-2.1.0.Final, where the Servlet container causes servletPath to normalize incorrectly by truncating the path after semicolon which may lead to an application mapping resulting in the security bypass.
<p>Publish Date: 2020-04-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1757>CVE-2020-1757</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1757">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1757</a></p>
<p>Release Date: 2020-04-30</p>
<p>Fix Resolution: io.undertow:undertow-core:2.0.30.Final, io.undertow:undertow-examples:2.0.30.Final</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in undertow core final jar cve high severity vulnerability vulnerable library undertow core final jar undertow path to dependency file spring distributed tracing service reservation pom xml path to vulnerable library root repository io undertow undertow core final undertow core final jar dependency hierarchy spring boot starter undertow release jar root library x undertow core final jar vulnerable library vulnerability details a flaw was found in all undertow x x versions prior to undertow all undertow x x and undertow x x versions prior to undertow final where the servlet container causes servletpath to normalize incorrectly by truncating the path after semicolon which may lead to an application mapping resulting in the security bypass publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io undertow undertow core final io undertow undertow examples final step up your open source security game with whitesource
| 0
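The CVE record above describes a class of bug where the servlet path is truncated at a semicolon, so request routing and a security check can disagree about the same URI. The following is a minimal, hypothetical Python sketch of that mismatch; it is not Undertow's code, and the `/admin` prefix is an invented example.
```python
def servlet_path(raw_uri: str) -> str:
    # Container-style normalization: path parameters after ';' are dropped.
    return raw_uri.split(";", 1)[0]

def raw_uri_is_protected(raw_uri: str) -> bool:
    # A check applied to the raw URI instead of the normalized servlet path.
    return raw_uri.split("?", 1)[0] == "/admin"

request = "/admin;jsessionid=abc"
print(servlet_path(request))          # "/admin" -> the request is routed to the admin resource
print(raw_uri_is_protected(request))  # False   -> the raw-URI check never fires, so access slips through
```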
|
11,071
| 13,907,546,666
|
IssuesEvent
|
2020-10-20 12:45:50
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Alerts API should allow filtering based on Alert type (rule error or rule match)
|
p0 story team:data processing
|
### Description
Alerts API should allow filtering based on Alert type (rule error or rule match)
### Related Services
panther-alerts-api
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
- Alerts API supports an "Alert type" filter that allows retrieving all alerts that are related to rule errors
- Alerts API supports an "Alert type" filter that allows retrieving all alerts that are related to rule errors for a specific Rule ID
|
1.0
|
Alerts API should allow filtering based on Alert type (rule error or rule match) - ### Description
Alerts API should allow filtering based on Alert type (rule error or rule match)
### Related Services
panther-alerts-api
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
- Alerts API supports an "Alert type" filter that allows retrieving all alerts that are related to rule errors
- Alerts API supports an "Alert type" filter that allows retrieving all alerts that are related to rule errors for a specific Rule ID
|
process
|
alerts api should allow filtering based on alert type rule error or rule match description alerts api should allow filtering based on alert type rule error or rule match related services panther alerts api acceptance criteria a concise list of specific user stories that qualify this story as done alerts api supports an alert type filter that allows retrieving all alerts that are related to rule errors alerts api supports an alert type filter that allows retrieving all alerts that are related to rule errors for a specific rule id
| 1
|
673,176
| 22,951,220,178
|
IssuesEvent
|
2022-07-19 07:36:50
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.target.com - site is not usable
|
priority-important browser-focus-geckoview engine-gecko
|
<!-- @browser: Firefox Mobile 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:102.0) Gecko/102.0 Firefox/102.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/107471 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.target.com/cart
**Browser / Version**: Firefox Mobile 102.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
After adding items to cart and clicking on checkout, the page keeps loading without any progress. However, this works on Firefox Android browser just not on Firefox Focus.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/22913a9c-81a8-45cb-9a65-9b3309dac501.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220705093820</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/7/ce0f8f46-681a-4a9d-8bba-84e135bdf31f)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.target.com - site is not usable - <!-- @browser: Firefox Mobile 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:102.0) Gecko/102.0 Firefox/102.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/107471 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.target.com/cart
**Browser / Version**: Firefox Mobile 102.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
After adding items to cart and clicking on checkout, the page keeps loading without any progress. However, this works on Firefox Android browser just not on Firefox Focus.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/7/22913a9c-81a8-45cb-9a65-9b3309dac501.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220705093820</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/7/ce0f8f46-681a-4a9d-8bba-84e135bdf31f)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description missing items steps to reproduce after adding items to cart and clicking on checkout the page keeps loading without any progress however this works on firefox android browser just not on firefox focus view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
178,309
| 6,606,914,284
|
IssuesEvent
|
2017-09-19 03:28:34
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
techcrunch.com - video or audio doesn't play
|
browser-firefox-mobile priority-important status-needstriage
|
<!-- @browser: Firefox Mobile 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:57.0) Gecko/57.0 Firefox/57.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://techcrunch.com/2017/09/18/watch-industrial-robots-play-traditional-instruments/amp/
**Browser / Version**: Firefox Mobile 57.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: doesnt load
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
techcrunch.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0.1; Mobile; rv:57.0) Gecko/57.0 Firefox/57.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://techcrunch.com/2017/09/18/watch-industrial-robots-play-traditional-instruments/amp/
**Browser / Version**: Firefox Mobile 57.0
**Operating System**: Android 6.0.1
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: doesnt load
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
techcrunch com video or audio doesn t play url browser version firefox mobile operating system android tested another browser no problem type video or audio doesn t play description doesnt load steps to reproduce from with ❤️
| 0
|
37,225
| 9,980,269,746
|
IssuesEvent
|
2019-07-10 02:46:01
|
keepassxreboot/keepassxc
|
https://api.github.com/repos/keepassxreboot/keepassxc
|
closed
|
Compilation warning on 2.3.3
|
build system
|
## Expected Behavior
No warning should appear during compilation
## Current Behavior
```
In file included from /usr/include/qt/QtGui/qtguiglobal.h:43,
from /usr/include/qt/QtWidgets/qtwidgetsglobal.h:43,
from /usr/include/qt/QtWidgets/qapplication.h:43,
from /usr/include/qt/QtWidgets/QApplication:1,
from /build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.h:23,
from /build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:20:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp: In static member function ‘static void Application::handleUnixSignal(int)’:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:240:29: warning: ignoring return value of ‘ssize_t write(int, const void*, size_t)’, declared with attribute warn_unused_result [-Wunused-result]
Q_UNUSED(::write(unixSignalSocket[0], &buf, sizeof(buf)));
~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp: In member function ‘void Application::quitBySignal()’:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:252:20: warning: ignoring return value of ‘ssize_t read(int, void*, size_t)’, declared with attribute warn_unused_result [-Wunused-result]
Q_UNUSED(::read(unixSignalSocket[1], &buf, sizeof(buf)));
~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
## Steps to Reproduce (for bugs)
1. Compile 2.3.3 (with gcc 8.1.0)
## Context
Compiling 2.3.3 for Arch Linux package.
|
1.0
|
Compilation warning on 2.3.3 - ## Expected Behavior
No warning should appear during compilation
## Current Behavior
```
In file included from /usr/include/qt/QtGui/qtguiglobal.h:43,
from /usr/include/qt/QtWidgets/qtwidgetsglobal.h:43,
from /usr/include/qt/QtWidgets/qapplication.h:43,
from /usr/include/qt/QtWidgets/QApplication:1,
from /build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.h:23,
from /build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:20:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp: In static member function ‘static void Application::handleUnixSignal(int)’:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:240:29: warning: ignoring return value of ‘ssize_t write(int, const void*, size_t)’, declared with attribute warn_unused_result [-Wunused-result]
Q_UNUSED(::write(unixSignalSocket[0], &buf, sizeof(buf)));
~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp: In member function ‘void Application::quitBySignal()’:
/build/keepassxc/src/keepassxc-2.3.3/src/gui/Application.cpp:252:20: warning: ignoring return value of ‘ssize_t read(int, void*, size_t)’, declared with attribute warn_unused_result [-Wunused-result]
Q_UNUSED(::read(unixSignalSocket[1], &buf, sizeof(buf)));
~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
## Steps to Reproduce (for bugs)
1. Compile 2.3.3 (with gcc 8.1.0)
## Context
Compiling 2.3.3 for Arch Linux package.
|
non_process
|
compilation warning on expected behavior no warning should appear during compilation current behavior in file included from usr include qt qtgui qtguiglobal h from usr include qt qtwidgets qtwidgetsglobal h from usr include qt qtwidgets qapplication h from usr include qt qtwidgets qapplication from build keepassxc src keepassxc src gui application h from build keepassxc src keepassxc src gui application cpp build keepassxc src keepassxc src gui application cpp in static member function ‘static void application handleunixsignal int ’ build keepassxc src keepassxc src gui application cpp warning ignoring return value of ‘ssize t write int const void size t ’ declared with attribute warn unused result q unused write unixsignalsocket buf sizeof buf build keepassxc src keepassxc src gui application cpp in member function ‘void application quitbysignal ’ build keepassxc src keepassxc src gui application cpp warning ignoring return value of ‘ssize t read int void size t ’ declared with attribute warn unused result q unused read unixsignalsocket buf sizeof buf steps to reproduce for bugs compile with gcc context compiling for arch linux package
| 0
|
5,740
| 8,391,994,301
|
IssuesEvent
|
2018-10-09 16:21:15
|
bull313/Musika
|
https://api.github.com/repos/bull313/Musika
|
opened
|
Requirement 12: Jump/Goto
|
compiler requirement
|
The Musika language shall have the ability to jump to a particular section of music in order to ignore the rest of it (can be used for debugging)
|
1.0
|
Requirement 12: Jump/Goto - The Musika language shall have the ability to jump to a particular section of music in order to ignore the rest of it (can be used for debugging)
|
non_process
|
requirement jump goto the musika language shall have the ability to jump to a particular section of music in order to ignore the rest of it can be used for debugging
| 0
|
5,778
| 8,616,935,900
|
IssuesEvent
|
2018-11-20 02:40:44
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[0.31.0] Unable to get tables name in a custom query
|
Bug Query Processor
|
Hello,
First, thanks for your amazing product. We recently updated our Metabase to version 0.31 (from 0.30) and encountered a server error while trying to get table names for a custom query.
A hard refresh on our browser or rescan of our databases' tables didn't help us.
Please find here our configuration:
- Your browser and the version: Google Chrome, version 70.0.3538.77 (64 bits)
- Your operating system: Ubuntu 18.04
- Your databases: Redshift and MariaDB
- Metabase version: 0.31.0 (stable release)
- Metabase hosting environment: Docker on Ubuntu
- Metabase internal database: MariaDB 10.2.12
JS errors:

Metabase Logs:
> Nov 09 19:48:51 ERROR metabase.middleware :: GET /api/database 500 (886 ms) (5 DB calls).
{:value [nil],
:error [(named (named (not (matches-some-precondition? nil)) "Keyword or string") token)],
:message "Input to normalize-token does not match schema: \n\n\t [(named (named (not (matches-some-precondition? nil)) \"Keyword or string\") token)] \n\n",
:type clojure.lang.ExceptionInfo,
:stacktrace
("--> mbql.util$fn__18719$normalize_token__18724.invoke(util.clj:14)"
"mbql.normalize$normalize_template_tags$iter__19397__19401$fn__19402.invoke(normalize.clj:201)"
"mbql.normalize$normalize_template_tags.invokeStatic(normalize.clj:198)"
"mbql.normalize$normalize_template_tags.invoke(normalize.clj:194)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:253)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"mbql.normalize$normalize_tokens$iter__19441__19445$fn__19446.invoke(normalize.clj:265)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:263)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"mbql.normalize$normalize_tokens$iter__19441__19445$fn__19446.invoke(normalize.clj:265)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:263)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"models.interface$maybe_normalize.invokeStatic(interface.clj:62)"
"models.interface$maybe_normalize.invoke(interface.clj:61)"
"api.database$source_query_cards.invokeStatic(database.clj:105)"
"api.database$source_query_cards.invoke(database.clj:102)"
"api.database$cards_virtual_tables.invokeStatic(database.clj:118)"
"api.database$cards_virtual_tables.doInvoke(database.clj:113)"
"api.database$saved_cards_virtual_db_metadata.invokeStatic(database.clj:123)"
"api.database$saved_cards_virtual_db_metadata.doInvoke(database.clj:121)"
"api.database$add_virtual_tables_for_saved_cards.invokeStatic(database.clj:132)"
"api.database$add_virtual_tables_for_saved_cards.invoke(database.clj:131)"
"api.database$dbs_list.invokeStatic(database.clj:139)"
"api.database$dbs_list.invoke(database.clj:137)"
"api.database$fn__45154.invokeStatic(database.clj:150)"
"api.database$fn__45154.invoke(database.clj:143)"
"middleware$enforce_authentication$fn__55497.invoke(middleware.clj:113)"
"api.routes$fn__55649.invokeStatic(routes.clj:65)"
"api.routes$fn__55649.invoke(routes.clj:65)"
"routes$fn__56385$fn__56386.doInvoke(routes.clj:108)"
"routes$fn__56385.invokeStatic(routes.clj:103)"
"routes$fn__56385.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__55632.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__55610$fn__55612.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__55610.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__55552.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__57236.invoke(core.clj:67)"
"middleware$bind_current_user$fn__55502.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__55562.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__55555.invoke(middleware.clj:262)")}
If you need further information, do not hesitate!
Best regards,
|
1.0
|
[0.31.0] Unable to get tables name in a custom query - Hello,
First, thanks for your amazing product. We recently updated our Metabase to version 0.31 (from 0.30) and encountered a server error while trying to get table names for a custom query.
A hard refresh on our browser or rescan of our databases' tables didn't help us.
Please find here our configuration:
- Your browser and the version: Google Chrome, version 70.0.3538.77 (64 bits)
- Your operating system: Ubuntu 18.04
- Your databases: Redshift and MariaDB
- Metabase version: 0.31.0 (stable release)
- Metabase hosting environment: Docker on Ubuntu
- Metabase internal database: MariaDB 10.2.12
JS errors:

Metabase Logs:
> Nov 09 19:48:51 ERROR metabase.middleware :: GET /api/database 500 (886 ms) (5 DB calls).
{:value [nil],
:error [(named (named (not (matches-some-precondition? nil)) "Keyword or string") token)],
:message "Input to normalize-token does not match schema: \n\n\t [(named (named (not (matches-some-precondition? nil)) \"Keyword or string\") token)] \n\n",
:type clojure.lang.ExceptionInfo,
:stacktrace
("--> mbql.util$fn__18719$normalize_token__18724.invoke(util.clj:14)"
"mbql.normalize$normalize_template_tags$iter__19397__19401$fn__19402.invoke(normalize.clj:201)"
"mbql.normalize$normalize_template_tags.invokeStatic(normalize.clj:198)"
"mbql.normalize$normalize_template_tags.invoke(normalize.clj:194)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:253)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"mbql.normalize$normalize_tokens$iter__19441__19445$fn__19446.invoke(normalize.clj:265)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:263)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"mbql.normalize$normalize_tokens$iter__19441__19445$fn__19446.invoke(normalize.clj:265)"
"mbql.normalize$normalize_tokens.invokeStatic(normalize.clj:263)"
"mbql.normalize$normalize_tokens.doInvoke(normalize.clj:235)"
"models.interface$maybe_normalize.invokeStatic(interface.clj:62)"
"models.interface$maybe_normalize.invoke(interface.clj:61)"
"api.database$source_query_cards.invokeStatic(database.clj:105)"
"api.database$source_query_cards.invoke(database.clj:102)"
"api.database$cards_virtual_tables.invokeStatic(database.clj:118)"
"api.database$cards_virtual_tables.doInvoke(database.clj:113)"
"api.database$saved_cards_virtual_db_metadata.invokeStatic(database.clj:123)"
"api.database$saved_cards_virtual_db_metadata.doInvoke(database.clj:121)"
"api.database$add_virtual_tables_for_saved_cards.invokeStatic(database.clj:132)"
"api.database$add_virtual_tables_for_saved_cards.invoke(database.clj:131)"
"api.database$dbs_list.invokeStatic(database.clj:139)"
"api.database$dbs_list.invoke(database.clj:137)"
"api.database$fn__45154.invokeStatic(database.clj:150)"
"api.database$fn__45154.invoke(database.clj:143)"
"middleware$enforce_authentication$fn__55497.invoke(middleware.clj:113)"
"api.routes$fn__55649.invokeStatic(routes.clj:65)"
"api.routes$fn__55649.invoke(routes.clj:65)"
"routes$fn__56385$fn__56386.doInvoke(routes.clj:108)"
"routes$fn__56385.invokeStatic(routes.clj:103)"
"routes$fn__56385.invoke(routes.clj:103)"
"middleware$catch_api_exceptions$fn__55632.invoke(middleware.clj:436)"
"middleware$log_api_call$fn__55610$fn__55612.invoke(middleware.clj:364)"
"middleware$log_api_call$fn__55610.invoke(middleware.clj:363)"
"middleware$add_security_headers$fn__55552.invoke(middleware.clj:252)"
"core$wrap_streamed_json_response$fn__57236.invoke(core.clj:67)"
"middleware$bind_current_user$fn__55502.invoke(middleware.clj:137)"
"middleware$maybe_set_site_url$fn__55562.invoke(middleware.clj:290)"
"middleware$add_content_type$fn__55555.invoke(middleware.clj:262)")}
If you need further information, do not hesitate!
Best regards,
|
process
|
unable to get tables name in a custom query hello first thanks for your amazing product we recently updated our metabase to version from and encountered a server error while trying to get table names for a custom query a hard refresh on our browser or rescan of our databases tables didn t help us please find here our configuration your browser and the version google chrome version bits your operating system ubuntu your databases redshift and mariadb metabase version stable release metabase hosting environment docker on ubuntu metabase internal database mariadb js errors metabase logs nov error metabase middleware get api database ms db calls value error message input to normalize token does not match schema n n t n n type clojure lang exceptioninfo stacktrace mbql util fn normalize token invoke util clj mbql normalize normalize template tags iter fn invoke normalize clj mbql normalize normalize template tags invokestatic normalize clj mbql normalize normalize template tags invoke normalize clj mbql normalize normalize tokens invokestatic normalize clj mbql normalize normalize tokens doinvoke normalize clj mbql normalize normalize tokens iter fn invoke normalize clj mbql normalize normalize tokens invokestatic normalize clj mbql normalize normalize tokens doinvoke normalize clj mbql normalize normalize tokens iter fn invoke normalize clj mbql normalize normalize tokens invokestatic normalize clj mbql normalize normalize tokens doinvoke normalize clj models interface maybe normalize invokestatic interface clj models interface maybe normalize invoke interface clj api database source query cards invokestatic database clj api database source query cards invoke database clj api database cards virtual tables invokestatic database clj api database cards virtual tables doinvoke database clj api database saved cards virtual db metadata invokestatic database clj api database saved cards virtual db metadata doinvoke database clj api database add virtual tables for saved cards invokestatic database clj api database add virtual tables for saved cards invoke database clj api database dbs list invokestatic database clj api database dbs list invoke database clj api database fn invokestatic database clj api database fn invoke database clj middleware enforce authentication fn invoke middleware clj api routes fn invokestatic routes clj api routes fn invoke routes clj routes fn fn doinvoke routes clj routes fn invokestatic routes clj routes fn invoke routes clj middleware catch api exceptions fn invoke middleware clj middleware log api call fn fn invoke middleware clj middleware log api call fn invoke middleware clj middleware add security headers fn invoke middleware clj core wrap streamed json response fn invoke core clj middleware bind current user fn invoke middleware clj middleware maybe set site url fn invoke middleware clj middleware add content type fn invoke middleware clj if you need further information do not hesitate best regards
| 1
|
21,518
| 29,803,984,839
|
IssuesEvent
|
2023-06-16 10:12:00
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
reprepbuild 0.9.0 has 1 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/reprepbuild
https://inspector.pypi.io/project/reprepbuild
```{
"dependency": "reprepbuild",
"version": "0.9.0",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "RepRepBuild-0.9.0/src/reprepbuild/latexdep.py:83",
"code": " subprocess.run(\n args,\n cwd=workdir,\n check=False,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpzazz6iv6/reprepbuild"
}
}```
|
1.0
|
reprepbuild 0.9.0 has 1 GuardDog issues - https://pypi.org/project/reprepbuild
https://inspector.pypi.io/project/reprepbuild
```{
"dependency": "reprepbuild",
"version": "0.9.0",
"result": {
"issues": 1,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "RepRepBuild-0.9.0/src/reprepbuild/latexdep.py:83",
"code": " subprocess.run(\n args,\n cwd=workdir,\n check=False,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpzazz6iv6/reprepbuild"
}
}```
|
process
|
reprepbuild has guarddog issues dependency reprepbuild version result issues errors results silent process execution location reprepbuild src reprepbuild latexdep py code subprocess run n args n cwd workdir n check false n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp reprepbuild
| 1
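The GuardDog finding above flags a `subprocess.run` call whose stdin, stdout and stderr are all redirected to `DEVNULL`. As a rough illustration only (this is not GuardDog's actual rule), such a pattern can be approximated with Python's `ast` module:
```python
import ast

SOURCE = """
import subprocess
subprocess.run(["latexmk"], stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
"""

def silenced_calls(source: str):
    # Yield line numbers of *.run(...) calls whose stdin, stdout and stderr
    # are all redirected to DEVNULL (a simplified take on the rule).
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute) and node.func.attr == "run":
            silenced = {
                kw.arg
                for kw in node.keywords
                if isinstance(kw.value, ast.Attribute) and kw.value.attr == "DEVNULL"
            }
            if {"stdin", "stdout", "stderr"} <= silenced:
                yield node.lineno

print(list(silenced_calls(SOURCE)))  # [3]
```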
|
830,667
| 32,020,462,623
|
IssuesEvent
|
2023-09-22 03:47:45
|
dimensionhq/infralink
|
https://api.github.com/repos/dimensionhq/infralink
|
reopened
|
Predictive analytics for spot & on-demand pricing
|
medium-priority backlog
|
Predict spot and on-demand pricing for future auto-scaling functionality.
|
1.0
|
Predictive analytics for spot & on-demand pricing - Predict spot and on-demand pricing for future auto-scaling functionality.
|
non_process
|
predictive analytics for spot on demand pricing predict spot and on demand pricing for future auto scaling functionality
| 0
|
3,173
| 6,226,519,166
|
IssuesEvent
|
2017-07-10 18:38:09
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
child_process,Windows: fix docs for `spawn({shell})`
|
child_process doc good first contribution
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: Windows
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback erroneously states that `cmd.exe` is the default shell for Windows, while in fact it's `process.env.ComSpec`.
|
1.0
|
child_process,Windows: fix docs for `spawn({shell})` - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: Windows
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback erroneously states that `cmd.exe` is the default shell for Windows, while in fact it's `process.env.ComSpec`.
|
process
|
child process windows fix docs for spawn shell thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform windows subsystem child process erroneously states that cmd exe is the default shell for windows while in fact it s process env comspec
| 1
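The point of the docs fix above is that the default Windows shell comes from the `ComSpec` environment variable rather than being hard-coded to `cmd.exe`. Python's `subprocess` follows the same convention; this small sketch (the fallback path shown is only illustrative) resolves the default the same way:
```python
import os

# On Windows the shell used for shell=True execution is taken from ComSpec;
# cmd.exe is only the usual value of that variable, not a hard-coded default.
shell = os.environ.get("ComSpec", r"C:\Windows\System32\cmd.exe")
print(shell)
```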
|
20,178
| 26,732,909,322
|
IssuesEvent
|
2023-01-30 06:56:38
|
OpenEnergyPlatform/open-MaStR
|
https://api.github.com/repos/OpenEnergyPlatform/open-MaStR
|
closed
|
Split postprocessing into data cleansing and enrichment
|
:scissors: post processing
|
Data cleansing
- Unplausible nominal power correction
- Filtering duplicates (expects raw data including all StatistikFlag A and B units)
Enrichment
- Add geom column and fill with data from lat/lon and PLZ
By doing this, parts of the code might be translated to python if that eases the process (as done for geom column creation).
|
1.0
|
Split postprocessing into data cleansing and enrichment - Data cleansing
- Unplausible nominal power correction
- Filtering duplicates (expects raw data including all StatistikFlag A and B units)
Enrichment
- Add geom column and fill with data from lat/lon and PLZ
By doing this, parts of the code might be translated to python if that eases the process (as done for geom column creation).
|
process
|
split postprocessing into data cleansing and enrichment data cleansing unplausible nominal power correction filtering duplicates expects raw data including all statistikflag a and b units enrichment add geom column and fill with data from lat lon and plz by doing this parts of the code might be translated to python if that eases the process as done for geom column creation
| 1
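The enrichment step above mentions building a geom column from lat/lon. A minimal sketch of that idea in Python, using pandas and shapely with invented column names (the project's real schema may differ):
```python
import pandas as pd
from shapely.geometry import Point

units = pd.DataFrame({"lat": [52.52, 48.14], "lon": [13.40, 11.58]})
# Point takes x (longitude) first, then y (latitude).
units["geom"] = [Point(lon, lat) for lon, lat in zip(units["lon"], units["lat"])]
print(units)
```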
|
2,748
| 5,658,539,590
|
IssuesEvent
|
2017-04-10 10:21:28
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
issue undeclared model variable errors at the end of model parsing
|
enhancement preprocessor
|
Currently, when a variable is in an equation but not declared, the preprocessor issues an error and stops parsing. This is a problem during model development as a user can potentially need to run dynare several times before catching all undeclared variables. To fix this, `preprocessor/ParsingDriver.cc` needs to be modified to check the existence of variables at the end of the `model` block.
|
1.0
|
issue undeclared model variable errors at the end of model parsing - Currently, when a variable is in an equation but not declared, the preprocessor issues an error and stops parsing. This is a problem during model development as a user can potentially need to run dynare several times before catching all undeclared variables. To fix this, `preprocessor/ParsingDriver.cc` needs to be modified to check the existence of variables at the end of the `model` block.
|
process
|
issue undeclared model variable errors at the end of model parsing currently when a variable is in an equation but not declared the preprocessor issues an error and stops parsing this is a problem during model development as a user can potentially need to run dynare several times before catching all undeclared variables to fix this preprocessor parsingdriver cc needs to be modified to check the existence of variables at the end of the model block
| 1
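The change requested above is to defer undeclared-variable errors until the whole `model` block has been parsed, so a single run reports every problem at once. The real fix lives in the preprocessor's C++ (`ParsingDriver.cc`); this Python sketch only illustrates the collect-then-report pattern, with made-up variable names:
```python
declared = {"y", "c", "k"}

def check_model(equations):
    # Collect every undeclared symbol instead of stopping at the first one.
    undeclared = []
    for lineno, symbols in equations:
        undeclared.extend((lineno, s) for s in symbols if s not in declared)
    if undeclared:
        details = "; ".join(f"line {n}: '{s}' is not declared" for n, s in undeclared)
        raise ValueError(f"undeclared model variables: {details}")

check_model([(3, ["y", "eps"]), (7, ["zz", "k"])])
# ValueError: undeclared model variables: line 3: 'eps' is not declared; line 7: 'zz' is not declared
```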
|
18,528
| 24,552,207,545
|
IssuesEvent
|
2022-10-12 13:26:04
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] Comprehension success screen is not getting displayed in updated consent flow
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Sign in and enroll to the study
2. Now, Go to SB and update the consent for enrolled participants
3. Open the mobile app and complete the eligibility section
4. Navigate to comprehension section
5. Complete the comprehension section and Observe
**AR:** Comprehension success screen is not getting displayed in updated consent flow
**ER:** Comprehension success screen should get displayed in updated consent flow [ Refer the screenshot attached]

|
3.0
|
[Android] Comprehension success screen is not getting displayed in updated consent flow - **Steps:**
1. Sign in and enroll to the study
2. Now, Go to SB and update the consent for enrolled participants
3. Open the mobile app and complete the eligibility section
4. Navigate to comprehension section
5. Complete the comprehension section and Observe
**AR:** Comprehension success screen is not getting displayed in updated consent flow
**ER:** Comprehension success screen should get displayed in updated consent flow [ Refer the screenshot attached]

|
process
|
comprehension success screen is not getting displayed in updated consent flow steps sign in and enroll to the study now go to sb and update the consent for enrolled participants open the mobile app and complete the eligibility section navigate to comprehension section complete the comprehension section and observe ar comprehension success screen is not getting displayed in updated consent flow er comprehension success screen should get displayed in updated consent flow
| 1
|
9,156
| 3,024,277,496
|
IssuesEvent
|
2015-08-02 12:57:28
|
mathjax/MathJax
|
https://api.github.com/repos/mathjax/MathJax
|
closed
|
\boxed with \text and nested math
|
Accepted Merged Test Needed
|
\boxed runs into a problem if it contains \text with nested math. See https://jsfiddle.net/w5kv6a95/ (MathJax seems to think there's a missing closing brace, but there's not.) The only way around it appears to be going in and out of \text to keep the math and the words separated.
|
1.0
|
\boxed with \text and nested math - \boxed runs into a problem if it contains \text with nested math. See https://jsfiddle.net/w5kv6a95/ (MathJax seems to think there's a missing closing brace, but there's not.) The only way around it appears to be going in and out of \text to keep the math and the words separated.
|
non_process
|
boxed with text and nested math boxed runs into a problem if it contains text with nested math see mathjax seems to think there s a missing closing brace but there s not the only way around it appears to be going in and out of text to keep the math and the words separated
| 0
|
5,279
| 8,068,749,718
|
IssuesEvent
|
2018-08-06 00:30:29
|
okTurtles/group-income-simple
|
https://api.github.com/repos/okTurtles/group-income-simple
|
closed
|
Add CLA bot
|
Kind:Process Note:Stale
|
### Problem
Would be nice to have simple, automated acknowledgement of contribution guidelines for PRs.
### Solution
- https://github.com/clabot/clabot
- See also [this fork of it](https://github.com/jdan/clabot)
|
1.0
|
Add CLA bot - ### Problem
Would be nice to have simple, automated acknowledgement of contribution guidelines for PRs.
### Solution
- https://github.com/clabot/clabot
- See also [this fork of it](https://github.com/jdan/clabot)
|
process
|
add cla bot problem would be nice to have simple automated acknowledgement of contribution guidelines for prs solution see also
| 1
|
16,780
| 21,966,012,497
|
IssuesEvent
|
2022-05-24 20:23:17
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
Replace u32addc with u32add3 operation
|
assembly processor
|
As discussed [here](https://github.com/maticnetwork/miden/issues/203#issuecomment-1127178122), we can generalize `U32ADDC` operation into `U32ADD3` operation. Stack transition for this operation would be very similar to `U32ADDC`:
```
[c, b, a, ... ] -> [e, d, ... ]
```
Where `d` and `e` would be the lower and the upper 32-bit limbs of `a + b + c`. The main difference between `U32ADDC` and `U32ADD3` is that in case of `U32ADD3` all 3 operands could be full 32-bit values, while in case of `U32ADDC`, the `a` operand must be a binary value (i.e., either 1 or 0).
Once the operation is updated, we should also update `u32addc` assembly instruction to `u32add3`.
|
1.0
|
Replace u32addc with u32add3 operation - As discussed [here](https://github.com/maticnetwork/miden/issues/203#issuecomment-1127178122), we can generalize `U32ADDC` operation into `U32ADD3` operation. Stack transition for this operation would be very similar to `U32ADDC`:
```
[c, b, a, ... ] -> [e, d, ... ]
```
Where `d` and `e` would be the lower and the upper 32-bit limbs of `a + b + c`. The main difference between `U32ADDC` and `U32ADD3` is that in case of `U32ADD3` all 3 operands could be full 32-bit values, while in case of `U32ADDC`, the `a` operand must be a binary value (i.e., either 1 or 0).
Once the operation is updated, we should also update `u32addc` assembly instruction to `u32add3`.
|
process
|
replace with operation as discussed we can generalize operation into operation stack transition for this operation would be very similar to where d and e would be the lower and the upper bit limbs of a b c the main difference between and is that in case of all operands could be full bit values while in case of the a operand must be a binary value i e either or once the operation is updated we should also update assembly instruction to
| 1
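The stack transition described above splits `a + b + c` into a low and a high 32-bit limb. A quick arithmetic check of that split, in plain Python rather than Miden code:
```python
MASK32 = 0xFFFF_FFFF

def u32add3(a: int, b: int, c: int) -> tuple[int, int]:
    # Return (d, e): the lower and upper 32-bit limbs of a + b + c.
    total = a + b + c
    return total & MASK32, total >> 32

# Unlike u32addc, all three operands may be full 32-bit values.
d, e = u32add3(0xFFFF_FFFF, 0xFFFF_FFFF, 0xFFFF_FFFF)
print(hex(d), hex(e))  # 0xfffffffd 0x2
```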
|
397,634
| 11,730,789,932
|
IssuesEvent
|
2020-03-10 22:12:17
|
google/gvisor
|
https://api.github.com/repos/google/gvisor
|
opened
|
Support netstat
|
area: compatibility area: networking priority: p2 type: enhancement
|
Support for the netstat tool
On ubuntu:
```
~$ docker run --runtime=gvisor --rm -ti ubuntu su -c "apt update && apt install net-tools && netstat"
...
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
```
On alpine:
```
~$ docker run --runtime=gvisor --rm -ti alpine netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
netstat: /proc/net/raw: No such file or directory
netstat: /proc/net/raw6: No such file or directory
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
```
|
1.0
|
Support netstat - Support for the netstat tool
On ubuntu:
```
~$ docker run --runtime=gvisor --rm -ti ubuntu su -c "apt update && apt install net-tools && netstat"
...
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
```
On alpine:
```
~$ docker run --runtime=gvisor --rm -ti alpine netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
netstat: /proc/net/raw: No such file or directory
netstat: /proc/net/raw6: No such file or directory
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
```
|
non_process
|
support netstat support for the netstat tool on ubuntu docker run runtime gvisor rm ti ubuntu su c apt update apt install net tools netstat active internet connections w o servers proto recv q send q local address foreign address state active unix domain sockets w o servers proto refcnt flags type state i node path on alpine docker run runtime gvisor rm ti alpine netstat active internet connections w o servers proto recv q send q local address foreign address state netstat proc net raw no such file or directory netstat proc net no such file or directory active unix domain sockets w o servers proto refcnt flags type state i node path
| 0
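The failure above comes from `/proc/net` entries that netstat expects but the sandbox does not expose. A small probe of the files net-tools/BusyBox netstat typically reads (the exact list varies by netstat flavor):
```python
import os

# /proc/net/raw and /proc/net/raw6 are the two files the alpine error complains about.
expected = ["/proc/net/tcp", "/proc/net/tcp6", "/proc/net/udp",
            "/proc/net/raw", "/proc/net/raw6", "/proc/net/unix"]
for path in expected:
    print(path, "ok" if os.path.exists(path) else "missing")
```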
|
56,112
| 8,051,405,737
|
IssuesEvent
|
2018-08-01 16:01:47
|
has2k1/plotnine
|
https://api.github.com/repos/has2k1/plotnine
|
closed
|
Preferred Citation?
|
documentation question
|
I've utilized plotnine to visualize some of my own work and am in the process of submitting a manuscript. I'd like to acknowledge the package and development with a citation. Do you have a preferred citation I should use (e.g. DOI or the github page)? For example I've used the following two in the past to cite [ggplot](https://cran.r-project.org/web/packages/ggplot2/citation.html) and [seaborn](https://zenodo.org/record/12710) respectively.
|
1.0
|
Preferred Citation? - I've utilized plotnine to visualize some of my own work and am in the process of submitting a manuscript. I'd like to acknowledge the package and development with a citation. Do you have a preferred citation I should use (e.g. DOI or the github page)? For example I've used the following two in the past to cite [ggplot](https://cran.r-project.org/web/packages/ggplot2/citation.html) and [seaborn](https://zenodo.org/record/12710) respectively.
|
non_process
|
preferred citation i ve utilized plotnine to visualize some of my own work and am in the process of submitting a manuscript i d like to acknowledge the package and development with a citation do you have a preferred citation i should use e g doi or the github page for example i ve used the following two in the past to cite and respectively
| 0
|
104,315
| 16,613,601,601
|
IssuesEvent
|
2021-06-02 14:17:37
|
Thanraj/linux-4.1.15
|
https://api.github.com/repos/Thanraj/linux-4.1.15
|
opened
|
CVE-2021-28972 (Medium) detected in linuxv4.4.3
|
security vulnerability
|
## CVE-2021-28972 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.4.3</b></p></summary>
<p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In drivers/pci/hotplug/rpadlpar_sysfs.c in the Linux kernel through 5.11.8, the RPA PCI Hotplug driver has a user-tolerable buffer overflow when writing a new device name to the driver from userspace, allowing userspace to write data to the kernel stack frame directly. This occurs because add_slot_store and remove_slot_store mishandle drc_name '\0' termination, aka CID-cc7a0bb058b8.
<p>Publish Date: 2021-03-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28972>CVE-2021-28972</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972</a></p>
<p>Release Date: 2021-03-22</p>
<p>Fix Resolution: v4.4.263, v4.9.263, v4.14.227, v4.19.183, v5.4.108, v5.10.26, v5.11.9, v5.12-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-28972 (Medium) detected in linuxv4.4.3 - ## CVE-2021-28972 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.4.3</b></p></summary>
<p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/drivers/pci/hotplug/rpadlpar_sysfs.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In drivers/pci/hotplug/rpadlpar_sysfs.c in the Linux kernel through 5.11.8, the RPA PCI Hotplug driver has a user-tolerable buffer overflow when writing a new device name to the driver from userspace, allowing userspace to write data to the kernel stack frame directly. This occurs because add_slot_store and remove_slot_store mishandle drc_name '\0' termination, aka CID-cc7a0bb058b8.
<p>Publish Date: 2021-03-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28972>CVE-2021-28972</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28972</a></p>
<p>Release Date: 2021-03-22</p>
<p>Fix Resolution: v4.4.263, v4.9.263, v4.14.227, v4.19.183, v5.4.108, v5.10.26, v5.11.9, v5.12-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in cve medium severity vulnerability vulnerable library library home page a href found in head commit a href found in base branch master vulnerable source files linux drivers pci hotplug rpadlpar sysfs c linux drivers pci hotplug rpadlpar sysfs c linux drivers pci hotplug rpadlpar sysfs c vulnerability details in drivers pci hotplug rpadlpar sysfs c in the linux kernel through the rpa pci hotplug driver has a user tolerable buffer overflow when writing a new device name to the driver from userspace allowing userspace to write data to the kernel stack frame directly this occurs because add slot store and remove slot store mishandle drc name termination aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
8,049
| 11,220,784,620
|
IssuesEvent
|
2020-01-07 16:27:55
|
code4romania/expert-consultation-api
|
https://api.github.com/repos/code4romania/expert-consultation-api
|
closed
|
[Document processing] Implement parsing of generic pdf file
|
document processing documents help wanted java spring
|
As a legal consultation user, I want to be able to load a pdf file in a predefined format and break the content of the file in chapters and articles.
The pdf file will have a predefined format, an example file can be provided by request.
A strategy needs to be implemented to break the content of the file in the desired format: chapters and articles.
|
1.0
|
[Document processing] Implement parsing of generic pdf file - As a legal consultation user, I want to be able to load a pdf file in a predefined format and break the content of the file in chapters and articles.
The pdf file will have a predefined format, an example file can be provided by request.
A strategy needs to be implemented to break the content of the file in the desired format: chapters and articles.
|
process
|
implement parsing of generic pdf file as a legal consultation user i want to be able to load a pdf file in a predefined format and break the content of the file in chapters and articles the pdf file will have a predefined format an example file can be provided by request a strategy needs to be implemented to break the content of the file in the desired format chapters and articles
| 1
|
77,712
| 14,909,778,125
|
IssuesEvent
|
2021-01-22 08:34:39
|
LiskHQ/lisk-desktop
|
https://api.github.com/repos/LiskHQ/lisk-desktop
|
closed
|
Set up Lisk Service to run on Jenkins
|
type: code type: test
|
### Description
Since Lisk Desktop uses Lisk Service as the single source of data, we need to run Lisk Service on Jenkins to be able to run e2e tests.
### Motivation
Reinstate e2e tests.
### Acceptance Criteria
All e2e tests must pass on Jenkins as they do locally.
|
1.0
|
Set up Lisk Service to run on Jenkins - ### Description
Since Lisk Desktop uses Lisk Service as the single source of data, we need to run Lisk Service on Jenkins to be able to run e2e tests.
### Motivation
Reinstate e2e tests.
### Acceptance Criteria
All e2e tests must pass on Jenkins as they do locally.
|
non_process
|
set up lisk service to run on jenkins description since lisk desktop uses lisk service as the single source of data we need to run lisk service on jenkins to be able to run tests motivation reinstate tests acceptance criteria all tests must pass on jenkins as they do locally
| 0
|
12,060
| 14,739,694,182
|
IssuesEvent
|
2021-01-07 07:44:18
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Client Portal - Forgot Password | Parent:597
|
anc-ops anc-process anp-not prioritized ant-bug ant-child/secondary customer portal
|
In GitLab by @kdjstudios on Sep 12, 2018, 12:28
Hello Team,
I was just on a demo with Dave and Terri, and we ran into a bit of a hiccup. Dave had forgotten his password for the client account. And it became extremely apparent that there should be a 'forgot password' functionality for client users.
As currently the client would have to contact the site, then the site would have to resend the invite, and then the client would have to go back and setup a password. I know we have had clients in the past bring this up, so I will see if we already have tickets open.
|
1.0
|
Client Portal - Forgot Password | Parent:597 - In GitLab by @kdjstudios on Sep 12, 2018, 12:28
Hello Team,
I was just on a demo with Dave and Terri, and we ran into a bit of a hiccup. Dave had forgotten his password for the client account. And it became extremely apparent that there should be a 'forgot password' functionality for client users.
As currently the client would have to contact the site, then the site would have to resend the invite, and then the client would have to go back and setup a password. I know we have had clients in the past bring this up, so I will see if we already have tickets open.
|
process
|
client portal forgot password parent in gitlab by kdjstudios on sep hello team i was just on a demo with dave and terri and we ran into a bit of a hiccup dave had forgotten his password for the client account and it became extremely apparent that there should be a forgot password functionality for client users as currently the client would have to contact the site then the site would have to resend the invite and then the client would have to go back and setup a password i know we have had clients in the past bring this up so i will see if we already have tickets open
| 1
|
9,902
| 12,907,315,915
|
IssuesEvent
|
2020-07-15 04:32:37
|
OI-wiki/OI-wiki
|
https://api.github.com/repos/OI-wiki/OI-wiki
|
closed
|
左/右位移沒有解釋UB的情況
|
More details needed / 内容需增修 Need Processing / 需要处理 div. 3
|
[this](https://github.com/24OI/OI-wiki/blob/master/docs/math/bit.md):
> !!! warning 我们平常写的除法是向 0 取整,而这里的右移是向下取整(注意这里的区别),即当数大于等于 0 时两种方法等价,当数小于 0 时会有区别,如: $-1 \div 2 = 0$ , 而 $-1 >> 1 = -1$
C++標準裡左/右位移(`x << y`/`x >> y`)僅對於`x`為**非負數**且結果可表示時有定義值,不然是UB: https://stackoverflow.com/a/8416000
在UB下編譯器可以假設`x`為非負數而進行優化, 例如有`foo(int x) { if (x >= 0) printf("Kaboom!"); int y= x<<4; }`, 在-O2以上`foo(-1);`會導致`"Kaboom!"`: https://godbolt.org/z/haY9UN
這代碼也同理:
```
int mulTwo(int n) { // 计算 n*2
return n << 1;
}
```
signed type的左位移在結果把1左移到sign bit或以左位置時也是UB
|
1.0
|
左/右位移沒有解釋UB的情況 - [this](https://github.com/24OI/OI-wiki/blob/master/docs/math/bit.md):
> !!! warning 我们平常写的除法是向 0 取整,而这里的右移是向下取整(注意这里的区别),即当数大于等于 0 时两种方法等价,当数小于 0 时会有区别,如: $-1 \div 2 = 0$ , 而 $-1 >> 1 = -1$
C++標準裡左/右位移(`x << y`/`x >> y`)僅對於`x`為**非負數**且結果可表示時有定義值,不然是UB: https://stackoverflow.com/a/8416000
在UB下編譯器可以假設`x`為非負數而進行優化, 例如有`foo(int x) { if (x >= 0) printf("Kaboom!"); int y= x<<4; }`, 在-O2以上`foo(-1);`會導致`"Kaboom!"`: https://godbolt.org/z/haY9UN
這代碼也同理:
```
int mulTwo(int n) { // 计算 n*2
return n << 1;
}
```
signed type的左位移在結果把1左移到sign bit或以左位置時也是UB
|
process
|
左 右位移沒有解釋ub的情況 warning 我们平常写的除法是向 取整,而这里的右移是向下取整(注意这里的区别),即当数大于等于 时两种方法等价,当数小于 时会有区别,如: div 而 c 標準裡左 右位移 x y 僅對於 x 為 非負數 且結果可表示時有定義值 不然是ub 在ub下編譯器可以假設 x 為非負數而進行優化 例如有 foo int x if x printf kaboom int y x 在 foo 會導致 kaboom 這代碼也同理 int multwo int n 计算 n return n signed bit或以左位置時也是ub
| 1
|
44,710
| 12,341,717,366
|
IssuesEvent
|
2020-05-14 22:37:10
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Touching an IMap entry with an EntryProcessor changes the TTL
|
Type: Defect
|
**Describe the bug**
Using Hazelcast 4, we noticed the behavior has changed when setting explicit TTLs on IMap entries. Previously, an explicit TTL would be maintained after EntryProcessors operated on the entries. Now it looks like some very long TTL is applied on calling Entry.setValue(...) in an EntryProcessor.
**Expected behavior**
In Hazelcast 3, the observed behavior is that the TTL is unchanged by an EntryProcessor operating on an entry.
**To Reproduce**
I wrote simple test cases:
Hazelcast 3: https://github.com/keteracel/hazelcast4-tests/blob/master/hz3-tests/src/test/java/hz3/tests/TTLTest.java
Hazelcast 4: https://github.com/keteracel/hazelcast4-tests/blob/master/hz4-tests/src/test/java/hz4/tests/TTLTest.java
Result for HZ3:

Result for HZ4:

I didn't see anything in migration guides that suggest this is an expected change.
|
1.0
|
Touching an IMap entry with an EntryProcessor changes the TTL - **Describe the bug**
Using Hazelcast 4, we noticed the behavior has changed when setting explicit TTLs on IMap entries. Previously, an explicit TTL would be maintained after EntryProcessors operated on the entries. Now it looks like some very long TTL is applied on calling Entry.setValue(...) in an EntryProcessor.
**Expected behavior**
In Hazelcast 3, the observed behavior is that the TTL is unchanged by an EntryProcessor operating on an entry.
**To Reproduce**
I wrote simple test cases:
Hazelcast 3: https://github.com/keteracel/hazelcast4-tests/blob/master/hz3-tests/src/test/java/hz3/tests/TTLTest.java
Hazelcast 4: https://github.com/keteracel/hazelcast4-tests/blob/master/hz4-tests/src/test/java/hz4/tests/TTLTest.java
Result for HZ3:

Result for HZ4:

I didn't see anything in migration guides that suggest this is an expected change.
|
non_process
|
touching an imap entry with an entryprocessor changes the ttl describe the bug using hazelcast we noticed the behavior has changed when setting explicit ttls on imap entries previously an explicit ttl would be maintained after entryprocessors operated on the entries now it looks like some very long ttl is applied on calling entry setvalue in an entryprocessor expected behavior in hazelcast the observed behavior is that the ttl is unchanged by an entryprocessor operating on an entry to reproduce i wrote simple test cases hazelcast hazelcast result for result for i didn t see anything in migration guides that suggest this is an expected change
| 0
|
11,999
| 14,737,342,273
|
IssuesEvent
|
2021-01-07 01:33:53
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
opened
|
Towne - Copy Accounts or Customers
|
anc-process anp-1 ant-feature grt-ui processes pl-wish list
|
In GitLab by @kdjstudios on May 8, 2018, 11:16
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-08-76377
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
I am adding 8 accounts that are all separate offices but owned by the same medical group.
The invoices are all going to be sent to the corporate office.
Does SA have a ‘copy’ feature so that I can duplicate all of the info – address and billing info – without typing it in 8 times?
|
2.0
|
Towne - Copy Accounts or Customers - In GitLab by @kdjstudios on May 8, 2018, 11:16
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-08-76377
**Server:** External
**Client/Site:** Towne
**Account:** NA
**Issue:**
I am adding 8 accounts that are all separate offices but owned by the same medical group.
The invoices are all going to be sent to the corporate office.
Does SA have a ‘copy’ feature so that I can duplicate all of the info – address and billing info – without typing it in 8 times?
|
process
|
towne copy accounts or customers in gitlab by kdjstudios on may submitted by deb crown helpdesk server external client site towne account na issue i am adding accounts that are all separate offices but owned by the same medical group the invoices are all going to be sent to the corporate office does sa have a ‘copy’ feature so that i can duplicate all of the info – address and billing info – without typing it in times
| 1
|
17,047
| 22,421,785,239
|
IssuesEvent
|
2022-06-20 04:35:41
|
MineCake147E/Shamisen
|
https://api.github.com/repos/MineCake147E/Shamisen
|
opened
|
Advanced High-Quality Resampler
|
Kind: Enhancement 📈 Feature: Signal Processing 🎛️ Status: Working ▶️ Priority: Moderate 🚃
|
## Background and motivation
Current `SplineResampler` has no optimization for down-sampling at all, relying on slow BiQuad LPF.
We need another resampler for down-sampling.
## Ideas
- [ ] Apply SIMD-friendly LPF
- [ ] Wavelets like CDF 9/7 Wavelet
- [ ] Further optimization of BiQuad filter might also be needed
- [ ] Dynamic resolution scaling
- [ ] Decimate or interpolate samples down or up to the maximum power-of-two multiple of source frequency less than target frequency, by wavelets.
- [ ] Use existing `SplineResampler` for final interpolation
|
1.0
|
Advanced High-Quality Resampler - ## Background and motivation
Current `SplineResampler` has no optimization for down-sampling at all, relying on slow BiQuad LPF.
We need another resampler for down-sampling.
## Ideas
- [ ] Apply SIMD-friendly LPF
- [ ] Wavelets like CDF 9/7 Wavelet
- [ ] Further optimization of BiQuad filter might also be needed
- [ ] Dynamic resolution scaling
- [ ] Decimate or interpolate samples down or up to the maximum power-of-two multiple of source frequency less than target frequency, by wavelets.
- [ ] Use existing `SplineResampler` for final interpolation
|
process
|
advanced high quality resampler background and motivation current splineresampler has no optimization for down sampling at all relying on slow biquad lpf we need another resampler for down sampling ideas apply simd friendly lpf wavelets like cdf wavelet further optimization of biquad filter might also be needed dynamic resolution scaling decimate or interpolate samples down or up to the maximum power of two multiple of source frequency less than target frequency by wavelets use existing splineresampler for final interpolation
| 1
|
16,146
| 20,425,828,458
|
IssuesEvent
|
2022-02-24 03:40:34
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_success_first_then_exception (__main__.ForkTest)
|
module: multiprocessing triaged module: flaky-tests skipped
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_success_first_then_exception%2C%20ForkTest) and the most recent [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1890850942).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 2 red and 2 green.
|
1.0
|
DISABLED test_success_first_then_exception (__main__.ForkTest) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](http://torch-ci.com/failure/test_success_first_then_exception%2C%20ForkTest) and the most recent [workflow logs](https://github.com/pytorch/pytorch/actions/runs/1890850942).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 2 red and 2 green.
|
process
|
disabled test success first then exception main forktest platforms linux this test was disabled because it is failing in ci see and the most recent over the past hours it has been determined flaky in workflow s with red and green
| 1
|
6,350
| 9,403,752,750
|
IssuesEvent
|
2019-04-09 02:50:58
|
icesuns/icesuns.github.io
|
https://api.github.com/repos/icesuns/icesuns.github.io
|
closed
|
Batch Processing vs Stream Processing <br>批处理与流处理 | ZIcesun
|
/2019/03/22/Batch-Processing-vs-Stream-Processing/ Gitalk
|
https://zicesun.com/2019/03/22/Batch-Processing-vs-Stream-Processing/
最近在学习关于大数据处理方面的东西,因此想总结一下批处理和流处理之间的异同之处。
|
2.0
|
Batch Processing vs Stream Processing <br>批处理与流处理 | ZIcesun - https://zicesun.com/2019/03/22/Batch-Processing-vs-Stream-Processing/
最近在学习关于大数据处理方面的东西,因此想总结一下批处理和流处理之间的异同之处。
|
process
|
batch processing vs stream processing 批处理与流处理 zicesun 最近在学习关于大数据处理方面的东西,因此想总结一下批处理和流处理之间的异同之处。
| 1
|
1,384
| 3,952,432,663
|
IssuesEvent
|
2016-04-29 08:48:23
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
instance.toString is not a function
|
AREA: client BROWSER: Chrome SYSTEM: resource processing TYPE: bug
|
```
"Error
at Object.eval (eval at evaluate (unknown source), <anonymous>:1:2)
at Object.InjectedScript._evaluateOn (<anonymous>:904:55)
at Object.InjectedScript._evaluateAndWrap (<anonymous>:837:34)
at Object.InjectedScript.evaluateOnCallFrame (<anonymous>:963:21)
at Object.HammerheadClient.define.exports.isStyleInstance (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:25552:23)
at Object.HammerheadClient.define.exports.init.elementPropertyAccessors.background.condition (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:21606:33)
at setProperty (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:21775:52)
at http://kirov-sv:1340/ace/ace.js?7929ba6d39aa4465=http%3A%7Clocalhost%3A3777%7C4%7Ct%7Cscript:10:86387
at Array.forEach (native)
at createKeywordMapper (http://kirov-sv:1340/ace/ace.js?7929ba6d39aa4465=http%3A%7Clocalhost%3A3777%7C4%7Ct%7Cscript:10:86270)
```
|
1.0
|
instance.toString is not a function - ```
"Error
at Object.eval (eval at evaluate (unknown source), <anonymous>:1:2)
at Object.InjectedScript._evaluateOn (<anonymous>:904:55)
at Object.InjectedScript._evaluateAndWrap (<anonymous>:837:34)
at Object.InjectedScript.evaluateOnCallFrame (<anonymous>:963:21)
at Object.HammerheadClient.define.exports.isStyleInstance (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:25552:23)
at Object.HammerheadClient.define.exports.init.elementPropertyAccessors.background.condition (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:21606:33)
at setProperty (http://kirov-sv:1340/tcse6be4085-e27c-46d7-8310-fa770c1f18fa/hammerhead.js:21775:52)
at http://kirov-sv:1340/ace/ace.js?7929ba6d39aa4465=http%3A%7Clocalhost%3A3777%7C4%7Ct%7Cscript:10:86387
at Array.forEach (native)
at createKeywordMapper (http://kirov-sv:1340/ace/ace.js?7929ba6d39aa4465=http%3A%7Clocalhost%3A3777%7C4%7Ct%7Cscript:10:86270)
```
|
process
|
instance tostring is not a function error at object eval eval at evaluate unknown source at object injectedscript evaluateon at object injectedscript evaluateandwrap at object injectedscript evaluateoncallframe at object hammerheadclient define exports isstyleinstance at object hammerheadclient define exports init elementpropertyaccessors background condition at setproperty at at array foreach native at createkeywordmapper
| 1
|
17,279
| 23,074,819,898
|
IssuesEvent
|
2022-07-25 22:00:33
|
googleapis/gapic-generator-ruby
|
https://api.github.com/repos/googleapis/gapic-generator-ruby
|
closed
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in shared/output/cloud/compute_small/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/compute_small/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/compute_small_wrapper/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/compute_small_wrapper/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1beta1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1beta1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1beta2/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1beta2/.repo-metadata.json
* must have required property 'release_level' in shared/output/cloud/noservice/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/noservice/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/secretmanager_v1beta1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/secretmanager_wrapper/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/secretmanager_wrapper/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/speech_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/speech_v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/vision_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/vision_v1/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in shared/output/cloud/compute_small/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/compute_small/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/compute_small_wrapper/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/compute_small_wrapper/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1beta1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1beta1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/language_v1beta2/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/language_v1beta2/.repo-metadata.json
* must have required property 'release_level' in shared/output/cloud/noservice/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/noservice/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/secretmanager_v1beta1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/secretmanager_wrapper/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/secretmanager_wrapper/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/speech_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/speech_v1/.repo-metadata.json
* release_level must be equal to one of the allowed values in shared/output/cloud/vision_v1/.repo-metadata.json
* api_shortname field missing from shared/output/cloud/vision_v1/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 release level must be equal to one of the allowed values in shared output cloud compute small repo metadata json api shortname field missing from shared output cloud compute small repo metadata json release level must be equal to one of the allowed values in shared output cloud compute small wrapper repo metadata json api shortname field missing from shared output cloud compute small wrapper repo metadata json release level must be equal to one of the allowed values in shared output cloud language repo metadata json api shortname field missing from shared output cloud language repo metadata json release level must be equal to one of the allowed values in shared output cloud language repo metadata json api shortname field missing from shared output cloud language repo metadata json release level must be equal to one of the allowed values in shared output cloud language repo metadata json api shortname field missing from shared output cloud language repo metadata json must have required property release level in shared output cloud noservice repo metadata json api shortname field missing from shared output cloud noservice repo metadata json release level must be equal to one of the allowed values in shared output cloud secretmanager repo metadata json release level must be equal to one of the allowed values in shared output cloud secretmanager wrapper repo metadata json api shortname field missing from shared output cloud secretmanager wrapper repo metadata json release level must be equal to one of the allowed values in shared output cloud speech repo metadata json api shortname field missing from shared output cloud speech repo metadata json release level must be equal to one of the allowed values in shared output cloud vision repo metadata json api shortname field missing from shared output cloud vision repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
28,566
| 4,106,508,353
|
IssuesEvent
|
2016-06-06 09:05:14
|
Roy2014Kimi/JI3U47XFP7X25PCEAD3LMVXQ
|
https://api.github.com/repos/Roy2014Kimi/JI3U47XFP7X25PCEAD3LMVXQ
|
closed
|
WdUsyvIVCxi7z/q5RkhGRMD9wGE2xFlOMi3cDoTFeX2wpKFJbPAI0LLSdi7xSI8MHt1d98mKrR5zkLQEnMaUN2Hf5TLrJc6NIPmQlfdnik8qGS1pEH3QA+rESYBxRh6YTRxMSpIsFFuS+bm5g4aVgXCroR35hEYBZ/RMsF7dtto=
|
design
|
E2uhmDDUmLz8egqdUc8ehm2F2zP39O6TU25RP1p+0kbKXP4N2AdgeCQaKeRvtivA0cp7FTAAbcH6UfcQoaepsE1LL/xmkmkhU1kp5Hgf9xCd28GV+sRfy4TQ9YgpEteyZDJm69KCfIob9F8cMKHLIteze8bVNGUkkh6swZfXcVbvkame1USV0S5LOweudPUOrv9Wz14TrXiMJX+7/PoFjlEU2x5I8UQGQtepHd60TXQ4SFccpYVw1LPhtPX5duYfm1ICx/+OGMQWIS8G0mSNKW/tZpw5KZTxIIBEferO4k+HrSvsSm65ErkTZQFZksAoOEhXHKWFcNSz4bT1+XbmH6aZYwiKPB4GO2SjSYIBtuB3WDosTI7S8LaanOKqVKqhOEhXHKWFcNSz4bT1+XbmH/kgerdT+0KmBu4ufPuxKlrW49hVpEep39dTBSF9Blb/hLELDXNeG7e2Q6Zr3Uza0oaSP2LAXGUf8FVTepRh+gVUCljFJcspVmIEwW8kMjL1p8WhiBPuwBI95gmvujGk0wy/16CWjAk1Zk5Wp6yPKfMO2eTpnQHPVSpVlMG1qIHqOEhXHKWFcNSz4bT1+XbmHwJ0YAKwhBz7ln2pAtS6WuRkt6HgBMPxrcnfH0tVGl3D+SB6t1P7QqYG7i58+7EqWtbj2FWkR6nf11MFIX0GVv8y09ho/cnbbyRvujLivMvzuwm1z0ioJl8D4ail+5+IBFS2ALy30gHY/LURW9XFGAiA1+rNBExhkQSh4ctJv2ZgjXGi8F4kIlpNJpos50CDC2+F50EZ6ec1T/APZzeUIoXsCWS+Hv+PpYQlKICb6FomJWGh3mWlu8YSB/93WPkxKAkH3C64kleeR3R6eQhfHsSxqGQdeqCqMgdWNp/RNGSDQWrtxqgTn0M81MvwbbMl50oaqOd88sJSkSUON2CNnOXoNhfF9JIxKzm6IjB/VVFq
|
1.0
|
WdUsyvIVCxi7z/q5RkhGRMD9wGE2xFlOMi3cDoTFeX2wpKFJbPAI0LLSdi7xSI8MHt1d98mKrR5zkLQEnMaUN2Hf5TLrJc6NIPmQlfdnik8qGS1pEH3QA+rESYBxRh6YTRxMSpIsFFuS+bm5g4aVgXCroR35hEYBZ/RMsF7dtto= - E2uhmDDUmLz8egqdUc8ehm2F2zP39O6TU25RP1p+0kbKXP4N2AdgeCQaKeRvtivA0cp7FTAAbcH6UfcQoaepsE1LL/xmkmkhU1kp5Hgf9xCd28GV+sRfy4TQ9YgpEteyZDJm69KCfIob9F8cMKHLIteze8bVNGUkkh6swZfXcVbvkame1USV0S5LOweudPUOrv9Wz14TrXiMJX+7/PoFjlEU2x5I8UQGQtepHd60TXQ4SFccpYVw1LPhtPX5duYfm1ICx/+OGMQWIS8G0mSNKW/tZpw5KZTxIIBEferO4k+HrSvsSm65ErkTZQFZksAoOEhXHKWFcNSz4bT1+XbmH6aZYwiKPB4GO2SjSYIBtuB3WDosTI7S8LaanOKqVKqhOEhXHKWFcNSz4bT1+XbmH/kgerdT+0KmBu4ufPuxKlrW49hVpEep39dTBSF9Blb/hLELDXNeG7e2Q6Zr3Uza0oaSP2LAXGUf8FVTepRh+gVUCljFJcspVmIEwW8kMjL1p8WhiBPuwBI95gmvujGk0wy/16CWjAk1Zk5Wp6yPKfMO2eTpnQHPVSpVlMG1qIHqOEhXHKWFcNSz4bT1+XbmHwJ0YAKwhBz7ln2pAtS6WuRkt6HgBMPxrcnfH0tVGl3D+SB6t1P7QqYG7i58+7EqWtbj2FWkR6nf11MFIX0GVv8y09ho/cnbbyRvujLivMvzuwm1z0ioJl8D4ail+5+IBFS2ALy30gHY/LURW9XFGAiA1+rNBExhkQSh4ctJv2ZgjXGi8F4kIlpNJpos50CDC2+F50EZ6ec1T/APZzeUIoXsCWS+Hv+PpYQlKICb6FomJWGh3mWlu8YSB/93WPkxKAkH3C64kleeR3R6eQhfHsSxqGQdeqCqMgdWNp/RNGSDQWrtxqgTn0M81MvwbbMl50oaqOd88sJSkSUON2CNnOXoNhfF9JIxKzm6IjB/VVFq
|
non_process
|
xbmh kgerdt apzzeuioxscws hv vvfq
| 0
|
12,613
| 9,875,198,223
|
IssuesEvent
|
2019-06-23 09:33:44
|
OpenCHS/openchs-product
|
https://api.github.com/repos/OpenCHS/openchs-product
|
closed
|
Setup cognito backup
|
2.8 Complete Could Infrastructure/other Story
|
Backup is uploaded on Drive under OpenCHS/Implementations folder.
Target to backup in infra makefile
|
1.0
|
Setup cognito backup - Backup is uploaded on Drive under OpenCHS/Implementations folder.
Target to backup in infra makefile
|
non_process
|
setup cognito backup backup is uploaded on drive under openchs implementations folder target to backup in infra makefile
| 0
|
9,665
| 12,663,260,376
|
IssuesEvent
|
2020-06-18 00:45:00
|
googleapis/java-storage-nio
|
https://api.github.com/repos/googleapis/java-storage-nio
|
closed
|
Release PRs are not being created
|
api: storage type: process
|
PR's auto-generated automatically when releases are ready aren't being generated and not sure why this is the case.
@kolea2 or @chingor13 do you have ideas? I'm a bit confused.
|
1.0
|
Release PRs are not being created - PR's auto-generated automatically when releases are ready aren't being generated and not sure why this is the case.
@kolea2 or @chingor13 do you have ideas? I'm a bit confused.
|
process
|
release prs are not being created pr s auto generated automatically when releases are ready aren t being generated and not sure why this is the case or do you have ideas i m a bit confused
| 1
|
188,370
| 22,046,330,502
|
IssuesEvent
|
2022-05-30 02:25:48
|
CostasVoliotisXO/tensorflow
|
https://api.github.com/repos/CostasVoliotisXO/tensorflow
|
closed
|
WS-2010-0001 (Medium) detected in commons-codec-1.4.jar - autoclosed
|
security vulnerability
|
## WS-2010-0001 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.4.jar</b></p></summary>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>Library home page: <a href="http://commons.apache.org/codec/">http://commons.apache.org/codec/</a></p>
<p>Path to dependency file: /tensorflow/tensorflow/java/maven/tensorflow-hadoop/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar</p>
<p>
Dependency Hierarchy:
- hadoop-common-2.6.0.jar (Root Library)
- :x: **commons-codec-1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CostasVoliotisXO/tensorflow/commits/e6785ff6cc9c0dfe688c3ab7c22d27134de75368">e6785ff6cc9c0dfe688c3ab7c22d27134de75368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Base64 encode() method is no longer thread-safe in Apache Commons Codec before version 1.7, which might disclose the wrong data or allow an attacker to change non-private fields.
<p>Publish Date: 2010-02-26
<p>URL: <a href=https://issues.apache.org/jira/browse/CODEC-96>WS-2010-0001</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://issues.apache.org/jira/browse/CODEC-96">https://issues.apache.org/jira/browse/CODEC-96</a></p>
<p>Release Date: 2017-01-31</p>
<p>Fix Resolution: 1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2010-0001 (Medium) detected in commons-codec-1.4.jar - autoclosed - ## WS-2010-0001 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-codec-1.4.jar</b></p></summary>
<p>The codec package contains simple encoder and decoders for
various formats such as Base64 and Hexadecimal. In addition to these
widely used encoders and decoders, the codec package also maintains a
collection of phonetic encoding utilities.</p>
<p>Library home page: <a href="http://commons.apache.org/codec/">http://commons.apache.org/codec/</a></p>
<p>Path to dependency file: /tensorflow/tensorflow/java/maven/tensorflow-hadoop/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar</p>
<p>
Dependency Hierarchy:
- hadoop-common-2.6.0.jar (Root Library)
- :x: **commons-codec-1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/CostasVoliotisXO/tensorflow/commits/e6785ff6cc9c0dfe688c3ab7c22d27134de75368">e6785ff6cc9c0dfe688c3ab7c22d27134de75368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Base64 encode() method is no longer thread-safe in Apache Commons Codec before version 1.7, which might disclose the wrong data or allow an attacker to change non-private fields.
<p>Publish Date: 2010-02-26
<p>URL: <a href=https://issues.apache.org/jira/browse/CODEC-96>WS-2010-0001</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://issues.apache.org/jira/browse/CODEC-96">https://issues.apache.org/jira/browse/CODEC-96</a></p>
<p>Release Date: 2017-01-31</p>
<p>Fix Resolution: 1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in commons codec jar autoclosed ws medium severity vulnerability vulnerable library commons codec jar the codec package contains simple encoder and decoders for various formats such as and hexadecimal in addition to these widely used encoders and decoders the codec package also maintains a collection of phonetic encoding utilities library home page a href path to dependency file tensorflow tensorflow java maven tensorflow hadoop pom xml path to vulnerable library root repository commons codec commons codec commons codec jar dependency hierarchy hadoop common jar root library x commons codec jar vulnerable library found in head commit a href vulnerability details encode method is no longer thread safe in apache commons codec before version which might disclose the wrong data or allow an attacker to change non private fields publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
13,346
| 15,806,102,103
|
IssuesEvent
|
2021-04-04 03:05:49
|
yargs/yargs
|
https://api.github.com/repos/yargs/yargs
|
opened
|
Remaining work for yargs v17 release
|
type: process
|
[v17](https://github.com/yargs/yargs/pull/1876) is a large release, in which I've tried to address many long outstanding oddities related to middleware, async handlers, check functions, coerce functions, help output from commands.
There are still a few more tasks I would like to complete before we push it out the door:
- [ ] Node 10 is about to EOL, so I'd like to drop Node 10 before we take on the next major.
- [ ] skim through any recent `docs` issues and update docs pertinent to this release.
- [ ] land @OsmanAltun's work on https://github.com/yargs/yargs/pull/1882
- [ ] figure out why WebPack has issues with our current export maps.
- [ ] try to reduce unpacked module size under 300kb (_this is becoming increasingly difficult, since much of the size is due to lines of code_).
- [ ] make sure that https://github.com/DefinitelyTyped/DefinitelyTyped/pull/52169 is ready to merge, so that we don't break TypeScript users for any length of time.
- [ ] update TypeScript docs.
|
1.0
|
Remaining work for yargs v17 release - [v17](https://github.com/yargs/yargs/pull/1876) is a large release, in which I've tried to address many long outstanding oddities related to middleware, async handlers, check functions, coerce functions, help output from commands.
There are still a few more tasks I would like to complete before we push it out the door:
- [ ] Node 10 is about to EOL, so I'd like to drop Node 10 before we take on the next major.
- [ ] skim through any recent `docs` issues and update docs pertinent to this release.
- [ ] land @OsmanAltun's work on https://github.com/yargs/yargs/pull/1882
- [ ] figure out why WebPack has issues with our current export maps.
- [ ] try to reduce unpacked module size under 300kb (_this is becoming increasingly difficult, since much of the size is due to lines of code_).
- [ ] make sure that https://github.com/DefinitelyTyped/DefinitelyTyped/pull/52169 is ready to merge, so that we don't break TypeScript users for any length of time.
- [ ] update TypeScript docs.
|
process
|
remaining work for yargs release is a large release in which i ve tried to address many long outstanding oddities related to middleware async handlers check functions coerce functions help output from commands there are still a few more tasks i would like to complete before we push it out the door node is about to eol so i d like to drop node before we take on the next major skim through any recent docs issues and update docs pertinent to this release land osmanaltun s work on figure out why webpack has issues with our current export maps try to reduce unpacked module size under this is becoming increasingly difficult since much of the size is due to lines of code make sure that is ready to merge so that we don t break typescript users for any length of time update typescript docs
| 1
|
50,906
| 6,475,121,879
|
IssuesEvent
|
2017-08-17 19:39:17
|
phetsims/molecule-polarity
|
https://api.github.com/repos/phetsims/molecule-polarity
|
reopened
|
change initial orientation of the diatomic molecule?
|
design:general type:question
|
A concern about the diatomic molecule has been voiced multiple times: If the E-field is turned on when the molecule is in its initial orientation, the dipole is already aligned with the E-field, and there will no change in the molecule's orientation. So the student won't get any feedback and might not use the E-field again.
Should we consider changing the default orientation? Perhaps something like:

|
1.0
|
change initial orientation of the diatomic molecule? - A concern about the diatomic molecule has been voiced multiple times: If the E-field is turned on when the molecule is in its initial orientation, the dipole is already aligned with the E-field, and there will no change in the molecule's orientation. So the student won't get any feedback and might not use the E-field again.
Should we consider changing the default orientation? Perhaps something like:

|
non_process
|
change initial orientation of the diatomic molecule a concern about the diatomic molecule has been voiced multiple times if the e field is turned on when the molecule is in its initial orientation the dipole is already aligned with the e field and there will no change in the molecule s orientation so the student won t get any feedback and might not use the e field again should we consider changing the default orientation perhaps something like
| 0
|
36,740
| 2,811,932,155
|
IssuesEvent
|
2015-05-18 03:20:50
|
Rob-MFn-Fletcher/egammaCore
|
https://api.github.com/repos/Rob-MFn-Fletcher/egammaCore
|
closed
|
Add errors to calculation of Chi-squared methods
|
enhancement High Priority
|
Take into account the variance when calculating chi-squared in the shifting code. Should just need to divide by the error on each bin which to start with we can estimate as sqrt(bin content). This will weight the bins with more events in them higher than bins with less.
|
1.0
|
Add errors to calculation of Chi-squared methods - Take into account the variance when calculating chi-squared in the shifting code. Should just need to divide by the error on each bin which to start with we can estimate as sqrt(bin content). This will weight the bins with more events in them higher than bins with less.
|
non_process
|
add errors to calculation of chi squared methods take into account the variance when calculating chi squared in the shifting code should just need to divide by the error on each bin which to start with we can estimate as sqrt bin content this will weight the bins with more events in them higher than bins with less
| 0
|