Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
15,261
| 19,190,521,431
|
IssuesEvent
|
2021-12-05 22:38:58
|
km4ack/pi-build
|
https://api.github.com/repos/km4ack/pi-build
|
closed
|
WSJTX/JS8Call Memory Issue
|
bug in process
|
It was reported that WSJTX and possibly JS8Call will not build on a Pi 3 unless we increase the swap memory. Need to test and confirm.
|
1.0
|
WSJTX/JS8Call Memory Issue - It was reported that WSJTX and possibly JS8Call will not build on a Pi 3 unless we increase the swap memory. Need to test and confirm.
|
process
|
wsjtx memory issue it was reported that wsjtx and possibly will not build on a pi unless we increase the swap memory need to test and confirm
| 1
|
23,182
| 2,656,940,341
|
IssuesEvent
|
2015-03-18 02:48:10
|
ekmett/profunctors
|
https://api.github.com/repos/ekmett/profunctors
|
closed
|
profunctors-4.4 requires excessive amounts of RAM with GHC 7.10.1-rc1
|
compiler-bug high priority
|
While compiling `Compiling Data.Profunctor.Unsafe` at http://hydra.cryp.to/build/608380/nixlog/7/raw, the `ghc` process goes on allocating memory until it's finally killed by the Linux kernel:
Out of memory: Kill process 8052 (ghc) score 936 or sacrifice child
Killed process 8052 (ghc) total-vm:46687868kB, anon-rss:29845192kB, file-rss:712kB
Previous versions of the library did not trigger that behavior.
|
1.0
|
profunctors-4.4 requires excessive amounts of RAM with GHC 7.10.1-rc1 - While compiling `Compiling Data.Profunctor.Unsafe` at http://hydra.cryp.to/build/608380/nixlog/7/raw, the `ghc` process goes on allocating memory until it's finally killed by the Linux kernel:
Out of memory: Kill process 8052 (ghc) score 936 or sacrifice child
Killed process 8052 (ghc) total-vm:46687868kB, anon-rss:29845192kB, file-rss:712kB
Previous versions of the library did not trigger that behavior.
|
non_process
|
profunctors requires excessive amounts of ram with ghc while compiling compiling data profunctor unsafe at the ghc process goes on allocating memory until it s finally killed by the linux kernel out of memory kill process ghc score or sacrifice child killed process ghc total vm anon rss file rss previous versions of the library did not trigger that behavior
| 0
|
18,028
| 24,036,393,792
|
IssuesEvent
|
2022-09-15 19:36:59
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
closed
|
Keep track of completeness of the implementation
|
Process
|
It would be good to have an easy way to see how far along we are with supporting various aspects of the implementation - prettyprinting, verification, shape inference, interpreter, etc. Perhaps we could have a Markdown document that tracks this status? (#5 and #6 look like related work for this ticket).
|
1.0
|
Keep track of completeness of the implementation - It would be good to have an easy way to see how far along we are with supporting various aspects of the implementation - prettyprinting, verification, shape inference, interpreter, etc. Perhaps we could have a Markdown document that tracks this status? (#5 and #6 look like related work for this ticket).
|
process
|
keep track of completeness of the implementation it would be good to have an easy way to see how far along we are with supporting various aspects of the implementation prettyprinting verification shape inference interpreter etc perhaps we could have a markdown document that tracks this status and look like related work for this ticket
| 1
|
18,725
| 24,611,723,869
|
IssuesEvent
|
2022-10-14 22:29:03
|
GoogleCloudPlatform/emblem
|
https://api.github.com/repos/GoogleCloudPlatform/emblem
|
closed
|
Name the Emblem Application
|
type: process priority: p2
|
## Proposal
We need to decide a consistent name for the application, then use it in the UI, API, and documentation.
"Cymbal Giving" has been considered, this needs confirmation with the Cymbal demo brand team if we want to use it.
|
1.0
|
Name the Emblem Application - ## Proposal
We need to decide a consistent name for the application, then use it in the UI, API, and documentation.
"Cymbal Giving" has been considered, this needs confirmation with the Cymbal demo brand team if we want to use it.
|
process
|
name the emblem application proposal we need to decide a consistent name for the application then use it in the ui api and documentation cymbal giving has been considered this needs confirmation with the cymbal demo brand team if we want to use it
| 1
|
13,783
| 3,194,944,202
|
IssuesEvent
|
2015-09-30 14:36:06
|
seedstack/seed
|
https://api.github.com/repos/seedstack/seed
|
closed
|
Provide helper methods to load a KeyStore and a TrustStore
|
design in progress
|
Currently a master KeyStore is configured in the application configuration. This configuration should be moved in the "bootstrap configuration" this will allow to get the KeyStore before the kernel startup.
Refactor the crypto support to extract the logic to load a KeyStore. Add in the same time a method to load a TrustStore.
|
1.0
|
Provide helper methods to load a KeyStore and a TrustStore - Currently a master KeyStore is configured in the application configuration. This configuration should be moved in the "bootstrap configuration" this will allow to get the KeyStore before the kernel startup.
Refactor the crypto support to extract the logic to load a KeyStore. Add in the same time a method to load a TrustStore.
|
non_process
|
provide helper methods to load a keystore and a truststore currently a master keystore is configured in the application configuration this configuration should be moved in the bootstrap configuration this will allow to get the keystore before the kernel startup refactor the crypto support to extract the logic to load a keystore add in the same time a method to load a truststore
| 0
|
2,700
| 5,556,364,105
|
IssuesEvent
|
2017-03-24 08:56:37
|
sjchat/sjchat
|
https://api.github.com/repos/sjchat/sjchat
|
closed
|
Continuous integration
|
area:infra-testing status:new type:process
|
We need a continuous integration system to test our code.
Travis is free for open source projects, and seems to integrate well enough with Github.
It seemed to support bazel builds too.
|
1.0
|
Continuous integration - We need a continuous integration system to test our code.
Travis is free for open source projects, and seems to integrate well enough with Github.
It seemed to support bazel builds too.
|
process
|
continuous integration we need a continuous integration system to test our code travis is free for open source projects and seems to integrate well enough with github it seemed to support bazel builds too
| 1
|
163,456
| 20,363,790,003
|
IssuesEvent
|
2022-02-21 01:28:16
|
mgh3326/nuber-eats-backend
|
https://api.github.com/repos/mgh3326/nuber-eats-backend
|
opened
|
CVE-2022-0639 (Medium) detected in url-parse-1.4.7.tgz
|
security vulnerability
|
## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- graphql-tools-7.0.3.tgz (Root Library)
- url-loader-6.8.0.tgz
- eventsource-1.0.7.tgz
- original-1.0.2.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-0639 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2022-0639 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- graphql-tools-7.0.3.tgz (Root Library)
- url-loader-6.8.0.tgz
- eventsource-1.0.7.tgz
- original-1.0.2.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7.
<p>Publish Date: 2022-02-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p>
<p>Release Date: 2022-02-17</p>
<p>Fix Resolution: url-parse - 1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file package json path to vulnerable library node modules url parse package json dependency hierarchy graphql tools tgz root library url loader tgz eventsource tgz original tgz x url parse tgz vulnerable library found in base branch master vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
| 0
|
76,920
| 14,699,246,475
|
IssuesEvent
|
2021-01-04 08:13:09
|
NaClYen/blog
|
https://api.github.com/repos/NaClYen/blog
|
opened
|
vscode remote 不斷重複 login 的問題
|
vscode
|
花了很久才找到目前後端的VM環境無法用 vscode ssh remote 的問題, 在此紀錄一下.
## 環境
### client
> Visual Studio Code(1.52.1)
> Remote - SSH (Version: 0.62.0)
### server
> CentOS Linux release 7.9.2009 (Core)
## 前提
- 已經可以用 ssh 指令連上remote端. e.g. `ssh user_name@xxx.xxx.xxx.xxx`.
- 也可以用 `Remote-SSH: kill VS Code Server on Host...` 移除remote端上的 vscode server.
如果還沒達成這個前提, 那我的狀況應該不是和你有關係的, 問題會更前面嘿~
## 問題根源
- 在純淨的環境下測試非常正常(ubuntu 20 & CentOS 7)
- 花了好幾個工作天看了goolge搜尋好幾頁的資料都沒有真正的解法
- 在某次測試換了一個 username 就可以連上, 當時還以為是 root 登入的原因, 但仍然不踏實, 也不和最終需求(此需求會需要重建很多環境設定)
- 又在某次測試看到有人介紹 `zsh`, 順便試試之下就又可以連上!!
- 切回 bash 之後又進入無限登入
- 綜合換 user & shell 可以解決問題的線索, 開始排查 `.bashrc` 的設定
- 篩出 `export GREP_OPTIONS="-n --color"` 這條是關鍵
- 最後測試關鍵是 `-n` 這個參數是最後的根源, 但為何觸發可能要看 vscode ssh remote 的code了吧~
## 處理
移除或者使用 `alias grep='grep -n --color'` 取代 `export GREP_OPTIONS="-n --color`
ref: [Replace grep command with grep -n --colour?](https://askubuntu.com/a/2193)
## 沒有直接關係但可參考建置remote環境的文章
- [使用VSCode的Remote-SSH连接Linux进行远程开发](https://www.cnblogs.com/WindSun/p/12142621.html)
- [Visual Studio Code ssh Remote Tutorial](https://medium.com/@joepow2/visual-studio-code-ssh-remote-tutorial-f043a0600ef)
- [使用VSCode Remote透過 SSH 進行遠端開發](https://hackmd.io/@brick9450/vscode-remote)
|
1.0
|
vscode remote 不斷重複 login 的問題 - 花了很久才找到目前後端的VM環境無法用 vscode ssh remote 的問題, 在此紀錄一下.
## 環境
### client
> Visual Studio Code(1.52.1)
> Remote - SSH (Version: 0.62.0)
### server
> CentOS Linux release 7.9.2009 (Core)
## 前提
- 已經可以用 ssh 指令連上remote端. e.g. `ssh user_name@xxx.xxx.xxx.xxx`.
- 也可以用 `Remote-SSH: kill VS Code Server on Host...` 移除remote端上的 vscode server.
如果還沒達成這個前提, 那我的狀況應該不是和你有關係的, 問題會更前面嘿~
## 問題根源
- 在純淨的環境下測試非常正常(ubuntu 20 & CentOS 7)
- 花了好幾個工作天看了goolge搜尋好幾頁的資料都沒有真正的解法
- 在某次測試換了一個 username 就可以連上, 當時還以為是 root 登入的原因, 但仍然不踏實, 也不和最終需求(此需求會需要重建很多環境設定)
- 又在某次測試看到有人介紹 `zsh`, 順便試試之下就又可以連上!!
- 切回 bash 之後又進入無限登入
- 綜合換 user & shell 可以解決問題的線索, 開始排查 `.bashrc` 的設定
- 篩出 `export GREP_OPTIONS="-n --color"` 這條是關鍵
- 最後測試關鍵是 `-n` 這個參數是最後的根源, 但為何觸發可能要看 vscode ssh remote 的code了吧~
## 處理
移除或者使用 `alias grep='grep -n --color'` 取代 `export GREP_OPTIONS="-n --color`
ref: [Replace grep command with grep -n --colour?](https://askubuntu.com/a/2193)
## 沒有直接關係但可參考建置remote環境的文章
- [使用VSCode的Remote-SSH连接Linux进行远程开发](https://www.cnblogs.com/WindSun/p/12142621.html)
- [Visual Studio Code ssh Remote Tutorial](https://medium.com/@joepow2/visual-studio-code-ssh-remote-tutorial-f043a0600ef)
- [使用VSCode Remote透過 SSH 進行遠端開發](https://hackmd.io/@brick9450/vscode-remote)
|
non_process
|
vscode remote 不斷重複 login 的問題 花了很久才找到目前後端的vm環境無法用 vscode ssh remote 的問題 在此紀錄一下 環境 client visual studio code remote ssh version server centos linux release core 前提 已經可以用 ssh 指令連上remote端 e g ssh user name xxx xxx xxx xxx 也可以用 remote ssh kill vs code server on host 移除remote端上的 vscode server 如果還沒達成這個前提 那我的狀況應該不是和你有關係的 問題會更前面嘿 問題根源 在純淨的環境下測試非常正常 ubuntu centos 花了好幾個工作天看了goolge搜尋好幾頁的資料都沒有真正的解法 在某次測試換了一個 username 就可以連上 當時還以為是 root 登入的原因 但仍然不踏實 也不和最終需求 此需求會需要重建很多環境設定 又在某次測試看到有人介紹 zsh 順便試試之下就又可以連上 切回 bash 之後又進入無限登入 綜合換 user shell 可以解決問題的線索 開始排查 bashrc 的設定 篩出 export grep options n color 這條是關鍵 最後測試關鍵是 n 這個參數是最後的根源 但為何觸發可能要看 vscode ssh remote 的code了吧 處理 移除或者使用 alias grep grep n color 取代 export grep options n color ref 沒有直接關係但可參考建置remote環境的文章
| 0
|
38,877
| 10,260,956,653
|
IssuesEvent
|
2019-08-22 08:42:16
|
okteto/okteto
|
https://api.github.com/repos/okteto/okteto
|
closed
|
Inject docker,kubectl,okteto... binaries if namespace is managed by okteto
|
remote builds
|
Also, set the `DOCKER_HOST` envvar to allow remote builds.
|
1.0
|
Inject docker,kubectl,okteto... binaries if namespace is managed by okteto - Also, set the `DOCKER_HOST` envvar to allow remote builds.
|
non_process
|
inject docker kubectl okteto binaries if namespace is managed by okteto also set the docker host envvar to allow remote builds
| 0
|
2,442
| 5,220,655,164
|
IssuesEvent
|
2017-01-26 22:29:44
|
vuejs/vue-loader
|
https://api.github.com/repos/vuejs/vue-loader
|
closed
|
Preprocessing templates before sending them to vue-loader
|
feature request pre-processor
|
### Background
Recently I'm experimenting with feature toggles (feature flags) within Vue.js. It is fairly easy to implement for JavaScript code with webpack's `DefinePlugin` and `UglifyJSPlugin`, but when it comes to templates in `.vue` files, it is a little bit tricky.
What I have in mind is to preprocess the `.vue` files and remove some markup in the `<template>` block before sending it to `vue-loader` so that I can dictate what's included in the final build. As an example, say I have the following vue component:
```html
<template>
<div>
<on feature="AWESOME_FEATURE">
Awesome feature is here!
</on>
<off feature="AWESOME_FEATURE">
If awesome feature is not released yet, I will be shown.
</off>
</div>
</template>
```
And I have my `AWESOME_FEATURE` feature toggle set to `false`, I would like to preprocess the template and send the following code to `vue-loader`:
```html
<template>
<div>
If awesome feature is not released yet, I will be shown.
</div>
</template>
```
I have tried to chain my preprocessing loader before `vue-loader` like the following (in webpack config):
```javascript
...
module: {
loaders: [
{
test: /\.vue$/,
loader: 'vue!vue-features'
}
]
}
...
```
However, this does not work. I then verified that my `vue-features-loader` does pass the correct content to `vue-loader`, and `vue-loader` indeed received the corrent content, but the final build code does not seem to be affected.
When I looked at the source code of `vue-loader`, I found that what it does is just generating a `require` statement for each part of the `vue` file and using a `selector` module to read directly from the `vue` file. So loaders sitting in between the source file and `vue-loader` cannot really do their jobs as expected, because the generate require statement does not reference them.
### The Impact
This problem impacts all scenarios where any sort of preprocessing needs to be done on the `.vue` source files.
Although we can use the `vue.loaders` config to swap out the default loaders entirely and replace them with custom loaders, but this is not trivial, and not necessary in most cases.
### Proposed Solutions
A possible solution to this problem would be to pass content selected with the `selector` module to those loaders falling in between the source file and `vue-loader`.
Another solution might be allowing users to specify extra loaders to preprocess each part of the `vue` file. (This is not to replace the default loaders via the `vue.loaders` config, but to add in additional loaders to do preprocessing, and still pass the output to the default loaders)
|
1.0
|
Preprocessing templates before sending them to vue-loader - ### Background
Recently I'm experimenting with feature toggles (feature flags) within Vue.js. It is fairly easy to implement for JavaScript code with webpack's `DefinePlugin` and `UglifyJSPlugin`, but when it comes to templates in `.vue` files, it is a little bit tricky.
What I have in mind is to preprocess the `.vue` files and remove some markup in the `<template>` block before sending it to `vue-loader` so that I can dictate what's included in the final build. As an example, say I have the following vue component:
```html
<template>
<div>
<on feature="AWESOME_FEATURE">
Awesome feature is here!
</on>
<off feature="AWESOME_FEATURE">
If awesome feature is not released yet, I will be shown.
</off>
</div>
</template>
```
And I have my `AWESOME_FEATURE` feature toggle set to `false`, I would like to preprocess the template and send the following code to `vue-loader`:
```html
<template>
<div>
If awesome feature is not released yet, I will be shown.
</div>
</template>
```
I have tried to chain my preprocessing loader before `vue-loader` like the following (in webpack config):
```javascript
...
module: {
loaders: [
{
test: /\.vue$/,
loader: 'vue!vue-features'
}
]
}
...
```
However, this does not work. I then verified that my `vue-features-loader` does pass the correct content to `vue-loader`, and `vue-loader` indeed received the corrent content, but the final build code does not seem to be affected.
When I looked at the source code of `vue-loader`, I found that what it does is just generating a `require` statement for each part of the `vue` file and using a `selector` module to read directly from the `vue` file. So loaders sitting in between the source file and `vue-loader` cannot really do their jobs as expected, because the generate require statement does not reference them.
### The Impact
This problem impacts all scenarios where any sort of preprocessing needs to be done on the `.vue` source files.
Although we can use the `vue.loaders` config to swap out the default loaders entirely and replace them with custom loaders, but this is not trivial, and not necessary in most cases.
### Proposed Solutions
A possible solution to this problem would be to pass content selected with the `selector` module to those loaders falling in between the source file and `vue-loader`.
Another solution might be allowing users to specify extra loaders to preprocess each part of the `vue` file. (This is not to replace the default loaders via the `vue.loaders` config, but to add in additional loaders to do preprocessing, and still pass the output to the default loaders)
|
process
|
preprocessing templates before sending them to vue loader background recently i m experimenting with feature toggles feature flags within vue js it is fairly easy to implement for javascript code with webpack s defineplugin and uglifyjsplugin but when it comes to templates in vue files it is a little bit tricky what i have in mind is to preprocess the vue files and remove some markup in the block before sending it to vue loader so that i can dictate what s included in the final build as an example say i have the following vue component html awesome feature is here if awesome feature is not released yet i will be shown and i have my awesome feature feature toggle set to false i would like to preprocess the template and send the following code to vue loader html if awesome feature is not released yet i will be shown i have tried to chain my preprocessing loader before vue loader like the following in webpack config javascript module loaders test vue loader vue vue features however this does not work i then verified that my vue features loader does pass the correct content to vue loader and vue loader indeed received the corrent content but the final build code does not seem to be affected when i looked at the source code of vue loader i found that what it does is just generating a require statement for each part of the vue file and using a selector module to read directly from the vue file so loaders sitting in between the source file and vue loader cannot really do their jobs as expected because the generate require statement does not reference them the impact this problem impacts all scenarios where any sort of preprocessing needs to be done on the vue source files although we can use the vue loaders config to swap out the default loaders entirely and replace them with custom loaders but this is not trivial and not necessary in most cases proposed solutions a possible solution to this problem would be to pass content selected with the selector module to those loaders falling in between the source file and vue loader another solution might be allowing users to specify extra loaders to preprocess each part of the vue file this is not to replace the default loaders via the vue loaders config but to add in additional loaders to do preprocessing and still pass the output to the default loaders
| 1
|
10,604
| 13,429,608,295
|
IssuesEvent
|
2020-09-07 02:22:09
|
JonathanBerkeley/Chess-games
|
https://api.github.com/repos/JonathanBerkeley/Chess-games
|
closed
|
Pawn movement, assessing legal moves
|
Processing_Chess bug
|
Needs to check for blocking pieces. Needs to check that the second selected tile isn't the same tile it's currently occupying. Needs checks for take-able enemy pieces.
|
1.0
|
Pawn movement, assessing legal moves - Needs to check for blocking pieces. Needs to check that the second selected tile isn't the same tile it's currently occupying. Needs checks for take-able enemy pieces.
|
process
|
pawn movement assessing legal moves needs to check for blocking pieces needs to check that the second selected tile isn t the same tile it s currently occupying needs checks for take able enemy pieces
| 1
|
7,859
| 11,035,465,478
|
IssuesEvent
|
2019-12-07 13:56:19
|
wirecard/shop-systems-coding-guidelines
|
https://api.github.com/repos/wirecard/shop-systems-coding-guidelines
|
reopened
|
How to do code reviews
|
processes tutorial
|
A tutorial on how to do them, how to navigate code, what to watch out for, where to draw a line, etc.
A list of "trigger patterns", sections of code that you might see while reviewing which make you look closer. Links to other, corresponding articles elaborating on those patterns or anti-patterns.
|
1.0
|
How to do code reviews - A tutorial on how to do them, how to navigate code, what to watch out for, where to draw a line, etc.
A list of "trigger patterns", sections of code that you might see while reviewing which make you look closer. Links to other, corresponding articles elaborating on those patterns or anti-patterns.
|
process
|
how to do code reviews a tutorial on how to do them how to navigate code what to watch out for where to draw a line etc a list of trigger patterns sections of code that you might see while reviewing which make you look closer links to other corresponding articles elaborating on those patterns or anti patterns
| 1
|
10,499
| 13,259,945,084
|
IssuesEvent
|
2020-08-20 17:27:14
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Add modeler algorithm to set a project expression variable
|
3.14 Automatic new feature Graphical modeler Processing Alg
|
Original commit: https://github.com/qgis/QGIS/commit/ea420df28fea05083488cefff022fdb50acf42d8 by nyalldawson
Allows a model to set Project-level expression variables during execution. Especially
useful with the new Export Print Layout algorithms to allow models which dynamically set variables
used in a layout prior to export.
|
1.0
|
[FEATURE][processing] Add modeler algorithm to set a project expression variable - Original commit: https://github.com/qgis/QGIS/commit/ea420df28fea05083488cefff022fdb50acf42d8 by nyalldawson
Allows a model to set Project-level expression variables during execution. Especially
useful with the new Export Print Layout algorithms to allow models which dynamically set variables
used in a layout prior to export.
|
process
|
add modeler algorithm to set a project expression variable original commit by nyalldawson allows a model to set project level expression variables during execution especially useful with the new export print layout algorithms to allow models which dynamically set variables used in a layout prior to export
| 1
|
9,435
| 12,423,721,147
|
IssuesEvent
|
2020-05-24 07:30:18
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Ubuntu PPA Key Expired
|
P0 breakage team-EngProd type: process
|
### Description of the problem / feature request:
The Ubuntu PPA key has expired which is causing installation of Bazel to fail.
### Feature requests: what underlying problem are you trying to solve with this feature?
Trying to install Bazel on Ubuntu using the instructions here: https://docs.bazel.build/versions/master/install-ubuntu.html
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```bash
docker run --rm -it ubuntu:18.04 bash
apt-get update && apt-get install -y gnupg curl
curl https://bazel.build/bazel-release.pub.gpg | gpg -v
exit
```
Outputs:
```plain
gpg: Note: signature key 3D5919B448457EE0 expired Sat May 23 13:10:53 2020 UTC
pub rsa4096 2016-05-24 [SC] [expired: 2020-05-23]
71A1D0EFCFEB6281FD0437C93D5919B448457EE0
uid Bazel Developer (Bazel APT repository key) <bazel-dev@googlegroups.com>
sig DD3EF963991F1EC2 2016-10-28 [User ID not found]
sig 3D5919B448457EE0 2018-05-24 [selfsig]
sig 3D5919B448457EE0 2016-05-24 [selfsig]
sub rsa4096 2016-05-24 [E] [expired: 2020-05-23]
sig 3D5919B448457EE0 2018-05-24 [keybind]
```
### What operating system are you running Bazel on?
Ubuntu 18.04
### What's the output of `bazel info release`?
N/A
### Have you found anything relevant by searching the web?
Related issue the last time this happened: https://github.com/bazelbuild/bazel/issues/5261
|
1.0
|
Ubuntu PPA Key Expired - ### Description of the problem / feature request:
The Ubuntu PPA key has expired which is causing installation of Bazel to fail.
### Feature requests: what underlying problem are you trying to solve with this feature?
Trying to install Bazel on Ubuntu using the instructions here: https://docs.bazel.build/versions/master/install-ubuntu.html
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```bash
docker run --rm -it ubuntu:18.04 bash
apt-get update && apt-get install -y gnupg curl
curl https://bazel.build/bazel-release.pub.gpg | gpg -v
exit
```
Outputs:
```plain
gpg: Note: signature key 3D5919B448457EE0 expired Sat May 23 13:10:53 2020 UTC
pub rsa4096 2016-05-24 [SC] [expired: 2020-05-23]
71A1D0EFCFEB6281FD0437C93D5919B448457EE0
uid Bazel Developer (Bazel APT repository key) <bazel-dev@googlegroups.com>
sig DD3EF963991F1EC2 2016-10-28 [User ID not found]
sig 3D5919B448457EE0 2018-05-24 [selfsig]
sig 3D5919B448457EE0 2016-05-24 [selfsig]
sub rsa4096 2016-05-24 [E] [expired: 2020-05-23]
sig 3D5919B448457EE0 2018-05-24 [keybind]
```
### What operating system are you running Bazel on?
Ubuntu 18.04
### What's the output of `bazel info release`?
N/A
### Have you found anything relevant by searching the web?
Related issue the last time this happened: https://github.com/bazelbuild/bazel/issues/5261
|
process
|
ubuntu ppa key expired description of the problem feature request the ubuntu ppa key has expired which is causing installation of bazel to fail feature requests what underlying problem are you trying to solve with this feature trying to install bazel on ubuntu using the instructions here bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible bash docker run rm it ubuntu bash apt get update apt get install y gnupg curl curl gpg v exit outputs plain gpg note signature key expired sat may utc pub uid bazel developer bazel apt repository key sig sig sig sub sig what operating system are you running bazel on ubuntu what s the output of bazel info release n a have you found anything relevant by searching the web related issue the last time this happened
| 1
|
3,229
| 6,288,965,430
|
IssuesEvent
|
2017-07-19 18:09:28
|
yahoo/fili
|
https://api.github.com/repos/yahoo/fili
|
closed
|
Document CHANGELOG process
|
DOCS PROCESS WIP
|
Be sure to underline that `Current` changes can be modified in unstable ways and shouldn't be considered `stable` from a public API point of view.
|
1.0
|
Document CHANGELOG process - Be sure to underline that `Current` changes can be modified in unstable ways and shouldn't be considered `stable` from a public API point of view.
|
process
|
document changelog process be sure to underline that current changes can be modified in unstable ways and shouldn t be considered stable from a public api point of view
| 1
|
553,580
| 16,374,515,698
|
IssuesEvent
|
2021-05-15 20:37:48
|
Thorium-Sim/thorium
|
https://api.github.com/repos/Thorium-Sim/thorium
|
opened
|
PLANET COLOR SLIDER
|
priority/high type/bug
|
### Requested By: JORDAN
### Priority: High
### Version: 3.3.3
THE SLIDER TO CHANGE THE HUE OF THE COLOR JUMPS RATHER THAN BEING SMOOTH
### Steps to Reproduce
TRY TO CHANGE THE COLOR OF A PLANET TO ORANGE. YOU CAN ONLY GET YELLOW OR RED
|
1.0
|
PLANET COLOR SLIDER - ### Requested By: JORDAN
### Priority: High
### Version: 3.3.3
THE SLIDER TO CHANGE THE HUE OF THE COLOR JUMPS RATHER THAN BEING SMOOTH
### Steps to Reproduce
TRY TO CHANGE THE COLOR OF A PLANET TO ORANGE. YOU CAN ONLY GET YELLOW OR RED
|
non_process
|
planet color slider requested by jordan priority high version the slider to change the hue of the color jumps rather than being smooth steps to reproduce try to change the color of a planet to orange you can only get yellow or red
| 0
|
375
| 2,816,245,330
|
IssuesEvent
|
2015-05-19 10:31:36
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
Add preprocessor interface for selecting proposal density
|
preprocessor
|
In the MCMC, we seem to have everything in place for using a student t-distribution as the proposal density, but we have no interface.
Modify the estimation command to accept the following new options:
* proposal_distribution, which takes two arguments: ```rand_multivariate_normal``` (default) or ```rand_multivariate_student``` and maps to ```options_.proposal_distribution```
* ```student_degrees_of_freedom```, which accepts an integer argument and maps to ```options_.student_degrees_of_freedom```
These options already exist in ```global_initialization.m```.
After doing this, we need to convert ```tests/estimation/t_proposal/fs2000_student.mod``` to use this interface.
|
1.0
|
Add preprocessor interface for selecting proposal density - In the MCMC, we seem to have everything in place for using a student t-distribution as the proposal density, but we have no interface.
Modify the estimation command to accept the following new options:
* proposal_distribution, which takes two arguments: ```rand_multivariate_normal``` (default) or ```rand_multivariate_student``` and maps to ```options_.proposal_distribution```
* ```student_degrees_of_freedom```, which accepts an integer argument and maps to ```options_.student_degrees_of_freedom```
These options already exist in ```global_initialization.m```.
After doing this, we need to convert ```tests/estimation/t_proposal/fs2000_student.mod``` to use this interface.
|
process
|
add preprocessor interface for selecting proposal density in the mcmc we seem to have everything in place for using a student t distribution as the proposal density but we have no interface modify the estimation command to accept the following new options proposal distribution which takes two arguments rand multivariate normal default or rand multivariate student and maps to options proposal distribution student degrees of freedom which accepts an integer argument and maps to options student degrees of freedom these options already exist in global initialization m after doing this we need to convert tests estimation t proposal student mod to use this interface
| 1
|
24,952
| 17,936,108,885
|
IssuesEvent
|
2021-09-10 15:30:50
|
coq/coq
|
https://api.github.com/repos/coq/coq
|
closed
|
CI runners from ci.inria.fr need maintenance
|
kind: infrastructure
|
The VMs from ci.inria.fr that we use for CI need maintenance. This is an issue per se.
On linux machines, docker images pile up. Periodically running “docker system prune” may help. Otherwise, one can list the images by executing “docker images” and remove the ones that are no longer useful by executing “docker rmi XXXX”.
On Windows machines, build directories are not always removed (usually in case of failure). I don’t know how to manually remove these directories. Maybe Michael can help.
Ping @MSoegtropIMC
|
1.0
|
CI runners from ci.inria.fr need maintenance - The VMs from ci.inria.fr that we use for CI need maintenance. This is an issue per se.
On linux machines, docker images pile up. Periodically running “docker system prune” may help. Otherwise, one can list the images by executing “docker images” and remove the ones that are no longer useful by executing “docker rmi XXXX”.
On Windows machines, build directories are not always removed (usually in case of failure). I don’t know how to manually remove these directories. Maybe Michael can help.
Ping @MSoegtropIMC
|
non_process
|
ci runners from ci inria fr need maintenance the vms from ci inria fr that we use for ci need maintenance this is an issue per se on linux machines docker images pile up periodically running “docker system prune” may help otherwise one can list the images by executing “docker images” and remove the ones that are no longer useful by executing “docker rmi xxxx” on windows machines build directories are not always removed usually in case of failure i don’t know how to manually remove these directories maybe michael can help ping msoegtropimc
| 0
|
2,358
| 2,607,897,593
|
IssuesEvent
|
2015-02-26 00:12:06
|
chrsmithdemos/zen-coding
|
https://api.github.com/repos/chrsmithdemos/zen-coding
|
closed
|
Dreamweaver Problem
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Expanding abbrevation
What is the expected output? What do you see instead?
The marker "jumps" up to my title, deletes my "|" stands there blinking.
What version of the product are you using? On what operating system?
DW CS4, Vista
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `samue...@gmail.com` on 23 Nov 2009 at 7:41
|
1.0
|
Dreamweaver Problem - ```
What steps will reproduce the problem?
1. Expanding abbrevation
What is the expected output? What do you see instead?
The marker "jumps" up to my title, deletes my "|" stands there blinking.
What version of the product are you using? On what operating system?
DW CS4, Vista
Please provide any additional information below.
```
-----
Original issue reported on code.google.com by `samue...@gmail.com` on 23 Nov 2009 at 7:41
|
non_process
|
dreamweaver problem what steps will reproduce the problem expanding abbrevation what is the expected output what do you see instead the marker jumps up to my title deletes my stands there blinking what version of the product are you using on what operating system dw vista please provide any additional information below original issue reported on code google com by samue gmail com on nov at
| 0
|
12,236
| 14,743,676,043
|
IssuesEvent
|
2021-01-07 14:15:39
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Portland - VCC Transition in SA Billing
|
anc-process anp-urgent ant-support
|
In GitLab by @kdjstudios on Sep 12, 2019, 09:02
**Submitted by:** Kyle
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/9309908
**Server:** Internal
**Client/Site:** Portland
**Account:** NA
**Issue:**
We are writing as your site has just switched over to the VCC and as such we need to know a few things in regards to your transition in SA Billing too.
* What date you will need to have SA Billing updated to use the new VCC upload process, HR Report, and have the VCC Codes added at the site? Ops will need to add them at the account level.
* If for your next billing cycle in SA Billing you will be using your old switches billing export or if you will be using the VCC switches billing export?
* We will need to know how many days are being missed from the next billing cycle in the usage period? This will allow us to adjust the posting factor accordingly to accommodate for the missed usage.
Please let us know if you have any questions and we thank you for your response.
|
1.0
|
Portland - VCC Transition in SA Billing - In GitLab by @kdjstudios on Sep 12, 2019, 09:02
**Submitted by:** Kyle
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/9309908
**Server:** Internal
**Client/Site:** Portland
**Account:** NA
**Issue:**
We are writing as your site has just switched over to the VCC and as such we need to know a few things in regards to your transition in SA Billing too.
* What date you will need to have SA Billing updated to use the new VCC upload process, HR Report, and have the VCC Codes added at the site? Ops will need to add them at the account level.
* If for your next billing cycle in SA Billing you will be using your old switches billing export or if you will be using the VCC switches billing export?
* We will need to know how many days are being missed from the next billing cycle in the usage period? This will allow us to adjust the posting factor accordingly to accommodate for the missed usage.
Please let us know if you have any questions and we thank you for your response.
|
process
|
portland vcc transition in sa billing in gitlab by kdjstudios on sep submitted by kyle helpdesk server internal client site portland account na issue we are writing as your site has just switched over to the vcc and as such we need to know a few things in regards to your transition in sa billing too what date you will need to have sa billing updated to use the new vcc upload process hr report and have the vcc codes added at the site ops will need to add them at the account level if for your next billing cycle in sa billing you will be using your old switches billing export or if you will be using the vcc switches billing export we will need to know how many days are being missed from the next billing cycle in the usage period this will allow us to adjust the posting factor accordingly to accommodate for the missed usage please let us know if you have any questions and we thank you for your response
| 1
|
5,618
| 8,476,584,509
|
IssuesEvent
|
2018-10-24 22:32:03
|
ArctosDB/new-collections
|
https://api.github.com/repos/ArctosDB/new-collections
|
closed
|
UNE - Forward questionnaire to [Arctos Working Group](arctos-working-group@googlegroups.com) and request volunteers for collection mentor.
|
Application in process
|
AWG member can volunteer to act as primary contact, especially if they have similar collections or specific knowledge about a collection; can serve as ‘in kind support’ for collections to help offset costs
|
1.0
|
UNE - Forward questionnaire to [Arctos Working Group](arctos-working-group@googlegroups.com) and request volunteers for collection mentor. - AWG member can volunteer to act as primary contact, especially if they have similar collections or specific knowledge about a collection; can serve as ‘in kind support’ for collections to help offset costs
|
process
|
une forward questionnaire to arctos working group googlegroups com and request volunteers for collection mentor awg member can volunteer to act as primary contact especially if they have similar collections or specific knowledge about a collection can serve as ‘in kind support’ for collections to help offset costs
| 1
|
214,314
| 16,581,046,794
|
IssuesEvent
|
2021-05-31 11:53:44
|
lutraconsulting/input-manual-tests
|
https://api.github.com/repos/lutraconsulting/input-manual-tests
|
opened
|
TC 11: Subscriptions
|
test case
|
**Prerequirements**
- Prepare a user that has almost full Mergin storage
- Input is running and the project is opened
---
### Test A - Storage limit
**A1.** Copy a big project (with size higher than user's remaining storage) to mobile.
> You can use `ttester/big-project` (dev.dev) and remove .mergin folder after downloading to the testing device
**A2.** Make sure you do not have project with such name on Mergin. If you do, remove it
**A3.** Hit _upload_ in Input
- you should see a modal window about insufficient storage
**A4.** Close the modal and navigate to _My projects_. Check that the project **was not created**
**A5.** Remove some files from the project so that it is lower than remaining storage
**A6.** Hit _upload_ in Input
- upload should proceed and finish normally
**A7.** Navigate to _My projects_ and check that the project is uploaded
|
1.0
|
TC 11: Subscriptions - **Prerequirements**
- Prepare a user that has almost full Mergin storage
- Input is running and the project is opened
---
### Test A - Storage limit
**A1.** Copy a big project (with size higher than user's remaining storage) to mobile.
> You can use `ttester/big-project` (dev.dev) and remove .mergin folder after downloading to the testing device
**A2.** Make sure you do not have project with such name on Mergin. If you do, remove it
**A3.** Hit _upload_ in Input
- you should see a modal window about insufficient storage
**A4.** Close the modal and navigate to _My projects_. Check that the project **was not created**
**A5.** Remove some files from the project so that it is lower than remaining storage
**A6.** Hit _upload_ in Input
- upload should proceed and finish normally
**A7.** Navigate to _My projects_ and check that the project is uploaded
|
non_process
|
tc subscriptions prerequirements prepare a user that has almost full mergin storage input is running and the project is opened test a storage limit copy a big project with size higher than user s remaining storage to mobile you can use ttester big project dev dev and remove mergin folder after downloading to the testing device make sure you do not have project with such name on mergin if you do remove it hit upload in input you should see a modal window about insufficient storage close the modal and navigate to my projects check that the project was not created remove some files from the project so that it is lower than remaining storage hit upload in input upload should proceed and finish normally navigate to my projects and check that the project is uploaded
| 0
|
15,403
| 19,594,660,756
|
IssuesEvent
|
2022-01-05 16:29:07
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
opened
|
Process: Review and update Milestone Definitions
|
Process
|
The Milestone Definitions are not being respected correctly in our current release process.
Adapt and rework them so that they make sense to our releases:
https://github.com/zephyrproject-rtos/zephyr/wiki/Program-Management#milestone-definitions
|
1.0
|
Process: Review and update Milestone Definitions - The Milestone Definitions are not being respected correctly in our current release process.
Adapt and rework them so that they make sense to our releases:
https://github.com/zephyrproject-rtos/zephyr/wiki/Program-Management#milestone-definitions
|
process
|
process review and update milestone definitions the milestone definitions are not being respected correctly in our current release process adapt and rework them so that they make sense to our releases
| 1
|
5,901
| 8,718,250,651
|
IssuesEvent
|
2018-12-07 19:48:26
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Resolver: ByRef Parameter to Argument binding
|
enhancement parse-tree-processing resolver
|
If a local variable is used as an argument for a parameter that is ByRef, the variable can be changed. Should we add an identifier reference from Parameter to local variable?
|
1.0
|
Resolver: ByRef Parameter to Argument binding - If a local variable is used as an argument for a parameter that is ByRef, the variable can be changed. Should we add an identifier reference from Parameter to local variable?
|
process
|
resolver byref parameter to argument binding if a local variable is used as an argument for a parameter that is byref the variable can be changed should we add an identifier reference from parameter to local variable
| 1
|
20,282
| 26,912,651,000
|
IssuesEvent
|
2023-02-07 02:00:10
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 6 Feb 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Real-Time Traffic End-of-Queue Detection and Tracking in UAV Video
- **Authors:** Russ Messenger, Md Zobaer Islam, Matthew Whitlock, Erik Spong, Nate Morton, Layne Claggett, Chris Matthews, Jordan Fox, Leland Palmer, Dane C. Johnson, John F. O'Hara, Christopher J. Crick, Jamey D. Jacob, Sabit Ekin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2302.01923
- **Pdf link:** https://arxiv.org/pdf/2302.01923
- **Abstract**
Highway work zones are susceptible to undue accumulation of motorized vehicles which calls for dynamic work zone warning signs to prevent accidents. The work zone signs are placed according to the location of the end-of-queue of vehicles which usually changes rapidly. The detection of moving objects in video captured by Unmanned Aerial Vehicles (UAV) has been extensively researched so far, and is used in a wide array of applications including traffic monitoring. Unlike the fixed traffic cameras, UAVs can be used to monitor the traffic at work zones in real-time and also in a more cost-effective way. This study presents a method as a proof of concept for detecting End-of-Queue (EOQ) of traffic by processing the real-time video footage of a highway work zone captured by UAV. EOQ is detected in the video by image processing which includes background subtraction and blob detection methods. This dynamic localization of EOQ of vehicles will enable faster and more accurate relocation of work zone warning signs for drivers and thus will reduce work zone fatalities. The method can be applied to detect EOQ of vehicles and notify drivers in any other roads or intersections too where vehicles are rapidly accumulating due to special events, traffic jams, construction, or accidents.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Object Dimension Extraction for Environment Mapping with Low Cost Cameras Fused with Laser Ranging
- **Authors:** E.M.S.P. Ekanayake, T.H.M.N.C. Thelasingha, U.V.B.L. Udugama, G.M.R.I. Godaliyadda, M.P.B. Ekanayake, B.G.L.T. Samaranayake, J.V. Wijayakulasooriya
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2302.01387
- **Pdf link:** https://arxiv.org/pdf/2302.01387
- **Abstract**
It is essential to have a method to map an unknown terrain for various applications. For places where human access is not possible, a method should be proposed to identify the environment. Exploration, disaster relief, transportation and many other purposes would be convenient if a map of the environment is available. Replicating the human vision system using stereo cameras would be an optimum solution. In this work, we have used laser ranging based technique fused with stereo cameras to extract dimension of objects for mapping. The distortions were calibrated using mathematical model of the camera. By means of Semi Global Block Matching [1] disparity map was generated and reduces the noise using novel noise reduction method of disparity map by dilation. The Data from the Laser Range Finder (LRF) and noise reduced vision data has been used to identify the object parameters.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### INV: Towards Streaming Incremental Neural Videos
- **Authors:** Shengze Wang, Alexey Supikov, Joshua Ratcliff, Henry Fuchs, Ronald Azuma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2302.01532
- **Pdf link:** https://arxiv.org/pdf/2302.01532
- **Abstract**
Recent works in spatiotemporal radiance fields can produce photorealistic free-viewpoint videos. However, they are inherently unsuitable for interactive streaming scenarios (e.g. video conferencing, telepresence) because have an inevitable lag even if the training is instantaneous. This is because these approaches consume videos and thus have to buffer chunks of frames (often seconds) before processing. In this work, we take a step towards interactive streaming via a frame-by-frame approach naturally free of lag. Conventional wisdom believes that per-frame NeRFs are impractical due to prohibitive training costs and storage. We break this belief by introducing Incremental Neural Videos (INV), a per-frame NeRF that is efficiently trained and streamable. We designed INV based on two insights: (1) Our main finding is that MLPs naturally partition themselves into Structure and Color Layers, which store structural and color/texture information respectively. (2) We leverage this property to retain and improve upon knowledge from previous frames, thus amortizing training across frames and reducing redundant learning. As a result, with negligible changes to NeRF, INV can achieve good qualities (>28.6db) in 8min/frame. It can also outperform prior SOTA in 19% less training time. Additionally, our Temporal Weight Compression reduces the per-frame size to 0.3MB/frame (6.6% of NeRF). More importantly, INV is free from buffer lag and is naturally fit for streaming. While this work does not achieve real-time training, it shows that incremental approaches like INV present new possibilities in interactive 3D streaming. Moreover, our discovery of natural information partition leads to a better understanding and manipulation of MLPs. Code and dataset will be released soon.
## Keyword: RAW
### Cluster-CAM: Cluster-Weighted Visual Interpretation of CNNs' Decision in Image Classification
- **Authors:** Zhenpeng Feng, Hongbing Ji, Milos Dakovic, Xiyang Cui, Mingzhe Zhu, Ljubisa Stankovic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.01642
- **Pdf link:** https://arxiv.org/pdf/2302.01642
- **Abstract**
Despite the tremendous success of convolutional neural networks (CNNs) in computer vision, the mechanism of CNNs still lacks clear interpretation. Currently, class activation mapping (CAM), a famous visualization technique to interpret CNN's decision, has drawn increasing attention. Gradient-based CAMs are efficient while the performance is heavily affected by gradient vanishing and exploding. In contrast, gradient-free CAMs can avoid computing gradients to produce more understandable results. However, existing gradient-free CAMs are quite time-consuming because hundreds of forward interference per image are required. In this paper, we proposed Cluster-CAM, an effective and efficient gradient-free CNN interpretation algorithm. Cluster-CAM can significantly reduce the times of forward propagation by splitting the feature maps into clusters in an unsupervised manner. Furthermore, we propose an artful strategy to forge a cognition-base map and cognition-scissors from clustered feature maps. The final salience heatmap will be computed by merging the above cognition maps. Qualitative results conspicuously show that Cluster-CAM can produce heatmaps where the highlighted regions match the human's cognition more precisely than existing CAMs. The quantitative evaluation further demonstrates the superiority of Cluster-CAM in both effectiveness and efficiency.
### From slides (through tiles) to pixels: an explainability framework for weakly supervised models in pre-clinical pathology
- **Authors:** Marco Bertolini, Van-Khoa Le, Jake Pencharz, Andreas Poehlmann, Djork-Arné Clevert, Santiago Villalba, Floriane Montanari
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.01653
- **Pdf link:** https://arxiv.org/pdf/2302.01653
- **Abstract**
In pre-clinical pathology, there is a paradox between the abundance of raw data (whole slide images from many organs of many individual animals) and the lack of pixel-level slide annotations done by pathologists. Due to time constraints and requirements from regulatory authorities, diagnoses are instead stored as slide labels. Weakly supervised training is designed to take advantage of those data, and the trained models can be used by pathologists to rank slides by their probability of containing a given lesion of interest. In this work, we propose a novel contextualized eXplainable AI (XAI) framework and its application to deep learning models trained on Whole Slide Images (WSIs) in Digital Pathology. Specifically, we apply our methods to a multi-instance-learning (MIL) model, which is trained solely on slide-level labels, without the need for pixel-level annotations. We validate quantitatively our methods by quantifying the agreements of our explanations' heatmaps with pathologists' annotations, as well as with predictions from a segmentation model trained on such annotations. We demonstrate the stability of the explanations with respect to input shifts, and the fidelity with respect to increased model performance. We quantitatively evaluate the correlation between available pixel-wise annotations and explainability heatmaps. We show that the explanations on important tiles of the whole slide correlate with tissue changes between healthy regions and lesions, but do not exactly behave like a human annotator. This result is coherent with the model training strategy.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Mon, 6 Feb 23 - ## Keyword: events
### Real-Time Traffic End-of-Queue Detection and Tracking in UAV Video
- **Authors:** Russ Messenger, Md Zobaer Islam, Matthew Whitlock, Erik Spong, Nate Morton, Layne Claggett, Chris Matthews, Jordan Fox, Leland Palmer, Dane C. Johnson, John F. O'Hara, Christopher J. Crick, Jamey D. Jacob, Sabit Ekin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2302.01923
- **Pdf link:** https://arxiv.org/pdf/2302.01923
- **Abstract**
Highway work zones are susceptible to undue accumulation of motorized vehicles which calls for dynamic work zone warning signs to prevent accidents. The work zone signs are placed according to the location of the end-of-queue of vehicles which usually changes rapidly. The detection of moving objects in video captured by Unmanned Aerial Vehicles (UAV) has been extensively researched so far, and is used in a wide array of applications including traffic monitoring. Unlike the fixed traffic cameras, UAVs can be used to monitor the traffic at work zones in real-time and also in a more cost-effective way. This study presents a method as a proof of concept for detecting End-of-Queue (EOQ) of traffic by processing the real-time video footage of a highway work zone captured by UAV. EOQ is detected in the video by image processing which includes background subtraction and blob detection methods. This dynamic localization of EOQ of vehicles will enable faster and more accurate relocation of work zone warning signs for drivers and thus will reduce work zone fatalities. The method can be applied to detect EOQ of vehicles and notify drivers in any other roads or intersections too where vehicles are rapidly accumulating due to special events, traffic jams, construction, or accidents.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Object Dimension Extraction for Environment Mapping with Low Cost Cameras Fused with Laser Ranging
- **Authors:** E.M.S.P. Ekanayake, T.H.M.N.C. Thelasingha, U.V.B.L. Udugama, G.M.R.I. Godaliyadda, M.P.B. Ekanayake, B.G.L.T. Samaranayake, J.V. Wijayakulasooriya
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2302.01387
- **Pdf link:** https://arxiv.org/pdf/2302.01387
- **Abstract**
It is essential to have a method to map an unknown terrain for various applications. For places where human access is not possible, a method should be proposed to identify the environment. Exploration, disaster relief, transportation and many other purposes would be convenient if a map of the environment is available. Replicating the human vision system using stereo cameras would be an optimum solution. In this work, we have used laser ranging based technique fused with stereo cameras to extract dimension of objects for mapping. The distortions were calibrated using mathematical model of the camera. By means of Semi Global Block Matching [1] disparity map was generated and reduces the noise using novel noise reduction method of disparity map by dilation. The Data from the Laser Range Finder (LRF) and noise reduced vision data has been used to identify the object parameters.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### INV: Towards Streaming Incremental Neural Videos
- **Authors:** Shengze Wang, Alexey Supikov, Joshua Ratcliff, Henry Fuchs, Ronald Azuma
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2302.01532
- **Pdf link:** https://arxiv.org/pdf/2302.01532
- **Abstract**
Recent works in spatiotemporal radiance fields can produce photorealistic free-viewpoint videos. However, they are inherently unsuitable for interactive streaming scenarios (e.g. video conferencing, telepresence) because have an inevitable lag even if the training is instantaneous. This is because these approaches consume videos and thus have to buffer chunks of frames (often seconds) before processing. In this work, we take a step towards interactive streaming via a frame-by-frame approach naturally free of lag. Conventional wisdom believes that per-frame NeRFs are impractical due to prohibitive training costs and storage. We break this belief by introducing Incremental Neural Videos (INV), a per-frame NeRF that is efficiently trained and streamable. We designed INV based on two insights: (1) Our main finding is that MLPs naturally partition themselves into Structure and Color Layers, which store structural and color/texture information respectively. (2) We leverage this property to retain and improve upon knowledge from previous frames, thus amortizing training across frames and reducing redundant learning. As a result, with negligible changes to NeRF, INV can achieve good qualities (>28.6db) in 8min/frame. It can also outperform prior SOTA in 19% less training time. Additionally, our Temporal Weight Compression reduces the per-frame size to 0.3MB/frame (6.6% of NeRF). More importantly, INV is free from buffer lag and is naturally fit for streaming. While this work does not achieve real-time training, it shows that incremental approaches like INV present new possibilities in interactive 3D streaming. Moreover, our discovery of natural information partition leads to a better understanding and manipulation of MLPs. Code and dataset will be released soon.
## Keyword: RAW
### Cluster-CAM: Cluster-Weighted Visual Interpretation of CNNs' Decision in Image Classification
- **Authors:** Zhenpeng Feng, Hongbing Ji, Milos Dakovic, Xiyang Cui, Mingzhe Zhu, Ljubisa Stankovic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.01642
- **Pdf link:** https://arxiv.org/pdf/2302.01642
- **Abstract**
Despite the tremendous success of convolutional neural networks (CNNs) in computer vision, the mechanism of CNNs still lacks clear interpretation. Currently, class activation mapping (CAM), a famous visualization technique to interpret CNN's decision, has drawn increasing attention. Gradient-based CAMs are efficient while the performance is heavily affected by gradient vanishing and exploding. In contrast, gradient-free CAMs can avoid computing gradients to produce more understandable results. However, existing gradient-free CAMs are quite time-consuming because hundreds of forward interference per image are required. In this paper, we proposed Cluster-CAM, an effective and efficient gradient-free CNN interpretation algorithm. Cluster-CAM can significantly reduce the times of forward propagation by splitting the feature maps into clusters in an unsupervised manner. Furthermore, we propose an artful strategy to forge a cognition-base map and cognition-scissors from clustered feature maps. The final salience heatmap will be computed by merging the above cognition maps. Qualitative results conspicuously show that Cluster-CAM can produce heatmaps where the highlighted regions match the human's cognition more precisely than existing CAMs. The quantitative evaluation further demonstrates the superiority of Cluster-CAM in both effectiveness and efficiency.
### From slides (through tiles) to pixels: an explainability framework for weakly supervised models in pre-clinical pathology
- **Authors:** Marco Bertolini, Van-Khoa Le, Jake Pencharz, Andreas Poehlmann, Djork-Arné Clevert, Santiago Villalba, Floriane Montanari
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.01653
- **Pdf link:** https://arxiv.org/pdf/2302.01653
- **Abstract**
In pre-clinical pathology, there is a paradox between the abundance of raw data (whole slide images from many organs of many individual animals) and the lack of pixel-level slide annotations done by pathologists. Due to time constraints and requirements from regulatory authorities, diagnoses are instead stored as slide labels. Weakly supervised training is designed to take advantage of those data, and the trained models can be used by pathologists to rank slides by their probability of containing a given lesion of interest. In this work, we propose a novel contextualized eXplainable AI (XAI) framework and its application to deep learning models trained on Whole Slide Images (WSIs) in Digital Pathology. Specifically, we apply our methods to a multi-instance-learning (MIL) model, which is trained solely on slide-level labels, without the need for pixel-level annotations. We validate quantitatively our methods by quantifying the agreements of our explanations' heatmaps with pathologists' annotations, as well as with predictions from a segmentation model trained on such annotations. We demonstrate the stability of the explanations with respect to input shifts, and the fidelity with respect to increased model performance. We quantitatively evaluate the correlation between available pixel-wise annotations and explainability heatmaps. We show that the explanations on important tiles of the whole slide correlate with tissue changes between healthy regions and lesions, but do not exactly behave like a human annotator. This result is coherent with the model training strategy.
## Keyword: raw image
There is no result
|
process
|
new submissions for mon feb keyword events real time traffic end of queue detection and tracking in uav video authors russ messenger md zobaer islam matthew whitlock erik spong nate morton layne claggett chris matthews jordan fox leland palmer dane c johnson john f o hara christopher j crick jamey d jacob sabit ekin subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract highway work zones are susceptible to undue accumulation of motorized vehicles which calls for dynamic work zone warning signs to prevent accidents the work zone signs are placed according to the location of the end of queue of vehicles which usually changes rapidly the detection of moving objects in video captured by unmanned aerial vehicles uav has been extensively researched so far and is used in a wide array of applications including traffic monitoring unlike the fixed traffic cameras uavs can be used to monitor the traffic at work zones in real time and also in a more cost effective way this study presents a method as a proof of concept for detecting end of queue eoq of traffic by processing the real time video footage of a highway work zone captured by uav eoq is detected in the video by image processing which includes background subtraction and blob detection methods this dynamic localization of eoq of vehicles will enable faster and more accurate relocation of work zone warning signs for drivers and thus will reduce work zone fatalities the method can be applied to detect eoq of vehicles and notify drivers in any other roads or intersections too where vehicles are rapidly accumulating due to special events traffic jams construction or accidents keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp object dimension extraction for environment mapping with low cost cameras fused with laser ranging authors e m s p ekanayake t h m n c thelasingha u v b l udugama g m r i godaliyadda m p b ekanayake b g l t samaranayake j v wijayakulasooriya subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract it is essential to have a method to map an unknown terrain for various applications for places where human access is not possible a method should be proposed to identify the environment exploration disaster relief transportation and many other purposes would be convenient if a map of the environment is available replicating the human vision system using stereo cameras would be an optimum solution in this work we have used laser ranging based technique fused with stereo cameras to extract dimension of objects for mapping the distortions were calibrated using mathematical model of the camera by means of semi global block matching disparity map was generated and reduces the noise using novel noise reduction method of disparity map by dilation the data from the laser range finder lrf and noise reduced vision data has been used to identify the object parameters keyword image signal processing there is no result keyword image signal process there is no result keyword compression inv towards streaming incremental neural videos authors shengze wang alexey supikov joshua ratcliff henry fuchs ronald azuma subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract recent works in spatiotemporal radiance fields can produce photorealistic free viewpoint videos however they are inherently unsuitable for interactive streaming scenarios e g video conferencing telepresence because have an inevitable lag even if the training is instantaneous this is because these approaches consume videos and thus have to buffer chunks of frames often seconds before processing in this work we take a step towards interactive streaming via a frame by frame approach naturally free of lag conventional wisdom believes that per frame nerfs are impractical due to prohibitive training costs and storage we break this belief by introducing incremental neural videos inv a per frame nerf that is efficiently trained and streamable we designed inv based on two insights our main finding is that mlps naturally partition themselves into structure and color layers which store structural and color texture information respectively we leverage this property to retain and improve upon knowledge from previous frames thus amortizing training across frames and reducing redundant learning as a result with negligible changes to nerf inv can achieve good qualities in frame it can also outperform prior sota in less training time additionally our temporal weight compression reduces the per frame size to frame of nerf more importantly inv is free from buffer lag and is naturally fit for streaming while this work does not achieve real time training it shows that incremental approaches like inv present new possibilities in interactive streaming moreover our discovery of natural information partition leads to a better understanding and manipulation of mlps code and dataset will be released soon keyword raw cluster cam cluster weighted visual interpretation of cnns decision in image classification authors zhenpeng feng hongbing ji milos dakovic xiyang cui mingzhe zhu ljubisa stankovic subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract despite the tremendous success of convolutional neural networks cnns in computer vision the mechanism of cnns still lacks clear interpretation currently class activation mapping cam a famous visualization technique to interpret cnn s decision has drawn increasing attention gradient based cams are efficient while the performance is heavily affected by gradient vanishing and exploding in contrast gradient free cams can avoid computing gradients to produce more understandable results however existing gradient free cams are quite time consuming because hundreds of forward interference per image are required in this paper we proposed cluster cam an effective and efficient gradient free cnn interpretation algorithm cluster cam can significantly reduce the times of forward propagation by splitting the feature maps into clusters in an unsupervised manner furthermore we propose an artful strategy to forge a cognition base map and cognition scissors from clustered feature maps the final salience heatmap will be computed by merging the above cognition maps qualitative results conspicuously show that cluster cam can produce heatmaps where the highlighted regions match the human s cognition more precisely than existing cams the quantitative evaluation further demonstrates the superiority of cluster cam in both effectiveness and efficiency from slides through tiles to pixels an explainability framework for weakly supervised models in pre clinical pathology authors marco bertolini van khoa le jake pencharz andreas poehlmann djork arné clevert santiago villalba floriane montanari subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract in pre clinical pathology there is a paradox between the abundance of raw data whole slide images from many organs of many individual animals and the lack of pixel level slide annotations done by pathologists due to time constraints and requirements from regulatory authorities diagnoses are instead stored as slide labels weakly supervised training is designed to take advantage of those data and the trained models can be used by pathologists to rank slides by their probability of containing a given lesion of interest in this work we propose a novel contextualized explainable ai xai framework and its application to deep learning models trained on whole slide images wsis in digital pathology specifically we apply our methods to a multi instance learning mil model which is trained solely on slide level labels without the need for pixel level annotations we validate quantitatively our methods by quantifying the agreements of our explanations heatmaps with pathologists annotations as well as with predictions from a segmentation model trained on such annotations we demonstrate the stability of the explanations with respect to input shifts and the fidelity with respect to increased model performance we quantitatively evaluate the correlation between available pixel wise annotations and explainability heatmaps we show that the explanations on important tiles of the whole slide correlate with tissue changes between healthy regions and lesions but do not exactly behave like a human annotator this result is coherent with the model training strategy keyword raw image there is no result
| 1
|
8,267
| 11,429,171,011
|
IssuesEvent
|
2020-02-04 07:12:40
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Escaping quotes - portable way
|
api-needs-work area-System.Diagnostics.Process
|
How do you escape quotes the portable way when using Process with UseShellExecute and Arguments?
EDIT:
UseShellExecute:
```
System.PlatformNotSupportedException: UseShellExecute must always be set to false.
at System.Diagnostics.ProcessStartInfo.set_UseShellExecute(Boolean value)
```
Process could perhaps have an option of taking string[] to match Main and be able roundtrip
|
1.0
|
Escaping quotes - portable way - How do you escape quotes the portable way when using Process with UseShellExecute and Arguments?
EDIT:
UseShellExecute:
```
System.PlatformNotSupportedException: UseShellExecute must always be set to false.
at System.Diagnostics.ProcessStartInfo.set_UseShellExecute(Boolean value)
```
Process could perhaps have an option of taking string[] to match Main and be able roundtrip
|
process
|
escaping quotes portable way how do you escape quotes the portable way when using process with useshellexecute and arguments edit useshellexecute system platformnotsupportedexception useshellexecute must always be set to false at system diagnostics processstartinfo set useshellexecute boolean value process could perhaps have an option of taking string to match main and be able roundtrip
| 1
|
50,307
| 12,495,975,830
|
IssuesEvent
|
2020-06-01 14:06:14
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Can't compile pytorch from source
|
module: build module: xnnpack triaged
|
## 🐛 Bug
I have been trying to build pytorch v1.4 from the GitHub repo on a device running arm64 but I get an error once I start building.
I have looked through the issues page for similar issues but the one's I saw are for v1.0.
Here's my stacktrace.
`[ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-neonfma-rr2-lut64-p2-div.c.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/spmm.h:11, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f32-spmm/gen/8x2-neonfma.c:14: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-neonfma-rr2-p5-div.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/4x8-neonfp16arith-ld64.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/6x8-neonfp16arith-ld64.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/8x8-neonfp16arith-ld64.c.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/up4x9-aarch64-neonfma-cortex-a55.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/up4x9-aarch64-neonfma.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x12-aarch64-neonfma-cortex-a53.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a53.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/4x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a57.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a75.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x12-aarch64-neonfma-cortex-a53.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a53.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/6x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a57.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/5x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/5x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a73.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a57.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/8x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/5x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/5x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a73.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/1x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/1x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/1x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/1x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/4x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/4x12-aarch64-neonfma-cortex-a53.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/4x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/4x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/5x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/5x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/6x8-aarch64-neonfma-cortex-a53.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/6x8-aarch64-neonfma-cortex-a73.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/6x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/6x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Linking C static library ../../lib/libXNNPACK.a [ 55%] Built target XNNPACK make: *** [Makefile:141: all] Error 2 Traceback (most recent call last): File "setup.py", line 737, in <module> build_deps() File "setup.py", line 316, in build_deps cmake=cmake) File "/root/pytorch_install/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2 cmake.build(my_env) File "/root/pytorch_install/pytorch/tools/setup_helpers/cmake.py", line 337, in build self.run(build_args, my_env) File "/root/pytorch_install/pytorch/tools/setup_helpers/cmake.py", line 141, in run check_call(command, cwd=self.build_dir, env=env) File "/usr/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 2.`
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- PyTorch Version : v1.4
- OS : Ubuntu 19.04
- How you installed PyTorch : pip
- Build command you used (if compiling from source): python3 setup.py build
- Python version: 3.7
- Any other relevant information: Ubuntu on Arm64
## Step to reproduce
Running **python3 setup.py build**
|
1.0
|
Can't compile pytorch from source - ## 🐛 Bug
I have been trying to build pytorch v1.4 from the GitHub repo on a device running arm64 but I get an error once I start building.
I have looked through the issues page for similar issues but the one's I saw are for v1.0.
Here's my stacktrace.
`[ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-neonfma-rr2-lut64-p2-div.c.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/spmm.h:11, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f32-spmm/gen/8x2-neonfma.c:14: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/math/sigmoid-neonfma-rr2-p5-div.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/4x8-neonfp16arith-ld64.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/6x8-neonfp16arith-ld64.c.o [ 53%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f16-gemm/gen/8x8-neonfp16arith-ld64.c.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/up4x9-aarch64-neonfma-cortex-a55.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-dwconv/up4x9-aarch64-neonfma.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x12-aarch64-neonfma-cortex-a53.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a53.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/4x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a57.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/1x8-aarch64-neonfma-cortex-a75.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x12-aarch64-neonfma-cortex-a53.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a53.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/6x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a57.S.o [ 53%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/4x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/5x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/5x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a73.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-cortex-a57.S.o In file included from /root/pytorch_install/pytorch/third_party/XNNPACK/include/xnnpack.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/params.h:15, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/xnnpack/gemm.h:14, from /root/pytorch_install/pytorch/third_party/XNNPACK/src/f16-gemm/gen/8x8-neonfp16arith-ld64.c:18: /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:183:2: warning: ‘pthreadpool_function_1d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:189:2: warning: ‘pthreadpool_function_1d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_1d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:196:2: warning: ‘pthreadpool_function_2d_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:203:2: warning: ‘pthreadpool_function_2d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_2d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:212:2: warning: ‘pthreadpool_function_3d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_3d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ /root/pytorch_install/pytorch/third_party/pthreadpool/include/pthreadpool.h:223:2: warning: ‘pthreadpool_function_4d_tiled_t’ is deprecated [-Wdeprecated-declarations] pthreadpool_function_4d_tiled_t function, ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen/6x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/1x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/5x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/4x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/5x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-cortex-a73.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-ld64.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-gemm/gen-inc/6x8-aarch64-neonfma-ld128.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/1x12-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/1x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/1x8-aarch64-neonfma-cortex-a57.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/1x8-aarch64-neonfma-cortex-a75.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/4x8-aarch64-neonfma-cortex-a53.S.o [ 54%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/4x12-aarch64-neonfma-cortex-a53.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/4x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/4x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/5x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/5x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/6x8-aarch64-neonfma-cortex-a53.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/6x8-aarch64-neonfma-cortex-a73.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/6x8-aarch64-neonfma-cortex-a75.S.o [ 55%] Building ASM object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/f32-igemm/gen/6x8-aarch64-neonfma-cortex-a57.S.o [ 55%] Linking C static library ../../lib/libXNNPACK.a [ 55%] Built target XNNPACK make: *** [Makefile:141: all] Error 2 Traceback (most recent call last): File "setup.py", line 737, in <module> build_deps() File "setup.py", line 316, in build_deps cmake=cmake) File "/root/pytorch_install/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2 cmake.build(my_env) File "/root/pytorch_install/pytorch/tools/setup_helpers/cmake.py", line 337, in build self.run(build_args, my_env) File "/root/pytorch_install/pytorch/tools/setup_helpers/cmake.py", line 141, in run check_call(command, cwd=self.build_dir, env=env) File "/usr/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 2.`
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- PyTorch Version : v1.4
- OS : Ubuntu 19.04
- How you installed PyTorch : pip
- Build command you used (if compiling from source): python3 setup.py build
- Python version: 3.7
- Any other relevant information: Ubuntu on Arm64
## Step to reproduce
Running **python3 setup.py build**
|
non_process
|
can t compile pytorch from source 🐛 bug i have been trying to build pytorch from the github repo on a device running but i get an error once i start building i have looked through the issues page for similar issues but the one s i saw are for here s my stacktrace building c object confu deps xnnpack cmakefiles xnnpack dir src math sigmoid neonfma div c o in file included from root pytorch install pytorch third party xnnpack include xnnpack h from root pytorch install pytorch third party xnnpack src xnnpack params h from root pytorch install pytorch third party xnnpack src xnnpack spmm h from root pytorch install pytorch third party xnnpack src spmm gen neonfma c root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function building c object confu deps xnnpack cmakefiles xnnpack dir src math sigmoid neonfma div c o building c object confu deps xnnpack cmakefiles xnnpack dir src gemm gen c o building c object confu deps xnnpack cmakefiles xnnpack dir src gemm gen c o building c object confu deps xnnpack cmakefiles xnnpack dir src gemm gen c o building asm object confu deps xnnpack cmakefiles xnnpack dir src dwconv neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src dwconv neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o in file included from root pytorch install pytorch third party xnnpack include xnnpack h from root pytorch install pytorch third party xnnpack src xnnpack params h from root pytorch install pytorch third party xnnpack src xnnpack gemm h from root pytorch install pytorch third party xnnpack src gemm gen c root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o in file included from root pytorch install pytorch third party xnnpack include xnnpack h from root pytorch install pytorch third party xnnpack src xnnpack params h from root pytorch install pytorch third party xnnpack src xnnpack gemm h from root pytorch install pytorch third party xnnpack src gemm gen c root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma cortex s o in file included from root pytorch install pytorch third party xnnpack include xnnpack h from root pytorch install pytorch third party xnnpack src xnnpack params h from root pytorch install pytorch third party xnnpack src xnnpack gemm h from root pytorch install pytorch third party xnnpack src gemm gen c root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function t’ is deprecated pthreadpool function t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function root pytorch install pytorch third party pthreadpool include pthreadpool h warning ‘pthreadpool function tiled t’ is deprecated pthreadpool function tiled t function building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src gemm gen inc neonfma s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o building asm object confu deps xnnpack cmakefiles xnnpack dir src igemm gen neonfma cortex s o linking c static library lib libxnnpack a built target xnnpack make error traceback most recent call last file setup py line in build deps file setup py line in build deps cmake cmake file root pytorch install pytorch tools build pytorch libs py line in build cmake build my env file root pytorch install pytorch tools setup helpers cmake py line in build self run build args my env file root pytorch install pytorch tools setup helpers cmake py line in run check call command cwd self build dir env env file usr lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command returned non zero exit status environment pytorch version os ubuntu how you installed pytorch pip build command you used if compiling from source setup py build python version any other relevant information ubuntu on step to reproduce running setup py build
| 0
|
4,800
| 7,695,217,272
|
IssuesEvent
|
2018-05-18 11:28:51
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Tab "REQUESTED FILES (URLS)" only shows "http 200" returncodes, which is not correct/helpful
|
log-processing question
|
The tab called "REQUESTED FILES (URLS)", which shows sucessfully requested files, only use requests which generated an 200 http response code (200 ^= "successful").
Well, this may look correct, but has flaws, especially when making forwarded POST requests:
When you have forums or bulletin boards (e.g. vBulletin) the user is writing a post, and send it, afterwards he's redirected to the thread which he opened or replied to. So this is happening:
User posts to e.g. "newtopic.php" via POST and get a returncode 303 (see also [HTTP_303 on Wikipedia](https://en.wikipedia.org/wiki/HTTP_303) ), which forwards the user to the topic. So, 1 POST request (http 303) and 1 GET request (http 200) are made.
Even the 303 returned page is "successful", however, it's not shown in the "REQUESTED FILES (URLS)" tab, which isn't correct at all.
I couldn't find a quick solution to add response code 303 to that tab, so IMHO it's a real issue, at least there should be a config value for it.
Any thoughts?
|
1.0
|
Tab "REQUESTED FILES (URLS)" only shows "http 200" returncodes, which is not correct/helpful - The tab called "REQUESTED FILES (URLS)", which shows sucessfully requested files, only use requests which generated an 200 http response code (200 ^= "successful").
Well, this may look correct, but has flaws, especially when making forwarded POST requests:
When you have forums or bulletin boards (e.g. vBulletin) the user is writing a post, and send it, afterwards he's redirected to the thread which he opened or replied to. So this is happening:
User posts to e.g. "newtopic.php" via POST and get a returncode 303 (see also [HTTP_303 on Wikipedia](https://en.wikipedia.org/wiki/HTTP_303) ), which forwards the user to the topic. So, 1 POST request (http 303) and 1 GET request (http 200) are made.
Even the 303 returned page is "successful", however, it's not shown in the "REQUESTED FILES (URLS)" tab, which isn't correct at all.
I couldn't find a quick solution to add response code 303 to that tab, so IMHO it's a real issue, at least there should be a config value for it.
Any thoughts?
|
process
|
tab requested files urls only shows http returncodes which is not correct helpful the tab called requested files urls which shows sucessfully requested files only use requests which generated an http response code successful well this may look correct but has flaws especially when making forwarded post requests when you have forums or bulletin boards e g vbulletin the user is writing a post and send it afterwards he s redirected to the thread which he opened or replied to so this is happening user posts to e g newtopic php via post and get a returncode see also which forwards the user to the topic so post request http and get request http are made even the returned page is successful however it s not shown in the requested files urls tab which isn t correct at all i couldn t find a quick solution to add response code to that tab so imho it s a real issue at least there should be a config value for it any thoughts
| 1
|
189
| 2,563,673,988
|
IssuesEvent
|
2015-02-06 14:52:40
|
akvo/akvo-flow
|
https://api.github.com/repos/akvo/akvo-flow
|
opened
|
Code changes for running Akvo FLOW in CapeDwarf
|
3 - Deployment & infrastructure
|
There are some code changes required to run Akvo FLOW in CapeDwarf
|
1.0
|
Code changes for running Akvo FLOW in CapeDwarf - There are some code changes required to run Akvo FLOW in CapeDwarf
|
non_process
|
code changes for running akvo flow in capedwarf there are some code changes required to run akvo flow in capedwarf
| 0
|
50,881
| 3,007,708,147
|
IssuesEvent
|
2015-07-27 17:27:09
|
jah2488/classroom
|
https://api.github.com/repos/jah2488/classroom
|
opened
|
App sometimes hangs and then timeouts
|
bug help wanted high priority
|
The app is suffering from sporadic time outs that cause the whole dyno to need to be reset. My suspicion is that Refile is causing this issue with its self hosted sinatra based asset server. Possible fixes for that are to migrate from Refile over to Paperclip/Carrierwave + S3 or to up the dynos on heroku and hope that fixes the issue.
Currently images are only being used for badges. So a third option could be to rip out the image upload functionality entirely and to just place all the badges into the assets folder and serve them directly from there.
(User profile images are generated via gravatar emails)
|
1.0
|
App sometimes hangs and then timeouts - The app is suffering from sporadic time outs that cause the whole dyno to need to be reset. My suspicion is that Refile is causing this issue with its self hosted sinatra based asset server. Possible fixes for that are to migrate from Refile over to Paperclip/Carrierwave + S3 or to up the dynos on heroku and hope that fixes the issue.
Currently images are only being used for badges. So a third option could be to rip out the image upload functionality entirely and to just place all the badges into the assets folder and serve them directly from there.
(User profile images are generated via gravatar emails)
|
non_process
|
app sometimes hangs and then timeouts the app is suffering from sporadic time outs that cause the whole dyno to need to be reset my suspicion is that refile is causing this issue with its self hosted sinatra based asset server possible fixes for that are to migrate from refile over to paperclip carrierwave or to up the dynos on heroku and hope that fixes the issue currently images are only being used for badges so a third option could be to rip out the image upload functionality entirely and to just place all the badges into the assets folder and serve them directly from there user profile images are generated via gravatar emails
| 0
|
4,090
| 2,970,853,803
|
IssuesEvent
|
2015-07-14 00:21:15
|
sympy/sympy
|
https://api.github.com/repos/sympy/sympy
|
closed
|
Tests fail, seems like Cython is not configured to compile with numpy correctly
|
utilities.codegen Windows
|
Following are the standard output:
D:\sympytest>git clone git@github.com:sympy/sympy.git
Cloning into 'sympy'...
remote: Counting objects: 153742, done.
remote: Compressing objects: 100% (4/4), done.
Receiving objects: 100% (153742/153742), 62.98 MiB | 13.00 KiB/s, done.
Resolving deltas: 100% (122073/122073), done.
Checking connectivity... done
sh-3.1$ cd sympy
sh-3.1$ ./bin/test |tee test.log
======================================================= test process starts =======================================================
executable: c:\Python\python.exe (2.7.8-final-0) [CPython]
architecture: 32-bit
cache: yes
ground types: python
random seed: 68232668
hash randomization: on (PYTHONHASHSEED=2780600447)
sympy\assumptions\tests\test_assumptions_2.py[5] ..... [OK]
sympy\assumptions\tests\test_context.py[4] .... [OK]
sympy\assumptions\tests\test_matrices.py[19] ..........ff....... [OK]
__________________________________________________________ xpassed tests __________________________________________________________
sympy\core\tests\test_wester.py: test_V12
sympy\core\tests\test_wester.py: test_T10
sympy\printing\tests\test_gtk.py: test_1
sympy\utilities\tests\test_module_imports.py: test_module_imports_are_direct
___________________________________________________________________________________________________________________________________
_________________________________ sympy\external\tests\test_autowrap.py:test_wrap_twice_c_cython __________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 138, in test_wrap_twice_c_cython
runtest_autowrap_twice('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 51, in runtest_autowrap_twice
f = autowrap((((a + b)/c)**5).expand(), language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 151, in wrap_code
shutil.rmtree(workdir)
File "c:\Python\lib\shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "c:\Python\lib\shutil.py", line 250, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'c:\\users\\pecker\\appdata\\local\\temp\\tmpgz2ecp_sympy_compile\\wrapper_module_1.pyd'
___________________________________________________________________________________________________________________________________
_______________________________ sympy\external\tests\test_autowrap.py:test_autowrap_trace_C_Cython ________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 143, in test_autowrap_trace_C_Cython
runtest_autowrap_trace('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 61, in runtest_autowrap_trace
trace = autowrap(A[i, i], language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_2.pyx to wrapper_module_2.c
building 'wrapper_module_2' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_2.c -o build\temp.win-amd64-2.7\Release\wrapper_module_2.o -std=c99
wrapper_module_2.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
___________________________ sympy\external\tests\test_autowrap.py:test_autowrap_matrix_vector_C_cython ____________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 148, in test_autowrap_matrix_vector_C_cython
runtest_autowrap_matrix_vector('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 69, in runtest_autowrap_matrix_vector
mv = autowrap(expr, language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_3.pyx to wrapper_module_3.c
building 'wrapper_module_3' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_3.c -o build\temp.win-amd64-2.7\Release\wrapper_module_3.o -std=c99
wrapper_module_3.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
___________________________ sympy\external\tests\test_autowrap.py:test_autowrap_matrix_matrix_C_cython ____________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 153, in test_autowrap_matrix_matrix_C_cython
runtest_autowrap_matrix_matrix('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 81, in runtest_autowrap_matrix_matrix
matmat = autowrap(expr, language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_4.pyx to wrapper_module_4.c
building 'wrapper_module_4' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_4.c -o build\temp.win-amd64-2.7\Release\wrapper_module_4.o -std=c99
wrapper_module_4.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
__________________________________ sympy\external\tests\test_autowrap.py:test_ufuncify_C_Cython ___________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 158, in test_ufuncify_C_Cython
runtest_ufuncify('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 93, in runtest_ufuncify
fabc = ufuncify([a, b, c], a*b + c, backend=backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 878, in ufuncify
tempdir, args, flags, verbose, helpers)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_5.pyx to wrapper_module_5.c
building 'wrapper_module_5' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_5.c -o build\temp.win-amd64-2.7\Release\wrapper_module_5.o -std=c99
wrapper_module_5.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
____________________________________ sympy\external\tests\test_autowrap.py:test_ufuncify_numpy ____________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 167, in test_ufuncify_numpy
runtest_ufuncify('C', 'numpy')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 93, in runtest_ufuncify
fabc = ufuncify([a, b, c], a*b + c, backend=backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 864, in ufuncify
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 151, in wrap_code
shutil.rmtree(workdir)
File "c:\Python\lib\shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "c:\Python\lib\shutil.py", line 250, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'c:\\users\\pecker\\appdata\\local\\temp\\tmpqfz1mc_sympy_compile\\wrapper_module_6.pyd'
___________________________________________________________________________________________________________________________________
____________________________ sympy\interactive\tests\test_ipythonprinting.py:test_print_builtin_option ____________________________
File "d:\sympytest\sympy\sympy\interactive\tests\test_ipythonprinting.py", line 70, in test_print_builtin_option
assert text in ("{pi: 3.14, n_i: 3}", u('{n\u1d62: 3, \u03c0: 3.14}'))
AssertionError
___________________________________________________________________________________________________________________________________
______________________________ sympy\physics\vector\tests\test_printing.py:test_vector_pretty_print _______________________________
File "d:\sympytest\sympy\sympy\physics\vector\tests\test_printing.py", line 45, in test_vector_pretty_print
assert expected == pp.doprint(v)
AssertionError
___________________________________________________________________________________________________________________________________
______________________________ sympy\physics\vector\tests\test_printing.py:test_dyadic_pretty_print _______________________________
File "d:\sympytest\sympy\sympy\physics\vector\tests\test_printing.py", line 138, in test_dyadic_pretty_print
assert expected == result
AssertionError
___________________________________________________________________________________________________________________________________
___________________________________ sympy\plotting\tests\test_plot_implicit.py:test_matplotlib ____________________________________
File "d:\sympytest\sympy\sympy\plotting\tests\test_plot_implicit.py", line 63, in test_matplotlib
plot_and_save('test')
File "d:\sympytest\sympy\sympy\plotting\tests\test_plot_implicit.py", line 58, in plot_and_save
assert 'No labeled objects found' in str(w[0].message)
AssertionError
___________________________________________________________________________________________________________________________________
____________________________________ sympy\vector\tests\test_printing.py:test_pretty_printing _____________________________________
File "d:\sympytest\sympy\sympy\vector\tests\test_printing.py", line 78, in test_pretty_printing
assert pretty(v[8]) == pretty_v_8
AssertionError
tests finished: 5678 passed, 5 failed, 138 skipped, 321 expected to fail, 4 expected to fail but passed, 6 exceptions,
in 3051.11 seconds
DO *NOT* COMMIT!
|
1.0
|
Tests fail, seems like Cython is not configured to compile with numpy correctly - Following are the standard output:
D:\sympytest>git clone git@github.com:sympy/sympy.git
Cloning into 'sympy'...
remote: Counting objects: 153742, done.
remote: Compressing objects: 100% (4/4), done.
Receiving objects: 100% (153742/153742), 62.98 MiB | 13.00 KiB/s, done.
Resolving deltas: 100% (122073/122073), done.
Checking connectivity... done
sh-3.1$ cd sympy
sh-3.1$ ./bin/test |tee test.log
======================================================= test process starts =======================================================
executable: c:\Python\python.exe (2.7.8-final-0) [CPython]
architecture: 32-bit
cache: yes
ground types: python
random seed: 68232668
hash randomization: on (PYTHONHASHSEED=2780600447)
sympy\assumptions\tests\test_assumptions_2.py[5] ..... [OK]
sympy\assumptions\tests\test_context.py[4] .... [OK]
sympy\assumptions\tests\test_matrices.py[19] ..........ff....... [OK]
__________________________________________________________ xpassed tests __________________________________________________________
sympy\core\tests\test_wester.py: test_V12
sympy\core\tests\test_wester.py: test_T10
sympy\printing\tests\test_gtk.py: test_1
sympy\utilities\tests\test_module_imports.py: test_module_imports_are_direct
___________________________________________________________________________________________________________________________________
_________________________________ sympy\external\tests\test_autowrap.py:test_wrap_twice_c_cython __________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 138, in test_wrap_twice_c_cython
runtest_autowrap_twice('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 51, in runtest_autowrap_twice
f = autowrap((((a + b)/c)**5).expand(), language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 151, in wrap_code
shutil.rmtree(workdir)
File "c:\Python\lib\shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "c:\Python\lib\shutil.py", line 250, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'c:\\users\\pecker\\appdata\\local\\temp\\tmpgz2ecp_sympy_compile\\wrapper_module_1.pyd'
___________________________________________________________________________________________________________________________________
_______________________________ sympy\external\tests\test_autowrap.py:test_autowrap_trace_C_Cython ________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 143, in test_autowrap_trace_C_Cython
runtest_autowrap_trace('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 61, in runtest_autowrap_trace
trace = autowrap(A[i, i], language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_2.pyx to wrapper_module_2.c
building 'wrapper_module_2' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_2.c -o build\temp.win-amd64-2.7\Release\wrapper_module_2.o -std=c99
wrapper_module_2.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
___________________________ sympy\external\tests\test_autowrap.py:test_autowrap_matrix_vector_C_cython ____________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 148, in test_autowrap_matrix_vector_C_cython
runtest_autowrap_matrix_vector('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 69, in runtest_autowrap_matrix_vector
mv = autowrap(expr, language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_3.pyx to wrapper_module_3.c
building 'wrapper_module_3' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_3.c -o build\temp.win-amd64-2.7\Release\wrapper_module_3.o -std=c99
wrapper_module_3.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
___________________________ sympy\external\tests\test_autowrap.py:test_autowrap_matrix_matrix_C_cython ____________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 153, in test_autowrap_matrix_matrix_C_cython
runtest_autowrap_matrix_matrix('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 81, in runtest_autowrap_matrix_matrix
matmat = autowrap(expr, language, backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 872, in wrapper
result = user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_4.pyx to wrapper_module_4.c
building 'wrapper_module_4' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_4.c -o build\temp.win-amd64-2.7\Release\wrapper_module_4.o -std=c99
wrapper_module_4.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
__________________________________ sympy\external\tests\test_autowrap.py:test_ufuncify_C_Cython ___________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 158, in test_ufuncify_C_Cython
runtest_ufuncify('C', 'cython')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 93, in runtest_ufuncify
fabc = ufuncify([a, b, c], a*b + c, backend=backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 878, in ufuncify
tempdir, args, flags, verbose, helpers)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 508, in autowrap
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 144, in wrap_code
self._process_files(routine)
File "sympy\utilities\autowrap.py", line 163, in _process_files
" ".join(command), e.output.decode()))
CodeWrapError: Error while executing command: c:\Python\python.exe setup.py build_ext --inplace. Command output is:
running build_ext
cythoning wrapper_module_5.pyx to wrapper_module_5.c
building 'wrapper_module_5' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
c:\Python\MinGW\bin\gcc.exe -DMS_WIN64 -mdll -O -Wall -Ic:\Python\include -Ic:\Python\PC -c wrapper_module_5.c -o build\temp.win-amd64-2.7\Release\wrapper_module_5.o -std=c99
wrapper_module_5.c:232:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'c:\\Python\\MinGW\\bin\\gcc.exe' failed with exit status 1
___________________________________________________________________________________________________________________________________
____________________________________ sympy\external\tests\test_autowrap.py:test_ufuncify_numpy ____________________________________
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 167, in test_ufuncify_numpy
runtest_ufuncify('C', 'numpy')
File "d:\sympytest\sympy\sympy\external\tests\test_autowrap.py", line 93, in runtest_ufuncify
fabc = ufuncify([a, b, c], a*b + c, backend=backend)
File "sympy\core\cache.py", line 91, in wrapper
retval = cfunc(*args, **kwargs)
File "sympy\core\compatibility.py", line 857, in wrapper
return user_function(*args, **kwds)
File "sympy\utilities\autowrap.py", line 864, in ufuncify
return code_wrapper.wrap_code(routine, helpers=helps)
File "sympy\utilities\autowrap.py", line 151, in wrap_code
shutil.rmtree(workdir)
File "c:\Python\lib\shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "c:\Python\lib\shutil.py", line 250, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'c:\\users\\pecker\\appdata\\local\\temp\\tmpqfz1mc_sympy_compile\\wrapper_module_6.pyd'
___________________________________________________________________________________________________________________________________
____________________________ sympy\interactive\tests\test_ipythonprinting.py:test_print_builtin_option ____________________________
File "d:\sympytest\sympy\sympy\interactive\tests\test_ipythonprinting.py", line 70, in test_print_builtin_option
assert text in ("{pi: 3.14, n_i: 3}", u('{n\u1d62: 3, \u03c0: 3.14}'))
AssertionError
___________________________________________________________________________________________________________________________________
______________________________ sympy\physics\vector\tests\test_printing.py:test_vector_pretty_print _______________________________
File "d:\sympytest\sympy\sympy\physics\vector\tests\test_printing.py", line 45, in test_vector_pretty_print
assert expected == pp.doprint(v)
AssertionError
___________________________________________________________________________________________________________________________________
______________________________ sympy\physics\vector\tests\test_printing.py:test_dyadic_pretty_print _______________________________
File "d:\sympytest\sympy\sympy\physics\vector\tests\test_printing.py", line 138, in test_dyadic_pretty_print
assert expected == result
AssertionError
___________________________________________________________________________________________________________________________________
___________________________________ sympy\plotting\tests\test_plot_implicit.py:test_matplotlib ____________________________________
File "d:\sympytest\sympy\sympy\plotting\tests\test_plot_implicit.py", line 63, in test_matplotlib
plot_and_save('test')
File "d:\sympytest\sympy\sympy\plotting\tests\test_plot_implicit.py", line 58, in plot_and_save
assert 'No labeled objects found' in str(w[0].message)
AssertionError
___________________________________________________________________________________________________________________________________
____________________________________ sympy\vector\tests\test_printing.py:test_pretty_printing _____________________________________
File "d:\sympytest\sympy\sympy\vector\tests\test_printing.py", line 78, in test_pretty_printing
assert pretty(v[8]) == pretty_v_8
AssertionError
tests finished: 5678 passed, 5 failed, 138 skipped, 321 expected to fail, 4 expected to fail but passed, 6 exceptions,
in 3051.11 seconds
DO *NOT* COMMIT!
|
non_process
|
tests fail seems like cython is not configured to compile with numpy correctly following are the standard output d sympytest git clone git github com sympy sympy git cloning into sympy remote counting objects done remote compressing objects done receiving objects mib kib s done resolving deltas done checking connectivity done sh cd sympy sh bin test tee test log test process starts executable c python python exe final architecture bit cache yes ground types python random seed hash randomization on pythonhashseed sympy assumptions tests test assumptions py sympy assumptions tests test context py sympy assumptions tests test matrices py ff xpassed tests sympy core tests test wester py test sympy core tests test wester py test sympy printing tests test gtk py test sympy utilities tests test module imports py test module imports are direct sympy external tests test autowrap py test wrap twice c cython file d sympytest sympy sympy external tests test autowrap py line in test wrap twice c cython runtest autowrap twice c cython file d sympytest sympy sympy external tests test autowrap py line in runtest autowrap twice f autowrap a b c expand language backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper result user function args kwds file sympy utilities autowrap py line in autowrap return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code shutil rmtree workdir file c python lib shutil py line in rmtree onerror os remove fullname sys exc info file c python lib shutil py line in rmtree os remove fullname windowserror access is denied c users pecker appdata local temp sympy compile wrapper module pyd sympy external tests test autowrap py test autowrap trace c cython file d sympytest sympy sympy external tests test autowrap py line in test autowrap trace c cython runtest autowrap trace c cython file d sympytest sympy sympy external tests test autowrap py line in runtest autowrap trace trace autowrap a language backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper result user function args kwds file sympy utilities autowrap py line in autowrap return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code self process files routine file sympy utilities autowrap py line in process files join command e output decode codewraperror error while executing command c python python exe setup py build ext inplace command output is running build ext cythoning wrapper module pyx to wrapper module c building wrapper module extension creating build creating build temp win creating build temp win release c python mingw bin gcc exe dms mdll o wall ic python include ic python pc c wrapper module c o build temp win release wrapper module o std wrapper module c fatal error numpy arrayobject h no such file or directory compilation terminated error command c python mingw bin gcc exe failed with exit status sympy external tests test autowrap py test autowrap matrix vector c cython file d sympytest sympy sympy external tests test autowrap py line in test autowrap matrix vector c cython runtest autowrap matrix vector c cython file d sympytest sympy sympy external tests test autowrap py line in runtest autowrap matrix vector mv autowrap expr language backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper result user function args kwds file sympy utilities autowrap py line in autowrap return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code self process files routine file sympy utilities autowrap py line in process files join command e output decode codewraperror error while executing command c python python exe setup py build ext inplace command output is running build ext cythoning wrapper module pyx to wrapper module c building wrapper module extension creating build creating build temp win creating build temp win release c python mingw bin gcc exe dms mdll o wall ic python include ic python pc c wrapper module c o build temp win release wrapper module o std wrapper module c fatal error numpy arrayobject h no such file or directory compilation terminated error command c python mingw bin gcc exe failed with exit status sympy external tests test autowrap py test autowrap matrix matrix c cython file d sympytest sympy sympy external tests test autowrap py line in test autowrap matrix matrix c cython runtest autowrap matrix matrix c cython file d sympytest sympy sympy external tests test autowrap py line in runtest autowrap matrix matrix matmat autowrap expr language backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper result user function args kwds file sympy utilities autowrap py line in autowrap return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code self process files routine file sympy utilities autowrap py line in process files join command e output decode codewraperror error while executing command c python python exe setup py build ext inplace command output is running build ext cythoning wrapper module pyx to wrapper module c building wrapper module extension creating build creating build temp win creating build temp win release c python mingw bin gcc exe dms mdll o wall ic python include ic python pc c wrapper module c o build temp win release wrapper module o std wrapper module c fatal error numpy arrayobject h no such file or directory compilation terminated error command c python mingw bin gcc exe failed with exit status sympy external tests test autowrap py test ufuncify c cython file d sympytest sympy sympy external tests test autowrap py line in test ufuncify c cython runtest ufuncify c cython file d sympytest sympy sympy external tests test autowrap py line in runtest ufuncify fabc ufuncify a b c backend backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper return user function args kwds file sympy utilities autowrap py line in ufuncify tempdir args flags verbose helpers file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper return user function args kwds file sympy utilities autowrap py line in autowrap return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code self process files routine file sympy utilities autowrap py line in process files join command e output decode codewraperror error while executing command c python python exe setup py build ext inplace command output is running build ext cythoning wrapper module pyx to wrapper module c building wrapper module extension creating build creating build temp win creating build temp win release c python mingw bin gcc exe dms mdll o wall ic python include ic python pc c wrapper module c o build temp win release wrapper module o std wrapper module c fatal error numpy arrayobject h no such file or directory compilation terminated error command c python mingw bin gcc exe failed with exit status sympy external tests test autowrap py test ufuncify numpy file d sympytest sympy sympy external tests test autowrap py line in test ufuncify numpy runtest ufuncify c numpy file d sympytest sympy sympy external tests test autowrap py line in runtest ufuncify fabc ufuncify a b c backend backend file sympy core cache py line in wrapper retval cfunc args kwargs file sympy core compatibility py line in wrapper return user function args kwds file sympy utilities autowrap py line in ufuncify return code wrapper wrap code routine helpers helps file sympy utilities autowrap py line in wrap code shutil rmtree workdir file c python lib shutil py line in rmtree onerror os remove fullname sys exc info file c python lib shutil py line in rmtree os remove fullname windowserror access is denied c users pecker appdata local temp sympy compile wrapper module pyd sympy interactive tests test ipythonprinting py test print builtin option file d sympytest sympy sympy interactive tests test ipythonprinting py line in test print builtin option assert text in pi n i u n assertionerror sympy physics vector tests test printing py test vector pretty print file d sympytest sympy sympy physics vector tests test printing py line in test vector pretty print assert expected pp doprint v assertionerror sympy physics vector tests test printing py test dyadic pretty print file d sympytest sympy sympy physics vector tests test printing py line in test dyadic pretty print assert expected result assertionerror sympy plotting tests test plot implicit py test matplotlib file d sympytest sympy sympy plotting tests test plot implicit py line in test matplotlib plot and save test file d sympytest sympy sympy plotting tests test plot implicit py line in plot and save assert no labeled objects found in str w message assertionerror sympy vector tests test printing py test pretty printing file d sympytest sympy sympy vector tests test printing py line in test pretty printing assert pretty v pretty v assertionerror tests finished passed failed skipped expected to fail expected to fail but passed exceptions in seconds do not commit
| 0
|
8,855
| 11,955,704,754
|
IssuesEvent
|
2020-04-04 06:19:36
|
hngskj/labnote
|
https://api.github.com/repos/hngskj/labnote
|
closed
|
Add Process view
|
process-view
|
We need the process view when the "Create Process" button is clicked.
The essential elements would be like:
- The name of a process (eg. Merge, Stir, Boil ...)
- The chemicals to be handled (drag-and-drop or click items in the list)
- The details of experimental conditions (eg. instruments, rpm, temperature ...)
- The expected result (the output chemical)
What else do we need?
|
1.0
|
Add Process view - We need the process view when the "Create Process" button is clicked.
The essential elements would be like:
- The name of a process (eg. Merge, Stir, Boil ...)
- The chemicals to be handled (drag-and-drop or click items in the list)
- The details of experimental conditions (eg. instruments, rpm, temperature ...)
- The expected result (the output chemical)
What else do we need?
|
process
|
add process view we need the process view when the create process button is clicked the essential elements would be like the name of a process eg merge stir boil the chemicals to be handled drag and drop or click items in the list the details of experimental conditions eg instruments rpm temperature the expected result the output chemical what else do we need
| 1
|
7,704
| 10,798,206,948
|
IssuesEvent
|
2019-11-06 09:35:17
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
Pipeline Rule allows invocation of function "set_field()" without setting a value for the new field.
|
#S bug processing triaged
|
The documentation for Pipeline Rule function "set_field()" states that the second argument "value" is a required parameter for the function. However, Graylog allows me to invoke the function without specifying that argument.
## Expected Behavior
When writing the Pipeline Rule, an error message should appear and Graylog should forbid the user from saving the Rule until the issue is resolved, as is the behavior for other functions if not invoked correctly.
## Current Behavior
I am able to save the Pipeline Rule without supplying an argument for the "value" parameter. However the Rule does not appear to work, as the new field that is supposed to be created does not show up in logs.
## Possible Solution
Force the user to fix the error in the Pipeline Rule by supplying an argument for the "value" parameter in the "set_field()" function.
## Steps to Reproduce (for bugs)
1. Create Pipeline Rule like the following example:
```
rule "test set_field()"
when
true
then
set_field("foo");
end
```
2. Save the Rule, connect it to a Stream receiving logs.
3. Go to Stream, generate new logs.
4. Note that new logs do not have the field "foo" at all.
## Context
Since Pipeline Rules have no "elif" or "else" clauses, this makes setting binary fields useless, as instead you can just test for the existence of the field itself. Take the a modified version of the above Pipeline Rule for example:
```
rule "If true, create field foo and set to true"
when
// List of conditions:
$message.condition1 == "x"
then
set_field("foo", true)
end
```
If want to avoid acting upon all logs that don't match the supplied condition, there's no reason for me to write a separate Pipeline Rule to set field "foo" to false, as it is implied by not matching against the above Rule.
Therefore, why must I set "foo" to any value at all? If the supplied conditions match, "foo" will equal Boolean True, and if conditions don't match, the field won't be set at all. So it would be more efficient to simply test for the existence of the field "foo", as its mere existence implies its value equal to Boolean True.
Honestly, instead of forcing the user to set a value for the new field in the set_field() function, why not make it officially optional? Until Pipeline Rules can extend beyond strictly "if/then" scenarios and include "elif" and "else" clauses, there is no reason for setting Boolean values at all, as demonstrated above.
In addition, currently the only way to test the existence of a field is by employing the "is_null()" and "is_not_null()" functions, which is rather indirect. Could a new function be written strictly for checking the existence of a field, like the "_exists_" search term in Graylog Search?
## Your Environment
* Graylog Version: 3.0.2
* Elasticsearch Version: 6.x
* MongoDB Version: 4.0.8
* Operating System: Oracle Linux 7.5
|
1.0
|
Pipeline Rule allows invocation of function "set_field()" without setting a value for the new field. - The documentation for Pipeline Rule function "set_field()" states that the second argument "value" is a required parameter for the function. However, Graylog allows me to invoke the function without specifying that argument.
## Expected Behavior
When writing the Pipeline Rule, an error message should appear and Graylog should forbid the user from saving the Rule until the issue is resolved, as is the behavior for other functions if not invoked correctly.
## Current Behavior
I am able to save the Pipeline Rule without supplying an argument for the "value" parameter. However the Rule does not appear to work, as the new field that is supposed to be created does not show up in logs.
## Possible Solution
Force the user to fix the error in the Pipeline Rule by supplying an argument for the "value" parameter in the "set_field()" function.
## Steps to Reproduce (for bugs)
1. Create Pipeline Rule like the following example:
```
rule "test set_field()"
when
true
then
set_field("foo");
end
```
2. Save the Rule, connect it to a Stream receiving logs.
3. Go to Stream, generate new logs.
4. Note that new logs do not have the field "foo" at all.
## Context
Since Pipeline Rules have no "elif" or "else" clauses, this makes setting binary fields useless, as instead you can just test for the existence of the field itself. Take the a modified version of the above Pipeline Rule for example:
```
rule "If true, create field foo and set to true"
when
// List of conditions:
$message.condition1 == "x"
then
set_field("foo", true)
end
```
If want to avoid acting upon all logs that don't match the supplied condition, there's no reason for me to write a separate Pipeline Rule to set field "foo" to false, as it is implied by not matching against the above Rule.
Therefore, why must I set "foo" to any value at all? If the supplied conditions match, "foo" will equal Boolean True, and if conditions don't match, the field won't be set at all. So it would be more efficient to simply test for the existence of the field "foo", as its mere existence implies its value equal to Boolean True.
Honestly, instead of forcing the user to set a value for the new field in the set_field() function, why not make it officially optional? Until Pipeline Rules can extend beyond strictly "if/then" scenarios and include "elif" and "else" clauses, there is no reason for setting Boolean values at all, as demonstrated above.
In addition, currently the only way to test the existence of a field is by employing the "is_null()" and "is_not_null()" functions, which is rather indirect. Could a new function be written strictly for checking the existence of a field, like the "_exists_" search term in Graylog Search?
## Your Environment
* Graylog Version: 3.0.2
* Elasticsearch Version: 6.x
* MongoDB Version: 4.0.8
* Operating System: Oracle Linux 7.5
|
process
|
pipeline rule allows invocation of function set field without setting a value for the new field the documentation for pipeline rule function set field states that the second argument value is a required parameter for the function however graylog allows me to invoke the function without specifying that argument expected behavior when writing the pipeline rule an error message should appear and graylog should forbid the user from saving the rule until the issue is resolved as is the behavior for other functions if not invoked correctly current behavior i am able to save the pipeline rule without supplying an argument for the value parameter however the rule does not appear to work as the new field that is supposed to be created does not show up in logs possible solution force the user to fix the error in the pipeline rule by supplying an argument for the value parameter in the set field function steps to reproduce for bugs create pipeline rule like the following example rule test set field when true then set field foo end save the rule connect it to a stream receiving logs go to stream generate new logs note that new logs do not have the field foo at all context since pipeline rules have no elif or else clauses this makes setting binary fields useless as instead you can just test for the existence of the field itself take the a modified version of the above pipeline rule for example rule if true create field foo and set to true when list of conditions message x then set field foo true end if want to avoid acting upon all logs that don t match the supplied condition there s no reason for me to write a separate pipeline rule to set field foo to false as it is implied by not matching against the above rule therefore why must i set foo to any value at all if the supplied conditions match foo will equal boolean true and if conditions don t match the field won t be set at all so it would be more efficient to simply test for the existence of the field foo as its mere existence implies its value equal to boolean true honestly instead of forcing the user to set a value for the new field in the set field function why not make it officially optional until pipeline rules can extend beyond strictly if then scenarios and include elif and else clauses there is no reason for setting boolean values at all as demonstrated above in addition currently the only way to test the existence of a field is by employing the is null and is not null functions which is rather indirect could a new function be written strictly for checking the existence of a field like the exists search term in graylog search your environment graylog version elasticsearch version x mongodb version operating system oracle linux
| 1
|
11,044
| 13,854,934,815
|
IssuesEvent
|
2020-10-15 10:12:00
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
opened
|
Add more APIs
|
type: process
|
The following APIs have been identified for generation:
- [ ] Access Approval API
- [ ] App Engine Admin API
- [ ] Cloud Bigtable Admin API
- [ ] Binary Authorization API
- [ ] Cloud Build API
- [ ] Cloud IoT API
- [ ] Cloud Resource Manager API
- [ ] IAM Service Account Credentials API
- [ ] Pub/Sub Lite API
- [ ] Web Security Scanner API
- [ ] Area120 Tables API
- [ ] Data Labeling API
- [ ] Media Translation API
- [ ] Policy Troubleshooter API
- [ ] Recommendations AI
- [ ] Service Control API
- [ ] Service Management API
- [ ] Workflow Executions API
We'll add a date and the kind of action as we go.
|
1.0
|
Add more APIs - The following APIs have been identified for generation:
- [ ] Access Approval API
- [ ] App Engine Admin API
- [ ] Cloud Bigtable Admin API
- [ ] Binary Authorization API
- [ ] Cloud Build API
- [ ] Cloud IoT API
- [ ] Cloud Resource Manager API
- [ ] IAM Service Account Credentials API
- [ ] Pub/Sub Lite API
- [ ] Web Security Scanner API
- [ ] Area120 Tables API
- [ ] Data Labeling API
- [ ] Media Translation API
- [ ] Policy Troubleshooter API
- [ ] Recommendations AI
- [ ] Service Control API
- [ ] Service Management API
- [ ] Workflow Executions API
We'll add a date and the kind of action as we go.
|
process
|
add more apis the following apis have been identified for generation access approval api app engine admin api cloud bigtable admin api binary authorization api cloud build api cloud iot api cloud resource manager api iam service account credentials api pub sub lite api web security scanner api tables api data labeling api media translation api policy troubleshooter api recommendations ai service control api service management api workflow executions api we ll add a date and the kind of action as we go
| 1
|
14,676
| 17,791,985,976
|
IssuesEvent
|
2021-08-31 17:16:21
|
googleapis/python-api-core
|
https://api.github.com/repos/googleapis/python-api-core
|
closed
|
'test_consumer_unexpected_error' coverage flakes (race condition)
|
type: process
|
From [this Kokoro failure](https://source.cloud.google.com/results/invocations/cffbd079-e899-4020-8a98-92674cb1b934/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-api-core%2Fpresubmit%2Fpresubmit/log):
```python
---------------------------------------------------------------------------------------------------------
google/api_core/__init__.py 3 0 0 0 100%
google/api_core/bidi.py 255 0 64 0 100%
google/api_core/client_info.py 32 0 10 0 100%
google/api_core/client_options.py 18 0 6 0 100%
google/api_core/datetime_helpers.py 81 0 18 0 100%
google/api_core/exceptions.py 151 0 16 0 100%
google/api_core/future/__init__.py 3 0 0 0 100%
google/api_core/future/_helpers.py 14 0 0 0 100%
google/api_core/future/async_future.py 46 0 6 0 100%
google/api_core/future/base.py 14 0 0 0 100%
google/api_core/future/polling.py 64 0 12 0 100%
google/api_core/gapic_v1/__init__.py 7 0 0 0 100%
google/api_core/gapic_v1/client_info.py 7 0 0 0 100%
google/api_core/gapic_v1/config.py 29 0 10 0 100%
google/api_core/gapic_v1/config_async.py 6 0 0 0 100%
google/api_core/gapic_v1/method.py 51 0 20 0 100%
google/api_core/gapic_v1/method_async.py 11 0 0 0 100%
google/api_core/gapic_v1/routing_header.py 7 0 0 0 100%
google/api_core/general_helpers.py 0 0 0 0 100%
google/api_core/grpc_helpers.py 160 0 32 0 100%
google/api_core/grpc_helpers_async.py 121 0 12 0 100%
google/api_core/iam.py 129 0 48 0 100%
google/api_core/operation.py 85 0 16 0 100%
google/api_core/operation_async.py 58 0 12 0 100%
google/api_core/operations_v1/__init__.py 4 0 0 0 100%
google/api_core/operations_v1/operations_async_client.py 37 0 0 0 100%
google/api_core/operations_v1/operations_client.py 38 0 0 0 100%
google/api_core/operations_v1/operations_client_config.py 2 0 0 0 100%
google/api_core/page_iterator.py 166 0 50 0 100%
google/api_core/page_iterator_async.py 71 0 24 0 100%
google/api_core/path_template.py 68 0 28 0 100%
google/api_core/protobuf_helpers.py 114 0 70 0 100%
google/api_core/retry.py 75 0 14 0 100%
google/api_core/retry_async.py 64 0 14 0 100%
google/api_core/timeout.py 44 0 2 0 100%
google/api_core/version.py 1 0 0 0 100%
tests/asyncio/__init__.py 0 0 0 0 100%
tests/asyncio/future/__init__.py 0 0 0 0 100%
tests/asyncio/future/test_async_future.py 129 0 2 0 100%
tests/asyncio/operations_v1/__init__.py 0 0 0 0 100%
tests/asyncio/operations_v1/test_operations_async_client.py 55 0 2 0 100%
tests/asyncio/test_grpc_helpers_async.py 296 0 4 0 100%
tests/asyncio/test_operation_async.py 90 0 8 0 100%
tests/asyncio/test_page_iterator_async.py 161 0 6 0 100%
tests/asyncio/test_retry_async.py 202 0 12 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/future/__init__.py 0 0 0 0 100%
tests/unit/future/test__helpers.py 15 0 0 0 100%
tests/unit/future/test_polling.py 148 0 2 0 100%
tests/unit/operations_v1/__init__.py 0 0 0 0 100%
tests/unit/operations_v1/test_operations_client.py 49 0 0 0 100%
tests/unit/test_bidi.py 551 1 32 1 99% 839
tests/unit/test_client_info.py 30 0 0 0 100%
tests/unit/test_client_options.py 33 0 0 0 100%
tests/unit/test_datetime_helpers.py 187 0 0 0 100%
tests/unit/test_exceptions.py 140 0 0 0 100%
tests/unit/test_grpc_helpers.py 401 0 26 0 100%
tests/unit/test_iam.py 263 0 0 0 100%
tests/unit/test_operation.py 162 0 8 0 100%
tests/unit/test_page_iterator.py 353 0 16 0 100%
tests/unit/test_path_template.py 42 0 6 0 100%
tests/unit/test_protobuf_helpers.py 296 0 6 0 100%
tests/unit/test_retry.py 238 0 14 0 100%
tests/unit/test_timeout.py 75 0 2 0 100%
---------------------------------------------------------------------------------------------------------
TOTAL 5952 1 630 1 99%
```
|
1.0
|
'test_consumer_unexpected_error' coverage flakes (race condition) - From [this Kokoro failure](https://source.cloud.google.com/results/invocations/cffbd079-e899-4020-8a98-92674cb1b934/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-api-core%2Fpresubmit%2Fpresubmit/log):
```python
---------------------------------------------------------------------------------------------------------
google/api_core/__init__.py 3 0 0 0 100%
google/api_core/bidi.py 255 0 64 0 100%
google/api_core/client_info.py 32 0 10 0 100%
google/api_core/client_options.py 18 0 6 0 100%
google/api_core/datetime_helpers.py 81 0 18 0 100%
google/api_core/exceptions.py 151 0 16 0 100%
google/api_core/future/__init__.py 3 0 0 0 100%
google/api_core/future/_helpers.py 14 0 0 0 100%
google/api_core/future/async_future.py 46 0 6 0 100%
google/api_core/future/base.py 14 0 0 0 100%
google/api_core/future/polling.py 64 0 12 0 100%
google/api_core/gapic_v1/__init__.py 7 0 0 0 100%
google/api_core/gapic_v1/client_info.py 7 0 0 0 100%
google/api_core/gapic_v1/config.py 29 0 10 0 100%
google/api_core/gapic_v1/config_async.py 6 0 0 0 100%
google/api_core/gapic_v1/method.py 51 0 20 0 100%
google/api_core/gapic_v1/method_async.py 11 0 0 0 100%
google/api_core/gapic_v1/routing_header.py 7 0 0 0 100%
google/api_core/general_helpers.py 0 0 0 0 100%
google/api_core/grpc_helpers.py 160 0 32 0 100%
google/api_core/grpc_helpers_async.py 121 0 12 0 100%
google/api_core/iam.py 129 0 48 0 100%
google/api_core/operation.py 85 0 16 0 100%
google/api_core/operation_async.py 58 0 12 0 100%
google/api_core/operations_v1/__init__.py 4 0 0 0 100%
google/api_core/operations_v1/operations_async_client.py 37 0 0 0 100%
google/api_core/operations_v1/operations_client.py 38 0 0 0 100%
google/api_core/operations_v1/operations_client_config.py 2 0 0 0 100%
google/api_core/page_iterator.py 166 0 50 0 100%
google/api_core/page_iterator_async.py 71 0 24 0 100%
google/api_core/path_template.py 68 0 28 0 100%
google/api_core/protobuf_helpers.py 114 0 70 0 100%
google/api_core/retry.py 75 0 14 0 100%
google/api_core/retry_async.py 64 0 14 0 100%
google/api_core/timeout.py 44 0 2 0 100%
google/api_core/version.py 1 0 0 0 100%
tests/asyncio/__init__.py 0 0 0 0 100%
tests/asyncio/future/__init__.py 0 0 0 0 100%
tests/asyncio/future/test_async_future.py 129 0 2 0 100%
tests/asyncio/operations_v1/__init__.py 0 0 0 0 100%
tests/asyncio/operations_v1/test_operations_async_client.py 55 0 2 0 100%
tests/asyncio/test_grpc_helpers_async.py 296 0 4 0 100%
tests/asyncio/test_operation_async.py 90 0 8 0 100%
tests/asyncio/test_page_iterator_async.py 161 0 6 0 100%
tests/asyncio/test_retry_async.py 202 0 12 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/future/__init__.py 0 0 0 0 100%
tests/unit/future/test__helpers.py 15 0 0 0 100%
tests/unit/future/test_polling.py 148 0 2 0 100%
tests/unit/operations_v1/__init__.py 0 0 0 0 100%
tests/unit/operations_v1/test_operations_client.py 49 0 0 0 100%
tests/unit/test_bidi.py 551 1 32 1 99% 839
tests/unit/test_client_info.py 30 0 0 0 100%
tests/unit/test_client_options.py 33 0 0 0 100%
tests/unit/test_datetime_helpers.py 187 0 0 0 100%
tests/unit/test_exceptions.py 140 0 0 0 100%
tests/unit/test_grpc_helpers.py 401 0 26 0 100%
tests/unit/test_iam.py 263 0 0 0 100%
tests/unit/test_operation.py 162 0 8 0 100%
tests/unit/test_page_iterator.py 353 0 16 0 100%
tests/unit/test_path_template.py 42 0 6 0 100%
tests/unit/test_protobuf_helpers.py 296 0 6 0 100%
tests/unit/test_retry.py 238 0 14 0 100%
tests/unit/test_timeout.py 75 0 2 0 100%
---------------------------------------------------------------------------------------------------------
TOTAL 5952 1 630 1 99%
```
|
process
|
test consumer unexpected error coverage flakes race condition from python google api core init py google api core bidi py google api core client info py google api core client options py google api core datetime helpers py google api core exceptions py google api core future init py google api core future helpers py google api core future async future py google api core future base py google api core future polling py google api core gapic init py google api core gapic client info py google api core gapic config py google api core gapic config async py google api core gapic method py google api core gapic method async py google api core gapic routing header py google api core general helpers py google api core grpc helpers py google api core grpc helpers async py google api core iam py google api core operation py google api core operation async py google api core operations init py google api core operations operations async client py google api core operations operations client py google api core operations operations client config py google api core page iterator py google api core page iterator async py google api core path template py google api core protobuf helpers py google api core retry py google api core retry async py google api core timeout py google api core version py tests asyncio init py tests asyncio future init py tests asyncio future test async future py tests asyncio operations init py tests asyncio operations test operations async client py tests asyncio test grpc helpers async py tests asyncio test operation async py tests asyncio test page iterator async py tests asyncio test retry async py tests unit init py tests unit future init py tests unit future test helpers py tests unit future test polling py tests unit operations init py tests unit operations test operations client py tests unit test bidi py tests unit test client info py tests unit test client options py tests unit test datetime helpers py tests unit test exceptions py tests unit test grpc helpers py tests unit test iam py tests unit test operation py tests unit test page iterator py tests unit test path template py tests unit test protobuf helpers py tests unit test retry py tests unit test timeout py total
| 1
|
8,336
| 11,495,418,820
|
IssuesEvent
|
2020-02-12 04:46:24
|
Triple-T/gradle-play-publisher
|
https://api.github.com/repos/Triple-T/gradle-play-publisher
|
closed
|
Revamp testing story
|
feature:other process
|
There have been numerous regressions that should have been caught by tests.
Tests to add:
- [x] Unit tests for Validation.kt methods
- [x] Unit tests for Plugins.kt
- [x] Unit tests for logic in CliOptions.kt
- [x] Unit tests + integration tests for PlayWorkers.kt
- [x] Integration tests Agp.kt
- [x] Integration tests with and without a mapping file
- [x] Integration tests validating customDir artifact fetching behavior
- [x] Integration tests for each task validating that it's publishing stuff
- [x] Integration tests validating the correct extension was picked
- [x] Remove all Groovy
|
1.0
|
Revamp testing story - There have been numerous regressions that should have been caught by tests.
Tests to add:
- [x] Unit tests for Validation.kt methods
- [x] Unit tests for Plugins.kt
- [x] Unit tests for logic in CliOptions.kt
- [x] Unit tests + integration tests for PlayWorkers.kt
- [x] Integration tests Agp.kt
- [x] Integration tests with and without a mapping file
- [x] Integration tests validating customDir artifact fetching behavior
- [x] Integration tests for each task validating that it's publishing stuff
- [x] Integration tests validating the correct extension was picked
- [x] Remove all Groovy
|
process
|
revamp testing story there have been numerous regressions that should have been caught by tests tests to add unit tests for validation kt methods unit tests for plugins kt unit tests for logic in clioptions kt unit tests integration tests for playworkers kt integration tests agp kt integration tests with and without a mapping file integration tests validating customdir artifact fetching behavior integration tests for each task validating that it s publishing stuff integration tests validating the correct extension was picked remove all groovy
| 1
|
19,695
| 26,047,573,583
|
IssuesEvent
|
2022-12-22 15:37:48
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Could you illustrate more on how to use "variables" part in stage?
|
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
|
Could you please illustrate more on how to use "variables" part in stage? Like add an example of it?
Could you put a link of the convention of following grammar syntax? Like what does curly bracket mean?
`variables: { string: string } | [ variable | variableReference ] `
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d322215c-8025-4f21-0700-7dfa7dc5c46e
* Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4
* Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Could you illustrate more on how to use "variables" part in stage? - Could you please illustrate more on how to use "variables" part in stage? Like add an example of it?
Could you put a link of the convention of following grammar syntax? Like what does curly bracket mean?
`variables: { string: string } | [ variable | variableReference ] `
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d322215c-8025-4f21-0700-7dfa7dc5c46e
* Version Independent ID: 141fcdbb-8394-525b-bb29-eff9a693a9c4
* Content: [Stages in Azure Pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/stages.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/stages.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
could you illustrate more on how to use variables part in stage could you please illustrate more on how to use variables part in stage like add an example of it could you put a link of the convention of following grammar syntax like what does curly bracket mean variables string string document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
391,318
| 11,572,216,217
|
IssuesEvent
|
2020-02-20 23:24:25
|
googleapis/python-bigquery-datatransfer
|
https://api.github.com/repos/googleapis/python-bigquery-datatransfer
|
closed
|
Synthesis failed for python-bigquery-datatransfer
|
api: bigquerydatatransfer autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate python-bigquery-datatransfer. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:6aec9c34db0e4be221cdaf6faba27bdc07cfea846808b3d3b964dfce3a9a0f9b
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/bigquery/datatransfer/artman_bigquerydatatransfer.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1.
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/datatransfer/v1/transfer.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto/transfer.proto
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/datatransfer/v1/datatransfer.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto/datatransfer.proto
synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto.
synthtool > Replaced 'from google.cloud.bigquery.datatransfer_v1.proto' in google/cloud/bigquery_datatransfer_v1/proto/datatransfer_pb2.py.
synthtool > Replaced 'from google.cloud.bigquery.datatransfer_v1.proto' in google/cloud/bigquery_datatransfer_v1/proto/datatransfer_pb2_grpc.py.
synthtool > Replaced 'google-cloud-bigquerydatatransfer' in google/cloud/bigquery_datatransfer_v1/gapic/data_transfer_service_client.py.
synthtool > Replaced 'import google.api_core.gapic_v1.method\n' in google/cloud/bigquery_datatransfer_v1/gapic/data_transfer_service_client.py.
.coveragerc
.flake8
.github/CONTRIBUTING.md
.github/ISSUE_TEMPLATE/bug_report.md
.github/ISSUE_TEMPLATE/feature_request.md
.github/ISSUE_TEMPLATE/support_request.md
.github/PULL_REQUEST_TEMPLATE.md
.github/release-please.yml
.gitignore
.kokoro/build.sh
.kokoro/continuous/common.cfg
.kokoro/continuous/continuous.cfg
.kokoro/docs/common.cfg
.kokoro/docs/docs.cfg
.kokoro/presubmit/common.cfg
.kokoro/presubmit/presubmit.cfg
.kokoro/publish-docs.sh
.kokoro/release.sh
.kokoro/release/common.cfg
.kokoro/release/release.cfg
.kokoro/trampoline.sh
CODE_OF_CONDUCT.md
CONTRIBUTING.rst
LICENSE
MANIFEST.in
docs/_static/custom.css
docs/_templates/layout.html
docs/conf.py.j2
noxfile.py.j2
renovate.json
setup.cfg
Running session blacken
Creating virtual environment (virtualenv) using python3.6 in .nox/blacken
pip install black==19.3b0
Error: pip is not installed into the virtualenv, it is located at /tmpfs/src/git/autosynth/env/bin/pip. Pass external=True into run() to explicitly allow this.
Session blacken failed.
synthtool > Failed executing nox -s blacken:
None
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 68, in <module>
s.shell.run(["nox", "-s", "blacken"], hide_output=False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nox', '-s', 'blacken']' returned non-zero exit status 1.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/bc296324-7867-4eae-a113-8090e6bcc834).
|
1.0
|
Synthesis failed for python-bigquery-datatransfer - Hello! Autosynth couldn't regenerate python-bigquery-datatransfer. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:6aec9c34db0e4be221cdaf6faba27bdc07cfea846808b3d3b964dfce3a9a0f9b
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/bigquery/datatransfer/artman_bigquerydatatransfer.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1.
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/datatransfer/v1/transfer.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto/transfer.proto
synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/bigquery/datatransfer/v1/datatransfer.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto/datatransfer.proto
synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/bigquerydatatransfer-v1/google/cloud/bigquery_datatransfer_v1/proto.
synthtool > Replaced 'from google.cloud.bigquery.datatransfer_v1.proto' in google/cloud/bigquery_datatransfer_v1/proto/datatransfer_pb2.py.
synthtool > Replaced 'from google.cloud.bigquery.datatransfer_v1.proto' in google/cloud/bigquery_datatransfer_v1/proto/datatransfer_pb2_grpc.py.
synthtool > Replaced 'google-cloud-bigquerydatatransfer' in google/cloud/bigquery_datatransfer_v1/gapic/data_transfer_service_client.py.
synthtool > Replaced 'import google.api_core.gapic_v1.method\n' in google/cloud/bigquery_datatransfer_v1/gapic/data_transfer_service_client.py.
.coveragerc
.flake8
.github/CONTRIBUTING.md
.github/ISSUE_TEMPLATE/bug_report.md
.github/ISSUE_TEMPLATE/feature_request.md
.github/ISSUE_TEMPLATE/support_request.md
.github/PULL_REQUEST_TEMPLATE.md
.github/release-please.yml
.gitignore
.kokoro/build.sh
.kokoro/continuous/common.cfg
.kokoro/continuous/continuous.cfg
.kokoro/docs/common.cfg
.kokoro/docs/docs.cfg
.kokoro/presubmit/common.cfg
.kokoro/presubmit/presubmit.cfg
.kokoro/publish-docs.sh
.kokoro/release.sh
.kokoro/release/common.cfg
.kokoro/release/release.cfg
.kokoro/trampoline.sh
CODE_OF_CONDUCT.md
CONTRIBUTING.rst
LICENSE
MANIFEST.in
docs/_static/custom.css
docs/_templates/layout.html
docs/conf.py.j2
noxfile.py.j2
renovate.json
setup.cfg
Running session blacken
Creating virtual environment (virtualenv) using python3.6 in .nox/blacken
pip install black==19.3b0
Error: pip is not installed into the virtualenv, it is located at /tmpfs/src/git/autosynth/env/bin/pip. Pass external=True into run() to explicitly allow this.
Session blacken failed.
synthtool > Failed executing nox -s blacken:
None
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 68, in <module>
s.shell.run(["nox", "-s", "blacken"], hide_output=False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['nox', '-s', 'blacken']' returned non-zero exit status 1.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/bc296324-7867-4eae-a113-8090e6bcc834).
|
non_process
|
synthesis failed for python bigquery datatransfer hello autosynth couldn t regenerate python bigquery datatransfer broken heart here s the output from running synth py cloning into working repo switched to branch autosynth running synthtool synthtool executing tmpfs src git autosynth working repo synth py on branch autosynth nothing to commit working tree clean head detached at fetch head nothing to commit working tree clean synthtool ensuring dependencies synthtool pulling artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest synthtool cloning googleapis synthtool running generator for google cloud bigquery datatransfer artman bigquerydatatransfer yaml synthtool generated code into home kbuilder cache synthtool googleapis artman genfiles python bigquerydatatransfer synthtool copy home kbuilder cache synthtool googleapis google cloud bigquery datatransfer transfer proto to home kbuilder cache synthtool googleapis artman genfiles python bigquerydatatransfer google cloud bigquery datatransfer proto transfer proto synthtool copy home kbuilder cache synthtool googleapis google cloud bigquery datatransfer datatransfer proto to home kbuilder cache synthtool googleapis artman genfiles python bigquerydatatransfer google cloud bigquery datatransfer proto datatransfer proto synthtool placed proto files into home kbuilder cache synthtool googleapis artman genfiles python bigquerydatatransfer google cloud bigquery datatransfer proto synthtool replaced from google cloud bigquery datatransfer proto in google cloud bigquery datatransfer proto datatransfer py synthtool replaced from google cloud bigquery datatransfer proto in google cloud bigquery datatransfer proto datatransfer grpc py synthtool replaced google cloud bigquerydatatransfer in google cloud bigquery datatransfer gapic data transfer service client py synthtool replaced import google api core gapic method n in google cloud bigquery datatransfer gapic data transfer service client py coveragerc github contributing md github issue template bug report md github issue template feature request md github issue template support request md github pull request template md github release please yml gitignore kokoro build sh kokoro continuous common cfg kokoro continuous continuous cfg kokoro docs common cfg kokoro docs docs cfg kokoro presubmit common cfg kokoro presubmit presubmit cfg kokoro publish docs sh kokoro release sh kokoro release common cfg kokoro release release cfg kokoro trampoline sh code of conduct md contributing rst license manifest in docs static custom css docs templates layout html docs conf py noxfile py renovate json setup cfg running session blacken creating virtual environment virtualenv using in nox blacken pip install black error pip is not installed into the virtualenv it is located at tmpfs src git autosynth env bin pip pass external true into run to explicitly allow this session blacken failed synthtool failed executing nox s blacken none synthtool wrote metadata to synth metadata traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo synth py line in s shell run hide output false file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status synthesis failed google internal developers can see the full log
| 0
|
163,358
| 12,719,414,019
|
IssuesEvent
|
2020-06-24 09:14:36
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
EQL: NodeSubclass test
|
:Query Languages/EQL >test Team:QL
|
`Node` specific operations should be tested similarly with SqlNodeSubclassTests from ES SQL project.
|
1.0
|
EQL: NodeSubclass test - `Node` specific operations should be tested similarly with SqlNodeSubclassTests from ES SQL project.
|
non_process
|
eql nodesubclass test node specific operations should be tested similarly with sqlnodesubclasstests from es sql project
| 0
|
571,882
| 17,023,379,658
|
IssuesEvent
|
2021-07-03 01:43:25
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Rendering leisure=marina areas blue
|
Component: mapnik Priority: minor Resolution: fixed Type: enhancement
|
**[Submitted to the original trac issue database at 3.01pm, Monday, 30th March 2009]**
Areas tagged with leisure=marina are currently rendered with the same colour as land. Instead they should be rendered in a blue shade (probably the same shade as the sea).
Example: http://www.openstreetmap.org/browse/way/30877931
|
1.0
|
Rendering leisure=marina areas blue - **[Submitted to the original trac issue database at 3.01pm, Monday, 30th March 2009]**
Areas tagged with leisure=marina are currently rendered with the same colour as land. Instead they should be rendered in a blue shade (probably the same shade as the sea).
Example: http://www.openstreetmap.org/browse/way/30877931
|
non_process
|
rendering leisure marina areas blue areas tagged with leisure marina are currently rendered with the same colour as land instead they should be rendered in a blue shade probably the same shade as the sea example
| 0
|
328,323
| 24,179,662,592
|
IssuesEvent
|
2022-09-23 07:38:54
|
zyskarch/pytestarch
|
https://api.github.com/repos/zyskarch/pytestarch
|
closed
|
Improve official documentation
|
documentation
|
- [x] add docu and src code link in PyPi
- [ ] releases in github?
|
1.0
|
Improve official documentation - - [x] add docu and src code link in PyPi
- [ ] releases in github?
|
non_process
|
improve official documentation add docu and src code link in pypi releases in github
| 0
|
4,574
| 7,397,810,455
|
IssuesEvent
|
2018-03-19 01:50:46
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Software engineer
|
cxp in-process product-question storage triaged
|
Is there a way to download an entire folder from the blob container rather downloading individual images. Please help.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60e56e69-1c44-2056-ad5f-319535e4b525
* Version Independent ID: 49fa6098-0b8e-ebf5-efe7-f50f3604e325
* Content: [Azure Quickstart - Upload, download, and list blobs in Azure Storage using .NET | Microsoft Docs](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet?tabs=windows)
* Content Source: [articles/storage/blobs/storage-quickstart-blobs-dotnet.md](https://github.com/Microsoft/azure-docs/blob/master/articles/storage/blobs/storage-quickstart-blobs-dotnet.md)
* Service: **storage**
* GitHub Login: @tamram
* Microsoft Alias: **tamram**
|
1.0
|
Software engineer - Is there a way to download an entire folder from the blob container rather downloading individual images. Please help.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60e56e69-1c44-2056-ad5f-319535e4b525
* Version Independent ID: 49fa6098-0b8e-ebf5-efe7-f50f3604e325
* Content: [Azure Quickstart - Upload, download, and list blobs in Azure Storage using .NET | Microsoft Docs](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet?tabs=windows)
* Content Source: [articles/storage/blobs/storage-quickstart-blobs-dotnet.md](https://github.com/Microsoft/azure-docs/blob/master/articles/storage/blobs/storage-quickstart-blobs-dotnet.md)
* Service: **storage**
* GitHub Login: @tamram
* Microsoft Alias: **tamram**
|
process
|
software engineer is there a way to download an entire folder from the blob container rather downloading individual images please help document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service storage github login tamram microsoft alias tamram
| 1
|
8,627
| 11,779,655,378
|
IssuesEvent
|
2020-03-16 18:26:41
|
uncrustify/uncrustify
|
https://api.github.com/repos/uncrustify/uncrustify
|
closed
|
preprocessor is considered part of type in functinon definition
|
C and C++11 Preprocessor
|
test.c:
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) * (num))))
void *m_malloc(size_t num_bytes);
```
Expected output (no change):
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) * (num))))
void *m_malloc(size_t num_bytes);
```
Actual output (space after star is deleted):
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) *(num))))
void *m_malloc(size_t num_bytes);
```
[debug.txt](https://github.com/uncrustify/uncrustify/files/4309986/debug.txt)
Version 0.70.1
Debugging shows that this happens because `mark_function()` treats everything except for comments and newlines before a function definition as part of the type.
[//]: # " If the issue is connected to existing Uncrustify options please, if possible, add the"
[//]: # " following information to ease up the process:"
[//]: # " • a link to a debug file:"
[//]: # " generated with: 'uncrustify -p debug.txt -c pathToUsedConfig.cfg -f toBeFormatedFile.cpp' "
[//]: # " Example: [debug.txt](https://linkToTheFile)"
[//]: # " Example hosters for debug files: pastebin.com, gist.github.com, ..."
[//]: # " The used config file is included in the debug file and does not need to be included here."
[//]: #
[//]: # " • include a small but complete test file that will be uncrustifyed"
[//]: # " • include the generated results"
[//]: # " • include the expected results"
[//]: #
[//]: # " ✋ please add a line containing ``` above and below of each of those three code sections"
[//]: #
[//]: # " • include the current version of your Uncrustify executable"
[//]: # " printout via 'uncrustify -v'"
[//]: # " Example: current version: uncrustify 0.63"
[//]: # " or if possible additionally with the git sha of the commit"
[//]: # " current version: uncrustify 0.63 dc7b412"
[//]: #
[//]: # " • if possible include a version that worked"
[//]: # " Example: working version: uncrustify 0.63"
[//]: # " or"
[//]: # " working version: uncrustify 0.63 2a5e88f"
|
1.0
|
preprocessor is considered part of type in functinon definition - test.c:
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) * (num))))
void *m_malloc(size_t num_bytes);
```
Expected output (no change):
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) * (num))))
void *m_malloc(size_t num_bytes);
```
Actual output (space after star is deleted):
```C
#define m_new(type, num) ((type *)(m_malloc(sizeof(type) *(num))))
void *m_malloc(size_t num_bytes);
```
[debug.txt](https://github.com/uncrustify/uncrustify/files/4309986/debug.txt)
Version 0.70.1
Debugging shows that this happens because `mark_function()` treats everything except for comments and newlines before a function definition as part of the type.
[//]: # " If the issue is connected to existing Uncrustify options please, if possible, add the"
[//]: # " following information to ease up the process:"
[//]: # " • a link to a debug file:"
[//]: # " generated with: 'uncrustify -p debug.txt -c pathToUsedConfig.cfg -f toBeFormatedFile.cpp' "
[//]: # " Example: [debug.txt](https://linkToTheFile)"
[//]: # " Example hosters for debug files: pastebin.com, gist.github.com, ..."
[//]: # " The used config file is included in the debug file and does not need to be included here."
[//]: #
[//]: # " • include a small but complete test file that will be uncrustifyed"
[//]: # " • include the generated results"
[//]: # " • include the expected results"
[//]: #
[//]: # " ✋ please add a line containing ``` above and below of each of those three code sections"
[//]: #
[//]: # " • include the current version of your Uncrustify executable"
[//]: # " printout via 'uncrustify -v'"
[//]: # " Example: current version: uncrustify 0.63"
[//]: # " or if possible additionally with the git sha of the commit"
[//]: # " current version: uncrustify 0.63 dc7b412"
[//]: #
[//]: # " • if possible include a version that worked"
[//]: # " Example: working version: uncrustify 0.63"
[//]: # " or"
[//]: # " working version: uncrustify 0.63 2a5e88f"
|
process
|
preprocessor is considered part of type in functinon definition test c c define m new type num type m malloc sizeof type num void m malloc size t num bytes expected output no change c define m new type num type m malloc sizeof type num void m malloc size t num bytes actual output space after star is deleted c define m new type num type m malloc sizeof type num void m malloc size t num bytes version debugging shows that this happens because mark function treats everything except for comments and newlines before a function definition as part of the type if the issue is connected to existing uncrustify options please if possible add the following information to ease up the process • a link to a debug file generated with uncrustify p debug txt c pathtousedconfig cfg f tobeformatedfile cpp example example hosters for debug files pastebin com gist github com the used config file is included in the debug file and does not need to be included here • include a small but complete test file that will be uncrustifyed • include the generated results • include the expected results ✋ please add a line containing above and below of each of those three code sections • include the current version of your uncrustify executable printout via uncrustify v example current version uncrustify or if possible additionally with the git sha of the commit current version uncrustify • if possible include a version that worked example working version uncrustify or working version uncrustify
| 1
|
241,455
| 20,142,680,880
|
IssuesEvent
|
2022-02-09 02:00:29
|
PlayFab/thundernetes
|
https://api.github.com/repos/PlayFab/thundernetes
|
closed
|
Create end to end tests for GameServer API
|
area/tests
|
This is a follow up to #79 and #91. We should create end to end tests that run as part of the CI pipeline, for the GameServer API service.
|
1.0
|
Create end to end tests for GameServer API - This is a follow up to #79 and #91. We should create end to end tests that run as part of the CI pipeline, for the GameServer API service.
|
non_process
|
create end to end tests for gameserver api this is a follow up to and we should create end to end tests that run as part of the ci pipeline for the gameserver api service
| 0
|
10,843
| 13,624,216,232
|
IssuesEvent
|
2020-09-24 07:43:05
|
modi-w/AutoVersionsDB
|
https://api.github.com/repos/modi-w/AutoVersionsDB
|
closed
|
Create Console Application
|
area-Core area-Tests blocking process-ready-for-implementation type-enhancement
|
**Goal**
Create Console Application for the tool.
**Action Items:**
1. Create Console application
2. Allow change configuration
3. Allow all the functionality
4. Create end to end tests for the console application
**Updates**
1.
|
1.0
|
Create Console Application - **Goal**
Create Console Application for the tool.
**Action Items:**
1. Create Console application
2. Allow change configuration
3. Allow all the functionality
4. Create end to end tests for the console application
**Updates**
1.
|
process
|
create console application goal create console application for the tool action items create console application allow change configuration allow all the functionality create end to end tests for the console application updates
| 1
|
62,611
| 12,227,356,163
|
IssuesEvent
|
2020-05-03 14:57:40
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Make salvage artifact and salvage wreck missions spawn ruins and wrecks, if not available
|
Code Feature request
|
Currently, the salvage missions use existing ruins and wrecks. This prevents us from making biomes without any ruins or wrecks in them, because the missions will spawn the item out of the map.
Suggestion: If no ruin or wreck is available, spawn one for the mission. This will ensure that these missions will work properly in all situations and allow us greater creative freedom with the biomes.
|
1.0
|
Make salvage artifact and salvage wreck missions spawn ruins and wrecks, if not available - Currently, the salvage missions use existing ruins and wrecks. This prevents us from making biomes without any ruins or wrecks in them, because the missions will spawn the item out of the map.
Suggestion: If no ruin or wreck is available, spawn one for the mission. This will ensure that these missions will work properly in all situations and allow us greater creative freedom with the biomes.
|
non_process
|
make salvage artifact and salvage wreck missions spawn ruins and wrecks if not available currently the salvage missions use existing ruins and wrecks this prevents us from making biomes without any ruins or wrecks in them because the missions will spawn the item out of the map suggestion if no ruin or wreck is available spawn one for the mission this will ensure that these missions will work properly in all situations and allow us greater creative freedom with the biomes
| 0
|
732,200
| 25,248,703,844
|
IssuesEvent
|
2022-11-15 13:09:30
|
matrixorigin/matrixone
|
https://api.github.com/repos/matrixorigin/matrixone
|
opened
|
[Feature Request]: CN/DN account 级别的本地磁盘缓存统计
|
priority/p1 kind/feature source/on-demand
|
### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
_No response_
### Describe the feature you'd like
区分account对本地磁盘的占用
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_
|
1.0
|
[Feature Request]: CN/DN account 级别的本地磁盘缓存统计 - ### Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
### Is your feature request related to a problem?
_No response_
### Describe the feature you'd like
区分account对本地磁盘的占用
### Describe implementation you've considered
_No response_
### Documentation, Adoption, Use Case, Migration Strategy
_No response_
### Additional information
_No response_
|
non_process
|
cn dn account 级别的本地磁盘缓存统计 is there an existing issue for the same feature request i have checked the existing issues is your feature request related to a problem no response describe the feature you d like 区分account对本地磁盘的占用 describe implementation you ve considered no response documentation adoption use case migration strategy no response additional information no response
| 0
|
17,650
| 23,470,982,367
|
IssuesEvent
|
2022-08-16 21:49:53
|
anitsh/til
|
https://api.github.com/repos/anitsh/til
|
opened
|
Data Normalization
|
basics data process
|
# Data Normalization
It is a process in which data attributes within a data model are organized to increase the cohesion of entity types. In other words, the goal of data normalization is to reduce and even eliminate data redundancy, an important consideration for application developers because it is incredibly difficult to stores objects in a relational database that maintains the same information in several places.
### Why Data Normalization?
There are two primary advantages of having a highly normalized data schema:
- Increased consistency. Information is stored in one place and one place only, reducing the possibility of inconsistent data.
- Easier object-to-data mapping. Highly-normalized data schemas in general are closer conceptually to object-oriented schemas because the object-oriented goals of promoting high cohesion and loose coupling between classes results in similar solutions (at least from a data point of view).
You typically want to have highly normalized operational data stores (ODSs) and data warehouses (DWs).
The primary disadvantage of normalization is slower reporting performance. You will want to have a denormalized schema to support reporting, particularly in data marts.
### The Steps of Data Normalization
Table 1 summarizes the three most common forms of normalization ( First normal form (1NF), Second normal form (2NF), and Third normal form (3NF)) describing how to put entity types into a series of increasing levels of normalization. Higher levels of data normalization are beyond the scope of this article. With respect to terminology, a data schema is considered to be at the level of normalization of its least normalized entity type. For example, if all of your entity types are at second normal form (2NF) or higher then we say that your data schema is at 2NF.
Data Normalization Rules:
Level | Rule
-- | --
First normal form (1NF) | An entity type is in 1NF when it contains no repeating groups of data.
Second normal form (2NF) | An entity type is in 2NF when it is in 1NF and when all of its non-key attributes are fully dependent on its primary key.
Third normal form (3NF) | An entity type is in 3NF when it is in 2NF and when all of its attributes are directly dependent on the primary key.
From a purist point of view you want to normalize your data structures as much as possible, but from a practical point of view you will find that you need to 'back out" of some of your normalizations for performance reasons. This is called "denormalization".
# Resource
- [ ] https://en.wikipedia.org/wiki/Cardinality_(data_modeling)
- [ ] http://www.agiledata.org/essays/dataNormalization.html
- [ ] https://erwin.com/bookshelf/public_html/2020R1/Content/User%20Guides/erwin%20Help/Domains_and_Data_Modeling.html
- [ ] https://www.red-gate.com/simple-talk/sql/performance/lessons-learned-from-six-years-of-agile-database-development
- [ ] https://en.wikipedia.org/wiki/Database_normalization
- [ ] http://www.agiledata.org/essays/dataModeling101.html
- [ ] https://www.import.io/post/what-is-data-normalization-and-why-is-it-important
- [ ] https://en.wikipedia.org/wiki/Database_normalization
- [ ] https://www.kdnuggets.com/2020/04/data-transformation-standardization-normalization.html
- [ ] https://www3.dbmaestro.com/blog/keys-for-implementing-agile-database-methodologies
- [ ] http://agiledata.org
|
1.0
|
Data Normalization - # Data Normalization
It is a process in which data attributes within a data model are organized to increase the cohesion of entity types. In other words, the goal of data normalization is to reduce and even eliminate data redundancy, an important consideration for application developers because it is incredibly difficult to stores objects in a relational database that maintains the same information in several places.
### Why Data Normalization?
There are two primary advantages of having a highly normalized data schema:
- Increased consistency. Information is stored in one place and one place only, reducing the possibility of inconsistent data.
- Easier object-to-data mapping. Highly-normalized data schemas in general are closer conceptually to object-oriented schemas because the object-oriented goals of promoting high cohesion and loose coupling between classes results in similar solutions (at least from a data point of view).
You typically want to have highly normalized operational data stores (ODSs) and data warehouses (DWs).
The primary disadvantage of normalization is slower reporting performance. You will want to have a denormalized schema to support reporting, particularly in data marts.
### The Steps of Data Normalization
Table 1 summarizes the three most common forms of normalization ( First normal form (1NF), Second normal form (2NF), and Third normal form (3NF)) describing how to put entity types into a series of increasing levels of normalization. Higher levels of data normalization are beyond the scope of this article. With respect to terminology, a data schema is considered to be at the level of normalization of its least normalized entity type. For example, if all of your entity types are at second normal form (2NF) or higher then we say that your data schema is at 2NF.
Data Normalization Rules:
Level | Rule
-- | --
First normal form (1NF) | An entity type is in 1NF when it contains no repeating groups of data.
Second normal form (2NF) | An entity type is in 2NF when it is in 1NF and when all of its non-key attributes are fully dependent on its primary key.
Third normal form (3NF) | An entity type is in 3NF when it is in 2NF and when all of its attributes are directly dependent on the primary key.
From a purist point of view you want to normalize your data structures as much as possible, but from a practical point of view you will find that you need to 'back out" of some of your normalizations for performance reasons. This is called "denormalization".
# Resource
- [ ] https://en.wikipedia.org/wiki/Cardinality_(data_modeling)
- [ ] http://www.agiledata.org/essays/dataNormalization.html
- [ ] https://erwin.com/bookshelf/public_html/2020R1/Content/User%20Guides/erwin%20Help/Domains_and_Data_Modeling.html
- [ ] https://www.red-gate.com/simple-talk/sql/performance/lessons-learned-from-six-years-of-agile-database-development
- [ ] https://en.wikipedia.org/wiki/Database_normalization
- [ ] http://www.agiledata.org/essays/dataModeling101.html
- [ ] https://www.import.io/post/what-is-data-normalization-and-why-is-it-important
- [ ] https://en.wikipedia.org/wiki/Database_normalization
- [ ] https://www.kdnuggets.com/2020/04/data-transformation-standardization-normalization.html
- [ ] https://www3.dbmaestro.com/blog/keys-for-implementing-agile-database-methodologies
- [ ] http://agiledata.org
|
process
|
data normalization data normalization it is a process in which data attributes within a data model are organized to increase the cohesion of entity types in other words the goal of data normalization is to reduce and even eliminate data redundancy an important consideration for application developers because it is incredibly difficult to stores objects in a relational database that maintains the same information in several places why data normalization there are two primary advantages of having a highly normalized data schema increased consistency information is stored in one place and one place only reducing the possibility of inconsistent data easier object to data mapping highly normalized data schemas in general are closer conceptually to object oriented schemas because the object oriented goals of promoting high cohesion and loose coupling between classes results in similar solutions at least from a data point of view you typically want to have highly normalized operational data stores odss and data warehouses dws the primary disadvantage of normalization is slower reporting performance you will want to have a denormalized schema to support reporting particularly in data marts the steps of data normalization table summarizes the three most common forms of normalization first normal form second normal form and third normal form describing how to put entity types into a series of increasing levels of normalization higher levels of data normalization are beyond the scope of this article with respect to terminology a data schema is considered to be at the level of normalization of its least normalized entity type for example if all of your entity types are at second normal form or higher then we say that your data schema is at data normalization rules level rule first normal form an entity type is in when it contains no repeating groups of data second normal form an entity type is in when it is in and when all of its non key attributes are fully dependent on its primary key third normal form an entity type is in when it is in and when all of its attributes are directly dependent on the primary key from a purist point of view you want to normalize your data structures as much as possible but from a practical point of view you will find that you need to back out of some of your normalizations for performance reasons this is called denormalization resource
| 1
|
956
| 3,419,117,472
|
IssuesEvent
|
2015-12-08 07:52:00
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Днепр обл. ЦНАП Надання відомостей з Державного земельного кадастру у формі витягу з Державного земельного кадастру про землі в межах території адміністративно-територіальних одиниць
|
In process of testing
|
Вольногорск, Марганец, Царичанский р-н
|
1.0
|
Днепр обл. ЦНАП Надання відомостей з Державного земельного кадастру у формі витягу з Державного земельного кадастру про землі в межах території адміністративно-територіальних одиниць - Вольногорск, Марганец, Царичанский р-н
|
process
|
днепр обл цнап надання відомостей з державного земельного кадастру у формі витягу з державного земельного кадастру про землі в межах території адміністративно територіальних одиниць вольногорск марганец царичанский р н
| 1
|
243,738
| 26,287,578,476
|
IssuesEvent
|
2023-01-08 01:40:25
|
temporalio/subscription-workflow-project-template-go
|
https://api.github.com/repos/temporalio/subscription-workflow-project-template-go
|
closed
|
go.temporal.io/sdk-v1.6.0: 1 vulnerabilities (highest severity is: 7.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>go.temporal.io/sdk-v1.6.0</b></p></summary>
<p></p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (go.temporal.io/sdk-v1.6.0 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-38561](https://www.mend.io/vulnerability-database/CVE-2021-38561) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/text-v0.3.5 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-38561</summary>
### Vulnerable Library - <b>github.com/golang/text-v0.3.5</b></p>
<p>[mirror] Go text processing support</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/golang/text/@v/v0.3.5.zip">https://proxy.golang.org/github.com/golang/text/@v/v0.3.5.zip</a></p>
<p>
Dependency Hierarchy:
- go.temporal.io/sdk-v1.6.0 (Root Library)
- github.com/grpc/grpc-go-v1.36.0
- github.com/golang/net-v0.1.0
- :x: **github.com/golang/text-v0.3.5** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Due to improper index calculation, an incorrectly formatted language tag can cause Parse
to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs,
this may be used as a vector for a denial of service attack.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-38561>CVE-2021-38561</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: v0.3.7</p>
</p>
<p></p>
</details>
|
True
|
go.temporal.io/sdk-v1.6.0: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>go.temporal.io/sdk-v1.6.0</b></p></summary>
<p></p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (go.temporal.io/sdk-v1.6.0 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-38561](https://www.mend.io/vulnerability-database/CVE-2021-38561) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/text-v0.3.5 | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-38561</summary>
### Vulnerable Library - <b>github.com/golang/text-v0.3.5</b></p>
<p>[mirror] Go text processing support</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/golang/text/@v/v0.3.5.zip">https://proxy.golang.org/github.com/golang/text/@v/v0.3.5.zip</a></p>
<p>
Dependency Hierarchy:
- go.temporal.io/sdk-v1.6.0 (Root Library)
- github.com/grpc/grpc-go-v1.36.0
- github.com/golang/net-v0.1.0
- :x: **github.com/golang/text-v0.3.5** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Due to improper index calculation, an incorrectly formatted language tag can cause Parse
to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs,
this may be used as a vector for a denial of service attack.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-38561>CVE-2021-38561</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: v0.3.7</p>
</p>
<p></p>
</details>
|
non_process
|
go temporal io sdk vulnerabilities highest severity is autoclosed vulnerable library go temporal io sdk vulnerabilities cve severity cvss dependency type fixed in go temporal io sdk version remediation available high github com golang text transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library github com golang text go text processing support library home page a href dependency hierarchy go temporal io sdk root library github com grpc grpc go github com golang net x github com golang text vulnerable library found in base branch main vulnerability details due to improper index calculation an incorrectly formatted language tag can cause parse to panic due to an out of bounds read if parse is used to process untrusted user inputs this may be used as a vector for a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
74,556
| 15,355,803,945
|
IssuesEvent
|
2021-03-01 11:34:53
|
wrbejar/JavaVulnerableC
|
https://api.github.com/repos/wrbejar/JavaVulnerableC
|
opened
|
CVE-2015-4852 (High) detected in commons-collections-3.2.1.jar
|
security vulnerability
|
## CVE-2015-4852 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: JavaVulnerableC/bin/pom.xml</p>
<p>Path to vulnerable library: JavaVulnerableC/bin/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/bin/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- hibernate-core-4.0.1.Final.jar (Root Library)
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableC/commit/53684c7b4feab7655c67d23cd7f4fb170ffe0b6e">53684c7b4feab7655c67d23cd7f4fb170ffe0b6e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The WLS Security component in Oracle WebLogic Server 10.3.6.0, 12.1.2.0, 12.1.3.0, and 12.2.1.0 allows remote attackers to execute arbitrary commands via a crafted serialized Java object in T3 protocol traffic to TCP port 7001, related to oracle_common/modules/com.bea.core.apache.commons.collections.jar. NOTE: the scope of this CVE is limited to the WebLogic Server product.
<p>Publish Date: 2015-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-4852>CVE-2015-4852</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2015/11/17/19">https://www.openwall.com/lists/oss-security/2015/11/17/19</a></p>
<p>Release Date: 2015-11-18</p>
<p>Fix Resolution: commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","packageFilePaths":["/bin/pom.xml","/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml","/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.hibernate:hibernate-core:4.0.1.Final;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections:3.2.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2015-4852","vulnerabilityDetails":"The WLS Security component in Oracle WebLogic Server 10.3.6.0, 12.1.2.0, 12.1.3.0, and 12.2.1.0 allows remote attackers to execute arbitrary commands via a crafted serialized Java object in T3 protocol traffic to TCP port 7001, related to oracle_common/modules/com.bea.core.apache.commons.collections.jar. NOTE: the scope of this CVE is limited to the WebLogic Server product.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-4852","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2015-4852 (High) detected in commons-collections-3.2.1.jar - ## CVE-2015-4852 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: JavaVulnerableC/bin/pom.xml</p>
<p>Path to vulnerable library: JavaVulnerableC/bin/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar,/home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar,JavaVulnerableC/bin/target/JavaVulnerableLab/WEB-INF/lib/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- hibernate-core-4.0.1.Final.jar (Root Library)
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerableC/commit/53684c7b4feab7655c67d23cd7f4fb170ffe0b6e">53684c7b4feab7655c67d23cd7f4fb170ffe0b6e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The WLS Security component in Oracle WebLogic Server 10.3.6.0, 12.1.2.0, 12.1.3.0, and 12.2.1.0 allows remote attackers to execute arbitrary commands via a crafted serialized Java object in T3 protocol traffic to TCP port 7001, related to oracle_common/modules/com.bea.core.apache.commons.collections.jar. NOTE: the scope of this CVE is limited to the WebLogic Server product.
<p>Publish Date: 2015-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-4852>CVE-2015-4852</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2015/11/17/19">https://www.openwall.com/lists/oss-security/2015/11/17/19</a></p>
<p>Release Date: 2015-11-18</p>
<p>Fix Resolution: commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","packageFilePaths":["/bin/pom.xml","/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml","/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.hibernate:hibernate-core:4.0.1.Final;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections:3.2.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2015-4852","vulnerabilityDetails":"The WLS Security component in Oracle WebLogic Server 10.3.6.0, 12.1.2.0, 12.1.3.0, and 12.2.1.0 allows remote attackers to execute arbitrary commands via a crafted serialized Java object in T3 protocol traffic to TCP port 7001, related to oracle_common/modules/com.bea.core.apache.commons.collections.jar. NOTE: the scope of this CVE is limited to the WebLogic Server product.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-4852","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to dependency file javavulnerablec bin pom xml path to vulnerable library javavulnerablec bin target javavulnerablelab web inf lib commons collections jar javavulnerablec target javavulnerablelab web inf lib commons collections jar javavulnerablec target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib commons collections jar home wss scanner repository commons collections commons collections commons collections jar javavulnerablec target javavulnerablelab web inf lib commons collections jar home wss scanner repository commons collections commons collections commons collections jar javavulnerablec bin target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib commons collections jar home wss scanner repository commons collections commons collections commons collections jar javavulnerablec bin target javavulnerablelab web inf lib commons collections jar dependency hierarchy hibernate core final jar root library x commons collections jar vulnerable library found in head commit a href found in base branch master vulnerability details the wls security component in oracle weblogic server and allows remote attackers to execute arbitrary commands via a crafted serialized java object in protocol traffic to tcp port related to oracle common modules com bea core apache commons collections jar note the scope of this cve is limited to the weblogic server product publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution commons collections commons collections isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org hibernate hibernate core final commons collections commons collections isminimumfixversionavailable true minimumfixversion commons collections commons collections basebranches vulnerabilityidentifier cve vulnerabilitydetails the wls security component in oracle weblogic server and allows remote attackers to execute arbitrary commands via a crafted serialized java object in protocol traffic to tcp port related to oracle common modules com bea core apache commons collections jar note the scope of this cve is limited to the weblogic server product vulnerabilityurl
| 0
|
13,110
| 15,498,172,589
|
IssuesEvent
|
2021-03-11 06:01:28
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Describe using Yarn on CircleCI v2
|
content: new process: ci
|
describe caching for yarn users - very simple. A good example is circle.yml in this pull request https://github.com/johnlindquist/react-streams/pull/10/files
also, add a note that Circle v1 will be discontinued August 31st 2018.
|
1.0
|
Describe using Yarn on CircleCI v2 - describe caching for yarn users - very simple. A good example is circle.yml in this pull request https://github.com/johnlindquist/react-streams/pull/10/files
also, add a note that Circle v1 will be discontinued August 31st 2018.
|
process
|
describe using yarn on circleci describe caching for yarn users very simple a good example is circle yml in this pull request also add a note that circle will be discontinued august
| 1
|
12,809
| 15,187,024,364
|
IssuesEvent
|
2021-02-15 13:14:19
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
$transaction doesn't rollback in case any transaction fails
|
bug/1-repro-available kind/bug process/candidate team/client
|
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
I am trying to use Prisma transactions as specified [here](https://www.prisma.io/docs/guides/prisma-guides/prisma-client-transactions-guide/#can-i-perform-multiple-nested-writes---for-example-create-two-new-teams-and-assign-users). My transaction is not rolling back in case of an error.
**Example**
```ts
const txn1 = this.prisma.table1.create({
data: {
name: `Test 1`
},
});
const txn2 = this.prisma.table1.create({
data: {
name: undefined //`Test 2`,
},
});
const transactionResults = await this.prisma.$transaction([txn1, txn2]);
```
`name` is a mandatory column so when I send undefined, `txn2` fails but `txn1` still creates a record in the table even though it should rollback as `txn2` failed.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
Sample code shared above. Try to execute it with a test table.
## Expected behavior
Transaction 1 insert should rollback as transaction 2 failed.
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: MacOS Catalina 10.15.1, Running inside docker `node:14.15.4-alpine`<!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: PostgreSQL <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: v14.15.4 <!--[Run `node -v` to see your Node.js version]-->
- Prisma version:
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
prisma : 2.16.0
@prisma/client : 2.16.0
Current platform : linux-musl
Query Engine : query-engine 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/query-engine-linux-musl)
Migration Engine : migration-engine-cli 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/migration-engine-linux-musl)
Introspection Engine : introspection-core 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/introspection-engine-linux-musl)
Format Binary : prisma-fmt 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/prisma-fmt-linux-musl)
Studio : 0.346.0
Preview Features : nativeTypes, createMany
```
|
1.0
|
$transaction doesn't rollback in case any transaction fails - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
I am trying to use Prisma transactions as specified [here](https://www.prisma.io/docs/guides/prisma-guides/prisma-client-transactions-guide/#can-i-perform-multiple-nested-writes---for-example-create-two-new-teams-and-assign-users). My transaction is not rolling back in case of an error.
**Example**
```ts
const txn1 = this.prisma.table1.create({
data: {
name: `Test 1`
},
});
const txn2 = this.prisma.table1.create({
data: {
name: undefined //`Test 2`,
},
});
const transactionResults = await this.prisma.$transaction([txn1, txn2]);
```
`name` is a mandatory column so when I send undefined, `txn2` fails but `txn1` still creates a record in the table even though it should rollback as `txn2` failed.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
Sample code shared above. Try to execute it with a test table.
## Expected behavior
Transaction 1 insert should rollback as transaction 2 failed.
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: MacOS Catalina 10.15.1, Running inside docker `node:14.15.4-alpine`<!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: PostgreSQL <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Node.js version: v14.15.4 <!--[Run `node -v` to see your Node.js version]-->
- Prisma version:
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
prisma : 2.16.0
@prisma/client : 2.16.0
Current platform : linux-musl
Query Engine : query-engine 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/query-engine-linux-musl)
Migration Engine : migration-engine-cli 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/migration-engine-linux-musl)
Introspection Engine : introspection-core 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/introspection-engine-linux-musl)
Format Binary : prisma-fmt 854c8ba7f0dce66f115af36af24e66989a8c02a1 (at node_modules/prisma/node_modules/@prisma/engines/prisma-fmt-linux-musl)
Studio : 0.346.0
Preview Features : nativeTypes, createMany
```
|
process
|
transaction doesn t rollback in case any transaction fails thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description i am trying to use prisma transactions as specified my transaction is not rolling back in case of an error example ts const this prisma create data name test const this prisma create data name undefined test const transactionresults await this prisma transaction name is a mandatory column so when i send undefined fails but still creates a record in the table even though it should rollback as failed how to reproduce steps to reproduce the behavior go to change run see error sample code shared above try to execute it with a test table expected behavior transaction insert should rollback as transaction failed prisma information your prisma schema prisma client queries do not include your database credentials when sharing your prisma schema environment setup os macos catalina running inside docker node alpine database postgresql node js version prisma version prisma prisma client current platform linux musl query engine query engine at node modules prisma node modules prisma engines query engine linux musl migration engine migration engine cli at node modules prisma node modules prisma engines migration engine linux musl introspection engine introspection core at node modules prisma node modules prisma engines introspection engine linux musl format binary prisma fmt at node modules prisma node modules prisma engines prisma fmt linux musl studio preview features nativetypes createmany
| 1
|
10,708
| 3,135,051,047
|
IssuesEvent
|
2015-09-10 13:39:05
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
closed
|
Add option `visibleRows` for Autocomplete cells.
|
Cell type: autocomplete / dropdown / handsontable Feature Released Tested
|
In current code scrollbar of dropdown list of choices for autocomplete cell will be visible if number of choices will be greater than 10, and it's hardcoded
https://github.com/handsontable/handsontable/blob/master/src/editors/autocompleteEditor.js#L295
Add option `visibleRows` for Autocomplete cells to control number of choices after exceeding it scrollbar will be visible.
|
1.0
|
Add option `visibleRows` for Autocomplete cells. - In current code scrollbar of dropdown list of choices for autocomplete cell will be visible if number of choices will be greater than 10, and it's hardcoded
https://github.com/handsontable/handsontable/blob/master/src/editors/autocompleteEditor.js#L295
Add option `visibleRows` for Autocomplete cells to control number of choices after exceeding it scrollbar will be visible.
|
non_process
|
add option visiblerows for autocomplete cells in current code scrollbar of dropdown list of choices for autocomplete cell will be visible if number of choices will be greater than and it s hardcoded add option visiblerows for autocomplete cells to control number of choices after exceeding it scrollbar will be visible
| 0
|
9,035
| 12,130,107,896
|
IssuesEvent
|
2020-04-23 00:30:39
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from appengine/standard/analytics/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from appengine/standard/analytics/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from appengine/standard/analytics/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/analytics/requirements-test.txt
|
process
|
remove gcp devrel py tools from appengine standard analytics requirements test txt remove gcp devrel py tools from appengine standard analytics requirements test txt
| 1
|
12,276
| 14,789,762,352
|
IssuesEvent
|
2021-01-12 11:01:33
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Support BAZEL_USE_CPP_ONLY_TOOLCHAIN to build bazel itself
|
P4 team-Rules-CPP type: support / not a bug (process)
|
> ATTENTION! Please read and follow:
> - if this is a _question_ about how to build / test / query / deploy using Bazel, or a _discussion starter_, send it to bazel-discuss@googlegroups.com
> - if this is a _bug_ or _feature request_, fill the form below as best as you can.
### Description of the problem / feature request:
Building bazel for nixpkgs requires patching various paths referring to `/usr/bin/xcrun`, while there is also `BAZEL_USE_CPP_ONLY_TOOLCHAIN`, which should be enough to build bazel itself.
### Feature requests: what underlying problem are you trying to solve with this feature?
Building bazel for MacOS without /usr/bin/xcrun, but another compiler (provided by nix).
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Building bazel inside nixpkgs without the `sed` patching at https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/tools/build-managers/bazel/default.nix#L308.
### What operating system are you running Bazel on?
MacOS X
### What's the output of `bazel info release`?
> Replace this line with your answer.
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
> Replace this line with your answer.
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
> Replace this line with your answer.
### Have you found anything relevant by searching the web?
https://github.com/NixOS/nixpkgs/pull/69252#issuecomment-541040021
### Any other information, logs, or outputs that you want to share?
```
Building Bazel from scratch....../usr/bin/xcrun --sdk macosx clang -fobjc-arc -framework CoreServices -framework Foundation -o /private/tmp/nix-build-bazel-1.0.0.drv-0/bazel_s6Mj2C4u/archive/_embedded_binaries/xcode-locator tools/osx/xcode_locator.m
xcode-select: error: no developer tools were found at '/Applications/Xcode.app', and no install could be requested (perhaps no UI is present), please install manually from 'developer.apple.com'.
./bazel_src/scripts/generate_bash_completion.sh: line 71: ./bazel_src/output/bazel: No such file or directory
builder for '/nix/store/lfpcg31ckz5xf8bdygimz41ngc2dfh0h-bazel-1.0.0.drv' failed with exit code 127
```
|
1.0
|
Support BAZEL_USE_CPP_ONLY_TOOLCHAIN to build bazel itself - > ATTENTION! Please read and follow:
> - if this is a _question_ about how to build / test / query / deploy using Bazel, or a _discussion starter_, send it to bazel-discuss@googlegroups.com
> - if this is a _bug_ or _feature request_, fill the form below as best as you can.
### Description of the problem / feature request:
Building bazel for nixpkgs requires patching various paths referring to `/usr/bin/xcrun`, while there is also `BAZEL_USE_CPP_ONLY_TOOLCHAIN`, which should be enough to build bazel itself.
### Feature requests: what underlying problem are you trying to solve with this feature?
Building bazel for MacOS without /usr/bin/xcrun, but another compiler (provided by nix).
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Building bazel inside nixpkgs without the `sed` patching at https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/tools/build-managers/bazel/default.nix#L308.
### What operating system are you running Bazel on?
MacOS X
### What's the output of `bazel info release`?
> Replace this line with your answer.
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
> Replace this line with your answer.
### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?
> Replace this line with your answer.
### Have you found anything relevant by searching the web?
https://github.com/NixOS/nixpkgs/pull/69252#issuecomment-541040021
### Any other information, logs, or outputs that you want to share?
```
Building Bazel from scratch....../usr/bin/xcrun --sdk macosx clang -fobjc-arc -framework CoreServices -framework Foundation -o /private/tmp/nix-build-bazel-1.0.0.drv-0/bazel_s6Mj2C4u/archive/_embedded_binaries/xcode-locator tools/osx/xcode_locator.m
xcode-select: error: no developer tools were found at '/Applications/Xcode.app', and no install could be requested (perhaps no UI is present), please install manually from 'developer.apple.com'.
./bazel_src/scripts/generate_bash_completion.sh: line 71: ./bazel_src/output/bazel: No such file or directory
builder for '/nix/store/lfpcg31ckz5xf8bdygimz41ngc2dfh0h-bazel-1.0.0.drv' failed with exit code 127
```
|
process
|
support bazel use cpp only toolchain to build bazel itself attention please read and follow if this is a question about how to build test query deploy using bazel or a discussion starter send it to bazel discuss googlegroups com if this is a bug or feature request fill the form below as best as you can description of the problem feature request building bazel for nixpkgs requires patching various paths referring to usr bin xcrun while there is also bazel use cpp only toolchain which should be enough to build bazel itself feature requests what underlying problem are you trying to solve with this feature building bazel for macos without usr bin xcrun but another compiler provided by nix bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible building bazel inside nixpkgs without the sed patching at what operating system are you running bazel on macos x what s the output of bazel info release replace this line with your answer if bazel info release returns development version or non git tell us how you built bazel replace this line with your answer what s the output of git remote get url origin git rev parse master git rev parse head replace this line with your answer have you found anything relevant by searching the web any other information logs or outputs that you want to share building bazel from scratch usr bin xcrun sdk macosx clang fobjc arc framework coreservices framework foundation o private tmp nix build bazel drv bazel archive embedded binaries xcode locator tools osx xcode locator m xcode select error no developer tools were found at applications xcode app and no install could be requested perhaps no ui is present please install manually from developer apple com bazel src scripts generate bash completion sh line bazel src output bazel no such file or directory builder for nix store bazel drv failed with exit code
| 1
|
6,244
| 9,201,027,684
|
IssuesEvent
|
2019-03-07 18:33:09
|
googleapis/nodejs-pubsub
|
https://api.github.com/repos/googleapis/nodejs-pubsub
|
closed
|
Prettier - formatting
|
type: feature request type: process
|
I've noticed that this library uses `gts` for formatting, but Prettier is listed as a `devDependency`. I personally prefer Prettier, so I'd be happy to contribute and introduce full support for it. Anyway, we have two options:
1) Remove Prettier and related packages from `devDependencies`.
2) Start formatting the source code using Prettier.
Let me know WDYT and I'll make a PR.
|
1.0
|
Prettier - formatting - I've noticed that this library uses `gts` for formatting, but Prettier is listed as a `devDependency`. I personally prefer Prettier, so I'd be happy to contribute and introduce full support for it. Anyway, we have two options:
1) Remove Prettier and related packages from `devDependencies`.
2) Start formatting the source code using Prettier.
Let me know WDYT and I'll make a PR.
|
process
|
prettier formatting i ve noticed that this library uses gts for formatting but prettier is listed as a devdependency i personally prefer prettier so i d be happy to contribute and introduce full support for it anyway we have two options remove prettier and related packages from devdependencies start formatting the source code using prettier let me know wdyt and i ll make a pr
| 1
|
233,381
| 25,765,388,549
|
IssuesEvent
|
2022-12-09 01:07:58
|
Jsn2win/mergify_27ENG-
|
https://api.github.com/repos/Jsn2win/mergify_27ENG-
|
opened
|
CVE-2022-23491 (Medium) detected in certifi-2020.6.20-py2.py3-none-any.whl, certifi-2021.5.30-py2.py3-none-any.whl
|
security vulnerability
|
## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>certifi-2020.6.20-py2.py3-none-any.whl</b>, <b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>
<details><summary><b>certifi-2020.6.20-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/mergify_27ENG-,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **certifi-2020.6.20-py2.py3-none-any.whl** (Vulnerable Library)
</details>
<details><summary><b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/mergify_27ENG-</p>
<p>Path to vulnerable library: /mergify_27ENG-</p>
<p>
Dependency Hierarchy:
- :x: **certifi-2021.5.30-py2.py3-none-any.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-23491 (Medium) detected in certifi-2020.6.20-py2.py3-none-any.whl, certifi-2021.5.30-py2.py3-none-any.whl - ## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>certifi-2020.6.20-py2.py3-none-any.whl</b>, <b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>
<details><summary><b>certifi-2020.6.20-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/mergify_27ENG-,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **certifi-2020.6.20-py2.py3-none-any.whl** (Vulnerable Library)
</details>
<details><summary><b>certifi-2021.5.30-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/05/1b/0a0dece0e8aa492a6ec9e4ad2fe366b511558cdc73fd3abc82ba7348e875/certifi-2021.5.30-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/mergify_27ENG-</p>
<p>Path to vulnerable library: /mergify_27ENG-</p>
<p>
Dependency Hierarchy:
- :x: **certifi-2021.5.30-py2.py3-none-any.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in certifi none any whl certifi none any whl cve medium severity vulnerability vulnerable libraries certifi none any whl certifi none any whl certifi none any whl python package for providing mozilla s ca bundle library home page a href path to dependency file requirements txt path to vulnerable library requirements txt mergify requirements txt dependency hierarchy x certifi none any whl vulnerable library certifi none any whl python package for providing mozilla s ca bundle library home page a href path to dependency file tmp ws scm mergify path to vulnerable library mergify dependency hierarchy x certifi none any whl vulnerable library found in base branch main vulnerability details certifi is a curated collection of root certificates for validating the trustworthiness of ssl certificates while verifying the identity of tls hosts certifi removes root certificates from trustcor from the root store these are in the process of being removed from mozilla s trust store trustcor s root certificates are being removed pursuant to an investigation prompted by media reporting that trustcor s ownership also operated a business that produced spyware conclusions of mozilla s investigation can be found in the linked google group discussion publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope changed impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution certifi step up your open source security game with mend
| 0
|
12,128
| 14,740,867,340
|
IssuesEvent
|
2021-01-07 09:44:57
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SAB Error Emails: FW: Cron <root@answernet> /opt/sabilling/rf
|
anc-process anp-1 ant-support
|
In GitLab by @kdjstudios on Dec 10, 2018, 09:37
**Submitted by:** "Tim Traylor" <tim.traylor@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6245147
**Server:** ALL
**Client/Site:** NA
**Account:** NA
**Issue:**
Do you know what this is and if it is preventing something important from running?
Thx,
Tim
-----Original Message-----
From: Cron Daemon [mailto:root@answernet.sabilling.com]
Sent: Sunday, December 09, 2018 3:00 AM
To: apperrors@sahosted.com
Subject: Cron <root@answernet> /opt/sabilling/rf
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
|
1.0
|
SAB Error Emails: FW: Cron <root@answernet> /opt/sabilling/rf - In GitLab by @kdjstudios on Dec 10, 2018, 09:37
**Submitted by:** "Tim Traylor" <tim.traylor@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6245147
**Server:** ALL
**Client/Site:** NA
**Account:** NA
**Issue:**
Do you know what this is and if it is preventing something important from running?
Thx,
Tim
-----Original Message-----
From: Cron Daemon [mailto:root@answernet.sabilling.com]
Sent: Sunday, December 09, 2018 3:00 AM
To: apperrors@sahosted.com
Subject: Cron <root@answernet> /opt/sabilling/rf
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
|
process
|
sab error emails fw cron opt sabilling rf in gitlab by kdjstudios on dec submitted by tim traylor helpdesk server all client site na account na issue do you know what this is and if it is preventing something important from running thx tim original message from cron daemon sent sunday december am to apperrors sahosted com subject cron opt sabilling rf sudo sorry you must have a tty to run sudo sudo sorry you must have a tty to run sudo sudo sorry you must have a tty to run sudo
| 1
|
14,548
| 17,668,753,436
|
IssuesEvent
|
2021-08-23 00:33:00
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Santa Cruz from 28 Days
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Santa Cruz
Type (film/tv show): tv show / soap opera
Film or show in which it appears: 28 Days (2000)
Is the parent film/show streaming anywhere? hulu, amazon
About when in the parent film/show does it appear? Mentioned throughout - I believe it’s early on when she gets to rehab and meets the baseball player
Actual footage of the film/show can be seen (yes/no)? yes.
|
1.0
|
Add Santa Cruz from 28 Days - Please add as much of the following info as you can:
Title: Santa Cruz
Type (film/tv show): tv show / soap opera
Film or show in which it appears: 28 Days (2000)
Is the parent film/show streaming anywhere? hulu, amazon
About when in the parent film/show does it appear? Mentioned throughout - I believe it’s early on when she gets to rehab and meets the baseball player
Actual footage of the film/show can be seen (yes/no)? yes.
|
process
|
add santa cruz from days please add as much of the following info as you can title santa cruz type film tv show tv show soap opera film or show in which it appears days is the parent film show streaming anywhere hulu amazon about when in the parent film show does it appear mentioned throughout i believe it’s early on when she gets to rehab and meets the baseball player actual footage of the film show can be seen yes no yes
| 1
|
9,241
| 12,270,099,465
|
IssuesEvent
|
2020-05-07 15:00:47
|
arunkumar9t2/scabbard
|
https://api.github.com/repos/arunkumar9t2/scabbard
|
closed
|
Feature Request - Standalone plugin for showing how injections and providers are connected
|
enhancement module:gradle-plugin module:processor needs investigation
|
This is sort of a selfish request, but I do like scabbard, but I'm looking for something "simpler" at the same time. I found out that https://github.com/square/dagger-intellij-plugin used to do the trick, but it's for Dagger 1.
Is there anyway that there could be a standalone plugin from scabbard that basically ties these points together so you can easily see who is using what? This really stems from me wanting something like https://github.com/square/otto-intellij-plugin for Dagger though. In the otto plugin you can use it to navigate between events posted by Otto.
Curious to hear your thoughts.
|
1.0
|
Feature Request - Standalone plugin for showing how injections and providers are connected - This is sort of a selfish request, but I do like scabbard, but I'm looking for something "simpler" at the same time. I found out that https://github.com/square/dagger-intellij-plugin used to do the trick, but it's for Dagger 1.
Is there anyway that there could be a standalone plugin from scabbard that basically ties these points together so you can easily see who is using what? This really stems from me wanting something like https://github.com/square/otto-intellij-plugin for Dagger though. In the otto plugin you can use it to navigate between events posted by Otto.
Curious to hear your thoughts.
|
process
|
feature request standalone plugin for showing how injections and providers are connected this is sort of a selfish request but i do like scabbard but i m looking for something simpler at the same time i found out that used to do the trick but it s for dagger is there anyway that there could be a standalone plugin from scabbard that basically ties these points together so you can easily see who is using what this really stems from me wanting something like for dagger though in the otto plugin you can use it to navigate between events posted by otto curious to hear your thoughts
| 1
|
746
| 3,218,831,363
|
IssuesEvent
|
2015-10-08 05:25:24
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
На бэке (централа) в сервисе /getStatisticServiceCounts сливать статистику отдельных услуг по МРЕО для Киеав - в одну
|
active bug hi priority In process of testing test _wf-central
|
Всю статистику по:
https://igov.org.ua/service/726/statistics
https://igov.org.ua/service/727/statistics
https://igov.org.ua/service/728/statistics
https://igov.org.ua/service/729/statistics
https://igov.org.ua/service/730/statistics
https://igov.org.ua/service/731/statistics
https://igov.org.ua/service/732/statistics
https://igov.org.ua/service/733/statistics
сливать в:
https://igov.org.ua/service/159/statistics
т.е. рекурсивно внутри сервиса при nID_Service=159
выгребать данные по верхнему списку, и дополнять ими существующую мапу.
|
1.0
|
На бэке (централа) в сервисе /getStatisticServiceCounts сливать статистику отдельных услуг по МРЕО для Киеав - в одну -
Всю статистику по:
https://igov.org.ua/service/726/statistics
https://igov.org.ua/service/727/statistics
https://igov.org.ua/service/728/statistics
https://igov.org.ua/service/729/statistics
https://igov.org.ua/service/730/statistics
https://igov.org.ua/service/731/statistics
https://igov.org.ua/service/732/statistics
https://igov.org.ua/service/733/statistics
сливать в:
https://igov.org.ua/service/159/statistics
т.е. рекурсивно внутри сервиса при nID_Service=159
выгребать данные по верхнему списку, и дополнять ими существующую мапу.
|
process
|
на бэке централа в сервисе getstatisticservicecounts сливать статистику отдельных услуг по мрео для киеав в одну всю статистику по сливать в т е рекурсивно внутри сервиса при nid service выгребать данные по верхнему списку и дополнять ими существующую мапу
| 1
|
11,581
| 14,444,311,707
|
IssuesEvent
|
2020-12-07 21:02:27
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Remove -Xverify:none from Java invocations
|
P2 area-java-toolchains team-Rules-Java type: process
|
### Description of the problem / feature request:
Bazel currently add `-Xverify:none` to Java commands it executes.
### Feature requests: what underlying problem are you trying to solve with this feature?
This is a dangerous option because it disables the byte code verifier. Having the option in place without justification/documentation reduces trust of our developers/users.
The option has been deprecated in Java 13 and will be removed in a future release.
Please remove the option.
Alternatively please provide a flag to disable it and add documentation why it's needed and what the risks are of removing/disabling `-Xverify:none` with regards to Bazel.
|
1.0
|
Remove -Xverify:none from Java invocations - ### Description of the problem / feature request:
Bazel currently add `-Xverify:none` to Java commands it executes.
### Feature requests: what underlying problem are you trying to solve with this feature?
This is a dangerous option because it disables the byte code verifier. Having the option in place without justification/documentation reduces trust of our developers/users.
The option has been deprecated in Java 13 and will be removed in a future release.
Please remove the option.
Alternatively please provide a flag to disable it and add documentation why it's needed and what the risks are of removing/disabling `-Xverify:none` with regards to Bazel.
|
process
|
remove xverify none from java invocations description of the problem feature request bazel currently add xverify none to java commands it executes feature requests what underlying problem are you trying to solve with this feature this is a dangerous option because it disables the byte code verifier having the option in place without justification documentation reduces trust of our developers users the option has been deprecated in java and will be removed in a future release please remove the option alternatively please provide a flag to disable it and add documentation why it s needed and what the risks are of removing disabling xverify none with regards to bazel
| 1
|
33,440
| 4,487,745,548
|
IssuesEvent
|
2016-08-30 02:54:44
|
Jeremy-Barnes/Critters
|
https://api.github.com/repos/Jeremy-Barnes/Critters
|
opened
|
Explore: World map
|
Design feature
|
Create a screen that show the user a view of the world as a whole, allowing them to choose a destination of a continent/land (theme worlds).
These lands should have distinct names, but no other worded details about a specific location will be given on this screen.
|
1.0
|
Explore: World map - Create a screen that show the user a view of the world as a whole, allowing them to choose a destination of a continent/land (theme worlds).
These lands should have distinct names, but no other worded details about a specific location will be given on this screen.
|
non_process
|
explore world map create a screen that show the user a view of the world as a whole allowing them to choose a destination of a continent land theme worlds these lands should have distinct names but no other worded details about a specific location will be given on this screen
| 0
|
227,516
| 18,066,419,128
|
IssuesEvent
|
2021-09-20 19:45:42
|
finos/waltz
|
https://api.github.com/repos/finos/waltz
|
closed
|
Slopey Graph: needs to be aware of DT hierarchies and scale better
|
fixed (test & close) exploration
|
Gets very messy with lots of datatypes and tens/hundreds of upstream/downstreams.
See also:
#3883
|
1.0
|
Slopey Graph: needs to be aware of DT hierarchies and scale better - Gets very messy with lots of datatypes and tens/hundreds of upstream/downstreams.
See also:
#3883
|
non_process
|
slopey graph needs to be aware of dt hierarchies and scale better gets very messy with lots of datatypes and tens hundreds of upstream downstreams see also
| 0
|
13,699
| 16,456,105,817
|
IssuesEvent
|
2021-05-21 12:48:05
|
googleapis/python-securitycenter
|
https://api.github.com/repos/googleapis/python-securitycenter
|
closed
|
Release as GA
|
api: securitycenter type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: April 1 2021**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: April 1 2021**
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface release on after april server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
49,056
| 6,006,796,801
|
IssuesEvent
|
2017-06-06 00:16:14
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Cancel on Add Host doesn't work
|
area/host area/ui kind/bug status/resolved status/to-test
|
**Rancher versions:** 6/5
**Steps to Reproduce:**
1. Click on Add Host
2. From Add Host, click on Cancel
**Results:** Nothing happens
|
1.0
|
Cancel on Add Host doesn't work - **Rancher versions:** 6/5
**Steps to Reproduce:**
1. Click on Add Host
2. From Add Host, click on Cancel
**Results:** Nothing happens
|
non_process
|
cancel on add host doesn t work rancher versions steps to reproduce click on add host from add host click on cancel results nothing happens
| 0
|
12,716
| 15,090,214,754
|
IssuesEvent
|
2021-02-06 09:55:13
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Suprocessor plugin corrupts messages when binary safe codecs are used
|
bug processors
|
In my PR https://github.com/Jeffail/benthos/pull/595, I accidentally introduced a bug in the handling of buffers representing message payloads.
The bug is due to directly assigning a re-used buffer to the message part in the following two lines
https://github.com/Jeffail/benthos/blob/e4d2485f90767cf7a02a8ab82b894f5a2abb20e7/lib/processor/subprocess.go#L157
https://github.com/Jeffail/benthos/blob/e4d2485f90767cf7a02a8ab82b894f5a2abb20e7/lib/processor/subprocess.go#L171
I will issue a pull request with a fix in a few minutes.
|
1.0
|
Suprocessor plugin corrupts messages when binary safe codecs are used - In my PR https://github.com/Jeffail/benthos/pull/595, I accidentally introduced a bug in the handling of buffers representing message payloads.
The bug is due to directly assigning a re-used buffer to the message part in the following two lines
https://github.com/Jeffail/benthos/blob/e4d2485f90767cf7a02a8ab82b894f5a2abb20e7/lib/processor/subprocess.go#L157
https://github.com/Jeffail/benthos/blob/e4d2485f90767cf7a02a8ab82b894f5a2abb20e7/lib/processor/subprocess.go#L171
I will issue a pull request with a fix in a few minutes.
|
process
|
suprocessor plugin corrupts messages when binary safe codecs are used in my pr i accidentally introduced a bug in the handling of buffers representing message payloads the bug is due to directly assigning a re used buffer to the message part in the following two lines i will issue a pull request with a fix in a few minutes
| 1
|
9,031
| 12,129,695,713
|
IssuesEvent
|
2020-04-22 23:17:41
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Usage of expressions in build number format
|
devops-cicd-process/tech devops/prod
|
Hi, I'd like my build number format to change depending on the `[build.reason](https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#build-variables)`.
For example for GitHub PR triggers I can use:
```
$(System.PullRequest.PullRequestNumber)$(Rev:.r)
```
But if the build reason is anything other than pull request then I get a malformed build number.
Ideally I'd like to use an expression or some way to set a variable depending on the build reason.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml#feedback)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Usage of expressions in build number format - Hi, I'd like my build number format to change depending on the `[build.reason](https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#build-variables)`.
For example for GitHub PR triggers I can use:
```
$(System.PullRequest.PullRequestNumber)$(Rev:.r)
```
But if the build reason is anything other than pull request then I get a malformed build number.
Ideally I'd like to use an expression or some way to set a variable depending on the build reason.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml#feedback)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
usage of expressions in build number format hi i d like my build number format to change depending on the for example for github pr triggers i can use system pullrequest pullrequestnumber rev r but if the build reason is anything other than pull request then i get a malformed build number ideally i d like to use an expression or some way to set a variable depending on the build reason document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
1,959
| 4,777,399,629
|
IssuesEvent
|
2016-10-27 16:11:57
|
paulkornikov/Pragonas
|
https://api.github.com/repos/paulkornikov/Pragonas
|
closed
|
Script de mise à jour des dates de création famille
|
a-enhancement admin processus workload II
|
pour les comptes demo et ibis changer les dates de création famille.
Idem pour le compte admin et principal.
|
1.0
|
Script de mise à jour des dates de création famille - pour les comptes demo et ibis changer les dates de création famille.
Idem pour le compte admin et principal.
|
process
|
script de mise à jour des dates de création famille pour les comptes demo et ibis changer les dates de création famille idem pour le compte admin et principal
| 1
|
37,395
| 4,807,320,376
|
IssuesEvent
|
2016-11-02 21:05:58
|
USDA-FSA/fsa-style
|
https://api.github.com/repos/USDA-FSA/fsa-style
|
opened
|
Badge
|
component P3 source: internal EAS type: design type: feature request type: front-end
|
### Task
- [ ] Design
- [ ] Build HTML/CSS
- [ ] Variations (e.g. small, large, colors, etc)
- [ ] `aria`, if at all
### Inspiration

|
1.0
|
Badge - ### Task
- [ ] Design
- [ ] Build HTML/CSS
- [ ] Variations (e.g. small, large, colors, etc)
- [ ] `aria`, if at all
### Inspiration

|
non_process
|
badge task design build html css variations e g small large colors etc aria if at all inspiration
| 0
|
500,025
| 14,484,579,667
|
IssuesEvent
|
2020-12-10 16:30:59
|
SD2E/opil
|
https://api.github.com/repos/SD2E/opil
|
closed
|
How to create a sampleSet object
|
enhancement highest priority
|
@jakebeal mentioned that OPIL has a field called `sampleSet` that I can make use of for encoding data for measurements performed in an experiment. This `sampleSet` field should be able to link to an SBOL3 `CombinatorialDerivation` object. How do I go about setting this up from the OPIL library? I've tried locating this information from the library's codebase but did not see anything that I can call to encode this information. Could I request to have a function set up in the library to expose this field for a user to use?
|
1.0
|
How to create a sampleSet object - @jakebeal mentioned that OPIL has a field called `sampleSet` that I can make use of for encoding data for measurements performed in an experiment. This `sampleSet` field should be able to link to an SBOL3 `CombinatorialDerivation` object. How do I go about setting this up from the OPIL library? I've tried locating this information from the library's codebase but did not see anything that I can call to encode this information. Could I request to have a function set up in the library to expose this field for a user to use?
|
non_process
|
how to create a sampleset object jakebeal mentioned that opil has a field called sampleset that i can make use of for encoding data for measurements performed in an experiment this sampleset field should be able to link to an combinatorialderivation object how do i go about setting this up from the opil library i ve tried locating this information from the library s codebase but did not see anything that i can call to encode this information could i request to have a function set up in the library to expose this field for a user to use
| 0
|
60,648
| 12,132,009,018
|
IssuesEvent
|
2020-04-23 06:20:44
|
kwk/test-llvm-bz-import-4
|
https://api.github.com/repos/kwk/test-llvm-bz-import-4
|
closed
|
clang aborts on tentative definition with incomplete type
|
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED clang/LLVM Codegen dummy import from bugzilla
|
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=3980.
|
1.0
|
clang aborts on tentative definition with incomplete type - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=3980.
|
non_process
|
clang aborts on tentative definition with incomplete type this issue was imported from bugzilla
| 0
|
99,085
| 8,689,777,660
|
IssuesEvent
|
2018-12-03 19:39:20
|
chapel-lang/chapel
|
https://api.github.com/repos/chapel-lang/chapel
|
closed
|
Spike: Start porting Bale permute matrix and random permutation
|
area: Tests type: Performance
|
We have indexgather and histogram ports as well as a start on toposort. We're missing permute matrix and random permutation, so start working on a port of them.
|
1.0
|
Spike: Start porting Bale permute matrix and random permutation - We have indexgather and histogram ports as well as a start on toposort. We're missing permute matrix and random permutation, so start working on a port of them.
|
non_process
|
spike start porting bale permute matrix and random permutation we have indexgather and histogram ports as well as a start on toposort we re missing permute matrix and random permutation so start working on a port of them
| 0
|
5,247
| 8,039,259,757
|
IssuesEvent
|
2018-07-30 17:49:11
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Logging system test backoff failures during teardown
|
api: logging flaky testing type: process
|
Similar to #5303, but [this failure](https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7228) is in trying to delete the `python` logger during teardown:
```python
tests/system/test_system.py::TestLogging::test_log_handler_async _has_entries. Trying again in 1 seconds...
_has_entries. Trying again in 2 seconds...
_has_entries. Trying again in 4 seconds...
_has_entries. Trying again in 8 seconds...
404 Log python does not exist, Trying again in 1 seconds...
404 Log python does not exist, Trying again in 2 seconds...
404 Log python does not exist, Trying again in 4 seconds...
404 Log python does not exist, Trying again in 8 seconds...
404 Log python does not exist, Trying again in 16 seconds...
404 Log python does not exist, Trying again in 32 seconds...
404 Log python does not exist, Trying again in 64 seconds...
404 Log python does not exist, Trying again in 128 seconds...
404 Log python does not exist, Trying again in 256 seconds...
FAILED
tests/system/test_system.py::TestLogging::test_log_handler_async ERROR
```
Might be due to overlapping test runs?
|
1.0
|
Logging system test backoff failures during teardown - Similar to #5303, but [this failure](https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7228) is in trying to delete the `python` logger during teardown:
```python
tests/system/test_system.py::TestLogging::test_log_handler_async _has_entries. Trying again in 1 seconds...
_has_entries. Trying again in 2 seconds...
_has_entries. Trying again in 4 seconds...
_has_entries. Trying again in 8 seconds...
404 Log python does not exist, Trying again in 1 seconds...
404 Log python does not exist, Trying again in 2 seconds...
404 Log python does not exist, Trying again in 4 seconds...
404 Log python does not exist, Trying again in 8 seconds...
404 Log python does not exist, Trying again in 16 seconds...
404 Log python does not exist, Trying again in 32 seconds...
404 Log python does not exist, Trying again in 64 seconds...
404 Log python does not exist, Trying again in 128 seconds...
404 Log python does not exist, Trying again in 256 seconds...
FAILED
tests/system/test_system.py::TestLogging::test_log_handler_async ERROR
```
Might be due to overlapping test runs?
|
process
|
logging system test backoff failures during teardown similar to but is in trying to delete the python logger during teardown python tests system test system py testlogging test log handler async has entries trying again in seconds has entries trying again in seconds has entries trying again in seconds has entries trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds log python does not exist trying again in seconds failed tests system test system py testlogging test log handler async error might be due to overlapping test runs
| 1
|
19,848
| 11,303,518,259
|
IssuesEvent
|
2020-01-17 20:19:34
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Azure Database for Postgres provider bug
|
bug service/postgresql
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) 0.12.8
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
* `azurerm_postgresql_server`
### Description
When createing the postgres server resource, the name of server must be lower case. It would throw error as below when the name of var.postgres_server_name is upper case. I tried to create this resource on Azure portal directly and upper case is allowed. So I believe this issue can be improved
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_postgresql_server" "rsconnect-postgres-server" {
name = "${var.postgres_server_name}"
location = "australiaeast"
resource_group_name = "${var.rsconnect_rg_name}"
sku {
name = "GP_Gen5_2"
capacity = 2
tier = "GeneralPurpose"
family = "Gen5"
}
storage_profile {
storage_mb = 8192
backup_retention_days = 7
geo_redundant_backup = "Disabled"
}
administrator_login = "${var.postgres_username}"
administrator_login_password = "${var.postgres_admin_password}"
version = "9.6"
ssl_enforcement = "Enabled"
}
resource "azurerm_postgresql_virtual_network_rule" "rsconnect-postgres-server-vnet-rule" {
name = "${var.postgres_vnet_rule_name}"
resource_group_name = "${var.rsconnect_rg_name}"
server_name = "${azurerm_postgresql_server.rsconnect-postgres-server.name}"
subnet_id = "${var.deploy_subnet_id}"
ignore_missing_vnet_service_endpoint = true
}
```
### Debug Output
```
Error: Provider produced inconsistent final plan
When expanding the plan for
azurerm_***ql_virtual_network_rule.rsconnect-***-server-vnet-rule to
include new values learned so far during apply, provider "azurerm" produced an
invalid new value for .server_name: was cty.StringVal("PG-DEV-AUE"), but
now cty.StringVal("pg-dev-aue").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
* #0000
|
1.0
|
Azure Database for Postgres provider bug - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) 0.12.8
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
* `azurerm_postgresql_server`
### Description
When createing the postgres server resource, the name of server must be lower case. It would throw error as below when the name of var.postgres_server_name is upper case. I tried to create this resource on Azure portal directly and upper case is allowed. So I believe this issue can be improved
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_postgresql_server" "rsconnect-postgres-server" {
name = "${var.postgres_server_name}"
location = "australiaeast"
resource_group_name = "${var.rsconnect_rg_name}"
sku {
name = "GP_Gen5_2"
capacity = 2
tier = "GeneralPurpose"
family = "Gen5"
}
storage_profile {
storage_mb = 8192
backup_retention_days = 7
geo_redundant_backup = "Disabled"
}
administrator_login = "${var.postgres_username}"
administrator_login_password = "${var.postgres_admin_password}"
version = "9.6"
ssl_enforcement = "Enabled"
}
resource "azurerm_postgresql_virtual_network_rule" "rsconnect-postgres-server-vnet-rule" {
name = "${var.postgres_vnet_rule_name}"
resource_group_name = "${var.rsconnect_rg_name}"
server_name = "${azurerm_postgresql_server.rsconnect-postgres-server.name}"
subnet_id = "${var.deploy_subnet_id}"
ignore_missing_vnet_service_endpoint = true
}
```
### Debug Output
```
Error: Provider produced inconsistent final plan
When expanding the plan for
azurerm_***ql_virtual_network_rule.rsconnect-***-server-vnet-rule to
include new values learned so far during apply, provider "azurerm" produced an
invalid new value for .server_name: was cty.StringVal("PG-DEV-AUE"), but
now cty.StringVal("pg-dev-aue").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
* #0000
|
non_process
|
azure database for postgres provider bug please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider affected resource s azurerm postgresql server description when createing the postgres server resource the name of server must be lower case it would throw error as below when the name of var postgres server name is upper case i tried to create this resource on azure portal directly and upper case is allowed so i believe this issue can be improved terraform configuration files hcl resource azurerm postgresql server rsconnect postgres server name var postgres server name location australiaeast resource group name var rsconnect rg name sku name gp capacity tier generalpurpose family storage profile storage mb backup retention days geo redundant backup disabled administrator login var postgres username administrator login password var postgres admin password version ssl enforcement enabled resource azurerm postgresql virtual network rule rsconnect postgres server vnet rule name var postgres vnet rule name resource group name var rsconnect rg name server name azurerm postgresql server rsconnect postgres server name subnet id var deploy subnet id ignore missing vnet service endpoint true debug output error provider produced inconsistent final plan when expanding the plan for azurerm ql virtual network rule rsconnect server vnet rule to include new values learned so far during apply provider azurerm produced an invalid new value for server name was cty stringval pg dev aue but now cty stringval pg dev aue this is a bug in the provider which should be reported in the provider s own issue tracker
| 0
|
93,900
| 10,779,423,116
|
IssuesEvent
|
2019-11-04 10:34:23
|
xspec/xspec
|
https://api.github.com/repos/xspec/xspec
|
closed
|
Content of role attributes
|
documentation enhancement
|
When creating my first tests using XSpec, I tested a Schematron file that used asserts with role attribute set to ERROR, as described on http://schematron.com/2018/12/standard-severity-levels-with-schematron-role/ :
> So here is a list of severity levels that I think Schematron tools and schemas might consider supporting. It is drawn from various IDEs and the Language Server Protocol. I have added the equivalent LSP severity levels, and potential ISB severity levels.
> ```
> @role="FATAL" – something so bad that processing or validation should or did stop.
> LSP Error
> ISQB S1 CRITICAL
> @role="ERROR" – something wrong has occurred but processing may continue
> LSP Error
> ISQB S2 MAJOR
> @role="WARN" – something wrong has happened, but it does not necessarily require action
> LSP Warning
> ISQB S3 MINOR
> @role="CAUTION" – It may not be wrong, but care is required. Or a system might be expected to process the document in a special way because of this.
> LSP Information
> ISQB S4 TRIVIAL
> @role="INFO" – some information is being reported
> LSP Information
> @role="HINT" – some hint is being given to the user
> LSP Hint
> @role="TRACE" – some information on execution is being reported.
> LSP Information
> @role="DEBUG" – some information that is not intended for exposure in production.
> LSP Hint
> ```
> These severities also represent an order in which SVRL items might be presented to the user, if useful.
This resulted in `x:expect-valid` not working as I expected it to work.
When looking at the xspec code, I then found out that lowercase roles are used:
- https://github.com/xspec/xspec/blob/877ed425593ee7ec613669c5df8ce8723013ab7e/src/schematron/schut-to-xspec.xsl#L12
- https://github.com/xspec/xspec/blob/877ed425593ee7ec613669c5df8ce8723013ab7e/src/schematron/schut-to-xspec.xsl#L181
As a minimum, I think an update of https://github.com/xspec/xspec/wiki/Writing-Scenarios-for-Schematron is needed. Currently is says:
> `x:expect-valid` verifies that the Schematron is executed and passes validation. In the Schematron an assert or report can have a role attribute specifying that it is a warning or informational message and these are considered to be allowed for a passing validation.
This could e.g. be changed to:
`x:expect-valid` verifies that the Schematron is executed and passes validation. In the Schematron an assert or report can have a role attribute specifying that it is an error (value 'error' or 'fatal'), a warning (value 'warn' or 'warning') or an informational message (value 'info' or 'information'). A role attribute not equal to 'error' or 'fatal' is considered to be allowed for a passing validation.
A more advanced solution could be to extend the values XSpec can recognize (lowercase vs. uppercase, and the addition of the values described at http://schematron.com/2018/12/standard-severity-levels-with-schematron-role/ )
|
1.0
|
Content of role attributes - When creating my first tests using XSpec, I tested a Schematron file that used asserts with role attribute set to ERROR, as described on http://schematron.com/2018/12/standard-severity-levels-with-schematron-role/ :
> So here is a list of severity levels that I think Schematron tools and schemas might consider supporting. It is drawn from various IDEs and the Language Server Protocol. I have added the equivalent LSP severity levels, and potential ISB severity levels.
> ```
> @role="FATAL" – something so bad that processing or validation should or did stop.
> LSP Error
> ISQB S1 CRITICAL
> @role="ERROR" – something wrong has occurred but processing may continue
> LSP Error
> ISQB S2 MAJOR
> @role="WARN" – something wrong has happened, but it does not necessarily require action
> LSP Warning
> ISQB S3 MINOR
> @role="CAUTION" – It may not be wrong, but care is required. Or a system might be expected to process the document in a special way because of this.
> LSP Information
> ISQB S4 TRIVIAL
> @role="INFO" – some information is being reported
> LSP Information
> @role="HINT" – some hint is being given to the user
> LSP Hint
> @role="TRACE" – some information on execution is being reported.
> LSP Information
> @role="DEBUG" – some information that is not intended for exposure in production.
> LSP Hint
> ```
> These severities also represent an order in which SVRL items might be presented to the user, if useful.
This resulted in `x:expect-valid` not working as I expected it to work.
When looking at the xspec code, I then found out that lowercase roles are used:
- https://github.com/xspec/xspec/blob/877ed425593ee7ec613669c5df8ce8723013ab7e/src/schematron/schut-to-xspec.xsl#L12
- https://github.com/xspec/xspec/blob/877ed425593ee7ec613669c5df8ce8723013ab7e/src/schematron/schut-to-xspec.xsl#L181
As a minimum, I think an update of https://github.com/xspec/xspec/wiki/Writing-Scenarios-for-Schematron is needed. Currently is says:
> `x:expect-valid` verifies that the Schematron is executed and passes validation. In the Schematron an assert or report can have a role attribute specifying that it is a warning or informational message and these are considered to be allowed for a passing validation.
This could e.g. be changed to:
`x:expect-valid` verifies that the Schematron is executed and passes validation. In the Schematron an assert or report can have a role attribute specifying that it is an error (value 'error' or 'fatal'), a warning (value 'warn' or 'warning') or an informational message (value 'info' or 'information'). A role attribute not equal to 'error' or 'fatal' is considered to be allowed for a passing validation.
A more advanced solution could be to extend the values XSpec can recognize (lowercase vs. uppercase, and the addition of the values described at http://schematron.com/2018/12/standard-severity-levels-with-schematron-role/ )
|
non_process
|
content of role attributes when creating my first tests using xspec i tested a schematron file that used asserts with role attribute set to error as described on so here is a list of severity levels that i think schematron tools and schemas might consider supporting it is drawn from various ides and the language server protocol i have added the equivalent lsp severity levels and potential isb severity levels role fatal – something so bad that processing or validation should or did stop lsp error isqb critical role error – something wrong has occurred but processing may continue lsp error isqb major role warn – something wrong has happened but it does not necessarily require action lsp warning isqb minor role caution – it may not be wrong but care is required or a system might be expected to process the document in a special way because of this lsp information isqb trivial role info – some information is being reported lsp information role hint – some hint is being given to the user lsp hint role trace – some information on execution is being reported lsp information role debug – some information that is not intended for exposure in production lsp hint these severities also represent an order in which svrl items might be presented to the user if useful this resulted in x expect valid not working as i expected it to work when looking at the xspec code i then found out that lowercase roles are used as a minimum i think an update of is needed currently is says x expect valid verifies that the schematron is executed and passes validation in the schematron an assert or report can have a role attribute specifying that it is a warning or informational message and these are considered to be allowed for a passing validation this could e g be changed to x expect valid verifies that the schematron is executed and passes validation in the schematron an assert or report can have a role attribute specifying that it is an error value error or fatal a warning value warn or warning or an informational message value info or information a role attribute not equal to error or fatal is considered to be allowed for a passing validation a more advanced solution could be to extend the values xspec can recognize lowercase vs uppercase and the addition of the values described at
| 0
|
4,655
| 7,496,144,176
|
IssuesEvent
|
2018-04-08 06:06:29
|
kookmin-sw/2018-cap1-2
|
https://api.github.com/repos/kookmin-sw/2018-cap1-2
|
closed
|
영상처리 5: 라인별 윤곽선 최종 정렬 및 병합
|
ImageProcessing
|
TODO)
1) 동적 리스트 생성을 활용하여 이미지에서 도출되는 라인의 갯수에 맞춰 리스트를 생성할 것.
2) 라인 리스트 별로 x축을 기준으로 정렬하여 좌 -> 우 순서로 윤곽선을 정렬.
3) 윤곽선의 중심을 기준으로 일정거리 이내에 발견되는 윤곽선을 병합할 것(i / j / = 등의 문자)
PROBLEM)
1) 동적으로 리스트를 생성하는 것은 어떻게 구현할 것인지?
2) 라인별 리스트에 윤곽선들은 어떻게 넣을 것인지?
3) 윤곽선의 중심을 기준으로 일정거리 이내를 탐색하는 과정에서 '일정거리'를 어떠한 값으로 설정을 할 것인가? 현재 사용중인 이미지에서만 가능한 방법이 아니라 보편적으로 사용가능한 방법인가?
4) 3줄 이상의 라인을 가진 이미지에 대한 테스트가 필요.
|
1.0
|
영상처리 5: 라인별 윤곽선 최종 정렬 및 병합 - TODO)
1) 동적 리스트 생성을 활용하여 이미지에서 도출되는 라인의 갯수에 맞춰 리스트를 생성할 것.
2) 라인 리스트 별로 x축을 기준으로 정렬하여 좌 -> 우 순서로 윤곽선을 정렬.
3) 윤곽선의 중심을 기준으로 일정거리 이내에 발견되는 윤곽선을 병합할 것(i / j / = 등의 문자)
PROBLEM)
1) 동적으로 리스트를 생성하는 것은 어떻게 구현할 것인지?
2) 라인별 리스트에 윤곽선들은 어떻게 넣을 것인지?
3) 윤곽선의 중심을 기준으로 일정거리 이내를 탐색하는 과정에서 '일정거리'를 어떠한 값으로 설정을 할 것인가? 현재 사용중인 이미지에서만 가능한 방법이 아니라 보편적으로 사용가능한 방법인가?
4) 3줄 이상의 라인을 가진 이미지에 대한 테스트가 필요.
|
process
|
영상처리 라인별 윤곽선 최종 정렬 및 병합 todo 동적 리스트 생성을 활용하여 이미지에서 도출되는 라인의 갯수에 맞춰 리스트를 생성할 것 라인 리스트 별로 x축을 기준으로 정렬하여 좌 우 순서로 윤곽선을 정렬 윤곽선의 중심을 기준으로 일정거리 이내에 발견되는 윤곽선을 병합할 것 i j 등의 문자 problem 동적으로 리스트를 생성하는 것은 어떻게 구현할 것인지 라인별 리스트에 윤곽선들은 어떻게 넣을 것인지 윤곽선의 중심을 기준으로 일정거리 이내를 탐색하는 과정에서 일정거리 를 어떠한 값으로 설정을 할 것인가 현재 사용중인 이미지에서만 가능한 방법이 아니라 보편적으로 사용가능한 방법인가 이상의 라인을 가진 이미지에 대한 테스트가 필요
| 1
|
12,675
| 15,044,297,523
|
IssuesEvent
|
2021-02-03 02:38:38
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
[Edit in Place] count points in the polygon does not work
|
Bug Processing
|
The `Count Points in Polygon` algorithm works fine, but when used with `Edit in Place` it doesn't work, i.e. it doesn't add in the field with the count.
How to reproduce it:
1. add two layers, one polygonal and one point;
2. from the processing tools, activate Edit in place and start it;
3. insert the polygon as the first layer and the points as the second;
4. launch the algorithm;
5. when opening the attribute table of the polygonal layer there is no field with the count.
## OSGeo4W64 Win 10 64
## Even with a clean profile it doesn't work!

[RipGeo01012020_g.zip](https://github.com/qgis/QGIS/files/5489119/RipGeo01012020_g.zip)
|
1.0
|
[Edit in Place] count points in the polygon does not work - The `Count Points in Polygon` algorithm works fine, but when used with `Edit in Place` it doesn't work, i.e. it doesn't add in the field with the count.
How to reproduce it:
1. add two layers, one polygonal and one point;
2. from the processing tools, activate Edit in place and start it;
3. insert the polygon as the first layer and the points as the second;
4. launch the algorithm;
5. when opening the attribute table of the polygonal layer there is no field with the count.
## OSGeo4W64 Win 10 64
## Even with a clean profile it doesn't work!

[RipGeo01012020_g.zip](https://github.com/qgis/QGIS/files/5489119/RipGeo01012020_g.zip)
|
process
|
count points in the polygon does not work the count points in polygon algorithm works fine but when used with edit in place it doesn t work i e it doesn t add in the field with the count how to reproduce it add two layers one polygonal and one point from the processing tools activate edit in place and start it insert the polygon as the first layer and the points as the second launch the algorithm when opening the attribute table of the polygonal layer there is no field with the count win even with a clean profile it doesn t work
| 1
|
13,127
| 15,527,027,913
|
IssuesEvent
|
2021-03-13 03:53:54
|
hasura/ask-me-anything
|
https://api.github.com/repos/hasura/ask-me-anything
|
closed
|
What are some strategies to research ways to improve performance of GraphQL queries?
|
next-up-for-ama processing-for-shortvid question
|
The quick list to figure out the performance of queries is to determine what is being executed from the SQL call.
Will elaborate soon on an upcoming AMA session.
|
1.0
|
What are some strategies to research ways to improve performance of GraphQL queries? - The quick list to figure out the performance of queries is to determine what is being executed from the SQL call.
Will elaborate soon on an upcoming AMA session.
|
process
|
what are some strategies to research ways to improve performance of graphql queries the quick list to figure out the performance of queries is to determine what is being executed from the sql call will elaborate soon on an upcoming ama session
| 1
|
171,929
| 21,007,669,194
|
IssuesEvent
|
2022-03-30 01:17:54
|
Satheesh575555/kernel-mm-huge_memory
|
https://api.github.com/repos/Satheesh575555/kernel-mm-huge_memory
|
opened
|
CVE-2021-38202 (High) detected in linuxlinux-4.19.236
|
security vulnerability
|
## CVE-2021-38202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
fs/nfsd/trace.h in the Linux kernel before 5.13.4 might allow remote attackers to cause a denial of service (out-of-bounds read in strlen) by sending NFS traffic when the trace event framework is being used for nfsd.
<p>Publish Date: 2021-08-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38202>CVE-2021-38202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38202">https://www.linuxkernelcves.com/cves/CVE-2021-38202</a></p>
<p>Release Date: 2021-08-08</p>
<p>Fix Resolution: v5.13.4,v5.14-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-38202 (High) detected in linuxlinux-4.19.236 - ## CVE-2021-38202 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
fs/nfsd/trace.h in the Linux kernel before 5.13.4 might allow remote attackers to cause a denial of service (out-of-bounds read in strlen) by sending NFS traffic when the trace event framework is being used for nfsd.
<p>Publish Date: 2021-08-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38202>CVE-2021-38202</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-38202">https://www.linuxkernelcves.com/cves/CVE-2021-38202</a></p>
<p>Release Date: 2021-08-08</p>
<p>Fix Resolution: v5.13.4,v5.14-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files vulnerability details fs nfsd trace h in the linux kernel before might allow remote attackers to cause a denial of service out of bounds read in strlen by sending nfs traffic when the trace event framework is being used for nfsd publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
234,171
| 17,935,946,123
|
IssuesEvent
|
2021-09-10 15:19:36
|
felangel/mason
|
https://api.github.com/repos/felangel/mason
|
closed
|
docs: adds list of all casing options
|
documentation
|
**Description**
The casing options for variables should be documented for variables in bricks and their paths.
**Requirements**
* snake_case
* dot.case
* path/case
* param-case
* PascalCase
* Header-Case
* Title Case
* camelCase
* Sentence case
* CONSTANT_CASE
|
1.0
|
docs: adds list of all casing options - **Description**
The casing options for variables should be documented for variables in bricks and their paths.
**Requirements**
* snake_case
* dot.case
* path/case
* param-case
* PascalCase
* Header-Case
* Title Case
* camelCase
* Sentence case
* CONSTANT_CASE
|
non_process
|
docs adds list of all casing options description the casing options for variables should be documented for variables in bricks and their paths requirements snake case dot case path case param case pascalcase header case title case camelcase sentence case constant case
| 0
|
14,158
| 17,082,524,706
|
IssuesEvent
|
2021-07-08 07:41:23
|
RIOT-OS/RIOT
|
https://api.github.com/repos/RIOT-OS/RIOT
|
closed
|
periph_pwm API needs a way to define the initial duty cycle during init
|
Area: cpu Discussion: RFC Process: API change State: stale Type: enhancement
|
### Description
The `periph_pwm` API has no way to provide an initial duty cycle during init. Most (?) implementations automagically initialize the duty cycle to 0% and enable the output. In the worst case, this can be the exact opposite of what the user wants.
#### Steps to reproduce the issue
Use PWM to drive an LED connected in an active-low configuration and try to initialize it in the off (0%) state.
The application code to initialize the LED in an off state would look like this:
```c
pwm_init(...);
pwm_set(100); /* 100 for 0% because inverted LED */
```
#### Expected results
The code initializes the LED into an off state without any intermediate on states.
#### Actual results
After `pwm_init()` the LED is running at 100% until `pwm_set()`. The duration of the on state is long enough to see with human eyes.
### Proposed Solution
Modify the `periph_pwm` API as such:
```diff
- uint32_t pwm_init(pwm_t pwm, pwm_mode_t mode, uint32_t freq, uint16_t res);
+ uint32_t pwm_init(pwm_t pwm, pwm_mode_t mode, uint32_t freq, uint16_t res, uint16_t duty_cycle);
```
Similar to the situation with `gpio_init()` and `gpio_init_low/high()`, I think it makes no sense from a hardware perspective to enable a pin as an output without atomically defining what state to output.
|
1.0
|
periph_pwm API needs a way to define the initial duty cycle during init - ### Description
The `periph_pwm` API has no way to provide an initial duty cycle during init. Most (?) implementations automagically initialize the duty cycle to 0% and enable the output. In the worst case, this can be the exact opposite of what the user wants.
#### Steps to reproduce the issue
Use PWM to drive an LED connected in an active-low configuration and try to initialize it in the off (0%) state.
The application code to initialize the LED in an off state would look like this:
```c
pwm_init(...);
pwm_set(100); /* 100 for 0% because inverted LED */
```
#### Expected results
The code initializes the LED into an off state without any intermediate on states.
#### Actual results
After `pwm_init()` the LED is running at 100% until `pwm_set()`. The duration of the on state is long enough to see with human eyes.
### Proposed Solution
Modify the `periph_pwm` API as such:
```diff
- uint32_t pwm_init(pwm_t pwm, pwm_mode_t mode, uint32_t freq, uint16_t res);
+ uint32_t pwm_init(pwm_t pwm, pwm_mode_t mode, uint32_t freq, uint16_t res, uint16_t duty_cycle);
```
Similar to the situation with `gpio_init()` and `gpio_init_low/high()`, I think it makes no sense from a hardware perspective to enable a pin as an output without atomically defining what state to output.
|
process
|
periph pwm api needs a way to define the initial duty cycle during init description the periph pwm api has no way to provide an initial duty cycle during init most implementations automagically initialize the duty cycle to and enable the output in the worst case this can be the exact opposite of what the user wants steps to reproduce the issue use pwm to drive an led connected in an active low configuration and try to initialize it in the off state the application code to initialize the led in an off state would look like this c pwm init pwm set for because inverted led expected results the code initializes the led into an off state without any intermediate on states actual results after pwm init the led is running at until pwm set the duration of the on state is long enough to see with human eyes proposed solution modify the periph pwm api as such diff t pwm init pwm t pwm pwm mode t mode t freq t res t pwm init pwm t pwm pwm mode t mode t freq t res t duty cycle similar to the situation with gpio init and gpio init low high i think it makes no sense from a hardware perspective to enable a pin as an output without atomically defining what state to output
| 1
|
137,190
| 20,086,670,386
|
IssuesEvent
|
2022-02-05 03:57:16
|
microsoft/pyright
|
https://api.github.com/repos/microsoft/pyright
|
closed
|
Change in behavior between pylance 2021.12.1 and 2022.2.0 on a pandas example
|
as designed
|
**Describe the bug**
Using the pandas stubs shipped with pylance 2021.12.1 on pylance 2022.2.0 reports new errors
**To Reproduce**
Using code below, do the following:
Install pylance 2021.12.1 and no errors are reported
With pylance 2022.2.0 and you get errors.
Then to see it is not a problem with the stubs, do the following:
Take the pandas stubs from `~/.vscode/extensions/ms-python.vscode-pylance-2021.12.1/dist/bundled/stubs/pandas` and use them in the directory `~/.vscode/extensions/ms-python.vscode-pylance-2021.22.0/dist/bundled/stubs/pandas`
Then you get the error using pylance 2022.2.0 using the stubs from 2021.12.1 .
Error is:
```
Cannot access member "values" for type "Series[Dtype@__getitem__]"
```
Note a similar error is achieved if you try to access `s2.name` or `s2.hasnans` but not `s2.items()`, so it has something to do with looking up the type of a property.
**Expected behavior**
No error
**Screenshots or Code**
If applicable, add screenshots or the text of the code (surrounded by triple back ticks) to help explain your problem.
```python
import pandas as pd
df = pd.DataFrame({"x": [1, 2, 3], "y": [5, 6, 7]})
d2 = df[["x"]]
s2 = d2.loc[0]
sv = s2.values
```
**VS Code extension or command-line**
VSCode versions as described above
**Additional context**
Same issue seems to be true with pylance 2022.1.3, but things are OK with pylance 2022.1.1
The methodology above proves the issue is with pyright, not with the stubs, unless there is something in the stubs you are shipping that is incorrect and then you have to resolve that with the owners of those stubs.
|
1.0
|
Change in behavior between pylance 2021.12.1 and 2022.2.0 on a pandas example - **Describe the bug**
Using the pandas stubs shipped with pylance 2021.12.1 on pylance 2022.2.0 reports new errors
**To Reproduce**
Using code below, do the following:
Install pylance 2021.12.1 and no errors are reported
With pylance 2022.2.0 and you get errors.
Then to see it is not a problem with the stubs, do the following:
Take the pandas stubs from `~/.vscode/extensions/ms-python.vscode-pylance-2021.12.1/dist/bundled/stubs/pandas` and use them in the directory `~/.vscode/extensions/ms-python.vscode-pylance-2021.22.0/dist/bundled/stubs/pandas`
Then you get the error using pylance 2022.2.0 using the stubs from 2021.12.1 .
Error is:
```
Cannot access member "values" for type "Series[Dtype@__getitem__]"
```
Note a similar error is achieved if you try to access `s2.name` or `s2.hasnans` but not `s2.items()`, so it has something to do with looking up the type of a property.
**Expected behavior**
No error
**Screenshots or Code**
If applicable, add screenshots or the text of the code (surrounded by triple back ticks) to help explain your problem.
```python
import pandas as pd
df = pd.DataFrame({"x": [1, 2, 3], "y": [5, 6, 7]})
d2 = df[["x"]]
s2 = d2.loc[0]
sv = s2.values
```
**VS Code extension or command-line**
VSCode versions as described above
**Additional context**
Same issue seems to be true with pylance 2022.1.3, but things are OK with pylance 2022.1.1
The methodology above proves the issue is with pyright, not with the stubs, unless there is something in the stubs you are shipping that is incorrect and then you have to resolve that with the owners of those stubs.
|
non_process
|
change in behavior between pylance and on a pandas example describe the bug using the pandas stubs shipped with pylance on pylance reports new errors to reproduce using code below do the following install pylance and no errors are reported with pylance and you get errors then to see it is not a problem with the stubs do the following take the pandas stubs from vscode extensions ms python vscode pylance dist bundled stubs pandas and use them in the directory vscode extensions ms python vscode pylance dist bundled stubs pandas then you get the error using pylance using the stubs from error is cannot access member values for type series note a similar error is achieved if you try to access name or hasnans but not items so it has something to do with looking up the type of a property expected behavior no error screenshots or code if applicable add screenshots or the text of the code surrounded by triple back ticks to help explain your problem python import pandas as pd df pd dataframe x y df loc sv values vs code extension or command line vscode versions as described above additional context same issue seems to be true with pylance but things are ok with pylance the methodology above proves the issue is with pyright not with the stubs unless there is something in the stubs you are shipping that is incorrect and then you have to resolve that with the owners of those stubs
| 0
|
323,208
| 27,704,296,534
|
IssuesEvent
|
2023-03-14 10:10:07
|
angular/angular
|
https://api.github.com/repos/angular/angular
|
closed
|
Standalone Component DebugElements are missing properties
|
area: testing needs reproduction cross-cutting: standalone
|
### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
If you use a non standalone `ChildComponent` by its selector and pass an attribute, this will be included inside `DebugElement.properties` . Example: When doing `<child-component [foos]="fooList"/>` then `DebugElement.properties.foos` will contain `fooList` . However, if you add `standalone: true` to `ChildComponent` and import it from `ParentComponent` , at the time of testing `ParentComponent` and querying `child-component`, `DebugElement.properties.foos` will return `undefined` .
Workaround: If I do `DebugElement.componentInstance.foos` then the value is there, however this requires me to change a lot of tests when migrating my project to standalone.
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
Fails at: `expect(childComponent.properties.foos).toBe([]);`
`Expected undefined to be []`
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 15.1.4
Node: 16.14.2
Package Manager: npm 8.5.0
OS: linux x64
Angular: 15.1.3
... animations, cdk, common, compiler, compiler-cli, core, forms
... language-service, material, platform-browser
... platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1501.2
@angular-devkit/build-angular 15.1.3
@angular-devkit/core 15.1.3
@angular-devkit/schematics 15.1.4
@angular/cli 15.1.4
@schematics/angular 15.1.4
rxjs 7.5.6
typescript 4.8.4
webpack 5.75.0
```
### Anything else?
_No response_
|
1.0
|
Standalone Component DebugElements are missing properties - ### Which @angular/* package(s) are the source of the bug?
Don't known / other
### Is this a regression?
No
### Description
If you use a non standalone `ChildComponent` by its selector and pass an attribute, this will be included inside `DebugElement.properties` . Example: When doing `<child-component [foos]="fooList"/>` then `DebugElement.properties.foos` will contain `fooList` . However, if you add `standalone: true` to `ChildComponent` and import it from `ParentComponent` , at the time of testing `ParentComponent` and querying `child-component`, `DebugElement.properties.foos` will return `undefined` .
Workaround: If I do `DebugElement.componentInstance.foos` then the value is there, however this requires me to change a lot of tests when migrating my project to standalone.
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
Fails at: `expect(childComponent.properties.foos).toBe([]);`
`Expected undefined to be []`
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 15.1.4
Node: 16.14.2
Package Manager: npm 8.5.0
OS: linux x64
Angular: 15.1.3
... animations, cdk, common, compiler, compiler-cli, core, forms
... language-service, material, platform-browser
... platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1501.2
@angular-devkit/build-angular 15.1.3
@angular-devkit/core 15.1.3
@angular-devkit/schematics 15.1.4
@angular/cli 15.1.4
@schematics/angular 15.1.4
rxjs 7.5.6
typescript 4.8.4
webpack 5.75.0
```
### Anything else?
_No response_
|
non_process
|
standalone component debugelements are missing properties which angular package s are the source of the bug don t known other is this a regression no description if you use a non standalone childcomponent by its selector and pass an attribute this will be included inside debugelement properties example when doing then debugelement properties foos will contain foolist however if you add standalone true to childcomponent and import it from parentcomponent at the time of testing parentcomponent and querying child component debugelement properties foos will return undefined workaround if i do debugelement componentinstance foos then the value is there however this requires me to change a lot of tests when migrating my project to standalone please provide a link to a minimal reproduction of the bug no response please provide the exception or error you saw true fails at expect childcomponent properties foos tobe expected undefined to be please provide the environment you discovered this bug in run ng version true angular cli node package manager npm os linux angular animations cdk common compiler compiler cli core forms language service material platform browser platform browser dynamic router package version angular devkit architect angular devkit build angular angular devkit core angular devkit schematics angular cli schematics angular rxjs typescript webpack anything else no response
| 0
|
5,182
| 7,964,094,187
|
IssuesEvent
|
2018-07-13 20:03:43
|
CCALI/caw
|
https://api.github.com/repos/CCALI/caw
|
closed
|
Style Topic tags to look like tags
|
enhancement in process ready
|
Put each Topic/tag into a balloon so they display something like this:.

|
1.0
|
Style Topic tags to look like tags - Put each Topic/tag into a balloon so they display something like this:.

|
process
|
style topic tags to look like tags put each topic tag into a balloon so they display something like this
| 1
|
99,950
| 30,588,755,901
|
IssuesEvent
|
2023-07-21 15:16:17
|
rpopuc/gha-build-homolog
|
https://api.github.com/repos/rpopuc/gha-build-homolog
|
closed
|
Build Homolog
|
build-homolog
|
## Description
Realiza deploy automatizado da aplicação.
## Environments
environment_1
## Branches
essa_e_para_dar_erro_um_erro_bom
|
1.0
|
Build Homolog - ## Description
Realiza deploy automatizado da aplicação.
## Environments
environment_1
## Branches
essa_e_para_dar_erro_um_erro_bom
|
non_process
|
build homolog description realiza deploy automatizado da aplicação environments environment branches essa e para dar erro um erro bom
| 0
|
17,631
| 23,447,098,157
|
IssuesEvent
|
2022-08-15 20:51:48
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
reopened
|
Remove row limit or define a reasonable number for model cache
|
Querying/Processor Type:New Feature Querying/Models
|
**Is your feature request related to a problem? Please describe.**
The query processor has a global row limit of [1048575](https://github.com/metabase/metabase/blob/edd687a9ea2ab1584a2dfff9a1bd3415e23918a5/src/metabase/query_processor/interface.clj#L7-L15).
As we use the same pipeline to execute the query that caches models, we need to remove this limit. There are many use cases when users want to cache models with more than 1M rows.
**Describe the solution you'd like**
Two options;
1. Remove the limit for model cache
2. Change it to 1B (minimum), ideally 10B.
**How important is this feature to you?**
This is important for model cache, as the feature was designed for large models.
|
1.0
|
Remove row limit or define a reasonable number for model cache - **Is your feature request related to a problem? Please describe.**
The query processor has a global row limit of [1048575](https://github.com/metabase/metabase/blob/edd687a9ea2ab1584a2dfff9a1bd3415e23918a5/src/metabase/query_processor/interface.clj#L7-L15).
As we use the same pipeline to execute the query that caches models, we need to remove this limit. There are many use cases when users want to cache models with more than 1M rows.
**Describe the solution you'd like**
Two options;
1. Remove the limit for model cache
2. Change it to 1B (minimum), ideally 10B.
**How important is this feature to you?**
This is important for model cache, as the feature was designed for large models.
|
process
|
remove row limit or define a reasonable number for model cache is your feature request related to a problem please describe the query processor has a global row limit of as we use the same pipeline to execute the query that caches models we need to remove this limit there are many use cases when users want to cache models with more than rows describe the solution you d like two options remove the limit for model cache change it to minimum ideally how important is this feature to you this is important for model cache as the feature was designed for large models
| 1
|
415,209
| 28,022,956,178
|
IssuesEvent
|
2023-03-28 07:11:47
|
contentlayerdev/contentlayer
|
https://api.github.com/repos/contentlayerdev/contentlayer
|
closed
|
React Server Components (RSC) support for Contentlayer
|
documentation needs-research
|
[React Server Components](https://nextjs.org/docs/advanced-features/react-18/server-components) are about to become more generally available. We should investigate what's the best way to use Contentlayer in a RSC setup.
Status: Currently blocked by https://github.com/vercel/next.js/issues/41865
|
1.0
|
React Server Components (RSC) support for Contentlayer - [React Server Components](https://nextjs.org/docs/advanced-features/react-18/server-components) are about to become more generally available. We should investigate what's the best way to use Contentlayer in a RSC setup.
Status: Currently blocked by https://github.com/vercel/next.js/issues/41865
|
non_process
|
react server components rsc support for contentlayer are about to become more generally available we should investigate what s the best way to use contentlayer in a rsc setup status currently blocked by
| 0
|
266,661
| 23,252,522,694
|
IssuesEvent
|
2022-08-04 06:05:50
|
vanlyfe/capstone
|
https://api.github.com/repos/vanlyfe/capstone
|
closed
|
implement order.test.js
|
Backend Create unit tests
|
DESCRIPTION
As a backend developer, I want to have sufficient tests for my listings order file so that I ensure the orders dataflow is working properly
A/C
- [ ] Test "getOrdersByUserId"
- [ ] Test "Can successfully get orders by user id"
- [ ] Test "Returns nothing if user doesn't exist"
- [ ] Test "Returns nothing if user doesn't have any orders"
- [ ] Test "getOrderById"
- [ ] Test "Can successfully get an order by id"
- [ ] Test "Returns nothing if order doesn't exist"
- [ ] Test "postOrder"
- [ ] Test "Can successfully post an order"
- [ ] Test "Throws error if required field is missing"
- [ ] Test "Throws error if invalid field is provided"
DoD
All the test cases are defined and running them displays the results in the console
|
1.0
|
implement order.test.js - DESCRIPTION
As a backend developer, I want to have sufficient tests for my listings order file so that I ensure the orders dataflow is working properly
A/C
- [ ] Test "getOrdersByUserId"
- [ ] Test "Can successfully get orders by user id"
- [ ] Test "Returns nothing if user doesn't exist"
- [ ] Test "Returns nothing if user doesn't have any orders"
- [ ] Test "getOrderById"
- [ ] Test "Can successfully get an order by id"
- [ ] Test "Returns nothing if order doesn't exist"
- [ ] Test "postOrder"
- [ ] Test "Can successfully post an order"
- [ ] Test "Throws error if required field is missing"
- [ ] Test "Throws error if invalid field is provided"
DoD
All the test cases are defined and running them displays the results in the console
|
non_process
|
implement order test js description as a backend developer i want to have sufficient tests for my listings order file so that i ensure the orders dataflow is working properly a c test getordersbyuserid test can successfully get orders by user id test returns nothing if user doesn t exist test returns nothing if user doesn t have any orders test getorderbyid test can successfully get an order by id test returns nothing if order doesn t exist test postorder test can successfully post an order test throws error if required field is missing test throws error if invalid field is provided dod all the test cases are defined and running them displays the results in the console
| 0
|
2,887
| 2,607,964,586
|
IssuesEvent
|
2015-02-26 00:41:45
|
chrsmithdemos/leveldb
|
https://api.github.com/repos/chrsmithdemos/leveldb
|
opened
|
LevelDB get 'stuck' in reorganizing keys.
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Generate about 100 Strings, like "00000" to "00999"
2. Generate around 200.000 (0 to 200k) numbers and concat them with the above
like "0|00001", "1|00010", , ..., "199|00099", ..
3. Insert them into LevelDB
What is the expected output? What do you see instead?
Around 1.5M keys inserted, LevelDB starts to reorganize the files forever
(watched around 15 minutes to no end).
What version of the product are you using? On what operating system?
I'm using the Java LevelDB interface and MacOS X. Version 1.1.
Please provide any additional information below.
I know the keys are weird (to someone who don't know the business problem.
LevelDB gives me something like this over the logger interface:
```
-----
Original issue reported on code.google.com by `seidl.ma...@gmail.com` on 11 Dec 2011 at 1:21
Attachments:
* [leveldb-log.rtf](https://storage.googleapis.com/google-code-attachments/leveldb/issue-61/comment-0/leveldb-log.rtf)
|
1.0
|
LevelDB get 'stuck' in reorganizing keys. - ```
What steps will reproduce the problem?
1. Generate about 100 Strings, like "00000" to "00999"
2. Generate around 200.000 (0 to 200k) numbers and concat them with the above
like "0|00001", "1|00010", , ..., "199|00099", ..
3. Insert them into LevelDB
What is the expected output? What do you see instead?
Around 1.5M keys inserted, LevelDB starts to reorganize the files forever
(watched around 15 minutes to no end).
What version of the product are you using? On what operating system?
I'm using the Java LevelDB interface and MacOS X. Version 1.1.
Please provide any additional information below.
I know the keys are weird (to someone who don't know the business problem.
LevelDB gives me something like this over the logger interface:
```
-----
Original issue reported on code.google.com by `seidl.ma...@gmail.com` on 11 Dec 2011 at 1:21
Attachments:
* [leveldb-log.rtf](https://storage.googleapis.com/google-code-attachments/leveldb/issue-61/comment-0/leveldb-log.rtf)
|
non_process
|
leveldb get stuck in reorganizing keys what steps will reproduce the problem generate about strings like to generate around to numbers and concat them with the above like insert them into leveldb what is the expected output what do you see instead around keys inserted leveldb starts to reorganize the files forever watched around minutes to no end what version of the product are you using on what operating system i m using the java leveldb interface and macos x version please provide any additional information below i know the keys are weird to someone who don t know the business problem leveldb gives me something like this over the logger interface original issue reported on code google com by seidl ma gmail com on dec at attachments
| 0
|
9,072
| 12,140,814,808
|
IssuesEvent
|
2020-04-23 21:11:52
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
Remove gcp-devrel-py-tools usage from the repo
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
We agreed to remove the dependency on gcp-devrel-py-tools. This is an umbrella issue.
Here are current usage as of 2020-04-10:
```
$ grep -r 'gcp-devrel-py-tools==' *|wc -l
36
```
I'll create a github project and child issues.
|
1.0
|
Remove gcp-devrel-py-tools usage from the repo - We agreed to remove the dependency on gcp-devrel-py-tools. This is an umbrella issue.
Here are current usage as of 2020-04-10:
```
$ grep -r 'gcp-devrel-py-tools==' *|wc -l
36
```
I'll create a github project and child issues.
|
process
|
remove gcp devrel py tools usage from the repo we agreed to remove the dependency on gcp devrel py tools this is an umbrella issue here are current usage as of grep r gcp devrel py tools wc l i ll create a github project and child issues
| 1
|
11,669
| 14,530,576,185
|
IssuesEvent
|
2020-12-14 19:28:14
|
akamai/terraform-provider-akamai
|
https://api.github.com/repos/akamai/terraform-provider-akamai
|
closed
|
Cannot find group when there are groups with the same name under multiple contract.
|
Fix in process PR Review/Upcoming Release
|
### Terraform Version
Terraform v0.12.28
### Affected Resource(s)
Please list the resources as a list, for example:
- data_property_akamai_group
Fix in pull request #94 is waiting to be merged.
|
1.0
|
Cannot find group when there are groups with the same name under multiple contract. - ### Terraform Version
Terraform v0.12.28
### Affected Resource(s)
Please list the resources as a list, for example:
- data_property_akamai_group
Fix in pull request #94 is waiting to be merged.
|
process
|
cannot find group when there are groups with the same name under multiple contract terraform version terraform affected resource s please list the resources as a list for example data property akamai group fix in pull request is waiting to be merged
| 1
|
93,159
| 8,402,179,943
|
IssuesEvent
|
2018-10-11 05:19:06
|
CodeChain-io/codechain
|
https://api.github.com/repos/CodeChain-io/codechain
|
opened
|
Improve the integration test speed
|
test
|
The default Rust cargo cache setting is: (https://docs.travis-ci.com/user/caching/#rust-cargo-cache)
- `$HOME/.cargo`
- `$TRAVIS_BUILD_DIR/target`
## Possible try
- Remove `$TRAVIS_BUILD_DIR/target` from caching
This takes much time in setting up a build cache and storing a build cache.
The current time limit in storing a build cache is 3 minutes, and currently jobs always bump into this time limit.
- Add `$HOME/.rustup` to the cache list
It reduces time about 20 secs ~ 1 min in installing Rust.
Compare [non-applied one](https://travis-ci.org/CodeChain-io/codechain/jobs/439938151) and [applied one](https://travis-ci.org/jhs7jhs/codechain/jobs/439943506).
- Use `sccache`
It reduces time about 1.5 min in compiling the code. But it will take time in storing cache.
Compare [non-applied one](https://travis-ci.org/CodeChain-io/codechain/jobs/439938151) and [applied one](https://travis-ci.org/jhs7jhs/codechain/jobs/439954984)
## To check
- Sometimes the test runs fast. [Link](https://travis-ci.org/CodeChain-io/codechain/builds/439972205)
|
1.0
|
Improve the integration test speed - The default Rust cargo cache setting is: (https://docs.travis-ci.com/user/caching/#rust-cargo-cache)
- `$HOME/.cargo`
- `$TRAVIS_BUILD_DIR/target`
## Possible try
- Remove `$TRAVIS_BUILD_DIR/target` from caching
This takes much time in setting up a build cache and storing a build cache.
The current time limit in storing a build cache is 3 minutes, and currently jobs always bump into this time limit.
- Add `$HOME/.rustup` to the cache list
It reduces time about 20 secs ~ 1 min in installing Rust.
Compare [non-applied one](https://travis-ci.org/CodeChain-io/codechain/jobs/439938151) and [applied one](https://travis-ci.org/jhs7jhs/codechain/jobs/439943506).
- Use `sccache`
It reduces time about 1.5 min in compiling the code. But it will take time in storing cache.
Compare [non-applied one](https://travis-ci.org/CodeChain-io/codechain/jobs/439938151) and [applied one](https://travis-ci.org/jhs7jhs/codechain/jobs/439954984)
## To check
- Sometimes the test runs fast. [Link](https://travis-ci.org/CodeChain-io/codechain/builds/439972205)
|
non_process
|
improve the integration test speed the default rust cargo cache setting is home cargo travis build dir target possible try remove travis build dir target from caching this takes much time in setting up a build cache and storing a build cache the current time limit in storing a build cache is minutes and currently jobs always bump into this time limit add home rustup to the cache list it reduces time about secs min in installing rust compare and use sccache it reduces time about min in compiling the code but it will take time in storing cache compare and to check sometimes the test runs fast
| 0
|
7,581
| 10,694,868,211
|
IssuesEvent
|
2019-10-23 11:51:00
|
prisma/specs
|
https://api.github.com/repos/prisma/specs
|
closed
|
Backend Epic process spec
|
area/process kind/spec spec/new
|
Backend team now has a process to work with Epics, we should document that.
|
1.0
|
Backend Epic process spec - Backend team now has a process to work with Epics, we should document that.
|
process
|
backend epic process spec backend team now has a process to work with epics we should document that
| 1
|
4,854
| 7,744,363,538
|
IssuesEvent
|
2018-05-29 15:12:35
|
gvwilson/teachtogether.tech
|
https://api.github.com/repos/gvwilson/teachtogether.tech
|
opened
|
Ch06 Florian Shkurti
|
Ch06 Process
|
- "Collaboration on lesson development" I think you're ignoring the fact that some teachers might not want to compromise on having creative control over their material or share it with other teachers with differing opinions. In a wikipedia-type model, if two teachers disagree about which presentation method is best, do you just create a personal branch of your own lectures? Aren't you going to end up with multiple copies?
- "If Caulfield is right" I personally find this unlikely. I'd expect it would only work for motivated students or experienced students browsing through documentation of frameworks about how to get something done.
|
1.0
|
Ch06 Florian Shkurti - - "Collaboration on lesson development" I think you're ignoring the fact that some teachers might not want to compromise on having creative control over their material or share it with other teachers with differing opinions. In a wikipedia-type model, if two teachers disagree about which presentation method is best, do you just create a personal branch of your own lectures? Aren't you going to end up with multiple copies?
- "If Caulfield is right" I personally find this unlikely. I'd expect it would only work for motivated students or experienced students browsing through documentation of frameworks about how to get something done.
|
process
|
florian shkurti collaboration on lesson development i think you re ignoring the fact that some teachers might not want to compromise on having creative control over their material or share it with other teachers with differing opinions in a wikipedia type model if two teachers disagree about which presentation method is best do you just create a personal branch of your own lectures aren t you going to end up with multiple copies if caulfield is right i personally find this unlikely i d expect it would only work for motivated students or experienced students browsing through documentation of frameworks about how to get something done
| 1
|
12,970
| 15,345,275,273
|
IssuesEvent
|
2021-02-28 06:08:57
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Investigate adding Zeitwerk to core gems
|
process
|
Because we are pulling in ActiveSupport now, it's bringing the [Zeitwerk gem](https://github.com/fxn/zeitwerk) along for the ride. If we can leverage that to make the process of requiring or autoloading parts of Bridgetown easier and/or faster, that would be groovy.
|
1.0
|
Investigate adding Zeitwerk to core gems - Because we are pulling in ActiveSupport now, it's bringing the [Zeitwerk gem](https://github.com/fxn/zeitwerk) along for the ride. If we can leverage that to make the process of requiring or autoloading parts of Bridgetown easier and/or faster, that would be groovy.
|
process
|
investigate adding zeitwerk to core gems because we are pulling in activesupport now it s bringing the along for the ride if we can leverage that to make the process of requiring or autoloading parts of bridgetown easier and or faster that would be groovy
| 1
|
346,990
| 24,887,422,197
|
IssuesEvent
|
2022-10-28 08:59:05
|
avock/ped
|
https://api.github.com/repos/avock/ped
|
opened
|
UG - Ambiguous Terms
|
severity.High type.DocumentationBug
|

- It is not well documented in the user guide as to what the criteria refers to
<!--session: 1666944569322-649c9bc5-2a2a-4b2c-aa59-40129fdaf2f5-->
<!--Version: Web v3.4.4-->
|
1.0
|
UG - Ambiguous Terms - 
- It is not well documented in the user guide as to what the criteria refers to
<!--session: 1666944569322-649c9bc5-2a2a-4b2c-aa59-40129fdaf2f5-->
<!--Version: Web v3.4.4-->
|
non_process
|
ug ambiguous terms it is not well documented in the user guide as to what the criteria refers to
| 0
|
9,666
| 12,663,362,411
|
IssuesEvent
|
2020-06-18 01:06:34
|
axa-group/Parsr
|
https://api.github.com/repos/axa-group/Parsr
|
closed
|
Automatic high performance Header/Footer detection
|
feature processing
|
The current header/footer detection module `HeaderFooterDetectionModule` requires an estimate in percentage of the maximal distance from the page limit, where the header and footers lie.
It would be great to have this module automatically detect headers and footers (using techniques like NLP, Vision, etc) without the need of such a parameter.
|
1.0
|
Automatic high performance Header/Footer detection - The current header/footer detection module `HeaderFooterDetectionModule` requires an estimate in percentage of the maximal distance from the page limit, where the header and footers lie.
It would be great to have this module automatically detect headers and footers (using techniques like NLP, Vision, etc) without the need of such a parameter.
|
process
|
automatic high performance header footer detection the current header footer detection module headerfooterdetectionmodule requires an estimate in percentage of the maximal distance from the page limit where the header and footers lie it would be great to have this module automatically detect headers and footers using techniques like nlp vision etc without the need of such a parameter
| 1
|
21,335
| 29,041,781,639
|
IssuesEvent
|
2023-05-13 03:15:27
|
gqylpy/gqylpy-dict
|
https://api.github.com/repos/gqylpy/gqylpy-dict
|
reopened
|
向gdict实例中写入的dict实例没有被转换为gdict实例
|
question Processed
|
问题模拟代码:
```python
>>> d = gdict()
>>> d.a = {}
>>> d.a.__class__.__qualname__
'dict'
```
我们希望 `d.a.__class__.__qualname__` 得到的是 `'GqylpyDict'`。
|
1.0
|
向gdict实例中写入的dict实例没有被转换为gdict实例 - 问题模拟代码:
```python
>>> d = gdict()
>>> d.a = {}
>>> d.a.__class__.__qualname__
'dict'
```
我们希望 `d.a.__class__.__qualname__` 得到的是 `'GqylpyDict'`。
|
process
|
向gdict实例中写入的dict实例没有被转换为gdict实例 问题模拟代码: python d gdict d a d a class qualname dict 我们希望 d a class qualname 得到的是 gqylpydict 。
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.