Column summary for the records that follow (one field per line, in this order):

column        dtype          range / values
Unnamed: 0    int64          0 – 832k
id            float64        2.49B – 32.1B
type          stringclasses  1 value (IssuesEvent)
created_at    stringlengths  19 – 19
repo          stringlengths  7 – 112
repo_url      stringlengths  36 – 141
action        stringclasses  3 values
title         stringlengths  1 – 744
labels        stringlengths  4 – 574
body          stringlengths  9 – 211k
index         stringclasses  10 values
text_combine  stringlengths  96 – 211k
label         stringclasses  2 values (process, non_process)
text          stringlengths  96 – 188k
binary_label  int64          0 – 1
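As a minimal sketch of working with a dump in this schema, assuming it has been exported to CSV (the file name here is hypothetical, since the section does not say where the dump lives):

```python
import pandas as pd

# Hypothetical file name; the dump's actual location is not given in this section.
df = pd.read_csv("issues_events.csv")

# binary_label mirrors label: 1 for "process", 0 for "non_process".
process_rows = df[df["binary_label"] == 1]
print(process_rows[["repo", "title", "labels"]].head())
```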
238,011
18,215,436,961
IssuesEvent
2021-09-30 03:16:34
SynBioHub/Plugin-Submit-Excel-Library
https://api.github.com/repos/SynBioHub/Plugin-Submit-Excel-Library
closed
Document how to locally run "Submit excel library"
documentation
Similar request to https://github.com/SynBioHub/Plugin-Submit-Excel-Composition/issues/6#issue-696251251. It would be helpful to document how this add-on can be run locally.
1.0
Document how to locally run "Submit excel library" - Similar request to https://github.com/SynBioHub/Plugin-Submit-Excel-Composition/issues/6#issue-696251251. It would be helpful to document how this add-on can be run locally.
non_process
document how to locally run submit excel library similar request to it would be helpful to document how this add on can be run locally
0
416,739
28,097,845,128
IssuesEvent
2023-03-30 17:04:47
microsoft/torchgeo
https://api.github.com/repos/microsoft/torchgeo
closed
torchgeo install in google colab
documentation
### Description I wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue. ``` %pip install torchgeo from torchgeo.trainers import ClassificationTask ``` `ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})` ### Steps to reproduce [Notebook](https://colab.research.google.com/drive/1zK4uXLPGOkNWqRuqFszR9opg2bnFzDb1?usp=sharing) ### Version 0.4.0
1.0
torchgeo install in google colab - ### Description I wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue. ``` %pip install torchgeo from torchgeo.trainers import ClassificationTask ``` `ContextualVersionConflict: (Pygments 2.6.1 (/usr/local/lib/python3.8/dist-packages), Requirement.parse('pygments<3.0.0,>=2.14.0'), {'rich'})` ### Steps to reproduce [Notebook](https://colab.research.google.com/drive/1zK4uXLPGOkNWqRuqFszR9opg2bnFzDb1?usp=sharing) ### Version 0.4.0
non_process
torchgeo install in google colab description i wanted to check a potential bug with a reproducible example in a colab notebook and found that there is some installation issue pip install torchgeo from torchgeo trainers import classificationtask contextualversionconflict pygments usr local lib dist packages requirement parse pygments rich steps to reproduce version
0
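The ContextualVersionConflict above says rich requires pygments>=2.14.0,<3.0.0 while Colab preinstalls Pygments 2.6.1. A plausible workaround, not taken from the issue itself, is to upgrade Pygments into that range in the same install and restart the runtime:

```python
# Colab cell: pull Pygments into the range rich requires, then restart the
# runtime so the upgraded package is the one that gets imported.
%pip install "pygments>=2.14.0,<3.0.0" torchgeo

from torchgeo.trainers import ClassificationTask  # should import cleanly now
```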
41,096
21,476,328,635
IssuesEvent
2022-04-26 13:57:12
dotnet/msbuild
https://api.github.com/repos/dotnet/msbuild
closed
LoggingService.LogComment causes large amounts of contention between unrelated evaluation threads.
performance
### Issue Description LoggingService.LogComment causes large amounts of contention between unrelated evaluation threads. ![image](https://user-images.githubusercontent.com/25249058/152361351-ff5d51a9-5630-4aaf-8178-4597d5bf9773.png) ### Steps to Reproduce - clean solution VS temp files (.vs) - clean solution (msbuild /t:clean) - track in perfview: open solution with large number of projects ### Data TBD ### Analysis TBD ### Versions & Configurations VS 17.2 ### Regression? TBD
True
LoggingService.LogComment causes large amounts of contention between unrelated evaluation threads. - ### Issue Description LoggingService.LogComment causes large amounts of contention between unrelated evaluation threads. ![image](https://user-images.githubusercontent.com/25249058/152361351-ff5d51a9-5630-4aaf-8178-4597d5bf9773.png) ### Steps to Reproduce - clean solution VS temp files (.vs) - clean solution (msbuild /t:clean) - track in perfview: open solution with large number of projects ### Data TBD ### Analysis TBD ### Versions & Configurations VS 17.2 ### Regression? TBD
non_process
loggingservice logcomment causes large amounts of contention between unrelated evaluation threads issue description loggingservice logcomment causes large amounts of contention between unrelated evaluation threads steps to reproduce clean solution vs temp files vs clean solution msbuild t clean track in perfview open solution with large number of projects data tbd analysis tbd versions configurations vs regression tbd
0
7,315
10,452,444,693
IssuesEvent
2019-09-19 14:42:02
prisma/photonjs
https://api.github.com/repos/prisma/photonjs
closed
Remove `photon.connect()` from all examples
kind/docs process/next-milestone
In order to simplify examples we should remove the `await photon.connect()` call and rely on the lazy connect behavior of Photon.js by default. Additionally we should **document** how to explicitly connect to your data source e.g. as an optimization strategy.
1.0
Remove `photon.connect()` from all examples - In order to simplify examples we should remove the `await photon.connect()` call and rely on the lazy connect behavior of Photon.js by default. Additionally we should **document** how to explicitly connect to your data source e.g. as an optimization strategy.
process
remove photon connect from all examples in order to simplify examples we should remove the await photon connect call and rely on the lazy connect behavior of photon js by default additionally we should document how to explicitly connect to your data source e g as an optimization strategy
1
170,624
20,883,786,688
IssuesEvent
2022-03-23 01:12:44
snowdensb/dependabot-core
https://api.github.com/repos/snowdensb/dependabot-core
reopened
CVE-2015-8315 (High) detected in ms-0.6.2.tgz
security vulnerability
## CVE-2015-8315 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.6.2.tgz</b></p></summary> <p>Tiny ms conversion utility</p> <p>Library home page: <a href="https://registry.npmjs.org/ms/-/ms-0.6.2.tgz">https://registry.npmjs.org/ms/-/ms-0.6.2.tgz</a></p> <p>Path to dependency file: /npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/package.json</p> <p>Path to vulnerable library: /npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm6/etag_no_lockfile/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm7/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm7/library/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm6_and_yarn/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json</p> <p> Dependency Hierarchy: - mocha-1.21.5.tgz (Root Library) - debug-2.0.0.tgz - :x: **ms-0.6.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/dependabot-core/commit/ba8cd9078c8ce0cb202767d627706711237abf71">ba8cd9078c8ce0cb202767d627706711237abf71</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The ms package before 0.7.1 for Node.js allows attackers to cause a denial of service (CPU consumption) via a long version string, aka a "regular expression denial of service (ReDoS)." <p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8315>CVE-2015-8315</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8315">https://nvd.nist.gov/vuln/detail/CVE-2015-8315</a></p> <p>Release Date: 2017-01-23</p> <p>Fix Resolution (ms): 0.7.1</p> <p>Direct dependency fix Resolution (mocha): 2.3.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"1.21.5","packageFilePaths":["/npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:1.21.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.4","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-8315","vulnerabilityDetails":"The ms package before 0.7.1 for Node.js allows attackers to cause a denial of service (CPU consumption) via a long version string, aka a \"regular expression denial of service (ReDoS).\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8315","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2015-8315 (High) detected in ms-0.6.2.tgz - ## CVE-2015-8315 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ms-0.6.2.tgz</b></p></summary> <p>Tiny ms conversion utility</p> <p>Library home page: <a href="https://registry.npmjs.org/ms/-/ms-0.6.2.tgz">https://registry.npmjs.org/ms/-/ms-0.6.2.tgz</a></p> <p>Path to dependency file: /npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/package.json</p> <p>Path to vulnerable library: /npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm6/etag_no_lockfile/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm7/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm7/library/node_modules/mocha/node_modules/ms/package.json,/npm_and_yarn/spec/fixtures/projects/npm6_and_yarn/path_dependency/deps/etag/node_modules/mocha/node_modules/ms/package.json</p> <p> Dependency Hierarchy: - mocha-1.21.5.tgz (Root Library) - debug-2.0.0.tgz - :x: **ms-0.6.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snowdensb/dependabot-core/commit/ba8cd9078c8ce0cb202767d627706711237abf71">ba8cd9078c8ce0cb202767d627706711237abf71</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The ms package before 0.7.1 for Node.js allows attackers to cause a denial of service (CPU consumption) via a long version string, aka a "regular expression denial of service (ReDoS)." <p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8315>CVE-2015-8315</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8315">https://nvd.nist.gov/vuln/detail/CVE-2015-8315</a></p> <p>Release Date: 2017-01-23</p> <p>Fix Resolution (ms): 0.7.1</p> <p>Direct dependency fix Resolution (mocha): 2.3.4</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"mocha","packageVersion":"1.21.5","packageFilePaths":["/npm_and_yarn/spec/fixtures/projects/npm6/path_dependency/deps/etag/package.json"],"isTransitiveDependency":false,"dependencyTree":"mocha:1.21.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.4","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2015-8315","vulnerabilityDetails":"The ms package before 0.7.1 for Node.js allows attackers to cause a denial of service (CPU consumption) via a long version string, aka a \"regular expression denial of service (ReDoS).\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8315","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in ms tgz cve high severity vulnerability vulnerable library ms tgz tiny ms conversion utility library home page a href path to dependency file npm and yarn spec fixtures projects path dependency deps etag package json path to vulnerable library npm and yarn spec fixtures projects path dependency deps etag node modules mocha node modules ms package json npm and yarn spec fixtures projects etag no lockfile node modules mocha node modules ms package json npm and yarn spec fixtures projects path dependency deps etag node modules mocha node modules ms package json npm and yarn spec fixtures projects library node modules mocha node modules ms package json npm and yarn spec fixtures projects and yarn path dependency deps etag node modules mocha node modules ms package json dependency hierarchy mocha tgz root library debug tgz x ms tgz vulnerable library found in head commit a href found in base branch main vulnerability details the ms package before for node js allows attackers to cause a denial of service cpu consumption via a long version string aka a regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ms direct dependency fix resolution mocha rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mocha isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the ms package before for node js allows attackers to cause a denial of service cpu consumption via a long version string aka a regular expression denial of service redos vulnerabilityurl
0
462,355
13,245,684,753
IssuesEvent
2020-08-19 14:42:30
carbon-design-system/ibm-dotcom-library
https://api.github.com/repos/carbon-design-system/ibm-dotcom-library
opened
Web Component: Develop Callout (internal) of the React version
dev package: web components priority: high
#### User Story <!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} --> > As a `[user role below]`: IBM.com Library developer > I need to: create the `Callout (internal)` > so that I can: provide ibm.com adopter developers a web component version for every react version available in the ibm.com Library #### Additional information <!-- {{Please provide any additional information or resources for reference}} --> - Story within Storybook with corresponding knobs - Utilize Carbon - Create with Shadow DOM and Custom Elements standards - **See the Epic for the Design and Functional specs information** - [React canary environment](https://ibmdotcom-react-canary.mybluemix.net/?path=/docs/overview-getting-started--page) - Prod QA testing issue (#3637) #### Acceptance criteria - [ ] Include README for the web component and corresponding styles - [ ] Create Web Components styles in styles package - [ ] No custom styles in web-components package - [ ] Do not create knobs in Storybook that include JSON objects - [ ] Break out Storybook stories into multiple variation stories, if applicable - [ ] Create codesandbox example under `/packages/web-components/examples/codesandbox` and include in README - [ ] Minimum 80% unit test coverage - [ ] If a design is provided, the Designer is included as a Reviewer in the Pull Request - [ ] Provide a direct link to the deploy preview for the designer in the Pull Request description - [ ] A comment is posted in the Design QA issue, tagging Wonil and Roberta, when development is finished - [ ] The Storybook link is added to the Design QA issue for their testing - [ ] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when development is finished
1.0
Web Component: Develop Callout (internal) of the React version - #### User Story <!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} --> > As a `[user role below]`: IBM.com Library developer > I need to: create the `Callout (internal)` > so that I can: provide ibm.com adopter developers a web component version for every react version available in the ibm.com Library #### Additional information <!-- {{Please provide any additional information or resources for reference}} --> - Story within Storybook with corresponding knobs - Utilize Carbon - Create with Shadow DOM and Custom Elements standards - **See the Epic for the Design and Functional specs information** - [React canary environment](https://ibmdotcom-react-canary.mybluemix.net/?path=/docs/overview-getting-started--page) - Prod QA testing issue (#3637) #### Acceptance criteria - [ ] Include README for the web component and corresponding styles - [ ] Create Web Components styles in styles package - [ ] No custom styles in web-components package - [ ] Do not create knobs in Storybook that include JSON objects - [ ] Break out Storybook stories into multiple variation stories, if applicable - [ ] Create codesandbox example under `/packages/web-components/examples/codesandbox` and include in README - [ ] Minimum 80% unit test coverage - [ ] If a design is provided, the Designer is included as a Reviewer in the Pull Request - [ ] Provide a direct link to the deploy preview for the designer in the Pull Request description - [ ] A comment is posted in the Design QA issue, tagging Wonil and Roberta, when development is finished - [ ] The Storybook link is added to the Design QA issue for their testing - [ ] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when development is finished
non_process
web component develop callout internal of the react version user story as a ibm com library developer i need to create the callout internal so that i can provide ibm com adopter developers a web component version for every react version available in the ibm com library additional information story within storybook with corresponding knobs utilize carbon create with shadow dom and custom elements standards see the epic for the design and functional specs information prod qa testing issue acceptance criteria include readme for the web component and corresponding styles create web components styles in styles package no custom styles in web components package do not create knobs in storybook that include json objects break out storybook stories into multiple variation stories if applicable create codesandbox example under packages web components examples codesandbox and include in readme minimum unit test coverage if a design is provided the designer is included as a reviewer in the pull request provide a direct link to the deploy preview for the designer in the pull request description a comment is posted in the design qa issue tagging wonil and roberta when development is finished the storybook link is added to the design qa issue for their testing a comment is posted in the prod qa issue tagging praveen and chetan when development is finished
0
18,768
24,674,277,745
IssuesEvent
2022-10-18 15:46:19
keras-team/keras-cv
https://api.github.com/repos/keras-team/keras-cv
closed
Add RandomSunFlare preprocessing layer
preprocessing
## Weather Augmentation One of the real-world scenarios that pose challenges for training neural networks for autonomous vehicles ![output_39_0](https://user-images.githubusercontent.com/17668390/164973817-fee5266c-bbfb-4424-a502-6cb56f8b8b6d.png) Reference implementations: - https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library - https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomSunFlare
1.0
Add RandomSunFlare preprocessing layer - ## Weather Augmentation One of the real-world scenarios that pose challenges for training neural networks for autonomous vehicles ![output_39_0](https://user-images.githubusercontent.com/17668390/164973817-fee5266c-bbfb-4424-a502-6cb56f8b8b6d.png) Reference implementations: - https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library - https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomSunFlare
process
add randomsunflare preprocessing layer weather augmentation one of the real world scenarios that pose challenges for training neural networks for autonomous vehicles reference implementations
1
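The second reference link above is albumentations' RandomSunFlare. A minimal usage sketch of that reference transform (exact parameters vary across albumentations versions, so only `p` is set here):

```python
import albumentations as A
import numpy as np

# p=1.0 forces the flare so the effect is always visible in a quick test.
augment = A.Compose([A.RandomSunFlare(p=1.0)])

image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
flared = augment(image=image)["image"]  # same shape, with a synthetic sun flare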
17,592
23,416,245,101
IssuesEvent
2022-08-13 02:18:14
pycaret/pycaret
https://api.github.com/repos/pycaret/pycaret
closed
[BUG] Why is the inverse transformation of the target not applied?
bug time_series preprocessing priority_high
### Discussed in https://github.com/pycaret/pycaret/discussions/2706 <div type='discussions-op-text'> <sup>Originally posted by **acartro** July 4, 2022</sup> Good morning, I am doing a time series experiment with the new 3.0 library. I have found that when I set the setup to do a logarithmic transformation of the target (transform_target='log') when making the forecasts, it applies the inverse transformation. ### With target transformation ``` from pycaret.time_series import * from pycaret.datasets import get_data y = get_data("airline") #### Setup experiment ---- exp = TSForecastingExperiment() exp.setup(data=y, fh=12, session_id=42, transform_target='log') ##### List the available models ---- exp.models() #### Create a model from available models ---- model = exp.create_model("arima") ### plot model exp.plot_model(model ) ``` ![image](https://user-images.githubusercontent.com/11137446/177162592-c0e9c57e-0bf9-43ba-a8c5-909c9caa3226.png) ### With target transformation and impute missings But when I set the setup to do an imputation of the missings and then the logarithmic transformation (transform_target='log', numeric_imputation_target='drift'), when it makes the forecast, it doesn't apply the inverse transformation. ``` #### Setup experiment ---- exp = TSForecastingExperiment() exp.setup(data=y, fh=12, session_id=42, transform_target='log', numeric_imputation_target='drift') ##### List the available models ---- exp.models() #### Create a model from available models ---- model = exp.create_model("arima") ### plot model exp.plot_model(model ) ``` ![image](https://user-images.githubusercontent.com/11137446/177163187-fcb08417-ad58-4eaf-82b2-37bbc1346c21.png) Does anyone know why this is happening or am I just doing the forecast wrong? </div>
1.0
[BUG] Why is the inverse transformation of the target not applied? - ### Discussed in https://github.com/pycaret/pycaret/discussions/2706 <div type='discussions-op-text'> <sup>Originally posted by **acartro** July 4, 2022</sup> Good morning, I am doing a time series experiment with the new 3.0 library. I have found that when I set the setup to do a logarithmic transformation of the target (transform_target='log') when making the forecasts, it applies the inverse transformation. ### With target transformation ``` from pycaret.time_series import * from pycaret.datasets import get_data y = get_data("airline") #### Setup experiment ---- exp = TSForecastingExperiment() exp.setup(data=y, fh=12, session_id=42, transform_target='log') ##### List the available models ---- exp.models() #### Create a model from available models ---- model = exp.create_model("arima") ### plot model exp.plot_model(model ) ``` ![image](https://user-images.githubusercontent.com/11137446/177162592-c0e9c57e-0bf9-43ba-a8c5-909c9caa3226.png) ### With target transformation and impute missings But when I set the setup to do an imputation of the missings and then the logarithmic transformation (transform_target='log', numeric_imputation_target='drift'), when it makes the forecast, it doesn't apply the inverse transformation. ``` #### Setup experiment ---- exp = TSForecastingExperiment() exp.setup(data=y, fh=12, session_id=42, transform_target='log', numeric_imputation_target='drift') ##### List the available models ---- exp.models() #### Create a model from available models ---- model = exp.create_model("arima") ### plot model exp.plot_model(model ) ``` ![image](https://user-images.githubusercontent.com/11137446/177163187-fcb08417-ad58-4eaf-82b2-37bbc1346c21.png) Does anyone know why this is happening or am I just doing the forecast wrong? </div>
process
why is the inverse transformation of the target not applied discussed in originally posted by acartro july good morning i am doing a time series experiment with the new library i have found that when i set the setup to do a logarithmic transformation of the target transform target log when making the forecasts it applies the inverse transformation with target transformation from pycaret time series import from pycaret datasets import get data y get data airline setup experiment exp tsforecastingexperiment exp setup data y fh session id transform target log list the available models exp models create a model from available models model exp create model arima plot model exp plot model model with target transformation and impute missings but when i set the setup to do an imputation of the missings and then the logarithmic transformation transform target log numeric imputation target drift when it makes the forecast it doesn t apply the inverse transformation setup experiment exp tsforecastingexperiment exp setup data y fh session id transform target log numeric imputation target drift list the available models exp models create a model from available models model exp create model arima plot model exp plot model model does anyone know why this is happening or am i just doing the forecast wrong
1
71,390
23,606,276,517
IssuesEvent
2022-08-24 08:32:34
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
FileUpload | The error message does not disappear correctly when removing file(s), to match your file limit.
defect
### Describe the bug if you choose more files than fileLimit you see an alert message, but when you remove a file and the fileLimit is correct now, the alert message stays. It should go away, but it does not. The same happens when you press the cancel button and delete all files that way: the alert message stays, although it should ideally go away. Can someone look into it? ### Environment primeng ### Reproducer _No response_ ### Angular version 13.1.0 ### PrimeNG version 13.1.0 ### Build / Runtime Angular CLI App ### Language TypeScript ### Node version (for AoT issues node --version) 14.18.1 ### Browser(s) _No response_ ### Steps to reproduce the behavior 1. Use advanced p-fileupload. Use the following in the component.html file. Set fileLimit to 3 <p-fileUpload #fileUpload [fileLimit]="3" class="center" name="demoA" (onUpload)="onUpload($event)" [multiple]="true" accept="image/*" [maxFileSize]="1000000"> </p-fileUpload> 2. Select 4 files. You will see the alert message: Maximum number of files exceeded, limit is 3 at most. 3. Now remove any one file by clicking the X button next to it, or press the cancel button next to upload to delete all files. 4. The alert message should go off because the number of selected files is now 3 or 0. However, the alert message stays. ### Expected behavior The alert message should go off
1.0
FileUpload | The error message does not disappear correctly when removing file(s), to match your file limit. - ### Describe the bug if you choose more files than fileLimit you see an alert message, but when you remove a file and the fileLimit is correct now, the alert message stays. It should go away, but it does not. The same happens when you press the cancel button and delete all files that way: the alert message stays, although it should ideally go away. Can someone look into it? ### Environment primeng ### Reproducer _No response_ ### Angular version 13.1.0 ### PrimeNG version 13.1.0 ### Build / Runtime Angular CLI App ### Language TypeScript ### Node version (for AoT issues node --version) 14.18.1 ### Browser(s) _No response_ ### Steps to reproduce the behavior 1. Use advanced p-fileupload. Use the following in the component.html file. Set fileLimit to 3 <p-fileUpload #fileUpload [fileLimit]="3" class="center" name="demoA" (onUpload)="onUpload($event)" [multiple]="true" accept="image/*" [maxFileSize]="1000000"> </p-fileUpload> 2. Select 4 files. You will see the alert message: Maximum number of files exceeded, limit is 3 at most. 3. Now remove any one file by clicking the X button next to it, or press the cancel button next to upload to delete all files. 4. The alert message should go off because the number of selected files is now 3 or 0. However, the alert message stays. ### Expected behavior The alert message should go off
non_process
fileupload the error message does not disappear correctly when removing file s to match your file limit describe the bug if you choose more files than filelimit you see an alert message but when you remove a file and the filelimit is correct now the alert message stays it should go away but it does not the same happens when you press the cancel button and delete all files that way the alert message stays although it should ideally go away can someone look into it environment primeng reproducer no response angular version primeng version build runtime angular cli app language typescript node version for aot issues node version browser s no response steps to reproduce the behavior use advanced p fileupload use the following in the component html file set filelimit to p fileupload fileupload class center name demoa onupload onupload event true accept image select files you will see the alert message maximum number of files exceeded limit is at most now remove any one file by clicking the x button next to it or press the cancel button next to upload to delete all files the alert message should go off because the number of selected files is now or however the alert message stays expected behavior the alert message should go off
0
17,802
23,728,560,753
IssuesEvent
2022-08-30 22:21:25
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
opened
Provide consistent deprecated behavior and reduce downstream breakage
Enhancement Process
**Need consistent behavior for deprecated objects** The behavior of builds varies based on the type of object that is deprecated and also varies based on the toolchain used. For example, adding a `label` property to the qemu_x86.dts file results in a warning when running west: ``` $ west build -b qemu_x86 samples/hello_world ... 'label' is marked as deprecated in 'properties:' in /workdir/zephyr/dts/bindings/mtd/zephyr,emu-eeprom.yaml for node /eeprom1. ... [138/138] Linking C executable zephyr/zephyr.elf ``` But running the same thing under twister results in a build failure, because warnings are promoted to errors: ``` ./scripts/twister -T samples/hello_world/ -p qemu_x86 -v -i --clobber ... devicetree error: 'label' is marked as deprecated in 'properties:' in /workdir/zephyr/dts/bindings/mtd/zephyr,emu-eeprom.yaml for node /eeprom1. ... -- Configuring incomplete, errors occurred! ``` This also demonstrates how deprecation can break downstream users. Any downstream projects that promote warnings to errors see their downstream builds fail. This forces downstream repos to immediately fix deprecation errors, or disable the warnings-as-errors flags. **Proposal** If this can be supported by the tooling, it would be helpful if deprecated objects only generated an informational message during the first TBD months. At some later point, the deprecated message would be switched to the current behavior to generate a warning. And then as a final step, the deprecated object is removed completely.
1.0
Provide consistent deprecated behavior and reduce downstream breakage - **Need consistent behavior for deprecated objects** The behavior of builds varies based on the type of object that is deprecated and also varies based on the toolchain used. For example, adding a `label` property to the qemu_x86.dts file results in a warning when running west: ``` $ west build -b qemu_x86 samples/hello_world ... 'label' is marked as deprecated in 'properties:' in /workdir/zephyr/dts/bindings/mtd/zephyr,emu-eeprom.yaml for node /eeprom1. ... [138/138] Linking C executable zephyr/zephyr.elf ``` But running the same thing under twister results in a build failure, because warnings are promoted to errors: ``` ./scripts/twister -T samples/hello_world/ -p qemu_x86 -v -i --clobber ... devicetree error: 'label' is marked as deprecated in 'properties:' in /workdir/zephyr/dts/bindings/mtd/zephyr,emu-eeprom.yaml for node /eeprom1. ... -- Configuring incomplete, errors occurred! ``` This also demonstrates how deprecation can break downstream users. Any downstream projects that promote warnings to errors see their downstream builds fail. This forces downstream repos to immediately fix deprecation errors, or disable the warnings-as-errors flags. **Proposal** If this can be supported by the tooling, it would be helpful if deprecated objects only generated an informational message during the first TBD months. At some later point, the deprecated message would be switched to the current behavior to generate a warning. And then as a final step, the deprecated object is removed completely.
process
provide consistent deprecated behavior and reduce downstream breakage need consistent behavior for deprecated objects the behavior of builds varies based on the type of object that is deprecated and also varies based on the toolchain used for example adding a label property to the qemu dts file results in a warning when running west west build b qemu samples hello world label is marked as deprecated in properties in workdir zephyr dts bindings mtd zephyr emu eeprom yaml for node linking c executable zephyr zephyr elf but running the same thing under twister results in a build failure because warnings are promoted to errors scripts twister t samples hello world p qemu v i clobber devicetree error label is marked as deprecated in properties in workdir zephyr dts bindings mtd zephyr emu eeprom yaml for node configuring incomplete errors occurred this also demonstrates how deprecation can break downstream users any downstream projects that promote warnings to errors see their downstream builds fail this forces downstream repos to immediately fix deprecation errors or disable the warnings as errors flags proposal if this can be supported by the tooling it would be helpful if deprecated objects only generated an informational message during the first tbd months at some later point the deprecated message would be switched to the current behavior to generate a warning and then as a final step the deprecated object is removed completely
1
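A small sketch of the staged behavior the proposal above describes, assuming the tooling records when each object was deprecated; the function name and the six-month grace window are illustrative, not Zephyr's actual API:

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=180)  # stand-in for the proposal's "TBD months"

def deprecation_severity(deprecated_on: date, today: date) -> str:
    # Stage 1: informational only, so warnings-as-errors builds keep passing.
    if today < deprecated_on + GRACE_PERIOD:
        return "info"
    # Stage 2: today's behavior -- a warning that twister promotes to an error.
    return "warning"

print(deprecation_severity(date(2022, 8, 30), date(2022, 10, 1)))  # info
print(deprecation_severity(date(2022, 8, 30), date(2023, 6, 1)))   # warning
```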
5,089
7,876,462,731
IssuesEvent
2018-06-26 01:13:54
Jacksgong/okcat
https://api.github.com/repos/Jacksgong/okcat
closed
adb connection is lost
processing
**device** & **adb** are connected. When I execute cmd `okcat -y darwin.yml`, I got ``` using config on /Users/sep/.okcat/darwin.yml using config on /Users/sep/.okcat/logx.yml find regex: ['data', 'time', 'process', 'thread', 'level', 'tag', 'message'] with (.\S*) *(.\S*) *(\d*) *(\d*) *([A-Z]) *([^:]*): *(.*?)$ ADB CONNECTION IS LOST. ``` How to resolve this issue, thanks
1.0
adb connection is lost - **device** & **adb** are connected. When I execute cmd `okcat -y darwin.yml`, I got ``` using config on /Users/sep/.okcat/darwin.yml using config on /Users/sep/.okcat/logx.yml find regex: ['data', 'time', 'process', 'thread', 'level', 'tag', 'message'] with (.\S*) *(.\S*) *(\d*) *(\d*) *([A-Z]) *([^:]*): *(.*?)$ ADB CONNECTION IS LOST. ``` How to resolve this issue, thanks
process
adb connection is lost device adb are connected when i execute cmd okcat y darwin yml i got using config on users sep okcat darwin yml using config on users sep okcat logx yml find regex with s s d d adb connection is lost how to resolve this issue thanks
1
3,865
6,808,638,327
IssuesEvent
2017-11-04 06:00:27
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
reopened
binary files should use myExitHandler
apps-all status-inprocess tools-all type-bug
All apps and tools that write to the binary cache should register an onExit function to properly remove the .lck files. blockScrape for one, does not.
1.0
binary files should use myExitHandler - All apps and tools that write to the binary cache should register an onExit function to properly remove the .lck files. blockScrape for one, does not.
process
binary files should use myexithandler all apps and tools that write to the binary cache should register an onexit function to properly remove the lck files blockscrape for one does not
1
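quickBlocks itself is C++, but the pattern this issue asks for is compact; here is a sketch of the same idea in Python, with an illustrative lock-file path rather than quickBlocks' real cache layout:

```python
import atexit
import os

LOCK_FILE = "cache/blocks.bin.lck"  # illustrative; not quickBlocks' real layout

def my_exit_handler() -> None:
    # Remove the lock file so an aborted run doesn't leave the cache locked.
    if os.path.exists(LOCK_FILE):
        os.remove(LOCK_FILE)

# Register once at startup; runs on normal exit, including sys.exit().
atexit.register(my_exit_handler)
```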
33,066
7,018,636,687
IssuesEvent
2017-12-21 14:28:14
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Lightbox content behind the overlay mask when opened from a Dialog
defect discussion pending-review
### There is no guarantee in receiving a response in GitHub Issue Tracker, If you'd like to secure our response, you may consider *PrimeNG PRO Support* where support is provided within 4 business hours **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/Hzii86TTawHy1PUxF4FV?p=preview **Current behavior** <!-- Describe how the bug manifests. --> When we open a Lightbox from a Dialog, there is a problem of positions of layers. The content of the lightbox (I put a random pdf in the plunkr) is not accessible because the mask is above it. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> Be able to access the lightbox content, for example a pdf like in my plunkr example. **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Just click on "`Show`" (opens the dialog) and then on "`See PDF in Lightbox`" **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 5.0.X <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 5.0.X <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced -->
1.0
Lightbox content behind the overlay mask when opened from a Dialog - ### There is no guarantee in receiving a response in GitHub Issue Tracker, If you'd like to secure our response, you may consider *PrimeNG PRO Support* where support is provided within 4 business hours **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/Hzii86TTawHy1PUxF4FV?p=preview **Current behavior** <!-- Describe how the bug manifests. --> When we open a Lightbox from a Dialog, there is a problem of positions of layers. The content of the lightbox (I put a random pdf in the plunkr) is not accessible because the mask is above it. **Expected behavior** <!-- Describe what the behavior would be without the bug. --> Be able to access the lightbox content, for example a pdf like in my plunkr example. **Minimal reproduction of the problem with instructions** <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> Just click on "`Show`" (opens the dialog) and then on "`See PDF in Lightbox`" **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 5.0.X <!-- Check whether this is still an issue in the most recent Angular version --> * **PrimeNG version:** 5.0.X <!-- Check whether this is still an issue in the most recent Angular version --> * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced -->
non_process
lightbox content behind the overlay mask when opened from a dialog there is no guarantee in receiving a response in github issue tracker if you d like to secure our response you may consider primeng pro support where support is provided within business hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please fork the plunkr below and create a case demonstrating your bug report issues without a plunkr have much less possibility to be reviewed current behavior when we open a lightbox from a dialog there is a problem of positions of layers the content of the lightbox i put a random pdf in the plunkr is not accessible because the mask is above it expected behavior be able to access the lightbox content for example a pdf like in my plunkr example minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point just click on show opens the dialog and then on see pdf in lightbox what is the motivation use case for changing the behavior please tell us about your environment angular version x primeng version x browser
0
12,845
3,655,879,771
IssuesEvent
2016-02-17 17:46:57
ExcaliburZero/tempconvert-c
https://api.github.com/repos/ExcaliburZero/tempconvert-c
closed
Add reporting bugs section to manual file
documentation
A section should be added to the manual file with information on how to report bugs. See `man ls` for an example.
1.0
Add reporting bugs section to manual file - A section should be added to the manual file with information on how to report bugs. See `man ls` for an example.
non_process
add reporting bugs section to manual file a section should be added to the manual file with information on how to report bugs see man ls for an example
0
5,156
7,933,325,977
IssuesEvent
2018-07-08 04:07:50
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
getAccounts may be more useful than it currently is
status-inprocess tools-getAccounts type-enhancement
Carlos suggests that getAccounts can be more useful than it is. For example, backing up public/private keys. Concern is that this is ultra-sensitive data. If we mishandle it, it would be disastrous.
1.0
getAccounts may be more useful than it currently is - Carlos suggests that getAccounts can be more useful than it is. For example, backing up public/private keys. Concern is that this is ultra-sensitive data. If we mishandle it, it would be disastrous.
process
getaccounts may be more useful than it currently is carlos suggests that getaccounts can be more useful than it is for example backing up public private keys concern is that this is ultra sensitive data if we mishandle it it would be disastrous
1
624,981
19,715,194,682
IssuesEvent
2022-01-13 10:17:41
o3de/o3de
https://api.github.com/repos/o3de/o3de
opened
Script Canvas Promote to Variable action causes unexpected issues with the node
kind/bug needs-triage sig/content priority/minor
**Describe the bug** The following issues can be observed after performing the _Promote to Variable_ action on a data pin: - The data pin name is changed to _(1)_ (or a higher number, depending on how many Variables are created in the graph). - Undoing and redoing such action causes a redundant data pin of the same type and the original name to appear on the node. Please refer to the attached video for more details. **Steps to reproduce** Steps to reproduce the behavior: 1. Open Script Canvas. 2. Add any node with a data pin to the graph area (i.e. Math/Vector3/Length). 3. Right click the _Source_ data pin and select _Promote to Variable_. 4. Undo the action (CTRL + Z). 5. Redo the action (CTRL + Shift + Z). **Expected behavior** The data pin name does not change after promoting it to Variable. Undoing and redoing it does not add a redundant data pin to the node. **Actual behavior** The data pin name changes after promoting it to Variable. Undoing and redoing it adds a redundant data pin to the node. **Assets required** N/A **Screenshots/Video** https://user-images.githubusercontent.com/86953108/149309125-753c7e9e-594a-426e-9bfc-f06c5d0c4e19.mp4 **Found in Branch** Development (e224fc2) **Desktop/Device:** - Device: PC - OS: Windows - Version 10 - CPU AMD Ryzen 5 3600 - GPU Nvidia RTX 2060 SUPER - Memory 16GB
1.0
Script Canvas Promote to Variable action causes unexpected issues with the node - **Describe the bug** The following issues can be observed after performing the _Promote to Variable_ action on a data pin: - The data pin name is changed to _(1)_ (or a higher number, depending on how many Variables are created in the graph). - Undoing and redoing such action causes a redundant data pin of the same type and the original name to appear on the node. Please refer to the attached video for more details. **Steps to reproduce** Steps to reproduce the behavior: 1. Open Script Canvas. 2. Add any node with a data pin to the graph area (i.e. Math/Vector3/Length). 3. Right click the _Source_ data pin and select _Promote to Variable_. 4. Undo the action (CTRL + Z). 5. Redo the action (CTRL + Shift + Z). **Expected behavior** The data pin name does not change after promoting it to Variable. Undoing and redoing it does not add a redundant data pin to the node. **Actual behavior** The data pin name changes after promoting it to Variable. Undoing and redoing it adds a redundant data pin to the node. **Assets required** N/A **Screenshots/Video** https://user-images.githubusercontent.com/86953108/149309125-753c7e9e-594a-426e-9bfc-f06c5d0c4e19.mp4 **Found in Branch** Development (e224fc2) **Desktop/Device:** - Device: PC - OS: Windows - Version 10 - CPU AMD Ryzen 5 3600 - GPU Nvidia RTX 2060 SUPER - Memory 16GB
non_process
script canvas promote to variable action causes unexpected issues with the node describe the bug the following issues can be observed after performing the promote to variable action on a data pin the data pin name is changed to or a higher number depending on how many variables are created in the graph undoing and redoing such action causes a redundant data pin of the same type and the original name to appear on the node please refer to the attached video for more details steps to reproduce steps to reproduce the behavior open script canvas add any node with a data pin to the graph area i e math length right click the source data pin and select promote to variable undo the action ctrl z redo the action ctrl shift z expected behavior the data pin name does not change after promoting it to variable undoing and redoing it does not add a redundant data pin to the node actual behavior the data pin name changes after promoting it to variable undoing and redoing it adds a redundant data pin to the node assets required n a screenshots video found in branch development desktop device device pc os windows version cpu amd ryzen gpu nvidia rtx super memory
0
320,082
9,769,306,767
IssuesEvent
2019-06-06 08:14:09
brian-team/brian2
https://api.github.com/repos/brian-team/brian2
closed
SpikeGeneratorGroup not correctly working after restore
bug high priority
A user reported an issue on the [mailing list](https://groups.google.com/d/msg/briansupport/SSdLkRigOJU/br__yN0IBAAJ). The following code leads to a broadcasting error with the `numpy` target (and leads to incorrect values on Cython/weave): ```Python nb_sim = 2 # number of successive simulations time_simulation = 100*ms # time of each simulation N_in = 3 # number of input neurons freq = 8 # frequence times = array([10*k + (1000/freq)*n for n in range(5) for k in range(5) for _ in range(N_in)])*ms indices = array([k for _ in range(round(times.size/N_in)) for k in range(N_in)]) G_in = SpikeGeneratorGroup(N_in, indices, times) spikemon_in = SpikeMonitor(G_in) net = Network(G_in, spikemon_in) net.store('initialized network') for k in range(nb_sim): net.restore('initialized network') net.run(time_simulation) print(spikemon_in.t) print(spikemon_in.i) print('Epoch {} completed'.format(k + 1)) ``` The error is: ``` ValueError: could not broadcast input array from shape (75) into shape (4) ... The error was raised in the following line: _array_spikegeneratorgroup__spikespace[:_n_spikes] = _indices ``` This suggests that the `SpikeGeneratorGroup` tries to emit all spikes at once, which seems similar to #1017, but here we do not have any spike times "in the past". A workaround is to use `SpikeGeneratorGroup.set_spikes` after the `restore` call, i.e. in this example: ```Python for k in range(nb_sim): net.restore('initialized network') G_in.set_spikes(indices, times) net.run(time_simulation) ... ```
1.0
SpikeGeneratorGroup not correctly working after restore - A user reported an issue on the [mailing list](https://groups.google.com/d/msg/briansupport/SSdLkRigOJU/br__yN0IBAAJ). The following code leads to a broadcasting error with the `numpy` target (and leads to incorrect values on Cython/weave): ```Python nb_sim = 2 # number of successive simulations time_simulation = 100*ms # time of each simulation N_in = 3 # number of input neurons freq = 8 # frequence times = array([10*k + (1000/freq)*n for n in range(5) for k in range(5) for _ in range(N_in)])*ms indices = array([k for _ in range(round(times.size/N_in)) for k in range(N_in)]) G_in = SpikeGeneratorGroup(N_in, indices, times) spikemon_in = SpikeMonitor(G_in) net = Network(G_in, spikemon_in) net.store('initialized network') for k in range(nb_sim): net.restore('initialized network') net.run(time_simulation) print(spikemon_in.t) print(spikemon_in.i) print('Epoch {} completed'.format(k + 1)) ``` The error is: ``` ValueError: could not broadcast input array from shape (75) into shape (4) ... The error was raised in the following line: _array_spikegeneratorgroup__spikespace[:_n_spikes] = _indices ``` This suggests that the `SpikeGeneratorGroup` tries to emit all spikes at once, which seems similar to #1017, but here we do not have any spike times "in the past". A workaround is to use `SpikeGeneratorGroup.set_spikes` after the `restore` call, i.e. in this example: ```Python for k in range(nb_sim): net.restore('initialized network') G_in.set_spikes(indices, times) net.run(time_simulation) ... ```
non_process
spikegeneratorgroup not correctly working after restore a user reported an issue on the the following code leads to a broadcasting error with the numpy target and leads to incorrect values on cython weave python nb sim number of successive simulations time simulation ms time of each simulation n in number of input neurons freq frequence times array ms indices array g in spikegeneratorgroup n in indices times spikemon in spikemonitor g in net network g in spikemon in net store initialized network for k in range nb sim net restore initialized network net run time simulation print spikemon in t print spikemon in i print epoch completed format k the error is valueerror could not broadcast input array from shape into shape the error was raised in the following line array spikegeneratorgroup spikespace indices this suggests that the spikegeneratorgroup tries to emit all spikes at once which seems similar to but here we do not have any spike times in the past a workaround is to use spikegeneratorgroup set spikes after the restore call i e in this example python for k in range nb sim net restore initialized network g in set spikes indices times net run time simulation
0
2,222
5,071,651,939
IssuesEvent
2016-12-26 15:09:28
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
checksum post-processor ignores "output" parameter if multiple artifacts and then disregards keep_input_artifacts=false
post-processor/checksum
Packer 0.12.0 on Windows When using the checksum post-processor with something like the following with virtualbox-iso (and probably others): ``` { "type": "checksum", "checksum_types": [ "md5", "sha256" ], "output": "{{template_dir}}/vbox.checksums", "keep_input_artifact": false } ``` and when there are multiple "artifacts" then the "output" parameter is ignored and the checksum files are put in the virtualbox output folder. Since these files now exist the "keep_input_artifacts": false is ignored and that entire folder remains after the build process. The desired effect is that the "output" parameter is always respected regardless of how many files are checksummed.
1.0
checksum post-processor ignores "output" parameter if multiple artifacts and then disregards keep_input_artifacts=false - Packer 0.12.0 on Windows When using the checksum post-processor with something like the following with virtualbox-iso (and probably others): ``` { "type": "checksum", "checksum_types": [ "md5", "sha256" ], "output": "{{template_dir}}/vbox.checksums", "keep_input_artifact": false } ``` and when there are multiple "artifacts" then the "output" parameter is ignored and the checksum files are put in the virtualbox output folder. Since these files now exist the "keep_input_artifacts": false is ignored and that entire folder remains after the build process. The desired effect is that the "output" parameter is always respected regardless of how many files are checksummed.
process
checksum post processor ignores output parameter if multiple artifacts and then disregards keep input artifacts false packer on windows when using the checksum post processor with something like the following with virtualbox iso and probably others type checksum checksum types output template dir vbox checksums keep input artifact false and when there are multiple artifacts then the output parameter is ignored and the checksum files are put in the virtualbox output folder since these files now exist the keep input artifacts false is ignored and that entire folder remains after the build process the desired effect is that the output parameter is always respected regardless of how many files are checksummed
1
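As an illustration of the behavior the record above asks for, here is a minimal sketch in Python (not Packer's implementation; `hashlib` is real, the artifact names are hypothetical) that writes checksums for any number of artifacts into a single output file:

```python
# Sketch: write md5 and sha256 checksums for every artifact into one
# output file, regardless of how many artifacts there are.
import hashlib

def write_checksums(artifact_paths, output_path, algorithms=("md5", "sha256")):
    with open(output_path, "w") as out:
        for path in artifact_paths:
            with open(path, "rb") as f:
                data = f.read()
            for algo in algorithms:
                digest = hashlib.new(algo, data).hexdigest()
                out.write(f"{digest}\t{algo}\t{path}\n")

# Hypothetical artifact names, for illustration only.
write_checksums(["output/box.ovf", "output/box.vmdk"], "vbox.checksums")
```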
4,789
3,886,470,050
IssuesEvent
2016-04-14 01:11:16
lionheart/openradar-mirror
https://api.github.com/repos/lionheart/openradar-mirror
opened
19926911: Products with the same name should be visually differentiated in the navigator
classification:ui/usability reproducible:always status:open
#### Description Summary: If a project file includes two targets that specify a product with the same name, then the name should be differentiated such that it's obvious which one corresponds to which target. Steps to Reproduce: 1. Open attached Test App.xcodeproj 2. Expand the "Products" section. 3. Observe there are two items named TestApp.app Expected Results: It should be possible to differentiate by sight which product goes with the "App 1" target and which goes with "App 2." Actual Results: The items look identical. By contrast, when you go to add a library to a "Link with Libraries" build phase, if there are duplicate products with the same name, Xcode differentiates them fully by listing both the project and target from which they originate. For the purposes of the "Products" folder it would be enough to just differentiate them by target, since they are all known to belong to the same project. Version: Version 6.3 (6D520o) Notes: Configuration: Attachments: 'TestAppSchemes.zip' was successfully uploaded. - Product Version: 6.2 Created: 2015-02-23T21:48:30.406481 Originated: 2015-02-23T16:48:00 Open Radar Link: http://www.openradar.me/19926911
True
19926911: Products with the same name should be visually differentiated in the navigator - #### Description Summary: If a project file includes two targets that specify a product with the same name, then the name should be differentiated such that it's obvious which one corresponds to which target. Steps to Reproduce: 1. Open attached Test App.xcodeproj 2. Expand the "Products" section. 3. Observe there are two items named TestApp.app Expected Results: It should be possible to differentiate by sight which product goes with the "App 1" target and which goes with "App 2." Actual Results: The items look identical. By contrast, when you go to add a library to a "Link with Libraries" build phase, if there are duplicate products with the same name, Xcode differentiates them fully by listing both the project and target from which they originate. For the purposes of the "Products" folder it would be enough to just differentiate them by target, since they are all known to belong to the same project. Version: Version 6.3 (6D520o) Notes: Configuration: Attachments: 'TestAppSchemes.zip' was successfully uploaded. - Product Version: 6.2 Created: 2015-02-23T21:48:30.406481 Originated: 2015-02-23T16:48:00 Open Radar Link: http://www.openradar.me/19926911
non_process
products with the same name should be visually differentiated in the navigator description summary if a project file includes two targets that specify a product with the same name then the name should be differentiated such that it s obvious which one corresponds to which target steps to reproduce open attached test app xcodeproj expand the products section observe there are two items named testapp app expected results it should be possible to differentiate by sight which product goes with the app target and which goes with app actual results the items look identical by contrast when you go to add a library to a link with libraries build phase if there are duplicate products with the same name xcode differentiates them fully by listing both the project and target from which they originate for the purposes of the products folder it would be enough to just differentiate them by target since they are all known to belong to the same project version version notes configuration attachments testappschemes zip was successfully uploaded product version created originated open radar link
0
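Each record in this dump closes with a textual label followed by a binary label; a minimal sketch of that mapping in Python, under the assumption (suggested by the records themselves, not stated anywhere in the dump) that `process` maps to 1 and `non_process` maps to 0:

```python
# Sketch: derive the binary label column from the textual label column,
# assuming the process -> 1 / non_process -> 0 mapping the records suggest.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def binary_label(label: str) -> int:
    return LABEL_TO_BINARY[label.strip().lower()]

assert binary_label("process") == 1
assert binary_label("non_process") == 0
```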
55,445
14,008,921,712
IssuesEvent
2020-10-29 00:58:34
mwilliams7197/bootstrap
https://api.github.com/repos/mwilliams7197/bootstrap
closed
CVE-2018-20190 (Medium) detected in multiple libraries - autoclosed
security vulnerability
## CVE-2018-20190 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.11.0.tgz</b>, <b>opennmsopennms-source-25.1.0-1</b>, <b>opennmsopennms-source-24.1.2-1</b></p></summary> <p> <details><summary><b>node-sass-4.11.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p> <p>Path to dependency file: bootstrap/package.json</p> <p>Path to vulnerable library: bootstrap/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.11.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>v4-dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file. <p>Publish Date: 2018-12-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190>CVE-2018-20190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20190">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20190</a></p> <p>Release Date: 2018-12-17</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.11.0","isTransitiveDependency":false,"dependencyTree":"node-sass:4.11.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"vulnerabilityIdentifier":"CVE-2018-20190","vulnerabilityDetails":"In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-20190 (Medium) detected in multiple libraries - autoclosed - ## CVE-2018-20190 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.11.0.tgz</b>, <b>opennmsopennms-source-25.1.0-1</b>, <b>opennmsopennms-source-24.1.2-1</b></p></summary> <p> <details><summary><b>node-sass-4.11.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p> <p>Path to dependency file: bootstrap/package.json</p> <p>Path to vulnerable library: bootstrap/node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.11.0.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>v4-dev</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file. <p>Publish Date: 2018-12-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190>CVE-2018-20190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20190">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20190</a></p> <p>Release Date: 2018-12-17</p> <p>Fix Resolution: LibSass - 3.6.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.11.0","isTransitiveDependency":false,"dependencyTree":"node-sass:4.11.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"vulnerabilityIdentifier":"CVE-2018-20190","vulnerabilityDetails":"In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Eval::operator()(Sass::Supports_Operator*) in eval.cpp may cause a Denial of Service (application crash) via a crafted sass input file.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20190","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries node sass tgz opennmsopennms source opennmsopennms source node sass tgz wrapper around libsass library home page a href path to dependency file bootstrap package json path to vulnerable library bootstrap node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in base branch dev vulnerability details in libsass a null pointer dereference in the function sass eval operator sass supports operator in eval cpp may cause a denial of service application crash via a crafted sass input file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in libsass a null pointer dereference in the function sass eval operator sass supports operator in eval cpp may cause a denial of service application crash via a crafted sass input file vulnerabilityurl
0
54,824
13,456,130,192
IssuesEvent
2020-09-09 07:23:30
curl/curl
https://api.github.com/repos/curl/curl
closed
linking curl fails with ../lib/.libs/libcurl.so: undefined reference to Curl_base64_encode
build
I tried to compile curl with `--disable-http-auth --disable-ldap --disable-doh` and all ssl and ssh related options disabled. Basically all options to ensure that the `#if ...` at the beginning of `lib/base64.c` is false. Now building fails with `../lib/.libs/libcurl.so: undefined reference to Curl_base64_encode` because `Curl_base64_encode` is not built but it is still used in `lib/vauth/oauth2.c` and `lib/vauth/cleartext.c`. I'm not quite sure what the correct solution is, but I think `Curl_base64_encode` is always used in `lib/vauth/cleartext.c` so it should probably be built unconditionally. But I'm not sure if I understand the build-system correctly.
1.0
linking curl fails with ../lib/.libs/libcurl.so: undefined reference to Curl_base64_encode - I tried to compile curl with `--disable-http-auth --disable-ldap --disable-doh` and all ssl and ssh related options disabled. Basically all options to ensure that the `#if ...` at the beginning of `lib/base64.c` is false. Now building fails with `../lib/.libs/libcurl.so: undefined reference to Curl_base64_encode` because `Curl_base64_encode` is not built but it is still used in `lib/vauth/oauth2.c` and `lib/vauth/cleartext.c`. I'm not quite sure what the correct solution is, but I think `Curl_base64_encode` is always used in `lib/vauth/cleartext.c` so it should probably be built unconditionally. But I'm not sure if I understand the build-system correctly.
non_process
linking curl fails with lib libs libcurl so undefined reference to curl encode i tried to compile curl with disable http auth disable ldap disable doh and all ssl and ssh related options disabled basically all options to ensure that the if at the beginning of lib c is false now building fails with lib libs libcurl so undefined reference to curl encode because curl encode is not built but it is still used in lib vauth c and lib vauth cleartext c i m not quite sure what the correct solution is but i think curl encode is always used in lib vauth cleartext c so it should probably be built unconditionally but i m not sure if i understand the build system correctly
0
17,628
23,444,802,504
IssuesEvent
2022-08-15 18:28:14
googleapis/python-certificate-manager
https://api.github.com/repos/googleapis/python-certificate-manager
closed
Update release level to stable
type: process
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287) ## Required - [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: April 30 2022** - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA
1.0
Update release level to stable - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287) ## Required - [ ] 28 days elapsed since last beta release with new API surface **RELEASE ON/AFTER: April 30 2022** - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA
process
update release level to stable required days elapsed since last beta release with new api surface release on after april server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga
1
381
2,823,565,140
IssuesEvent
2015-05-21 09:36:35
austundag/testing
https://api.github.com/repos/austundag/testing
closed
Patient Header (Toolbar) allergies are not in severity order
enhancement in process
Also, the full Allergies presentation that comes up when the toolbar is clicked does not have the severities and is not in order either.
1.0
Patient Header (Toolbar) allergies are not in severity order - Also, the full Allergies presentation that comes up when the toolbar is clicked does not have the severities and is not in order either.
process
patient header toolbar allergies are not in severity order also the full allergies presentation that comes up when the toolbar is clicked does not have the severities and is not in order either
1
5,323
8,139,225,923
IssuesEvent
2018-08-20 17:00:20
cityofaustin/techstack
https://api.github.com/repos/cityofaustin/techstack
closed
CMS - Edit Page Styling
Content type: Process Page Content type: Service Page Feature: Service Page Template Joplin MVP Size: M Team: Dev
- [x] components (existing) - [x] Steps component will stay WYSIWYG - ~Style guide sidebar will be present #467~ - ~help text w/ scroll to links to corresponding style guide section~ - [x] secondary informational headers - [x] add question mark icons, move style guide links to icon click
1.0
CMS - Edit Page Styling - - [x] components (existing) - [x] Steps component will stay WYSIWYG - ~Style guide sidebar will be present #467~ - ~help text w/ scroll to links to corresponding style guide section~ - [x] secondary informational headers - [x] add question mark icons, move style guide links to icon click
process
cms edit page styling components existing steps component will stay wysiwyg style guide sidebar will be present help text w scroll to links to corresponding style guide section secondary informational headers add question mark icons move style guide links to icon click
1
264,331
28,144,380,355
IssuesEvent
2023-04-02 10:09:52
automation-staging-ghe-cloud/3452551_2401
https://api.github.com/repos/automation-staging-ghe-cloud/3452551_2401
opened
CVE-2021-20180 (Medium) detected in ansible-2.9.9.tar.gz
Mend: dependency security vulnerability
## CVE-2021-20180 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/automation-staging-ghe-cloud/3452551_2401/commit/b57d0e69a158cf7c332c11416223e425dd01f49e">b57d0e69a158cf7c332c11416223e425dd01f49e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in ansible module where credentials are disclosed in the console log by default and not protected by the security feature when using the bitbucket_pipeline_variable module. This flaw allows an attacker to steal bitbucket_pipeline credentials. The highest threat from this vulnerability is to confidentiality. <p>Publish Date: 2022-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-20180>CVE-2021-20180</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-fh5v-5f35-2rv2">https://github.com/advisories/GHSA-fh5v-5f35-2rv2</a></p> <p>Release Date: 2022-03-16</p> <p>Fix Resolution: ansible - 2.8.19,2.9.18</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2021-20180 (Medium) detected in ansible-2.9.9.tar.gz - ## CVE-2021-20180 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/automation-staging-ghe-cloud/3452551_2401/commit/b57d0e69a158cf7c332c11416223e425dd01f49e">b57d0e69a158cf7c332c11416223e425dd01f49e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in ansible module where credentials are disclosed in the console log by default and not protected by the security feature when using the bitbucket_pipeline_variable module. This flaw allows an attacker to steal bitbucket_pipeline credentials. The highest threat from this vulnerability is to confidentiality. <p>Publish Date: 2022-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-20180>CVE-2021-20180</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-fh5v-5f35-2rv2">https://github.com/advisories/GHSA-fh5v-5f35-2rv2</a></p> <p>Release Date: 2022-03-16</p> <p>Fix Resolution: ansible - 2.8.19,2.9.18</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_process
cve medium detected in ansible tar gz cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch main vulnerability details a flaw was found in ansible module where credentials are disclosed in the console log by default and not protected by the security feature when using the bitbucket pipeline variable module this flaw allows an attacker to steal bitbucket pipeline credentials the highest threat from this vulnerability is to confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansible rescue worker helmet automatic remediation is available for this issue
0
225,877
24,909,107,937
IssuesEvent
2022-10-29 16:35:39
AlexRogalskiy/ws-documents
https://api.github.com/repos/AlexRogalskiy/ws-documents
opened
CVE-2022-21724 (High) detected in postgresql-42.2.23.jar
security vulnerability
## CVE-2022-21724 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postgresql-42.2.23.jar</b></p></summary> <p>PostgreSQL JDBC Driver Postgresql</p> <p>Library home page: <a href="https://jdbc.postgresql.org">https://jdbc.postgresql.org</a></p> <p> Dependency Hierarchy: - :x: **postgresql-42.2.23.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/ws-documents/commit/519756786389e6def30b4e68ca7e83fe94667d5b">519756786389e6def30b4e68ca7e83fe94667d5b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> pgjdbc is the official PostgreSQL JDBC Driver. A security hole was found in the jdbc driver for postgresql database while doing security research. The system using the postgresql library will be attacked when an attacker controls the jdbc url or properties. pgjdbc instantiates plugin instances based on class names provided via `authenticationPluginClassName`, `sslhostnameverifier`, `socketFactory`, `sslfactory`, `sslpasswordcallback` connection properties. However, the driver did not verify if the class implements the expected interface before instantiating the class. This can lead to code execution loaded via arbitrary classes. Users using plugins are advised to upgrade. There are no known workarounds for this issue. <p>Publish Date: 2022-02-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21724>CVE-2022-21724</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-v7wg-cpwc-24m4">https://github.com/advisories/GHSA-v7wg-cpwc-24m4</a></p> <p>Release Date: 2022-02-02</p> <p>Fix Resolution: 42.2.23.jre6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-21724 (High) detected in postgresql-42.2.23.jar - ## CVE-2022-21724 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postgresql-42.2.23.jar</b></p></summary> <p>PostgreSQL JDBC Driver Postgresql</p> <p>Library home page: <a href="https://jdbc.postgresql.org">https://jdbc.postgresql.org</a></p> <p> Dependency Hierarchy: - :x: **postgresql-42.2.23.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/ws-documents/commit/519756786389e6def30b4e68ca7e83fe94667d5b">519756786389e6def30b4e68ca7e83fe94667d5b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> pgjdbc is the official PostgreSQL JDBC Driver. A security hole was found in the jdbc driver for postgresql database while doing security research. The system using the postgresql library will be attacked when an attacker controls the jdbc url or properties. pgjdbc instantiates plugin instances based on class names provided via `authenticationPluginClassName`, `sslhostnameverifier`, `socketFactory`, `sslfactory`, `sslpasswordcallback` connection properties. However, the driver did not verify if the class implements the expected interface before instantiating the class. This can lead to code execution loaded via arbitrary classes. Users using plugins are advised to upgrade. There are no known workarounds for this issue. <p>Publish Date: 2022-02-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21724>CVE-2022-21724</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-v7wg-cpwc-24m4">https://github.com/advisories/GHSA-v7wg-cpwc-24m4</a></p> <p>Release Date: 2022-02-02</p> <p>Fix Resolution: 42.2.23.jre6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in postgresql jar cve high severity vulnerability vulnerable library postgresql jar postgresql jdbc driver postgresql library home page a href dependency hierarchy x postgresql jar vulnerable library found in head commit a href found in base branch master vulnerability details pgjdbc is the official postgresql jdbc driver a security hole was found in the jdbc driver for postgresql database while doing security research the system using the postgresql library will be attacked when an attacker controls the jdbc url or properties pgjdbc instantiates plugin instances based on class names provided via authenticationpluginclassname sslhostnameverifier socketfactory sslfactory sslpasswordcallback connection properties however the driver did not verify if the class implements the expected interface before instantiating the class this can lead to code execution loaded via arbitrary classes users using plugins are advised to upgrade there are no known workarounds for this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
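Both CVE records in this section pair a vulnerable version with a fix-resolution version; a minimal sketch of checking one against the other with the real `packaging` library (the helper function is hypothetical; the version strings come from the ansible record above):

```python
# Sketch: flag a dependency whose installed version is below the fix version.
from packaging.version import Version

def is_vulnerable(installed: str, fixed: str) -> bool:
    return Version(installed) < Version(fixed)

# Versions from the ansible record: 2.9.9 installed, fixed in 2.9.18.
assert is_vulnerable("2.9.9", "2.9.18")
assert not is_vulnerable("2.9.18", "2.9.18")
```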
7,525
10,599,527,851
IssuesEvent
2019-10-10 08:09:49
linnovate/root
https://api.github.com/repos/linnovate/root
closed
folder from offices add tags not update on activities
2.0.8 Process bug
Go to Office, open a new office, go to a folder from the office, add tags. Result: the activities are not updated.
1.0
folder from offices add tags not update on activities - Go to Office, open a new office, go to a folder from the office, add tags. Result: the activities are not updated.
process
folder from offices add tags not update on activities go to office open a new office go to a folder from the office add tags result the activities are not updated
1
12,162
14,741,522,342
IssuesEvent
2021-01-07 10:45:04
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Late Fee - Single Billing cycle Preventing previous late charges
anc-process anp-urgent ant-bug ant-parent/primary
In GitLab by @kdjstudios on Jan 23, 2019, 08:52 **Submitted by:** @pchaudhary **Helpdesk:** NA **Server:** All **Client/Site:** All **Account:** All **Issue:** - Let's say there is a Billing Cycle A having an invoice I-1 of $ 100. - Then we create a new Billing Cycle B & create an invoice I-2 of $ 0. - Now, if we make a payment of $ 100 for invoice I-1. (After the due date) - If we create a Billing Cycle C and create an invoice I-3 in which there should be the late charge for invoice I-1 but it'll not be there, because we have a limit of going back only a single billing cycle to check late-paid invoices. `TEST CASES`: [Here](https://docs.google.com/spreadsheets/d/17tJ-PCr_8TtRtOsKVKEa-lJ_QQMxvheD190uvOHR1jY/edit?usp=sharing)
1.0
Late Fee - Single Billing cycle Preventing previous late charges - In GitLab by @kdjstudios on Jan 23, 2019, 08:52 **Submitted by:** @pchaudhary **Helpdesk:** NA **Server:** All **Client/Site:** All **Account:** All **Issue:** - Let's say there is a Billing Cycle A having an invoice I-1 of $ 100. - Then we create a new Billing Cycle B & create an invoice I-2 of $ 0. - Now, if we make a payment of $ 100 for invoice I-1. (After the due date) - If we create a Billing Cycle C and create an invoice I-3 in which there should be the late charge for invoice I-1 but it'll not be there, because we have a limit of going back only a single billing cycle to check late-paid invoices. `TEST CASES`: [Here](https://docs.google.com/spreadsheets/d/17tJ-PCr_8TtRtOsKVKEa-lJ_QQMxvheD190uvOHR1jY/edit?usp=sharing)
process
late fee single billing cycle preventing previous late charges in gitlab by kdjstudios on jan submitted by pchaudhary helpdesk na server all client site all account all issue let s say there is a billing cycle a having an invoice i of then we create a new billing cycle b create an invoice i of now if we make a payment of for invoice i after the due date if we create a billing cycle c and create an invoice i in which there should be the late charge for invoice i but it ll not be there because we have a limit of going back only a single billing cycle to check late paid invoices test cases
1
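The record above boils down to a lookback that inspects only the immediately preceding billing cycle; a minimal sketch of the fix idea, scanning every prior cycle for late-paid invoices (the data model and the flat 5% rate are entirely hypothetical):

```python
# Sketch: compute late charges by scanning all prior billing cycles,
# not just the one immediately before the current cycle.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Invoice:
    amount: float
    due_date: date
    paid_date: Optional[date] = None

    def paid_late(self) -> bool:
        return self.paid_date is not None and self.paid_date > self.due_date

def late_charges(prior_cycles: List[List[Invoice]], rate: float = 0.05) -> float:
    return sum(inv.amount * rate
               for cycle in prior_cycles
               for inv in cycle
               if inv.paid_late())

cycle_a = [Invoice(100.0, date(2019, 1, 1), paid_date=date(2019, 1, 20))]
cycle_b = [Invoice(0.0, date(2019, 2, 1))]
print(late_charges([cycle_a, cycle_b]))  # 5.0: cycle A's late payment still counts
```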
21,409
29,351,205,997
IssuesEvent
2023-05-27 00:34:44
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Remote] Data Analyst at Coodesh
SALVADOR PJ BANCO DE DADOS SQL POSTGRESQL AWS ETL REQUISITOS REMOTO PROCESSOS GITHUB UMA QUALIDADE MODELAGEM DE DADOS MANUTENÇÃO PIPELINE MONITORAMENTO Stale
## Job description: This is a job opening from a partner of the Coodesh platform; by applying you will get access to the full information about the company and benefits. Pay attention to the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/data-analyst-164223531?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋 <p><strong>Gove</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p> <p>We are a GovTech that works to transform the way municipal public managers make their daily decisions and to increase the efficiency of public finances. Our mission is to allow municipal public managers to access, in a single place, all the data needed for their decisions, enabling greater agility in decision-making, more efficiency in public administration and better quality of life for the population.</p> <p>We are looking for someone who enjoys working in a collaborative environment and has analytical skills.</p> <p></p> <p><strong>Responsibilities:</strong></p> <ul> <li>Mapping of the municipality's data sources;</li> <li>Automation and maintenance of ETL routines and creation and execution of SQL scripts for data extraction and data loading;</li> <li>Developing procedures for collecting and analyzing data from multiple sources;</li> <li>Creation, maintenance and monitoring of data pipelines and integration processes;</li> <li>Building conceptual and logical data models (diagrams and models);</li> <li>Building algorithms for automated data processing and transformation.</li> </ul> ## Gove: <p>We are a GovTech that works to transform the way municipal public managers make their daily decisions and to increase the efficiency of public finances.</p> <p>Our mission is to allow municipal public managers to access, in a single place, all the data needed for their decisions, enabling greater agility in decision-making, more efficiency in public administration and better quality of life for the population.</p><a href='https://coodesh.com/empresas/gove'>See more on the website</a> ## Skills: - ETL - Relational databases (SQL) - PostgreSQL - API ## Location: 100% Remote ## Requirements: - Experience writing validation scripts to validate data, data integrations and ETL transformations; - Advanced knowledge of relational and multidimensional data modeling and segmentation techniques; - Knowledge of ETL tools, preferably Pentaho Data Integration; - Solid experience writing SQL; - Knowledge of SQL and data-analysis skills for detecting data anomalies and ensuring data quality. ## Nice to have: - AWS; - Knowledge of Pentaho. ## How to apply: Apply exclusively through the Coodesh platform at the following link: [Data Analyst at Gove](https://coodesh.com/vagas/data-analyst-164223531?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Request Feedback** option between one stage and the next in the job you applied to. This will make the **Recruiter** responsible for the process at the company receive the notification. ## Labels #### Allocation Remote #### Contract PJ #### Category Databases
1.0
[Remote] Data Analyst at Coodesh - ## Job description: This is a job opening from a partner of the Coodesh platform; by applying you will get access to the full information about the company and benefits. Pay attention to the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/data-analyst-164223531?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋 <p><strong>Gove</strong> is looking for a <strong><ins>Data Analyst</ins></strong> to join its team!</p> <p>We are a GovTech that works to transform the way municipal public managers make their daily decisions and to increase the efficiency of public finances. Our mission is to allow municipal public managers to access, in a single place, all the data needed for their decisions, enabling greater agility in decision-making, more efficiency in public administration and better quality of life for the population.</p> <p>We are looking for someone who enjoys working in a collaborative environment and has analytical skills.</p> <p></p> <p><strong>Responsibilities:</strong></p> <ul> <li>Mapping of the municipality's data sources;</li> <li>Automation and maintenance of ETL routines and creation and execution of SQL scripts for data extraction and data loading;</li> <li>Developing procedures for collecting and analyzing data from multiple sources;</li> <li>Creation, maintenance and monitoring of data pipelines and integration processes;</li> <li>Building conceptual and logical data models (diagrams and models);</li> <li>Building algorithms for automated data processing and transformation.</li> </ul> ## Gove: <p>We are a GovTech that works to transform the way municipal public managers make their daily decisions and to increase the efficiency of public finances.</p> <p>Our mission is to allow municipal public managers to access, in a single place, all the data needed for their decisions, enabling greater agility in decision-making, more efficiency in public administration and better quality of life for the population.</p><a href='https://coodesh.com/empresas/gove'>See more on the website</a> ## Skills: - ETL - Relational databases (SQL) - PostgreSQL - API ## Location: 100% Remote ## Requirements: - Experience writing validation scripts to validate data, data integrations and ETL transformations; - Advanced knowledge of relational and multidimensional data modeling and segmentation techniques; - Knowledge of ETL tools, preferably Pentaho Data Integration; - Solid experience writing SQL; - Knowledge of SQL and data-analysis skills for detecting data anomalies and ensuring data quality. ## Nice to have: - AWS; - Knowledge of Pentaho. ## How to apply: Apply exclusively through the Coodesh platform at the following link: [Data Analyst at Gove](https://coodesh.com/vagas/data-analyst-164223531?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Request Feedback** option between one stage and the next in the job you applied to. This will make the **Recruiter** responsible for the process at the company receive the notification. ## Labels #### Allocation Remote #### Contract PJ #### Category Databases
process
data analyst at coodesh job description this is a job opening from a partner of the coodesh platform by applying you will get access to the full information about the company and benefits pay attention to the redirect that will take you to a url with the personalized application pop up 👋 gove is looking for a data analyst to join its team we are a govtech that works to transform the way municipal public managers make their daily decisions and increase the efficiency of public finances our mission is to allow municipal public managers to access in a single place all the data needed for their decisions enabling greater agility in decision making more efficiency in public administration and better quality of life for the population we are looking for someone who enjoys working in a collaborative environment and has analytical skills responsibilities mapping of the municipality s data sources automation and maintenance of etl routines and creation and execution of sql scripts for data extraction and data loading developing procedures for collecting and analyzing data from multiple sources creation maintenance and monitoring of data pipelines and integration processes building conceptual and logical data models diagrams and models building algorithms for automated data processing and transformation gove we are a govtech that works to transform the way municipal public managers make their daily decisions and increase the efficiency of public finances our mission is to allow municipal public managers to access in a single place all the data needed for their decisions enabling greater agility in decision making more efficiency in public administration and better quality of life for the population skills etl relational databases sql postgresql api location remote requirements experience writing validation scripts to validate data data integrations and etl transformations advanced knowledge of relational and multidimensional data modeling and segmentation techniques knowledge of etl tools preferably pentaho data integration solid experience writing sql knowledge of sql and data analysis skills for detecting data anomalies and ensuring data quality nice to have aws knowledge of pentaho how to apply apply exclusively through the coodesh platform at the following link after applying via the coodesh platform and validating your login you will be able to follow and receive all interactions of the process there use the request feedback option between one stage and the next in the job you applied to this will make the recruiter responsible for the process at the company receive the notification labels allocation remote contract pj category databases
1
12,026
14,738,544,628
IssuesEvent
2021-01-07 05:04:02
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Lucky Lincoln Gaming 123-E1616
anc-ops anc-process anp-important ant-bug ant-support has attachment
In GitLab by @kdjstudios on Jun 8, 2018, 14:53 **Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-08-19323 Vericheck HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-06-13-16448/conversation **Server:** Internal **Client/Site:** 123 **Account:** E1616 **Issue:** I just attempted to process an echeck payment for Lucky Lincoln in the amount of $4133.28 and received the following error: The page you were looking for doesn’t exist. You may have mistyped the address or the page may have moved Then, I hit the back arrow to try it again and got the form to fill out so I proceeded and then got the following message: ![image](/uploads/b2ac1ddd6be3435f1576566e359a744d/image.png) Please advise as I did try the back arrow and this message came up a second time.
1.0
Lucky Lincoln Gaming 123-E1616 - In GitLab by @kdjstudios on Jun 8, 2018, 14:53 **Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-08-19323 Vericheck HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-06-13-16448/conversation **Server:** Internal **Client/Site:** 123 **Account:** E1616 **Issue:** I just attempted to process an echeck payment for Lucky Lincoln in the amount of $4133.28 and received the following error: The page you were looking for doesn’t exist. You may have mistyped the address or the page may have moved Then, I hit the back arrow to try it again and got the form to fill out so I proceeded and then got the following message: ![image](/uploads/b2ac1ddd6be3435f1576566e359a744d/image.png) Please advise as I did try the back arrow and this message came up a second time.
process
lucky lincoln gaming in gitlab by kdjstudios on jun submitted by kimberly gagner helpdesk vericheck hd server internal client site account issue i just attempted to process an echeck payment for lucky lincoln in the amount of and received the following error the page you were looking for doesn’t exist you may have mistyped the address or the page may have moved then i hit the back arrow to try it again and got the form to fill out so i proceeded and then got the following message uploads image png please advise as i did try the back arrow and this message came up a second time
1
2,992
5,968,996,141
IssuesEvent
2017-05-30 19:18:49
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Desktop: System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException failed with "Xunit.Sdk.EqualException"
area-System.ServiceProcess test-run-desktop
Failed test: System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netfx_windows_nt_debug/66/testReport/System.ServiceProcess.Tests/ServiceControllerTests/Start_NullArg_ThrowsArgumentNullException/ Configuration: outerloop_netfx_windows_nt_debug MESSAGE: ~~~ Assert.Equal() Failure ↓ (pos 0) Expected: args[0] Actual: Arguments within the 'args' array passed ··· ↑ (pos 0) ~~~ STACK TRACE: ~~~ at System.AssertExtensions.Throws[T](String paramName, Action action) in D:\j\workspace\outerloop_net---903ddde6\src\Common\tests\System\AssertExtensions.cs:line 47 at System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException() in D:\j\workspace\outerloop_net---903ddde6\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs:line 165 ~~~
1.0
Desktop: System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException failed with "Xunit.Sdk.EqualException" - Failed test: System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException Detail: https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netfx_windows_nt_debug/66/testReport/System.ServiceProcess.Tests/ServiceControllerTests/Start_NullArg_ThrowsArgumentNullException/ Configuration: outerloop_netfx_windows_nt_debug MESSAGE: ~~~ Assert.Equal() Failure ↓ (pos 0) Expected: args[0] Actual: Arguments within the 'args' array passed ··· ↑ (pos 0) ~~~ STACK TRACE: ~~~ at System.AssertExtensions.Throws[T](String paramName, Action action) in D:\j\workspace\outerloop_net---903ddde6\src\Common\tests\System\AssertExtensions.cs:line 47 at System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException() in D:\j\workspace\outerloop_net---903ddde6\src\System.ServiceProcess.ServiceController\tests\System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs:line 165 ~~~
process
desktop system serviceprocess tests servicecontrollertests start nullarg throwsargumentnullexception failed with xunit sdk equalexception failed test system serviceprocess tests servicecontrollertests start nullarg throwsargumentnullexception detail configuration outerloop netfx windows nt debug message assert equal failure ↓ pos expected args actual arguments within the args array passed ··· ↑ pos stack trace at system assertextensions throws string paramname action action in d j workspace outerloop net src common tests system assertextensions cs line at system serviceprocess tests servicecontrollertests start nullarg throwsargumentnullexception in d j workspace outerloop net src system serviceprocess servicecontroller tests system serviceprocess servicecontroller tests servicecontrollertests cs line
1
490,069
14,115,123,731
IssuesEvent
2020-11-07 19:16:17
bounswe/bounswe2020group4
https://api.github.com/repos/bounswe/bounswe2020group4
opened
(WEB) Homepage, Navbar UI
Effort: Medium Frontend Priority: Medium Status: Pending
- UI implementation of the homepage and the navbar and the modular product card component - Implementation of the hide/show navbar action Deadline: 15.11.2020 23:59
1.0
(WEB) Homepage, Navbar UI - - UI implementation of the homepage and the navbar and the modular product card component - Implementation of the hide/show navbar action Deadline: 15.11.2020 23:59
non_process
web homepage navbar ui ui implementation of the homepage and the navbar and the modular product card component implementation of the hide show navbar action deadline
0
93,341
11,773,766,049
IssuesEvent
2020-03-16 08:07:32
dynatrace-oss/barista
https://api.github.com/repos/dynatrace-oss/barista
opened
Barista: Migrate Strapi database
P2 design-system
For better scalability and backups we have to migrate the Strapi database from sqlite to postgres. This has to be done manually as an automated migration seems not to work.
1.0
Barista: Migrate Strapi database - For better scalability and backups we have to migrate the Strapi database from sqlite to postgres. This has to be done manually as an automated migration seems not to work.
non_process
barista migrate strapi database for better scalability and backups we have to migrate the strapi database from sqlite to postgres this has to be done manually as an automated migration seems not to work
0
17,582
23,392,148,696
IssuesEvent
2022-08-11 18:57:08
ArneBinder/pie-utils
https://api.github.com/repos/ArneBinder/pie-utils
closed
collect and show distribution of text lengths (num tokens)
document processor
Add a document processor that tokenizes the text (e.g. with a Huggingface tokenizer), collects the lengths of the documents in terms of token numbers and displays that information in a way that is easy to digest.
1.0
collect and show distribution of text lengths (num tokens) - Add a document processor that tokenizes the text (e.g. with a Huggingface tokenizer), collects the lengths of the documents in terms of token numbers and displays that information in a way that is easy to digest.
process
collect and show distribution of text lengths num tokens add a document processor that tokenizes the text e g with a huggingface tokenizer collects the lengths of the documents in terms of token numbers and displays that information in a way that is easy to digest
1
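A minimal sketch of the processor described in the record above, using the real Hugging Face `transformers` tokenizer API; the model name and the summary format are assumptions, not taken from the record:

```python
# Sketch: tokenize documents and summarize the distribution of their
# lengths in tokens, using a Hugging Face tokenizer.
from collections import Counter
from transformers import AutoTokenizer

def token_lengths(texts, model_name="bert-base-uncased"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    return [len(tokenizer(text)["input_ids"]) for text in texts]

docs = ["a short document", "a somewhat longer document with more tokens in it"]
lengths = token_lengths(docs)
print(f"min={min(lengths)} max={max(lengths)} mean={sum(lengths) / len(lengths):.1f}")
print(Counter(lengths))  # a quick, easy-to-digest view of the distribution
```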
338,630
10,232,495,594
IssuesEvent
2019-08-18 17:57:49
tideland/go
https://api.github.com/repos/tideland/go
opened
together/cells: Add factories package
priority / b / normal status / a / available type / b / enhancement
The idea of the package is to provide functions for the creation of larger cell meshes. - Package name is `factories` - Factory functions follow the signature pattern `CreateXyz(<mesh>, <id>, <params>) (Cells, error)` - Parameters are individual structs with their arguments, possible behavior helpers, and possible constructor functions for needed individual cells, all depending on the factories - Depending on the pattern the `id` is the ID of the input cell and the namespace of all cells behind (`<id>/<sub-id>`) or only the namespace - Factory functions return a cell map (`type Cells map[string]string`) containing the created IDs as well as their roles (_to be discussed_) First ideas for cell factories: - [ ] Alerter (a number of signal input cells and/or poller; multi-layer condensing cells; alert trigger cells) - [ ] Charter (a number of signal input cells and/or poller; multi-layer condensing cells; chart creation cell) - [ ] Map/Reduce More to come
1.0
together/cells: Add factories package - The idea of the package is to provide functions for the creation of larger cell meshes. - Package name is `factories` - Factory functions follow the signature pattern `CreateXyz(<mesh>, <id>, <params>) (Cells, error)` - Parameters are individual structs with their arguments, possible behavior helpers, and possible constructor functions for needed individual cells, all depending on the factories - Depending on the pattern the `id` is the ID of the input cell and the namespace of all cells behind (`<id>/<sub-id>`) or only the namespace - Factory functions return a cell map (`type Cells map[string]string`) containing the created IDs as well as their roles (_to be discussed_) First ideas for cell factories: - [ ] Alerter (a number of signal input cells and/or poller; multi-layer condensing cells; alert trigger cells) - [ ] Charter (a number of signal input cells and/or poller; multi-layer condensing cells; chart creation cell) - [ ] Map/Reduce More to come
non_process
together cells add factories package idea of the package is to provide functions for the creation of larger cell meshes package name is factories factory functions follow the signature pattern createxyz cells error parameters are individual structs with their arguments possible behavior helpers and possible constructor functions for needed individual cells all depending on the factories depending on the pattern the id is the id of the input cell and the namespace of all cells behind or only the namespace factory functions return a cell map type cells map string containing the created ids as well as their roles to be discussed first ideas for cell factories alerter a number of signal input cells and or poller multi layer condensing cells alert trigger cells charter a number of signal input cells and or poller multi layer condensing cells chart creation cell map reduce more to come
0
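The record above proposes a Go API, but the factory idea translates directly; here is a hedged Python analogue of the `CreateXyz(<mesh>, <id>, <params>) (Cells, error)` pattern, in which `mesh.spawn`, the stub mesh, and the parameter fields are hypothetical stand-ins, not anything from the tideland library:

```python
class StubMesh:
    """Tiny stand-in so the sketch can be exercised without a real mesh."""

    def spawn(self, cell_id, behavior):
        print("spawned", cell_id, "with behavior", behavior)


def create_alerter(mesh, cell_id, params):
    """Hypothetical factory: signal input cells feeding one alert trigger cell.

    Returns a mapping of created, namespaced cell IDs to their roles,
    mirroring the proposed `type Cells map[string]string`.
    """
    cells = {}
    for i, behavior in enumerate(params.get("inputs", [])):
        sub_id = f"{cell_id}/input-{i}"
        mesh.spawn(sub_id, behavior)  # assumed mesh API, not tideland's
        cells[sub_id] = "input"
    trigger_id = f"{cell_id}/trigger"
    mesh.spawn(trigger_id, params["trigger"])  # assumed as well
    cells[trigger_id] = "trigger"
    return cells


print(create_alerter(StubMesh(), "alerter-1",
                     {"inputs": ["sensor-a", "sensor-b"], "trigger": "threshold"}))
```

The returned mapping plays the role the issue leaves open for discussion: callers learn which IDs were created under the `<id>/<sub-id>` namespace and what each one does.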
14,499
17,604,292,652
IssuesEvent
2021-08-17 15:13:32
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
Missing docs for "Extract Shapefile encoding" and "Set layer encoding" processing algorithms
Processing Alg
## Description Missing docs for "Extract Shapefile encoding" (`native:shpencodinginfo`) and "Set layer encoding" (`native:setlayerencoding`) algorithms added to the "Vector general" processing group with commit https://github.com/qgis/QGIS/commit/8cec5d0686c0e2800a1c63935ff8ac6608056a1f (PR https://github.com/qgis/QGIS/pull/34381) Page URL: https://docs.qgis.org/testing/en/docs/user_manual/processing_algs/qgis/vectorgeneral.html
1.0
Missing docs for "Extract Shapefile encoding" and "Set layer encoding" processing algorithms - ## Description Missing docs for "Extract Shapefile encoding" (`native:shpencodinginfo`) and "Set layer encoding" (`native:setlayerencoding`) algorithms added to the "Vector general" processing group with commit https://github.com/qgis/QGIS/commit/8cec5d0686c0e2800a1c63935ff8ac6608056a1f (PR https://github.com/qgis/QGIS/pull/34381) Page URL: https://docs.qgis.org/testing/en/docs/user_manual/processing_algs/qgis/vectorgeneral.html
process
missing docs for extract shapefile encoding and set layer encoding processing algorithms description missing docs for extract shapefile encoding native shpencodinginfo and set layer encoding native setlayerencoding algorithms added to the vector general processing group with commit pr page url
1
523,316
15,178,153,350
IssuesEvent
2021-02-14 14:19:24
wevote/WebApp
https://api.github.com/repos/wevote/WebApp
closed
Add interface for uploading your own Profile Photo and Profile Banner
Difficulty: Medium Priority: 1
Please implement the React interface for these on the "Settings" > "General Settings" page: 1. uploading your own profile photo 2. your own profile banner 3. Choosing which profile photo you would like to be displayed 4. Choose which profile banner you would like to have displayed (There is no need to implement the API calls.) Please add to: http://localhost:3000/settings/profile src/js/components/Settings/SettingsProfile.jsx I would recommend implementing the Profile picture interface and the Profile Banner interface in their own components. PLEASE NOTE: In these mockups, the photos are round, but in We Vote all voter photos are square (we reserve round photos for candidates) NOTE 2: Given time (not part of this issue), I would like us to find a package that allows drag-and-drop upload of a photo on Desktop NOTE 3: Given time (not part of this issue), I would like to find a react tool that lets us crop and resize photos before we submit them to the API server. DESKTOP ![Screen Shot 2020-10-05 at 8 05 30 AM](https://user-images.githubusercontent.com/7756031/95096791-9446ac80-06e1-11eb-9a03-2ddd20a5e639.png) MOBILE ![Screen Shot 2020-10-05 at 8 06 57 AM](https://user-images.githubusercontent.com/7756031/95096954-c526e180-06e1-11eb-8c97-0d6ad1f00743.png) We have code that lets you upload a photo on the "Logo & Sharing" page -- there is good example code there: http://localhost:3000/settings/sharing See: src/js/components/Settings/SettingsSharing.jsx ![Screen Shot 2020-10-05 at 7 46 40 AM](https://user-images.githubusercontent.com/7756031/95094615-f18d2e80-06de-11eb-8d12-3ca69ffb372c.png)
1.0
Add interface for uploading your own Profile Photo and Profile Banner - Please implement the React interface for these on the "Settings" > "General Settings" page: 1. uploading your own profile photo 2. your own profile banner 3. Choosing which profile photo you would like to be displayed 4. Choose which profile banner you would like to have displayed (There is no need to implement the API calls.) Please add to: http://localhost:3000/settings/profile src/js/components/Settings/SettingsProfile.jsx I would recommend implementing the Profile picture interface and the Profile Banner interface in their own components. PLEASE NOTE: In these mockups, the photos are round, but in We Vote all voter photos are square (we reserve round photos for candidates) NOTE 2: Given time (not part of this issue), I would like us to find a package that allows drag-and-drop upload of a photo on Desktop NOTE 3: Given time (not part of this issue), I would like to find a react tool that lets us crop and resize photos before we submit them to the API server. DESKTOP ![Screen Shot 2020-10-05 at 8 05 30 AM](https://user-images.githubusercontent.com/7756031/95096791-9446ac80-06e1-11eb-9a03-2ddd20a5e639.png) MOBILE ![Screen Shot 2020-10-05 at 8 06 57 AM](https://user-images.githubusercontent.com/7756031/95096954-c526e180-06e1-11eb-8c97-0d6ad1f00743.png) We have code that lets you upload a photo on the "Logo & Sharing" page -- there is good example code there: http://localhost:3000/settings/sharing See: src/js/components/Settings/SettingsSharing.jsx ![Screen Shot 2020-10-05 at 7 46 40 AM](https://user-images.githubusercontent.com/7756031/95094615-f18d2e80-06de-11eb-8d12-3ca69ffb372c.png)
non_process
add interface for uploading your own profile photo and profile banner please implement the react interface for these on the settings general settings page uploading your own profile photo your own profile banner choosing which profile photo you would like to be displayed choose with profile banner you would like to have displayed there is no need to implement the api calls please add to src js components settings settingsprofile jsx i would recommend implementing the profile picture interface and the profile banner interface in their own components please note in these mockups the photos are round but in we vote all voter photos are square we reserve round photos for candidates note given time not part of this issue i would like us to find a package that allows drag and drop upload of a photo on desktop note given time not part of this issue i would like to find a react tool that lets us crop and resize photos before we submit them to the api server desktop mobile we have code that lets you upload a photo on the logo sharing page there is good example code there see src js components settings settingssharing jsx
0
148,998
23,411,654,732
IssuesEvent
2022-08-12 18:14:55
department-of-veterans-affairs/vets-design-system-documentation
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
closed
Promo banner is not tracked on va.gov analytics
vsp-design-system-team va-promo-banner
## Description It was [brought to our attention](https://dsva.slack.com/archives/C01DBGX4P45/p1660260310590559) that the `<va-promo-banner>` was not being tracked on google analytics for va.gov. With the recent PACT news being communicated through the promo banner, having insight into the clicks on the banner would be useful. ## Details This isn't a problem with the component itself - a change needs to be made in `vets-website` to properly connect the custom event from the component to the datalayer for Google Analytics. ## Tasks - [ ] Update the [`component-library-analytics-setup.js` file](https://github.com/department-of-veterans-affairs/vets-website/blob/58c33cc05b428743579f1918b6035cebf9b6eecc/src/platform/site-wide/component-library-analytics-setup.js) to track `<va-promo-banner>` ## Acceptance Criteria - [ ] `<va-promo-banner>` is tracked in Google analytics
1.0
Promo banner is not tracked on va.gov analytics - ## Description It was [brought to our attention](https://dsva.slack.com/archives/C01DBGX4P45/p1660260310590559) that the `<va-promo-banner>` was not being tracked on google analytics for va.gov. With the recent PACT news being communicated through the promo banner, having insight into the clicks on the banner would be useful. ## Details This isn't a problem with the component itself - a change needs to be made in `vets-website` to properly connect the custom event from the component to the datalayer for Google Analytics. ## Tasks - [ ] Update the [`component-library-analytics-setup.js` file](https://github.com/department-of-veterans-affairs/vets-website/blob/58c33cc05b428743579f1918b6035cebf9b6eecc/src/platform/site-wide/component-library-analytics-setup.js) to track `<va-promo-banner>` ## Acceptance Criteria - [ ] `<va-promo-banner>` is tracked in Google analytics
non_process
promo banner is not tracked on va gov analytics description it was that the was not being tracked on google analytics for va gov with the recent pact news being communicated through the promo banner having insight into the clicks on the banner would be useful details this isn t a problem with the component itself a change needs to be made in vets website to properly connect the custom event from the component to the datalayer for google analytics tasks update the to track acceptance criteria is tracked in google analytics
0
14,411
2,806,432,003
IssuesEvent
2015-05-15 02:20:53
rocky/python3-trepan
https://api.github.com/repos/rocky/python3-trepan
closed
cannot install trepan on python3.3
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. pip install trepan 2. 3. What is the expected output? What do you see instead? successful install. Downloading/unpacking trepan Could not fetch URL http://code.google.com/p/trepan/ (from https://pypi.python.org/simple/trepan/): HTTP Error 404: Not Found Will skip URL http://code.google.com/p/trepan/ when looking for download links for trepan Using version 0.2.8 (newest of versions: 0.2.8, 0.2.8, 0.2.7, 0.2.5) Downloading trepan-0.2.8.tar.gz (130kB): Downloading from URL https://pypi.python.org/packages/source/t/trepan/trepan-0.2.8.tar.gz#md5=216a9ee0e60df183a4c90e412d0cbf37 (from https://pypi.python.org/simple/trepan/) ...Downloading trepan-0.2.8.tar.gz (130kB): 130kB downloaded Running setup.py egg_info for package trepan Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip_build_root/trepan/setup.py", line 12, in <module> from __pkginfo__ import \ ImportError: No module named '__pkginfo__' Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip_build_root/trepan/setup.py", line 12, in <module> from __pkginfo__ import \ ImportError: No module named '__pkginfo__' ---------------------------------------- Cleaning up... Removing temporary dir /tmp/pip_build_root... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/trepan Exception information: Traceback (most recent call last): File "/usr/lib/python3.3/site-packages/pip/basecommand.py", line 134, in main status = self.run(options, args) File "/usr/lib/python3.3/site-packages/pip/commands/install.py", line 236, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3.3/site-packages/pip/req.py", line 1134, in prepare_files req_to_install.run_egg_info() File "/usr/lib/python3.3/site-packages/pip/req.py", line 259, in run_egg_info command_desc='python setup.py egg_info') File "/usr/lib/python3.3/site-packages/pip/util.py", line 670, in call_subprocess % (command_desc, proc.returncode, cwd)) pip.exceptions.InstallationError: Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/trepan Storing complete log in /root/.pip/pip.log What version of the product are you using? On what operating system? linux fedora 20 Please provide any additional information below. ``` Original issue reported on code.google.com by `finjulh...@gmail.com` on 18 Jul 2014 at 12:56
1.0
cannot install trepan on python3.3 - ``` What steps will reproduce the problem? 1. pip install trepan 2. 3. What is the expected output? What do you see instead? successful install. Downloading/unpacking trepan Could not fetch URL http://code.google.com/p/trepan/ (from https://pypi.python.org/simple/trepan/): HTTP Error 404: Not Found Will skip URL http://code.google.com/p/trepan/ when looking for download links for trepan Using version 0.2.8 (newest of versions: 0.2.8, 0.2.8, 0.2.7, 0.2.5) Downloading trepan-0.2.8.tar.gz (130kB): Downloading from URL https://pypi.python.org/packages/source/t/trepan/trepan-0.2.8.tar.gz#md5=216a9ee0e60df183a4c90e412d0cbf37 (from https://pypi.python.org/simple/trepan/) ...Downloading trepan-0.2.8.tar.gz (130kB): 130kB downloaded Running setup.py egg_info for package trepan Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip_build_root/trepan/setup.py", line 12, in <module> from __pkginfo__ import \ ImportError: No module named '__pkginfo__' Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 16, in <module> File "/tmp/pip_build_root/trepan/setup.py", line 12, in <module> from __pkginfo__ import \ ImportError: No module named '__pkginfo__' ---------------------------------------- Cleaning up... Removing temporary dir /tmp/pip_build_root... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/trepan Exception information: Traceback (most recent call last): File "/usr/lib/python3.3/site-packages/pip/basecommand.py", line 134, in main status = self.run(options, args) File "/usr/lib/python3.3/site-packages/pip/commands/install.py", line 236, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python3.3/site-packages/pip/req.py", line 1134, in prepare_files req_to_install.run_egg_info() File "/usr/lib/python3.3/site-packages/pip/req.py", line 259, in run_egg_info command_desc='python setup.py egg_info') File "/usr/lib/python3.3/site-packages/pip/util.py", line 670, in call_subprocess % (command_desc, proc.returncode, cwd)) pip.exceptions.InstallationError: Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/trepan Storing complete log in /root/.pip/pip.log What version of the product are you using? On what operating system? linux fedora 20 Please provide any additional information below. ``` Original issue reported on code.google.com by `finjulh...@gmail.com` on 18 Jul 2014 at 12:56
non_process
cannot install trepan on what steps will reproduce the problem pip install trepan what is the expected output what do you see instead successful install downloading unpacking trepan could not fetch url from http error not found will skip url when looking for download links for trepan using version newest of versions downloading trepan tar gz downloading from url from downloading trepan tar gz downloaded running setup py egg info for package trepan traceback most recent call last file line in file tmp pip build root trepan setup py line in from pkginfo import importerror no module named pkginfo complete output from command python setup py egg info traceback most recent call last file line in file tmp pip build root trepan setup py line in from pkginfo import importerror no module named pkginfo cleaning up removing temporary dir tmp pip build root command python setup py egg info failed with error code in tmp pip build root trepan exception information traceback most recent call last file usr lib site packages pip basecommand py line in main status self run options args file usr lib site packages pip commands install py line in run requirement set prepare files finder force root egg info self bundle bundle self bundle file usr lib site packages pip req py line in prepare files req to install run egg info file usr lib site packages pip req py line in run egg info command desc python setup py egg info file usr lib site packages pip util py line in call subprocess command desc proc returncode cwd pip exceptions installationerror command python setup py egg info failed with error code in tmp pip build root trepan storing complete log in root pip pip log what version of the product are you using on what operating system linux fedora please provide any additional information below original issue reported on code google com by finjulh gmail com on jul at
0
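Two usual suspects produce exactly the traceback in the record above: `__pkginfo__.py` was left out of the source distribution entirely (a `MANIFEST.in` problem), or `setup.py` is executed in a way that does not put its own directory on `sys.path`. A defensive sketch covering both, not necessarily the fix the trepan author actually shipped:

```python
# setup.py (sketch): fail loudly if __pkginfo__.py is missing from the sdist,
# and make the script's own directory importable regardless of the CWD.
import os
import sys

HERE = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, HERE)

if not os.path.exists(os.path.join(HERE, "__pkginfo__.py")):
    sys.exit("__pkginfo__.py is missing -- was it included in the sdist, "
             "e.g. via an 'include __pkginfo__.py' line in MANIFEST.in?")

import __pkginfo__  # noqa: E402  (resolvable now that HERE is on sys.path)
```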
15,832
5,188,984,742
IssuesEvent
2017-01-20 21:42:09
BruceJohnJennerLawso/scrap
https://api.github.com/repos/BruceJohnJennerLawso/scrap
closed
function to take levelId argument and filter non watMu seasons by levelId teams list
app codebase console enhancement
Console will need to load everything, but not everything is wanted for nhl/wha outputs, so the seasons list needs to be filtered by levelId in task calls.
1.0
function to take levelId argument and filter non watMu seasons by levelId teams list - Console will need to load everything, but not everything is wanted for nhl/wha outputs, so the seasons list needs to be filtered by levelId in task calls.
non_process
function to take levelid argument and filter non watmu seasons by levelid teams list console will need to load everything but not everything wanted for nhl wha outputs so the seasons list needs to be filtered by levelid seasons list in task calls
0
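As a concrete reading of the filtering step the record above asks for, a small Python sketch; the `level_id` field name and the dictionary shape of a season are assumptions, since the scrap codebase is not shown here:

```python
def filter_seasons_by_level(seasons, level_id):
    """Keep only the seasons tagged with the given levelId (field name assumed)."""
    return [season for season in seasons if season["level_id"] == level_id]


# Tiny illustrative dataset; real task calls would pass the fully loaded list.
all_seasons = [
    {"year": 1979, "level_id": "nhl"},
    {"year": 1975, "level_id": "wha"},
]
print(filter_seasons_by_level(all_seasons, "nhl"))  # -> the 1979 season only
```

The console can then keep loading everything once while each nhl/wha output task narrows the list on demand.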
11,220
14,000,470,549
IssuesEvent
2020-10-28 12:22:23
timberio/vector
https://api.github.com/repos/timberio/vector
closed
Replace the `reduce` transform's `ends_when` option with Remap conditionals
domain: processing transform: reduce type: enhancement
Throughout Vector we have a concept of "conditions" that let users express conditional statements through a variety of options. This is present in the `reduce` transform's `ends_when` option: ```toml [transforms.reduce] type = "reduce" ends_when.type = "check_fields" ends_when."message.regex" = "\w" ``` This is not very user friendly and lacks flexibility. The Remap language supports robust conditional statements and we'd like to replace conditions with this: ```toml [transforms.reduce] type = "reduce" ends_when = 'matches(.message, /^\w/)' ``` Users can use this syntax to represent a variety of conditions: ```toml ends_when = '(matches(.message, /^\w/) && .level == "error") || .level != "error"' ```
1.0
Replace the `reduce` transform's `ends_when` option with Remap conditionals - Throughout Vector we have a concept of "conditions" that let users express conditional statements through a variety of options. This is present in the `reduce` transform's `ends_when` option: ```toml [transforms.reduce] type = "reduce" ends_when.type = "check_fields" ends_when."message.regex" = "\w" ``` This is not very user friendly and lacks flexibility. The Remap language supports robust conditional statements and we'd like to replace conditions with this: ```toml [transforms.reduce] type = "reduce" ends_when = 'matches(.message, /^\w/)' ``` Users can use this syntax to represent a variety of conditions: ```toml ends_when = '(matches(.message, /^\w/) && .level == "error") || .level != "error"' ```
process
replace the reduce transform s ends when option with remap conditionals throughout vector we have a concept of conditions that let users express conditional statements through a variety of options this is present in the reduce transform s ends when option toml type reduce ends when type check fields ends when message regex w this is not very user friendly and lacks flexibility the remap language supports robust conditional statements and we d like to replace conditions with this toml type reduce ends when matches message w users can use this syntax to represent a variety of conditions toml ends when matches message w level error level error
1
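For readers who want to verify what the final Remap conditional above evaluates to, a rough Python transcription; Remap's `matches` is approximated with `re.search`, so this mirrors the logic only, not Vector's implementation:

```python
import re


def ends_when(event):
    r"""Python rendering of:
    (matches(.message, /^\w/) && .level == "error") || .level != "error"
    """
    starts_with_word = re.search(r"^\w", event.get("message", "")) is not None
    is_error = event.get("level") == "error"
    return (starts_with_word and is_error) or not is_error


assert ends_when({"message": "boom", "level": "error"})   # both clauses can fire
assert ends_when({"message": "!!", "level": "info"})      # non-error always ends
assert not ends_when({"message": "!!", "level": "error"}) # error needs a word start
```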
2,669
5,468,243,446
IssuesEvent
2017-03-10 05:02:19
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
opened
Failure in System.Diagnostics.Tests.ProcessTests.TestRootGetProcessById in CI
area-System.Diagnostics.Process test-run-core
https://ci.dot.net/job/dotnet_corefx/job/master/job/osx_debug_prtest/4347/consoleFull#-6109453532d31e50d-1517-49fc-92b3-2ca637122019 . ``` System.Diagnostics.Tests.ProcessTests.TestRootGetProcessById [FAIL] 19:19:29 Assert.True() Failure 19:19:29 Expected: True 19:19:29 Actual: False 19:19:29 Stack Trace: 19:19:29 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/System.Diagnostics.Process/tests/ProcessTestBase.cs(33,0): at System.Diagnostics.Tests.ProcessTestBase.Dispose(Boolean disposing) 19:19:29 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/Common/tests/System/IO/FileCleanupTestBase.cs(41,0): at System.IO.FileCleanupTestBase.Dispose() 19:19:29 at ReflectionAbstractionExtensions.DisposeTestClass(ITest test, Object testClass, IMessageBus messageBus, ExecutionTimer timer, CancellationTokenSource cancellationTokenSource) 19:19:43 Finished: System.Diagnostics.Process.Tests 19:19:43 19:19:43 === TEST EXECUTION SUMMARY === 19:19:43 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/Tools/tests.targets(247,5): warning : System.Diagnostics.Process.Tests Total: 108, Errors: 0, Failed: 1, Skipped: 0, Time: 625.446s [/Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/System.Diagnostics.Process/tests/System.Diagnostics.Process.Tests.csproj] ``` XUnit creates the test fixture (class) for each test invoked. After the test completes, it disposes the class: https://github.com/xunit/xunit/blob/master/src/xunit.execution/Sdk/Frameworks/Runners/TestInvoker.cs#L216 This test doesn't make any processes but its base class makes one that sleeps 5 minutes (for no obvious reason) https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.Process/tests/ProcessTestBase.cs#L19 In dispose of the base class it kills the process if necessary then waits 5 minutes if necessary https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.Process/tests/ProcessTestBase.cs#L33 This seems pretty random. Is the machine so overloaded that it takes over 5 minutes to kill the process? @Priya91
1.0
Failure in System.Diagnostics.Tests.ProcessTests.TestRootGetProcessById in CI - https://ci.dot.net/job/dotnet_corefx/job/master/job/osx_debug_prtest/4347/consoleFull#-6109453532d31e50d-1517-49fc-92b3-2ca637122019 . ``` System.Diagnostics.Tests.ProcessTests.TestRootGetProcessById [FAIL] 19:19:29 Assert.True() Failure 19:19:29 Expected: True 19:19:29 Actual: False 19:19:29 Stack Trace: 19:19:29 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/System.Diagnostics.Process/tests/ProcessTestBase.cs(33,0): at System.Diagnostics.Tests.ProcessTestBase.Dispose(Boolean disposing) 19:19:29 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/Common/tests/System/IO/FileCleanupTestBase.cs(41,0): at System.IO.FileCleanupTestBase.Dispose() 19:19:29 at ReflectionAbstractionExtensions.DisposeTestClass(ITest test, Object testClass, IMessageBus messageBus, ExecutionTimer timer, CancellationTokenSource cancellationTokenSource) 19:19:43 Finished: System.Diagnostics.Process.Tests 19:19:43 19:19:43 === TEST EXECUTION SUMMARY === 19:19:43 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/Tools/tests.targets(247,5): warning : System.Diagnostics.Process.Tests Total: 108, Errors: 0, Failed: 1, Skipped: 0, Time: 625.446s [/Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_debug_prtest/src/System.Diagnostics.Process/tests/System.Diagnostics.Process.Tests.csproj] ``` XUnit creates the test fixture (class) for each test invoked. After the test completes, it disposes the class: https://github.com/xunit/xunit/blob/master/src/xunit.execution/Sdk/Frameworks/Runners/TestInvoker.cs#L216 This test doesn't make any processes but its base class makes one that sleeps 5 minutes (for no obvious reason) https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.Process/tests/ProcessTestBase.cs#L19 In dispose of the base class it kills the process if necessary then waits 5 minutes if necessary https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.Process/tests/ProcessTestBase.cs#L33 This seems pretty random. Is the machine so overloaded that it takes over 5 minutes to kill the process? @Priya91
process
failure in system diagnostics tests processtests testrootgetprocessbyid in ci system diagnostics tests processtests testrootgetprocessbyid assert true failure expected true actual false stack trace users dotnet bot j workspace dotnet corefx master osx debug prtest src system diagnostics process tests processtestbase cs at system diagnostics tests processtestbase dispose boolean disposing users dotnet bot j workspace dotnet corefx master osx debug prtest src common tests system io filecleanuptestbase cs at system io filecleanuptestbase dispose at reflectionabstractionextensions disposetestclass itest test object testclass imessagebus messagebus executiontimer timer cancellationtokensource cancellationtokensource finished system diagnostics process tests test execution summary users dotnet bot j workspace dotnet corefx master osx debug prtest tools tests targets warning system diagnostics process tests total errors failed skipped time xunit creates the test fixture class for each test invoked after the test completes it disposes the class this test doesn t make any processes but its base class makes one that sleeps minutes for no obvious reason in dispose of the base class it kills the process if necessary then waits minutes if necessary this seems pretty random is the machine so overloaded that it takes over minutes to kill the process
1
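The kill-then-wait pattern that the Dispose method above implements is easy to reproduce outside .NET; a small Python sketch of the same logic, where the 5-minute figures come from the record and everything else (the `sleep` child, the POSIX assumption) is illustrative:

```python
import subprocess

# Stand-in for the base class's long-sleeping child process (POSIX `sleep`).
proc = subprocess.Popen(["sleep", "300"])

# Dispose-equivalent: kill if still running, then wait with a bound so a
# wedged process fails the test instead of hanging the runner forever.
if proc.poll() is None:
    proc.kill()
try:
    proc.wait(timeout=300)  # the 5-minute wait the test base class allows
except subprocess.TimeoutExpired:
    raise AssertionError("child process did not exit after kill()")
```

If the machine is loaded enough that even a killed process takes longer than the bound to be reaped, the assertion fires, which matches the Assert.True failure in the log.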
8,278
11,432,871,836
IssuesEvent
2020-02-04 14:47:48
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
fix synonym relation in GO:0009870
multi-species process
follow on from https://github.com/geneontology/go-ontology/issues/18707 GO:0009870 defense response signaling pathway, resistance gene-dependent this is the plant term that means "ETI signalling" I don't know if it is the best name based on current thinking, but it is the historical name. I can ask about this at the PHI-Base RGA unless anyone can add any clarity? the existing synonym effector-triggered immunity exact needs to be fixed to effector-triggered immunity signalling exact ETI is much broader than just "signalling" but my understanding is that this term would be used for any of the R-genes (receptors), and any downstream signalling (I have not yet seen any papers on the downstream signalling, but many effectors have been studied). My understanding is that these probably signal through a common pathway? @CuzickA (could you also tag Kim H-K, I can't locate her on GitHub) @tberardini
1.0
fix synonym relation in GO:0009870 - follow on from https://github.com/geneontology/go-ontology/issues/18707 GO:0009870 defense response signaling pathway, resistance gene-dependent this is the plant term that means "ETI signalling" I don't know if it is the best name based on current thinking, but it is the historical name. I can ask about this at the PHI-Base RGA unless anyone can add any clarity? the existing synonym effector-triggered immunity exact needs to be fixed to effector-triggered immunity signalling exact ETI is much broader than just "signalling" but my understanding is that this term would be used for any of the R-genes (receptors), and any downstream signalling (I have not yet seen any papers on the downstream signalling, but many effectors have been studied). My understanding is that these probably signal through a common pathway? @CuzickA (could you also tag Kim H-K, I can't locate her on GitHub) @tberardini
process
fix synonym relation in go follow on from go defense response signaling pathway resistance gene dependent this is the plant term that means eti signalling i don t know if it is the best name based on current thinking but it is the historical name i can ask about this at the phi base rga unless anyone can add any clarity the existing synonym effector triggered immunity exact needs to be fixed to effector triggered immunity signalling exact eti is much broader than just signalling but my understanding is that this term would be used for any of the r genes receptors and any downstream signalling i have not yet seen any papers on the downstream signalling but many effectors have been studied my understanding is that these probably signal through a common pathway cuzicka could you also tag kim h k i can t locate here git hub tberardini
1
21,171
28,141,720,892
IssuesEvent
2023-04-02 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 31 Mar 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions - **Authors:** Brian Chen, Nina Shvetsova, Andrew Rouditchenko, Daniel Kondermann, Samuel Thomas, Shih-Fu Chang, Rogerio Feris, James Glass, Hilde Kuehne - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16990 - **Pdf link:** https://arxiv.org/pdf/2303.16990 - **Abstract** Spatio-temporal grounding describes the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding box supervision. This work addresses this task from a multimodal supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without human annotation. To this end, we combine local representation learning, which focuses on leveraging fine-grained spatial information, with a global representation encoding that captures higher-level representations and incorporates both in a joint approach. To evaluate this challenging task in a real-life setting, a new benchmark dataset is proposed providing dense spatio-temporal grounding annotations in long, untrimmed, multi-action instructional videos for over 5K events. We evaluate the proposed approach and other methods on the proposed and standard downstream tasks showing that our method improves over current baselines in various settings, including spatial, temporal, and untrimmed multi-action spatio-temporal grounding. ### C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation - **Authors:** Nazmul Karim, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-pang Chiu, Supun Samarasekera, Nazanin Rahnavard - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17132 - **Pdf link:** https://arxiv.org/pdf/2303.17132 - **Abstract** Unsupervised domain adaptation (UDA) approaches focus on adapting models trained on a labeled source domain to an unlabeled target domain. UDA methods have a strong assumption that the source data is accessible during adaptation, which may not be feasible in many real-world scenarios due to privacy concerns and resource constraints of devices. In this regard, source-free domain adaptation (SFDA) excels as access to source data is no longer required during adaptation. Recent state-of-the-art (SOTA) methods on SFDA mostly focus on pseudo-label refinement based self-training which generally suffers from two issues: i) inevitable occurrence of noisy pseudo-labels that could lead to early training time memorization, ii) refinement process requires maintaining a memory bank which creates a significant burden in resource constraint scenarios. To address these concerns, we propose C-SFDA, a curriculum learning aided self-training framework for SFDA that adapts efficiently and reliably to changes across domains based on selective pseudo-labeling. Specifically, we employ a curriculum learning scheme to promote learning from a restricted amount of pseudo labels selected based on their reliabilities. This simple yet effective step successfully prevents label noise propagation during different stages of adaptation and eliminates the need for costly memory-bank based label refinement. 
Our extensive experimental evaluations on both image recognition and semantic segmentation tasks confirm the effectiveness of our method. C-SFDA is readily applicable to online test-time domain adaptation and also outperforms previous SOTA methods in this task. ### Complementary Random Masking for RGB-Thermal Semantic Segmentation - **Authors:** Ukcheol Shin, Kyunghyun Lee, In So Kweon - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17386 - **Pdf link:** https://arxiv.org/pdf/2303.17386 - **Abstract** RGB-thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions. However, the previous studies mostly focus on designing a multi-modal fusion module without consideration of the nature of multi-modality inputs. Therefore, the networks easily become over-reliant on a single modality, making it difficult to learn complementary and meaningful representations for each modality. This paper proposes 1) a complementary random masking strategy of RGB-T images and 2) self-distillation loss between clean and masked input modalities. The proposed masking strategy prevents over-reliance on a single modality. It also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is partially available. Also, the proposed self-distillation loss encourages the network to extract complementary and meaningful representations from a single modality or complementary masked modalities. Based on the proposed method, we achieve state-of-the-art performance over three RGB-T semantic segmentation benchmarks. Our source code is available at https://github.com/UkcheolShin/CRM_RGBTSeg. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals - **Authors:** James Giroux, Martin Bouchard, Robert Laganiere - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16940 - **Pdf link:** https://arxiv.org/pdf/2303.16940 - **Abstract** Object detection utilizing Frequency Modulated Continous Wave radar is becoming increasingly popular in the field of autonomous systems. Radar does not possess the same drawbacks seen by other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow. However, radar does possess traits that make it unsuitable for standard emission-based deep learning representations such as point clouds. Radar point clouds tend to be sparse and therefore information extraction is not efficient. To overcome this, more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via Fast Fourier Transforms. Commonly, three transformations were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms could perform object detection. This too has drawbacks, namely the pre-processing costs associated with performing multiple Fourier Transforms and normalization. We explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers. 
Moreover, we introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations, i.e. relatively low and high numbers of transmitters and receivers, while obtaining on par or better results than the state-of-the-art. ### Asymmetric Face Recognition with Cross Model Compatible Ensembles - **Authors:** Ori Linial, Alon Shoshan, Nadav Bhonker, Elad Hirsch, Lior Zamir, Igor Kviatkovsky, Gerard Medioni - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17531 - **Pdf link:** https://arxiv.org/pdf/2303.17531 - **Abstract** The asymmetrical retrieval setting is a well suited solution for resource constrained face recognition. In this setting a large model is used for indexing the gallery while a lightweight model is used for querying. The key principle in such systems is ensuring that both models share the same embedding space. Most methods in this domain are based on knowledge distillation. While useful, they suffer from several drawbacks: they are upper-bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner. In this paper we present an approach that does not rely on knowledge distillation, rather it utilizes embedding transformation models. This allows the use of N independently trained and diverse gallery models (e.g., trained on different datasets or having a different architecture) and a single query model. As a result, we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying. Additionally, we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images. ## Keyword: ISP ### Enhanced Stable View Synthesis - **Authors:** Nishant Jain, Suryansh Kumar, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17094 - **Pdf link:** https://arxiv.org/pdf/2303.17094 - **Abstract** We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera. The introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging, leading to inferior results using the state-of-the-art stable view synthesis (SVS) method. SVS and related methods fail for outdoor scenes primarily due to (i) over-relying on the multiview stereo (MVS) for geometric scaffold recovery and (ii) assuming COLMAP computed camera poses as the best possible estimates, despite it being well-studied that MVS 3D reconstruction accuracy is limited to scene disparity and camera-pose accuracy is sensitive to key-point correspondence selection. This work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry. By leveraging the complementary behavior of MVS and monocular depth, we arrive at a better scene depth per view for nearby and far points, respectively. Moreover, our approach jointly refines camera poses with image-based rendering via multiple rotation averaging graph optimization. The recovered scene depth and the camera-pose help better view-dependent on-surface feature aggregation of the entire scene. 
Extensive evaluation of our approach on the popular benchmark dataset, such as Tanks and Temples, shows substantial improvement in view synthesis results compared to the prior art. For instance, our method shows 1.5 dB of PSNR improvement on the Tank and Temples. Similar statistics are observed when tested on other benchmark datasets such as FVS, Mip-NeRF 360, and DTU. ### A View From Somewhere: Human-Centric Face Representations - **Authors:** Jerone T. A. Andrews, Przemyslaw Joniak, Alice Xiang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17176 - **Pdf link:** https://arxiv.org/pdf/2303.17176 - **Abstract** Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity, making it difficult to compare the similarity between same-labeled faces. To address these issues, we present A View From Somewhere (AVFS) -- a dataset of 638,180 human judgments of face similarity. We demonstrate the utility of AVFS for learning a continuous, low-dimensional embedding space aligned with human perception. Our embedding space, induced under a novel conditional framework, not only enables the accurate prediction of face similarity, but also provides a human-interpretable decomposition of the dimensions used in the human-decision making process, and the importance distinct annotators place on each dimension. We additionally show the practicality of the dimensions for collecting continuous attributes, performing classification, and comparing dataset attribute disparities. ### Implicit View-Time Interpolation of Stereo Videos using Multi-Plane Disparities and Non-Uniform Coordinates - **Authors:** Avinash Paliwal, Andrii Tsarov, Nima Khademi Kalantari - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17181 - **Pdf link:** https://arxiv.org/pdf/2303.17181 - **Abstract** In this paper, we propose an approach for view-time interpolation of stereo videos. Specifically, we build upon X-Fields that approximates an interpolatable mapping between the input coordinates and 2D RGB images using a convolutional decoder. Our main contribution is to analyze and identify the sources of the problems with using X-Fields in our application and propose novel techniques to overcome these challenges. Specifically, we observe that X-Fields struggles to implicitly interpolate the disparities for large baseline cameras. Therefore, we propose multi-plane disparities to reduce the spatial distance of the objects in the stereo views. Moreover, we propose non-uniform time coordinates to handle the non-linear and sudden motion spikes in videos. We additionally introduce several simple, but important, improvements over X-Fields. We demonstrate that our approach is able to produce better results than the state of the art, while running in near real-time rates and having low memory and storage costs. 
### NeRF-Supervised Deep Stereo - **Authors:** Fabio Tosi, Alessio Tonioni, Daniele De Gregorio, Matteo Poggi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17603 - **Pdf link:** https://arxiv.org/pdf/2303.17603 - **Abstract** We introduce a novel framework for training deep stereo networks effortlessly and without any ground-truth. By leveraging state-of-the-art neural rendering solutions, we generate stereo training data from image sequences collected with a single handheld camera. On top of them, a NeRF-supervised training procedure is carried out, from which we exploit rendered stereo triplets to compensate for occlusions and depth maps as proxy labels. This results in stereo networks capable of predicting sharp and detailed disparity maps. Experimental results show that models trained under this regime yield a 30-40% improvement over existing self-supervised methods on the challenging Middlebury dataset, filling the gap to supervised models and, most times, outperforming them at zero-shot generalization. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals - **Authors:** James Giroux, Martin Bouchard, Robert Laganiere - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16940 - **Pdf link:** https://arxiv.org/pdf/2303.16940 - **Abstract** Object detection utilizing Frequency Modulated Continous Wave radar is becoming increasingly popular in the field of autonomous systems. Radar does not possess the same drawbacks seen by other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow. However, radar does possess traits that make it unsuitable for standard emission-based deep learning representations such as point clouds. Radar point clouds tend to be sparse and therefore information extraction is not efficient. To overcome this, more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via Fast Fourier Transforms. Commonly, three transformations were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms could perform object detection. This too has drawbacks, namely the pre-processing costs associated with performing multiple Fourier Transforms and normalization. We explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers. Moreover, we introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations, i.e. relatively low and high numbers of transmitters and receivers, while obtaining on par or better results than the state-of-the-art. ### Enhanced Stable View Synthesis - **Authors:** Nishant Jain, Suryansh Kumar, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17094 - **Pdf link:** https://arxiv.org/pdf/2303.17094 - **Abstract** We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera. 
The introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging, leading to inferior results using the state-of-the-art stable view synthesis (SVS) method. SVS and related methods fail for outdoor scenes primarily due to (i) over-relying on the multiview stereo (MVS) for geometric scaffold recovery and (ii) assuming COLMAP computed camera poses as the best possible estimates, despite it being well-studied that MVS 3D reconstruction accuracy is limited to scene disparity and camera-pose accuracy is sensitive to key-point correspondence selection. This work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry. By leveraging the complementary behavior of MVS and monocular depth, we arrive at a better scene depth per view for nearby and far points, respectively. Moreover, our approach jointly refines camera poses with image-based rendering via multiple rotation averaging graph optimization. The recovered scene depth and the camera-pose help better view-dependent on-surface feature aggregation of the entire scene. Extensive evaluation of our approach on the popular benchmark dataset, such as Tanks and Temples, shows substantial improvement in view synthesis results compared to the prior art. For instance, our method shows 1.5 dB of PSNR improvement on the Tank and Temples. Similar statistics are observed when tested on other benchmark datasets such as FVS, Mip-NeRF 360, and DTU. ### Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving - **Authors:** Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, Shibao Zheng - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR) - **Arxiv link:** https://arxiv.org/abs/2303.17297 - **Pdf link:** https://arxiv.org/pdf/2303.17297 - **Abstract** 3D object detection is an essential perception task in autonomous driving to understand the environments. The Bird's-Eye-View (BEV) representations have significantly improved the performance of 3D detectors with camera inputs on popular benchmarks. However, there still lacks a systematic understanding of the robustness of these vision-dependent BEV models, which is closely related to the safety of autonomous driving systems. In this paper, we evaluate the natural and adversarial robustness of various representative models under extensive settings, to fully understand their behaviors influenced by explicit BEV features compared with those without BEV. In addition to the classic settings, we propose a 3D consistent patch attack by applying adversarial patches in the 3D space to guarantee the spatiotemporal consistency, which is more realistic for the scenario of autonomous driving. With substantial experiments, we draw several findings: 1) BEV models tend to be more stable than previous methods under different natural conditions and common corruptions due to the expressive spatial representations; 2) BEV models are more vulnerable to adversarial noises, mainly caused by the redundant BEV features; 3) Camera-LiDAR fusion models have superior performance under different settings with multi-modal inputs, but BEV fusion model is still vulnerable to adversarial noises of both point cloud and image. These findings alert the safety issue in the applications of BEV detectors and could facilitate the development of more robust models. 
### Asymmetric Face Recognition with Cross Model Compatible Ensembles - **Authors:** Ori Linial, Alon Shoshan, Nadav Bhonker, Elad Hirsch, Lior Zamir, Igor Kviatkovsky, Gerard Medioni - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17531 - **Pdf link:** https://arxiv.org/pdf/2303.17531 - **Abstract** The asymmetrical retrieval setting is a well suited solution for resource constrained face recognition. In this setting a large model is used for indexing the gallery while a lightweight model is used for querying. The key principle in such systems is ensuring that both models share the same embedding space. Most methods in this domain are based on knowledge distillation. While useful, they suffer from several drawbacks: they are upper-bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner. In this paper we present an approach that does not rely on knowledge distillation, rather it utilizes embedding transformation models. This allows the use of N independently trained and diverse gallery models (e.g., trained on different datasets or having a different architecture) and a single query model. As a result, we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying. Additionally, we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images. ### Robo3D: Towards Robust and Reliable 3D Perception against Corruptions - **Authors:** Lingdong Kong, Youquan Liu, Xin Li, Runnan Chen, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17597 - **Pdf link:** https://arxiv.org/pdf/2303.17597 - **Abstract** The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that are meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adversarial weather conditions, external disturbances, and internal sensor failure. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models are at risk of being vulnerable to corruptions. We draw key observations on the use of data representations, augmentation schemes, and training strategies, that could severely affect the model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency. We hope our benchmark and approach could inspire future research in designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available. ## Keyword: raw image There is no result
2.0
New submissions for Fri, 31 Mar 23 - ## Keyword: events ### What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions - **Authors:** Brian Chen, Nina Shvetsova, Andrew Rouditchenko, Daniel Kondermann, Samuel Thomas, Shih-Fu Chang, Rogerio Feris, James Glass, Hilde Kuehne - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16990 - **Pdf link:** https://arxiv.org/pdf/2303.16990 - **Abstract** Spatio-temporal grounding describes the task of localizing events in space and time, e.g., in video data, based on verbal descriptions only. Models for this task are usually trained with human-annotated sentences and bounding box supervision. This work addresses this task from a multimodal supervision perspective, proposing a framework for spatio-temporal action grounding trained on loose video and subtitle supervision only, without human annotation. To this end, we combine local representation learning, which focuses on leveraging fine-grained spatial information, with a global representation encoding that captures higher-level representations and incorporates both in a joint approach. To evaluate this challenging task in a real-life setting, a new benchmark dataset is proposed providing dense spatio-temporal grounding annotations in long, untrimmed, multi-action instructional videos for over 5K events. We evaluate the proposed approach and other methods on the proposed and standard downstream tasks showing that our method improves over current baselines in various settings, including spatial, temporal, and untrimmed multi-action spatio-temporal grounding. ### C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation - **Authors:** Nazmul Karim, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-pang Chiu, Supun Samarasekera, Nazanin Rahnavard - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17132 - **Pdf link:** https://arxiv.org/pdf/2303.17132 - **Abstract** Unsupervised domain adaptation (UDA) approaches focus on adapting models trained on a labeled source domain to an unlabeled target domain. UDA methods have a strong assumption that the source data is accessible during adaptation, which may not be feasible in many real-world scenarios due to privacy concerns and resource constraints of devices. In this regard, source-free domain adaptation (SFDA) excels as access to source data is no longer required during adaptation. Recent state-of-the-art (SOTA) methods on SFDA mostly focus on pseudo-label refinement based self-training which generally suffers from two issues: i) inevitable occurrence of noisy pseudo-labels that could lead to early training time memorization, ii) refinement process requires maintaining a memory bank which creates a significant burden in resource constraint scenarios. To address these concerns, we propose C-SFDA, a curriculum learning aided self-training framework for SFDA that adapts efficiently and reliably to changes across domains based on selective pseudo-labeling. Specifically, we employ a curriculum learning scheme to promote learning from a restricted amount of pseudo labels selected based on their reliabilities. This simple yet effective step successfully prevents label noise propagation during different stages of adaptation and eliminates the need for costly memory-bank based label refinement. 
Our extensive experimental evaluations on both image recognition and semantic segmentation tasks confirm the effectiveness of our method. C-SFDA is readily applicable to online test-time domain adaptation and also outperforms previous SOTA methods in this task. ### Complementary Random Masking for RGB-Thermal Semantic Segmentation - **Authors:** Ukcheol Shin, Kyunghyun Lee, In So Kweon - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17386 - **Pdf link:** https://arxiv.org/pdf/2303.17386 - **Abstract** RGB-thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions. However, the previous studies mostly focus on designing a multi-modal fusion module without consideration of the nature of multi-modality inputs. Therefore, the networks easily become over-reliant on a single modality, making it difficult to learn complementary and meaningful representations for each modality. This paper proposes 1) a complementary random masking strategy of RGB-T images and 2) self-distillation loss between clean and masked input modalities. The proposed masking strategy prevents over-reliance on a single modality. It also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is partially available. Also, the proposed self-distillation loss encourages the network to extract complementary and meaningful representations from a single modality or complementary masked modalities. Based on the proposed method, we achieve state-of-the-art performance over three RGB-T semantic segmentation benchmarks. Our source code is available at https://github.com/UkcheolShin/CRM_RGBTSeg. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals - **Authors:** James Giroux, Martin Bouchard, Robert Laganiere - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16940 - **Pdf link:** https://arxiv.org/pdf/2303.16940 - **Abstract** Object detection utilizing Frequency Modulated Continous Wave radar is becoming increasingly popular in the field of autonomous systems. Radar does not possess the same drawbacks seen by other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow. However, radar does possess traits that make it unsuitable for standard emission-based deep learning representations such as point clouds. Radar point clouds tend to be sparse and therefore information extraction is not efficient. To overcome this, more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via Fast Fourier Transforms. Commonly, three transformations were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms could perform object detection. This too has drawbacks, namely the pre-processing costs associated with performing multiple Fourier Transforms and normalization. We explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers. 
Moreover, we introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations, i.e. relatively low and high numbers of transmitters and receivers, while obtaining on par or better results than the state-of-the-art. ### Asymmetric Face Recognition with Cross Model Compatible Ensembles - **Authors:** Ori Linial, Alon Shoshan, Nadav Bhonker, Elad Hirsch, Lior Zamir, Igor Kviatkovsky, Gerard Medioni - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17531 - **Pdf link:** https://arxiv.org/pdf/2303.17531 - **Abstract** The asymmetrical retrieval setting is a well suited solution for resource constrained face recognition. In this setting a large model is used for indexing the gallery while a lightweight model is used for querying. The key principle in such systems is ensuring that both models share the same embedding space. Most methods in this domain are based on knowledge distillation. While useful, they suffer from several drawbacks: they are upper-bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner. In this paper we present an approach that does not rely on knowledge distillation, rather it utilizes embedding transformation models. This allows the use of N independently trained and diverse gallery models (e.g., trained on different datasets or having a different architecture) and a single query model. As a result, we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying. Additionally, we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images. ## Keyword: ISP ### Enhanced Stable View Synthesis - **Authors:** Nishant Jain, Suryansh Kumar, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17094 - **Pdf link:** https://arxiv.org/pdf/2303.17094 - **Abstract** We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera. The introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging, leading to inferior results using the state-of-the-art stable view synthesis (SVS) method. SVS and related methods fail for outdoor scenes primarily due to (i) over-relying on the multiview stereo (MVS) for geometric scaffold recovery and (ii) assuming COLMAP computed camera poses as the best possible estimates, despite it being well-studied that MVS 3D reconstruction accuracy is limited to scene disparity and camera-pose accuracy is sensitive to key-point correspondence selection. This work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry. By leveraging the complementary behavior of MVS and monocular depth, we arrive at a better scene depth per view for nearby and far points, respectively. Moreover, our approach jointly refines camera poses with image-based rendering via multiple rotation averaging graph optimization. The recovered scene depth and the camera-pose help better view-dependent on-surface feature aggregation of the entire scene. 
Extensive evaluation of our approach on the popular benchmark dataset, such as Tanks and Temples, shows substantial improvement in view synthesis results compared to the prior art. For instance, our method shows 1.5 dB of PSNR improvement on the Tank and Temples. Similar statistics are observed when tested on other benchmark datasets such as FVS, Mip-NeRF 360, and DTU. ### A View From Somewhere: Human-Centric Face Representations - **Authors:** Jerone T. A. Andrews, Przemyslaw Joniak, Alice Xiang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17176 - **Pdf link:** https://arxiv.org/pdf/2303.17176 - **Abstract** Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity, making it difficult to compare the similarity between same-labeled faces. To address these issues, we present A View From Somewhere (AVFS) -- a dataset of 638,180 human judgments of face similarity. We demonstrate the utility of AVFS for learning a continuous, low-dimensional embedding space aligned with human perception. Our embedding space, induced under a novel conditional framework, not only enables the accurate prediction of face similarity, but also provides a human-interpretable decomposition of the dimensions used in the human-decision making process, and the importance distinct annotators place on each dimension. We additionally show the practicality of the dimensions for collecting continuous attributes, performing classification, and comparing dataset attribute disparities. ### Implicit View-Time Interpolation of Stereo Videos using Multi-Plane Disparities and Non-Uniform Coordinates - **Authors:** Avinash Paliwal, Andrii Tsarov, Nima Khademi Kalantari - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17181 - **Pdf link:** https://arxiv.org/pdf/2303.17181 - **Abstract** In this paper, we propose an approach for view-time interpolation of stereo videos. Specifically, we build upon X-Fields that approximates an interpolatable mapping between the input coordinates and 2D RGB images using a convolutional decoder. Our main contribution is to analyze and identify the sources of the problems with using X-Fields in our application and propose novel techniques to overcome these challenges. Specifically, we observe that X-Fields struggles to implicitly interpolate the disparities for large baseline cameras. Therefore, we propose multi-plane disparities to reduce the spatial distance of the objects in the stereo views. Moreover, we propose non-uniform time coordinates to handle the non-linear and sudden motion spikes in videos. We additionally introduce several simple, but important, improvements over X-Fields. We demonstrate that our approach is able to produce better results than the state of the art, while running in near real-time rates and having low memory and storage costs. 
### NeRF-Supervised Deep Stereo - **Authors:** Fabio Tosi, Alessio Tonioni, Daniele De Gregorio, Matteo Poggi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17603 - **Pdf link:** https://arxiv.org/pdf/2303.17603 - **Abstract** We introduce a novel framework for training deep stereo networks effortlessly and without any ground-truth. By leveraging state-of-the-art neural rendering solutions, we generate stereo training data from image sequences collected with a single handheld camera. On top of them, a NeRF-supervised training procedure is carried out, from which we exploit rendered stereo triplets to compensate for occlusions and depth maps as proxy labels. This results in stereo networks capable of predicting sharp and detailed disparity maps. Experimental results show that models trained under this regime yield a 30-40% improvement over existing self-supervised methods on the challenging Middlebury dataset, filling the gap to supervised models and, most times, outperforming them at zero-shot generalization. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals - **Authors:** James Giroux, Martin Bouchard, Robert Laganiere - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.16940 - **Pdf link:** https://arxiv.org/pdf/2303.16940 - **Abstract** Object detection utilizing Frequency Modulated Continous Wave radar is becoming increasingly popular in the field of autonomous systems. Radar does not possess the same drawbacks seen by other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow. However, radar does possess traits that make it unsuitable for standard emission-based deep learning representations such as point clouds. Radar point clouds tend to be sparse and therefore information extraction is not efficient. To overcome this, more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via Fast Fourier Transforms. Commonly, three transformations were used to form Range-Azimuth-Doppler cubes in which deep learning algorithms could perform object detection. This too has drawbacks, namely the pre-processing costs associated with performing multiple Fourier Transforms and normalization. We explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers. Moreover, we introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations, i.e. relatively low and high numbers of transmitters and receivers, while obtaining on par or better results than the state-of-the-art. ### Enhanced Stable View Synthesis - **Authors:** Nishant Jain, Suryansh Kumar, Luc Van Gool - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR) - **Arxiv link:** https://arxiv.org/abs/2303.17094 - **Pdf link:** https://arxiv.org/pdf/2303.17094 - **Abstract** We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera. 
The introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging, leading to inferior results using the state-of-the-art stable view synthesis (SVS) method. SVS and related methods fail for outdoor scenes primarily due to (i) over-relying on the multiview stereo (MVS) for geometric scaffold recovery and (ii) assuming COLMAP computed camera poses as the best possible estimates, despite it being well-studied that MVS 3D reconstruction accuracy is limited to scene disparity and camera-pose accuracy is sensitive to key-point correspondence selection. This work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry. By leveraging the complementary behavior of MVS and monocular depth, we arrive at a better scene depth per view for nearby and far points, respectively. Moreover, our approach jointly refines camera poses with image-based rendering via multiple rotation averaging graph optimization. The recovered scene depth and the camera-pose help better view-dependent on-surface feature aggregation of the entire scene. Extensive evaluation of our approach on the popular benchmark dataset, such as Tanks and Temples, shows substantial improvement in view synthesis results compared to the prior art. For instance, our method shows 1.5 dB of PSNR improvement on the Tank and Temples. Similar statistics are observed when tested on other benchmark datasets such as FVS, Mip-NeRF 360, and DTU. ### Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving - **Authors:** Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, Shibao Zheng - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cryptography and Security (cs.CR) - **Arxiv link:** https://arxiv.org/abs/2303.17297 - **Pdf link:** https://arxiv.org/pdf/2303.17297 - **Abstract** 3D object detection is an essential perception task in autonomous driving to understand the environments. The Bird's-Eye-View (BEV) representations have significantly improved the performance of 3D detectors with camera inputs on popular benchmarks. However, there still lacks a systematic understanding of the robustness of these vision-dependent BEV models, which is closely related to the safety of autonomous driving systems. In this paper, we evaluate the natural and adversarial robustness of various representative models under extensive settings, to fully understand their behaviors influenced by explicit BEV features compared with those without BEV. In addition to the classic settings, we propose a 3D consistent patch attack by applying adversarial patches in the 3D space to guarantee the spatiotemporal consistency, which is more realistic for the scenario of autonomous driving. With substantial experiments, we draw several findings: 1) BEV models tend to be more stable than previous methods under different natural conditions and common corruptions due to the expressive spatial representations; 2) BEV models are more vulnerable to adversarial noises, mainly caused by the redundant BEV features; 3) Camera-LiDAR fusion models have superior performance under different settings with multi-modal inputs, but BEV fusion model is still vulnerable to adversarial noises of both point cloud and image. These findings alert the safety issue in the applications of BEV detectors and could facilitate the development of more robust models. 
### Asymmetric Face Recognition with Cross Model Compatible Ensembles - **Authors:** Ori Linial, Alon Shoshan, Nadav Bhonker, Elad Hirsch, Lior Zamir, Igor Kviatkovsky, Gerard Medioni - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.17531 - **Pdf link:** https://arxiv.org/pdf/2303.17531 - **Abstract** The asymmetrical retrieval setting is a well suited solution for resource constrained face recognition. In this setting a large model is used for indexing the gallery while a lightweight model is used for querying. The key principle in such systems is ensuring that both models share the same embedding space. Most methods in this domain are based on knowledge distillation. While useful, they suffer from several drawbacks: they are upper-bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner. In this paper we present an approach that does not rely on knowledge distillation, rather it utilizes embedding transformation models. This allows the use of N independently trained and diverse gallery models (e.g., trained on different datasets or having a different architecture) and a single query model. As a result, we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying. Additionally, we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images. ### Robo3D: Towards Robust and Reliable 3D Perception against Corruptions - **Authors:** Lingdong Kong, Youquan Liu, Xin Li, Runnan Chen, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2303.17597 - **Pdf link:** https://arxiv.org/pdf/2303.17597 - **Abstract** The robustness of 3D perception systems under natural corruptions from environments and sensors is pivotal for safety-critical applications. Existing large-scale 3D perception datasets often contain data that are meticulously cleaned. Such configurations, however, cannot reflect the reliability of perception models during the deployment stage. In this work, we present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios against natural corruptions that occur in real-world environments. Specifically, we consider eight corruption types stemming from adversarial weather conditions, external disturbances, and internal sensor failure. We uncover that, although promising results have been progressively achieved on standard benchmarks, state-of-the-art 3D perception models are at risk of being vulnerable to corruptions. We draw key observations on the use of data representations, augmentation schemes, and training strategies, that could severely affect the model's performance. To pursue better robustness, we propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency. We hope our benchmark and approach could inspire future research in designing more robust and reliable 3D perception models. Our robustness benchmark suite is publicly available. ## Keyword: raw image There is no result
process
new submissions for fri mar keyword events what when and where self supervised spatio temporal grounding in untrimmed multi action videos from narrated instructions authors brian chen nina shvetsova andrew rouditchenko daniel kondermann samuel thomas shih fu chang rogerio feris james glass hilde kuehne subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract spatio temporal grounding describes the task of localizing events in space and time e g in video data based on verbal descriptions only models for this task are usually trained with human annotated sentences and bounding box supervision this work addresses this task from a multimodal supervision perspective proposing a framework for spatio temporal action grounding trained on loose video and subtitle supervision only without human annotation to this end we combine local representation learning which focuses on leveraging fine grained spatial information with a global representation encoding that captures higher level representations and incorporates both in a joint approach to evaluate this challenging task in a real life setting a new benchmark dataset is proposed providing dense spatio temporal grounding annotations in long untrimmed multi action instructional videos for over events we evaluate the proposed approach and other methods on the proposed and standard downstream tasks showing that our method improves over current baselines in various settings including spatial temporal and untrimmed multi action spatio temporal grounding c sfda a curriculum learning aided self training framework for efficient source free domain adaptation authors nazmul karim niluthpol chowdhury mithun abhinav rajvanshi han pang chiu supun samarasekera nazanin rahnavard subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract unsupervised domain adaptation uda approaches focus on adapting models trained on a labeled source domain to an unlabeled target domain uda methods have a strong assumption that the source data is accessible during adaptation which may not be feasible in many real world scenarios due to privacy concerns and resource constraints of devices in this regard source free domain adaptation sfda excels as access to source data is no longer required during adaptation recent state of the art sota methods on sfda mostly focus on pseudo label refinement based self training which generally suffers from two issues i inevitable occurrence of noisy pseudo labels that could lead to early training time memorization ii refinement process requires maintaining a memory bank which creates a significant burden in resource constraint scenarios to address these concerns we propose c sfda a curriculum learning aided self training framework for sfda that adapts efficiently and reliably to changes across domains based on selective pseudo labeling specifically we employ a curriculum learning scheme to promote learning from a restricted amount of pseudo labels selected based on their reliabilities this simple yet effective step successfully prevents label noise propagation during different stages of adaptation and eliminates the need for costly memory bank based label refinement our extensive experimental evaluations on both image recognition and semantic segmentation tasks confirm the effectiveness of our method c sfda is readily applicable to online test time domain adaptation and also outperforms previous sota methods in this task complementary random masking for rgb thermal semantic segmentation authors 
ukcheol shin kyunghyun lee in so kweon subjects computer vision and pattern recognition cs cv artificial intelligence cs ai robotics cs ro arxiv link pdf link abstract rgb thermal semantic segmentation is one potential solution to achieve reliable semantic scene understanding in adverse weather and lighting conditions however the previous studies mostly focus on designing a multi modal fusion module without consideration of the nature of multi modality inputs therefore the networks easily become over reliant on a single modality making it difficult to learn complementary and meaningful representations for each modality this paper proposes a complementary random masking strategy of rgb t images and self distillation loss between clean and masked input modalities the proposed masking strategy prevents over reliance on a single modality it also improves the accuracy and robustness of the neural network by forcing the network to segment and classify objects even when one modality is partially available also the proposed self distillation loss encourages the network to extract complementary and meaningful representations from a single modality or complementary masked modalities based on the proposed method we achieve state of the art performance over three rgb t semantic segmentation benchmarks our source code is available at keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb t fftradnet object detection with swin vision transformers from raw adc radar signals authors james giroux martin bouchard robert laganiere subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract object detection utilizing frequency modulated continous wave radar is becoming increasingly popular in the field of autonomous systems radar does not possess the same drawbacks seen by other emission based sensors such as lidar primarily the degradation or loss of return signals due to weather conditions such as rain or snow however radar does possess traits that make it unsuitable for standard emission based deep learning representations such as point clouds radar point clouds tend to be sparse and therefore information extraction is not efficient to overcome this more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via fast fourier transforms commonly three transformations were used to form range azimuth doppler cubes in which deep learning algorithms could perform object detection this too has drawbacks namely the pre processing costs associated with performing multiple fourier transforms and normalization we explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers moreover we introduce hierarchical swin vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre processing along with different radar configurations i e relatively low and high numbers of transmitters and receivers while obtaining on par or better results than the state of the art asymmetric face recognition with cross model compatible ensembles authors ori linial alon shoshan nadav bhonker elad hirsch lior zamir igor kviatkovsky gerard medioni subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the asymmetrical retrieval setting is a well suited solution for 
resource constrained face recognition in this setting a large model is used for indexing the gallery while a lightweight model is used for querying the key principle in such systems is ensuring that both models share the same embedding space most methods in this domain are based on knowledge distillation while useful they suffer from several drawbacks they are upper bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner in this paper we present an approach that does not rely on knowledge distillation rather it utilizes embedding transformation models this allows the use of n independently trained and diverse gallery models e g trained on different datasets or having a different architecture and a single query model as a result we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying additionally we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images keyword isp enhanced stable view synthesis authors nishant jain suryansh kumar luc van gool subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera the introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging leading to inferior results using the state of the art stable view synthesis svs method svs and related methods fail for outdoor scenes primarily due to i over relying on the multiview stereo mvs for geometric scaffold recovery and ii assuming colmap computed camera poses as the best possible estimates despite it being well studied that mvs reconstruction accuracy is limited to scene disparity and camera pose accuracy is sensitive to key point correspondence selection this work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry by leveraging the complementary behavior of mvs and monocular depth we arrive at a better scene depth per view for nearby and far points respectively moreover our approach jointly refines camera poses with image based rendering via multiple rotation averaging graph optimization the recovered scene depth and the camera pose help better view dependent on surface feature aggregation of the entire scene extensive evaluation of our approach on the popular benchmark dataset such as tanks and temples shows substantial improvement in view synthesis results compared to the prior art for instance our method shows db of psnr improvement on the tank and temples similar statistics are observed when tested on other benchmark datasets such as fvs mip nerf and dtu a view from somewhere human centric face representations authors jerone t a andrews przemyslaw joniak alice xiang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract few datasets contain self identified sensitive attributes inferring attributes risks introducing additional biases and collecting attributes can carry legal risks besides categorical labels can fail to reflect the continuous nature of human phenotypic diversity making it difficult to compare the similarity between same labeled faces to address these issues we present a view from somewhere avfs a dataset of human judgments of face similarity we 
demonstrate the utility of avfs for learning a continuous low dimensional embedding space aligned with human perception our embedding space induced under a novel conditional framework not only enables the accurate prediction of face similarity but also provides a human interpretable decomposition of the dimensions used in the human decision making process and the importance distinct annotators place on each dimension we additionally show the practicality of the dimensions for collecting continuous attributes performing classification and comparing dataset attribute disparities implicit view time interpolation of stereo videos using multi plane disparities and non uniform coordinates authors avinash paliwal andrii tsarov nima khademi kalantari subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract in this paper we propose an approach for view time interpolation of stereo videos specifically we build upon x fields that approximates an interpolatable mapping between the input coordinates and rgb images using a convolutional decoder our main contribution is to analyze and identify the sources of the problems with using x fields in our application and propose novel techniques to overcome these challenges specifically we observe that x fields struggles to implicitly interpolate the disparities for large baseline cameras therefore we propose multi plane disparities to reduce the spatial distance of the objects in the stereo views moreover we propose non uniform time coordinates to handle the non linear and sudden motion spikes in videos we additionally introduce several simple but important improvements over x fields we demonstrate that our approach is able to produce better results than the state of the art while running in near real time rates and having low memory and storage costs nerf supervised deep stereo authors fabio tosi alessio tonioni daniele de gregorio matteo poggi subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract we introduce a novel framework for training deep stereo networks effortlessly and without any ground truth by leveraging state of the art neural rendering solutions we generate stereo training data from image sequences collected with a single handheld camera on top of them a nerf supervised training procedure is carried out from which we exploit rendered stereo triplets to compensate for occlusions and depth maps as proxy labels this results in stereo networks capable of predicting sharp and detailed disparity maps experimental results show that models trained under this regime yield a improvement over existing self supervised methods on the challenging middlebury dataset filling the gap to supervised models and most times outperforming them at zero shot generalization keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw t fftradnet object detection with swin vision transformers from raw adc radar signals authors james giroux martin bouchard robert laganiere subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract object detection utilizing frequency modulated continous wave radar is becoming increasingly popular in the field of autonomous systems radar does not possess the same drawbacks seen by other emission based sensors such as lidar primarily the degradation or loss of return signals due to weather conditions such as rain or snow however radar does 
possess traits that make it unsuitable for standard emission based deep learning representations such as point clouds radar point clouds tend to be sparse and therefore information extraction is not efficient to overcome this more traditional digital signal processing pipelines were adapted to form inputs residing directly in the frequency domain via fast fourier transforms commonly three transformations were used to form range azimuth doppler cubes in which deep learning algorithms could perform object detection this too has drawbacks namely the pre processing costs associated with performing multiple fourier transforms and normalization we explore the possibility of operating on raw radar inputs from analog to digital converters via the utilization of complex transformation layers moreover we introduce hierarchical swin vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre processing along with different radar configurations i e relatively low and high numbers of transmitters and receivers while obtaining on par or better results than the state of the art enhanced stable view synthesis authors nishant jain suryansh kumar luc van gool subjects computer vision and pattern recognition cs cv graphics cs gr arxiv link pdf link abstract we introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera the introduced approach focuses on outdoor scenes where recovering accurate geometric scaffold and camera pose is challenging leading to inferior results using the state of the art stable view synthesis svs method svs and related methods fail for outdoor scenes primarily due to i over relying on the multiview stereo mvs for geometric scaffold recovery and ii assuming colmap computed camera poses as the best possible estimates despite it being well studied that mvs reconstruction accuracy is limited to scene disparity and camera pose accuracy is sensitive to key point correspondence selection this work proposes a principled way to enhance novel view synthesis solutions drawing inspiration from the basics of multiple view geometry by leveraging the complementary behavior of mvs and monocular depth we arrive at a better scene depth per view for nearby and far points respectively moreover our approach jointly refines camera poses with image based rendering via multiple rotation averaging graph optimization the recovered scene depth and the camera pose help better view dependent on surface feature aggregation of the entire scene extensive evaluation of our approach on the popular benchmark dataset such as tanks and temples shows substantial improvement in view synthesis results compared to the prior art for instance our method shows db of psnr improvement on the tank and temples similar statistics are observed when tested on other benchmark datasets such as fvs mip nerf and dtu understanding the robustness of object detection with bird s eye view representations in autonomous driving authors zijian zhu yichi zhang hai chen yinpeng dong shu zhao wenbo ding jiachen zhong shibao zheng subjects computer vision and pattern recognition cs cv cryptography and security cs cr arxiv link pdf link abstract object detection is an essential perception task in autonomous driving to understand the environments the bird s eye view bev representations have significantly improved the performance of detectors with camera inputs on popular benchmarks however there still lacks a systematic understanding of the 
robustness of these vision dependent bev models which is closely related to the safety of autonomous driving systems in this paper we evaluate the natural and adversarial robustness of various representative models under extensive settings to fully understand their behaviors influenced by explicit bev features compared with those without bev in addition to the classic settings we propose a consistent patch attack by applying adversarial patches in the space to guarantee the spatiotemporal consistency which is more realistic for the scenario of autonomous driving with substantial experiments we draw several findings bev models tend to be more stable than previous methods under different natural conditions and common corruptions due to the expressive spatial representations bev models are more vulnerable to adversarial noises mainly caused by the redundant bev features camera lidar fusion models have superior performance under different settings with multi modal inputs but bev fusion model is still vulnerable to adversarial noises of both point cloud and image these findings alert the safety issue in the applications of bev detectors and could facilitate the development of more robust models asymmetric face recognition with cross model compatible ensembles authors ori linial alon shoshan nadav bhonker elad hirsch lior zamir igor kviatkovsky gerard medioni subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the asymmetrical retrieval setting is a well suited solution for resource constrained face recognition in this setting a large model is used for indexing the gallery while a lightweight model is used for querying the key principle in such systems is ensuring that both models share the same embedding space most methods in this domain are based on knowledge distillation while useful they suffer from several drawbacks they are upper bounded by the performance of the single best model found and cannot be extended to use an ensemble of models in a straightforward manner in this paper we present an approach that does not rely on knowledge distillation rather it utilizes embedding transformation models this allows the use of n independently trained and diverse gallery models e g trained on different datasets or having a different architecture and a single query model as a result we improve the overall accuracy beyond that of any single model while maintaining a low computational budget for querying additionally we propose a gallery image rejection method that utilizes the diversity between multiple transformed embeddings to estimate the uncertainty of gallery images towards robust and reliable perception against corruptions authors lingdong kong youquan liu xin li runnan chen wenwei zhang jiawei ren liang pan kai chen ziwei liu subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract the robustness of perception systems under natural corruptions from environments and sensors is pivotal for safety critical applications existing large scale perception datasets often contain data that are meticulously cleaned such configurations however cannot reflect the reliability of perception models during the deployment stage in this work we present the first comprehensive benchmark heading toward probing the robustness of detectors and segmentors under out of distribution scenarios against natural corruptions that occur in real world environments specifically we consider eight corruption types stemming from adversarial weather 
conditions external disturbances and internal sensor failure we uncover that although promising results have been progressively achieved on standard benchmarks state of the art perception models are at risk of being vulnerable to corruptions we draw key observations on the use of data representations augmentation schemes and training strategies that could severely affect the model s performance to pursue better robustness we propose a density insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency we hope our benchmark and approach could inspire future research in designing more robust and reliable perception models our robustness benchmark suite is publicly available keyword raw image there is no result
1
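The RGB-thermal record above describes complementary random masking only in prose. As a hedged illustration, the sketch below generates one random patch mask and its complement so that no region is hidden in both modalities at once; the patch size, mask ratio, and image shapes are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of the complementary random masking idea: one random
# binary patch mask hides regions of the RGB input while its complement
# hides regions of the thermal input, so the two branches never lose the
# same pixels. All sizes/ratios below are illustrative assumptions.
import numpy as np

def complementary_masks(height, width, patch=16, ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    grid = (rng.random((height // patch, width // patch)) < ratio).astype(np.uint8)
    mask_rgb = np.kron(grid, np.ones((patch, patch), dtype=np.uint8)).astype(bool)
    return mask_rgb, ~mask_rgb            # complement: the two masks never overlap

rgb = np.random.rand(224, 224, 3)
thermal = np.random.rand(224, 224, 1)
m_rgb, m_th = complementary_masks(224, 224)
rgb_masked = rgb * ~m_rgb[..., None]      # zero out the masked patches
thermal_masked = thermal * ~m_th[..., None]
assert not np.any(m_rgb & m_th)           # every pixel stays visible to one branch
```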
344,864
10,349,698,393
IssuesEvent
2019-09-04 23:32:23
oslc-op/oslc-specs
https://api.github.com/repos/oslc-op/oslc-specs
opened
TRS base member predicate may introduce incompatibilities with TRS 1.0
Core: TRS Priority: High Xtra: Jira
The TRS 1.0 specification did not define a trs:Base class, but rather referred to the object of a trs:base predicate as an "RDF container". The Base Resources does not specify an rdf:type for the base resource. The members of this RDF container are captured using rdfs:member predicates.  TRS 2.0 also does not define a trs:Base class; it also refers to the Base resource as the target of the trs:base predicate whose range is specified to be ldp:DirectContainer. The TRS 2.0 spec does define some properties of this unspecified resource type, so when I created the resource shapes for TRS 3.0, I added a trs:Base resource shape that oslc:describes ldp:DirectContainer in order to formally reflect what was in the 2.0 specification. It is not clear what TRS 1.0 implementations exist, but any TRS implementations done using eclipse/Lyo oslc-trs as a reference implementation may not be entirely compatible with TRS 2.0 as specified. It appears oslc-trs does utilize LDP, but it uses ldp:Container, not ldp:DirectContainer (a subclass), and uses rdfs:member, not ldp:member (ldp:member is a subPropertyOf rdfs:member). A TRS provider I recently implemented ([https://github.com/OSLC/iotp-adaptor](https://github.com/OSLC/iotp-adaptor)) using oslc-trs does produce ldp:Container with rdfs:member, and this base is consumed without any problem by IBM LQE.  So it appears we have a mixture of container and member specifications that we may need to maintain for backward compatibility. --- _Migrated from https://issues.oasis-open.org/browse/OSLCCORE-171 (opened by @jamsden; previously assigned to _**Unknown user**_)_
1.0
TRS base member predicate may introduce incompatibilities with TRS 1.0 - The TRS 1.0 specification did not define a trs:Base class, but rather referred to the object of a trs:base predicate as an "RDF container". The Base Resources does not specify an rdf:type for the base resource. The members of this RDF container are captured using rdfs:member predicates.  TRS 2.0 also does not define a trs:Base class; it also refers to the Base resource as the target of the trs:base predicate whose range is specified to be ldp:DirectContainer. The TRS 2.0 spec does define some properties of this unspecified resource type, so when I created the resource shapes for TRS 3.0, I added a trs:Base resource shape that oslc:describes ldp:DirectContainer in order to formally reflect what was in the 2.0 specification. It is not clear what TRS 1.0 implementations exist, but any TRS implementations done using eclipse/Lyo oslc-trs as a reference implementation may not be entirely compatible with TRS 2.0 as specified. It appears oslc-trs does utilize LDP, but it uses ldp:Container, not ldp:DirectContainer (a subclass), and uses rdfs:member, not ldp:member (ldp:member is a subPropertyOf rdfs:member). A TRS provider I recently implemented ([https://github.com/OSLC/iotp-adaptor](https://github.com/OSLC/iotp-adaptor)) using oslc-trs does produce ldp:Container with rdfs:member, and this base is consumed without any problem by IBM LQE.  So it appears we have a mixture of container and member specifications that we may need to maintain for backward compatibility. --- _Migrated from https://issues.oasis-open.org/browse/OSLCCORE-171 (opened by @jamsden; previously assigned to _**Unknown user**_)_
non_process
trs base member predicate may introduce incompatibilities with trs the trs specification did not define a trs base class but rather referred to the object of a trs base predicate as an rdf container the base resources does not specify and rdf type for the base resource the members of this rdf container are captured using rdfs member predicates   trs also does not define a trs base class it also refers to the base resource as the target of the trs base predicate whose range is specified to be ldp directcontainer the trs spec does define some properties of this unspecified resource type so when i created the resource shapes for trs i added a trs base resource shape that oslc describes ldp directcontainer in order to formerly reflect what was in the specification it is not clear what trs implementations exist but any trs implementations done using eclipse lyo oslc trs as a reference implementation may not be entirely compatible with trs as specified it appears oslc trs does utilize ldp but it sues ldp container not ldp directcontainer a subclass and uses rdfs member not ldp member ldp member is a subpropertyof rdfs member a trs provider i recently implemented  using oslc trs does produce ldp container with rdfs member and this base is consumed without any problem by ibm lqe   so it appears we have a mixture of container and member specifications that we may need to maintain for backward compatibility migrated from opened by jamsden previously assigned to unknown user
0
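The interoperability problem in the record above is concrete enough to sketch. Below is a minimal, non-normative rdflib example of a consumer that tolerates the reported mix by accepting members linked with either rdfs:member or ldp:member; the base URI and member triples are invented for illustration.

```python
# Minimal sketch (not OASIS-endorsed) of a TRS consumer that tolerates the
# mix described above: the base may arrive as ldp:Container or
# ldp:DirectContainer, with members linked by either rdfs:member or
# ldp:member. URIs and triples here are made up for illustration.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDFS

LDP = Namespace("http://www.w3.org/ns/ldp#")
base = URIRef("http://example.com/trs/base")

g = Graph()
g.add((base, RDFS.member, URIRef("http://example.com/resources/1")))
g.add((base, LDP.member, URIRef("http://example.com/resources/2")))

# ldp:member is a subPropertyOf rdfs:member, so a reasoning-aware store
# would collapse these, but a plain consumer has to check both predicates.
members = set(g.objects(base, RDFS.member)) | set(g.objects(base, LDP.member))
for m in sorted(members):
    print(m)
```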
8,285
11,452,143,048
IssuesEvent
2020-02-06 13:07:59
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
Improve documentation around securing your database connection with SSL
kind/docs process/candidate
Recently I tried setting up prisma2 with an SSL-secured postgres database. There is some documentation about it [here](https://github.com/prisma/prisma2/blob/master/docs/core/connectors/postgresql.md). However, with the information provided here I was not able to get it to work for me. I am not sure which of the parameters are required and what the connection string should look like in the end, especially because there are paths to files in there. Although maybe not related to prisma2, it would also be nice to know what is the best way to store the keys in prod. Edit: It might also be a good idea to link to some place where generating an S12 keystore(?) is explained
1.0
Improve documentation around securing your database connection with SSL - Recently I tried setting up prisma2 with an SSL-secured postgres database. There is some documentation about it [here](https://github.com/prisma/prisma2/blob/master/docs/core/connectors/postgresql.md). However, with the information provided here I was not able to get it to work for me. I am not sure which of the parameters are required and what the connection string should look like in the end, especially because there are paths to files in there. Although maybe not related to prisma2, it would also be nice to know what is the best way to store the keys in prod. Edit: It might also be a good idea to link to some place where generating an S12 keystore(?) is explained
process
improve documentation around securing your database connection with ssl recently i tried setting up with a ssl secured postgres database there is some documentation about it however with the information provided here i was not able to get it to work for me i am not sure which of the parameters are required and how the connection string should look like in the end especially because there are paths to files in there although maybe not related to it would also be nice to know what is the best way to store the keys in prod edit it might also be a good idea to link to some place where generating a keystore is explained
1
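The record above asks what a working SSL connection string looks like. The sketch below builds a libpq-style PostgreSQL URL with TLS options; the parameter names (sslmode, sslrootcert) follow libpq conventions, and the exact names Prisma accepts should be verified against its documentation, so treat this as the general shape rather than a Prisma specification.

```python
# Illustrative only: a libpq-style PostgreSQL URL with SSL options, of the
# general shape the issue above is asking about. Host, credentials, and the
# CA bundle path are hypothetical placeholders.
from urllib.parse import quote, urlencode

user, password = "app", quote("s3cret!")           # percent-encode secrets
host, port, db = "db.example.com", 5432, "appdb"
ssl_params = {
    "sslmode": "verify-full",                      # require TLS and verify the host
    "sslrootcert": "/etc/ssl/certs/example-ca.pem" # assumed CA bundle location
}
url = f"postgresql://{user}:{password}@{host}:{port}/{db}?{urlencode(ssl_params)}"
print(url)
```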
72,998
15,252,076,885
IssuesEvent
2021-02-20 01:26:21
jgeraigery/haystack
https://api.github.com/repos/jgeraigery/haystack
opened
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: haystack/docsite/package.json</p> <p>Path to vulnerable library: haystack/docsite/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - docusaurus-1.13.0.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/docsite/package.json"],"isTransitiveDependency":true,"dependencyTree":"docusaurus:1.13.0;lodash:4.17.15","isMinimumFixVersionAvailable":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. 
Steps to reproduce (provided by reporter Liyuan Chen): var lo \u003d require(\u0027lodash\u0027); function build_blank (n) { var ret \u003d \"1\" for (var i \u003d 0; i \u003c n; i++) { ret +\u003d \" \" } return ret + \"1\"; } var s \u003d build_blank(50000) var time0 \u003d Date.now(); lo.trim(s) var time_cost0 \u003d Date.now() - time0; console.log(\"time_cost0: \" + time_cost0) var time1 \u003d Date.now(); lo.toNumber(s) var time_cost1 \u003d Date.now() - time1; console.log(\"time_cost1: \" + time_cost1) var time2 \u003d Date.now(); lo.trimEnd(s) var time_cost2 \u003d Date.now() - time2; console.log(\"time_cost2: \" + time_cost2)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-28500 (Medium) detected in lodash-4.17.15.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p> <p>Path to dependency file: haystack/docsite/package.json</p> <p>Path to vulnerable library: haystack/docsite/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - docusaurus-1.13.0.tgz (Root Library) - :x: **lodash-4.17.15.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2) <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/docsite/package.json"],"isTransitiveDependency":true,"dependencyTree":"docusaurus:1.13.0;lodash:4.17.15","isMinimumFixVersionAvailable":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2020-28500","vulnerabilityDetails":"All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. 
Steps to reproduce (provided by reporter Liyuan Chen): var lo \u003d require(\u0027lodash\u0027); function build_blank (n) { var ret \u003d \"1\" for (var i \u003d 0; i \u003c n; i++) { ret +\u003d \" \" } return ret + \"1\"; } var s \u003d build_blank(50000) var time0 \u003d Date.now(); lo.trim(s) var time_cost0 \u003d Date.now() - time0; console.log(\"time_cost0: \" + time_cost0) var time1 \u003d Date.now(); lo.toNumber(s) var time_cost1 \u003d Date.now() - time1; console.log(\"time_cost1: \" + time_cost1) var time2 \u003d Date.now(); lo.trimEnd(s) var time_cost2 \u003d Date.now() - time2; console.log(\"time_cost2: \" + time_cost2)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file haystack docsite package json path to vulnerable library haystack docsite node modules lodash package json dependency hierarchy docusaurus tgz root library x lodash tgz vulnerable library vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions steps to reproduce provided by reporter liyuan chen var lo require lodash function build blank n var ret for var i i n i ret return ret var s build blank var date now lo trim s var time date now console log time time var date now lo tonumber s var time date now console log time time var date now lo trimend s var time date now console log time time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree docusaurus lodash isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions steps to reproduce provided by reporter liyuan chen var lo require function build blank n var ret for var i i n i ret return ret var s build blank var date now lo trim s var time date now console log time time var date now lo tonumber s var time date now console log time time var date now lo trimend s var time date now console log time time vulnerabilityurl
0
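The CVE record above embeds its JavaScript reproduction squashed onto one line. As a language-neutral illustration of the same failure mode, quadratic backtracking when a greedy whitespace pattern anchored at the end of the string repeatedly fails, here is a small Python timing sketch; the pattern and input sizes are illustrative and are not the lodash internals.

```python
# ReDoS illustration, not the lodash code itself: a greedy \s+ anchored at
# the end of the string backtracks through the whole whitespace run at
# every starting position when the string does not actually end in
# whitespace, giving roughly quadratic time in the run length.
import re
import time

def timed_trim(n):
    s = "1" + " " * n + "1"                # the trailing "1" defeats the $ anchor
    t0 = time.perf_counter()
    re.sub(r"\s+$", "", s)                 # backtracks at every whitespace position
    return time.perf_counter() - t0

for n in (5_000, 10_000, 20_000):
    print(n, f"{timed_trim(n):.3f}s")      # time should grow roughly 4x per doubling
```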
65,654
7,892,486,079
IssuesEvent
2018-06-28 15:07:37
lbryio/lbry-app
https://api.github.com/repos/lbryio/lbry-app
closed
Long URLs cut off in search/URL bar
area: viewer redesign type: improvement
<!-- Thanks for reporting an issue to LBRY and helping us improve! To make it possible for us to help you, please fill out below information carefully. Before reporting any issues, please make sure that you're using the latest version. - App releases: https://github.com/lbryio/lbry-app/releases - Standalone daemon: https://github.com/lbryio/lbry/releases We are also available on live chat at https://chat.lbry.io --> ## The Issue See below - this would mainly happen when accessing content through a channel page / downloads/publish page for content that has a channel + a longer than normal claim name. ![long url](https://user-images.githubusercontent.com/8120721/38910965-29683090-429a-11e8-8019-dcd8ce2a5560.jpg) ## System Configuration <!-- For the app, this info is in the About section at the bottom of the Help page. You can include a screenshot instead of typing it out --> <!-- For the daemon, run: curl 'http://localhost:5279' --data '{"method":"version"}' and include the full output --> - LBRY Daemon version: - LBRY App version: - LBRY Installation ID: - Operating system: ## Anything Else <!-- Include anything else that does not fit into the above sections --> ## Screenshots <!-- If a screenshot would help explain the bug, please include one or two here -->
1.0
Long URLs cut off in search/URL bar - <!-- Thanks for reporting an issue to LBRY and helping us improve! To make it possible for us to help you, please fill out below information carefully. Before reporting any issues, please make sure that you're using the latest version. - App releases: https://github.com/lbryio/lbry-app/releases - Standalone daemon: https://github.com/lbryio/lbry/releases We are also available on live chat at https://chat.lbry.io --> ## The Issue See below - this would mainly happen when accessing content through a channel page / downloads/publish page for content that has a channel + a longer than normal claim name. ![long url](https://user-images.githubusercontent.com/8120721/38910965-29683090-429a-11e8-8019-dcd8ce2a5560.jpg) ## System Configuration <!-- For the app, this info is in the About section at the bottom of the Help page. You can include a screenshot instead of typing it out --> <!-- For the daemon, run: curl 'http://localhost:5279' --data '{"method":"version"}' and include the full output --> - LBRY Daemon version: - LBRY App version: - LBRY Installation ID: - Operating system: ## Anything Else <!-- Include anything else that does not fit into the above sections --> ## Screenshots <!-- If a screenshot would help explain the bug, please include one or two here -->
non_process
long urls cut off in search url bar thanks for reporting an issue to lbry and helping us improve to make it possible for us to help you please fill out below information carefully before reporting any issues please make sure that you re using the latest version app releases standalone daemon we are also available on live chat at the issue see below this would mainly happen when accessing content through a channel page downloads publish page for content that has a channel a longer than normal claim name system configuration for the app this info is in the about section at the bottom of the help page you can include a screenshot instead of typing it out for the daemon run curl data method version and include the full output lbry daemon version lbry app version lbry installation id operating system anything else screenshots
0
59,424
14,589,148,093
IssuesEvent
2020-12-19 00:41:36
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
opened
Illegal instruction on older CPUs under version 2.4.0
type:build/install
**System information** - Ubuntu 18.04 and 20.04, Scientific Linux 7 - binary installed via pip - version 2.4.0 - Python 3.8 - installed via pip (either inside or not inside a Conda environment) - various CPU-only and GPU-hosting machines **Describe the problem** `import tensorflow` produces "Illegal instruction (core dumped)" on older machines (seemingly those that do not support AVX2 instructions). There is no problem on new machines (seemingly those that support AVX2 instructions). **Provide the exact sequence of commands / steps that you executed before running into the problem** ``` pip install tensorflow python -c "import tensorflow" ``` **Any other info / logs** The core dump occurs on various machines with various types of CPUs. The common thread seems to be that it occurs on machines that don't support AVX2 instructions. Most of the machines on which this occurs do support AVX instructions. The issue does not occur with Tensorflow 2.3.1 nor with Tensorflow 2.5.0 installed via tf-nightly. Any chance Tensorflow 2.4.0 was built in a way (perhaps unintentionally) that requires AVX2 instructions or some other requirement that causes it to fail on somewhat older (but not really old) machines? Based on what I am seeing, it seems that using 2.4.0 on many machines will fail. The same issue occurs when running in the official Tensorflow Docker container. This seems related to issue #44668.
1.0
Illegal instruction on older CPUs under version 2.4.0 - **System information** - Ubuntu 18.04 and 20.04, Scientific Linux 7 - binary installed via pip - version 2.4.0 - Python 3.8 - installed via pip (either inside or not inside a Conda environment) - various CPU-only and GPU-hosting machines **Describe the problem** `import tensorflow` produces "Illegal instruction (core dumped)" on older machines (seemingly those that do not support AVX2 instructions). There is no problem on new machines (seemingly those that support AVX2 instructions). **Provide the exact sequence of commands / steps that you executed before running into the problem** ``` pip install tensorflow python -c "import tensorflow" ``` **Any other info / logs** The core dump occurs on various machines with various types of CPUs. The common thread seems to be that it occurs on machines that don't support AVX2 instructions. Most of the machines on which this occurs do support AVX instructions. The issue does not occur with Tensorflow 2.3.1 nor with Tensorflow 2.5.0 installed via tf-nightly. Any chance Tensorflow 2.4.0 was built in a way (perhaps unintentionally) that requires AVX2 instructions or some other requirement that causes it to fail on somewhat older (but not really old) machines? Based on what I am seeing, it seems that using 2.4.0 on many machines will fail. The same issue occurs when running in the official Tensorflow Docker container. This seems related to issue #44668.
non_process
illegal instruction on older cpus under version system information ubuntu and scientific linux binary installed via pip version python installed via pip either inside or not inside a conda environment various cpu only and gpu hosting machines describe the problem import tensorflow produces illegal instruction core dumped on older machines seemingly those that do not support instructions there is no problem on new machines seemingly those that support instructions provide the exact sequence of commands steps that you executed before running into the problem pip install tensorflow python c import tensorflow any other info logs the core dump occurs on various machines with various types of cpus the common thread seems to be that it occurs on machines that don t support instructions most of the machines on which this occurs do support avx instructions the issue does not occur with tensorflow nor with tensorflow installed via tf nightly any chance tensorflow was built in a way perhaps unintentionally that requires instructions or some other requirement that causes it to fail on somewhat older but not really old machines based on what i am seeing it seems that using on many machines will fail the same issue occurs when running in the official tensorflow docker container this seems related to issue
0
192,343
14,614,822,906
IssuesEvent
2020-12-22 10:30:39
dusk-network/dusk-blockchain
https://api.github.com/repos/dusk-network/dusk-blockchain
closed
Modify mempool testing file to avoid using a shared context
area:testing type:refactor
This shared context object often causes issues when amendments need to be made to the mempool, and hinders progress. Besides this, it strikes me as an anti-pattern to share a context object between unit tests, which should ideally be isolated and fine to be run concurrently without usage of mutexes or anything of the sort.
1.0
Modify mempool testing file to avoid using a shared context - This shared context object often causes issues when amendments need to be made to the mempool, and hinders progress. Besides this, it strikes me as an anti-pattern to share a context object between unit tests, which should ideally be isolated and fine to be run concurrently without usage of mutexes or anything of the sort.
non_process
modify mempool testing file to avoid using a shared context this shared context object often causes issues when amendments need to be made to the mempool and hinders progress besides this it strikes me as an anti pattern to share a context object between unit tests which should ideally be isolated and fine to be run concurrently without usage of mutexes or anything of the sort
0
7,079
10,229,177,582
IssuesEvent
2019-08-17 10:16:33
SB-MaterialAdmin/Web
https://api.github.com/repos/SB-MaterialAdmin/Web
reopened
Ban from the WEB panel
In process
![111](https://user-images.githubusercontent.com/26358845/47965335-d16a1600-e056-11e8-963c-2793da67465e.png) It would be nice if it displayed the steam_id in the other format STEAM_X:X:XXXXXX ![222](https://user-images.githubusercontent.com/26358845/47965342-efd01180-e056-11e8-9b86-568c64c390e8.png) When you get banned from the web panel, it kicks you from the server with this dialog box. ![333](https://user-images.githubusercontent.com/26358845/47965345-04140e80-e057-11e8-9f03-746092f1d725.png) It would be nice if the description were more detailed: ban duration, reason, the ban system's site (or better, a custom one you specify). ![666](https://user-images.githubusercontent.com/26358845/47965369-78e74880-e057-11e8-8aea-a59a08b857f6.png) When banned from the server everything displays fine, but on joining it still just says that you were banned - without the reason, duration, etc.
1.0
Ban from the WEB panel - ![111](https://user-images.githubusercontent.com/26358845/47965335-d16a1600-e056-11e8-963c-2793da67465e.png) It would be nice if it displayed the steam_id in the other format STEAM_X:X:XXXXXX ![222](https://user-images.githubusercontent.com/26358845/47965342-efd01180-e056-11e8-9b86-568c64c390e8.png) When you get banned from the web panel, it kicks you from the server with this dialog box. ![333](https://user-images.githubusercontent.com/26358845/47965345-04140e80-e057-11e8-9f03-746092f1d725.png) It would be nice if the description were more detailed: ban duration, reason, the ban system's site (or better, a custom one you specify). ![666](https://user-images.githubusercontent.com/26358845/47965369-78e74880-e057-11e8-8aea-a59a08b857f6.png) When banned from the server everything displays fine, but on joining it still just says that you were banned - without the reason, duration, etc.
process
ban from the web panel it would be nice if it displayed the steam id in the other format steam x x xxxxxx when you get banned from the web panel it kicks you from the server with this dialog box it would be nice if the description were more detailed ban duration reason the ban system s site or better a custom one you specify when banned from the server everything displays fine but on joining it still just says that you were banned without the reason duration etc
1
14,001
16,772,783,485
IssuesEvent
2021-06-14 16:44:57
googleapis/sloth
https://api.github.com/repos/googleapis/sloth
closed
(Re)assign serverless APIs to CAKE
type: process
@tequilarista to action when appropriate resources are in place - [ ] Rollback #845 - [ ] Add Workflows API
1.0
(Re)assign serverless APIs to CAKE - @tequilarista to action when appropriate resources are in place - [ ] Rollback #845 - [ ] Add Workflows API
process
re assign serverless apis to cake tequilarista to action when appropriate resources are in place rollback add workflows api
1
201,918
15,229,383,534
IssuesEvent
2021-02-18 12:50:20
WeiXian042901/fyp_repository
https://api.github.com/repos/WeiXian042901/fyp_repository
opened
FU_057_Quiz Play Page FIQ Awarded And Time Specific Question(Select Wrong Answer)
Acceptance Test Quiz User
**Test Scenario** - User selects the wrong answer **Test Case** - Check that the wrong answer is highlighted as red **Pre-Conditions** - User has successfully entered the application - User clicked on the “Quizzes” Option - User selected the “Testing title(FIQ and Time)” quiz option - User clicked on the “Start Quiz” button. **Test-Steps** 1. Select “ Wrong Answer” as the correct answer **Test Data** **Expected Results** - The users should be informed that their answer was incorrect, with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green. **Actual Results** - The users should be informed that their answer was incorrect, with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green. **Pass/Fail** - Pass **Date Tested** - 10th February 2021 **Tested By** - Zachary Tan
1.0
FU_057_Quiz Play Page FIQ Awarded And Time Specific Question(Select Wrong Answer) - **Test Scenario** - User selects the wrong answer **Test Case** - Check that the wrong answer is highlighted as red **Pre-Conditions** - User has successfully entered the application - User clicked on the “Quizzes” Option - User selected the “Testing title(FIQ and Time)” quiz option - User clicked on the “Start Quiz” button. **Test-Steps** 1. Select “ Wrong Answer” as the correct answer **Test Data** **Expected Results** - The users should be informed that their answer was incorrect, with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green. **Actual Results** - The users should be informed that their answer was incorrect, with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green. **Pass/Fail** - Pass **Date Tested** - 10th February 2021 **Tested By** - Zachary Tan
non_process
fu quiz play page fiq awarded and time specific question select wrong answer test scenario user selects the wrong answer test case check that the wrong answer is highlighted as red pre conditions user has successfully entered the application user clicked on the “quizzes” option user selected the “testing title fiq and time ” quiz option user clicked on the “start quiz” button test steps select “ wrong answer” as the correct answer test data expected results the users should be informed that their answer was incorrect with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green actual results the users should be informed that their answer was incorrect with the answer placeholders for all the incorrect answers highlighted red and the correct answer placeholder highlighted green pass fail pass date tested february tested by zachary tan
0
776,749
27,264,619,558
IssuesEvent
2023-02-22 17:05:29
ascheid/itsg33-pbmm-issue-gen
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
opened
IR-3(2): Incident Response Testing | Coordination With Related Plans
Priority: P3 Suggested Assignment: IT Security Function ITSG-33 Class: Operational Control: IR-3
# Control Definition INCIDENT RESPONSE TESTING | COORDINATION WITH RELATED PLANS The organization coordinates incident response testing with organizational elements responsible for related plans. # Class Operational # Supplemental Guidance Organizational plans related to incident response testing include, for example, Business Continuity Plans, Contingency Plans, Disaster Recovery Plans, Continuity of Operations Plans, Crisis Communications Plans, Critical Infrastructure Plans, and Occupant Emergency Plans. # Suggested Assignment IT Security Function # Support Teams IT Operations Group
1.0
IR-3(2): Incident Response Testing | Coordination With Related Plans - # Control Definition INCIDENT RESPONSE TESTING | COORDINATION WITH RELATED PLANS The organization coordinates incident response testing with organizational elements responsible for related plans. # Class Operational # Supplemental Guidance Organizational plans related to incident response testing include, for example, Business Continuity Plans, Contingency Plans, Disaster Recovery Plans, Continuity of Operations Plans, Crisis Communications Plans, Critical Infrastructure Plans, and Occupant Emergency Plans. # Suggested Assignment IT Security Function # Support Teams IT Operations Group
non_process
ir incident response testing coordination with related plans control definition incident response testing coordination with related plans the organization coordinates incident response testing with organizational elements responsible for related plans class operational supplemental guidance organizational plans related to incident response testing include for example business continuity plans contingency plans disaster recovery plans continuity of operations plans crisis communications plans critical infrastructure plans and occupant emergency plans suggested assignment it security function support teams it operations group
0
20,661
27,334,331,628
IssuesEvent
2023-02-26 02:00:08
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 24 Feb 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Real-Time Damage Detection in Fiber Lifting Ropes Using Convolutional Neural Networks - **Authors:** Tuomas Jalonen, Mohammad Al-Sa'd, Roope Mellanen, Serkan Kiranyaz, Moncef Gabbouj - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.11947 - **Pdf link:** https://arxiv.org/pdf/2302.11947 - **Abstract** The health and safety hazards posed by worn crane lifting ropes mandate periodic inspection for damage. This task is time-consuming, prone to human error, halts operation, and may result in the premature disposal of ropes. Therefore, we propose using deep learning and computer vision methods to automate the process of detecting damaged ropes. Specifically, we present a novel vision-based system for detecting damage in synthetic fiber rope images using convolutional neural networks (CNN). We use a camera-based apparatus to photograph the lifting rope's surface, while in operation, and capture the progressive wear-and-tear as well as the more significant degradation in the rope's health state. Experts from Konecranes annotate the collected images in accordance with the rope's condition; normal or damaged. Then, we pre-process the images, design a CNN model in a systematic manner, evaluate its detection and prediction performance, analyze its computational complexity, and compare it with various other models. Experimental results show the proposed model outperforms other techniques with 96.4% accuracy, 95.8% precision, 97.2% recall, 96.5% F1-score, and 99.2% AUC. Besides, they demonstrate the model's real-time operation, low memory footprint, robustness to various environmental and operational conditions, and adequacy for deployment in industrial systems. ### Dermatological Diagnosis Explainability Benchmark for Convolutional Neural Networks - **Authors:** Raluca Jalaboi, Ole Winther, Alfiia Galimzianova - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.12084 - **Pdf link:** https://arxiv.org/pdf/2302.12084 - **Abstract** In recent years, large strides have been taken in developing machine learning methods for dermatological applications, supported in part by the success of deep learning (DL). To date, diagnosing diseases from images is one of the most explored applications of DL within dermatology. Convolutional neural networks (ConvNets) are the most common (DL) method in medical imaging due to their training efficiency and accuracy, although they are often described as black boxes because of their limited explainability. One popular way to obtain insight into a ConvNet's decision mechanism is gradient class activation maps (Grad-CAM). A quantitative evaluation of the Grad-CAM explainability has been recently made possible by the release of DermXDB, a skin disease diagnosis explainability dataset which enables explainability benchmarking of ConvNet architectures. In this paper, we perform a literature review to identify the most common ConvNet architectures used for this task, and compare their Grad-CAM explanations with the explanation maps provided by DermXDB. 
We identified 11 architectures: DenseNet121, EfficientNet-B0, InceptionV3, InceptionResNetV2, MobileNet, MobileNetV2, NASNetMobile, ResNet50, ResNet50V2, VGG16, and Xception. We pre-trained all architectures on an clinical skin disease dataset, and fine-tuned them on a DermXDB subset. Validation results on the DermXDB holdout subset show an explainability F1 score of between 0.35-0.46, with Xception displaying the highest explainability performance. NASNetMobile reports the highest characteristic-level explainability sensitivity, despite it's mediocre diagnosis performance. These results highlight the importance of choosing the right architecture for the desired application and target market, underline need for additional explainability datasets, and further confirm the need for explainability benchmarking that relies on quantitative analyses. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW There is no result ## Keyword: raw image There is no result
2.0
New submissions for Fri, 24 Feb 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Real-Time Damage Detection in Fiber Lifting Ropes Using Convolutional Neural Networks - **Authors:** Tuomas Jalonen, Mohammad Al-Sa'd, Roope Mellanen, Serkan Kiranyaz, Moncef Gabbouj - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.11947 - **Pdf link:** https://arxiv.org/pdf/2302.11947 - **Abstract** The health and safety hazards posed by worn crane lifting ropes mandate periodic inspection for damage. This task is time-consuming, prone to human error, halts operation, and may result in the premature disposal of ropes. Therefore, we propose using deep learning and computer vision methods to automate the process of detecting damaged ropes. Specifically, we present a novel vision-based system for detecting damage in synthetic fiber rope images using convolutional neural networks (CNN). We use a camera-based apparatus to photograph the lifting rope's surface, while in operation, and capture the progressive wear-and-tear as well as the more significant degradation in the rope's health state. Experts from Konecranes annotate the collected images in accordance with the rope's condition; normal or damaged. Then, we pre-process the images, design a CNN model in a systematic manner, evaluate its detection and prediction performance, analyze its computational complexity, and compare it with various other models. Experimental results show the proposed model outperforms other techniques with 96.4% accuracy, 95.8% precision, 97.2% recall, 96.5% F1-score, and 99.2% AUC. Besides, they demonstrate the model's real-time operation, low memory footprint, robustness to various environmental and operational conditions, and adequacy for deployment in industrial systems. ### Dermatological Diagnosis Explainability Benchmark for Convolutional Neural Networks - **Authors:** Raluca Jalaboi, Ole Winther, Alfiia Galimzianova - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.12084 - **Pdf link:** https://arxiv.org/pdf/2302.12084 - **Abstract** In recent years, large strides have been taken in developing machine learning methods for dermatological applications, supported in part by the success of deep learning (DL). To date, diagnosing diseases from images is one of the most explored applications of DL within dermatology. Convolutional neural networks (ConvNets) are the most common (DL) method in medical imaging due to their training efficiency and accuracy, although they are often described as black boxes because of their limited explainability. One popular way to obtain insight into a ConvNet's decision mechanism is gradient class activation maps (Grad-CAM). A quantitative evaluation of the Grad-CAM explainability has been recently made possible by the release of DermXDB, a skin disease diagnosis explainability dataset which enables explainability benchmarking of ConvNet architectures. In this paper, we perform a literature review to identify the most common ConvNet architectures used for this task, and compare their Grad-CAM explanations with the explanation maps provided by DermXDB. 
We identified 11 architectures: DenseNet121, EfficientNet-B0, InceptionV3, InceptionResNetV2, MobileNet, MobileNetV2, NASNetMobile, ResNet50, ResNet50V2, VGG16, and Xception. We pre-trained all architectures on an clinical skin disease dataset, and fine-tuned them on a DermXDB subset. Validation results on the DermXDB holdout subset show an explainability F1 score of between 0.35-0.46, with Xception displaying the highest explainability performance. NASNetMobile reports the highest characteristic-level explainability sensitivity, despite it's mediocre diagnosis performance. These results highlight the importance of choosing the right architecture for the desired application and target market, underline need for additional explainability datasets, and further confirm the need for explainability benchmarking that relies on quantitative analyses. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW There is no result ## Keyword: raw image There is no result
process
new submissions for fri feb keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp real time damage detection in fiber lifting ropes using convolutional neural networks authors tuomas jalonen mohammad al sa d roope mellanen serkan kiranyaz moncef gabbouj subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract the health and safety hazards posed by worn crane lifting ropes mandate periodic inspection for damage this task is time consuming prone to human error halts operation and may result in the premature disposal of ropes therefore we propose using deep learning and computer vision methods to automate the process of detecting damaged ropes specifically we present a novel vision based system for detecting damage in synthetic fiber rope images using convolutional neural networks cnn we use a camera based apparatus to photograph the lifting rope s surface while in operation and capture the progressive wear and tear as well as the more significant degradation in the rope s health state experts from konecranes annotate the collected images in accordance with the rope s condition normal or damaged then we pre process the images design a cnn model in a systematic manner evaluate its detection and prediction performance analyze its computational complexity and compare it with various other models experimental results show the proposed model outperforms other techniques with accuracy precision recall score and auc besides they demonstrate the model s real time operation low memory footprint robustness to various environmental and operational conditions and adequacy for deployment in industrial systems dermatological diagnosis explainability benchmark for convolutional neural networks authors raluca jalaboi ole winther alfiia galimzianova subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract in recent years large strides have been taken in developing machine learning methods for dermatological applications supported in part by the success of deep learning dl to date diagnosing diseases from images is one of the most explored applications of dl within dermatology convolutional neural networks convnets are the most common dl method in medical imaging due to their training efficiency and accuracy although they are often described as black boxes because of their limited explainability one popular way to obtain insight into a convnet s decision mechanism is gradient class activation maps grad cam a quantitative evaluation of the grad cam explainability has been recently made possible by the release of dermxdb a skin disease diagnosis explainability dataset which enables explainability benchmarking of convnet architectures in this paper we perform a literature review to identify the most common convnet architectures used for this task and compare their grad cam explanations with the explanation maps provided by dermxdb we identified architectures efficientnet mobilenet nasnetmobile and xception we pre trained all architectures on an clinical skin disease dataset and fine tuned them on a dermxdb subset validation results on the dermxdb holdout subset show an explainability score of between with xception displaying the highest explainability performance nasnetmobile reports the highest characteristic level explainability 
sensitivity despite it s mediocre diagnosis performance these results highlight the importance of choosing the right architecture for the desired application and target market underline need for additional explainability datasets and further confirm the need for explainability benchmarking that relies on quantitative analyses keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw there is no result keyword raw image there is no result
1
230,545
18,675,949,812
IssuesEvent
2021-10-31 15:08:06
kubernetes-csi/csi-driver-smb
https://api.github.com/repos/kubernetes-csi/csi-driver-smb
closed
fix the integration test failure when provision secret is provided
lifecycle/rotten sig/testing
**Is your feature request related to a problem?/Why is this needed** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> **Describe the solution you'd like in detail** <!-- A clear and concise description of what you want to happen. --> https://github.com/kubernetes-csi/csi-driver-smb/pull/295/checks?check_run_id=2733216033 ``` CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "-X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.driverVersion=e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.gitCommit=c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.buildDate=2021-06-03T02:02:55Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/smbplugin ./pkg/smbplugin CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "-X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.driverVersion=e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.gitCommit=c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.buildDate=2021-06-03T02:04:03Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/smbplugin ./pkg/smbplugin sudo -E env "PATH=$PATH" bash test/integration/run-test.sh Unable to find image 'servercontainers/samba:4.9.4' locally 4.9.4: Pulling from servercontainers/samba ab1fc7e4bf91: Pulling fs layer 472bedc18387: Pulling fs layer 512a1aa27330: Pulling fs layer 512a1aa27330: Verifying Checksum 512a1aa27330: Download complete ab1fc7e4bf91: Verifying Checksum ab1fc7e4bf91: Download complete 472bedc18387: Verifying Checksum 472bedc18387: Download complete ab1fc7e4bf91: Pull complete 472bedc18387: Pull complete 512a1aa27330: Pull complete Digest: sha256:eced9c2e09e4f61baf6dc252c695c1f3b3ca1831133b71c18e291896b3aa7dcf Status: Downloaded newer image for servercontainers/samba:4.9.4 3a78bb793537adeaaea5b571081230c6952a72ca2351c782e59c41b749af2bfb Begin to run integration test ... 
I0603 02:05:26.307395 36012 main.go:85] set up prometheus server on [::]:29644 I0603 02:05:26.307775 36012 smb.go:63] DRIVER INFORMATION: ------------------- Build Date: "2021-06-03T02:04:03Z" Compiler: gc Driver Name: smb.csi.k8s.io Driver Version: e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff Git Commit: c862e712bfaa8addf5f85f7c43e67336e19354ff Go Version: go1.16.4 Platform: linux/amd64 Streaming logs below: I0603 02:05:26.322108 36012 mount_linux.go:206] Detected OS with systemd I0603 02:05:26.322144 36012 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME I0603 02:05:26.322151 36012 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER I0603 02:05:26.322156 36012 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY I0603 02:05:26.322159 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY I0603 02:05:26.322163 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER I0603 02:05:26.322166 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER I0603 02:05:26.322170 36012 driver.go:103] Enabling node service capability: GET_VOLUME_STATS I0603 02:05:26.322174 36012 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME I0603 02:05:26.322305 36012 server.go:118] Listening for connections on address: &net.TCPAddr{IP:net.IP{0x7f, 0x0, 0x0, 0x1}, Port:10000, Zone:""} Create volume test: I0603 02:05:31.309971 36012 utils.go:118] GRPC call: /csi.v1.Controller/CreateVolume I0603 02:05:31.309989 36012 utils.go:119] GRPC request: {"capacity_range":{"required_bytes":2147483648},"name":"citest-1622685926","parameters":{"source":"//0.0.0.0/share"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]} I0603 02:05:31.313894 36012 controllerserver.go:238] internally mounting source at /tmp/citest-1622685926 I0603 02:05:31.314638 36012 nodeserver.go:180] NodeStageVolume: targetPath(/tmp/citest-1622685926) volumeID(0.0.0.0/share#citest-1622685926) context(map[source://0.0.0.0/share]) mountflags([]) mountOptions([]) I0603 02:05:31.315063 36012 mount_linux.go:175] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /tmp/citest-1622685926 --scope -- mount -t cifs -o <masked> //0.0.0.0/share /tmp/citest-1622685926) I0603 02:05:31.540479 36012 nodeserver.go:205] volume(0.0.0.0/share#citest-1622685926) mount "//0.0.0.0/share" on "/tmp/citest-1622685926" succeeded I0603 02:05:31.542817 36012 controllerserver.go:256] internally unmounting /tmp/citest-1622685926 I0603 02:05:31.542836 36012 nodeserver.go:227] NodeUnstageVolume: CleanupMountPoint on /tmp/citest-1622685926 with volume 0.0.0.0/share#citest-1622685926 I0603 02:05:31.542842 36012 mount_linux.go:266] Unmounting /tmp/citest-1622685926 W0603 02:05:31.550853 36012 mount_helper_common.go:133] Warning: "/tmp/citest-1622685926" is not a mountpoint, deleting I0603 02:05:31.552746 36012 nodeserver.go:232] NodeUnstageVolume: unmount volume 0.0.0.0/share#citest-1622685926 on /tmp/citest-1622685926 successfully E0603 02:05:31.552763 36012 utils.go:123] GRPC error: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/citest-1622685926/citest-1622685926: permission denied failed to make subdirectory: mkdir /tmp/citest-1622685926/citest-1622685926: permission denied Please use -h,--help for more information Got volume id: stage volume test: I0603 02:05:33.570029 36012 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume I0603 02:05:33.570048 
36012 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/tmp/stagingtargetpath","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"source":"//0.0.0.0/share"}} E0603 02:05:33.571685 36012 utils.go:123] GRPC error: rpc error: code = InvalidArgument desc = Volume ID missing in request Volume ID missing in request Please use -h,--help for more information stop and delete samba container samba samba delete tmp dir pkill -f smbplugin make: *** [Makefile:80: integration-test] Error 3 Error: Process completed with exit code 2. ``` **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** <!-- Add any other context or screenshots about the feature request here. -->
1.0
fix the integration test failure when provision secret is provided - **Is your feature request related to a problem?/Why is this needed** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> **Describe the solution you'd like in detail** <!-- A clear and concise description of what you want to happen. --> https://github.com/kubernetes-csi/csi-driver-smb/pull/295/checks?check_run_id=2733216033 ``` CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "-X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.driverVersion=e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.gitCommit=c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.buildDate=2021-06-03T02:02:55Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/smbplugin ./pkg/smbplugin CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags "-X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.driverVersion=e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.gitCommit=c862e712bfaa8addf5f85f7c43e67336e19354ff -X github.com/kubernetes-csi/csi-driver-smb/pkg/smb.buildDate=2021-06-03T02:04:03Z -s -w -extldflags '-static'" -mod vendor -o _output/amd64/smbplugin ./pkg/smbplugin sudo -E env "PATH=$PATH" bash test/integration/run-test.sh Unable to find image 'servercontainers/samba:4.9.4' locally 4.9.4: Pulling from servercontainers/samba ab1fc7e4bf91: Pulling fs layer 472bedc18387: Pulling fs layer 512a1aa27330: Pulling fs layer 512a1aa27330: Verifying Checksum 512a1aa27330: Download complete ab1fc7e4bf91: Verifying Checksum ab1fc7e4bf91: Download complete 472bedc18387: Verifying Checksum 472bedc18387: Download complete ab1fc7e4bf91: Pull complete 472bedc18387: Pull complete 512a1aa27330: Pull complete Digest: sha256:eced9c2e09e4f61baf6dc252c695c1f3b3ca1831133b71c18e291896b3aa7dcf Status: Downloaded newer image for servercontainers/samba:4.9.4 3a78bb793537adeaaea5b571081230c6952a72ca2351c782e59c41b749af2bfb Begin to run integration test ... 
I0603 02:05:26.307395 36012 main.go:85] set up prometheus server on [::]:29644 I0603 02:05:26.307775 36012 smb.go:63] DRIVER INFORMATION: ------------------- Build Date: "2021-06-03T02:04:03Z" Compiler: gc Driver Name: smb.csi.k8s.io Driver Version: e2e-c862e712bfaa8addf5f85f7c43e67336e19354ff Git Commit: c862e712bfaa8addf5f85f7c43e67336e19354ff Go Version: go1.16.4 Platform: linux/amd64 Streaming logs below: I0603 02:05:26.322108 36012 mount_linux.go:206] Detected OS with systemd I0603 02:05:26.322144 36012 driver.go:93] Enabling controller service capability: CREATE_DELETE_VOLUME I0603 02:05:26.322151 36012 driver.go:112] Enabling volume access mode: SINGLE_NODE_WRITER I0603 02:05:26.322156 36012 driver.go:112] Enabling volume access mode: SINGLE_NODE_READER_ONLY I0603 02:05:26.322159 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_READER_ONLY I0603 02:05:26.322163 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_SINGLE_WRITER I0603 02:05:26.322166 36012 driver.go:112] Enabling volume access mode: MULTI_NODE_MULTI_WRITER I0603 02:05:26.322170 36012 driver.go:103] Enabling node service capability: GET_VOLUME_STATS I0603 02:05:26.322174 36012 driver.go:103] Enabling node service capability: STAGE_UNSTAGE_VOLUME I0603 02:05:26.322305 36012 server.go:118] Listening for connections on address: &net.TCPAddr{IP:net.IP{0x7f, 0x0, 0x0, 0x1}, Port:10000, Zone:""} Create volume test: I0603 02:05:31.309971 36012 utils.go:118] GRPC call: /csi.v1.Controller/CreateVolume I0603 02:05:31.309989 36012 utils.go:119] GRPC request: {"capacity_range":{"required_bytes":2147483648},"name":"citest-1622685926","parameters":{"source":"//0.0.0.0/share"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Block":{}},"access_mode":{"mode":1}}]} I0603 02:05:31.313894 36012 controllerserver.go:238] internally mounting source at /tmp/citest-1622685926 I0603 02:05:31.314638 36012 nodeserver.go:180] NodeStageVolume: targetPath(/tmp/citest-1622685926) volumeID(0.0.0.0/share#citest-1622685926) context(map[source://0.0.0.0/share]) mountflags([]) mountOptions([]) I0603 02:05:31.315063 36012 mount_linux.go:175] Mounting cmd (systemd-run) with arguments (--description=Kubernetes transient mount for /tmp/citest-1622685926 --scope -- mount -t cifs -o <masked> //0.0.0.0/share /tmp/citest-1622685926) I0603 02:05:31.540479 36012 nodeserver.go:205] volume(0.0.0.0/share#citest-1622685926) mount "//0.0.0.0/share" on "/tmp/citest-1622685926" succeeded I0603 02:05:31.542817 36012 controllerserver.go:256] internally unmounting /tmp/citest-1622685926 I0603 02:05:31.542836 36012 nodeserver.go:227] NodeUnstageVolume: CleanupMountPoint on /tmp/citest-1622685926 with volume 0.0.0.0/share#citest-1622685926 I0603 02:05:31.542842 36012 mount_linux.go:266] Unmounting /tmp/citest-1622685926 W0603 02:05:31.550853 36012 mount_helper_common.go:133] Warning: "/tmp/citest-1622685926" is not a mountpoint, deleting I0603 02:05:31.552746 36012 nodeserver.go:232] NodeUnstageVolume: unmount volume 0.0.0.0/share#citest-1622685926 on /tmp/citest-1622685926 successfully E0603 02:05:31.552763 36012 utils.go:123] GRPC error: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/citest-1622685926/citest-1622685926: permission denied failed to make subdirectory: mkdir /tmp/citest-1622685926/citest-1622685926: permission denied Please use -h,--help for more information Got volume id: stage volume test: I0603 02:05:33.570029 36012 utils.go:118] GRPC call: /csi.v1.Node/NodeStageVolume I0603 02:05:33.570048 
36012 utils.go:119] GRPC request: {"secrets":"***stripped***","staging_target_path":"/tmp/stagingtargetpath","volume_capability":{"AccessType":{"Block":{}},"access_mode":{"mode":1}},"volume_context":{"source":"//0.0.0.0/share"}} E0603 02:05:33.571685 36012 utils.go:123] GRPC error: rpc error: code = InvalidArgument desc = Volume ID missing in request Volume ID missing in request Please use -h,--help for more information stop and delete samba container samba samba delete tmp dir pkill -f smbplugin make: *** [Makefile:80: integration-test] Error 3 Error: Process completed with exit code 2. ``` **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** <!-- Add any other context or screenshots about the feature request here. -->
non_process
fix the integration test failure when provision secret is provided is your feature request related to a problem why is this needed describe the solution you d like in detail cgo enabled goos linux goarch go build a ldflags x github com kubernetes csi csi driver smb pkg smb driverversion x github com kubernetes csi csi driver smb pkg smb gitcommit x github com kubernetes csi csi driver smb pkg smb builddate s w extldflags static mod vendor o output smbplugin pkg smbplugin cgo enabled goos linux goarch go build a ldflags x github com kubernetes csi csi driver smb pkg smb driverversion x github com kubernetes csi csi driver smb pkg smb gitcommit x github com kubernetes csi csi driver smb pkg smb builddate s w extldflags static mod vendor o output smbplugin pkg smbplugin sudo e env path path bash test integration run test sh unable to find image servercontainers samba locally pulling from servercontainers samba pulling fs layer pulling fs layer pulling fs layer verifying checksum download complete verifying checksum download complete verifying checksum download complete pull complete pull complete pull complete digest status downloaded newer image for servercontainers samba begin to run integration test main go set up prometheus server on smb go driver information build date compiler gc driver name smb csi io driver version git commit go version platform linux streaming logs below mount linux go detected os with systemd driver go enabling controller service capability create delete volume driver go enabling volume access mode single node writer driver go enabling volume access mode single node reader only driver go enabling volume access mode multi node reader only driver go enabling volume access mode multi node single writer driver go enabling volume access mode multi node multi writer driver go enabling node service capability get volume stats driver go enabling node service capability stage unstage volume server go listening for connections on address net tcpaddr ip net ip port zone create volume test utils go grpc call csi controller createvolume utils go grpc request capacity range required bytes name citest parameters source share secrets stripped volume capabilities controllerserver go internally mounting source at tmp citest nodeserver go nodestagevolume targetpath tmp citest volumeid share citest context map mountflags mountoptions mount linux go mounting cmd systemd run with arguments description kubernetes transient mount for tmp citest scope mount t cifs o share tmp citest nodeserver go volume share citest mount share on tmp citest succeeded controllerserver go internally unmounting tmp citest nodeserver go nodeunstagevolume cleanupmountpoint on tmp citest with volume share citest mount linux go unmounting tmp citest mount helper common go warning tmp citest is not a mountpoint deleting nodeserver go nodeunstagevolume unmount volume share citest on tmp citest successfully utils go grpc error rpc error code internal desc failed to make subdirectory mkdir tmp citest citest permission denied failed to make subdirectory mkdir tmp citest citest permission denied please use h help for more information got volume id stage volume test utils go grpc call csi node nodestagevolume utils go grpc request secrets stripped staging target path tmp stagingtargetpath volume capability accesstype block access mode mode volume context source share utils go grpc error rpc error code invalidargument desc volume id missing in request volume id missing in request please use h help for more information 
stop and delete samba container samba samba delete tmp dir pkill f smbplugin make error error process completed with exit code describe alternatives you ve considered additional context
0
197,841
6,965,134,641
IssuesEvent
2017-12-09 02:10:48
lucaslioli/payless
https://api.github.com/repos/lucaslioli/payless
closed
Fix establishment's display screen bug and implement pull-to-refresh
bug high priority
Sometimes, when the user opens an establishment's info page, the information retrieved via the web service is not shown, despite the information being received from the server. Obs.: Probably the page is just not being updated with the new information, or it is being updated before the information has arrived. Our application loads and caches the tabs, so the user must reopen the app whenever he wishes to see new info. By changing the navigation controller method we currently use for loading tabs and implementing pull-to-refresh, we will be able to provide a better experience to the user when it comes to usability.
1.0
Fix establishment's display screen bug and implement pull-to-refresh - Sometimes, when the user opens an establishment's info page, the information retrieved via the web service is not shown, despite the information being received from the server. Obs.: Probably the page is just not being updated with the new information, or it is being updated before the information has arrived. Our application loads and caches the tabs, so the user must reopen the app whenever he wishes to see new info. By changing the navigation controller method we currently use for loading tabs and implementing pull-to-refresh, we will be able to provide a better experience to the user when it comes to usability.
non_process
fix establishment s display screen bug and implement pull to refresh sometimes when the user opens an establishment s info page the information retrieved via the web service is not shown despite the information being received from the server obs probably the page is just not being updated with the new information or it is being updated before the information has arrived our application loads and caches the tabs so the user must reopen the app whenever he wishes to see new info by changing the navigation controller method we currently use for loading tabs and implementing pull to refresh we will be able to provide a better experience to the user when it comes to usability
0
12,239
14,743,840,937
IssuesEvent
2021-01-07 14:29:32
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Mobile] Html breaking in Resources Text
Bug P1 Process: Dev Process: Tested dev Unknown backend
Entered text in resource is breaking in mobile ![resources_](https://user-images.githubusercontent.com/60500517/103407288-28f76400-4b84-11eb-8d71-cb1d58e4ad95.png) ![Resources_text](https://user-images.githubusercontent.com/60500517/103407184-a373b400-4b83-11eb-98e5-a1ba8c4ce488.jpg)
2.0
[Mobile] Html breaking in Resources Text - Entered text in resource is breaking in mobile ![resources_](https://user-images.githubusercontent.com/60500517/103407288-28f76400-4b84-11eb-8d71-cb1d58e4ad95.png) ![Resources_text](https://user-images.githubusercontent.com/60500517/103407184-a373b400-4b83-11eb-98e5-a1ba8c4ce488.jpg)
process
html breaking in resources text entered text in resource is breaking in mobile
1
12,601
7,926,509,939
IssuesEvent
2018-07-06 02:35:23
raoulvdberge/refinedstorage
https://api.github.com/repos/raoulvdberge/refinedstorage
closed
[1.10.2 - Minor] Performance issue caused by renderupdates from disk-drives(?)
help wanted performance
#### Issue description: In a larger system with lots of disks and drives, larger crafting operations cause a lot of render updates because of the lights flickering between full and near-full. Mind you I do not know the exact impact it has on performance, I've just been taught that if csampler shows something continuously red![continuously red](https://cloud.githubusercontent.com/assets/24588620/21337635/5935f346-c66f-11e6-85bc-706fa8a3c773.png) it is usually bad. The operation pictured is with no machines running nearby(loaded chunk outside render distance), and quarries turned off so the system has nearly no extra I/O. #### What happens: Light show. #### What you expected to happen: Less of a light show. Maybe a more conservative cap be set on how often it updates? Maybe auto-defragmenting drives as well, but suspect that is more complicated than it sounds. Is this one of the reasons AE2 decided to go the route of forcing a max amount of types per drives? #### Steps to reproduce: 1. Large system with many drives and resources 2. Order something that is 1-1 conversion 3. Watch the light show and fancy colors with csampler ... #### Version (Make sure you are on the latest version before reporting): - Minecraft: 1.10.2 - Forge: 12.18.3.2185 - Refined Storage: 1.2.12 Does this issue occur on a server? [yes/no] No As an addendum: Considering the render-updates signifies way more going on behind the scenes and that I do not know anything about it, I thought it'd be safer to report in hopes it might be a piece of the puzzle in regards to performance issues on larger crafting jobs.
True
[1.10.2 - Minor] Performance issue caused by renderupdates from disk-drives(?) - #### Issue description: In a larger system with lots of disks and drives, larger crafting operations cause a lot of render updates because of the lights flickering between full and near-full. Mind you I do not know the exact impact it has on performance, I've just been taught that if csampler shows something continuously red![continuously red](https://cloud.githubusercontent.com/assets/24588620/21337635/5935f346-c66f-11e6-85bc-706fa8a3c773.png) it is usually bad. The operation pictured is with no machines running nearby(loaded chunk outside render distance), and quarries turned off so the system has nearly no extra I/O. #### What happens: Light show. #### What you expected to happen: Less of a light show. Maybe a more conservative cap be set on how often it updates? Maybe auto-defragmenting drives as well, but suspect that is more complicated than it sounds. Is this one of the reasons AE2 decided to go the route of forcing a max amount of types per drives? #### Steps to reproduce: 1. Large system with many drives and resources 2. Order something that is 1-1 conversion 3. Watch the light show and fancy colors with csampler ... #### Version (Make sure you are on the latest version before reporting): - Minecraft: 1.10.2 - Forge: 12.18.3.2185 - Refined Storage: 1.2.12 Does this issue occur on a server? [yes/no] No As an addendum: Considering the render-updates signifies way more going on behind the scenes and that I do not know anything about it, I thought it'd be safer to report in hopes it might be a piece of the puzzle in regards to performance issues on larger crafting jobs.
non_process
performance issue caused by renderupdates from disk drives issue description in a larger system with lots of disks and drives larger crafting operations cause a lot of render updates because of the lights flickering between full and near full mind you i do not know the exact impact it has on performance i ve just been taught that if csampler shows something continuously red it is usually bad the operation pictured is with no machines running nearby loaded chunk outside render distance and quarries turned off so the system has nearly no extra i o what happens light show what you expected to happen less of a light show maybe a more conservative cap be set on how often it updates maybe auto defragmenting drives as well but suspect that is more complicated than it sounds is this one of the reasons decided to go the route of forcing a max amount of types per drives steps to reproduce large system with many drives and resources order something that is conversion watch the light show and fancy colors with csampler version make sure you are on the latest version before reporting minecraft forge refined storage does this issue occur on a server no as an addendum considering the render updates signifies way more going on behind the scenes and that i do not know anything about it i thought it d be safer to report in hopes it might be a piece of the puzzle in regards to performance issues on larger crafting jobs
0
5,587
8,442,541,178
IssuesEvent
2018-10-18 13:30:44
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
shell-local post-processor fails for command that includes comma
post-processor/shell-local question
FOR BUGS: Describe the problem and include the following information: - Packer version from `packer version` Packer v1.3.1 - Host platform $ cat /etc/os-release NAME="Ubuntu" VERSION="18.04.1 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.1 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic - Debug log output from `PACKER_LOG=1 packer build template.json`. Template, kickstart ad the log files are available in this gist. https://gist.github.com/vikramhansawat/b06e73b46e42c6e60fa294ac4b6b9c04
1.0
shell-local post-processor fails for command that includes comma - FOR BUGS: Describe the problem and include the following information: - Packer version from `packer version` Packer v1.3.1 - Host platform $ cat /etc/os-release NAME="Ubuntu" VERSION="18.04.1 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.1 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic - Debug log output from `PACKER_LOG=1 packer build template.json`. Template, kickstart ad the log files are available in this gist. https://gist.github.com/vikramhansawat/b06e73b46e42c6e60fa294ac4b6b9c04
process
shell local post processor fails for command that includes comma for bugs describe the problem and include the following information packer version from packer version packer host platform cat etc os release name ubuntu version lts bionic beaver id ubuntu id like debian pretty name ubuntu lts version id home url support url bug report url privacy policy url version codename bionic ubuntu codename bionic debug log output from packer log packer build template json template kickstart ad the log files are available in this gist
1
10,661
13,453,144,070
IssuesEvent
2020-09-09 00:04:24
GoogleCloudPlatform/cloud-ops-sandbox
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
closed
Install cloud monitoring in Custom Cloud Shell Image
priority: p2 type: process
Currently there's a line in `install.sh` which installs Cloud Monitoring: ` python3 -m pip install google-cloud-monitoring`. It would make more sense in terms of workflow to install it in the Custom Cloud Shell image, which already handles terraform and python3 installations.
1.0
Install cloud monitoring in Custom Cloud Shell Image - Currently there's a line in `install.sh` which installs Cloud Monitoring: ` python3 -m pip install google-cloud-monitoring`. It would make more sense in terms of workflow to install it in the Custom Cloud Shell image, which already handles terraform and python3 installations.
process
install cloud monitoring in custom cloud shell image currently there s a line in install sh which installs cloud monitoring m pip install google cloud monitoring it would make more sense in terms of workflow to install it in the custom cloud shell image which already handles terraform and installations
1
120,651
17,644,250,816
IssuesEvent
2021-08-20 02:03:18
fbennets/HCLC-GDPR-Bot
https://api.github.com/repos/fbennets/HCLC-GDPR-Bot
opened
CVE-2021-29616 (High) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl
security vulnerability
## CVE-2021-29616 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. The implementation of TrySimplify(https://github.com/tensorflow/tensorflow/blob/c22d88d6ff33031aa113e48aa3fc9aa74ed79595/tensorflow/core/grappler/optimizers/arithmetic_optimizer.cc#L390-L401) has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29616>CVE-2021-29616</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-29616 (High) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-29616 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. The implementation of TrySimplify(https://github.com/tensorflow/tensorflow/blob/c22d88d6ff33031aa113e48aa3fc9aa74ed79595/tensorflow/core/grappler/optimizers/arithmetic_optimizer.cc#L390-L401) has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29616>CVE-2021-29616</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-4hvv-7x94-7vq8</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file hclc gdpr bot requirements txt path to vulnerable library hclc gdpr bot requirements txt dependency hierarchy tensorflow addons whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning the implementation of trysimplify has undefined behavior due to dereferencing a null pointer in corner cases that result in optimizing a node with no inputs the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
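The advisory in the record above names upgrading to TensorFlow 2.5.0 (or one of the patched backports) as the fix for CVE-2021-29616. A minimal sketch of a runtime version guard, assuming the simple ">= 2.5.0" remediation target; the `packaging` dependency and the error wording are illustrative choices, not part of the advisory:

```
# Guard against the TensorFlow range affected by CVE-2021-29616.
# Assumption: targeting the main 2.5.0 fix line rather than one of the
# patched backports (2.4.2 / 2.3.3 / 2.2.3 / 2.1.4).
from packaging import version

import tensorflow as tf

if version.parse(tf.__version__) < version.parse("2.5.0"):
    raise RuntimeError(
        f"TensorFlow {tf.__version__} falls in the range affected by "
        "CVE-2021-29616; upgrade to 2.5.0 or a patched backport."
    )
```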
371,506
25,954,444,451
IssuesEvent
2022-12-18 02:29:56
tf-libsonnet/core
https://api.github.com/repos/tf-libsonnet/core
closed
Create a CI job to ensure the docsonnets are always generated
documentation enhancement
Right now, it is a manual process to always remember to run `docsonnet` to generate the docs. Instead, we should make sure it is baked into the CI process and automatically commit if there is a diff.
1.0
Create a CI job to ensure the docsonnets are always generated - Right now, it is a manual process to always remember to run `docsonnet` to generate the docs. Instead, we should make sure it is baked into the CI process and automatically commit if there is a diff.
non_process
create a ci job to ensure the docsonnets are always generated right now it is a manual process to always remember to run docsonnet to generate the docs instead we should make sure it is baked into the ci process and automatically commit if there is a diff
0
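The record above asks for a CI step that regenerates the docsonnet output and reacts to any diff. A minimal sketch of the detection half, assuming `docsonnet` is invoked as a CLI (its exact arguments here are hypothetical) and that the generated files live under `docs/`; the auto-commit step is left out:

```
# Fail the build when generated docs are stale.
import subprocess
import sys

# Hypothetical invocation; the real docsonnet CLI arguments may differ.
subprocess.run(["docsonnet", "."], check=True)

# `git diff --exit-code` returns non-zero when the working tree changed.
stale = subprocess.run(["git", "diff", "--exit-code", "--", "docs/"])
if stale.returncode != 0:
    sys.exit("Generated docs are out of date: regenerate and commit them.")
```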
102,407
21,960,079,496
IssuesEvent
2022-05-24 15:07:25
Onelinerhub/onelinerhub
https://api.github.com/repos/Onelinerhub/onelinerhub
opened
Short solution needed: "How to plot heatmap" (python-matplotlib)
help wanted good first issue code python-matplotlib
Please help us write the most modern and shortest code solution for this issue: **How to plot heatmap** (technology: [python-matplotlib](https://onelinerhub.com/python-matplotlib)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
1.0
Short solution needed: "How to plot heatmap" (python-matplotlib) - Please help us write the most modern and shortest code solution for this issue: **How to plot heatmap** (technology: [python-matplotlib](https://onelinerhub.com/python-matplotlib)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
non_process
short solution needed how to plot heatmap python matplotlib please help us write the most modern and shortest code solution for this issue how to plot heatmap technology fast way just write the code solution in the comments preferred way create with a new code file inside don t forget to explain the solution link to this issue in the comments of the pull request
0
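The record above requests a short matplotlib heatmap solution. A minimal sketch of one common answer, using `imshow` over a 2-D array; the random matrix and the colormap are placeholders:

```
# Plot a 2-D array as a heatmap with matplotlib.
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(10, 12)     # placeholder data
plt.imshow(data, cmap="viridis")  # one colored cell per matrix entry
plt.colorbar()                    # legend mapping colors back to values
plt.show()
```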
11,293
14,101,157,021
IssuesEvent
2020-11-06 06:12:15
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
multiprocess pickle error
module: multiprocessing module: serialization triaged
I run in multiprocess mode, report error. ``` import torch.multiprocessing as mp mp.spawn( self._process, nprocs=self.gpu_nums) ``` error info: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated ... ... Traceback (most recent call last): File "yolo_v3_train.py", line 192, in <module> process.run() File "/usr/local/python/lib/python3.7/site-packages/dh_aiflow/dh_aiflow_process/dh_aiflow_process.py", line 291, in run nprocs=self._dist_config.ngpus_per_node) File "/usr/local/python/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn while not spawn_context.join(): File "/usr/local/python/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 107, in join (error_index, name) Exception: process 5 terminated with signal SIGKILL Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) ``` cc @mruberry
1.0
multiprocess pickle error - I run in multiprocess mode, report error. ``` import torch.multiprocessing as mp mp.spawn( self._process, nprocs=self.gpu_nums) ``` error info: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated ... ... Traceback (most recent call last): File "yolo_v3_train.py", line 192, in <module> process.run() File "/usr/local/python/lib/python3.7/site-packages/dh_aiflow/dh_aiflow_process/dh_aiflow_process.py", line 291, in run nprocs=self._dist_config.ngpus_per_node) File "/usr/local/python/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn while not spawn_context.join(): File "/usr/local/python/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 107, in join (error_index, name) Exception: process 5 terminated with signal SIGKILL Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main exitcode = _main(fd) File "/usr/local/python/lib/python3.7/multiprocessing/spawn.py", line 115, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) /usr/local/python/lib/python3.7/multiprocessing/semaphore_tracker.py:144: UserWarning: semaphore_tracker: There appear to be 20 leaked semaphores to clean up at shutdown len(cache)) ``` cc @mruberry
process
multiprocess pickle error i run in multiprocess mode report error import torch multiprocessing as mp mp spawn self process nprocs self gpu nums error info traceback most recent call last file line in file usr local python lib multiprocessing spawn py line in spawn main exitcode main fd file usr local python lib multiprocessing spawn py line in main self reduction pickle load from parent pickle unpicklingerror pickle data was truncated traceback most recent call last file line in file usr local python lib multiprocessing spawn py line in spawn main exitcode main fd file usr local python lib multiprocessing spawn py line in main self reduction pickle load from parent pickle unpicklingerror pickle data was truncated traceback most recent call last file line in file usr local python lib multiprocessing spawn py line in spawn main exitcode main fd file usr local python lib multiprocessing spawn py line in main self reduction pickle load from parent pickle unpicklingerror pickle data was truncated traceback most recent call last file yolo train py line in process run file usr local python lib site packages dh aiflow dh aiflow process dh aiflow process py line in run nprocs self dist config ngpus per node file usr local python lib site packages torch multiprocessing spawn py line in spawn while not spawn context join file usr local python lib site packages torch multiprocessing spawn py line in join error index name exception process terminated with signal sigkill traceback most recent call last file line in file usr local python lib multiprocessing spawn py line in spawn main exitcode main fd file usr local python lib multiprocessing spawn py line in main self reduction pickle load from parent pickle unpicklingerror pickle data was truncated usr local python lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache usr local python lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache usr local python lib multiprocessing semaphore tracker py userwarning semaphore tracker there appear to be leaked semaphores to clean up at shutdown len cache cc mruberry
1
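The traceback in the record above ("pickle data was truncated") is commonly triggered when the spawn target drags a large object graph through pickling, as a bound method like `self._process` does, since `mp.spawn` pickles the target and its args for every child. A minimal sketch of a lighter pattern, with a module-level worker and small args; all names here are illustrative:

```
import torch.multiprocessing as mp


def _process(rank: int, gpu_nums: int) -> None:
    # mp.spawn always passes the worker's rank as the first argument.
    print(f"worker {rank} of {gpu_nums}")


if __name__ == "__main__":
    gpu_nums = 4  # placeholder for the real GPU count
    mp.spawn(_process, args=(gpu_nums,), nprocs=gpu_nums)
```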
66,324
27,416,766,190
IssuesEvent
2023-03-01 14:17:02
hashicorp/nomad
https://api.github.com/repos/hashicorp/nomad
reopened
Large number of warning logs with nomad service provider
type/bug stage/accepted theme/service-discovery/nomad
<!-- Hi there, Thank you for opening an issue. Please note that we try to keep the Nomad issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.nomadproject.io/community --> ### Nomad version 1.4.4 ### Issue ## logs( journalctl -fu nomad ) ``` Feb 22 11:19:06 ai-149 nomad[10066]: 2023-02-22T11:19:06.652+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:07 ai-149 nomad[10066]: 2023-02-22T11:19:07.653+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:08 ai-149 nomad[10066]: 2023-02-22T11:19:08.653+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:09 ai-149 nomad[10066]: 2023-02-22T11:19:09.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:10 ai-149 nomad[10066]: 2023-02-22T11:19:10.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:11 ai-149 nomad[10066]: 2023-02-22T11:19:11.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:12 ai-149 nomad[10066]: 2023-02-22T11:19:12.655+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:13 ai-149 nomad[10066]: 2023-02-22T11:19:13.655+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:14 ai-149 nomad[10066]: 2023-02-22T11:19:14.656+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:15 ai-149 nomad[10066]: 2023-02-22T11:19:15.656+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:16 ai-149 nomad[10066]: 2023-02-22T11:19:16.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:17 ai-149 nomad[10066]: 2023-02-22T11:19:17.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:18 ai-149 nomad[10066]: 2023-02-22T11:19:18.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:19 ai-149 nomad[10066]: 2023-02-22T11:19:19.658+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:20 ai-149 nomad[10066]: 2023-02-22T11:19:20.658+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:21 ai-149 nomad[10066]: 2023-02-22T11:19:21.659+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:22 ai-149 nomad[10066]: 2023-02-22T11:19:22.659+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:23 ai-149 nomad[10066]: 2023-02-22T11:19:23.660+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:24 ai-149 nomad[10066]: 2023-02-22T11:19:24.660+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:25 ai-149 nomad[10066]: 2023-02-22T11:19:25.661+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:26 ai-149 nomad[10066]: 2023-02-22T11:19:26.661+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:27 
ai-149 nomad[10066]: 2023-02-22T11:19:27.662+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:28 ai-149 nomad[10066]: 2023-02-22T11:19:28.662+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:29 ai-149 nomad[10066]: 2023-02-22T11:19:29.663+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:30 ai-149 nomad[10066]: 2023-02-22T11:19:30.663+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:31 ai-149 nomad[10066]: 2023-02-22T11:19:31.664+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:32 ai-149 nomad[10066]: 2023-02-22T11:19:32.664+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:33 ai-149 nomad[10066]: 2023-02-22T11:19:33.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:34 ai-149 nomad[10066]: 2023-02-22T11:19:34.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:35 ai-149 nomad[10066]: 2023-02-22T11:19:35.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:36 ai-149 nomad[10066]: 2023-02-22T11:19:36.666+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:37 ai-149 nomad[10066]: 2023-02-22T11:19:37.667+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:38 ai-149 nomad[10066]: 2023-02-22T11:19:38.667+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:39 ai-149 nomad[10066]: 2023-02-22T11:19:39.668+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:40 ai-149 nomad[10066]: 2023-02-22T11:19:40.668+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:41 ai-149 nomad[10066]: 2023-02-22T11:19:41.669+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:42 ai-149 nomad[10066]: 2023-02-22T11:19:42.669+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:43 ai-149 nomad[10066]: 2023-02-22T11:19:43.670+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:44 ai-149 nomad[10066]: 2023-02-22T11:19:44.670+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:45 ai-149 nomad[10066]: 2023-02-22T11:19:45.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:46 ai-149 nomad[10066]: 2023-02-22T11:19:46.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:47 ai-149 nomad[10066]: 2023-02-22T11:19:47.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:48 ai-149 nomad[10066]: 2023-02-22T11:19:48.672+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:49 ai-149 nomad[10066]: 2023-02-22T11:19:49.672+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:50 ai-149 
nomad[10066]: 2023-02-22T11:19:50.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:51 ai-149 nomad[10066]: 2023-02-22T11:19:51.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:52 ai-149 nomad[10066]: 2023-02-22T11:19:52.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:53 ai-149 nomad[10066]: 2023-02-22T11:19:53.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:54 ai-149 nomad[10066]: 2023-02-22T11:19:54.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:55 ai-149 nomad[10066]: 2023-02-22T11:19:55.674+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:56 ai-149 nomad[10066]: 2023-02-22T11:19:56.674+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:57 ai-149 nomad[10066]: 2023-02-22T11:19:57.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:58 ai-149 nomad[10066]: 2023-02-22T11:19:58.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:59 ai-149 nomad[10066]: 2023-02-22T11:19:59.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:00 ai-149 nomad[10066]: 2023-02-22T11:20:00.676+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:01 ai-149 nomad[10066]: 2023-02-22T11:20:01.676+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:02 ai-149 nomad[10066]: 2023-02-22T11:20:02.677+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:03 ai-149 nomad[10066]: 2023-02-22T11:20:03.677+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:04 ai-149 nomad[10066]: 2023-02-22T11:20:04.678+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:05 ai-149 nomad[10066]: 2023-02-22T11:20:05.678+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:06 ai-149 nomad[10066]: 2023-02-22T11:20:06.679+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:07 ai-149 nomad[10066]: 2023-02-22T11:20:07.680+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:08 ai-149 nomad[10066]: 2023-02-22T11:20:08.680+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:09 ai-149 nomad[10066]: 2023-02-22T11:20:09.681+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:10 ai-149 nomad[10066]: 2023-02-22T11:20:10.681+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:11 ai-149 nomad[10066]: 2023-02-22T11:20:11.682+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:12 ai-149 nomad[10066]: 2023-02-22T11:20:12.682+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 ``` ## job.hcl ``` job "demo" { 
datacenters = ["dc1"] type = "service" update { max_parallel = 1 min_healthy_time = "10s" healthy_deadline = "3m" auto_revert = true auto_promote = true canary = 1 } reschedule { attempts = 15 interval = "1h" delay = "15s" delay_function = "exponential" max_delay = "120s" unlimited = false } group "service" { scaling { enabled = true min = 1 max = 3 } restart { interval = "3m" attempts = 3 delay = "15s" mode = "delay" } network { port "http" { to = 8080 } } service { name = "test" port = "http" address_mode = "host" tags = [ ] provider = "nomad" check { type = "http" port = "http" path = "/" interval = "12s" timeout = "6s" check_restart { limit = 3 grace = "10s" } } } task "app" { driver = "docker" config { image = "...." command = "python" ports = ["http"] args = [ ] } } } } ```
1.0
Large number of warning logs with nomad service provider - <!-- Hi there, Thank you for opening an issue. Please note that we try to keep the Nomad issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.nomadproject.io/community --> ### Nomad version 1.4.4 ### Issue ## logs( journalctl -fu nomad ) ``` Feb 22 11:19:06 ai-149 nomad[10066]: 2023-02-22T11:19:06.652+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:07 ai-149 nomad[10066]: 2023-02-22T11:19:07.653+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:08 ai-149 nomad[10066]: 2023-02-22T11:19:08.653+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:09 ai-149 nomad[10066]: 2023-02-22T11:19:09.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:10 ai-149 nomad[10066]: 2023-02-22T11:19:10.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:11 ai-149 nomad[10066]: 2023-02-22T11:19:11.654+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:12 ai-149 nomad[10066]: 2023-02-22T11:19:12.655+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:13 ai-149 nomad[10066]: 2023-02-22T11:19:13.655+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:14 ai-149 nomad[10066]: 2023-02-22T11:19:14.656+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:15 ai-149 nomad[10066]: 2023-02-22T11:19:15.656+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:16 ai-149 nomad[10066]: 2023-02-22T11:19:16.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:17 ai-149 nomad[10066]: 2023-02-22T11:19:17.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:18 ai-149 nomad[10066]: 2023-02-22T11:19:18.657+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:19 ai-149 nomad[10066]: 2023-02-22T11:19:19.658+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:20 ai-149 nomad[10066]: 2023-02-22T11:19:20.658+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:21 ai-149 nomad[10066]: 2023-02-22T11:19:21.659+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:22 ai-149 nomad[10066]: 2023-02-22T11:19:22.659+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:23 ai-149 nomad[10066]: 2023-02-22T11:19:23.660+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:24 ai-149 nomad[10066]: 2023-02-22T11:19:24.660+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:25 ai-149 nomad[10066]: 2023-02-22T11:19:25.661+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:26 ai-149 nomad[10066]: 2023-02-22T11:19:26.661+0800 [WARN] watch.checks: watched check not found: 
check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:27 ai-149 nomad[10066]: 2023-02-22T11:19:27.662+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:28 ai-149 nomad[10066]: 2023-02-22T11:19:28.662+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:29 ai-149 nomad[10066]: 2023-02-22T11:19:29.663+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:30 ai-149 nomad[10066]: 2023-02-22T11:19:30.663+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:31 ai-149 nomad[10066]: 2023-02-22T11:19:31.664+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:32 ai-149 nomad[10066]: 2023-02-22T11:19:32.664+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:33 ai-149 nomad[10066]: 2023-02-22T11:19:33.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:34 ai-149 nomad[10066]: 2023-02-22T11:19:34.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:35 ai-149 nomad[10066]: 2023-02-22T11:19:35.665+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:36 ai-149 nomad[10066]: 2023-02-22T11:19:36.666+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:37 ai-149 nomad[10066]: 2023-02-22T11:19:37.667+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:38 ai-149 nomad[10066]: 2023-02-22T11:19:38.667+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:39 ai-149 nomad[10066]: 2023-02-22T11:19:39.668+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:40 ai-149 nomad[10066]: 2023-02-22T11:19:40.668+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:41 ai-149 nomad[10066]: 2023-02-22T11:19:41.669+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:42 ai-149 nomad[10066]: 2023-02-22T11:19:42.669+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:43 ai-149 nomad[10066]: 2023-02-22T11:19:43.670+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:44 ai-149 nomad[10066]: 2023-02-22T11:19:44.670+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:45 ai-149 nomad[10066]: 2023-02-22T11:19:45.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:46 ai-149 nomad[10066]: 2023-02-22T11:19:46.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:47 ai-149 nomad[10066]: 2023-02-22T11:19:47.671+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:48 ai-149 nomad[10066]: 2023-02-22T11:19:48.672+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:49 ai-149 nomad[10066]: 2023-02-22T11:19:49.672+0800 [WARN] watch.checks: watched check not found: 
check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:50 ai-149 nomad[10066]: 2023-02-22T11:19:50.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:51 ai-149 nomad[10066]: 2023-02-22T11:19:51.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:52 ai-149 nomad[10066]: 2023-02-22T11:19:52.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:53 ai-149 nomad[10066]: 2023-02-22T11:19:53.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:54 ai-149 nomad[10066]: 2023-02-22T11:19:54.673+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:55 ai-149 nomad[10066]: 2023-02-22T11:19:55.674+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:56 ai-149 nomad[10066]: 2023-02-22T11:19:56.674+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:57 ai-149 nomad[10066]: 2023-02-22T11:19:57.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:58 ai-149 nomad[10066]: 2023-02-22T11:19:58.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:19:59 ai-149 nomad[10066]: 2023-02-22T11:19:59.675+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:00 ai-149 nomad[10066]: 2023-02-22T11:20:00.676+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:01 ai-149 nomad[10066]: 2023-02-22T11:20:01.676+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:02 ai-149 nomad[10066]: 2023-02-22T11:20:02.677+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:03 ai-149 nomad[10066]: 2023-02-22T11:20:03.677+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:04 ai-149 nomad[10066]: 2023-02-22T11:20:04.678+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:05 ai-149 nomad[10066]: 2023-02-22T11:20:05.678+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:06 ai-149 nomad[10066]: 2023-02-22T11:20:06.679+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:07 ai-149 nomad[10066]: 2023-02-22T11:20:07.680+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:08 ai-149 nomad[10066]: 2023-02-22T11:20:08.680+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:09 ai-149 nomad[10066]: 2023-02-22T11:20:09.681+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:10 ai-149 nomad[10066]: 2023-02-22T11:20:10.681+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:11 ai-149 nomad[10066]: 2023-02-22T11:20:11.682+0800 [WARN] watch.checks: watched check not found: check_id=90d04bc04f044aba3fe01081bff55de1 Feb 22 11:20:12 ai-149 nomad[10066]: 2023-02-22T11:20:12.682+0800 [WARN] watch.checks: watched check not found: 
check_id=90d04bc04f044aba3fe01081bff55de1 ``` ## job.hcl ``` job "demo" { datacenters = ["dc1"] type = "service" update { max_parallel = 1 min_healthy_time = "10s" healthy_deadline = "3m" auto_revert = true auto_promote = true canary = 1 } reschedule { attempts = 15 interval = "1h" delay = "15s" delay_function = "exponential" max_delay = "120s" unlimited = false } group "service" { scaling { enabled = true min = 1 max = 3 } restart { interval = "3m" attempts = 3 delay = "15s" mode = "delay" } network { port "http" { to = 8080 } } service { name = "test" port = "http" address_mode = "host" tags = [ ] provider = "nomad" check { type = "http" port = "http" path = "/" interval = "12s" timeout = "6s" check_restart { limit = 3 grace = "10s" } } } task "app" { driver = "docker" config { image = "...." command = "python" ports = ["http"] args = [ ] } } } } ```
non_process
large number of warning logs with nomad service provider hi there thank you for opening an issue please note that we try to keep the nomad issue tracker reserved for bug reports and feature requests for general usage questions please see nomad version issue logs journalctl fu nomad feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks 
watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id feb ai nomad watch checks watched check not found check id job hcl job demo datacenters type service update max parallel min healthy time healthy deadline auto revert true auto promote true canary reschedule attempts interval delay delay function exponential max delay unlimited false group service scaling enabled true min max restart interval attempts delay mode delay network port http to service name test port http address mode host tags provider nomad check type http port http path interval timeout check restart limit grace task app driver docker config image command python ports args
0
163,932
12,750,698,608
IssuesEvent
2020-06-27 06:20:31
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: X-Pack Jest Tests.x-pack/plugins/lens/public/indexpattern_datasource - loader loadInitialState should load a default state when lastUsedIndexPatternId is not found in indexPatternRefs
Team:KibanaApp blocker failed-test skipped-test v8.0.0
A test failed on a tracked branch ``` Error: expect(received).toMatchObject(expected) - Expected - 1 + Received + 0 @@ -66,7 +66,6 @@ "timeFieldName": "timestamp", "title": "my-fake-index-pattern", }, }, "layers": Object {}, - "showEmptyFields": false, } at Object.it (/dev/shm/workspace/kibana/x-pack/plugins/lens/public/indexpattern_datasource/loader.test.ts:310:21) at process._tickCallback (internal/process/next_tick.js:68:7) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/6201/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/lens/public/indexpattern_datasource","test.name":"loader loadInitialState should load a default state when lastUsedIndexPatternId is not found in indexPatternRefs","test.failCount":2}} -->
2.0
Failing test: X-Pack Jest Tests.x-pack/plugins/lens/public/indexpattern_datasource - loader loadInitialState should load a default state when lastUsedIndexPatternId is not found in indexPatternRefs - A test failed on a tracked branch ``` Error: expect(received).toMatchObject(expected) - Expected - 1 + Received + 0 @@ -66,7 +66,6 @@ "timeFieldName": "timestamp", "title": "my-fake-index-pattern", }, }, "layers": Object {}, - "showEmptyFields": false, } at Object.it (/dev/shm/workspace/kibana/x-pack/plugins/lens/public/indexpattern_datasource/loader.test.ts:310:21) at process._tickCallback (internal/process/next_tick.js:68:7) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/6201/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/lens/public/indexpattern_datasource","test.name":"loader loadInitialState should load a default state when lastUsedIndexPatternId is not found in indexPatternRefs","test.failCount":2}} -->
non_process
failing test x pack jest tests x pack plugins lens public indexpattern datasource loader loadinitialstate should load a default state when lastusedindexpatternid is not found in indexpatternrefs a test failed on a tracked branch error expect received tomatchobject expected expected received timefieldname timestamp title my fake index pattern layers object showemptyfields false at object it dev shm workspace kibana x pack plugins lens public indexpattern datasource loader test ts at process tickcallback internal process next tick js first failure
0
390,384
26,860,457,767
IssuesEvent
2023-02-03 17:54:05
danielsp13/SuperCatch
https://api.github.com/repos/danielsp13/SuperCatch
closed
[M1 - Dev] Document the choice of NLTK
documentation
As described at the end of #35, the library that will help with language processing will be `NLTK`. Other candidates could have been considered, but the choice of this tool is clear. The reason why must be justified in the project documentation, in a file. **** It must include the following: 1. I have consulted documents from projects that discuss (or relate to) #34, such as [Análisis automático de textos en español utilizando NLTK](https://riull.ull.es/xmlui/bitstream/handle/915/3082/Analisis%20automatico%20de%20textos%20en%20espanol%20utilizando%20NLTK.pdf?sequence=1&isAllowed=y): sections 1.4 and 1.5 of the document. 2. I have asked two friends of mine who have been taking the *Information Retrieval (RI)* course whether they had any knowledge of this matter. In the discussion, they told me about the various tools for this purpose and provided me with documentation of interest that covers these aspects and mentions the tool.
1.0
[M1 - Dev] Document the choice of NLTK - As described at the end of #35, the library that will help with language processing will be `NLTK`. Other candidates could have been considered, but the choice of this tool is clear. The reason why must be justified in the project documentation, in a file. **** It must include the following: 1. I have consulted documents from projects that discuss (or relate to) #34, such as [Análisis automático de textos en español utilizando NLTK](https://riull.ull.es/xmlui/bitstream/handle/915/3082/Analisis%20automatico%20de%20textos%20en%20espanol%20utilizando%20NLTK.pdf?sequence=1&isAllowed=y): sections 1.4 and 1.5 of the document. 2. I have asked two friends of mine who have been taking the *Information Retrieval (RI)* course whether they had any knowledge of this matter. In the discussion, they told me about the various tools for this purpose and provided me with documentation of interest that covers these aspects and mentions the tool.
non_process
document the choice of nltk as described at the end of the library that will help with language processing will be nltk other candidates could have been considered but the choice of this tool is clear the reason why must be justified in the project documentation in a file it must include the following i have consulted documents from projects that discuss or relate to such as sections and of the document i have asked two friends of mine who have been taking the information retrieval ri course whether they had any knowledge of this matter in the discussion they told me about the various tools for this purpose and provided me with documentation of interest that covers these aspects and mentions the tool
0
20,695
27,367,951,483
IssuesEvent
2023-02-27 20:46:45
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
OS packages out of date
bug process
### Description The OS packages within the containers are out of date despite running apt commands to update them ### Steps to reproduce 1. docker build ### Additional context _No response_ ### Hedera network other ### Version main ### Operating system None
1.0
OS packages out of date - ### Description The OS packages within the containers are out of date despite running apt commands to update them ### Steps to reproduce 1. docker build ### Additional context _No response_ ### Hedera network other ### Version main ### Operating system None
process
os packages out of date description the os packages within the containers are out of date despite running apt commands to update them steps to reproduce docker build additional context no response hedera network other version main operating system none
1
7,155
10,298,493,466
IssuesEvent
2019-08-28 16:29:33
PPHubApp/PPHub-Feedback
https://api.github.com/repos/PPHubApp/PPHub-Feedback
closed
v1.9.3.113 - 1 Hope to add a dark mode 2 Hope to add...
Feature 🤔 Processing 👨🏻‍💻🚧
1 Hope to add a dark mode 2 Hope to add the ability to view releases Runtime environment: iPad Air 2 - iOS13.0 - v1.9.3.113
1.0
v1.9.3.113 - 1 Hope to add a dark mode 2 Hope to add... - 1 Hope to add a dark mode 2 Hope to add the ability to view releases Runtime environment: iPad Air 2 - iOS13.0 - v1.9.3.113
process
hope to add a dark mode hope to add hope to add a dark mode hope to add the ability to view release runtime environment ipad air
1
14,624
17,766,807,493
IssuesEvent
2021-08-30 08:35:24
googleapis/nodejs-essential-contacts
https://api.github.com/repos/googleapis/nodejs-essential-contacts
closed
Dependency Dashboard
type: process api: essentialcontacts
This issue contains a list of Renovate updates and their statuses. ## Awaiting Schedule These updates are awaiting their schedule. Click on a checkbox to get an update now. - [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->Update actions/setup-node action to v2 ## Ignored or Blocked These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below. - [ ] <!-- recreate-branch=renovate/mocha-9.x -->[Update dependency mocha to v9](../pull/10) (`mocha`, `@types/mocha`) --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
1.0
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses. ## Awaiting Schedule These updates are awaiting their schedule. Click on a checkbox to get an update now. - [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->Update actions/setup-node action to v2 ## Ignored or Blocked These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below. - [ ] <!-- recreate-branch=renovate/mocha-9.x -->[Update dependency mocha to v9](../pull/10) (`mocha`, `@types/mocha`) --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
process
dependency dashboard this issue contains a list of renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now update actions setup node action to ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull mocha types mocha check this box to trigger a request for renovate to run again on this repository
1
10,298
13,151,520,678
IssuesEvent
2020-08-09 17:10:17
AmpersandTarski/Ampersand
https://api.github.com/repos/AmpersandTarski/Ampersand
closed
Feature request for enhanced testing of prototype framework existence/loading
component:prototype generator enhancement software process
It sometimes happens, particularly with novice &-users, that generation of a prototype fails and a message appears that says that some part of the prototype framework is missing, e.g. a template. Currently, the & prototype generator does not generate the frontend files when the prototype directory exists and is not empty. However, what it should do is test for the existence of a properly installed frontend. This can be easily achieved by - checking whether or not the prototype directory exists, and if so, whether or not it contains a semaphore file, and if so, refrain from installing the frontend files. In all other cases the frontend files should be installed, optionally overwriting existing stuff. - writing a semaphore file to the prototype directory after the frontend files are succesfully installed.
1.0
Feature request for enhanced testing of prototype framework existence/loading - It sometimes happens, particularly with novice &-users, that generation of a prototype fails and a message appears that says that some part of the prototype framework is missing, e.g. a template. Currently, the & prototype generator does not generate the frontend files when the prototype directory exists and is not empty. However, what it should do is test for the existence of a properly installed frontend. This can be easily achieved by - checking whether or not the prototype directory exists, and if so, whether or not it contains a semaphore file, and if so, refrain from installing the frontend files. In all other cases the frontend files should be installed, optionally overwriting existing stuff. - writing a semaphore file to the prototype directory after the frontend files are succesfully installed.
process
feature request for enhanced testing of prototype framework existence loading it sometimes happens particularly with novice users that generation of a prototype fails and a message appears that says that some part of the prototype framework is missing e g a template currently the prototype generator does not generate the frontend files when the prototype directory exists and is not empty however what it should do is test for the existence of a properly installed frontend this can be easily achieved by checking whether or not the prototype directory exists and if so whether or not it contains a semaphore file and if so refrain from installing the frontend files in all other cases the frontend files should be installed optionally overwriting existing stuff writing a semaphore file to the prototype directory after the frontend files are succesfully installed
1
18,779
24,681,484,476
IssuesEvent
2022-10-18 21:50:14
NEARWEEK/CORE
https://api.github.com/repos/NEARWEEK/CORE
opened
Auto-generated email process for Bounty on Fund3r
Process
When the user goes through the process of applying for being a whitelisted bounty hunter, they need to get email as notifications of their progress. These will be auto-generated emails triggered from .admin. These will from the start to the end include: - [ ] Email about confirmation of meeting with admin team - [ ] Email if the applicant got accepted from call or not (approved/rejected email is sent), with possibility for admin to leave comments for explaining the decision. - [ ] An email from HelloSign asking to sign document for legal agreement - [ ] Final Onboarding email ## 🤼‍♂️ Reviewer @Kisgus ## 🔗 Work doc(s) / inspirational links https://docs.google.com/document/d/1tXCrfYK4YkuNLlnJdCzGuVavwwbu2rXWkmgjf65Q84Y/edit
1.0
Auto-generated email process for Bounty on Fund3r - When the user goes through the process of applying for being a whitelisted bounty hunter, they need to get email as notifications of their progress. These will be auto-generated emails triggered from .admin. These will from the start to the end include: - [ ] Email about confirmation of meeting with admin team - [ ] Email if the applicant got accepted from call or not (approved/rejected email is sent), with possibility for admin to leave comments for explaining the decision. - [ ] An email from HelloSign asking to sign document for legal agreement - [ ] Final Onboarding email ## 🤼‍♂️ Reviewer @Kisgus ## 🔗 Work doc(s) / inspirational links https://docs.google.com/document/d/1tXCrfYK4YkuNLlnJdCzGuVavwwbu2rXWkmgjf65Q84Y/edit
process
auto generated email process for bounty on when the user goes through the process of applying for being a whitelisted bounty hunter they need to get email as notifications of their progress these will be auto generated emails triggered from admin these will from the start to the end include email about confirmation of meeting with admin team email if the applicant got accepted from call or not approved rejected email is sent with possibility for admin to leave comments for explaining the decision an email from hellosign asking to sign document for legal agreement final onboarding email 🤼‍♂️ reviewer kisgus 🔗 work doc s inspirational links
1
22,891
4,857,394,253
IssuesEvent
2016-11-12 15:47:01
aurelia/documentation
https://api.github.com/repos/aurelia/documentation
closed
updateSource, updateTarget and callSource are documented as properties instead of methods
documentation
In the API docs page for `Binding`, `updateSource`, `updateTarget` and `callSource` are documented as properties instead of methods. They are methods. Would be nice to know what "Source" and "Target" refer to. I would assume viewmodel and element, but would like to see it explicit in the docs. (I might add (forgive me) that the purpose of `callSource` is really not clear.)
1.0
updateSource, updateTarget and callSource are documented as properties instead of methods - In the API docs page for `Binding`, `updateSource`, `updateTarget` and `callSource` are documented as properties instead of methods. They are methods. Would be nice to know what "Source" and "Target" refer to. I would assume viewmodel and element, but would like to see it explicit in the docs. (I might add (forgive me) that the purpose of `callSource` is really not clear.)
non_process
updatesource updatetarget and callsource are documented as properties instead of methods in the api docs page for binding updatesource updatetarget and callsource are documented as properties instead of methods they are methods would be nice to know what source and target refer to i would assume viewmodel and element but would like to see it explicit in the docs i might add forgive me that the purpose of callsource is really not clear
0
7,146
10,290,731,283
IssuesEvent
2019-08-27 12:46:14
TorXakis/TorXakis
https://api.github.com/repos/TorXakis/TorXakis
closed
build on SemaphoreCI failed with write access due to stack lock files
development-process
https://semaphoreci.com/torxakis-admin/torxakis/branches/develop/builds/454 failed with ``` ... remote: Resolving deltas: 100% (2677/2677) remote: Resolving deltas: 100% (2677/2677), completed with 3 local objects. remote: error: GH006: Protected branch update failed for refs/heads/develop. remote: error: At least 1 approving review is required by reviewers with write access. To https://github.com/TorXakis/TorXakis.git ! [remote rejected] develop -> develop (protected branch hook declined) error: failed to push some refs to 'https://torxakis-admin:11c19c48d8f673f1715178f2f736b06b04144e36@github.com/TorXakis/TorXakis.git' ``` Why did this happen? Did approver of the change not have enough rights? Why was this not detected earlier?
1.0
build on SemaphoreCI failed with write access due to stack lock files - https://semaphoreci.com/torxakis-admin/torxakis/branches/develop/builds/454 failed with ``` ... remote: Resolving deltas: 100% (2677/2677) remote: Resolving deltas: 100% (2677/2677), completed with 3 local objects. remote: error: GH006: Protected branch update failed for refs/heads/develop. remote: error: At least 1 approving review is required by reviewers with write access. To https://github.com/TorXakis/TorXakis.git ! [remote rejected] develop -> develop (protected branch hook declined) error: failed to push some refs to 'https://torxakis-admin:11c19c48d8f673f1715178f2f736b06b04144e36@github.com/TorXakis/TorXakis.git' ``` Why did this happen? Did approver of the change not have enough rights? Why was this not detected earlier?
process
build on semaphoreci failed with write access due to stack lock files failed with remote resolving deltas remote resolving deltas completed with local objects remote error protected branch update failed for refs heads develop remote error at least approving review is required by reviewers with write access to develop develop protected branch hook declined error failed to push some refs to why did this happen did approver of the change not have enough rights why was this not detected earlier
1
28,901
23,595,918,657
IssuesEvent
2022-08-23 19:13:43
carbon-language/carbon-lang
https://api.github.com/repos/carbon-language/carbon-lang
closed
new_proposal.py: could not request reviewer: 'carbon-language/carbon-leads'
infrastructure
I tried to create a new proposal and I got the following error: ``` RUNNING: /usr/bin/gh pr create --draft --label proposal --project Proposals --reviewer carbon-language/carbon-leads --repo carbon-language/carbon-lang --title 'Multidimensional array' --body 'TODO: add summary and links here' Warning: 1 uncommitted change could not request reviewer: 'carbon-language/carbon-leads' not found ERROR: Command failed: /usr/bin/gh pr create --draft --label proposal --project Proposals --reviewer carbon-language/carbon-leads --repo carbon-language/carbon-lang --title 'Multidimensional array' --body 'TODO: add summary and links here' ``` After removing `--reviewer carbon-language/carbon-leads`, PR was successfully created. My version of `gh` is: ``` $ gh --version gh version 2.14.2 (2022-07-15) https://github.com/cli/cli/releases/tag/v2.14.2 ```
1.0
new_proposal.py: could not request reviewer: 'carbon-language/carbon-leads' - I tried to create a new proposal and I got the following error: ``` RUNNING: /usr/bin/gh pr create --draft --label proposal --project Proposals --reviewer carbon-language/carbon-leads --repo carbon-language/carbon-lang --title 'Multidimensional array' --body 'TODO: add summary and links here' Warning: 1 uncommitted change could not request reviewer: 'carbon-language/carbon-leads' not found ERROR: Command failed: /usr/bin/gh pr create --draft --label proposal --project Proposals --reviewer carbon-language/carbon-leads --repo carbon-language/carbon-lang --title 'Multidimensional array' --body 'TODO: add summary and links here' ``` After removing `--reviewer carbon-language/carbon-leads`, PR was successfully created. My version of `gh` is: ``` $ gh --version gh version 2.14.2 (2022-07-15) https://github.com/cli/cli/releases/tag/v2.14.2 ```
non_process
new proposal py could not request reviewer carbon language carbon leads i tried to create a new proposal and i got the following error running usr bin gh pr create draft label proposal project proposals reviewer carbon language carbon leads repo carbon language carbon lang title multidimensional array body todo add summary and links here warning uncommitted change could not request reviewer carbon language carbon leads not found error command failed usr bin gh pr create draft label proposal project proposals reviewer carbon language carbon leads repo carbon language carbon lang title multidimensional array body todo add summary and links here after removing reviewer carbon language carbon leads pr was successfully created my version of gh is gh version gh version
0
1,206
3,703,230,013
IssuesEvent
2016-02-29 19:39:19
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
closed
AppG Processor didn't work as expected (CR #8103)
AppGPostProcess S2 - Medium suggestion
###### Added on 2010-04-27 16:04 by @lklawrie -- #### Description User couldn't figure out why her input file ran, but the AppG postprocessor didn't. Solution was that the meter name entered was incorrect -- warning in EnergyPlus but there were no meter files so the AppG postprocessor didnt. This is a suggestion to make a readable error file (the errors were in the command window but hard to decipher) that would show as a button in EP-Launch. Input File: 8103- -- External Ref: ticket 2422 Last build tested: `10.04.26 V5.0.0.031 - Release`
1.0
AppG Processor didn't work as expected (CR #8103) - ###### Added on 2010-04-27 16:04 by @lklawrie -- #### Description User couldn't figure out why her input file ran, but the AppG postprocessor didn't. Solution was that the meter name entered was incorrect -- warning in EnergyPlus but there were no meter files so the AppG postprocessor didnt. This is a suggestion to make a readable error file (the errors were in the command window but hard to decipher) that would show as a button in EP-Launch. Input File: 8103- -- External Ref: ticket 2422 Last build tested: `10.04.26 V5.0.0.031 - Release`
process
appg processor didn t work as expected cr added on by lklawrie description user couldn t figure out why her input file ran but the appg postprocessor didn t solution was that the meter name entered was incorrect warning in energyplus but there were no meter files so the appg postprocessor didnt this is a suggestion to make a readable error file the errors were in the command window but hard to decipher that would show as a button in ep launch input file external ref ticket last build tested release
1
14,974
18,474,606,217
IssuesEvent
2021-10-18 05:05:05
bisq-network/proposals
https://api.github.com/repos/bisq-network/proposals
closed
Bisq regulatory issues
a:proposal re:processes
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._ The US is on the verge of passing regulatory requirements that will effectively ban cryptocurrency software development: https://www.coindesk.cc/coinbase-warns-infrastructure-bill-s-crypto-provisions-could-impact-20-of-us-population-39813.html GitHub is owned my M S. Bisq would be in danger of being regulated out of existence. Should the team consider moving development to a non-US, crypo-friendly development host? Here are a few options: 1. Gogs - self-hosted development platform. Ensures development can continue in any country. 2. BitBucket - hosted in Australia. However, Australia is launching their own CBDC, which will likely become unfriendly to decentralized cryptocurrencies as they may follow the US's lead. 3. More alternatives: https://www.guru99.com/github-alternative.html <!-- Please do not remove the text above. -->
1.0
Bisq regulatory issues - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._ The US is on the verge of passing regulatory requirements that will effectively ban cryptocurrency software development: https://www.coindesk.cc/coinbase-warns-infrastructure-bill-s-crypto-provisions-could-impact-20-of-us-population-39813.html GitHub is owned my M S. Bisq would be in danger of being regulated out of existence. Should the team consider moving development to a non-US, crypo-friendly development host? Here are a few options: 1. Gogs - self-hosted development platform. Ensures development can continue in any country. 2. BitBucket - hosted in Australia. However, Australia is launching their own CBDC, which will likely become unfriendly to decentralized cryptocurrencies as they may follow the US's lead. 3. More alternatives: https://www.guru99.com/github-alternative.html <!-- Please do not remove the text above. -->
process
bisq regulatory issues this is a bisq network proposal please familiarize yourself with the the us is on the verge of passing regulatory requirements that will effectively ban cryptocurrency software development github is owned my m s bisq would be in danger of being regulated out of existence should the team consider moving development to a non us crypo friendly development host here are a few options gogs self hosted development platform ensures development can continue in any country bitbucket hosted in australia however australia is launching their own cbdc which will likely become unfriendly to decentralized cryptocurrencies as they may follow the us s lead more alternatives
1
14,540
17,651,660,947
IssuesEvent
2021-08-20 13:59:15
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
ERROR: Generating JavaLite proto_library failed:
type: support / not a bug (process)
### Description of the problem / feature request: I am have been building by Bazel project with ease earlier, but after updating my mac to Big Sur version 11.1, I am facing issues in generating the proto files. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. ### What operating system are you running Bazel on? Big Sur version 11.1 ### What's the output of `bazel info release`? ``` Starting local Bazel server and connecting to it... release 4.1.0 ``` ### Have you found anything relevant by searching the web? After checking few SO answers, I tried installing XCode but nothing worked as required. ``` xcode-select -p /Library/Developer/CommandLineTools ``` ### Any other information, logs, or outputs that you want to share? ERROR Running -> bazel run @com_google_protobuf//:protoc ``` Starting local Bazel server and connecting to it... INFO: Analyzed target @com_google_protobuf//:protoc (21 packages loaded, 511 targets configured). INFO: Found 1 target... INFO: From Linking external/com_google_protobuf/libprotobuf_lite.a: /Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/_objs/protobuf_lite/io_win32.pic.o has no symbols INFO: From Linking external/com_google_protobuf/libprotobuf.a: /Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/_objs/protobuf/error_listener.pic.o has no symbols Target @com_google_protobuf//:protoc up-to-date: bazel-bin/external/com_google_protobuf/protoc INFO: Elapsed time: 127.751s, Critical Path: 12.77s INFO: 196 processes: 14 internal, 182 darwin-sandbox. INFO: Build completed successfully, 196 total actions INFO: Build completed successfully, 196 total actions dyld: Symbol not found: __ZNK6google8protobuf8compiler3php9Generator11GenerateAllERKNSt3__16vectorIPKNS0_14FileDescriptorENS4_9allocatorIS8_EEEERKNS4_12basic_stringIcNS4_11char_traitsIcEENS9_IcEEEEPNS1_16GeneratorContextEPSI_ Referenced from: /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/execroot/__main__/bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/protoc Expected in: flat namespace in /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/execroot/__main__/bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/protoc ``` Running -> Bazel build command ``` ERROR: oppia/oppia-android/model/BUILD.bazel:200:28: Generating JavaLite proto_library //model:topic_proto failed: (Aborted): protoc failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc '--proto_path=bazel-out/android-armeabi-v7a-fastbuild/bin/model/_virtual_imports/topic_proto' ...
(remaining 11 argument(s) skipped) Use --sandbox_debug to see verbose messages from the sandbox dyld: Symbol not found: __ZNK6google8protobuf8compiler3php9Generator11GenerateAllERKNSt3__16vectorIPKNS0_14FileDescriptorENS4_9allocatorIS8_EEEERKNS4_12basic_stringIcNS4_11char_traitsIcEENS9_IcEEEEPNS1_16GeneratorContextEPSI_ Referenced from: /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/sandbox/darwin-sandbox/298/execroot/__main__/bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc Expected in: flat namespace in /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/sandbox/darwin-sandbox/298/execroot/__main__/bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc Target //:oppia failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 423.863s, Critical Path: 59.86s INFO: 325 processes: 29 internal, 295 darwin-sandbox, 1 worker. FAILED: Build did NOT complete successfully ``` https://github.com/oppia/oppia-android/blob/develop/model/BUILD.bazel#L200
1.0
ERROR: Generating JavaLite proto_library failed: - ### Description of the problem / feature request: I am have been building by Bazel project with ease earlier, but after updating my mac to Big Sur version 11.1, I am facing issues in generating the proto files. ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. ### What operating system are you running Bazel on? Big Sur version 11.1 ### What's the output of `bazel info release`? ``` Starting local Bazel server and connecting to it... release 4.1.0 ``` ### Have you found anything relevant by searching the web? After checking few SO answers, I tried installing XCode but nothing worked as required. ``` xcode-select -p /Library/Developer/CommandLineTools ``` ### Any other information, logs, or outputs that you want to share? ERROR Running -> bazel run @com_google_protobuf//:protoc ``` Starting local Bazel server and connecting to it... INFO: Analyzed target @com_google_protobuf//:protoc (21 packages loaded, 511 targets configured). INFO: Found 1 target... INFO: From Linking external/com_google_protobuf/libprotobuf_lite.a: /Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/_objs/protobuf_lite/io_win32.pic.o has no symbols INFO: From Linking external/com_google_protobuf/libprotobuf.a: /Library/Developer/CommandLineTools/usr/bin/libtool: file: bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/_objs/protobuf/error_listener.pic.o has no symbols Target @com_google_protobuf//:protoc up-to-date: bazel-bin/external/com_google_protobuf/protoc INFO: Elapsed time: 127.751s, Critical Path: 12.77s INFO: 196 processes: 14 internal, 182 darwin-sandbox. INFO: Build completed successfully, 196 total actions INFO: Build completed successfully, 196 total actions dyld: Symbol not found: __ZNK6google8protobuf8compiler3php9Generator11GenerateAllERKNSt3__16vectorIPKNS0_14FileDescriptorENS4_9allocatorIS8_EEEERKNS4_12basic_stringIcNS4_11char_traitsIcEENS9_IcEEEEPNS1_16GeneratorContextEPSI_ Referenced from: /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/execroot/__main__/bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/protoc Expected in: flat namespace in /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/execroot/__main__/bazel-out/darwin-fastbuild/bin/external/com_google_protobuf/protoc ``` Running -> Bazel build command ``` ERROR: oppia/oppia-android/model/BUILD.bazel:200:28: Generating JavaLite proto_library //model:topic_proto failed: (Aborted): protoc failed: error executing command bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc '--proto_path=bazel-out/android-armeabi-v7a-fastbuild/bin/model/_virtual_imports/topic_proto' ...
(remaining 11 argument(s) skipped) Use --sandbox_debug to see verbose messages from the sandbox dyld: Symbol not found: __ZNK6google8protobuf8compiler3php9Generator11GenerateAllERKNSt3__16vectorIPKNS0_14FileDescriptorENS4_9allocatorIS8_EEEERKNS4_12basic_stringIcNS4_11char_traitsIcEENS9_IcEEEEPNS1_16GeneratorContextEPSI_ Referenced from: /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/sandbox/darwin-sandbox/298/execroot/__main__/bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc Expected in: flat namespace in /private/var/tmp/_bazel_akshaynandwana/0e0a7569f455d331a530cc35527abf40/sandbox/darwin-sandbox/298/execroot/__main__/bazel-out/darwin-opt-exec-2B5CBBC6/bin/external/com_google_protobuf/protoc Target //:oppia failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 423.863s, Critical Path: 59.86s INFO: 325 processes: 29 internal, 295 darwin-sandbox, 1 worker. FAILED: Build did NOT complete successfully ``` https://github.com/oppia/oppia-android/blob/develop/model/BUILD.bazel#L200
process
error generating javalite proto library failed description of the problem feature request i am have been building by bazel project with ease earlier but after updating my mac to big sur version i am facing issues in generating the proto files bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible what operating system are you running bazel on big sur version what s the output of bazel info release starting local bazel server and connecting to it release have you found anything relevant by searching the web after checking few so answers i tried installing xcode but nothing worked as required xcode select p library developer commandlinetools any other information logs or outputs that you want to share error running bazel run com google protobuf protoc starting local bazel server and connecting to it info analyzed target com google protobuf protoc packages loaded targets configured info found target info from linking external com google protobuf libprotobuf lite a library developer commandlinetools usr bin libtool file bazel out darwin fastbuild bin external com google protobuf objs protobuf lite io pic o has no symbols info from linking external com google protobuf libprotobuf a library developer commandlinetools usr bin libtool file bazel out darwin fastbuild bin external com google protobuf objs protobuf error listener pic o has no symbols target com google protobuf protoc up to date bazel bin external com google protobuf protoc info elapsed time critical path info processes internal darwin sandbox info build completed successfully total actions info build completed successfully total actions dyld symbol not found referenced from private var tmp bazel akshaynandwana execroot main bazel out darwin fastbuild bin external com google protobuf protoc expected in flat namespace in private var tmp bazel akshaynandwana execroot main bazel out darwin fastbuild bin external com google protobuf protoc running bazel build command error oppia oppia android model build bazel generating javalite proto library model topic proto failed aborted protoc failed error executing command bazel out darwin opt exec bin external com google protobuf protoc proto path bazel out android armeabi fastbuild bin model virtual imports topic proto remaining argument s skipped use sandbox debug to see verbose messages from the sandbox dyld symbol not found referenced from private var tmp bazel akshaynandwana sandbox darwin sandbox execroot main bazel out darwin opt exec bin external com google protobuf protoc expected in flat namespace in private var tmp bazel akshaynandwana sandbox darwin sandbox execroot main bazel out darwin opt exec bin external com google protobuf protoc target oppia failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path info processes internal darwin sandbox worker failed build did not complete successfully
1
36,980
15,110,681,589
IssuesEvent
2021-02-08 19:33:35
Azure/azure-sdk-for-net
https://api.github.com/repos/Azure/azure-sdk-for-net
opened
Add Thread Safety and Additional Concepts README sections to Event Hubs and Service Bus
Client Docs Event Hubs Service Bus
The READMEs in our libraries usually contain the following sections: ``` ### Thread safety We guarantee that all client instance methods are thread-safe and independent of each other ([guideline](https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-service-methods-thread-safety)). This ensures that the recommendation of reusing client instances is always safe, even across threads. ### Additional concepts <!-- CLIENT COMMON BAR --> [Client options](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#configuring-service-clients-using-clientoptions) | [Accessing the response](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#accessing-http-response-details-using-responset) | [Long-running operations](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#consuming-long-running-operations-using-operationt) | [Handling failures](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#reporting-errors-requestfailedexception) | [Diagnostics](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md) | [Mocking](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#mocking) | [Client lifetime](https://devblogs.microsoft.com/azure-sdk/lifetime-management-and-thread-safety-guarantees-of-azure-sdk-net-clients/) <!-- CLIENT COMMON BAR --> ``` Many of the examples and concepts in these sections are not relevant to the messaging libraries due to the references to the `Azure.Core` pipeline and HTTP specifics, so they haven't been added to Event Hubs and Service Bus ([discussion here](https://github.com/Azure/azure-sdk-for-net/pull/18354#discussion_r568681043)). **Goal:** include these sections in the READMEs of the Event Hubs and Service Bus libraries with the content that's applicable to them.
1.0
Add Thread Safety and Additional Concepts README sections to Event Hubs and Service Bus - The READMEs in our libraries usually contain the following sections: ``` ### Thread safety We guarantee that all client instance methods are thread-safe and independent of each other ([guideline](https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-service-methods-thread-safety)). This ensures that the recommendation of reusing client instances is always safe, even across threads. ### Additional concepts <!-- CLIENT COMMON BAR --> [Client options](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#configuring-service-clients-using-clientoptions) | [Accessing the response](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#accessing-http-response-details-using-responset) | [Long-running operations](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#consuming-long-running-operations-using-operationt) | [Handling failures](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#reporting-errors-requestfailedexception) | [Diagnostics](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/samples/Diagnostics.md) | [Mocking](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/core/Azure.Core/README.md#mocking) | [Client lifetime](https://devblogs.microsoft.com/azure-sdk/lifetime-management-and-thread-safety-guarantees-of-azure-sdk-net-clients/) <!-- CLIENT COMMON BAR --> ``` Many of the examples and concepts in these sections are not relevant to the messaging libraries due to the references to the `Azure.Core` pipeline and HTTP specifics, so they haven't been added to Event Hubs and Service Bus ([discussion here](https://github.com/Azure/azure-sdk-for-net/pull/18354#discussion_r568681043)). **Goal:** include these sections in the READMEs of the Event Hubs and Service Bus libraries with the content that's applicable to them.
non_process
add thread safety and additional concepts readme sections to event hubs and service bus the readmes in our libraries usually contain the following sections thread safety we guarantee that all client instance methods are thread safe and independent of each other this ensures that the recommendation of reusing client instances is always safe even across threads additional concepts many of the examples and concepts in these sections are not relevant to the messaging libraries due to the references to the azure core pipeline and http specifics so they haven t been added to event hubs and service bus goal include these sections in the readmes of the event hubs and service bus libraries with the content that s applicable to them
0
37,381
8,273,793,305
IssuesEvent
2018-09-17 07:36:23
skyverge/woocommerce-memberships-rest-api-docs
https://api.github.com/repos/skyverge/woocommerce-memberships-rest-api-docs
closed
Add code examples in the right column and check if json response in example is correct
code example :floppy_disk: enhancement :star:
REST docs are complete, however the code examples that show the reader how to make an API call have some `TODO`s because no code example is given, but only the JSON response. Since this API is dependent on the WC/WP API probably we don't have to give complicated examples, and still highlight the JSON response which is more important. While at it we can check if the response example is in line with the current API iteration that Memberships ships with. * **Requested by:** Chase / Fulvio
1.0
Add code examples in the right column and check if json response in example is correct - REST docs are complete, however the code examples that show the reader how to make an API call have some `TODO`s because no code example is given, but only the JSON response. Since this API is dependent on the WC/WP API probably we don't have to give complicated examples, and still highlight the JSON response which is more important. While at it we can check if the response example is in line with the current API iteration that Memberships ships with. * **Requested by:** Chase / Fulvio
non_process
add code examples in the right column and check if json response in example is correct rest docs are complete however the code examples that show the reader how to make an api call have some todo s because no code example is given but only the json response since this api is dependent on the wc wp api probably we don t have to give complicated examples and still highlight the json response which is more important while at it we can check if the response example is in line with the current api iteration that memberships ships with requested by chase fulvio
0
83,648
3,638,088,461
IssuesEvent
2016-02-12 14:13:29
molgenis/molgenis
https://api.github.com/repos/molgenis/molgenis
closed
VCF with REF with multiple bases can't be imported
bug molgenis-data-vcf priority-first
Reproduce: - Import 1000G_CardioPanel_nogenotypes.vcf Expected: - Successful import Actual: - Some variants for which the ID starts with 'esv' the REF columns contains values longer than 255 characters (InDels). Since the REF attribute is of type STRING this data can't be imported.
1.0
VCF with REF with multiple bases can't be imported - Reproduce: - Import 1000G_CardioPanel_nogenotypes.vcf Expected: - Successful import Actual: - Some variants for which the ID starts with 'esv' the REF columns contains values longer than 255 characters (InDels). Since the REF attribute is of type STRING this data can't be imported.
non_process
vcf with ref with multiple bases can t be imported reproduce import cardiopanel nogenotypes vcf expected successful import actual some variants for which the id starts with esv the ref columns contains values longer than characters indels since the ref attribute is of type string this data can t be imported
0
8,382
11,543,930,037
IssuesEvent
2020-02-18 10:29:35
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
"Sending message failed" when trying to confirm my payment has started
a:bug in:trade-process
Hi there. My trade is stuck: I (BTC buyer) cannot proceed confirming my payment has started. When I clicked on "Payment started" I got "Sending message failed". I was asked to go with mediator, but he/she is not doing anything about it. The seller wrote to me in the direct chat that he/she got the payment, the trade is just stuck because of the problem of Bisq. <img width="577" alt="Schermata 2019-10-07 alle 22 43 00" src="https://user-images.githubusercontent.com/19292930/66369798-1650a800-e99e-11e9-80f6-d044ccb7f545.png"> <img width="829" alt="Schermata 2019-10-07 alle 22 33 51" src="https://user-images.githubusercontent.com/19292930/66369799-1650a800-e99e-11e9-90b3-74f2f7f8a079.png"> Has anyone some ideas on how to resolve this issue?
1.0
"Sending message failed" when trying to confirm my payment has started - Hi there. My trade is stuck: I (BTC buyer) cannot proceed confirming my payment has started. When I clicked on "Payment started" I got "Sending message failed". I was asked to go with mediator, but he/she is not doing anything about it. The seller wrote to me in the direct chat that he/she got the payment, the trade is just stuck because of the problem of Bisq. <img width="577" alt="Schermata 2019-10-07 alle 22 43 00" src="https://user-images.githubusercontent.com/19292930/66369798-1650a800-e99e-11e9-80f6-d044ccb7f545.png"> <img width="829" alt="Schermata 2019-10-07 alle 22 33 51" src="https://user-images.githubusercontent.com/19292930/66369799-1650a800-e99e-11e9-90b3-74f2f7f8a079.png"> Has anyone some ideas on how to resolve this issue?
process
sending message failed when trying to confirm my payment has started hi there my trade is stuck i btc buyer cannot proceed confirming my payment has started when i clicked on payment started i got sending message failed i was asked to go with mediator but he she is not doing anything about it the seller wrote to me in the direct chat that he she got the payment the trade is just stuck because of the problem of bisq img width alt schermata alle src img width alt schermata alle src has anyone some ideas on how to resolve this issue
1
15,711
19,848,776,782
IssuesEvent
2022-01-21 09:55:01
ooi-data/CE02SHSP-SP001-09-PARADJ000-recovered_cspp-parad_j_cspp_instrument_recovered
https://api.github.com/repos/ooi-data/CE02SHSP-SP001-09-PARADJ000-recovered_cspp-parad_j_cspp_instrument_recovered
opened
🛑 Processing failed: ValueError
process
## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:55:01.453123. ## Details Flow name: `CE02SHSP-SP001-09-PARADJ000-recovered_cspp-parad_j_cspp_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return self.get_basic_selection(selection, fields=fields) File
"/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
1.0
🛑 Processing failed: ValueError - ## Overview `ValueError` found in `processing_task` task during run ended on 2022-01-21T09:55:01.453123. ## Details Flow name: `CE02SHSP-SP001-09-PARADJ000-recovered_cspp-parad_j_cspp_instrument_recovered` Task name: `processing_task` Error type: `ValueError` Error message: not enough values to unpack (expected 3, got 0) <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream append_to_zarr(mod_ds, final_store, enc, logger=logger) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr _append_zarr(store, mod_ds) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr existing_arr.append(var_data.values) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values return _as_array_or_item(self._data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item data = np.asarray(data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__ x = self.compute() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute (result,) = compute(self, traverse=False, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute results = schedule(dsk, keys, **kwargs) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get results = get_async( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async raise_exception(exc, tb) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise raise exc File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task result = _execute_task(task, data) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task return func(*(_execute_task(a, cache) for a in args)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter c = np.asarray(c) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__ self._ensure_cached() File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached self.array = NumpyIndexingAdapter(np.asarray(self.array)) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__ return np.asarray(self.array, dtype=dtype) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__ return np.asarray(array[self.key], dtype=None) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__ return array[key.tuple] File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__ return
self.get_basic_selection(selection, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection return self._get_basic_selection_nd(selection=selection, out=out, File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd return self._get_selection(indexer=indexer, out=out, fields=fields) File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection lchunk_coords, lchunk_selection, lout_selection = zip(*indexer) ValueError: not enough values to unpack (expected 3, got 0) ``` </details>
process
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered cspp parad j cspp instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
1
175,816
6,554,242,527
IssuesEvent
2017-09-06 04:21:05
segmentio/evergreen
https://api.github.com/repos/segmentio/evergreen
closed
evergreen-buttons outline
Priority: High Status: Proposal Type: New Package
`evergreen-button` is a package exporting React components and a `ButtonAppearances` object. I think this package should export some opinionated button components too: `CloseButton`, `BackButton`, `IconButton`. Maybe more in the future. ## Usage ```jsx import { Button, CloseButton, BackButton, IconButton, ButtonAppearances } from 'evergreen-buttons' <Button>Default (32)</Button> <Button height={40}>Default 40</Button> <Button height={36}>Default 36</Button> <Button height={28}>Default 28</Button> <Button appearance="green">Green</Button> <Button appearance="blue">Blue</Button> <Button appearance="red">Label</Button> ``` ## Design Example <img width="897" alt="screen shot 2017-09-04 at 5 49 08 pm" src="https://user-images.githubusercontent.com/564463/30040923-2f17142c-919a-11e7-9154-e8c3b4a08310.png"> ## Key Implementation take aways * Button is="button" by default * Button implements `Text` * Button, and other controls, are infinitely scaleable on our 4px soft grid * Button height determines text size * Button uses `ui` font family * Button implements a `appearance` property that maps to `ButtonAppearances` * `ButtonAppearances` is an object containing available styles * `default` is the default prop for the `appearance` property ## Button implements `Text` A button is build up of text styles and stateful layer styles (default, hover, active, focus). Therefore a button component should implement the `Text` component. The stateful layer styles will be implemented through the `ButtonAppearances` object. ### Using the `css` property on `ui-box` for appearance. Since there is not a huge value proposition for overwriting the appearance of a button, I am leaning towards using the `css` property on `ui-box` to implement the stateful layer styles (but not dimensions). ```javascript const ButtonAppearances = { default: { ... }, red: { ... }, green: { ... }, blue: { ... }, } ``` #### Directly referencing colors Instead of using a indirection of `primary => green` and `danger => red`, I lean towards the simpler `green` and `red`. You can make mappings or new components in your application layer to add semantics or abstractions like these. ## Button, and other controls, are infinitely scaleable on our 4px soft grid In a lot of cases, design systems implement a abstraction to express different button sizes. This can be something like `small`, `medium`, `large`. I have mixed feelings about this, because it's a level of indirection that is hard to change in the future. Instead I would like to try a different approach. ### Infinitely scalable buttons One of the premises I am building this design system on, is that **you can never anticipate all future requirements, only prepare for it**. In a different project I have implemented a button component that is infinitely scalable by simply setting the height property. Your components become less dependent on other use cases throughout your app. It is somewhat harder to change `small` or `medium` later on, or put a new size in between. The benefit of referring to a `small` or `medium` button is almost non-existent in my experience, instead designers and engineers would use references as `Button 32` and `Button 40` in communication and design documentation. #### Restricting the height you pass While it is great to have buttons be available in any size, it becomes annoying for the implementer not to have any height constraints. To solve that, the height property will only accept values on the grid.
##### 4px soft grid height enforcement The grid scale we are using is a `8px` major scale, with a `4px` minor scale. I am not sure how this will be enforced, but I think we'll just accept anything you can divide by 4 — and otherwise throw an error / violation. #### Using the height to get the text size Because the height is infinitely scalable (on our soft grid), we need the text size to adjust when the height adjusts. In the past I have done this at one point using a function similar to `getTextStyleForControl({ height })`. The right final abstraction will require me playing around a little. ### Disabled styles Disabled styles should be implemented on `[disabled]` for buttons and `[data-disabled]` for links. ### React Router Link and Link component In some cases you need the styling of a Button with functionality of a `ReactRouterLink` or `Link` component. This can be done through setting the `is` property to this component: ```jsx <Button is={Link}>Label</Button> ``` ## IconButton This should be used for a single icon. ## BackButton This should be a Button with a left arrow icon and text. ## CloseButton This should be a Button with a close icon and text.
1.0
evergreen-buttons outline - `evergreen-button` is a package exporting React components and a `ButtonAppearances` object. I think this package should export some opinionated button components too: `CloseButton`, `BackButton`, `IconButton`. Maybe more in the future. ## Usage ```jsx import { Button, CloseButton, BackButton, IconButton, ButtonAppearances } from 'evergreen-buttons' <Button>Default (32)</Button> <Button height={40}>Default 40</Button> <Button height={36}>Default 36</Button> <Button height={28}>Default 28</Button> <Button appearance="green">Green</Button> <Button appearance="blue">Blue</Button> <Button appearance="red">Label</Button> ``` ## Design Example <img width="897" alt="screen shot 2017-09-04 at 5 49 08 pm" src="https://user-images.githubusercontent.com/564463/30040923-2f17142c-919a-11e7-9154-e8c3b4a08310.png"> ## Key Implementation take aways * Button is="button" by default * Button implements `Text` * Button, and other controls, are infinitely scaleable on our 4px soft grid * Button height determines text size * Button uses `ui` font family * Button implements a `appearance` property that maps to `ButtonAppearances` * `ButtonAppearances` is an object containing available styles * `default` is the default prop for the `appearance` property ## Button implements `Text` A button is build up of text styles and stateful layer styles (default, hover, active, focus). Therefore a button component should implement the `Text` component. The stateful layer styles will be implemented through the `ButtonAppearances` object. ### Using the `css` property on `ui-box` for appearance. Since there is not a huge value proposition for overwriting the appearance of a button, I am leaning towards using the `css` property on `ui-box` to implement the stateful layer styles (but not dimensions). ```javascript const ButtonAppearances = { default: { ... }, red: { ... }, green: { ... }, blue: { ... }, } ``` #### Directly referencing colors Instead of using a indirection of `primary => green` and `danger => red`, I lean towards the simpler `green` and `red`. You can make mappings or new components in your application layer to add semantics or abstractions like these. ## Button, and other controls, are infinitely scaleable on our 4px soft grid In a lot of cases, design systems implement a abstraction to express different button sizes. This can be something like `small`, `medium`, `large`. I have mixed feelings about this, because it's a level of indirection that is hard to change in the future. Instead I would like to try a different approach. ### Infinitely scalable buttons One of the premises I am building this design system on, is that **you can never anticipate all future requirements, only prepare for it**. In a different project I have implemented a button component that is infinitely scalable by simply setting the height property. Your components become less dependent on other use cases throughout your app. It is somewhat harder to change `small` or `medium` later on, or put a new size in between. The benefit of referring to a `small` or `medium` button is almost non-existent in my experience, instead designers and engineers would use references as `Button 32` and `Button 40` in communication and design documentation. #### Restricting the height you pass While it is great to have buttons be available in any size, it becomes annoying for the implementer not to have any height constraints. To solve that, the height property will only accept values on the grid.
##### 4px soft grid height enforcement The grid scale we are using is an `8px` major scale, with a `4px` minor scale. I am not sure how this will be enforced, but I think we'll just accept anything you can divide by 4 — and otherwise throw an error / violation. #### Using the height to get the text size Because the height is infinitely scalable (on our soft grid), we need the text size to adjust when the height adjusts. In a previous project I did this with a function similar to `getTextStyleForControl({ height })`. The right final abstraction will require me playing around a little. ### Disabled styles Disabled styles should be implemented on `[disabled]` for buttons and `[data-disabled]` for links. ### React Router Link and Link component In some cases you need the styling of a Button with the functionality of a `ReactRouterLink` or `Link` component. This can be done by setting the `is` property to this component: ```jsx <Button is={Link}>Label</Button> ``` ## IconButton This should be used for a single icon. ## BackButton This should be a Button with a left arrow icon and text. ## CloseButton This should be a Button with a close icon and text.
non_process
evergreen buttons outline evergreen button is a package exporting react components and a buttonappearances object i think this package should export some opinionated button components too closebutton backbutton iconbutton maybe more in the future usage jsx import button closebutton backbutton iconbutton buttonappearances from evergreen buttons default default default default green blue label design example img width alt screen shot at pm src key implementation take aways button is button by default button implements text button and other controls are infinitely scaleable on our soft grid button height determines text size button uses ui font family button implements a appearance property that maps to buttonappearances buttonappearances is an object containing available styles default is the default prop for the appearance property button implements text a button is build up of text styles and stateful layer styles default hover active focus therefore a button component should implement the text component the stateful layer styles will be implemented through the buttonappearances object using the css property on ui box for appearance since there is not a huge value proposition for overwriting the appearance of a button i am leaning towards using the css property on ui box to implement the stateful layer styles but not dimensions javascript const buttonappearances default red green blue directly referencing colors instead of using a indirection of primary green and danger red i lean towards the simpler green and red you can make mappings or new components in your application layer to add semantics or abstractions like these button and other controls are infinitely scaleable on our soft grid in a lot of cases design systems implement a abstraction to express different button sizes this can be something like small medium large i have mixed feelings about this because it s a level of indirection that is hard to change in the future instead i would like to try a different approach infinitely scalable buttons one of the premises i am building this design system on is that you can never anticipate all future requirements only prepare for it in a different project i have implemented a button component that is infinitely scalable by simply setting the height property your components become less dependent on other use cases throughout your app it is somewhat harder to change small or medium later on or put a new size in between the benefit of referring to a small or medium button is almost non existent in my experience instead designers and engineers would use references as button and button in communication and design documentation restricting the height you pass while it is great to have buttons be available in any size it becomes annoying for the implementer not to have any height constraints to solve that the height property will only accept values on the grid soft grid height enforcement the grid scale we are using is a major scale with a minor scale i am not sure how this will be enforced but i think we ll just accept anything you can divide by  — and otherwise throw an error violation using the height to get the text size because the height is infinitely scalable on our soft grid we need the text size to adjust when the height adjusts in the past i have done this at one point using a function similar to gettextstyleforcontrol height the right final abstraction will require me playing around a little disabled styles disabled styles should be implemented on for buttons and for links react router 
link and link component in some cases you need the styling of a button with functionality of a reactrouterlink or link component this can be done through setting the is property to this component jsx label iconbutton this should be used for a single icon backbutton this should be a button with a left arrow icon and text closebutton this should be a button with a close icon and text
0
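As a side note to the evergreen-buttons record above: the outline leaves two mechanisms open, enforcing the 4px soft grid and deriving a text size from the control height. The sketch below is a hedged illustration in Python (the real component is React/JSX, and `getTextStyleForControl` is only referenced by name in the issue); the breakpoint values are invented for the example, not taken from the record.

```python
# Illustrative sketch of the two open questions in the evergreen-buttons outline:
# enforcing the 4px soft grid and mapping a control height to a text size.
# The breakpoints below are hypothetical, not values from the issue.

def assert_on_grid(height: int) -> None:
    """Accept any height divisible by 4; otherwise raise a violation."""
    if height % 4 != 0:
        raise ValueError(f"height {height} is not on the 4px soft grid")

def get_text_style_for_control(height: int) -> dict:
    """Pick a font size for a control of the given height.

    Controls stay infinitely scalable: any on-grid height is allowed,
    and the text size is derived from it rather than from a named size.
    """
    assert_on_grid(height)
    # Hypothetical mapping: smaller controls get smaller text.
    if height <= 28:
        font_size = 12
    elif height <= 36:
        font_size = 14
    else:
        font_size = 16
    return {"fontSize": font_size, "fontFamily": "ui"}

print(get_text_style_for_control(32))  # {'fontSize': 14, 'fontFamily': 'ui'}
```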
11,930
14,705,307,487
IssuesEvent
2021-01-04 17:54:07
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Test failure: System.Diagnostics.Tests.ProcessStartInfoTests.TestUserCredentialsPropertiesOnWindows
area-System.Diagnostics.Process in pr test bug
failed in job: [runtime-libraries-coreclr outerloop 20201217.2 ](https://dev.azure.com/dnceng/public/_build/results?buildId=924882&view=ms.vss-test-web.build-test-results-tab&runId=29343938&resultId=102792&paneView=debug) net6.0-windows-Release-x64-CoreCLR_release-(Windows.Server.Core.1909.Amd64.Open)windows.10.amd64.server20h1.open@mcr.microsoft.com/dotnet-buildtools/prereqs:windowsservercore-2004-helix-amd64-20200904200251-272704c Error message ~~~ System.UnauthorizedAccessException : Attempted to perform an unauthorized operation. Stack trace at System.Security.AccessControl.Win32.SetSecurityInfo(ResourceType type, String name, SafeHandle handle, SecurityInfos securityInformation, SecurityIdentifier owner, SecurityIdentifier group, GenericAcl sacl, GenericAcl dacl) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/Win32.cs:line 314 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, SafeHandle handle, AccessControlSections includeSections, Object exceptionContext) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 263 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, AccessControlSections includeSections, Object exceptionContext) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 353 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, AccessControlSections includeSections) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 343 at System.Security.AccessControl.FileSystemSecurity.Persist(String fullPath) in /_/src/libraries/System.IO.FileSystem.AccessControl/src/System/Security/AccessControl/FileSystemSecurity.cs:line 124 at System.IO.FileSystemAclExtensions.SetAccessControl(FileInfo fileInfo, FileSecurity fileSecurity) in /_/src/libraries/System.IO.FileSystem.AccessControl/src/System/IO/FileSystemAclExtensions.cs:line 78 at System.Diagnostics.Tests.ProcessStartInfoTests.SetAccessControl(String userName, String filePath, AccessControlType accessControlType) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 523 at System.Diagnostics.Tests.ProcessStartInfoTests.TestUserCredentialsPropertiesOnWindows() in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 502 ~~~
1.0
Test failure: System.Diagnostics.Tests.ProcessStartInfoTests.TestUserCredentialsPropertiesOnWindows - failed in job: [runtime-libraries-coreclr outerloop 20201217.2 ](https://dev.azure.com/dnceng/public/_build/results?buildId=924882&view=ms.vss-test-web.build-test-results-tab&runId=29343938&resultId=102792&paneView=debug) net6.0-windows-Release-x64-CoreCLR_release-(Windows.Server.Core.1909.Amd64.Open)windows.10.amd64.server20h1.open@mcr.microsoft.com/dotnet-buildtools/prereqs:windowsservercore-2004-helix-amd64-20200904200251-272704c Error message ~~~ System.UnauthorizedAccessException : Attempted to perform an unauthorized operation. Stack trace at System.Security.AccessControl.Win32.SetSecurityInfo(ResourceType type, String name, SafeHandle handle, SecurityInfos securityInformation, SecurityIdentifier owner, SecurityIdentifier group, GenericAcl sacl, GenericAcl dacl) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/Win32.cs:line 314 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, SafeHandle handle, AccessControlSections includeSections, Object exceptionContext) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 263 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, AccessControlSections includeSections, Object exceptionContext) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 353 at System.Security.AccessControl.NativeObjectSecurity.Persist(String name, AccessControlSections includeSections) in /_/src/libraries/System.Security.AccessControl/src/System/Security/AccessControl/NativeObjectSecurity.cs:line 343 at System.Security.AccessControl.FileSystemSecurity.Persist(String fullPath) in /_/src/libraries/System.IO.FileSystem.AccessControl/src/System/Security/AccessControl/FileSystemSecurity.cs:line 124 at System.IO.FileSystemAclExtensions.SetAccessControl(FileInfo fileInfo, FileSecurity fileSecurity) in /_/src/libraries/System.IO.FileSystem.AccessControl/src/System/IO/FileSystemAclExtensions.cs:line 78 at System.Diagnostics.Tests.ProcessStartInfoTests.SetAccessControl(String userName, String filePath, AccessControlType accessControlType) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 523 at System.Diagnostics.Tests.ProcessStartInfoTests.TestUserCredentialsPropertiesOnWindows() in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 502 ~~~
process
test failure system diagnostics tests processstartinfotests testusercredentialspropertiesonwindows failed in job windows release coreclr release windows server core open windows open mcr microsoft com dotnet buildtools prereqs windowsservercore helix error message system unauthorizedaccessexception attempted to perform an unauthorized operation stack trace at system security accesscontrol setsecurityinfo resourcetype type string name safehandle handle securityinfos securityinformation securityidentifier owner securityidentifier group genericacl sacl genericacl dacl in src libraries system security accesscontrol src system security accesscontrol cs line at system security accesscontrol nativeobjectsecurity persist string name safehandle handle accesscontrolsections includesections object exceptioncontext in src libraries system security accesscontrol src system security accesscontrol nativeobjectsecurity cs line at system security accesscontrol nativeobjectsecurity persist string name accesscontrolsections includesections object exceptioncontext in src libraries system security accesscontrol src system security accesscontrol nativeobjectsecurity cs line at system security accesscontrol nativeobjectsecurity persist string name accesscontrolsections includesections in src libraries system security accesscontrol src system security accesscontrol nativeobjectsecurity cs line at system security accesscontrol filesystemsecurity persist string fullpath in src libraries system io filesystem accesscontrol src system security accesscontrol filesystemsecurity cs line at system io filesystemaclextensions setaccesscontrol fileinfo fileinfo filesecurity filesecurity in src libraries system io filesystem accesscontrol src system io filesystemaclextensions cs line at system diagnostics tests processstartinfotests setaccesscontrol string username string filepath accesscontroltype accesscontroltype in src libraries system diagnostics process tests processstartinfotests cs line at system diagnostics tests processstartinfotests testusercredentialspropertiesonwindows in src libraries system diagnostics process tests processstartinfotests cs line
1
7,484
10,574,469,726
IssuesEvent
2019-10-07 14:03:41
Hurence/logisland
https://api.github.com/repos/Hurence/logisland
closed
add SplitRecord processor
feature processor
this processor takes 1 record in and gives n records out according to dynamic parameters example conf - processor: split_record component: com.hurence.logisland.processor.SplitRecord configuration: # default false keep.parent.record: false # default false, if false the new record_type will be the name of the dynamic property keep.parent.record_type: false # default true, if false the new record_time will be set to processing_time keep.parent.record_time: false # dynamic parameters record_type1: fieldA, fieldB record_type2: fieldC record_type3: fieldA, fieldD will give R(record_time0, record_type0, record_id0, fieldA, fieldB, fieldC, fieldD) => R1(record_time0, record_type1, record_id1, parent_record_id=record_id0, fieldA, fieldB) R2(record_time0, record_type2, record_id2, parent_record_id=record_id0, fieldC) R3(record_time0, record_type3, record_id3, parent_record_id=record_id0, fieldA, fieldD)
1.0
add SplitRecord processor - this processor takes 1 record in and gives n records out according to dynamic parameters example conf - processor: split_record component: com.hurence.logisland.processor.SplitRecord configuration: # default false keep.parent.record: false # default false, if false the new record_type will be the name of the dynamic property keep.parent.record_type: false # default true, if false the new record_time will be set to processing_time keep.parent.record_time: false # dynamic parameters record_type1: fieldA, fieldB record_type2: fieldC record_type3: fieldA, fieldD will give R(record_time0, record_type0, record_id0, fieldA, fieldB, fieldC, fieldD) => R1(record_time0, record_type1, record_id1, parent_record_id=record_id0, fieldA, fieldB) R2(record_time0, record_type2, record_id2, parent_record_id=record_id0, fieldC) R3(record_time0, record_type3, record_id3, parent_record_id=record_id0, fieldA, fieldD)
process
add splitrecord processor this processor takes record in and gives n records out according to dynamic parameters example conf processor split record component com hurence logisland processor splitrecord configuration default false keep parent record false default false if false the new record type will be the name of the dynamic property keep parent record type false default true if false the new record time will is set to processing time keep parent record time false dynamic parameters record fielda fieldb record fieldc record fielda fieldd will give r record record record fielda fieldb fieldc fieldd record record record parent record id record fielda fieldb record record record parent record id record fieldc record record record parent record id record fielda fieldd
1
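A minimal sketch of the splitting behavior described in the SplitRecord record above. This is illustrative Python rather than the Logisland Java processor API; the mapping mirrors the dynamic properties in the example configuration, and the helper and field names are assumptions.

```python
import time
import uuid

def split_record(record, mapping, keep_parent_record_time=True):
    """Split one record into N child records, one per configured record type.

    `mapping` mirrors the dynamic properties in the example configuration,
    e.g. {"record_type1": ["fieldA", "fieldB"], "record_type2": ["fieldC"]}.
    Each child gets a fresh record_id and keeps a parent_record_id link.
    """
    children = []
    for record_type, fields in mapping.items():
        child = {
            "record_id": str(uuid.uuid4()),
            "parent_record_id": record["record_id"],
            "record_type": record_type,
            # Keep the parent time, or stamp processing time, per configuration.
            "record_time": record["record_time"] if keep_parent_record_time else time.time(),
        }
        for f in fields:
            child[f] = record[f]
        children.append(child)
    return children

parent = {"record_id": "record_id0", "record_time": 0,
          "fieldA": 1, "fieldB": 2, "fieldC": 3, "fieldD": 4}
print(split_record(parent, {"record_type1": ["fieldA", "fieldB"],
                            "record_type2": ["fieldC"],
                            "record_type3": ["fieldA", "fieldD"]}))
```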
185,275
14,347,299,986
IssuesEvent
2020-11-29 06:20:08
rubyforgood/casa
https://api.github.com/repos/rubyforgood/casa
closed
Clarify test naming convention
:sparkles: :computer: Contributor Friendly / Devel Priority: Low Tests! 🎉💖👏🏼
Create a naming convention for the tests, particularly the system specs. We’ve got a few different naming conventions and it’s not always clear where to look for specific kinds of specs (also post-conf).
1.0
Clarify test naming convention - Create a naming convention for the tests, particularly the system specs. We’ve got a few different naming conventions and it’s not always clear where to look for specific kinds of specs (also post-conf).
non_process
clarify test naming convention create a naming convention for the tests particularly the system specs we’ve got a few different naming conventions and its not always clear where to look for specific kinds of specs also post conf
0
189,858
6,802,441,007
IssuesEvent
2017-11-02 20:13:10
GoogleCloudPlatform/google-cloud-node
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-node
closed
speech.streamingRecognize randomly silently not working (v1)
api: speech priority: p1 type: bug
The problem is I am using v1 and the latest @google-cloud/speech@0.10.2. It worked perfectly for 6 months (even v1beta) until yesterday, when it just randomly (and silently!) stopped working. I mean one call of streamingRecognize may go well and recognition starts, but then the second or third call of streamingRecognize just doesn't receive any input (and I'm getting the "you're streaming too slow" error). Using the same code! And even when recognition starts, the quality is very low, often leading to completely wrong results. What happened??? We use the Google Speech API for our paying customers and this is very important. #### Environment details - OS: amazon linux 4.4.44-39.55.amzn1.x86_64 - Node.js version: v7.2.1 - npm version: 3.10.10 - using google-cloud/speech@0.10.2 #### Steps to reproduce ``` var Speech = require('@google-cloud/speech')({ credentials: require(_base + '/google_cloud_credential.json') }); self.recognizeStream = Speech.streamingRecognize({ config: { encoding: 'MULAW', sampleRateHertz: 8000, languageCode: "ru-RU", }, singleUtterance: false, interimResults: true }); self.iStream.pipe(self.recognizeStream) .on('error', function(err) { logger.error('google-speech error:', err); self.restartRecognizing(); //GOOGLE BUG: randomly crashes https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1894 }) .on('end', function(err) { logger.trace('google-speech end:', err); }) .on('close', function(err) { logger.error('google-speech close: ', err); self.restartRecognizing(); //GOOGLE BUG: randomly crashes https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1894 }) .on('data', function(data) { //logger.warn(data); require('tracer').setLevel('info'); if(data.results && data.results[0] && data.results[0].alternatives) logger.warn("isFinal: %s, conf: %s, stab: %s, trans: %s, delay=%s", data.results[0].isFinal, data.results[0].alternatives[0].confidence, data.results[0].stability, data.results[0].alternatives[0].transcript, Number(self.thisDate-self.prevDate)); if(!self.isListeningEmitted && data && data.results && data.results[0] && !data.results[0].isFinal) { self.isListeningEmitted = true; self.emit('started_hearing_speech'); } if(data && data.results && data.results[0] && data.results[0].alternatives && !data.results[0].isFinal) { self.thisDate = Date.now(); if(!self.prevDate) self.prevDate = self.thisDate; logger.debug("isFinal: %s, conf: %s, stab: %s, trans: %s, delay=%s", data.results[0].isFinal, data.results[0].alternatives[0].confidence, data.results[0].stability, data.results[0].alternatives[0].transcript, Number(self.thisDate-self.prevDate)); self.prevDate = self.thisDate; if(self.timer) clearTimeout(self.timer); if(!self.isFired) self.timer = setTimeout(self.onRecognizedText.bind(self), 800, data.results[0].alternatives[0].transcript); } }); ```
1.0
speech.streamingRecognize randomly silently not working (v1) - The problem is I am using v1 and the latest @google-cloud/speech@0.10.2. It worked perfectly for 6 months (even v1beta) until yesterday, when it just randomly (and silently!) stopped working. I mean one call of streamingRecognize may go well and recognition starts, but then the second or third call of streamingRecognize just doesn't receive any input (and I'm getting the "you're streaming too slow" error). Using the same code! And even when recognition starts, the quality is very low, often leading to completely wrong results. What happened??? We use the Google Speech API for our paying customers and this is very important. #### Environment details - OS: amazon linux 4.4.44-39.55.amzn1.x86_64 - Node.js version: v7.2.1 - npm version: 3.10.10 - using google-cloud/speech@0.10.2 #### Steps to reproduce ``` var Speech = require('@google-cloud/speech')({ credentials: require(_base + '/google_cloud_credential.json') }); self.recognizeStream = Speech.streamingRecognize({ config: { encoding: 'MULAW', sampleRateHertz: 8000, languageCode: "ru-RU", }, singleUtterance: false, interimResults: true }); self.iStream.pipe(self.recognizeStream) .on('error', function(err) { logger.error('google-speech error:', err); self.restartRecognizing(); //GOOGLE BUG: randomly crashes https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1894 }) .on('end', function(err) { logger.trace('google-speech end:', err); }) .on('close', function(err) { logger.error('google-speech close: ', err); self.restartRecognizing(); //GOOGLE BUG: randomly crashes https://github.com/GoogleCloudPlatform/google-cloud-node/issues/1894 }) .on('data', function(data) { //logger.warn(data); require('tracer').setLevel('info'); if(data.results && data.results[0] && data.results[0].alternatives) logger.warn("isFinal: %s, conf: %s, stab: %s, trans: %s, delay=%s", data.results[0].isFinal, data.results[0].alternatives[0].confidence, data.results[0].stability, data.results[0].alternatives[0].transcript, Number(self.thisDate-self.prevDate)); if(!self.isListeningEmitted && data && data.results && data.results[0] && !data.results[0].isFinal) { self.isListeningEmitted = true; self.emit('started_hearing_speech'); } if(data && data.results && data.results[0] && data.results[0].alternatives && !data.results[0].isFinal) { self.thisDate = Date.now(); if(!self.prevDate) self.prevDate = self.thisDate; logger.debug("isFinal: %s, conf: %s, stab: %s, trans: %s, delay=%s", data.results[0].isFinal, data.results[0].alternatives[0].confidence, data.results[0].stability, data.results[0].alternatives[0].transcript, Number(self.thisDate-self.prevDate)); self.prevDate = self.thisDate; if(self.timer) clearTimeout(self.timer); if(!self.isFired) self.timer = setTimeout(self.onRecognizedText.bind(self), 800, data.results[0].alternatives[0].transcript); } }); ```
non_process
speech streamingrecognize randomly silently not working the problem is i am using and the latest google cloud speech it worked perfectly for months even until yesterday when it just randomly and silently stops working i mean one call of streamingrecognize may go well and recognition starts but then second or third call of streamingrecognize just don t receive any input and i m getting you re streaming to slow error using the same code and even in recognition starts the quality is very low often leading to completely wrong results what happened we use google speech api for our paying customers and this is very important environment details os amazon linux node js version npm version using google cloud speech steps to reproduce var speech require google cloud speech credentials require base google cloud credential json self recognizestream speech streamingrecognize config encoding mulaw sampleratehertz languagecode ru ru singleutterance false interimresults true self istream pipe self recognizestream on error function err logger error google speech error err self restartrecognizing google bug randomly crashes on end function err logger trace google speech end err on close function err logger error google speech close err self restartrecognizing google bug randomly crashes on data function data logger warn data require tracer setlevel info if data results data results data results alternatives logger warn isfinal s conf s stab s trans s delay s data results isfinal data results alternatives confidence data results stability data results alternatives transcript number self thisdate self prevdate if self islisteningemitted data data results data results data results isfinal self islisteningemitted true self emit started hearing speech if data data results data results data results alternatives data results isfinal self thisdate date now if self prevdate self prevdate self thisdate logger debug isfinal s conf s stab s trans s delay s data results isfinal data results alternatives confidence data results stability data results alternatives transcript number self thisdate self prevdate self prevdate self thisdate if self timer cleartimeout self timer if self isfired self timer settimeout self onrecognizedtext bind self data results alternatives transcript
0
12,103
14,740,319,319
IssuesEvent
2021-01-07 08:53:49
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
SAB Billing file
anc-process anp-important ant-support
In GitLab by @kdjstudios on Oct 30, 2018, 16:27 **Submitted by:** "Kimberly Gagner" <kim.mckellar@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-10-30-11157 **Server:** Internal **Client/Site:** Billerica Inbound **Account:** NA **Issue:** Here is the file I am attempting to upload; I keep getting the error, “sorry but something went wrong”. I have rerun the files too, in case they were corrupt.
1.0
SAB Billing file - In GitLab by @kdjstudios on Oct 30, 2018, 16:27 **Submitted by:** "Kimberly Gagner" <kim.mckellar@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-10-30-11157 **Server:** Internal **Client/Site:** Billerica Inbound **Account:** NA **Issue:** Here is the file I am attempting to upload; I keep getting the error, “sorry but something went wrong”. I have rerun the files too, in case they were corrupt.
process
sab billing file in gitlab by kdjstudios on oct submitted by kimberly gagner helpdesk server internal client site billerica inbound account na issue here is the file i am attempting to upload and keep getting the error “sorry but something went wrong” i have rerun the files too in the event they were corrupt
1
122,389
26,124,198,795
IssuesEvent
2022-12-28 16:13:31
oppia/oppia
https://api.github.com/repos/oppia/oppia
closed
Modularize pre_commit_linter
Project-specific starter issue code-health enhancement
`pre_commit_linter.py` currently suffers from a lot of duplicate/complicated code. As a consequence, the file is extremely hard to optimize, and only grows harder to do so as time goes on. We'd like to split the functions into smaller modules (classes, functions, new files/modules): anything that helps make the code more reasonable and easier to update and maintain. Here is a list of starter items that could be improved: - [x] Consolidate file-access (#6023 — @brianrodri) - [x] Consolidate searching for bad patterns - (#7329 -- @cuichenli) - [x] Consolidate searching for patterns in general - (#7339 -- @cuichenli) - [x] Consolidate command-line interface - [x] Consolidate command-line configuration/options access - [x] Consolidate thread creation - [ ] Have a base class for linter runners (e.g. python_linter.py) to de-duplicate creation of TaskResult objects. Ideally, each of these issues should be managed by some sort of new module/class/set of functions which provides an interface that allows us to make optimizations without changing the overall behavior of the linter. Crucially, we should also be able to add tests to make that behavior provably correct moving forward.
1.0
Modularize pre_commit_linter - `pre_commit_linter.py` currently suffers from a lot of duplicate/complicated code. As a consequence, the file is extremely hard to optimize, and only grows harder to do so as time goes on. We'd like to split the functions into smaller modules (classes, functions, new files/modules): anything that helps make the code more reasonable and easier to update and maintain. Here is a list of starter items that could be improved: - [x] Consolidate file-access (#6023 — @brianrodri) - [x] Consolidate searching for bad patterns - (#7329 -- @cuichenli) - [x] Consolidate searching for patterns in general - (#7339 -- @cuichenli) - [x] Consolidate command-line interface - [x] Consolidate command-line configuration/options access - [x] Consolidate thread creation - [ ] Have a base class for linter runners (e.g. python_linter.py) to de-duplicate creation of TaskResult objects. Ideally, each of these issues should be managed by some sort of new module/class/set of functions which provides an interface that allows us to make optimizations without changing the overall behavior of the linter. Crucially, we should also be able to add tests to make that behavior provably correct moving forward.
non_process
modularize pre commit linter pre commit linter py currently suffers from a lot of duplicate complicated code as a consequence the file is extremely hard to optimize and only grows harder to do so as time goes on we d like to split the functions into smaller modules classes functions new files modules anything to help make the code more reasonable and easier to update and maintain here are a list of starter items that could be improved consolidate file access — brianrodri consolidate searching for bad patterns cuichenli consolidate searching for patterns in general cuichenli consolidate command line interface consolidate command line configuration options access consolidate thread creation have a base class for linter runner eg python linter py to de duplicate creation of taskresult objects ideally each of these issues should be managed by some sort of new module class set of functions which provides an interface that allows us to make optimizations without changing the overall behavior of the linter crucially we should also be able to add tests to make that behavior provably correct moving forward
0
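For the last unchecked item in the pre_commit_linter record above (a base class for linter runners that centralizes TaskResult creation), one possible shape is sketched below in Python. The class and field names are guesses for illustration, not Oppia's actual API.

```python
# Hypothetical base class centralizing TaskResult creation for linter
# runners such as python_linter.py; names are illustrative, not Oppia's API.
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    name: str
    failed: bool
    messages: list = field(default_factory=list)

class BaseLinter:
    """Shared runner skeleton so each concrete linter only implements checks()."""

    name = "base"

    def checks(self, files):
        """Yield (message, failed) pairs; overridden by subclasses."""
        raise NotImplementedError

    def run(self, files):
        """Run all checks and build the TaskResult in one place."""
        messages, failed = [], False
        for message, check_failed in self.checks(files):
            messages.append(message)
            failed = failed or check_failed
        return TaskResult(self.name, failed, messages)

class TrailingWhitespaceLinter(BaseLinter):
    name = "trailing-whitespace"

    def checks(self, files):
        # files maps a path to its text content.
        for path, text in files.items():
            bad = any(line != line.rstrip() for line in text.splitlines())
            yield (f"{path}: trailing whitespace", bad)

print(TrailingWhitespaceLinter().run({"a.py": "x = 1 \n"}))
```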
189,962
14,529,514,846
IssuesEvent
2020-12-14 17:55:24
NVIDIA/spark-rapids
https://api.github.com/repos/NVIDIA/spark-rapids
closed
[FEA] Automated canonicalization tests
P1 SQL feature request test
**Is your feature request related to a problem? Please describe.** One of the causes of #1308 is parts of the plan failing to canonicalize properly, and this confuses the Spark query planner into thinking it cannot reuse parts of the query that are reusable. This ends up running redundant scans during the query. The canonicalization error was subtle and easy to miss, so we need a better way of automatically detecting when canonicalization methods are incorrect. **Describe the solution you'd like** One idea is to inject canonicalization testing into our existing unit and integration tests. When running a GPU query, we could plan the query _twice_ and verify that the canonicalized forms of the two executed plans are equivalent. For example, if we have a plan like `spark.sql(query).collect` we could do something like this: ```scala val plan1 = spark.sql(query) val plan2 = spark.sql(query) val isCanonical = plan1.queryExecution.executedPlan.canonicalized == plan2.queryExecution.executedPlan.canonicalized plan1.collect ``` **Describe alternatives you've considered** If we could programmatically generate unit tests for the various combinations of GPU Exec nodes and expressions, that would be great, but I think artificially creating some of the input arguments outside of a query could be tricky.
1.0
[FEA] Automated canonicalization tests - **Is your feature request related to a problem? Please describe.** One of the causes of #1308 is parts of the plan failing to canonicalize properly, and this confuses the Spark query planner into thinking it cannot reuse parts of the query that are reusable. This ends up running redundant scans during the query. The canonicalization error was subtle and easy to miss, so we need a better way of automatically detecting when canonicalization methods are incorrect. **Describe the solution you'd like** One idea is to inject canonicalization testing into our existing unit and integration tests. When running a GPU query, we could plan the query _twice_ and verify that the canonicalized forms of the two executed plans are equivalent. For example, if we have a plan like `spark.sql(query).collect` we could do something like this: ```scala val plan1 = spark.sql(query) val plan2 = spark.sql(query) val isCanonical = plan1.queryExecution.executedPlan.canonicalized == plan2.queryExecution.executedPlan.canonicalized plan1.collect ``` **Describe alternatives you've considered** If we could programmatically generate unit tests for the various combinations of GPU Exec nodes and expressions, that would be great, but I think artificially creating some of the input arguments outside of a query could be tricky.
non_process
automated canonicalization tests is your feature request related to a problem please describe one of the causes of is parts of the plan failing to canonicalize properly and this confuses the spark query planner into thinking it cannot reuse parts of the query that are reusable this ends up running scans redundantly during the query needlessly the canonicalization error was subtle and easy to miss so we need a better way of automatically detecting when canonicalization methods are incorrect describe the solution you d like one idea is to inject canonicalization testing into our existing unit and integration tests when running a gpu query we could plan the query twice and verify that the two executed plans canonicalized are equivalent for example if we have a plan like spark sql query collect we could do something like this scala val spark sql query val spark sql query val iscanonical queryexecution executedplan canonicalized queryexecution executedplan canonicalized collect describe alternatives you ve considered if we could programmatically generate unit tests for the various combinations of gpu exec nodes and expressions that would be great but i think artificially creating some of the input arguments outside of a query could be tricky
0
22,535
31,683,721,625
IssuesEvent
2023-09-08 03:39:23
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
reopened
DISABLED test_fs_pool (__main__.TestMultiprocessing)
high priority triage review module: multiprocessing module: flaky-tests skipped
This test has been determined flaky through reruns in CI and its instances are reported in our flaky_tests table here https://metrics.pytorch.org/d/L0r6ErGnk/github-status?orgId=1&from=1636426818307&to=1639018818307&viewPanel=57. ``` ====================================================================== FAIL [5.016s]: test_fs_pool (__main__.TestMultiprocessing) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_multiprocessing.py", line 355, in test_fs_pool self._test_pool(repeat=TEST_REPEATS) File "test_multiprocessing.py", line 327, in _test_pool do_test() File "test_multiprocessing.py", line 206, in __exit__ self.test_case.assertFalse(self.has_shm_files()) AssertionError: True is not false ``` Please look at the table for details from the past 30 days such as * number of failed instances * an example url * which platforms it failed on * the number of times it failed on trunk vs on PRs. cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @VitalyFedyunin
1.0
DISABLED test_fs_pool (__main__.TestMultiprocessing) - This test has been determined flaky through reruns in CI and its instances are reported in our flaky_tests table here https://metrics.pytorch.org/d/L0r6ErGnk/github-status?orgId=1&from=1636426818307&to=1639018818307&viewPanel=57. ``` ====================================================================== FAIL [5.016s]: test_fs_pool (__main__.TestMultiprocessing) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_multiprocessing.py", line 355, in test_fs_pool self._test_pool(repeat=TEST_REPEATS) File "test_multiprocessing.py", line 327, in _test_pool do_test() File "test_multiprocessing.py", line 206, in __exit__ self.test_case.assertFalse(self.has_shm_files()) AssertionError: True is not false ``` Please look at the table for details from the past 30 days such as * number of failed instances * an example url * which platforms it failed on * the number of times it failed on trunk vs on PRs. cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @VitalyFedyunin
process
disabled test fs pool main testmultiprocessing this test has been determined flaky through reruns in ci and its instances are reported in our flaky tests table here fail test fs pool main testmultiprocessing traceback most recent call last file test multiprocessing py line in test fs pool self test pool repeat test repeats file test multiprocessing py line in test pool do test file test multiprocessing py line in exit self test case assertfalse self has shm files assertionerror true is not false please look at the table for details from the past days such as number of failed instances an example url which platforms it failed on the number of times it failed on trunk vs on prs cc ezyang gchanan bdhirsh jbschlosser vitalyfedyunin
1
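To make the pytorch record above concrete: the failing assertion checks that no shared-memory files linger after the pool is torn down. Below is a rough, hedged guess in Python at what such a check does; the real `has_shm_files` helper lives in test_multiprocessing.py, and its exact location and name filter may differ.

```python
import os

def has_shm_files(directory="/dev/shm"):
    """Return True if any torch shared-memory files remain in the directory.

    Illustrative only: the actual has_shm_files() in test_multiprocessing.py
    may look elsewhere or filter names differently.
    """
    if not os.path.isdir(directory):
        return False
    return any(name.startswith("torch") for name in os.listdir(directory))

print(has_shm_files())
```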
229,970
25,403,330,793
IssuesEvent
2022-11-22 13:42:04
elastic/csp-security-policies
https://api.github.com/repos/elastic/csp-security-policies
opened
Implement the Logging rules in the AWS CIS benchmark
Team:Cloud Security Posture 8.7 Candidate
- [ ] Implement the automated rules of section 3 of the "CIS AWS" benchmark - [ ] Implement unit tests to ensure rules work as expected.
True
Implement the Logging rules in the AWS CIS benchmark - - [ ] Implement the automated rules of section 3 of the "CIS AWS" benchmark - [ ] Implement unit tests to ensure rules work as expected.
non_process
implement the logging rules in the aws cis benchmark implement the automated rules of section of the cis aws benchmark implement unit tests to ensure rules work as expected
0
21,686
30,180,085,389
IssuesEvent
2023-07-04 08:18:36
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
Unable to use Command::spawn with unallocated console
O-windows T-libs A-process
I'm currently trying to run an executable on Windows with Rust Nightly `1.72.0-nightly (5b377cece 2023-06-30)`. It works when the console is allocated with `winapi::um::wincon::AttachConsole`, but whenever the console has been unallocated with `winapi::um::wincon::FreeConsole` it's unable to spawn new processes. Here is the code I'm using to spawn the process. ```rust use std::process::Command; fn test() { // This is called somewhere else in the program; the FFI call needs an unsafe block. unsafe { winapi::um::wincon::FreeConsole(); } let current_pwd: String = std::env::current_dir().unwrap().into_os_string().into_string().unwrap(); println!("Using pwd {current_pwd}"); let cmd_pre = Command::new("test.exe".to_string()).current_dir(current_pwd).spawn(); // Borrow the Result here so it can still be unwrapped below. if let Err(e) = &cmd_pre { println!("Failed to launch embedded application. {:#?}", e); return; } let mut cmd_unwr = cmd_pre.unwrap(); println!("PID {}", cmd_unwr.id()); if let Err(e) = cmd_unwr.wait() { println!("Failed to run embedded application, exited with code {:#?}", e); return; } println!("Done!"); } ``` When I try to run the process, this is the error I get: ``` Os { code: 50, kind: Uncategorized, message: "The request is not supported.", } ```
1.0
Unable to use Command::spawn with unallocated console - I'm currently trying to run an executable on Windows with Rust Nightly `1.72.0-nightly (5b377cece 2023-06-30)`. It works when the console is allocated with `winapi::um::wincon::AttachConsole`, but whenever the console has been unallocated with `winapi::um::wincon::FreeConsole` it's unable to spawn new processes. Here is the code I'm using to spawn the process. ```rust use std::process::Command; fn test() { // This is called somewhere else in the program; the FFI call needs an unsafe block. unsafe { winapi::um::wincon::FreeConsole(); } let current_pwd: String = std::env::current_dir().unwrap().into_os_string().into_string().unwrap(); println!("Using pwd {current_pwd}"); let cmd_pre = Command::new("test.exe".to_string()).current_dir(current_pwd).spawn(); // Borrow the Result here so it can still be unwrapped below. if let Err(e) = &cmd_pre { println!("Failed to launch embedded application. {:#?}", e); return; } let mut cmd_unwr = cmd_pre.unwrap(); println!("PID {}", cmd_unwr.id()); if let Err(e) = cmd_unwr.wait() { println!("Failed to run embedded application, exited with code {:#?}", e); return; } println!("Done!"); } ``` When I try to run the process, this is the error I get: ``` Os { code: 50, kind: Uncategorized, message: "The request is not supported.", } ```
process
unable to use command spawn with unallocated console i m currently trying to run an executable on windows with rust nightly nightly it works when the console is allocated with winapi um wincon attachconsole but whenever the console has been unallocated with winapi um wincon freeconsole it s unable to spawn new processes here is the following code that i m using to spawn the process rust use std process command fn test this is called somewhere else in the program winapi um wincon freeconsole let current pwd string std env current dir unwrap into os string into string unwrap println using pwd loc let cmd pre command new test exe to string current dir current pwd spawn if let err e cmd pre println failed to launch embedded application e return let mut cmd unwr cmd pre unwrap println pid cmd unwr id if let err e cmd unwr wait println failed to run embedded application exited with code e return println done when i try and run the process this is the following error i get os code kind uncategorized message the request is not supported
1
767,134
26,911,691,498
IssuesEvent
2023-02-07 00:40:12
envoyproxy/gateway
https://api.github.com/repos/envoyproxy/gateway
closed
Add support for the GRPCRoute API
enhancement priority/medium
*Description*: Add support for the GRPCRoute API defined [here](https://github.com/kubernetes-sigs/gateway-api/blob/21bba43681b3db3a62d7ca9f108bf3be4c0cea71/apis/v1alpha2/grpcroute_types.go#L64)
1.0
Add support for the GRPCRoute API - *Description*: Add support for the GRPCRoute API defined [here](https://github.com/kubernetes-sigs/gateway-api/blob/21bba43681b3db3a62d7ca9f108bf3be4c0cea71/apis/v1alpha2/grpcroute_types.go#L64)
non_process
add support for the grpcroute api description add support for the grpcroute api defined
0
182,940
31,032,097,564
IssuesEvent
2023-08-10 13:07:02
carbon-design-system/carbon-design-kit
https://api.github.com/repos/carbon-design-system/carbon-design-kit
closed
Figma themes deprecation: Update Figma v11 libraries
type: enhancement 💡 kit: figma role: design :pencil2:
## Acceptance Criteria ### Add theme variables - [x] All themes file: Include theme variables to cover all four theme values for color tokens. (White, Gray 10, Gray 90, and Gray 100) ### Renaming libraries - [x] All themes file: Rename to `(v11) All themes - Carbon Design System`. - [x] Other theme files: - [x] Rename to `🚫(v11) Theme - Carbon Design System [Deprecated]`. - [x] Still keep files in the same folder till further notice. - [x] Cross link the White theme file url with a disclaimer in the "Description" of the deprecated theme files. ### Read me and About pages - [x] All themes file: [Follow Thys template](https://www.figma.com/file/XBWlLpk5CiTo9MuEIipGLr/Figma-Deprecation?type=design&node-id=23%3A8913&mode=design&t=fw4GU9CYBsVEae1q-1) - [x] Other theme files: [Follow Thys template](https://www.figma.com/file/XBWlLpk5CiTo9MuEIipGLr/Figma-Deprecation?type=design&node-id=23%3A8913&mode=design&t=fw4GU9CYBsVEae1q-1) ### Color page - [x] All themes file: Put notification/disclaimer at top of page that we will eventually be updating it with other theme info. ### Effects page - [x] All themes file: Put notification/disclaimer at top of page that we can't use modes/variables for effects yet, but Figma will be implementing this feature in the future. ### Migration artboard - [x] Add a Migration artboard to the About page. ------ ### Publishing **Update file on Friday, Aug 4** - [x] **Publish internally:** Add description of the changes/benefits in the release notes. - [x] **Publish publicly:** Add description and meaningful tags. (Ref [Platform work](https://next.carbondesignsystem.com/design-kits) for library descriptions)
1.0
Figma themes deprecation: Update Figma v11 libraries - ## Acceptance Criteria ### Add theme variables - [x] All themes file: Include theme variables to cover all four theme values for color tokens. (White, Gray 10, Gray 90, and Gray 100) ### Renaming libraries - [x] All themes file: Rename to `(v11) All themes - Carbon Design System`. - [x] Other theme files: - [x] Rename to `🚫(v11) Theme - Carbon Design System [Deprecated]`. - [x] Still keep files in the same folder till further notice. - [x] Cross link the White theme file url with a disclaimer in the "Description" of the deprecated theme files. ### Read me and About pages - [x] All themes file: [Follow Thys template](https://www.figma.com/file/XBWlLpk5CiTo9MuEIipGLr/Figma-Deprecation?type=design&node-id=23%3A8913&mode=design&t=fw4GU9CYBsVEae1q-1) - [x] Other theme files: [Follow Thys template](https://www.figma.com/file/XBWlLpk5CiTo9MuEIipGLr/Figma-Deprecation?type=design&node-id=23%3A8913&mode=design&t=fw4GU9CYBsVEae1q-1) ### Color page - [x] All themes file: Put notification/disclaimer at top of page that we will eventually be updating it with other theme info. ### Effects page - [x] All themes file: Put notification/disclaimer at top of page that we can't use modes/variables for effects yet, but Figma will be implementing this feature in the future. ### Migration artboard - [x] Add a Migration artboard to the About page. ------ ### Publishing **Update file on Friday, Aug 4** - [x] **Publish internally:** Add description of the changes/benefits in the release notes. - [x] **Publish publicly:** Add description and meaningful tags. (Ref [Platform work](https://next.carbondesignsystem.com/design-kits) for library descriptions)
non_process
figma themes deprecation update figma libraries acceptance criteria add theme variables all themes file include theme variables to cover all four theme values for color tokens white gray gray and gray renaming libraries all themes file rename to all themes carbon design system other theme files rename to 🚫 theme carbon design system still keep files in the same folder till further notice cross link the white theme file url with a disclaimer in the description of the deprecated theme files read me and about pages all themes file other theme files color page all themes file put notification disclaimer at top of page that we will eventually be updating it with other theme info effects page all themes file put notification disclaimer at top of page that we we cant use modes variables for effects yet but figma will be implementing this feature in the future migration artboard add a migration artboard to the about page publishing update file on friday aug publish internally add description of the changes benefits in the release notes publish publicly add description and meaningful tags ref for library descriptions
0
16,064
20,205,554,107
IssuesEvent
2022-02-11 19:54:16
createwithrani/superlist
https://api.github.com/repos/createwithrani/superlist
closed
Refactor tooling to take advantage of wp-scripts' new multiple block support
Process
Currently using a multiple-entry custom webpack config setup. But I've tested in another project that the new `wp-scripts` feature for supporting multiple blocks is FLAWless. So we should update the tooling here to simplify things.
1.0
Refactor tooling to take advantage of wp-scripts' new multiple block support - Currently using a multiple-entry custom webpack config setup. But I've tested in another project that the new `wp-scripts` feature for supporting multiple blocks is FLAWless. So we should update the tooling here to simplify things.
process
refactor tooling to take advantage of wp scripts new multiple block support currently using a multiple entry custom webpack config setup but i ve tested in another project that the new wp scripts feature for supporting multiple block is flawless so should update tooling here to simplify things
1
292,856
25,244,766,546
IssuesEvent
2022-11-15 10:15:21
NexusMutual/smart-contracts
https://api.github.com/repos/NexusMutual/smart-contracts
closed
V2 Cover unit tests: CoverNFT.sol
test bootnode
**CoverNFT.sol** mint - [ ] Should revert if caller is not operator - [ ] Should mint the tokenId to the address burn - [ ] Should revert if caller is not operator - [ ] Should burn the token id operatorTransferFrom - [ ] Should revert if caller is not operator - [ ] Should revert if incorrect owner address - [ ] Should revert if recipient is address zero - [ ] Should correctly transfer the tokenId changeOperator - [ ] Should revert if caller is not operator - [ ] Should revert if new operator is address zero? - [ ] Should set the new operator address
1.0
V2 Cover unit tests: CoverNFT.sol - **CoverNFT.sol** mint - [ ] Should revert if caller is not operator - [ ] Should mint the tokenId to the address burn - [ ] Should revert if caller is not operator - [ ] Should burn the token id operatorTransferFrom - [ ] Should revert if caller is not operator - [ ] Should revert if incorrect owner address - [ ] Should revert if recipient is address zero - [ ] Should correctly transfer the tokenId changeOperator - [ ] Should revert if caller is not operator - [ ] Should revert if new operator is address zero? - [ ] Should set the new operator address
non_process
cover unit tests covernft sol covernft sol mint should revert if caller is not operator should mint the tokenid to the address burn should revert if caller is not operator should burn the token id operatortransferfrom should revert if caller is not operator should revert if incorrect owner address should revert if recipient is address zero should correctly transfer the tokenid changeoperator should revert if caller is not operator should revert if new operator is address zero should set the new operator address
0
133,168
18,842,240,677
IssuesEvent
2021-11-11 10:56:01
tezos-checker/checker
https://api.github.com/repos/tezos-checker/checker
closed
Generalization
enhancement design
Minor generalizations of Checker greatly increase its use cases. Broadly speaking, the following aspects should be modular: 1. allow an FA1.2 / FA2 token as collateral for burrows instead of tez only. A single form of collateral is still accepted for a given checker instance, but different instances can have different collateral 2. the control formula that sets the drift derivative as a function of the target should be pluggable; in some cases, a continuous function might make sense as opposed to a bang-bang control 3. We can distinguish two big classes of contracts: those that attempt to replicate an index provided by an oracle (index-based), and those that attempt to replicate an existing token on the chain (token-based) 1. For index-based contracts we use the index, the ctez / tez cfmm price, and the ctez / kit cfmm price, with the ctez / kit cfmm being subsidized by the checker contract. 2. For token-based contracts, the setup is typically a bit different. Suppose we want to replicate USDS using tzBTC as collateral. In that case, we would want to derive prices from 3 cfmms, USDS / cUSDS, tzBTC / ctez and USDS / ctez, with the USDS / cUSDS pair being subsidized by the checker contract.
1.0
Generalization - Minor generalizations of Checker greatly increase its use cases. Broadly speaking, the following aspects should be modular: 1. allow an FA1.2 / FA2 token as collateral for burrows instead of tez only. A single form of collateral is still accepted for a given checker instance, but different instances can have different collateral 2. the control formula that sets the drift derivative as a function of the target should be pluggable; in some cases, a continuous function might make sense as opposed to a bang-bang control 3. We can distinguish two big classes of contracts: those that attempt to replicate an index provided by an oracle (index-based), and those that attempt to replicate an existing token on the chain (token-based) 1. For index-based contracts we use the index, the ctez / tez cfmm price, and the ctez / kit cfmm price, with the ctez / kit cfmm being subsidized by the checker contract. 2. For token-based contracts, the setup is typically a bit different. Suppose we want to replicate USDS using tzBTC as collateral. In that case, we would want to derive prices from 3 cfmms, USDS / cUSDS, tzBTC / ctez and USDS / ctez, with the USDS / cUSDS pair being subsidized by the checker contract.
non_process
generalization minor generalizations of checker greatly increase its use cases broadly speaking the following aspects should be modular allow token as collateral for burrows instead of tez only a single form of collateral is still accepted for a given checker instance but different instances can have different collateral the control formula that sets the drift derivative as a function of the target should be pluggable in some cases a continuous function might make sense as opposed to a bang bang control we can distinguish two big classes of contracts those that attempt to replicate an index provided by an oracle index based and those that attempt to replicate an existing token on the chain token based for index based we use the index the ctez tez cfmm price and the ctez kit cfmm price with the ctez kit cfmm being subsidized by the checker contract for token based the setup is typically a bit different suppose we want to replicate usds using tzbtc as collateral in that case we would want to derive prices from cfmms usds cusds tzbtc ctez and usds ctez with the usds cusds pair being subsidized by the checker contract
0
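On point 2 of the Checker generalization record above (a pluggable control formula for the drift derivative): one way to read it is that a bang-bang and a continuous control share a single signature, so a checker instance can be parameterized by either. The sketch below is a hedged Python illustration; Checker itself is not written in Python, and the thresholds and gains are invented for the example rather than taken from Checker's parameters.

```python
import math

# Two interchangeable drift-derivative formulas, keyed off the log of the
# target. Thresholds and gains below are illustrative only, not Checker's
# actual constants.

def bang_bang_control(log_target):
    """Piecewise-constant control: no correction inside a dead zone,
    a small push for moderate deviations, a large push for big ones."""
    if abs(log_target) < 0.005:
        return 0.0
    magnitude = 0.05 if abs(log_target) < 0.05 else 0.5
    return math.copysign(magnitude, log_target)

def continuous_control(log_target, gain=1.0):
    """Smooth alternative: drift derivative proportional to the deviation."""
    return gain * log_target

# A checker instance would be parameterized by one such function:
drift_derivative = bang_bang_control
print(drift_derivative(0.02), continuous_control(0.02))
```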