| column | dtype | values / lengths |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 – 19 |
| repo | string | lengths 4 – 112 |
| repo_url | string | lengths 33 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 1.02k |
| labels | string | lengths 4 – 1.54k |
| body | string | lengths 1 – 262k |
| index | string | 17 classes |
| text_combine | string | lengths 95 – 262k |
| label | string | 2 classes |
| text | string | lengths 96 – 252k |
| binary_label | int64 | 0 – 1 |
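A minimal sketch of how rows with this schema could be inspected with pandas. The inline `DataFrame` below is built from a few abbreviated rows of this dump; a real workflow would load the dataset from its source file instead.

```python
import pandas as pd

# A few rows mirroring the dump (values abbreviated, most columns omitted);
# the actual dataset would be loaded from its source file.
df = pd.DataFrame({
    "id": [23863515152.0, 20195873226.0, 27547214481.0],
    "type": ["IssuesEvent"] * 3,
    "repo": ["SeleniumHQ/selenium", "UST-QuAntiL/qc-atlas",
             "Cli4d/Testing-GitHub-issues-and-Projects"],
    "action": ["opened", "opened", "closed"],
    "label": ["non_test", "non_test", "test"],
    "binary_label": [0, 0, 1],
})

# `label` has two classes (test / non_test) and `binary_label` is 0 or 1,
# so `binary_label` should simply be the integer encoding of `label`.
encoded = (df["label"] == "test").astype("int64")
assert encoded.tolist() == df["binary_label"].tolist()
```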
Row 71,941:
- id: 23,863,515,152
- type: IssuesEvent
- created_at: 2022-09-07 09:05:11
- repo: SeleniumHQ/selenium
- repo_url: https://api.github.com/repos/SeleniumHQ/selenium
- action: opened
- title: [🐛 Bug]: '--lang=en' or '--accept-lang=en' option not working with Node.js on Mac OS
- labels: I-defect needs-triaging
- body:
### What happened?
Hi! ✌️
I am trying to build a web end-to-end automation framework for my company, and I need to be able to run tests in multiple browsers and multiple languages.
Current targets are:
- browsers: Chrome and Firefox
- languages: German, English, French, Italian, Spanish (more to come later)

I am building a proof-of-concept project to see how this could be achieved with either Selenium or Puppeteer, starting with a simple scenario:
- Open the Google home page
- Verify that the title in the consent popup is in the right language

Examples:
```
browser | langISO639 | countryCodeISO3166 | expectedTitle
${'chrome'} | ${'de'} | ${'DE'} | ${'Bevor Sie zu Google weitergehen'}
${'chrome'} | ${'en'} | ${'GB'} | ${'Before you continue to Google'}
${'chrome'} | ${'es'} | ${'ES'} | ${'Antes de ir a Google'}
${'chrome'} | ${'fr'} | ${'FR'} | ${'Avant d\'accéder à Google'}
${'firefox'} | ${'de'} | ${'DE'} | ${'Bevor Sie zu Google weitergehen'}
${'firefox'} | ${'en'} | ${'GB'} | ${'Before you continue to Google'}
${'firefox'} | ${'es'} | ${'ES'} | ${'Antes de ir a Google'}
${'firefox'} | ${'fr'} | ${'FR'} | ${'Avant d\'accéder à Google'}
```
My problem is that the English test case always fails, and I have tried all the workarounds I could find around the web (GitHub issues, Stack Overflow, ...).
PS: I am using webdriver-manager to run a local grid, but the test also fails with local chromedriver and geckodriver installations (from NPM).
### How can we reproduce the issue?
Node.js test script with Selenium:
```js
const fs = require('fs')
const path = require('path')
const mkdirp = require('mkdirp')
const rimraf = require('rimraf')
const { Builder, until, By } = require('selenium-webdriver')
const Chrome = require('selenium-webdriver/chrome')
const Firefox = require('selenium-webdriver/firefox')
require('chromedriver')
require('geckodriver')
jest.setTimeout(1000*60*10)
async function getWebDriver(browser, lang, country) {
if (browser === 'chrome') {
const chromeOptions = new Chrome.Options()
.addArguments(
`--accept-lang=${lang}-${country}`,
'--browser-test',
'--bwsi',
`--default-country-code=${country}`,
'--disable-default-apps',
'--disable-extensions',
'--disable-gpu',
'--disable-logging',
'--disable-web-security',
'--dom-automation',
'--enable-automation',
'--force-headless-for-tests',
'--guest',
'--headless',
'--incognito',
`--lang=${lang}`,
'--no-sandbox',
'--window-size=1440,900',
)
.setUserPreferences({ ['intl.accept_languages']: lang, ['translate']: { enabled: true } })
const driver = new Builder()
.forBrowser('chrome')
.setChromeOptions(chromeOptions)
.usingServer('http://localhost:4444/wd/hub')
.build()
await sleep(3)
return driver
} else {
const firefoxOptions = new Firefox.Options()
.headless()
.setPreference('intl.accept_languages', lang)
const driver = new Builder()
.forBrowser('firefox')
.setFirefoxOptions(firefoxOptions)
.usingServer('http://localhost:4444/wd/hub')
.build()
await sleep(3)
return driver
}
}
async function sleep(seconds) {
return new Promise((resolve) => {
setTimeout(() => {
resolve()
}, 1000 * seconds)
})
}
describe.each`
browser | langISO639 | countryCodeISO3166 | expectedTitle
${'chrome'} | ${'de'} | ${'DE'} | ${'Bevor Sie zu Google weitergehen'}
${'chrome'} | ${'en'} | ${'GB'} | ${'Before you continue to Google'}
${'chrome'} | ${'es'} | ${'ES'} | ${'Antes de ir a Google'}
${'chrome'} | ${'fr'} | ${'FR'} | ${'Avant d\'accéder à Google'}
${'firefox'} | ${'de'} | ${'DE'} | ${'Bevor Sie zu Google weitergehen'}
${'firefox'} | ${'en'} | ${'GB'} | ${'Before you continue to Google'}
${'firefox'} | ${'es'} | ${'ES'} | ${'Antes de ir a Google'}
${'firefox'} | ${'fr'} | ${'FR'} | ${'Avant d\'accéder à Google'}
`('open Google on $browser browser with language $langISO639', ({ browser, langISO639, countryCodeISO3166, expectedTitle }) => {
let envLangBefore = process.env['LANG'] || undefined
beforeAll(async () => {
await rimraf.sync(path.join(__dirname, 'test-screenshots'))
await mkdirp.sync(path.join(__dirname, 'test-screenshots'))
process.env['LANG'] = langISO639
})
afterAll(() => {
process.env['LANG'] = envLangBefore
})
let driver
it('opens web driver instance', async () => {
driver = await getWebDriver(browser, langISO639, countryCodeISO3166)
})
it('reach goole web app', async () => {
await driver.get('https://www.google.com/')
await driver.navigate().refresh()
})
it('waits for translated text or take a screenshot and throw', async () => {
try {
await driver.wait(until.elementLocated(By.xpath(`//h1[text()="${expectedTitle}"]`)), 3000)
} catch (e) {
const image = await driver.takeScreenshot()
fs.writeFileSync(path.join(__dirname, `test-screenshots/${browser}_${langISO639}.png`), image, 'base64')
throw e
}
})
it('closes web driver instance', async () => {
await driver.quit()
})
})
```

Node.js test script with Puppeteer:
```js
const fs = require('fs')
const path = require('path')
const mkdirp = require('mkdirp')
const puppeteer = require('puppeteer')
const rimraf = require('rimraf')
jest.setTimeout(1000*60*10)
async function getWebDriver(lang, country) {
const driver = await puppeteer.launch({
args: [
`--accept-lang=${lang}-${country}`,
'--browser-test',
'--bwsi',
`--default-country-code=${country}`,
'--disable-default-apps',
'--disable-extensions',
'--disable-gpu',
'--disable-logging',
'--disable-web-security',
'--dom-automation',
'--enable-automation',
'--force-headless-for-tests',
'--guest',
'--headless',
'--incognito',
`--lang=${lang}`,
'--no-sandbox',
'--window-size=1440,900',
]
})
await sleep(3)
return driver
}
async function sleep(seconds) {
return new Promise((resolve) => {
setTimeout(() => {
resolve()
}, 1000 * seconds)
})
}
describe.each`
langISO639 | countryCodeISO3166 | expectedTitle
${'de'} | ${'DE'} | ${'Bevor Sie zu Google weitergehen'}
${'en'} | ${'GB'} | ${'Before you continue to Google'}
${'es'} | ${'ES'} | ${'Antes de ir a Google'}
${'fr'} | ${'FR'} | ${'Avant d\'accéder à Google'}
`('open Puppeteer with language $langISO639', ({ langISO639, countryCodeISO3166, expectedTitle }) => {
let envLangBefore = process.env['LANG'] || undefined
beforeAll(async () => {
await rimraf.sync(path.join(__dirname, 'test-screenshots'))
await mkdirp.sync(path.join(__dirname, 'test-screenshots'))
process.env['LANG'] = langISO639
})
afterAll(() => {
process.env['LANG'] = envLangBefore
})
/** @type {puppeteer.Browser} */
let browser
/** @type {puppeteer.Page} */
let page
it('opens web driver instance', async () => {
browser = await getWebDriver(langISO639, countryCodeISO3166)
page = await browser.newPage()
})
it('reach goole web app', async () => {
await page.goto('https://www.google.com/')
await page.reload()
})
it('waits for translated text or take a screenshot and throw', async () => {
try {
await page.waitForXPath(`//h1[text()="${expectedTitle}"]`)
} catch (e) {
await page.screenshot({ path: path.join(__dirname, `test-screenshots/puppeteer_${langISO639}.png`) })
throw e
}
})
it('closes web driver instance', async () => {
await browser.close()
})
})
```
Node.js project package.json example:
```json
{
"name": "sample-en-issue-with-chromium",
"scripts": {
"test": "jest"
},
"devDependencies": {
"@types/jest": "^29.0.0",
"@types/selenium-webdriver": "^4.1.3",
"eslint": "^8.23.0",
"eslint-config-node": "^4.1.0",
"eslint-plugin-import": "^2.26.0",
"eslint-plugin-jest": "^27.0.1",
"jest": "^29.0.1",
"jest-runner-groups": "^2.2.0",
"webdriver-manager": "^12.1.8"
},
"dependencies": {
"chromedriver": "^104.0.0",
"geckodriver": "^3.0.2",
"mkdirp": "^1.0.4",
"puppeteer": "^17.1.1",
"rimraf": "^3.0.2",
"selenium-webdriver": "^4.4.0"
}
}
```
### Relevant log output
With Selenium:
```
FAIL framework/automate-web/WebDriverManager.test.js (45.459 s)
open Google on chrome browser with language de
✓ opens web driver instance (3003 ms)
✓ reach goole web app (1587 ms)
✓ waits for translated text or take a screenshot and throw (527 ms)
✓ closes web driver instance (54 ms)
open Google on chrome browser with language en
✓ opens web driver instance (3002 ms)
✓ reach goole web app (1364 ms)
✕ waits for translated text or take a screenshot and throw (3506 ms)
✓ closes web driver instance (55 ms)
open Google on chrome browser with language es
✓ opens web driver instance (3002 ms)
✓ reach goole web app (1371 ms)
✓ waits for translated text or take a screenshot and throw (485 ms)
✓ closes web driver instance (58 ms)
open Google on chrome browser with language fr
✓ opens web driver instance (3001 ms)
✓ reach goole web app (1414 ms)
✓ waits for translated text or take a screenshot and throw (490 ms)
✓ closes web driver instance (55 ms)
open Google on firefox browser with language de
✓ opens web driver instance (3001 ms)
✓ reach goole web app (909 ms)
✓ waits for translated text or take a screenshot and throw (22 ms)
✓ closes web driver instance (404 ms)
open Google on firefox browser with language en
✓ opens web driver instance (3002 ms)
✓ reach goole web app (786 ms)
✕ waits for translated text or take a screenshot and throw (3202 ms)
✓ closes web driver instance (409 ms)
open Google on firefox browser with language es
✓ opens web driver instance (3001 ms)
✓ reach goole web app (2718 ms)
✓ waits for translated text or take a screenshot and throw (17 ms)
✓ closes web driver instance (408 ms)
open Google on firefox browser with language fr
✓ opens web driver instance (3002 ms)
✓ reach goole web app (788 ms)
✓ waits for translated text or take a screenshot and throw (19 ms)
✓ closes web driver instance (406 ms)
```
With Puppeteer
```
FAIL framework/automate-web/test-screenshots/Puppeteer-multiple-langs.test.js (63.579 s)
open Puppeteer with language de
✓ opens web driver instance (19901 ms)
✓ reach goole web app (852 ms)
✓ waits for translated text or take a screenshot and throw (8 ms)
✓ closes web driver instance (8 ms)
open Puppeteer with language en
✓ opens web driver instance (3319 ms)
✓ reach goole web app (679 ms)
✕ waits for translated text or take a screenshot and throw (30071 ms)
✓ closes web driver instance (7 ms)
open Puppeteer with language es
✓ opens web driver instance (3324 ms)
✓ reach goole web app (825 ms)
✓ waits for translated text or take a screenshot and throw (6 ms)
✓ closes web driver instance (11 ms)
open Puppeteer with language fr
✓ opens web driver instance (3334 ms)
✓ reach goole web app (705 ms)
✓ waits for translated text or take a screenshot and throw (7 ms)
✓ closes web driver instance (6 ms)
```
### Operating System
macOS Monterey 12.5.1
### Selenium version
"selenium-webdriver": "^4.4.0",
### What are the browser(s) and version(s) where you see this issue?
Firefox 104.0.1, Chrome version as of 2022-09-06
### What are the browser driver(s) and version(s) where you see this issue?
from NPM: {"chromedriver": "^104.0.0", "geckodriver": "^3.0.2"}
### Are you using Selenium Grid?
yes with npm module webdriver-manager / no
- index: 1.0
- text_combine: (title and body concatenated; verbatim duplicate of the title and body fields above)
- label: non_test
- text: (lowercased, cleaned duplicate of title and body)
- binary_label: 0
Row 267,237:
- id: 20,195,873,226
- type: IssuesEvent
- created_at: 2022-02-11 10:35:49
- repo: UST-QuAntiL/qc-atlas
- repo_url: https://api.github.com/repos/UST-QuAntiL/qc-atlas
- action: opened
- title: Finalize Winery Feature
- labels: documentation enhancement
- body:
- we require **tests** for the new feature in the QC-Atlas
- union with UI
- we require (UI) **documentation** about the feature in our [readthedocs](https://github.com/UST-QuAntiL/quantil-docs)
- quantil-docker update for **winery** and **env variables**, additional profile for winery
- index: 1.0
- text_combine: (title and body concatenated; verbatim duplicate of the fields above)
- label: non_test
- text: (lowercased, cleaned duplicate of title and body)
- binary_label: 0
Row 321,698:
- id: 27,547,214,481
- type: IssuesEvent
- created_at: 2023-03-07 12:40:28
- repo: Cli4d/Testing-GitHub-issues-and-Projects
- repo_url: https://api.github.com/repos/Cli4d/Testing-GitHub-issues-and-Projects
- action: closed
- title: Exploring GitHub projects
- labels: Test issue
- body:
I am using a test project to explore GitHub's project features and learn how to utilize them. As mentioned here
https://github.com/Cli4d/Testing-GitHub-issues-and-Projects/blob/0afe2260758d864698c7a1cb922a74fc7146b0c8/README.md?plain=1#L2
I will be exploring the following features:
- [x] Creating a project
- [x] Attaching issues to the project
- [x] Toggling the project's view and layout
- [x] Managing and applying an iteration
- [x] Applying and customizing default workflows
|
1.0
|
Exploring GitHub projects - I am using a test project to explore GitHub's project features and learn how to utilize them. As mentioned here
https://github.com/Cli4d/Testing-GitHub-issues-and-Projects/blob/0afe2260758d864698c7a1cb922a74fc7146b0c8/README.md?plain=1#L2
I will be exploring the following features:
- [x] Creating a project
- [x] Attaching issues to the project
- [x] Toggling the project's view and layout
- [x] Managing and applying an iteration
- [x] Applying and customizing default workflows
|
test
|
exploring github projects i am using a test project to explore github s project features and learn how to utilize them as mentioned here i will be exploring the following features creating a project attaching issues to the project toggling the project s view and layout managing and applying an iteration applying and customizing default workflows
| 1
|
226,356
| 7,518,335,093
|
IssuesEvent
|
2018-04-12 08:05:00
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Allow passing a JobCalculation to the launch free functions
|
priority/nice to have priority/quality-of-life
|
Since the launch free functions (`run`, `submit` and companions) expect a `Process` class as the first argument, if one wants to launch a `JobCalculation` one has to call `JobCalculation.process()` to generate the `JobProcess` class. However, this could just as well be done by the launch function by checking the class of the first argument and this would make it a lot easier for the user
|
2.0
|
Allow passing a JobCalculation to the launch free functions - Since the launch free functions (`run`, `submit` and companions) expect a `Process` class as the first argument, if one wants to launch a `JobCalculation` one has to call `JobCalculation.process()` to generate the `JobProcess` class. However, this could just as well be done by the launch function by checking the class of the first argument and this would make it a lot easier for the user
|
non_test
|
allow passing a jobcalculation to the launch free functions since the launch free functions run submit and companions expect a process class as the first argument if one wants to launch a jobcalculation one has to call jobcalculation process to generate the jobprocess class however this could just as well be done by the launch function by checking the class of the first argument and this would make it a lot easier for the user
| 0
|
456,212
| 13,147,108,154
|
IssuesEvent
|
2020-08-08 13:56:09
|
unoplatform/uno
|
https://api.github.com/repos/unoplatform/uno
|
closed
|
Toggleswitch.Header doesn't work
|
kind/bug priority/backlog
|
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://github.com/nventive/Uno/security/
-->
## Current behavior
Assigning a value to a Toggleswitch Header has no effect in anything other than UWP.
<!-- Describe how the issue manifests. -->
## Expected behavior
<!-- Describe what the desired behavior would be. -->
## How to reproduce it (as minimally and precisely as possible)
`<ToggleSwitch Header="Hello World" />`
<!-- Please provide a **MINIMAL REPRO PROJECT** and the **STEPS TO REPRODUCE**-->
## Environment
<!-- For bug reports Check one or more of the following options with "x" -->
Nuget Package: Uno.UI
Package Version(s): 3.0.0-dev.636
Affected platform(s):
- [ ] iOS
- [x] Android
- [x] WebAssembly
- [ ] WebAssembly renderers for Xamarin.Forms
- [ ] macOS
- [ ] Windows
- [ ] Build tasks
- [ ] Solution Templates
Visual Studio:
- [ ] 2017 (version: )
- [x] 2019 (version: 16.6.2)
- [ ] for Mac (version: )
Relevant plugins:
- [ ] Resharper (version: )
## Anything else we need to know?
<!-- We would love to know of any friction, apart from knowledge, that prevented you from sending in a pull-request -->
|
1.0
|
Toggleswitch.Header doesn't work - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://github.com/nventive/Uno/security/
-->
## Current behavior
Assigning a value to a Toggleswitch Header has no effect in anything other than UWP.
<!-- Describe how the issue manifests. -->
## Expected behavior
<!-- Describe what the desired behavior would be. -->
## How to reproduce it (as minimally and precisely as possible)
`<ToggleSwitch Header="Hello World" />`
<!-- Please provide a **MINIMAL REPRO PROJECT** and the **STEPS TO REPRODUCE**-->
## Environment
<!-- For bug reports Check one or more of the following options with "x" -->
Nuget Package: Uno.UI
Package Version(s): 3.0.0-dev.636
Affected platform(s):
- [ ] iOS
- [x] Android
- [x] WebAssembly
- [ ] WebAssembly renderers for Xamarin.Forms
- [ ] macOS
- [ ] Windows
- [ ] Build tasks
- [ ] Solution Templates
Visual Studio:
- [ ] 2017 (version: )
- [x] 2019 (version: 16.6.2)
- [ ] for Mac (version: )
Relevant plugins:
- [ ] Resharper (version: )
## Anything else we need to know?
<!-- We would love to know of any friction, apart from knowledge, that prevented you from sending in a pull-request -->
|
non_test
|
toggleswitch header doesn t work please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately via current behavior assigning a value to a toggleswitch header has no effect in anything other than uwp expected behavior how to reproduce it as minimally and precisely as possible environment nuget package uno ui package version s dev affected platform s ios android webassembly webassembly renderers for xamarin forms macos windows build tasks solution templates visual studio version version for mac version relevant plugins resharper version anything else we need to know
| 0
|
298,633
| 22,540,893,725
|
IssuesEvent
|
2022-06-26 00:24:48
|
Max-Rodriguez/libastron-js
|
https://api.github.com/repos/Max-Rodriguez/libastron-js
|
closed
|
Created example Astron environment diagram
|
documentation
|
Already completed, issued for progress tracking on the project board.
|
1.0
|
Created example Astron environment diagram - Already completed, issued for progress tracking on the project board.
|
non_test
|
created example astron environment diagram already completed issued for progress tracking on the project board
| 0
|
127,092
| 10,451,833,524
|
IssuesEvent
|
2019-09-19 13:37:28
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: disk-stalled/log=false,data=false failed
|
C-test-failure O-roachtest O-robot
|
SHA: https://github.com/cockroachdb/cockroach/commits/c6342c90a7fa4ceb1b674faa47a95e1726d05e79
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=disk-stalled/log=false,data=false PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1496387&tab=artifacts#/disk-stalled/log=false,data=false
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190919-1496387/disk-stalled/log=false_data=false/run_1
disk_stall.go:68,disk_stall.go:40,test_runner.go:689: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod install teamcity-1568869602-26-n1cpu4:1 charybdefs returned:
stderr:
stdout:
Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/pcre3/libpcre32-3_8.38-3.1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/pkg-config/pkg-config_0.29.1-0ubuntu1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/m/manpages/manpages-dev_4.04-2_all.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/python-setuptools/python-setuptools_20.7.0-1_all.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/o/ocl-icd/ocl-icd-libopencl1_2.2.8-1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Error: exit status 100
: exit status 1
```
|
2.0
|
roachtest: disk-stalled/log=false,data=false failed - SHA: https://github.com/cockroachdb/cockroach/commits/c6342c90a7fa4ceb1b674faa47a95e1726d05e79
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=disk-stalled/log=false,data=false PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1496387&tab=artifacts#/disk-stalled/log=false,data=false
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190919-1496387/disk-stalled/log=false_data=false/run_1
disk_stall.go:68,disk_stall.go:40,test_runner.go:689: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod install teamcity-1568869602-26-n1cpu4:1 charybdefs returned:
stderr:
stdout:
Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/pcre3/libpcre32-3_8.38-3.1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/pkg-config/pkg-config_0.29.1-0ubuntu1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/m/manpages/manpages-dev_4.04-2_all.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/p/python-setuptools/python-setuptools_20.7.0-1_all.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Failed to fetch http://us-central1.gce.archive.ubuntu.com/ubuntu/pool/main/o/ocl-icd/ocl-icd-libopencl1_2.2.8-1_amd64.deb 503 Service Unavailable [IP: 35.184.34.241 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Error: exit status 100
: exit status 1
```
|
test
|
roachtest disk stalled log false data false failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests disk stalled log false data false pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts disk stalled log false data false run disk stall go disk stall go test runner go home agent work go src github com cockroachdb cockroach bin roachprod install teamcity charybdefs returned stderr stdout service unavailable e failed to fetch service unavailable e failed to fetch service unavailable e failed to fetch service unavailable e failed to fetch service unavailable e failed to fetch service unavailable e unable to fetch some archives maybe run apt get update or try with fix missing error exit status exit status
| 1
|
274,310
| 20,831,080,971
|
IssuesEvent
|
2022-03-19 12:57:14
|
apache/airflow
|
https://api.github.com/repos/apache/airflow
|
opened
|
Ask problems about branching
|
kind:bug kind:documentation
|
### What do you see as an issue?
If I have a dag looks like attached: a branching with two branches, I wonder which branch will be executed first? It seems that they won't be executed at the same time. And how can I control which branch to go first?
<img width="276" alt="image" src="https://user-images.githubusercontent.com/37681002/159121949-0631bf85-05ff-48f1-a4a6-8e02821657a3.png">
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
1.0
|
Ask problems about branching - ### What do you see as an issue?
If I have a dag looks like attached: a branching with two branches, I wonder which branch will be executed first? It seems that they won't be executed at the same time. And how can I control which branch to go first?
<img width="276" alt="image" src="https://user-images.githubusercontent.com/37681002/159121949-0631bf85-05ff-48f1-a4a6-8e02821657a3.png">
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
non_test
|
ask problems about branching what do you see as an issue if i have a dag looks like attached a branching with two branches i wonder which branch will be executed first it seems that they won t be executed at the same time and how can i control which branch to go first img width alt image src solving the problem no response anything else no response are you willing to submit pr yes i am willing to submit a pr code of conduct i agree to follow this project s
| 0
|
260,144
| 22,595,539,794
|
IssuesEvent
|
2022-06-29 02:18:13
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
There is an extra option 'Timestamp' in the 'Column Options' dialog for one empty table
|
🧪 testing :gear: tables :beetle: regression
|
**Storage Explorer Version**: 1.25.0-dev
**Build Number**: 20220622.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Architecture** ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.24.3)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a new table -> Click 'Column Options'.
3. Check there is no extra option 'Timestamp' in the 'Column Options' dialog.
## Expected Experience ##
There is no extra option 'Timestamp' in the 'Column Options' dialog.

## Actual Experience ##
There is an extra option 'Timestamp' in the 'Column Options' dialog.

## Additional Context ##
This issue also reproduces for 'Select Columns' dialog.

|
1.0
|
There is an extra option 'Timestamp' in the 'Column Options' dialog for one empty table - **Storage Explorer Version**: 1.25.0-dev
**Build Number**: 20220622.1
**Branch**: main
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Monterey 12.4 (Apple M1 Pro)
**Architecture** ia32/x64
**How Found**: From running test cases
**Regression From**: Previous release (1.24.3)
## Steps to Reproduce ##
1. Expand one storage account -> Tables.
2. Create a new table -> Click 'Column Options'.
3. Check there is no extra option 'Timestamp' in the 'Column Options' dialog.
## Expected Experience ##
There is no extra option 'Timestamp' in the 'Column Options' dialog.

## Actual Experience ##
There is an extra option 'Timestamp' in the 'Column Options' dialog.

## Additional Context ##
This issue also reproduces for 'Select Columns' dialog.

|
test
|
there is an extra option timestamp in the column options dialog for one empty table storage explorer version dev build number branch main platform os windows linux ubuntu macos monterey apple pro architecture how found from running test cases regression from previous release steps to reproduce expand one storage account tables create a new table click column options check there is no extra option timestamp in the column options dialog expected experience there is no extra option timestamp in the column options dialog actual experience there is an extra option timestamp in the column options dialog additional context this issue also reproduces for select columns dialog
| 1
|
46,875
| 19,533,598,107
|
IssuesEvent
|
2021-12-30 22:51:47
|
meshery/meshery-istio
|
https://api.github.com/repos/meshery/meshery-istio
|
opened
|
[CI] Consolidate end-to-end test workflows into one workflow
|
help wanted service-mesh/istio area/ci area/tests kind/enhancement
|
#### Current Behavior
This adapter has two separate end-to-end test GitHub workflows:
1. https://github.com/meshery/meshery-istio/blob/master/.github/workflows/e2etest-servicemeshinstall.yaml
2. https://github.com/meshery/meshery-istio/blob/master/.github/workflows/e2etest-servicemeshandaddon.yaml
#### Desired Behavior
These two separate workflows need to be combined into a single workflow.
#### Implementation
The resultant workflow filename should be `e2etests.yml`
#### Acceptance Tests
Combined test results should be published to Meshery Docs on PR merge, not on PR open.
---
#### Contributor [Guides](https://docs.meshery.io/project/contributing) and Resources
- 🛠 [Meshery Build & Release Strategy](https://docs.meshery.io/project/build-and-release)
- 📚 [Instructions for contributing to documentation](https://github.com/meshery/meshery/blob/master/CONTRIBUTING.md#documentation-contribution-flow)
- Meshery documentation [site](https://docs.meshery.io/) and [source](https://github.com/meshery/meshery/tree/master/docs)
- 🎨 Wireframes and designs for Meshery UI in [Figma](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI)
- 🙋🏾🙋🏼 Questions: [Layer5 Discussion Forum](https://discuss.layer5.io) and [Layer5 Community Slack](http://slack.layer5.io)
|
1.0
|
[CI] Consolidate end-to-end test workflows into one workflow - #### Current Behavior
This adapter has two separate end-to-end test GitHub workflows:
1. https://github.com/meshery/meshery-istio/blob/master/.github/workflows/e2etest-servicemeshinstall.yaml
2. https://github.com/meshery/meshery-istio/blob/master/.github/workflows/e2etest-servicemeshandaddon.yaml
#### Desired Behavior
These two separate workflows need to be combined into a single workflow.
#### Implementation
The resultant workflow filename should be `e2etests.yml`
#### Acceptance Tests
Combined test results should be published to Meshery Docs on PR merge, not on PR open.
---
#### Contributor [Guides](https://docs.meshery.io/project/contributing) and Resources
- 🛠 [Meshery Build & Release Strategy](https://docs.meshery.io/project/build-and-release)
- 📚 [Instructions for contributing to documentation](https://github.com/meshery/meshery/blob/master/CONTRIBUTING.md#documentation-contribution-flow)
- Meshery documentation [site](https://docs.meshery.io/) and [source](https://github.com/meshery/meshery/tree/master/docs)
- 🎨 Wireframes and designs for Meshery UI in [Figma](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI)
- 🙋🏾🙋🏼 Questions: [Layer5 Discussion Forum](https://discuss.layer5.io) and [Layer5 Community Slack](http://slack.layer5.io)
|
non_test
|
consolidate end to end test workflows into one workflow current behavior this adapter has two separate end to end test github workflows desired behavior these two separate workflows need to be combined into a single workflow implementation the resultant workflow filename should be yml acceptance tests combined test results should be published to meshery docs on pr merge not on pr open contributor and resources 🛠 📚 meshery documentation and 🎨 wireframes and designs for meshery ui in 🙋🏾🙋🏼 questions and
| 0
|
113,243
| 9,633,423,434
|
IssuesEvent
|
2019-05-15 18:36:13
|
andes/app
|
https://api.github.com/repos/andes/app
|
closed
|
MPI (NV) - Indicar tipo de relación
|
bug test
|
<!--
PASOS PARA REGISTRAR UN ISSUE
_____________________________________________
1) Seleccionar el proyecto al que pertenece (CITAS, RUP, MPI, ...)
2) Seleccionar un label de identificación (bug, feature, enhancement, etc.)
3) Asignar revisores que sean miembros del equipo responsable de solucionar el issue
4) Completar las siguientes secciones:
-->
### Comportamiento actual
Al indicar el tipo de relación este dato no queda guardado
### Resultado esperado
una vez guardados los datos del paciente visualizar el tipo de relación ingresado
### Pasos para reproducir el problema
1. a un paciente agregar relación e indicar tipo
2. guardar
3. buscar nuevamente al paciente
4. ver el tipo en la solapa relaciones.
<!-- Agregar captura de pantalla, si fuera relevante -->

<!-- Código relevante
```
(pegar código aquí)
```
-->
|
1.0
|
MPI (NV) - Indicar tipo de relación - <!--
PASOS PARA REGISTRAR UN ISSUE
_____________________________________________
1) Seleccionar el proyecto al que pertenece (CITAS, RUP, MPI, ...)
2) Seleccionar un label de identificación (bug, feature, enhancement, etc.)
3) Asignar revisores que sean miembros del equipo responsable de solucionar el issue
4) Completar las siguientes secciones:
-->
### Comportamiento actual
Al indicar el tipo de relación este dato no queda guardado
### Resultado esperado
una vez guardados los datos del paciente visualizar el tipo de relación ingresado
### Pasos para reproducir el problema
1. a un paciente agregar relación e indicar tipo
2. guardar
3. buscar nuevamente al paciente
4. ver el tipo en la solapa relaciones.
<!-- Agregar captura de pantalla, si fuera relevante -->

<!-- Código relevante
```
(pegar código aquí)
```
-->
|
test
|
mpi nv indicar tipo de relación pasos para registrar un issue seleccionar el proyecto al que pertenece citas rup mpi seleccionar un label de identificación bug feature enhancement etc asignar revisores que sean miembros del equipo responsable de solucionar el issue completar las siguientes secciones comportamiento actual al indicar el tipo de relación este dato no queda guardado resultado esperado una vez guardados los datos del paciente visualizar el tipo de relación ingresado pasos para reproducir el problema a un paciente agregar relación e indicar tipo guardar buscar nuevamente al paciente ver el tipo en la solapa relaciones código relevante pegar código aquí
| 1
|
24,516
| 17,363,098,863
|
IssuesEvent
|
2021-07-30 00:53:21
|
APSIMInitiative/ApsimX
|
https://api.github.com/repos/APSIMInitiative/ApsimX
|
closed
|
Additional functionality needed for economic analysis of beef enterprises
|
CLEM interface/infrastructure
|
Research economists have provided a list of additional functionality needed for full economic analysis
- Price schedules from external data/streams
- Fully customisable category for reporting transaction types
- Breakdown of ruminant purchases and sales by class/pricing groups
- Present amount and/or value in resource balance report
|
1.0
|
Additional functionality needed for economic analysis of beef enterprises - Research economists have provided a list of additional functionality needed for full economic analysis
- Price schedules from external data/streams
- Fully customisable category for reporting transaction types
- Breakdown of ruminant purchases and sales by class/pricing groups
- Present amount and/or value in resource balance report
|
non_test
|
additional functionality needed for economic analysis of beef enterprises research economists have provided a list of additional functionality needed for full economic analysis price schedules from external data streams fully customisable category for reporting transaction types breakdown of ruminant purchases and sales by class pricing groups present amount and or value in resource balance report
| 0
|
284,112
| 8,735,807,612
|
IssuesEvent
|
2018-12-11 17:43:59
|
aowen87/TicketTester
|
https://api.github.com/repos/aowen87/TicketTester
|
closed
|
Cracks Clipper is broken in 2.x
|
bug crash likelihood medium priority reviewed severity high wrong results
|
The CracksClipper operator no longer works, as of 2.0. Greg Burton has need of this functionality, and would like it fixed asap.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 402
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Cracks Clipper is broken in 2.x
Assigned to: Kathleen Biagas
Category:
Target version: 2.1.1
Author: Kathleen Biagas
Start: 09/22/2010
Due date:
% Done: 0
Estimated time:
Created: 09/22/2010 12:54 pm
Updated: 09/29/2010 02:53 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The CracksClipper operator no longer works, as of 2.0. Greg Burton has need of this functionality, and would like it fixed asap.
Comments:
Restored functionality of CracksClipper operator.SVN revisions 12594 (2.1 RC) 12596 (trunk).
|
1.0
|
Cracks Clipper is broken in 2.x - The CracksClipper operator no longer works, as of 2.0. Greg Burton has need of this functionality, and would like it fixed asap.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 402
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Cracks Clipper is broken in 2.x
Assigned to: Kathleen Biagas
Category:
Target version: 2.1.1
Author: Kathleen Biagas
Start: 09/22/2010
Due date:
% Done: 0
Estimated time:
Created: 09/22/2010 12:54 pm
Updated: 09/29/2010 02:53 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The CracksClipper operator no longer works, as of 2.0. Greg Burton has need of this functionality, and would like it fixed asap.
Comments:
Restored functionality of CracksClipper operator.SVN revisions 12594 (2.1 RC) 12596 (trunk).
|
non_test
|
cracks clipper is broken in x the cracksclipper operator no longer works as of greg burton has need of this functionality and would like it fixed asap redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject cracks clipper is broken in x assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description the cracksclipper operator no longer works as of greg burton has need of this functionality and would like it fixed asap comments restored functionality of cracksclipper operator svn revisions rc trunk
| 0
|
207,182
| 23,430,310,760
|
IssuesEvent
|
2022-08-15 01:01:35
|
MidnightBSD/src
|
https://api.github.com/repos/MidnightBSD/src
|
reopened
|
CVE-2020-24370 (Medium) detected in freebsd-srcrelease/12.3.0
|
security vulnerability
|
## CVE-2020-24370 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.3.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.1</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ldebug.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ldebug.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ldebug.c in Lua 5.4.0 allows a negation overflow and segmentation fault in getlocal and setlocal, as demonstrated by getlocal(3,2^31).
<p>Publish Date: 2020-08-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24370>CVE-2020-24370</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-24370">https://nvd.nist.gov/vuln/detail/CVE-2020-24370</a></p>
<p>Release Date: 2020-09-26</p>
<p>Fix Resolution: lua-debuginfo - 5.3.4-12,5.3.4-12;lua-libs - 5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12;lua - 5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12;lua-debugsource - 5.3.4-12,5.3.4-12;lua-libs-debuginfo - 5.3.4-12,5.3.4-12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-24370 (Medium) detected in freebsd-srcrelease/12.3.0 - ## CVE-2020-24370 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/12.3.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.1</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ldebug.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/ldebug.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ldebug.c in Lua 5.4.0 allows a negation overflow and segmentation fault in getlocal and setlocal, as demonstrated by getlocal(3,2^31).
<p>Publish Date: 2020-08-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24370>CVE-2020-24370</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-24370">https://nvd.nist.gov/vuln/detail/CVE-2020-24370</a></p>
<p>Release Date: 2020-09-26</p>
<p>Fix Resolution: lua-debuginfo - 5.3.4-12,5.3.4-12;lua-libs - 5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12;lua - 5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12,5.3.4-12;lua-debugsource - 5.3.4-12,5.3.4-12;lua-libs-debuginfo - 5.3.4-12,5.3.4-12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in freebsd srcrelease cve medium severity vulnerability vulnerable library freebsd srcrelease freebsd src tree read only mirror library home page a href found in head commit a href found in base branch stable vulnerable source files ldebug c ldebug c vulnerability details ldebug c in lua allows a negation overflow and segmentation fault in getlocal and setlocal as demonstrated by getlocal publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lua debuginfo lua libs lua lua debugsource lua libs debuginfo step up your open source security game with mend
| 0
|
13,965
| 5,521,781,081
|
IssuesEvent
|
2017-03-19 18:16:36
|
alexrj/Slic3r
|
https://api.github.com/repos/alexrj/Slic3r
|
closed
|
1.2.7-dev: parts jumping out of sight in 3D plater when moving around
|
Can't Reproduce - Development Build
|
Hi,
i've found a glitch in the 3D plater: when moving around several parts, sometimes the part that was just moved jumps far out of sight, away from the cursor - but can be found again when zooming out quite a bit.
This could possibly overlap with #2728 - anyway, i made a screencast where i just managed to reproduce this effect:

|
1.0
|
1.2.7-dev: parts jumping out of sight in 3D plater when moving around - Hi,
i've found a glitch in the 3D plater: when moving around several parts, sometimes the part that was just moved jumps far out of sight, away from the cursor - but can be found again when zooming out quite a bit.
This could possibly overlap with #2728 - anyway, i made a screencast where i just managed to reproduce this effect:

|
non_test
|
dev parts jumping out of sight in plater when moving around hi i ve found a glitch in the plater when moving around several parts sometimes the part that was just moved jumps far out of sight away from the cursor but can be found again when zooming out quite a bit this could possibly overlap with anyway i made a screencast where i just managed to reproduce this effect
| 0
|
9,388
| 2,902,610,568
|
IssuesEvent
|
2015-06-18 08:17:00
|
ramu2016/SXUMX357FFAWEJHBMNM54FBQ
|
https://api.github.com/repos/ramu2016/SXUMX357FFAWEJHBMNM54FBQ
|
closed
|
f+fZMR2Wh9VQBoBXfX4r/BZ071BepESorIvq4YpDlHQ5s4tJr4TbP8zTpB+5YkZzllhvexG+WP50tfVEFIpVsaFEYJsFf2H6cOCW8JyIU07Hj5JVOxHIbC0HrmoWGW0/ErvdYUeR86gS+oghElVphKwaFY8pOPsID9eHK6Vm1co=
|
design
|
7N6eSvry3MPUy74tNyFuxsSWseik7pDphTCJdsJ2dRrc+LP50MqICnemrY6F5PwCw0yBmhFMLhkaIV8bC5Yov7OwPeP0q13FDhsMEIMMScJfzY5/iAqqhGIwepGjFmvdC3UXRLdP1/PeLriQfnXe4xpig75g5dz9J2lIYqwQal/BHVTbWPWmMkEcwV8LIMLKwCYoaKNv6ci0eUA924XT36Iz3gWxzryUODcY4+FtaHZxOGsiIzY+IN7jpIqv1GhZ55CgNFyhicfYUw5MSx1rmtxdluzMByBFGR1pbu4GbpbQyL3zZV5j7MkIJe7zP5gIv2f7OVYEP0rl/xLuTAZ9sk4TrqtXOZTvsQomOD1Wqg1pUjFibkqnoyRFdX0T/N5lndLhP2Ibrtsaovs+46R9JnygzbrARJCI3S7Pmm7T7rfLyTEQ4Nc3QihuhpIrIDLmVrg/eDWBBiY++RjetO5486Vj1a70D8eahMbn4MLm7YDekqzCOKV+9Fo3z7vbnY81xNUEOHwHUAX+qUYRGtg8+CYWo4ryM9E2V1P4yg2lLbt8oM26wESQiN0uz5pu0+63y8kxEODXN0IoboaSKyAy5j9anWet+P6/3NTzfUSvqxIMyuL2NOj6RokVyQ8n5KcDl6GwcQmybDvi1QbXaSX694OWzN7+3jl1TXn2eqiStZuVilhmrQ3Pd/eu3gWez5kYSF10yDc37CARB6gNzDRQ1lLVNaszrze4DSDQJc+gQlyyuseT+0dHYgxRZiup2LnagHBweAU8yu9qZ4YV84hT7Qm/bwfHavLkSb3RKy7aepSK3g4g7fZlPgVtB+OGfsi0aofCLaekid7eC+Ej8ERUFyORv7u9OPZfp5b+6r7uMWe3LiMlrMjosB5cspelqm7cHQuPRZjSUq3mfbltsu7U2kfM1xYkCE9sLGTnE/1L6k62YFerFfAtkR+QEW1Oq//UfNesR5CGRPCYkXDhp4eUSzr2t1FLI8nBr8Ct95+3tAU=
|
1.0
|
f+fZMR2Wh9VQBoBXfX4r/BZ071BepESorIvq4YpDlHQ5s4tJr4TbP8zTpB+5YkZzllhvexG+WP50tfVEFIpVsaFEYJsFf2H6cOCW8JyIU07Hj5JVOxHIbC0HrmoWGW0/ErvdYUeR86gS+oghElVphKwaFY8pOPsID9eHK6Vm1co= - 7N6eSvry3MPUy74tNyFuxsSWseik7pDphTCJdsJ2dRrc+LP50MqICnemrY6F5PwCw0yBmhFMLhkaIV8bC5Yov7OwPeP0q13FDhsMEIMMScJfzY5/iAqqhGIwepGjFmvdC3UXRLdP1/PeLriQfnXe4xpig75g5dz9J2lIYqwQal/BHVTbWPWmMkEcwV8LIMLKwCYoaKNv6ci0eUA924XT36Iz3gWxzryUODcY4+FtaHZxOGsiIzY+IN7jpIqv1GhZ55CgNFyhicfYUw5MSx1rmtxdluzMByBFGR1pbu4GbpbQyL3zZV5j7MkIJe7zP5gIv2f7OVYEP0rl/xLuTAZ9sk4TrqtXOZTvsQomOD1Wqg1pUjFibkqnoyRFdX0T/N5lndLhP2Ibrtsaovs+46R9JnygzbrARJCI3S7Pmm7T7rfLyTEQ4Nc3QihuhpIrIDLmVrg/eDWBBiY++RjetO5486Vj1a70D8eahMbn4MLm7YDekqzCOKV+9Fo3z7vbnY81xNUEOHwHUAX+qUYRGtg8+CYWo4ryM9E2V1P4yg2lLbt8oM26wESQiN0uz5pu0+63y8kxEODXN0IoboaSKyAy5j9anWet+P6/3NTzfUSvqxIMyuL2NOj6RokVyQ8n5KcDl6GwcQmybDvi1QbXaSX694OWzN7+3jl1TXn2eqiStZuVilhmrQ3Pd/eu3gWez5kYSF10yDc37CARB6gNzDRQ1lLVNaszrze4DSDQJc+gQlyyuseT+0dHYgxRZiup2LnagHBweAU8yu9qZ4YV84hT7Qm/bwfHavLkSb3RKy7aepSK3g4g7fZlPgVtB+OGfsi0aofCLaekid7eC+Ej8ERUFyORv7u9OPZfp5b+6r7uMWe3LiMlrMjosB5cspelqm7cHQuPRZjSUq3mfbltsu7U2kfM1xYkCE9sLGTnE/1L6k62YFerFfAtkR+QEW1Oq//UfNesR5CGRPCYkXDhp4eUSzr2t1FLI8nBr8Ct95+3tAU=
|
non_test
|
f ftahzxogsiizy edwbbiy gqlyyuset
| 0
|
123,620
| 10,276,899,888
|
IssuesEvent
|
2019-08-24 21:54:06
|
gitgitgadget/git
|
https://api.github.com/repos/gitgitgadget/git
|
closed
|
Fix flaky "t0021.15 required process filter should filter data"
|
bug flaky test
|
This test case seems to fail infrequently, most often in the `linux-gcc` job. See e.g. https://dev.azure.com/gitgitgadget/git/_build/results?buildId=9226&view=ms.vss-test-web.build-test-results-tab&runId=19716&resultId=100893&paneView=debug
|
1.0
|
Fix flaky "t0021.15 required process filter should filter data" - This test case seems to fail infrequently, most often in the `linux-gcc` job. See e.g. https://dev.azure.com/gitgitgadget/git/_build/results?buildId=9226&view=ms.vss-test-web.build-test-results-tab&runId=19716&resultId=100893&paneView=debug
|
test
|
fix flaky required process filter should filter data this test case seems to fail infrequently most often in the linux gcc job see e g
| 1
|
449,562
| 31,850,974,902
|
IssuesEvent
|
2023-09-15 01:37:47
|
CenterForTheBuiltEnvironment/pythermalcomfort
|
https://api.github.com/repos/CenterForTheBuiltEnvironment/pythermalcomfort
|
closed
|
pythermalcomfort user interface on the web
|
documentation question
|
I noticed that pythermalcomfort user interface had changed. Is that your change or just a bug?
contents

pmv model page

|
1.0
|
pythermalcomfort user interface on the web - I noticed that pythermalcomfort user interface had changed. Is that your change or just a bug?
contents

pmv model page

|
non_test
|
pythermalcomfort user interface on the web i noticed that pythermalcomfort user interface had changed is that your change or just a bug contents pmv model page
| 0
|
184,186
| 31,835,605,282
|
IssuesEvent
|
2023-09-14 13:21:21
|
eosnetworkfoundation/product
|
https://api.github.com/repos/eosnetworkfoundation/product
|
closed
|
Enable greater parallelization of concurrent WASM executions
|
👍 lgtm design-review
|
Parallelization of read only transactions is limited by the number of concurrent WASM executions that can be supported by Leap. There are resource limits that bound the number of concurrent WASM executions allowed. This epic focusses on changes to eos-vm runtimes (for example: interpreter, JIT, and OC), as well as further optimizations within Leap to increase the number of concurrent WASM executions that Leap can support.
```[tasklist]
### Tasks
- [ ] https://github.com/eosnetworkfoundation/product/pull/161
- [ ] https://github.com/AntelopeIO/leap/issues/1159
- [ ] https://github.com/AntelopeIO/leap/issues/1119
- [ ] https://github.com/AntelopeIO/eos-vm/issues/7
- [ ] https://github.com/AntelopeIO/eos-vm/issues/17
- [ ] https://github.com/AntelopeIO/leap/issues/1456
- [ ] https://github.com/AntelopeIO/leap/issues/1158
- [ ] https://github.com/AntelopeIO/leap/issues/645
- [ ] https://github.com/AntelopeIO/leap/issues/1257
- [ ] https://github.com/AntelopeIO/leap/issues/1256
- [ ] https://github.com/AntelopeIO/leap/issues/801
```
|
1.0
|
Enable greater parallelization of concurrent WASM executions - Parallelization of read only transactions is limited by the number of concurrent WASM executions that can be supported by Leap. There are resource limits that bound the number of concurrent WASM executions allowed. This epic focusses on changes to eos-vm runtimes (for example: interpreter, JIT, and OC), as well as further optimizations within Leap to increase the number of concurrent WASM executions that Leap can support.
```[tasklist]
### Tasks
- [ ] https://github.com/eosnetworkfoundation/product/pull/161
- [ ] https://github.com/AntelopeIO/leap/issues/1159
- [ ] https://github.com/AntelopeIO/leap/issues/1119
- [ ] https://github.com/AntelopeIO/eos-vm/issues/7
- [ ] https://github.com/AntelopeIO/eos-vm/issues/17
- [ ] https://github.com/AntelopeIO/leap/issues/1456
- [ ] https://github.com/AntelopeIO/leap/issues/1158
- [ ] https://github.com/AntelopeIO/leap/issues/645
- [ ] https://github.com/AntelopeIO/leap/issues/1257
- [ ] https://github.com/AntelopeIO/leap/issues/1256
- [ ] https://github.com/AntelopeIO/leap/issues/801
```
|
non_test
|
enable greater parallelization of concurrent wasm executions parallelization of read only transactions is limited by the number of concurrent wasm executions that can be supported by leap there are resource limits that bound the number of concurrent wasm executions allowed this epic focusses on changes to eos vm runtimes for example interpreter jit and oc as well as further optimizations within leap to increase the number of concurrent wasm executions that leap can support tasks
| 0
|
288,078
| 21,684,308,877
|
IssuesEvent
|
2022-05-09 09:44:51
|
clue/reactphp-redis
|
https://api.github.com/repos/clue/reactphp-redis
|
closed
|
Incorrect code in the documentation
|
easy pick documentation
|
Hi
first of all: great project! thanks for that
It is about the following code (under _Promises_ in the documentation):
```
$redis->get($key)->then(function (string $value) {
var_dump($value);
}, function (Exception $e) {
echo 'Error: ' . $e->getMessage() . PHP_EOL;
});
```
the problem is if the _key_ doesn't exist, the returned value is NULL. therefore nothing will happen because a string-value is expected in the function.
|
1.0
|
Incorrect code in the documentation - Hi
first of all: great project! thanks for that
It is about the following code (under _Promises_ in the documentation):
```
$redis->get($key)->then(function (string $value) {
var_dump($value);
}, function (Exception $e) {
echo 'Error: ' . $e->getMessage() . PHP_EOL;
});
```
the problem is if the _key_ doesn't exist, the returned value is NULL. therefore nothing will happen because a string-value is expected in the function.
|
non_test
|
incorrect code in the documentation hi first of all great project thanks for that it is about the following code under promises in the documentation redis get key then function string value var dump value function exception e echo error e getmessage php eol the problem is if the key doesn t exist the returned value is null therefore nothing will happen because a string value is expected in the function
| 0
|
812,659
| 30,346,558,762
|
IssuesEvent
|
2023-07-11 15:47:41
|
kytos-ng/pathfinder
|
https://api.github.com/repos/kytos-ng/pathfinder
|
closed
|
Possible error when handling ownership link metadata
|
bug priority_major 2022.3
|
Hi,
When trying to use `ownership` link metadata attribute, I got the following error:
```
2023-07-03 14:40:04,644 - INFO [werkzeug] [_internal.py:225:_log] (Thread-122085) 127.0.0.1 - - [03/Jul/2023 14:40:04] "POST /api/kytos/topology/v3/links/048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98/metadata HTTP/1.1" 201 -
2023-07-03 14:40:23,747 - INFO [werkzeug] [_internal.py:225:_log] (Thread-122104) 127.0.0.1 - - [03/Jul/2023 14:40:23] "POST /api/kytos/topology/v3/links/b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d/metadata HTTP/1.1" 201 -
2023-07-03 14:40:23,781 - WARNING [kytos.napps.kytos/pathfinder] [main.py:268:update_links_metadata_changed] (thread_pool_app_57) Unexpected KeyError '00:00:00:00:00:16:00:02:30' on event kytos/topology.links.metadata.added. pathfinder will reconciliate the topology
```
The request was basically:
```
# curl -X POST -H 'Content-type: application/json' http://127.0.0.1:8181/api/kytos/topology/v3/links/048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98/metadata -d '{"ownership": "Monet"}'
"Operation successful"
# curl -X POST -H 'Content-type: application/json' http://127.0.0.1:8181/api/kytos/topology/v3/links/b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d/metadata -d '{"ownership": "Monet"}'
"Operation successful"
# curl -s http://127.0.0.1:8181/api/kytos/topology/v3/links | jq -r '.links[] | .id + " " + .metadata.ownership'
048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98 Monet
b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d Monet
```
|
1.0
|
Possible error when handling ownership link metadata - Hi,
When trying to use `ownership` link metadata attribute, I got the following error:
```
2023-07-03 14:40:04,644 - INFO [werkzeug] [_internal.py:225:_log] (Thread-122085) 127.0.0.1 - - [03/Jul/2023 14:40:04] "POST /api/kytos/topology/v3/links/048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98/metadata HTTP/1.1" 201 -
2023-07-03 14:40:23,747 - INFO [werkzeug] [_internal.py:225:_log] (Thread-122104) 127.0.0.1 - - [03/Jul/2023 14:40:23] "POST /api/kytos/topology/v3/links/b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d/metadata HTTP/1.1" 201 -
2023-07-03 14:40:23,781 - WARNING [kytos.napps.kytos/pathfinder] [main.py:268:update_links_metadata_changed] (thread_pool_app_57) Unexpected KeyError '00:00:00:00:00:16:00:02:30' on event kytos/topology.links.metadata.added. pathfinder will reconciliate the topology
```
The request was basically:
```
# curl -X POST -H 'Content-type: application/json' http://127.0.0.1:8181/api/kytos/topology/v3/links/048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98/metadata -d '{"ownership": "Monet"}'
"Operation successful"
# curl -X POST -H 'Content-type: application/json' http://127.0.0.1:8181/api/kytos/topology/v3/links/b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d/metadata -d '{"ownership": "Monet"}'
"Operation successful"
# curl -s http://127.0.0.1:8181/api/kytos/topology/v3/links | jq -r '.links[] | .id + " " + .metadata.ownership'
048a97e782ee6e336b111248c97e0d523fa44a2c317d068203075369746c8f98 Monet
b554ee5eafb54898cc54d723afa68001dd838e9e69d266c68cc1b7e7ff03036d Monet
```
|
non_test
|
possible error when handling ownership link metadata hi when trying to use ownership link metadata attribute i got the following error info thread post api kytos topology links metadata http info thread post api kytos topology links metadata http warning thread pool app unexpected keyerror on event kytos topology links metadata added pathfinder will reconciliate the topology the request was basically curl x post h content type application json d ownership monet operation successful curl x post h content type application json d ownership monet operation successful curl s jq r links id metadata ownership monet monet
| 0
|
66,811
| 8,971,153,449
|
IssuesEvent
|
2019-01-29 15:18:45
|
pnp/pnpjs
|
https://api.github.com/repos/pnp/pnpjs
|
closed
|
Please doucment the use of sp.setup
|
area: documentation status: answered type: question
|
### Category
- [X] Enhancement
- [ ] Bug
- [X] Question
- [ ] Documentation gap/issue
### Version
Please specify what version of the library you are using: [ 1.2.8 ]
Please specify what version(s) of SharePoint you are targeting: [online ]
*If you are not using the latest release, please update and see if the issue is resolved before submitting an issue.*
### Expected / Desired Behavior / Question
I'am tying to access multiple different sites/web in the same app (azure function) for this, I am doing multiple requests, each one preceded by a different call to "sp.setup".
However, "sometimes" I see strage bahaviour, as if the url to the site being used is the url from the sp.setup used before the last one.
Is there some limitiation on sp.setup, like "use only once" or something?
Is there a method to change only the url for the next sp.web-call?
|
1.0
|
Please doucment the use of sp.setup - ### Category
- [X] Enhancement
- [ ] Bug
- [X] Question
- [ ] Documentation gap/issue
### Version
Please specify what version of the library you are using: [ 1.2.8 ]
Please specify what version(s) of SharePoint you are targeting: [online ]
*If you are not using the latest release, please update and see if the issue is resolved before submitting an issue.*
### Expected / Desired Behavior / Question
I'am tying to access multiple different sites/web in the same app (azure function) for this, I am doing multiple requests, each one preceded by a different call to "sp.setup".
However, "sometimes" I see strage bahaviour, as if the url to the site being used is the url from the sp.setup used before the last one.
Is there some limitiation on sp.setup, like "use only once" or something?
Is there a method to change only the url for the next sp.web-call?
|
non_test
|
please doucment the use of sp setup category enhancement bug question documentation gap issue version please specify what version of the library you are using please specify what version s of sharepoint you are targeting if you are not using the latest release please update and see if the issue is resolved before submitting an issue expected desired behavior question i am tying to access multiple different sites web in the same app azure function for this i am doing multiple requests each one preceded by a different call to sp setup however sometimes i see strage bahaviour as if the url to the site being used is the url from the sp setup used before the last one is there some limitiation on sp setup like use only once or something is there a method to change only the url for the next sp web call
| 0
|
13,552
| 8,272,250,582
|
IssuesEvent
|
2018-09-16 18:09:31
|
CNugteren/CLBlast
|
https://api.github.com/repos/CNugteren/CLBlast
|
closed
|
ARM Mali GPU no need GlocalToLocal* and LocalToPrivate* API
|
performance
|
As desktoptop and qcom Adreno GPU has real Local mem or private mem, which is more fast that global
mem, at this sence, use GlocalToLocal* and LocalToPrivate* API will speed up calculate!
But, as we know ARM Mali GPU implement Local and private mem from Global mem, so use GlocalToLocal* and LocalToPrivate* API will do not good for speedup, to the contrary will be slow calculate, caused by GlocalToLocal* and LocalToPrivate* API will be swith to GlobalToGlobal,
so any possible add a config to user enable/disable GlocalToLocal* and LocalToPrivate* API
ps: At other side , also I test on Qcom Adreno 510 and 506 GPU, try to disable LocalToPrivate API also
will be be more fast ,about 1.8X speed up
|
True
|
ARM Mali GPU no need GlocalToLocal* and LocalToPrivate* API - As desktoptop and qcom Adreno GPU has real Local mem or private mem, which is more fast that global
mem, at this sence, use GlocalToLocal* and LocalToPrivate* API will speed up calculate!
But, as we know ARM Mali GPU implement Local and private mem from Global mem, so use GlocalToLocal* and LocalToPrivate* API will do not good for speedup, to the contrary will be slow calculate, caused by GlocalToLocal* and LocalToPrivate* API will be swith to GlobalToGlobal,
so any possible add a config to user enable/disable GlocalToLocal* and LocalToPrivate* API
ps: At other side , also I test on Qcom Adreno 510 and 506 GPU, try to disable LocalToPrivate API also
will be be more fast ,about 1.8X speed up
|
non_test
|
arm mali gpu no need glocaltolocal and localtoprivate api as desktoptop and qcom adreno gpu has real local mem or private mem which is more fast that global mem at this sence use glocaltolocal and localtoprivate api will speed up calculate! but as we know arm mali gpu implement local and private mem from global mem so use glocaltolocal and localtoprivate api will do not good for speedup to the contrary will be slow calculate caused by glocaltolocal and localtoprivate api will be swith to globaltoglobal so any possible add a config to user enable disable glocaltolocal and localtoprivate api ps at other side also i test on qcom adreno and gpu try to disable localtoprivate api also will be be more fast about speed up
| 0
|
118,196
| 9,978,011,336
|
IssuesEvent
|
2019-07-09 18:45:24
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
BigQuery Acceptance Failures
|
api: bigquery testing type: cleanup
|
Seeing failures on CI with the following:
`TypeError: no implicit conversion of Google::Apis::BigqueryV2::ListModelsResponse into String`
|
1.0
|
BigQuery Acceptance Failures - Seeing failures on CI with the following:
`TypeError: no implicit conversion of Google::Apis::BigqueryV2::ListModelsResponse into String`
|
test
|
bigquery acceptance failures seeing failures on ci with the following typeerror no implicit conversion of google apis listmodelsresponse into string
| 1
|
43,978
| 2,894,768,142
|
IssuesEvent
|
2015-06-16 02:57:17
|
uwdata/vega-lite
|
https://api.github.com/repos/uwdata/vega-lite
|
closed
|
Test Scale/Axis with negative numbers
|
Easy help-wanted Priority/4-Low
|
This is for open sourcing the tool. (Not for our research)
|
1.0
|
Test Scale/Axis with negative numbers - This is for open sourcing the tool. (Not for our research)
|
non_test
|
test scale axis with negative numbers this is for open sourcing the tool not for our research
| 0
|
331,868
| 29,145,371,096
|
IssuesEvent
|
2023-05-18 02:04:05
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Test failure: WebAppTabStripBrowserTest.HomeTabScopeWildcardString
|
QA/No release-notes/exclude ci-concern bot/type/test bot/platform/windows bot/arch/x64 bot/channel/nightly bot/branch/master
|
Greetings human!
Bad news. `WebAppTabStripBrowserTest.HomeTabScopeWildcardString` [failed on windows x64 nightly master](https://ci.brave.com/job/brave-browser-build-windows-x64-asan/491/testReport/junit/(root)/WebAppTabStripBrowserTest/windows_x64___test_browser_chromium___HomeTabScopeWildcardString).
<details>
<summary>Stack trace</summary>
```
[ RUN ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString
[14016:13864:0518/012122.436:WARNING:chrome_main_delegate.cc(593)] This is Chrome version 114.1.53.48 (not a warning)
[14016:13864:0518/012122.448:WARNING:chrome_browser_cloud_management_controller.cc(87)] Could not create policy manager as CBCM is not enabled.
[14016:13864:0518/012122.570:ERROR:chrome_browser_cloud_management_controller.cc(162)] Cloud management controller initialization aborted as CBCM is not enabled.
[14016:13864:0518/012122.808:WARNING:external_provider_impl.cc(512)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
[14016:5060:0518/012124.384:WARNING:embedded_test_server.cc(675)] Request not handled. Returning 404: /favicon.ico
[14016:13864:0518/012125.625:WARNING:CONSOLE(5)] "crbug/1173575, non-JS module files deprecated.", source: chrome://resources/js/load_time_data_deprecated.js (5)
[14016:13864:0518/012125.952:WARNING:brave_stats_updater_params.cc(129)] Couldn't find the time of first run. This should only happen when running tests, but never in production code.
[14016:13864:0518/012126.216:ERROR:render_process_host_impl.cc(5358)] Terminating render process for bad Mojo message: Received bad user message: No binder found for interface brave_news.mojom.BraveNewsController for the frame/document scope
[14016:13864:0518/012126.217:ERROR:bad_message.cc(29)] Terminating renderer for bad IPC message, reason 123
..\..\content\public\test\no_renderer_crashes_assertion.cc(101): error: Failed
Unexpected termination of a renderer process; status: 3, exit_code: 3
Stack trace:
Backtrace:
content::NoRendererCrashesAssertion::Observe [0x00007FF71530D6BD+1931] (C:\jenkins\x64-nightly\src\content\public\test\no_renderer_crashes_assertion.cc:101)
content::NotificationServiceImpl::Notify [0x00007FF70E8865AE+3184] (C:\jenkins\x64-nightly\src\content\browser\notification_service_impl.cc:105)
content::RenderProcessHostImpl::ProcessDied [0x00007FF70EECE2DE+1182] (C:\jenkins\x64-nightly\src\content\browser\renderer_host\render_process_host_impl.cc:4878)
content::RenderProcessHostImpl::OnChannelError [0x00007FF70EED00FC+236] (C:\jenkins\x64-nightly\src\content\browser\renderer_host\render_process_host_impl.cc:3809)
base::internal::Invoker<base::internal::BindState<void (IPC::ChannelProxy::Context::*)(),scoped_refptr<IPC::ChannelProxy::Context> >,void ()>::RunOnce [0x00007FF71673B0C6+300] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
base::TaskAnnotator::RunTaskImpl [0x00007FF71429C4F7+1079] (C:\jenkins\x64-nightly\src\base\task\common\task_annotator.cc:186)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWorkImpl [0x00007FF71AED82D3+3363] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:486)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork [0x00007FF71AED7050+528] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:351)
base::MessagePumpForUI::DoRunLoop [0x00007FF7141D4F91+561] (C:\jenkins\x64-nightly\src\base\message_loop\message_pump_win.cc:212)
base::MessagePumpWin::Run [0x00007FF7141D2A17+535] (C:\jenkins\x64-nightly\src\base\message_loop\message_pump_win.cc:78)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run [0x00007FF71AEDA988+1128] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:651)
base::RunLoop::Run [0x00007FF714313458+1432] (C:\jenkins\x64-nightly\src\base\run_loop.cc:136)
content::WindowedNotificationObserver::Wait [0x00007FF7152B7B2B+347] (C:\jenkins\x64-nightly\src\content\public\test\test_utils.cc:404)
content::WaitForLoadStopWithoutSuccessCheck [0x00007FF715408E26+565] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_utils.cc:864)
content::WaitForLoadStop [0x00007FF715407C43+443] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_utils.cc:870)
web_app::WebAppTabStripBrowserTest::OpenUrlAndWait [0x00007FF70201FF23+275] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:104)
web_app::WebAppTabStripBrowserTest_HomeTabScopeWildcardString_Test::RunTestOnMainThread [0x00007FF70203DAE3+10257] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:985)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF71544AFEE+2084] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF715450778+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF70DDFD458+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1044)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF70DDFD646+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1056)
content::BrowserMainRunnerImpl::Run [0x00007FF70DE04620+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF70DDF56A3+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF712ADC442+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:706)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF712AE01E6+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1276)
content::ContentMainRunnerImpl::Run [0x00007FF712ADF95D+2675] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1134)
content::RunContentProcess [0x00007FF712ADA3DA+2768] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF712ADAF10+471] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF715448A31+5753] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF714057FDE+1092] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:491)
web_app::WebAppControllerBrowserTest::SetUp [0x00007FF713F6C81E+276] (C:\jenkins\x64-nightly\src\chrome\browser\ui\web_applications\web_app_controller_browsertest.cc:198)
web_app::WebAppTabStripBrowserTest::SetUp [0x00007FF70203F70D+805] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:71)
[ FAILED ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString, where TypeParam = and GetParam() = (7835 ms)
[ FAILED ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString
```
</details>
|
1.0
|
Test failure: WebAppTabStripBrowserTest.HomeTabScopeWildcardString - Greetings human!
Bad news. `WebAppTabStripBrowserTest.HomeTabScopeWildcardString` [failed on windows x64 nightly master](https://ci.brave.com/job/brave-browser-build-windows-x64-asan/491/testReport/junit/(root)/WebAppTabStripBrowserTest/windows_x64___test_browser_chromium___HomeTabScopeWildcardString).
<details>
<summary>Stack trace</summary>
```
[ RUN ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString
[14016:13864:0518/012122.436:WARNING:chrome_main_delegate.cc(593)] This is Chrome version 114.1.53.48 (not a warning)
[14016:13864:0518/012122.448:WARNING:chrome_browser_cloud_management_controller.cc(87)] Could not create policy manager as CBCM is not enabled.
[14016:13864:0518/012122.570:ERROR:chrome_browser_cloud_management_controller.cc(162)] Cloud management controller initialization aborted as CBCM is not enabled.
[14016:13864:0518/012122.808:WARNING:external_provider_impl.cc(512)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
[14016:5060:0518/012124.384:WARNING:embedded_test_server.cc(675)] Request not handled. Returning 404: /favicon.ico
[14016:13864:0518/012125.625:WARNING:CONSOLE(5)] "crbug/1173575, non-JS module files deprecated.", source: chrome://resources/js/load_time_data_deprecated.js (5)
[14016:13864:0518/012125.952:WARNING:brave_stats_updater_params.cc(129)] Couldn't find the time of first run. This should only happen when running tests, but never in production code.
[14016:13864:0518/012126.216:ERROR:render_process_host_impl.cc(5358)] Terminating render process for bad Mojo message: Received bad user message: No binder found for interface brave_news.mojom.BraveNewsController for the frame/document scope
[14016:13864:0518/012126.217:ERROR:bad_message.cc(29)] Terminating renderer for bad IPC message, reason 123
..\..\content\public\test\no_renderer_crashes_assertion.cc(101): error: Failed
Unexpected termination of a renderer process; status: 3, exit_code: 3
Stack trace:
Backtrace:
content::NoRendererCrashesAssertion::Observe [0x00007FF71530D6BD+1931] (C:\jenkins\x64-nightly\src\content\public\test\no_renderer_crashes_assertion.cc:101)
content::NotificationServiceImpl::Notify [0x00007FF70E8865AE+3184] (C:\jenkins\x64-nightly\src\content\browser\notification_service_impl.cc:105)
content::RenderProcessHostImpl::ProcessDied [0x00007FF70EECE2DE+1182] (C:\jenkins\x64-nightly\src\content\browser\renderer_host\render_process_host_impl.cc:4878)
content::RenderProcessHostImpl::OnChannelError [0x00007FF70EED00FC+236] (C:\jenkins\x64-nightly\src\content\browser\renderer_host\render_process_host_impl.cc:3809)
base::internal::Invoker<base::internal::BindState<void (IPC::ChannelProxy::Context::*)(),scoped_refptr<IPC::ChannelProxy::Context> >,void ()>::RunOnce [0x00007FF71673B0C6+300] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
base::TaskAnnotator::RunTaskImpl [0x00007FF71429C4F7+1079] (C:\jenkins\x64-nightly\src\base\task\common\task_annotator.cc:186)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWorkImpl [0x00007FF71AED82D3+3363] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:486)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork [0x00007FF71AED7050+528] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:351)
base::MessagePumpForUI::DoRunLoop [0x00007FF7141D4F91+561] (C:\jenkins\x64-nightly\src\base\message_loop\message_pump_win.cc:212)
base::MessagePumpWin::Run [0x00007FF7141D2A17+535] (C:\jenkins\x64-nightly\src\base\message_loop\message_pump_win.cc:78)
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run [0x00007FF71AEDA988+1128] (C:\jenkins\x64-nightly\src\base\task\sequence_manager\thread_controller_with_message_pump_impl.cc:651)
base::RunLoop::Run [0x00007FF714313458+1432] (C:\jenkins\x64-nightly\src\base\run_loop.cc:136)
content::WindowedNotificationObserver::Wait [0x00007FF7152B7B2B+347] (C:\jenkins\x64-nightly\src\content\public\test\test_utils.cc:404)
content::WaitForLoadStopWithoutSuccessCheck [0x00007FF715408E26+565] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_utils.cc:864)
content::WaitForLoadStop [0x00007FF715407C43+443] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_utils.cc:870)
web_app::WebAppTabStripBrowserTest::OpenUrlAndWait [0x00007FF70201FF23+275] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:104)
web_app::WebAppTabStripBrowserTest_HomeTabScopeWildcardString_Test::RunTestOnMainThread [0x00007FF70203DAE3+10257] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:985)
content::BrowserTestBase::ProxyRunTestOnMainThreadLoop [0x00007FF71544AFEE+2084] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:901)
base::internal::Invoker<base::internal::BindState<void (content::BrowserTestBase::*)(),base::internal::UnretainedWrapper<content::BrowserTestBase,base::unretained_traits::MayNotDangle,0> >,void ()>::RunOnce [0x00007FF715450778+366] (C:\jenkins\x64-nightly\src\base\functional\bind_internal.h:976)
content::BrowserMainLoop::InterceptMainMessageLoopRun [0x00007FF70DDFD458+424] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1044)
content::BrowserMainLoop::RunMainMessageLoop [0x00007FF70DDFD646+222] (C:\jenkins\x64-nightly\src\content\browser\browser_main_loop.cc:1056)
content::BrowserMainRunnerImpl::Run [0x00007FF70DE04620+42] (C:\jenkins\x64-nightly\src\content\browser\browser_main_runner_impl.cc:160)
content::BrowserMain [0x00007FF70DDF56A3+535] (C:\jenkins\x64-nightly\src\content\browser\browser_main.cc:34)
content::RunBrowserProcessMain [0x00007FF712ADC442+650] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:706)
content::ContentMainRunnerImpl::RunBrowser [0x00007FF712AE01E6+1210] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1276)
content::ContentMainRunnerImpl::Run [0x00007FF712ADF95D+2675] (C:\jenkins\x64-nightly\src\content\app\content_main_runner_impl.cc:1134)
content::RunContentProcess [0x00007FF712ADA3DA+2768] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:326)
content::ContentMain [0x00007FF712ADAF10+471] (C:\jenkins\x64-nightly\src\content\app\content_main.cc:343)
content::BrowserTestBase::SetUp [0x00007FF715448A31+5753] (C:\jenkins\x64-nightly\src\content\public\test\browser_test_base.cc:580)
InProcessBrowserTest::SetUp [0x00007FF714057FDE+1092] (C:\jenkins\x64-nightly\src\chrome\test\base\in_process_browser_test.cc:491)
web_app::WebAppControllerBrowserTest::SetUp [0x00007FF713F6C81E+276] (C:\jenkins\x64-nightly\src\chrome\browser\ui\web_applications\web_app_controller_browsertest.cc:198)
web_app::WebAppTabStripBrowserTest::SetUp [0x00007FF70203F70D+805] (C:\jenkins\x64-nightly\src\chrome\browser\ui\views\web_apps\web_app_tab_strip_browsertest.cc:71)
[ FAILED ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString, where TypeParam = and GetParam() = (7835 ms)
[ FAILED ] WebAppTabStripBrowserTest.HomeTabScopeWildcardString
```
</details>
|
test
|
test failure webapptabstripbrowsertest hometabscopewildcardstring greetings human bad news webapptabstripbrowsertest hometabscopewildcardstring stack trace webapptabstripbrowsertest hometabscopewildcardstring this is chrome version not a warning could not create policy manager as cbcm is not enabled cloud management controller initialization aborted as cbcm is not enabled malformed extension dictionary for extension odbfpeeihdkbihmopkbjmoonfanlbfcl key external update url has value which is not a valid url request not handled returning favicon ico crbug non js module files deprecated source chrome resources js load time data deprecated js couldn t find the time of first run this should only happen when running tests but never in production code terminating render process for bad mojo message received bad user message no binder found for interface brave news mojom bravenewscontroller for the frame document scope terminating renderer for bad ipc message reason content public test no renderer crashes assertion cc error failed unexpected termination of a renderer process status exit code stack trace backtrace content norenderercrashesassertion observe c jenkins nightly src content public test no renderer crashes assertion cc content notificationserviceimpl notify c jenkins nightly src content browser notification service impl cc content renderprocesshostimpl processdied c jenkins nightly src content browser renderer host render process host impl cc content renderprocesshostimpl onchannelerror c jenkins nightly src content browser renderer host render process host impl cc base internal invoker void runonce c jenkins nightly src base functional bind internal h base taskannotator runtaskimpl c jenkins nightly src base task common task annotator cc base sequence manager internal threadcontrollerwithmessagepumpimpl doworkimpl c jenkins nightly src base task sequence manager thread controller with message pump impl cc base sequence manager internal 
threadcontrollerwithmessagepumpimpl dowork c jenkins nightly src base task sequence manager thread controller with message pump impl cc base messagepumpforui dorunloop c jenkins nightly src base message loop message pump win cc base messagepumpwin run c jenkins nightly src base message loop message pump win cc base sequence manager internal threadcontrollerwithmessagepumpimpl run c jenkins nightly src base task sequence manager thread controller with message pump impl cc base runloop run c jenkins nightly src base run loop cc content windowednotificationobserver wait c jenkins nightly src content public test test utils cc content waitforloadstopwithoutsuccesscheck c jenkins nightly src content public test browser test utils cc content waitforloadstop c jenkins nightly src content public test browser test utils cc web app webapptabstripbrowsertest openurlandwait c jenkins nightly src chrome browser ui views web apps web app tab strip browsertest cc web app webapptabstripbrowsertest hometabscopewildcardstring test runtestonmainthread c jenkins nightly src chrome browser ui views web apps web app tab strip browsertest cc content browsertestbase proxyruntestonmainthreadloop c jenkins nightly src content public test browser test base cc base internal invoker void runonce c jenkins nightly src base functional bind internal h content browsermainloop interceptmainmessagelooprun c jenkins nightly src content browser browser main loop cc content browsermainloop runmainmessageloop c jenkins nightly src content browser browser main loop cc content browsermainrunnerimpl run c jenkins nightly src content browser browser main runner impl cc content browsermain c jenkins nightly src content browser browser main cc content runbrowserprocessmain c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl runbrowser c jenkins nightly src content app content main runner impl cc content contentmainrunnerimpl run c jenkins nightly src content app content 
main runner impl cc content runcontentprocess c jenkins nightly src content app content main cc content contentmain c jenkins nightly src content app content main cc content browsertestbase setup c jenkins nightly src content public test browser test base cc inprocessbrowsertest setup c jenkins nightly src chrome test base in process browser test cc web app webappcontrollerbrowsertest setup c jenkins nightly src chrome browser ui web applications web app controller browsertest cc web app webapptabstripbrowsertest setup c jenkins nightly src chrome browser ui views web apps web app tab strip browsertest cc webapptabstripbrowsertest hometabscopewildcardstring where typeparam and getparam ms webapptabstripbrowsertest hometabscopewildcardstring
| 1
|
176,352
| 14,578,984,057
|
IssuesEvent
|
2020-12-18 06:18:38
|
yihuabaowei/X-PaaS
|
https://api.github.com/repos/yihuabaowei/X-PaaS
|
opened
|
Project refactoring
|
documentation
|
Refer to parent issue [77](https://github.com/yihuabaowei/APM/issues/77l).
Implementation:
X-PaaS is responsible for APM, flowcontrol, the frontend GUI, and the Helm packaging project.
Requirements:
Standardize all package names under com.huawei; no more codenames such as yulinbar.
Wherever the original open-source files are modified, add a comment, tentatively "//huawei update".
All code must pass the quality gate.
|
1.0
|
Project refactoring - Refer to parent issue [77](https://github.com/yihuabaowei/APM/issues/77l).
Implementation:
X-PaaS is responsible for APM, flowcontrol, the frontend GUI, and the Helm packaging project.
Requirements:
Standardize all package names under com.huawei; no more codenames such as yulinbar.
Wherever the original open-source files are modified, add a comment, tentatively "//huawei update".
All code must pass the quality gate.
|
non_test
|
project refactoring refer to parent issue implementation x paas responsible for apm flowcontrol frontend gui and helm packaging project requirements standardize all package names under com huawei no more codenames such as yulinbar wherever original open source files are modified add a comment tentatively huawei update all code must pass the quality gate
| 0
|
144,320
| 11,612,311,696
|
IssuesEvent
|
2020-02-26 08:43:20
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: sqlsmith/setup=tpcc/setting=no-ddl failed
|
C-test-failure O-roachtest O-robot branch-master release-blocker
|
[(roachtest).sqlsmith/setup=tpcc/setting=no-ddl failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1766891&tab=buildLog) on [master@8b5adba703fae9b6961623f65b685d93b0fe0290](https://github.com/cockroachdb/cockroach/commits/8b5adba703fae9b6961623f65b685d93b0fe0290):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpcc/setting=no-ddl/run_1
sqlsmith.go:160,sqlsmith.go:165,sqlsmith.go:200,test_runner.go:741: query timed out, but did not cancel execution:
SELECT
tab_31.no_d_id AS col_96,
tab_32.ol_amount AS col_97,
7181791512994303457:::INT8 AS col_98,
tab_31.no_d_id AS col_99,
(-4199306508495062502):::INT8 AS col_100,
e'8\x7fA3\n\x1226n':::STRING AS col_101,
tab_32.ol_w_id AS col_102,
(-3689639311333846620):::INT8 AS col_103,
NULL AS col_104,
tab_32.ol_supply_w_id AS col_105,
tab_31.no_w_id AS col_106
FROM
defaultdb.public.new_order@primary AS tab_31, defaultdb.public.order_line@order_line_stock_fk_idx AS tab_32;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=tpcc/setting=no-ddl](https://teamcity.cockroachdb.com/viewLog.html?buildId=1766891&tab=artifacts#/sqlsmith/setup=tpcc/setting=no-ddl)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpcc%2Fsetting%3Dno-ddl.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: sqlsmith/setup=tpcc/setting=no-ddl failed - [(roachtest).sqlsmith/setup=tpcc/setting=no-ddl failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=1766891&tab=buildLog) on [master@8b5adba703fae9b6961623f65b685d93b0fe0290](https://github.com/cockroachdb/cockroach/commits/8b5adba703fae9b6961623f65b685d93b0fe0290):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=tpcc/setting=no-ddl/run_1
sqlsmith.go:160,sqlsmith.go:165,sqlsmith.go:200,test_runner.go:741: query timed out, but did not cancel execution:
SELECT
tab_31.no_d_id AS col_96,
tab_32.ol_amount AS col_97,
7181791512994303457:::INT8 AS col_98,
tab_31.no_d_id AS col_99,
(-4199306508495062502):::INT8 AS col_100,
e'8\x7fA3\n\x1226n':::STRING AS col_101,
tab_32.ol_w_id AS col_102,
(-3689639311333846620):::INT8 AS col_103,
NULL AS col_104,
tab_32.ol_supply_w_id AS col_105,
tab_31.no_w_id AS col_106
FROM
defaultdb.public.new_order@primary AS tab_31, defaultdb.public.order_line@order_line_stock_fk_idx AS tab_32;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=tpcc/setting=no-ddl](https://teamcity.cockroachdb.com/viewLog.html?buildId=1766891&tab=artifacts#/sqlsmith/setup=tpcc/setting=no-ddl)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dtpcc%2Fsetting%3Dno-ddl.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest sqlsmith setup tpcc setting no ddl failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup tpcc setting no ddl run sqlsmith go sqlsmith go sqlsmith go test runner go query timed out but did not cancel execution select tab no d id as col tab ol amount as col as col tab no d id as col as col e n string as col tab ol w id as col as col null as col tab ol supply w id as col tab no w id as col from defaultdb public new order primary as tab defaultdb public order line order line stock fk idx as tab more artifacts powered by
| 1
|
11,932
| 3,238,273,599
|
IssuesEvent
|
2015-10-14 15:32:28
|
plumi/criticalcommons.content
|
https://api.github.com/repos/plumi/criticalcommons.content
|
closed
|
RSS feed missing media file URLs
|
1 - Ready bug question testing
|
From our Scalar partner developer - this is important for being able to import CC videos into Scalar!
In the Critical Commons Atom feed, some of the media file URLs are missing. For example, here is the URL to the feed we use when searching for 'simpsons':
http://criticalcommons.org/cc/playlist?SearchableText=simpsons
Depending on your browser, you may need to View > Source to see the XML source (as opposed to a rendering of it, since this format is used for RSS). Once you can see the XML, run a string search (Option-F) for "none/none", and you'll see two nodes that have this:
<link>None/None</link>
This should instead have a URL to the media file in it, for example (from another node in the same document):
<link>http://videos.criticalcommons.org/transcoded/http/www.criticalcommons.org/Members/AdrianFohr/clips/elasticity-necessity-or-luxury/video_file/mp4-high/elasticity-in-the-simpsons-m4v.mp4</link>
<!---
@huboard:{"milestone_order":13.0,"order":29.0,"custom_state":""}
-->
|
1.0
|
RSS feed missing media file URLs - From our Scalar partner developer - this is important for being able to import CC videos into Scalar!
In the Critical Commons Atom feed, some of the media file URLs are missing. For example, here is the URL to the feed we use when searching for 'simpsons':
http://criticalcommons.org/cc/playlist?SearchableText=simpsons
Depending on your browser, you may need to View > Source to see the XML source (as opposed to a rendering of it, since this format is used for RSS). Once you can see the XML, run a string search (Option-F) for "none/none", and you'll see two nodes that have this:
<link>None/None</link>
This should instead have a URL to the media file in it, for example (from another node in the same document):
<link>http://videos.criticalcommons.org/transcoded/http/www.criticalcommons.org/Members/AdrianFohr/clips/elasticity-necessity-or-luxury/video_file/mp4-high/elasticity-in-the-simpsons-m4v.mp4</link>
<!---
@huboard:{"milestone_order":13.0,"order":29.0,"custom_state":""}
-->
|
test
|
rss feed missing media file urls from our scalar partner developer this is important for being able to import cc videos into scalar in the critical commons atom feed some of the media file urls are missing for example here is the url to the feed we use when searching for simpsons depending on your browser you may need to view source to see the xml source as opposed to a rendering of it since this format is used for rss once you can see the xml run a string search option f for none none and you ll see two nodes that have this none none this should instead have a url to the media file in it for example from another node in the same document huboard milestone order order custom state
| 1
|
249,979
| 21,220,094,678
|
IssuesEvent
|
2022-04-11 11:05:41
|
diddipoeler/sportsmanagement
|
https://api.github.com/repos/diddipoeler/sportsmanagement
|
closed
|
Wish: Display of events
|
wishlist Test by User
|
In the roster view (roster), the number of events is displayed:

It would be nice if the player info view (player) worked the same way. Here, a sum seems to be displayed:

|
1.0
|
Wish: Display of events - In the roster view (roster), the number of events is displayed:

It would be nice if the player info view (player) worked the same way. Here, a sum seems to be displayed:

|
test
|
wish display of events in the roster view roster the number of events is displayed it would be nice if the player info view player worked the same way here a sum seems to be displayed
| 1
|
8,608
| 3,000,120,044
|
IssuesEvent
|
2015-07-23 22:47:39
|
NMGRL/pychron
|
https://api.github.com/repos/NMGRL/pychron
|
closed
|
pipeline selection order
|
Enhancement Implemented Tested OK
|
For graphic clarity of the pull down of pipeline functions, could we list the functions in typical use order
as opposed to alphabetic order
iso evo
ic factor
blanks
flux
ideogram
spectrum
isochron
table
I'm not sure how gain figures in.
|
1.0
|
pipeline selection order - For graphic clarity of the pull down of pipeline functions, could we list the functions in typical use order
as opposed to alphabetic order
iso evo
ic factor
blanks
flux
ideogram
spectrum
isochron
table
I'm not sure how gain figures in.
|
test
|
pipeline selection order for graphic clarity of the pull down of pipeline functions could we list the functions in typical use order as opposed to alphabetic order iso evo ic factor blanks flux ideogram spectrum isochron table i m not sure how gain figures in
| 1
|
333,069
| 29,508,042,632
|
IssuesEvent
|
2023-06-03 15:06:56
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix activations.test_tensorflow_relu
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_activations.py::test_tensorflow_relu[cpu-ivy.functional.backends.paddle-False-False]</summary>
2023-06-02T03:03:25.9025786Z E AssertionError: the results from backend paddle and ground truth framework tensorflow do not match2023-06-02T03:03:25.9026355Z E -0.00048828125!=0.0 2023-06-02T03:03:25.9026685Z E 2023-06-02T03:03:25.9026988Z E 2023-06-02T03:03:25.9027367Z E Falsifying example: test_tensorflow_relu(2023-06-02T03:03:25.9027817Z E on_device='cpu',2023-06-02T03:03:25.9028246Z E frontend='tensorflow',2023-06-02T03:03:25.9028786Z E dtype_and_x=(['float16'], [array(-0.0001221, dtype=float16)]),2023-06-02T03:03:25.9029470Z E fn_tree='ivy.functional.frontends.tensorflow.keras.activations.elu',2023-06-02T03:03:25.9030043Z E test_flags=FrontendFunctionTestFlags(2023-06-02T03:03:25.9030458Z E num_positional_args=0,2023-06-02T03:03:25.9030825Z E with_out=False,2023-06-02T03:03:25.9031183Z E inplace=False,2023-06-02T03:03:25.9031548Z E as_variable=[False],2023-06-02T03:03:25.9031919Z E native_arrays=[False],2023-06-02T03:03:25.9032312Z E generate_frontend_arrays=False,2023-06-02T03:03:25.9032674Z E ),2023-06-02T03:03:25.9032984Z E )2023-06-02T03:03:25.9033287Z E 2023-06-02T03:03:25.9034012Z E You can reproduce this example by temporarily adding @reproduce_failure('6.75.9', b'AXicY2AAAsYDBxgQAAAS4QGC') as a decorator on your test case
</details>
|
1.0
|
Fix activations.test_tensorflow_relu - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5151064247/jobs/9275883899" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_frontends/test_tensorflow/test_activations.py::test_tensorflow_relu[cpu-ivy.functional.backends.paddle-False-False]</summary>
2023-06-02T03:03:25.9025786Z E AssertionError: the results from backend paddle and ground truth framework tensorflow do not match2023-06-02T03:03:25.9026355Z E -0.00048828125!=0.0 2023-06-02T03:03:25.9026685Z E 2023-06-02T03:03:25.9026988Z E 2023-06-02T03:03:25.9027367Z E Falsifying example: test_tensorflow_relu(2023-06-02T03:03:25.9027817Z E on_device='cpu',2023-06-02T03:03:25.9028246Z E frontend='tensorflow',2023-06-02T03:03:25.9028786Z E dtype_and_x=(['float16'], [array(-0.0001221, dtype=float16)]),2023-06-02T03:03:25.9029470Z E fn_tree='ivy.functional.frontends.tensorflow.keras.activations.elu',2023-06-02T03:03:25.9030043Z E test_flags=FrontendFunctionTestFlags(2023-06-02T03:03:25.9030458Z E num_positional_args=0,2023-06-02T03:03:25.9030825Z E with_out=False,2023-06-02T03:03:25.9031183Z E inplace=False,2023-06-02T03:03:25.9031548Z E as_variable=[False],2023-06-02T03:03:25.9031919Z E native_arrays=[False],2023-06-02T03:03:25.9032312Z E generate_frontend_arrays=False,2023-06-02T03:03:25.9032674Z E ),2023-06-02T03:03:25.9032984Z E )2023-06-02T03:03:25.9033287Z E 2023-06-02T03:03:25.9034012Z E You can reproduce this example by temporarily adding @reproduce_failure('6.75.9', b'AXicY2AAAsYDBxgQAAAS4QGC') as a decorator on your test case
</details>
|
test
|
fix activations test tensorflow relu tensorflow img src torch img src numpy img src jax img src paddle img src failed ivy tests test ivy test frontends test tensorflow test activations py test tensorflow relu e assertionerror the results from backend paddle and ground truth framework tensorflow do not e e e e falsifying example test tensorflow relu e on device cpu e frontend tensorflow e dtype and x e fn tree ivy functional frontends tensorflow keras activations elu e test flags frontendfunctiontestflags e num positional args e with out false e inplace false e as variable e native arrays e generate frontend arrays false e e e e you can reproduce this example by temporarily adding reproduce failure b as a decorator on your test case
| 1
|
580,347
| 17,241,693,470
|
IssuesEvent
|
2021-07-21 00:05:26
|
TacticalTome/TacticalTome
|
https://api.github.com/repos/TacticalTome/TacticalTome
|
opened
|
Fix bug regarding dropdowns in the navigation on mobile
|
Good First Issue Priority: Low Status: Available Type: Bug
|
Currently there is a problem with dropdowns in the navigation.
```JS
if (isUserDeviceMobile()) {
var dropdowns = document.getElementsByClassName("dropdown");
for (let i = 0; i < dropdowns.length; i++) {
var parentElement = dropdowns[i].parentElement;
if (parentElement.classList.contains("linkContainer") || parentElement.classList.contains("linkContainerRight") || parentElement.classList.contains("dropdown")) {
dropdowns[i].setAttribute("onclick", "toggleDropdownDisplay(this);");
}
}
}
toggleDropdownDisplay(element) {
if (element.classList.contains("display")) {
element.classList.remove("display");
} else {
element.classList.add("display");
}
}
```
Both of these functions are in the same object and therefore need to be refactored accordingly, since the button will not be able to recognize the `toggleDropdownDisplay(this)`. Try changing it to a `dropdowns[i].onclick = () => { }` and then refactor accordingly.
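A minimal sketch of the suggested refactor, for illustration only. The class name `Navigation` and the method name `attachMobileDropdownHandlers` are hypothetical wrappers (the original object structure is not shown above); only `toggleDropdownDisplay` and the class-name checks come from the snippet. The key change is attaching the handler as an arrow-function closure, so the method is resolved on the enclosing object instead of being looked up as a global by the inline `"onclick"` string:

```javascript
// Hypothetical sketch: Navigation and attachMobileDropdownHandlers are
// assumed names; the original snippet only shows the loop body and
// toggleDropdownDisplay.
class Navigation {
  attachMobileDropdownHandlers() {
    const dropdowns = document.getElementsByClassName("dropdown");
    for (let i = 0; i < dropdowns.length; i++) {
      const parent = dropdowns[i].parentElement;
      if (
        parent.classList.contains("linkContainer") ||
        parent.classList.contains("linkContainerRight") ||
        parent.classList.contains("dropdown")
      ) {
        // The arrow function captures `this` (the Navigation instance),
        // so the method call no longer relies on a global binding the way
        // the setAttribute("onclick", "...") string did.
        const element = dropdowns[i];
        element.onclick = () => this.toggleDropdownDisplay(element);
      }
    }
  }

  toggleDropdownDisplay(element) {
    // classList.toggle is equivalent to the original add/remove if/else.
    element.classList.toggle("display");
  }
}
```

Because the handler is now a closure, the same approach also works if both functions live on a plain object literal rather than a class.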
|
1.0
|
Fix bug regarding dropdowns in the navigation on mobile - Currently there is a problem with dropdowns in the navigation.
```JS
if (isUserDeviceMobile()) {
var dropdowns = document.getElementsByClassName("dropdown");
for (let i = 0; i < dropdowns.length; i++) {
var parentElement = dropdowns[i].parentElement;
if (parentElement.classList.contains("linkContainer") || parentElement.classList.contains("linkContainerRight") || parentElement.classList.contains("dropdown")) {
dropdowns[i].setAttribute("onclick", "toggleDropdownDisplay(this);");
}
}
}
toggleDropdownDisplay(element) {
if (element.classList.contains("display")) {
element.classList.remove("display");
} else {
element.classList.add("display");
}
}
```
Both of these functions are in the same object and therefore need to be refactored accordingly, since the button will not be able to recognize the `toggleDropdownDisplay(this)`. Try changing it to a `dropdowns[i].onclick = () => { }` and then refactor accordingly.
|
non_test
|
fix bug regarding dropdowns in the navigation on mobile currently there is a problem with dropdowns in the navigation js if isuserdevicemobile var dropdowns document getelementsbyclassname dropdown for let i i dropdowns length i var parentelement dropdowns parentelement if parentelement classlist contains linkcontainer parentelement classlist contains linkcontainerright parentelement classlist contains dropdown dropdowns setattribute onclick toggledropdowndisplay this toggledropdowndisplay element if element classlist contains display element classlist remove display else element classlist add display both of these functions are in the same object and therefore need to be refactored accordingly since the button will not be able to recognize the toggledropdowndisplay this try changing it to a dropdowns onclick and then refactor accordingly
| 0
|
244,637
| 20,681,114,051
|
IssuesEvent
|
2022-03-10 14:00:21
|
irods/irods
|
https://api.github.com/repos/irods/irods
|
closed
|
Test failure: test_ireg.test_ireg_options.test_ireg_recursive_C__issue_2912
|
bug testing
|
- [x] main
---
Due to comments in the code review for https://github.com/irods/irods/pull/6203, we changed how a certain message was being displayed when the `-C` option is used with `ireg`. The output now appears on `stdout` instead of `stderr`, so the test fails. Please update the test to check for the appropriate output.
|
1.0
|
Test failure: test_ireg.test_ireg_options.test_ireg_recursive_C__issue_2912 - - [x] main
---
Due to comments in the code review for https://github.com/irods/irods/pull/6203, we changed how a certain message was being displayed when the `-C` option is used with `ireg`. The output now appears on `stdout` instead of `stderr`, so the test fails. Please update the test to check for the appropriate output.
|
test
|
test failure test ireg test ireg options test ireg recursive c issue main due to comments in the code review for we changed how a certain message was being displayed when the c option is used with ireg the output now appears on stdout instead of stderr so the test fails please update the test to check for the appropriate output
| 1
|
245,410
| 20,767,543,232
|
IssuesEvent
|
2022-03-15 22:33:21
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
opened
|
[Flakey-test] Metricbeat kubernetes/controllermanager TestFetchMetricset
|
Metricbeat flaky-test Team:Cloudnative-Monitoring
|
## Flaky Test
* **Test Name:** metricbeat/module/kubernetes/controllermanager TestFetchMetricset
* **Link:** Link to file/line number in github.
* **Branch:** 7.17
* **Artifact Link:** https://beats-ci.elastic.co/job/Beats/job/beats/job/7.17/183/
* **Notes:** Branch and PR builds to 7.17 have been failing.
<img width="335" alt="Screen Shot 2022-03-15 at 18 28 29" src="https://user-images.githubusercontent.com/4565752/158482858-4ce7b782-af0f-4be2-8891-97e1ee3c286d.png">
### Stack Trace
```
=== Failed
=== FAIL: metricbeat/module/kubernetes/controllermanager TestFetchMetricset (0.01s)
controllermanager_integration_test.go:37: Expected 0 error, had 1. [error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp [::1]:10252: connect: connection refused]
=== FAIL: metricbeat/module/kubernetes/scheduler TestFetchMetricset (0.00s)
scheduler_integration_test.go:37: Expected 0 error, had 1. [error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp [::1]:10251: connect: connection refused]
DONE 32 tests, 2 failures in 64.192s
Error: failed modules: kubernetes
```
|
1.0
|
[Flakey-test] Metricbeat kubernetes/controllermanager TestFetchMetricset - ## Flaky Test
* **Test Name:** metricbeat/module/kubernetes/controllermanager TestFetchMetricset
* **Link:** Link to file/line number in github.
* **Branch:** 7.17
* **Artifact Link:** https://beats-ci.elastic.co/job/Beats/job/beats/job/7.17/183/
* **Notes:** Branch and PR builds to 7.17 have been failing.
<img width="335" alt="Screen Shot 2022-03-15 at 18 28 29" src="https://user-images.githubusercontent.com/4565752/158482858-4ce7b782-af0f-4be2-8891-97e1ee3c286d.png">
### Stack Trace
```
=== Failed
=== FAIL: metricbeat/module/kubernetes/controllermanager TestFetchMetricset (0.01s)
controllermanager_integration_test.go:37: Expected 0 error, had 1. [error getting processed metrics: error making http request: Get "http://localhost:10252/metrics": dial tcp [::1]:10252: connect: connection refused]
=== FAIL: metricbeat/module/kubernetes/scheduler TestFetchMetricset (0.00s)
scheduler_integration_test.go:37: Expected 0 error, had 1. [error getting processed metrics: error making http request: Get "http://localhost:10251/metrics": dial tcp [::1]:10251: connect: connection refused]
DONE 32 tests, 2 failures in 64.192s
Error: failed modules: kubernetes
```
|
test
|
metricbeat kubernetes controllermanager testfetchmetricset flaky test test name metricbeat module kubernetes controllermanager testfetchmetricset link link to file line number in github branch artifact link notes branch and pr builds to have been failing img width alt screen shot at src stack trace failed fail metricbeat module kubernetes controllermanager testfetchmetricset controllermanager integration test go expected error had connect connection refused fail metricbeat module kubernetes scheduler testfetchmetricset scheduler integration test go expected error had connect connection refused done tests failures in error failed modules kubernetes
| 1
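The connection-refused errors in the Metricbeat record above mean nothing was listening on the controller-manager/scheduler metrics ports when the test ran. A small pre-flight check along these lines (ports taken from the stack trace; everything else is an assumption) can separate "endpoint down" from a genuine metrics failure:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The ports the failing tests dialed, per the stack trace above.
ENDPOINTS = [("localhost", 10252), ("localhost", 10251)]

def missing_endpoints(endpoints=ENDPOINTS):
    """List the endpoints that are not reachable, for a clearer skip/fail message."""
    return [(h, p) for h, p in endpoints if not port_open(h, p)]
```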
|
606,645
| 18,767,053,082
|
IssuesEvent
|
2021-11-06 04:55:02
|
FantasticoFox/VerifyPage
|
https://api.github.com/repos/FantasticoFox/VerifyPage
|
closed
|
[Benchmark] Add a counter in 0.0s next to each revision to see how long the verification took | add a total somewhere in the header
|
medium priority UX
|
This will help us see the bottlenecks and get an estimate of our performance.

|
1.0
|
[Benchmark] Add a counter in 0.0s next to each revision to see how long the verification took | add a total somewhere in the header - This will help us see the bottlenecks and get an estimate of our performance.

|
non_test
|
add a counter in next to each revision to see how long the verification took add a total somewhere in the header this well help us to see the bottlenecks and get an estimate for our performance
| 0
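The per-revision timer requested in the record above can be sketched as a context manager that records each revision's duration plus a running total (names and structure are illustrative, not VerifyPage's actual code):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(results, key):
    """Record how long the wrapped block took, in seconds, under `key`,
    and accumulate a running total -- one entry per verified revision."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        results[key] = elapsed
        results["total"] = results.get("total", 0.0) + elapsed
```

Displaying `results[key]` next to each revision and `results["total"]` in the header would give exactly the benchmark view the issue asks for.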
|
776,826
| 27,264,688,421
|
IssuesEvent
|
2023-02-22 17:08:16
|
ascheid/itsg33-pbmm-issue-gen
|
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
|
opened
|
PL-2(3): System Security Plan | Plan / Coordinate With Other Organizational Entities
|
Priority: P2 Suggested Assignment: IT Security Function ITSG-33 Class: Management Control: PL-2
|
# Control Definition
SYSTEM SECURITY PLAN | PLAN / COORDINATE WITH OTHER ORGANIZATIONAL ENTITIES
The organization plans and coordinates security-related activities affecting the information system with [Assignment: organization-defined individuals or groups] before conducting such activities in order to reduce the impact on other organizational entities.
# Class
Management
# Supplemental Guidance
Security-related activities include, for example, security assessments, audits, hardware and software maintenance, patch management, and contingency plan testing. Advance planning and coordination includes emergency and non-emergency (i.e., planned or non-urgent unplanned) situations. The process defined by organizations to plan and coordinate security-related activities can be included in security plans for information systems or other documents, as appropriate. Related controls: CP-4, IR-4.
# Suggested Assignment
IT Security Function
|
1.0
|
PL-2(3): System Security Plan | Plan / Coordinate With Other Organizational Entities - # Control Definition
SYSTEM SECURITY PLAN | PLAN / COORDINATE WITH OTHER ORGANIZATIONAL ENTITIES
The organization plans and coordinates security-related activities affecting the information system with [Assignment: organization-defined individuals or groups] before conducting such activities in order to reduce the impact on other organizational entities.
# Class
Management
# Supplemental Guidance
Security-related activities include, for example, security assessments, audits, hardware and software maintenance, patch management, and contingency plan testing. Advance planning and coordination includes emergency and non-emergency (i.e., planned or non-urgent unplanned) situations. The process defined by organizations to plan and coordinate security-related activities can be included in security plans for information systems or other documents, as appropriate. Related controls: CP-4, IR-4.
# Suggested Assignment
IT Security Function
|
non_test
|
pl system security plan plan coordinate with other organizational entities control definition system security plan plan coordinate with other organizational entities the organization plans and coordinates security related activities affecting the information system with before conducting such activities in order to reduce the impact on other organizational entities class management supplemental guidance security related activities include for example security assessments audits hardware and software maintenance patch management and contingency plan testing advance planning and coordination includes emergency and non emergency i e planned or non urgent unplanned situations the process defined by organizations to plan and coordinate security related activities can be included in security plans for information systems or other documents as appropriate related controls cp ir suggested assignment it security function
| 0
|
157,312
| 12,369,766,448
|
IssuesEvent
|
2020-05-18 15:44:21
|
thefrontside/bigtest
|
https://api.github.com/repos/thefrontside/bigtest
|
closed
|
Step timeouts
|
@bigtest/agent enhancement
|
Steps are currently unbounded in how long they can take. For a condition that can never be met, this means that the test run will be blocked forever.
We need a way to configure the step timeout. Probably around 2 seconds?
|
1.0
|
Step timeouts - Steps are currently unbounded in how long they can take. For a condition that can never be met, this means that the test run will be blocked forever.
We need a way to configure the step timeout. Probably around 2 seconds?
|
test
|
step timeouts steps are currently unbounded in how long they can take for a condition that can never be met this means that the test run will be blocked forever we need a way to configure the step timeout probably around seconds
| 1
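The bounded step described in the BigTest record above — give up on a condition after roughly 2 seconds — can be sketched as a polling loop (a generic illustration, not the agent's real implementation):

```python
import time

def wait_for(condition, timeout=2.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns False on timeout instead of blocking forever, which is the
    failure mode the issue describes for conditions that can never be met.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```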
|
263,243
| 23,043,940,051
|
IssuesEvent
|
2022-07-23 15:44:58
|
transcelerate/ddf-sdr-backlog
|
https://api.github.com/repos/transcelerate/ddf-sdr-backlog
|
closed
|
[Bug]: ADO ID - 1786 : UI: Issues for System Usage screen and menu changes
|
bug testing Sprint13
|
### Contact Details
_No response_
### Steps to reproduce the Bug.
Issue1:
1. Login as Admin or Non-Admin
2. Click on the main menu like Study Definition,Reports, Manage
3. Observation: User is redirected to the root URL
Expected result: If there is a sub-menu, then the main menu should not be clickable
Issue2:
1. Navigate to Usage report screen
2. Observation: In the response code dropdown 500 code is not present
Issue3:
1. Navigate to Usage report screen
2. Observation: In the response code dropdown the word "UnAuthorized" should be written as "Unauthorized"
Issue4:
1. Navigate to Usage report screen
2. Sort any column
3. Click on Reset Button
4. Observation: The sorting applied by the user does not reset to the default sorting
### Bug prevalence
.
### ResolvingReason
_No response_
|
1.0
|
[Bug]: ADO ID - 1786 : UI: Issues for System Usage screen and menu changes - ### Contact Details
_No response_
### Steps to reproduce the Bug.
Issue1:
1. Login as Admin or Non-Admin
2. Click on the main menu like Study Definition,Reports, Manage
3. Observation: User is redirected to the root URL
Expected result: If there is a sub-menu, then the main menu should not be clickable
Issue2:
1. Navigate to Usage report screen
2. Observation: In the response code dropdown 500 code is not present
Issue3:
1. Navigate to Usage report screen
2. Observation: In the response code dropdown the word "UnAuthorized" should be written as "Unauthorized"
Issue4:
1. Navigate to Usage report screen
2. Sort any column
3. Click on Reset Button
4. Observation: The sorting applied by the user does not reset to the default sorting
### Bug prevalence
.
### ResolvingReason
_No response_
|
test
|
ado id ui issues for system usage screen and menu changes contact details no response steps to reproduce the bug login as admin or non admin click on the main menu like study definition reports manage observation user is redirected to the root url expected result if there is a sub menu then the main menu should not be clickable navigate to usage report screen observation in the response code dropdown code is not present navigate to usage report screen observation in the response code dropdown the word unauthorized should be written as unauthorized navigate to usage report screen sort any column click on reset button observation the sorting applied by user doesnot reset to the default sorting bug prevalence resolvingreason no response
| 1
|
3,885
| 3,266,269,356
|
IssuesEvent
|
2015-10-22 19:55:36
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
Internal registry push loading bar
|
area/usability component/build priority/P3
|
I want to see pushing to internal registry with loading bar:
-bash-4.2$ oc logs -f apaas-1-2708674907-build
Internal Error: pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
-bash-4.2$ oc logs -f apaas-1-2708674907-build
Already on 'master'
W1022 08:40:30.217551 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
W1022 08:40:30.236089 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
W1022 08:40:30.294958 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
I1022 08:40:31.313216 1 sti.go:407] Copying all WAR artifacts from /home/jboss/source/deployments directory into /opt/webserver/webapps for later deployment...
I1022 08:40:31.346271 1 sti.go:407] '/home/jboss/source/deployments/prime-face.war' -> '/opt/webserver/webapps/prime-face.war'
I1022 08:40:40.333435 1 sti.go:149] Using provided push secret for pushing 172.17.131.32:5000/mangis/apaas:latest image
I1022 08:40:40.333475 1 sti.go:151] Pushing 172.17.131.32:5000/mangis/apaas:latest image ...
My images (STI builders) are quite big and sometimes, due to performance issues, the push hangs. So a status indicator for the push state would help identify hung builds.
|
1.0
|
Internal registry push loading bar - I want to see pushing to internal registry with loading bar:
-bash-4.2$ oc logs -f apaas-1-2708674907-build
Internal Error: pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
-bash-4.2$ oc logs -f apaas-1-2708674907-build
Already on 'master'
W1022 08:40:30.217551 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
W1022 08:40:30.236089 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
W1022 08:40:30.294958 1 docker.go:304] The 'io.s2i.scripts-url' label is deprecated. Use "io.openshift.s2i.scripts-url" instead.
I1022 08:40:31.313216 1 sti.go:407] Copying all WAR artifacts from /home/jboss/source/deployments directory into /opt/webserver/webapps for later deployment...
I1022 08:40:31.346271 1 sti.go:407] '/home/jboss/source/deployments/prime-face.war' -> '/opt/webserver/webapps/prime-face.war'
I1022 08:40:40.333435 1 sti.go:149] Using provided push secret for pushing 172.17.131.32:5000/mangis/apaas:latest image
I1022 08:40:40.333475 1 sti.go:151] Pushing 172.17.131.32:5000/mangis/apaas:latest image ...
My images (STI builders) are quite big and sometimes, due to performance issues, the push hangs. So a status indicator for the push state would help identify hung builds.
|
non_test
|
internal registry push loading bar i want to see pushing to internal registry with loading bar bash oc logs f apaas build internal error pod is not in running succeeded or failed state state pending bash oc logs f apaas build already on master docker go the io scripts url label is deprecated use io openshift scripts url instead docker go the io scripts url label is deprecated use io openshift scripts url instead docker go the io scripts url label is deprecated use io openshift scripts url instead sti go copying all war artifacts from home jboss source deployments directory into opt webserver webapps for later deployment sti go home jboss source deployments prime face war opt webserver webapps prime face war sti go using provided push secret for pushing mangis apaas latest image sti go pushing mangis apaas latest image my images sti builders are quite big and sometimes due some performance issues it hand up on push procedure so status on push state would help to identify hanged builds
| 0
|
196,483
| 14,861,818,877
|
IssuesEvent
|
2021-01-19 00:00:23
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/security/user_email·js - security app useremail click changepassword link, change the password and re-login
|
Team:Security blocker bug failed-test regression test-cloud
|
**Version: 7.11.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/security/user_email·js**
**Stack Trace:**
```
Error: expected testSubject(passwordUpdateSuccess) to exist
at TestSubjects.existOrFail (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:62:15)
at AccountSettingsPage.changePassword (test/functional/page_objects/account_settings_page.ts:33:7)
at Context.<anonymous> (test/functional/apps/security/user_email.js:47:7)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
```
**Other test failures:**
- security app useremail login as new user with changed password
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/1050/testReport/_
|
2.0
|
[test-failed]: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/security/user_email·js - security app useremail click changepassword link, change the password and re-login - **Version: 7.11.0**
**Class: Chrome X-Pack UI Functional Tests1.x-pack/test/functional/apps/security/user_email·js**
**Stack Trace:**
```
Error: expected testSubject(passwordUpdateSuccess) to exist
at TestSubjects.existOrFail (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/test/functional/services/common/test_subjects.ts:62:15)
at AccountSettingsPage.changePassword (test/functional/page_objects/account_settings_page.ts:33:7)
at Context.<anonymous> (test/functional/apps/security/user_email.js:47:7)
at Object.apply (/var/lib/jenkins/workspace/elastic+estf-cloud-kibana-tests/JOB/xpackGrp4/TASK/saas_run_kibana_tests/node/linux-immutable/ci/cloud/common/build/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
```
**Other test failures:**
- security app useremail login as new user with changed password
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/1050/testReport/_
|
test
|
chrome x pack ui functional x pack test functional apps security user email·js security app useremail click changepassword link change the password and re login version class chrome x pack ui functional x pack test functional apps security user email·js stack trace error expected testsubject passwordupdatesuccess to exist at testsubjects existorfail var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana test functional services common test subjects ts at accountsettingspage changepassword test functional page objects account settings page ts at context test functional apps security user email js at object apply var lib jenkins workspace elastic estf cloud kibana tests job task saas run kibana tests node linux immutable ci cloud common build kibana packages kbn test src functional test runner lib mocha wrap function js other test failures security app useremail login as new user with changed password test report
| 1
|
248,951
| 21,091,158,518
|
IssuesEvent
|
2022-04-04 05:16:54
|
DnD-Montreal/session-tome
|
https://api.github.com/repos/DnD-Montreal/session-tome
|
closed
|
Accept: Open Session Entry Automation
|
acceptance test
|
## Description
Acceptance Test for #364
<!-- Provide a general summary of the test in the title above -->
Ability to have entries created automatically for a user, based off the start time of a session they are registered for
[UAT Environment](https://session-tome.triassi.ca) for executing the acceptance flow
<!-- See #E2E for automation of this flow -->
## Acceptance Flow
<!-- Describe the step by step procedure of the acceptance test -->
1. User Logs in
2. User navigates to the Event Page
3. User picks an event they'd like to register for
4. User clicks "register" on a particular Session and chooses a character
5. User waits for the start time for the session to pass
6. User observes a new entry created for that character
|
1.0
|
Accept: Open Session Entry Automation - ## Description
Acceptance Test for #364
<!-- Provide a general summary of the test in the title above -->
Ability to have entries created automatically for a user, based off the start time of a session they are registered for
[UAT Environment](https://session-tome.triassi.ca) for executing the acceptance flow
<!-- See #E2E for automation of this flow -->
## Acceptance Flow
<!-- Describe the step by step procedure of the acceptance test -->
1. User Logs in
2. User navigates to the Event Page
3. User picks an event they'd like to register for
4. User clicks "register" on a particular Session and chooses a character
5. User waits for the start time for the session to pass
6. User observes a new entry created for that character
|
test
|
accept open session entry automation description acceptance test for ability to have entries created automatically for a user based off the start time of a session they are registered for for executing the acceptance flow acceptance flow user logs in user navigates to the event page user picks an event they d like to register for user clicks register on a particular session and chooses a character user waits for the start time for the session to pass user observes a new entry created for that character
| 1
|
646,406
| 21,046,992,821
|
IssuesEvent
|
2022-03-31 16:56:39
|
manbuegom/tfg_project_22
|
https://api.github.com/repos/manbuegom/tfg_project_22
|
closed
|
Init Svelte.ts + Vite project
|
medium priority configuration
|
First steps with guides. Start the base project and learn how it works.
|
1.0
|
Init Svelte.ts + Vite project - First steps with guides. Start the base project and learn how it works.
|
non_test
|
init svelte ts vite project first steps with guides start the base project and learn how it works
| 0
|
178,603
| 13,786,074,275
|
IssuesEvent
|
2020-10-09 00:42:03
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Flutter driver appears to be broken on web
|
P2 a: null-safety a: tests engine framework passed first triage platform-web plugin t: flutter driver
|
This is causing tree closures on engine and plugins
/cc @jonahwilliams (because I think it might be related to your nullsafety PR)
@jason-simmons is bisecting right now AFAIK.
/cc @zanderso @liyuqian @yjbanov
|
1.0
|
Flutter driver appears to be broken on web - This is causing tree closures on engine and plugins
/cc @jonahwilliams (because I think it might be related to your nullsafety PR)
@jason-simmons is bisecting right now AFAIK.
/cc @zanderso @liyuqian @yjbanov
|
test
|
flutter driver appears to be broken on web this is causing tree closures on engine and plugins cc jonahwilliams because i think it might be related to your nullsafety pr jason simmons is bisecting right now afaik cc zanderso liyuqian yjbanov
| 1
|
753,537
| 26,352,185,172
|
IssuesEvent
|
2023-01-11 06:24:30
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
school.apple.com - site is not usable
|
browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/116650 -->
**URL**: https://school.apple.com/
**Browser / Version**: Firefox 108.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
When I navigate to the site, it gives an error saying my browser isn't supported. Only Chrome or Edge are supported.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/937dcc23-889c-4d9c-a3cd-88f3691e270e.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
school.apple.com - site is not usable - <!-- @browser: Firefox 108.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/116650 -->
**URL**: https://school.apple.com/
**Browser / Version**: Firefox 108.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
When I navigate to the site, it gives an error saying my browser isn't supported. Only Chrome or Edge are supported.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/1/937dcc23-889c-4d9c-a3cd-88f3691e270e.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
school apple com site is not usable url browser version firefox operating system windows tested another browser yes edge problem type site is not usable description browser unsupported steps to reproduce when i navigate to the site it gives an error saying my browser isn t supported only chrome or edge are supported view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
319,520
| 27,379,683,998
|
IssuesEvent
|
2023-02-28 09:11:23
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Add tests for the TypeHashVisitor
|
Type/Task Priority/High Area/UnitTest Team/CompilerFE Deferred
|
**Description:**
$title.
**Related Issues (optional):**
- #30836
|
1.0
|
Add tests for the TypeHashVisitor - **Description:**
$title.
**Related Issues (optional):**
- #30836
|
test
|
add tests for the typehashvisitor description title related issues optional
| 1
|
435,830
| 30,521,684,082
|
IssuesEvent
|
2023-07-19 08:31:05
|
UCLH-Foundry/Garden-Path
|
https://api.github.com/repos/UCLH-Foundry/Garden-Path
|
closed
|
Update new user docs for prod deployment
|
bug documentation
|
The URLs have changed and should therefore be updated in the notebooks
|
1.0
|
Update new user docs for prod deployment - The URLs have changed and should therefore be updated in the notebooks
|
non_test
|
update new user docs for prod deployment urls have changed thus should be updated in the notebooks
| 0
|
22,634
| 3,967,000,346
|
IssuesEvent
|
2016-05-03 14:52:53
|
radare/radare2
|
https://api.github.com/repos/radare/radare2
|
opened
|
pf: print array of a user-defined format
|
bug pf test-required
|
If I do:
```
$ r2 -nn ./r2r/bins/elf/main
> pf elf_phdr @ elf_phdr
type : 0x00000000 = type (enum) = (null)
flags : 0x00000004 = flags (enum) = (null)
offset : 0x00000008 = (qword) 0x0000000000000000
vaddr : 0x00000010 = (qword) 0x00000001003e0002
paddr : 0x00000018 = (qword) 0x0000000000400410
filesz : 0x00000020 = (qword) 0x0000000000000040
memsz : 0x00000028 = (qword) 0x0000000000000fc8
align : 0x00000030 = (qword) 0x0038004000000000
```
I get the expected output, but if I do:
```
> pf {4}elf_phdr @ elf_phdr
0x00000040 = 100663296.000000
0x00000044 = (qword) 0x0500000040000000
0x0000004c = 0
Register (null) does not exists
0x00000050 = 4194368.000000
0x00000054 = (qword) 0x0040004000000000
0x0000005c = 0
Register (null) does not exists
0x00000060 = 3221291008.000000
0x00000064 = (qword) 0x00000000c0010000
0x0000006c = 0
Register (null) does not exists
0x00000070 = 8.000000
0x00000074 = (qword) 0x0000000300000000
0x0000007c = 4
Register (null) does not exists
```
I don't. Am I using pf in the wrong way or is it just a bug?
|
1.0
|
pf: print array of a user-defined format - If I do:
```
$ r2 -nn ./r2r/bins/elf/main
> pf elf_phdr @ elf_phdr
type : 0x00000000 = type (enum) = (null)
flags : 0x00000004 = flags (enum) = (null)
offset : 0x00000008 = (qword) 0x0000000000000000
vaddr : 0x00000010 = (qword) 0x00000001003e0002
paddr : 0x00000018 = (qword) 0x0000000000400410
filesz : 0x00000020 = (qword) 0x0000000000000040
memsz : 0x00000028 = (qword) 0x0000000000000fc8
align : 0x00000030 = (qword) 0x0038004000000000
```
I get the expected output, but if I do:
```
> pf {4}elf_phdr @ elf_phdr
0x00000040 = 100663296.000000
0x00000044 = (qword) 0x0500000040000000
0x0000004c = 0
Register (null) does not exists
0x00000050 = 4194368.000000
0x00000054 = (qword) 0x0040004000000000
0x0000005c = 0
Register (null) does not exists
0x00000060 = 3221291008.000000
0x00000064 = (qword) 0x00000000c0010000
0x0000006c = 0
Register (null) does not exists
0x00000070 = 8.000000
0x00000074 = (qword) 0x0000000300000000
0x0000007c = 4
Register (null) does not exists
```
I don't. Am I using pf in the wrong way or is it just a bug?
|
test
|
pf print array of a user defined format if i do nn bins elf main pf elf phdr elf phdr type type enum null flags flags enum null offset qword vaddr qword paddr qword filesz qword memsz qword align qword i get the expected output but if i do pf elf phdr elf phdr qword register null does not exists qword register null does not exists qword register null does not exists qword register null does not exists i don t am i using pf in the wrong way or is it just a bug
| 1
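What the `pf {4}elf_phdr` invocation above is trying to do — walk four consecutive 56-byte ELF64 program headers — can be sketched outside radare2 with Python's `struct` module (field order taken from the `pf` output in the record; this says nothing about which repetition syntax `pf` itself expects):

```python
import struct

# One ELF64 program header, matching the fields in the pf output above:
# type, flags, offset, vaddr, paddr, filesz, memsz, align (little-endian).
PHDR = struct.Struct("<IIQQQQQQ")

def parse_phdrs(blob, count):
    """Parse `count` consecutive program headers from `blob` -- the
    iteration that `pf {4}elf_phdr` is expected to perform."""
    return [PHDR.unpack_from(blob, i * PHDR.size) for i in range(count)]
```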
|
321,898
| 23,877,435,691
|
IssuesEvent
|
2022-09-07 20:33:04
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
update design docs to reflect new stack overflow design
|
documentation
|
The design docs for the stack overflow table should be updated to match the changes in #381 which allow the stack overflow table to be non-empty at the start and end of execution
|
1.0
|
update design docs to reflect new stack overflow design - The design docs for the stack overflow table should be updated to match the changes in #381 which allow the stack overflow table to be non-empty at the start and end of execution
|
non_test
|
update design docs to reflect new stack overflow design the design docs for the stack overflow table should be updated to match the changes in which allow the stack overflow table to be non empty at the start and end of execution
| 0
|
590,114
| 17,771,409,565
|
IssuesEvent
|
2021-08-30 14:04:39
|
prometheus/prometheus
|
https://api.github.com/repos/prometheus/prometheus
|
closed
|
Include exemplars in snapshot
|
kind/enhancement priority/P3 component/tsdb
|
Else if snapshots are enabled, you might lose all the exemplars on a restart. This should be fixed before the next release.
|
1.0
|
Include exemplars in snapshot - Else if snapshots are enabled, you might lose all the exemplars on a restart. This should be fixed before the next release.
|
non_test
|
include exemplars in snapshot else if snapshots are enabled you might lose all the exemplars on a restart this should be fixed before the next release
| 0
|
25,929
| 26,124,113,366
|
IssuesEvent
|
2022-12-28 16:07:24
|
siteorigin/siteorigin-panels
|
https://api.github.com/repos/siteorigin/siteorigin-panels
|
opened
|
Column Resizing Rework
|
enhancement usability
|
The existing column resizing functionality isn't very intuitive and while there are a lot of options, most aren't useful. I suggest we rework the column resizing section to be more straightforward and to do this I think we should remove everything but the column number field and add conditionally displayed resize options based on the number of columns present. They will be more visually pleasing and useful.

Example predefined resize options based on the number of columns present.
1: Show no resize options.
2: 50/50, 75/25, 25/75
3: 33/33/33, 25/50/25
etc.
Possible designs:

With percentages visible:

Right aligned:

|
True
|
Column Resizing Rework - The existing column resizing functionality isn't very intuitive and while there are a lot of options, most aren't useful. I suggest we rework the column resizing section to be more straightforward and to do this I think we should remove everything but the column number field and add conditionally displayed resize options based on the number of columns present. They will be more visually pleasing and useful.

Example predefined resize options based on the number of columns present.
1: Show no resize options.
2: 50/50, 75/25, 25/75
3: 33/33/33, 25/50/25
etc.
Possible designs:

With percentages visible:

Right aligned:

|
non_test
|
column resizing rework the existing column resizing functionality isn t very intuitive and while there are a lot of options most aren t useful i suggest we rework the column resizing section to be more straightforward and to do this i think we should remove everything but the column number field and add conditionally displayed resize options based on the number of columns present they will be more visually pleasing and useful example predefined resize options based on the number of columns present show no resize options etc possible designs with percentages visible right aligned
| 0
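The conditional presets proposed in the record above map naturally to a lookup keyed on the column count (the percentages are illustrative; 25/75 and 33/33/34 are adjusted here so each preset fills the row):

```python
# Resize presets offered per column count, following the issue's examples:
# one column shows no options, unknown counts fall back to no presets.
RESIZE_PRESETS = {
    1: [],
    2: [(50, 50), (75, 25), (25, 75)],
    3: [(33, 33, 34), (25, 50, 25)],
}

def presets_for(columns):
    """Return the resize presets to display for a given column count."""
    return RESIZE_PRESETS.get(columns, [])
```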
|
242,794
| 7,846,870,599
|
IssuesEvent
|
2018-06-19 16:39:14
|
Cloud-CV/EvalAI
|
https://api.github.com/repos/Cloud-CV/EvalAI
|
closed
|
'Dataset split with that codename already exists' error
|
backend enhancement medium-difficulty new-feature priority-high
|
## Current Scenario
When a user creates a challenge using a zip configuration that has some dataset split codename, then the challenge is created successfully. If he tries to create a similar challenge with similar dataset split name then he gets an error that `Dataset split with that codename already exists` which should not happen since different challenges can have same dataset split codename.
## Deliverables
- [ ] Remove `unique=True` from DatasetSplit's codename field
- [ ] Set unique_together `pk` and `dataset_split` in Challenge phase model
- [ ] Don't forget to name migrations
|
1.0
|
'Dataset split with that codename already exists' error - ## Current Scenario
When a user creates a challenge using a zip configuration that has some dataset split codename, then the challenge is created successfully. If he tries to create a similar challenge with similar dataset split name then he gets an error that `Dataset split with that codename already exists` which should not happen since different challenges can have same dataset split codename.
## Deliverables
- [ ] Remove `unique=True` from DatasetSplit's codename field
- [ ] Set unique_together `pk` and `dataset_split` in Challenge phase model
- [ ] Don't forget to name migrations
|
non_test
|
dataset split with that codename already exists error current scenario when a user creates a challenge using a zip configuration that has some dataset split codename then the challenge is created successfully if he tries to create a similar challenge with similar dataset split name then he gets an error that dataset split with that codename already exists which should not happen since different challenges can have same dataset split codename deliverables remove unique true from datasetsplit s codename field set unique together pk and dataset split in challenge phase model don t forget to name migrations
| 0
|
348,796
| 31,719,117,848
|
IssuesEvent
|
2023-09-10 07:38:05
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
ccl/sqlproxyccl: TestProxyAgainstSecureCRDB failed
|
C-test-failure O-robot release-blocker T-sql-foundations branch-release-22.2
|
ccl/sqlproxyccl.TestProxyAgainstSecureCRDB [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11709951?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11709951?buildTab=artifacts#/) on release-22.2 @ [59a2f42f39e0bfd8d5140f1493c602d9dccd43eb](https://github.com/cockroachdb/cockroach/commits/59a2f42f39e0bfd8d5140f1493c602d9dccd43eb):
```
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc0043b5b78 sp=0xc0043b5b58 pc=0x468d96
runtime.selectgo(0xc0043b5da0, 0xc0043b5d38, 0x0?, 0x0, 0xcb3faa?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc0043b5cd8 sp=0xc0043b5b78 pc=0x47a2dc
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).initCancelingWorkers.func1({0x8cb9558, 0xc002c3c360})
github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:234 +0x14e fp=0xc0043b5e20 sp=0xc0043b5cd8 pc=0x5084d0e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:489 +0x1f7 fp=0xc0043b5fe0 sp=0xc0043b5e20 pc=0x17dad57
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0043b5fe8 sp=0xc0043b5fe0 pc=0x49c881
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:480 +0x61a
goroutine 5310 [select]:
runtime.gopark(0xc00245be58?, 0x2?, 0xa8?, 0xc6?, 0xc00245bdd4?)
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc001ddac28 sp=0xc001ddac08 pc=0x468d96
runtime.selectgo(0xc001ddae58, 0xc00245bdd0, 0x0?, 0x0, 0xc000dfc568?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc001ddad88 sp=0xc001ddac28 pc=0x47a2dc
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc002452af0, 0x1)
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/controlbuf.go:407 +0x1c5 fp=0xc001ddae88 sp=0xc001ddad88 pc=0x104aae5
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00258ec60)
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/controlbuf.go:534 +0x125 fp=0xc001ddaf38 sp=0xc001ddae88 pc=0x104b705
google.golang.org/grpc/internal/transport.NewServerTransport.func2()
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/http2_server.go:332 +0x1dd fp=0xc001ddafe0 sp=0xc001ddaf38 pc=0x1070bbd
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc001ddafe8 sp=0xc001ddafe0 pc=0x49c881
created by google.golang.org/grpc/internal/transport.NewServerTransport
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/http2_server.go:329 +0x271e
goroutine 5956 [select]:
runtime.gopark(0xc005873d60?, 0x2?, 0xe0?, 0x71?, 0xc005873cac?)
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc005873ae8 sp=0xc005873ac8 pc=0x468d96
runtime.selectgo(0xc005873d60, 0xc005873ca8, 0xc002b63ec0?, 0x0, 0x2?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc005873c48 sp=0xc005873ae8 pc=0x47a2dc
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed.(*RangeFeed).processEvents(0xc001b2e420, {0x8cb94b0, 0xc0032a1840}, 0x0?, 0xc002916ae0)
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed/rangefeed.go:351 +0x11a fp=0xc005873e98 sp=0xc005873c48 pc=0x2fc5a5a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed.(*RangeFeed).run.func2({0x8cb94b0, 0xc0032a1840})
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed/rangefeed.go:308 +0x65 fp=0xc005873ef8 sp=0xc005873e98 pc=0x2fc5645
github.com/cockroachdb/cockroach/pkg/util/ctxgroup.Group.GoCtx.func1()
github.com/cockroachdb/cockroach/pkg/util/ctxgroup/ctxgroup.go:168 +0x4d fp=0xc005873f38 sp=0xc005873ef8 pc=0x2f4b3cd
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:74 +0x87 fp=0xc005873fe0 sp=0xc005873f38 pc=0x24d8547
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc005873fe8 sp=0xc005873fe0 pc=0x49c881
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:71 +0x12f
goroutine 5542 [running]:
goroutine running on other thread; stack unavailable
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:480 +0x61a
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-foundations @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestProxyAgainstSecureCRDB.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
ccl/sqlproxyccl: TestProxyAgainstSecureCRDB failed - ccl/sqlproxyccl.TestProxyAgainstSecureCRDB [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11709951?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/11709951?buildTab=artifacts#/) on release-22.2 @ [59a2f42f39e0bfd8d5140f1493c602d9dccd43eb](https://github.com/cockroachdb/cockroach/commits/59a2f42f39e0bfd8d5140f1493c602d9dccd43eb):
```
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc0043b5b78 sp=0xc0043b5b58 pc=0x468d96
runtime.selectgo(0xc0043b5da0, 0xc0043b5d38, 0x0?, 0x0, 0xcb3faa?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc0043b5cd8 sp=0xc0043b5b78 pc=0x47a2dc
github.com/cockroachdb/cockroach/pkg/sql.(*DistSQLPlanner).initCancelingWorkers.func1({0x8cb9558, 0xc002c3c360})
github.com/cockroachdb/cockroach/pkg/sql/distsql_running.go:234 +0x14e fp=0xc0043b5e20 sp=0xc0043b5cd8 pc=0x5084d0e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:489 +0x1f7 fp=0xc0043b5fe0 sp=0xc0043b5e20 pc=0x17dad57
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0043b5fe8 sp=0xc0043b5fe0 pc=0x49c881
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:480 +0x61a
goroutine 5310 [select]:
runtime.gopark(0xc00245be58?, 0x2?, 0xa8?, 0xc6?, 0xc00245bdd4?)
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc001ddac28 sp=0xc001ddac08 pc=0x468d96
runtime.selectgo(0xc001ddae58, 0xc00245bdd0, 0x0?, 0x0, 0xc000dfc568?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc001ddad88 sp=0xc001ddac28 pc=0x47a2dc
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc002452af0, 0x1)
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/controlbuf.go:407 +0x1c5 fp=0xc001ddae88 sp=0xc001ddad88 pc=0x104aae5
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00258ec60)
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/controlbuf.go:534 +0x125 fp=0xc001ddaf38 sp=0xc001ddae88 pc=0x104b705
google.golang.org/grpc/internal/transport.NewServerTransport.func2()
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/http2_server.go:332 +0x1dd fp=0xc001ddafe0 sp=0xc001ddaf38 pc=0x1070bbd
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc001ddafe8 sp=0xc001ddafe0 pc=0x49c881
created by google.golang.org/grpc/internal/transport.NewServerTransport
google.golang.org/grpc/internal/transport/external/org_golang_google_grpc/internal/transport/http2_server.go:329 +0x271e
goroutine 5956 [select]:
runtime.gopark(0xc005873d60?, 0x2?, 0xe0?, 0x71?, 0xc005873cac?)
GOROOT/src/runtime/proc.go:363 +0xd6 fp=0xc005873ae8 sp=0xc005873ac8 pc=0x468d96
runtime.selectgo(0xc005873d60, 0xc005873ca8, 0xc002b63ec0?, 0x0, 0x2?, 0x1)
GOROOT/src/runtime/select.go:328 +0x8bc fp=0xc005873c48 sp=0xc005873ae8 pc=0x47a2dc
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed.(*RangeFeed).processEvents(0xc001b2e420, {0x8cb94b0, 0xc0032a1840}, 0x0?, 0xc002916ae0)
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed/rangefeed.go:351 +0x11a fp=0xc005873e98 sp=0xc005873c48 pc=0x2fc5a5a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed.(*RangeFeed).run.func2({0x8cb94b0, 0xc0032a1840})
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangefeed/rangefeed.go:308 +0x65 fp=0xc005873ef8 sp=0xc005873e98 pc=0x2fc5645
github.com/cockroachdb/cockroach/pkg/util/ctxgroup.Group.GoCtx.func1()
github.com/cockroachdb/cockroach/pkg/util/ctxgroup/ctxgroup.go:168 +0x4d fp=0xc005873f38 sp=0xc005873ef8 pc=0x2f4b3cd
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:74 +0x87 fp=0xc005873fe0 sp=0xc005873f38 pc=0x24d8547
runtime.goexit()
src/runtime/asm_amd64.s:1594 +0x1 fp=0xc005873fe8 sp=0xc005873fe0 pc=0x49c881
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:71 +0x12f
goroutine 5542 [running]:
goroutine running on other thread; stack unavailable
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:480 +0x61a
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-foundations @cockroachdb/server
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestProxyAgainstSecureCRDB.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
ccl sqlproxyccl testproxyagainstsecurecrdb failed ccl sqlproxyccl testproxyagainstsecurecrdb with on release goroot src runtime proc go fp sp pc runtime selectgo goroot src runtime select go fp sp pc github com cockroachdb cockroach pkg sql distsqlplanner initcancelingworkers github com cockroachdb cockroach pkg sql distsql running go fp sp pc github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go fp sp pc runtime goexit src runtime asm s fp sp pc created by github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go goroutine runtime gopark goroot src runtime proc go fp sp pc runtime selectgo goroot src runtime select go fp sp pc google golang org grpc internal transport controlbuffer get google golang org grpc internal transport external org golang google grpc internal transport controlbuf go fp sp pc google golang org grpc internal transport loopywriter run google golang org grpc internal transport external org golang google grpc internal transport controlbuf go fp sp pc google golang org grpc internal transport newservertransport google golang org grpc internal transport external org golang google grpc internal transport server go fp sp pc runtime goexit src runtime asm s fp sp pc created by google golang org grpc internal transport newservertransport google golang org grpc internal transport external org golang google grpc internal transport server go goroutine runtime gopark goroot src runtime proc go fp sp pc runtime selectgo goroot src runtime select go fp sp pc github com cockroachdb cockroach pkg kv kvclient rangefeed rangefeed processevents github com cockroachdb cockroach pkg kv kvclient rangefeed rangefeed go fp sp pc github com cockroachdb cockroach pkg kv kvclient rangefeed rangefeed run github com cockroachdb cockroach pkg kv kvclient rangefeed rangefeed go fp sp pc github com cockroachdb cockroach pkg util 
ctxgroup group goctx github com cockroachdb cockroach pkg util ctxgroup ctxgroup go fp sp pc golang org x sync errgroup group go golang org x sync errgroup external org golang x sync errgroup errgroup go fp sp pc runtime goexit src runtime asm s fp sp pc created by golang org x sync errgroup group go golang org x sync errgroup external org golang x sync errgroup errgroup go goroutine goroutine running on other thread stack unavailable created by github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go parameters tags bazel gss race help see also cc cockroachdb sql foundations cockroachdb server
| 1
|
168,617
| 13,097,657,900
|
IssuesEvent
|
2020-08-03 17:53:26
|
thefrontside/bigtest
|
https://api.github.com/repos/thefrontside/bigtest
|
opened
|
Agent Frame sometimes interferes with test Frame
|
@bigtest/agent 🐛bug
|
Depending on the CSS of the app under test, the agent frame doesn't play nicely with it:

|
1.0
|
Agent Frame sometimes interferes with test Frame - Depending on the CSS of the app under test, the agent frame doesn't play nicely with it:

|
test
|
agent frame sometimes interferes with test frame depending on the css of the app under test the agent frame doesn t play nicely with it
| 1
|
324,935
| 27,832,815,034
|
IssuesEvent
|
2023-03-20 07:02:43
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
DATA RACE TestMultiValuedIndexOnlineDDL
|
type/bug component/test severity/moderate
|
## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
```
=================
WARNING: DATA RACE
Read at 0x00c0186691f0 by goroutine 519066:
github.com/pingcap/tidb/planner/core.newBasePlan()
planner/core/plan.go:728 +0x8a
github.com/pingcap/tidb/planner/core.newBaseLogicalPlan()
planner/core/plan.go:743 +0x10f
github.com/pingcap/tidb/planner/core.LogicalTableDual.Init()
planner/core/initialize.go:175 +0x23f
github.com/pingcap/tidb/planner/core.rewriteAstExpr()
planner/core/expression_rewriter.go:74 +0x274
github.com/pingcap/tidb/expression.ColumnInfos2ColumnsAndNames()
expression/expression.go:1001 +0xd7a
github.com/pingcap/tidb/util/admin.makeRowDecoder()
util/admin/admin.go:196 +0x3b6
github.com/pingcap/tidb/util/admin.iterRecords()
util/admin/admin.go:224 +0x6b5
github.com/pingcap/tidb/util/admin.CheckRecordAndIndex()
util/admin/admin.go:186 +0x4fd
github.com/pingcap/tidb/executor.(*CheckTableExec).checkTableRecord()
executor/executor.go:1097 +0x644
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2.1()
executor/executor.go:1053 +0x24d
github.com/pingcap/tidb/util.WithRecovery()
util/misc.go:96 +0x6d
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2()
executor/executor.go:1042 +0xc7
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run.func1()
util/wait_group_wrapper.go:154 +0x73
Previous write at 0x00c0186691f0 by goroutine 519065:
github.com/pingcap/tidb/planner/core.newBasePlan()
planner/core/plan.go:728 +0xa5
github.com/pingcap/tidb/planner/core.newBaseLogicalPlan()
planner/core/plan.go:743 +0x10f
github.com/pingcap/tidb/planner/core.LogicalTableDual.Init()
planner/core/initialize.go:175 +0x23f
github.com/pingcap/tidb/planner/core.rewriteAstExpr()
planner/core/expression_rewriter.go:74 +0x274
github.com/pingcap/tidb/expression.ColumnInfos2ColumnsAndNames()
expression/expression.go:1001 +0xd7a
github.com/pingcap/tidb/util/admin.makeRowDecoder()
util/admin/admin.go:196 +0x3b6
github.com/pingcap/tidb/util/admin.iterRecords()
util/admin/admin.go:224 +0x6b5
github.com/pingcap/tidb/util/admin.CheckRecordAndIndex()
util/admin/admin.go:186 +0x4fd
github.com/pingcap/tidb/executor.(*CheckTableExec).checkTableRecord()
executor/executor.go:1097 +0x644
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2.1()
executor/executor.go:1053 +0x24d
github.com/pingcap/tidb/util.WithRecovery()
util/misc.go:96 +0x6d
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2()
executor/executor.go:1042 +0xc7
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run.func1()
util/wait_group_wrapper.go:154 +0x73
Goroutine 519066 (running) created at:
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run()
util/wait_group_wrapper.go:152 +0xe4
github.com/pingcap/tidb/executor.(*CheckTableExec).Next()
executor/executor.go:1041 +0xa0f
github.com/pingcap/tidb/executor.Next()
executor/executor.go:326 +0x326
github.com/pingcap/tidb/executor.(*ExecStmt).next()
executor/adapter.go:1212 +0x89
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor()
executor/adapter.go:957 +0x4f9
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelay()
executor/adapter.go:782 +0x34a
github.com/pingcap/tidb/executor.(*ExecStmt).Exec()
executor/adapter.go:577 +0x129e
github.com/pingcap/tidb/session.runStmt()
session/session.go:2333 +0x62f
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2190 +0x10dd
github.com/pingcap/tidb/testkit.(*TestKit).ExecWithContext()
testkit/testkit.go:325 +0x8ae
github.com/pingcap/tidb/testkit.(*TestKit).MustExecWithContext()
testkit/testkit.go:133 +0xb7
github.com/pingcap/tidb/testkit.(*TestKit).MustExec()
testkit/testkit.go:128 +0x138
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:60 +0x550
github.com/pingcap/tidb/domain.(*Domain).rebuildSysVarCache()
domain/sysvar_cache.go:146 +0x8c4
github.com/pingcap/tidb/domain.(*Domain).LoadSysVarCacheLoop()
domain/domain.go:1470 +0xa8
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3314 +0x6d3
github.com/pingcap/tidb/domain.(*Domain).GetSessionCache()
domain/sysvar_cache.go:62 +0x5c
github.com/pingcap/tidb/session.(*session).loadCommonGlobalVariablesIfNeeded()
session/session.go:3656 +0x104
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2093 +0x145
github.com/pingcap/tidb/session.(*session).ExecuteInternal()
session/session.go:1628 +0x31b
github.com/pingcap/tidb/domain.(*Domain).LoadPrivilegeLoop()
domain/domain.go:1414 +0x130
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3307 +0x684
github.com/pingcap/tidb/testkit.bootstrap()
testkit/mockstore.go:85 +0xac
github.com/pingcap/tidb/testkit.CreateMockStoreAndDomain()
testkit/mockstore.go:70 +0xe9
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:29 +0x58
testing.tRunner()
GOROOT/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1629 +0x47
Goroutine 519065 (running) created at:
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run()
util/wait_group_wrapper.go:152 +0xe4
github.com/pingcap/tidb/executor.(*CheckTableExec).Next()
executor/executor.go:1041 +0xa0f
github.com/pingcap/tidb/executor.Next()
executor/executor.go:326 +0x326
github.com/pingcap/tidb/executor.(*ExecStmt).next()
executor/adapter.go:1212 +0x89
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor()
executor/adapter.go:957 +0x4f9
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelay()
executor/adapter.go:782 +0x34a
github.com/pingcap/tidb/executor.(*ExecStmt).Exec()
executor/adapter.go:577 +0x129e
github.com/pingcap/tidb/session.runStmt()
session/session.go:2333 +0x62f
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2190 +0x10dd
github.com/pingcap/tidb/testkit.(*TestKit).ExecWithContext()
testkit/testkit.go:325 +0x8ae
github.com/pingcap/tidb/testkit.(*TestKit).MustExecWithContext()
testkit/testkit.go:133 +0xb7
github.com/pingcap/tidb/testkit.(*TestKit).MustExec()
testkit/testkit.go:128 +0x138
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:60 +0x550
github.com/pingcap/tidb/domain.(*Domain).rebuildSysVarCache()
domain/sysvar_cache.go:146 +0x8c4
github.com/pingcap/tidb/domain.(*Domain).LoadSysVarCacheLoop()
domain/domain.go:1470 +0xa8
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3314 +0x6d3
github.com/pingcap/tidb/domain.(*Domain).GetSessionCache()
domain/sysvar_cache.go:62 +0x5c
github.com/pingcap/tidb/session.(*session).loadCommonGlobalVariablesIfNeeded()
session/session.go:3656 +0x104
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2093 +0x145
github.com/pingcap/tidb/session.(*session).ExecuteInternal()
session/session.go:1628 +0x31b
github.com/pingcap/tidb/domain.(*Domain).LoadPrivilegeLoop()
domain/domain.go:1414 +0x130
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3307 +0x684
github.com/pingcap/tidb/testkit.bootstrap()
testkit/mockstore.go:85 +0xac
github.com/pingcap/tidb/testkit.CreateMockStoreAndDomain()
testkit/mockstore.go:70 +0xe9
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:29 +0x58
testing.tRunner()
GOROOT/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1629 +0x47
==================
```
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
1.0
|
DATA RACE TestMultiValuedIndexOnlineDDL - ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
<!-- a step by step guide for reproducing the bug. -->
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
```
=================
WARNING: DATA RACE
Read at 0x00c0186691f0 by goroutine 519066:
github.com/pingcap/tidb/planner/core.newBasePlan()
planner/core/plan.go:728 +0x8a
github.com/pingcap/tidb/planner/core.newBaseLogicalPlan()
planner/core/plan.go:743 +0x10f
github.com/pingcap/tidb/planner/core.LogicalTableDual.Init()
planner/core/initialize.go:175 +0x23f
github.com/pingcap/tidb/planner/core.rewriteAstExpr()
planner/core/expression_rewriter.go:74 +0x274
github.com/pingcap/tidb/expression.ColumnInfos2ColumnsAndNames()
expression/expression.go:1001 +0xd7a
github.com/pingcap/tidb/util/admin.makeRowDecoder()
util/admin/admin.go:196 +0x3b6
github.com/pingcap/tidb/util/admin.iterRecords()
util/admin/admin.go:224 +0x6b5
github.com/pingcap/tidb/util/admin.CheckRecordAndIndex()
util/admin/admin.go:186 +0x4fd
github.com/pingcap/tidb/executor.(*CheckTableExec).checkTableRecord()
executor/executor.go:1097 +0x644
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2.1()
executor/executor.go:1053 +0x24d
github.com/pingcap/tidb/util.WithRecovery()
util/misc.go:96 +0x6d
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2()
executor/executor.go:1042 +0xc7
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run.func1()
util/wait_group_wrapper.go:154 +0x73
Previous write at 0x00c0186691f0 by goroutine 519065:
github.com/pingcap/tidb/planner/core.newBasePlan()
planner/core/plan.go:728 +0xa5
github.com/pingcap/tidb/planner/core.newBaseLogicalPlan()
planner/core/plan.go:743 +0x10f
github.com/pingcap/tidb/planner/core.LogicalTableDual.Init()
planner/core/initialize.go:175 +0x23f
github.com/pingcap/tidb/planner/core.rewriteAstExpr()
planner/core/expression_rewriter.go:74 +0x274
github.com/pingcap/tidb/expression.ColumnInfos2ColumnsAndNames()
expression/expression.go:1001 +0xd7a
github.com/pingcap/tidb/util/admin.makeRowDecoder()
util/admin/admin.go:196 +0x3b6
github.com/pingcap/tidb/util/admin.iterRecords()
util/admin/admin.go:224 +0x6b5
github.com/pingcap/tidb/util/admin.CheckRecordAndIndex()
util/admin/admin.go:186 +0x4fd
github.com/pingcap/tidb/executor.(*CheckTableExec).checkTableRecord()
executor/executor.go:1097 +0x644
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2.1()
executor/executor.go:1053 +0x24d
github.com/pingcap/tidb/util.WithRecovery()
util/misc.go:96 +0x6d
github.com/pingcap/tidb/executor.(*CheckTableExec).Next.func2()
executor/executor.go:1042 +0xc7
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run.func1()
util/wait_group_wrapper.go:154 +0x73
Goroutine 519066 (running) created at:
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run()
util/wait_group_wrapper.go:152 +0xe4
github.com/pingcap/tidb/executor.(*CheckTableExec).Next()
executor/executor.go:1041 +0xa0f
github.com/pingcap/tidb/executor.Next()
executor/executor.go:326 +0x326
github.com/pingcap/tidb/executor.(*ExecStmt).next()
executor/adapter.go:1212 +0x89
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor()
executor/adapter.go:957 +0x4f9
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelay()
executor/adapter.go:782 +0x34a
github.com/pingcap/tidb/executor.(*ExecStmt).Exec()
executor/adapter.go:577 +0x129e
github.com/pingcap/tidb/session.runStmt()
session/session.go:2333 +0x62f
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2190 +0x10dd
github.com/pingcap/tidb/testkit.(*TestKit).ExecWithContext()
testkit/testkit.go:325 +0x8ae
github.com/pingcap/tidb/testkit.(*TestKit).MustExecWithContext()
testkit/testkit.go:133 +0xb7
github.com/pingcap/tidb/testkit.(*TestKit).MustExec()
testkit/testkit.go:128 +0x138
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:60 +0x550
github.com/pingcap/tidb/domain.(*Domain).rebuildSysVarCache()
domain/sysvar_cache.go:146 +0x8c4
github.com/pingcap/tidb/domain.(*Domain).LoadSysVarCacheLoop()
domain/domain.go:1470 +0xa8
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3314 +0x6d3
github.com/pingcap/tidb/domain.(*Domain).GetSessionCache()
domain/sysvar_cache.go:62 +0x5c
github.com/pingcap/tidb/session.(*session).loadCommonGlobalVariablesIfNeeded()
session/session.go:3656 +0x104
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2093 +0x145
github.com/pingcap/tidb/session.(*session).ExecuteInternal()
session/session.go:1628 +0x31b
github.com/pingcap/tidb/domain.(*Domain).LoadPrivilegeLoop()
domain/domain.go:1414 +0x130
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3307 +0x684
github.com/pingcap/tidb/testkit.bootstrap()
testkit/mockstore.go:85 +0xac
github.com/pingcap/tidb/testkit.CreateMockStoreAndDomain()
testkit/mockstore.go:70 +0xe9
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:29 +0x58
testing.tRunner()
GOROOT/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1629 +0x47
Goroutine 519065 (running) created at:
github.com/pingcap/tidb/util.(*WaitGroupWrapper).Run()
util/wait_group_wrapper.go:152 +0xe4
github.com/pingcap/tidb/executor.(*CheckTableExec).Next()
executor/executor.go:1041 +0xa0f
github.com/pingcap/tidb/executor.Next()
executor/executor.go:326 +0x326
github.com/pingcap/tidb/executor.(*ExecStmt).next()
executor/adapter.go:1212 +0x89
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor()
executor/adapter.go:957 +0x4f9
github.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelay()
executor/adapter.go:782 +0x34a
github.com/pingcap/tidb/executor.(*ExecStmt).Exec()
executor/adapter.go:577 +0x129e
github.com/pingcap/tidb/session.runStmt()
session/session.go:2333 +0x62f
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2190 +0x10dd
github.com/pingcap/tidb/testkit.(*TestKit).ExecWithContext()
testkit/testkit.go:325 +0x8ae
github.com/pingcap/tidb/testkit.(*TestKit).MustExecWithContext()
testkit/testkit.go:133 +0xb7
github.com/pingcap/tidb/testkit.(*TestKit).MustExec()
testkit/testkit.go:128 +0x138
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:60 +0x550
github.com/pingcap/tidb/domain.(*Domain).rebuildSysVarCache()
domain/sysvar_cache.go:146 +0x8c4
github.com/pingcap/tidb/domain.(*Domain).LoadSysVarCacheLoop()
domain/domain.go:1470 +0xa8
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3314 +0x6d3
github.com/pingcap/tidb/domain.(*Domain).GetSessionCache()
domain/sysvar_cache.go:62 +0x5c
github.com/pingcap/tidb/session.(*session).loadCommonGlobalVariablesIfNeeded()
session/session.go:3656 +0x104
github.com/pingcap/tidb/session.(*session).ExecuteStmt()
session/session.go:2093 +0x145
github.com/pingcap/tidb/session.(*session).ExecuteInternal()
session/session.go:1628 +0x31b
github.com/pingcap/tidb/domain.(*Domain).LoadPrivilegeLoop()
domain/domain.go:1414 +0x130
github.com/pingcap/tidb/session.BootstrapSession()
session/session.go:3307 +0x684
github.com/pingcap/tidb/testkit.bootstrap()
testkit/mockstore.go:85 +0xac
github.com/pingcap/tidb/testkit.CreateMockStoreAndDomain()
testkit/mockstore.go:70 +0xe9
github.com/pingcap/tidb/ddl_test.TestMultiValuedIndexOnlineDDL()
ddl/mv_index_test.go:29 +0x58
testing.tRunner()
GOROOT/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
GOROOT/src/testing/testing.go:1629 +0x47
==================
```
### 4. What is your TiDB version? (Required)
<!-- Paste the output of SELECT tidb_version() -->
|
test
|
data race testmultivaluedindexonlineddl bug report please answer these questions before submitting your issue thanks minimal reproduce step required what did you expect to see required what did you see instead required warning data race read at by goroutine github com pingcap tidb planner core newbaseplan planner core plan go github com pingcap tidb planner core newbaselogicalplan planner core plan go github com pingcap tidb planner core logicaltabledual init planner core initialize go github com pingcap tidb planner core rewriteastexpr planner core expression rewriter go github com pingcap tidb expression expression expression go github com pingcap tidb util admin makerowdecoder util admin admin go github com pingcap tidb util admin iterrecords util admin admin go github com pingcap tidb util admin checkrecordandindex util admin admin go github com pingcap tidb executor checktableexec checktablerecord executor executor go github com pingcap tidb executor checktableexec next executor executor go github com pingcap tidb util withrecovery util misc go github com pingcap tidb executor checktableexec next executor executor go github com pingcap tidb util waitgroupwrapper run util wait group wrapper go previous write at by goroutine github com pingcap tidb planner core newbaseplan planner core plan go github com pingcap tidb planner core newbaselogicalplan planner core plan go github com pingcap tidb planner core logicaltabledual init planner core initialize go github com pingcap tidb planner core rewriteastexpr planner core expression rewriter go github com pingcap tidb expression expression expression go github com pingcap tidb util admin makerowdecoder util admin admin go github com pingcap tidb util admin iterrecords util admin admin go github com pingcap tidb util admin checkrecordandindex util admin admin go github com pingcap tidb executor checktableexec checktablerecord executor executor go github com pingcap tidb executor checktableexec next executor executor 
go github com pingcap tidb util withrecovery util misc go github com pingcap tidb executor checktableexec next executor executor go github com pingcap tidb util waitgroupwrapper run util wait group wrapper go goroutine running created at github com pingcap tidb util waitgroupwrapper run util wait group wrapper go github com pingcap tidb executor checktableexec next executor executor go github com pingcap tidb executor next executor executor go github com pingcap tidb executor execstmt next executor adapter go github com pingcap tidb executor execstmt handlenodelayexecutor executor adapter go github com pingcap tidb executor execstmt handlenodelay executor adapter go github com pingcap tidb executor execstmt exec executor adapter go github com pingcap tidb session runstmt session session go github com pingcap tidb session session executestmt session session go github com pingcap tidb testkit testkit execwithcontext testkit testkit go github com pingcap tidb testkit testkit mustexecwithcontext testkit testkit go github com pingcap tidb testkit testkit mustexec testkit testkit go github com pingcap tidb ddl test testmultivaluedindexonlineddl ddl mv index test go github com pingcap tidb domain domain rebuildsysvarcache domain sysvar cache go github com pingcap tidb domain domain loadsysvarcacheloop domain domain go github com pingcap tidb session bootstrapsession session session go github com pingcap tidb domain domain getsessioncache domain sysvar cache go github com pingcap tidb session session loadcommonglobalvariablesifneeded session session go github com pingcap tidb session session executestmt session session go github com pingcap tidb session session executeinternal session session go github com pingcap tidb domain domain loadprivilegeloop domain domain go github com pingcap tidb session bootstrapsession session session go github com pingcap tidb testkit bootstrap testkit mockstore go github com pingcap tidb testkit createmockstoreanddomain testkit mockstore go
github com pingcap tidb ddl test testmultivaluedindexonlineddl ddl mv index test go testing trunner goroot src testing testing go testing t run goroot src testing testing go goroutine running created at github com pingcap tidb util waitgroupwrapper run util wait group wrapper go github com pingcap tidb executor checktableexec next executor executor go github com pingcap tidb executor next executor executor go github com pingcap tidb executor execstmt next executor adapter go github com pingcap tidb executor execstmt handlenodelayexecutor executor adapter go github com pingcap tidb executor execstmt handlenodelay executor adapter go github com pingcap tidb executor execstmt exec executor adapter go github com pingcap tidb session runstmt session session go github com pingcap tidb session session executestmt session session go github com pingcap tidb testkit testkit execwithcontext testkit testkit go github com pingcap tidb testkit testkit mustexecwithcontext testkit testkit go github com pingcap tidb testkit testkit mustexec testkit testkit go github com pingcap tidb ddl test testmultivaluedindexonlineddl ddl mv index test go github com pingcap tidb domain domain rebuildsysvarcache domain sysvar cache go github com pingcap tidb domain domain loadsysvarcacheloop domain domain go github com pingcap tidb session bootstrapsession session session go github com pingcap tidb domain domain getsessioncache domain sysvar cache go github com pingcap tidb session session loadcommonglobalvariablesifneeded session session go github com pingcap tidb session session executestmt session session go github com pingcap tidb session session executeinternal session session go github com pingcap tidb domain domain loadprivilegeloop domain domain go github com pingcap tidb session bootstrapsession session session go github com pingcap tidb testkit bootstrap testkit mockstore go github com pingcap tidb testkit createmockstoreanddomain testkit mockstore go github com pingcap tidb ddl test
testmultivaluedindexonlineddl ddl mv index test go testing trunner goroot src testing testing go testing t run goroot src testing testing go what is your tidb version required
| 1
|
321,628
| 27,544,370,849
|
IssuesEvent
|
2023-03-07 10:42:23
|
vegaprotocol/frontend-monorepo
|
https://api.github.com/repos/vegaprotocol/frontend-monorepo
|
closed
|
Add e2e tests for tranche service
|
Testing 🧪 Governance
|
## The Chore
The simplest way would probably be to mock the service response and validate the UI
|
1.0
|
Add e2e tests for tranche service - ## The Chore
The simplest way would probably be to mock the service response and validate the UI
|
test
|
add tests for tranche service the chore the simplest way would probably be to mock the service response and validate the ui
| 1
|
44,895
| 5,659,257,455
|
IssuesEvent
|
2017-04-10 12:30:37
|
OAButton/discussion
|
https://api.github.com/repos/OAButton/discussion
|
closed
|
Better response when articles are Open Access
|
Blocked: Copy Blocked: Development Blocked: Test enhancement Website
|
Just got this through our bug form from the author of said article:
> This http://journals.sagepub.com/doi/full/10.1177/0165551516648108 is supposed to be leading to an available (full-text) article, but when I enter the URL to your system, I get "The research you need isn't available, but you can create a request!"
I took a look, I'm guessing this is because the article is already Open Access via a journal - but not via a repository we harvest from.
Wondering if we can improve the response here at all. Assigning myself to come back & think of options
|
1.0
|
Better response when articles are Open Access - Just got this through our bug form from the author of said article:
> This http://journals.sagepub.com/doi/full/10.1177/0165551516648108 is supposed to be leading to an available (full-text) article, but when I enter the URL to your system, I get "The research you need isn't available, but you can create a request!"
I took a look, I'm guessing this is because the article is already Open Access via a journal - but not via a repository we harvest from.
Wondering if we can improve the response here at all. Assigning myself to come back & think of options
|
test
|
better response when articles are open access just got this through our bug form from the author of said article this is supposed to be leading to an available full text article but when i enter the url to your system i get the research you need isn t available but you can create a request i took a look i m guessing this is because the article is already open access via a journal but not via a repository we harvest from wondering if we can improve the response here at all assigning myself to come back think of options
| 1
|
163,402
| 12,728,119,857
|
IssuesEvent
|
2020-06-25 01:30:05
|
CodaProtocol/coda
|
https://api.github.com/repos/CodaProtocol/coda
|
closed
|
Node startup failure report
|
testnet-bug
|
Linux version: ubuntu 18.04
Last info:
```
2020-06-09 23:23:57 UTC [Info] Coda daemon is now doing ledger catchup
2020-06-09 23:25:59 UTC [Fatal] after lock transition, the best tip consensus state is out of sync with the local state -- bug in either required_local_state_sync or frontier_root_transition.
2020-06-09 23:25:59 UTC [Fatal] Unhandled top-level exception: $exn
Generating crash report
exn: "(monitor.ml.Error\n (Failure \"local state desynced after applying diffs to full frontier\")\n (\"Raised at file \\\"stdlib.ml\\\", line 33, characters 17-33\"\n \"Called from file \\\"src/lib/debug_assert/debug_assert.ml\\\" (inlined), line 1, characters 25-33\"\n \"Called from file \\\"src/lib/transition_frontier/full_frontier/full_frontier.ml\\\", line 606, characters 4-1023\"\n \"Called from file \\\"src/lib/transition_frontier/transition_frontier.ml\\\", line 292, characters 4-93\"\n \"Called from file \\\"src/lib/transition_handler/processor.ml\\\", line 57, characters 10-68\"\n \"Called from file \\\"src/lib/rose_tree/rose_tree.ml\\\", line 79, characters 18-24\"\n \"Called from file \\\"src/deferred_or_error.ml\\\", line 101, characters 23-33\"\n \"Called from file \\\"src/deferred0.ml\\\", line 56, characters 64-69\"\n \"Called from file \\\"src/job_queue.ml\\\" (inlined), line 131, characters 2-5\"\n \"Called from file \\\"src/job_queue.ml\\\", line 171, characters 6-47\"\n \"Caught by monitor coda\"))"
2020-06-09 23:25:59 UTC [Error] Unconsumed item in cache: $cache
cache: "cached item was not consumed (cache name = \"Transition_handler__Unprocessed_transition_cache\")"
```
[coda_crash_report_2020-06-09_23-26-00.443388.tar.gz](https://github.com/CodaProtocol/coda/files/4755376/coda_crash_report_2020-06-09_23-26-00.443388.tar.gz)
|
1.0
|
Node startup failure report - Linux version: ubuntu 18.04
Last info:
```
2020-06-09 23:23:57 UTC [Info] Coda daemon is now doing ledger catchup
2020-06-09 23:25:59 UTC [Fatal] after lock transition, the best tip consensus state is out of sync with the local state -- bug in either required_local_state_sync or frontier_root_transition.
2020-06-09 23:25:59 UTC [Fatal] Unhandled top-level exception: $exn
Generating crash report
exn: "(monitor.ml.Error\n (Failure \"local state desynced after applying diffs to full frontier\")\n (\"Raised at file \\\"stdlib.ml\\\", line 33, characters 17-33\"\n \"Called from file \\\"src/lib/debug_assert/debug_assert.ml\\\" (inlined), line 1, characters 25-33\"\n \"Called from file \\\"src/lib/transition_frontier/full_frontier/full_frontier.ml\\\", line 606, characters 4-1023\"\n \"Called from file \\\"src/lib/transition_frontier/transition_frontier.ml\\\", line 292, characters 4-93\"\n \"Called from file \\\"src/lib/transition_handler/processor.ml\\\", line 57, characters 10-68\"\n \"Called from file \\\"src/lib/rose_tree/rose_tree.ml\\\", line 79, characters 18-24\"\n \"Called from file \\\"src/deferred_or_error.ml\\\", line 101, characters 23-33\"\n \"Called from file \\\"src/deferred0.ml\\\", line 56, characters 64-69\"\n \"Called from file \\\"src/job_queue.ml\\\" (inlined), line 131, characters 2-5\"\n \"Called from file \\\"src/job_queue.ml\\\", line 171, characters 6-47\"\n \"Caught by monitor coda\"))"
2020-06-09 23:25:59 UTC [Error] Unconsumed item in cache: $cache
cache: "cached item was not consumed (cache name = \"Transition_handler__Unprocessed_transition_cache\")"
```
[coda_crash_report_2020-06-09_23-26-00.443388.tar.gz](https://github.com/CodaProtocol/coda/files/4755376/coda_crash_report_2020-06-09_23-26-00.443388.tar.gz)
|
test
|
node startup failure report linux version ubuntu last info utc coda daemon is now doing ledger catchup utc after lock transition the best tip consensus state is out of sync with the local state bug in either required local state sync or frontier root transition utc unhandled top level exception exn generating crash report exn monitor ml error n failure local state desynced after applying diffs to full frontier n raised at file stdlib ml line characters n called from file src lib debug assert debug assert ml inlined line characters n called from file src lib transition frontier full frontier full frontier ml line characters n called from file src lib transition frontier transition frontier ml line characters n called from file src lib transition handler processor ml line characters n called from file src lib rose tree rose tree ml line characters n called from file src deferred or error ml line characters n called from file src ml line characters n called from file src job queue ml inlined line characters n called from file src job queue ml line characters n caught by monitor coda utc unconsumed item in cache cache cache cached item was not consumed cache name transition handler unprocessed transition cache
| 1
|
213,474
| 16,521,948,961
|
IssuesEvent
|
2021-05-26 15:24:14
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Manual test run on Linux for 1.25.x Release
|
OS/Desktop OS/Linux QA/Yes release-notes/exclude tests
|
### Installer
- [x] Check the installer is close to the size of the last release
- [x] Check signature: On macOS, run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. On Windows, right-click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double-click on the signature. Make sure it says "The digital signature is OK" in the popup window
### About pages
- [x] Verify that both `chrome://` and `about://` forward to `brave://` (run through several internal pages)
### Importing
- [x] Verify that you can import `History`, `Favorites/Bookmarks` and `Passwords` from Google Chrome
- [x] Verify that you can import `History`, `Favorites/Bookmarks`, `Passwords`, `Search Engines` and `Autofill` from Firefox
- [x] Verify that you can import `Favorites/Bookmarks` from Microsoft Edge
- [x] Verify that importing bookmarks using `Bookmark HTML File` retains the folder structure on a clean profile
### Context menus
- [x] Verify you can block a page element using `Block element via selector` context-menu item
- [x] Verify selecting `Manage custom filters` opens `brave://adblock` in a NTP
- [x] Verify removing the rule from `brave://adblock` reflects the change on the website, after reload
### Extensions/Plugins
- [x] Verify pdfium, Torrent viewer extensions are installed automatically on fresh profile and cannot be disabled (they don't show up in `brave://extensions`)
- [x] Verify older version of an extension gets updated to new version via Google server
- [x] Verify older version of an extension gets updated to new version via Brave server
- [x] Verify that `magnet` links and `.torrent` files correctly open WebTorrent and you're able to download the file(s)
- **Tip:** Free torrents available via https://webtorrent.io/free-torrents
### Chrome Web Store (CWS)
- [x] Verify that installing https://chrome.google.com/webstore/detail/adblock-plus-free-ad-bloc/cfhdojbkjhnklbpkdaibdccddilifddb from CWS displays the `Brave has not reviewed the extension.` warning via the "Add Extension" modal
- [x] Verify that installing https://chrome.google.com/webstore/detail/lastpass-free-password-ma/hdokiejnpimakedhajhdlcegeplioahd from CWS doesn't display the `Brave has not reviewed the extension.` warning via the "Add Extension" modal
### PDF
- [x] Test that you can print a PDF
- [x] Test that PDF is loaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
- [x] Test that https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6357&rep=rep1&type=pdf opens without issues
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
- [x] Verify `Widevine Notification` is shown when you visit HBO Max for the first time
- [x] Test that you can stream on HBO Max on a fresh profile after installing Widevine
### Geolocation
- [x] Check that https://browserleaks.com/geo works and shows correct location
- [x] Check that https://html5demos.com/geo/ works but doesn't require an accurate location
### Crash Reporting
- [x] Check that loading `brave://crash` & `brave://gpucrash` causes the new tab to crash
- [x] Check that `brave://crashes` lists the `Uploaded Crash Report ID` once the report has been submitted
- [x] Verify the crash ID matches the report on Backtrace using `_rxid equal [ value ]`
### Bravery settings
- [x] Verify that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [x] Turning HTTPS Everywhere off and Shields off both disable the redirect to https://https-everywhere.badssl.com/
- [x] Verify that toggling `Ads and trackers blocked` works as expected
- [x] Visit https://testsafebrowsing.appspot.com/s/phishing.html, verify that Safe Browsing (via our Proxy) works for all the listed items
- [x] Visit https://www.blizzard.com and then turn on script blocking, page should not load.
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked
- [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run)
- [x] In `brave://settings/security`, choose a DNS provider from the providers listed under Use secure DNS, load `https://browserleaks.com/dns`, and verify your ISP's DNS resolvers aren't detected and shown; only your chosen DoH provider should appear.
- [x] Open a New Private Window with Tor, load `https://browserleaks.com/dns`, and verify your ISP's DNS resolvers aren't detected and shown.
### Fingerprint Tests
- [x] Visit https://jsfiddle.net/bkf50r8v/13/, ensure 3 blocked items are listed in Shields. Result window should show `got canvas fingerprint 0` and `got webgl fingerprint 00`
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ only when `Block all fingerprinting protection` is on
- [x] Test that Brave browser isn't detected via user-agent, on https://www.whatismybrowser.com. The site **will** say "Looks like Brave on [Windows/Linux/macOS]", and yet, "But it's announcing that it's Chrome [version] on [platform]"
- [x] Test that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
### Brave Ads
- [x] Verify when you enable Rewards from panel or `brave://rewards`, Ads are enabled by default
- [x] Verify Ads UI (panel, settings, etc) shows when in a region with Ads support
- [x] Verify Ads UI (panel, settings, etc) does not show when in a region without Ads support. Verify the Ads panel does show the 'Sorry! Ads are not yet available in your region.' message.
- [x] Verify when the system language is English, the Browser language is French, and you are in one of the supported regions, Ad notifications are still served to you.
- [x] Verify you are served Ad notifications when Ads are enabled
- [x] Verify ad earnings are reflected in the rewards widget on the NTP.
- [x] Verify when Ads are toggled off, there are no Ad messages in the logs
- [x] Verify when Rewards are toggled off (but Ads were not explicitly toggled off), there are no Ads logs recorded
- [x] Verify view/click/dismiss/landed ad notifications show in `confirmations.json`
- [x] Verify pages you browse to are being classified in the logs
- [x] Verify tokens are redeemed by viewing the logs (you can use `--rewards=debug=true` to shorten redemption time)
- [x] Verify Ad is not shown if a tab is playing media and is only shown after it stops playing
### Rewards
- [x] Verify that none of the reward endpoints are being contacted when a user visits a media publisher (`youtube.com`, `reddit.com`, `twitter.com`, `github.com`) and hasn't interacted with rewards
- [x] Verify that `rewards.brave.com`, `pcdn.brave.com`, `grant.rewards.brave.com` or `api.rewards.brave.com` are not being contacted
- [x] Verify you are able to create a new wallet.
- [x] Verify you are able to restore a wallet.
- [x] Verify account balance shows correct BAT and USD value.
- [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel.
- [x] Verify AC monthly budget shows correct BAT and USD value.
- [x] Verify you are able to exclude a publisher from the auto-contribute table and popup list of sites.
- [x] Verify you are able to exclude a publisher by using the toggle on the Rewards Panel.
- [x] Verify you are able to perform an auto contribution.
- [x] Verify auto contribution is reflected in the rewards widget on the new-tab page (NTP).
- [x] Verify monthly statement shows expected data.
- [x] Verify when you click on the BR panel while on a site, the panel displays site-specific information (site favicon, domain, attention %).
- [x] Verify BR panel shows message about an unverified publisher.
- [x] Verify one time and monthly tip banners show a message about unverified publisher.
- [x] Verify one time tip and monthly tip banners show a verified checkmark for a verified creator.
- [x] Verify when you click on `Send a tip`, the custom tip banner displays if set up.
- [x] Verify custom tip banner is also displayed for monthly contribution.
- [x] Verify you are able to make one-time tip and they display in Tips panel.
- [x] Verify tip is reflected in the rewards widget on the NTP.
- [x] Verify when you tip an unverified publisher, the one time tip is recorded in the Pending Contributions list.
- [x] Verify you are able to make recurring tip and they display in Monthly Contributions panel.
- [x] Verify you are able to adjust your recurring tip amount from the BR panel.
- [x] Verify recurring tip is reflected in the rewards widget on the NTP.
- [x] Verify you can tip a verified website.
- [x] Verify the website displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified YouTube creator.
- [x] Verify the YouTube creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Vimeo creator.
- [x] Verify the Vimeo creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Twitch creator.
- [x] Verify the Twitch creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Twitter user from the panel.
- [x] Verify you can tip a verified Twitter user via inline tip button.
- [x] Verify the in-line tip button is spaced properly.
- [x] Verify you can tip a verified GitHub user from the panel.
- [x] Verify you can tip a verified GitHub user via inline tip button.
- [x] Verify the GitHub creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Reddit user from the panel.
- [x] Verify you can tip a verified Reddit user via inline tip button.
- [x] Verify if you disable auto-contribute you are still able to tip creators.
- [x] Verify if auto-contribute is disabled AC does not occur.
- [x] Verify if Rewards is disabled AC does not occur.
- [x] Verify that disabling Rewards and enabling it again does not lose state.
- [x] Adjust min visit/time in settings. Visit some sites to verify they are added to the table after the specified settings.
- [x] Uphold cases
- [x] Verify you are able to connect a KYC'd Uphold wallet to Rewards.
- [x] Verify wallet balance in Brave updates when BAT is added to the Brave Browser card.
- [x] Verify if you only have user-controlled BAT (BAT in Uphold only), you can only tip KYC'd creators, any tips to non-KYC'd creators go to the Pending Contributions list.
- [x] Verify connected (verified but not KYC'd) publishers display messaging on panel and tip banner.
- [x] Verify you are able to perform an auto contribute using Uphold BAT.
### Social-media blocking settings
- [x] Verify individual `Social media blocking` buttons works as intended when enabled/disabled by visiting https://fmarier.github.io/brave-testing/social-widgets.html
- [x] visit `brave://settings/privacy` -> `Site and Shields Settings` -> `Cookies and site data` and ensure that
- [x] both `https://[*.]firebaseapp.com` & `https://accounts.google.com` are added into `Sites that can always use cookies` when `Allow Google login buttons on third party sites` is enabled
- [x] both `https://[*.]firebaseapp.com` & `https://accounts.google.com` are removed from `Sites that can always use cookies` when `Allow Google login buttons on third party sites` is disabled
- [x] ensure that you can log in into https://www.expensify.com while `Allow Google login buttons on third party sites` is enabled
- [x] ensure that once `Allow Google login buttons on third party sites` has been disabled, you can't log in into https://www.expensify.com
### Sync
- [x] Verify you are able to create a sync chain and add a mobile/computer to the chain
- [x] Verify you are able to join an existing sync chain using code words
- [x] Verify the device name is shown properly when sync chain is created
- [x] Verify you are able to add a new mobile device to the chain via QR code/code words
- [x] Verify newly created bookmarks get sync'd to all devices on the sync chain
- [x] Verify existing bookmarks on current profile gets sync'd to all devices on the sync chain
- [x] Verify folder structure is retained after sync completes
- [x] Verify bookmarks don't duplicate when sync'd from other devices
- [x] Verify removing bookmark from device gets sync'd to all devices on the sync chain
- [x] Verify adding/removing a bookmark in offline mode gets sync'd to all devices on the sync chain when device comes online
- [x] With only two devices in chain, verify removing the other device resets the sync on b-c as well
### Tor Tabs
- [x] Visit https://check.torproject.org in a Tor window, ensure it shows a success message for using a Tor exit node
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Do a hard refresh (Ctrl+Shift+R/Cmd+Shift+R), ensure exit IP changes after page reloads
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Click `New Tor connection for this site` in app menu, ensure the exit node IP address changes after page is reloaded
- [x] Visit https://protonirockerxow.onion & https://brave5t5rjjg3s6k.onion/ in a Tor window and ensure both pages resolve
- [x] Visit https://browserleaks.com/geo in a Tor window, ensure location isn't shown
- [x] Verify Torrent viewer doesn't load in a Tor window
- [ ] Ensure you are able to download a file in a Tor window. Verify all Download/Cancel, Download/Retry and Download works in Tor window
### Cookie and Cache
- [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the evercookie site does not remember the old evercookie value
### Chromium/Brave GPU
- [x] Verify that `brave://gpu` (Brave) matches `chrome://gpu` (Chrome) when using the same Chromium version
### Startup & Components
- [x] Verify that Brave is only contacting `*.brave.com` endpoints on first launch using either `Charles Proxy`, `Fiddler`, `Wireshark` or `LittleSnitch` (or a similar application)
- [x] Remove the following component folders and ensure that they're being re-downloaded after restarting the browser:
- [x] `afalakplffnnnlkncjhbmahjfjhmlkal`: `AutoplayWhitelist.dat`, `ExtensionWhitelist.dat`, `ReferrerWhitelist.json` and `Greaselion.json`
- [x] `CertificateRevocation`
- [x] `cffkpbalmllkdoenhmdmpbkajipdjfam`: `rs-ABPFilterParserData.dat` & `regional_catalog.json` (AdBlock)
- [x] `gccbbckogglekeggclmmekihdgdpdgoe`: (Sponsored New Tab Images)
- [x] `jicbkmdloagakknpihibphagfckhjdih`: `speedreader-updater.dat`
- [x] `oofiananboodjbbmdelgdommihjbkfag`: HTTPSE
- [x] `Safe Browsing`
- [x] Restart the browser, load `brave://components`, wait for 8 mins and verify that no component shows any errors
**Note:** Always double check `brave://components` to make sure there's no errors/missing version numbers
### Session storage
- [x] Temporarily move away your browser profile and test that a new profile is created on browser relaunch
- macOS - `~/Library/Application\ Support/BraveSoftware/`
- Windows - `%userprofile%\appdata\Local\BraveSoftware\`
- Linux(Ubuntu) - `~/.config/BraveSoftware/`
- [x] Test that both windows and tabs are being restored, including the current active tab
- [x] Ensure that tabs are being lazy loaded when a previous session is being restored
### Upgrade
- [x] Make sure that data from the last version appears in the new version OK
- [x] Ensure that `brave://version` lists the expected Brave & Chromium versions
- [x] With data from the last version, verify that:
- [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] Cookies are preserved
- [x] Installed extensions are retained and work correctly
- [x] Opened tabs can be reloaded
- [x] Stored passwords are preserved
- [x] Sync chain created in previous version is retained
- [x] Social media-blocking buttons changes are retained
- [x] Rewards
- [x] Wallet balance is retained
- [x] Auto-contribute list is retained
- [x] Both Tips and Monthly Contributions are retained
- [x] Wallet panel transactions list is retained
- [x] Changes to rewards settings are retained
- [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled
- [x] Ads
- [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained
- [x] Changes to ads settings are retained
- [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled
- [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
|
1.0
|
Manual test run on Linux for 1.25.x Release - ### Installer
- [x] Check the installer is close to the size of the last release
- [x] Check signature: On macOS, run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. On Windows, right-click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double-click on the signature. Make sure it says "The digital signature is OK" in the popup window
### About pages
- [x] Verify that both `chrome://` and `about://` forward to `brave://` (run through several internal pages)
### Importing
- [x] Verify that you can import `History`, `Favorites/Bookmarks` and `Passwords` from Google Chrome
- [x] Verify that you can import `History`, `Favorites/Bookmarks`, `Passwords`, `Search Engines` and `Autofill` from Firefox
- [x] Verify that you can import `Favorites/Bookmarks` from Microsoft Edge
- [x] Verify that importing bookmarks using `Bookmark HTML File` retains the folder structure on a clean profile
### Context menus
- [x] Verify you can block a page element using `Block element via selector` context-menu item
- [x] Verify selecting `Manage custom filters` opens `brave://adblock` in a NTP
- [x] Verify removing the rule from `brave://adblock` reflects the change on the website, after reload
### Extensions/Plugins
- [x] Verify pdfium, Torrent viewer extensions are installed automatically on fresh profile and cannot be disabled (they don't show up in `brave://extensions`)
- [x] Verify older version of an extension gets updated to new version via Google server
- [x] Verify older version of an extension gets updated to new version via Brave server
- [x] Verify that `magnet` links and `.torrent` files correctly open WebTorrent and you're able to download the file(s)
- **Tip:** Free torrents available via https://webtorrent.io/free-torrents
### Chrome Web Store (CWS)
- [x] Verify that installing https://chrome.google.com/webstore/detail/adblock-plus-free-ad-bloc/cfhdojbkjhnklbpkdaibdccddilifddb from CWS displays the `Brave has not reviewed the extension.` warning via the "Add Extension" modal
- [x] Verify that installing https://chrome.google.com/webstore/detail/lastpass-free-password-ma/hdokiejnpimakedhajhdlcegeplioahd from CWS doesn't display the `Brave has not reviewed the extension.` warning via the "Add Extension" modal
### PDF
- [x] Test that you can print a PDF
- [x] Test that PDF is loaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [x] Test that PDF is loaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
- [x] Test that https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.6357&rep=rep1&type=pdf opens without issues
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
- [x] Verify `Widevine Notification` is shown when you visit HBO Max for the first time
- [x] Test that you can stream on HBO Max on a fresh profile after installing Widevine
### Geolocation
- [x] Check that https://browserleaks.com/geo works and shows correct location
- [x] Check that https://html5demos.com/geo/ works but doesn't require an accurate location
### Crash Reporting
- [x] Check that loading `brave://crash` & `brave://gpucrash` causes the new tab to crash
- [x] Check that `brave://crashes` lists the `Uploaded Crash Report ID` once the report has been submitted
- [x] Verify the crash ID matches the report on Backtrace using `_rxid equal [ value ]`
### Bravery settings
- [x] Verify that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [x] Verify that turning HTTPS Everywhere off and turning Shields off each disable the redirect to https://https-everywhere.badssl.com/
- [x] Verify that toggling `Ads and trackers blocked` works as expected
- [x] Visit https://testsafebrowsing.appspot.com/s/phishing.html, verify that Safe Browsing (via our Proxy) works for all the listed items
- [x] Visit https://www.blizzard.com and then turn on script blocking; the page should not load.
- [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked
- [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run)
- [x] In `brave://settings/security`, choose a DNS provider from the providers listed under Use secure DNS, load `https://browserleaks.com/dns`, and verify your ISP's DNS resolvers aren't detected and shown; only your chosen DoH provider should appear.
- [x] Open a New Private Window with Tor, load `https://browserleaks.com/dns`, and verify your ISP's DNS resolvers aren't detected and shown.
### Fingerprint Tests
- [x] Visit https://jsfiddle.net/bkf50r8v/13/, ensure 3 blocked items are listed in Shields. Result window should show `got canvas fingerprint 0` and `got webgl fingerprint 00`
- [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ only when `Block all fingerprinting protection` is on
- [x] Test that Brave browser isn't detected via user-agent, on https://www.whatismybrowser.com. The site **will** say "Looks like Brave on [Windows/Linux/macOS]", and yet, "But it's announcing that it's Chrome [version] on [platform]"
- [x] Test that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
### Brave Ads
- [x] Verify when you enable Rewards from panel or `brave://rewards`, Ads are enabled by default
- [x] Verify Ads UI (panel, settings, etc) shows when in a region with Ads support
- [x] Verify Ads UI (panel, settings, etc) does not show when in a region without Ads support. Verify the Ads panel does show the 'Sorry! Ads are not yet available in your region.' message.
- [x] Verify when the system language is English, the Browser language is French, and you are in one of the supported regions, Ad notifications are still served to you.
- [x] Verify you are served Ad notifications when Ads are enabled
- [x] Verify ad earnings are reflected in the rewards widget on the NTP.
- [x] Verify when Ads are toggled off, there are no Ad messages in the logs
- [x] Verify when Rewards are toggled off (but Ads were not explicitly toggled off), there are no Ads logs recorded
- [x] Verify view/click/dismiss/landed ad notifications show in `confirmations.json`
- [x] Verify pages you browse to are being classified in the logs
- [x] Verify tokens are redeemed by viewing the logs (you can use `--rewards=debug=true` to shorten redemption time)
- [x] Verify Ad is not shown if a tab is playing media and is only shown after it stops playing
### Rewards
- [x] Verify that none of the reward endpoints are being contacted when a user visits a media publisher (`youtube.com`, `reddit.com`, `twitter.com`, `github.com`) and hasn't interacted with rewards
- [x] Verify that `rewards.brave.com`, `pcdn.brave.com`, `grant.rewards.brave.com` or `api.rewards.brave.com` are not being contacted
- [x] Verify you are able to create a new wallet.
- [x] Verify you are able to restore a wallet.
- [x] Verify account balance shows correct BAT and USD value.
- [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel.
- [x] Verify AC monthly budget shows correct BAT and USD value.
- [x] Verify you are able to exclude a publisher from the auto-contribute table and popup list of sites.
- [x] Verify you are able to exclude a publisher by using the toggle on the Rewards Panel.
- [x] Verify you are able to perform an auto contribution.
- [x] Verify auto contribution is reflected in the rewards widget on the new-tab page (NTP).
- [x] Verify monthly statement shows expected data.
- [x] Verify when you click on the BR panel while on a site, the panel displays site-specific information (site favicon, domain, attention %).
- [x] Verify BR panel shows message about an unverified publisher.
- [x] Verify one time and monthly tip banners show a message about unverified publisher.
- [x] Verify one time tip and monthly tip banners show a verified checkmark for a verified creator.
- [x] Verify when you click on `Send a tip`, the custom tip banner displays if set up.
- [x] Verify custom tip banner is also displayed for monthly contribution.
- [x] Verify you are able to make a one-time tip and that it displays in the Tips panel.
- [x] Verify tip is reflected in the rewards widget on the NTP.
- [x] Verify when you tip an unverified publisher, the one time tip is recorded in the Pending Contributions list.
- [x] Verify you are able to make a recurring tip and that it displays in the Monthly Contributions panel.
- [x] Verify you are able to adjust your recurring tip amount from the BR panel.
- [x] Verify recurring tip is reflected in the rewards widget on the NTP.
- [x] Verify you can tip a verified website.
- [x] Verify the website displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified YouTube creator.
- [x] Verify the YouTube creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Vimeo creator.
- [x] Verify the Vimeo creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Twitch creator.
- [x] Verify the Twitch creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Twitter user from the panel.
- [x] Verify you can tip a verified Twitter user via inline tip button.
- [x] Verify the in-line tip button is spaced properly.
- [x] Verify you can tip a verified GitHub user from the panel.
- [x] Verify you can tip a verified GitHub user via inline tip button.
- [x] Verify the GitHub creator displays in the auto-contribute list after specified amount of time/visits per settings.
- [x] Verify you can tip a verified Reddit user from the panel.
- [x] Verify you can tip a verified Reddit user via inline tip button.
- [x] Verify if you disable auto-contribute you are still able to tip creators.
- [x] Verify if auto-contribute is disabled AC does not occur.
- [x] Verify if Rewards is disabled AC does not occur.
- [x] Verify that disabling Rewards and enabling it again does not lose state.
- [x] Adjust min visit/time in settings. Visit some sites to verify they are added to the table after the specified settings.
- [x] Uphold cases
- [x] Verify you are able to connect a KYC'd Uphold wallet to Rewards.
- [x] Verify wallet balance in Brave updates when BAT is added to the Brave Browser card.
- [x] Verify if you only have user-controlled BAT (BAT in Uphold only), you can only tip KYC'd creators, any tips to non-KYC'd creators go to the Pending Contributions list.
- [x] Verify connected (verified but not KYC'd) publishers display messaging on panel and tip banner.
- [x] Verify you are able to perform an auto contribute using Uphold BAT.
### Social-media blocking settings
- [x] Verify individual `Social media blocking` buttons work as intended when enabled/disabled by visiting https://fmarier.github.io/brave-testing/social-widgets.html
- [x] visit `brave://settings/privacy` -> `Site and Shields Settings` -> `Cookies and site data` and ensure that
- [x] both `https://[*.]firebaseapp.com` & `https://accounts.google.com` are added to `Sites that can always use cookies` when `Allow Google login buttons on third party sites` is enabled
- [x] both `https://[*.]firebaseapp.com` & `https://accounts.google.com` are removed from `Sites that can always use cookies` when `Allow Google login buttons on third party sites` is disabled
- [x] ensure that you can log in to https://www.expensify.com while `Allow Google login buttons on third party sites` is enabled
- [x] ensure that once `Allow Google login buttons on third party sites` has been disabled, you can't log in to https://www.expensify.com
### Sync
- [x] Verify you are able to create a sync chain and add a mobile/computer to the chain
- [x] Verify you are able to join an existing sync chain using code words
- [x] Verify the device name is shown properly when sync chain is created
- [x] Verify you are able to add a new mobile device to the chain via QR code/code words
- [x] Verify newly created bookmarks get sync'd to all devices on the sync chain
- [x] Verify existing bookmarks on current profile gets sync'd to all devices on the sync chain
- [x] Verify folder structure is retained after sync completes
- [x] Verify bookmarks don't duplicate when sync'd from other devices
- [x] Verify removing bookmark from device gets sync'd to all devices on the sync chain
- [x] Verify adding/removing a bookmark in offline mode gets sync'd to all devices on the sync chain when device comes online
- [x] With only two devices in chain, verify removing the other device resets the sync on b-c as well
### Tor Tabs
- [x] Visit https://check.torproject.org in a Tor window, ensure it shows a success message for using a Tor exit node
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Do a hard refresh (Ctrl+Shift+R/Cmd+Shift+R), ensure exit IP changes after page reloads
- [x] Visit https://check.torproject.org in a Tor window, note down exit node IP address. Click `New Tor connection for this site` in app menu, ensure the exit node IP address changes after page is reloaded
- [x] Visit https://protonirockerxow.onion & https://brave5t5rjjg3s6k.onion/ in a Tor window and ensure both pages resolve
- [x] Visit https://browserleaks.com/geo in a Tor window, ensure location isn't shown
- [x] Verify Torrent viewer doesn't load in a Tor window
- [ ] Ensure you are able to download a file in a Tor window. Verify that Download/Cancel, Download/Retry and plain Download all work in a Tor window
### Cookie and Cache
- [x] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the evercookie site does not remember the old evercookie value
### Chromium/Brave GPU
- [x] Verify that `brave://gpu` (Brave) matches `chrome://gpu` (Chrome) when using the same Chromium version
### Startup & Components
- [x] Verify that Brave is only contacting `*.brave.com` endpoints on first launch using either `Charles Proxy`, `Fiddler`, `Wireshark` or `LittleSnitch` (or a similar application)
- [x] Remove the following component folders and ensure that they're being re-downloaded after restarting the browser:
- [x] `afalakplffnnnlkncjhbmahjfjhmlkal`: `AutoplayWhitelist.dat`, `ExtensionWhitelist.dat`, `ReferrerWhitelist.json` and `Greaselion.json`
- [x] `CertificateRevocation`
- [x] `cffkpbalmllkdoenhmdmpbkajipdjfam`: `rs-ABPFilterParserData.dat` & `regional_catalog.json` (AdBlock)
- [x] `gccbbckogglekeggclmmekihdgdpdgoe`: (Sponsored New Tab Images)
- [x] `jicbkmdloagakknpihibphagfckhjdih`: `speedreader-updater.dat`
- [x] `oofiananboodjbbmdelgdommihjbkfag`: HTTPSE
- [x] `Safe Browsing`
- [x] Restart the browser, load `brave://components`, wait for 8 mins and verify that no component shows any errors
**Note:** Always double check `brave://components` to make sure there's no errors/missing version numbers
### Session storage
- [x] Temporarily move away your browser profile and test that a new profile is created on browser relaunch
- macOS - `~/Library/Application\ Support/BraveSoftware/`
- Windows - `%userprofile%\appdata\Local\BraveSoftware\`
- Linux(Ubuntu) - `~/.config/BraveSoftware/`
- [x] Test that both windows and tabs are being restored, including the current active tab
- [x] Ensure that tabs are being lazy loaded when a previous session is being restored
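The "temporarily move away your browser profile" step above can be scripted; a minimal sketch (the helper name and `.qa-backup` suffix are arbitrary choices, and the demo runs on a throwaway directory rather than a real profile — substitute the per-OS path from the list above):

```python
# Hypothetical helper for the profile-relocation step: renames the profile
# directory to a ".qa-backup" sibling so the browser creates a fresh profile
# on relaunch; renaming it back restores the old session.
import pathlib
import tempfile

def stash_profile(profile_dir: pathlib.Path) -> pathlib.Path:
    backup = profile_dir.with_name(profile_dir.name + ".qa-backup")
    profile_dir.rename(backup)
    return backup

# Demo on a throwaway directory (stand-in for e.g.
# ~/Library/Application Support/BraveSoftware on macOS):
demo = pathlib.Path(tempfile.mkdtemp()) / "BraveSoftware"
demo.mkdir()
backup = stash_profile(demo)
print(backup.name)  # BraveSoftware.qa-backup
```

Restoring is the reverse rename (`backup.rename(demo)`) once the fresh-profile check is done.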
### Upgrade
- [x] Make sure that data from the last version appears in the new version OK
- [x] Ensure that `brave://version` lists the expected Brave & Chromium versions
- [x] With data from the last version, verify that:
- [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] Cookies are preserved
- [x] Installed extensions are retained and work correctly
- [x] Opened tabs can be reloaded
- [x] Stored passwords are preserved
- [x] Sync chain created in previous version is retained
- [x] Social media-blocking button changes are retained
- [x] Rewards
- [x] Wallet balance is retained
- [x] Auto-contribute list is retained
- [x] Both Tips and Monthly Contributions are retained
- [x] Wallet panel transactions list is retained
- [x] Changes to rewards settings are retained
- [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled
- [x] Ads
- [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained
- [x] Changes to ads settings are retained
- [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled
- [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
|
test
|
| 1
|
67,469
| 7,048,351,937
|
IssuesEvent
|
2018-01-02 17:18:11
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
10 GKE 1.4-1.6 upgrade jobs fail a variety of Job, SchedulePredicate tests
|
lifecycle/stale priority/failing-test sig/apps team/gke
|
Testgrid: https://k8s-testgrid.appspot.com/release-1.6-upgrade-skew#gke-container_vm-1.4-container_vm-1.6-upgrade-master&sort-by-failures=
Failure cluster: https://storage.googleapis.com/k8s-gubernator/triage/index.html#5e1dbad0da736548d58a
/assign @foxish
|
1.0
|
|
test
|
| 1
|
21,481
| 3,899,901,115
|
IssuesEvent
|
2016-04-18 00:50:43
|
cyoung/stratux
|
https://api.github.com/repos/cyoung/stratux
|
closed
|
v0.8r2 suddenly failing to connect
|
testing
|
1. Stratux version: v 0.8r2 + AvSquirrel special sh 3dd1
2. Stratux config:
SDR
[ ] single
[X ] dual
GPS
[X ] yes
[ ] no
type: BU-353
AHRS
[ ] yes
[X] no
power source: Anker E5
usb cable: Anker cable
3. EFB app and version: (iFly v5.9.26b)
EFB platform: (Android KK)
EFB hardware: (two Android tablets)
4. Description of your issue:
Been flying with v0.8r2 for a couple weeks now. About ten trouble free hours. And then, suddenly, Saturday, on a return flight home (where it had been working trouble free on the way out), it suddenly began failing. At first it wouldn't send GPS info to iFly, although traffic info was coming thru.
Yesterday (Tuesday) it would connect and then disconnect after about five minutes. I would have to do power down resets in flight. Eventually, it wouldn't connect at all. Strong signal on Ch 1. But no heartbeat and not able to connect to the web UI. Couldn't see messages on Avare's I/O Plug-in.
I've since reimaged the SD card and Stratux is working again. (On the ground. Haven't flown with it yet.)
Here's a link to a truncated stratux.log file, where I only included yesterday's troublesome flight.
[stratux.v0.8r2.avsquirrel.3dd1.log.txt](https://github.com/cyoung/stratux/files/207229/stratux.v0.8r2.avsquirrel.3dd1.log.txt)
|
1.0
|
|
test
|
| 1
|
337,119
| 10,210,795,168
|
IssuesEvent
|
2019-08-14 15:29:07
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
BigQuery: query_parameters fails if None is bound as parameter
|
api: bigquery priority: p2 type: bug
|
OS Type & Version: Ubuntu 19.04 x64
Python version: Python 3.7.3
Packages: latest up to this date:
```
'google-cloud-bigquery==1.18.0',
```
**Steps to reproduce**
1. Create a query, bind `None` (`NULL`) as parameter
2. Execute it
3. Call query_parameters
**Code example**
```py
from google.cloud import bigquery
client = bigquery.Client.from_service_account_json(
<...>
)
job = client.query(
"SELECT LOWER(@none_value)",
job_config=bigquery.QueryJobConfig(
query_parameters=[
bigquery.ScalarQueryParameter('none_value', 'STRING', None)
]
)
)
result = list(job.result())
query_parameters = job.query_parameters
```
**Stack trace**
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
query_parameters = job.query_parameters
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 2472, in query_parameters
return self._configuration.query_parameters
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 2200, in query_parameters
return _from_api_repr_query_parameters(prop)
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 1965, in _from_api_repr_query_parameters
return [_query_param_from_api_repr(mapping) for mapping in resource]
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 1965, in <listcomp>
return [_query_param_from_api_repr(mapping) for mapping in resource]
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/query.py", line 625, in _query_param_from_api_repr
return klass.from_api_repr(resource)
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/query.py", line 129, in from_api_repr
value = resource["parameterValue"]["value"]
KeyError: 'parameterValue'
```
This is related to https://github.com/googleapis/google-cloud-python/issues/7309
|
1.0
|
BigQuery: query_parameters fails if None is bound as parameter - OS Type & Version: Ubuntu 19.04 x64
Python version: Python 3.7.3
Packages: latest up to this date:
```
'google-cloud-bigquery==1.18.0',
```
**Steps to reproduce**
1. Create a query, bind `None` (`NULL`) as parameter
2. Execute it
3. Call query_parameters
**Code example**
```py
from google.cloud import bigquery
client = bigquery.Client.from_service_account_json(
<...>
)
job = client.query(
"SELECT LOWER(@none_value)",
job_config=bigquery.QueryJobConfig(
query_parameters=[
bigquery.ScalarQueryParameter('none_value', 'STRING', None)
]
)
)
result = list(job.result())
query_parameters = job.query_parameters
```
**Stack trace**
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
query_parameters = job.query_parameters
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 2472, in query_parameters
return self._configuration.query_parameters
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 2200, in query_parameters
return _from_api_repr_query_parameters(prop)
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 1965, in _from_api_repr_query_parameters
return [_query_param_from_api_repr(mapping) for mapping in resource]
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/job.py", line 1965, in <listcomp>
return [_query_param_from_api_repr(mapping) for mapping in resource]
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/query.py", line 625, in _query_param_from_api_repr
return klass.from_api_repr(resource)
File "/test/venv/lib/python3.7/site-packages/google/cloud/bigquery/query.py", line 129, in from_api_repr
value = resource["parameterValue"]["value"]
KeyError: 'parameterValue'
```
This is related to https://github.com/googleapis/google-cloud-python/issues/7309
|
non_test
|
bigquery query parameters fails if none is bound as parameter os type version ubuntu python version python packges latest up to this date google cloud bigquery steps to reproduce create a query bind none null as parameter execute it call query parameters code example py from google cloud import bigquery client bigquery client from service account json job client query select lower none value job config bigquery queryjobconfig query parameters bigquery scalarqueryparameter none value string none result list job result query parameters job query parameters stack trace traceback most recent call last file test py line in query parameters job query parameters file test venv lib site packages google cloud bigquery job py line in query parameters return self configuration query parameters file test venv lib site packages google cloud bigquery job py line in query parameters return from api repr query parameters prop file test venv lib site packages google cloud bigquery job py line in from api repr query parameters return file test venv lib site packages google cloud bigquery job py line in return file test venv lib site packages google cloud bigquery query py line in query param from api repr return klass from api repr resource file test venv lib site packages google cloud bigquery query py line in from api repr value resource keyerror parametervalue this is related to
| 0
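The `KeyError` in the traceback above comes from indexing a parameter resource dict that has no `"parameterValue"` key when the bound value is `NULL`. A minimal defensive sketch of tolerating the missing key — a hypothetical helper for illustration, not the library's actual patch:

```python
def scalar_param_value(resource):
    """Return the bound value of a scalar query parameter resource,
    or None when the API omitted "parameterValue" (a NULL binding)."""
    param_value = resource.get("parameterValue")
    if param_value is None:
        return None
    return param_value.get("value")

# A resource shaped like the NULL binding from the report (illustrative):
null_resource = {"name": "none_value", "parameterType": {"type": "STRING"}}
# A resource for a non-NULL string binding:
bound_resource = {
    "name": "s",
    "parameterType": {"type": "STRING"},
    "parameterValue": {"value": "abc"},
}
```

The real fix belongs in the library's `from_api_repr`, but the same `.get()`-with-default pattern applies there.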
|
420,923
| 12,246,000,146
|
IssuesEvent
|
2020-05-05 13:50:14
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Small issue while saving a context
|
Accepted Priority: Medium bug geOrchestra
|
## Description
<!-- Add here a few sentences describing the bug. -->
When a context is saved through the wizard, the saving operation takes some time and there is no spinner to notify that the save operation is in progress.
Furthermore, when the context is saved, the user is brought back to the context list page, where the green popup message does not appear completely but is cut off, and there is a strange blink of the page

## How to reproduce
<!-- A list of steps to reproduce the bug -->
- Open the context manager
- Open a context in edit mode
- Navigate the wizard steps until step 3 and save
*Expected Result*
<!-- Describe here the expected result -->
A loading spinner appears when you click on Save to notify that the saving operation is in progress.
As soon as the context is saved the user is brought back to the context list page without any blink effect on that page and with the popup's message entirely visible on top-right.
*Current Result*
<!-- Describe here the current behavior -->
The saving spinner is missing and the green popup message is not entirely visible on top-right and there is a strange blink of the page.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
|
1.0
|
Small issue while saving a context - ## Description
<!-- Add here a few sentences describing the bug. -->
When a context is saved through the wizard, the saving operation takes some time and there is no spinner to notify that the save operation is in progress.
Furthermore, when the context is saved, the user is brought back to the context list page, where the green popup message does not appear completely but is cut off, and there is a strange blink of the page

## How to reproduce
<!-- A list of steps to reproduce the bug -->
- Open the context manager
- Open a context in edit mode
- Navigate the wizard steps until step 3 and save
*Expected Result*
<!-- Describe here the expected result -->
A loading spinner appears when you click on Save to notify that the saving operation is in progress.
As soon as the context is saved the user is brought back to the context list page without any blink effect on that page and with the popup's message entirely visible on top-right.
*Current Result*
<!-- Describe here the current behavior -->
The saving spinner is missing and the green popup message is not entirely visible on top-right and there is a strange blink of the page.
- [x] Not browser related
<details><summary> <b>Browser info</b> </summary>
<!-- If browser related, please compile the following table -->
<!-- If your browser is not in the list please add a new row to the table with the version -->
(use this site: <a href="https://www.whatsmybrowser.org/">https://www.whatsmybrowser.org/</a> for non expert users)
| Browser Affected | Version |
|---|---|
|Internet Explorer| |
|Edge| |
|Chrome| |
|Firefox| |
|Safari| |
</details>
## Other useful information
<!-- error stack trace, screenshot, videos, or link to repository code are welcome -->
|
non_test
|
small issue while saving a context description when a context is saved through the wizard saving operation take some time and there is any spinner to notify that the save operation is in progress furthermore when the context is saved the user is brought back to the context list page where the green popup message does not appear completely but cut and there is a strange blink of the page how to reproduce open the context manager open a context in edit mode navigate the wizard steps untill step and save expected result a loading spinner appears when you click on save to notify that the saving operation is in progress as soon as the context is saved the user is brought back to the context list page without any blink effect on that page and with the popup s message entirely visible on top right current result the saving spinner is missing and the green popup message is not entirely visible on top right and there is a strange blink of the page not browser related browser info use this site a href for non expert users browser affected version internet explorer edge chrome firefox safari other useful information
| 0
|
195,764
| 14,762,305,186
|
IssuesEvent
|
2021-01-09 02:54:22
|
geerlingguy/raspberry-pi-pcie-devices
|
https://api.github.com/repos/geerlingguy/raspberry-pi-pcie-devices
|
closed
|
Test EDUP PCIe Intel AX200 WiFi 6 Card
|
testing complete
|
I bought an [EDUP PCIe Intel AX200 WiFi 6 Card](https://amzn.to/3pnFF8S), which uses the Intel AX200 chip, which is _supposedly_ friendlier with Linux than the Realtek chip I tried integrating in the ASUS PCE-AC51 over in #20.

Note: It should work with the `iwlwifi` driver—supposedly.
|
1.0
|
Test EDUP PCIe Intel AX200 WiFi 6 Card - I bought an [EDUP PCIe Intel AX200 WiFi 6 Card](https://amzn.to/3pnFF8S), which uses the Intel AX200 chip, which is _supposedly_ friendlier with Linux than the Realtek chip I tried integrating in the ASUS PCE-AC51 over in #20.

Note: It should work with the `iwlwifi` driver—supposedly.
|
test
|
test edup pcie intel wifi card i bought an which uses the intel chip which is supposedly friendlier with linux than the realtek chip i tried integrating in the asus pce over in note it should work with the iwlwifi driver—supposedly
| 1
|
31,671
| 11,966,883,673
|
IssuesEvent
|
2020-04-06 05:06:49
|
ceramicskate0/SWELF
|
https://api.github.com/repos/ceramicskate0/SWELF
|
closed
|
Add sysmon integ check feature addition
|
App Enhancement New Feature Searching File Change Security Enhancment
|
Based on the PoC at https://github.com/matterpreter/Shhmon, an uploaded bad sysmon driver caused a crash of sysmon. While the IOCs are there, the current integ check SWELF performs may not find this by default. This should be built into the app due to the reliance on sysmon working. (SWELF will not fix or resolve the issue but should alert when it is found per run).
IOCs to add to the example Seachs.txt file, and Sysmon Event ID 255 to add to the application's sec_check():
- Sysmon Event ID 255 - Error message with a detail of DriverCommunication
- Windows System Event ID 1 - From the source "FilterManager" stating File System Filter '\<DriverName\>' (Version 0.0, \<Timstamp\>) unloaded successfully.
- Windows Security Event ID 4672 - SeLoadDriverPrivileges being granted to an account other than SYSTEM
- Sysmon Event ID 1/Windows Security Event 4688 - Abnormal high-integrity process correlating with the driver unload. This event would be the last before the driver error in Sysmon
|
True
|
Add sysmon integ check feature addition - Based on the PoC at https://github.com/matterpreter/Shhmon, an uploaded bad sysmon driver caused a crash of sysmon. While the IOCs are there, the current integ check SWELF performs may not find this by default. This should be built into the app due to the reliance on sysmon working. (SWELF will not fix or resolve the issue but should alert when it is found per run).
IOCs to add to the example Seachs.txt file, and Sysmon Event ID 255 to add to the application's sec_check():
- Sysmon Event ID 255 - Error message with a detail of DriverCommunication
- Windows System Event ID 1 - From the source "FilterManager" stating File System Filter '\<DriverName\>' (Version 0.0, \<Timstamp\>) unloaded successfully.
- Windows Security Event ID 4672 - SeLoadDriverPrivileges being granted to an account other than SYSTEM
- Sysmon Event ID 1/Windows Security Event 4688 - Abnormal high-integrity process correlating with the driver unload. This event would be the last before the driver error in Sysmon
|
non_test
|
add sysmon integ check feature addition based on poc at an uploaded bad sysmon driver caused crash of sysmon while iocs are there the current integ check swelf does may not by default find this this shoul dbe built into app due to reliance on sysmon working swelf will not fix or resolve issue but should alert when found per run iocs to add to example seachs txt file and sysmon event id into application for sec check sysmon event id error message with a detail of drivercommunication windows system event id from the source filtermanager stating file system filter version unloaded successfully windows security event id seloaddriverprivileges being granted to an account other than system sysmon event id windows security event abnormal high integrity process correlating with the driver unload this event woudl be the last before the driver error in sysmon
| 0
|
125,396
| 26,651,892,040
|
IssuesEvent
|
2023-01-25 14:16:24
|
dotnet/interactive
|
https://api.github.com/repos/dotnet/interactive
|
closed
|
VS Code extension tool check should be `===` not `>=`
|
bug Area-VS Code Extension Impact-High
|
When the extension is installing the backing tool, the comparison should be `===` not `>=`; i.e., ensure _exact_ version match.
|
1.0
|
VS Code extension tool check should be `===` not `>=` - When the extension is installing the backing tool, the comparison should be `===` not `>=`; i.e., ensure _exact_ version match.
|
non_test
|
vs code extension tool check should be not when the extension is installing the backing tool the comparison should be not i e ensure exact version match
| 0
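The fix the record above asks for amounts to replacing a "greater or equal" version comparison with an exact match. A short sketch of the distinction (in Python here, although the extension itself is TypeScript; names are illustrative):

```python
def parse(version: str):
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def at_least(installed: str, required: str) -> bool:
    """The buggy check described in the issue (>=): accepts newer tools too."""
    return parse(installed) >= parse(required)

def exactly(installed: str, required: str) -> bool:
    """The requested check (===): only the exact version passes."""
    return parse(installed) == parse(required)
```

With `at_least`, a newer installed tool silently passes the check even when the extension requires one specific version; `exactly` rejects it.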
|
31,145
| 2,732,341,767
|
IssuesEvent
|
2015-04-17 04:44:01
|
gakshay/hookhook
|
https://api.github.com/repos/gakshay/hookhook
|
closed
|
Add google analytics to the application to track the usage
|
Medium Priority Ready Story Point: 1
|
- [ ] create GA code
- [ ] Add it to codebase on main layout file
|
1.0
|
Add google analytics to the application to track the usage - - [ ] create GA code
- [ ] Add it to codebase on main layout file
|
non_test
|
add google analytics to the application to track the usage create ga code add it to codebase on main layout file
| 0
|
26,570
| 13,056,098,272
|
IssuesEvent
|
2020-07-30 03:39:07
|
ballista-compute/ballista
|
https://api.github.com/repos/ballista-compute/ballista
|
opened
|
Re-implement scheduler / executor threading model
|
performance rust
|
The scheduler in 0.3.0-alpha-1 was sufficient to demonstrate true distributed compute but the design is extremely dumb and it will never be able to achieve good performance.
I have been working on the next iteration of the design on my whiteboard and plan on implementing it this weekend. I am creating this issue for visibility and will share the design this weekend, assuming it works out.
|
True
|
Re-implement scheduler / executor threading model - The scheduler in 0.3.0-alpha-1 was sufficient to demonstrate true distributed compute but the design is extremely dumb and it will never be able to achieve good performance.
I have been working on the next iteration of the design on my whiteboard and plan on implementing it this weekend. I am creating this issue for visibility and will share the design this weekend, assuming it works out.
|
non_test
|
re implement scheduler executor threading model the scheduler in alpha was sufficient to demonstrate true distributed compute but the design is extremely dumb and it will never be able to achieve good performance i have been working on the next iteration of the design on my whiteboard and plan on implementing it this weekend i am creating this issue for visibility and will share the design this weekend assuming it works out
| 0
|
151,916
| 23,891,202,383
|
IssuesEvent
|
2022-09-08 11:37:55
|
equinor/design-system
|
https://api.github.com/repos/equinor/design-system
|
closed
|
New icons for REN & offshore wind
|
design 💡 feature request icons
|
**Is your feature request related to a problem? Please describe.**
In Vortex we have come across the need for various icons that do not yet exist in EDS.
Vessel - SOV (like the existing EDS icon, but waves were reduced)
Vessel - CTV
Craning &
No Craning
Handheld radio, aka TETRA Radio
Turbine / WTG (on and offshore could be distinguished)
Tool bags, in 3 varieties - regular, wheel and rope
Substations, both on and offshore varieties
**Describe the solution you'd like**
Would like these icons reviewed by the EDS team, adjusted if need be and added to the EDS library. As it is possible that these icons will be useful to others as well.
Note: neither the existing turbine icon (fan) nor the suggested one here (stick figure) are yet ideal.
**Describe alternatives you've considered**
Material design icons were explored but found to be insufficient.
**Additional context**



The components in out figma files: https://www.figma.com/file/vNUMP31ZD8KeP4kv3ZbXKI/Vortex-MVP-Concept-2021?node-id=3735%3A303167
Icon draft space: https://www.figma.com/file/vNUMP31ZD8KeP4kv3ZbXKI/Vortex-MVP-Concept-2021?node-id=8469%3A400572
|
1.0
|
New icons for REN & offshore wind - **Is your feature request related to a problem? Please describe.**
In Vortex we have come across the need for various icons that do not yet exist in EDS.
Vessel - SOV (like the existing EDS icon, but waves were reduced)
Vessel - CTV
Craning &
No Craning
Handheld radio, aka TETRA Radio
Turbine / WTG (on and offshore could be distinguished)
Tool bags, in 3 varieties - regular, wheel and rope
Substations, both on and offshore varieties
**Describe the solution you'd like**
Would like these icons reviewed by the EDS team, adjusted if need be and added to the EDS library. As it is possible that these icons will be useful to others as well.
Note: neither the existing turbine icon (fan) nor the suggested one here (stick figure) are yet ideal.
**Describe alternatives you've considered**
Material design icons were explored but found to be insufficient.
**Additional context**



The components in out figma files: https://www.figma.com/file/vNUMP31ZD8KeP4kv3ZbXKI/Vortex-MVP-Concept-2021?node-id=3735%3A303167
Icon draft space: https://www.figma.com/file/vNUMP31ZD8KeP4kv3ZbXKI/Vortex-MVP-Concept-2021?node-id=8469%3A400572
|
non_test
|
new icons for ren offshore wind is your feature request related to a problem please describe in vortex we have come across the need for various icons that do not yet exist in eds vessel sov like the existing eds icon but waves were reduced vessel ctv craning no craning handheld radio aka tetra radio turbine wtg on and offshore could be distinguished tool bags in varieties regular wheel and rope substations both on and offshore varieties describe the solution you d like would like these icons reviewed by the eds team adjusted if need be and added to the eds library as it is possible that these icons will be useful to others as well note neither the existing turbine icon fan nor the suggested one here stick figure are yet ideal describe alternatives you ve considered material design icons were explored but found to be insufficient additional context the components in out figma files icon draft space
| 0
|
146,762
| 11,754,918,181
|
IssuesEvent
|
2020-03-13 08:22:47
|
IBMStreams/streamsx.topology
|
https://api.github.com/repos/IBMStreams/streamsx.topology
|
closed
|
Test failure on rhel6 platform
|
test
|
There are 2 test failures on a rhel6 platform. The test logs are attached
[com.ibm.streamsx.topology.test.spl.SPLOperatorsTest_1_summary.txt](https://github.com/IBMStreams/streamsx.topology/files/3107810/com.ibm.streamsx.topology.test.spl.SPLOperatorsTest_1_summary.txt)
[TEST-com.ibm.streamsx.topology.test.spl.SPLOperatorsTest.xml.gz](https://github.com/IBMStreams/streamsx.topology/files/3107839/TEST-com.ibm.streamsx.topology.test.spl.SPLOperatorsTest.xml.gz)
|
1.0
|
Test failure on rhel6 platform - There are 2 test failures on a rhel6 platform. The test logs are attached
[com.ibm.streamsx.topology.test.spl.SPLOperatorsTest_1_summary.txt](https://github.com/IBMStreams/streamsx.topology/files/3107810/com.ibm.streamsx.topology.test.spl.SPLOperatorsTest_1_summary.txt)
[TEST-com.ibm.streamsx.topology.test.spl.SPLOperatorsTest.xml.gz](https://github.com/IBMStreams/streamsx.topology/files/3107839/TEST-com.ibm.streamsx.topology.test.spl.SPLOperatorsTest.xml.gz)
|
test
|
test failure on platform there are test failures at a platform the test logs are attached
| 1
|
75,301
| 7,468,375,585
|
IssuesEvent
|
2018-04-02 18:45:12
|
MultiPoolMiner/MultiPoolMiner
|
https://api.github.com/repos/MultiPoolMiner/MultiPoolMiner
|
closed
|
Excavator 1.4.4a: Testusers wanted
|
available for testing help wanted watchlisted
|
Testusers wanted!
In an attempt to create AMD only and NVIDIA only miners I rewrote the Excavator miner files. This will fix https://github.com/MultiPoolMiner/MultiPoolMiner/issues/1075
Since I do not own AMD hardware I need some assistance.
### **How to do this quick test:**
**You need to have a mixed rig with AMD and NVIDIA cards installed**. Then
1. Manually update Excavator to the latest version
2. Delete all Stats\Excavator*.txt files (this will trigger benchmarking)
3. Move ALL miners files (except ExcavatorAmd1.ps1 from the zip file) to some other directory
4. Run MPM; it should begin mining **on all AMD cards** (check the miner screen), but it should **NOT use any NVIDIA card**. If this is true: Hooray, step 1 done
5. Next: Stop MPM
6. Replace ExcavatorAmd1.ps1 with ExcavatorNvidia1.ps1 from the zip file
7. Start MPM; it should begin mining **on all NVIDIA cards** (check the miner screen), but it should **NOT use any AMD card**. If this is true: Hooray, step 2 done. Mission completed!
8. Restore all other miner files (where did you put them in step 3???)
9. Report findings back here - thank you!
Manual Update of Excavator
Version 1.4.4 alpha NVIDIA: https://github.com/nicehash/excavator/releases/tag/v1.4.4a
You have to update manually, because Excavator is a proprietary software by NiceHash and has a special [EULA](https://github.com/nicehash/excavator/blob/master/excavator-EULA.txt).
Unfortunately I cannot verify this behavior from the content in the log files - so uploading logs won't help me :-(
|
1.0
|
Excavator 1.4.4a: Testusers wanted - Testusers wanted!
In an attempt to create AMD only and NVIDIA only miners I rewrote the Excavator miner files. This will fix https://github.com/MultiPoolMiner/MultiPoolMiner/issues/1075
Since I do not own AMD hardware I need some assistance.
### **How to do this quick test:**
**You need to have a mixed rig with AMD and NVIDIA cards installed**. Then
1. Manually update Excavator to the latest version
2. Delete all Stats\Excavator*.txt files (this will trigger benchmarking)
3. Move ALL miners files (except ExcavatorAmd1.ps1 from the zip file) to some other directory
4. Run MPM; it should begin mining **on all AMD cards** (check the miner screen), but it should **NOT use any NVIDIA card**. If this is true: Hooray, step 1 done
5. Next: Stop MPM
6. Replace ExcavatorAmd1.ps1 with ExcavatorNvidia1.ps1 from the zip file
7. Start MPM; it should begin mining **on all NVIDIA cards** (check the miner screen), but it should **NOT use any AMD card**. If this is true: Hooray, step 2 done. Mission completed!
8. Restore all other miner files (where did you put them in step 3???)
9. Report findings back here - thank you!
Manual Update of Excavator
Version 1.4.4 alpha NVIDIA: https://github.com/nicehash/excavator/releases/tag/v1.4.4a
You have to update manually, because Excavator is a proprietary software by NiceHash and has a special [EULA](https://github.com/nicehash/excavator/blob/master/excavator-EULA.txt).
Unfortunately I cannot verify this behavior from the content in the log files - so uploading logs won't help me :-(
|
test
|
excavator testusers wanted testusers wanted in an attempt to create amd only and nvidia only miners i rewrote the excavator miner files this will fix since i do not own amd hardware i need some assistance how to do do this quick test you need to have a mixed rig with amd and nvidia cards installed then manually update excavator to the latest version delete all stats excavator txt files this will trigger benchmarking move all miners files except from the zip file to some other directory run mpm it should begin mining on all amd cards check the miner screen but it should not use any nvidia card if this is true horray step done next stop mpm replace with from the zip file start mpm it should begin mining on all nvidia cards check the miner screen but it should not use any amd card if this is true horray step done mission completed restore all other miner files where did you put them in step report findings back here thank you manual update of excavator version alpha nvidia you have to update manually because excavator is a proprietary software by nicehash and has a special unfortunately i cannot verify this behavior from the content in the log files so uploading logs won t help me
| 1
|
134,002
| 10,878,038,988
|
IssuesEvent
|
2019-11-16 14:55:49
|
ldapjs/node-ldapjs
|
https://api.github.com/repos/ldapjs/node-ldapjs
|
closed
|
Client timeout doesn't work
|
potential-ci-test
|
I'm creating a client with "timeout" option to be able to authenticate a user with the backup LDAP server if the main LDAP server is unavailable, but it doesn't work, the timeout is not triggered.
Wondering if that is a known bug, or can you please provide a working example with timeout option?
|
1.0
|
Client timeout doesn't work - I'm creating a client with "timeout" option to be able to authenticate a user with the backup LDAP server if the main LDAP server is unavailable, but it doesn't work, the timeout is not triggered.
Wondering if that is a known bug, or can you please provide a working example with timeout option?
|
test
|
client timeout doesn t work i m creating a client with timeout option to be able to authenticate a user with the backup ldap server if the main ldap server is unavailable but it doesn t work the timeout is not triggered wondering if that is a known bug or can you please provide a working example with timeout option
| 1
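The failover the reporter is after — fall back to a backup LDAP server when the primary times out — can be sketched generically. This is Python rather than Node.js, and `connect` is a stand-in for the real ldapjs bind call, so treat it as a pattern, not an ldapjs API:

```python
def connect_with_fallback(servers, connect, timeout=5.0):
    """Try each server in order; `connect` is any callable that raises on
    failure (including a timeout). Returns (server, connection) for the
    first server that answers, or raises once every server has failed."""
    last_error = None
    for server in servers:
        try:
            return server, connect(server, timeout)
        except Exception as exc:  # in real code, catch only the timeout error
            last_error = exc
    raise ConnectionError(f"all servers failed: {last_error!r}")
```

The important detail for the bug report is that this pattern only works if the underlying client actually honors its timeout option and raises; otherwise the loop never advances to the backup server.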
|
165,045
| 26,085,584,424
|
IssuesEvent
|
2022-12-26 02:03:05
|
1nMarket/frontend
|
https://api.github.com/repos/1nMarket/frontend
|
closed
|
[design] home_피드페이지 스타일
|
design🎨
|
## ⭐ 주요 기능
home_피드 없을 경우 피드페이지 스타일
## 📋 진행사항
- [ ] 피드 없을 경우 피드페이지 스타일
## 🚨 특이사항
이 외 특이사항을 명시해주세요.
|
1.0
|
[design] home_피드페이지 스타일 - ## ⭐ 주요 기능
home_피드 없을 경우 피드페이지 스타일
## 📋 진행사항
- [ ] 피드 없을 경우 피드페이지 스타일
## 🚨 특이사항
이 외 특이사항을 명시해주세요.
|
non_test
|
home 피드페이지 스타일 ⭐ 주요 기능 home 피드 없을 경우 피드페이지 스타일 📋 진행사항 피드 없을 경우 피드페이지 스타일 🚨 특이사항 이 외 특이사항을 명시해주세요
| 0
|
184,118
| 31,821,238,807
|
IssuesEvent
|
2023-09-14 02:35:54
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[UX][Design] Create Lo-Fi Wireframe Structure for Past Appointments
|
design ux HCE-Checkin
|
While we work on [scope discovery](https://github.com/department-of-veterans-affairs/va.gov-team/issues/65592) and [back-end user flows](https://github.com/department-of-veterans-affairs/va.gov-team/issues/65593) for travel reimbursement for past appointments, we need to understand what parts of the userflow (Veteran-facing) screens we already have a handle on, in the form of complete or proposed designs, and begin to construct a low-fi wireframe flow with the available screens.
[Link to 18F designs]
## Tasks
- [ ] Create lo-fi wireframes for discovery purposes
- [ ] Identify what design work will still need to be done
- [ ] Share with Kristen and UX
- [ ] Work with Kristen to determine what design work should be done by PCI or Appointments team
- [ ] Create f/u tickets
## AC
- [ ] Initial lo-fi wireframe structure created
- [ ] PCI vs. Appointments team roles have been clarified for next design steps
- [ ] F/u tickets created
|
1.0
|
[UX][Design] Create Lo-Fi Wireframe Structure for Past Appointments - While we work on [scope discovery](https://github.com/department-of-veterans-affairs/va.gov-team/issues/65592) and [back-end user flows](https://github.com/department-of-veterans-affairs/va.gov-team/issues/65593) for travel reimbursement for past appointments, we need to understand what parts of the userflow (Veteran-facing) screens we already have a handle on, in the form of complete or proposed designs, and begin to construct a low-fi wireframe flow with the available screens.
[Link to 18F designs]
## Tasks
- [ ] Create lo-fi wireframes for discovery purposes
- [ ] Identify what design work will still need to be done
- [ ] Share with Kristen and UX
- [ ] Work with Kristen to determine what design work should be done by PCI or Appointments team
- [ ] Create f/u tickets
## AC
- [ ] Initial lo-fi wireframe structure created
- [ ] PCI vs. Appointments team roles have been clarified for next design steps
- [ ] F/u tickets created
|
non_test
|
create lo fi wireframe structure for past appointments while we work on and for travel reimbursement for past appointments we need to understand what parts of the userflow veteran facing screens we already have a handle on in the form of complete or proposed designs and begin to construct a low fi wireframe flow with the available screens tasks create lo fi wireframes for discovery purposes identify what design work will still need to be done share with kristen and ux work with kristen to determine what design work should be done by pci or appointments team create f u tickets ac initial lo fi wireframe structure created pci vs appointments team roles have been clarified for next design steps f u tickets created
| 0
|
102,230
| 8,822,286,021
|
IssuesEvent
|
2019-01-02 08:42:46
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
opened
|
api test_positive_update_interval test occasionally creates an invalid sync plan
|
6.5 API test-failure
|
"Invalid" meaning with a "custom cron" interval but without a cron line specified:
```
2018-12-21 13:48:12 - nailgun.client - DEBUG - Making HTTP POST request to https://host.com/katello/api/v2/organizations/271/sync_plans with options {'auth': ('admin', 'changeme'), 'verify': False, 'headers': {'content-type': 'application/json'}}, no params and data {"interval": "custom cron", "enabled": true, "name": "fvRmhdIyaGkf", "sync_date": "2516-10-31 06:25:18", "organization_id": 271}.
2018-12-21 13:48:12 - nailgun.client - WARNING - Received HTTP 500 response: {"displayMessage":"Cron expression is not valid!","errors":["Cron expression is not valid!"]}
```
This is probably an equivalent of the cli problem here: #6459
|
1.0
|
api test_positive_update_interval test occasionally creates an invalid sync plan - invalid meaning with "custom cron" interval but without a cronline specified:
```
2018-12-21 13:48:12 - nailgun.client - DEBUG - Making HTTP POST request to https://host.com/katello/api/v2/organizations/271/sync_plans with options {'auth': ('admin', 'changeme'), 'verify': False, 'headers': {'content-type': 'application/json'}}, no params and data {"interval": "custom cron", "enabled": true, "name": "fvRmhdIyaGkf", "sync_date": "2516-10-31 06:25:18", "organization_id": 271}.
2018-12-21 13:48:12 - nailgun.client - WARNING - Received HTTP 500 response: {"displayMessage":"Cron expression is not valid!","errors":["Cron expression is not valid!"]}
```
This is probably an equivalent of the cli problem here: #6459
|
test
|
api test positive update interval test occasionally creates an invalid sync plan invalid meaning with custom cron interval but without a cronline specified nailgun client debug making http post request to with options auth admin changeme verify false headers content type application json no params and data interval custom cron enabled true name fvrmhdiyagkf sync date organization id nailgun client warning received http response displaymessage cron expression is not valid errors this is probably an equivalent of the cli problem here
| 1
|
202,553
| 15,286,970,068
|
IssuesEvent
|
2021-02-23 15:16:22
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: kv/splits/nodes=3/quiesce=false failed
|
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
|
[(roachtest).kv/splits/nodes=3/quiesce=false failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46):
```
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_074851.646_n4_workload_run_kv
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657161-1612856692-11-n4cpu4:4 -- ./workload run kv --init --max-ops=1 --concurrency=192 --splits=30000 {pgurl:1-3} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload run kv --init --max-ops=1 --concurrency=192 --splits=30000 {pgurl:1-3}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2654,kv.go:504,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650
| main.registerKVSplits.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:504
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/kv/splits/nodes=3/quiesce=false](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=artifacts#/kv/splits/nodes=3/quiesce=false)
Related:
- #59889 roachtest: kv/splits/nodes=3/quiesce=false failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fsplits%2Fnodes%3D3%2Fquiesce%3Dfalse.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: kv/splits/nodes=3/quiesce=false failed - [(roachtest).kv/splits/nodes=3/quiesce=false failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46):
```
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/pkg/mod/golang.org/x/sync@v0.0.0-20190911185100-cd5d95a43a6e/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (2) output in run_074851.646_n4_workload_run_kv
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657161-1612856692-11-n4cpu4:4 -- ./workload run kv --init --max-ops=1 --concurrency=192 --splits=30000 {pgurl:1-3} returned
| stderr:
| ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload)
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload run kv --init --max-ops=1 --concurrency=192 --splits=30000 {pgurl:1-3}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
cluster.go:2654,kv.go:504,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650
| main.registerKVSplits.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/kv.go:504
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5652
| runtime.main
| /usr/local/go/src/runtime/proc.go:191
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1374
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<details><summary>More</summary><p>
Artifacts: [/kv/splits/nodes=3/quiesce=false](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=artifacts#/kv/splits/nodes=3/quiesce=false)
Related:
- #59889 roachtest: kv/splits/nodes=3/quiesce=false failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Akv%2Fsplits%2Fnodes%3D3%2Fquiesce%3Dfalse.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
test
|
roachtest kv splits nodes quiesce false failed on golang org x sync errgroup group go home agent work go pkg mod golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s wraps output in run workload run kv wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload run kv init max ops concurrency splits pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload run kv init max ops concurrency splits pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go kv go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main registerkvsplits home agent work go src github com cockroachdb cockroach pkg cmd roachtest kv go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts related roachtest kv splits nodes quiesce false failed powered by
| 1
|
9,054
| 3,020,026,214
|
IssuesEvent
|
2015-07-31 03:08:43
|
DynamoRIO/drmemory
|
https://api.github.com/repos/DynamoRIO/drmemory
|
closed
|
test suite results.txt checks completely disabled accidentally
|
Component-Tests Priority-Critical
|
We just discovered that commit 0a5978caee3edec334c8d15b08d80515ff801931
completely disabled all checks of results.txt because it uses a
TOOL_DR_MEMORY check in runtest.cmake where only TOOL_DR_HEAPSTAT is set.
It's not clear how to easily catch such problems: do we need a test of our
test script?
We should fix ASAP.
|
1.0
|
test suite results.txt checks completely disabled accidentally - We just discovered that commit 0a5978caee3edec334c8d15b08d80515ff801931
completely disabled all checks of results.txt because it uses a
TOOL_DR_MEMORY check in runtest.cmake where only TOOL_DR_HEAPSTAT is set.
It's not clear how to easily catch such problems: do we need a test of our
test script?
We should fix ASAP.
|
test
|
test suite results txt checks completely disabled accidentally we just discovered that commit completely disabled all checks of results txt because it uses a tool dr memory check in runtest cmake where only tool dr heapstat is set it s not clear how to easily catch such problems do we need a test of our test script we should fix asap
| 1
|
133,191
| 18,843,897,176
|
IssuesEvent
|
2021-11-11 12:52:28
|
Geonovum/KP-APIs
|
https://api.github.com/repos/Geonovum/KP-APIs
|
opened
|
API-53: Hide irrelevant implementation details - Feedback Publieke Consultatie
|
API design rules (normatief) Consultatie
|
Originele bericht van Provincie Zuid-Holland:
```
API-53: Hide irrelevant implementation details
Het eerste criterium is: The API design should not necessarily be a 1-on-1 mapping of the underlying domain- or persistence model
Dit is niet toetsbaar: het is niet gewenst maar het is mag wel. Ook is het lastig om deze toets op een API uit te voeren. Dus: of weglaten als het zo vrijblijvend bedoeld is, of het explicieter maken op een manier dat het toestbaar wordt.
```
Interpretatie vanuit Logius voor aanpassing van de ADR:
- onderscheid aanbrengen in:
1. convenience api's
1. process api's
1. system api's
- duidelijk aangeven dat dit een design richting is en niet een runtime verifieerbare regel is.
|
1.0
|
API-53: Hide irrelevant implementation details - Feedback Publieke Consultatie - Originele bericht van Provincie Zuid-Holland:
```
API-53: Hide irrelevant implementation details
Het eerste criterium is: The API design should not necessarily be a 1-on-1 mapping of the underlying domain- or persistence model
Dit is niet toetsbaar: het is niet gewenst maar het is mag wel. Ook is het lastig om deze toets op een API uit te voeren. Dus: of weglaten als het zo vrijblijvend bedoeld is, of het explicieter maken op een manier dat het toestbaar wordt.
```
Interpretatie vanuit Logius voor aanpassing van de ADR:
- onderscheid aanbrengen in:
1. convenience api's
1. process api's
1. system api's
- duidelijk aangeven dat dit een design richting is en niet een runtime verifieerbare regel is.
|
non_test
|
api hide irrelevant implementation details feedback publieke consultatie originele bericht van provincie zuid holland api hide irrelevant implementation details het eerste criterium is the api design should not necessarily be a on mapping of the underlying domain or persistence model dit is niet toetsbaar het is niet gewenst maar het is mag wel ook is het lastig om deze toets op een api uit te voeren dus of weglaten als het zo vrijblijvend bedoeld is of het explicieter maken op een manier dat het toestbaar wordt interpretatie vanuit logius voor aanpassing van de adr onderscheid aanbrengen in convenience api s process api s system api s duidelijk aangeven dat dit een design richting is en niet een runtime verifieerbare regel is
| 0
|
230,391
| 18,666,978,428
|
IssuesEvent
|
2021-10-30 01:43:00
|
aces/Loris
|
https://api.github.com/repos/aces/Loris
|
closed
|
Bvl Feedback visible/accessible when de-activated in Module Manager
|
Bug 24.0.0-testing
|
The pencil icon that opens Bvl Feedback is still visible in the top menu bar (next to the Help "?" icon), when the module has been set to Active:No in the Module Manager. Clicking on the pencil icon will open Bvl Feedback.
To reproduce: set Active=No for bvl_feedback. The pencil icon in the top menu bar is still Visible and clickable - opening Bvl feedback - when viewing the following:
* Timepoint List
* Instrument list
* from any instrument form
|
1.0
|
Bvl Feedback visible/accessible when de-activated in Module Manager - The pencil icon that opens Bvl Feedback is still visible in the top menu bar (next to the Help "?" icon), when the module has been set to Active:No in the Module Manager. Clicking on the pencil icon will open Bvl Feedback.
To reproduce: set Active=No for bvl_feedback. The pencil icon in the top menu bar is still Visible and clickable - opening Bvl feedback - when viewing the following:
* Timepoint List
* Instrument list
* from any instrument form
|
test
|
bvl feedback visible accessible when de activated in module manager the pencil icon that opens bvl feedback is still visible in the top menu bar next to the help icon when the module has been set to active no in the module manager clicking on the pencil icon will open bvl feedback to reproduce set active no for bvl feedback the pencil icon in the top menu bar is still visible and clickable opening bvl feedback when viewing the following timepoint list instrument list from any instrument form
| 1
|
272,638
| 23,689,692,900
|
IssuesEvent
|
2022-08-29 09:39:18
|
airbytehq/airbyte
|
https://api.github.com/repos/airbytehq/airbyte
|
closed
|
E2E Testing Tool: Implement Update version scenario action
|
type/enhancement team/connectors-java e2e-testing-tool
|
Implement Update version scenario action for testing tool
|
1.0
|
E2E Testing Tool: Implement Update version scenario action - Implement Update version scenario action for testing tool
|
test
|
testing tool implement update version scenario action implement update version scenario action for testing tool
| 1
|
265,979
| 23,213,964,031
|
IssuesEvent
|
2022-08-02 12:40:43
|
LE2HE/coding
|
https://api.github.com/repos/LE2HE/coding
|
closed
|
수 정렬
|
coding test
|
BACKJOON
==========
2750번
---------
> 1번째 줄에 수의 개수 N이 주어진다.
> 2번째 줄부터 N개의 줄에 숫자가 주어진다.
> 오름차순 정렬한 결과를 1줄에 1개씩 출력한다.
> 단, 수는 중복되지 않는다.
>
> 링크 : [2750](https://www.acmicpc.net/problem/2750)
|
1.0
|
수 정렬 - BACKJOON
==========
2750번
---------
> 1번째 줄에 수의 개수 N이 주어진다.
> 2번째 줄부터 N개의 줄에 숫자가 주어진다.
> 오름차순 정렬한 결과를 1줄에 1개씩 출력한다.
> 단, 수는 중복되지 않는다.
>
> 링크 : [2750](https://www.acmicpc.net/problem/2750)
|
test
|
수 정렬 backjoon 줄에 수의 개수 n이 주어진다 줄부터 n개의 줄에 숫자가 주어진다 오름차순 정렬한 결과를 출력한다 단 수는 중복되지 않는다 링크
| 1
|
294,107
| 25,346,293,772
|
IssuesEvent
|
2022-11-19 08:22:46
|
apache/tvm
|
https://api.github.com/repos/apache/tvm
|
opened
|
[Flaky Test] `tests/python/relay/opencl_texture/test_network.py::test_mobilenet_v1_fp32[opencl -device=adreno] `
|
test: flaky needs-triage
|
These tests were found to be flaky (intermittently failing on `main` or failed in a PR with unrelated changes). See [the docs](https://github.com/apache/tvm/blob/main/docs/contribute/ci.rst#handling-flaky-failures) for details.
### Tests(s)
- `tests/python/relay/opencl_texture/test_network.py::test_mobilenet_v1_fp32[opencl -device=adreno] `
### Jenkins Links
- https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/4764/tests
|
1.0
|
[Flaky Test] `tests/python/relay/opencl_texture/test_network.py::test_mobilenet_v1_fp32[opencl -device=adreno] ` - These tests were found to be flaky (intermittently failing on `main` or failed in a PR with unrelated changes). See [the docs](https://github.com/apache/tvm/blob/main/docs/contribute/ci.rst#handling-flaky-failures) for details.
### Tests(s)
- `tests/python/relay/opencl_texture/test_network.py::test_mobilenet_v1_fp32[opencl -device=adreno] `
### Jenkins Links
- https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/main/4764/tests
|
test
|
tests python relay opencl texture test network py test mobilenet these tests were found to be flaky intermittently failing on main or failed in a pr with unrelated changes see for details tests s tests python relay opencl texture test network py test mobilenet jenkins links
| 1
|
309,259
| 26,659,125,234
|
IssuesEvent
|
2023-01-25 19:24:54
|
MPMG-DCC-UFMG/F01
|
https://api.github.com/repos/MPMG-DCC-UFMG/F01
|
closed
|
Teste de generalizacao para a tag Acesso à Informação - Informações - Jesuânia
|
generalization test development tag - Acesso à Informação template - ABO (21) subtag - Informações
|
DoD: Realizar o teste de Generalização do validador da tag Acesso à Informação - Informações para o Município de Jesuânia.
|
1.0
|
Teste de generalizacao para a tag Acesso à Informação - Informações - Jesuânia - DoD: Realizar o teste de Generalização do validador da tag Acesso à Informação - Informações para o Município de Jesuânia.
|
test
|
teste de generalizacao para a tag acesso à informação informações jesuânia dod realizar o teste de generalização do validador da tag acesso à informação informações para o município de jesuânia
| 1
|
266,069
| 8,362,576,182
|
IssuesEvent
|
2018-10-03 17:13:25
|
meumobi/ion-employee
|
https://api.github.com/repos/meumobi/ion-employee
|
closed
|
Setup INT environment
|
high-priority pull-request
|
### Expected behaviour
Should allow to test on integration and build for production.
Check out doc about [environments management](http://meumobi.github.io/ionic/2018/05/10/managing-aliases-environment-variables-ionc.html)
### Actual behaviour
only one env is used
### Steps to reproduce
1.
2.
3.
### Expected responses
- Why it happens
- How to fix it
- How to test
|
1.0
|
Setup INT environment - ### Expected behaviour
Should allow to test on integration and build for production.
Check out doc about [environments management](http://meumobi.github.io/ionic/2018/05/10/managing-aliases-environment-variables-ionc.html)
### Actual behaviour
only one env is used
### Steps to reproduce
1.
2.
3.
### Expected responses
- Why it happens
- How to fix it
- How to test
|
non_test
|
setup int environment expected behaviour should allow to test on integration and build for production check out doc about actual behaviour only one env is used steps to reproduce expected responses why it happens how to fix it how to test
| 0
|
29,074
| 23,705,040,920
|
IssuesEvent
|
2022-08-29 23:36:53
|
cal-itp/benefits
|
https://api.github.com/repos/cal-itp/benefits
|
closed
|
Use production Login.gov
|
chore deliverable infrastructure
|
## Acceptance Criteria
- [x] On the production site, when a user clicks the sign in button, they get directed to https://secure.login.gov.
## Additional context
<!-- Include information about scope, time frame, person who requested the task, links to resources -->
There is currently only a development authorization server, which uses the Login.gov sandbox for authentication.
## What is the definition of done?
- [x] CDT gets contract signed
- [x] Production configuration for authorization server is complete with Login.gov
- [x] Production authorization server available
- [x] Production Benefits app configured to point to it
|
1.0
|
Use production Login.gov - ## Acceptance Criteria
- [x] On the production site, when a user clicks the sign in button, they get directed to https://secure.login.gov.
## Additional context
<!-- Include information about scope, time frame, person who requested the task, links to resources -->
There is currently only a development authorization server, which uses the Login.gov sandbox for authentication.
## What is the definition of done?
- [x] CDT gets contract signed
- [x] Production configuration for authorization server is complete with Login.gov
- [x] Production authorization server available
- [x] Production Benefits app configured to point to it
|
non_test
|
use production login gov acceptance criteria on the production site when a user clicks the sign in button they get directed to additional context there is currently only a development authorization server which uses the login gov sandbox for authentication what is the definition of done cdt gets contract signed production configuration for authorization server is complete with login gov production authorization server available production benefits app configured to point to it
| 0
|
224,796
| 24,791,506,701
|
IssuesEvent
|
2022-10-24 14:05:20
|
tibblesnbits/discovered_check
|
https://api.github.com/repos/tibblesnbits/discovered_check
|
closed
|
Data queries are prone to SQL Injection
|
bug security
|
The queries used to pull data from the database are using string concatenation to add variables into the query. This is prone to SQL injection and should be fixed by replacing instances like `WHERE LOWER(white_player) = LOWER(${req.query.user})` with `WHERE LOWER(white_player) = LOWER($1)`.
|
True
|
Data queries are prone to SQL Injection - The queries used to pull data from the database are using string concatenation to add variables into the query. This is prone to SQL injection and should be fixed by replacing instances like `WHERE LOWER(white_player) = LOWER(${req.query.user})` with `WHERE LOWER(white_player) = LOWER($1)`.
|
non_test
|
data queries are prone to sql injection the queries used to pull data from the database are using string concatenation to add variables into the query this is prone to sql injection and should be fixed by replacing instances like where lower white player lower req query user with where lower white player lower
| 0
|
350,732
| 24,997,691,863
|
IssuesEvent
|
2022-11-03 03:12:01
|
AY2223S1-CS2103T-T12-1/tp
|
https://api.github.com/repos/AY2223S1-CS2103T-T12-1/tp
|
closed
|
[PE-D][Tester C] Duplicate explanations in list section
|
documentation priority.LOW
|

In `List` feature explanation, there are two lines saying the same "List the classes that have been created.".
Maybe it is better to delete the "List the classes that have been created." with indentation as there's no need to repeat the same words. Just like what you did for the `Exit` feature.

<!--session: 1666945016311-9e7e2cba-277f-4261-b4be-89db23d08675-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.VeryLow` `type.DocumentationBug`
original: SweetPotato0213/ped#6
|
1.0
|
[PE-D][Tester C] Duplicate explanations in list section - 
In `List` feature explanation, there are two lines saying the same "List the classes that have been created.".
Maybe it is better to delete the "List the classes that have been created." with indentation as there's no need to repeat the same words. Just like what you did for the `Exit` feature.

<!--session: 1666945016311-9e7e2cba-277f-4261-b4be-89db23d08675-->
<!--Version: Web v3.4.4-->
-------------
Labels: `severity.VeryLow` `type.DocumentationBug`
original: SweetPotato0213/ped#6
|
non_test
|
duplicate explanations in list section in list feature explanation there are two lines saying the same list the classes that have been created maybe it is better to delete the list the classes that have been created with indentation as there s no need to repeat the same words just like what you did for the exit feature labels severity verylow type documentationbug original ped
| 0
|
200,858
| 15,160,924,271
|
IssuesEvent
|
2021-02-12 08:06:04
|
ME-ICA/aroma
|
https://api.github.com/repos/ME-ICA/aroma
|
opened
|
Move testing data into OSF
|
Good First Issue Tests
|
<!--
This is a suggested issue template for ICA-AROMA.
-->
<!--
Summarize the issue in 1-2 sentences, linking other issues if they are relevant
-->
### Summary
We could make the package much lighter by moving the data we use for testing into an OSF repository.
<!--
If desired, add suggested next steps.
If you foresee them in a particular order or priority, please use numbering
-->
### Next Steps
- Create an OSF repo for `aroma` under ME-ICA.
- Upload files under `tests/data` to the OSF repo.
- Write a function to download files for testing in `conftest.py`.
- Remove the `tests/data` directory.
|
1.0
|
Move testing data into OSF - <!--
This is a suggested issue template for ICA-AROMA.
-->
<!--
Summarize the issue in 1-2 sentences, linking other issues if they are relevant
-->
### Summary
We could make the package much lighter by moving the data we use for testing into an OSF repository.
<!--
If desired, add suggested next steps.
If you foresee them in a particular order or priority, please use numbering
-->
### Next Steps
- Create an OSF repo for `aroma` under ME-ICA.
- Upload files under `tests/data` to the OSF repo.
- Write a function to download files for testing in `conftest.py`.
- Remove the `tests/data` directory.
|
test
|
move testing data into osf this is a suggested issue template for ica aroma summarize the issue in sentences linking other issues if they are relevant summary we could make the package much lighter by moving the data we use for testing into an osf repository if desired add suggested next steps if you foresee them in a particular order or priority please use numbering next steps create an osf repo for aroma under me ica upload files under tests data to the osf repo write a function to download files for testing in conftest py remove the tests data directory
| 1
|
104,876
| 9,011,945,743
|
IssuesEvent
|
2019-02-05 15:48:35
|
fedora-infra/bodhi
|
https://api.github.com/repos/fedora-infra/bodhi
|
opened
|
Sometimes the integration tests fail to stop their containers
|
Crash Critical Tests
|
I've been seeing a problem somewhat frequently this week where the integration tests sometimes fail like this:
```
$ bci all
<snip>
f29-integration ============================= test session starts ==============================
f29-integration platform linux -- Python 3.7.2, pytest-3.6.4, py-1.5.4, pluggy-0.6.0
f29-integration rootdir: /home/vagrant/bodhi, inifile: setup.cfg
f29-integration plugins: cov-2.5.1
f29-integration collected 7 items
f29-integration
f29-integration devel/ci/integration/tests/test_bodhi.py . [ 14%]
f29-integration devel/ci/integration/tests/test_bodhi_cli.py ....F/usr/lib/python3.7/site-packages/_pytest/assertion/rewrite.py:6: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
f29-integration import imp
f29-integration /usr/lib/python3.7/site-packages/more_itertools/more.py:3: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
f29-integration from collections import Counter, defaultdict, deque, Sequence
f29-integration Traceback (most recent call last):
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/client.py", line 229, in _raise_for_status
f29-integration response.raise_for_status()
f29-integration File "/usr/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
f29-integration raise HTTPError(http_error_msg, response=self)
f29-integration requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.26/containers/a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151/kill
f29-integration
f29-integration During handling of the above exception, another exception occurred:
f29-integration
f29-integration Traceback (most recent call last):
f29-integration File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
f29-integration "__main__", mod_spec)
f29-integration File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
f29-integration exec(code, run_globals)
f29-integration File "/usr/lib/python3.7/site-packages/pytest.py", line 67, in <module>
f29-integration raise SystemExit(pytest.main())
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/config/__init__.py", line 64, in main
f29-integration return config.hook.pytest_cmdline_main(config=config)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
f29-integration return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
f29-integration return self._inner_hookexec(hook, methods, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
f29-integration firstresult=hook.spec_opts.get('firstresult'),
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 201, in _multicall
f29-integration return outcome.get_result()
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
f29-integration raise ex[1].with_traceback(ex[2])
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 180, in _multicall
f29-integration res = hook_impl.function(*args)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/main.py", line 208, in pytest_cmdline_main
f29-integration return wrap_session(config, _main)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/main.py", line 201, in wrap_session
f29-integration session=session, exitstatus=session.exitstatus
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
f29-integration return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
f29-integration return self._inner_hookexec(hook, methods, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
f29-integration firstresult=hook.spec_opts.get('firstresult'),
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 196, in _multicall
f29-integration gen.send(outcome)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/terminal.py", line 583, in pytest_sessionfinish
f29-integration outcome.get_result()
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
f29-integration raise ex[1].with_traceback(ex[2])
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 180, in _multicall
f29-integration res = hook_impl.function(*args)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 59, in pytest_sessionfinish
f29-integration session._setupstate.teardown_all()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 525, in teardown_all
f29-integration self._pop_and_teardown()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 497, in _pop_and_teardown
f29-integration self._teardown_with_finalization(colitem)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 515, in _teardown_with_finalization
f29-integration self._callfinalizers(colitem)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 512, in _callfinalizers
f29-integration py.builtin._reraise(*exc)
f29-integration File "/usr/lib/python3.7/site-packages/py/_builtin.py", line 227, in _reraise
f29-integration raise cls.with_traceback(val, tb)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 505, in _callfinalizers
f29-integration fin()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 798, in finish
f29-integration py.builtin._reraise(*e)
f29-integration File "/usr/lib/python3.7/site-packages/py/_builtin.py", line 227, in _reraise
f29-integration raise cls.with_traceback(val, tb)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 792, in finish
f29-integration func()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 740, in teardown
f29-integration next(it)
f29-integration File "/home/vagrant/bodhi/devel/ci/integration/tests/fixtures/greenwave.py", line 49, in greenwave_container
f29-integration container.kill()
f29-integration File "/usr/lib/python3.7/site-packages/conu/backend/docker/container.py", line 570, in kill
f29-integration self.d.kill(self.get_id(), signal=signal)
f29-integration File "/usr/lib/python3.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
f29-integration return f(self, resource_id, *args, **kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/container.py", line 754, in kill
f29-integration self._raise_for_status(res)
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/client.py", line 231, in _raise_for_status
f29-integration raise create_api_error_from_http_exception(e)
f29-integration File "/usr/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
f29-integration raise cls(e, response=response, explanation=explanation)
f29-integration docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot kill container a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151: Container a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151 is not running")
f29-integration sys:1: ResourceWarning: unclosed <socket.socket fd=24, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('172.18.0.1', 51950)>
f29-integration ResourceWarning: Enable tracemalloc to get the object allocation traceback
```
Unfortunately, this traceback seems to prevent us from getting information about the test that was running, since the failure appears to be in the test cleanup code, or something along those lines.
It often, and possibly always, seems to be the act of killing the Greenwave container that fails. I will likely wrap that part of the cleanup in a try/except to catch this exception and see if that helps figure out what is going on.
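The change described above — wrapping the Greenwave container kill in a try/except — could look roughly like this. This is a hedged sketch, not the actual `greenwave.py` fixture code: the `APIError` stand-in class and the fake container are purely illustrative, though the exception name matches what docker-py raises in the traceback.

```python
class APIError(Exception):
    """Stand-in for docker.errors.APIError (assumption for illustration)."""


def teardown_container(container):
    """Kill the container, tolerating 'is not running' failures so the
    teardown error does not mask which test was actually running."""
    try:
        container.kill()
    except APIError as exc:
        # The container already exited on its own; log and keep tearing down.
        print(f"ignoring kill failure: {exc}")


class AlreadyStoppedContainer:
    """Fake container that behaves like the failing case in the traceback."""

    def kill(self):
        raise APIError("Cannot kill container: Container is not running")


teardown_container(AlreadyStoppedContainer())  # no exception propagates
```

With this in place, a dead container no longer aborts the whole pytest session teardown, so the real test failure should surface in the report.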
|
1.0
|
Sometimes the integration tests fail to stop their containers - I've been seeing a problem somewhat frequently this week where the integration tests sometimes fail like this:
```
$ bci all
<snip>
f29-integration ============================= test session starts ==============================
f29-integration platform linux -- Python 3.7.2, pytest-3.6.4, py-1.5.4, pluggy-0.6.0
f29-integration rootdir: /home/vagrant/bodhi, inifile: setup.cfg
f29-integration plugins: cov-2.5.1
f29-integration collected 7 items
f29-integration
f29-integration devel/ci/integration/tests/test_bodhi.py . [ 14%]
f29-integration devel/ci/integration/tests/test_bodhi_cli.py ....F/usr/lib/python3.7/site-packages/_pytest/assertion/rewrite.py:6: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
f29-integration import imp
f29-integration /usr/lib/python3.7/site-packages/more_itertools/more.py:3: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
f29-integration from collections import Counter, defaultdict, deque, Sequence
f29-integration Traceback (most recent call last):
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/client.py", line 229, in _raise_for_status
f29-integration response.raise_for_status()
f29-integration File "/usr/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
f29-integration raise HTTPError(http_error_msg, response=self)
f29-integration requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.26/containers/a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151/kill
f29-integration
f29-integration During handling of the above exception, another exception occurred:
f29-integration
f29-integration Traceback (most recent call last):
f29-integration File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
f29-integration "__main__", mod_spec)
f29-integration File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
f29-integration exec(code, run_globals)
f29-integration File "/usr/lib/python3.7/site-packages/pytest.py", line 67, in <module>
f29-integration raise SystemExit(pytest.main())
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/config/__init__.py", line 64, in main
f29-integration return config.hook.pytest_cmdline_main(config=config)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
f29-integration return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
f29-integration return self._inner_hookexec(hook, methods, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
f29-integration firstresult=hook.spec_opts.get('firstresult'),
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 201, in _multicall
f29-integration return outcome.get_result()
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
f29-integration raise ex[1].with_traceback(ex[2])
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 180, in _multicall
f29-integration res = hook_impl.function(*args)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/main.py", line 208, in pytest_cmdline_main
f29-integration return wrap_session(config, _main)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/main.py", line 201, in wrap_session
f29-integration session=session, exitstatus=session.exitstatus
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
f29-integration return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
f29-integration return self._inner_hookexec(hook, methods, kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
f29-integration firstresult=hook.spec_opts.get('firstresult'),
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 196, in _multicall
f29-integration gen.send(outcome)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/terminal.py", line 583, in pytest_sessionfinish
f29-integration outcome.get_result()
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
f29-integration raise ex[1].with_traceback(ex[2])
f29-integration File "/usr/lib/python3.7/site-packages/pluggy/callers.py", line 180, in _multicall
f29-integration res = hook_impl.function(*args)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 59, in pytest_sessionfinish
f29-integration session._setupstate.teardown_all()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 525, in teardown_all
f29-integration self._pop_and_teardown()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 497, in _pop_and_teardown
f29-integration self._teardown_with_finalization(colitem)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 515, in _teardown_with_finalization
f29-integration self._callfinalizers(colitem)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 512, in _callfinalizers
f29-integration py.builtin._reraise(*exc)
f29-integration File "/usr/lib/python3.7/site-packages/py/_builtin.py", line 227, in _reraise
f29-integration raise cls.with_traceback(val, tb)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/runner.py", line 505, in _callfinalizers
f29-integration fin()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 798, in finish
f29-integration py.builtin._reraise(*e)
f29-integration File "/usr/lib/python3.7/site-packages/py/_builtin.py", line 227, in _reraise
f29-integration raise cls.with_traceback(val, tb)
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 792, in finish
f29-integration func()
f29-integration File "/usr/lib/python3.7/site-packages/_pytest/fixtures.py", line 740, in teardown
f29-integration next(it)
f29-integration File "/home/vagrant/bodhi/devel/ci/integration/tests/fixtures/greenwave.py", line 49, in greenwave_container
f29-integration container.kill()
f29-integration File "/usr/lib/python3.7/site-packages/conu/backend/docker/container.py", line 570, in kill
f29-integration self.d.kill(self.get_id(), signal=signal)
f29-integration File "/usr/lib/python3.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
f29-integration return f(self, resource_id, *args, **kwargs)
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/container.py", line 754, in kill
f29-integration self._raise_for_status(res)
f29-integration File "/usr/lib/python3.7/site-packages/docker/api/client.py", line 231, in _raise_for_status
f29-integration raise create_api_error_from_http_exception(e)
f29-integration File "/usr/lib/python3.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
f29-integration raise cls(e, response=response, explanation=explanation)
f29-integration docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot kill container a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151: Container a5a9661e72978b42826076e61f85f44f7cedff32e8a79e5b51be1eb1fdf34151 is not running")
f29-integration sys:1: ResourceWarning: unclosed <socket.socket fd=24, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('172.18.0.1', 51950)>
f29-integration ResourceWarning: Enable tracemalloc to get the object allocation traceback
```
Unfortunately, this traceback seems to prevent us from getting information about the test that was running, since the failure appears to be in the test cleanup code, or something along those lines.
It often, and possibly always, seems to be the act of killing the Greenwave container that fails. I will likely wrap that part of the cleanup in a try/except to catch this exception and see if that helps figure out what is going on.
|
test
|
sometimes the integration tests fail to stop their containers i ve been seeing a problem somewhat frequently this week where the integration tests sometimes fail like this bci all integration test session starts integration platform linux python pytest py pluggy integration rootdir home vagrant bodhi inifile setup cfg integration plugins cov integration collected items integration integration devel ci integration tests test bodhi py integration devel ci integration tests test bodhi cli py f usr lib site packages pytest assertion rewrite py deprecationwarning the imp module is deprecated in favour of importlib see the module s documentation for alternative uses integration import imp integration usr lib site packages more itertools more py deprecationwarning using or importing the abcs from collections instead of from collections abc is deprecated and in it will stop working integration from collections import counter defaultdict deque sequence integration traceback most recent call last integration file usr lib site packages docker api client py line in raise for status integration response raise for status integration file usr lib site packages requests models py line in raise for status integration raise httperror http error msg response self integration requests exceptions httperror server error internal server error for url http docker localhost containers kill integration integration during handling of the above exception another exception occurred integration integration traceback most recent call last integration file usr runpy py line in run module as main integration main mod spec integration file usr runpy py line in run code integration exec code run globals integration file usr lib site packages pytest py line in integration raise systemexit pytest main integration file usr lib site packages pytest config init py line in main integration return config hook pytest cmdline main config config integration file usr lib site packages pluggy init py line in 
call integration return self hookexec self self nonwrappers self wrappers kwargs integration file usr lib site packages pluggy init py line in hookexec integration return self inner hookexec hook methods kwargs integration file usr lib site packages pluggy init py line in integration firstresult hook spec opts get firstresult integration file usr lib site packages pluggy callers py line in multicall integration return outcome get result integration file usr lib site packages pluggy callers py line in get result integration raise ex with traceback ex integration file usr lib site packages pluggy callers py line in multicall integration res hook impl function args integration file usr lib site packages pytest main py line in pytest cmdline main integration return wrap session config main integration file usr lib site packages pytest main py line in wrap session integration session session exitstatus session exitstatus integration file usr lib site packages pluggy init py line in call integration return self hookexec self self nonwrappers self wrappers kwargs integration file usr lib site packages pluggy init py line in hookexec integration return self inner hookexec hook methods kwargs integration file usr lib site packages pluggy init py line in integration firstresult hook spec opts get firstresult integration file usr lib site packages pluggy callers py line in multicall integration gen send outcome integration file usr lib site packages pytest terminal py line in pytest sessionfinish integration outcome get result integration file usr lib site packages pluggy callers py line in get result integration raise ex with traceback ex integration file usr lib site packages pluggy callers py line in multicall integration res hook impl function args integration file usr lib site packages pytest runner py line in pytest sessionfinish integration session setupstate teardown all integration file usr lib site packages pytest runner py line in teardown all integration self pop 
and teardown integration file usr lib site packages pytest runner py line in pop and teardown integration self teardown with finalization colitem integration file usr lib site packages pytest runner py line in teardown with finalization integration self callfinalizers colitem integration file usr lib site packages pytest runner py line in callfinalizers integration py builtin reraise exc integration file usr lib site packages py builtin py line in reraise integration raise cls with traceback val tb integration file usr lib site packages pytest runner py line in callfinalizers integration fin integration file usr lib site packages pytest fixtures py line in finish integration py builtin reraise e integration file usr lib site packages py builtin py line in reraise integration raise cls with traceback val tb integration file usr lib site packages pytest fixtures py line in finish integration func integration file usr lib site packages pytest fixtures py line in teardown integration next it integration file home vagrant bodhi devel ci integration tests fixtures greenwave py line in greenwave container integration container kill integration file usr lib site packages conu backend docker container py line in kill integration self d kill self get id signal signal integration file usr lib site packages docker utils decorators py line in wrapped integration return f self resource id args kwargs integration file usr lib site packages docker api container py line in kill integration self raise for status res integration file usr lib site packages docker api client py line in raise for status integration raise create api error from http exception e integration file usr lib site packages docker errors py line in create api error from http exception integration raise cls e response response explanation explanation integration docker errors apierror server error internal server error cannot kill container container is not running integration sys resourcewarning unclosed 
integration resourcewarning enable tracemalloc to get the object allocation traceback unfortunately it seems that this traceback is causing us not to get the information about the test that was running as it seems to be a failure in the test cleanup code or something along those lines it does seem to often or possibly always be the act of killing the greenwave container that fails i will likely wrap that part of the cleanup in a try except to catch this exception and see if that helps figure out what is going on
| 1
|
17,463
| 10,706,362,250
|
IssuesEvent
|
2019-10-24 15:16:28
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
--attach-acr example does not work
|
Pri1 awaiting-product-team-response container-service/svc cxp doc-provided in-progress triaged
|
I updated to the latest (2.0.75) az cli but I get a
`az: error: unrecognized arguments: --attach-acr`
error.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ed59d472-ac33-5c03-dbb7-0e86e88c7dd8
* Version Independent ID: 69e1ad8c-2dd3-cc1b-2b31-996a5d866cc0
* Content: [Integrate Azure Container Registry with Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration#configure-acr-integration-for-existing-aks-clusters)
* Content Source: [articles/aks/cluster-container-registry-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/cluster-container-registry-integration.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
1.0
|
--attach-acr example does not work - I updated to the latest (2.0.75) az cli but I get a
`az: error: unrecognized arguments: --attach-acr`
error.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ed59d472-ac33-5c03-dbb7-0e86e88c7dd8
* Version Independent ID: 69e1ad8c-2dd3-cc1b-2b31-996a5d866cc0
* Content: [Integrate Azure Container Registry with Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration#configure-acr-integration-for-existing-aks-clusters)
* Content Source: [articles/aks/cluster-container-registry-integration.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/cluster-container-registry-integration.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
non_test
|
attach acr example does not work i updated to the latest az cli but i get a az error unrecognized arguments attach acr error document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
| 0
|
307,397
| 26,529,234,545
|
IssuesEvent
|
2023-01-19 11:08:42
|
apache/pulsar
|
https://api.github.com/repos/apache/pulsar
|
opened
|
Flaky-test: AdminApiOffloadTest.testSetTopicOffloadPolicies
|
component/test flaky-tests
|
### Search before asking
- [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar.
### Example failure
https://github.com/apache/pulsar/actions/runs/3957519115/jobs/6778166020#step:11:918
### Exception stacktrace
```
Error: Tests run: 28, Failures: 1, Errors: 0, Skipped: 21, Time elapsed: 23.623 s <<< FAILURE! - in org.apache.pulsar.broker.admin.AdminApiOffloadTest
Error: testSetTopicOffloadPolicies(org.apache.pulsar.broker.admin.AdminApiOffloadTest) Time elapsed: 0.043 s <<< FAILURE!
java.lang.AssertionError: expected [300] but found [-1]
at org.testng.Assert.fail(Assert.java:110)
at org.testng.Assert.failNotEquals(Assert.java:1413)
at org.testng.Assert.assertEqualsImpl(Assert.java:149)
at org.testng.Assert.assertEquals(Assert.java:131)
at org.testng.Assert.assertEquals(Assert.java:911)
at org.testng.Assert.assertEquals(Assert.java:945)
at org.apache.pulsar.broker.admin.AdminApiOffloadTest.testSetTopicOffloadPolicies(AdminApiOffloadTest.java:310)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:139)
at org.testng.internal.invokers.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:47)
at org.testng.internal.invokers.InvokeMethodRunnable.call(InvokeMethodRunnable.java:76)
at org.testng.internal.invokers.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
```
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR!
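The failing assertion (`expected [300] but found [-1]`) suggests the test read the offload policy before it was applied. A common remedy for this kind of flakiness — an assumption here, not something stated in the issue — is to poll until the expected value appears instead of asserting once. A minimal, self-contained Python sketch of that pattern:

```python
import time


def await_value(fetch, expected, timeout=5.0, interval=0.1):
    """Poll fetch() until it returns expected or the timeout elapses.

    A generic retry helper of the kind often used to deflake tests that
    assert on asynchronously-applied settings (like offload policies).
    """
    deadline = time.monotonic() + timeout
    last = fetch()
    while last != expected and time.monotonic() < deadline:
        time.sleep(interval)
        last = fetch()
    if last != expected:
        raise AssertionError(f"expected {expected!r} but found {last!r}")
    return last


# Simulated setting that becomes visible only after a few reads,
# mimicking a policy that is applied asynchronously.
state = {"calls": 0}


def fetch_threshold():
    state["calls"] += 1
    return 300 if state["calls"] >= 3 else -1


assert await_value(fetch_threshold, 300) == 300
```

In a Java/TestNG test the same idea is usually expressed with a library such as Awaitility rather than a hand-rolled loop.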
|
2.0
|
Flaky-test: AdminApiOffloadTest.testSetTopicOffloadPolicies - ### Search before asking
- [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar.
### Example failure
https://github.com/apache/pulsar/actions/runs/3957519115/jobs/6778166020#step:11:918
### Exception stacktrace
```
Error: Tests run: 28, Failures: 1, Errors: 0, Skipped: 21, Time elapsed: 23.623 s <<< FAILURE! - in org.apache.pulsar.broker.admin.AdminApiOffloadTest
Error: testSetTopicOffloadPolicies(org.apache.pulsar.broker.admin.AdminApiOffloadTest) Time elapsed: 0.043 s <<< FAILURE!
java.lang.AssertionError: expected [300] but found [-1]
at org.testng.Assert.fail(Assert.java:110)
at org.testng.Assert.failNotEquals(Assert.java:1413)
at org.testng.Assert.assertEqualsImpl(Assert.java:149)
at org.testng.Assert.assertEquals(Assert.java:131)
at org.testng.Assert.assertEquals(Assert.java:911)
at org.testng.Assert.assertEquals(Assert.java:945)
at org.apache.pulsar.broker.admin.AdminApiOffloadTest.testSetTopicOffloadPolicies(AdminApiOffloadTest.java:310)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.testng.internal.invokers.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:139)
at org.testng.internal.invokers.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:47)
at org.testng.internal.invokers.InvokeMethodRunnable.call(InvokeMethodRunnable.java:76)
at org.testng.internal.invokers.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
```
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR!
|
test
|
flaky test adminapioffloadtest testsettopicoffloadpolicies search before asking i searched in the and found nothing similar example failure exception stacktrace error tests run failures errors skipped time elapsed s failure in org apache pulsar broker admin adminapioffloadtest error testsettopicoffloadpolicies org apache pulsar broker admin adminapioffloadtest time elapsed s failure java lang assertionerror expected but found at org testng assert fail assert java at org testng assert failnotequals assert java at org testng assert assertequalsimpl assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org testng assert assertequals assert java at org apache pulsar broker admin adminapioffloadtest testsettopicoffloadpolicies adminapioffloadtest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal invokers methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invokers invokemethodrunnable runone invokemethodrunnable java at org testng internal invokers invokemethodrunnable call invokemethodrunnable java at org testng internal invokers invokemethodrunnable call invokemethodrunnable java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java are you willing to submit a pr i m willing to submit a pr
| 1
|
262,092
| 22,794,163,583
|
IssuesEvent
|
2022-07-10 13:07:13
|
team-yaza/mozi-client
|
https://api.github.com/repos/team-yaza/mozi-client
|
closed
|
Main page layout
|
refactor feature style test
|
## 🌴 **MAIN page layout**
Set up the Inbox page layout
## 📋 **Task list**
- [ ] Todo 1
## 📚 References
[YAZA Notion](https://roomy-phone-06d.notion.site/Team-Yaza-92625391b533460fb797bbce9f8839df)
|
1.0
|
Main page layout - ## 🌴 **MAIN page layout**
Set up the Inbox page layout
## 📋 **Task list**
- [ ] Todo 1
## 📚 References
[YAZA Notion](https://roomy-phone-06d.notion.site/Team-Yaza-92625391b533460fb797bbce9f8839df)
|
test
|
main page layout 🌴 main page layout set up the inbox page layout 📋 task list todo 📚 references
| 1
|
278,083
| 24,122,739,396
|
IssuesEvent
|
2022-09-20 20:18:55
|
dsu-effectiveness/utValidateR
|
https://api.github.com/repos/dsu-effectiveness/utValidateR
|
opened
|
Rule S00b test data discrepancy
|
test data discrepancy
|
``` r
library(utValidateR)
testdf <- get_test_data(file = "student")
#> Warning in check_expected_values(., colname = expected_value_column): The following rows of test data have bad expected values and were removed: 62, 63, 64, 65, 66
#> Warning in check_rule_names(., checklist = checklist, colname = rule_name_column): The following rows of test data have bad rule names and were removed: 25, 42, 43
knitr::kable(compare_rule_output("S00b", testdf = testdf))
```
| csv_row | rule | description | expr | ssn | term_id | expected | actual |
|--------:|:-----|:--------------|:------------------------------------|:------------|:--------|:---------|:-------|
| 27 | S00b | duplicate ssn | !is_duplicated(cbind(ssn, term_id)) | 612-40-2184 | 202240 | fail | pass |
<sup>Created on 2022-09-20 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
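The rule under test, `!is_duplicated(cbind(ssn, term_id))`, fails rows whose (ssn, term_id) pair occurs more than once. A minimal Python sketch of one plausible reading of that check (here every occurrence of a repeated pair is flagged, whereas R's base `duplicated()` flags only the repeats — the exact semantics of `is_duplicated` in utValidateR are an assumption):

```python
from collections import Counter


def duplicated_pairs(rows):
    """Return a pass/fail flag per row: 'fail' when the (ssn, term_id)
    pair appears more than once in the data, loosely mirroring the R rule
    !is_duplicated(cbind(ssn, term_id))."""
    counts = Counter((r["ssn"], r["term_id"]) for r in rows)
    return [
        "fail" if counts[(r["ssn"], r["term_id"])] > 1 else "pass"
        for r in rows
    ]


# With only a single occurrence of the pair, the check passes -- which is
# consistent with an actual of "pass" against an expected of "fail" if the
# test data contains the pair just once.
rows = [{"ssn": "612-40-2184", "term_id": "202240"}]
print(duplicated_pairs(rows))  # -> ['pass']
```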
|
1.0
|
Rule S00b test data discrepancy - ``` r
library(utValidateR)
testdf <- get_test_data(file = "student")
#> Warning in check_expected_values(., colname = expected_value_column): The following rows of test data have bad expected values and were removed: 62, 63, 64, 65, 66
#> Warning in check_rule_names(., checklist = checklist, colname = rule_name_column): The following rows of test data have bad rule names and were removed: 25, 42, 43
knitr::kable(compare_rule_output("S00b", testdf = testdf))
```
| csv_row | rule | description | expr | ssn | term_id | expected | actual |
|--------:|:-----|:--------------|:------------------------------------|:------------|:--------|:---------|:-------|
| 27 | S00b | duplicate ssn | !is_duplicated(cbind(ssn, term_id)) | 612-40-2184 | 202240 | fail | pass |
<sup>Created on 2022-09-20 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)</sup>
|
test
|
rule test data discrepancy r library utvalidater testdf get test data file student warning in check expected values colname expected value column the following rows of test data have bad expected values and were removed warning in check rule names checklist checklist colname rule name column the following rows of test data have bad rule names and were removed knitr kable compare rule output testdf testdf csv row rule description expr ssn term id expected actual duplicate ssn is duplicated cbind ssn term id fail pass created on by the
| 1
|
92,384
| 18,847,069,015
|
IssuesEvent
|
2021-11-11 16:03:34
|
HMIS/LSASampleCode
|
https://api.github.com/repos/HMIS/LSASampleCode
|
closed
|
Step 7.4.2: potential issue in Sample Code after last update
|
Specs Sample code
|
Hi @MollyMcEvilley ,
**1.**
I wonder if `n.PersonalID = hoha.PersonalID` was omitted on purpose.

Please advise.
**2.**
In the spec it is said that "EntryDate > [**LastActive** – 1 year]" and I can see it in Sample code.
But Sample Code has also "**hn.EntryDate > dateadd(yyyy, -1, n.ExitDate)**" I cannot find in Spec.

Thank you!
Natalie
|
1.0
|
Step 7.4.2: potential issue in Sample Code after last update - Hi @MollyMcEvilley ,
**1.**
I wonder if `n.PersonalID = hoha.PersonalID` was omitted on purpose.

Please advise.
**2.**
In the spec it is said that "EntryDate > [**LastActive** – 1 year]" and I can see it in Sample code.
But Sample Code has also "**hn.EntryDate > dateadd(yyyy, -1, n.ExitDate)**" I cannot find in Spec.

Thank you!
Natalie
|
non_test
|
step potential issue in sample code after last update hi mollymcevilley i wonder if n personalid hoha personalid was omitted on purpose please advise in the spec it is said that entrydate and i can see it in sample code but sample code has also hn entrydate dateadd yyyy n exitdate i cannot find in spec thank you natalie
| 0
|
230,832
| 18,718,215,922
|
IssuesEvent
|
2021-11-03 08:44:18
|
NLCR/SeznamDNNT
|
https://api.github.com/repos/NLCR/SeznamDNNT
|
closed
|
Obsah zasílaných e-mailů
|
ToTests Done p:DEV c::Functions
|
Dobrý den,
při testovaní jsme narazili na tyto problémy: Testováno na uživateli: _habetpet_, webový prohlížeč: _Chrome_, prostřední: _sdnnt-test_
- [x] 1. v e-mailu _Registrace uživatele_ změnit pan**i** na pan**í**
- [x] 2. pokud jako admin někomu resetuji heslo, tak se pošle e-mail, ve kterém je překlep **Vvám**
- [x] 3. v e-mailu _Žádost o resetování hesla_ nefunguje odkaz a zároveň není proklikávací
|
1.0
|
Obsah zasílaných e-mailů - Dobrý den,
při testovaní jsme narazili na tyto problémy: Testováno na uživateli: _habetpet_, webový prohlížeč: _Chrome_, prostřední: _sdnnt-test_
- [x] 1. v e-mailu _Registrace uživatele_ změnit pan**i** na pan**í**
- [x] 2. pokud jako admin někomu resetuji heslo, tak se pošle e-mail, ve kterém je překlep **Vvám**
- [x] 3. v e-mailu _Žádost o resetování hesla_ nefunguje odkaz a zároveň není proklikávací
|
test
|
obsah zasílaných e mailů dobrý den při testovaní jsme narazili na tyto problémy testováno na uživateli habetpet webový prohlížeč chrome prostřední sdnnt test v e mailu registrace uživatele změnit pan i na pan í pokud jako admin někomu resetuji heslo tak se pošle e mail ve kterém je překlep vvám v e mailu žádost o resetování hesla nefunguje odkaz a zároveň není proklikávací
| 1
|
353,202
| 10,549,966,726
|
IssuesEvent
|
2019-10-03 09:55:54
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
feedly.com - site is not usable
|
browser-focus-geckoview engine-gecko priority-important
|
<!-- @browser: mozilla focus (any version) -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://feedly.com/i/welcome
**Browser / Version**: mozilla focus (any version)
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: login not possible
**Steps to Reproduce**:
login not possible. Site keeps refreshing
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@git`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
feedly.com - site is not usable - <!-- @browser: mozilla focus (any version) -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://feedly.com/i/welcome
**Browser / Version**: mozilla focus (any version)
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: login not possible
**Steps to Reproduce**:
login not possible. Site keeps refreshing
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@git`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
feedly com site is not usable url browser version mozilla focus any version operating system android tested another browser yes problem type site is not usable description login not possible steps to reproduce login not possible site keeps refreshing browser configuration none submitted in the name of git from with ❤️
| 0
|
229,552
| 25,362,277,959
|
IssuesEvent
|
2022-11-21 01:02:40
|
DavidSpek/kubeflownotebooks
|
https://api.github.com/repos/DavidSpek/kubeflownotebooks
|
opened
|
CVE-2022-41886 (Medium) detected in tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl, tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl
|
security vulnerability
|
## CVE-2022-41886 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b>, <b>tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d8/d4/9fe4a157732125206185970c6e673483468bda299378be52bc4b8e765943/tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/d8/d4/9fe4a157732125206185970c6e673483468bda299378be52bc4b8e765943/tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /jupyter-tensorflow/cuda-requirements.txt</p>
<p>Path to vulnerable library: /jupyter-tensorflow/cuda-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/31/66/d9cd0b850397dbd33f070cc371a183b4903120b1c103419e9bf20568456e/tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/31/66/d9cd0b850397dbd33f070cc371a183b4903120b1c103419e9bf20568456e/tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /jupyter-tensorflow/cpu-requirements.txt</p>
<p>Path to vulnerable library: /jupyter-tensorflow/cpu-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `tf.raw_ops.ImageProjectiveTransformV2` is given a large output shape, it overflows. We have patched the issue in GitHub commit 8faa6ea692985dbe6ce10e1a3168e0bd60a723ba. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.
<p>Publish Date: 2022-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41886>CVE-2022-41886</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-41886">https://www.cve.org/CVERecord?id=CVE-2022-41886</a></p>
<p>Release Date: 2022-11-18</p>
<p>Fix Resolution: tensorflow - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-cpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-gpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-41886 (Medium) detected in tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl, tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2022-41886 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b>, <b>tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d8/d4/9fe4a157732125206185970c6e673483468bda299378be52bc4b8e765943/tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/d8/d4/9fe4a157732125206185970c6e673483468bda299378be52bc4b8e765943/tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /jupyter-tensorflow/cuda-requirements.txt</p>
<p>Path to vulnerable library: /jupyter-tensorflow/cuda-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow_gpu-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/31/66/d9cd0b850397dbd33f070cc371a183b4903120b1c103419e9bf20568456e/tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/31/66/d9cd0b850397dbd33f070cc371a183b4903120b1c103419e9bf20568456e/tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /jupyter-tensorflow/cpu-requirements.txt</p>
<p>Path to vulnerable library: /jupyter-tensorflow/cpu-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.8.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `tf.raw_ops.ImageProjectiveTransformV2` is given a large output shape, it overflows. We have patched the issue in GitHub commit 8faa6ea692985dbe6ce10e1a3168e0bd60a723ba. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.
<p>Publish Date: 2022-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41886>CVE-2022-41886</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-41886">https://www.cve.org/CVERecord?id=CVE-2022-41886</a></p>
<p>Release Date: 2022-11-18</p>
<p>Fix Resolution: tensorflow - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-cpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-gpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in tensorflow gpu whl tensorflow whl cve medium severity vulnerability vulnerable libraries tensorflow gpu whl tensorflow whl tensorflow gpu whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file jupyter tensorflow cuda requirements txt path to vulnerable library jupyter tensorflow cuda requirements txt dependency hierarchy x tensorflow gpu whl vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file jupyter tensorflow cpu requirements txt path to vulnerable library jupyter tensorflow cpu requirements txt dependency hierarchy x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an open source platform for machine learning when tf raw ops is given a large output shape it overflows we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
| 0
|
100,987
| 8,766,814,010
|
IssuesEvent
|
2018-12-17 17:51:24
|
microcks/microcks
|
https://api.github.com/repos/microcks/microcks
|
closed
|
Provide a simple CLI tool for interacting with Microcks API
|
component/tests good first issue kind/feature
|
Basically, we want to provide a lightweight tool that may be embedded into any CI/CD tooling (or other) so that we may have useful solution for most cases. First and obvious use-case is to be able to launch new tests using this tool.
This initiative has been started here : https://github.com/microcks/microcks-cli and is written in Go.
|
1.0
|
Provide a simple CLI tool for interacting with Microcks API - Basically, we want to provide a lightweight tool that may be embedded into any CI/CD tooling (or other) so that we may have useful solution for most cases. First and obvious use-case is to be able to launch new tests using this tool.
This initiative has been started here : https://github.com/microcks/microcks-cli and is written in Go.
|
test
|
provide a simple cli tool for interacting with microcks api basically we want to provide a lightweight tool that may be embedded into any ci cd tooling or other so that we may have useful solution for most cases first and obvious use case is to be able to launch new tests using this tool this initiative has been started here and is written in go
| 1
|
339,469
| 30,449,088,995
|
IssuesEvent
|
2023-07-16 03:41:33
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix ndarray.test_numpy_instance_floordiv__
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix ndarray.test_numpy_instance_floordiv__ - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5565459572/jobs/10165830967"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix ndarray test numpy instance floordiv tensorflow a href src jax a href src numpy a href src torch a href src paddle a href src
| 1
|
38,716
| 8,526,913,777
|
IssuesEvent
|
2018-11-02 17:46:21
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
opened
|
Should Lowering::InsertPInvokeMethodEpilog be called after control expr is lowered?
|
area-CodeGen question
|
During **Lowering::LowerCall** we set up arguments first and if the call is unmanaged insert PInvoke method epilog (PME) as required just before a CALL (or RETURN or JMP) node.
In **Lowering::InsertPInvokeMethodEpilog** there is a comment saying
https://github.com/dotnet/coreclr/blob/master/src/jit/lower.cpp#L3493-L3505 why this is needed
```
// Note: PInvoke Method Epilog (PME) needs to be inserted just before GT_RETURN, GT_JMP or GT_CALL node in execution
// order so that it is guaranteed that there will be no further PInvokes after that point in the method.
//
// Example1: GT_RETURN(op1) - say execution order is: Op1, GT_RETURN. After inserting PME, execution order would be
// Op1, PME, GT_RETURN
//
// Example2: GT_CALL(arg side effect computing nodes, Stk Args Setup, Reg Args setup). The execution order would be
// arg side effect computing nodes, Stk Args setup, Reg Args setup, GT_CALL
// After inserting PME execution order would be:
// arg side effect computing nodes, Stk Args setup, Reg Args setup, PME, GT_CALL
//
// Example3: GT_JMP. After inserting PME execution order would be: PME, GT_JMP
// That is after PME, args for GT_JMP call will be setup.
```
Next we lower a control expression (which also can have side effect computing nodes) (https://github.com/dotnet/coreclr/blob/master/src/jit/lower.cpp#L1675) and insert the result before CALL node (and after PME) **breaking** the above said invariant.
For example, if I crossgen System.Private.CoreLib.dll with `COMPlus_TailcallStress=1` during compiling `System.Diagnostics.Tracing.EventPipeEventDispatcher:CommitDispatchConfiguration():this`
the following call to `System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask` is converted to fast tail call
```
[000209] ------------ * STMT void (IL 0x10E... ???)
[000208] --C-G------- \--* CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask
[000207] ------------ this in rdi \--* LCL_VAR ref V00 this
```
Now during the lowering we get the following sequence
arg side effect computing nodes, Reg Args setup, **PME**, control expression side effect computing nodes, GT_CALL
```
lowering call (before):
N002 ( 1, 1) [000207] ------------ t207 = LCL_VAR ref V00 this u:1 (last use) $80
/--* t207 ref this in rdi
N004 ( 15, 8) [000208] --CXG------- * CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask $VN.Void
objp:
======
lowering arg : N001 ( 0, 0) [000736] ----------L- * ARGPLACE ref $13f
args:
======
late:
======
lowering arg : N002 ( 1, 1) [000207] ------------ * LCL_VAR ref V00 this u:1 (last use) $80
new node is : [000935] ------------ * PUTARG_REG ref REG rdi
======= Inserting PInvoke method epilog
results of lowering call:
N001 ( 3, 10) [000936] ------------ t936 = CNS_INT(h) long 0x7f8ce9ead190 ftn
lowering call (after):
N002 ( 1, 1) [000207] ------------ t207 = LCL_VAR ref V00 this u:1 (last use) $80
/--* t207 ref
[000935] ------------ t935 = * PUTARG_REG ref REG rdi
N001 ( 1, 1) [000937] ------------ t937 = LCL_VAR long V22 FramesRoot
/--* t937 long
N002 ( 2, 2) [000939] -c---------- t939 = * LEA(b+12) long
N003 ( 1, 1) [000938] -c---------- t938 = CNS_INT byte 1
/--* t939 long
+--* t938 byte
N004 ( 4, 4) [000940] ------------ * STOREIND byte
N001 ( 3, 10) [000936] ------------ t936 = CNS_INT(h) long 0x7f8ce9ead190 ftn
/--* t935 ref this in rdi
+--* t936 long control expr
N004 ( 15, 8) [000208] --CXG------- * CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask $VN.Void
```
resulting in the following assembly
```
IN0085: 000263 mov qword ptr [rbx+56], rdi
IN0086: 000267 mov rdi, rbx
IN0087: 00026A mov byte ptr [r14+12], 1
IN0088: 00026F lea rax, [(reloc 0x7f8ce9ead190)]
G_M56236_IG15: ; offs=000276H, size=0011H, epilog, nogc, emitadd
IN00ae: 000276 lea rsp, [rbp-28H]
IN00af: 00027A pop rbx
IN00b0: 00027B pop r12
IN00b1: 00027D pop r13
IN00b2: 00027F pop r14
IN00b3: 000281 pop r15
IN00b4: 000283 pop rbp
IN00b5: 000284 rex.jmp rax
```
Here IN0087 is PME and IN0088 is computing control expression
@dotnet/jit-contrib Shouldn't these two instructions be flipped?
|
1.0
|
Should Lowering::InsertPInvokeMethodEpilog be called after control expr is lowered? - During **Lowering::LowerCall** we set up arguments first and if the call is unmanaged insert PInvoke method epilog (PME) as required just before a CALL (or RETURN or JMP) node.
In **Lowering::InsertPInvokeMethodEpilog** there is a comment saying
https://github.com/dotnet/coreclr/blob/master/src/jit/lower.cpp#L3493-L3505 why this is needed
```
// Note: PInvoke Method Epilog (PME) needs to be inserted just before GT_RETURN, GT_JMP or GT_CALL node in execution
// order so that it is guaranteed that there will be no further PInvokes after that point in the method.
//
// Example1: GT_RETURN(op1) - say execution order is: Op1, GT_RETURN. After inserting PME, execution order would be
// Op1, PME, GT_RETURN
//
// Example2: GT_CALL(arg side effect computing nodes, Stk Args Setup, Reg Args setup). The execution order would be
// arg side effect computing nodes, Stk Args setup, Reg Args setup, GT_CALL
// After inserting PME execution order would be:
// arg side effect computing nodes, Stk Args setup, Reg Args setup, PME, GT_CALL
//
// Example3: GT_JMP. After inserting PME execution order would be: PME, GT_JMP
// That is after PME, args for GT_JMP call will be setup.
```
Next we lower a control expression (which also can have side effect computing nodes) (https://github.com/dotnet/coreclr/blob/master/src/jit/lower.cpp#L1675) and insert the result before CALL node (and after PME) **breaking** the above said invariant.
For example, if I crossgen System.Private.CoreLib.dll with `COMPlus_TailcallStress=1` during compiling `System.Diagnostics.Tracing.EventPipeEventDispatcher:CommitDispatchConfiguration():this`
the following call to `System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask` is converted to fast tail call
```
[000209] ------------ * STMT void (IL 0x10E... ???)
[000208] --C-G------- \--* CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask
[000207] ------------ this in rdi \--* LCL_VAR ref V00 this
```
Now during the lowering we get the following sequence
arg side effect computing nodes, Reg Args setup, **PME**, control expression side effect computing nodes, GT_CALL
```
lowering call (before):
N002 ( 1, 1) [000207] ------------ t207 = LCL_VAR ref V00 this u:1 (last use) $80
/--* t207 ref this in rdi
N004 ( 15, 8) [000208] --CXG------- * CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask $VN.Void
objp:
======
lowering arg : N001 ( 0, 0) [000736] ----------L- * ARGPLACE ref $13f
args:
======
late:
======
lowering arg : N002 ( 1, 1) [000207] ------------ * LCL_VAR ref V00 this u:1 (last use) $80
new node is : [000935] ------------ * PUTARG_REG ref REG rdi
======= Inserting PInvoke method epilog
results of lowering call:
N001 ( 3, 10) [000936] ------------ t936 = CNS_INT(h) long 0x7f8ce9ead190 ftn
lowering call (after):
N002 ( 1, 1) [000207] ------------ t207 = LCL_VAR ref V00 this u:1 (last use) $80
/--* t207 ref
[000935] ------------ t935 = * PUTARG_REG ref REG rdi
N001 ( 1, 1) [000937] ------------ t937 = LCL_VAR long V22 FramesRoot
/--* t937 long
N002 ( 2, 2) [000939] -c---------- t939 = * LEA(b+12) long
N003 ( 1, 1) [000938] -c---------- t938 = CNS_INT byte 1
/--* t939 long
+--* t938 byte
N004 ( 4, 4) [000940] ------------ * STOREIND byte
N001 ( 3, 10) [000936] ------------ t936 = CNS_INT(h) long 0x7f8ce9ead190 ftn
/--* t935 ref this in rdi
+--* t936 long control expr
N004 ( 15, 8) [000208] --CXG------- * CALL void System.Diagnostics.Tracing.EventPipeEventDispatcher.StartDispatchTask $VN.Void
```
resulting in the following assembly
```
IN0085: 000263 mov qword ptr [rbx+56], rdi
IN0086: 000267 mov rdi, rbx
IN0087: 00026A mov byte ptr [r14+12], 1
IN0088: 00026F lea rax, [(reloc 0x7f8ce9ead190)]
G_M56236_IG15: ; offs=000276H, size=0011H, epilog, nogc, emitadd
IN00ae: 000276 lea rsp, [rbp-28H]
IN00af: 00027A pop rbx
IN00b0: 00027B pop r12
IN00b1: 00027D pop r13
IN00b2: 00027F pop r14
IN00b3: 000281 pop r15
IN00b4: 000283 pop rbp
IN00b5: 000284 rex.jmp rax
```
Here IN0087 is PME and IN0088 is computing control expression
@dotnet/jit-contrib Shouldn't these two instructions be flipped?
|
non_test
|
should lowering insertpinvokemethodepilog be called after control expr is lowered during lowering lowercall we set up arguments first and if the call is unmanaged insert pinvoke method epilog pme as required just before a call or return or jmp node in lowering insertpinvokemethodepilog there is a comment saying why this is needed note pinvoke method epilog pme needs to be inserted just before gt return gt jmp or gt call node in execution order so that it is guaranteed that there will be no further pinvokes after that point in the method gt return say execution order is gt return after inserting pme execution order would be pme gt return gt call arg side effect computing nodes stk args setup reg args setup the execution order would be arg side effect computing nodes stk args setup reg args setup gt call after inserting pme execution order would be arg side effect computing nodes stk args setup reg args setup pme gt call gt jmp after inserting pme execution order would be pme gt jmp that is after pme args for gt jmp call will be setup next we lower a control expression which also can have side effect computing nodes and insert the result before call node and after pme breaking the above said invariant for example if i crossgen system private corelib dll with complus tailcallstress during compiling system diagnostics tracing eventpipeeventdispatcher commitdispatchconfiguration this the following call to system diagnostics tracing eventpipeeventdispatcher startdispatchtask is converted to fast tail call stmt void il c g call void system diagnostics tracing eventpipeeventdispatcher startdispatchtask this in rdi lcl var ref this now during the lowering we get the following sequence arg side effect computing nodes reg args setup pme control expression side effect computing nodes gt call lowering call before lcl var ref this u last use ref this in rdi cxg call void system diagnostics tracing eventpipeeventdispatcher startdispatchtask vn void objp lowering arg l argplace 
ref args late lowering arg lcl var ref this u last use new node is putarg reg ref reg rdi inserting pinvoke method epilog results of lowering call cns int h long ftn lowering call after lcl var ref this u last use ref putarg reg ref reg rdi lcl var long framesroot long c lea b long c cns int byte long byte storeind byte cns int h long ftn ref this in rdi long control expr cxg call void system diagnostics tracing eventpipeeventdispatcher startdispatchtask vn void resulting in the following assembly mov qword ptr rdi mov rdi rbx mov byte ptr lea rax g offs size epilog nogc emitadd lea rsp pop rbx pop pop pop pop pop rbp rex jmp rax here is pme and is computing control expression dotnet jit contrib shouldn t these two instructions be flipped
| 0
|