Dataset preview: GitHub `IssuesEvent` issue records with a binary `process` / `non_process` label. Schema (15 columns):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | values 0 to 832k |
| id | float64 | values 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | values 0 to 1 |
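Read as a dataset preview, each record pairs a free-text GitHub issue with a binary target. A minimal stdlib sketch of the apparent `label` to `binary_label` encoding; the mapping is inferred from the preview rows only, not from any dataset card:

```python
# Inferred from the preview rows: `label` has two string classes,
# "process" and "non_process", and `binary_label` mirrors them as 1/0.
# This mapping is an assumption based on the visible rows only.

def binary_label(label: str) -> int:
    """Encode the two observed label classes as 0/1."""
    if label not in ("process", "non_process"):
        raise ValueError(f"unexpected label: {label!r}")
    return 1 if label == "process" else 0
```

In the rows below, every record with `label` = `process` indeed carries `binary_label` = 1, and every `non_process` record carries 0.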
**Record 388,332** (id 26,760,544,358, IssuesEvent, 2023-01-31 06:19:52)
repo: GeorgeDong32/GoodPass
repo_url: https://api.github.com/repos/GeorgeDong32/GoodPass
action: closed
title: Main repository documentation update
labels: documentation To Do
body:
# Main repository documentation update
## Overview
Update the main repository documentation.
## To Do List
- [x] #50
- [x] #51
- [x] #52
- [x] #53
index: 1.0
text_combine:
Main repository documentation update - # Main repository documentation update
## Overview
Update the main repository documentation.
## To Do List
- [x] #50
- [x] #51
- [x] #52
- [x] #53
label: non_process
text: 主仓库资料更新 主仓库资料更新 overview 更新主仓库资料 to do list
binary_label: 0
**Record 20,275** (id 26,906,368,422, IssuesEvent, 2023-02-06 19:25:35)
repo: GoogleCloudPlatform/cloud-ops-sandbox
repo_url: https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
action: opened
title: chore: change branching model support
labels: priority: p2 type: process
body:
Modify the existing branching model ([gitflow][] like) that keeps `main` branch to host the _latest released_ version of Sandbox to the [trunk-based][]
[gitflow]: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
index: 1.0
text_combine:
chore: change branching model support - Modify the existing branching model ([gitflow][] like) that keeps `main` branch to host the _latest released_ version of Sandbox to the [trunk-based][]
[gitflow]: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
label: process
text: chore change branching model support modify the existing branching model like that keeps main branch to host the latest released version of sandbox to the
binary_label: 1
**Record 20,690** (id 15,888,385,838, IssuesEvent, 2021-04-10 07:05:06)
repo: rulojuka/sbking
repo_url: https://api.github.com/repos/rulojuka/sbking
action: opened
title: Check every Thread.sleep() to see if it may be removed
labels: enhancement quick-win usability
body:
This will greatly increase user perceived latency on the game!
index: True
text_combine:
Check every Thread.sleep() to see if it may be removed - This will greatly increase user perceived latency on the game!
label: non_process
text: check every thread sleep to see if it may be removed this will greatly increase user perceived latency on the game
binary_label: 0
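Comparing the `text_combine` and `text` fields of the records here suggests a simple normalization step: lowercase everything, strip URLs and markdown link syntax, turn punctuation into spaces, and drop any token containing a digit. The sketch below is a plausible reconstruction from inspecting the two columns, not the dataset's actual preprocessing code:

```python
# Plausible reconstruction of the text_combine -> text normalization,
# inferred by diffing the two columns in the preview rows. The real
# pipeline may differ in details.
import re

def clean(text_combine: str) -> str:
    s = text_combine.lower()
    s = re.sub(r"https?://\S+", " ", s)   # drop URLs
    s = re.sub(r"\[[^\]]*\]", " ", s)     # drop [bracketed] markdown link syntax
    s = re.sub(r"[^a-z0-9]+", " ", s)     # punctuation -> space
    # tokens containing digits ("3.8.1", "0x81", "64k") vanish in `text`
    tokens = [t for t in s.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)
```

Applied to the record above, `clean("Check every Thread.sleep() to see if it may be removed - This will greatly increase user perceived latency on the game!")` reproduces its `text` value exactly.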
**Record 174,005** (id 14,443,864,143, IssuesEvent, 2020-12-07 20:19:17)
repo: inspirezonetech/AutomateGitRepoSetup
repo_url: https://api.github.com/repos/inspirezonetech/AutomateGitRepoSetup
action: opened
title: Update docs to match new program file name
labels: documentation good first issue
body:
## Current Behaviour
Docs reference `automate-git.py`
## Changes Requested
File was renamed. Docs should now reference `automate_git.py`
index: 1.0
text_combine:
Update docs to match new program file name - ## Current Behaviour
Docs reference `automate-git.py`
## Changes Requested
File was renamed. Docs should now reference `automate_git.py`
label: non_process
text: update docs to match new program file name current behaviour docs reference automate git py changes requested file was renamed docs should now reference automate git py
binary_label: 0
**Record 53,670** (id 6,743,965,130, IssuesEvent, 2017-10-20 14:06:32)
repo: pekrau/Symbasis
repo_url: https://api.github.com/repos/pekrau/Symbasis
action: closed
title: Change terminology: Memo, spec, task
labels: design enhancement
body:
spec -> memo, item, entry resource, meta, note, bill, record, card
task: no change
template -> spec
index: 1.0
text_combine:
Change terminology: Memo, spec, task - spec -> memo, item, entry resource, meta, note, bill, record, card
task: no change
template -> spec
label: non_process
text: change terminology memo spec task spec memo item entry resource meta note bill record card task no change template spec
binary_label: 0
**Record 16,252** (id 20,811,074,328, IssuesEvent, 2022-03-18 02:46:56)
repo: qgis/QGIS
repo_url: https://api.github.com/repos/qgis/QGIS
action: closed
title: Processing/GDAL does not work anymore on systems in german language
labels: Feedback stale Processing Bug
body:
Hi
GDAL crashes reproducibly on my Windows 7 machine (German) with the following output:
```
QGIS-Version: 3.8.1-Zanzibar
QGIS-Codeversion: dcd95cc648
Qt-Version: 5.11.2
GDAL-Version: 2.4.1
GEOS-Version: 3.7.2-CAPI-1.11.0 b55d2125
PROJ-Version: Rel. 5.2.0, September 15th, 2018
Verarbeite Algorithmus…
Algorithmus Raster auf Layermaske zuschneiden startet…
Input parameters:
{ 'ALPHA_BAND' : False, 'CROP_TO_CUTLINE' : True, 'DATA_TYPE' : 0, 'INPUT' : 'path to GeoTiff file on network drive without special characters', 'KEEP_RESOLUTION' : False, 'MASK' : 'path to SHP file on network drive without special characters', 'MULTITHREADING' : False, 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'SET_RESOLUTION' : False, 'SOURCE_CRS' : None, 'TARGET_CRS' : None, 'X_RESOLUTION' : None, 'Y_RESOLUTION' : None }
GDAL command:
gdalwarp -of GTiff -cutline [path to file on network drive without special characters] -cl croplayer -crop_to_cutline [path to file on network drive without special characters] [path to temp file on local hd without special characters]
GDAL command output:
Traceback (most recent call last):
  File "C:/PROGRA~1/QGIS3~1.8/apps/qgis/./python/plugins\processing\algs\gdal\GdalAlgorithm.py", line 130, in processAlgorithm
    GdalUtils.runGdal(commands, feedback)
  File "C:/PROGRA~1/QGIS3~1.8/apps/qgis/./python/plugins\processing\algs\gdal\GdalUtils.py", line 106, in runGdal
    for line in proc.stdout:
  File "C:\PROGRA~1\QGIS3~1.8\apps\Python37\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 143: invalid start byte
Execution failed after 3.13 seconds
```
In fact, every raster operation (merge, reproject, etc.) crashes GDAL with said UnicodeDecodeError. QGIS 2.18 runs perfectly on the same machine. I've read in older error reports that this issue existed in QGIS 3.4.x, but it seems to have been resolved since then. I've scrubbed my .profile and re-installed QGIS 3.8.1 several times, but to no avail; the problem persists. I even switched the system language from German to English because this used to work as a workaround, but I still get the same error. On my private machine (Win 10) everything works. Is this a Windows 7 issue? I've uninstalled all older QGIS versions apart from 2.18.
index: 1.0
text_combine:
Processing/GDAL does not work anymore on systems in german language - Hi
GDAL crashes reproducibly on my Windows 7 machine (German) with the following output:
```
QGIS-Version: 3.8.1-Zanzibar
QGIS-Codeversion: dcd95cc648
Qt-Version: 5.11.2
GDAL-Version: 2.4.1
GEOS-Version: 3.7.2-CAPI-1.11.0 b55d2125
PROJ-Version: Rel. 5.2.0, September 15th, 2018
Verarbeite Algorithmus…
Algorithmus Raster auf Layermaske zuschneiden startet…
Input parameters:
{ 'ALPHA_BAND' : False, 'CROP_TO_CUTLINE' : True, 'DATA_TYPE' : 0, 'INPUT' : 'path to GeoTiff file on network drive without special characters', 'KEEP_RESOLUTION' : False, 'MASK' : 'path to SHP file on network drive without special characters', 'MULTITHREADING' : False, 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'SET_RESOLUTION' : False, 'SOURCE_CRS' : None, 'TARGET_CRS' : None, 'X_RESOLUTION' : None, 'Y_RESOLUTION' : None }
GDAL command:
gdalwarp -of GTiff -cutline [path to file on network drive without special characters] -cl croplayer -crop_to_cutline [path to file on network drive without special characters] [path to temp file on local hd without special characters]
GDAL command output:
Traceback (most recent call last):
  File "C:/PROGRA~1/QGIS3~1.8/apps/qgis/./python/plugins\processing\algs\gdal\GdalAlgorithm.py", line 130, in processAlgorithm
    GdalUtils.runGdal(commands, feedback)
  File "C:/PROGRA~1/QGIS3~1.8/apps/qgis/./python/plugins\processing\algs\gdal\GdalUtils.py", line 106, in runGdal
    for line in proc.stdout:
  File "C:\PROGRA~1\QGIS3~1.8\apps\Python37\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 143: invalid start byte
Execution failed after 3.13 seconds
```
In fact, every raster operation (merge, reproject, etc.) crashes GDAL with said UnicodeDecodeError. QGIS 2.18 runs perfectly on the same machine. I've read in older error reports that this issue existed in QGIS 3.4.x, but it seems to have been resolved since then. I've scrubbed my .profile and re-installed QGIS 3.8.1 several times, but to no avail; the problem persists. I even switched the system language from German to English because this used to work as a workaround, but I still get the same error. On my private machine (Win 10) everything works. Is this a Windows 7 issue? I've uninstalled all older QGIS versions apart from 2.18.
label: process
text: processing gdal does not work anymore on systems in german language hi gdal crashes reproducibly on my windows machine german with the following output snip qgis version zanzibar qgis codeversion qt version gdal version geos version capi proj version rel september verarbeite algorithmus… algorithmus raster auf layermaske zuschneiden startet… input parameters alpha band false crop to cutline true data type input path to geotiff file on network drive without special characters keep resolution false mask path to shp file on network drive without special characters multithreading false nodata none options output temporary output set resolution false source crs none target crs none x resolution none y resolution none gdal command gdalwarp of gtiff cutline cl croplayer crop to cutline gdal command output traceback most recent call last file c progra apps qgis python plugins processing algs gdal gdalalgorithm py line in processalgorithm gdalutils rungdal commands feedback file c progra apps qgis python plugins processing algs gdal gdalutils py line in rungdal for line in proc stdout file c progra apps lib codecs py line in decode result consumed self buffer decode data self errors final unicodedecodeerror utf codec can t decode byte in position invalid start byte execution failed after seconds snap in fact every raster operation merge reproject etc crashes gdal with said unicodedecodeerror qgis runs perfectly on the same machine i ve read in older error reports that this issue existed in qgis x but it seems to have been resolved since then i ve scrubbed my profile and re installed qgis several times but to no avail the problem persists i even switched system language from german to english because this used to work as a workaround still the same error on my private machine win everything works is this a windows issue i ve uninstalled all older qgis versions apart from
binary_label: 1
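The traceback in the record above comes from QGIS iterating `proc.stdout` with a strict UTF-8 decoder while `gdalwarp` on a German-locale Windows console emits locale-encoded bytes such as 0x81, which is not a valid UTF-8 start byte. A minimal reproduction plus the usual defensive decode; this is an illustration of the failure mode, not the actual QGIS fix:

```python
# 0x81 is a UTF-8 continuation byte; seen where a start byte is expected,
# strict decoding raises exactly the "invalid start byte" error above.
chunk = b"Verarbeite Algorithmus \x81"

try:
    chunk.decode("utf-8")          # raises UnicodeDecodeError
    strict_failed = False
except UnicodeDecodeError:
    strict_failed = True

# Lenient decoding keeps the pipeline alive: invalid bytes become U+FFFD.
lenient = chunk.decode("utf-8", errors="replace")
```

In a subprocess-reading loop, the equivalent would be opening the pipe in binary mode and decoding each line with `errors="replace"` (or with the console's actual code page) instead of assuming UTF-8.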
**Record 184,205** (id 14,972,081,524, IssuesEvent, 2021-01-27 22:12:45)
repo: McFrappe/TTT
repo_url: https://api.github.com/repos/McFrappe/TTT
action: closed
title: Look over the API we will use
labels: documentation implementation
body:
What does it provide?
What commands can we use to fetch data?
How will we display the data from the API?
What structures should we use?
etc...
etc...
index: 1.0
text_combine:
Look over the API we will use - What does it provide?
What commands can we use to fetch data?
How will we display the data from the API?
What structures should we use?
etc...
etc...
label: non_process
text: look over the api we will use what does it provide what commands can we use to fetch data how will we display the data from the api what structures should we use etc etc
binary_label: 0
**Record 2,482** (id 2,615,170,792, IssuesEvent, 2015-03-01 06:52:39)
repo: chrsmith/html5rocks
repo_url: https://api.github.com/repos/chrsmith/html5rocks
action: closed
title: better slideshow placeholder image for slides app
labels: auto-migrated Milestone-X Priority-P3 redesign Type-Bug
body:
```
we needz
```
Original issue reported on code.google.com by `paulir...@google.com` on 31 Oct 2011 at 11:55
index: 1.0
text_combine:
better slideshow placeholder image for slides app - ```
we needz
```
Original issue reported on code.google.com by `paulir...@google.com` on 31 Oct 2011 at 11:55
label: non_process
text: better slideshow placeholder image for slides app we needz original issue reported on code google com by paulir google com on oct at
binary_label: 0
**Record 206,750** (id 7,121,005,446, IssuesEvent, 2018-01-19 04:58:05)
repo: StrangeLoopGames/EcoIssues
repo_url: https://api.github.com/repos/StrangeLoopGames/EcoIssues
action: closed
title: Sometimes tabbing out makes all UI uninteractable.
labels: Medium Priority
body:
I was in a contract UI, then tabbed out to make a bug report, re-entered the game, and now I can't click on anything.
index: 1.0
text_combine:
Sometimes tabbing out makes all UI uninteractable. - I was in a contract UI, then tabbed out to make a bug report, re-entered the game, and now I can't click on anything.
label: non_process
text: sometimes tabbing out makes all ui uninteractable i was in a contract ui then tabbed out to make a bug report reentered the game and now i cant click on anything
binary_label: 0
**Record 5,336** (id 4,896,754,805, IssuesEvent, 2016-11-20 14:38:53)
repo: raoulvdberge/refinedstorage
repo_url: https://api.github.com/repos/raoulvdberge/refinedstorage
action: closed
title: Horrible performance (30 FPS drop) with larger networks..
labels: performance
body:
Any 1.10.2.
On a good computer with a GTX 1060 I started noticing that my framerates were starting to become horribly choppy and chugging along.
I thought it might have something to do with RS since it isn't even tested on larger networks (the author said so himself). And my network isn't even large. Only about 100 64K disks, and my entire network is grouped together. No interdimensional stuff, nothing. 100x 64K disks is nothing in modded MC today.
Hopefully RS will get some major optimisation, because right now I'm done with it.
Killing my RS Controller (thus taking everything offline) immediately gave me back my 30 FPS, and I'm at 60 FPS solid again. (CPU usage down from 99% to 25%)
And I'm not doing any autocrafting, as it is way too buggy in its current state.
index: True
text_combine:
Horrible performance (30 FPS drop) with larger networks.. - Any 1.10.2.
On a good computer with a GTX 1060 I started noticing that my framerates were starting to become horribly choppy and chugging along.
I thought it might have something to do with RS since it isn't even tested on larger networks (the author said so himself). And my network isn't even large. Only about 100 64K disks, and my entire network is grouped together. No interdimensional stuff, nothing. 100x 64K disks is nothing in modded MC today.
Hopefully RS will get some major optimisation, because right now I'm done with it.
Killing my RS Controller (thus taking everything offline) immediately gave me back my 30 FPS, and I'm at 60 FPS solid again. (CPU usage down from 99% to 25%)
And I'm not doing any autocrafting, as it is way too buggy in its current state.
label: non_process
text: horrible performance fps drop with larger networks any on a good computer with a gtx i started noticing that my framerates where starting to become horribly choppy and chugging along i thought it might have something to do with rs since it isn t even tested on larger networks the author said so himself and my network isn t even large only about disks and my entire network is grouped together no interdimensional stuff nothing disks is nothing in modded mc today hopefully rs will get some major opimisation because right now i m done with it killing my rs controller thus taking everyhting offline immidiately gave me back my fps and at fps solid again cpu usage down from to and not doing any autocrafting as it is way to buggy at its current state
binary_label: 0
**Record 63,234** (id 6,830,026,665, IssuesEvent, 2017-11-09 03:53:17)
repo: medic/medic-webapp
repo_url: https://api.github.com/repos/medic/medic-webapp
action: closed
title: Test flake : "sms-gateway api - api returns list of pending WO messages" - "stale element reference:"
labels: Status: 5 - Ready Testing
body:
Full log : [travis_258576063.txt](https://github.com/medic/medic-webapp/files/1183320/travis_258576063.txt)
```
Failures:
1) sms-gateway api - api returns list of pending WO messages - returns list and updates state
Message:
[31m Failed: stale element reference: element is not attached to the page document
(Session info: headless chrome=60.0.3112.78)
(Driver info: chromedriver=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8),platform=Linux 4.4.0-83-generic x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 101 milliseconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'testing-gce-cf771b23-3e52-481f-95b2-f927eb07e136', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '4.4.0-83-generic', java.version: '1.8.0_131'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8), userDataDir=/tmp/.org.chromium.Chromium.3Sq7ii}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=60.0.3112.78, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, setWindowRect=true, unexpectedAlertBehaviour=}]
Session ID: ce70a67d91e9366aeb1e632067a3f1e9[0m
Stack:
StaleElementReferenceError: stale element reference: element is not attached to the page document
(Session info: headless chrome=60.0.3112.78)
(Driver info: chromedriver=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8),platform=Linux 4.4.0-83-generic x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 101 milliseconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'testing-gce-cf771b23-3e52-481f-95b2-f927eb07e136', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '4.4.0-83-generic', java.version: '1.8.0_131'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8), userDataDir=/tmp/.org.chromium.Chromium.3Sq7ii}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=60.0.3112.78, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, setWindowRect=true, unexpectedAlertBehaviour=}]
Session ID: ce70a67d91e9366aeb1e632067a3f1e9
at WebDriverError (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:27:5)
at StaleElementReferenceError (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:227:5)
at Object.checkLegacyResponse (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:505:15)
at parseHttpResponse (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/http.js:509:13)
at doSend.then.response (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/http.js:440:13)
at process._tickCallback (internal/process/next_tick.js:109:7)
From: Task: WebElement.click()
at thenableWebDriverProxy.schedule (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:816:17)
at WebElement.schedule_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:1921:25)
at WebElement.click (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:2002:17)
at actionFn (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:94:34)
at Array.map (native)
at actionResults.getWebElements.then (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:484:67)
at ManagedPromise.invokeCallback_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:1366:14)
at TaskQueue.execute_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2970:14)
at TaskQueue.executeNext_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2953:27)
at asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2813:27)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)Error
at ElementArrayFinder.applyAction_ (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:482:23)
at ElementArrayFinder.(anonymous function) [as click] (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:96:21)
at ElementFinder.(anonymous function) [as click] (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:873:14)
at Object.it (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:288:68)
at /home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:112:25
at new ManagedPromise (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:1067:7)
at ControlFlow.promise (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2396:12)
at schedulerExecute (/home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:95:18)
at TaskQueue.execute_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2970:14)
at TaskQueue.executeNext_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2953:27)
at asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2860:25)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)
From: Task: Run it("- returns list and updates state") in control flow
at Object.<anonymous> (/home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:94:19)
at /home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:64:48
at ControlFlow.emit (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/events.js:62:21)
at ControlFlow.shutdown_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2565:10)
at shutdownTask_.MicroTask (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2490:53)
at MicroTask.asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2619:9)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)
From asynchronous test:
Error
at Suite.describe (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:253:5)
at Suite.describe (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:225:3)
at Object.<anonymous> (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:123:1)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (/home/travis/build/medic/medic-webapp/node_modules/coffee-script/lib/coffee-script/register.js:45:36)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at /home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:93:5
at Array.forEach (native)
at Jasmine.loadSpecs (/home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:92:18)
at Jasmine.execute (/home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:197:8)
at /home/travis/build/medic/medic-webapp/node_modules/protractor/built/frameworks/jasmine.js:132:15
at Function.promise (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:682:9)
at /home/travis/build/medic/medic-webapp/node_modules/protractor/built/frameworks/jasmine.js:104:14
at _fulfilled (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:834:54)
at self.promiseDispatch.done (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:863:30)
at Promise.promise.promiseDispatch (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:796:13)
at /home/travis/build/medic/medic-webapp/node_modules/q/q.js:604:44
at runSingle (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:137:13)
at flush (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:125:13)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
```
index: 1.0
text_combine:
Test flake : "sms-gateway api - api returns list of pending WO messages" - "stale element reference:" - Full log : [travis_258576063.txt](https://github.com/medic/medic-webapp/files/1183320/travis_258576063.txt)
```
Failures:
1) sms-gateway api - api returns list of pending WO messages - returns list and updates state
Message:
[31m Failed: stale element reference: element is not attached to the page document
(Session info: headless chrome=60.0.3112.78)
(Driver info: chromedriver=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8),platform=Linux 4.4.0-83-generic x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 101 milliseconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'testing-gce-cf771b23-3e52-481f-95b2-f927eb07e136', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '4.4.0-83-generic', java.version: '1.8.0_131'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8), userDataDir=/tmp/.org.chromium.Chromium.3Sq7ii}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=60.0.3112.78, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, setWindowRect=true, unexpectedAlertBehaviour=}]
Session ID: ce70a67d91e9366aeb1e632067a3f1e9[0m
Stack:
StaleElementReferenceError: stale element reference: element is not attached to the page document
(Session info: headless chrome=60.0.3112.78)
(Driver info: chromedriver=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8),platform=Linux 4.4.0-83-generic x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 101 milliseconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '3.4.0', revision: 'unknown', time: 'unknown'
System info: host: 'testing-gce-cf771b23-3e52-481f-95b2-f927eb07e136', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '4.4.0-83-generic', java.version: '1.8.0_131'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.31.488763 (092de99f48a300323ecf8c2a4e2e7cab51de5ba8), userDataDir=/tmp/.org.chromium.Chromium.3Sq7ii}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=60.0.3112.78, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, setWindowRect=true, unexpectedAlertBehaviour=}]
Session ID: ce70a67d91e9366aeb1e632067a3f1e9
at WebDriverError (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:27:5)
at StaleElementReferenceError (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:227:5)
at Object.checkLegacyResponse (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/error.js:505:15)
at parseHttpResponse (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/http.js:509:13)
at doSend.then.response (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/http.js:440:13)
at process._tickCallback (internal/process/next_tick.js:109:7)
From: Task: WebElement.click()
at thenableWebDriverProxy.schedule (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:816:17)
at WebElement.schedule_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:1921:25)
at WebElement.click (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/webdriver.js:2002:17)
at actionFn (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:94:34)
at Array.map (native)
at actionResults.getWebElements.then (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:484:67)
at ManagedPromise.invokeCallback_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:1366:14)
at TaskQueue.execute_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2970:14)
at TaskQueue.executeNext_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2953:27)
at asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2813:27)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)Error
at ElementArrayFinder.applyAction_ (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:482:23)
at ElementArrayFinder.(anonymous function) [as click] (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:96:21)
at ElementFinder.(anonymous function) [as click] (/home/travis/build/medic/medic-webapp/node_modules/protractor/lib/element.ts:873:14)
at Object.it (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:288:68)
at /home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:112:25
at new ManagedPromise (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:1067:7)
at ControlFlow.promise (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2396:12)
at schedulerExecute (/home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:95:18)
at TaskQueue.execute_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2970:14)
at TaskQueue.executeNext_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2953:27)
at asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2860:25)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)
From: Task: Run it("- returns list and updates state") in control flow
at Object.<anonymous> (/home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:94:19)
at /home/travis/build/medic/medic-webapp/node_modules/jasminewd2/index.js:64:48
at ControlFlow.emit (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/events.js:62:21)
at ControlFlow.shutdown_ (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2565:10)
at shutdownTask_.MicroTask (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2490:53)
at MicroTask.asyncRun (/home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:2619:9)
at /home/travis/build/medic/medic-webapp/node_modules/selenium-webdriver/lib/promise.js:676:7
at process._tickCallback (internal/process/next_tick.js:109:7)
From asynchronous test:
Error
at Suite.describe (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:253:5)
at Suite.describe (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:225:3)
at Object.<anonymous> (/home/travis/build/medic/medic-webapp/tests/protractor/e2e/sms-gateway.js:123:1)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (/home/travis/build/medic/medic-webapp/node_modules/coffee-script/lib/coffee-script/register.js:45:36)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at /home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:93:5
at Array.forEach (native)
at Jasmine.loadSpecs (/home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:92:18)
at Jasmine.execute (/home/travis/build/medic/medic-webapp/node_modules/jasmine/lib/jasmine.js:197:8)
at /home/travis/build/medic/medic-webapp/node_modules/protractor/built/frameworks/jasmine.js:132:15
at Function.promise (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:682:9)
at /home/travis/build/medic/medic-webapp/node_modules/protractor/built/frameworks/jasmine.js:104:14
at _fulfilled (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:834:54)
at self.promiseDispatch.done (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:863:30)
at Promise.promise.promiseDispatch (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:796:13)
at /home/travis/build/medic/medic-webapp/node_modules/q/q.js:604:44
at runSingle (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:137:13)
at flush (/home/travis/build/medic/medic-webapp/node_modules/q/q.js:125:13)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickCallback (internal/process/next_tick.js:104:9)
```
|
non_process
|
test flake sms gateway api api returns list of pending wo messages stale element reference full log failures sms gateway api api returns list of pending wo messages returns list and updates state message failed stale element reference element is not attached to the page document session info headless chrome driver info chromedriver platform linux generic warning the server did not provide any stacktrace information command duration or timeout milliseconds for documentation on this error please visit build info version revision unknown time unknown system info host testing gce ip os name linux os arch os version generic java version driver info org openqa selenium chrome chromedriver capabilities session id stack staleelementreferenceerror stale element reference element is not attached to the page document session info headless chrome driver info chromedriver platform linux generic warning the server did not provide any stacktrace information command duration or timeout milliseconds for documentation on this error please visit build info version revision unknown time unknown system info host testing gce ip os name linux os arch os version generic java version driver info org openqa selenium chrome chromedriver capabilities session id at webdrivererror home travis build medic medic webapp node modules selenium webdriver lib error js at staleelementreferenceerror home travis build medic medic webapp node modules selenium webdriver lib error js at object checklegacyresponse home travis build medic medic webapp node modules selenium webdriver lib error js at parsehttpresponse home travis build medic medic webapp node modules selenium webdriver lib http js at dosend then response home travis build medic medic webapp node modules selenium webdriver lib http js at process tickcallback internal process next tick js from task webelement click at thenablewebdriverproxy schedule home travis build medic medic webapp node modules selenium webdriver lib webdriver js at 
webelement schedule home travis build medic medic webapp node modules selenium webdriver lib webdriver js at webelement click home travis build medic medic webapp node modules selenium webdriver lib webdriver js at actionfn home travis build medic medic webapp node modules protractor lib element ts at array map native at actionresults getwebelements then home travis build medic medic webapp node modules protractor lib element ts at managedpromise invokecallback home travis build medic medic webapp node modules selenium webdriver lib promise js at taskqueue execute home travis build medic medic webapp node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic medic webapp node modules selenium webdriver lib promise js at asyncrun home travis build medic medic webapp node modules selenium webdriver lib promise js at home travis build medic medic webapp node modules selenium webdriver lib promise js at process tickcallback internal process next tick js error at elementarrayfinder applyaction home travis build medic medic webapp node modules protractor lib element ts at elementarrayfinder anonymous function home travis build medic medic webapp node modules protractor lib element ts at elementfinder anonymous function home travis build medic medic webapp node modules protractor lib element ts at object it home travis build medic medic webapp tests protractor sms gateway js at home travis build medic medic webapp node modules index js at new managedpromise home travis build medic medic webapp node modules selenium webdriver lib promise js at controlflow promise home travis build medic medic webapp node modules selenium webdriver lib promise js at schedulerexecute home travis build medic medic webapp node modules index js at taskqueue execute home travis build medic medic webapp node modules selenium webdriver lib promise js at taskqueue executenext home travis build medic medic webapp node modules selenium webdriver lib promise js at 
asyncrun home travis build medic medic webapp node modules selenium webdriver lib promise js at home travis build medic medic webapp node modules selenium webdriver lib promise js at process tickcallback internal process next tick js from task run it returns list and updates state in control flow at object home travis build medic medic webapp node modules index js at home travis build medic medic webapp node modules index js at controlflow emit home travis build medic medic webapp node modules selenium webdriver lib events js at controlflow shutdown home travis build medic medic webapp node modules selenium webdriver lib promise js at shutdowntask microtask home travis build medic medic webapp node modules selenium webdriver lib promise js at microtask asyncrun home travis build medic medic webapp node modules selenium webdriver lib promise js at home travis build medic medic webapp node modules selenium webdriver lib promise js at process tickcallback internal process next tick js from asynchronous test error at suite describe home travis build medic medic webapp tests protractor sms gateway js at suite describe home travis build medic medic webapp tests protractor sms gateway js at object home travis build medic medic webapp tests protractor sms gateway js at module compile module js at object module extensions js module js at module load home travis build medic medic webapp node modules coffee script lib coffee script register js at trymoduleload module js at function module load module js at module require module js at require internal module js at home travis build medic medic webapp node modules jasmine lib jasmine js at array foreach native at jasmine loadspecs home travis build medic medic webapp node modules jasmine lib jasmine js at jasmine execute home travis build medic medic webapp node modules jasmine lib jasmine js at home travis build medic medic webapp node modules protractor built frameworks jasmine js at function promise home travis build medic 
medic webapp node modules q q js at home travis build medic medic webapp node modules protractor built frameworks jasmine js at fulfilled home travis build medic medic webapp node modules q q js at self promisedispatch done home travis build medic medic webapp node modules q q js at promise promise promisedispatch home travis build medic medic webapp node modules q q js at home travis build medic medic webapp node modules q q js at runsingle home travis build medic medic webapp node modules q q js at flush home travis build medic medic webapp node modules q q js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js
| 0
|
66,060
| 16,531,372,775
|
IssuesEvent
|
2021-05-27 06:34:16
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
opened
|
Please, bump elk versions to 7.* and fix linking
|
build-error
|
The elk version is very old (3.* while 7.* is released) and fails to link against external libxc, at least in my setup
I tried to update the elk/package.py for handling the latest version of elk, but failed. The issue is that elk slightly changed the way it links against external blas/lapack and requires to compile stubs for missing MKL/OpenBLAS
In addition handling of libxc in elk changed, I had to set
config['SRC_libxc'] = ' '.join([
'libxcf90.f90',
'libxcifc.f90'
])
after which this particular part apparently works fine.
|
1.0
|
Please, bump elk versions to 7.* and fix linking - The elk version is very old (3.* while 7.* is released) and fails to link against external libxc, at least in my setup
I tried to update the elk/package.py for handling the latest version of elk, but failed. The issue is that elk slightly changed the way it links against external blas/lapack and requires to compile stubs for missing MKL/OpenBLAS
In addition handling of libxc in elk changed, I had to set
config['SRC_libxc'] = ' '.join([
'libxcf90.f90',
'libxcifc.f90'
])
after which this particular part apparently works fine.
|
non_process
|
please bump elk versions to and fix linking the elk version is very old while is released and fails to link against external libxc at least in my setup i tried to update the elk package py for handling the latest version of elk but failed the issue is that elk slightly changed the way it links against external blas lapack and requires to compile stubs for missing mkl openblas in addition handling of libxc in elk changed i had to set config join libxcifc after which this particular part apparently works fine
| 0
|
289,993
| 25,030,318,965
|
IssuesEvent
|
2022-11-04 11:46:53
|
mozilla-mobile/fenix
|
https://api.github.com/repos/mozilla-mobile/fenix
|
opened
|
java.lang.RuntimeException: androidx.test.services.shellexecutor.ClientNotConnected
|
eng:intermittent-test eng:ui-test
|
### Firebase Test Run:
#### • `verifySponsoredShortcutsSettingsOptionTest` [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8102878675142350551/executions/bs.2bd2d6b1b56c835e)
### Stacktrace:
11-01 14:13:09.785: E/ShellExecutorImpl(7085): --------- beginning of crash
11-01 14:13:09.787: E/AndroidRuntime(7085): FATAL EXCEPTION: AndroidTestOrchestrator
11-01 14:13:09.787: E/AndroidRuntime(7085): Process: androidx.test.orchestrator, PID: 7085
11-01 14:13:09.787: E/AndroidRuntime(7085): java.lang.RuntimeException: androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:8)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:1)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator.execShellCommandSync(AndroidTestOrchestrator.java:3)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator.-$$Nest$smexecShellCommandSync(Unknown Source:0)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator$2.run(AndroidTestOrchestrator.java:1)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.lang.Thread.run(Thread.java:923)
11-01 14:13:09.787: E/AndroidRuntime(7085): Caused by: androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServer(ShellCommandClient.java:19)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServerSync(ShellCommandClient.java:2)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:2)
11-01 14:13:09.787: E/AndroidRuntime(7085): ... 7 more
11-01 14:13:09.789: E/ShellCommandClient(7085): Couldn't find a published binder
11-01 14:13:09.790: E/ShellExecutorImpl(7085): ShellCommandClient not connected. Is ShellCommandExecutor service started?
11-01 14:13:09.790: E/ShellExecutorImpl(7085): androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServer(ShellCommandClient.java:19)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommand(ShellExecutorImpl.java:2)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommand(ShellExecutorImpl.java:1)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.orchestrator.TestRunnable.runShellCommand(TestRunnable.java:1)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.orchestrator.TestRunnable.run(TestRunnable.java:2)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.lang.Thread.run(Thread.java:923)
11-01 14:13:09.792: I/Process(7085): Sending signal. PID: 7085 SIG: 9
11-01 14:13:07.670: W/AppOps(545): Noting op not finished: uid 10119 pkg com.google.android.gms code 79 startTime of in progress event=1660745582295
11-01 14:13:09.796: I/DropBoxManagerService(545): add tag=data_app_crash isTagEnabled=true flags=0x2
11-01 14:13:09.800: W/BroadcastQueue(545): Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.stats.service.DropBoxEntryAddedReceiver
11-01 14:13:09.801: W/BroadcastQueue(545): Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.chimera.GmsIntentOperationService$PersistentTrustedReceiver
11-01 14:13:09.802: I/ActivityManager(545): Process androidx.test.orchestrator (pid 7085) has died: fg BTOP
11-01 14:13:09.803: I/libprocessgroup(545): Successfully killed process cgroup uid 10170 pid 7085 in 0ms
11-01 14:13:09.803: I/Zygote(292): Process 7085 exited due to signal 9 (Killed)
11-01 14:13:09.803: W/ActivityManager(545): Scheduling restart of crashed service androidx.test.orchestrator/.OrchestratorService in 1000ms for connection
11-01 14:13:09.804: W/ActivityManager(545): Crash of app androidx.test.orchestrator running instrumentation ComponentInfo{androidx.test.orchestrator/androidx.test.orchestrator.AndroidTestOrchestrator}
11-01 14:13:09.803: I/binder(0): undelivered TRANSACTION_COMPLETE
11-01 14:13:09.803: I/binder(0): undelivered transaction 785454, process died.
11-01 14:13:09.806: I/ActivityManager(545): Force stopping androidx.test.orchestrator appid=10170 user=0: finished inst
11-01 14:13:09.806: I/ActivityManager(545): Force stopping service ServiceRecord{d932394 u0 androidx.test.orchestrator/.OrchestratorService}
11-01 14:13:09.812: I/app_process(7067): System.exit called, status: 0
11-01 14:13:09.812: I/AndroidRuntime(7067): VM exiting with result code 0.
### Build: 11/1
|
2.0
|
java.lang.RuntimeException: androidx.test.services.shellexecutor.ClientNotConnected - ### Firebase Test Run:
#### • `verifySponsoredShortcutsSettingsOptionTest` [Firebase link](https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8102878675142350551/executions/bs.2bd2d6b1b56c835e)
### Stacktrace:
11-01 14:13:09.785: E/ShellExecutorImpl(7085): --------- beginning of crash
11-01 14:13:09.787: E/AndroidRuntime(7085): FATAL EXCEPTION: AndroidTestOrchestrator
11-01 14:13:09.787: E/AndroidRuntime(7085): Process: androidx.test.orchestrator, PID: 7085
11-01 14:13:09.787: E/AndroidRuntime(7085): java.lang.RuntimeException: androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:8)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:1)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator.execShellCommandSync(AndroidTestOrchestrator.java:3)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator.-$$Nest$smexecShellCommandSync(Unknown Source:0)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.orchestrator.AndroidTestOrchestrator$2.run(AndroidTestOrchestrator.java:1)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
11-01 14:13:09.787: E/AndroidRuntime(7085): at java.lang.Thread.run(Thread.java:923)
11-01 14:13:09.787: E/AndroidRuntime(7085): Caused by: androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServer(ShellCommandClient.java:19)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServerSync(ShellCommandClient.java:2)
11-01 14:13:09.787: E/AndroidRuntime(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommandSync(ShellExecutorImpl.java:2)
11-01 14:13:09.787: E/AndroidRuntime(7085): ... 7 more
11-01 14:13:09.789: E/ShellCommandClient(7085): Couldn't find a published binder
11-01 14:13:09.790: E/ShellExecutorImpl(7085): ShellCommandClient not connected. Is ShellCommandExecutor service started?
11-01 14:13:09.790: E/ShellExecutorImpl(7085): androidx.test.services.shellexecutor.ClientNotConnected
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellCommandClient.execOnServer(ShellCommandClient.java:19)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommand(ShellExecutorImpl.java:2)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.services.shellexecutor.ShellExecutorImpl.executeShellCommand(ShellExecutorImpl.java:1)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.orchestrator.TestRunnable.runShellCommand(TestRunnable.java:1)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at androidx.test.orchestrator.TestRunnable.run(TestRunnable.java:2)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
11-01 14:13:09.790: E/ShellExecutorImpl(7085): at java.lang.Thread.run(Thread.java:923)
11-01 14:13:09.792: I/Process(7085): Sending signal. PID: 7085 SIG: 9
11-01 14:13:07.670: W/AppOps(545): Noting op not finished: uid 10119 pkg com.google.android.gms code 79 startTime of in progress event=1660745582295
11-01 14:13:09.796: I/DropBoxManagerService(545): add tag=data_app_crash isTagEnabled=true flags=0x2
11-01 14:13:09.800: W/BroadcastQueue(545): Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.stats.service.DropBoxEntryAddedReceiver
11-01 14:13:09.801: W/BroadcastQueue(545): Background execution not allowed: receiving Intent { act=android.intent.action.DROPBOX_ENTRY_ADDED flg=0x10 (has extras) } to com.google.android.gms/.chimera.GmsIntentOperationService$PersistentTrustedReceiver
11-01 14:13:09.802: I/ActivityManager(545): Process androidx.test.orchestrator (pid 7085) has died: fg BTOP
11-01 14:13:09.803: I/libprocessgroup(545): Successfully killed process cgroup uid 10170 pid 7085 in 0ms
11-01 14:13:09.803: I/Zygote(292): Process 7085 exited due to signal 9 (Killed)
11-01 14:13:09.803: W/ActivityManager(545): Scheduling restart of crashed service androidx.test.orchestrator/.OrchestratorService in 1000ms for connection
11-01 14:13:09.804: W/ActivityManager(545): Crash of app androidx.test.orchestrator running instrumentation ComponentInfo{androidx.test.orchestrator/androidx.test.orchestrator.AndroidTestOrchestrator}
11-01 14:13:09.803: I/binder(0): undelivered TRANSACTION_COMPLETE
11-01 14:13:09.803: I/binder(0): undelivered transaction 785454, process died.
11-01 14:13:09.806: I/ActivityManager(545): Force stopping androidx.test.orchestrator appid=10170 user=0: finished inst
11-01 14:13:09.806: I/ActivityManager(545): Force stopping service ServiceRecord{d932394 u0 androidx.test.orchestrator/.OrchestratorService}
11-01 14:13:09.812: I/app_process(7067): System.exit called, status: 0
11-01 14:13:09.812: I/AndroidRuntime(7067): VM exiting with result code 0.
### Build: 11/1
|
non_process
|
java lang runtimeexception androidx test services shellexecutor clientnotconnected firebase test run • verifysponsoredshortcutssettingsoptiontest stacktrace e shellexecutorimpl beginning of crash e androidruntime fatal exception androidtestorchestrator e androidruntime process androidx test orchestrator pid e androidruntime java lang runtimeexception androidx test services shellexecutor clientnotconnected e androidruntime at androidx test services shellexecutor shellexecutorimpl executeshellcommandsync shellexecutorimpl java e androidruntime at androidx test services shellexecutor shellexecutorimpl executeshellcommandsync shellexecutorimpl java e androidruntime at androidx test orchestrator androidtestorchestrator execshellcommandsync androidtestorchestrator java e androidruntime at androidx test orchestrator androidtestorchestrator nest smexecshellcommandsync unknown source e androidruntime at androidx test orchestrator androidtestorchestrator run androidtestorchestrator java e androidruntime at java util concurrent threadpoolexecutor runworker threadpoolexecutor java e androidruntime at java util concurrent threadpoolexecutor worker run threadpoolexecutor java e androidruntime at java lang thread run thread java e androidruntime caused by androidx test services shellexecutor clientnotconnected e androidruntime at androidx test services shellexecutor shellcommandclient execonserver shellcommandclient java e androidruntime at androidx test services shellexecutor shellcommandclient execonserversync shellcommandclient java e androidruntime at androidx test services shellexecutor shellexecutorimpl executeshellcommandsync shellexecutorimpl java e androidruntime more e shellcommandclient couldn t find a published binder e shellexecutorimpl shellcommandclient not connected is shellcommandexecutor service started e shellexecutorimpl androidx test services shellexecutor clientnotconnected e shellexecutorimpl at androidx test services shellexecutor shellcommandclient 
execonserver shellcommandclient java e shellexecutorimpl at androidx test services shellexecutor shellexecutorimpl executeshellcommand shellexecutorimpl java e shellexecutorimpl at androidx test services shellexecutor shellexecutorimpl executeshellcommand shellexecutorimpl java e shellexecutorimpl at androidx test orchestrator testrunnable runshellcommand testrunnable java e shellexecutorimpl at androidx test orchestrator testrunnable run testrunnable java e shellexecutorimpl at java util concurrent threadpoolexecutor runworker threadpoolexecutor java e shellexecutorimpl at java util concurrent threadpoolexecutor worker run threadpoolexecutor java e shellexecutorimpl at java lang thread run thread java i process sending signal pid sig w appops noting op not finished uid pkg com google android gms code starttime of in progress event i dropboxmanagerservice add tag data app crash istagenabled true flags w broadcastqueue background execution not allowed receiving intent act android intent action dropbox entry added flg has extras to com google android gms stats service dropboxentryaddedreceiver w broadcastqueue background execution not allowed receiving intent act android intent action dropbox entry added flg has extras to com google android gms chimera gmsintentoperationservice persistenttrustedreceiver i activitymanager process androidx test orchestrator pid has died fg btop i libprocessgroup successfully killed process cgroup uid pid in i zygote process exited due to signal killed w activitymanager scheduling restart of crashed service androidx test orchestrator orchestratorservice in for connection w activitymanager crash of app androidx test orchestrator running instrumentation componentinfo androidx test orchestrator androidx test orchestrator androidtestorchestrator i binder undelivered transaction complete i binder undelivered transaction process died i activitymanager force stopping androidx test orchestrator appid user finished inst i activitymanager force 
stopping service servicerecord androidx test orchestrator orchestratorservice i app process system exit called status i androidruntime vm exiting with result code build
| 0
|
15,188
| 18,956,074,818
|
IssuesEvent
|
2021-11-18 20:25:41
|
RobertCraigie/prisma-client-py
|
https://api.github.com/repos/RobertCraigie/prisma-client-py
|
closed
|
Add support for scalar lists
|
kind/feature process/candidate
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
PostgreSQL supports scalar array fields, e.g.
```prisma
model User {
...
emails String[]
}
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should support updating and filtering scalar list fields
## Additional context
WIP branch: `wip/scalar-lists`
|
1.0
|
Add support for scalar lists - ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
PostgreSQL supports scalar array fields, e.g.
```prisma
model User {
...
emails String[]
}
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should support updating and filtering scalar list fields
## Additional context
WIP branch: `wip/scalar-lists`
|
process
|
add support for scalar lists problem postgresql supports scalar array fields e g prisma model user emails string suggested solution we should support updating and filtering scalar list fields additional context wip branch wip scalar lists
| 1
|
12,623
| 15,015,611,368
|
IssuesEvent
|
2021-02-01 08:32:27
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Fix text case usage across screens
|
Bug P1 Participant manager Process: Fixed Process: Tested dev UI
|
Standardize use of title case, upper and lower case across screens, and update text accordingly.
|
2.0
|
[PM] Fix text case usage across screens - Standardize use of title case, upper and lower case across screens, and update text accordingly.
|
process
|
fix text case usage across screens standardize use of title case upper and lower case across screens and update text accordingly
| 1
|
21,935
| 2,643,147,709
|
IssuesEvent
|
2015-03-12 08:41:50
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
closed
|
Data JSON export - warning on Audio, BEFile
|
Priority - High Topic - Core Topic - Data I/O Type - Bug
|
Exporting some object types we have this problems
```
Warning: Model "BEFile" is not associated with model "Stream".....
Warning: Model "Audio" is not associated with model "Stream".....
```
There's something wrong with model bindings...
|
1.0
|
Data JSON export - warning on Audio, BEFile - Exporting some object types we have this problems
```
Warning: Model "BEFile" is not associated with model "Stream".....
Warning: Model "Audio" is not associated with model "Stream".....
```
There's something wrong with model bindings...
|
non_process
|
data json export warning on audio befile exporting some object types we have this problems warning model befile is not associated with model stream warning model audio is not associated with model stream there s something wrong with model bindings
| 0
|
372,893
| 11,029,970,566
|
IssuesEvent
|
2019-12-06 14:53:48
|
webpack-contrib/css-loader
|
https://api.github.com/repos/webpack-contrib/css-loader
|
closed
|
not correct import with modules
|
priority: 4 (important) semver: Patch severity: 4 (inconvenient) type: Bug type: Feature
|
* Operating System: linux
* Node Version: v11.12.0
* NPM Version: 6.9.0
* webpack Version: 4.29.6
* sass-loader Version: 7.1.0
This issue is a continuation of https://github.com/webpack-contrib/sass-loader/issues/675
### Expected behavior
We have css-modules in webpack.config.js
```js
{loader: 'css-loader', options: {modules: true, localIdentName: '[local]-sufx'}}
```
index.scss
```scss
@import "index.css";
@import "another.scss";
.im-from-scss {
bottom: 20px;
}
```
index.css
```css
.im-missing-from-css{
color: blue;
}
```
another.scss
```scss
.im-from-another-scss {
left: 40px;
}
```
in index.js we import the index.scss
```js
import scss from './index.scss';
```
the expected value of `scss` should be
```js
{
im-missing-from-css: "im-missing-from-css-sufx",
im-from-another-scss: "im-from-another-scss-sufx",
im-from-scss: "im-from-scss-sufx"
}
```
### Actual behavior
the value of `scss` is
```js
{
im-from-another-scss: "im-from-another-scss-sufx",
im-from-scss: "im-from-scss-sufx"
}
```
the css file is not included in the dictionary.
### How to reproduce
source code:
[bug-repo.tar.gz](https://github.com/webpack-contrib/css-loader/files/3050194/bug-repo.tar.gz)
`npm run build`
load build/index.html and see console output
|
1.0
|
not correct import with modules - * Operating System: linux
* Node Version: v11.12.0
* NPM Version: 6.9.0
* webpack Version: 4.29.6
* sass-loader Version: 7.1.0
This issue is a continuation of https://github.com/webpack-contrib/sass-loader/issues/675
### Expected behavior
We have css-modules in webpack.config.js
```js
{loader: 'css-loader', options: {modules: true, localIdentName: '[local]-sufx'}}
```
index.scss
```scss
@import "index.css";
@import "another.scss";
.im-from-scss {
bottom: 20px;
}
```
index.css
```css
.im-missing-from-css{
color: blue;
}
```
another.scss
```scss
.im-from-another-scss {
left: 40px;
}
```
in index.js we import the index.scss
```js
import scss from './index.scss';
```
the expected value of `scss` should be
```js
{
im-missing-from-css: "im-missing-from-css-sufx",
im-from-another-scss: "im-from-another-scss-sufx",
im-from-scss: "im-from-scss-sufx"
}
```
### Actual behavior
the value of `scss` is
```js
{
im-from-another-scss: "im-from-another-scss-sufx",
im-from-scss: "im-from-scss-sufx"
}
```
the css file is not included in the dictionary.
### How to reproduce
source code:
[bug-repo.tar.gz](https://github.com/webpack-contrib/css-loader/files/3050194/bug-repo.tar.gz)
`npm run build`
load build/index.html and see console output
|
non_process
|
not correct import with modules operating system linux node version npm version webpack version sass loader version this issue is a continuation of expected behavior we have css modules in webpack config js js loader css loader options modules true localidentname sufx index scss scss import index css import another scss im from scss bottom index css css im missing from css color blue another scss scss im from another scss left in index js we import the index scss js import scss from index scss the expected value of scss should be js im missing from css im missing from css sufx im from another scss im from another scss sufx im from scss im from scss sufx actual behavior the value of scss is js im from another scss im from another scss sufx im from scss im from scss sufx the css file is not included in the dictionary how to reproduce source code npm run build load build index html and see console output
| 0
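The css-loader record above describes how class-name locals from `@import`-ed files are expected to be merged into the object exported to JavaScript. A minimal sketch of that expected merge in plain JavaScript (the `suffix` helper and the plain-object shapes are illustrative assumptions, not css-loader internals):

```js
// Build the locals map each stylesheet is expected to export,
// mirroring localIdentName: '[local]-sufx' from the record above.
const suffix = (name) => `${name}-sufx`;
const localsFrom = (classNames) =>
  Object.fromEntries(classNames.map((n) => [n, suffix(n)]));

const fromCss = localsFrom(["im-missing-from-css"]);          // index.css
const fromAnotherScss = localsFrom(["im-from-another-scss"]); // another.scss
const fromScss = localsFrom(["im-from-scss"]);                // index.scss itself

// The issue expects @import to merge all three maps into one export:
const expected = { ...fromCss, ...fromAnotherScss, ...fromScss };
console.log(expected);
```

The reported bug is that the `fromCss` entries are missing from the merged export when the imported file is a plain `.css` file rather than another `.scss` file.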
|
14,395
| 17,404,640,642
|
IssuesEvent
|
2021-08-03 02:56:57
|
Tinet0624/eCommerceSite-inClassProject
|
https://api.github.com/repos/Tinet0624/eCommerceSite-inClassProject
|
closed
|
Add CI pipeline
|
Developer Process
|
Add continuous integration pipeline that will check to make sure code in a pull request compiles successfully.
|
1.0
|
Add CI pipeline - Add continuous integration pipeline that will check to make sure code in a pull request compiles successfully.
|
process
|
add ci pipeline add continuous integration pipeline that will check to make sure code in a pull request compiles successfully
| 1
|
67,934
| 14,892,901,747
|
IssuesEvent
|
2021-01-21 04:01:54
|
Kites-Foundation/hello-munnar-backend
|
https://api.github.com/repos/Kites-Foundation/hello-munnar-backend
|
closed
|
CVE-2020-11023 (Medium) detected in jquery-3.2.1.min.js
|
security vulnerability
|
## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: hello-munnar-backend/node_modules/superagent/docs/tail.html</p>
<p>Path to vulnerable library: hello-munnar-backend/node_modules/superagent/docs/tail.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kites-Foundation/hello-munnar-backend/commit/715392ae414178eb1115e7ed56f3696d61c4a96f">715392ae414178eb1115e7ed56f3696d61c4a96f</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11023 (Medium) detected in jquery-3.2.1.min.js - ## CVE-2020-11023 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: hello-munnar-backend/node_modules/superagent/docs/tail.html</p>
<p>Path to vulnerable library: hello-munnar-backend/node_modules/superagent/docs/tail.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kites-Foundation/hello-munnar-backend/commit/715392ae414178eb1115e7ed56f3696d61c4a96f">715392ae414178eb1115e7ed56f3696d61c4a96f</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11023</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jquery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file hello munnar backend node modules superagent docs tail html path to vulnerable library hello munnar backend node modules superagent docs tail html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch develop vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
4,415
| 3,023,694,457
|
IssuesEvent
|
2015-08-01 19:36:43
|
HeavensGate/Eternal
|
https://api.github.com/repos/HeavensGate/Eternal
|
opened
|
Captain
|
code discuss/uncertain Gate/writing
|
We'd like to replace the antiquated, inaccurate design of the Captain with a standard of something a little more corporate and sociable. We'd like the Captain to be more of a CEO type, an upper noble in the company, someone who's more likely to talk and schmooze his way out of a conflict than smash a case and start blasting fuckos with his antique lasgun. But, at the same time, a conflict-hardened elite on a blacksite; having no military background doesn't mean you fail to draw weapons when it comes to it. The default title should be something like Station Administrator; the alternate titles should establish other concepts of highest management, including Captain.
|
1.0
|
Captain - We'd like to replace the antiquated, inaccurate design of the Captain with a standard of something a little more corporate and sociable. We'd like the Captain to be more of a CEO type, an upper noble in the company, someone who's more likely to talk and schmooze his way out of a conflict than smash a case and start blasting fuckos with his antique lasgun. But, at the same time, a conflict-hardened elite on a blacksite; having no military background doesn't mean you fail to draw weapons when it comes to it. The default title should be something like Station Administrator; the alternate titles should establish other concepts of highest management, including Captain.
|
non_process
|
captain we d like to replace the antiquated inaccurate design of the captain with a standard of something a little more corporate and sociable we d like the captain to be more of a ceo type an upper noble in the company someone who s more likely to talk and schmooze his way out of a conflict than smash a case and start blasting fuckos with his antique lasgun but at the same time a conflict hardened elite on a blacksite having no military background doesn t mean you fail to draw weapons when it comes to it the default title should be something like station administrator the alternate titles should establish other concepts of highest management including captain
| 0
|
192,235
| 22,215,919,802
|
IssuesEvent
|
2022-06-08 01:37:10
|
panasalap/linux-4.1.15
|
https://api.github.com/repos/panasalap/linux-4.1.15
|
reopened
|
CVE-2014-9710 (High) detected in linux-yocto-4.1v4.1.17
|
security vulnerability
|
## CVE-2014-9710 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yocto-4.1v4.1.17</b></p></summary>
<p>
<p>[no description]</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto-4.1>https://git.yoctoproject.org/git/linux-yocto-4.1</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/dir-item.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Btrfs implementation in the Linux kernel before 3.19 does not ensure that the visible xattr state is consistent with a requested replacement, which allows local users to bypass intended ACL settings and gain privileges via standard filesystem operations (1) during an xattr-replacement time window, related to a race condition, or (2) after an xattr-replacement attempt that fails because the data does not fit.
<p>Publish Date: 2015-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-9710>CVE-2014-9710</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-9710">https://nvd.nist.gov/vuln/detail/CVE-2014-9710</a></p>
<p>Release Date: 2015-05-27</p>
<p>Fix Resolution: 3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2014-9710 (High) detected in linux-yocto-4.1v4.1.17 - ## CVE-2014-9710 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yocto-4.1v4.1.17</b></p></summary>
<p>
<p>[no description]</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto-4.1>https://git.yoctoproject.org/git/linux-yocto-4.1</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/dir-item.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Btrfs implementation in the Linux kernel before 3.19 does not ensure that the visible xattr state is consistent with a requested replacement, which allows local users to bypass intended ACL settings and gain privileges via standard filesystem operations (1) during an xattr-replacement time window, related to a race condition, or (2) after an xattr-replacement attempt that fails because the data does not fit.
<p>Publish Date: 2015-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-9710>CVE-2014-9710</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-9710">https://nvd.nist.gov/vuln/detail/CVE-2014-9710</a></p>
<p>Release Date: 2015-05-27</p>
<p>Fix Resolution: 3.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux yocto cve high severity vulnerability vulnerable library linux yocto library home page a href found in head commit a href found in base branch master vulnerable source files fs btrfs dir item c vulnerability details the btrfs implementation in the linux kernel before does not ensure that the visible xattr state is consistent with a requested replacement which allows local users to bypass intended acl settings and gain privileges via standard filesystem operations during an xattr replacement time window related to a race condition or after an xattr replacement attempt that fails because the data does not fit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
18,270
| 24,347,858,180
|
IssuesEvent
|
2022-10-02 15:05:48
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
opened
|
Add Marine Scotland as a source
|
research data processing back end
|
**Is your feature request related to a problem? Please describe.**
Marine Scotland publish open data regularly - however we do know they are partway migrating to another portal, so this may have to wait until the new one is available.
https://marine.gov.scot/
|
1.0
|
Add Marine Scotland as a source - **Is your feature request related to a problem? Please describe.**
Marine Scotland publish open data regularly - however we do know they are partway migrating to another portal, so this may have to wait until the new one is available.
https://marine.gov.scot/
|
process
|
add marine scotland as a source is your feature request related to a problem please describe marine scotland publish open data regularly however we do know they are partway migrating to another portal so this may have to wait until the new one is available
| 1
|
54,723
| 3,071,138,682
|
IssuesEvent
|
2015-08-19 10:03:38
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
closed
|
Link collection
|
duplicate enhancement imported Priority-Low
|
_From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on February 27, 2010 16:14:08_
Good day, Pavel.
As a matter of fact, an idea came up in our network concerning a potentially
useful feature for FlyLinkDC, and I will try to lay it out. My apologies if
I am not doing this through the established channel; I simply never managed
to figure out where exactly it should be posted so that it would actually
get attention...
In general the idea is really simple: why not add support for a download
file list to FlyLink? For example, take a trivial text file that is just a
list of magnet links, with a plain line break used as the separator between
links... Having "received" this file (either via an "open download list"
action or via a Windows file association for a newly invented file
extension (*.mlst or whatever, a formality), FlyLink behaves the same way
as on a click on a magnet link: search / download / download to...
In theory the feature is not complicated, and it would considerably
simplify downloading audio albums and TV series (and, even more relevant,
"releases" in bots / media portals), or simply the case when someone wants
to come over as a guest and download something.
Thanks for your attention.
Good luck.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=62_
|
1.0
|
Link collection - _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on February 27, 2010 16:14:08_
Good day, Pavel.
As a matter of fact, an idea came up in our network concerning a potentially
useful feature for FlyLinkDC, and I will try to lay it out. My apologies if
I am not doing this through the established channel; I simply never managed
to figure out where exactly it should be posted so that it would actually
get attention...
In general the idea is really simple: why not add support for a download
file list to FlyLink? For example, take a trivial text file that is just a
list of magnet links, with a plain line break used as the separator between
links... Having "received" this file (either via an "open download list"
action or via a Windows file association for a newly invented file
extension (*.mlst or whatever, a formality), FlyLink behaves the same way
as on a click on a magnet link: search / download / download to...
In theory the feature is not complicated, and it would considerably
simplify downloading audio albums and TV series (and, even more relevant,
"releases" in bots / media portals), or simply the case when someone wants
to come over as a guest and download something.
Thanks for your attention.
Good luck.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=62_
|
non_process
|
link collection from on february good day pavel as a matter of fact an idea came up in our network concerning a potentially useful feature for flylinkdc and i will try to lay it out my apologies if i am not doing this through the established channel i simply never managed to figure out where exactly it should be posted so that it would actually get attention in general the idea is really simple why not add support for a download file list to flylink for example take a trivial text file that is just a list of magnet links with a plain line break used as the separator between links having received this file either via an open download list action or via a windows file association for a newly invented file extension mlst or whatever a formality flylink behaves the same way as on a click on a magnet link search download download to in theory the feature is not complicated and it would considerably simplify downloading audio albums and tv series and even more relevant releases in bots media portals or simply the case when someone wants to come over as a guest and download something thanks for your attention good luck original issue
| 0
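The FlyLinkDC request above proposes a plain text file with one magnet link per line (a hypothetical *.mlst extension, with a line break as the separator). A sketch of parsing that format, with the format details taken from the request and everything else assumed:

```js
// Parse the proposed download-list format: one magnet link per line,
// ignoring blank lines and anything that is not a magnet URI.
function parseMagnetList(text) {
  return text
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.startsWith("magnet:?"));
}

// Example list with a blank separator line (link hashes are made up):
const sample = [
  "magnet:?xt=urn:btih:aaaa",
  "",
  "magnet:?xt=urn:btih:bbbb",
].join("\n");
console.log(parseMagnetList(sample).length); // → 2
```

Each surviving entry could then be handed to the same search / download / download-to handler the client already uses for a single clicked magnet link.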
|
5,044
| 7,858,836,421
|
IssuesEvent
|
2018-06-21 14:55:59
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
VB6 - File deletions during parse
|
bug parse-tree-processing vb6-specific
|
#3994 altered parse behaviour under VB6 to parse files in-place. However, this needs revisiting as the parse process is still deleting files during processing - this is especially apparent where a parse error leaves the process incomplete.
As VB6 is pre-release, I haven't labelled this with bug or critical, but it's essential this is resolved before VB6 is announced.
|
1.0
|
VB6 - File deletions during parse - #3994 altered parse behaviour under VB6 to parse files in-place. However, this needs revisiting as the parse process is still deleting files during processing - this is especially apparent where a parse error leaves the process incomplete.
As VB6 is pre-release, I haven't labelled this with bug or critical, but it's essential this is resolved before VB6 is announced.
|
process
|
file deletions during parse altered parse behaviour under to parse files in place however this needs revisiting as the parse process is still deleting files during processing this is especially apparent where a parse error leaves the process incomplete as is pre release i haven t labelled this with bug or critical but it s essential this is resolved before is announced
| 1
|
7,204
| 6,820,803,460
|
IssuesEvent
|
2017-11-07 15:01:41
|
hzi-braunschweig/SORMAS-Open
|
https://api.github.com/repos/hzi-braunschweig/SORMAS-Open
|
opened
|
Allow admins to delete entities
|
Infrastructure sormas-backend sormas-ui
|
Only for admins!
## Backend
- [ ] delete task
- [ ] delete sample test
- [ ] delete sample with tests
- [ ] delete visit (symptoms)
- [ ] delete contact with all visits and tasks
- [ ] delete case (symptoms, hospitalization, epi data, etc.) with all contacts, tasks and samples
- [ ] delete event participant
- [ ] delete event with all participants and tasks
## UI
- [ ] add commit/discard delete button to all respective forms for admins
## Unclear
- [ ] How to delete the same data on the mobile devices. When the entity is changed there and is sent to the server it will be created as a new entity. Idea: Make sure that a "new" entity sent from the mobile device has no creation date. Otherwise reply to the device that the entity has been deleted.
|
1.0
|
Allow admins to delete entities - Only for admins!
## Backend
- [ ] delete task
- [ ] delete sample test
- [ ] delete sample with tests
- [ ] delete visit (symptoms)
- [ ] delete contact with all visits and tasks
- [ ] delete case (symptoms, hospitalization, epi data, etc.) with all contacts, tasks and samples
- [ ] delete event participant
- [ ] delete event with all participants and tasks
## UI
- [ ] add commit/discard delete button to all respective forms for admins
## Unclear
- [ ] How to delete the same data on the mobile devices. When the entity is changed there and is sent to the server it will be created as a new entity. Idea: Make sure that a "new" entity sent from the mobile device has no creation date. Otherwise reply to the device that the entity has been deleted.
|
non_process
|
allow admins to delete entities only for admins backend delete task delete sample test delete sample with tests delete visit symptoms delete contact with all visits and tasks delete case symptoms hospitalization epi data etc with all contacts tasks and samples delete event participant delete event with all participants and tasks ui add commit discard delete button to all respective forms for admins unclear how to delete the same data on the mobile devices when the entity is changed there and is sent to the server it will be created as a new entity idea make sure that a new entity sent from the mobile device has no creation date otherwise reply to the device that the entity has been deleted
| 0
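The deletion checklist in the SORMAS record above implies an ordering constraint: child entities (visits, tasks, sample tests) must go before their parents (contacts, samples, the case itself). A hypothetical sketch of that cascade for a case, with entity names taken from the issue and the object shapes invented for illustration:

```js
// Delete a case bottom-up: contacts (with their visits and tasks),
// the case's own tasks, samples (with their tests), then the case.
function cascadeDeleteCase(caseEntity) {
  const deleted = [];
  for (const contact of caseEntity.contacts) {
    deleted.push(...contact.visits.map((v) => `visit:${v}`));
    deleted.push(...contact.tasks.map((t) => `task:${t}`));
    deleted.push(`contact:${contact.id}`);
  }
  deleted.push(...caseEntity.tasks.map((t) => `task:${t}`));
  for (const sample of caseEntity.samples) {
    deleted.push(...sample.tests.map((t) => `sampleTest:${t}`));
    deleted.push(`sample:${sample.id}`);
  }
  deleted.push(`case:${caseEntity.id}`);
  return deleted;
}

// Made-up case with one contact, one task, and one sample:
const example = {
  id: "c1",
  tasks: ["t1"],
  contacts: [{ id: "k1", visits: ["v1"], tasks: ["t2"] }],
  samples: [{ id: "s1", tests: ["st1"] }],
};
console.log(cascadeDeleteCase(example)); // ends with "case:c1"
```

This says nothing about the open sync question in the issue (devices re-creating deleted entities); it only illustrates the server-side ordering the checklist suggests.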
|
588,811
| 17,672,210,813
|
IssuesEvent
|
2021-08-23 07:53:33
|
carpentries-incubator/deep-learning-intro
|
https://api.github.com/repos/carpentries-incubator/deep-learning-intro
|
closed
|
Potential issue with keras version 2.2.4 failing to train
|
high priority Episode 2
|
Two learners during the trial lesson on june 22nd 2021 reported problems with loss rates not dropping and the network not converging on a solution. Both seem to have used a fresh conda environment and ended up with keras version 2.2.4, yet I seem to have 2.4.0. I have a copy of their notebooks and conda environments to investigate the causes further.
|
1.0
|
Potential issue with keras version 2.2.4 failing to train - Two learners during the trial lesson on june 22nd 2021 reported problems with loss rates not dropping and the network not converging on a solution. Both seem to have used a fresh conda environment and ended up with keras version 2.2.4, yet I seem to have 2.4.0. I have a copy of their notebooks and conda environments to investigate the causes further.
|
non_process
|
potential issue with keras version failing to train two learners during the trial lesson on june reported problems with loss rates not dropping and the network not converging on a solution both seem to have used a fresh conda environment and ended up with keras version yet i seem to have i have a copy of their notebooks and conda environments to investigate the causes further
| 0
|
189,276
| 15,183,732,421
|
IssuesEvent
|
2021-02-15 08:34:51
|
cksystemsgroup/monster
|
https://api.github.com/repos/cksystemsgroup/monster
|
closed
|
📖 Start Reference Book
|
documentation
|
There are many things which need to be done in a uniform way, as well as many definitions that need to be understood.
We need a way to document those definitions, standards and conventions in a structured easy to read style.
This can be included in the Rust Book documentation.
|
1.0
|
📖 Start Reference Book - There are many things which need to be done in a uniform way, as well as many definitions that need to be understood.
We need a way to document those definitions, standards and conventions in a structured easy to read style.
This can be included in the Rust Book documentation.
|
non_process
|
📖 start reference book there are many things which need to be done in a uniform way as well as many definitions that need to be understood we need a way to document those definitions standards and conventions in a structured easy to read style this can be included in the rust book documentation
| 0
|
9,387
| 12,392,960,404
|
IssuesEvent
|
2020-05-20 14:44:48
|
prisma/specs
|
https://api.github.com/repos/prisma/specs
|
closed
|
Spec request: Issues
|
area/process kind/spec spec/new
|
- Where should a user open an GitHub issue?
- What problems are we aware of? How do we deal with those?
|
1.0
|
Spec request: Issues - - Where should a user open an GitHub issue?
- What problems are we aware of? How do we deal with those?
|
process
|
spec request issues where should a user open an github issue what problems are we aware of how do we deal with those
| 1
|
8,444
| 11,614,006,577
|
IssuesEvent
|
2020-02-26 11:48:07
|
JimBobSquarePants/ImageProcessor
|
https://api.github.com/repos/JimBobSquarePants/ImageProcessor
|
closed
|
Wrong image saved in cache
|
bug postprocessor web
|
### Versions
ImageProcessor: 2.8.0
ImageProcessor.Web: 4.11.0
ImageProcessor.Web.Config: 2.6.0
ImageProcessor.Web.PostProcessor: 1.5.0
Umbraco: 7.11.1
.NET Framework version: 4.6.1
### Description
We experience that the wrong images get cached. For example in Umbraco you have the media archive with images uploaded on different nodes. When printing the HTML for the images, we first find the node and then the node contains information about which image file on the server to use. We use the img tag with a srcset, so we have multiple sizes (multiple cached images of the same image) for each image. It is here we experience that one of the sizes will have another image from the media archive than the others, which is odd since the same node has been used to create the srcset. Haven't been able to find a stable way of reproducing this, but clear the cache and load images again (different images get the wrong images from each cache clear). It is only happening to very few images, rough guess we experienced this on 5 out of 1000 images.
Examples of different sizes:
/media/6934/irish-turnout-black.jpg?width=1110&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff
/media/6934/irish-turnout-black.jpg?width=750&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff
A single of the sizes as shown here would show a different image apart from the others in the img tag and the querys would not be applied correctly either.
**/media/6934/irish-turnout-black.jpg?width=350&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff**
The 'rnd'-query, we tried to get the last write date of the image file in hope that was enough to update the cache, but with no luck.
If needed I can send the project with a database. If anything else needed, I will try to answer. But this is mostly all we got about it.
### System Configuration
Windows 10 | Windows Server 2016
MSSQL 14
|
1.0
|
Wrong image saved in cache - ### Versions
ImageProcessor: 2.8.0
ImageProcessor.Web: 4.11.0
ImageProcessor.Web.Config: 2.6.0
ImageProcessor.Web.PostProcessor: 1.5.0
Umbraco: 7.11.1
.NET Framework version: 4.6.1
### Description
We experience that the wrong images gets cached. For example in Umbraco you have the media archive with images uploaded on different nodes. When printing the HTML for the images, we first find the node and then the node contains information about which image file on the server to use. We use the img tag with a srcset, so we have multiple sizes (multiple cached images of the same image) for each image. It is here we experience one of the sizes will have another image from the media archive than the others, which is odd since the same node has been used to create the srcset. Haven't been able to find a stable way of reproducing this, but clear the cache and load images again (different images gets the wrong images from each cache clear). It is only happening to very few images, rough guess we experienced this on 5 out of 1000 images.
Examples of different sizes:
/media/6934/irish-turnout-black.jpg?width=1110&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff
/media/6934/irish-turnout-black.jpg?width=750&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff
A single of the sizes as shown here would show a different image apart from the others in the img tag and the querys would not be applied correctly either.
**/media/6934/irish-turnout-black.jpg?width=350&rnd=201808100858&heightratio=1&quality=80&bgcolor=ffffff**
The 'rnd'-query, we tried to get the last write date of the image file in hope that was enough to update the cache, but with no luck.
If needed I can send the project with a database. If anything else needed, I will try to answer. But this is mostly all we got about it.
### System Configuration
Windows 10 | Windows Server 2016
MSSQL 14
|
process
|
wrong image saved in cache versions imageprocessor imageprocessor web imageprocessor web config imageprocessor web postprocessor umbraco net framework version description we experience that the wrong images gets cached for example in umbraco you have the media archive with images uploaded on different nodes when printing the html for the images we first find the node and then the node contains information about which image file on the server to use we use the img tag with a srcset so we have multiple sizes multiple cached images of the same image for each image it is here we experience one of the sizes will have another image from the media archive than the others which is odd since the same node has been used to create the srcset haven t been able to find a stable way of reproducing this but clear the cache and load images again different images gets the wrong images from each cache clear it is only happening to very few images rough guess we experienced this on out of images examples of different sizes media irish turnout black jpg width amp rnd amp heightratio amp quality amp bgcolor ffffff media irish turnout black jpg width amp rnd amp heightratio amp quality amp bgcolor ffffff a single of the sizes as shown here would show a different image apart from the others in the img tag and the querys would not be applied correctly either media irish turnout black jpg width amp rnd amp heightratio amp quality amp bgcolor ffffff the rnd query we tried to get the last write date of the image file in hope that was enough to update the cache but with no luck if needed i can send the project with a database if anything else needed i will try to answer but this is mostly all we got about it system configuration windows windows server mssql
| 1
|
512,807
| 14,909,935,242
|
IssuesEvent
|
2021-01-22 08:50:12
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.reddit.com - video or audio doesn't play
|
browser-fenix engine-gecko priority-critical
|
<!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/66006 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.reddit.com/r/wallstreetbets/new
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: There is no video
**Steps to Reproduce**:
It happens repeatedly after a certain amount of browsing. Certain videos and gifs just stop loading as if there is no memory left or something. Force stop and restart browser fixes it.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/5ecd9503-6ff1-4e7f-8885-d304e1581926.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210107185759</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/1ac09a03-d947-4caa-aca4-e01315deeb12)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.reddit.com - video or audio doesn't play - <!-- @browser: Firefox Mobile 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:85.0) Gecko/85.0 Firefox/85.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/66006 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.reddit.com/r/wallstreetbets/new
**Browser / Version**: Firefox Mobile 85.0
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Video or audio doesn't play
**Description**: There is no video
**Steps to Reproduce**:
It happens repeatedly after a certain amount of browsing. Certain videos and gifs just stop loading as if there is no memory left or something. Force stop and restart browser fixes it.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/5ecd9503-6ff1-4e7f-8885-d304e1581926.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210107185759</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/1ac09a03-d947-4caa-aca4-e01315deeb12)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
video or audio doesn t play url browser version firefox mobile operating system android tested another browser no problem type video or audio doesn t play description there is no video steps to reproduce it happens repeatedly after a certain amount of browsing certain videos and gifs just stop loading as if there is no memory left or something force stop and restart browser fixes it view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
20,371
| 27,027,374,874
|
IssuesEvent
|
2023-02-11 19:27:45
|
MikaylaFischler/cc-mek-scada
|
https://api.github.com/repos/MikaylaFischler/cc-mek-scada
|
closed
|
PID Control Recovery
|
supervisor safety stability process control
|
We can't just resume the closed loop control on reactor disconnect/reconnect since there would be an output spike which would violate acceleration limits.
UPDATE: this only applies to **RATE** control, **CHARGE** doesn't care since it is always ramping.
- [x] Don't clear RPS during closed loop control
- [x] Add fault tolerant boolean flag to burn rate assignment, return fail if one of the devices has an effective limit different than the configured limit
- [x] Create idle states for closed loop states when disconnects or trips occur, RPS resets would happen here
- [x] Only exit idle states if all assigned units are ready, then restart control including ramping
|
1.0
|
PID Control Recovery - We can't just resume the closed loop control on reactor disconnect/reconnect since there would be an output spike which would violate acceleration limits.
UPDATE: this only applies to **RATE** control, **CHARGE** doesn't care since it is always ramping.
- [x] Don't clear RPS during closed loop control
- [x] Add fault tolerant boolean flag to burn rate assignment, return fail if one of the devices has an effective limit different than the configured limit
- [x] Create idle states for closed loop states when disconnects or trips occur, RPS resets would happen here
- [x] Only exit idle states if all assigned units are ready, then restart control including ramping
|
process
|
pid control recovery we can t just resume the closed loop control on reactor disconnect reconnect since there would be an output spike which would violate acceleration limits update this only applies to rate control charge doesn t care since it is always ramping don t clear rps during closed loop control add fault tolerant boolean flag to burn rate assignment return fail if one of the devices has an effective limit different than the configured limit create idle states for closed loop states when disconnects or trips occur rps resets would happen here only exit idle states if all assigned units are ready then restart control including ramping
| 1
|
314,249
| 23,512,426,484
|
IssuesEvent
|
2022-08-18 17:54:52
|
nebari-dev/nebari-docs
|
https://api.github.com/repos/nebari-dev/nebari-docs
|
opened
|
[DOC] - Update keycloak docs to include details on differences between the different roles
|
area: documentation 📖
|
### Preliminary Checks
- [X] This issue is not a question, feature request, RFC, or anything other than a bug report. Please post those things in GitHub Discussions: https://github.com/nebari-dev/nebari/discussions
### Summary
Keycloak has many wonderful features, many of which we are currently taking advantage of. However we need to make sure we document these. To start with, we should ensure that we have up-to-date docs that reflect the differences between all of the different roles that are available (i.e. `jupyterhub_admin` vs `jupyterhub_developer`).
### Steps to Resolve this Issue
Ensure docs reflect current state of keycloak authentication/authorization.
|
1.0
|
[DOC] - Update keycloak docs to include details on differences between the different roles - ### Preliminary Checks
- [X] This issue is not a question, feature request, RFC, or anything other than a bug report. Please post those things in GitHub Discussions: https://github.com/nebari-dev/nebari/discussions
### Summary
Keycloak has many wonderful features, many of which we are currently taking advantage of. However we need to make sure we document these. To start with, we should ensure that we have up-to-date docs that reflect the differences between all of the different roles that are available (i.e. `jupyterhub_admin` vs `jupyterhub_developer`).
### Steps to Resolve this Issue
Ensure docs reflect current state of keycloak authentication/authorization.
|
non_process
|
update keycloak docs to include details on differences between the different roles preliminary checks this issue is not a question feature request rfc or anything other than a bug report please post those things in github discussions summary keycloak has many wonderful features many of which we are currently taking advantage of however we need to make sure we document these to start with we should ensure that we have up to date docs that reflect the differences between all of the different roles that are available i e jupyterhub admin vs jupyterhub developer steps to resolve this issue ensure docs reflect current state of keycloak authentication authorization
| 0
|
15,208
| 19,029,530,752
|
IssuesEvent
|
2021-11-24 09:12:32
|
LOVDnl/LOVDplus
|
https://api.github.com/repos/LOVDnl/LOVDplus
|
closed
|
Conversion script stops when SSL redirect is turned on
|
bug cat: File processing and import
|
When forcing SSL access to LOVD+, the conversion script and possibly other scripts no longer work because their access to LOVD+ is registered as non-SSL. This breaks the file processing.
|
1.0
|
Conversion script stops when SSL redirect is turned on - When forcing SSL access to LOVD+, the conversion script and possibly other scripts no longer work because their access to LOVD+ is registered as non-SSL. This breaks the file processing.
|
process
|
conversion script stops when ssl redirect is turned on when forcing ssl access to lovd the conversion script and possibly other scripts no longer work because their access to lovd is registered as non ssl this breaks the file processing
| 1
|
13,342
| 15,801,397,476
|
IssuesEvent
|
2021-04-03 04:41:32
|
gfx-rs/naga
|
https://api.github.com/repos/gfx-rs/naga
|
closed
|
Properly 16-byte align vec3's in WGSL -> SPIRV
|
area: processing kind: bug
|
Currently, naga appears to generate different offsets than Vulkan GLSL.
A GLSL -> SPIRV converter would generate SPIRV like so:
```glsl
layout(set = 0, binding = 0) uniform light_uniforms {
vec3 light_power;
vec3 light_position;
};
```
->
```spirv
OpMemberDecorate %light_uniforms 0 Offset 0
OpMemberDecorate %light_uniforms 1 Offset 16
```
But naga generates WGSL -> SPIRV like this:
```glsl
[[block]]
struct Light {
power: vec3<f32>;
position: vec3<f32>;
};
[[group(2), binding(0)]]
var light: Light;
```
->
```spirv
OpMemberDecorate %Light 0 Offset 0
OpMemberDecorate %Light 1 Offset 12
```
As you can see, vec3's are 12-byte aligned, rather than the intended 16. Not sure if this is a SPIRV-out or WGSL-in issue.
|
1.0
|
Properly 16-byte align vec3's in WGSL -> SPIRV - Currently, naga appears to generate different offsets than Vulkan GLSL.
A GLSL -> SPIRV converter would generate SPIRV like so:
```glsl
layout(set = 0, binding = 0) uniform light_uniforms {
vec3 light_power;
vec3 light_position;
};
```
->
```spirv
OpMemberDecorate %light_uniforms 0 Offset 0
OpMemberDecorate %light_uniforms 1 Offset 16
```
But naga generates WGSL -> SPIRV like this:
```glsl
[[block]]
struct Light {
power: vec3<f32>;
position: vec3<f32>;
};
[[group(2), binding(0)]]
var light: Light;
```
->
```spirv
OpMemberDecorate %Light 0 Offset 0
OpMemberDecorate %Light 1 Offset 12
```
As you can see, vec3's are 12-byte aligned, rather than the intended 16. Not sure if this is a SPIRV-out or WGSL-in issue.
|
process
|
properly byte align s in wgsl spirv currently naga appears to generate different offsets than vulkan glsl a glsl spirv converter would generate spirv like so glsl layout set binding uniform light uniforms light power light position spirv opmemberdecorate light uniforms offset opmemberdecorate light uniforms offset but naga generates wgsl spirv like this glsl struct light power position var light light spirv opmemberdecorate light offset opmemberdecorate light offset as you can see s are byte aligned rather than the intended not sure if this is a spirv out or wgsl in issue
| 1
|
81,852
| 23,595,319,129
|
IssuesEvent
|
2022-08-23 18:40:01
|
doxygen/doxygen
|
https://api.github.com/repos/doxygen/doxygen
|
closed
|
1.9.4: test suite is failing in `012_cite`
|
needinfo install/build
|
**Describe the bug**
Looks like test suite is failing in one unit `012_cite`
**Expected behavior**
Test suite should not be failing.
**Screenshots**
<details>
```console
+ cd doxygen-1.9.4
+ /usr/bin/make -O -j48 V=1 VERBOSE=1 -C x86_64-redhat-linux-gnu test ARGS=--output-on-failure
make: Entering directory '/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu'
Running tests...
/usr/bin/ctest --force-new-ctest-process --output-on-failure
Test project /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu
Start 1: 001_a
1/82 Test #1: 001_a ........................................ Passed 0.12 sec
Start 2: 002_addindex
2/82 Test #2: 002_addindex ................................. Passed 0.12 sec
Start 3: 003_anchor
3/82 Test #3: 003_anchor ................................... Passed 0.12 sec
Start 4: 004_arg
4/82 Test #4: 004_arg ...................................... Passed 0.11 sec
Start 5: 005_attention
5/82 Test #5: 005_attention ................................ Passed 0.10 sec
Start 6: 006_author
6/82 Test #6: 006_author ................................... Passed 0.11 sec
Start 7: 007_b
7/82 Test #7: 007_b ........................................ Passed 0.12 sec
Start 8: 008_brief
8/82 Test #8: 008_brief .................................... Passed 0.12 sec
Start 9: 009_bug
9/82 Test #9: 009_bug ...................................... Passed 0.22 sec
Start 10: 010_c
10/82 Test #10: 010_c ........................................ Passed 0.11 sec
Start 11: 011_category
11/82 Test #11: 011_category ................................. Passed 0.14 sec
Start 12: 012_cite
12/82 Test #12: 012_cite .....................................***Failed 0.15 sec
1..1
not ok - [012_cite.dox]: test the \cite command
-------------------------------------
Difference between generated output and reference:
--- /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu/testing/test_output_012/indexpage.xml 2022-08-02 17:06:44.858749403 +0000
+++ /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012/indexpage.xml 2021-12-25 19:06:07.000000000 +0000
@@ -6,8 +6,8 @@
<briefdescription>
</briefdescription>
<detaileddescription>
- <para>See knuth79 for more info.</para>
- <para>Other references with cross references see Be09 and BertholdHeinzVigerske2009 for more info. </para>
+ <para>See <ref refid="citelist_1CITEREF_knuth79" kindref="member">[3]</ref> for more info.</para>
+ <para>Other references with cross references see <ref refid="citelist_1CITEREF_Be09" kindref="member">[1]</ref> and <ref refid="citelist_1CITEREF_BertholdHeinzVigerske2009" kindref="member">[2]</ref> for more info. </para>
</detaileddescription>
<location file="012_cite.dox"/>
</compounddef>
-------------------------------------
error: Problems running bibtex. Verify that the command 'perl --version' works from the command line. Exit code: 2
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:6: warning: \cite command to 'knuth79' does not have an associated number
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:8: warning: \cite command to 'Be09' does not have an associated number
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:8: warning: \cite command to 'BertholdHeinzVigerske2009' does not have an associated number
-------------------------------------
1 out of 1 tests failed
Start 13: 013_class
13/82 Test #13: 013_class .................................... Passed 0.15 sec
Start 14: 014_code
14/82 Test #14: 014_code ..................................... Passed 0.10 sec
Start 15: 015_cond
15/82 Test #15: 015_cond ..................................... Passed 0.10 sec
Start 16: 016_copydoc
16/82 Test #16: 016_copydoc .................................. Passed 0.11 sec
Start 17: 017_copyright
17/82 Test #17: 017_copyright ................................ Passed 0.10 sec
Start 18: 018_def
18/82 Test #18: 018_def ...................................... Passed 0.11 sec
Start 19: 019_defgroup
19/82 Test #19: 019_defgroup ................................. Passed 0.18 sec
Start 20: 020_only
20/82 Test #20: 020_only ..................................... Passed 0.11 sec
Start 21: 021_dontinclude
21/82 Test #21: 021_dontinclude .............................. Passed 0.11 sec
Start 22: 022_dot
22/82 Test #22: 022_dot ...................................... Passed 0.12 sec
Start 23: 023_e
23/82 Test #23: 023_e ........................................ Passed 0.11 sec
Start 24: 024_if
24/82 Test #24: 024_if ....................................... Passed 0.12 sec
Start 25: 025_example
25/82 Test #25: 025_example .................................. Passed 0.12 sec
Start 26: 026_exception
26/82 Test #26: 026_exception ................................ Passed 0.10 sec
Start 27: 027_extends
27/82 Test #27: 027_extends .................................. Passed 0.16 sec
Start 28: 028_formula
28/82 Test #28: 028_formula .................................. Passed 0.12 sec
Start 29: 029_hideinit
29/82 Test #29: 029_hideinit ................................. Passed 0.11 sec
Start 30: 030_htmlinclude
30/82 Test #30: 030_htmlinclude .............................. Passed 0.11 sec
Start 31: 031_image
31/82 Test #31: 031_image .................................... Passed 0.11 sec
Start 32: 032_include
32/82 Test #32: 032_include .................................. Passed 0.10 sec
Start 33: 033_internal
33/82 Test #33: 033_internal ................................. Passed 0.12 sec
Start 34: 034_internal
34/82 Test #34: 034_internal ................................. Passed 0.12 sec
Start 35: 035_invariant
35/82 Test #35: 035_invariant ................................ Passed 0.11 sec
Start 36: 036_link
36/82 Test #36: 036_link ..................................... Passed 0.11 sec
Start 37: 037_msc
37/82 Test #37: 037_msc ...................................... Passed 0.14 sec
Start 38: 038_n
38/82 Test #38: 038_n ........................................ Passed 0.11 sec
Start 39: 039_name
39/82 Test #39: 039_name ..................................... Passed 0.10 sec
Start 40: 040_namespace
40/82 Test #40: 040_namespace ................................ Passed 0.10 sec
Start 41: 041_overload
41/82 Test #41: 041_overload ................................. Passed 0.12 sec
Start 42: 042_package
42/82 Test #42: 042_package .................................. Passed 0.12 sec
Start 43: 043_page
43/82 Test #43: 043_page ..................................... Passed 0.12 sec
Start 44: 044_section
44/82 Test #44: 044_section .................................. Passed 0.12 sec
Start 45: 045_refitem
45/82 Test #45: 045_refitem .................................. Passed 0.12 sec
Start 46: 046_related
46/82 Test #46: 046_related .................................. Passed 0.12 sec
Start 47: 047_return
47/82 Test #47: 047_return ................................... Passed 0.12 sec
Start 48: 048_showinit
48/82 Test #48: 048_showinit ................................. Passed 0.12 sec
Start 49: 049_snippet
49/82 Test #49: 049_snippet .................................. Passed 0.13 sec
Start 50: 050_verbatim
50/82 Test #50: 050_verbatim ................................. Passed 0.09 sec
Start 51: 051_escape
51/82 Test #51: 051_escape ................................... Passed 0.11 sec
Start 52: 052_tilde
52/82 Test #52: 052_tilde .................................... Passed 0.11 sec
Start 53: 053_tilde
53/82 Test #53: 053_tilde .................................... Passed 0.11 sec
Start 54: 054_parblock
54/82 Test #54: 054_parblock ................................. Passed 0.13 sec
Start 55: 055_markdown
55/82 Test #55: 055_markdown ................................. Passed 0.12 sec
Start 56: 056_latexinclude
56/82 Test #56: 056_latexinclude ............................. Passed 0.11 sec
Start 57: 057_inlinenamespace
57/82 Test #57: 057_inlinenamespace .......................... Passed 0.13 sec
Start 58: 058_strong_enum
58/82 Test #58: 058_strong_enum .............................. Passed 0.14 sec
Start 59: 059_template
59/82 Test #59: 059_template ................................. Passed 0.11 sec
Start 60: 064_castoperator
60/82 Test #60: 064_castoperator ............................. Passed 0.12 sec
Start 61: 065_tilde
61/82 Test #61: 065_tilde .................................... Passed 0.10 sec
Start 62: 066_property_initializer
62/82 Test #62: 066_property_initializer ..................... Passed 0.10 sec
Start 63: 067_link_varargs
63/82 Test #63: 067_link_varargs ............................. Passed 0.10 sec
Start 64: 068_ref_varargs
64/82 Test #64: 068_ref_varargs .............................. Passed 0.09 sec
Start 65: 069_link_variadic_template
65/82 Test #65: 069_link_variadic_template ................... Passed 0.10 sec
Start 66: 070_ref_variadic_template
66/82 Test #66: 070_ref_variadic_template .................... Passed 0.09 sec
Start 67: 071_enum_in_anon_ns
67/82 Test #67: 071_enum_in_anon_ns .......................... Passed 0.09 sec
Start 68: 072_using
68/82 Test #68: 072_using .................................... Passed 0.10 sec
Start 69: 073_typed_enum
69/82 Test #69: 073_typed_enum ............................... Passed 0.10 sec
Start 70: 074_ref
70/82 Test #70: 074_ref ...................................... Passed 0.13 sec
Start 71: 075_argmatch
71/82 Test #71: 075_argmatch ................................. Passed 0.11 sec
Start 72: 076_emojis
72/82 Test #72: 076_emojis ................................... Passed 0.18 sec
Start 73: 077_no_xml_namespace_members_in_file_scope
73/82 Test #73: 077_no_xml_namespace_members_in_file_scope ... Passed 0.12 sec
Start 74: 078_xml_namespace_members_in_file_scope
74/82 Test #74: 078_xml_namespace_members_in_file_scope ...... Passed 0.11 sec
Start 75: 079_tableofcontents
75/82 Test #75: 079_tableofcontents .......................... Passed 0.11 sec
Start 76: 080_extract_private_virtual
76/82 Test #76: 080_extract_private_virtual .................. Passed 0.11 sec
Start 77: 081_brief_lists
77/82 Test #77: 081_brief_lists .............................. Passed 0.11 sec
Start 78: 082_decl_def
78/82 Test #78: 082_decl_def ................................. Passed 0.11 sec
Start 79: 083_decl_def
79/82 Test #79: 083_decl_def ................................. Passed 0.11 sec
Start 80: 084_markdown_pre
80/82 Test #80: 084_markdown_pre ............................. Passed 0.12 sec
Start 81: 085_tooltip
81/82 Test #81: 085_tooltip .................................. Passed 0.12 sec
Start 82: 086_style_tags
82/82 Test #82: 086_style_tags ............................... Passed 0.12 sec
99% tests passed, 1 tests failed out of 82
Total Test time (real) = 9.62 sec
The following tests FAILED:
12 - 012_cite (Failed)
Errors while running CTest
make: *** [Makefile:94: test] Error 8
make: Leaving directory '/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu'
```
</details>
**To Reproduce**
As input for the build procedure I'm using the dist tarball + 0cf06873 and 5198966c.
**Version**
1.9.4
**Additional context**
cmake settings
```console
[tkloczko@devel-g2v x86_64-redhat-linux-gnu]$ cmake -L
CMake Warning:
No source or binary directory provided. Both will be assumed to be the
same as the current working directory, but note that this warning will
become a fatal error in future CMake releases.
CMake Error: The source directory "/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
-- Cache values
CMAKE_BUILD_TYPE:STRING=RelWithDebInfo
CMAKE_INSTALL_PREFIX:PATH=/usr
DOT:FILEPATH=/usr/bin/dot
Qt5Core_DIR:PATH=/usr/lib64/cmake/Qt5Core
Qt5Gui_DIR:PATH=/usr/lib64/cmake/Qt5Gui
Qt5Widgets_DIR:PATH=/usr/lib64/cmake/Qt5Widgets
Qt5Xml_DIR:PATH=/usr/lib64/cmake/Qt5Xml
Qt5_DIR:PATH=/usr/lib64/cmake/Qt5
SANITIZE_ADDRESS:BOOL=OFF
SANITIZE_LINK_STATIC:BOOL=OFF
SANITIZE_MEMORY:BOOL=OFF
SANITIZE_THREAD:BOOL=OFF
SANITIZE_UNDEFINED:BOOL=OFF
build_app:BOOL=OFF
build_doc:BOOL=OFF
build_doc_chm:BOOL=OFF
build_parse:BOOL=OFF
build_search:BOOL=OFF
build_wizard:BOOL=ON
build_xmlparser:BOOL=ON
enable_console:BOOL=OFF
enable_coverage:BOOL=OFF
static_libclang:BOOL=OFF
use_libclang:BOOL=OFF
use_sqlite3:BOOL=OFF
win_static:BOOL=OFF
```
|
1.0
|
1.9.4: test suite is failing in `012_cite` - **Describe the bug**
Looks like test suite is failing in one unit `012_cite`
**Expected behavior**
Test suite should not be failing.
**Screenshots**
<details>
```console
+ cd doxygen-1.9.4
+ /usr/bin/make -O -j48 V=1 VERBOSE=1 -C x86_64-redhat-linux-gnu test ARGS=--output-on-failure
make: Entering directory '/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu'
Running tests...
/usr/bin/ctest --force-new-ctest-process --output-on-failure
Test project /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu
Start 1: 001_a
1/82 Test #1: 001_a ........................................ Passed 0.12 sec
Start 2: 002_addindex
2/82 Test #2: 002_addindex ................................. Passed 0.12 sec
Start 3: 003_anchor
3/82 Test #3: 003_anchor ................................... Passed 0.12 sec
Start 4: 004_arg
4/82 Test #4: 004_arg ...................................... Passed 0.11 sec
Start 5: 005_attention
5/82 Test #5: 005_attention ................................ Passed 0.10 sec
Start 6: 006_author
6/82 Test #6: 006_author ................................... Passed 0.11 sec
Start 7: 007_b
7/82 Test #7: 007_b ........................................ Passed 0.12 sec
Start 8: 008_brief
8/82 Test #8: 008_brief .................................... Passed 0.12 sec
Start 9: 009_bug
9/82 Test #9: 009_bug ...................................... Passed 0.22 sec
Start 10: 010_c
10/82 Test #10: 010_c ........................................ Passed 0.11 sec
Start 11: 011_category
11/82 Test #11: 011_category ................................. Passed 0.14 sec
Start 12: 012_cite
12/82 Test #12: 012_cite .....................................***Failed 0.15 sec
1..1
not ok - [012_cite.dox]: test the \cite command
-------------------------------------
Difference between generated output and reference:
--- /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu/testing/test_output_012/indexpage.xml 2022-08-02 17:06:44.858749403 +0000
+++ /home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012/indexpage.xml 2021-12-25 19:06:07.000000000 +0000
@@ -6,8 +6,8 @@
<briefdescription>
</briefdescription>
<detaileddescription>
- <para>See knuth79 for more info.</para>
- <para>Other references with cross references see Be09 and BertholdHeinzVigerske2009 for more info. </para>
+ <para>See <ref refid="citelist_1CITEREF_knuth79" kindref="member">[3]</ref> for more info.</para>
+ <para>Other references with cross references see <ref refid="citelist_1CITEREF_Be09" kindref="member">[1]</ref> and <ref refid="citelist_1CITEREF_BertholdHeinzVigerske2009" kindref="member">[2]</ref> for more info. </para>
</detaileddescription>
<location file="012_cite.dox"/>
</compounddef>
-------------------------------------
error: Problems running bibtex. Verify that the command 'perl --version' works from the command line. Exit code: 2
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:6: warning: \cite command to 'knuth79' does not have an associated number
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:8: warning: \cite command to 'Be09' does not have an associated number
/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/testing/012_cite.dox:8: warning: \cite command to 'BertholdHeinzVigerske2009' does not have an associated number
-------------------------------------
1 out of 1 tests failed
Start 13: 013_class
13/82 Test #13: 013_class .................................... Passed 0.15 sec
Start 14: 014_code
14/82 Test #14: 014_code ..................................... Passed 0.10 sec
Start 15: 015_cond
15/82 Test #15: 015_cond ..................................... Passed 0.10 sec
Start 16: 016_copydoc
16/82 Test #16: 016_copydoc .................................. Passed 0.11 sec
Start 17: 017_copyright
17/82 Test #17: 017_copyright ................................ Passed 0.10 sec
Start 18: 018_def
18/82 Test #18: 018_def ...................................... Passed 0.11 sec
Start 19: 019_defgroup
19/82 Test #19: 019_defgroup ................................. Passed 0.18 sec
Start 20: 020_only
20/82 Test #20: 020_only ..................................... Passed 0.11 sec
Start 21: 021_dontinclude
21/82 Test #21: 021_dontinclude .............................. Passed 0.11 sec
Start 22: 022_dot
22/82 Test #22: 022_dot ...................................... Passed 0.12 sec
Start 23: 023_e
23/82 Test #23: 023_e ........................................ Passed 0.11 sec
Start 24: 024_if
24/82 Test #24: 024_if ....................................... Passed 0.12 sec
Start 25: 025_example
25/82 Test #25: 025_example .................................. Passed 0.12 sec
Start 26: 026_exception
26/82 Test #26: 026_exception ................................ Passed 0.10 sec
Start 27: 027_extends
27/82 Test #27: 027_extends .................................. Passed 0.16 sec
Start 28: 028_formula
28/82 Test #28: 028_formula .................................. Passed 0.12 sec
Start 29: 029_hideinit
29/82 Test #29: 029_hideinit ................................. Passed 0.11 sec
Start 30: 030_htmlinclude
30/82 Test #30: 030_htmlinclude .............................. Passed 0.11 sec
Start 31: 031_image
31/82 Test #31: 031_image .................................... Passed 0.11 sec
Start 32: 032_include
32/82 Test #32: 032_include .................................. Passed 0.10 sec
Start 33: 033_internal
33/82 Test #33: 033_internal ................................. Passed 0.12 sec
Start 34: 034_internal
34/82 Test #34: 034_internal ................................. Passed 0.12 sec
Start 35: 035_invariant
35/82 Test #35: 035_invariant ................................ Passed 0.11 sec
Start 36: 036_link
36/82 Test #36: 036_link ..................................... Passed 0.11 sec
Start 37: 037_msc
37/82 Test #37: 037_msc ...................................... Passed 0.14 sec
Start 38: 038_n
38/82 Test #38: 038_n ........................................ Passed 0.11 sec
Start 39: 039_name
39/82 Test #39: 039_name ..................................... Passed 0.10 sec
Start 40: 040_namespace
40/82 Test #40: 040_namespace ................................ Passed 0.10 sec
Start 41: 041_overload
41/82 Test #41: 041_overload ................................. Passed 0.12 sec
Start 42: 042_package
42/82 Test #42: 042_package .................................. Passed 0.12 sec
Start 43: 043_page
43/82 Test #43: 043_page ..................................... Passed 0.12 sec
Start 44: 044_section
44/82 Test #44: 044_section .................................. Passed 0.12 sec
Start 45: 045_refitem
45/82 Test #45: 045_refitem .................................. Passed 0.12 sec
Start 46: 046_related
46/82 Test #46: 046_related .................................. Passed 0.12 sec
Start 47: 047_return
47/82 Test #47: 047_return ................................... Passed 0.12 sec
Start 48: 048_showinit
48/82 Test #48: 048_showinit ................................. Passed 0.12 sec
Start 49: 049_snippet
49/82 Test #49: 049_snippet .................................. Passed 0.13 sec
Start 50: 050_verbatim
50/82 Test #50: 050_verbatim ................................. Passed 0.09 sec
Start 51: 051_escape
51/82 Test #51: 051_escape ................................... Passed 0.11 sec
Start 52: 052_tilde
52/82 Test #52: 052_tilde .................................... Passed 0.11 sec
Start 53: 053_tilde
53/82 Test #53: 053_tilde .................................... Passed 0.11 sec
Start 54: 054_parblock
54/82 Test #54: 054_parblock ................................. Passed 0.13 sec
Start 55: 055_markdown
55/82 Test #55: 055_markdown ................................. Passed 0.12 sec
Start 56: 056_latexinclude
56/82 Test #56: 056_latexinclude ............................. Passed 0.11 sec
Start 57: 057_inlinenamespace
57/82 Test #57: 057_inlinenamespace .......................... Passed 0.13 sec
Start 58: 058_strong_enum
58/82 Test #58: 058_strong_enum .............................. Passed 0.14 sec
Start 59: 059_template
59/82 Test #59: 059_template ................................. Passed 0.11 sec
Start 60: 064_castoperator
60/82 Test #60: 064_castoperator ............................. Passed 0.12 sec
Start 61: 065_tilde
61/82 Test #61: 065_tilde .................................... Passed 0.10 sec
Start 62: 066_property_initializer
62/82 Test #62: 066_property_initializer ..................... Passed 0.10 sec
Start 63: 067_link_varargs
63/82 Test #63: 067_link_varargs ............................. Passed 0.10 sec
Start 64: 068_ref_varargs
64/82 Test #64: 068_ref_varargs .............................. Passed 0.09 sec
Start 65: 069_link_variadic_template
65/82 Test #65: 069_link_variadic_template ................... Passed 0.10 sec
Start 66: 070_ref_variadic_template
66/82 Test #66: 070_ref_variadic_template .................... Passed 0.09 sec
Start 67: 071_enum_in_anon_ns
67/82 Test #67: 071_enum_in_anon_ns .......................... Passed 0.09 sec
Start 68: 072_using
68/82 Test #68: 072_using .................................... Passed 0.10 sec
Start 69: 073_typed_enum
69/82 Test #69: 073_typed_enum ............................... Passed 0.10 sec
Start 70: 074_ref
70/82 Test #70: 074_ref ...................................... Passed 0.13 sec
Start 71: 075_argmatch
71/82 Test #71: 075_argmatch ................................. Passed 0.11 sec
Start 72: 076_emojis
72/82 Test #72: 076_emojis ................................... Passed 0.18 sec
Start 73: 077_no_xml_namespace_members_in_file_scope
73/82 Test #73: 077_no_xml_namespace_members_in_file_scope ... Passed 0.12 sec
Start 74: 078_xml_namespace_members_in_file_scope
74/82 Test #74: 078_xml_namespace_members_in_file_scope ...... Passed 0.11 sec
Start 75: 079_tableofcontents
75/82 Test #75: 079_tableofcontents .......................... Passed 0.11 sec
Start 76: 080_extract_private_virtual
76/82 Test #76: 080_extract_private_virtual .................. Passed 0.11 sec
Start 77: 081_brief_lists
77/82 Test #77: 081_brief_lists .............................. Passed 0.11 sec
Start 78: 082_decl_def
78/82 Test #78: 082_decl_def ................................. Passed 0.11 sec
Start 79: 083_decl_def
79/82 Test #79: 083_decl_def ................................. Passed 0.11 sec
Start 80: 084_markdown_pre
80/82 Test #80: 084_markdown_pre ............................. Passed 0.12 sec
Start 81: 085_tooltip
81/82 Test #81: 085_tooltip .................................. Passed 0.12 sec
Start 82: 086_style_tags
82/82 Test #82: 086_style_tags ............................... Passed 0.12 sec
99% tests passed, 1 tests failed out of 82
Total Test time (real) = 9.62 sec
The following tests FAILED:
12 - 012_cite (Failed)
Errors while running CTest
make: *** [Makefile:94: test] Error 8
make: Leaving directory '/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu'
```
</details>
**To Reproduce**
As input for the build procedure I'm using the dist tar ball + 0cf06873 and 5198966c.
**Version**
1.9.4
**Additional context**
cmake settings
```console
[tkloczko@devel-g2v x86_64-redhat-linux-gnu]$ cmake -L
CMake Warning:
No source or binary directory provided. Both will be assumed to be the
same as the current working directory, but note that this warning will
become a fatal error in future CMake releases.
CMake Error: The source directory "/home/tkloczko/rpmbuild/BUILD/doxygen-1.9.4/x86_64-redhat-linux-gnu" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
-- Cache values
CMAKE_BUILD_TYPE:STRING=RelWithDebInfo
CMAKE_INSTALL_PREFIX:PATH=/usr
DOT:FILEPATH=/usr/bin/dot
Qt5Core_DIR:PATH=/usr/lib64/cmake/Qt5Core
Qt5Gui_DIR:PATH=/usr/lib64/cmake/Qt5Gui
Qt5Widgets_DIR:PATH=/usr/lib64/cmake/Qt5Widgets
Qt5Xml_DIR:PATH=/usr/lib64/cmake/Qt5Xml
Qt5_DIR:PATH=/usr/lib64/cmake/Qt5
SANITIZE_ADDRESS:BOOL=OFF
SANITIZE_LINK_STATIC:BOOL=OFF
SANITIZE_MEMORY:BOOL=OFF
SANITIZE_THREAD:BOOL=OFF
SANITIZE_UNDEFINED:BOOL=OFF
build_app:BOOL=OFF
build_doc:BOOL=OFF
build_doc_chm:BOOL=OFF
build_parse:BOOL=OFF
build_search:BOOL=OFF
build_wizard:BOOL=ON
build_xmlparser:BOOL=ON
enable_console:BOOL=OFF
enable_coverage:BOOL=OFF
static_libclang:BOOL=OFF
use_libclang:BOOL=OFF
use_sqlite3:BOOL=OFF
win_static:BOOL=OFF
```
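The root cause is the error line above: the 012_cite test drives bibtex through perl, and `perl --version` fails on the build host (exit code 2). A quick preflight check catches this before running `make test` — a Python sketch, illustrative only:

```python
import shutil

def missing_tools(tools):
    """Return the subset of `tools` not found on PATH.

    The 012_cite test shells out to bibtex (driven via perl), so both
    must resolve before the test suite can pass.
    """
    return [t for t in tools if shutil.which(t) is None]

print(missing_tools(["perl", "bibtex"]))
```

An empty list means the cite test's external dependencies are present; anything listed explains the `Exit code: 2` failure and the missing citation numbers.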
|
non_process
|
test suite is failing in cite describe the bug looks like test suite is failing in one unit cite expected behavior test suite should not be failling screenshots console cd doxygen usr bin make o v verbose c redhat linux gnu test args output on failure make entering directory home tkloczko rpmbuild build doxygen redhat linux gnu running tests usr bin ctest force new ctest process output on failure test project home tkloczko rpmbuild build doxygen redhat linux gnu start a test a passed sec start addindex test addindex passed sec start anchor test anchor passed sec start arg test arg passed sec start attention test attention passed sec start author test author passed sec start b test b passed sec start brief test brief passed sec start bug test bug passed sec start c test c passed sec start category test category passed sec start cite test cite failed sec not ok test the cite command difference between generated output and reference home tkloczko rpmbuild build doxygen redhat linux gnu testing test output indexpage xml home tkloczko rpmbuild build doxygen testing indexpage xml see for more info other references with cross references see and for more info see for more info other references with cross references see and for more info error problems running bibtex verify that the command perl version works from the command line exit code home tkloczko rpmbuild build doxygen testing cite dox warning cite command to does not have an associated number home tkloczko rpmbuild build doxygen testing cite dox warning cite command to does not have an associated number home tkloczko rpmbuild build doxygen testing cite dox warning cite command to does not have an associated number out of tests failed start class test class passed sec start code test code passed sec start cond test cond passed sec start copydoc test copydoc passed sec start copyright test copyright passed sec start def test def passed sec start defgroup test defgroup passed sec start only test only passed sec start 
dontinclude test dontinclude passed sec start dot test dot passed sec start e test e passed sec start if test if passed sec start example test example passed sec start exception test exception passed sec start extends test extends passed sec start formula test formula passed sec start hideinit test hideinit passed sec start htmlinclude test htmlinclude passed sec start image test image passed sec start include test include passed sec start internal test internal passed sec start internal test internal passed sec start invariant test invariant passed sec start link test link passed sec start msc test msc passed sec start n test n passed sec start name test name passed sec start namespace test namespace passed sec start overload test overload passed sec start package test package passed sec start page test page passed sec start section test section passed sec start refitem test refitem passed sec start related test related passed sec start return test return passed sec start showinit test showinit passed sec start snippet test snippet passed sec start verbatim test verbatim passed sec start escape test escape passed sec start tilde test tilde passed sec start tilde test tilde passed sec start parblock test parblock passed sec start markdown test markdown passed sec start latexinclude test latexinclude passed sec start inlinenamespace test inlinenamespace passed sec start strong enum test strong enum passed sec start template test template passed sec start castoperator test castoperator passed sec start tilde test tilde passed sec start property initializer test property initializer passed sec start link varargs test link varargs passed sec start ref varargs test ref varargs passed sec start link variadic template test link variadic template passed sec start ref variadic template test ref variadic template passed sec start enum in anon ns test enum in anon ns passed sec start using test using passed sec start typed enum test typed enum passed sec start ref test ref 
passed sec start argmatch test argmatch passed sec start emojis test emojis passed sec start no xml namespace members in file scope test no xml namespace members in file scope passed sec start xml namespace members in file scope test xml namespace members in file scope passed sec start tableofcontents test tableofcontents passed sec start extract private virtual test extract private virtual passed sec start brief lists test brief lists passed sec start decl def test decl def passed sec start decl def test decl def passed sec start markdown pre test markdown pre passed sec start tooltip test tooltip passed sec start style tags test style tags passed sec tests passed tests failed out of total test time real sec the following tests failed cite failed errors while running ctest make error make leaving directory home tkloczko rpmbuild build doxygen redhat linux gnu to reproduce as input for build procedure i m usimg dist trar ball and version additional context cmake settings console cmake l cmake warning no source or binary directory provided both will be assumed to be the same as the current working directory but note that this warning will become a fatal error in future cmake releases cmake error the source directory home tkloczko rpmbuild build doxygen redhat linux gnu does not appear to contain cmakelists txt specify help for usage or press the help button on the cmake gui cache values cmake build type string relwithdebinfo cmake install prefix path usr dot filepath usr bin dot dir path usr cmake dir path usr cmake dir path usr cmake dir path usr cmake dir path usr cmake sanitize address bool off sanitize link static bool off sanitize memory bool off sanitize thread bool off sanitize undefined bool off build app bool off build doc bool off build doc chm bool off build parse bool off build search bool off build wizard bool on build xmlparser bool on enable console bool off enable coverage bool off static libclang bool off use libclang bool off use bool off win 
static bool off
| 0
|
19,021
| 25,027,596,398
|
IssuesEvent
|
2022-11-04 09:29:41
|
her-cat/blog-comment
|
https://api.github.com/repos/her-cat/blog-comment
|
opened
|
PHP multi-process download of Bing wallpapers
|
Gitalk 20200202phpmultiprocessdownloadbingwallpaper
|
https://her-cat.com/posts/2020/02/02/php-multi-process-download-bing-wallpaper/
Before this, whenever I needed multiple processes to handle a task I used this "clumsy method". Although starting the tasks was troublesome, requiring manually opening n black terminal windows and then running the corresponding script in the target directory, when writing the code
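The "clumsy method" described here (one terminal window per worker script) is exactly what a process pool replaces. A minimal sketch of the idea, in Python rather than the PHP approach of the original post, with the per-task work stubbed out:

```python
from multiprocessing import Pool

def download(day_offset: int) -> str:
    # Stub for the real per-wallpaper work (HTTP fetch + file save).
    return f"fetched wallpaper for offset {day_offset}"

if __name__ == "__main__":
    # Four worker processes replace four manually opened terminal windows.
    with Pool(processes=4) as pool:
        results = pool.map(download, range(8))
    print(len(results))
```

The pool starts the workers, distributes the task list, and collects results, so no script has to be launched by hand per process.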
|
1.0
|
PHP multi-process download of Bing wallpapers - https://her-cat.com/posts/2020/02/02/php-multi-process-download-bing-wallpaper/
Before this, whenever I needed multiple processes to handle a task I used this "clumsy method". Although starting the tasks was troublesome, requiring manually opening n black terminal windows and then running the corresponding script in the target directory, when writing the code
|
process
|
php multi process download of bing wallpapers before this whenever i needed multiple processes to handle a task i used this clumsy method although starting the tasks was troublesome requiring manually opening n black terminal windows and then running the corresponding script in the target directory when writing the code
| 1
|
337,754
| 10,220,153,574
|
IssuesEvent
|
2019-08-15 20:32:14
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.engadget.com - site is not usable
|
browser-fenix engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.engadget.com/2019/04/04/dell-xps-13-review-perfect-ultraportable/?guccounter=1
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Page is either frozen or very slow. Other tabs also freeze
**Steps to Reproduce**:
Page is either frozen or very slow. Other tabs also freeze
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.engadget.com - site is not usable - <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.engadget.com/2019/04/04/dell-xps-13-review-perfect-ultraportable/?guccounter=1
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Page is either frozen or very slow. Other tabs also freeze
**Steps to Reproduce**:
Page is either frozen or very slow. Other tabs also freeze
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description page is either frozen or very slow other tabs also freeze steps to reproduce page is either frozen or very slow other tabs also freeze browser configuration none from with ❤️
| 0
|
17,542
| 23,354,770,326
|
IssuesEvent
|
2022-08-10 06:10:59
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
reopened
|
UATC purls wrong
|
ontology processing problem
|
User comments (and I've confirmed):
```
All PURLs for ATC (https://bioportal.bioontology.org/ontologies/ATC/) appear to be incorrect - they contain the acronym "UATC" instead of "ATC" and consequently do not resolve to the correct records.
```
|
1.0
|
UATC purls wrong - User comments (and I've confirmed):
```
All PURLs for ATC (https://bioportal.bioontology.org/ontologies/ATC/) appear to be incorrect - they contain the acronym "UATC" instead of "ATC" and consequently do not resolve to the correct records.
```
|
process
|
uatc purls wrong user comments and i ve confirmed all purls for atc appear to be incorrect they contain the acronym uatc instead of atc and consequently do not resolve to the correct records
| 1
|
253,408
| 27,300,584,669
|
IssuesEvent
|
2023-02-24 01:20:57
|
panasalap/linux-4.19.72_1
|
https://api.github.com/repos/panasalap/linux-4.19.72_1
|
opened
|
CVE-2020-12655 (Medium) detected in linux-yoctov5.4.51
|
security vulnerability
|
## CVE-2020-12655 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/libxfs/xfs_alloc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/libxfs/xfs_alloc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in xfs_agf_verify in fs/xfs/libxfs/xfs_alloc.c in the Linux kernel through 5.6.10. Attackers may trigger a sync of excessive duration via an XFS v5 image with crafted metadata, aka CID-d0c7feaf8767.
<p>Publish Date: 2020-05-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12655>CVE-2020-12655</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12655</a></p>
<p>Release Date: 2020-05-05</p>
<p>Fix Resolution: v5.7-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-12655 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2020-12655 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/libxfs/xfs_alloc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/xfs/libxfs/xfs_alloc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in xfs_agf_verify in fs/xfs/libxfs/xfs_alloc.c in the Linux kernel through 5.6.10. Attackers may trigger a sync of excessive duration via an XFS v5 image with crafted metadata, aka CID-d0c7feaf8767.
<p>Publish Date: 2020-05-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-12655>CVE-2020-12655</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12655</a></p>
<p>Release Date: 2020-05-05</p>
<p>Fix Resolution: v5.7-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files fs xfs libxfs xfs alloc c fs xfs libxfs xfs alloc c vulnerability details an issue was discovered in xfs agf verify in fs xfs libxfs xfs alloc c in the linux kernel through attackers may trigger a sync of excessive duration via an xfs image with crafted metadata aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
18,471
| 24,550,598,991
|
IssuesEvent
|
2022-10-12 12:19:13
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile apps] Unable to sign in with registered credentials in the mobile apps
|
Bug Blocker P0 iOS Android Process: Fixed Process: Tested dev
|
For some of the users > Unable to sign in with registered credentials in the mobile apps.
Can refer the below mentioned user:
bhoomikav@boston-technology.com
Getting error message only in android as in the below screenshot

|
2.0
|
[Mobile apps] Unable to sign in with registered credentials in the mobile apps - For some of the users > Unable to sign in with registered credentials in the mobile apps.
Can refer the below mentioned user:
bhoomikav@boston-technology.com
Getting error message only in android as in the below screenshot

|
process
|
unable to sign in with registered credentials in the mobile apps for some of the users unable to sign in with registered credentials in the mobile apps can refer the below mentioned user bhoomikav boston technology com getting error message only in android as in the below screenshot
| 1
|
20,352
| 27,011,525,152
|
IssuesEvent
|
2023-02-10 15:46:02
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Dolby Cinema Processor CP750 (addition to existing module dolby-cinemaprocessor v1.1.1)
|
NOT YET PROCESSED
|
**Device**: Dolby Cinema Processor CP750
**Features to be controlled:** Those already available in dolby-cinemaprocessor v1.1.1, which allows control of CP650 and CP950. Minimum features needed would be:
_Actions:_
Set Fader to Level (0-100)
Set Format Button (choice of 7 presets)
Mute On
Mute Off
Mute Toggle
_Feedbacks & Variables:_
Current Fader Level
Current Mute Status
Current Format Button
**Notes**: CP750 works exactly like CP950, you control it by sending ASCII strings over TCP port **61408**. The issue is that the ASCII strings themselves are model-specific (in this case, they all begin with "CP750.sys"), so the existing module does not work.
I would like to kindly request the addition of CP750 as a third supported device. Please find attached an extract from Dolby installer manual listing the relevant commands. If it's easier, even a generic "send the following ASCII string to device" command would be very helpful, each user could customize it by just copying and pasting from the table (that is actually how digital cinema servers control Dolby processors, so most of the user base is already quite familiar with this). Thanks!
<img width="1119" alt="projectionniste net:docs:dolby-cp750-manual" src="https://user-images.githubusercontent.com/47261487/218131038-8cc663cb-15cc-4487-949a-3a701ee0c0ff.png">
|
1.0
|
Dolby Cinema Processor CP750 (addition to existing module dolby-cinemaprocessor v1.1.1) - **Device**: Dolby Cinema Processor CP750
**Features to be controlled:** Those already available in dolby-cinemaprocessor v1.1.1, which allows control of CP650 and CP950. Minimum features needed would be:
_Actions:_
Set Fader to Level (0-100)
Set Format Button (choice of 7 presets)
Mute On
Mute Off
Mute Toggle
_Feedbacks & Variables:_
Current Fader Level
Current Mute Status
Current Format Button
**Notes**: CP750 works exactly like CP950, you control it by sending ASCII strings over TCP port **61408**. The issue is that the ASCII strings themselves are model-specific (in this case, they all begin with "CP750.sys"), so the existing module does not work.
I would like to kindly request the addition of CP750 as a third supported device. Please find attached an extract from Dolby installer manual listing the relevant commands. If it's easier, even a generic "send the following ASCII string to device" command would be very helpful, each user could customize it by just copying and pasting from the table (that is actually how digital cinema servers control Dolby processors, so most of the user base is already quite familiar with this). Thanks!
<img width="1119" alt="projectionniste net:docs:dolby-cp750-manual" src="https://user-images.githubusercontent.com/47261487/218131038-8cc663cb-15cc-4487-949a-3a701ee0c0ff.png">
|
process
|
dolby cinema processor addition to existing module dolby cinemaprocessor device dolby cinema processor features to be controlled those already available in dolby cinemaprocessor which allows control of and minimum features needed would be actions set fader to level set format button choice of presets mute on mute off mute toggle feedbacks variables current fader level current mute status current format button notes works exactly like you control it by sending ascii strings over tcp port the issue is that the ascii strings themselves are model specific in this case they all begin with sys so the existing module does not work i would like to kindly request the addition of as a third supported device please find attached an extract from dolby installer manual listing the relevant commands if it s easier even a generic send the following ascii string to device command would be very helpful each user could customize it by just copying and pasting from the table that is actually how digital cinema servers control dolby processors so most of the user base is already quite familiar with this thanks img width alt projectionniste net docs dolby manual src
| 1
|
444,955
| 12,823,990,759
|
IssuesEvent
|
2020-07-06 12:45:19
|
algernon-A/Ploppable-RICO-Revisited
|
https://api.github.com/repos/algernon-A/Ploppable-RICO-Revisited
|
closed
|
Possible issue with zh-tw(or any other languages that are not included in the base game)
|
Priority: medium Type: enhancement
|
From reading the translation framework code I noticed that if a language is not included in the base game(not a locale option), it will never be used(such as zh-tw). TMPE has more languages than the game does so TMPE uses an additional drop-down list in mod setting to let users manually choose what language to be used. I'll try to do something similar in Find It Fix.
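The TMPE workaround described here is just a fallback chain: an explicit user override first, then the requested language if the game ships it, else a default. A sketch of that logic (Python, illustrative only, not the actual C# mod code):

```python
def resolve_language(requested, game_locales, manual_override=None):
    """Prefer an explicit mod-setting override, then the requested
    language if the base game supports it, else fall back to English."""
    if manual_override:
        return manual_override
    return requested if requested in game_locales else "en"
```

With this shape, a language like zh-tw that is missing from the base-game locale list is only ever reached through the drop-down override.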
|
1.0
|
Possible issue with zh-tw(or any other languages that are not included in the base game) - From reading the translation framework code I noticed that if a language is not included in the base game(not a locale option), it will never be used(such as zh-tw). TMPE has more languages than the game does so TMPE uses an additional drop-down list in mod setting to let users manually choose what language to be used. I'll try to do something similar in Find It Fix.
|
non_process
|
possible issue with zh tw or any other languages that are not included in the base game from reading the translation framework code i noticed that if a language is not included in the base game not a locale option it will never be used such as zh tw tmpe has more languages than the game does so tmpe uses an additional drop down list in mod setting to let users manually choose what language to be used i ll try to do something similar in find it fix
| 0
|
302,683
| 9,285,355,248
|
IssuesEvent
|
2019-03-21 06:44:32
|
richelbilderbeek/pirouette
|
https://api.github.com/repos/richelbilderbeek/pirouette
|
closed
|
to_twin_filename fails on filenames with two dots
|
medium priority
|
BEAST2 uses filenames with two dots in it, e.g. `my.xml.state`. The `xml` indicates the format is XML, the `state` indicates it is a BEAST2 state. Both `xml` and `state` should thus remain the same.
Converting this filename to its twin should thus result in `my_twin.xml.state`.
Instead, this is converted to the incorrect `my.xml_twin.state`.
I've added a test for this:
```r
test_that("use on filesnames with two dots", {
skip("#231")
created <- to_twin_filename("example_3_beast2_output.xml.state")
expected <- "example_3_beast2_output_twin.xml.state"
expect_equal(expected, created)
})
```
|
1.0
|
to_twin_filename fails on filenames with two dots - BEAST2 uses filenames with two dots in it, e.g. `my.xml.state`. The `xml` indicates the format is XML, the `state` indicates it is a BEAST2 state. Both `xml` and `state` should thus remain the same.
Converting this filename to its twin should thus result in `my_twin.xml.state`.
Instead, this is converted to the incorrect `my.xml_twin.state`.
I've added a test for this:
```r
test_that("use on filesnames with two dots", {
skip("#231")
created <- to_twin_filename("example_3_beast2_output.xml.state")
expected <- "example_3_beast2_output_twin.xml.state"
expect_equal(expected, created)
})
```
|
non_process
|
to twin filename fails on filenames with two dots uses filenames with two dots in it e g my xml state the xml indicates the format is xml the state indicates it is a state both xml and state should thus remain the same converting this filename to its twin should thus result in my twin xml state instead this is converted to the incorrect my xml twin state i ve added a test for this r test that use on filesnames with two dots skip created to twin filename example output xml state expected example output twin xml state expect equal expected created
| 0
|
83,215
| 3,632,103,539
|
IssuesEvent
|
2016-02-11 07:43:33
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
CoreOS multinode cluster getting started guide ignores networking
|
priority/P2 team/CSI
|
docs/getting-started-guides/coreos/coreos_multinode_cluster.md
This doc does not seem to mention networking at all. People are following this doc and failing, which is not surprising given $subject
@kelseyhightower
|
1.0
|
CoreOS multinode cluster getting started guide ignores networking - docs/getting-started-guides/coreos/coreos_multinode_cluster.md
This doc does not seem to mention networking at all. People are following this doc and failing, which is not surprising given $subject
@kelseyhightower
|
non_process
|
coreos multinode cluster getting started guide ignores networking docs getting started guides coreos coreos multinode cluster md this doc does not seem to mention networking at all people are following this doc and failing which is not surprising given subject kelseyhightower
| 0
|
17,086
| 22,593,827,411
|
IssuesEvent
|
2022-06-28 23:09:00
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Installer-related classes missing
|
area-System.ServiceProcess
|
ServiceController was added to corefx, but many framework classes are lacking. With the deprecation of .NET Framework, I've been looking at Portability Analyzer and one of our significant issues is installer-related. We have a custom base service DLL we use in all of our services that handle installation of various things (event log sources, the assemblies themselves, service registration, etc) as well as uninstallation.
It uses the following classes which are missing. I haven't been able to find them in compat packs, NuGet, or corefx. Also, even looking for `ServiceProcessInstaller`, it doesn't appear to be in the reference source. Its signature is there, like an interface, but not the implementation.
Can anyone provide guidance on replacing the following classes missing from .NET Core/5?
* System.Diagnostics.EventLogInstaller
* System.Configuration.Install.InstallContext
* System.ServiceProcess.ServiceInstaller
* System.Configuration.Install.Installer
* System.ServiceProcess.ServiceProcessInstaller
* System.Configuration.Install.InstallerCollection
* System.Configuration.Install.TransactedInstaller
* System.Configuration.Install.InstallException
* System.ServiceProcess.ServiceAccount
* System.Configuration.Install.UninstallAction
The examples with Windows Services they now support have shown MS staff using `sc` to create the services manually (presumably xcopying the binaries manually too). This seems weird since there was built in support for registering services before and Core now officially supports services. Why are these classes still missing?
|
1.0
|
Installer-related classes missing - ServiceController was added to corefx, but many framework classes are lacking. With the deprecation of .NET Framework, I've been looking at Portability Analyzer and one of our significant issues is installer-related. We have a custom base service DLL we use in all of our services that handle installation of various things (event log sources, the assemblies themselves, service registration, etc) as well as uninstallation.
It uses the following classes which are missing. I haven't been able to find them in compat packs, NuGet, or corefx. Also, even looking for `ServiceProcessInstaller`, it doesn't appear to be in the reference source. Its signature is there, like an interface, but not the implementation.
Can anyone provide guidance on replacing the following classes missing from .NET Core/5?
* System.Diagnostics.EventLogInstaller
* System.Configuration.Install.InstallContext
* System.ServiceProcess.ServiceInstaller
* System.Configuration.Install.Installer
* System.ServiceProcess.ServiceProcessInstaller
* System.Configuration.Install.InstallerCollection
* System.Configuration.Install.TransactedInstaller
* System.Configuration.Install.InstallException
* System.ServiceProcess.ServiceAccount
* System.Configuration.Install.UninstallAction
The examples with Windows Services they now support have shown MS staff using `sc` to create the services manually (presumably xcopying the binaries manually too). This seems weird since there was built in support for registering services before and Core now officially supports services. Why are these classes still missing?
|
process
|
installer related classes missing servicecontroller was added to corefx but many framework classes are lacking with the deprecation of net framework i ve been looking at portability analyzer and one of our significant issues is installer related we have a custom base service dll we use in all of our services that handle installation of various things event log sources the assemblies themselves service registration etc as well as uninstallation it uses the following classes which are missing i haven t been able to find them in compat packs nuget or corefx also even looking for serviceprocessinstaller it doesn t appear to be in the reference source its signature is there like an interface but not the implementation can anyone provide guidance on replacing the following classes missing from net core system diagnostics eventloginstaller system configuration install installcontext system serviceprocess serviceinstaller system configuration install installer system serviceprocess serviceprocessinstaller system configuration install installercollection system configuration install transactedinstaller system configuration install installexception system serviceprocess serviceaccount system configuration install uninstallaction the examples with windows services they now support have shown ms staff using sc to create the services manually presumably xcopying the binaries manually too this seems weird since there was built in support for registering services before and core now officially supports services why are these classes still missing
| 1
|
20,722
| 27,421,051,559
|
IssuesEvent
|
2023-03-01 16:44:45
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Error in YAML pipeline: Deployment strategy: 'Rolling' is not supported for Environment scoped target.
|
devops/prod doc-bug Pri1 devops-cicd-process/tech
|
The site shows this example for a rolling deployment:
jobs:
- deployment: VMDeploy
displayName: web
environment:
name: smarthotel-dev
resourceType: VirtualMachine
strategy:
rolling:
maxParallel: 5 #for percentages, mention as x%
However, when I run it, this error appears: Deployment strategy: "'Rolling' is not supported for Environment scoped target."
It is worth noting that when I try the pipeline without environment it give this error:
"
Job ####: Environment is required."
Suggestions?
Thanks,
Bill
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Error in YAML pipeline: Deployment strategy: 'Rolling' is not supported for Environment scoped target. - The site shows this example for a rolling deployment:
jobs:
- deployment: VMDeploy
displayName: web
environment:
name: smarthotel-dev
resourceType: VirtualMachine
strategy:
rolling:
maxParallel: 5 #for percentages, mention as x%
However, when I run it, this error appears: Deployment strategy: "'Rolling' is not supported for Environment scoped target."
It is worth noting that when I try the pipeline without environment it give this error:
"
Job ####: Environment is required."
Suggestions?
Thanks,
Bill
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
error in yaml pipeline deployment strategy rolling is not supported for environment scoped target the site shows this example for a rolling deployment jobs deployment vmdeploy displayname web environment name smarthotel dev resourcetype virtualmachine strategy rolling maxparallel for percentages mention as x however when i run it this error appears deployment strategy rolling is not supported for environment scoped target it is worth noting that when i try the pipeline without environment it give this error job environment is required suggestions thanks bill document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
213,422
| 16,522,726,052
|
IssuesEvent
|
2021-05-26 16:09:27
|
dask/dask
|
https://api.github.com/repos/dask/dask
|
opened
|
Update HighLevelGraph documentation
|
documentation highlevelgraph
|
I just came across [this page of the docs](https://docs.dask.org/en/latest/high-level-graphs.html) which hasn't been changed much since Dec 2018. I think it would be useful to update it to include a runnable example and some information about the `Layer` class.
Once that page is updated, it will probably make sense to update https://docs.dask.org/en/latest/phases-of-computation.html and https://docs.dask.org/en/latest/graphs.html to make it clearer how HLGs fit in.
I am happy to take on this work, but just wanted to put this here in case I am missing something.
Some resources that I am planning to draw on:
[Layer Issue](https://github.com/dask/dask/issues/6438)
[Blog post about scheduling](https://blog.dask.org/2020/07/21/faster-scheduling)
[Scheduling workshop from summit](https://zoom.us/rec/share/vO7Xx5HAupQHDEjTgmspp6VZoCY_zabi1nouOHCwl4mBSg2hVQpIFqTTpVQUNZA4.8cE5K7SjaMIUNINW?startTime=1621620212000)
Pinging the people who I have seen working on HLG related things: @rjzamora @GenevieveBuckley @gjoseph92 @ian-r-rose @jrbourbeau
|
1.0
|
Update HighLevelGraph documentation - I just came across [this page of the docs](https://docs.dask.org/en/latest/high-level-graphs.html) which hasn't been changed much since Dec 2018. I think it would be useful to update it to include a runnable example and some information about the `Layer` class.
Once that page is updated, it will probably make sense to update https://docs.dask.org/en/latest/phases-of-computation.html and https://docs.dask.org/en/latest/graphs.html to make it clearer how HLGs fit in.
I am happy to take on this work, but just wanted to put this here in case I am missing something.
Some resources that I am planning to draw on:
[Layer Issue](https://github.com/dask/dask/issues/6438)
[Blog post about scheduling](https://blog.dask.org/2020/07/21/faster-scheduling)
[Scheduling workshop from summit](https://zoom.us/rec/share/vO7Xx5HAupQHDEjTgmspp6VZoCY_zabi1nouOHCwl4mBSg2hVQpIFqTTpVQUNZA4.8cE5K7SjaMIUNINW?startTime=1621620212000)
Pinging the people who I have seen working on HLG related things: @rjzamora @GenevieveBuckley @gjoseph92 @ian-r-rose @jrbourbeau
|
non_process
|
update highlevelgraph documentation i just came across which hasn t been changed much since dec i think it would be useful to update it to include a runnable example and some information about the layer class once that page is updated it will probably make sense to update and to make it clearer how hlgs fit in i am happy to take on this work but just wanted to put this here in case i am missing something some resources that i am planning to draw on pinging the people who i have seen working on hlg related things rjzamora genevievebuckley ian r rose jrbourbeau
| 0
|
54,995
| 6,421,626,335
|
IssuesEvent
|
2017-08-09 05:45:34
|
dalgibbard/citrix_xenserver_patcher
|
https://api.github.com/repos/dalgibbard/citrix_xenserver_patcher
|
reopened
|
Support for Xenserver 7.1
|
bug pending-testing
|
Seems with version 7.1 there are some changes that break functionality:
- the ZIP file contains an **iso** file instead of an **xsupdate**
- the patch-upload command called by python returns too long no uuid, and seems the uploaded patch can be listed, but with an uuid where the second and third group of numbers is 0000, this breaks the "second validation" phase, but then after some time the patch's uuid is changed to the correct one, and the patch can be applied
Tried a simple if-else switch for the extracted file-name but the timeout which causes the uuid validation fail is too high for me to present with a solution.
|
1.0
|
Support for Xenserver 7.1 - Seems with version 7.1 there are some changes that break functionality:
- the ZIP file contains an **iso** file instead of an **xsupdate**
- the patch-upload command called by python returns too long no uuid, and seems the uploaded patch can be listed, but with an uuid where the second and third group of numbers is 0000, this breaks the "second validation" phase, but then after some time the patch's uuid is changed to the correct one, and the patch can be applied
Tried a simple if-else switch for the extracted file-name but the timeout which causes the uuid validation fail is too high for me to present with a solution.
|
non_process
|
support for xenserver seems with version there are some changes that break functionality the zip file contains an iso file instead of an xsupdate the patch upload command called by python returns too long no uuid and seems the uploaded patch can be listed but with an uuid where the second and third group of numbers is this breaks the second validation phase but then after some time the patch s uuid is changed to the correct one and the patch can be applied tried a simple if else switch for the extracted file name but the timeout which causes the uuid validation fail is too high for me to present with a solution
| 0
|
110,855
| 9,480,106,344
|
IssuesEvent
|
2019-04-20 14:59:27
|
andrewdavidmackenzie/flow
|
https://api.github.com/repos/andrewdavidmackenzie/flow
|
closed
|
Add tests for manifest generation in flowc
|
tests to do
|
Tests done
- loading a number of flows
- compiling them valid and invalid flos
Tests not done
- generating manifests
|
1.0
|
Add tests for manifest generation in flowc - Tests done
- loading a number of flows
- compiling them valid and invalid flos
Tests not done
- generating manifests
|
non_process
|
add tests for manifest generation in flowc tests done loading a number of flows compiling them valid and invalid flos tests not done generating manifests
| 0
|
10,355
| 13,179,447,750
|
IssuesEvent
|
2020-08-12 10:57:20
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Define which tab in the excel file to use in a processing model
|
Feature Request Processing
|
Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20951](https://issues.qgis.org/issues/20951)
Redmine category:processing/modeller
---
When I use an Excel file as input in a processing model in QGIS 3.4.3, I am unable to define which tab(s) to read. QGIS only seems to read the first tab and I would like to see an option to select which tabs to read. A very simple Excel file for testing is attached.
---
- [Testfile.xlsx](https://issues.qgis.org/attachments/download/14071/Testfile.xlsx) (Magnus Nilsson)
|
1.0
|
Define which tab in the excel file to use in a processing model - Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20951](https://issues.qgis.org/issues/20951)
Redmine category:processing/modeller
---
When I use an Excel file as input in a processing model in QGIS 3.4.3, I am unable to define which tab(s) to read. QGIS only seems to read the first tab and I would like to see an option to select which tabs to read. A very simple Excel file for testing is attached.
---
- [Testfile.xlsx](https://issues.qgis.org/attachments/download/14071/Testfile.xlsx) (Magnus Nilsson)
|
process
|
define which tab in the excel file to use in a processing model author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller when i use an excel file as input in a processing model in qgis i am unable to define which tab s to read qgis only seems to read the first tab and i would like to see an option to select which tabs to read a very simple excel file for testing is attached magnus nilsson
| 1
|
11,257
| 14,021,634,900
|
IssuesEvent
|
2020-10-29 21:38:03
|
googleapis/google-auth-library-python
|
https://api.github.com/repos/googleapis/google-auth-library-python
|
closed
|
TODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix
|
type: process
|
Environment details
- OS: $ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G6020
- Python version: 3.6, 3.7, 3.8
- pip version: pip 20.2.4
- `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59
#### Steps to reproduce
1. nox -s unit
There are 9 tests that fail, all with the same error:
`TypeError: __init__() missing 1 required positional argument: 'limit'`
```
====================================================== short test summary info =======================================================
FAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...
============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================
```
Here is the traceback for one of the failing tests:
```
____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________
self = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>
urllib3_mock = <function decompress at 0x10880a820>
@mock.patch(
"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress",
return_value="decompressed",
autospec=True,
)
@pytest.mark.asyncio
async def test_content_compressed(self, urllib3_mock):
rm = core.RequestMatch(
"url", headers={"Content-Encoding": "gzip"}, payload="compressed"
)
> response = await rm.build_response(core.URL("url"))
tests_async/transport/test_aiohttp_requests.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response
resp = self._build_response(
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response
resp.content = stream_reader_factory(loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
loop = <Mock id='4437587472'>
def stream_reader_factory( # noqa
loop: 'Optional[asyncio.AbstractEventLoop]' = None
):
protocol = ResponseHandler(loop=loop)
> return StreamReader(protocol, loop=loop)
E TypeError: __init__() missing 1 required positional argument: 'limit'
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError
========================================================== warnings summary ==========================================================
```
The root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.
https://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106
This change breaks aioresponses:
https://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44
|
1.0
|
TODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix - Environment details
- OS: $ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G6020
- Python version: 3.6, 3.7, 3.8
- pip version: pip 20.2.4
- `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59
#### Steps to reproduce
1. nox -s unit
There are 9 tests that fail, all with the same error:
`TypeError: __init__() missing 1 required positional argument: 'limit'`
```
====================================================== short test summary info =======================================================
FAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...
============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================
```
Here is the traceback for one of the failing tests:
```
____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________
self = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>
urllib3_mock = <function decompress at 0x10880a820>
@mock.patch(
"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress",
return_value="decompressed",
autospec=True,
)
@pytest.mark.asyncio
async def test_content_compressed(self, urllib3_mock):
rm = core.RequestMatch(
"url", headers={"Content-Encoding": "gzip"}, payload="compressed"
)
> response = await rm.build_response(core.URL("url"))
tests_async/transport/test_aiohttp_requests.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response
resp = self._build_response(
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response
resp.content = stream_reader_factory(loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
loop = <Mock id='4437587472'>
def stream_reader_factory( # noqa
loop: 'Optional[asyncio.AbstractEventLoop]' = None
):
protocol = ResponseHandler(loop=loop)
> return StreamReader(protocol, loop=loop)
E TypeError: __init__() missing 1 required positional argument: 'limit'
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError
========================================================== warnings summary ==========================================================
```
The root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.
https://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106
This change breaks aioresponses:
https://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44
|
process
|
todo undo pin of aiohttp once aioresponses releases a fix environment details os sw vers productname mac os x productversion buildversion python version pip version pip google auth version steps to reproduce nox s unit there are tests that fail all with the same error typeerror init missing required positional argument limit short test summary info failed tests async transport test aiohttp requests py testcombinedresponse test content compressed typeerror init missin failed tests async transport test aiohttp requests py testresponse test headers prop typeerror init missing required p failed tests async transport test aiohttp requests py testresponse test status prop typeerror init missing required po failed tests async transport test aiohttp requests py testauthorizedsession test request typeerror init missing requir failed tests async transport test aiohttp requests py testauthorizedsession test ctx typeerror init missing required p failed tests async transport test aiohttp requests py testauthorizedsession test http headers typeerror init missing r failed tests async transport test aiohttp requests py testauthorizedsession test regexp example typeerror init missing failed tests async transport test aiohttp requests py testauthorizedsession test request no refresh typeerror init missi failed tests async transport test aiohttp requests py testauthorizedsession test request refresh typeerror init missing failed passed warnings in here is the traceback for one of the failing tests testcombinedresponse test content compressed self mock mock patch google auth transport aiohttp requests response multidecoder decompress return value decompressed autospec true pytest mark asyncio async def test content compressed self mock rm core requestmatch url headers content encoding gzip payload compressed response await rm build response core url url tests async transport test aiohttp requests py virtualenv google auth library python lib site packages aioresponses core py in build response resp self build response virtualenv google auth library python lib site packages aioresponses core py in build response resp content stream reader factory loop loop def stream reader factory noqa loop optional none protocol responsehandler loop loop return streamreader protocol loop loop e typeerror init missing required positional argument limit virtualenv google auth library python lib site packages aioresponses compat py typeerror warnings summary the root cause is a change in aiohttp version which was released a few hours ago the signature for streamreader has changed making the optional argument limit a required argument this change breaks aioresponses
| 1
|
2,816
| 5,748,791,293
|
IssuesEvent
|
2017-04-25 02:24:31
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Strange bot name
|
add change log-processing
|
echo '141.101.76.152 - - [24/Apr/2017:04:51:25 +0000] "GET /wp-content/uploads/2014/03/14-182x300.png HTTP/1.1" 200 64049 "Client-IP:5.255.250.75" "Mozilla/5.0 (compatible; YandexImages/3.0; +http://yandex.com/bots)"' | goaccess ...
```
> 7 - Browsers
Hits h% Vis. v% Bandwidth Data
---- ------- ---- ------- ----------- ----
1 100.00% 1 100.00% 62.55 KiB Crawlers ||||||||||||||||||||||||||||
1 100.00% 1 100.00% 62.55 KiB ├─ Images/3.0 ||||||||||||||||||||||||||||
```
How does goaccess know that `YandexImages/3.0` is a crawler? (not found in source)
Why does it list it as `Images/3.0`?
|
1.0
|
Strange bot name - echo '141.101.76.152 - - [24/Apr/2017:04:51:25 +0000] "GET /wp-content/uploads/2014/03/14-182x300.png HTTP/1.1" 200 64049 "Client-IP:5.255.250.75" "Mozilla/5.0 (compatible; YandexImages/3.0; +http://yandex.com/bots)"' | goaccess ...
```
> 7 - Browsers
Hits h% Vis. v% Bandwidth Data
---- ------- ---- ------- ----------- ----
1 100.00% 1 100.00% 62.55 KiB Crawlers ||||||||||||||||||||||||||||
1 100.00% 1 100.00% 62.55 KiB ├─ Images/3.0 ||||||||||||||||||||||||||||
```
How does goaccess know that `YandexImages/3.0` is a crawler? (not found in source)
Why does it list it as `Images/3.0`?
|
process
|
strange bot name echo get wp content uploads png http client ip mozilla compatible yandeximages goaccess browsers hits h vis v bandwidth data kib crawlers kib ├─ images how does goaccess know that yandeximages is a crawler not found in source why does it list it as images
| 1
|
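One plausible mechanism behind the `Images/3.0` label in the record above — a guess at the general technique, not goaccess's actual parser — is substring matching against a crawler token list, then reading the name/version from wherever the match ended:

```python
# Hypothetical token list; goaccess ships its own browsers/crawlers data.
CRAWLER_TOKENS = ("Yandex", "Googlebot", "bingbot")

def classify_agent(ua):
    """Return (category, label) for a user-agent string.

    Matching the token "Yandex" inside "YandexImages/3.0" and then
    parsing from the end of the match yields "Images/3.0" -- exactly
    the odd label reported above.
    """
    for token in CRAWLER_TOKENS:
        pos = ua.find(token)
        if pos != -1:
            rest = ua[pos + len(token):]           # "Images/3.0; +http://..."
            label = rest.split(";")[0].strip() or token
            return "Crawlers", label
    return "Unknown", ua
```

If goaccess does something like this, the crawler knowledge lives in its bundled data rather than the parsing code, which would explain why `YandexImages` is "not found in source".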
5,960
| 8,784,237,658
|
IssuesEvent
|
2018-12-20 09:15:51
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
partners permission bugs in different entities
|
2.0.6 Process bug
|
### tasks: after creating a sub task for a task that has a partner other than the creator and an assignee, setting the partner in the main task to editor/commentor, the role isn't saved in the sub task and instead shows the partner as a viewer.
in the main task

in the sub task:

### **projects:** in the sub tasks after adding a partner to the main project, the partner doesn't show up in the sub task.
in the main project in a task from the tasks sub tab, the partner doesn't show up as well.
in the main project

in the tasks tab/ sub projects

### **meetings:** after adding a partner to the meeting and setting him as editor/commenter, and then clicking on manage tasks, the partner doesn't show up in the task
### **documents**: after adding a partner to the document and setting him as editor/commenter, and then clicking on manage tasks, the partner doesn't show up in the task and the role change isn't saved in the document itself. in addition a user with viewer permissions can select a signature
### **folders**: after adding a partner with commenter or editor roles to the folder and then entering the documents tab from manage documents, the partners added are shown in the document but with viewer roles
### **offices**: after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders, the partners added are shown in the folder but with viewer roles
|
1.0
|
partners permission bugs in different entities - ### tasks: after creating a sub task for a task that has a partner other than the creator and an assignee, setting the partner in the main task to editor/commentor, the role isn't saved in the sub task and instead shows the partner as a viewer.
in the main task

in the sub task:

### **projects:** in the sub tasks after adding a partner to the main project, the partner doesn't show up in the sub task.
in the main project in a task from the tasks sub tab, the partner doesn't show up as well.
in the main project

in the tasks tab/ sub projects

### **meetings:** after adding a partner to the meeting and setting him as editor/commenter, and then clicking on manage tasks, the partner doesn't show up in the task
### **documents**: after adding a partner to the document and setting him as editor/commenter, and then clicking on manage tasks, the partner doesn't show up in the task and the role change isn't saved in the document itself. in addition a user with viewer permissions can select a signature
### **folders**: after adding a partner with commenter or editor roles to the folder and then entering the documents tab from manage documents, the partners added are shown in the document but with viewer roles
### **offices**: after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders, the partners added are shown in the folder but with viewer roles
|
process
|
partners permission bugs in different entities tasks after creating a sub task for a task that has a partner other than the creator and an assignee setting the partner in the main task to editor commentor the role isnt saved in the sub task and instead shows the partner as a viewer in the main task in the sub task projects in the sub tasks after adding a partner to the main project the partner doesnt show up in the sub task in the main project in a task from the tasks sub tab the partner doesnt show up as well in the main project in the tasks tab sub projects meetings after adding a partner to the meeting and setting him as editor commenter and then clicking on manage tasks the partner doesnt show up in the task documents after adding a partner to the document and setting him as editor commenter and then clicking on manage tasks the partner doesnt show up in the task and the role change isnt saved in the document itself in addition a user with viewer permissions can select a signature folders after adding a partner with commenter or editor roles to the folder and then entering the documents tab from manage documents the partners added are shown in the document but with viewer roles offices after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders the partners added are shown in the folder but with viewer roles
| 1
|
9,839
| 12,833,364,979
|
IssuesEvent
|
2020-07-07 09:12:49
|
Maximus5/ConEmu
|
https://api.github.com/repos/Maximus5/ConEmu
|
closed
|
PID of forked process?
|
processes
|
Using the very latest ConEmu build. Have searched in issues but could not find the answer, though it must be a FAQ.
If I run a Powershell session in one tab, and fork a new process:
```
PS> $x = Start-Process -FilePath cmd -PassThru
```
This opens a new tab where the `cmd` process runs. I can see a new `ConEmuC64.exe` process having two children, a `conhost.exe` process and the `cmd.exe` process. However, the PID reported by Powershell (`$x.Id`) does not correspond to the PID of `cmd.exe`. In fact, it corresponds to a process that seems to stop immediately.
I assume that ConEmu must be hijacking the fork in order to provide a console?
Now, my PS script needs to eventually stop the `cmd` process. In a standard Powershell console, `$x.Id` is the PID of the process to kill, but in ConEmu... is there a way to get the PID of the `cmd` process? Alternatively... I don't *need* that process to start within ConEmu: is there a way to tell ConEmu to please not own it?
|
1.0
|
PID of forked process? - Using the very latest ConEmu build. Have searched in issues but could not find the answer, though it must be a FAQ.
If I run a Powershell session in one tab, and fork a new process:
```
PS> $x = Start-Process -FilePath cmd -PassThru
```
This opens a new tab where the `cmd` process runs. I can see a new `ConEmuC64.exe` process having two children, a `conhost.exe` process and the `cmd.exe` process. However, the PID reported by Powershell (`$x.Id`) does not correspond to the PID of `cmd.exe`. In fact, it corresponds to a process that seems to stop immediately.
I assume that ConEmu must be hijacking the fork in order to provide a console?
Now, my PS script needs to eventually stop the `cmd` process. In a standard Powershell console, `$x.Id` is the PID of the process to kill, but in ConEmu... is there a way to get the PID of the `cmd` process? Alternatively... I don't *need* that process to start within ConEmu: is there a way to tell ConEmu to please not own it?
|
process
|
pid of forked process using the very latest conemu build have searched in issues but could not find the answer though it must be a faq if i run a powershell session in one tab and fork a new process ps x start process filepath cmd passthru this opens a new tab where the cmd process runs i can see a new exe process having two children a conhost exe process and the cmd exe process however the pid reported by powershell x id does not correspond to the pid of cmd exe in fact it corresponds to a process that seems to stop immediately i assume that conemu must be hijacking the fork in order to provide a console now my ps script needs to eventually stop the cmd process in a standard powershell console x id is the pid of the process to kill but in conemu is there a way to get the pid of the cmd process alternatively i don t need that process to start within conemu is there a way to tell conemu to please not own it
| 1
|
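The workaround the ConEmu reporter is after — finding the real `cmd.exe` under the short-lived PID that `Start-Process` returns — amounts to walking the process tree below the reported process. A sketch of that walk over a plain parent→children map (in a real script the map would come from something like psutil or WMI; the PIDs here are made up):

```python
from collections import deque

def descendants(root_pid, children_of):
    """Breadth-first list of all descendant PIDs of `root_pid`.

    `children_of` maps a parent PID to the PIDs it spawned.
    """
    found, queue = [], deque([root_pid])
    while queue:
        pid = queue.popleft()
        for child in children_of.get(pid, ()):
            found.append(child)
            queue.append(child)
    return found

# A tree mimicking the report: a launcher whose ConEmuC64.exe child
# owns both conhost.exe and cmd.exe.
tree = {100: [200], 200: [201, 202]}  # 100=launcher, 200=ConEmuC64, 201=conhost, 202=cmd
```

Filtering the result by executable name (`cmd.exe`) would then recover the PID to kill, assuming the wrapper process chain stays alive long enough to walk.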
68,813
| 14,958,283,523
|
IssuesEvent
|
2021-01-27 00:22:58
|
fufunoyu/mall
|
https://api.github.com/repos/fufunoyu/mall
|
opened
|
CVE-2020-36189 (Medium) detected in jackson-databind-2.9.4.jar
|
security vulnerability
|
## CVE-2020-36189 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: mall/mall-manager/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/mall/commit/58429d23f3045fe26e303f6e045f1c664b07b48d">58429d23f3045fe26e303f6e045f1c664b07b48d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-36189 (Medium) detected in jackson-databind-2.9.4.jar - ## CVE-2020-36189 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: mall/mall-manager/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/mall/commit/58429d23f3045fe26e303f6e045f1c664b07b48d">58429d23f3045fe26e303f6e045f1c664b07b48d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.DriverManagerConnectionSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36189>CVE-2020-36189</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file mall mall manager pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db drivermanagerconnectionsource publish date url a href cvss score details base score metrics not available
| 0
|
17,830
| 23,769,828,853
|
IssuesEvent
|
2022-09-01 15:24:55
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
closed
|
Set up continuous integration
|
Process
|
We're aiming this project's CI to be self-contained (i.e. no calling to external continuous integration systems), and we're planning to use GitHub Actions for that.
|
1.0
|
Set up continuous integration - We're aiming this project's CI to be self-contained (i.e. no calling to external continuous integration systems), and we're planning to use GitHub Actions for that.
|
process
|
set up continuous integration we re aiming this project s ci to be self contained i e no calling to external continuous integration systems and we re planning to use github actions for that
| 1
|
46,321
| 13,055,891,817
|
IssuesEvent
|
2020-07-30 03:02:27
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
cmake - clang C flags need tweaking (Trac #986)
|
Incomplete Migration Migrated from Trac cmake defect
|
Migrated from https://code.icecube.wisc.edu/ticket/986
```json
{
"status": "closed",
"changetime": "2015-05-14T15:46:29",
"description": "with clang-3.7\n\n{{{\n[ 12%] Building C object photospline/CMakeFiles/photospline.dir/private/lib/bspline.c.o\nclang: warning: optimization flag '-fno-signaling-nans' is not supported\nclang: warning: optimization flag '-fno-signaling-nans' is not supported\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1431618389155518",
"component": "cmake",
"summary": "cmake - clang C flags need tweaking",
"priority": "normal",
"keywords": "",
"time": "2015-05-14T15:25:18",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
cmake - clang C flags need tweaking (Trac #986) - Migrated from https://code.icecube.wisc.edu/ticket/986
```json
{
"status": "closed",
"changetime": "2015-05-14T15:46:29",
"description": "with clang-3.7\n\n{{{\n[ 12%] Building C object photospline/CMakeFiles/photospline.dir/private/lib/bspline.c.o\nclang: warning: optimization flag '-fno-signaling-nans' is not supported\nclang: warning: optimization flag '-fno-signaling-nans' is not supported\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1431618389155518",
"component": "cmake",
"summary": "cmake - clang C flags need tweaking",
"priority": "normal",
"keywords": "",
"time": "2015-05-14T15:25:18",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
non_process
|
cmake clang c flags need tweaking trac migrated from json status closed changetime description with clang n n n building c object photospline cmakefiles photospline dir private lib bspline c o nclang warning optimization flag fno signaling nans is not supported nclang warning optimization flag fno signaling nans is not supported n reporter nega cc resolution fixed ts component cmake summary cmake clang c flags need tweaking priority normal keywords time milestone owner nega type defect
| 0
|
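The `-fno-signaling-nans` warning in the record above is the classic reason build systems probe flags instead of hard-coding them (CMake's `CHECK_C_COMPILER_FLAG` does exactly this). A rough standalone probe, sketched in Python rather than CMake: compile an empty program with `-Werror` so clang's "optimization flag ... is not supported" warning becomes a hard failure. The compiler path and flag are whatever the caller passes in:

```python
import os
import subprocess
import tempfile

def compiler_accepts_flag(cc, flag):
    """Return True if `cc` compiles an empty program with `flag`.

    -Werror promotes clang's unsupported-flag warning to an error,
    so unsupported flags make the probe fail.
    """
    fd, src = tempfile.mkstemp(suffix=".c")
    try:
        with os.fdopen(fd, "w") as f:
            f.write("int main(void) { return 0; }\n")
        try:
            result = subprocess.run(
                [cc, "-Werror", flag, "-c", src, "-o", os.devnull],
                capture_output=True,
            )
        except FileNotFoundError:  # no such compiler on PATH
            return False
        return result.returncode == 0
    finally:
        os.unlink(src)
```

With gcc present, `compiler_accepts_flag("gcc", "-fno-signaling-nans")` would be True while the same call against clang would be False — which is the per-compiler split the ticket's fix needed.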
1,488
| 4,059,117,544
|
IssuesEvent
|
2016-05-25 08:23:18
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Звернення до міського голови - м.Гола Пристань - Херсонська область
|
In process of testing
|
Відповідальна особа (ПІБ, посада) - Дробоженко
Світлана Миколаївна, начальник загального відділу
вул.1 Травня, 14
м.Гола Пристань
Херсонська область
75600
0553921555
vykonkom@golapristan-mrada.gov.ua
Координатор Проекту iGov в м. Херсоні та Херсонській області.
Email: kylikpavel@ukr.net
Тел: +380661622002
Блог:https://www.facebook.com/kylikpavel
|
1.0
|
Звернення до міського голови - м.Гола Пристань - Херсонська область -
Відповідальна особа (ПІБ, посада) - Дробоженко
Світлана Миколаївна, начальник загального відділу
вул.1 Травня, 14
м.Гола Пристань
Херсонська область
75600
0553921555
vykonkom@golapristan-mrada.gov.ua
Координатор Проекту iGov в м. Херсоні та Херсонській області.
Email: kylikpavel@ukr.net
Тел: +380661622002
Блог:https://www.facebook.com/kylikpavel
|
process
|
звернення до міського голови м гола пристань херсонська область відповідальна особа піб посада дробоженко світлана миколаївна начальник загального відділу вул травня м гола пристань херсонська область vykonkom golapristan mrada gov ua координатор проекту igov в м херсоні та херсонській області email kylikpavel ukr net тел блог
| 1
|
13,051
| 15,388,033,633
|
IssuesEvent
|
2021-03-03 10:15:59
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Ability to sensibly migrate column and table names
|
kind/feature process/candidate team/migrations
|
I've been playing around with Migrate quite a bit and one thing I know we'll need to do in future - but which looks quite tricky - is renaming columns and tables in the database.
## Columns
Considering the following `schema.prisma` diff:
```gql
model Foo {
- fieldName String @map("foo")
+ fieldName String @map("bar")
@map("foo_table")
}
```
This will become something like:
```sql
ALTER TABLE foo_table DROP COLUMN foo
ALTER TABLE foo_table ADD bar varchar(...)
```
When it could just be:
```sql
RENAME TABLE program_entity TO program_focus
```
## Tables
Considering the following `schema.prisma` diff:
```gql
model Foo {
fieldName String @map("foo")
- @map("foo_table")
+ @map("bar_table")
}
```
This will become something like:
```sql
DROP TABLE foo_table
CREATE TABLE bar_table
```
When it could just be:
```sql
RENAME TABLE foo_table TO bar_table
```
## Solutions?
Either, Migrate could understand these situations itself (although I can see that understanding the semantics of a diff could become quite challenging). Or, could there be a solution where you allow users the opportunity to write a custom SQL statement so they can continue to use Lift migrations in some of these more involved edge cases?
|
1.0
|
Ability to sensibly migrate column and table names - I've been playing around with Migrate quite a bit and one thing I know we'll need to do in future - but which looks quite tricky - is renaming columns and tables in the database.
## Columns
Considering the following `schema.prisma` diff:
```gql
model Foo {
- fieldName String @map("foo")
+ fieldName String @map("bar")
@map("foo_table")
}
```
This will become something like:
```sql
ALTER TABLE foo_table DROP COLUMN foo
ALTER TABLE foo_table ADD bar varchar(...)
```
When it could just be:
```sql
RENAME TABLE program_entity TO program_focus
```
## Tables
Considering the following `schema.prisma` diff:
```gql
model Foo {
fieldName String @map("foo")
- @map("foo_table")
+ @map("bar_table")
}
```
This will become something like:
```sql
DROP TABLE foo_table
CREATE TABLE bar_table
```
When it could just be:
```sql
RENAME TABLE foo_table TO bar_table
```
## Solutions?
Either, Migrate could understand these situations itself (although I can see that understanding the semantics of a diff could become quite challenging). Or, could there be a solution where you allow users the opportunity to write a custom SQL statement so they can continue to use Lift migrations in some of these more involved edge cases?
|
process
|
ability to sensibly migrate column and table names i ve been playing around with migrate quite a bit and one thing i know we ll need to do in future but which looks quite tricky is renaming columns and tables in the database columns considering the following schema prisma diff gql model foo fieldname string map foo fieldname string map bar map foo table this will become something like sql alter table foo table drop column foo alter table foo table add bar varchar when it could just be sql rename table program entity to program focus tables considering the following schema prisma diff gql model foo fieldname string map foo map foo table map bar table this will become something like sql drop table foo table create table bar table when it could just be sql rename table foo table to bar table solutions either migrate could understand these situations itself although i can see that understanding the semantics of a diff could become quite challenging or could there be a solution where you allow users the opportunity to write a custom sql statement so they can continue to us lift migrations in some of these more involved edge cases
| 1
|
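The Prisma feature request above boils down to diffing the `@map` names and emitting renames instead of drop/create. A toy differ over the two mapped names — a sketch of the idea only, not Prisma Migrate's engine. (Note that the issue's own "Columns" example pastes `RENAME TABLE program_entity TO program_focus` where a column rename such as `ALTER TABLE foo_table RENAME COLUMN foo TO bar` was presumably meant.)

```python
def plan_rename(kind, table, old, new):
    """Emit SQL for a detected rename instead of drop + recreate.

    kind is "table" or "column"; syntax follows standard SQL /
    PostgreSQL -- some engines (older MySQL) spell it differently.
    """
    if old == new:
        return []
    if kind == "table":
        return [f'ALTER TABLE "{old}" RENAME TO "{new}"']
    return [f'ALTER TABLE "{table}" RENAME COLUMN "{old}" TO "{new}"']
```

The hard part the issue flags remains deciding that a drop+add pair in the diff *is* a rename; a custom-SQL escape hatch sidesteps that inference entirely.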
231,867
| 17,759,696,168
|
IssuesEvent
|
2021-08-29 13:07:19
|
PlugFox/bloc
|
https://api.github.com/repos/PlugFox/bloc
|
closed
|
Give a temporary name to all packages for publishing in the pub
|
documentation
|
Give a temporary name to all packages for publishing in the pub
Perhaps for now I will call "`fox_*`"
|
1.0
|
Give a temporary name to all packages for publishing in the pub - Give a temporary name to all packages for publishing in the pub
Perhaps for now I will call "`fox_*`"
|
non_process
|
give a temporary name to all packages for publishing in the pub give a temporary name to all packages for publishing in the pub perhaps for now i will call fox
| 0
|
3,777
| 6,755,578,667
|
IssuesEvent
|
2017-10-24 01:27:37
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Output from ovftool in vSphere post-processor is not shown in the log
|
enhancement post-processor/vsphere
|
Issue from here: https://groups.google.com/forum/#!topic/packer-tool/xIi0kkBEYeQ
When vSphere post-processor runs ovftool stdout/stderr from ovftool gets swallowed. It would be better if it is posted to packer log, as there is no other way of getting it otherwise, and having it would help with debugging packer templates.
|
1.0
|
Output from ovftool in vSphere post-processor is not shown in the log - Issue from here: https://groups.google.com/forum/#!topic/packer-tool/xIi0kkBEYeQ
When vSphere post-processor runs ovftool stdout/stderr from ovftool gets swallowed. It would be better if it is posted to packer log, as there is no other way of getting it otherwise, and having it would help with debugging packer templates.
|
process
|
output from ovftool in vsphere post processor is not shown in the log issue from here when vsphere post processor runs ovftool stdout stderr from ovftool gets swallowed it would be better if it is posted to packer log as there is no other way of getting it otherwise and having it would help with debugging packer templates
| 1
|
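Forwarding a child process's output into the caller's log — what the record above asks the vSphere post-processor to do with ovftool — is a small subprocess pattern. The sketch below is generic Python, not Packer's actual Go code:

```python
import subprocess

def run_and_log(cmd, log):
    """Run `cmd`, streaming merged stdout/stderr lines into `log`."""
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave stderr with stdout
        text=True,
    )
    for line in proc.stdout:
        log.append(line.rstrip("\n"))
    return proc.wait()
```

Streaming line by line (rather than collecting output after exit) is what makes long ovftool exports debuggable while they run.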
16,048
| 20,192,886,544
|
IssuesEvent
|
2022-02-11 07:51:02
|
soederpop/active-mdx-software-project-test-repo
|
https://api.github.com/repos/soederpop/active-mdx-software-project-test-repo
|
closed
|
A customer should be able to pay with a credit card
|
story-created epic-payment-processing
|
# A customer should be able to pay with a credit card
As a customer I want to be able to pay with a credit card so I can complete my order
|
1.0
|
A customer should be able to pay with a credit card - # A customer should be able to pay with a credit card
As a customer I want to be able to pay with a credit card so I can complete my order
|
process
|
a customer should be able to pay with a credit card a customer should be able to pay with a credit card as a customer i want to be able to pay with a credit card so i can complete my order
| 1
|
115,513
| 17,315,794,830
|
IssuesEvent
|
2021-07-27 05:49:09
|
LingalaShalini/sqlite-3.22.0_after_fix
|
https://api.github.com/repos/LingalaShalini/sqlite-3.22.0_after_fix
|
opened
|
CVE-2019-19646 (High) detected in sqliteversion-3.22.0
|
security vulnerability
|
## CVE-2019-19646 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/LingalaShalini/sqlite-3.22.0_after_fix/commit/769d541be66bc2d750fe0f57ee6241b67a13343d">769d541be66bc2d750fe0f57ee6241b67a13343d</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>sqlite-3.22.0_after_fix/src/pragma.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
pragma.c in SQLite through 3.30.1 mishandles NOT NULL in an integrity_check PRAGMA command in certain cases of generated columns.
<p>Publish Date: 2019-12-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19646>CVE-2019-19646</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19646">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19646</a></p>
<p>Release Date: 2019-12-09</p>
<p>Fix Resolution: 3.31.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-19646 (High) detected in sqliteversion-3.22.0 - ## CVE-2019-19646 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/LingalaShalini/sqlite-3.22.0_after_fix/commit/769d541be66bc2d750fe0f57ee6241b67a13343d">769d541be66bc2d750fe0f57ee6241b67a13343d</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>sqlite-3.22.0_after_fix/src/pragma.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
pragma.c in SQLite through 3.30.1 mishandles NOT NULL in an integrity_check PRAGMA command in certain cases of generated columns.
<p>Publish Date: 2019-12-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19646>CVE-2019-19646</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19646">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-19646</a></p>
<p>Release Date: 2019-12-09</p>
<p>Fix Resolution: 3.31.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in sqliteversion cve high severity vulnerability vulnerable library sqliteversion official git mirror of the sqlite source tree library home page a href found in head commit a href found in base branch master vulnerable source files sqlite after fix src pragma c vulnerability details pragma c in sqlite through mishandles not null in an integrity check pragma command in certain cases of generated columns publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
30,753
| 13,306,605,047
|
IssuesEvent
|
2020-08-25 20:32:45
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Additional internal load balancer
|
Pri2 container-service/svc cxp product-question triaged
|
This Kubernetes page (https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx) mentions that it would be possible to create an additional internal load balancer, thus exposing the ingress controller with both public IP and private IP at the same time. Is this possible with AKS? If so, can the documentation be created?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 25afcc19-0998-3583-f6d8-e2a619c0e071
* Version Independent ID: 7629191a-6b3b-fb22-8b70-fa84fd785f38
* Content: [Create an internal load balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/internal-lb)
* Content Source: [articles/aks/internal-lb.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/internal-lb.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
1.0
|
Additional internal load balancer - This Kubernetes page (https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx) mentions that it would be possible to create an additional internal Load balancer, thus exposing the ingress controller with both public ip and private ip at the same time. Is this possible with AKS ? If so can the documentation be created?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 25afcc19-0998-3583-f6d8-e2a619c0e071
* Version Independent ID: 7629191a-6b3b-fb22-8b70-fa84fd785f38
* Content: [Create an internal load balancer - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/internal-lb)
* Content Source: [articles/aks/internal-lb.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/internal-lb.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
non_process
|
additional internal load balancer this kubernetes page mentions that it would be possible to create an additional internal load balancer thus exposing the ingress controller with both public ip and private ip at the same time is this possible with aks if so can the documentation be created document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
| 0
|
539,789
| 15,794,874,535
|
IssuesEvent
|
2021-04-02 11:59:27
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
Devportal Overview - Redesign
|
API-M 4.0.0 Priority/Highest React-UI Type/Improvement
|
### Describe your problem(s)
The proposed new design implementation for devportal overview page.
https://xd.adobe.com/view/67c2ddac-11d7-4ebc-8f34-2544f4d01770-3291/screen/fae73d72-916c-4beb-bef0-390baea59f4c/
|
1.0
|
Devportal Overview - Redesign - ### Describe your problem(s)
The proposed new design implementation for devportal overview page.
https://xd.adobe.com/view/67c2ddac-11d7-4ebc-8f34-2544f4d01770-3291/screen/fae73d72-916c-4beb-bef0-390baea59f4c/
|
non_process
|
devportal overview redesign describe your problem s the proposed new design implementation for devportal overview page
| 0
|
208,640
| 15,897,065,194
|
IssuesEvent
|
2021-04-11 19:39:20
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: sqlsmith/setup=seed/setting=no-mutations failed
|
C-test-failure O-roachtest O-robot branch-release-21.1 release-blocker
|
[(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=buildLog) on [release-21.1@5323c44e1c3d3d9ded95a9f20be5ba738cd542ec](https://github.com/cockroachdb/cockroach/commits/5323c44e1c3d3d9ded95a9f20be5ba738cd542ec):
```
The test failed on branch=release-21.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1
sqlsmith.go:206,sqlsmith.go:237,test_runner.go:767: error: pq: internal error: unexpected error from the vectorized engine: runtime error: index out of range [4] with length 3
stmt:
SELECT
regr_avgy(tab_287._float8::FLOAT8, tab_287._float8::FLOAT8)::FLOAT8 AS col_678,
3888113386:::OID AS col_679,
var_samp(tab_287._int2::INT8)::DECIMAL AS col_680
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_287
GROUP BY
tab_287.tableoid, tab_287._int2, tab_287._jsonb, tab_287.crdb_internal_mvcc_timestamp, tab_287._float8
ORDER BY
tab_287.tableoid ASC
LIMIT
68:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: sqlsmith/setup=seed/setting=no-mutations failed - [(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=buildLog) on [release-21.1@5323c44e1c3d3d9ded95a9f20be5ba738cd542ec](https://github.com/cockroachdb/cockroach/commits/5323c44e1c3d3d9ded95a9f20be5ba738cd542ec):
```
The test failed on branch=release-21.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1
sqlsmith.go:206,sqlsmith.go:237,test_runner.go:767: error: pq: internal error: unexpected error from the vectorized engine: runtime error: index out of range [4] with length 3
stmt:
SELECT
regr_avgy(tab_287._float8::FLOAT8, tab_287._float8::FLOAT8)::FLOAT8 AS col_678,
3888113386:::OID AS col_679,
var_samp(tab_287._int2::INT8)::DECIMAL AS col_680
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_287
GROUP BY
tab_287.tableoid, tab_287._int2, tab_287._jsonb, tab_287.crdb_internal_mvcc_timestamp, tab_287._float8
ORDER BY
tab_287.tableoid ASC
LIMIT
68:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest sqlsmith setup seed setting no mutations failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup seed setting no mutations run sqlsmith go sqlsmith go test runner go error pq internal error unexpected error from the vectorized engine runtime error index out of range with length stmt select regr avgy tab tab as col oid as col var samp tab decimal as col from defaultdb public seed seed date idx as tab group by tab tableoid tab tab jsonb tab crdb internal mvcc timestamp tab order by tab tableoid asc limit more artifacts powered by
| 0
|
403,771
| 27,434,803,436
|
IssuesEvent
|
2023-03-02 06:15:08
|
PaloAltoNetworks/pan.dev
|
https://api.github.com/repos/PaloAltoNetworks/pan.dev
|
opened
|
Issue/Help with "Get Runtime Container Audit Events"
|
documentation
|
## Documentation link
https://pan.dev/prisma-cloud/api/cwpp/get-audits-runtime-container/
## Describe the problem
we can't check latest container runtime audits by using this api
it shows only 50 container runtime audits which starts from initial audits.
we checked that other API give us 50 audits, which starts from latest audits but only this API gives us different result.
## Suggested fix
Let this API to give us latest container runtime audits
|
1.0
|
Issue/Help with "Get Runtime Container Audit Events" - ## Documentation link
https://pan.dev/prisma-cloud/api/cwpp/get-audits-runtime-container/
## Describe the problem
we can't check latest container runtime audits by using this api
it shows only 50 container runtime audits which starts from initial audits.
we checked that other API give us 50 audits, which starts from latest audits but only this API gives us different result.
## Suggested fix
Let this API to give us latest container runtime audits
|
non_process
|
issue help with get runtime container audit events documentation link describe the problem we can t check latest container runtime audits by using this api it shows only container runtime audits which starts from initial audits we checked that other api give us audits which starts from latest audits but only this api gives us different result suggested fix let this api to give us latest container runtime audits
| 0
|
11,477
| 14,344,218,793
|
IssuesEvent
|
2020-11-28 13:27:39
|
emacs-ess/ESS
|
https://api.github.com/repos/emacs-ess/ESS
|
closed
|
ess-command does not preserve .Last.value
|
process:eval
|
Hi,
I am not sure if I should post it here or at the company-mode repo. But anyways, .Last.value is totally broken with company-mode turn on. To reproduce this, `M-x R`, then in the R inferior buffer, type `1 + 1` and hit `RET`. Then type in `Last`, wait for company-mode to complete it to `Last.value`, and hit `RET`. Instead of giving a 2, it gives either a long vector of all possible completions or just `character(0)`, depending on whether `company-mode` is already initiated or not. This behavior makes `Last.value` unusable.
Thanks
|
1.0
|
ess-command does not preserve .Last.value - Hi,
I am not sure if I should post it here or at the company-mode repo. But anyways, .Last.value is totally broken with company-mode turn on. To reproduce this, `M-x R`, then in the R inferior buffer, type `1 + 1` and hit `RET`. Then type in `Last`, wait for company-mode to complete it to `Last.value`, and hit `RET`. Instead of giving a 2, it gives either a long vector of all possible completions or just `character(0)`, depending on whether `company-mode` is already initiated or not. This behavior makes `Last.value` unusable.
Thanks
|
process
|
ess command does not preserve last value hi i am not sure if i should post it here or at the company mode repo but anyways last value is totally broken with company mode turn on to reproduce this m x r then in the r inferior buffer type and hit ret then type in last wait for company mode to complete it to last value and hit ret instead of giving a it gives either a long vector of all possible completions or just character depending on whether company mode is already initiated or not this behavior makes last value unusable thanks
| 1
|
15,586
| 19,706,811,951
|
IssuesEvent
|
2022-01-12 23:10:33
|
googleapis/nodejs-bigtable
|
https://api.github.com/repos/googleapis/nodejs-bigtable
|
closed
|
Your .repo-metadata.json file has a problem 🤒
|
type: process api: bigtable repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
147,347
| 13,206,062,537
|
IssuesEvent
|
2020-08-14 19:19:54
|
airctic/mantisshrimp
|
https://api.github.com/repos/airctic/mantisshrimp
|
opened
|
Add EffecientDet README
|
documentation enhancement good first issue
|
Add EffecientDet README to both the repo and the Models Section in Docs in same way as in for Fast RCNN
|
1.0
|
Add EffecientDet README - Add EffecientDet README to both the repo and the Models Section in Docs in same way as in for Fast RCNN
|
non_process
|
add effecientdet readme add effecientdet readme to both the repo and the models section in docs in same way as in for fast rcnn
| 0
|
21,306
| 11,627,582,503
|
IssuesEvent
|
2020-02-27 16:46:06
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
az acr helm install-cli
|
Container Registry Feature Request Needs Triage Service Attention
|
**Is your feature request related to a problem? Please describe.**
It bad to relay on external installations of helm. There is no official docker image with helm pre installed
**Describe the solution you'd like**
An easy way to install helm cli e.g. `az acr helm install-cli` which is similar to `az aks install-cli`
|
1.0
|
az acr helm install-cli - **Is your feature request related to a problem? Please describe.**
It bad to relay on external installations of helm. There is no official docker image with helm pre installed
**Describe the solution you'd like**
An easy way to install helm cli e.g. `az acr helm install-cli` which is similar to `az aks install-cli`
|
non_process
|
az acr helm install cli is your feature request related to a problem please describe it bad to relay on external installations of helm there is no official docker image with helm pre installed describe the solution you d like an easy way to install helm cli e g az acr helm install cli which is similar to az aks install cli
| 0
|
105,513
| 23,064,474,031
|
IssuesEvent
|
2022-07-25 12:51:32
|
ijlee2/ember-codemod-pod-to-octane
|
https://api.github.com/repos/ijlee2/ember-codemod-pod-to-octane
|
closed
|
Account for Sass
|
enhance: code
|
The codemod supports `ember-css-modules`, which then supports [Sass](https://github.com/dfreeman/ember-css-modules-sass). Update the migration strategies for stylesheets by checking for the file extension `.scss`.
|
1.0
|
Account for Sass - The codemod supports `ember-css-modules`, which then supports [Sass](https://github.com/dfreeman/ember-css-modules-sass). Update the migration strategies for stylesheets by checking for the file extension `.scss`.
|
non_process
|
account for sass the codemod supports ember css modules which then supports update the migration strategies for stylesheets by checking for the file extension scss
| 0
|
310,210
| 9,487,081,689
|
IssuesEvent
|
2019-04-22 15:49:59
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Removal of the DummyModel
|
priority/quality-of-life topic/orm type/accepted feature
|
The dummy model used by QueryBuilder should be removed and replaced by aldjemy generated models
|
1.0
|
Removal of the DummyModel - The dummy model used by QueryBuilder should be removed and replaced by aldjemy generated models
|
non_process
|
removal of the dummymodel the dummy model used by querybuilder should be removed and replaced by aldjemy generated models
| 0
|
18,905
| 24,844,810,079
|
IssuesEvent
|
2022-10-26 15:07:48
|
googleapis/gax-java
|
https://api.github.com/repos/googleapis/gax-java
|
closed
|
BatcherImplTest.testThrottlingBlocking still flaky
|
type: process priority: p3
|
The test is known to be flaky. So far, we did #1589 and #1600. (However, although it may just be me, I feel this became less flaky.)
Unfortunately, this failed again in #1613. The bright side of it is that we have a different error. Hopefully this reveals more information.
```
com.google.api.gax.batching.BatcherImplTest > testThrottlingBlocking FAILED
org.mockito.exceptions.base.MockitoException:
No argument value was captured!
You might have forgotten to use argument.capture() in verify()...
...or you used capture() in stubbing but stubbed method was not called.
Be aware that it is recommended to use capture() only with verify()
Examples of correct argument capturing:
ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class);
verify(mock).doSomething(argument.capture());
assertEquals("John", argument.getValue().getName());
at com.google.api.gax.batching.BatcherImplTest.testThrottlingBlocking(BatcherImplTest.java:895)
```
|
1.0
|
BatcherImplTest.testThrottlingBlocking still flaky - The test is known to be flaky. So far, we did #1589 and #1600. (However, although it may just be me, I feel this became less flaky.)
Unfortunately, this failed again in #1613. The bright side of it is that we have a different error. Hopefully this reveals more information.
```
com.google.api.gax.batching.BatcherImplTest > testThrottlingBlocking FAILED
org.mockito.exceptions.base.MockitoException:
No argument value was captured!
You might have forgotten to use argument.capture() in verify()...
...or you used capture() in stubbing but stubbed method was not called.
Be aware that it is recommended to use capture() only with verify()
Examples of correct argument capturing:
ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class);
verify(mock).doSomething(argument.capture());
assertEquals("John", argument.getValue().getName());
at com.google.api.gax.batching.BatcherImplTest.testThrottlingBlocking(BatcherImplTest.java:895)
```
|
process
|
batcherimpltest testthrottlingblocking still flaky the test is known to be flaky so far we did and however although it may just be me i feel this became less flaky unfortunately this failed again in the bright side of it is that we have a different error hopefully this reveals more information com google api gax batching batcherimpltest testthrottlingblocking failed org mockito exceptions base mockitoexception no argument value was captured you might have forgotten to use argument capture in verify or you used capture in stubbing but stubbed method was not called be aware that it is recommended to use capture only with verify examples of correct argument capturing argumentcaptor argument argumentcaptor forclass person class verify mock dosomething argument capture assertequals john argument getvalue getname at com google api gax batching batcherimpltest testthrottlingblocking batcherimpltest java
| 1
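Across the records above, the normalized `text` column can be reproduced from `text_combine` by lowercasing, splitting on runs of non-alphanumeric characters, and dropping every token that contains a digit (e.g. `BatcherImplTest.java:895` → `batcherimpltest java`, and `` `0-db` v2.0.0-rc3`` → `db`). The following is a reconstruction inferred from the rows shown here, not the dataset's actual preprocessing code; the function name `normalize` is mine:

```python
import re

def normalize(text_combine: str) -> str:
    """Lowercase, split on non-alphanumeric runs, and drop any token
    containing a digit -- matching the `text` column observed above."""
    tokens = re.split(r"[^0-9a-z]+", text_combine.lower())
    return " ".join(t for t in tokens if t and not any(c.isdigit() for c in t))
```

For instance, `normalize("Next QSFS version will contains: `0-db` v2.0.0-rc3")` yields `"next qsfs version will contains db"`, matching the quantum-storage row above.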
|
1,457
| 4,030,199,614
|
IssuesEvent
|
2016-05-18 13:34:19
|
mapzen/leaflet-geocoder
|
https://api.github.com/repos/mapzen/leaflet-geocoder
|
closed
|
Support other types of bounds
|
processed
|
The `bounds` option should accept an object whose key-value pairs decompose to valid Mapzen Search boundary filtering parameters. See [documentation](https://mapzen.com/documentation/search/search/#combine-boundary-search-and-prioritization).
e.g.
```js
L.control.geocoder('<your-api-key>', {
bounds: {
rect: {
min_lat: 25.84,
min_lon: -106.65,
max_lat: 36.5,
max_lon: -93.51,
},
circle: {
lat: -33.856,
lon: 151.215,
radius: 50,
},
country: "AUS",
wof: "wofid"
}
}).addTo(map);
```
Passing in `rect` property is analogous to passing in a L.latLngBounds() object.
We could generalize this such that Mapzen Search could add additional boundary search types without needing to update the plugin.
|
1.0
|
Support other types of bounds - The `bounds` option should accept an object whose key-value pairs decompose to valid Mapzen Search boundary filtering parameters. See [documentation](https://mapzen.com/documentation/search/search/#combine-boundary-search-and-prioritization).
e.g.
```js
L.control.geocoder('<your-api-key>', {
bounds: {
rect: {
min_lat: 25.84,
min_lon: -106.65,
max_lat: 36.5,
max_lon: -93.51,
},
circle: {
lat: -33.856,
lon: 151.215,
radius: 50,
},
country: "AUS",
wof: "wofid"
}
}).addTo(map);
```
Passing in `rect` property is analogous to passing in a L.latLngBounds() object.
We could generalize this such that Mapzen Search could add additional boundary search types without needing to update the plugin.
|
process
|
support other types of bounds the bounds option should accept an object whose key value pairs decompose to valid mapzen search boundary filtering parameters see e g js l control geocoder bounds rect min lat min lon max lat max lon circle lat lon radius country aus wof wofid addto map passing in rect property is analogous to passing in a l latlngbounds object we could generalize this such that mapzen search could add additional boundary search types without needing to update the plugin
| 1
|
20,814
| 27,577,979,677
|
IssuesEvent
|
2023-03-08 14:22:20
|
threefoldtech/quantum-storage
|
https://api.github.com/repos/threefoldtech/quantum-storage
|
closed
|
release: pack requirements for next version
|
type_story process_wontfix
|
Next QSFS version will contains:
- `0-db` v2.0.0-rc3
- `0-db-fs` v0.1.7
- `0-stor` v0.3.0-rc.1
- Configurations
Theses build needs to be staticly compiled since every project attached now support that (previously, we had to ship some shared libraries).
This needs to build on latest V release.
|
1.0
|
release: pack requirements for next version - Next QSFS version will contains:
- `0-db` v2.0.0-rc3
- `0-db-fs` v0.1.7
- `0-stor` v0.3.0-rc.1
- Configurations
Theses build needs to be staticly compiled since every project attached now support that (previously, we had to ship some shared libraries).
This needs to build on latest V release.
|
process
|
release pack requirements for next version next qsfs version will contains db db fs stor rc configurations theses build needs to be staticly compiled since every project attached now support that previously we had to ship some shared libraries this needs to build on latest v release
| 1
|
12,640
| 15,016,707,542
|
IssuesEvent
|
2021-02-01 09:55:46
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
document server is not available after deployed
|
process_duplicate type_bug
|
<img width="689" alt="Screenshot 2021-01-28 at 12 56 12" src="https://user-images.githubusercontent.com/43240801/106135860-c4bac680-6168-11eb-81ea-226b4118e574.png">
|
1.0
|
document server is not available after deployed - <img width="689" alt="Screenshot 2021-01-28 at 12 56 12" src="https://user-images.githubusercontent.com/43240801/106135860-c4bac680-6168-11eb-81ea-226b4118e574.png">
|
process
|
document server is not available after deployed img width alt screenshot at src
| 1
|
6,457
| 9,546,564,922
|
IssuesEvent
|
2019-05-01 20:19:13
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Department of State: Ineligible GPA
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Student
What: Notification of ineligibility due to GPA
Why: The student must have a GPA of 3.0 or higher to be eligible for the program
A/C
- There will be a header "You are ineligible"
- There will be an alert style box with the following content:
- Your GPA is below 3.0 (Bold)
- You're not eligible for this internship because your GPA doesn't meet the minimum requirement.
You must have a GPA of at least 3.0 or higher to be eligible for the U.S. Department of State
Internship Program (Unpaid)
- If a student does not have a 3.0 or higher GPA they will not be allowed to proceed with their application.
- Learn more about virtual internships will be a link that opens in a new window and takes the user to: https://vsfs.state.gov/
- Learn more about other student programs will be a link that opens in a new window and takes the user to: https://careers.state.gov/intern/other-programs/
- Learn more about the Pathways Program will be a link that opens in a new window and takes the user to: https://careers.state.gov/intern/pathways/
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/333437563/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
|
1.0
|
Department of State: Ineligible GPA - Who: Student
What: Notification of ineligibility due to GPA
Why: The student must have a GPA of 3.0 or higher to be eligible for the program
A/C
- There will be a header "You are ineligible"
- There will be an alert style box with the following content:
- Your GPA is below 3.0 (Bold)
- You're not eligible for this internship because your GPA doesn't meet the minimum requirement.
You must have a GPA of at least 3.0 or higher to be eligible for the U.S. Department of State
Internship Program (Unpaid)
- If a student does not have a 3.0 or higher GPA they will not be allowed to proceed with their application.
- Learn more about virtual internships will be a link that opens in a new window and takes the user to: https://vsfs.state.gov/
- Learn more about other student programs will be a link that opens in a new window and takes the user to: https://careers.state.gov/intern/other-programs/
- Learn more about the Pathways Program will be a link that opens in a new window and takes the user to: https://careers.state.gov/intern/pathways/
InVision Mock: https://opm.invisionapp.com/d/main/#/console/15360465/333437563/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54
|
process
|
department of state ineligible gpa who student what notification of ineligibility due to gpa why the student must have a gpa of or higher to be eligible for the program a c there will be a header you are ineligible there will be an alert style box with the following content your gpa is below bold you re not eligible for this internship because your gpa doesn t meet the minimum requirement you must have a gpa of at least or higher to be eligible for the u s department of state internship program unpaid if a student does not have a or higher gpa they will not be allowed to proceed with their application learn more about virtual internships will be a link that opens in a new window and takes the user to learn more about other student programs will be a link that opens in a new window and takes the user to learn more about the pathways program will be a link that opens in a new window and takes the user to invision mock public link
| 1
|
359,613
| 25,246,693,942
|
IssuesEvent
|
2022-11-15 11:38:45
|
kedro-org/kedro
|
https://api.github.com/repos/kedro-org/kedro
|
opened
|
Investigate use of docsify to build docs to github pages
|
Component: Documentation 📄
|
[Docsify](https://docsify.js.org/#/) could be an alternative to Sphinx if we publish to github pages rather than readthedocs.
We need to spend some time as a spike to evaluate it; I'd like to make sure to earmark some of my time nearer the end of the year to fork Kedro and try the docs on my personal repo.
|
1.0
|
Investigate use of docsify to build docs to github pages - [Docsify](https://docsify.js.org/#/) could be an alternative to Sphinx if we publish to github pages rather than readthedocs.
We need to spend some time as a spike to evaluate it; I'd like to make sure to earmark some of my time nearer the end of the year to fork Kedro and try the docs on my personal repo.
|
non_process
|
investigate use of docsify to build docs to github pages could be an alternative to sphinx if we publish to github pages rather than readthedocs we need to spend some time as a spike to evaluate it i d like to make sure to earmark some of my time nearer the end of the year to fork kedro and try the docs on my personal repo
| 0
|
266,433
| 20,151,794,279
|
IssuesEvent
|
2022-02-09 13:07:46
|
MartinBruun/P6
|
https://api.github.com/repos/MartinBruun/P6
|
opened
|
4: Make it possible for the monitoring box to work without internet access
|
Need documentation Need grooming 0: Suggestion Improvement
|
**What should be improved upon and why?**
Because the monitoring box doesnt have access to the internet by cable we need to find another solution so we avoid problems with access to the database down the road, and how to monitor payments etc.
**How do you see it can be improved?**
Option 1 might be to make the mobile serve as intermediary between the monitoring box and the backend
Option 2 might be to just only make it possible by cable
**Any extra context, videos or screenshots that are relevant to this issue?**
Add context here if necessary
|
1.0
|
4: Make it possible for the monitoring box to work without internet access - **What should be improved upon and why?**
Because the monitoring box doesnt have access to the internet by cable we need to find another solution so we avoid problems with access to the database down the road, and how to monitor payments etc.
**How do you see it can be improved?**
Option 1 might be to make the mobile serve as intermediary between the monitoring box and the backend
Option 2 might be to just only make it possible by cable
**Any extra context, videos or screenshots that are relevant to this issue?**
Add context here if necessary
|
non_process
|
make it possible for the monitoring box to work without internet access what should be improved upon and why because the monitoring box doesnt have access to the internet by cable we need to find another solution so we avoid problems with access to the database down the road and how to monitor payments etc how do you see it can be improved option might be to make the mobile serve as intermediary between the monitoring box and the backend option might be to just only make it possible by cable any extra context videos or screenshots that are relevant to this issue add context here if necessary
| 0
|
119,303
| 12,025,952,427
|
IssuesEvent
|
2020-04-12 11:55:25
|
HarleyCooper88/IOS-TREATMENT
|
https://api.github.com/repos/HarleyCooper88/IOS-TREATMENT
|
reopened
|
iOS password app refrained
|
bug documentation duplicate enhancement good first issue invalid wontfix
|
:code IOS
:masterversions -Redeem newest cersion codes
Ctrl -ALT:CooperFiles :CooperH Masterbreach Licences:HC15.1
|
1.0
|
iOS password app refrained - :code IOS
:masterversions -Redeem newest cersion codes
Ctrl -ALT:CooperFiles :CooperH Masterbreach Licences:HC15.1
|
non_process
|
ios password app refrained code ios masterversions redeem newest cersion codes ctrl alt cooperfiles cooperh masterbreach licences
| 0
|
4,835
| 7,726,314,119
|
IssuesEvent
|
2018-05-24 20:47:27
|
kaching-hq/Privacy-and-Security
|
https://api.github.com/repos/kaching-hq/Privacy-and-Security
|
opened
|
Develop an internal Data Protection Policy
|
Processes
|
The policy should clearly set out our approach to data protection together with responsibilities for implementing the policy and monitoring compliance.
We need to make sure that management approves the policy and that we publish and communicate it to all staff.
We should also review and update it at planned intervals or when required to ensure it remains relevant.
|
1.0
|
Develop an internal Data Protection Policy - The policy should clearly set out our approach to data protection together with responsibilities for implementing the policy and monitoring compliance.
We need to make sure that management approves the policy and that we publish and communicate it to all staff.
We should also review and update it at planned intervals or when required to ensure it remains relevant.
|
process
|
develop an internal data protection policy the policy should clearly set out our approach to data protection together with responsibilities for implementing the policy and monitoring compliance we need to make sure that management approves the policy and that we publish and communicate it to all staff we should also review and update it at planned intervals or when required to ensure it remains relevant
| 1
|
188,249
| 6,774,456,030
|
IssuesEvent
|
2017-10-27 10:25:42
|
ballerinalang/composer
|
https://api.github.com/repos/ballerinalang/composer
|
closed
|
[Blocker] Ballerina composer is not starting in safari
|
Priority/High
|
Safari Version 11.0
Please refer the below screen dump

```
[Error] Warning: Accessing PropTypes via the main React package is deprecated. Use the prop-types package from npm instead.
printWarning (bundle.js:22236)
warning (bundle.js:22260)
get (bundle.js:52121)
(anonymous function) (bundle.js:313444)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226722)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226985)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:229020)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:228030)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:235062)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:212357)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:207934)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:173385)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:340426)
__webpack_require__ (bundle.js:658)
(anonymous function) (bundle.js:707)
Global Code (bundle.js:708)
[Error] Warning: Collapsible: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you're not yet ready to migrate, create-react-class is available on npm as a drop-in replacement.
printWarning (bundle.js:22236)
warning (bundle.js:22260)
createClass (bundle.js:335467)
(anonymous function) (bundle.js:313438)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226722)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226985)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:229020)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:228030)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:235062)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:212357)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:207934)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:173385)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:340426)
__webpack_require__ (bundle.js:658)
(anonymous function) (bundle.js:707)
Global Code (bundle.js:708)
[Warning] Warning : Dependency missing - Select2 Library (bundle.js, line 207824)
[Error] Error while starting app. 'arguments', 'callee', and 'caller' cannot be accessed in this context.. [native code]
http://localhost:8080/dist/bundle.js:211394:14
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
Application@http://localhost:8080/dist/bundle.js:208462:39
http://localhost:8080/dist/bundle.js:173397:36
promiseReactionJob@[native code]
append (bundle.js:101300:90)
doAppend (bundle.js:101233:444)
callAppenders (bundle.js:101216:291)
log (bundle.js:101216:96)
error (bundle.js:101220:245)
promiseReactionJob
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (bootstrap.min.css.map, line 0)
[Log] [WDS] Hot Module Replacement enabled. (bundle.js, line 81983)
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (validation.worker.js.map, line 0)
```
|
1.0
|
[Blocker] Ballerina composer is not starting in safari - Safari Version 11.0
Please refer the below screen dump

```
[Error] Warning: Accessing PropTypes via the main React package is deprecated. Use the prop-types package from npm instead.
printWarning (bundle.js:22236)
warning (bundle.js:22260)
get (bundle.js:52121)
(anonymous function) (bundle.js:313444)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226722)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226985)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:229020)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:228030)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:235062)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:212357)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:207934)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:173385)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:340426)
__webpack_require__ (bundle.js:658)
(anonymous function) (bundle.js:707)
Global Code (bundle.js:708)
[Error] Warning: Collapsible: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you're not yet ready to migrate, create-react-class is available on npm as a drop-in replacement.
printWarning (bundle.js:22236)
warning (bundle.js:22260)
createClass (bundle.js:335467)
(anonymous function) (bundle.js:313438)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226722)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:226985)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:229020)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:228030)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:235062)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:212357)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:207934)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:173385)
__webpack_require__ (bundle.js:658)
fn (bundle.js:86)
(anonymous function) (bundle.js:340426)
__webpack_require__ (bundle.js:658)
(anonymous function) (bundle.js:707)
Global Code (bundle.js:708)
[Warning] Warning : Dependency missing - Select2 Library (bundle.js, line 207824)
[Error] Error while starting app. 'arguments', 'callee', and 'caller' cannot be accessed in this context.. [native code]
http://localhost:8080/dist/bundle.js:211394:14
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
http://localhost:8080/dist/bundle.js:211395:25
forEach@[native code]
makeImutable@http://localhost:8080/dist/bundle.js:211393:42
Application@http://localhost:8080/dist/bundle.js:208462:39
http://localhost:8080/dist/bundle.js:173397:36
promiseReactionJob@[native code]
append (bundle.js:101300:90)
doAppend (bundle.js:101233:444)
callAppenders (bundle.js:101216:291)
log (bundle.js:101216:96)
error (bundle.js:101220:245)
promiseReactionJob
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (bootstrap.min.css.map, line 0)
[Log] [WDS] Hot Module Replacement enabled. (bundle.js, line 81983)
[Error] Failed to load resource: the server responded with a status of 404 (Not Found) (validation.worker.js.map, line 0)
```
|
non_process
|
ballerina composer is not starting in safari safari version please refer the below screen dump warning accessing proptypes via the main react package is deprecated use the prop types package from npm instead printwarning bundle js warning bundle js get bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js anonymous function bundle js global code bundle js warning collapsible react createclass is deprecated and will be removed in version use plain javascript classes instead if you re not yet ready to migrate create react class is available on npm as a drop in replacement printwarning bundle js warning bundle js createclass bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js fn bundle js anonymous function bundle js webpack require bundle js anonymous function bundle js global code bundle js warning dependency missing library bundle js line error while starting app arguments callee and caller cannot be accessed in this context foreach makeimutable foreach makeimutable foreach makeimutable foreach makeimutable application promisereactionjob append bundle js doappend bundle js callappenders bundle js log bundle js error bundle js promisereactionjob failed to load resource the server responded with a status of not found bootstrap min css map line hot module replacement enabled bundle js line failed to load resource the server responded with a status of not found validation worker js map line
| 0
|
14,463
| 10,167,342,398
|
IssuesEvent
|
2019-08-07 17:58:49
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Implement GeoCode columns
|
Need: 2-Should Have Project: Vision Zero Crash Data System Service: Dev Type: DevOps Workgroup: VZ
|
As John suggested:
- [x] the original lat/lon from chris (two columns)
- [x] our geocoded results (two columns)
- [x] the QA/confirmed lat/lon (two colums)
- [x] the QA status (confirmed, pending, ...?)
Let's run with it!
|
1.0
|
Implement GeoCode columns - As John suggested:
- [x] the original lat/lon from chris (two columns)
- [x] our geocoded results (two columns)
- [x] the QA/confirmed lat/lon (two colums)
- [x] the QA status (confirmed, pending, ...?)
Let's run with it!
|
non_process
|
implement geocode columns as john suggested the original lat lon from chris two columns our geocoded results two columns the qa confirmed lat lon two colums the qa status confirmed pending let s run with it
| 0
|
5,234
| 7,769,443,735
|
IssuesEvent
|
2018-06-04 03:58:57
|
keymanapp/keyman
|
https://api.github.com/repos/keymanapp/keyman
|
closed
|
iOS - update to 11.0 SDK
|
compatibility ios
|
Per Apple advice, submissions to the App Store from April onwards must be built with the iOS 11.0 SDK.
|
True
|
iOS - update to 11.0 SDK - Per Apple advice, submissions to the App Store from April onwards must be built with the iOS 11.0 SDK.
|
non_process
|
ios update to sdk per apple advice submissions to the app store from april onwards must be built with the ios sdk
| 0
|
177,200
| 6,575,533,845
|
IssuesEvent
|
2017-09-11 16:24:12
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
compose: Messages suddenly disappear mid-edit when another message is sent during editing
|
area: compose area: message-editing bug in progress priority: high
|
When someone is editing a previous message and another user (or self) sends another message to the same stream/topic/PM, the edited message content and compose box completely disappear altogether.
[](https://gyazo.com/47c2865170e997d58444e93d8332c728)
Similar behavior occurs when one is editing a message and selects a draft to compose.
|
1.0
|
compose: Messages suddenly disappear mid-edit when another message is sent during editing - When someone is editing a previous message and another user (or self) sends another message to the same stream/topic/PM, the edited message content and compose box completely disappear altogether.
[](https://gyazo.com/47c2865170e997d58444e93d8332c728)
Similar behavior occurs when one is editing a message and selects a draft to compose.
|
non_process
|
compose messages suddenly disappear mid edit when another message is sent during editing when someone is editing a previous message and another user or self sends another message to the same stream topic pm the edited message content and compose box completely disappear altogether similar behavior occurs when one is editing a message and selects a draft to compose
| 0
|
189,245
| 14,495,817,771
|
IssuesEvent
|
2020-12-11 11:47:37
|
kalexmills/github-vet-tests-dec2020
|
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
|
closed
|
blake-education/dogestry: vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go; 25 LoC
|
fresh small test vendored
|
Found a possible issue in [blake-education/dogestry](https://www.github.com/blake-education/dogestry) at [vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go](https://github.com/blake-education/dogestry/blob/4a114c2dd3dc4b0d8df1385205b5f12341bd29c1/vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go#L126-L150)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable input used in defer or goroutine at line 134
[Click here to see the code in its original context.](https://github.com/blake-education/dogestry/blob/4a114c2dd3dc4b0d8df1385205b5f12341bd29c1/vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go#L126-L150)
<details>
<summary>Click here to show the 25 line(s) of Go which triggered the analyzer.</summary>
```go
for _, input := range testInputs {
expectedOutput := input
o := NewOutput()
r, err := o.AddPipe()
if err != nil {
t.Fatal(err)
}
go func(o *Output) {
if n, err := o.Write([]byte(input)); err != nil {
t.Error(err)
} else if n != len(input) {
t.Errorf("Expected %d, got %d", len(input), n)
}
if err := o.Close(); err != nil {
t.Error(err)
}
}(o)
output, err := ioutil.ReadAll(r)
if err != nil {
t.Fatal(err)
}
if string(output) != expectedOutput {
t.Errorf("Last line is not stored as return string.\nExpected: '%s'\nGot: '%s'", expectedOutput, output)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4a114c2dd3dc4b0d8df1385205b5f12341bd29c1
|
1.0
|
blake-education/dogestry: vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go; 25 LoC -
Found a possible issue in [blake-education/dogestry](https://www.github.com/blake-education/dogestry) at [vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go](https://github.com/blake-education/dogestry/blob/4a114c2dd3dc4b0d8df1385205b5f12341bd29c1/vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go#L126-L150)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable input used in defer or goroutine at line 134
[Click here to see the code in its original context.](https://github.com/blake-education/dogestry/blob/4a114c2dd3dc4b0d8df1385205b5f12341bd29c1/vendor/go/src/github.com/fsouza/go-dockerclient/engine/streams_test.go#L126-L150)
<details>
<summary>Click here to show the 25 line(s) of Go which triggered the analyzer.</summary>
```go
for _, input := range testInputs {
expectedOutput := input
o := NewOutput()
r, err := o.AddPipe()
if err != nil {
t.Fatal(err)
}
go func(o *Output) {
if n, err := o.Write([]byte(input)); err != nil {
t.Error(err)
} else if n != len(input) {
t.Errorf("Expected %d, got %d", len(input), n)
}
if err := o.Close(); err != nil {
t.Error(err)
}
}(o)
output, err := ioutil.ReadAll(r)
if err != nil {
t.Fatal(err)
}
if string(output) != expectedOutput {
t.Errorf("Last line is not stored as return string.\nExpected: '%s'\nGot: '%s'", expectedOutput, output)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4a114c2dd3dc4b0d8df1385205b5f12341bd29c1
|
non_process
|
blake education dogestry vendor go src github com fsouza go dockerclient engine streams test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable input used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for input range testinputs expectedoutput input o newoutput r err o addpipe if err nil t fatal err go func o output if n err o write byte input err nil t error err else if n len input t errorf expected d got d len input n if err o close err nil t error err o output err ioutil readall r if err nil t fatal err if string output expectedoutput t errorf last line is not stored as return string nexpected s ngot s expectedoutput output leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
311,399
| 23,384,961,591
|
IssuesEvent
|
2022-08-11 13:02:26
|
polywrap/documentation
|
https://api.github.com/repos/polywrap/documentation
|
opened
|
Shouldn't we remove wrappers-cli from the docs
|
bug documentation
|
```
wrappers CLI#
wrappers is the deprecated (but working) prototype for the polywrap CLI's deploy command. It is a command-line tool for publishing Wasm wrappers to IPFS and ENS. IPFS publishing is done via a wrap-persistence-node endpoint. By default it publishes to the public https://ipfs.wrappers.io endpoint.
```
|
1.0
|
Shouldn't we remove wrappers-cli from the docs - ```
wrappers CLI#
wrappers is the deprecated (but working) prototype for the polywrap CLI's deploy command. It is a command-line tool for publishing Wasm wrappers to IPFS and ENS. IPFS publishing is done via a wrap-persistence-node endpoint. By default it publishes to the public https://ipfs.wrappers.io endpoint.
```
|
non_process
|
shouldn t we remove wrappers cli from the docs wrappers cli wrappers is the deprecated but working prototype for the polywrap cli s deploy command it is a command line tool for publishing wasm wrappers to ipfs and ens ipfs publishing is done via a wrap persistence node endpoint by default it publishes to the public endpoint
| 0
|
2,813
| 5,738,575,944
|
IssuesEvent
|
2017-04-23 05:51:51
|
SIMEXP/niak
|
https://api.github.com/repos/SIMEXP/niak
|
closed
|
pipeline for confound regression + smoothing
|
enhancement preprocessing
|
It would be useful to have a pipeline that grabs the outputs of the preprocessing (right after resampling), and just re-generates the confound regression and smoothing. This would make it easy to explore alternative processing paths.
|
1.0
|
pipeline for confound regression + smoothing - It would be useful to have a pipeline that grabs the outputs of the preprocessing (right after resampling), and just re-generates the confound regression and smoothing. This would make it easy to explore alternative processing paths.
|
process
|
pipeline for confound regression smoothing it would be useful to have a pipeline that grabs the outputs of the preprocessing right after resampling and just re generates the confound regression and smoothing this would make it easy to explore alternative processing paths
| 1
|
147,393
| 23,211,131,307
|
IssuesEvent
|
2022-08-02 10:12:53
|
nextcloud/contacts
|
https://api.github.com/repos/nextcloud/contacts
|
closed
|
REQ: list address books under "All contacts"
|
enhancement wontfix design 0. Needs triage
|
### Is your feature request related to a problem? Please describe.
In contacts
only "All contacts" and "Not grouped" are showed at the top of left menu
### Describe the solution you'd like
list of all the Address books listed under "all contacts"
### Describe alternatives you've considered
_No response_
### Additional context
give groups a different color - easily seen in the contact details
|
1.0
|
REQ: list address books under "All contacts" - ### Is your feature request related to a problem? Please describe.
In contacts
only "All contacts" and "Not grouped" are showed at the top of left menu
### Describe the solution you'd like
list of all the Address books listed under "all contacts"
### Describe alternatives you've considered
_No response_
### Additional context
give groups a different color - easily seen in the contact details
|
non_process
|
req list address books under all contacts is your feature request related to a problem please describe in contacts only all contacts and not grouped are showed at the top of left menu describe the solution you d like list of all the address books listed under all contacts describe alternatives you ve considered no response additional context give groups a different color easily seen in the contact details
| 0
|
81,271
| 7,777,380,082
|
IssuesEvent
|
2018-06-05 11:00:37
|
Nineclown/The-Convenient-ATM
|
https://api.github.com/repos/Nineclown/The-Convenient-ATM
|
closed
|
Failed test: 해당하는 회차의 당첨번호를 받아오는 기능
|
Failed Test
|
1.0주차를 정상적인 주차로 인식한다.
2.로또 번호 0을 정상적인 번호로 인식한다.
3.현재 존재하지 않는 주차를 입력해도 정상 인식한다.
T223: 해당하는 회차의 당첨번호를 받아오는 기능
https://jaehyun379.testrail.io/index.php?/tests/view/223
|
1.0
|
Failed test: 해당하는 회차의 당첨번호를 받아오는 기능 - 1.0주차를 정상적인 주차로 인식한다.
2.로또 번호 0을 정상적인 번호로 인식한다.
3.현재 존재하지 않는 주차를 입력해도 정상 인식한다.
T223: 해당하는 회차의 당첨번호를 받아오는 기능
https://jaehyun379.testrail.io/index.php?/tests/view/223
|
non_process
|
failed test 해당하는 회차의 당첨번호를 받아오는 기능 정상적인 주차로 인식한다 로또 번호 정상적인 번호로 인식한다 현재 존재하지 않는 주차를 입력해도 정상 인식한다 해당하는 회차의 당첨번호를 받아오는 기능
| 0
|
73,542
| 7,343,680,620
|
IssuesEvent
|
2018-03-07 12:15:35
|
nodejs/node-core-utils
|
https://api.github.com/repos/nodejs/node-core-utils
|
closed
|
Add tests for cross-repository/cross-owner fixes
|
feature request good first issue test
|
Opening a separate issue so it does not get buried...
<del>After https://github.com/joyeecheung/node-core-utils/pull/70 we've fixed the fixes parsing bug (kudos to @apapirovski !) but now it would skip cross-repo fixes. I think we can reliably get the fixes url with `getRefUrlFromOP()` which parses the HTML to get the link. </del>
It works! But would need some tests to make sure it does not regress..
|
1.0
|
Add tests for cross-repository/cross-owner fixes - Opening a separate issue so it does not get buried...
<del>After https://github.com/joyeecheung/node-core-utils/pull/70 we've fixed the fixes parsing bug (kudos to @apapirovski !) but now it would skip cross-repo fixes. I think we can reliably get the fixes url with `getRefUrlFromOP()` which parses the HTML to get the link. </del>
It works! But would need some tests to make sure it does not regress..
|
non_process
|
add tests for cross repository cross owner fixes opening a separate issue so it does not get buried after we ve fixed the fixes parsing bug kudos to apapirovski but now it would skip cross repo fixes i think we can reliably get the fixes url with getrefurlfromop which parses the html to get the link it works but would need some tests to make sure it does not regress
| 0
|
3,668
| 6,700,147,857
|
IssuesEvent
|
2017-10-11 02:41:59
|
lxde/lxqt
|
https://api.github.com/repos/lxde/lxqt
|
closed
|
No folder icon appears in desktop
|
invalid/dup/rejected wont-process-this wontfix
|
Distribution Manjaro
LXDE 0.11.0
When creating a folder in the Desktop, it creates the folder without icon, only with the name of the folder, to create file it shows the icon.
The same thing only happens on the desktop, in other applications the same is normal.
Image of the same:
https://mega.nz/#!gEoRTJgK!anG_TDOFtSudzZc9lxjBbwUGqX7j5910Es3mmj-ph6g
|
1.0
|
No folder icon appears in desktop - Distribution Manjaro
LXDE 0.11.0
When creating a folder in the Desktop, it creates the folder without icon, only with the name of the folder, to create file it shows the icon.
The same thing only happens on the desktop, in other applications the same is normal.
Image of the same:
https://mega.nz/#!gEoRTJgK!anG_TDOFtSudzZc9lxjBbwUGqX7j5910Es3mmj-ph6g
|
process
|
no folder icon appears in desktop distribution manjaro lxde when creating a folder in the desktop it creates the folder without icon only with the name of the folder to create file it shows the icon the same thing only happens on the desktop in other applications the same is normal image of the same
| 1
|
15,246
| 19,183,456,550
|
IssuesEvent
|
2021-12-04 20:12:19
|
ethereum/EIPs
|
https://api.github.com/repos/ethereum/EIPs
|
closed
|
Discussion: EIP-1 - EIP Editor Criteria (process for new editors)
|
type: EIP1 (Process) stale
|
Proposal text can be found at: https://github.com/ethereum/EIPs/pull/2172
It adds an EIP Editor Criteria section to EIP-1: a transparent process for appointing new EIP editors.
There have been discussions about adding more people as editors, but there is no PR with a process proposal in this repository, at this time.
It is a start, I expect the community and current editors to participate and improve.
My approach is:
- to make the requirements somewhat strict and based on merit. I think it is better to start with "strict" and see how the process works and how many new editors are still willing to meet requirements. Afterward, the process can be updated.
- start with a process that can be implemented in Github (because this is the current medium), but could eventually be decentralized
Disclaimer: I do not fit any editor category from the proposal, not now and not in the near future.
|
1.0
|
Discussion: EIP-1 - EIP Editor Criteria (process for new editors) - Proposal text can be found at: https://github.com/ethereum/EIPs/pull/2172
It adds an EIP Editor Criteria section to EIP-1: a transparent process for appointing new EIP editors.
There have been discussions about adding more people as editors, but there is no PR with a process proposal in this repository, at this time.
It is a start, I expect the community and current editors to participate and improve.
My approach is:
- to make the requirements somewhat strict and based on merit. I think it is better to start with "strict" and see how the process works and how many new editors are still willing to meet requirements. Afterward, the process can be updated.
- start with a process that can be implemented in Github (because this is the current medium), but could eventually be decentralized
Disclaimer: I do not fit any editor category from the proposal, not now and not in the near future.
|
process
|
discussion eip eip editor criteria process for new editors proposal text can be found at it adds an eip editor criteria section to eip a transparent process for appointing new eip editors there have been discussions about adding more people as editors but there is no pr with a process proposal in this repository at this time it is a start i expect the community and current editors to participate and improve my approach is to make the requirements somewhat strict and based on merit i think it is better to start with strict and see how the process works and how many new editors are still willing to meet requirements afterward the process can be updated start with a process that can be implemented in github because this is the current medium but could eventually be decentralized disclaimer i do not fit any editor category from the proposal not now and not in the near future
| 1
|
1,571
| 4,167,233,768
|
IssuesEvent
|
2016-06-20 08:43:54
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
RAMdisk should be visible as `raw disk` when assigning roles to disks
|
process_duplicate type_feature
|
When creating a RAMdisk, it should be visible when rescanning your disks when assigning roles to disks.
|
1.0
|
RAMdisk should be visible as `raw disk` when assigning roles to disks - When creating a RAMdisk, it should be visible when rescanning your disks when assigning roles to disks.
|
process
|
ramdisk should be visible as raw disk when assigning roles to disks when creating a ramdisk it should be visible when rescanning your disks when assigning roles to disks
| 1
|
21,945
| 30,446,800,033
|
IssuesEvent
|
2023-07-15 19:28:49
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pyutils 0.0.1b5 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b5",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b5/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpgnrgfyoa/pyutils"
}
}```
|
1.0
|
pyutils 0.0.1b5 has 2 GuardDog issues - https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b5",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b5/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpgnrgfyoa/pyutils"
}
}```
|
process
|
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils src pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpgnrgfyoa pyutils
| 1

Unnamed: 0: 259,085
id: 19,585,327,925
type: IssuesEvent
created_at: 2022-01-05 05:43:41
repo: Sunbird-RC/community
repo_url: https://api.github.com/repos/Sunbird-RC/community
action: closed
title: Not able to configure Sunbird-Rc core to run properly
labels: documentation
body:
Unable to find, how keycloak needs to be configured.
Not able to find API specification
index: 1.0
text_combine:
Not able to configure Sunbird-Rc core to run properly - Unable to find, how keycloak needs to be configured.
Not able to find API specification
label: non_process
text:
not able to configure sunbird rc core to run properly unable to find how keycloak needs to be configured not able to find api specification
binary_label: 0

Unnamed: 0: 19,011
id: 25,012,602,137
type: IssuesEvent
created_at: 2022-11-03 16:17:22
repo: hashgraph/hedera-mirror-node
repo_url: https://api.github.com/repos/hashgraph/hedera-mirror-node
action: closed
title: Add Rosetta to Gradle build
labels: enhancement process
body:
### Problem
We need to reach feature parity with the Maven build before we can replace it with Gradle.
### Solution
* Add a simple Golang Gradle plugin in `buildSrc` that:
* Downloads the Go SDK and decompresses it
* Executes `fmt`, `build`, `clean`, `fix`, `test`, `generate`
### Alternatives
_No response_
index: 1.0
text_combine:
Add Rosetta to Gradle build - ### Problem
We need to reach feature parity with the Maven build before we can replace it with Gradle.
### Solution
* Add a simple Golang Gradle plugin in `buildSrc` that:
* Downloads the Go SDK and decompresses it
* Executes `fmt`, `build`, `clean`, `fix`, `test`, `generate`
### Alternatives
_No response_
label: process
text:
add rosetta to gradle build problem we need to reach feature parity with the maven build before we can replace it with gradle solution add a simple golang gradle plugin in buildsrc that downloads the go sdk and decompresses it executes fmt build clean fix test generate alternatives no response
binary_label: 1

Unnamed: 0: 254,979
id: 27,484,683,624
type: IssuesEvent
created_at: 2023-03-04 01:07:49
repo: panasalap/linux-4.1.15
repo_url: https://api.github.com/repos/panasalap/linux-4.1.15
action: opened
title: CVE-2022-0435 (High) detected in linuxlinux-4.1.17
labels: security vulnerability
body:
## CVE-2022-0435 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A stack overflow flaw was found in the Linux kernel's TIPC protocol functionality in the way a user sends a packet with malicious content where the number of domain member nodes is higher than the 64 allowed. This flaw allows a remote user to crash the system or possibly escalate their privileges if they have access to the TIPC network.
<p>Publish Date: 2022-03-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0435>CVE-2022-0435</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-0435">https://nvd.nist.gov/vuln/detail/CVE-2022-0435</a></p>
<p>Release Date: 2022-03-25</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2022-0435 (High) detected in linuxlinux-4.1.17 - ## CVE-2022-0435 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A stack overflow flaw was found in the Linux kernel's TIPC protocol functionality in the way a user sends a packet with malicious content where the number of domain member nodes is higher than the 64 allowed. This flaw allows a remote user to crash the system or possibly escalate their privileges if they have access to the TIPC network.
<p>Publish Date: 2022-03-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0435>CVE-2022-0435</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-0435">https://nvd.nist.gov/vuln/detail/CVE-2022-0435</a></p>
<p>Release Date: 2022-03-25</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in base branch master vulnerable source files vulnerability details a stack overflow flaw was found in the linux kernel s tipc protocol functionality in the way a user sends a packet with malicious content where the number of domain member nodes is higher than the allowed this flaw allows a remote user to crash the system or possibly escalate their privileges if they have access to the tipc network publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend
binary_label: 0

Unnamed: 0: 77,086
id: 21,664,806,269
type: IssuesEvent
created_at: 2022-05-07 02:36:23
repo: dotnet/efcore
repo_url: https://api.github.com/repos/dotnet/efcore
action: opened
title: Allow specifying different index facets per table in TPT, TPC or entity splitting
labels: type-enhancement area-model-building area-relational-mapping
body:
Split off from #19811
``` C#
modelBuilder.Entity<Customer>().ToTable("Customer");
modelBuilder.Entity<SpecialCustomer>().ToTable("SpecialCustomer", tb =>
{
tb.HasIndex(c => c.Id).HasName("IX_Customer");
});
```
Alternative API:
``` C#
modelBuilder.Entity<Customer>().HasIndex(c => c.Id)
.ToTable("CustomerDetails", tb => tb.HasName("IX_Customer"));
```
index: 1.0
text_combine:
Allow specifying different index facets per table in TPT, TPC or entity splitting - Split off from #19811
``` C#
modelBuilder.Entity<Customer>().ToTable("Customer");
modelBuilder.Entity<SpecialCustomer>().ToTable("SpecialCustomer", tb =>
{
tb.HasIndex(c => c.Id).HasName("IX_Customer");
});
```
Alternative API:
``` C#
modelBuilder.Entity<Customer>().HasIndex(c => c.Id)
.ToTable("CustomerDetails", tb => tb.HasName("IX_Customer"));
```
label: non_process
text:
allow specifying different index facets per table in tpt tpc or entity splitting split off from c modelbuilder entity totable customer modelbuilder entity totable specialcustomer tb tb hasindex c c id hasname ix customer alternative api c modelbuilder entity hasindex c c id totable customerdetails tb tb hasname ix customer
binary_label: 0