Column summary (dtype and observed range, string-length span, or class count):

| column | dtype | observed stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
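The column statistics above can be recomputed from the raw dump. A minimal pandas sketch; the file name `issues.csv` and the loading path are assumptions, not part of the preview:

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """One row per column: dtype plus distinct-value count,
    mirroring the class counts in the header above."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "n_unique": df.nunique(),
    })

# Tiny stand-in frame with a few of the columns; in practice one
# would load the full dump, e.g. df = pd.read_csv("issues.csv").
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "label": ["process", "non_process"],
    "binary_label": [1, 0],
})
print(summarize(df))
```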
819,659
| 30,748,040,349
|
IssuesEvent
|
2023-07-28 16:36:00
|
BeyondMafia/BeyondMafia-Integration
|
https://api.github.com/repos/BeyondMafia/BeyondMafia-Integration
|
closed
|
Remove Ranked Approval
|
refactor high priority site/moderation
|
The ranked approval process is a pretty unnecessary hurdle at the moment.
Better way moving forward is to remove this restriction, and simply ranked-ban players who break game rules.
|
1.0
|
Remove Ranked Approval - The ranked approval process is a pretty unnecessary hurdle at the moment.
Better way moving forward is to remove this restriction, and simply ranked-ban players who break game rules.
|
non_process
|
remove ranked approval the ranked approval process is a pretty unnecessary hurdle at the moment better way moving forward is to remove this restriction and simply ranked ban players who break game rules
| 0
|
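The `text` cell in the record above appears to be a lowercased rendering of `text_combine` with punctuation, digits, and underscores stripped. The exact pipeline used is not documented, so the following is only a guess at a compatible normalization:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace every non-letter run with a single space
    # (this also drops standalone numbers, matching the `text` column),
    # then collapse any remaining whitespace.
    text = text.lower()
    text = re.sub(r"[^a-z]+", " ", text)
    return " ".join(text.split())

# -> "remove ranked approval the ranked approval process"
print(normalize("Remove Ranked Approval - The ranked approval process..."))
```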
783,918
| 27,551,184,099
|
IssuesEvent
|
2023-03-07 15:02:28
|
dotnet/msbuild
|
https://api.github.com/repos/dotnet/msbuild
|
closed
|
Resolved project and package references in some cases are faulty
|
bug needs-triage Priority:2 author-responded Iteration:2023February
|
### Issue Description
Suppose there is a `Main` project reference a multi-targeting `Lib` project, and to every `Lib` project's target framework, there are other different projects and packages referenced by `Lib`. The result is build success but run fail.
The dependencies that copy to local are faulty.
### Steps to Reproduce
I put a similar sample in [sanmuru/MSBuild-bug](/sanmuru/MSBuild-bug).
* `Console`(Main) project reference `CoreLib`(Lib) project with `SetTargetFramework="TargetFramework=netstandard2.0"`.
* `CoreLib`(Lib) project reference `Net462UtilLib` project and `System.Text.Json` package when targeting `net462`.
* `CoreLib`(Lib) project reference `NetStandard20UtilLib` project and `Newtonsoft.Json` package when targeting `netstandard2.0`.
* Build result shows `Console`(Main) project do reference the correct output assembly (and its dependencies) which from `NetStandard20UtilLib` project, but as well copy output assembly from `Net462UtilLib` project and `System.Text.Json` package instead of `Newtonsoft.Json` package to local.
### Expected Behavior
No idea whether it is a feature or a bug, if it is a bug then fix it, or crash build process if it is a feature.
### Versions & Configurations
.NET Runtime: 7.0.200-preview.22628.1
|
1.0
|
Resolved project and package references in some cases are faulty - ### Issue Description
Suppose there is a `Main` project reference a multi-targeting `Lib` project, and to every `Lib` project's target framework, there are other different projects and packages referenced by `Lib`. The result is build success but run fail.
The dependencies that copy to local are faulty.
### Steps to Reproduce
I put a similar sample in [sanmuru/MSBuild-bug](/sanmuru/MSBuild-bug).
* `Console`(Main) project reference `CoreLib`(Lib) project with `SetTargetFramework="TargetFramework=netstandard2.0"`.
* `CoreLib`(Lib) project reference `Net462UtilLib` project and `System.Text.Json` package when targeting `net462`.
* `CoreLib`(Lib) project reference `NetStandard20UtilLib` project and `Newtonsoft.Json` package when targeting `netstandard2.0`.
* Build result shows `Console`(Main) project do reference the correct output assembly (and its dependencies) which from `NetStandard20UtilLib` project, but as well copy output assembly from `Net462UtilLib` project and `System.Text.Json` package instead of `Newtonsoft.Json` package to local.
### Expected Behavior
No idea whether it is a feature or a bug, if it is a bug then fix it, or crash build process if it is a feature.
### Versions & Configurations
.NET Runtime: 7.0.200-preview.22628.1
|
non_process
|
resolved project and package references in some cases are faulty issue description suppose there is a main project reference a multi targeting lib project and to every lib project s target framework there are other different projects and packages referenced by lib the result is build success but run fail the dependencies that copy to local are faulty steps to reproduce i put a similar sample in sanmuru msbuild bug console main project reference corelib lib project with settargetframework targetframework corelib lib project reference project and system text json package when targeting corelib lib project reference project and newtonsoft json package when targeting build result shows console main project do reference the correct output assembly and its dependencies which from project but as well copy output assembly from project and system text json package instead of newtonsoft json package to local expected behavior no idea whether it is a feature or a bug if it is a bug then fix it or crash build process if it is a feature versions configurations net runtime preview
| 0
|
26,628
| 27,036,373,829
|
IssuesEvent
|
2023-02-12 20:34:28
|
tailscale/tailscale
|
https://api.github.com/repos/tailscale/tailscale
|
closed
|
Handle Okta 'user is not assigned' errors more gracefully
|
L3 Some users P3 Can't get started T5 Usability identity bug
|
### What is the issue?
When a user exists in an Okta tenant, but is not assigned to the Tailscale app (either directly or via a group), Okta returns an 'access denied' error, with no further information.
We should match this error code and description and direct to a KB article that lists appropriate mitigation steps.
https://login.tailscale.com/a/oauth_response?state=tc-27f9cf592fa81cd8&error=access_denied&error_description=User+is+not+assigned+to+the+client+application
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
This error occurs for manually created app integrations, as well as the Tailscale app installed from the OIN
### OS
_No response_
### OS version
_No response_
### Tailscale version
_No response_
### Bug report
_No response_
<img src="https://frontapp.com/assets/img/favicons/favicon-32x32.png" height="16" width="16" alt="Front logo" /> [Front conversations](https://app.frontapp.com/open/top_3uxch)
|
True
|
Handle Okta 'user is not assigned' errors more gracefully - ### What is the issue?
When a user exists in an Okta tenant, but is not assigned to the Tailscale app (either directly or via a group), Okta returns an 'access denied' error, with no further information.
We should match this error code and description and direct to a KB article that lists appropriate mitigation steps.
https://login.tailscale.com/a/oauth_response?state=tc-27f9cf592fa81cd8&error=access_denied&error_description=User+is+not+assigned+to+the+client+application
### Steps to reproduce
_No response_
### Are there any recent changes that introduced the issue?
This error occurs for manually created app integrations, as well as the Tailscale app installed from the OIN
### OS
_No response_
### OS version
_No response_
### Tailscale version
_No response_
### Bug report
_No response_
<img src="https://frontapp.com/assets/img/favicons/favicon-32x32.png" height="16" width="16" alt="Front logo" /> [Front conversations](https://app.frontapp.com/open/top_3uxch)
|
non_process
|
handle okta user is not assigned errors more gracefully what is the issue when a user exists in an okta tenant but is not assigned to the tailscale app either directly or via a group okta returns an access denied error with no further information we should match this error code and description and direct to a kb article that lists appropriate mitigation steps steps to reproduce no response are there any recent changes that introduced the issue this error occurs for manually created app integrations as well as the tailscale app installed from the oin os no response os version no response tailscale version no response bug report no response
| 0
|
64
| 3,317,794,188
|
IssuesEvent
|
2015-11-06 23:47:31
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
opened
|
test_dynamic_port fails with "ConnectionError: " when trying to access exposed port.
|
kind/bug setup/automation
|
Server build from master - Nov 6
test_dynamic_port fails with "ConnectionError: " when trying to access exposed port.
```
=================================== FAILURES ===================================
______________________________ test_dynamic_port _______________________________

client = <cattle.Client object at 0x7f3faa95b3d0>, test_name = 'test-675834'

    def test_dynamic_port(client, test_name):
        c = client.create_container(name=test_name,
                                    networkMode=MANAGED_NETWORK,
                                    imageUuid=TEST_IMAGE_UUID)
        c = client.wait_success(c)
        ports = c.ports_link()
        assert len(ports) == 1
        port = ports[0]
        assert port.publicPort is None
        port = client.wait_success(client.update(port, publicPort=3001))
        assert port.publicPort == 3001
>       ping_port(port)

cattlevalidationtest/core/test_container.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cattlevalidationtest/core/common_fixtures.py:306: in ping_port
    pong = get_port_content(port, 'ping')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

port = {'protocol': u'tcp', 'transitioning': u'no', 'kind': u'imagePort', 'publicIpAddressId': u'1ia125560', 'created': u'201...ountId': u'1a5', 'name': None, 'transitioningMessage': None, 'createdTS': 1446843690000, 'transitioningProgress': None}
path = 'ping', params = {}

    def get_port_content(port, path, params={}):
        assert port.publicPort is not None
        assert port.publicIpAddressId is not None
        url = 'http://{}:{}/{}'.format(port.publicIpAddress().address,
                                       port.publicPort,
                                       path)
        e = None
        for i in range(60):
            try:
                return requests.get(url, params=params, timeout=5).text
            except Exception as e1:
                e = e1
                logger.exception('Failed to call %s', url)
                time.sleep(1)
                pass
        if e is not None:
>           raise e
E           ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))

ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))
ERROR:common_fixtures:Failed to call http://104.154.57.203:3001/ping
Traceback (most recent call last):
  File "/scratch/tests/validation/cattlevalidationtest/core/common_fixtures.py", line 292, in get_port_content
    return requests.get(url, params=params, timeout=5).text
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/api.py", line 68, in get
    return request('get', url, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/sessions.py", line 464, in request
    resp = self.send(prep, **send_kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
    raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))
```
|
1.0
|
test_dynamic_port fails with "ConnectionError: " when trying to access exposed port. - Server build from master - Nov 6
test_dynamic_port fails with "ConnectionError: " when trying to access exposed port.
```
=================================== FAILURES ===================================
______________________________ test_dynamic_port _______________________________

client = <cattle.Client object at 0x7f3faa95b3d0>, test_name = 'test-675834'

    def test_dynamic_port(client, test_name):
        c = client.create_container(name=test_name,
                                    networkMode=MANAGED_NETWORK,
                                    imageUuid=TEST_IMAGE_UUID)
        c = client.wait_success(c)
        ports = c.ports_link()
        assert len(ports) == 1
        port = ports[0]
        assert port.publicPort is None
        port = client.wait_success(client.update(port, publicPort=3001))
        assert port.publicPort == 3001
>       ping_port(port)

cattlevalidationtest/core/test_container.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cattlevalidationtest/core/common_fixtures.py:306: in ping_port
    pong = get_port_content(port, 'ping')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

port = {'protocol': u'tcp', 'transitioning': u'no', 'kind': u'imagePort', 'publicIpAddressId': u'1ia125560', 'created': u'201...ountId': u'1a5', 'name': None, 'transitioningMessage': None, 'createdTS': 1446843690000, 'transitioningProgress': None}
path = 'ping', params = {}

    def get_port_content(port, path, params={}):
        assert port.publicPort is not None
        assert port.publicIpAddressId is not None
        url = 'http://{}:{}/{}'.format(port.publicIpAddress().address,
                                       port.publicPort,
                                       path)
        e = None
        for i in range(60):
            try:
                return requests.get(url, params=params, timeout=5).text
            except Exception as e1:
                e = e1
                logger.exception('Failed to call %s', url)
                time.sleep(1)
                pass
        if e is not None:
>           raise e
E           ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))

ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))
ERROR:common_fixtures:Failed to call http://104.154.57.203:3001/ping
Traceback (most recent call last):
  File "/scratch/tests/validation/cattlevalidationtest/core/common_fixtures.py", line 292, in get_port_content
    return requests.get(url, params=params, timeout=5).text
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/api.py", line 68, in get
    return request('get', url, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/sessions.py", line 464, in request
    resp = self.send(prep, **send_kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/scratch/tests/validation/.tox/py27/local/lib/python2.7/site-packages/requests/adapters.py", line 415, in send
    raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', BadStatusLine('SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2\r\n',))
```
|
non_process
|
test dynamic port fails with connectionerror when trying to access exposed port server build from master nov test dynamic port fails with connectionerror when trying to access exposed port failures test dynamic port client test name test def test dynamic port client test name c client create container name test name networkmode managed network imageuuid test image uuid c client wait success c ports c ports link assert len ports port ports assert port publicport is none port client wait success client update port publicport assert port publicport ping port port cattlevalidationtest core test container py cattlevalidationtest core common fixtures py in ping port pong get port content port ping port protocol u tcp transitioning u no kind u imageport publicipaddressid u created u ountid u name none transitioningmessage none createdts transitioningprogress none path ping params def get port content port path params assert port publicport is not none assert port publicipaddressid is not none url port publicport path e none for i in range try return requests get url params params timeout text except exception as e logger exception failed to call s url time sleep pass if e is not none raise e e connectionerror connection aborted badstatusline ssh openssh ubuntu r n connectionerror connection aborted badstatusline ssh openssh ubuntu r n error common fixtures failed to call traceback most recent call last file scratch tests validation cattlevalidationtest core common fixtures py line in get port content return requests get url params params timeout text file scratch tests validation tox local lib site packages requests api py line in get return request get url kwargs file scratch tests validation tox local lib site packages requests api py line in request response session request method method url url kwargs file scratch tests validation tox local lib site packages requests sessions py line in request resp self send prep send kwargs file scratch tests validation tox local lib site packages requests sessions py line in send r adapter send request kwargs file scratch tests validation tox local lib site packages requests adapters py line in send raise connectionerror err request request connectionerror connection aborted badstatusline ssh openssh ubuntu r n
| 0
|
4,360
| 7,260,514,076
|
IssuesEvent
|
2018-02-18 10:53:18
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] algorithm to fix invalid geometries using native
makeValid() implementation
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/0293bc76a0bb6e7ec55df59b8bb2764204552247 by alexbruy
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[FEATURE][processing] algorithm to fix invalid geometries using native
makeValid() implementation - Original commit: https://github.com/qgis/QGIS/commit/0293bc76a0bb6e7ec55df59b8bb2764204552247 by alexbruy
Unfortunately this naughty coder did not write a description... :-(
|
process
|
algorithm to fix invalid geometries using native makevalid implementation original commit by alexbruy unfortunately this naughty coder did not write a description
| 1
|
380,191
| 11,255,057,190
|
IssuesEvent
|
2020-01-12 05:48:30
|
potion-cellar/14ersForecast
|
https://api.github.com/repos/potion-cellar/14ersForecast
|
closed
|
Map: Tooltips can show Trace" (inches tick)
|
bug feature-maps low priority
|
If value is Trace, the tooltip should not show the `"` symbol.
|
1.0
|
Map: Tooltips can show Trace" (inches tick) - If value is Trace, the tooltip should not show the `"` symbol.
|
non_process
|
map tooltips can show trace inches tick if value is trace the tooltip should not show the symbol
| 0
|
3,251
| 3,098,713,435
|
IssuesEvent
|
2015-08-28 12:59:36
|
projectatomic/atomic-reactor
|
https://api.github.com/repos/projectatomic/atomic-reactor
|
opened
|
squashing: provide id of base image to the tool
|
bug post-build plugin priority 1
|
so it will squash just the built image, not multiple layers from base image
|
1.0
|
squashing: provide id of base image to the tool - so it will squash just the built image, not multiple layers from base image
|
non_process
|
squashing provide id of base image to the tool so it will squash just the built image not multiple layers from base image
| 0
|
14,937
| 18,365,763,139
|
IssuesEvent
|
2021-10-10 02:25:45
|
varabyte/kobweb
|
https://api.github.com/repos/varabyte/kobweb
|
opened
|
Refactor some KobwebProject code back into the Application plugin (or some common module?)
|
process
|
The project parsing (KobwebProject.parseData) code probably makes sense to be Gradle only since only Gradle knows the "pages" and "group" values. Then we can remove the embeddedcompiler dependency from the CLI.
|
1.0
|
Refactor some KobwebProject code back into the Application plugin (or some common module?) - The project parsing (KobwebProject.parseData) code probably makes sense to be Gradle only since only Gradle knows the "pages" and "group" values. Then we can remove the embeddedcompiler dependency from the CLI.
|
process
|
refactor some kobwebproject code back into the application plugin or some common module the project parsing kobwebproject parsedata code probably makes sense to be gradle only since only gradle knows the pages and group values then we can remove the embeddedcompiler dependency from the cli
| 1
|
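In every record above, `binary_label` tracks `label`: "process" maps to 1 and everything else (here, "non_process") to 0. A sketch of that assumed mapping:

```python
def to_binary(label: str) -> int:
    # Positive class: process-related issues; all other labels map to 0.
    return 1 if label == "process" else 0

# Spot-check against the label/binary_label pairs seen in the records.
for lab, expected in [("process", 1), ("non_process", 0)]:
    assert to_binary(lab) == expected
```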
204,917
| 15,955,134,262
|
IssuesEvent
|
2021-04-15 14:18:03
|
gt-coar/gt-coar-lab
|
https://api.github.com/repos/gt-coar/gt-coar-lab
|
closed
|
add ipydrawio
|
deps documentation
|
[ipydrawio 1.0.1](https://github.com/conda-forge/ipydrawio-feedstock) is up, and ready for integration.
- [ ] add to the development environment
- [ ] add to the shipped lab
|
1.0
|
add ipydrawio - [ipydrawio 1.0.1](https://github.com/conda-forge/ipydrawio-feedstock) is up, and ready for integration.
- [ ] add to the development environment
- [ ] add to the shipped lab
|
non_process
|
add ipydrawio is up and ready for integration add to the development environment add to the shipped lab
| 0
|
246,204
| 7,893,243,502
|
IssuesEvent
|
2018-06-28 17:24:12
|
GCTC-NTGC/TalentCloud
|
https://api.github.com/repos/GCTC-NTGC/TalentCloud
|
closed
|
Admin - Populate Live Site with Fake "Real" Content for Demo
|
FED High Priority Medium Complexity Task
|
# Description
For the demo, it's important that we have content that looks as though it were real. This includes populating a Hiring Manager's profile, as well as creating 3 different job posters to help provide variety during the demo.
# Required for Completion
Add/remove others as necessary for this specific task.
- [x] Populate Hiring Manager Profile
- [x] Populate 3 Job Posters
|
1.0
|
Admin - Populate Live Site with Fake "Real" Content for Demo - # Description
For the demo, it's important that we have content that looks as though it were real. This includes populating a Hiring Manager's profile, as well as creating 3 different job posters to help provide variety during the demo.
# Required for Completion
Add/remove others as necessary for this specific task.
- [x] Populate Hiring Manager Profile
- [x] Populate 3 Job Posters
|
non_process
|
admin populate live site with fake real content for demo description for the demo it s important that we have content that looks as though it were real this includes populating a hiring manager s profile as well as creating different job posters to help provide variety during the demo required for completion add remove others as necessary for this specific task populate hiring manager profile populate job posters
| 0
|
7,024
| 10,173,217,042
|
IssuesEvent
|
2019-08-08 12:38:34
|
heim-rs/heim
|
https://api.github.com/repos/heim-rs/heim
|
closed
|
process::Process::status for Windows
|
A-process C-enhancement O-windows
|
It requires digging through the `winternl.h` stuff, so it will be a very interesting problem. As for now, stub is implemented, which returns `Status::Running` all the time for all processes.
|
1.0
|
process::Process::status for Windows - It requires digging through the `winternl.h` stuff, so it will be a very interesting problem. As for now, stub is implemented, which returns `Status::Running` all the time for all processes.
|
process
|
process process status for windows it requires digging through the winternl h stuff so it will be a very interesting problem as for now stub is implemented which returns status running all the time for all processes
| 1
|
134,433
| 19,188,901,168
|
IssuesEvent
|
2021-12-05 17:17:00
|
Wonderland-Mobile/Issue-Tracker
|
https://api.github.com/repos/Wonderland-Mobile/Issue-Tracker
|
closed
|
Consider making "The Search for the Rainbow Spirits" unlockable by completing "Deeper Into Wonderland".
|
enhancement wontfix As Designed
|
A lot of mobile games that fans call "pay to win" are winnable without paying a single cent. However, from what i can tell, "The Search for the Rainbow Spirits" is only unlockable by paying $5.99. While i understand that this would help increase the games revenue, this technically makes the game pay to win. I would hate to see players miss out on "The Search for the Rainbow Spirits", which i consider to be the highlight of the Classic Trilogy.
|
1.0
|
Consider making "The Search for the Rainbow Spirits" unlockable by completing "Deeper Into Wonderland". - A lot of mobile games that fans call "pay to win" are winnable without paying a single cent. However, from what i can tell, "The Search for the Rainbow Spirits" is only unlockable by paying $5.99. While i understand that this would help increase the games revenue, this technically makes the game pay to win. I would hate to see players miss out on "The Search for the Rainbow Spirits", which i consider to be the highlight of the Classic Trilogy.
|
non_process
|
consider making the search for the rainbow spirits unlockable by completing deeper into wonderland a lot of mobile games that fans call pay to win are winnable without paying a single cent however from what i can tell the search for the rainbow spirits is only unlockable by paying while i understand that this would help increase the games revenue this technically makes the game pay to win i would hate to see players miss out on the search for the rainbow spirits which i consider to be the highlight of the classic trilogy
| 0
|
886
| 3,350,762,291
|
IssuesEvent
|
2015-11-17 15:56:52
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
child_process.exec TypeError: Cannot read property 'addListener' of undefined
|
child_process confirmed-bug
|
Here's a code sample that triggers the error condition:
```javascript
var child_process = require('child_process'),
    exec = child_process.exec,
    children = [],
    exitHandler = function(i,err,stdout,stderr){
        console.log(i,err,stdout,stderr);
    },
    i = 0;

for(i=0;i<1000;i++){
    children.push(exec('sleep 10 && echo hello',exitHandler.bind(null,i)));
}
```
If I do not wrap with try/catch it will crash after ~258 iterations on io.js v1.6.3 on OSX 10.10.2
Here is the stack trace:
```
child_process.js:753
    child.stdout.addListener('data', function(chunk) {
                 ^
TypeError: Cannot read property 'addListener' of undefined
    at Object.exports.execFile (child_process.js:753:15)
    at exports.exec (child_process.js:622:18)
    at Object.<anonymous> (/Users/gcochard/iojs/test.js:10:19)
    at Module._compile (module.js:410:26)
    at Object.Module._extensions..js (module.js:428:10)
    at Module.load (module.js:335:32)
    at Function.Module._load (module.js:290:12)
    at Function.Module.runMain (module.js:451:10)
    at startup (node.js:123:18)
    at node.js:867:3
```
It seems that if a callback is passed, the general async-ism is to call it with any errors. Therefore the following:
```javascript
child.stdout.addListener('data', function(chunk) {...});
```
Should change to something like this:
```javascript
if(!child.stdout || !child.stderr){
    return errorhandler(new Error('Something went wrong'));
}
child.stdout.addListener('data', function(chunk) {...});
```
And the error handler should also include a null check on `child.stdout` and `child.stderr`.
-----------------------------------------------------
Additionally, when I *do* wrap with try/catch, it looks like it is emitting a spawn error on [line 1022](https://github.com/iojs/io.js/blob/v1.x/lib/child_process.js#L1022) which is not handled, even when adding an error listener on the child. It appears to be emitted by the `ChildProcess` object on [line 1042](https://github.com/iojs/io.js/blob/v1.x/lib/child_process.js#L1042). Its stack trace is as follows:
```
events.js:141
      throw er; // Unhandled 'error' event
      ^
Error: spawn /bin/sh EAGAIN
    at exports._errnoException (util.js:749:11)
    at Process.ChildProcess._handle.onexit (child_process.js:1022:32)
    at child_process.js:1114:20
    at process._tickCallback (node.js:339:13)
    at Function.Module.runMain (module.js:453:11)
    at startup (node.js:123:18)
    at node.js:867:3
```
I know this is a contrived example, but I've hit this edge case before.
Please let me know if I can clarify anything here. The expected behavior is that the script keeps running, rather than crashing io.js with a stack trace.
|
1.0
|
child_process.exec TypeError: Cannot read property 'addListener' of undefined - Here's a code sample that triggers the error condition:
```javascript
var child_process = require('child_process'),
    exec = child_process.exec,
    children = [],
    exitHandler = function(i,err,stdout,stderr){
        console.log(i,err,stdout,stderr);
    },
    i = 0;

for(i=0;i<1000;i++){
    children.push(exec('sleep 10 && echo hello',exitHandler.bind(null,i)));
}
```
If I do not wrap with try/catch it will crash after ~258 iterations on io.js v1.6.3 on OSX 10.10.2
Here is the stack trace:
```
child_process.js:753
    child.stdout.addListener('data', function(chunk) {
                 ^
TypeError: Cannot read property 'addListener' of undefined
    at Object.exports.execFile (child_process.js:753:15)
    at exports.exec (child_process.js:622:18)
    at Object.<anonymous> (/Users/gcochard/iojs/test.js:10:19)
    at Module._compile (module.js:410:26)
    at Object.Module._extensions..js (module.js:428:10)
    at Module.load (module.js:335:32)
    at Function.Module._load (module.js:290:12)
    at Function.Module.runMain (module.js:451:10)
    at startup (node.js:123:18)
    at node.js:867:3
```
It seems that if a callback is passed, the general async-ism is to call it with any errors. Therefore the following:
```javascript
child.stdout.addListener('data', function(chunk) {...});
```
Should change to something like this:
```javascript
if(!child.stdout || !child.stderr){
    return errorhandler(new Error('Something went wrong'));
}
child.stdout.addListener('data', function(chunk) {...});
```
And the error handler should also include a null check on `child.stdout` and `child.stderr`.
-----------------------------------------------------
Additionally, when I *do* wrap with try/catch, it looks like it is emitting a spawn error on [line 1022](https://github.com/iojs/io.js/blob/v1.x/lib/child_process.js#L1022) which is not handled, even when adding an error listener on the child. It appears to be emitted by the `ChildProcess` object on [line 1042](https://github.com/iojs/io.js/blob/v1.x/lib/child_process.js#L1042). Its stack trace is as follows:
```
events.js:141
      throw er; // Unhandled 'error' event
      ^
Error: spawn /bin/sh EAGAIN
    at exports._errnoException (util.js:749:11)
    at Process.ChildProcess._handle.onexit (child_process.js:1022:32)
    at child_process.js:1114:20
    at process._tickCallback (node.js:339:13)
    at Function.Module.runMain (module.js:453:11)
    at startup (node.js:123:18)
    at node.js:867:3
```
I know this is a contrived example, but I've hit this edge case before.
Please let me know if I can clarify anything here. The expected behavior is that the script keeps running, rather than crashing io.js with a stack trace.
|
process
|
child process exec typeerror cannot read property addlistener of undefined here s a code sample that triggers the error condition javascript var child process require child process exec child process exec children exithandler function i err stdout stderr console log i err stdout stderr i for i i i children push exec sleep echo hello exithandler bind null i if i do not wrap with try catch it will crash after iterations on io js on osx here is the stack trace child process js child stdout addlistener data function chunk typeerror cannot read property addlistener of undefined at object exports execfile child process js at exports exec child process js at object users gcochard iojs test js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js at startup node js at node js it seems that if a callback is passed the general async ism is to call it with any errors therefore the following javascript child stdout addlistener data function chunk should change to something like this javascript if child stdout child stderr return errorhandler new error something went wrong child stdout addlistener data function chunk and the error handler should also include a null check on child stdout and child stderr additionally when i do wrap with try catch it looks like it is emitting a spawn error on which is not handled even when adding an error listener on the child it appears to be emitted by the childprocess object on its stack trace is as follows events js throw er unhandled error event error spawn bin sh eagain at exports errnoexception util js at process childprocess handle onexit child process js at child process js at process tickcallback node js at function module runmain module js at startup node js at node js i know this is a contrived example but i ve hit this edge case before please let me know if i can clarify anything here the expected behavior is that the script keeps running rather than crashing io js with a stack trace
| 1
|
7,466
| 10,563,218,056
|
IssuesEvent
|
2019-10-04 20:22:11
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Firestore: 'test_watch_collection' systest flakes
|
api: firestore flaky testing type: process
|
Similar to #6605, #6921.
From this [Kokoro Firestore run](https://source.cloud.google.com/results/invocations/6579f747-286a-4545-b959-00d5fd1e530c/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ffirestore/log):
```python
____________________________ test_watch_collection _____________________________
client = <google.cloud.firestore_v1beta1.client.Client object at 0x7ff27e1c25d0>
cleanup = <built-in method append of list object at 0x7ff27e1ac878>
def test_watch_collection(client, cleanup):
db = client
doc_ref = db.collection(u"users").document(u"alovelace" + unique_resource_id())
collection_ref = db.collection(u"users")
# Initial setting
doc_ref.set({u"first": u"Jane", u"last": u"Doe", u"born": 1900})
# Setup listener
def on_snapshot(docs, changes, read_time):
on_snapshot.called_count += 1
for doc in [doc for doc in docs if doc.id == doc_ref.id]:
on_snapshot.born = doc.get("born")
on_snapshot.called_count = 0
on_snapshot.born = 0
collection_ref.on_snapshot(on_snapshot)
# delay here so initial on_snapshot occurs and isn't combined with set
sleep(1)
doc_ref.set({u"first": u"Ada", u"last": u"Lovelace", u"born": 1815})
for _ in range(10):
if on_snapshot.born == 1815:
break
sleep(1)
if on_snapshot.born != 1815:
raise AssertionError(
> "Expected the last document update to update born: " + str(on_snapshot.born)
)
E AssertionError: Expected the last document update to update born: 1900
```
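The test's flakiness window comes from its manual ten-iteration polling loop waiting for the snapshot listener to observe the write. The pattern can be factored into a generic poll-until helper; this is only a sketch of the waiting logic (not part of the Firestore client), with the dict standing in for the listener's state:

```python
import time

def wait_for(predicate, timeout=10.0, interval=1.0):
    """Poll predicate() until it returns True or timeout elapses.
    Returns True on success, False on timeout -- the same shape as
    the test's loop over on_snapshot.born."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

state = {"born": 1900}

def on_update():
    # Stands in for the snapshot callback firing after doc_ref.set().
    state["born"] = 1815

on_update()
assert wait_for(lambda: state["born"] == 1815)
```

If the listener event genuinely arrives later than the timeout (as in the failing run, where `born` was still 1900), the fix is to lengthen the window or synchronize on the listener's first callback rather than on a fixed sleep.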
|
1.0
|
Firestore: 'test_watch_collection' systest flakes - Similar to #6605, #6921.
From this [Kokoro Firestore run](https://source.cloud.google.com/results/invocations/6579f747-286a-4545-b959-00d5fd1e530c/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Ffirestore/log):
```python
____________________________ test_watch_collection _____________________________
client = <google.cloud.firestore_v1beta1.client.Client object at 0x7ff27e1c25d0>
cleanup = <built-in method append of list object at 0x7ff27e1ac878>
def test_watch_collection(client, cleanup):
db = client
doc_ref = db.collection(u"users").document(u"alovelace" + unique_resource_id())
collection_ref = db.collection(u"users")
# Initial setting
doc_ref.set({u"first": u"Jane", u"last": u"Doe", u"born": 1900})
# Setup listener
def on_snapshot(docs, changes, read_time):
on_snapshot.called_count += 1
for doc in [doc for doc in docs if doc.id == doc_ref.id]:
on_snapshot.born = doc.get("born")
on_snapshot.called_count = 0
on_snapshot.born = 0
collection_ref.on_snapshot(on_snapshot)
# delay here so initial on_snapshot occurs and isn't combined with set
sleep(1)
doc_ref.set({u"first": u"Ada", u"last": u"Lovelace", u"born": 1815})
for _ in range(10):
if on_snapshot.born == 1815:
break
sleep(1)
if on_snapshot.born != 1815:
raise AssertionError(
> "Expected the last document update to update born: " + str(on_snapshot.born)
)
E AssertionError: Expected the last document update to update born: 1900
```
|
process
|
firestore test watch collection systest flakes similar to from this python test watch collection client cleanup def test watch collection client cleanup db client doc ref db collection u users document u alovelace unique resource id collection ref db collection u users initial setting doc ref set u first u jane u last u doe u born setup listener def on snapshot docs changes read time on snapshot called count for doc in on snapshot born doc get born on snapshot called count on snapshot born collection ref on snapshot on snapshot delay here so initial on snapshot occurs and isn t combined with set sleep doc ref set u first u ada u last u lovelace u born for in range if on snapshot born break sleep if on snapshot born raise assertionerror expected the last document update to update born str on snapshot born e assertionerror expected the last document update to update born
| 1
|
5,781
| 8,631,773,783
|
IssuesEvent
|
2018-11-22 08:54:52
|
dzhw/zofar
|
https://api.github.com/repos/dzhw/zofar
|
opened
|
Abs 12.3 survey termination (Dez 18)
|
category: services category: technical.processes prio: ? type: backlog.task
|
### **the survey ends on Monday 17.12.**
- [ ] final return statistic (@andreaschu )
- [ ] reroute the link (@vdick or @dzhwmeisner )
- [ ] undeploy survey (@vdick or @dzhwmeisner )
- [ ] prepare export (@andreaschu )
|
1.0
|
Abs 12.3 survey termination (Dez 18) - ### **the survey ends on Monday 17.12.**
- [ ] final return statistic (@andreaschu )
- [ ] reroute the link (@vdick or @dzhwmeisner )
- [ ] undeploy survey (@vdick or @dzhwmeisner )
- [ ] prepare export (@andreaschu )
|
process
|
abs survey termination dez the survey ends on monday final return statistic andreaschu reroute the link vdick or dzhwmeisner undeploy survey vdick or dzhwmeisner prepare export andreaschu
| 1
|
14,520
| 17,618,522,904
|
IssuesEvent
|
2021-08-18 12:50:14
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
Mix of binary encoded ISO-8859-1 data in header fields (e.g. to) fails mail processing
|
bug verified prioritised by payment mail processing
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 4.1
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #11047694
### Expected behavior:
* Zammad will decode the header fields (e.g. to) correctly and import the mail.
### Actual behavior:
* In some special cases, processing of a mail fails if its header fields (e.g. to) contain mixed binary ISO-8859-1 data; the mail is then moved to unprocessible_mails.
**Log:**
```
"ERROR: #<Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT>"
Traceback (most recent call last):
36: from bin/rails:9:in `<main>'
35: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
34: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
33: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
32: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
31: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
30: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
29: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
28: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
27: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
26: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
25: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
24: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
23: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
22: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
21: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
20: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
19: from ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
18: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
17: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
16: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
15: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
14: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
13: from ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
12: from ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
11: from ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
10: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
9: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
8: from ws/zammad-dev/app/models/channel/email_parser.rb:568:in `block in message_header_hash'
7: from ws/zammad-dev/lib/core_ext/object.rb:5:in `to_utf8'
6: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:203:in `to_s'
5: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/common/common_field.rb:26:in `to_s'
4: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
3: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
2: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
1: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join': incompatible character encodings: UTF-8 and ASCII-8BIT (Encoding::CompatibilityError)
35: from bin/rails:9:in `<main>'
34: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
33: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
32: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
31: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
30: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
29: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
28: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
27: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
26: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
25: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
24: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
23: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
22: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
21: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
20: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
19: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
18: from ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
17: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
16: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
15: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
14: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
13: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
12: from ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
11: from ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
10: from ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
9: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
8: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
7: from ws/zammad-dev/app/models/channel/email_parser.rb:564:in `block in message_header_hash'
6: from ws/zammad-dev/app/models/channel/email_parser.rb:579:in `rescue in block in message_header_hash'
5: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:239:in `method_missing'
4: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
3: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
2: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
1: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join': incompatible character encodings: UTF-8 and ASCII-8BIT (Encoding::CompatibilityError)
18: from bin/rails:9:in `<main>'
17: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
16: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
15: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
14: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
13: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
12: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
11: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
10: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
9: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
8: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
7: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
6: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
5: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
4: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
3: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
2: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
1: from ws/zammad-dev/app/models/channel/email_parser.rb:117:in `process'
ws/zammad-dev/app/models/channel/email_parser.rb:135:in `rescue in process': #<Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT> (RuntimeError)
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:239:in `method_missing'
ws/zammad-dev/app/models/channel/email_parser.rb:579:in `rescue in block in message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:564:in `block in message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
bin/rails:9:in `<main>'
```
### Steps to reproduce the behavior:
* Download the attached mail file. [mail-iso-88591-1-failing.box.zip](https://github.com/zammad/zammad/files/6981235/mail-iso-88591-1-failing.box.zip)
Process it via: cat /path/to/mail.box | time rails r 'Channel::Driver::MailStdin.new'
Yes I'm sure this is a bug and no feature request or a general question.
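The `Encoding::CompatibilityError` in the log arises when raw ISO-8859-1 bytes in a header are joined with already-decoded UTF-8 strings. Zammad itself is Ruby; the following is only a Python illustration of the usual defensive decode: try UTF-8 first, then fall back to ISO-8859-1, which maps every byte value and therefore cannot raise (the function name is invented for this sketch):

```python
def decode_header_value(raw: bytes) -> str:
    """Decode a header field that may mix encodings: prefer UTF-8,
    fall back to ISO-8859-1. Every byte is valid ISO-8859-1, so the
    fallback always succeeds and no incompatible-encoding join occurs."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("iso-8859-1")

# 0xE4 is 'ä' in ISO-8859-1 but an invalid start byte in strict UTF-8.
print(decode_header_value(b"J\xe4ger <[email protected]>"))  # → Jäger <[email protected]>
```

The equivalent Ruby-side fix would normalize each header field to UTF-8 with such a fallback before the mail gem joins the parts.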
|
1.0
|
Mix of binary encoded ISO-8859-1 data in header fields (e.g. to) fails mail processing - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 4.1
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #11047694
### Expected behavior:
* Zammad will decode the header fields (e.g. to) correctly and import the mail.
### Actual behavior:
* In some special cases, processing of a mail fails if its header fields (e.g. to) contain mixed binary ISO-8859-1 data; the mail is then moved to unprocessible_mails.
**Log:**
```
"ERROR: #<Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT>"
Traceback (most recent call last):
36: from bin/rails:9:in `<main>'
35: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
34: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
33: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
32: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
31: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
30: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
29: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
28: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
27: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
26: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
25: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
24: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
23: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
22: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
21: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
20: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
19: from ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
18: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
17: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
16: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
15: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
14: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
13: from ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
12: from ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
11: from ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
10: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
9: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
8: from ws/zammad-dev/app/models/channel/email_parser.rb:568:in `block in message_header_hash'
7: from ws/zammad-dev/lib/core_ext/object.rb:5:in `to_utf8'
6: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:203:in `to_s'
5: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/common/common_field.rb:26:in `to_s'
4: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
3: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
2: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
1: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join': incompatible character encodings: UTF-8 and ASCII-8BIT (Encoding::CompatibilityError)
35: from bin/rails:9:in `<main>'
34: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
33: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
32: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
31: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
30: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
29: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
28: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
27: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
26: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
25: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
24: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
23: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
22: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
21: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
20: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
19: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
18: from ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
17: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
16: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
15: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
14: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
13: from .rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
12: from ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
11: from ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
10: from ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
9: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
8: from ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
7: from ws/zammad-dev/app/models/channel/email_parser.rb:564:in `block in message_header_hash'
6: from ws/zammad-dev/app/models/channel/email_parser.rb:579:in `rescue in block in message_header_hash'
5: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:239:in `method_missing'
4: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
3: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
2: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
1: from .rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join': incompatible character encodings: UTF-8 and ASCII-8BIT (Encoding::CompatibilityError)
18: from bin/rails:9:in `<main>'
17: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
16: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
15: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
14: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
13: from .rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
12: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
11: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
10: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
9: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
8: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
7: from .rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
6: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
5: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
4: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
3: from .rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
2: from ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
1: from ws/zammad-dev/app/models/channel/email_parser.rb:117:in `process'
ws/zammad-dev/app/models/channel/email_parser.rb:135:in `rescue in process': #<Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ASCII-8BIT> (RuntimeError)
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `join'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:137:in `value_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/encodings.rb:109:in `decode_encode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:84:in `do_decode'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/fields/unstructured_field.rb:62:in `decoded'
.rvm/gems/ruby-2.7.4/bundler/gems/mail-9265cf75bbe3/lib/mail/field.rb:239:in `method_missing'
ws/zammad-dev/app/models/channel/email_parser.rb:579:in `rescue in block in message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:564:in `block in message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:563:in `map'
ws/zammad-dev/app/models/channel/email_parser.rb:563:in `message_header_hash'
ws/zammad-dev/app/models/channel/email_parser.rb:85:in `parse'
ws/zammad-dev/app/models/channel/email_parser.rb:141:in `_process'
ws/zammad-dev/app/models/channel/email_parser.rb:120:in `block in process'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch'
.rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout'
ws/zammad-dev/app/models/channel/email_parser.rb:119:in `process'
ws/zammad-dev/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `new'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `eval'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands/runner/runner_command.rb:45:in `perform'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
.rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command/base.rb:69:in `perform'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/command.rb:46:in `invoke'
.rvm/gems/ruby-2.7.4/gems/railties-6.0.4/lib/rails/commands.rb:18:in `<main>'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
.rvm/gems/ruby-2.7.4/gems/bootsnap-1.7.7/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
bin/rails:9:in `<main>'
```
### Steps to reproduce the behavior:
* Download the attached mail file. [mail-iso-88591-1-failing.box.zip](https://github.com/zammad/zammad/files/6981235/mail-iso-88591-1-failing.box.zip)
Process it via: cat /path/to/mail.box | time rails r 'Channel::Driver::MailStdin.new'
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
mix of binary encoded iso data in header fields e g to fails mail processing hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version installation method source package any operating system any database version any elasticsearch version any browser version any ticket id expected behavior zammad will decode the header fields e g to correctly and import the mail actual behavior in some special cases processing of mails fail if the mail header fields e g to contains mixed binary iso data and move the mail to unprocessible mails log error traceback most recent call last from bin rails in from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register from rvm gems ruby gems bootsnap lib bootsnap load path 
cache core ext kernel require rb in block in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems railties lib rails commands rb in from rvm gems ruby gems railties lib rails command rb in invoke from rvm gems ruby gems railties lib rails command base rb in perform from rvm gems ruby gems thor lib thor rb in dispatch from rvm gems ruby gems thor lib thor invocation rb in invoke command from rvm gems ruby gems thor lib thor command rb in run from rvm gems ruby gems railties lib rails commands runner runner command rb in perform from rvm gems ruby gems railties lib rails commands runner runner command rb in eval from rvm gems ruby gems railties lib rails commands runner runner command rb in from rvm gems ruby gems railties lib rails commands runner runner command rb in new from ws zammad dev app models channel driver mail stdin rb in initialize from ws zammad dev app models channel email parser rb in process from rvm rubies ruby lib ruby timeout rb in timeout from rvm rubies ruby lib ruby timeout rb in catch from rvm rubies ruby lib ruby timeout rb in catch from rvm rubies ruby lib ruby timeout rb in block in catch from rvm rubies ruby lib ruby timeout rb in block in timeout from ws zammad dev app models channel email parser rb in block in process from ws zammad dev app models channel email parser rb in process from ws zammad dev app models channel email parser rb in parse from ws zammad dev app models channel email parser rb in message header hash from ws zammad dev app models channel email parser rb in map from ws zammad dev app models channel email parser rb in block in message header hash from ws zammad dev lib core ext object rb in to from rvm gems ruby bundler gems mail lib mail field rb in to s from rvm gems ruby bundler gems mail lib mail fields common common field rb in to s from rvm gems ruby bundler gems mail lib mail fields unstructured field rb in decoded 
from rvm gems ruby bundler gems mail lib mail fields unstructured field rb in do decode from rvm gems ruby bundler gems mail lib mail encodings rb in decode encode from rvm gems ruby bundler gems mail lib mail encodings rb in value decode rvm gems ruby bundler gems mail lib mail encodings rb in join incompatible character encodings utf and ascii encoding compatibilityerror from bin rails in from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems railties lib rails commands rb in from rvm gems ruby gems railties lib rails command rb in invoke from rvm gems ruby gems railties lib rails command base rb in perform from rvm gems ruby gems thor lib thor rb in dispatch from rvm gems ruby gems thor lib thor invocation rb in invoke command from rvm gems ruby gems thor lib thor command rb in run from rvm gems ruby gems railties lib rails commands runner runner command rb in perform from rvm gems ruby gems railties lib rails commands runner runner command rb in eval from rvm gems ruby gems railties lib rails commands runner runner command rb in from rvm gems ruby gems railties lib rails commands runner runner command rb in new from ws zammad dev app models channel driver mail stdin rb in initialize from ws zammad dev app models channel email parser rb in process from rvm rubies ruby lib ruby timeout rb in timeout from rvm rubies ruby lib ruby timeout rb in catch from rvm rubies ruby lib ruby timeout rb in catch from rvm rubies ruby lib ruby timeout rb in block in catch from 
rvm rubies ruby lib ruby timeout rb in block in timeout from ws zammad dev app models channel email parser rb in block in process from ws zammad dev app models channel email parser rb in process from ws zammad dev app models channel email parser rb in parse from ws zammad dev app models channel email parser rb in message header hash from ws zammad dev app models channel email parser rb in map from ws zammad dev app models channel email parser rb in block in message header hash from ws zammad dev app models channel email parser rb in rescue in block in message header hash from rvm gems ruby bundler gems mail lib mail field rb in method missing from rvm gems ruby bundler gems mail lib mail fields unstructured field rb in decoded from rvm gems ruby bundler gems mail lib mail fields unstructured field rb in do decode from rvm gems ruby bundler gems mail lib mail encodings rb in decode encode from rvm gems ruby bundler gems mail lib mail encodings rb in value decode rvm gems ruby bundler gems mail lib mail encodings rb in join incompatible character encodings utf and ascii encoding compatibilityerror from bin rails in from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi from rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from rvm gems ruby gems railties lib rails commands rb in from rvm gems ruby gems railties lib rails command rb in invoke from rvm gems ruby gems railties lib rails command base rb in perform from rvm gems ruby gems thor lib thor rb in dispatch from rvm gems ruby gems thor lib thor invocation rb in invoke command from rvm 
gems ruby gems thor lib thor command rb in run from rvm gems ruby gems railties lib rails commands runner runner command rb in perform from rvm gems ruby gems railties lib rails commands runner runner command rb in eval from rvm gems ruby gems railties lib rails commands runner runner command rb in from rvm gems ruby gems railties lib rails commands runner runner command rb in new from ws zammad dev app models channel driver mail stdin rb in initialize from ws zammad dev app models channel email parser rb in process ws zammad dev app models channel email parser rb in rescue in process runtimeerror rvm gems ruby bundler gems mail lib mail encodings rb in join rvm gems ruby bundler gems mail lib mail encodings rb in value decode rvm gems ruby bundler gems mail lib mail encodings rb in decode encode rvm gems ruby bundler gems mail lib mail fields unstructured field rb in do decode rvm gems ruby bundler gems mail lib mail fields unstructured field rb in decoded rvm gems ruby bundler gems mail lib mail field rb in method missing ws zammad dev app models channel email parser rb in rescue in block in message header hash ws zammad dev app models channel email parser rb in block in message header hash ws zammad dev app models channel email parser rb in map ws zammad dev app models channel email parser rb in message header hash ws zammad dev app models channel email parser rb in parse ws zammad dev app models channel email parser rb in process ws zammad dev app models channel email parser rb in block in process rvm rubies ruby lib ruby timeout rb in block in timeout rvm rubies ruby lib ruby timeout rb in block in catch rvm rubies ruby lib ruby timeout rb in catch rvm rubies ruby lib ruby timeout rb in catch rvm rubies ruby lib ruby timeout rb in timeout ws zammad dev app models channel email parser rb in process ws zammad dev app models channel driver mail stdin rb in initialize rvm gems ruby gems railties lib rails commands runner runner command rb in new rvm gems ruby gems 
railties lib rails commands runner runner command rb in rvm gems ruby gems railties lib rails commands runner runner command rb in eval rvm gems ruby gems railties lib rails commands runner runner command rb in perform rvm gems ruby gems thor lib thor command rb in run rvm gems ruby gems thor lib thor invocation rb in invoke command rvm gems ruby gems thor lib thor rb in dispatch rvm gems ruby gems railties lib rails command base rb in perform rvm gems ruby gems railties lib rails command rb in invoke rvm gems ruby gems railties lib rails commands rb in rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require bin rails in steps to reproduce the behavior download the attached mail file process it via cat path to mail box time rails r channel driver mailstdin new yes i m sure this is a bug and no feature request or a general question
| 1
|
248,239
| 26,784,984,413
|
IssuesEvent
|
2023-02-01 01:30:16
|
hoodies-training/landing-page
|
https://api.github.com/repos/hoodies-training/landing-page
|
opened
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz, http-cache-semantics-4.1.0.tgz
|
security vulnerability
|
## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>http-cache-semantics-3.8.1.tgz</b>, <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>
<details><summary><b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.12.1.tgz (Root Library)
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/package-json/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.12.1.tgz (Root Library)
- latest-version-5.1.0.tgz
- package-json-6.5.0.tgz
- got-9.6.0.tgz
- cacheable-request-6.1.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/hoodies-training/landing-page/commit/c6b95c53e4e375da4f0c6ea6de8a3612a8d98853">c6b95c53e4e375da4f0c6ea6de8a3612a8d98853</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz, http-cache-semantics-4.1.0.tgz - ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>http-cache-semantics-3.8.1.tgz</b>, <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>
<details><summary><b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.12.1.tgz (Root Library)
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/package-json/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-3.12.1.tgz (Root Library)
- latest-version-5.1.0.tgz
- package-json-6.5.0.tgz
- got-9.6.0.tgz
- cacheable-request-6.1.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/hoodies-training/landing-page/commit/c6b95c53e4e375da4f0c6ea6de8a3612a8d98853">c6b95c53e4e375da4f0c6ea6de8a3612a8d98853</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in http cache semantics tgz http cache semantics tgz cve medium severity vulnerability vulnerable libraries http cache semantics tgz http cache semantics tgz http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file package json path to vulnerable library node modules http cache semantics package json dependency hierarchy gatsby tgz root library got tgz cacheable request tgz x http cache semantics tgz vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file package json path to vulnerable library node modules package json node modules http cache semantics package json dependency hierarchy gatsby tgz root library latest version tgz package json tgz got tgz cacheable request tgz x http cache semantics tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics step up your open source security game with mend
| 0
|
180,994
| 14,849,268,899
|
IssuesEvent
|
2021-01-18 00:40:21
|
nkdAgility/azure-devops-migration-tools
|
https://api.github.com/repos/nkdAgility/azure-devops-migration-tools
|
closed
|
How to properly exclude source work items with NodeBasePaths
|
DocumentationException no-issue-activity
|
From the documentation it appears that the NodeBasePaths in a WorkItemMigrationConfig are supposed to filter out and not migrate work items in the source that are not in or under those paths. That is not the behavior I am seeing. Work items in the source that are neither in nor under the area/iteration paths I specify in the configuration file are still being migrated.
Could this be happening because I also need to change my query to exclude work items not in these paths? The documentation reads to me as if specifying paths in NodeBasePaths should exclude work items in the source without me having to change my query to do so. Isn't that why the NodeBasePaths setting exists in the first place? Can you please clarify.
I posted here because I can't tell if this is a question or a bug.
Version: 11.9.20.0
Source: TFS 2017, English
Target: Azure DevOps. English
[migration.log](https://github.com/nkdAgility/azure-devops-migration-tools/files/5646123/migration.log)
[WorkItemMigrationConfiguration.txt](https://github.com/nkdAgility/azure-devops-migration-tools/files/5646125/WorkItemMigrationConfiguration.txt)
|
1.0
|
How to properly exclude source work items with NodeBasePaths - From the documentation it appears that the NodeBasePaths in a WorkItemMigrationConfig are supposed to filter out and not migrate work items in the source that are not in or under those paths. That is not the behavior I am seeing. Work items in the source that are neither in nor under the area/iteration paths I specify in the configuration file are still being migrated.
Could this be happening because I also need to change my query to exclude work items not in these paths? The documentation reads to me as if specifying paths in NodeBasePaths should exclude work items in the source without me having to change my query to do so. Isn't that why the NodeBasePaths setting exists in the first place? Can you please clarify.
I posted here because I can't tell if this is a question or a bug.
Version: 11.9.20.0
Source: TFS 2017, English
Target: Azure DevOps. English
[migration.log](https://github.com/nkdAgility/azure-devops-migration-tools/files/5646123/migration.log)
[WorkItemMigrationConfiguration.txt](https://github.com/nkdAgility/azure-devops-migration-tools/files/5646125/WorkItemMigrationConfiguration.txt)
|
non_process
|
how to properly exclude source work items with nodebasepaths from the documentation it appears that the nodebasepaths in a workitemmigrationconfig are suppose to filter out and not migrate work items in the source that are not in or under those paths that is not the behavior i am seeing work items in the source that are neither in or under the area iteration paths i specify in the configuration file are still being migrated could this be happening because i also need to change my query to exclude work items not in these paths the documentation reads to me that specifying paths in nodebasepaths should exclude work items in the source without me having to change my query to do so isn t that why the nodebasepaths setting exists in the first place can you please clarify i posted here because i can t tell if this is a question or a bug version source tfs english target azure devops english
| 0
|
7,629
| 10,730,180,367
|
IssuesEvent
|
2019-10-28 16:55:30
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Provide documentation for setting up VSCode for contribution
|
process: contributing
|
I tried to understand the code in the driver and server package in order to fix this issue correctly: https://github.com/cypress-io/cypress/issues/5367
So I started by searching for the definition of cy.visit to understand what's going on there. After finding navigation.coffee, I wanted to navigate through the code. Unfortunately there is no "jump to symbol" support for those files in VSCode.
It would be nice if one could provide documentation that shows how to set up VSCode or that suggests an alternative IDE (but as there is a .vscode folder in the root folder, I think it is recommended to use VSCode :-) ).
|
1.0
|
Provide documentation for setting up VSCode for contribution - I tried to understand the code in the driver and server package in order to fix this issue correctly: https://github.com/cypress-io/cypress/issues/5367
So I started by searching for the definition of cy.visit to understand what's going on there. After finding navigation.coffee, I wanted to navigate through the code. Unfortunatelly there is no "jump to symbol" support for those files in VSCode.
It would be nice, if one can provide a documentation that shows how to setup VSCode or that suggests an alternative IDE (but as there is a .vscode folder in the root folder, I think it is recommended to use VSCode :-) ).
|
process
|
provide documentation for setting up vscode for contribution i tried to understand the code in the driver and server package in order to fix this issue correctly so i started by searching for the definition of cy visit to understand what s going on there after finding navigation coffee i wanted to navigate through the code unfortunatelly there is no jump to symbol support for those files in vscode it would be nice if one can provide a documentation that shows how to setup vscode or that suggests an alternative ide but as there is a vscode folder in the root folder i think it is recommended to use vscode
| 1
|
391,127
| 26,881,193,743
|
IssuesEvent
|
2023-02-05 17:01:44
|
ManevilleF/hexx
|
https://api.github.com/repos/ManevilleF/hexx
|
closed
|
Missing documentation for from/to vector methods
|
documentation
|
The conversion methods from/to `Vec2` and `IVec2` implemented for `Hex` don't make it clear these methods shouldn't be used for converting from/to world coordinates.
# Solution
Give pointers to the `HexLayout` struct and its uses in the documentation for these methods
|
1.0
|
Missing documentation for from/to vector methods - The conversion methods from/to `Vec2` and `IVec2` implemented for `Hex` don't make it clear these methods shouldn't be used for converting from/to world coordinates.
# Solution
Give pointers to the `HexLayout` struct and its uses in the documentation for these methods
|
non_process
|
missing documentation for from to vector methods the conversion methods from to and implemented for hex don t make it clear these methods shouldn t be used for converting from to world coordinates solution give pointers to the hexlayout struct and it s uses in the documentation for these methods
| 0
|
147,804
| 11,808,292,253
|
IssuesEvent
|
2020-03-19 13:08:20
|
OpenLiberty/open-liberty
|
https://api.github.com/repos/OpenLiberty/open-liberty
|
closed
|
Test Failure: DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite feature update hung for over 27 minutes
|
team:Zombie Apocalypse test bug
|
Test Failure (20200316-0224): com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite
```
junit.framework.AssertionFailedError: 2020-03-16-09:11:44:318 Missing success message in output. PersistentExecutorsTestServlet is starting testTaskDoesRun<br>
<pre>ERROR in testTaskDoesRun:
java.lang.Exception: Task 1 did not complete any executions within allotted interval. TaskStatus[1]@bc45e369 SCHEDULED,UNATTEMPTED 2020/03/16-08:44:35.595-UTC[1584348275595]
at web.PersistentExecutorsTestServlet.testTaskDoesRun(PersistentExecutorsTestServlet.java:160)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:63)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010)
at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1134)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:415)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:374)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:548)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:482)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:347)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:318)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047)
at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:825)
</pre>
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.runInServlet(DelayExecutionFATTest.java:97)
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun(DelayExecutionFATTest.java:362)
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite(DelayExecutionFATTest.java:149)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:193)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:314)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:167)
```
Log for passing iteration of test:
```
[3/16/20, 8:43:30:376 UTC] 0000004d SystemOut O -----> testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-2E) starting
[3/16/20, 8:43:39:693 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[3/16/20, 8:43:39:781 UTC] 00000029 SystemOut O MRUBundleFileList: about to close bundle file: /home/jazz_build/_7qLVIWdDEeqaXeM8BMnlag-EBC.PROD.WASRTC-A67-000-00-00/jbe/build/dev/image/output/wlp/lib/com.ibm.ws.javaee.dd.common_1.1.39.jar
[3/16/20, 8:43:39:796 UTC] 00000029 SystemOut O test.concurrent.persistent.delayexec.internal.TestServiceImpl.notificationCreated, notified with ( FeatureBundlesProcessed)
[3/16/20, 8:43:39:817 UTC] 00000027 com.ibm.ws.config.xml.internal.ConfigRefresher A CWWKG0017I: The server configuration was successfully updated in 30.152 seconds.
[3/16/20, 8:43:39:891 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0013I: The server removed the following features: [osgiConsole-1.0].
[3/16/20, 8:43:39:891 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0008I: Feature update completed in 30.216 seconds.
[3/16/20, 8:43:39:984 UTC] 0000004d SystemOut O <----- testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-2E) successful
```
Log for failing iteration of test (test case gives up waiting and reports the exception)
```
[3/16/20, 8:44:53:148 UTC] 0000004d SystemOut O -----> testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-4E) starting
[3/16/20, 8:45:02:384 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[3/16/20, 8:50:53:608 UTC] 0000004d SystemOut O <----- testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-4E) failed:
[3/16/20, 8:50:53:609 UTC] 0000004d SystemOut O java.lang.Exception: Task 1 did not complete any executions within allotted interval. TaskStatus[1]@bc45e369 SCHEDULED,UNATTEMPTED 2020/03/16-08:44:35.595-UTC[1584348275595]
[3/16/20, 8:50:53:609 UTC] 0000004d SystemOut O at web.PersistentExecutorsTestServlet.testTaskDoesRun(PersistentExecutorsTestServlet.java:160)
[3/16/20, 9:11:44:276 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[3/16/20, 9:11:44:278 UTC] 0000004d SystemOut O at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:63)
[3/16/20, 9:11:44:279 UTC] 00000028 SystemOut O MRUBundleFileList: about to close bundle file: /home/jazz_build/_7qLVIWdDEeqaXeM8BMnlag-EBC.PROD.WASRTC-A67-000-00-00/jbe/build/dev/image/output/wlp/lib/com.ibm.ws.javaee.dd.common_1.1.39.jar
[3/16/20, 9:11:44:285 UTC] 0000004d SystemOut O at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
[3/16/20, 9:11:44:287 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
[3/16/20, 9:11:44:287 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226)
[3/16/20, 9:11:44:288 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010)
[3/16/20, 9:11:44:288 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1134)
[3/16/20, 9:11:44:290 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:415)
[3/16/20, 9:11:44:290 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:374)
[3/16/20, 9:11:44:291 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:548)
[3/16/20, 9:11:44:291 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:482)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:347)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:318)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167)
[3/16/20, 9:11:44:293 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75)
[3/16/20, 9:11:44:293 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
[3/16/20, 9:11:44:294 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
[3/16/20, 9:11:44:294 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958)
[3/16/20, 9:11:44:295 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[3/16/20, 9:11:44:297 UTC] 0000004d SystemOut O at java.base/java.lang.Thread.run(Thread.java:825)
[3/16/20, 9:11:44:313 UTC] 00000028 SystemOut O test.concurrent.persistent.delayexec.internal.TestServiceImpl.notificationCreated, notified with ( FeatureBundlesProcessed)
[3/16/20, 9:11:44:355 UTC] 00000027 com.ibm.ws.config.xml.internal.ConfigRefresher A CWWKG0017I: The server configuration was successfully updated in 1632.010 seconds.
[3/16/20, 9:11:44:426 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0013I: The server removed the following features: [osgiConsole-1.0].
[3/16/20, 9:11:44:426 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0008I: Feature update completed in 1632.059 seconds.
```
In the failing case, the feature update, which the test depends upon, takes 1632 seconds (27.2 minutes).
It isn't even an update to the features being tested by this bucket. I'm not sure what we can do about this extreme infrastructure slowness other than make the test case tolerate it. The test was already tolerating up to 6 minutes. Now it will need to tolerate 30.
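The tolerance change described above amounts to widening a poll-style wait: check the task status repeatedly until it completes or a deadline passes. A minimal sketch of that pattern (the `waitFor`/`PollingWait` names are illustrative, not the actual FAT test code, and the demo condition flips after ~200 ms rather than a real feature update):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class PollingWait {
    // Poll a condition until it holds or the timeout elapses.
    // Widening the tolerated interval (6 min -> 30 min) is just a larger timeout here.
    static boolean waitFor(BooleanSupplier condition, long timeout, TimeUnit unit,
                           long pollMillis) throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean())
                return true;
            Thread.sleep(pollMillis);
        }
        // One final check at the deadline before giving up.
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in condition: becomes true after ~200 ms (a real test would
        // poll TaskStatus for a completed execution, with a 30-minute timeout).
        boolean done = waitFor(() -> System.currentTimeMillis() - start > 200,
                               5, TimeUnit.SECONDS, 50);
        System.out.println(done ? "completed" : "timed out");
    }
}
```

With a pattern like this, the slow-infrastructure case only changes the `timeout` argument; the failure mode ("did not complete any executions within allotted interval") stays the same when the deadline is genuinely exceeded.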
|
1.0
|
Test Failure: DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite feature update hung for over 27 minutes - Test Failure (20200316-0224): com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite
```
junit.framework.AssertionFailedError: 2020-03-16-09:11:44:318 Missing success message in output. PersistentExecutorsTestServlet is starting testTaskDoesRun<br>
<pre>ERROR in testTaskDoesRun:
java.lang.Exception: Task 1 did not complete any executions within allotted interval. TaskStatus[1]@bc45e369 SCHEDULED,UNATTEMPTED 2020/03/16-08:44:35.595-UTC[1584348275595]
at web.PersistentExecutorsTestServlet.testTaskDoesRun(PersistentExecutorsTestServlet.java:160)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:63)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010)
at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1134)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:415)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:374)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:548)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:482)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:347)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:318)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167)
at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047)
at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:825)
</pre>
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.runInServlet(DelayExecutionFATTest.java:97)
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun(DelayExecutionFATTest.java:362)
at com.ibm.ws.concurrent.persistent.delayexec.DelayExecutionFATTest.testRescheduleUnderConfigUpdateRun_Lite(DelayExecutionFATTest.java:149)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.custom.junit.runner.FATRunner$1.evaluate(FATRunner.java:193)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:314)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:167)
```
Log for passing iteration of test:
```
[3/16/20, 8:43:30:376 UTC] 0000004d SystemOut O -----> testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-2E) starting
[3/16/20, 8:43:39:693 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[3/16/20, 8:43:39:781 UTC] 00000029 SystemOut O MRUBundleFileList: about to close bundle file: /home/jazz_build/_7qLVIWdDEeqaXeM8BMnlag-EBC.PROD.WASRTC-A67-000-00-00/jbe/build/dev/image/output/wlp/lib/com.ibm.ws.javaee.dd.common_1.1.39.jar
[3/16/20, 8:43:39:796 UTC] 00000029 SystemOut O test.concurrent.persistent.delayexec.internal.TestServiceImpl.notificationCreated, notified with ( FeatureBundlesProcessed)
[3/16/20, 8:43:39:817 UTC] 00000027 com.ibm.ws.config.xml.internal.ConfigRefresher A CWWKG0017I: The server configuration was successfully updated in 30.152 seconds.
[3/16/20, 8:43:39:891 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0013I: The server removed the following features: [osgiConsole-1.0].
[3/16/20, 8:43:39:891 UTC] 00000029 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0008I: Feature update completed in 30.216 seconds.
[3/16/20, 8:43:39:984 UTC] 0000004d SystemOut O <----- testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-2E) successful
```
Log for failing iteration of test (test case gives up waiting and reports the exception)
```
[3/16/20, 8:44:53:148 UTC] 0000004d SystemOut O -----> testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-4E) starting
[3/16/20, 8:45:02:384 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[3/16/20, 8:50:53:608 UTC] 0000004d SystemOut O <----- testTaskDoesRun(invoked by testRescheduleUnderConfigUpdateRun-4E) failed:
[3/16/20, 8:50:53:609 UTC] 0000004d SystemOut O java.lang.Exception: Task 1 did not complete any executions within allotted interval. TaskStatus[1]@bc45e369 SCHEDULED,UNATTEMPTED 2020/03/16-08:44:35.595-UTC[1584348275595]
[3/16/20, 8:50:53:609 UTC] 0000004d SystemOut O at web.PersistentExecutorsTestServlet.testTaskDoesRun(PersistentExecutorsTestServlet.java:160)
[3/16/20, 9:11:44:276 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[3/16/20, 9:11:44:277 UTC] 0000004d SystemOut O at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[3/16/20, 9:11:44:278 UTC] 0000004d SystemOut O at web.PersistentExecutorsTestServlet.doGet(PersistentExecutorsTestServlet.java:63)
[3/16/20, 9:11:44:279 UTC] 00000028 SystemOut O MRUBundleFileList: about to close bundle file: /home/jazz_build/_7qLVIWdDEeqaXeM8BMnlag-EBC.PROD.WASRTC-A67-000-00-00/jbe/build/dev/image/output/wlp/lib/com.ibm.ws.javaee.dd.common_1.1.39.jar
[3/16/20, 9:11:44:285 UTC] 0000004d SystemOut O at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
[3/16/20, 9:11:44:286 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
[3/16/20, 9:11:44:287 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
[3/16/20, 9:11:44:287 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1226)
[3/16/20, 9:11:44:288 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1010)
[3/16/20, 9:11:44:288 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
[3/16/20, 9:11:44:289 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1134)
[3/16/20, 9:11:44:290 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:415)
[3/16/20, 9:11:44:290 UTC] 0000004d SystemOut O at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:374)
[3/16/20, 9:11:44:291 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:548)
[3/16/20, 9:11:44:291 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:482)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:347)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.ready(HttpInboundLink.java:318)
[3/16/20, 9:11:44:292 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:167)
[3/16/20, 9:11:44:293 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:75)
[3/16/20, 9:11:44:293 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
[3/16/20, 9:11:44:294 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
[3/16/20, 9:11:44:294 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:958)
[3/16/20, 9:11:44:295 UTC] 0000004d SystemOut O at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1047)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:239)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[3/16/20, 9:11:44:296 UTC] 0000004d SystemOut O at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[3/16/20, 9:11:44:297 UTC] 0000004d SystemOut O at java.base/java.lang.Thread.run(Thread.java:825)
[3/16/20, 9:11:44:313 UTC] 00000028 SystemOut O test.concurrent.persistent.delayexec.internal.TestServiceImpl.notificationCreated, notified with ( FeatureBundlesProcessed)
[3/16/20, 9:11:44:355 UTC] 00000027 com.ibm.ws.config.xml.internal.ConfigRefresher A CWWKG0017I: The server configuration was successfully updated in 1632.010 seconds.
[3/16/20, 9:11:44:426 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0013I: The server removed the following features: [osgiConsole-1.0].
[3/16/20, 9:11:44:426 UTC] 00000028 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0008I: Feature update completed in 1632.059 seconds.
```
In the failing case, the feature update, which the test depends upon, takes 1632 seconds (27.2 minutes).
It isn't even an update to the features being tested by this bucket. I'm not sure what we can do about this extreme infrastructure slowness other than make the test case tolerate it. The test was already tolerating up to 6 minutes. Now it will need to tolerate 30.
|
non_process
|
test failure delayexecutionfattest testrescheduleunderconfigupdaterun lite feature update hung for over minutes test failure com ibm ws concurrent persistent delayexec delayexecutionfattest testrescheduleunderconfigupdaterun lite junit framework assertionfailederror missing success message in output persistentexecutorstestservlet is starting testtaskdoesrun error in testtaskdoesrun java lang exception task did not complete any executions within allotted interval taskstatus scheduled unattempted utc at web persistentexecutorstestservlet testtaskdoesrun persistentexecutorstestservlet java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at web persistentexecutorstestservlet doget persistentexecutorstestservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at com ibm ws webcontainer servlet servletwrapper service servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer servlet cacheservletwrapper handlerequest cacheservletwrapper java at com ibm ws webcontainer webcontainer handlerequest webcontainer java at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java at com ibm ws http dispatcher internal channel httpdispatcherlink wraphandlerandexecute httpdispatcherlink java at com ibm ws http dispatcher internal 
channel httpdispatcherlink ready httpdispatcherlink java at com ibm ws http channel internal inbound httpinboundlink handlediscrimination httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink handlenewrequest httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink processrequest httpinboundlink java at com ibm ws http channel internal inbound httpinboundlink ready httpinboundlink java at com ibm ws tcpchannel internal newconnectioninitialreadcallback sendtodiscriminators newconnectioninitialreadcallback java at com ibm ws tcpchannel internal newconnectioninitialreadcallback complete newconnectioninitialreadcallback java at com ibm ws tcpchannel internal workqueuemanager requestcomplete workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager attemptio workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager workerrun workqueuemanager java at com ibm ws tcpchannel internal workqueuemanager worker run workqueuemanager java at com ibm ws threading internal executorserviceimpl runnablewrapper run executorserviceimpl java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java at com ibm ws concurrent persistent delayexec delayexecutionfattest runinservlet delayexecutionfattest java at com ibm ws concurrent persistent delayexec delayexecutionfattest testrescheduleunderconfigupdaterun delayexecutionfattest java at com ibm ws concurrent persistent delayexec delayexecutionfattest testrescheduleunderconfigupdaterun lite delayexecutionfattest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at 
componenttest custom junit runner fatrunner evaluate fatrunner java at componenttest custom junit runner fatrunner evaluate fatrunner java at componenttest custom junit runner fatrunner run fatrunner java log for passing iteration of test systemout o testtaskdoesrun invoked by testrescheduleunderconfigupdaterun starting com ibm ws kernel feature internal featuremanager i feature update started systemout o mrubundlefilelist about to close bundle file home jazz build ebc prod wasrtc jbe build dev image output wlp lib com ibm ws javaee dd common jar systemout o test concurrent persistent delayexec internal testserviceimpl notificationcreated notified with featurebundlesprocessed com ibm ws config xml internal configrefresher a the server configuration was successfully updated in seconds com ibm ws kernel feature internal featuremanager a the server removed the following features com ibm ws kernel feature internal featuremanager a feature update completed in seconds systemout o testtaskdoesrun invoked by testrescheduleunderconfigupdaterun successful log for failing iteration of test test case gives up waiting and reports the exception systemout o testtaskdoesrun invoked by testrescheduleunderconfigupdaterun starting com ibm ws kernel feature internal featuremanager i feature update started systemout o testtaskdoesrun invoked by testrescheduleunderconfigupdaterun failed systemout o java lang exception task did not complete any executions within allotted interval taskstatus scheduled unattempted utc systemout o at web persistentexecutorstestservlet testtaskdoesrun persistentexecutorstestservlet java systemout o at java base jdk internal reflect nativemethodaccessorimpl native method systemout o at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java systemout o at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java systemout o at java base java lang reflect method invoke method java 
systemout o at web persistentexecutorstestservlet doget persistentexecutorstestservlet java systemout o mrubundlefilelist about to close bundle file home jazz build ebc prod wasrtc jbe build dev image output wlp lib com ibm ws javaee dd common jar systemout o at javax servlet http httpservlet service httpservlet java systemout o at javax servlet http httpservlet service httpservlet java systemout o at com ibm ws webcontainer servlet servletwrapper service servletwrapper java systemout o at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java systemout o at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java systemout o at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java systemout o at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java systemout o at com ibm ws webcontainer servlet cacheservletwrapper handlerequest cacheservletwrapper java systemout o at com ibm ws webcontainer webcontainer handlerequest webcontainer java systemout o at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java systemout o at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java systemout o at com ibm ws http dispatcher internal channel httpdispatcherlink wraphandlerandexecute httpdispatcherlink java systemout o at com ibm ws http dispatcher internal channel httpdispatcherlink ready httpdispatcherlink java systemout o at com ibm ws http channel internal inbound httpinboundlink handlediscrimination httpinboundlink java systemout o at com ibm ws http channel internal inbound httpinboundlink handlenewrequest httpinboundlink java systemout o at com ibm ws http channel internal inbound httpinboundlink processrequest httpinboundlink java systemout o at com ibm ws http channel internal inbound httpinboundlink ready httpinboundlink java systemout o at com ibm ws tcpchannel internal 
newconnectioninitialreadcallback sendtodiscriminators newconnectioninitialreadcallback java systemout o at com ibm ws tcpchannel internal newconnectioninitialreadcallback complete newconnectioninitialreadcallback java systemout o at com ibm ws tcpchannel internal workqueuemanager requestcomplete workqueuemanager java systemout o at com ibm ws tcpchannel internal workqueuemanager attemptio workqueuemanager java systemout o at com ibm ws tcpchannel internal workqueuemanager workerrun workqueuemanager java systemout o at com ibm ws tcpchannel internal workqueuemanager worker run workqueuemanager java systemout o at com ibm ws threading internal executorserviceimpl runnablewrapper run executorserviceimpl java systemout o at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java systemout o at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java systemout o at java base java lang thread run thread java systemout o test concurrent persistent delayexec internal testserviceimpl notificationcreated notified with featurebundlesprocessed com ibm ws config xml internal configrefresher a the server configuration was successfully updated in seconds com ibm ws kernel feature internal featuremanager a the server removed the following features com ibm ws kernel feature internal featuremanager a feature update completed in seconds in the failing case the feature update which the test depends upon takes seconds minutes it isn t even an update to the features being tested by this bucket i m not sure what we can do about this extreme infrastructure slowness other than make the test case tolerate it the test was already tolerating up to minutes now it will need to tolerate
| 0
|
7,030
| 10,191,948,670
|
IssuesEvent
|
2019-08-12 09:46:15
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
microsoft.com
|
whitelisting process
|
*@mitchellkrogza commented on May 30, 2019, 10:51 AM UTC:*
**Domains or links**
microsoft.com
**More Information**
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
**Additional context**
Add any other context about the problem here.
<g-emoji class="g-emoji" alias="exclamation" fallback-src="https://github.githubassets.com/images/icons/emoji/unicode/2757.png">❗️</g-emoji>
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
*This issue was moved by [mitchellkrogza](https://github.com/mitchellkrogza) from [mitchellkrogza/Badd-Boyz-Hosts#64](https://github.com/mitchellkrogza/Badd-Boyz-Hosts/issues/64).*
|
1.0
|
microsoft.com - *@mitchellkrogza commented on May 30, 2019, 10:51 AM UTC:*
**Domains or links**
microsoft.com
**More Information**
**Have you requested removal from other sources?**
Please include all relevant links to your existing removals / whitelistings.
**Additional context**
Add any other context about the problem here.
<g-emoji class="g-emoji" alias="exclamation" fallback-src="https://github.githubassets.com/images/icons/emoji/unicode/2757.png">❗️</g-emoji>
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
*This issue was moved by [mitchellkrogza](https://github.com/mitchellkrogza) from [mitchellkrogza/Badd-Boyz-Hosts#64](https://github.com/mitchellkrogza/Badd-Boyz-Hosts/issues/64).*
|
process
|
microsoft com mitchellkrogza commented on may am utc domains or links microsoft com more information have you requested removal from other sources please include all relevant links to your existing removals whitelistings additional context add any other context about the problem here g emoji class g emoji alias exclamation fallback src we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process this issue was moved by from
| 1
|
7,226
| 10,353,154,218
|
IssuesEvent
|
2019-09-05 10:52:24
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
in "my tasks", the list updates to every task the user created, after adding watchers to a created my task, sorting by favorites
|
Process bug Tasks
|
go to tasks
click on my task
create a task
add a watcher to it/ sort by favorite
the app loads a list of every list the user created, and not necessarily assigned to:

|
1.0
|
in "my tasks", the list updates to every task the user created, after adding watchers to a created my task, sorting by favorites - go to tasks
click on my task
create a task
add a watcher to it/ sort by favorite
the app loads a list of every list the user created, and not necessarily assigned to:

|
process
|
in my tasks the list updates to every task the user created after adding watchers to a created my task sorting by favorites go to tasks click on my task create a task add a watcher to it sort by favorite the app loads a list of every list the user created and not necessarily assigned to
| 1
|
20,655
| 27,329,690,217
|
IssuesEvent
|
2023-02-25 13:12:57
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
closed
|
design landing page on figma
|
Processing Task Sprint 1
|
Task Tests
test 1:
-go to figma page: https://www.figma.com/proto/5qJUyXFUAdbtiobIQYqH20/Project-Prototype?node-id=51%3A153&scaling=scale-down&page-id=0%3A1
check for the login/sign up button on top right of screen.
look for the filter button on top right of screen
check the navigation bar on top left of screen
see the current events posted (UB hackathon for all of them)
look for the bookmark button, in the shape of a star, next to each title of the event.
test 2:
click on login button on top right
ensure that it opens the login page
|
1.0
|
design landing page on figma - Task Tests
test 1:
-go to figma page: https://www.figma.com/proto/5qJUyXFUAdbtiobIQYqH20/Project-Prototype?node-id=51%3A153&scaling=scale-down&page-id=0%3A1
check for the login/sign up button on top right of screen.
look for the filter button on top right of screen
check the navigation bar on top left of screen
see the current events posted (UB hackathon for all of them)
look for the bookmark button, in the shape of a star, next to each title of the event.
test 2:
click on login button on top right
ensure that it opens the login page
|
process
|
design landing page on figma task tests test go to figma page check for the login sign up button on top right of screen look for the filter button on top right of screen check the navigation bar on top left of screen see the current events posted ub hackathon for all of them look for the bookmark button in the shape of a star next to each title of the event test click on login button on top right ensure that it opens the login page
| 1
|
104,911
| 4,226,876,224
|
IssuesEvent
|
2016-07-02 19:26:27
|
pdean1/CS6920-Group-4-Project
|
https://api.github.com/repos/pdean1/CS6920-Group-4-Project
|
closed
|
BUG FIX: BudgetDAL.cs references columns that do not exist
|
bug High Priority
|
Fix time 5 minutes
Bug cause: https://github.com/pdean1/CS6920-Group-4-Project/commit/2906a0bf04e8b5d255727b96552e363a98b40ec6 This commit bumped the ordinal count by one, I assume becuase the old DB Scripts were still being used.
|
1.0
|
BUG FIX: BudgetDAL.cs references columns that do not exist - Fix time 5 minutes
Bug cause: https://github.com/pdean1/CS6920-Group-4-Project/commit/2906a0bf04e8b5d255727b96552e363a98b40ec6 This commit bumped the ordinal count by one, I assume becuase the old DB Scripts were still being used.
|
non_process
|
bug fix budgetdal cs references columns that do not exist fix time minutes bug cause this commit bumped the ordinal count by one i assume becuase the old db scripts were still being used
| 0
|
4,285
| 7,190,615,589
|
IssuesEvent
|
2018-02-02 17:53:38
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Desired source vs. actual source
|
libs-etherlib status-inprocess type-enhancement
|
Using etherlib_init and getSource/setSource functions, we can specify the desired configuration.
The node software delivers its own name with the `web3_clientVersion` RPC call and QuickBlocks' `getVersionFromClient` call.
We could, conceivably, store a `desired_source` vs. an `actual_source` data that would evaluate (at initialization) what sort of state it's in. Easy to implement inside the `setSource` / `getSource` section.
Also--it should be `pushSource` / `popSource` than `SFString save = getSource...setSource(new)...do stuff...setSource(save)`
|
1.0
|
Desired source vs. actual source - Using etherlib_init and getSource/setSource functions, we can specify the desired configuration.
The node software delivers its own name with the `web3_clientVersion` RPC call and QuickBlocks' `getVersionFromClient` call.
We could, conceivably, store a `desired_source` vs. an `actual_source` data that would evaluate (at initialization) what sort of state it's in. Easy to implement inside the `setSource` / `getSource` section.
Also--it should be `pushSource` / `popSource` than `SFString save = getSource...setSource(new)...do stuff...setSource(save)`
|
process
|
desired source vs actual source using etherlib init and getsource setsource functions we can specify the desired configuration the node software delivers its own name with the clientversion rpc call and quickblocks getversionfromclient call we could conceivably store a desired source vs an actual source data that would evaluate at initialization what sort of state it s in easy to implement inside the setsource getsource section also it should be pushsource popsource than sfstring save getsource setsource new do stuff setsource save
| 1
|
80,854
| 10,212,822,484
|
IssuesEvent
|
2019-08-14 20:27:17
|
locationtech/geotrellis
|
https://api.github.com/repos/locationtech/geotrellis
|
opened
|
Remove or revise wiki pages
|
documentation
|
Some of the pages in the GitHub wiki are obsolete and should be updated or removed. These include:
- https://github.com/locationtech/geotrellis/wiki/Operations-to-Implement
- https://github.com/locationtech/geotrellis/wiki/Release-notes
- https://github.com/locationtech/geotrellis/wiki/Roadmap
|
1.0
|
Remove or revise wiki pages - Some of the pages in the GitHub wiki are obsolete and should be updated or removed. These include:
- https://github.com/locationtech/geotrellis/wiki/Operations-to-Implement
- https://github.com/locationtech/geotrellis/wiki/Release-notes
- https://github.com/locationtech/geotrellis/wiki/Roadmap
|
non_process
|
remove or revise wiki pages some of the pages in the github wiki are obsolete and should be updated or removed these include
| 0
|
250,879
| 18,911,425,096
|
IssuesEvent
|
2021-11-16 14:29:16
|
TinkoffCreditSystems/taiga-ui
|
https://api.github.com/repos/TinkoffCreditSystems/taiga-ui
|
closed
|
📚 - Menu behavior issues
|
documentation
|
### What is the affected URL?
https://taiga-ui.dev/next/dialogs
### Description
https://user-images.githubusercontent.com/12021443/140023935-8031bb99-7ddb-474f-8aff-44368c914d06.mov
### Which browsers have you used?
- [X] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Edge
### Which operating systems have you used?
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
|
1.0
|
📚 - Menu behavior issues - ### What is the affected URL?
https://taiga-ui.dev/next/dialogs
### Description
https://user-images.githubusercontent.com/12021443/140023935-8031bb99-7ddb-474f-8aff-44368c914d06.mov
### Which browsers have you used?
- [X] Chrome
- [ ] Firefox
- [ ] Safari
- [ ] Edge
### Which operating systems have you used?
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
|
non_process
|
📚 menu behavior issues what is the affected url description which browsers have you used chrome firefox safari edge which operating systems have you used macos windows linux ios android
| 0
|
332,050
| 10,083,681,781
|
IssuesEvent
|
2019-07-25 14:10:01
|
AugurProject/augur
|
https://api.github.com/repos/AugurProject/augur
|
closed
|
increase spacing between profit and loss change and and total on your overview
|
Priority: Low
|
The build looks too tight

Should look more like this to allow for a bit of breathing space

|
1.0
|
increase spacing between profit and loss change and and total on your overview - The build looks too tight

Should look more like this to allow for a bit of breathing space

|
non_process
|
increase spacing between profit and loss change and and total on your overview the build looks too tight should look more like this to allow for a bit of breathing space
| 0
|
765,192
| 26,836,695,734
|
IssuesEvent
|
2023-02-02 20:08:12
|
gamefreedomgit/Maelstrom
|
https://api.github.com/repos/gamefreedomgit/Maelstrom
|
opened
|
Warrior's Intimidating Shout fear instantly breaks on the main target
|
Class: Warrior Spell PVP Priority: High Status: Confirmed
|
Title. https://www.youtube.com/watch?v=l_Ln6ygqWV0 Secondary targets have the proper damage threshold set.
https://wowwiki-archive.fandom.com/wiki/Intimidating_Shout
"...this fear is not as easy to break as the primary target's."
|
1.0
|
Warrior's Intimidating Shout fear instantly breaks on the main target - Title. https://www.youtube.com/watch?v=l_Ln6ygqWV0 Secondary targets have the proper damage threshold set.
https://wowwiki-archive.fandom.com/wiki/Intimidating_Shout
"...this fear is not as easy to break as the primary target's."
|
non_process
|
warrior s intimidating shout fear instantly breaks on the main target title secondary targets have the proper damage threshold set this fear is not as easy to break as the primary target s
| 0
|
10,614
| 13,438,568,035
|
IssuesEvent
|
2020-09-07 18:27:56
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
opened
|
New `strftime` remap function
|
domain: mapping domain: processing type: feature
|
A `strftime` function takes a timestamp as argument and formats it according to [Rust's strftime format](https://docs.rs/chrono/0.3.1/chrono/format/strftime/index.html):
## Example
```
.timestamp_string = strftime(.timestamp, "%F")
```
Would result in:
```js
{
"timestamp_string": "2020-08-03"
}
```
|
1.0
|
New `strftime` remap function - A `strftime` function takes a timestamp as argument and formats it according to [Rust's strftime format](https://docs.rs/chrono/0.3.1/chrono/format/strftime/index.html):
## Example
```
.timestamp_string = strftime(.timestamp, "%F")
```
Would result in:
```js
{
"timestamp_string": "2020-08-03"
}
```
|
process
|
new strftime remap function a strftime function takes a timestamp as argument and formats it according to example timestamp string strftime timestamp f would result in js timestamp string
| 1
|
150,475
| 11,962,546,907
|
IssuesEvent
|
2020-04-05 12:51:21
|
Rocologo/BagOfGold
|
https://api.github.com/repos/Rocologo/BagOfGold
|
closed
|
Bugs with drop-money-on-ground-itemtype set as ITEM ?
|
Fixed - To be tested
|
Hi @Rocologo
You'll find a copy/paste from my forum post bellow.
But as you anwered about "gringotts_style" mode, I have a question : ITEM mode do not support split and stacks ? Only gringotts_style supports it ?
Also, you have to know that I decided to switch from SKULLS to ITEMS because I had problems with shopkeepers (works with citizens). It works with money items (default; emerald), and I was not able to make it work with SKULLS. If you have an idea for that it would be nice !
The original post :
Until now, I used Bagsofgold with drop-money-on-ground-itemtype: SKULL, no problem at all with that.
Now I want to use it with the emerald as ITEM.
And a lot of things are not working anymore.
- can't stack given/withdrawn emeralds to make a bigger emerald "bag"
- - tested creative & survival
- can't split given/withdrawn emeralds to have two emerald with lower values
- - tested creative & survival
- can't deposit to the bank at all (console spam error bellow)
- can't withdraw from the bank if you already have ONE emerald in your inventory
- Your money is duplicated, each time you open inventory if you are in creative.
- - This is a old bug that does not happen anymore with the skulls If i'm correct
I never tried your plugin with an item before, is it the normal behaviour ? :oops:
If not, there is a lot of bugs right there.
Thanks !
Some errors in console
`[14:56:51] [Server thread/ERROR]: Could not pass event PlayerInteractEvent to BagOfGold v2.9.2
java.lang.IllegalArgumentException: Invalid UUID string:
at java.util.UUID.fromString(UUID.java:215) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.setReward(Reward.java:136) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.<init>(Reward.java:81) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.getReward(Reward.java:371) ~[?:?]
at one.lindegaard.BagOfGold.rewards.BagOfGoldItems.getSpaceForBagOfGoldMoney(BagOfGoldItems.java:347) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.getSpaceForMoney(RewardManager.java:569) ~[?:?]
at one.lindegaard.BagOfGold.bank.BankSign.onPlayerInteractEvent(BankSign.java:156) ~[?:?]
at com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor119.execute(Unknown Source) ~[?:?]
at org.bukkit.plugin.EventExecutor.lambda$create$1(EventExecutor.java:69) ~[patched_1.15.2.jar:git-Paper-143]
at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:559) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.callPlayerInteractEvent(CraftEventFactory.java:463) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerInteractManager.a(PlayerInteractManager.java:448) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:1378) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PacketPlayInUseItem.a(PacketPlayInUseItem.java:27) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PacketPlayInUseItem.a(PacketPlayInUseItem.java:5) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnectionUtils.lambda$ensureMainThread$0(PlayerConnectionUtils.java:23) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.TickTask.run(SourceFile:18) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeTask(IAsyncTaskHandler.java:136) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandlerReentrant.executeTask(SourceFile:23) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeNext(IAsyncTaskHandler.java:109) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.ba(MinecraftServer.java:1038) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.executeNext(MinecraftServer.java:1031) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.awaitTasks(IAsyncTaskHandler.java:119) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.a(MinecraftServer.java:1102) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:934) ~[patched_1.15.2.jar:git-Paper-143]
at java.lang.Thread.run(Thread.java:834) [?:?]
[15:00:44] [Server thread/ERROR]: Could not pass event InventoryCloseEvent to BagOfGold v2.9.2
java.lang.IllegalArgumentException: Invalid UUID string:
at java.util.UUID.fromString(UUID.java:215) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.setReward(Reward.java:136) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.<init>(Reward.java:81) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.getReward(Reward.java:371) ~[?:?]
at one.lindegaard.BagOfGold.rewards.BagOfGoldItems.getAmountOfBagOfGoldMoneyInInventory(BagOfGoldItems.java:279) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.getAmountInInventory(RewardManager.java:401) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.adjustPlayerBalanceToAmounOfMoneyInInventory(RewardManager.java:535) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardListeners.onInventoryCloseEvent(RewardListeners.java:39) ~[?:?]
at com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor113.execute(Unknown Source) ~[?:?]
at org.bukkit.plugin.EventExecutor.lambda$create$1(EventExecutor.java:69) ~[patched_1.15.2.jar:git-Paper-143]
at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:559) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.handleInventoryCloseEvent(CraftEventFactory.java:1375) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerList.disconnect(PlayerList.java:407) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:1506) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.NetworkManager.handleDisconnection(NetworkManager.java:356) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.ServerConnection.c(ServerConnection.java:163) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.b(MinecraftServer.java:1269) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.DedicatedServer.b(DedicatedServer.java:430) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.a(MinecraftServer.java:1112) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:934) ~[patched_1.15.2.jar:git-Paper-143]
at java.lang.Thread.run(Thread.java:834) [?:?]`
|
1.0
|
Bugs with drop-money-on-ground-itemtype set as ITEM ? - Hi @Rocologo
You'll find a copy/paste from my forum post bellow.
But as you anwered about "gringotts_style" mode, I have a question : ITEM mode do not support split and stacks ? Only gringotts_style supports it ?
Also, you have to know that I decided to switch from SKULLS to ITEMS because I had problems with shopkeepers (works with citizens). It works with money items (default; emerald), and I was not able to make it work with SKULLS. If you have an idea for that it would be nice !
The original post :
Until now, I used Bagsofgold with drop-money-on-ground-itemtype: SKULL, no problem at all with that.
Now I want to use it with the emerald as ITEM.
And a lot of things are not working anymore.
- can't stack given/withdrawn emeralds to make a bigger emerald "bag"
- - tested creative & survival
- can't split given/withdrawn emeralds to have two emerald with lower values
- - tested creative & survival
- can't deposit to the bank at all (console spam error bellow)
- can't withdraw from the bank if you already have ONE emerald in your inventory
- Your money is duplicated, each time you open inventory if you are in creative.
- - This is a old bug that does not happen anymore with the skulls If i'm correct
I never tried your plugin with an item before, is it the normal behaviour ? :oops:
If not, there is a lot of bugs right there.
Thanks !
Some errors in console
`[14:56:51] [Server thread/ERROR]: Could not pass event PlayerInteractEvent to BagOfGold v2.9.2
java.lang.IllegalArgumentException: Invalid UUID string:
at java.util.UUID.fromString(UUID.java:215) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.setReward(Reward.java:136) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.<init>(Reward.java:81) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.getReward(Reward.java:371) ~[?:?]
at one.lindegaard.BagOfGold.rewards.BagOfGoldItems.getSpaceForBagOfGoldMoney(BagOfGoldItems.java:347) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.getSpaceForMoney(RewardManager.java:569) ~[?:?]
at one.lindegaard.BagOfGold.bank.BankSign.onPlayerInteractEvent(BankSign.java:156) ~[?:?]
at com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor119.execute(Unknown Source) ~[?:?]
at org.bukkit.plugin.EventExecutor.lambda$create$1(EventExecutor.java:69) ~[patched_1.15.2.jar:git-Paper-143]
at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:559) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.callPlayerInteractEvent(CraftEventFactory.java:463) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerInteractManager.a(PlayerInteractManager.java:448) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:1378) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PacketPlayInUseItem.a(PacketPlayInUseItem.java:27) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PacketPlayInUseItem.a(PacketPlayInUseItem.java:5) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnectionUtils.lambda$ensureMainThread$0(PlayerConnectionUtils.java:23) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.TickTask.run(SourceFile:18) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeTask(IAsyncTaskHandler.java:136) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandlerReentrant.executeTask(SourceFile:23) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.executeNext(IAsyncTaskHandler.java:109) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.ba(MinecraftServer.java:1038) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.executeNext(MinecraftServer.java:1031) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.IAsyncTaskHandler.awaitTasks(IAsyncTaskHandler.java:119) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.a(MinecraftServer.java:1102) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:934) ~[patched_1.15.2.jar:git-Paper-143]
at java.lang.Thread.run(Thread.java:834) [?:?]
[15:00:44] [Server thread/ERROR]: Could not pass event InventoryCloseEvent to BagOfGold v2.9.2
java.lang.IllegalArgumentException: Invalid UUID string:
at java.util.UUID.fromString(UUID.java:215) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.setReward(Reward.java:136) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.<init>(Reward.java:81) ~[?:?]
at one.lindegaard.BagOfGold.rewards.Reward.getReward(Reward.java:371) ~[?:?]
at one.lindegaard.BagOfGold.rewards.BagOfGoldItems.getAmountOfBagOfGoldMoneyInInventory(BagOfGoldItems.java:279) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.getAmountInInventory(RewardManager.java:401) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardManager.adjustPlayerBalanceToAmounOfMoneyInInventory(RewardManager.java:535) ~[?:?]
at one.lindegaard.BagOfGold.rewards.RewardListeners.onInventoryCloseEvent(RewardListeners.java:39) ~[?:?]
at com.destroystokyo.paper.event.executor.asm.generated.GeneratedEventExecutor113.execute(Unknown Source) ~[?:?]
at org.bukkit.plugin.EventExecutor.lambda$create$1(EventExecutor.java:69) ~[patched_1.15.2.jar:git-Paper-143]
at co.aikar.timings.TimedEventExecutor.execute(TimedEventExecutor.java:80) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:70) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:559) ~[patched_1.15.2.jar:git-Paper-143]
at org.bukkit.craftbukkit.v1_15_R1.event.CraftEventFactory.handleInventoryCloseEvent(CraftEventFactory.java:1375) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerList.disconnect(PlayerList.java:407) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.PlayerConnection.a(PlayerConnection.java:1506) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.NetworkManager.handleDisconnection(NetworkManager.java:356) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.ServerConnection.c(ServerConnection.java:163) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.b(MinecraftServer.java:1269) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.DedicatedServer.b(DedicatedServer.java:430) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.a(MinecraftServer.java:1112) ~[patched_1.15.2.jar:git-Paper-143]
at net.minecraft.server.v1_15_R1.MinecraftServer.run(MinecraftServer.java:934) ~[patched_1.15.2.jar:git-Paper-143]
at java.lang.Thread.run(Thread.java:834) [?:?]`
|
non_process
|
bugs with drop money on ground itemtype set as item hi rocologo you ll find a copy paste from my forum post bellow but as you anwered about gringotts style mode i have a question item mode do not support split and stacks only gringotts style supports it also you have to know that i decided to switch from skulls to items because i had problems with shopkeepers works with citizens it works with money items default emerald and i was not able to make it work with skulls if you have an idea for that it would be nice the original post until now i used bagsofgold with drop money on ground itemtype skull no problem at all with that now i want to use it with the emerald as item and a lot of things are not working anymore can t stack given withdrawn emeralds to make a bigger emerald bag tested creative survival can t split given withdrawn emeralds to have two emerald with lower values tested creative survival can t deposit to the bank at all console spam error bellow can t withdraw from the bank if you already have one emerald in your inventory your money is duplicated each time you open inventory if you are in creative this is a old bug that does not happen anymore with the skulls if i m correct i never tried your plugin with an item before is it the normal behaviour oops if not there is a lot of bugs right there thanks some errors in console could not pass event playerinteractevent to bagofgold java lang illegalargumentexception invalid uuid string at java util uuid fromstring uuid java at one lindegaard bagofgold rewards reward setreward reward java at one lindegaard bagofgold rewards reward reward java at one lindegaard bagofgold rewards reward getreward reward java at one lindegaard bagofgold rewards bagofgolditems getspaceforbagofgoldmoney bagofgolditems java at one lindegaard bagofgold rewards rewardmanager getspaceformoney rewardmanager java at one lindegaard bagofgold bank banksign onplayerinteractevent banksign java at com destroystokyo paper event executor asm 
generated execute unknown source at org bukkit plugin eventexecutor lambda create eventexecutor java at co aikar timings timedeventexecutor execute timedeventexecutor java at org bukkit plugin registeredlistener callevent registeredlistener java at org bukkit plugin simplepluginmanager callevent simplepluginmanager java at org bukkit craftbukkit event crafteventfactory callplayerinteractevent crafteventfactory java at net minecraft server playerinteractmanager a playerinteractmanager java at net minecraft server playerconnection a playerconnection java at net minecraft server packetplayinuseitem a packetplayinuseitem java at net minecraft server packetplayinuseitem a packetplayinuseitem java at net minecraft server playerconnectionutils lambda ensuremainthread playerconnectionutils java at net minecraft server ticktask run sourcefile at net minecraft server iasynctaskhandler executetask iasynctaskhandler java at net minecraft server iasynctaskhandlerreentrant executetask sourcefile at net minecraft server iasynctaskhandler executenext iasynctaskhandler java at net minecraft server minecraftserver ba minecraftserver java at net minecraft server minecraftserver executenext minecraftserver java at net minecraft server iasynctaskhandler awaittasks iasynctaskhandler java at net minecraft server minecraftserver a minecraftserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run thread java could not pass event inventorycloseevent to bagofgold java lang illegalargumentexception invalid uuid string at java util uuid fromstring uuid java at one lindegaard bagofgold rewards reward setreward reward java at one lindegaard bagofgold rewards reward reward java at one lindegaard bagofgold rewards reward getreward reward java at one lindegaard bagofgold rewards bagofgolditems getamountofbagofgoldmoneyininventory bagofgolditems java at one lindegaard bagofgold rewards rewardmanager getamountininventory rewardmanager java at one lindegaard 
bagofgold rewards rewardmanager adjustplayerbalancetoamounofmoneyininventory rewardmanager java at one lindegaard bagofgold rewards rewardlisteners oninventorycloseevent rewardlisteners java at com destroystokyo paper event executor asm generated execute unknown source at org bukkit plugin eventexecutor lambda create eventexecutor java at co aikar timings timedeventexecutor execute timedeventexecutor java at org bukkit plugin registeredlistener callevent registeredlistener java at org bukkit plugin simplepluginmanager callevent simplepluginmanager java at org bukkit craftbukkit event crafteventfactory handleinventorycloseevent crafteventfactory java at net minecraft server playerlist disconnect playerlist java at net minecraft server playerconnection a playerconnection java at net minecraft server networkmanager handledisconnection networkmanager java at net minecraft server serverconnection c serverconnection java at net minecraft server minecraftserver b minecraftserver java at net minecraft server dedicatedserver b dedicatedserver java at net minecraft server minecraftserver a minecraftserver java at net minecraft server minecraftserver run minecraftserver java at java lang thread run thread java
| 0
|
121,745
| 17,662,359,745
|
IssuesEvent
|
2021-08-21 19:28:54
|
ghc-dev/Joshua-Braun
|
https://api.github.com/repos/ghc-dev/Joshua-Braun
|
opened
|
CVE-2020-1753 (Medium) detected in ansible-2.9.9.tar.gz
|
security vulnerability
|
## CVE-2020-1753 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Joshua-Braun/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Joshua-Braun/commit/c33920b35d4c7d67985bc23f375407c17c23a549">c33920b35d4c7d67985bc23f375407c17c23a549</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This will disclose passwords and tokens from process list and no_log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753>CVE-2020-1753</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security.gentoo.org/glsa/202006-11">https://security.gentoo.org/glsa/202006-11</a></p>
<p>Fix Resolution: All Ansible users should upgrade to the latest version # emerge --sync
# emerge --ask --oneshot --verbose >=app-admin/ansible-2.9.7 >= </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-1753","vulnerabilityDetails":"A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This will disclose passwords and tokens from process list and no_log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-1753 (Medium) detected in ansible-2.9.9.tar.gz - ## CVE-2020-1753 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Joshua-Braun/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Joshua-Braun/commit/c33920b35d4c7d67985bc23f375407c17c23a549">c33920b35d4c7d67985bc23f375407c17c23a549</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This will disclose passwords and tokens from process list and no_log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753>CVE-2020-1753</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security.gentoo.org/glsa/202006-11">https://security.gentoo.org/glsa/202006-11</a></p>
<p>Fix Resolution: All Ansible users should upgrade to the latest version # emerge --sync
# emerge --ask --oneshot --verbose >=app-admin/ansible-2.9.7 >= </p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-1753","vulnerabilityDetails":"A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This will disclose passwords and tokens from process list and no_log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in ansible tar gz cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file joshua braun requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a security flaw was found in ansible engine all ansible x versions prior to all ansible x versions prior to and all ansible x versions prior to when managing kubernetes using the module sensitive parameters such as passwords and tokens are passed to kubectl from the command line not using an environment variable or an input configuration file this will disclose passwords and tokens from process list and no log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href fix resolution all ansible users should upgrade to the latest version emerge sync emerge ask oneshot verbose app admin ansible isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails a security flaw was found in ansible engine all ansible x versions prior to all ansible x versions prior to and all ansible x versions prior to when managing kubernetes using the module sensitive parameters such as passwords and tokens are passed to kubectl from the command line not using an environment variable or an input configuration file 
this will disclose passwords and tokens from process list and no log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files vulnerabilityurl
| 0
|
11,556
| 14,436,734,829
|
IssuesEvent
|
2020-12-07 10:31:59
|
MelissaMorales13/4a_bien
|
https://api.github.com/repos/MelissaMorales13/4a_bien
|
closed
|
fill_size_estimating_template
|
process dashboard
|
-filling in the lines-of-code estimating template in Process Dashboard
-running the PROBE wizard
|
1.0
|
fill_size_estimating_template - -filling in the lines-of-code estimating template in Process Dashboard
-running the PROBE wizard
|
process
|
fill size estimating template filling in the lines of code estimating template in process dashboard running the probe wizard
| 1
|
27,625
| 6,889,586,649
|
IssuesEvent
|
2017-11-22 10:53:15
|
openshiftio/openshift.io
|
https://api.github.com/repos/openshiftio/openshift.io
|
opened
|
Extend API for interaction with Che workspaces
|
area/che area/codebases
|
There is need to expose Che workspace API (https://github.com/fabric8-services/fabric8-wit/blob/master/codebase/che/client.go) with the following items:
- [ ] ability to stop workspace
- [ ] ability to create workspace pointing the repository branch ? (maybe here changes for codebases are needed as well to be identified by source location + branch)
- [ ] ability to handle workspace statuses
|
1.0
|
Extend API for interaction with Che workspaces - There is need to expose Che workspace API (https://github.com/fabric8-services/fabric8-wit/blob/master/codebase/che/client.go) with the following items:
- [ ] ability to stop workspace
- [ ] ability to create workspace pointing the repository branch ? (maybe here changes for codebases are needed as well to be identified by source location + branch)
- [ ] ability to handle workspace statuses
|
non_process
|
extend api for interaction with che workspaces there is need to expose che workspace api with the following items ability to stop workspace ability to create workspace pointing the repository branch maybe here changes for codebases are needed as well to be identified by source location branch ability to handle workspace statuses
| 0
|
155,845
| 5,961,827,765
|
IssuesEvent
|
2017-05-29 19:14:29
|
dmusican/Elegit
|
https://api.github.com/repos/dmusican/Elegit
|
opened
|
Move to git flow workflow
|
enhancement priority high
|
I was wrong; @grahamearley and @kileymaki were right. We should be using git flow as a git management model. I'm tangled up in knots on the current release.
Move the repo to a git flow strategy.
|
1.0
|
Move to git flow workflow - I was wrong; @grahamearley and @kileymaki were right. We should be using git flow as a git management model. I'm tangled up in knots on the current release.
Move the repo to a git flow strategy.
|
non_process
|
move to git flow workflow i was wrong grahamearley and kileymaki were right we should be using git flow as a git management model i m tangled up in knots on the current release move the repo to a git flow strategy
| 0
|
72,301
| 15,225,240,561
|
IssuesEvent
|
2021-02-18 06:57:47
|
devikab2b/whites5
|
https://api.github.com/repos/devikab2b/whites5
|
opened
|
CVE-2021-20190 (High) detected in jackson-databind-2.6.7.3.jar
|
security vulnerability
|
## CVE-2021-20190 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: whites5/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.3/jackson-databind-2.6.7.3.jar</p>
<p>
Dependency Hierarchy:
- spark-sql_2.12-2.4.7.jar (Root Library)
- :x: **jackson-databind-2.6.7.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190>CVE-2021-20190</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2854">https://github.com/FasterXML/jackson-databind/issues/2854</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind-2.9.10.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-20190 (High) detected in jackson-databind-2.6.7.3.jar - ## CVE-2021-20190 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.7.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: whites5/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.3/jackson-databind-2.6.7.3.jar</p>
<p>
Dependency Hierarchy:
- spark-sql_2.12-2.4.7.jar (Root Library)
- :x: **jackson-databind-2.6.7.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190>CVE-2021-20190</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2854">https://github.com/FasterXML/jackson-databind/issues/2854</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind-2.9.10.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spark sql jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details a flaw was found in jackson databind before fasterxml mishandles the interaction between serialization gadgets and typing the highest threat from this vulnerability is to data confidentiality and integrity as well as system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
10,552
| 13,338,994,685
|
IssuesEvent
|
2020-08-28 12:06:34
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
gdal2tiles fails when the input raster is very small (7x8)
|
Bug Feedback Processing
|
I want to export tiles from a very small raster (7x8) and the tiles are all transparent.
I found out that if I translate the raster to increase its size (say give it a width of 100), then the output is as expected.
Here's my code (I'm using gdal2tiles python bindings but I've also reproduced this issue with multiple versions of gdal2tiles.py)
`gdal2tiles.generate_tiles(in_raster_path, out_dir, zoom='12-24')`
I'm attaching the raster file
[bug_gdal2tiles.zip](https://github.com/qgis/QGIS/files/5136771/bug_gdal2tiles.zip)
|
1.0
|
gdal2tiles fails when the input raster is very small (7x8) - I want to export tiles from a very small raster (7x8) and the tiles are all transparent.
I found out that if I translate the raster to increase its size (say give it a width of 100), then the output is as expected.
Here's my code (I'm using gdal2tiles python bindings but I've also reproduced this issue with multiple versions of gdal2tiles.py)
`gdal2tiles.generate_tiles(in_raster_path, out_dir, zoom='12-24')`
I'm attaching the raster file
[bug_gdal2tiles.zip](https://github.com/qgis/QGIS/files/5136771/bug_gdal2tiles.zip)
|
process
|
fails when the input raster is very small i want to export tiles from a very small raster and the tiles are all transparent i found out that if i translate the raster to increase its size say give it a width of then the output is as expected here s my code i m using python bindings but i ve also reproduced this issue with multiple versions of py generate tiles in raster path out dir zoom i m attaching the raster file
| 1
|
13,783
| 16,541,559,008
|
IssuesEvent
|
2021-05-27 17:28:42
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS crashes when editing Processing script
|
Bug Crash/Data Corruption Processing
|
Found this on master while testing this other ticket
https://github.com/qgis/QGIS/issues/41719
basically a Processing script can be edited without issues, but if the editor dialog is open and the script run, then a subsequent edit causes an instant crash.
|
1.0
|
QGIS crashes when editing Processing script - Found this on master while testing this other ticket
https://github.com/qgis/QGIS/issues/41719
basically a Processing script can be edited without issues, but if the editor dialog is open and the script run, then a subsequent edit causes an instant crash.
|
process
|
qgis crashes when editing processing script found this on master while testing this other ticket basically a processing script can be edited without issues but if the editor dialog is open and the script run then a subsequent edit causes an instant crash
| 1
|
17,807
| 23,730,052,186
|
IssuesEvent
|
2022-08-31 00:20:44
|
FTBTeam/FTB-App
|
https://api.github.com/repos/FTBTeam/FTB-App
|
closed
|
ftb app stoped launching ftb university over night
|
subprocess bug:unconfirmed awaiting response possibly minecraft status: stale
|
### What Operating System
Windows 11
### App Version
202112101445-bdf8bdbaca-release
### UI Version
bdf8bdbaca
### Log Files
https://pste.ch/qiqigozeva
### Debug Code
FTB-DBGXAHAYIRIJA
### Describe the bug
app was working perfectly last night but today its not i have changed nothing at all
### Steps to reproduce
1.open ftb app
2.press play
3.nothing happens
### Expected behaviour
minecraft launcher opens
### Screenshots
https://www.reddit.com/r/feedthebeast/comments/suxdl3/neither_modpack_is_working_do_you_have_any_idea/
### Additional information
_No response_
|
1.0
|
ftb app stoped launching ftb university over night - ### What Operating System
Windows 11
### App Version
202112101445-bdf8bdbaca-release
### UI Version
bdf8bdbaca
### Log Files
https://pste.ch/qiqigozeva
### Debug Code
FTB-DBGXAHAYIRIJA
### Describe the bug
app was working perfectly last night but today its not i have changed nothing at all
### Steps to reproduce
1.open ftb app
2.press play
3.nothing happens
### Expected behaviour
minecraft launcher opens
### Screenshots
https://www.reddit.com/r/feedthebeast/comments/suxdl3/neither_modpack_is_working_do_you_have_any_idea/
### Additional information
_No response_
|
process
|
ftb app stoped launching ftb university over night what operating system windows app version release ui version log files debug code ftb dbgxahayirija describe the bug app was working perfectly last night but today its not i have changed nothing at all steps to reproduce open ftb app press play nothing happens expected behaviour minecraft launcher opens screenshots additional information no response
| 1
|
22,428
| 31,160,039,251
|
IssuesEvent
|
2023-08-16 15:23:27
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/k8sattribute] e2e test fails to setup pod in required time limit
|
bug processor/k8sattributes flaky test
|
### Component(s)
processor/k8sattributes
### What happened?
## Description
Logs from test failure:
```
=== RUN TestE2E
k8s_telemetrygen.go:64:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/internal/k8stest/k8s_telemetrygen.go:64
/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:75
Error: Condition never satisfied
Test: TestE2E
Messages: telemetrygen pod of Workload [Job] in datatype [logs] haven't started within 3 minutes, latest pod phase is
W0710 [14](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:15):50:11.23[17](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:18)66 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0710 14:50:11.427792 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0710 14:50:11.628093 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
--- FAIL: TestE2E ([19](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:20)7.01s)
```
[Test failure link](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065)
This occurred on an unrelated [PR](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24065), so I assume it's a flaky test and just wasn't able to set up the environment within the time limit. I have not investigated further to confirm though.
### Collector version
v0.81.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
[processor/k8sattribute] e2e test fails to setup pod in required time limit - ### Component(s)
processor/k8sattributes
### What happened?
## Description
Logs from test failure:
```
=== RUN TestE2E
k8s_telemetrygen.go:64:
Error Trace: /home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/internal/k8stest/k8s_telemetrygen.go:64
/home/runner/work/opentelemetry-collector-contrib/opentelemetry-collector-contrib/processor/k8sattributesprocessor/e2e_test.go:75
Error: Condition never satisfied
Test: TestE2E
Messages: telemetrygen pod of Workload [Job] in datatype [logs] haven't started within 3 minutes, latest pod phase is
W0710 [14](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:15):50:11.23[17](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:18)66 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0710 14:50:11.427792 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
W0710 14:50:11.628093 17282 warnings.go:70] child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
--- FAIL: TestE2E ([19](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065#step:11:20)7.01s)
```
[Test failure link](https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/5509366365/jobs/10042200389?pr=24065)
This occurred on an unrelated [PR](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/24065), so I assume it's a flaky test and just wasn't able to set up the environment within the time limit. I have not investigated further to confirm though.
### Collector version
v0.81.0
### Environment information
## Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
### OpenTelemetry Collector configuration
_No response_
### Log output
_No response_
### Additional context
_No response_
|
process
|
test fails to setup pod in required time limit component s processor what happened description logs from test failure run telemetrygen go error trace home runner work opentelemetry collector contrib opentelemetry collector contrib internal telemetrygen go home runner work opentelemetry collector contrib opentelemetry collector contrib processor test go error condition never satisfied test messages telemetrygen pod of workload in datatype haven t started within minutes latest pod phase is warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning warnings go child pods are preserved by default when jobs are deleted set propagationpolicy background to remove them or set propagationpolicy orphan to suppress this warning fail this occurred on an unrelated so i assume it s a flaky test and just wasn t able to set up the environment within the time limit i have not investigated further to confirm though collector version environment information environment os e g ubuntu compiler if manually compiled e g go opentelemetry collector configuration no response log output no response additional context no response
| 1
|
15,166
| 18,923,734,521
|
IssuesEvent
|
2021-11-17 06:55:05
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
reopened
|
storage: error in sample test
|
api: storage type: process samples
|
There are [sample test failures](https://source.cloud.google.com/results/invocations/6f7d2e57-41b3-46eb-b42b-230875c38009/log) in google-cloud-storage. This was noticed in PR [16028](https://github.com/googleapis/google-cloud-ruby/pull/16028)
|
1.0
|
storage: error in sample test - There are [sample test failures](https://source.cloud.google.com/results/invocations/6f7d2e57-41b3-46eb-b42b-230875c38009/log) in google-cloud-storage. This was noticed in PR [16028](https://github.com/googleapis/google-cloud-ruby/pull/16028)
|
process
|
storage error in sample test there are in google cloud storage this was noticed in pr
| 1
|
241,183
| 7,809,470,141
|
IssuesEvent
|
2018-06-12 00:47:40
|
debtcollective/parent
|
https://api.github.com/repos/debtcollective/parent
|
closed
|
Yes/No option does not show required fields when "Yes" is selected and the "save and close" button is used to save the form
|
bug disputes priority: medium
|
## Steps to reproduce
1) Begin a new Wage Garnishment dispute with option C
2) Open the information form
3) Answer "yes" to the "Are you a FFEL Loan Holder" question
4) Click "save and close" in the upper right hand corner (do not use the button at the bottom to complete the form)
5) Refresh the page
6) Open the form again
## Expected results
The sub-fields to the FFEL Loan Holder yes/no question should display
## Actual results
The sub-fields do not display and when you fill out the rest of the available form fields and submit the form using the complete form button at the bottom, nothing happens. In the console, there are errors about the sub-fields not being focus-able.
Present in current tools and in old tools. Tested on `staging.debtcollective.org` on April 15, 2018.
|
1.0
|
Yes/No option does not show required fields when "Yes" is selected and the "save and close" button is used to save the form - ## Steps to reproduce
1) Begin a new Wage Garnishment dispute with option C
2) Open the information form
3) Answer "yes" to the "Are you a FFEL Loan Holder" question
4) Click "save and close" in the upper right hand corner (do not use the button at the bottom to complete the form)
5) Refresh the page
6) Open the form again
## Expected results
The sub-fields to the FFEL Loan Holder yes/no question should display
## Actual results
The sub-fields do not display and when you fill out the rest of the available form fields and submit the form using the complete form button at the bottom, nothing happens. In the console, there are errors about the sub-fields not being focus-able.
Present in current tools and in old tools. Tested on `staging.debtcollective.org` on April 15, 2018.
|
non_process
|
yes no option does not show required fields when yes is selected and the save and close button is used to save the form steps to reproduce begin a new wage garnishment dispute with option c open the information form answer yes to the are you a ffel loan holder question click save and close in the upper right hand corner do not use the button at the bottom to complete the form refresh the page open the form again expected results the sub fields to the ffel loan holder yes no question should display actual results the sub fields do not display and when you fill out the rest of the available form fields and submit the form using the complete form button at the bottom nothing happens in the console there are errors about the sub fields not being focus able present in current tools and in old tools tested on staging debtcollective org on april
| 0
|
500,735
| 14,512,996,339
|
IssuesEvent
|
2020-12-13 03:24:20
|
MelbourneDeveloper/Device.Net
|
https://api.github.com/repos/MelbourneDeveloper/Device.Net
|
closed
|
USB Control transactions
|
High Priority
|
I am trying to create a .netcoreapp (2.1) to update firmware on ST devices i.e. http://eliasoenal.com/st%20website/17068.pdf. All of the examples I have seen use control transactions e.g.
```
UsbSetupPacket setup = new UsbSetupPacket()
{
RequestType = (byte)(DFU_RequestType | USB_DIR_IN),
Request = (byte)DFU_UPLOAD,
Value = (short)blockNum,
Index = 0
};
usb.ControlTransfer(ref setup, data, length, out int len);
```
Is this possible with this lib? If so, are there any examples?
Thanks.
|
1.0
|
USB Control transactions - I am trying to create a .netcoreapp (2.1) to update firmware on ST devices i.e. http://eliasoenal.com/st%20website/17068.pdf. All of the examples I have seen use control transactions e.g.
```
UsbSetupPacket setup = new UsbSetupPacket()
{
RequestType = (byte)(DFU_RequestType | USB_DIR_IN),
Request = (byte)DFU_UPLOAD,
Value = (short)blockNum,
Index = 0
};
usb.ControlTransfer(ref setup, data, length, out int len);
```
Is this possible with this lib? If so, are there any examples?
Thanks.
|
non_process
|
usb control transactions i am trying to create a netcoreapp to update firmware on st devices i e all of the examples i have seen use control transactions e g usbsetuppacket setup new usbsetuppacket requesttype byte dfu requesttype usb dir in request byte dfu upload value short blocknum index usb controltransfer ref setup data length out int len is this possible with this lib if so are there any examples thanks
| 0
|
9,247
| 12,281,226,456
|
IssuesEvent
|
2020-05-08 15:28:38
|
pelias/whosonfirst
|
https://api.github.com/repos/pelias/whosonfirst
|
closed
|
Allow user to whitelist postalcode countries
|
processed
|
The whosonfirst importer currently forces a user looking to import postalcodes to download and import postal codes from literally every single one of the 213 countries. That seems a bit excessive if, say, they're only interesting in importing Germany postal codes. Perhaps the importer can look for an array in pelias config of whitelisted country ISO2's to import. My personal use case is that I'm just trying to run a test import and I'm hacking the code a bit to just run a single country.
|
1.0
|
Allow user to whitelist postalcode countries - The whosonfirst importer currently forces a user looking to import postalcodes to download and import postal codes from literally every single one of the 213 countries. That seems a bit excessive if, say, they're only interesting in importing Germany postal codes. Perhaps the importer can look for an array in pelias config of whitelisted country ISO2's to import. My personal use case is that I'm just trying to run a test import and I'm hacking the code a bit to just run a single country.
|
process
|
allow user to whitelist postalcode countries the whosonfirst importer currently forces a user looking to import postalcodes to download and import postal codes from literally every single one of the countries that seems a bit excessive if say they re only interesting in importing germany postal codes perhaps the importer can look for an array in pelias config of whitelisted country s to import my personal use case is that i m just trying to run a test import and i m hacking the code a bit to just run a single country
| 1
|
71,896
| 23,844,827,892
|
IssuesEvent
|
2022-09-06 13:18:20
|
martinrotter/rssguard
|
https://api.github.com/repos/martinrotter/rssguard
|
opened
|
[BUG]: Appimage does not work on Ubuntu Jammy
|
Type-Defect
|
### Brief description of the issue
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### How to reproduce the bug?
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### What was the expected result?
?
### What actually happened?
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### Debug log
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### Operating system and version
* OS: Ubuntu Jammy
* RSS Guard version: 4.2.4-1f6d7c0b-linux64.AppImage
|
1.0
|
[BUG]: Appimage does not work on Ubuntu Jammy - ### Brief description of the issue
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### How to reproduce the bug?
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### What was the expected result?
?
### What actually happened?
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### Debug log
./rssguard-4.2.4-1f6d7c0b-linux64.AppImage
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6Core.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libQt6WebEngineCore.so.6)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libglib-2.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstaudio-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstvideo-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgstreamer-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libnspr4.so)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libgsttag-1.0.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libsystemd.so.0)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
/tmp/.mount_rssgualweSWh/AppRun.wrapped: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.30' not found (required by /tmp/.mount_rssgualweSWh/usr/bin/../lib/libudev.so.1)
### Operating system and version
* OS: Ubuntu Jammy
* RSS Guard version: 4.2.4-1f6d7c0b-linux64.AppImage
|
non_process
|
appimage does not work on ubuntu jammy brief description of the issue rssguard appimage tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libglib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstaudio so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstvideo so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstreamer so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgsttag so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so how to reproduce the bug rssguard appimage tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libglib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstaudio so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstvideo so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstreamer so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgsttag so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so what was the expected result what actually happened rssguard appimage tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libglib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstaudio so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstvideo so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstreamer so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgsttag so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so debug log rssguard appimage tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libglib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstaudio so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstvideo so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgstreamer so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib so tmp mount rssgualweswh apprun wrapped lib linux gnu libm so version glibc not found required by tmp mount rssgualweswh usr bin lib libgsttag so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libsystemd so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so tmp mount rssgualweswh apprun wrapped lib linux gnu libc so version glibc not found required by tmp mount rssgualweswh usr bin lib libudev so operating system and version os ubuntu jammy rss guard version appimage
| 0
|
4,803
| 7,696,902,742
|
IssuesEvent
|
2018-05-18 16:49:18
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
opened
|
automating Image Gallery component for AAC foster process page
|
Feature: Foster Application Feature: In-page Images Feature: Process Size: S Team: Dev
|
@cthibaultatx is designing the Image Gallery component for the AAC foster Process page.
There may be an opportunity for us to automate the updating of images in this gallery. Before our meeting with Erin and Lorien from AAC on 5/29, let's see what we can learn about the data sourcing options to see if this is even feasible.
Depending on the complexity, this might be an opportunity for an ATX Hack for Change project.
|
1.0
|
automating Image Gallery component for AAC foster process page - @cthibaultatx is designing the Image Gallery component for the AAC foster Process page.
There may be an opportunity for us to automate the updating of images in this gallery. Before our meeting with Erin and Lorien from AAC on 5/29, let's see what we can learn about the data sourcing options to see if this is even feasible.
Depending on the complexity, this might be an opportunity for an ATX Hack for Change project.
|
process
|
automating image gallery component for aac foster process page cthibaultatx is designing the image gallery component for the aac foster process page there may be an opportunity for us to automate the updating of images in this gallery before our meeting with erin and lorien from aac on let s see what we can learn about the data sourcing options to see if this is even feasible depending on the complexity this might be an opportunity for an atx hack for change project
| 1
|
184,352
| 14,288,941,529
|
IssuesEvent
|
2020-11-23 18:27:38
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Test case: rsend_024_pos
|
Component: Test Suite Status: Stale
|
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | all
Distribution Version | all
Linux Kernel | all
Architecture | all
ZFS Version | 0.7.0-rc3
SPL Version | 0.7.0-rc3
### Describe the problem you're observing
Rarely observed failure of rsend_024_pos during automated testing.
### Describe how to reproduce the problem
Reproducible by the buildbot.
### Include any warning/errors/backtraces from the system logs
http://build.zfsonlinux.org/builders/Ubuntu%2014.04%20i686%20%28TEST%29/builds/686
```
Test: /usr/share/zfs/zfs-tests/tests/functional/rsend/rsend_024_pos (run as root) [00:00] [FAIL]
04:31:45.34 ASSERTION: Verify resumability of a full ZFS send/receive with the source filesystem unmounted
04:31:45.37 ERROR: /sbin/zfs destroy -r testpool1.27205/recvfs exited 1
04:31:45.37 cannot destroy 'testpool1.27205/recvfs': dataset is busy
04:31:45.37 NOTE: Performing local cleanup via log_onexit (cleanup_pool testpool1.27205)
04:31:45.38 SUCCESS: rm -f -rf /var/tmp/backdir-rsend/*
04:31:45.40 ERROR: /sbin/zfs destroy -Rf testpool1.27205 exited 1
04:31:45.40 cannot destroy 'testpool1.27205/recvfs': dataset is busy
```
|
1.0
|
Test case: rsend_024_pos - ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | all
Distribution Version | all
Linux Kernel | all
Architecture | all
ZFS Version | 0.7.0-rc3
SPL Version | 0.7.0-rc3
### Describe the problem you're observing
Rarely observed failure of rsend_024_pos during automated testing.
### Describe how to reproduce the problem
Reproducible by the buildbot.
### Include any warning/errors/backtraces from the system logs
http://build.zfsonlinux.org/builders/Ubuntu%2014.04%20i686%20%28TEST%29/builds/686
```
Test: /usr/share/zfs/zfs-tests/tests/functional/rsend/rsend_024_pos (run as root) [00:00] [FAIL]
04:31:45.34 ASSERTION: Verify resumability of a full ZFS send/receive with the source filesystem unmounted
04:31:45.37 ERROR: /sbin/zfs destroy -r testpool1.27205/recvfs exited 1
04:31:45.37 cannot destroy 'testpool1.27205/recvfs': dataset is busy
04:31:45.37 NOTE: Performing local cleanup via log_onexit (cleanup_pool testpool1.27205)
04:31:45.38 SUCCESS: rm -f -rf /var/tmp/backdir-rsend/*
04:31:45.40 ERROR: /sbin/zfs destroy -Rf testpool1.27205 exited 1
04:31:45.40 cannot destroy 'testpool1.27205/recvfs': dataset is busy
```
|
non_process
|
test case rsend pos system information type version name distribution name all distribution version all linux kernel all architecture all zfs version spl version describe the problem you re observing rarely observed failure of rsend pos during automated testing describe how to reproduce the problem reproducible by the buildbot include any warning errors backtraces from the system logs test usr share zfs zfs tests tests functional rsend rsend pos run as root assertion verify resumability of a full zfs send receive with the source filesystem unmounted error sbin zfs destroy r recvfs exited cannot destroy recvfs dataset is busy note performing local cleanup via log onexit cleanup pool success rm f rf var tmp backdir rsend error sbin zfs destroy rf exited cannot destroy recvfs dataset is busy
| 0
|
737,768
| 25,530,825,306
|
IssuesEvent
|
2022-11-29 08:16:23
|
pystardust/ani-cli
|
https://api.github.com/repos/pystardust/ani-cli
|
closed
|
Add feature to be able to use 9anime as provider
|
type: feature request priority 4: wishlist
|
**Is your feature request related to a problem? Please describe.**
I believe you should be able to switch to 9anime cuz why not
**Describe the solution you'd like**
add 9anime as stream link
**Describe alternatives you've considered**
- Ignoring this and using other sites already supported
**Additional context**
idk I just like it better but no one likes ads 🤷🏿♂️
|
1.0
|
Add feature to be able to use 9anime as provider - **Is your feature request related to a problem? Please describe.**
I believe you should be able to switch to 9anime cuz why not
**Describe the solution you'd like**
add 9anime as stream link
**Describe alternatives you've considered**
- Ignoring this and using other sites already supported
**Additional context**
idk I just like it better but no one likes ads 🤷🏿♂️
|
non_process
|
add feature to be able to use as provider is your feature request related to a problem please describe i believe you should be able to switch to cuz why not describe the solution you d like add as stream link describe alternatives you ve considered ignoring this and using other sites already supported additional context idk i just like it better but no one likes ads 🤷🏿♂️
| 0
|
155,827
| 24,526,554,553
|
IssuesEvent
|
2022-10-11 13:32:36
|
hypothesis/frontend-shared
|
https://api.github.com/repos/hypothesis/frontend-shared
|
opened
|
Make it possible to "turn sizing off" for `Button`, perhaps other sized components
|
component design pattern
|
Padding and spacing within buttons can be adjusted via the `size` prop, which applies some canned padding and spacing for `sm`, `md`, `lg` values, e.g. However, we have an awful lot of legacy buttons with custom or mis-matched padding and spacing.
To match current custom button styling using `Button`, it's currently necessary to either:
* Pass `!important` `classes` (e.g. `classes="!px-1.5 !py-2"`) or
* Use `ButtonBase` instead (which won't apply sizing, but also won't apply colors, etc., so you have to re-invent the wheel a bit)
One option might be to introduce an `unsized` prop similar to `ButtonBase`'s `unstyled` prop. When set, padding and spacing would not be applied. The user of `Button` could then set non-important size-related classes via `classes`.
|
1.0
|
Make it possible to "turn sizing off" for `Button`, perhaps other sized components - Padding and spacing within buttons can be adjusted via the `size` prop, which applies some canned padding and spacing for `sm`, `md`, `lg` values, e.g. However, we have an awful lot of legacy buttons with custom or mis-matched padding and spacing.
To match current custom button styling using `Button`, it's currently necessary to either:
* Pass `!important` `classes` (e.g. `classes="!px-1.5 !py-2"`) or
* Use `ButtonBase` instead (which won't apply sizing, but also won't apply colors, etc., so you have to re-invent the wheel a bit)
One option might be to introduce an `unsized` prop similar to `ButtonBase`'s `unstyled` prop. When set, padding and spacing would not be applied. The user of `Button` could then set non-important size-related classes via `classes`.
|
non_process
|
make it possible to turn sizing off for button perhaps other sized components padding and spacing within buttons can be adjusted via the size prop which applies some canned padding and spacing for sm md lg values e g however we have an awful lot of legacy buttons with custom or mis matched padding and spacing to match current custom button styling using button it s currently necessary to either pass important classes e g classes px py or use buttonbase instead which won t apply sizing but also won t apply colors etc so you have to re invent the wheel a bit one option might be to introduce an unsized prop similar to buttonbase s unstyled prop when set padding and spacing would not be applied the user of button could then set non important size related classes via classes
| 0
|
13,177
| 15,604,555,610
|
IssuesEvent
|
2021-03-19 04:11:35
|
klarEDA/klar-EDA
|
https://api.github.com/repos/klarEDA/klar-EDA
|
closed
|
Implement a format identifier method for date in csv data preprocessor
|
data-preprocessing enhancement gssoc21
|
**Description**
> The method should be able to identify and convert the date into a specific static format.
> The functionality of the method can be described as below -
> 1. Take in any type of date as input (for example - 2021-11-13)
> 2. Identify the format (for example - YYYY-MM-DD)
> 3. Convert the date into any desired format (for example - DD/MM/YYYY)
**Assumptions**
> The following assumptions can be made during the implementation
> 1. No time is present in the given input date.
> 2. The input will be only a string.
> 3. A list of input patterns can be assumed. (For example - you can assume the input will be in either of any known formats mentioned).
`input_date_format = [ 'DD/MM/YYYY', 'YYYY/DD/MM', 'MM/DD/YYYY', 'YYYY/MM/DD', 'DD-MM-YYYY', 'YYYY-DD-MM', 'MM-DD-YYYY', 'YYYY-MM-DD' ]`
**Input**
> A string in any of the formats mentioned above (The contributor is free to add any other formats)
> An expected output format the input date should be converted to
**Output**
> A date in string converted into the desired format.
**Note**
The use of standard python libraries is highly recommended.
|
1.0
|
Implement a format identifier method for date in csv data preprocessor - **Description**
> The method should be able to identify and convert the date into a specific static format.
> The functionality of the method can be described as below -
> 1. Take in any type of date as input (for example - 2021-11-13)
> 2. Identify the format (for example - YYYY-MM-DD)
> 3. Convert the date into any desired format (for example - DD/MM/YYYY)
**Assumptions**
> The following assumptions can be made during the implementation
> 1. No time is present in the given input date.
> 2. The input will be only a string.
> 3. A list of input patterns can be assumed. (For example - you can assume the input will be in either of any known formats mentioned).
`input_date_format = [ 'DD/MM/YYYY', 'YYYY/DD/MM', 'MM/DD/YYYY', 'YYYY/MM/DD', 'DD-MM-YYYY', 'YYYY-DD-MM', 'MM-DD-YYYY', 'YYYY-MM-DD' ]`
**Input**
> A string in any of the formats mentioned above (The contributor is free to add any other formats)
> An expected output format the input date should be converted to
**Output**
> A date in string converted into the desired format.
**Note**
The use of standard python libraries is highly recommended.
|
process
|
implement a format identifier method for date in csv data preprocessor description the method should be able to identify and convert the date into a specific static format the functionality of the method can be described as below take in any type of date as input for example identify the format for example yyyy mm dd convert the date into any desired format for example dd mm yyyy assumptions the following assumptions can be made during the implementation no time is present in the given input date the input will be only a string a list of input patterns can be assumed for example you can assume the input will be in either of any known formats mentioned input date format input a string in any of the formats mentioned above the contributor is free to add any other formats an expected output format the input date should be converted to output a date in string converted into the desired format note the use of standard python libraries is highly recommended
| 1
|
28,720
| 5,533,696,321
|
IssuesEvent
|
2017-03-21 13:57:03
|
brenns10/slacksoc
|
https://api.github.com/repos/brenns10/slacksoc
|
closed
|
Concurrency considerations
|
documentation
|
I don't have strong documentation yet (well, not even a stable plugin interface). But this is important to document.
If you're creating a plugin which will do some long term work (web request, sleep, etc), you need to start your own goroutine. This means you'll need to synchronize access to your plugin struct and the Bot struct. Thankfully, the RTM API is synchronized with channels, and I *think* that the main API is also synchronized.
The Bot struct will need a lock for reading its maps.
|
1.0
|
Concurrency considerations - I don't have strong documentation yet (well, not even a stable plugin interface). But this is important to document.
If you're creating a plugin which will do some long term work (web request, sleep, etc), you need to start your own goroutine. This means you'll need to synchronize access to your plugin struct and the Bot struct. Thankfully, the RTM API is synchronized with channels, and I *think* that the main API is also synchronized.
The Bot struct will need a lock for reading its maps.
|
non_process
|
concurrency considerations i don t have strong documentation yet well not even a stable plugin interface but this is important to document if you re creating a plugin which will do some long term work web request sleep etc you need to start your own goroutine this means you ll need to synchronize access to your plugin struct and the bot struct thankfully the rtm api is synchronized with channels and i think that the main api is also synchronized the bot struct will need a lock for reading its maps
| 0
|
628,868
| 20,016,554,474
|
IssuesEvent
|
2022-02-01 12:40:53
|
ramp4-pcar4/story-ramp
|
https://api.github.com/repos/ramp4-pcar4/story-ramp
|
opened
|
Check into map extent alignment issues in different resolutions
|
StoryRAMP Viewer RAMP Needs: estimate Priority: High
|
Clients have raised that at different browser resolutions, they're getting default map extents that are blocked by the legend pane.
This task is to investigate the map extents at different browser sizes, and make necessary config adjustments so that the data is as fully displayed as possible and not blocked by interface elements.
|
1.0
|
Check into map extent alignment issues in different resolutions - Clients have raised that at different browser resolutions, they're getting default map extents that are blocked by the legend pane.
This task is to investigate the map extents at different browser sizes, and make necessary config adjustments so that the data is as fully displayed as possible and not blocked by interface elements.
|
non_process
|
check into map extent alignment issues in different resolutions clients have raised that at different browser resolutions they re getting default map extents that are blocked by the legend pane this task is to investigate the map extents at different browser sizes and make necessary config adjustments so that the data is as fully displayed as possible and not blocked by interface elements
| 0
|
20,941
| 27,802,019,187
|
IssuesEvent
|
2023-03-17 16:32:24
|
varabyte/kotter
|
https://api.github.com/repos/varabyte/kotter
|
closed
|
Add a test for Terminal#width
|
good first issue process
|
See issue #95 which happened because I was lax about overriding the width value for native terminals.
At this point, I'm fairly confident width logic works right because I have so much existing code using this feature. But it would be really nice, still, to have a test for it.
Step 1) Allow passing the width value into `TestTerminal`
Step 2) Write a few tests to stress / verify long lines render / repaint as expected.
|
1.0
|
Add a test for Terminal#width - See issue #95 which happened because I was lax about overriding the width value for native terminals.
At this point, I'm fairly confident width logic works right because I have so much existing code using this feature. But it would be really nice, still, to have a test for it.
Step 1) Allow passing the width value into `TestTerminal`
Step 2) Write a few tests to stress / verify long lines render / repaint as expected.
|
process
|
add a test for terminal width see issue which happened because i was lax about overriding the width value for native terminals at this point i m fairly confident width logic works right because i have so much existing code using this feature but it would be really nice still to have a test for it step allow passing the width value into testterminal step write a few tests to stress verify long lines render repaint as expected
| 1
|
11,129
| 13,957,688,532
|
IssuesEvent
|
2020-10-24 08:09:33
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
IE: potential harvesting issues with new IE CWS service
|
Geoportal Harvesting process IE - Ireland
|
From: Margaret Twynam Muldoon <Margaret.TwynamMuldoon@housing.gov.ie>
Sent: 11 June 2019 11:13
To: JRC INSPIRE SUPPORT
Cc: Inspire_IE
Subject: [Geoportal Helpdesk]
Dear JRC INSPIRE Support,
We are in the process of upgrading our Geoportal and made a switch on 29th March 2019 (new Geoportal CSW endpoint [https://inspire.geohive.ie/geoportal/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities])
This has passed the INSPIRE validators so we expect that you should be able to connect to it without any problems. Please be aware of the fact that the content (metadata) of the CSW is being updated since that date (29/03/19). We notified jrc-sdi-notify@ec.europa.eu on 29/03/19 explaining this.
However, we have since noticed there appears to be harvesting issues. On investigation of this issue we note the CSW URL we provided you with do not appear to match. The layers that are available through our service do not match those available through the INSPIRE Geoportal.
e.g. INSPIRE Water Framework Directive Groundwater Waterbodies layer is available at https://inspire.geohive.ie/ but doesn’t not appear at the INSPIRE GEOPORTAL http://inspire-geoportal.ec.europa.eu/results.html?country=ie&view=details&theme=none
Can you please let us know if you are experiencing any issues harvesting from our CWS service?
https://inspire.geohive.ie/geoportal/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities
Many thanks in advance. We look forward to hearing from you.
Regards,
Margaret
|
1.0
|
IE: potential harvesting issues with new IE CWS service - From: Margaret Twynam Muldoon <Margaret.TwynamMuldoon@housing.gov.ie>
Sent: 11 June 2019 11:13
To: JRC INSPIRE SUPPORT
Cc: Inspire_IE
Subject: [Geoportal Helpdesk]
Dear JRC INSPIRE Support,
We are in the process of upgrading our Geoportal and made a switch on 29th March 2019 (new Geoportal CSW endpoint [https://inspire.geohive.ie/geoportal/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities])
This has passed the INSPIRE validators so we expect that you should be able to connect to it without any problems. Please be aware of the fact that the content (metadata) of the CSW is being updated since that date (29/03/19). We notified jrc-sdi-notify@ec.europa.eu on 29/03/19 explaining this.
However, we have since noticed there appears to be harvesting issues. On investigation of this issue we note the CSW URL we provided you with do not appear to match. The layers that are available through our service do not match those available through the INSPIRE Geoportal.
e.g. INSPIRE Water Framework Directive Groundwater Waterbodies layer is available at https://inspire.geohive.ie/ but doesn’t not appear at the INSPIRE GEOPORTAL http://inspire-geoportal.ec.europa.eu/results.html?country=ie&view=details&theme=none
Can you please let us know if you are experiencing any issues harvesting from our CWS service?
https://inspire.geohive.ie/geoportal/csw?SERVICE=CSW&VERSION=2.0.2&REQUEST=GetCapabilities
Many thanks in advance. We look forward to hearing from you.
Regards,
Margaret
|
process
|
ie potential harvesting issues with new ie cws service from margaret twynam muldoon lt margaret twynammuldoon housing gov ie gt sent june to jrc inspire support cc inspire ie subject dear jrc inspire support we are in the process of upgrading our geoportal and made a switch on march new geoportal csw endpoint this has passed the inspire validators so we expect that you should be able to connect to it without any problems please be aware of the fact that the content metadata of the csw is being updated since that date we notified jrc sdi notify ec europa eu on explaining this however we have since noticed there appears to be harvesting issues on investigation of this issue we note the csw url we provided you with do not appear to match the layers that are available through our service do not match those available through the inspire geoportal e g inspire water framework directive groundwater waterbodies layer is available at but doesn rsquo t not appear at the inspire geoportal can you please let us know if you are experiencing any issues harvesting from our cws service many thanks in advance we look forward to hearing from you regards margaret
| 1
|
28,948
| 23,622,294,566
|
IssuesEvent
|
2022-08-24 22:00:18
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
unable to "dotnet restore" to use my own build of coreclr
|
question help wanted area-Infrastructure-coreclr
|
i am following https://github.com/dotnet/coreclr/blob/master/Documentation/workflow/UsingDotNetCli.md but can't success, please help
<img width="2560" alt="screen shot 2018-09-22 at 9 23 47 pm" src="https://user-images.githubusercontent.com/36571333/45917687-dd868380-bead-11e8-8f3c-0166768596cd.png">
|
1.0
|
unable to "dotnet restore" to use my own build of coreclr - i am following https://github.com/dotnet/coreclr/blob/master/Documentation/workflow/UsingDotNetCli.md but can't success, please help
<img width="2560" alt="screen shot 2018-09-22 at 9 23 47 pm" src="https://user-images.githubusercontent.com/36571333/45917687-dd868380-bead-11e8-8f3c-0166768596cd.png">
|
non_process
|
unable to dotnet restore to use my own build of coreclr i am following but can t success please help img width alt screen shot at pm src
| 0
|
21,387
| 29,202,231,238
|
IssuesEvent
|
2023-05-21 00:37:31
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remoto] Test Analyst na Coodesh
|
SALVADOR TESTE BANCO DE DADOS SQL REQUISITOS SELENIUM CUCUMBER REMOTO PROCESSOS GITHUB INGLÊS SEGURANÇA UMA QUALIDADE TESTES AUTOMATIZADOS METODOLOGIAS ÁGEIS NEGÓCIOS AUTOMAÇÃO DE TESTES TESTES MANUAIS Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-172621911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>O <strong>Grupo Fácil</strong> busca <strong><ins>Test Analyst</ins></strong> para compor seu time de desenvolvimento do NOVO PRODUTO!</p>
<p>Terá como missão planejar, desenvolver os testes automatizados e implantar a partir da análise funcional do NOVO PRODUTO, a fim de atestar a qualidade das aplicações.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Automação de Testes;</li>
<li>Conhecimento dos conceitos fundamentais de teste de software;</li>
<li>Planejamento e elaboração de testes (análise, estratégia e escopo dos testes);</li>
<li>Execução de testes (execução, consolidação, análise e comunicação dos resultados e progresso dos testes);</li>
<li>Consultas de banco de dados (Preferência SQL).</li>
</ul>
## Grupo Fácil:
<p>Ao longo de 27 anos de história, o Grupo Fácil se tornou referência nacional em sistemas, softwares e serviços para a gestão de negócios nas áreas financeira e de crédito, da saúde e no setor imobiliário.</p>
<p>O Grupo Fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes. </p><a href='https://coodesh.com/empresas/grupo-facil'>Veja mais no site</a>
## Habilidades:
- Selenium
- Cucumber
- SQL
## Local:
100% Remoto
## Requisitos:
- Conhecimento em ferramentas de automatização (Ex.: Selenium, Cucumber);
- Graduação em cursos de tecnologia;
- Sólida experiência com planejamento de testes e na sua execução;
- Experiência em testes manuais;
- Conhecimento em Testes automatizados;
- Experiência em metodologias ágeis.
## Diferenciais:
- Especialização ou certificação em processos de qualidade de software;
- Experiência também em desenvolvimento;
- Conhecimento avançado em inglês;
- Experiência em sistemas de operadoras de saúde.
## Benefícios:
- Assistência Médica e odontológica;
- Convênio com farmácia;
- Participação nos lucros;
- Vale refeição;
- Vale transporte;
- Parcerias e convênios;
- Programas de saúde e bem-estar.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Test Analyst na Grupo Fácil](https://coodesh.com/vagas/test-analyst-172621911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Categoria
Testes/Q.A
|
1.0
|
[Remoto] Test Analyst na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/test-analyst-172621911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>O <strong>Grupo Fácil</strong> busca <strong><ins>Test Analyst</ins></strong> para compor seu time de desenvolvimento do NOVO PRODUTO!</p>
<p>Terá como missão planejar, desenvolver os testes automatizados e implantar a partir da análise funcional do NOVO PRODUTO, a fim de atestar a qualidade das aplicações.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Automação de Testes;</li>
<li>Conhecimento dos conceitos fundamentais de teste de software;</li>
<li>Planejamento e elaboração de testes (análise, estratégia e escopo dos testes);</li>
<li>Execução de testes (execução, consolidação, análise e comunicação dos resultados e progresso dos testes);</li>
<li>Consultas de banco de dados (Preferência SQL).</li>
</ul>
## Grupo Fácil:
<p>Ao longo de 27 anos de história, o Grupo Fácil se tornou referência nacional em sistemas, softwares e serviços para a gestão de negócios nas áreas financeira e de crédito, da saúde e no setor imobiliário.</p>
<p>O Grupo Fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes. </p><a href='https://coodesh.com/empresas/grupo-facil'>Veja mais no site</a>
## Habilidades:
- Selenium
- Cucumber
- SQL
## Local:
100% Remoto
## Requisitos:
- Conhecimento em ferramentas de automatização (Ex.: Selenium, Cucumber);
- Graduação em cursos de tecnologia;
- Sólida experiência com planejamento de testes e na sua execução;
- Experiência em testes manuais;
- Conhecimento em Testes automatizados;
- Experiência em metodologias ágeis.
## Diferenciais:
- Especialização ou certificação em processos de qualidade de software;
- Experiência também em desenvolvimento;
- Conhecimento avançado em inglês;
- Experiência em sistemas de operadoras de saúde.
## Benefícios:
- Assistência Médica e odontológica;
- Convênio com farmácia;
- Participação nos lucros;
- Vale refeição;
- Vale transporte;
- Parcerias e convênios;
- Programas de saúde e bem-estar.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Test Analyst na Grupo Fácil](https://coodesh.com/vagas/test-analyst-172621911?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Categoria
Testes/Q.A
|
process
|
test analyst na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 o grupo fácil busca test analyst para compor seu time de desenvolvimento do novo produto terá como missão planejar desenvolver os testes automatizados e implantar a partir da análise funcional do novo produto a fim de atestar a qualidade das aplicações responsabilidades automação de testes conhecimento dos conceitos fundamentais de teste de software planejamento e elaboração de testes análise estratégia e escopo dos testes execução de testes execução consolidação análise e comunicação dos resultados e progresso dos testes consultas de banco de dados preferência sql grupo fácil ao longo de anos de história o grupo fácil se tornou referência nacional em sistemas softwares e serviços para a gestão de negócios nas áreas financeira e de crédito da saúde e no setor imobiliário o grupo fácil é formado por um conjunto de empresas que se destacam pela solidez e ousadia em projetos que otimizam processos e oferecem mais segurança e rentabilidade para seus clientes nbsp habilidades selenium cucumber sql local remoto requisitos conhecimento em ferramentas de automatização ex selenium cucumber graduação em cursos de tecnologia sólida experiência com planejamento de testes e na sua execução experiência em testes manuais conhecimento em testes automatizados experiência em metodologias ágeis diferenciais especialização ou certificação em processos de qualidade de software experiência também em desenvolvimento conhecimento avançado em inglês experiência em sistemas de operadoras de saúde benefícios assistência médica e odontológica convênio com farmácia participação nos lucros vale refeição vale transporte parcerias e convênios programas de saúde e bem estar como se candidatar candidatar se 
exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto categoria testes q a
| 1
|
2,477
| 2,615,170,509
|
IssuesEvent
|
2015-03-01 06:52:19
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Copy for Business section
|
auto-migrated Milestone-Q42011-1 New Priority-P2 redesign Type-Bug
|
```
Create the content for the /enterprise page of html5rocks.com
```
Original issue reported on code.google.com by `ericbide...@html5rocks.com` on 19 Oct 2011 at 3:47
* Blocking: #663
|
1.0
|
Copy for Business section - ```
Create the content for the /enterprise page of html5rocks.com
```
Original issue reported on code.google.com by `ericbide...@html5rocks.com` on 19 Oct 2011 at 3:47
* Blocking: #663
|
non_process
|
copy for business section create the content for the enterprise page of com original issue reported on code google com by ericbide com on oct at blocking
| 0
|
987
| 3,022,649,760
|
IssuesEvent
|
2015-07-31 21:42:24
|
catapult-project/catapult
|
https://api.github.com/repos/catapult-project/catapult
|
closed
|
Add a presubmit check for csslint and gjslint
|
Infrastructure P2
|
<a href="https://github.com/natduca"><img src="https://avatars.githubusercontent.com/u/412396?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [natduca](https://github.com/natduca)**
_Monday Sep 22, 2014 at 19:56 GMT_
_Originally opened as https://github.com/google/trace-viewer/issues/49_
----
_From [nd...@chromium.org](https://code.google.com/u/102435256078839283966/) on June 16, 2012 22:21:46_
When trace-viewer was in the chrome repo, we picked up presubmit checks for both javascript and css lint. We should try to bring those back online.
_Original issue: http://code.google.com/p/trace-viewer/issues/detail?id=43_
|
1.0
|
Add a presubmit check for csslint and gjslint - <a href="https://github.com/natduca"><img src="https://avatars.githubusercontent.com/u/412396?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [natduca](https://github.com/natduca)**
_Monday Sep 22, 2014 at 19:56 GMT_
_Originally opened as https://github.com/google/trace-viewer/issues/49_
----
_From [nd...@chromium.org](https://code.google.com/u/102435256078839283966/) on June 16, 2012 22:21:46_
When trace-viewer was in the chrome repo, we picked up presubmit checks for both javascript and css lint. We should try to bring those back online.
_Original issue: http://code.google.com/p/trace-viewer/issues/detail?id=43_
|
non_process
|
add a presubmit check for csslint and gjslint issue by monday sep at gmt originally opened as from on june when trace viewer was in the chrome repo we picked up presubmit checks for both javascript and css lint we should try to bring those back online original issue
| 0
|
3,862
| 6,808,629,759
|
IssuesEvent
|
2017-11-04 05:51:27
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
Document 'hidden' command line options
|
apps-all status-inprocess tools-all type-enhancement
|
Some tools have hidden command line options. For example, the --silent option on grabABI. These are for use in special cases, and do not warrant inclusion in the help text sent to the screen, but they should be documented. Here's a current list that is probably out of date since this issue was posted.
./apps/grabABI/options.cpp: @silent
./apps/blockScrape/options.cpp: @skip
./apps/ethslurp/options.cpp: @--sleep
./apps/ethslurp/options.cpp: @--func
./apps/ethslurp/options.cpp: @--errFilt
./apps/ethslurp/options.cpp: @--reverse
./apps/ethslurp/options.cpp: @--acct_id
./apps/ethslurp/options.cpp: @--cache
./apps/miniBlocks/options.cpp: @skip
./monitors/cacheMan/options.cpp: @s(k)ip
./tools/getBlock/options.cpp: @f(o)rce (see note in #473)
./tools/getBloom/options.cpp: @f(o)rce
|
1.0
|
Document 'hidden' command line options - Some tools have hidden command line options. For example, the --silent option on grabABI. These are for use in special cases, and do not warrant inclusion in the help text sent to the screen, but they should be documented. Here's a current list that is probably out of date since this issue was posted.
./apps/grabABI/options.cpp: @silent
./apps/blockScrape/options.cpp: @skip
./apps/ethslurp/options.cpp: @--sleep
./apps/ethslurp/options.cpp: @--func
./apps/ethslurp/options.cpp: @--errFilt
./apps/ethslurp/options.cpp: @--reverse
./apps/ethslurp/options.cpp: @--acct_id
./apps/ethslurp/options.cpp: @--cache
./apps/miniBlocks/options.cpp: @skip
./monitors/cacheMan/options.cpp: @s(k)ip
./tools/getBlock/options.cpp: @f(o)rce (see note in #473)
./tools/getBloom/options.cpp: @f(o)rce
|
process
|
document hidden command line options some tools have hidden command line options for example the silent option on grababi these are for use in special cases and do not warrant inclusion in the help text sent to the screen but they should be documented here s a current list that is probably out of date since this issue was posted apps grababi options cpp silent apps blockscrape options cpp skip apps ethslurp options cpp sleep apps ethslurp options cpp func apps ethslurp options cpp errfilt apps ethslurp options cpp reverse apps ethslurp options cpp acct id apps ethslurp options cpp cache apps miniblocks options cpp skip monitors cacheman options cpp s k ip tools getblock options cpp f o rce see note in tools getbloom options cpp f o rce
| 1
|
13,363
| 15,826,731,237
|
IssuesEvent
|
2021-04-06 07:49:03
|
ultimate-pa/ultimate
|
https://api.github.com/repos/ultimate-pa/ultimate
|
closed
|
Problem with validator from SVCOMP 2021
|
external processes investigation needed possible bug
|
Hi,
I'm using the version of UAutomizer from the archive2021 repo of SVCOMP and I'm getting an error when trying to validate a witness. I'm running the following cmd
```
./Ultimate.py --spec ~/git/sv-benchmarks/c/properties/unreach-call.prp --file ~/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i --architecture 32bit --validate ~/git/Dat3M/output/witness.graphml
```
which results in
```
Checking for ERROR reachability
Using default analysis
Version 7b2dab56
Calling Ultimate with: /usr/bin/java -Dosgi.configuration.area=/Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data/config -Xmx15G -Xms4m -jar /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/plugins/org.eclipse.equinox.launcher_1.5.800.v20200727-1323.jar -data @noDefault -ultimatedata /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data -tc /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/AutomizerReachWitnessValidation.xml -i /Users/ponce/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i /Users/ponce/git/Dat3M/output/witness.graphml -s /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/svcomp-Reach-32bit-Automizer_Default.epf --cacsl2boogietranslator.entry.function main --traceabstraction.compute.hoare.annotation.of.negated.interpolant.automaton,.abstraction.and.cfg false
.................................................................................................................................................................................
Execution finished normally
Using bit-precise analysis
Retrying with bit-precise analysis
Calling Ultimate with: /usr/bin/java -Dosgi.configuration.area=/Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data/config -Xmx15G -Xms4m -jar /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/plugins/org.eclipse.equinox.launcher_1.5.800.v20200727-1323.jar -data @noDefault -ultimatedata /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data -tc /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/AutomizerReachWitnessValidation.xml -i /Users/ponce/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i /Users/ponce/git/Dat3M/output/witness.graphml -s /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/svcomp-Reach-32bit-Automizer_Bitvector.epf --cacsl2boogietranslator.entry.function main --traceabstraction.compute.hoare.annotation.of.negated.interpolant.automaton,.abstraction.and.cfg false
...................................................................................................................................................................................
Execution finished normally
Writing output log to file Ultimate.log
Result:
ERROR: ExceptionOrErrorResult: NullPointerException: null
```
Attached are the two used files (task + witness) + the log from UAutomizer.
[Archive.zip](https://github.com/ultimate-pa/ultimate/files/6250510/Archive.zip)
Am I missing any option flag maybe?
|
1.0
|
Problem with validator from SVCOMP 2021 - Hi,
I'm using the version of UAutomizer from the archive2021 repo of SVCOMP and I'm getting an error when trying to validate a witness. I'm running the following cmd
```
./Ultimate.py --spec ~/git/sv-benchmarks/c/properties/unreach-call.prp --file ~/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i --architecture 32bit --validate ~/git/Dat3M/output/witness.graphml
```
which results in
```
Checking for ERROR reachability
Using default analysis
Version 7b2dab56
Calling Ultimate with: /usr/bin/java -Dosgi.configuration.area=/Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data/config -Xmx15G -Xms4m -jar /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/plugins/org.eclipse.equinox.launcher_1.5.800.v20200727-1323.jar -data @noDefault -ultimatedata /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data -tc /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/AutomizerReachWitnessValidation.xml -i /Users/ponce/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i /Users/ponce/git/Dat3M/output/witness.graphml -s /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/svcomp-Reach-32bit-Automizer_Default.epf --cacsl2boogietranslator.entry.function main --traceabstraction.compute.hoare.annotation.of.negated.interpolant.automaton,.abstraction.and.cfg false
.................................................................................................................................................................................
Execution finished normally
Using bit-precise analysis
Retrying with bit-precise analysis
Calling Ultimate with: /usr/bin/java -Dosgi.configuration.area=/Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data/config -Xmx15G -Xms4m -jar /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/plugins/org.eclipse.equinox.launcher_1.5.800.v20200727-1323.jar -data @noDefault -ultimatedata /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/data -tc /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/AutomizerReachWitnessValidation.xml -i /Users/ponce/git/sv-benchmarks/c/pthread-wmm/mix000_power.oepc.i /Users/ponce/git/Dat3M/output/witness.graphml -s /Users/ponce/git/forks/archives-2021/2021/UAutomizer-linux/config/svcomp-Reach-32bit-Automizer_Bitvector.epf --cacsl2boogietranslator.entry.function main --traceabstraction.compute.hoare.annotation.of.negated.interpolant.automaton,.abstraction.and.cfg false
...................................................................................................................................................................................
Execution finished normally
Writing output log to file Ultimate.log
Result:
ERROR: ExceptionOrErrorResult: NullPointerException: null
```
Attached are the two used files (task + witness) + the log from UAutomizer.
[Archive.zip](https://github.com/ultimate-pa/ultimate/files/6250510/Archive.zip)
Am I missing any option flag maybe?
|
process
|
problem with validator from svcomp hi i m using the version of uautomizer from the repo of svcomp and i m getting and error when trying to validate a witness i m running the following cmd ultimate py spec git sv benchmarks c properties unreach call prp file git sv benchmarks c pthread wmm power oepc i architecture validate git output witness graphml which results in checking for error reachability using default analysis version calling ultimate with usr bin java dosgi configuration area users ponce git forks archives uautomizer linux data config jar users ponce git forks archives uautomizer linux plugins org eclipse equinox launcher jar data nodefault ultimatedata users ponce git forks archives uautomizer linux data tc users ponce git forks archives uautomizer linux config automizerreachwitnessvalidation xml i users ponce git sv benchmarks c pthread wmm power oepc i users ponce git output witness graphml s users ponce git forks archives uautomizer linux config svcomp reach automizer default epf entry function main traceabstraction compute hoare annotation of negated interpolant automaton abstraction and cfg false execution finished normally using bit precise analysis retrying with bit precise analysis calling ultimate with usr bin java dosgi configuration area users ponce git forks archives uautomizer linux data config jar users ponce git forks archives uautomizer linux plugins org eclipse equinox launcher jar data nodefault ultimatedata users ponce git forks archives uautomizer linux data tc users ponce git forks archives uautomizer linux config automizerreachwitnessvalidation xml i users ponce git sv benchmarks c pthread wmm power oepc i users ponce git output witness graphml s users ponce git forks archives uautomizer linux config svcomp reach automizer bitvector epf entry function main traceabstraction compute hoare annotation of negated interpolant automaton abstraction and cfg false execution finished normally writing output log to file ultimate log result 
error exceptionorerrorresult nullpointerexception null attached are the two used files task witness the log from uautomizer am i missing any option flag maybe
| 1
|
18,776
| 24,678,486,338
|
IssuesEvent
|
2022-10-18 19:05:11
|
dtcenter/MET
|
https://api.github.com/repos/dtcenter/MET
|
closed
|
Enhance ASCII2NC to read NDBC buoy data
|
requestor: NOAA/EMC type: new feature reporting: DTC NOAA R2O requestor: NOAA/OPC MET: PreProcessing Tools (Point) priority: high
|
## Describe the New Feature ##
The desire to use buoy data in ASCII format for verification within MET has come up twice recently as seen in this [GitHub discussion](https://github.com/dtcenter/METplus/discussions/1747). This functionality is needed by both NOAA/EMC and NOAA/OPC. While the interim solution is using python embedding of point observations, the task for this issue is to support it directly via the ASCII2NC tool in MET. Please refer to this dtcenter/METplus#1482 issue for a related METplus use case.
Recommend supporting [NDBC Buoy](https://www.ndbc.noaa.gov/) data. This issue includes:
- Contacting @DeannaSpindler-NOAA to retrieve sample data.
- Adding support for a new `-format` option to read this data.
- Add new unit test(s) to demonstrate this functionality.
- Update the User's Guide accordingly.
### Acceptance Testing ###
*List input data types and sources.*
*Describe tests required for new functionality.*
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
None needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2773542
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required: @davidalbo
- [x] Select **scientist(s)** or **no scientist** required: @j-opatz
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Likely no impacts, but recommend adding/updating METplus use cases to leverage this new functionality.
## New Feature Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [x] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [x] Complete the development and test your changes.
- [x] Add/update log messages for easier debugging.
- [x] Add/update unit tests.
- [x] Add/update documentation.
- [x] Push local changes to GitHub.
- [x] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [x] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [x] Iterate until the reviewer(s) accept and merge your changes.
- [x] Delete your fork or branch.
- [x] Close this issue.
|
1.0
|
Enhance ASCII2NC to read NDBC buoy data - ## Describe the New Feature ##
The desire to use buoy data in ASCII format for verification within MET has come up twice recently as seen in this [GitHub discussion](https://github.com/dtcenter/METplus/discussions/1747). This functionality is needed by both NOAA/EMC and NOAA/OPC. While the interim solution is using python embedding of point observations, the task for this issue is to support it directly via the ASCII2NC tool in MET. Please refer to this dtcenter/METplus#1482 issue for a related METplus use case.
Recommend supporting [NDBC Buoy](https://www.ndbc.noaa.gov/) data. This issue includes:
- Contacting @DeannaSpindler-NOAA to retrieve sample data.
- Adding support for a new `-format` option to read this data.
- Add new unit test(s) to demonstrate this functionality.
- Update the User's Guide accordingly.
### Acceptance Testing ###
*List input data types and sources.*
*Describe tests required for new functionality.*
### Time Estimate ###
*Estimate the amount of work required here.*
*Issues should represent approximately 1 to 3 days of work.*
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
None needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
2773542
## Define the Metadata ##
### Assignee ###
- [x] Select **engineer(s)** or **no engineer** required: @davidalbo
- [x] Select **scientist(s)** or **no scientist** required: @j-opatz
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdataio](https://github.com/dtcenter/METdataio/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Likely no impacts, but recommend adding/updating METplus use cases to leverage this new functionality.
## New Feature Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [x] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [x] Complete the development and test your changes.
- [x] Add/update log messages for easier debugging.
- [x] Add/update unit tests.
- [x] Add/update documentation.
- [x] Push local changes to GitHub.
- [x] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [x] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [x] Iterate until the reviewer(s) accept and merge your changes.
- [x] Delete your fork or branch.
- [x] Close this issue.
|
process
|
enhance to read ndbc buoy data describe the new feature the desire to use buoy data in ascii format for verification within met has come up twice recently as seen in this this functionality is needed by both noaa emc and noaa opc while the interim solution is using python embedding of point observations the task for this issue is to support it directly via the tool in met please refer to this dtcenter metplus issue for a related metplus use case recommend supporting data this issue includes contacting deannaspindler noaa to retrieve sample data adding support for a new format option to read this data add new unit test s to demonstrate this functionality update the user s guide accordingly acceptance testing list input data types and sources describe tests required for new functionality time estimate estimate the amount of work required here issues should represent approximately to days of work sub issues consider breaking the new feature down into sub issues none needed relevant deadlines list relevant project deadlines here or state none funding source define the metadata assignee select engineer s or no engineer required davidalbo select scientist s or no scientist required j opatz labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components likely no impacts but recommend adding updating metplus use cases to leverage this new functionality new feature checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a 
pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 1
|
5,641
| 8,499,512,297
|
IssuesEvent
|
2018-10-29 17:21:00
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[Banner] Define the Banner MVP
|
[Banner] type:Process
|
Definition of done:
- There is a Banner design doc listed at go/material-ios-design-docs.
- The design doc includes a list of MVP features and a separate list of non-MVP features.
- The team has had a chance to review and LGTM the set of MVP features.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/118211204](http://b/118211204)
|
1.0
|
[Banner] Define the Banner MVP - Definition of done:
- There is a Banner design doc listed at go/material-ios-design-docs.
- The design doc includes a list of MVP features and a separate list of non-MVP features.
- The team has had a chance to review and LGTM the set of MVP features.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/118211204](http://b/118211204)
|
process
|
define the banner mvp definition of done there is a banner design doc listed at go material ios design docs the design doc includes a list of mvp features and a separate list of non mvp features the team has had a chance to review and lgtm the set of mvp features internal data associated internal bug
| 1
|
18,024
| 24,032,795,259
|
IssuesEvent
|
2022-09-15 16:19:31
|
googleapis/java-pubsub-group-kafka-connector
|
https://api.github.com/repos/googleapis/java-pubsub-group-kafka-connector
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'pubsub-group-kafka-connector' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'pubsub-group-kafka-connector' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname pubsub group kafka connector invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
| 1
|
8,426
| 11,594,005,019
|
IssuesEvent
|
2020-02-24 14:37:40
|
SE-Garden/tms-webserver
|
https://api.github.com/repos/SE-Garden/tms-webserver
|
opened
|
Implement the OJT report search API
|
kind:機能 process:MK/UT
|
## Overview
Implement the OJT report search API
## Goal
- Implement the OJT report search API
- Unit testing of the OJT report search API will not be performed
## Deliverables
- Source code
## Related Issues
- None
|
1.0
|
Implement the OJT report search API - ## Overview
Implement the OJT report search API
## Goal
- Implement the OJT report search API
- Unit testing of the OJT report search API will not be performed
## Deliverables
- Source code
## Related Issues
- None
|
process
|
ojtレポート検索apiの実装 概要 ojtレポート検索apiの実装 ゴール ojtレポート検索apiの実装 ojtレポート検索apiの単体試験は実施しない 成果物 ソースコード 関連issue none
| 1
|
28,037
| 5,427,813,172
|
IssuesEvent
|
2017-03-03 14:24:58
|
DevExpress/testcafe
|
https://api.github.com/repos/DevExpress/testcafe
|
opened
|
Change debugging logging style
|
AREA: client AREA: server DOCUMENTATION: required SYSTEM: runner TYPE: enhancement
|
### Are you requesting a feature or reporting a bug?
feature
### What is the current behavior?
debugging messages are added one by one in console (for each debugging stop point)
### What is the expected behavior?
We should show one debugging message and update it on each debugging stop
|
1.0
|
Change debugging logging style - ### Are you requesting a feature or reporting a bug?
feature
### What is the current behavior?
debugging messages are added one by one in console (for each debugging stop point)
### What is the expected behavior?
We should show one debugging message and update it on each debugging stop
|
non_process
|
change debugging logging style are you requesting a feature or reporting a bug feature what is the current behavior debugging messages are added one by one in console for each debugging stop point what is the expected behavior we should show one debugging message and update it on each debugging stop
| 0
|
26,779
| 7,868,873,283
|
IssuesEvent
|
2018-06-24 05:57:17
|
zeebe-io/zeebe
|
https://api.github.com/repos/zeebe-io/zeebe
|
opened
|
Access violation in ZeebeClientTest
|
bug unstable build
|
Yesterday there were two access violations in the ZeebeClientTest.
```java
[INFO] Running io.zeebe.client.workflow.WorkflowRepositoryTest
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 s - in io.zeebe.client.workflow.WorkflowRepositoryTest
[INFO] Running io.zeebe.client.ZeebeClientTest
[WARNING] Corrupted stdin stream in forked JVM 1. See the dump file C:\Users\zell\development\zeebe\client-java\target\surefire-reports\2018-06-23T18-43-34_283-jvmRun1.dumpstream
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 213, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:05 min
[INFO] Finished at: 2018-06-23T18:44:26+02:00
[INFO] Final Memory: 34M/577M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20:test (default-test) on project zeebe-client-java: There are test failures.
[ERROR]
[ERROR] Please refer to C:\Users\zell\development\zeebe\client-java\target\surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Java\jdk1.8.0_161\jre\bin\java" -jar C:\Users\zell\development\zeebe\client-java\target\surefire\surefirebooter5466655551250276849.jar C:\Users\zell\development\zeebe\client-java\target\surefire 2018-06-23T18-43-34_283-jvmRun1 surefire5185185090467177196tmp surefire_09021425793475733599tmp"
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] Crashed tests:
[ERROR] io.zeebe.client.ZeebeClientTest
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Java\jdk1.8.0_161\jre\bin\java" -jar C:\Users\zell\development\zeebe\client-java\target\surefire\surefirebooter5466655551250276849.jar C:\Users\zell\development\zeebe\client-java\target\surefire 2018-06-23T18-43-34_283-jvmRun1 surefire5185185090467177196tmp surefire_09021425793475733599tmp"
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] Crashed tests:
[ERROR] io.zeebe.client.ZeebeClientTest
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:679)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:533)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:279)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:243)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1077)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:907)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:785)
[ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
[ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:955)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:290)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:194)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
```
[hs_err_pid10508.log](https://github.com/zeebe-io/zeebe/files/2130522/hs_err_pid10508.log)
[hs_err_pid13212.log](https://github.com/zeebe-io/zeebe/files/2130523/hs_err_pid13212.log)
|
1.0
|
Access violation in ZeebeClientTest - Had yesterday two access violations in the ZeebeClientTest.
```java
[INFO] Running io.zeebe.client.workflow.WorkflowRepositoryTest
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 s - in io.zeebe.client.workflow.WorkflowRepositoryTest
[INFO] Running io.zeebe.client.ZeebeClientTest
[WARNING] Corrupted stdin stream in forked JVM 1. See the dump file C:\Users\zell\development\zeebe\client-java\target\surefire-reports\2018-06-23T18-43-34_283-jvmRun1.dumpstream
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 213, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:05 min
[INFO] Finished at: 2018-06-23T18:44:26+02:00
[INFO] Final Memory: 34M/577M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20:test (default-test) on project zeebe-client-java: There are test failures.
[ERROR]
[ERROR] Please refer to C:\Users\zell\development\zeebe\client-java\target\surefire-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Java\jdk1.8.0_161\jre\bin\java" -jar C:\Users\zell\development\zeebe\client-java\target\surefire\surefirebooter5466655551250276849.jar C:\Users\zell\development\zeebe\client-java\target\surefire 2018-06-23T18-43-34_283-jvmRun1 surefire5185185090467177196tmp surefire_09021425793475733599tmp"
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] Crashed tests:
[ERROR] io.zeebe.client.ZeebeClientTest
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was cmd.exe /X /C ""C:\Program Files\Java\jdk1.8.0_161\jre\bin\java" -jar C:\Users\zell\development\zeebe\client-java\target\surefire\surefirebooter5466655551250276849.jar C:\Users\zell\development\zeebe\client-java\target\surefire 2018-06-23T18-43-34_283-jvmRun1 surefire5185185090467177196tmp surefire_09021425793475733599tmp"
[ERROR] Error occurred in starting fork, check output in log
[ERROR] Process Exit Code: 1
[ERROR] Crashed tests:
[ERROR] io.zeebe.client.ZeebeClientTest
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:679)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:533)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:279)
[ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:243)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1077)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:907)
[ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:785)
[ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
[ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
[ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
[ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
[ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
[ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
[ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:955)
[ERROR] at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:290)
[ERROR] at org.apache.maven.cli.MavenCli.main(MavenCli.java:194)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
[ERROR] at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
[ERROR]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
```
[hs_err_pid10508.log](https://github.com/zeebe-io/zeebe/files/2130522/hs_err_pid10508.log)
[hs_err_pid13212.log](https://github.com/zeebe-io/zeebe/files/2130523/hs_err_pid13212.log)
|
non_process
|
access violation in zeebeclienttest had yesterday two access violations in the zeebeclienttest java running io zeebe client workflow workflowrepositorytest tests run failures errors skipped time elapsed s in io zeebe client workflow workflowrepositorytest running io zeebe client zeebeclienttest corrupted stdin stream in forked jvm see the dump file c users zell development zeebe client java target surefire reports dumpstream results tests run failures errors skipped build failure total time min finished at final memory failed to execute goal org apache maven plugins maven surefire plugin test default test on project zeebe client java there are test failures please refer to c users zell development zeebe client java target surefire reports for the individual test results please refer to dump files if any exist jvmrun dump dumpstream and jvmrun dumpstream the forked vm terminated without properly saying goodbye vm crash or system exit called command was cmd exe x c c program files java jre bin java jar c users zell development zeebe client java target surefire jar c users zell development zeebe client java target surefire surefire error occurred in starting fork check output in log process exit code crashed tests io zeebe client zeebeclienttest org apache maven surefire booter surefirebooterforkexception the forked vm terminated without properly saying goodbye vm crash or system exit called command was cmd exe x c c program files java jre bin java jar c users zell development zeebe client java target surefire jar c users zell development zeebe client java target surefire surefire error occurred in starting fork check output in log process exit code crashed tests io zeebe client zeebeclienttest at org apache maven plugin surefire booterclient forkstarter fork forkstarter java at org apache maven plugin surefire booterclient forkstarter fork forkstarter java at org apache maven plugin surefire booterclient forkstarter run forkstarter java at org apache maven plugin surefire booterclient forkstarter run forkstarter java at org apache maven plugin surefire abstractsurefiremojo executeprovider abstractsurefiremojo java at org apache maven plugin surefire abstractsurefiremojo executeafterpreconditionschecked abstractsurefiremojo java at org apache maven plugin surefire abstractsurefiremojo execute abstractsurefiremojo java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java to see the full stack trace of the errors re run maven with the e switch re run maven using the x switch to enable full debug logging for more information about the errors and possible solutions please read the following articles
| 0
|
1,804
| 4,540,544,765
|
IssuesEvent
|
2016-09-09 14:57:42
|
MModel/MetaModel
|
https://api.github.com/repos/MModel/MetaModel
|
closed
|
Alias support
|
[Difficulty] Easy [Propose] Enhancement [Status] In Process
|
Like Ruby on Rails, MetaModel needs aliases to act quickly.
+ meta init => meta i
+ meta generate => meta g
+ meta build => meta b
+ meta clean => meta c
|
1.0
|
Alias support - Like Ruby on Rails, MetaModel needs aliases to act quickly.
+ meta init => meta i
+ meta generate => meta g
+ meta build => meta b
+ meta clean => meta c
|
process
|
alias support like ruby on rails metamodel needs aliases to act quickly meta init meta i meta generate meta g meta build meta b meta clean meta c
| 1
|
15,469
| 19,682,388,175
|
IssuesEvent
|
2022-01-11 18:04:16
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
closed
|
Make connector-test-kit run with indexes on mongodb
|
process/candidate kind/tech team/migrations team/client topic: mongodb
|
TL;DR : the migration engine `SchemaPush` command was implemented for MongoDB in https://github.com/prisma/prisma-engines/pull/2287
It is the command that is used by the query engine to set up databases for tests in connector-test-kit. As a result of the command actually creating indexes now, CI started failing on QE tests.
https://github.com/prisma/prisma-engines/pull/2287/commits/1c74eca0def7a23809a252508f52e0e15d43e416 reverts the test setup for the QE to the old behaviour, but I imagine we would like to run the QE tests _with indexes_ eventually, so I am creating this issue.
Steps to take:
- Revert the changes in 1c74eca0def7a23809a252508f52e0e15d43e416 and use `SchemaPush` again
- Fix the failing tests. Hopefully, these are just different ordering problems because of the former lack of indexes.
|
1.0
|
Make connector-test-kit run with indexes on mongodb - TL;DR : the migration engine `SchemaPush` command was implemented for MongoDB in https://github.com/prisma/prisma-engines/pull/2287
It is the command that is used by the query engine to set up databases for tests in connector-test-kit. As a result of the command actually creating indexes now, CI started failing on QE tests.
https://github.com/prisma/prisma-engines/pull/2287/commits/1c74eca0def7a23809a252508f52e0e15d43e416 reverts the test setup for the QE to the old behaviour, but I imagine we would like to run the QE tests _with indexes_ eventually, so I am creating this issue.
Steps to take:
- Revert the changes in 1c74eca0def7a23809a252508f52e0e15d43e416 and use `SchemaPush` again
- Fix the failing tests. Hopefully, these are just different ordering problems because of the former lack of indexes.
|
process
|
make connector test kit run with indexes on mongodb tl dr the migration engine schemapush command was implemented for mongodb in it is the command that is used by the query engine to set up databases for tests in connector test kit as a result of the command actually creating indexes now ci started failing on qe tests reverts the test setup for the qe to the old behaviour but i imagine we would like to run the qe tests with indexes eventually so i am creating this issue steps to take revert the changes in and use schemapush again fix the failing tests hopefully these are just different ordering problems because of the former lack of indexes
| 1
|
103,447
| 11,356,610,088
|
IssuesEvent
|
2020-01-24 23:21:17
|
pokusew/node-pcsclite
|
https://api.github.com/repos/pokusew/node-pcsclite
|
closed
|
Cannot install on Windows 10 x64 with Node.js 6.11.2
|
documentation question
|
**Hello, I receive the following message when trying to install your moduel (npm install nfc-pcsc):**
598 silly install printInstalled
599 verbose stack Error: @pokusew/pcsclite@0.4.17 install: `node-gyp rebuild`
599 verbose stack Exit status 1
599 verbose stack at EventEmitter.<anonymous> (C:\job\node\node_modules\npm\lib\utils\lifecycle.js:255:16)
599 verbose stack at emitTwo (events.js:106:13)
599 verbose stack at EventEmitter.emit (events.js:191:7)
599 verbose stack at ChildProcess.<anonymous> (C:\job\node\node_modules\npm\lib\utils\spawn.js:40:14)
599 verbose stack at emitTwo (events.js:106:13)
599 verbose stack at ChildProcess.emit (events.js:191:7)
599 verbose stack at maybeClose (internal/child_process.js:891:16)
599 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
600 verbose pkgid @pokusew/pcsclite@0.4.17
601 verbose cwd C:\job\node
602 error Windows_NT 10.0.15063
603 error argv "C:\\job\\node\\node.exe" "C:\\job\\node\\node_modules\\npm\\bin\\npm-cli.js" "install" "nfc-pcsc"
604 error node v6.11.2
605 error npm v3.10.10
606 error code ELIFECYCLE
607 error @pokusew/pcsclite@0.4.17 install: `node-gyp rebuild`
607 error Exit status 1
608 error Failed at the @pokusew/pcsclite@0.4.17 install script 'node-gyp rebuild'.
608 error Make sure you have the latest version of node.js and npm installed.
608 error If you do, this is most likely a problem with the @pokusew/pcsclite package,
608 error not with npm itself.
608 error Tell the author that this fails on your system:
608 error node-gyp rebuild
608 error You can get information on how to open an issue for this project with:
608 error npm bugs @pokusew/pcsclite
608 error Or if that isn't available, you can get their info via:
608 error npm owner ls @pokusew/pcsclite
608 error There is likely additional logging output above.
609 verbose exit [ 1, true ]
**Can you please suggest me how to fix that and install your module.**
|
1.0
|
Cannot install on Windows 10 x64 with Node.js 6.11.2 - **Hello, I receive the following message when trying to install your moduel (npm install nfc-pcsc):**
598 silly install printInstalled
599 verbose stack Error: @pokusew/pcsclite@0.4.17 install: `node-gyp rebuild`
599 verbose stack Exit status 1
599 verbose stack at EventEmitter.<anonymous> (C:\job\node\node_modules\npm\lib\utils\lifecycle.js:255:16)
599 verbose stack at emitTwo (events.js:106:13)
599 verbose stack at EventEmitter.emit (events.js:191:7)
599 verbose stack at ChildProcess.<anonymous> (C:\job\node\node_modules\npm\lib\utils\spawn.js:40:14)
599 verbose stack at emitTwo (events.js:106:13)
599 verbose stack at ChildProcess.emit (events.js:191:7)
599 verbose stack at maybeClose (internal/child_process.js:891:16)
599 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
600 verbose pkgid @pokusew/pcsclite@0.4.17
601 verbose cwd C:\job\node
602 error Windows_NT 10.0.15063
603 error argv "C:\\job\\node\\node.exe" "C:\\job\\node\\node_modules\\npm\\bin\\npm-cli.js" "install" "nfc-pcsc"
604 error node v6.11.2
605 error npm v3.10.10
606 error code ELIFECYCLE
607 error @pokusew/pcsclite@0.4.17 install: `node-gyp rebuild`
607 error Exit status 1
608 error Failed at the @pokusew/pcsclite@0.4.17 install script 'node-gyp rebuild'.
608 error Make sure you have the latest version of node.js and npm installed.
608 error If you do, this is most likely a problem with the @pokusew/pcsclite package,
608 error not with npm itself.
608 error Tell the author that this fails on your system:
608 error node-gyp rebuild
608 error You can get information on how to open an issue for this project with:
608 error npm bugs @pokusew/pcsclite
608 error Or if that isn't available, you can get their info via:
608 error npm owner ls @pokusew/pcsclite
608 error There is likely additional logging output above.
609 verbose exit [ 1, true ]
**Can you please suggest me how to fix that and install your module.**
|
non_process
|
cannot install on windows with node js hello i receive the following message when trying to install your moduel npm install nfc pcsc silly install printinstalled verbose stack error pokusew pcsclite install node gyp rebuild verbose stack exit status verbose stack at eventemitter c job node node modules npm lib utils lifecycle js verbose stack at emittwo events js verbose stack at eventemitter emit events js verbose stack at childprocess c job node node modules npm lib utils spawn js verbose stack at emittwo events js verbose stack at childprocess emit events js verbose stack at maybeclose internal child process js verbose stack at process childprocess handle onexit internal child process js verbose pkgid pokusew pcsclite verbose cwd c job node error windows nt error argv c job node node exe c job node node modules npm bin npm cli js install nfc pcsc error node error npm error code elifecycle error pokusew pcsclite install node gyp rebuild error exit status error failed at the pokusew pcsclite install script node gyp rebuild error make sure you have the latest version of node js and npm installed error if you do this is most likely a problem with the pokusew pcsclite package error not with npm itself error tell the author that this fails on your system error node gyp rebuild error you can get information on how to open an issue for this project with error npm bugs pokusew pcsclite error or if that isn t available you can get their info via error npm owner ls pokusew pcsclite error there is likely additional logging output above verbose exit can you please suggest me how to fix that and install your module
| 0
|
7,429
| 10,547,058,302
|
IssuesEvent
|
2019-10-02 23:28:02
|
OCFL/spec
|
https://api.github.com/repos/OCFL/spec
|
closed
|
Create an RFC process
|
Process/Extensions/Related
|
Create a github repository in the OCFL organization for the purpose of community review and publishing of individual OCFL-related RFCs. Make any decisions related to policies and ongoing maintenance.
## Background
At the 8/14 [OCFL community meeting](https://github.com/OCFL/spec/wiki/2019.08.14-Community-Meeting#rfc-proposal-aaron-birkland) there was interest in creating an RFC process separate from the specification effort. This was found to be a nice solution to [citing specific layouts](https://github.com/OCFL/spec/issues/351#issuecomment-518829309), and collecting implementation details/recommendations that are out of scope for the spec, but are a practical necessity.
An example collection of RFCs related to specific layout choices may be found in [the demo repository](https://birkland.github.io/ocfl-rfc-demo/). At the community meeting, it was thought that:
* The implementation notes would be a good home for a general description of considerations when creating software for an OCFL repository, whereas published RFCs would be a good home for specific implementation patterns
* Some RFCs might eventually make their way into a future specification revision
* The specification might cite RFCs, and certain information might move from the spec into a RFC (for example, defining or amending the list of supported hash algorithms)
## Open Questions
* What is the policy for review and merging of RFCs into this repository?
* Who would be the maintainers of the RFC repository? Community volunteers who may or may not be editors?
|
1.0
|
Create an RFC process - Create a github repository in the OCFL organization for the purpose of community review and publishing of individual OCFL-related RFCs. Make any decisions related to policies and ongoing maintenance.
## Background
At the 8/14 [OCFL community meeting](https://github.com/OCFL/spec/wiki/2019.08.14-Community-Meeting#rfc-proposal-aaron-birkland) there was interest in creating an RFC process separate from the specification effort. This was found to be a nice solution to [citing specific layouts](https://github.com/OCFL/spec/issues/351#issuecomment-518829309), and collecting implementation details/recommendations that are out of scope for the spec, but are a practical necessity.
An example collection of RFCs related to specific layout choices may be found in [the demo repository](https://birkland.github.io/ocfl-rfc-demo/). At the community meeting, it was thought that:
* The implementation notes would be a good home for a general description of considerations when creating software for an OCFL repository, whereas published RFCs would be a good home for specific implementation patterns
* Some RFCs might eventually make their way into a future specification revision
* The specification might cite RFCs, and certain information might move from the spec into a RFC (for example, defining or amending the list of supported hash algorithms)
## Open Questions
* What is the policy for review and merging of RFCs into this repository?
* Who would be the maintainers of the RFC repository? Community volunteers who may or may not be editors?
|
process
|
create an rfc process create a github repository in the ocfl organization for the purpose of community review and publishing of individual ocfl related rfcs make any decisions related to policies and ongoing maintenance background at the there was interest in creating an rfc process separate from the specification effort this was found to be a nice solution to and collecting implementation details recommendations that are out of scope for the spec but are a practical necessity an example collection of rfcs related to specific layout choices may be found in at the community meeting it was thought that the implementation notes would be a good home for a general description of considerations when creating software for an ocfl repository whereas published rfcs would be a good home for specific implementation patterns some rfcs might eventually make their way into a future specification revision the specification might cite rfcs and certain information might move from the spec into a rfc for example defining or amending the list of supported hash algorithms open questions what is the policy for review and merging of rfcs into this repository who would be the maintainers of the rfc repository community volunteers who may or may not be editors
| 1
|
212,875
| 7,243,582,785
|
IssuesEvent
|
2018-02-14 12:17:34
|
jrantamaki/supertimemachine
|
https://api.github.com/repos/jrantamaki/supertimemachine
|
closed
|
Bug: Calculation of elapsed time is wrong
|
bug frontend priority: high
|
Used Duration does not work properly when timestamps are for different dates.
|
1.0
|
Bug: Calculation of elapsed time is wrong - Used Duration does not work properly when timestamps are for different dates.
|
non_process
|
bug calculation of elapsed time is wrong used duration does not work properly when timestamps are for different dates
| 0
|
142,549
| 13,033,675,150
|
IssuesEvent
|
2020-07-28 07:24:00
|
Maxi35/pyrelay
|
https://api.github.com/repos/Maxi35/pyrelay
|
closed
|
Is there any documentation or tutorials available?
|
documentation
|
I am mildly proficient with python, but am struggling to pick up your syntax just from the single plugin bundled with the project.
The whole thing works phenomenally so far, and I would really like to get into some interesting developments soon. Is there anyway you can post more plugins so I can learn by example if you do not have the time to create some tutorials for people to begin using the program?
Thank you very much for your time working on this project. it's incredible and I simply cannot wait to get started with it.
|
1.0
|
Is there any documentation or tutorials available? - I am mildly proficient with python, but am struggling to pick up your syntax just from the single plugin bundled with the project.
The whole thing works phenomenally so far, and I would really like to get into some interesting developments soon. Is there anyway you can post more plugins so I can learn by example if you do not have the time to create some tutorials for people to begin using the program?
Thank you very much for your time working on this project. it's incredible and I simply cannot wait to get started with it.
|
non_process
|
is there any documentation or tutorials available i am mildly proficient with python but am struggling to pick up your syntax just from the single plugin bundled with the project the whole thing works phenomenally so far and i would really like to get into some interesting developments soon is there anyway you can post more plugins so i can learn by example if you do not have the time to create some tutorials for people to begin using the program thank you very much for your time working on this project it s incredible and i simply cannot wait to get started with it
| 0
|
587,137
| 17,605,391,149
|
IssuesEvent
|
2021-08-17 16:23:57
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
closed
|
Autoreplacing text with emoji prevents from links editing
|
bug Chat priority 2: medium
|
There could be more cases, but an obvious one: when i want to edit a link in a text box, removing `/` replaces the `:/ ` with emoji, which is unwanted in this case. I think we need to make suggestions for such replacements and allow user to decide wether he needs it or not
**To reproduce:**
1. open any chat
2. paste a link to the chat box
3. try to remove `/`
https://user-images.githubusercontent.com/82375995/129055813-5a72ebbf-b608-4f63-a75d-e945d7ebf18a.mov
|
1.0
|
Autoreplacing text with emoji prevents from links editing - There could be more cases, but an obvious one: when i want to edit a link in a text box, removing `/` replaces the `:/ ` with emoji, which is unwanted in this case. I think we need to make suggestions for such replacements and allow user to decide wether he needs it or not
**To reproduce:**
1. open any chat
2. paste a link to the chat box
3. try to remove `/`
https://user-images.githubusercontent.com/82375995/129055813-5a72ebbf-b608-4f63-a75d-e945d7ebf18a.mov
|
non_process
|
autoreplacing text with emoji prevents from links editing there could be more cases but an obvious one when i want to edit a link in a text box removing replaces the with emoji which is unwanted in this case i think we need to make suggestions for such replacements and allow user to decide wether he needs it or not to reproduce open any chat paste a link to the chat box try to remove
| 0
|
377,823
| 11,184,853,817
|
IssuesEvent
|
2019-12-31 20:38:01
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.0 staging-1299] Animals: Client crash at some point
|
High Priority
|
Hard to reproduce. My steps was:
1. Take a bow ans arrows
2. Type /safe
2. Fly around ans fire some deers and goats
3. Have a crash at some point
```
<size=60.00%>Exception: NullReferenceException
Message:Object reference not set to an instance of an object.
Source:Eco.Simulation
System.NullReferenceException: Object reference not set to an instance of an object.
at Eco.Simulation.Agents.Animal.Attack()</size>
```
|
1.0
|
[0.9.0 staging-1299] Animals: Client crash at some point - Hard to reproduce. My steps was:
1. Take a bow ans arrows
2. Type /safe
2. Fly around ans fire some deers and goats
3. Have a crash at some point
```
<size=60.00%>Exception: NullReferenceException
Message:Object reference not set to an instance of an object.
Source:Eco.Simulation
System.NullReferenceException: Object reference not set to an instance of an object.
at Eco.Simulation.Agents.Animal.Attack()</size>
```
|
non_process
|
animals client crash at some point hard to reproduce my steps was take a bow ans arrows type safe fly around ans fire some deers and goats have a crash at some point exception nullreferenceexception message object reference not set to an instance of an object source eco simulation system nullreferenceexception object reference not set to an instance of an object at eco simulation agents animal attack
| 0
|
654,980
| 21,675,267,531
|
IssuesEvent
|
2022-05-08 16:02:16
|
chaotic-aur/packages
|
https://api.github.com/repos/chaotic-aur/packages
|
closed
|
[Request] MyPy
|
request:new-pkg priority:low
|
### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/mypy-git
### Utility this package has for you
Python development, static typing.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_
|
1.0
|
[Request] MyPy - ### Link to the package(s) in the AUR
https://aur.archlinux.org/packages/mypy-git
### Utility this package has for you
Python development, static typing.
### Do you consider the package(s) to be useful for every Chaotic-AUR user?
No, but for a few.
### Do you consider the package to be useful for feature testing/preview?
- [ ] Yes
### Have you tested if the package builds in a clean chroot?
- [X] Yes
### Does the package's license allow redistributing it?
YES!
### Have you searched the issues to ensure this request is unique?
- [X] YES!
### Have you read the README to ensure this package is not banned?
- [X] YES!
### More information
_No response_
|
non_process
|
mypy link to the package s in the aur utility this package has for you python development static typing do you consider the package s to be useful for every chaotic aur user no but for a few do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information no response
| 0
|
20,103
| 26,638,256,986
|
IssuesEvent
|
2023-01-25 00:36:04
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
Repeated augmentation layer
|
preprocessing roadmap
|
Repeated augmentation layer [1] has also become an important recipe to train SoTA image classification models. The abstract of [1] pretty much sums up what it is:
> Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
**References**
[1] [Augment Your Batch: Improving Generalization Through Instance Repetition](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hoffer_Augment_Your_Batch_Improving_Generalization_Through_Instance_Repetition_CVPR_2020_paper.pdf)
|
1.0
|
Repeated augmentation layer - Repeated augmentation layer [1] has also become an important recipe to train SoTA image classification models. The abstract of [1] pretty much sums up what it is:
> Large-batch SGD is important for scaling training of deep neural networks. However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered. We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling for a fixed budget of optimization steps. We analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets. Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art. Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently.
**References**
[1] [Augment Your Batch: Improving Generalization Through Instance Repetition](https://openaccess.thecvf.com/content_CVPR_2020/papers/Hoffer_Augment_Your_Batch_Improving_Generalization_Through_Instance_Repetition_CVPR_2020_paper.pdf)
|
process
|
repeated augmentation layer repeated augmentation layer has also become an important recipe to train sota image classification models the abstract of pretty much sums up what it is large batch sgd is important for scaling training of deep neural networks however without fine tuning hyperparameter schedules the generalization of the model may be hampered we propose to use batch augmentation replicating instances of samples within the same batch with different data augmentations batch augmentation acts as a regularizer and an accelerator increasing both generalization and performance scaling for a fixed budget of optimization steps we analyze the effect of batch augmentation on gradient variance and show that it empirically improves convergence for a wide variety of networks and datasets our results show that batch augmentation reduces the number of necessary sgd updates to achieve the same accuracy as the state of the art overall this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently references
| 1
|
4,493
| 7,346,179,194
|
IssuesEvent
|
2018-03-07 19:50:10
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Azure AD > App Registrations > Settings > Properties > Required Permissions
|
active-directory cxp in-process triaged
|
Step 1 Register Application, Point 8 (Grant permissions across your tenant for your application): The "Required Permissions" has now it's own tab in Settings in the category "API Access" and is not anymore below Settings > Properties
Wrong: Settings > Properties > Required Permissions
Current: Settings -> Required Permissions
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0574fe6c-3e23-b803-f12b-aa698d238d5b
* Version Independent ID: bf7ebc09-c16f-0b1e-53f8-35f305af0c26
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-devquickstarts-angular)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-devquickstarts-angular.md)
* Service: active-directory
|
1.0
|
Azure AD > App Registrations > Settings > Properties > Required Permissions - Step 1 Register Application, Point 8 (Grant permissions across your tenant for your application): The "Required Permissions" has now it's own tab in Settings in the category "API Access" and is not anymore below Settings > Properties
Wrong: Settings > Properties > Required Permissions
Current: Settings -> Required Permissions
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0574fe6c-3e23-b803-f12b-aa698d238d5b
* Version Independent ID: bf7ebc09-c16f-0b1e-53f8-35f305af0c26
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-devquickstarts-angular)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-devquickstarts-angular.md)
* Service: active-directory
|
process
|
azure ad app registrations settings properties required permissions step register application point grant permissions across your tenant for your application the required permissions has now it s own tab in settings in the category api access and is not anymore below settings properties wrong settings properties required permissions current settings required permissions document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service active directory
| 1
|
12,912
| 15,286,644,727
|
IssuesEvent
|
2021-02-23 14:55:45
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Update % calculation on the FE
|
ShapeupProcess challenge- recommender-tool
|
Jaccard index gives a score range of [0-1]. FE % match score calculation must be updated to reflect this.
@dedywahyudi
cc @Oanh-and-only-Oanh
|
1.0
|
Update % calculation on the FE - Jaccard index gives a score range of [0-1]. FE % match score calculation must be updated to reflect this.
@dedywahyudi
cc @Oanh-and-only-Oanh
|
process
|
update calculation on the fe jaccard index gives a score range of fe match score calculation must be updated to reflect this dedywahyudi cc oanh and only oanh
| 1
|
292,852
| 25,244,374,167
|
IssuesEvent
|
2022-11-15 10:00:19
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions - migration actions cloneIndex resolves left cluster_shard_limit_exceeded when the action would exceed the maximum normal open shards
|
Team:Core failed-test
|
A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 280000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:62:3
at _dispatchDescribe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:98:26)
at describe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:60:5)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:59:1)
at Runtime._execModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1646:24)
at Runtime._loadModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1185:12)
at Runtime.requireModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1009:12)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:13)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:389:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:475:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23497#01847704-a6ed-451b-a0ae-f94694f8b3b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions","test.name":"migration actions cloneIndex resolves left cluster_shard_limit_exceeded when the action would exceed the maximum normal open shards","test.failCount":1}} -->
|
1.0
|
Failing test: Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions - migration actions cloneIndex resolves left cluster_shard_limit_exceeded when the action would exceed the maximum normal open shards - A test failed on a tracked branch
```
Error: thrown: "Exceeded timeout of 280000 ms for a hook.
Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:62:3
at _dispatchDescribe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:98:26)
at describe (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/index.js:60:5)
at Object.<anonymous> (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/src/core/server/integration_tests/saved_objects/migrations/actions/actions.test.ts:59:1)
at Runtime._execModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1646:24)
at Runtime._loadModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1185:12)
at Runtime.requireModule (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runtime/build/index.js:1009:12)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:13)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:389:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-8e651ab3d6147ec4/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:475:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/23497#01847704-a6ed-451b-a0ae-f94694f8b3b3)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Integration Tests.src/core/server/integration_tests/saved_objects/migrations/actions","test.name":"migration actions cloneIndex resolves left cluster_shard_limit_exceeded when the action would exceed the maximum normal open shards","test.failCount":1}} -->
|
non_process
|
failing test jest integration tests src core server integration tests saved objects migrations actions migration actions cloneindex resolves left cluster shard limit exceeded when the action would exceed the maximum normal open shards a test failed on a tracked branch error thrown exceeded timeout of ms for a hook use jest settimeout newtimeout to increase the timeout value if this is a long running test at var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects migrations actions actions test ts at dispatchdescribe var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build index js at describe var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build index js at object var lib buildkite agent builds kb spot elastic kibana on merge kibana src core server integration tests saved objects migrations actions actions test ts at runtime execmodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at runtime loadmodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at runtime requiremodule var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runtime build index js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure
| 0
|
495
| 2,938,860,137
|
IssuesEvent
|
2015-07-01 13:29:45
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Раздел "документи" предлагает загрузку файлов с некорректными расширениями
|
hi priority In process of testing test version
|
При загрузке документов через https://e-gov.org.ua/api/service/documents/download/{ID} никаким образом не контролируется content-type, например при скачивании моего паспорта в файле в формате .jpg его было предложено скачать как passport.zip.
Необходимо, чтобы при скачиваний файлов им было подставлено корректное расширение файла.
|
1.0
|
Раздел "документи" предлагает загрузку файлов с некорректными расширениями - При загрузке документов через https://e-gov.org.ua/api/service/documents/download/{ID} никаким образом не контролируется content-type, например при скачивании моего паспорта в файле в формате .jpg его было предложено скачать как passport.zip.
Необходимо, чтобы при скачиваний файлов им было подставлено корректное расширение файла.
|
process
|
раздел документи предлагает загрузку файлов с некорректными расширениями при загрузке документов через никаким образом не контролируется content type например при скачивании моего паспорта в файле в формате jpg его было предложено скачать как passport zip необходимо чтобы при скачиваний файлов им было подставлено корректное расширение файла
| 1
|
3,930
| 6,848,559,297
|
IssuesEvent
|
2017-11-13 18:56:35
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
opened
|
TODO update attribute annotation and name when updating to a new ng-dynamic-forms version
|
dev process refactoring
|
This TODO item:
https://github.com/syndesisio/syndesis-ui/blob/master/src/app/common/ui-patternfly/syndesis-form-control.component.ts#L52-L55
|
1.0
|
TODO update attribute annotation and name when updating to a new ng-dynamic-forms version - This TODO item:
https://github.com/syndesisio/syndesis-ui/blob/master/src/app/common/ui-patternfly/syndesis-form-control.component.ts#L52-L55
|
process
|
todo update attribute annotation and name when updating to a new ng dynamic forms version this todo item
| 1
|
15,756
| 19,911,811,191
|
IssuesEvent
|
2022-01-25 17:55:37
|
input-output-hk/high-assurance-legacy
|
https://api.github.com/repos/input-output-hk/high-assurance-legacy
|
closed
|
Modify the names of introduction rules of `weak_transition`
|
language: isabelle topic: process calculus type: improvement
|
Our goal is to improve the names of the introduction rules of `weak_transition`:
- We want to keep `strong_transition`, because it’s a good name.
- We want to replace `silent_transition` by `empty_transition`, since also transitions constructed using the other introduction rules can be silent.
- We want to replace `composed_transition` by `compound_transition`.
|
1.0
|
Modify the names of introduction rules of `weak_transition` - Our goal is to improve the names of the introduction rules of `weak_transition`:
- We want to keep `strong_transition`, because it’s a good name.
- We want to replace `silent_transition` by `empty_transition`, since also transitions constructed using the other introduction rules can be silent.
- We want to replace `composed_transition` by `compound_transition`.
|
process
|
modify the names of introduction rules of weak transition our goal is to improve the names of the introduction rules of weak transition we want to keep strong transition because it’s a good name we want to replace silent transition by empty transition since also transitions constructed using the other introduction rules can be silent we want to replace composed transition by compound transition
| 1
|
19,650
| 26,009,130,010
|
IssuesEvent
|
2022-12-20 22:49:26
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Gradle workflow fails to upload artifact
|
bug process
|
### Description
The gradle workflow failed during artifact upload when triggered by a tag. It looks to be caused by concurrent upload of the same artifact file from the v1/v2 jobs of the same module.
### Steps to reproduce
https://github.com/hashgraph/hedera-mirror-node/actions/runs/3744125491/jobs/6357133635
### Additional context
_No response_
### Hedera network
other
### Version
v0.71.0-SNAPSHOT
### Operating system
None
|
1.0
|
Gradle workflow fails to upload artifact - ### Description
The gradle workflow failed during artifact upload when triggered by a tag. It looks to be caused by concurrent upload of the same artifact file from the v1/v2 jobs of the same module.
### Steps to reproduce
https://github.com/hashgraph/hedera-mirror-node/actions/runs/3744125491/jobs/6357133635
### Additional context
_No response_
### Hedera network
other
### Version
v0.71.0-SNAPSHOT
### Operating system
None
|
process
|
gradle workflow fails to upload artifact description the gradle workflow failed during artifact upload when triggered by a tag it looks to be caused by concurrent upload of the same artifact file from the jobs of the same module steps to reproduce additional context no response hedera network other version snapshot operating system none
| 1
|
12,706
| 15,079,625,055
|
IssuesEvent
|
2021-02-05 10:25:10
|
peopledoc/procrastinate
|
https://api.github.com/repos/peopledoc/procrastinate
|
closed
|
PytestUnraisableExceptionWarning exception on tests/unit/test_connector.py
|
Issue contains: Some Python Issue type: Process
|
We currently get a `PytestUnraisableExceptionWarning` exception when running the `tests/unit/test_connector.py::test_missing_app_async` test:
```
tests/unit/test_connector.py::test_missing_app_async[listen_notify-kwargs5]
/home/elemoine/.virtualenvs/procrastinate/lib/python3.9/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning: Exception ignored in: <function BaseConnector.__de
l__ at 0x7fd340c8a0d0>
Traceback (most recent call last):
File "/home/elemoine/src/procrastinate/procrastinate/connector.py", line 56, in __del__
self.close()
File "/home/elemoine/src/procrastinate/procrastinate/connector.py", line 20, in close
raise NotImplementedError
NotImplementedError
warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
```
This is because `BaseConnector.__del__` calls `close`, which raises an exception.
|
1.0
|
PytestUnraisableExceptionWarning exception on tests/unit/test_connector.py - We currently get a `PytestUnraisableExceptionWarning` exception when running the `tests/unit/test_connector.py::test_missing_app_async` test:
```
tests/unit/test_connector.py::test_missing_app_async[listen_notify-kwargs5]
/home/elemoine/.virtualenvs/procrastinate/lib/python3.9/site-packages/_pytest/unraisableexception.py:78: PytestUnraisableExceptionWarning: Exception ignored in: <function BaseConnector.__de
l__ at 0x7fd340c8a0d0>
Traceback (most recent call last):
File "/home/elemoine/src/procrastinate/procrastinate/connector.py", line 56, in __del__
self.close()
File "/home/elemoine/src/procrastinate/procrastinate/connector.py", line 20, in close
raise NotImplementedError
NotImplementedError
warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
-- Docs: https://docs.pytest.org/en/stable/warnings.html
```
This is because `BaseConnector.__del__` calls `close`, which raises an exception.
|
process
|
pytestunraisableexceptionwarning exception on tests unit test connector py we currently get a pytestunraisableexceptionwarning exception when running the tests unit test connector py test missing app async test tests unit test connector py test missing app async home elemoine virtualenvs procrastinate lib site packages pytest unraisableexception py pytestunraisableexceptionwarning exception ignored in function baseconnector de l at traceback most recent call last file home elemoine src procrastinate procrastinate connector py line in del self close file home elemoine src procrastinate procrastinate connector py line in close raise notimplementederror notimplementederror warnings warn pytest pytestunraisableexceptionwarning msg docs this is because baseconnector del calls close which raises an exception
| 1
|
8,435
| 11,597,596,737
|
IssuesEvent
|
2020-02-24 21:13:23
|
gkiar/reproreading
|
https://api.github.com/repos/gkiar/reproreading
|
opened
|
Paper: A New Pathology in the Simulation of Chaotic Dynamical Systems on Digital Computers - Boghosian - 2019 - Advanced Theory and Simulations - Wiley Online Library
|
processing stability stats
|
URL: [https://onlinelibrary.wiley.com/doi/full/10.1002/adts.201900125](https://onlinelibrary.wiley.com/doi/full/10.1002/adts.201900125)
|
1.0
|
Paper: A New Pathology in the Simulation of Chaotic Dynamical Systems on Digital Computers - Boghosian - 2019 - Advanced Theory and Simulations - Wiley Online Library - URL: [https://onlinelibrary.wiley.com/doi/full/10.1002/adts.201900125](https://onlinelibrary.wiley.com/doi/full/10.1002/adts.201900125)
|
process
|
paper a new pathology in the simulation of chaotic dynamical systems on digital computers boghosian advanced theory and simulations wiley online library url
| 1
|
366,826
| 10,827,562,241
|
IssuesEvent
|
2019-11-10 10:14:13
|
kubernetes/kubeadm
|
https://api.github.com/repos/kubernetes/kubeadm
|
opened
|
kinder: stop caching kubeadmVersion
|
area/kinder good first issue help wanted priority/backlog
|
In kinder, we are caching some info at node level, but this created some problems when those info changes during the node lifecycle, e.g. due to upgrade.
So we want to make sure to remove caching of all the information that might change.
This issue is about removing caching for the `kubeadmVersion`, and this requires to:
- Remove the variable `kubeadmVersion ` from [here](https://github.com/kubernetes/kubeadm/blob/9d3952249e85df8c198fccae07d427f6e99dc710/kinder/pkg/cluster/status/node.go#L49)
- Change this [func](https://github.com/kubernetes/kubeadm/blob/9d3952249e85df8c198fccae07d427f6e99dc710/kinder/pkg/cluster/status/node.go#L190) to return the value read from the container
For your reference, you can look at [this commit](https://github.com/kubernetes/kubeadm/pull/1897/commits/1cb3c3e27d9dbbd5191e6cd6b01c636c538cee7f) that removed caching for `kubernetesVersion`
|
1.0
|
kinder: stop caching kubeadmVersion - In kinder, we are caching some info at node level, but this created some problems when those info changes during the node lifecycle, e.g. due to upgrade.
So we want to make sure to remove caching of all the information that might change.
This issue is about removing caching for the `kubeadmVersion`, and this requires to:
- Remove the variable `kubeadmVersion ` from [here](https://github.com/kubernetes/kubeadm/blob/9d3952249e85df8c198fccae07d427f6e99dc710/kinder/pkg/cluster/status/node.go#L49)
- Change this [func](https://github.com/kubernetes/kubeadm/blob/9d3952249e85df8c198fccae07d427f6e99dc710/kinder/pkg/cluster/status/node.go#L190) to return the value read from the container
For your reference, you can look at [this commit](https://github.com/kubernetes/kubeadm/pull/1897/commits/1cb3c3e27d9dbbd5191e6cd6b01c636c538cee7f) that removed caching for `kubernetesVersion`
|
non_process
|
kinder stop caching kubeadmversion in kinder we are caching some info at node level but this created some problems when those info changes during the node lifecycle e g due to upgrade so we want to make sure to remove caching of all the information that might change this issue is about removing caching for the kubeadmversion and this requires to remove the variable kubeadmversion from change this to return the value read from the container for your reference you can look at that removed caching for kubernetesversion
| 0
|
592,759
| 17,929,412,701
|
IssuesEvent
|
2021-09-10 07:08:56
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
opened
|
Link to AMO stats in review page
|
component: reviewer tools component: statistics priority: p3
|
Once https://github.com/mozilla/addons-server/issues/17880 is fixed, we'll have
fully functional stats pages on AMO for deleted add-ons, which might be useful
to reviewers with privileged access.
Let's add a link to the review detail page to easily access these stats pages on
AMO: this is needed because, for deleted add-on, we need to use the add-on ID
instead of slug in all the URLs.
|
1.0
|
Link to AMO stats in review page - Once https://github.com/mozilla/addons-server/issues/17880 is fixed, we'll have
fully functional stats pages on AMO for deleted add-ons, which might be useful
to reviewers with privileged access.
Let's add a link to the review detail page to easily access these stats pages on
AMO: this is needed because, for deleted add-on, we need to use the add-on ID
instead of slug in all the URLs.
|
non_process
|
link to amo stats in review page once is fixed we ll have fully functional stats pages on amo for deleted add ons which might be useful to reviewers with privileged access let s add a link to the review detail page to easily access these stats pages on amo this is needed because for deleted add on we need to use the add on id instead of slug in all the urls
| 0
|
382,328
| 11,303,991,719
|
IssuesEvent
|
2020-01-17 21:36:27
|
okta/okta-oidc-js
|
https://api.github.com/repos/okta/okta-oidc-js
|
closed
|
[okta-react] Resource not found (Session) error kills app with no redirect
|
awaiting-response bug priority-high
|
<!--
Please help us process GitHub Issues faster by providing the following information.
Note: If you have a question, please post it on the Okta Developer Forum (https://devforum.okta.com) instead. Issues in this repository are reserved for bug reports and feature requests.
-->
## I'm submitting this issue for the package(s):
- [ ] jwt-verifier
- [ ] okta-angular
- [ ] oidc-middleware
- [x] okta-react
- [ ] okta-react-native
- [ ] okta-vue
## I'm submitting a:
- [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
- [ ] Feature request
- [ ] Other (Describe below)
## Current behavior
I'm using the @okta/okta-react package and sometimes we randomly have an error that will completely kill the app. Tried looking through existing issues with no success. I'm including the error below as I'm not entirely sure if the issue could just be in our setup:
```
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { BrowserRouter as Router } from 'react-router-dom';
import { Security } from '@okta/okta-react';
import store from './store';
import { okta } from './config';
import App from './components/App';
import * as serviceWorker from './serviceWorker';
ReactDOM.render(
<Provider store={store}>
<Router>
<Security
issuer={okta.issuerId}
client_id={okta.clientId}
redirect_uri={okta.redirectUrl}
onAuthRequired={({ history }: any) => {
history.push('/login');
}}
>
<App />
</Security>
</Router>
</Provider>,
document.getElementById('root')
);
```
Also, here's the error that we're getting at random intervals that terminates the app:
```
Unhandled Rejection (AuthApiError): Not found: Resource not found: me (Session)./node_modules/@okta/okta-auth-js/lib/errors/AuthApiError.js
node_modules/@okta/okta-auth-js/lib/errors/AuthApiError.js:26
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/http.js
node_modules/@okta/okta-auth-js/lib/http.js:19
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/tx.js
node_modules/@okta/okta-auth-js/lib/tx.js:15
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/builderUtil.js
node_modules/@okta/okta-auth-js/lib/builderUtil.js:14
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/browser/browser.js
node_modules/@okta/okta-auth-js/lib/browser/browser.js:20
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/reqwest/index.js
node_modules/@okta/okta-auth-js/reqwest/index.js:17
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/browser/browserIndex.js
node_modules/@okta/okta-auth-js/lib/browser/browserIndex.js:14
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/Auth.js
node_modules/@okta/okta-react/dist/Auth.js:41
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/Security.js
node_modules/@okta/okta-react/dist/Security.js:37
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/index.js
node_modules/@okta/okta-react/dist/index.js:8
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
Module../src/index.tsx
http://localhost:8000/static/js/main.chunk.js:1333:74
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
0
http://localhost:8000/static/js/main.chunk.js:3444:18
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
checkDeferredModules
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:45
42 | }
43 | if(fulfilled) {
44 | deferredModules.splice(i--, 1);
> 45 | result = __webpack_require__(__webpack_require__.s = deferredModule[0]);
| ^
46 | }
47 | }
48 | return result;
View compiled
Array.webpackJsonpCallback [as push]
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:32
29 | deferredModules.push.apply(deferredModules, executeModules || []);
30 |
31 | // run deferred modules when all chunks ready
> 32 | return checkDeferredModules();
| ^
33 | };
34 | function checkDeferredModules() {
35 | var result;
View compiled
(anonymous function)
http://localhost:8000/static/js/main.chunk.js:1:57
```
Our only current resolution requires us to clear the local storage manually and any cache that we have which is not ideal.
## Expected behavior
When the session expires, it should redirect the app to the URL supplied to the `onAuthRequired` prop of the `<Security/>` component.
## Minimal reproduction of the problem with instructions
We use the login page example provided from https://github.com/okta/okta-signin-widget and https://developer.okta.com/code/react/okta_react_sign-in_widget/. Not sure what else would be needed to document the issue.
## Extra information about the use case/user story you are trying to implement
N/A
## Environment
- Package Version: @okta/okta-react@^1.2.0, @okta/okta-signin-widget@^2.18.0
- Browser: Chrome 73.0.3683.86
- OS: macOS 10.14.4
- Node version (`node -v`): 10.15.3
- Other: create-react-app with react@16.8.6, react-dom@16.8.6, and react-scripts@2.1.8
|
1.0
|
[okta-react] Resource not found (Session) error kills app with no redirect - <!--
Please help us process GitHub Issues faster by providing the following information.
Note: If you have a question, please post it on the Okta Developer Forum (https://devforum.okta.com) instead. Issues in this repository are reserved for bug reports and feature requests.
-->
## I'm submitting this issue for the package(s):
- [ ] jwt-verifier
- [ ] okta-angular
- [ ] oidc-middleware
- [x] okta-react
- [ ] okta-react-native
- [ ] okta-vue
## I'm submitting a:
- [x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
- [ ] Feature request
- [ ] Other (Describe below)
## Current behavior
I'm using the @okta/okta-react package and sometimes we randomly have an error that will completely kill the app. Tried looking through existing issues with no success. I'm including the error below as I'm not entirely sure if the issue could just be in our setup:
```
import React from 'react';
import ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { BrowserRouter as Router } from 'react-router-dom';
import { Security } from '@okta/okta-react';
import store from './store';
import { okta } from './config';
import App from './components/App';
import * as serviceWorker from './serviceWorker';
ReactDOM.render(
<Provider store={store}>
<Router>
<Security
issuer={okta.issuerId}
client_id={okta.clientId}
redirect_uri={okta.redirectUrl}
onAuthRequired={({ history }: any) => {
history.push('/login');
}}
>
<App />
</Security>
</Router>
</Provider>,
document.getElementById('root')
);
```
Also, here's the error that we're getting at random intervals that terminates the app:
```
Unhandled Rejection (AuthApiError): Not found: Resource not found: me (Session)./node_modules/@okta/okta-auth-js/lib/errors/AuthApiError.js
node_modules/@okta/okta-auth-js/lib/errors/AuthApiError.js:26
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/http.js
node_modules/@okta/okta-auth-js/lib/http.js:19
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/tx.js
node_modules/@okta/okta-auth-js/lib/tx.js:15
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/builderUtil.js
node_modules/@okta/okta-auth-js/lib/builderUtil.js:14
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/browser/browser.js
node_modules/@okta/okta-auth-js/lib/browser/browser.js:20
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/reqwest/index.js
node_modules/@okta/okta-auth-js/reqwest/index.js:17
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-auth-js/lib/browser/browserIndex.js
node_modules/@okta/okta-auth-js/lib/browser/browserIndex.js:14
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/Auth.js
node_modules/@okta/okta-react/dist/Auth.js:41
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/Security.js
node_modules/@okta/okta-react/dist/Security.js:37
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
./node_modules/@okta/okta-react/dist/index.js
node_modules/@okta/okta-react/dist/index.js:8
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
Module../src/index.tsx
http://localhost:8000/static/js/main.chunk.js:1333:74
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
fn
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:149
146 | );
147 | hotCurrentParents = [];
148 | }
> 149 | return __webpack_require__(request);
| ^
150 | };
151 | var ObjectFactory = function ObjectFactory(name) {
152 | return {
View compiled
0
http://localhost:8000/static/js/main.chunk.js:3444:18
__webpack_require__
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:786
783 | };
784 |
785 | // Execute the module function
> 786 | modules[moduleId].call(module.exports, module, module.exports, hotCreateRequire(moduleId));
| ^
787 |
788 | // Flag the module as loaded
789 | module.l = true;
View compiled
checkDeferredModules
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:45
42 | }
43 | if(fulfilled) {
44 | deferredModules.splice(i--, 1);
> 45 | result = __webpack_require__(__webpack_require__.s = deferredModule[0]);
| ^
46 | }
47 | }
48 | return result;
View compiled
Array.webpackJsonpCallback [as push]
/Users/dmoore/Development/ags/frontend/webpack/bootstrap:32
29 | deferredModules.push.apply(deferredModules, executeModules || []);
30 |
31 | // run deferred modules when all chunks ready
> 32 | return checkDeferredModules();
| ^
33 | };
34 | function checkDeferredModules() {
35 | var result;
View compiled
(anonymous function)
http://localhost:8000/static/js/main.chunk.js:1:57
```
Our only current resolution requires us to clear the local storage manually and any cache that we have which is not ideal.
## Expected behavior
When the session expires, it should redirect the app to the URL supplied to the `onAuthRequired` prop of the `<Security/>` component.
## Minimal reproduction of the problem with instructions
We use the login page example provided from https://github.com/okta/okta-signin-widget and https://developer.okta.com/code/react/okta_react_sign-in_widget/. Not sure what else would be needed to document the issue.
## Extra information about the use case/user story you are trying to implement
N/A
## Environment
- Package Version: @okta/okta-react@^1.2.0, @okta/okta-signin-widget@^2.18.0
- Browser: Chrome 73.0.3683.86
- OS: macOS 10.14.4
- Node version (`node -v`): 10.15.3
- Other: create-react-app with react@16.8.6, react-dom@16.8.6, and react-scripts@2.1.8
|
non_process
|
resource not found session error kills app with no redirect please help us process github issues faster by providing the following information note if you have a question please post it on the okta developer forum instead issues in this repository are reserved for bug reports and feature requests i m submitting this issue for the package s jwt verifier okta angular oidc middleware okta react okta react native okta vue i m submitting a bug report feature request other describe below current behavior i m using the okta okta react package and sometimes we randomly have an error that will completely kill the app tried looking through existing issues with no success i m including the error below as i m not entirely sure if the issue could just be in our setup import react from react import reactdom from react dom import provider from react redux import browserrouter as router from react router dom import security from okta okta react import store from store import okta from config import app from components app import as serviceworker from serviceworker reactdom render security issuer okta issuerid client id okta clientid redirect uri okta redirecturl onauthrequired history any history push login document getelementbyid root also here s the error that we re getting at random intervals that terminates the app unhandled rejection authapierror not found resource not found me session node modules okta okta auth js lib errors authapierror js node modules okta okta auth js lib errors authapierror js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js lib http js node modules okta okta auth js lib 
http js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js lib tx js node modules okta okta auth js lib tx js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js lib builderutil js node modules okta okta auth js lib builderutil js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js lib browser browser js node modules okta okta auth js lib browser browser js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js reqwest index js node modules okta okta auth js 
reqwest index js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta auth js lib browser browserindex js node modules okta okta auth js lib browser browserindex js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta react dist auth js node modules okta okta react dist auth js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta react dist security js node modules okta okta react dist security js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled node modules okta okta react dist index js node modules okta okta 
react dist index js webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled module src index tsx webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled fn users dmoore development ags frontend webpack bootstrap hotcurrentparents return webpack require request var objectfactory function objectfactory name return view compiled webpack require users dmoore development ags frontend webpack bootstrap execute the module function modules call module exports module module exports hotcreaterequire moduleid flag the module as loaded module l true view compiled checkdeferredmodules users dmoore development ags frontend webpack bootstrap if fulfilled deferredmodules splice i result webpack require webpack require s deferredmodule return result view compiled array webpackjsonpcallback users dmoore development ags frontend webpack bootstrap deferredmodules push apply deferredmodules executemodules run deferred modules when all chunks ready return checkdeferredmodules function checkdeferredmodules var result view compiled anonymous function our only current resolution requires us to clear the local storage manually and any cache that we have which is not ideal expected behavior when the session expires it should redirect the app to the url supplied to the onauthrequired prop of the component minimal reproduction of the problem with instructions we use the login page example provided from and not sure what else would be needed to document the issue extra information 
about the use case user story you are trying to implement n a environment package version okta okta react okta okta signin widget browser chrome os macos node version node v other create react app with react react dom and react scripts
| 0
|
100,360
| 21,302,139,709
|
IssuesEvent
|
2022-04-15 05:40:40
|
rocky/python-decompile3
|
https://api.github.com/repos/rocky/python-decompile3
|
closed
|
I get an the end a error code named"Parse error at or near `JUMP_LOOP' instruction at offset 200"
|
invalid bytecode
|
my python version is 3.8 and I got the pyc decompiled from a exe with python-to-exe than I added the magic number.
here is another info may be needed:
OS : win 10
Python 3.8.0
decompyle3, version 3.9.0a1
Python bytecode 3.8 (3413)
Source code size mod 2**32
issue : I get an the end a error code named"Parse error at or near `JUMP_LOOP' instruction at offset 200".
The first picture is when it starts

the second is the error

the 3 thing is the File I try to get the source of
https://github.com/leejshades/ActiveIQHealthManager/blob/master/PYZ-00.pyz_extracted/OntapHealthCheckTool.pyc
|
1.0
|
I get an the end a error code named"Parse error at or near `JUMP_LOOP' instruction at offset 200" - my python version is 3.8 and I got the pyc decompiled from a exe with python-to-exe than I added the magic number.
here is another info may be needed:
OS : win 10
Python 3.8.0
decompyle3, version 3.9.0a1
Python bytecode 3.8 (3413)
Source code size mod 2**32
issue : I get an the end a error code named"Parse error at or near `JUMP_LOOP' instruction at offset 200".
The first picture is when it starts

the second is the error

the 3 thing is the File I try to get the source of
https://github.com/leejshades/ActiveIQHealthManager/blob/master/PYZ-00.pyz_extracted/OntapHealthCheckTool.pyc
|
non_process
|
i get an the end a error code named parse error at or near jump loop instruction at offset my python version is and i got the pyc decompiled from a exe with python to exe than i added the magic number here is another info may be needed os win python version python bytecode source code size mod issue i get an the end a error code named parse error at or near jump loop instruction at offset the first picture is when it starts the second is the error the thing is the file i try to get the source of
| 0
|
817,040
| 30,623,048,287
|
IssuesEvent
|
2023-07-24 09:33:56
|
yukieiji/ExtremeRoles
|
https://api.github.com/repos/yukieiji/ExtremeRoles
|
opened
|
ExtremeSkinsのハット/バイザー/ネームプレート再対応
|
優先度:高/Priority:High バグ/Bug 機能拡張/Enhancement ExtremeSkins
|
- ExSのハット/バイザー/ネームプレートはAmongUs v2023.06.14から不具合があるためパージされている状態になっている
- それを元に戻す
- Twitterの[アンケート](https://twitter.com/yukieiji/status/1681557501840596992)的にすべての対応が待たれている
|
1.0
|
ExtremeSkinsのハット/バイザー/ネームプレート再対応 - - ExSのハット/バイザー/ネームプレートはAmongUs v2023.06.14から不具合があるためパージされている状態になっている
- それを元に戻す
- Twitterの[アンケート](https://twitter.com/yukieiji/status/1681557501840596992)的にすべての対応が待たれている
|
non_process
|
extremeskinsのハット バイザー ネームプレート再対応 exsのハット バイザー ネームプレートはamongus それを元に戻す twitterの
| 0
|
Unnamed: 0: 6,318
id: 9,334,034,897
type: IssuesEvent
created_at: 2019-03-28 15:32:07
repo: raxod502/straight.el
repo_url: https://api.github.com/repos/raxod502/straight.el
action: closed
title: Creating *straight-process* buffer when run emacs
labels: process-buffer support
body:
Hello,
Thank you for straight.el! This is great package. But one question. When I start Emacs I see one more buffers with name `*straight-process*`, which contains these lines:
```
$ cd /Users/d.evsyukov/
$ find /dev/null -newermt 2018-01-01\ 12\:00\:00
/dev/null
[Return code: 0]
```
How I can close this buffer automatically? Or why this buffer was created?
index: 1.0
text_combine:
Creating *straight-process* buffer when run emacs - Hello,
Thank you for straight.el! This is great package. But one question. When I start Emacs I see one more buffers with name `*straight-process*`, which contains these lines:
```
$ cd /Users/d.evsyukov/
$ find /dev/null -newermt 2018-01-01\ 12\:00\:00
/dev/null
[Return code: 0]
```
How I can close this buffer automatically? Or why this buffer was created?
label: process
text:
creating straight process buffer when run emacs hello thank you for straight el this is great package but one question when i start emacs i see one more buffers with name straight process which contains these lines cd users d evsyukov find dev null newermt dev null how i can close this buffer automatically or why this buffer was created
binary_label: 1

---
Unnamed: 0: 72,057
id: 7,276,637,010
type: IssuesEvent
created_at: 2018-02-21 16:55:22
repo: pvlib/pvlib-python
repo_url: https://api.github.com/repos/pvlib/pvlib-python
action: closed
title: update minimum numpy to 1.10.1
labels: Release testing
body:
I'd like to upgrade the minimum supported numpy version to 1.10.1. This numpy version was released in Oct 2015.
Numpy 1.10 added two features that are useful to us:
1. nan support in ``assert_allclose``
2. scalar support in ``np.digitize``
We've hacked around 1. in the test suite for awhile now, but it would be great to remove the hacked code.
#429 proposes using ``np.digitize`` to replace manual binning in the ``perez`` function. I could hack around this too, but I'd rather bump the numpy spec.
Any objections to bumping the numpy spec?
index: 1.0
text_combine:
update minimum numpy to 1.10.1 - I'd like to upgrade the minimum supported numpy version to 1.10.1. This numpy version was released in Oct 2015.
Numpy 1.10 added two features that are useful to us:
1. nan support in ``assert_allclose``
2. scalar support in ``np.digitize``
We've hacked around 1. in the test suite for awhile now, but it would be great to remove the hacked code.
#429 proposes using ``np.digitize`` to replace manual binning in the ``perez`` function. I could hack around this too, but I'd rather bump the numpy spec.
Any objections to bumping the numpy spec?
label: non_process
text:
update minimum numpy to i d like to upgrade the minimum supported numpy version to this numpy version was released in oct numpy added two features that are useful to us nan support in assert allclose scalar support in np digitize we ve hacked around in the test suite for awhile now but it would be great to remove the hacked code proposes using np digitize to replace manual binning in the perez function i could hack around this too but i d rather bump the numpy spec any objections to bumping the numpy spec
binary_label: 0

---
Unnamed: 0: 5,118
id: 3,900,440,306
type: IssuesEvent
created_at: 2016-04-18 06:02:27
repo: lionheart/openradar-mirror
repo_url: https://api.github.com/repos/lionheart/openradar-mirror
action: opened
title: 12364220: iOS 6 3D-rotation causes layers to render without respect for view hierarchy
labels: classification:ui/usability reproducible:100% status:open
body:
#### Description
Summary:
After applying a CATransform3D to a layer to rotate its view, other views render in incorrect hierarchical order: a subview A that is hierarchically below a subview B renders on top of A. Bug occurs when run on iOS 6 only.
Steps to Reproduce:
Please see the example project, which was built specifically to isolate this bug. Run it and follow the instructions. You will immediately see the bug.
The sample project causes the bug to occur by the following coding steps:
1. Create a partially transparent view and add it to the main view (ex self.bunny)
2. Create a "hidden" view, which will only be shown after the animation is complete. This view will be "revealed" by the rotation animation. Make the view hidden. Add it to the main view. (ex self.hiddenView)
3. Create the view that will rotate. Add it to the main view (ex self.rotationView)
4. Create a CATransform3D with perspective, which rotates on the y-axix.
5. Perform the animation: unhide the hidden view from 2, and animate the view from 3 using the CATransform3D from 4.
6. Run the code on an iOS 6 device or simulator (I'm using iOS Simulator 6.0 (358.4)).
Expected Results:
The hidden view from 2 should be fully visible, with nothing else rendered on top of it.
This is because the rotation view rotated away from the hidden view, and the hidden view is hierarchically on top of the transparent view from 1,
Actual Results:
The hidden view from 2 is partially visible, but it's obscured by the partially transparent view from 1.
Regression:
This works fine when run on iOS 5 devices or simulator: this behaviour was introduced in iOS 6.
Notes:
1. There is a work-around: if you set zPositions on the incorrectly rendered layers, in increasing order of height in the view hierarchy, rendering is correct.
2. If you were to add a control to the hidden view (such as a button), you would see that touch events are correctly handled: the hidden view receives touch events, even though the transparent view is rendered on top.
3. Attached is a simple sample project showing the bug, and showing the work-around. Try running it on iOS 5 to see correct behaviour. Also attached is screenshots showing the view before and after animating.
-
Product Version: iOS 6 Simulator and Device
Created: 2012-09-25T01:50:21.083999
Originated: 2012-09-24T21:43:00
Open Radar Link: http://www.openradar.me/12364220
index: True
text_combine:
12364220: iOS 6 3D-rotation causes layers to render without respect for view hierarchy - #### Description
Summary:
After applying a CATransform3D to a layer to rotate its view, other views render in incorrect hierarchical order: a subview A that is hierarchically below a subview B renders on top of A. Bug occurs when run on iOS 6 only.
Steps to Reproduce:
Please see the example project, which was built specifically to isolate this bug. Run it and follow the instructions. You will immediately see the bug.
The sample project causes the bug to occur by the following coding steps:
1. Create a partially transparent view and add it to the main view (ex self.bunny)
2. Create a "hidden" view, which will only be shown after the animation is complete. This view will be "revealed" by the rotation animation. Make the view hidden. Add it to the main view. (ex self.hiddenView)
3. Create the view that will rotate. Add it to the main view (ex self.rotationView)
4. Create a CATransform3D with perspective, which rotates on the y-axix.
5. Perform the animation: unhide the hidden view from 2, and animate the view from 3 using the CATransform3D from 4.
6. Run the code on an iOS 6 device or simulator (I'm using iOS Simulator 6.0 (358.4)).
Expected Results:
The hidden view from 2 should be fully visible, with nothing else rendered on top of it.
This is because the rotation view rotated away from the hidden view, and the hidden view is hierarchically on top of the transparent view from 1,
Actual Results:
The hidden view from 2 is partially visible, but it's obscured by the partially transparent view from 1.
Regression:
This works fine when run on iOS 5 devices or simulator: this behaviour was introduced in iOS 6.
Notes:
1. There is a work-around: if you set zPositions on the incorrectly rendered layers, in increasing order of height in the view hierarchy, rendering is correct.
2. If you were to add a control to the hidden view (such as a button), you would see that touch events are correctly handled: the hidden view receives touch events, even though the transparent view is rendered on top.
3. Attached is a simple sample project showing the bug, and showing the work-around. Try running it on iOS 5 to see correct behaviour. Also attached is screenshots showing the view before and after animating.
-
Product Version: iOS 6 Simulator and Device
Created: 2012-09-25T01:50:21.083999
Originated: 2012-09-24T21:43:00
Open Radar Link: http://www.openradar.me/12364220
label: non_process
text:
ios rotation causes layers to render without respect for view hierarchy description summary after applying a to a layer to rotate its view other views render in incorrect hierarchical order a subview a that is hierarchically below a subview b renders on top of a bug occurs when run on ios only steps to reproduce please see the example project which was built specifically to isolate this bug run it and follow the instructions you will immediately see the bug the sample project causes the bug to occur by the following coding steps create a partially transparent view and add it to the main view ex self bunny create a hidden view which will only be shown after the animation is complete this view will be revealed by the rotation animation make the view hidden add it to the main view ex self hiddenview create the view that will rotate add it to the main view ex self rotationview create a with perspective which rotates on the y axix perform the animation unhide the hidden view from and animate the view from using the from run the code on an ios device or simulator i m using ios simulator expected results the hidden view from should be fully visible with nothing else rendered on top of it this is because the rotation view rotated away from the hidden view and the hidden view is hierarchically on top of the transparent view from actual results the hidden view from is partially visible but it s obscured by the partially transparent view from regression this works fine when run on ios devices or simulator this behaviour was introduced in ios notes there is a work around if you set zpositions on the incorrectly rendered layers in increasing order of height in the view hierarchy rendering is correct if you were to add a control to the hidden view such as a button you would see that touch events are correctly handled the hidden view receives touch events even though the transparent view is rendered on top attached is a simple sample project showing the bug and showing the work around try running it on ios to see correct behaviour also attached is screenshots showing the view before and after animating product version ios simulator and device created originated open radar link
binary_label: 0

---
Unnamed: 0: 206,254
id: 23,372,678,117
type: IssuesEvent
created_at: 2022-08-10 21:33:53
repo: lirantal/daloradius
repo_url: https://api.github.com/repos/lirantal/daloradius
action: closed
title: User Password Login Portal
labels: question security
body:
Why is it necessary to enter the password of the Portal Login of the users? Couldn't you simply compare the password hash and use it for login?
index: True
text_combine:
User Password Login Portal - Why is it necessary to enter the password of the Portal Login of the users? Couldn't you simply compare the password hash and use it for login?
label: non_process
text:
user password login portal why is it necessary to enter the password of the portal login of the users couldn t you simply compare the password hash and use it for login
binary_label: 0

---
Unnamed: 0: 15,655
id: 19,846,846,170
type: IssuesEvent
created_at: 2022-01-21 07:43:19
repo: ooi-data/RS03ASHS-MJ03B-09-BOTPTA304-streamed-botpt_iris_sample
repo_url: https://api.github.com/repos/ooi-data/RS03ASHS-MJ03B-09-BOTPTA304-streamed-botpt_iris_sample
action: opened
title: 🛑 Processing failed: ValueError
labels: process
body:
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:43:19.219538.
## Details
Flow name: `RS03ASHS-MJ03B-09-BOTPTA304-streamed-botpt_iris_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
index: 1.0
text_combine:
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T07:43:19.219538.
## Details
Flow name: `RS03ASHS-MJ03B-09-BOTPTA304-streamed-botpt_iris_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (2777778,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (2777778,)
```
</details>
label: process
text:
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed botpt iris sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape
binary_label: 1

---
Unnamed: 0: 8,377
id: 11,525,729,656
type: IssuesEvent
created_at: 2020-02-15 10:34:36
repo: SE-Garden/tms-webserver
repo_url: https://api.github.com/repos/SE-Garden/tms-webserver
action: opened
title: Dockerイメージ作成対応
labels: kind:アーキ process:検討
body:
## 概要
稼働環境は、コンテナにしたいと考えているため、
DockerImageの作成までをビルドプロセスに組み込みたい。
## ゴール
Gradleのビルド時にDockerImageの作成までを行得るようにする。
パイプラインはその後。
## 成果物
- build.gradle
- CI系設定ファイル?
## 関連Issue
index: 1.0
text_combine:
Dockerイメージ作成対応 - ## 概要
稼働環境は、コンテナにしたいと考えているため、
DockerImageの作成までをビルドプロセスに組み込みたい。
## ゴール
Gradleのビルド時にDockerImageの作成までを行得るようにする。
パイプラインはその後。
## 成果物
- build.gradle
- CI系設定ファイル?
## 関連Issue
label: process
text:
dockerイメージ作成対応 概要 稼働環境は、コンテナにしたいと考えているため、 dockerimageの作成までをビルドプロセスに組み込みたい。 ゴール gradleのビルド時にdockerimageの作成までを行得るようにする。 パイプラインはその後。 成果物 build gradle ci系設定ファイル? 関連issue
binary_label: 1

---
Unnamed: 0: 10,943
id: 13,752,733,291
type: IssuesEvent
created_at: 2020-10-06 14:51:08
repo: threefoldtech/js-sdk
repo_url: https://api.github.com/repos/threefoldtech/js-sdk
action: closed
title: Limit testnet deployments to 2 days.
labels: process_wontfix type_feature
body:
@despiegk requested the max expirary date for deployments on testnet would be 2 days. To make sure ppl don't reserve for weeks / months so other ppl can deploy stuff as well.
index: 1.0
text_combine:
Limit testnet deployments to 2 days. - @despiegk requested the max expirary date for deployments on testnet would be 2 days. To make sure ppl don't reserve for weeks / months so other ppl can deploy stuff as well.
label: process
text:
limit testnet deployments to days despiegk requested the max expirary date for deployments on testnet would be days to make sure ppl don t reserve for weeks months so other ppl can deploy stuff as well
binary_label: 1

---
Unnamed: 0: 58,609
id: 7,165,645,932
type: IssuesEvent
created_at: 2018-01-29 15:03:06
repo: phetsims/masses-and-springs
repo_url: https://api.github.com/repos/phetsims/masses-and-springs
action: closed
title: Dev version for interviews
labels: design:interviews status:ready-for-review
body:
In a slack conversation @ariel-phet made said:
> I would suggest having a decent dev test before interviews, however the QA team is fairly back logged currently. Once you fix that bug, I would say do about an hour of "self QA" (a good habit anyhow, and see if things seem stable (for instance just hammer away on Chrome). If generally everything is functional, smooth, and no obvious bugs, make a dev version, and I will also take a look. Then Amy can interview on something like chrome and know it is in pretty good shape
I will tag any new bugs that I find in this issue until a stable version of MAS is secured.
index: 1.0
text_combine:
Dev version for interviews - In a slack conversation @ariel-phet made said:
> I would suggest having a decent dev test before interviews, however the QA team is fairly back logged currently. Once you fix that bug, I would say do about an hour of "self QA" (a good habit anyhow, and see if things seem stable (for instance just hammer away on Chrome). If generally everything is functional, smooth, and no obvious bugs, make a dev version, and I will also take a look. Then Amy can interview on something like chrome and know it is in pretty good shape
I will tag any new bugs that I find in this issue until a stable version of MAS is secured.
label: non_process
text:
dev version for interviews in a slack conversation ariel phet made said i would suggest having a decent dev test before interviews however the qa team is fairly back logged currently once you fix that bug i would say do about an hour of self qa a good habit anyhow and see if things seem stable for instance just hammer away on chrome if generally everything is functional smooth and no obvious bugs make a dev version and i will also take a look then amy can interview on something like chrome and know it is in pretty good shape i will tag any new bugs that i find in this issue until a stable version of mas is secured
binary_label: 0

---
Unnamed: 0: 56
id: 2,507,943,319
type: IssuesEvent
created_at: 2015-01-12 21:52:41
repo: Whonix/Whonix
repo_url: https://api.github.com/repos/Whonix/Whonix
action: closed
title: Eliminating any external untrusted contact with the Gateway
labels: enhancement security upstream wait
body:
This proposal makes sense only if someday:
-Tor natively supports a secure time source
https://trac.torproject.org/projects/tor/ticket/6894
-Tor has a controlport whitelisting mechanism built-in
https://trac.torproject.org/projects/tor/ticket/8369
Benefits:
If the proposal below is implemented in tandem with the features above, Whonix gateway avoids any untrusted external contact.
>Since you're open to suggestions let me put this out there and see what you think.
-The purpose of whonixcheck is to help the user know if they done a grave misconfiguration and to provide information about the whonix environment in general.
-whonixcheck is not a diagnostic tool to detect any hypothetical misconfiguration made by you in the codebase. (Not implying your skill is any less than excellent, just discussing hypotheticals)
Good security practice dictates that no functionality more than necessary should be enabled or used. And the remaining tools that are needed are hardened to the maximum extent possible. Even if the tools used have a high security assurance, its always best to not use anything in the first place when not needed - the enemy cannot exploit what is not there.
***
With these points in mind I suggest:
-Disabling the all cURL based requests on the gateway and here's why:
1. TBB checks for the gateway are unnecessary as there is no TBB supposed to be on there.
2. While useful in the workstation, because the user might shoot themselves in the foot by configuring it to connect directly to the internet, checking that traffic goes through Tor on the gateway is of little utility as no user caused misconfiguration is relevant in that context.
3. Whonixnews could be checked for and fetched via the Whonix repos, alleviating the need for cURL in this usecase. (Maybe this can be used for the ws version too)
-Even in the event that a trusted pinning system is figured out and deployed, it would not be good to have it on the gateway in case the user mistakenly runs Iceweasel on there.
- Whonixcheck and its curl based functionality is vital and necessary to keep users safe from themselves. Therefore I am not advocating removing this needed functionality in its workstation counterpart.
index: True
text_combine:
Eliminating any external untrusted contact with the Gateway - This proposal makes sense only if someday:
-Tor natively supports a secure time source
https://trac.torproject.org/projects/tor/ticket/6894
-Tor has a controlport whitelisting mechanism built-in
https://trac.torproject.org/projects/tor/ticket/8369
Benefits:
If the proposal below is implemented in tandem with the features above, Whonix gateway avoids any untrusted external contact.
>Since you're open to suggestions let me put this out there and see what you think.
-The purpose of whonixcheck is to help the user know if they done a grave misconfiguration and to provide information about the whonix environment in general.
-whonixcheck is not a diagnostic tool to detect any hypothetical misconfiguration made by you in the codebase. (Not implying your skill is any less than excellent, just discussing hypotheticals)
Good security practice dictates that no functionality more than necessary should be enabled or used. And the remaining tools that are needed are hardened to the maximum extent possible. Even if the tools used have a high security assurance, its always best to not use anything in the first place when not needed - the enemy cannot exploit what is not there.
***
With these points in mind I suggest:
-Disabling the all cURL based requests on the gateway and here's why:
1. TBB checks for the gateway are unnecessary as there is no TBB supposed to be on there.
2. While useful in the workstation, because the user might shoot themselves in the foot by configuring it to connect directly to the internet, checking that traffic goes through Tor on the gateway is of little utility as no user caused misconfiguration is relevant in that context.
3. Whonixnews could be checked for and fetched via the Whonix repos, alleviating the need for cURL in this usecase. (Maybe this can be used for the ws version too)
-Even in the event that a trusted pinning system is figured out and deployed, it would not be good to have it on the gateway in case the user mistakenly runs Iceweasel on there.
- Whonixcheck and its curl based functionality is vital and necessary to keep users safe from themselves. Therefore I am not advocating removing this needed functionality in its workstation counterpart.
label: non_process
text:
eliminating any external untrusted contact with the gateway this proposal makes sense only if someday tor natively supports a secure time source tor has a controlport whitelisting mechanism built in benefits if the proposal below is implemented in tandem with the features above whonix gateway avoids any untrusted external contact since you re open to suggestions let me put this out there and see what you think the purpose of whonixcheck is to help the user know if they done a grave misconfiguration and to provide information about the whonix environment in general whonixcheck is not a diagnostic tool to detect any hypothetical misconfiguration made by you in the codebase not implying your skill is any less than excellent just discussing hypotheticals good security practice dictates that no functionality more than necessary should be enabled or used and the remaining tools that are needed are hardened to the maximum extent possible even if the tools used have a high security assurance its always best to not use anything in the first place when not needed the enemy cannot exploit what is not there with these points in mind i suggest disabling the all curl based requests on the gateway and here s why tbb checks for the gateway are unnecessary as there is no tbb supposed to be on there while useful in the workstation because the user might shoot themselves in the foot by configuring it to connect directly to the internet checking that traffic goes through tor on the gateway is of little utility as no user caused misconfiguration is relevant in that context whonixnews could be checked for and fetched via the whonix repos alleviating the need for curl in this usecase maybe this can be used for the ws version too even in the event that a trusted pinning system is figured out and deployed it would not be good to have it on the gateway in case the user mistakenly runs iceweasel on there whonixcheck and its curl based functionality is vital and necessary to keep users safe from themselves therefore i am not advocating removing this needed functionality in its workstation counterpart
binary_label: 0

---