Dataset preview. Column schema (dtype and observed range or class count):

| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
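A minimal sketch of loading and sanity-checking a dump with this schema, assuming it was exported as CSV (the file name `issues.csv` is hypothetical, as is the export format):

```python
import pandas as pd

# Hypothetical export; the preview above does not state where the data lives.
df = pd.read_csv("issues.csv")

# Checks that follow directly from the schema and the sample rows:
# one event type, two string labels, and a 0/1 binary_label that
# mirrors label == "process" in every row shown here.
print(df["type"].unique())          # expected: ['IssuesEvent']
print(df["label"].value_counts())   # expected: process / non_process
assert df["binary_label"].isin([0, 1]).all()
assert (df["binary_label"] == (df["label"] == "process").astype(int)).all()
```

The sample rows below follow this schema, one labeled record per block.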

**Row 143,329** | id: 5,514,367,528 | type: IssuesEvent | created_at: 2017-03-17 14:58:52
repo: vmware/vic | repo_url: https://api.github.com/repos/vmware/vic | action: closed
title: Random VIC UI test failure in CI
labels: area/infra area/ui priority/high
body:
As a VIC developer, I should not see random VIC UI test failures in CI.
Acceptance criteria: No more random failures.
---
From time to time, I will see random UI test failures. We do not pull CI logs when this happens. Usually, these failures will go away if the build gets restarted.
```
[exec] webpack: Compiled successfully.
[exec] 14 03 2017 21:45:13.355:INFO [karma]: Karma v1.3.0 server started at http://localhost:9876/
[exec] 14 03 2017 21:45:13.374:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
[exec] 14 03 2017 21:45:13.380:INFO [launcher]: Starting browser PhantomJS
[exec] 14 03 2017 21:45:14.574:INFO [PhantomJS 2.1.1 (Linux 0.0.0)]: Connected on socket /#rlluSYdc5uVrlIweAAAA with id 93918158
[exec] PhantomJS 2.1.1 (Linux 0.0.0): Executed 0 of 22 SUCCESS (0 secs / 0 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0) VIC UI Unit Tests should create the main app successfully FAILED
[exec] Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
[exec] PhantomJS 2.1.1 (Linux 0.0.0): Executed 1 of 22 (1 FAILED) (0 secs / 5.477 secs)
[exec] PhantomJS 2.1.1 (Linux 0.0.0) VIC UI Unit Tests should create the main app successfully FAILED
[exec] Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 2 of 22 (1 FAILED) (0 secs / 5.513 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 3 of 22 (1 FAILED) (0 secs / 5.514 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 4 of 22 (1 FAILED) (0 secs / 5.523 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 5 of 22 (1 FAILED) (0 secs / 5.531 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 6 of 22 (1 FAILED) (0 secs / 5.532 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 7 of 22 (1 FAILED) (0 secs / 5.542 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 8 of 22 (1 FAILED) (0 secs / 5.828 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 9 of 22 (1 FAILED) (0 secs / 6.133 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 10 of 22 (1 FAILED) (0 secs / 6.367 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 11 of 22 (1 FAILED) (0 secs / 6.475 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 12 of 22 (1 FAILED) (0 secs / 6.541 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 13 of 22 (1 FAILED) (0 secs / 6.832 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 14 of 22 (1 FAILED) (0 secs / 7.159 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 15 of 22 (1 FAILED) (0 secs / 7.16 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 16 of 22 (1 FAILED) (0 secs / 9.041 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 17 of 22 (1 FAILED) (0 secs / 9.962 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 18 of 22 (1 FAILED) (0 secs / 23.012 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 19 of 22 (1 FAILED) (0 secs / 24.38 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 20 of 22 (1 FAILED) (0 secs / 24.753 secs)
[exec] 14 03 2017 21:45:48.563:WARN [web-server]: 404: /vsphere-client/vic/assets/vic-icons/100x100.png
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 21 of 22 (1 FAILED) (0 secs / 26.742 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 22 of 22 (1 FAILED) (0 secs / 26.803 secs)
[exec] APhantomJS 2.1.1 (Linux 0.0.0): Executed 22 of 22 (1 FAILED) (21.679 secs / 26.803 secs)
[exec] File [/drone/src/github.com/vmware/vic/ui/vic-ui-h5c/vic/src/vic-app/src/app/vm.interface.ts] ignored, nothing could be mapped
```
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: non_process
text:
random vic ui test failure in ci as a vic developer i should not see random vic ui test failures in ci acceptance criteria no more random failures from time to time i will see random ui test failures we do not pull ci logs when this happens usually these failures will go away if the build gets restarted webpack compiled successfully info karma server started at info launching browser phantomjs with unlimited concurrency info starting browser phantomjs info connected on socket with id phantomjs linux executed of success secs secs aphantomjs linux vic ui unit tests should create the main app successfully failed error timeout async callback was not invoked within timeout specified by jasmine default timeout interval phantomjs linux executed of failed secs secs phantomjs linux vic ui unit tests should create the main app successfully failed error timeout async callback was not invoked within timeout specified by jasmine default timeout interval aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs warn vsphere client vic assets vic icons png aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs aphantomjs linux executed of failed secs secs file ignored nothing could be mapped
binary_label: 0
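Comparing `text_combine` with `text` across the rows in this preview, the `text` column looks like a lowercased form of the combined title and body with URLs, digits, and punctuation stripped down to bare word tokens. A minimal sketch of that kind of normalization; this is inferred from the samples, not a documented pipeline:

```python
import re

def clean_text(s: str) -> str:
    # Approximate the `text` column as seen in the sample rows:
    # lowercase, drop URLs, keep only alphabetic word runs.
    s = s.lower()
    s = re.sub(r"https?://\S+", " ", s)  # URLs vanish in the samples
    s = re.sub(r"[^a-z]+", " ", s)       # digits/punctuation become spaces
    return re.sub(r"\s+", " ", s).strip()

print(clean_text("Random VIC UI test failure in CI - As a VIC developer,"
                 " I should not see random VIC UI test failures in CI."))
# -> random vic ui test failure in ci as a vic developer i should not
#    see random vic ui test failures in ci
```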

**Row 3,790** | id: 6,774,612,300 | type: IssuesEvent | created_at: 2017-10-27 11:05:51
repo: decidim/decidim | repo_url: https://api.github.com/repos/decidim/decidim | action: opened
title: Change the "Participate" button so it doesn't imply side effects
labels: component: processes good first issue hacktoberfest target: user-experience
body:
# This is a Feature Proposal
#### :tophat: Description
By looking at how other users navigate the platform, we see that a "Participate" button on participatory processes seems to imply there are side effects.
This discourages some people from entering participatory processes because they're afraid it's an action that maybe can't be undone.
#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***: Meta Decidim
* ***Browser & version***:
* ***Screenshot***:
* ***Error messages***:
* ***URL to reproduce the error***:
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: process
text:
change the participate button so it doesn t imply side effects this is a feature proposal tophat description by looking at how other users navigate on the platform having a participate button on participatory processes seems to imply there s side effects this impacts the usage of some people who don t enter participatory processes because they re afraid it s an action that maybe can t be undone pushpin related issues none clipboard additional data decidim deployment where you found the issue meta decidim browser version screenshot error messages url to reproduce the error
binary_label: 1

**Row 21,946** | id: 30,446,800,146 | type: IssuesEvent | created_at: 2023-07-15 19:28:50
repo: h4sh5/pypi-auto-scanner | repo_url: https://api.github.com/repos/h4sh5/pypi-auto-scanner | action: opened
title: pyutils 0.0.1b10 has 2 GuardDog issues
labels: guarddog typosquatting silent-process-execution
body:
https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```
{
"dependency": "pyutils",
"version": "0.0.1b10",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp_ohtio56/pyutils"
}
}
```
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: process
text:
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils exec utils py pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmp pyutils
binary_label: 1
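The GuardDog output quoted in the row above is plain JSON, so its findings can be pulled out mechanically. A minimal sketch, assuming the JSON block has been saved to a file (the file name is hypothetical; the field names are taken from the record above):

```python
import json

# Hypothetical path: the JSON above, saved out of the issue body.
with open("guarddog_result.json") as f:
    report = json.load(f)

print(report["dependency"], report["version"],
      "-", report["result"]["issues"], "issues")

for rule, finding in report["result"]["results"].items():
    if isinstance(finding, str):
        # e.g. "typosquatting" carries a single message string
        print(f"{rule}: {finding}")
    else:
        # e.g. "silent-process-execution" carries a list of code locations
        for item in finding:
            print(f"{rule} at {item['location']}: {item['message']}")
```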

**Row 302,358** | id: 9,257,298,294 | type: IssuesEvent | created_at: 2019-03-17 04:36:04
repo: publiclab/plots2 | repo_url: https://api.github.com/repos/publiclab/plots2 | action: closed
title: Clicking on Publish Note shows buffering icon
labels: help wanted priority
body:
When I click on the "Publish" link, it shows a buffering icon and does not publish the note. I am using the Mozilla Firefox browser, and this was an issue while publishing my draft Outreachy proposal at this [link](https://publiclab.org/post?n=17360&title=Outreachy%20proposal:%20&tags=software,soc,soc-2019,soc,outreachy,outreachy-2019,outreachy-2019-proposals,response:17359).
Inspector console

index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: non_process
text:
clicking on publish note shows buffering icon when i click on publish on the link for publish it shows a buffering icon and does not publish the note i am using mozilla firefox browser and this was an issue while publishing my draft outreachy proposal on this link inspector console
binary_label: 0

**Row 375** | id: 7,113,135,460 | type: IssuesEvent | created_at: 2018-01-17 19:23:56
repo: dotnet/corefx | repo_url: https://api.github.com/repos/dotnet/corefx | action: closed
title: System.Net.Http.WinHttpResponseStream leading to crash in SafeWinHttpHandle.ReleaseHandle
labels: area-System.Net.Http bug tenet-reliability
body:
We are seeing daily crashes after moving our application from .NET 4.6 to .NET Core 2.0. When the crash occurs, all 64 cores spike to 100% CPU.
Example Call Stack:
#1: SafeWinHttpHandle.ReleaseHandle()
```
00 coreclr!EEPolicy::HandleFatalError
01 coreclr!ProcessCLRException
02 ntdll!RtlpExecuteHandlerForException
03 ntdll!RtlDispatchException
04 ntdll!KiUserExceptionDispatch
05 crypt32!ReleaseContextElement
06 crypt32!CertFreeCertificateContext
07 winhttp!WEBIO_REQUEST::{dtor}
08 winhttp!WEBIO_REQUEST::`scalar deleting destructor'
09 winhttp!HTTP_BASE_OBJECT::Dereference
0a winhttp!HTTP_USER_REQUEST::_SafeDetachSysReq
0b winhttp!HTTP_USER_REQUEST::Shutdown
0c winhttp!HTTP_REQUEST_HANDLE_OBJECT::SafeShutdownUsrReq
0d winhttp!_InternetCloseHandle
0e winhttp!WinHttpCloseHandle
Child SP IP Call Site
00000109e165da28 00007ffaf35a4f86 [InlinedCallFrame: 00000109e165da28] Interop+WinHttp.WinHttpCloseHandle(IntPtr)
00000109e165da28 00007ffa943ab463 [InlinedCallFrame: 00000109e165da28] Interop+WinHttp.WinHttpCloseHandle(IntPtr)
00000109e165dab0 00007ffaf1bb6b08 Interop+WinHttp+SafeWinHttpHandle.ReleaseHandle() [E:\A\_work\774\s\corefx\src\Common\src\Interop\Windows\winhttp\Interop.SafeWinHttpHandle.cs @ 59]
00000109e165dcd0 00007ffaf3482d33 [GCFrame: 00000109e165dcd0]
00000109e165dd08 00007ffaf3482d33 [GCFrame: 00000109e165dd08]
00000109e165de58 00007ffaf3482d33 [HelperMethodFrame_1OBJ: 00000109e165de58] System.Runtime.InteropServices.SafeHandle.InternalDispose()
00000109e165dfc0 00007ffaf1bd1ca7 System.Net.Http.WinHttpResponseStream.Dispose(Boolean) [E:\A\_work\774\s\corefx\src\System.Net.Http.WinHttpHandler\src\System\Net\Http\WinHttpResponseStream.cs @ 282]
00000109e165e000 00007ffaf2e1a72a System.IO.Stream.Close() [E:\A\_work\308\s\src\mscorlib\src\System\IO\Stream.cs @ 263]
00000109e165e030 00007ffaf1bc97d0 System.Net.Http.NoWriteNoSeekStreamContent+c.b__4_0(System.Threading.Tasks.Task, System.Object) [E:\A\_work\774\s\corefx\src\Common\src\System\Net\Http\NoWriteNoSeekStreamContent.cs @ 51]
00000109e165e070 00007ffaf2d871ce System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) [E:\A\_work\308\s\src\mscorlib\shared\System\Threading\ExecutionContext.cs @ 145]
00000109e165e0e0 00007ffaf2e143d6 System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\Task.cs @ 2454]
00000109e165e180 00007ffaf2f78446 System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\ThreadPoolTaskScheduler.cs @ 76]
00000109e165e1d0 00007ffaf2e439b3 System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\TaskScheduler.cs @ 210]
00000109e165e230 00007ffaf2e800df System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\TaskContinuation.cs @ 256]
00000109e165e280 00007ffaf2e155af System.Threading.Tasks.Task.RunContinuations(System.Object) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\Task.cs @ 3263]
00000109e165e370 00007ffaf2e81705 System.Threading.Tasks.Task`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].TrySetResult(System.Threading.Tasks.VoidTaskResult) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\future.cs @ 425]
00000109e165e3b0 00007ffaf2e5dc89 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].SetExistingTaskResult(System.Threading.Tasks.VoidTaskResult) [E:\A\_work\308\s\src\mscorlib\src\System\Runtime\CompilerServices\AsyncMethodBuilder.cs @ 605]
00000109e165e3f0 00007ffaf2e5dc1c System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].SetResult(System.Threading.Tasks.Task`1) [E:\A\_work\308\s\src\mscorlib\src\System\Runtime\CompilerServices\AsyncMethodBuilder.cs @ 646]
00000109e165e420 00007ffaf1bd2532 System.Net.Http.WinHttpResponseStream+d__18.MoveNext() [E:\A\_work\774\s\corefx\src\System.Net.Http.WinHttpHandler\src\System\Net\Http\WinHttpResponseStream.cs @ 163]
00000109e165e4e0 00007ffaf2d871ce System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) [E:\A\_work\308\s\src\mscorlib\shared\System\Threading\ExecutionContext.cs @ 145]
00000109e165e550 00007ffaf2d871ce System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object) [E:\A\_work\308\s\src\mscorlib\shared\System\Threading\ExecutionContext.cs @ 145]
00000109e165e5c0 00007ffaf2e143d6 System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef) [E:\A\_work\308\s\src\mscorlib\src\System\Threading\Tasks\Task.cs @ 2454]
00000109e165e660 00007ffaf2e537f9 System.Threading.ThreadPoolWorkQueue.Dispatch() [E:\A\_work\308\s\src\mscorlib\src\System\Threading\ThreadPool.cs @ 582]
```
#2: SafeWinHttpHandle.ReleaseHandle()
```
Child SP IP Call Site
000000bb26d9c590 00007ffd81220c8a [FaultingExceptionFrame: 000000bb26d9c590]
000000bb26d9ca90 00007ffd6754e6c6 System.Net.Http.WinHttpRequestCallback.RequestCallback(IntPtr, System.Net.Http.WinHttpRequestState, UInt32, IntPtr, UInt32)
000000bb26d9ea70 00007ffd68903190 [FaultingExceptionFrame: 000000bb26d9ea70]
000000bb26d9ef70 00007ffd6754e57e System.Net.Http.WinHttpRequestCallback.RequestCallback(IntPtr, System.Net.Http.WinHttpRequestState, UInt32, IntPtr, UInt32)
000000bb26d9efe0 00007ffd6754e4a7 System.Net.Http.WinHttpRequestCallback.WinHttpCallback(IntPtr, IntPtr, UInt32, IntPtr, UInt32)
000000bb26d9f030 00007ffd099b1332 DomainBoundILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int32, Int64, Int32)
000000bb26d9f2c8 00007ffd68a02e4a [InlinedCallFrame: 000000bb26d9f2c8] Interop+WinHttp.WinHttpCloseHandle(IntPtr)
000000bb26d9f2c8 00007ffd099b34c3 [InlinedCallFrame: 000000bb26d9f2c8] Interop+WinHttp.WinHttpCloseHandle(IntPtr)
000000bb26d9f350 00007ffd67536b08 Interop+WinHttp+SafeWinHttpHandle.ReleaseHandle()
000000bb26d9f4a0 00007ffd68a02d33 [GCFrame: 000000bb26d9f4a0]
000000bb26d9f618 00007ffd68a02d33 [GCFrame: 000000bb26d9f618]
000000bb26d9f6b8 00007ffd68a02d33 [HelperMethodFrame_1OBJ: 000000bb26d9f6b8] System.Runtime.InteropServices.SafeHandle.InternalFinalize()
000000bb26d9f7c0 00007ffd68359b16 System.Runtime.InteropServices.SafeHandle.Finalize()
000000bb26d9fbf0 00007ffd68a02ca6 [DebuggerU2MCatchHandlerFrame: 000000bb26d9fbf0]
```
#3: WinHttpHandler.HandleAsyncException ()
```
Exception object: 000000f740eb92f8
Exception type: System.NullReferenceException
Message: Object reference not set to an instance of an object.
InnerException: <none>
StackTrace (generated):
System.Net.Http.WinHttpHandler.HandleAsyncException(System.Net.Http.WinHttpRequestState, System.Exception)
System_Net_Http!System.Net.Http.WinHttpHandler+<StartRequest>d__105.MoveNext()+
System_Private_CoreLib!System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()System_Private_CoreLib!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
System_Private_CoreLib!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
#4
```
Thread Id: 1273 OS Id: 5be0 Locks: 0
Thread is Alive
Last Exception: (System.ExecutionEngineException) (null)
0000000cbaf6deb8 0000000000000000 InlinedCallFrame
0000000cbaf6deb8 0000000000000000 InlinedCallFrame
0000000cbaf6de90 00007ffbd2ac41b3 DomainBoundILStubClass.IL_STUB_PInvoke(IntPtr)
0000000cbaf6df40 00007ffc31906768 Interop+WinHttp+SafeWinHttpHandle.ReleaseHandle()
0000000cbaf6e160 0000000000000000 GCFrame
0000000cbaf6e198 0000000000000000 GCFrame
0000000cbaf6e2e8 0000000000000000 HelperMethodFrame_1OBJ
0000000cbaf6e450 00007ffc31921767 System.Net.Http.WinHttpResponseStream.Dispose(Boolean)
0000000cbaf6e490 00007ffc2dcd009a System.IO.Stream.Close()
0000000cbaf6e4c0 00007ffc31919310 System.Net.Http.NoWriteNoSeekStreamContent+<>c.<SerializeToStreamAsync>b__4_0(System.Threading.Tasks.Task, System.Object)
0000000cbaf6e500 00007ffc2dc3b3be System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000000cbaf6e570 00007ffc2dcc9d46 System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
0000000cbaf6e610 00007ffc2de2c466 System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
0000000cbaf6e660 00007ffc2dcf7213 System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
0000000cbaf6e6c0 00007ffc2dd33a5f System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
0000000cbaf6e710 00007ffc2dccaf1f System.Threading.Tasks.Task.RunContinuations(System.Object)
0000000cbaf6e800 00007ffc2dd35085 System.Threading.Tasks.Task`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].TrySetResult(System.Threading.Tasks.VoidTaskResult)
0000000cbaf6e840 00007ffc2dd114f9 System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].SetExistingTaskResult(System.Threading.Tasks.VoidTaskResult)
0000000cbaf6e880 00007ffc2dd1148c System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.Threading.Tasks.VoidTaskResult, System.Private.CoreLib]].SetResult(System.Threading.Tasks.Task`1<System.Threading.Tasks.VoidTaskResult>)
0000000cbaf6e8b0 00007ffc31921fc6 System.Net.Http.WinHttpResponseStream+<CopyToAsyncCore>d__18.MoveNext()
0000000cbaf6e970 00007ffc2dc3b3be System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000000cbaf6e9e0 00007ffc2dc3b3be System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
0000000cbaf6ea50 00007ffc2dcc9d46 System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
0000000cbaf6eaf0 00007ffc2dd07069 System.Threading.ThreadPoolWorkQueue.Dispatch()
```
Some Notes on the usage of `HttpClient`:
* Shared instance of `HttpClientHandler` as `static readonly HttpMessageHandler`
* Most often invoked methods:
* `public object Invoke(string method, Type returnType = null, object parameters = null)`
* `public async Task<object> InvokeAsync(string method, Type returnType = null, object parameters = null, CancellationToken cancellationToken = default (CancellationToken))`
* The core functionality is implemented by `async Task<object> GetResponseAsync `
* Instantiates a new `HttpClient` on every call as `new HttpClient(webServerInvoker.Handler, false)`, where `webServerInvoker.Handler` is the shared static instance.
* Awaits the `PostAsync` result; a timed `CancellationToken` is provided.
* Always disposes the `HttpClient` instance at the end of the call.
* Note that the synchronous `public object Invoke` enforces its own timeout while waiting for the `InvokeAsync` task result. It cancels the `CancellationTokenSource` if the timeout is hit, in turn cancelling any pending `HttpClient` operation and immediately disposing of the `HttpClient` instance.
* The timeout is provided by the caller. It ranges from 30 seconds to 5 minutes.
[EDIT] Formatting changes by @karelz
index: True
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: non_process
text:
system net http winhttpresponsestream leading to crash in safewinhttphandle releasehandle we are reporting daily crashes after moving our application from net to net core when the crash occurs all cores are spiked at cpu example call stack safewinhttphandle releasehandle coreclr eepolicy handlefatalerror coreclr processclrexception ntdll rtlpexecutehandlerforexception ntdll rtldispatchexception ntdll kiuserexceptiondispatch releasecontextelement certfreecertificatecontext winhttp webio request dtor winhttp webio request scalar deleting destructor winhttp http base object dereference winhttp http user request safedetachsysreq winhttp http user request shutdown winhttp http request handle object safeshutdownusrreq winhttp internetclosehandle winhttp winhttpclosehandle child sp ip call site interop winhttp winhttpclosehandle intptr interop winhttp winhttpclosehandle intptr interop winhttp safewinhttphandle releasehandle system runtime interopservices safehandle internaldispose system net http winhttpresponsestream dispose boolean system io stream close system net http nowritenoseekstreamcontent c b system threading tasks task system object system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading tasks task executewiththreadlocal system threading tasks task byref system threading tasks threadpooltaskscheduler tryexecutetaskinline system threading tasks task boolean system threading tasks taskscheduler tryruninline system threading tasks task boolean system threading tasks taskcontinuation inlineifpossibleorelsequeue system threading tasks task boolean system threading tasks task runcontinuations system object system threading tasks task trysetresult system threading tasks voidtaskresult system runtime compilerservices asynctaskmethodbuilder setexistingtaskresult system threading tasks voidtaskresult system runtime compilerservices asynctaskmethodbuilder setresult system threading tasks task system net http winhttpresponsestream d movenext system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading tasks task executewiththreadlocal system threading tasks task byref system threading threadpoolworkqueue dispatch safewinhttphandle releasehandle child sp ip call site system net http winhttprequestcallback requestcallback intptr system net http winhttprequeststate intptr system net http winhttprequestcallback requestcallback intptr system net http winhttprequeststate intptr system net http winhttprequestcallback winhttpcallback intptr intptr intptr domainboundilstubclass il stub reversepinvoke interop winhttp winhttpclosehandle intptr interop winhttp winhttpclosehandle intptr interop winhttp safewinhttphandle releasehandle system runtime interopservices safehandle internalfinalize system runtime interopservices safehandle finalize winhttphandler handleasyncexception exception object exception type system nullreferenceexception message object reference not set to an instance of an object innerexception stacktrace generated system net http winhttphandler handleasyncexception system net http winhttprequeststate system exception system net http system net http winhttphandler d movenext system private corelib system runtime exceptionservices exceptiondispatchinfo throw system private corelib system threading executioncontext run system 
threading executioncontext system threading contextcallback system object system private corelib system threading threadpoolworkqueue dispatch thread id os id locks thread is alive last exception system executionengineexception null inlinedcallframe inlinedcallframe domainboundilstubclass il stub pinvoke intptr interop winhttp safewinhttphandle releasehandle gcframe gcframe helpermethodframe system net http winhttpresponsestream dispose boolean system io stream close system net http nowritenoseekstreamcontent c b system threading tasks task system object system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading tasks task executewiththreadlocal system threading tasks task byref system threading tasks threadpooltaskscheduler tryexecutetaskinline system threading tasks task boolean system threading tasks taskscheduler tryruninline system threading tasks task boolean system threading tasks taskcontinuation inlineifpossibleorelsequeue system threading tasks task boolean system threading tasks task runcontinuations system object system threading tasks task trysetresult system threading tasks voidtaskresult system runtime compilerservices asynctaskmethodbuilder setexistingtaskresult system threading tasks voidtaskresult system runtime compilerservices asynctaskmethodbuilder setresult system threading tasks task system net http winhttpresponsestream d movenext system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading executioncontext run system threading executioncontext system threading contextcallback system object system threading tasks task executewiththreadlocal system threading tasks task byref system threading threadpoolworkqueue dispatch some notes on the usage of httpclient shared instance of httpclienthandler as static readonly httpmessagehandler most often invoked methods public object invoke string method type returntype null object parameters null public async task invokeasync string method type returntype null object parameters null cancellationtoken cancellationtoken default cancellationtoken the core functionality is implemented by async task getresponseasync instantiates new instance of httpclient on every call as new httpclient webserverinvoker handler false where webserverinvoker handler is the shared static instance awaits for postasync result a timed cancellationtoken is provided always disposes httpclient instance at the end of the call note that synchronous public object invoke enforces its own timeout while waiting for invokeasync task result it cancels the cancellationtokensource if the timeout is hit in turn cancelling any pending httpclient operation and immediately disposing of httpclient instance the timeout is provided by the caller it ranges from seconds to minutes formatting changes by karelz
binary_label: 0

**Row 1,166** | id: 3,655,481,216 | type: IssuesEvent | created_at: 2016-02-17 16:26:02
repo: cfpb/hmda-platform-ui | repo_url: https://api.github.com/repos/cfpb/hmda-platform-ui | action: closed
title: Start and stop validation
labels: Data Validation Event Processing File Submission Persistence question
body:
In the Pilot "summary and findings" document, users talk about wanting the ability to start and stop the process. It's also mentioned alongside the idea of multiple users, which makes me think they would like to run the checks but allow someone else to sign at the end.
Questions:
- Would we have a "pause" button in the UI?
- Rather than a "pause" button, could this just be done at certain points? (We save progress when syntax is complete, when validity is complete, when quality is complete, etc. This would mean that they would have to wait until at least a certain point.)
- Would it be easier to track if only done at certain points? Seems like the UI might be easier, not sure about the backend.
@debseidner @wpears @benguhin
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: process
text:
start and stop validation in the pilot summary and findings document users talk about wanting the ability to start and stop the process its also mentioned with the idea of multiple users which makes me think they would like to run the checks but allow someone else to sign at the end questions would we have a pause button in the ui rather than a pause button could this just be done at certain points we save progress when syntax is complete when validity is complete when quality is complete etc this would mean that they would have to wait until at least a certain point would it be easier to track if only done at certain points seems like the ui might be easier not sure about the backend debseidner wpears benguhin
binary_label: 1

**Row 571,532** | id: 17,023,319,084 | type: IssuesEvent | created_at: 2021-07-03 01:24:33
repo: tomhughes/trac-tickets | repo_url: https://api.github.com/repos/tomhughes/trac-tickets | action: closed
title: Direction of road not displayed
labels: Component: merkaartor Priority: minor Resolution: fixed Type: enhancement
body:
**[Submitted to the original trac issue database at 8.57am, Tuesday, 4th November 2008]**
The direction of a road or the orientation of an area is not visible. As a result, you don't know whether a one-way street points in the correct direction or whether an area includes or excludes water.
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: non_process
text:
direction of road not displayed the direction of a road or the orientation of an area is not visible this way you don t know if a oneway street is pointing in the correct direction or if an area is including or excluding water
binary_label: 0

**Row 9,103** | id: 12,180,550,851 | type: IssuesEvent | created_at: 2020-04-28 12:39:49
repo: digitalmethodsinitiative/4cat | repo_url: https://api.github.com/repos/digitalmethodsinitiative/4cat | action: opened
title: tf-idf processor crashes
labels: bug processors
body:
```
Traceback (most recent call last):
File "4cat-daemon.py", line 235, in <module>
start()
File "4cat-daemon.py", line 120, in start
bootstrap.run(as_daemon=True)
File "/opt/4cat/backend/bootstrap.py", line 48, in run
WorkerManager(logger=log, database=db, queue=queue, as_daemon=as_daemon)
File "/opt/4cat/backend/lib/manager.py", line 57, in __init__
self.loop()
File "/opt/4cat/backend/lib/manager.py", line 115, in loop
self.delegate()
File "/opt/4cat/backend/lib/manager.py", line 96, in delegate
worker_class = all_modules.load_worker_class(worker_info)
File "/opt/4cat/backend/lib/module_loader.py", line 275, in load_worker_class
importlib.import_module(module)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/opt/4cat/backend/workers/../../processors/text-analysis/tf_idf.py", line 8, in <module>
import pandas as pd
File "/usr/local/lib/python3.7/dist-packages/pandas/__init__.py", line 42, in <module>
from pandas.core.api import *
File "/usr/local/lib/python3.7/dist-packages/pandas/core/api.py", line 10, in <module>
from pandas.core.groupby.groupby import Grouper
File "/usr/local/lib/python3.7/dist-packages/pandas/core/groupby/__init__.py", line 2, in <module>
from pandas.core.groupby.groupby import (
File "/usr/local/lib/python3.7/dist-packages/pandas/core/groupby/groupby.py", line 49, in <module>
from pandas.core.frame import DataFrame
File "/usr/local/lib/python3.7/dist-packages/pandas/core/frame.py", line 74, in <module>
from pandas.core.series import Series
File "/usr/local/lib/python3.7/dist-packages/pandas/core/series.py", line 81, in <module>
import pandas.plotting._core as gfx
File "/usr/local/lib/python3.7/dist-packages/pandas/plotting/__init__.py", line 11, in <module>
from pandas.plotting._core import boxplot
File "/usr/local/lib/python3.7/dist-packages/pandas/plotting/_core.py", line 45, in <module>
from pandas.plotting import _converter
File "/usr/local/lib/python3.7/dist-packages/pandas/plotting/_converter.py", line 8, in <module>
import matplotlib.units as units
File "/usr/local/lib/python3.7/dist-packages/matplotlib/__init__.py", line 1091, in <module>
rcParams = rc_params()
File "/usr/local/lib/python3.7/dist-packages/matplotlib/__init__.py", line 932, in rc_params
fname = matplotlib_fname()
File "/usr/local/lib/python3.7/dist-packages/matplotlib/__init__.py", line 746, in matplotlib_fname
for fname in gen_candidates():
File "/usr/local/lib/python3.7/dist-packages/matplotlib/__init__.py", line 735, in gen_candidates
yield os.path.join(os.getcwd(), 'matplotlibrc')
FileNotFoundError: [Errno 2] No such file or directory
```
index: 1.0
text_combine: (title + " - " + body; duplicate of the fields above, omitted)
label: process
text:
tf idf processor crashes traceback most recent call last file daemon py line in start file daemon py line in start bootstrap run as daemon true file opt backend bootstrap py line in run workermanager logger log database db queue queue as daemon as daemon file opt backend lib manager py line in init self loop file opt backend lib manager py line in loop self delegate file opt backend lib manager py line in delegate worker class all modules load worker class worker info file opt backend lib module loader py line in load worker class importlib import module module file usr lib importlib init py line in import module return bootstrap gcd import name package level file line in gcd import file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file opt backend workers processors text analysis tf idf py line in import pandas as pd file usr local lib dist packages pandas init py line in from pandas core api import file usr local lib dist packages pandas core api py line in from pandas core groupby groupby import grouper file usr local lib dist packages pandas core groupby init py line in from pandas core groupby groupby import file usr local lib dist packages pandas core groupby groupby py line in from pandas core frame import dataframe file usr local lib dist packages pandas core frame py line in from pandas core series import series file usr local lib dist packages pandas core series py line in import pandas plotting core as gfx file usr local lib dist packages pandas plotting init py line in from pandas plotting core import boxplot file usr local lib dist packages pandas plotting core py line in from pandas plotting import converter file usr local lib dist packages pandas plotting converter py line in import matplotlib units as units file usr local lib dist packages matplotlib init py line in rcparams rc params file usr local lib dist packages matplotlib init py line in rc params fname matplotlib fname file usr local lib dist packages matplotlib init py line in matplotlib fname for fname in gen candidates file usr local lib dist packages matplotlib init py line in gen candidates yield os path join os getcwd matplotlibrc filenotfounderror no such file or directory
| 1
|
22,665
| 31,896,002,310
|
IssuesEvent
|
2023-09-18 01:48:21
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - associatedSequences
|
Term - change normative Task Group - Material Sample Process - complete Class - MaterialEntity
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Sequences are not associated with occurrences, but with physical material.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?):
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_associatedSequences
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): associatedSequences
* Term label (English, not normative): Associated Sequences
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): **MaterialEntity** ~~Occurrence~~
* Definition of the term (normative): A list (concatenated and separated) of identifiers (publication, global unique identifier, URI) of genetic sequence information associated with the **dwc:MaterialEntity** ~~Occurrence~~.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): http://www.ncbi.nlm.nih.gov/nuccore/U34853.1, http://www.ncbi.nlm.nih.gov/nuccore/GU328060 | http://www.ncbi.nlm.nih.gov/nuccore/AF326093 (split as in the sketch after this list)
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Sequences/Sequence/ID-in-Database + constant
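For illustration only (not part of the proposal): a consumer of this "concatenated and separated" term has to split the value itself, since no separator is mandated. A minimal Python sketch using the separators seen in the examples:

```python
import re

def parse_associated_sequences(value):
    # "Concatenated and separated" lists mandate no separator, so split
    # on the ones commonly seen in practice (pipe, comma, semicolon).
    return [tok.strip() for tok in re.split(r"[|,;]", value) if tok.strip()]

print(parse_associated_sequences(
    "http://www.ncbi.nlm.nih.gov/nuccore/GU328060 | "
    "http://www.ncbi.nlm.nih.gov/nuccore/AF326093"))
# ['http://www.ncbi.nlm.nih.gov/nuccore/GU328060',
#  'http://www.ncbi.nlm.nih.gov/nuccore/AF326093']
```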
|
1.0
|
Change term - associatedSequences - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Sequences are not associated with occurrences, but with physical material.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?):
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_associatedSequences
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): associatedSequences
* Term label (English, not normative): Associated Sequences
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): **MaterialEntity** ~~Occurrence~~
* Definition of the term (normative): A list (concatenated and separated) of identifiers (publication, global unique identifier, URI) of genetic sequence information associated with the **dwc:MaterialEntity** ~~Occurrence~~.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): http://www.ncbi.nlm.nih.gov/nuccore/U34853.1, http://www.ncbi.nlm.nih.gov/nuccore/GU328060 | http://www.ncbi.nlm.nih.gov/nuccore/AF326093
* Refines (identifier of the broader term this term refines; normative):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): DataSets/DataSet/Units/Unit/Sequences/Sequence/ID-in-Database + constant
|
process
|
change term associatedsequences term change submitter efficacy justification why is this change necessary sequences are not associated with occurrences but with physical material demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes associatedsequences term label english not normative associated sequences organized in class e g occurrence event location taxon materialentity occurrence definition of the term normative a list concatenated and separated of identifiers publication global unique identifier uri of genetic sequence information associated with the dwc materialentity occurrence usage comments recommendations regarding content etc not normative examples not normative refines identifier of the broader term this term refines normative replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative datasets dataset units unit sequences sequence id in database constant
| 1
|
14,411
| 17,462,987,519
|
IssuesEvent
|
2021-08-06 13:12:05
|
CIAT-DAPA/subsets_genebank_accessions
|
https://api.github.com/repos/CIAT-DAPA/subsets_genebank_accessions
|
closed
|
Processing extracted value indicator
|
processing
|
|Indicator|Europe |Oceania|Asia|North America|South America|Africa|
|---|---|---|---|---|---|---|
|Ph|Ready|Ready|Ready|Ready|Ready|Ready|
|Texture|Ready|Ready|Ready|Ready|Ready|Ready|
|BLDFIE|Ready|Ready|Ready|Ready|Ready|Ready|
|CECSOL|Ready|Ready|Ready|Ready|Ready|Ready|
|ORCDRC|Ready|Ready|Ready|Ready|Ready|Ready|
|Salinity|---|---|---|---|---|---|
|
1.0
|
Processing extracted value indicator - |Indicator|Europe |Oceania|Asia|North America|South America|Africa|
|---|---|---|---|---|---|---|
|Ph|Ready|Ready|Ready|Ready|Ready|Ready|
|Texture|Ready|Ready|Ready|Ready|Ready|Ready|
|BLDFIE|Ready|Ready|Ready|Ready|Ready|Ready|
|CECSOL|Ready|Ready|Ready|Ready|Ready|Ready|
|ORCDRC|Ready|Ready|Ready|Ready|Ready|Ready|
|Salinity|---|---|---|---|---|---|
|
process
|
processing extracted value indicator indicator europe oceania asia north america south america africa ph ready ready ready ready ready ready texture ready ready ready ready ready ready bldfie ready ready ready ready ready ready cecsol ready ready ready ready ready ready orcdrc ready ready ready ready ready ready salinity
| 1
|
2,818
| 5,766,889,639
|
IssuesEvent
|
2017-04-27 08:33:24
|
reasonml/reason-tools
|
https://api.github.com/repos/reasonml/reason-tools
|
closed
|
Streamline build process
|
cat-process type-feature
|
Ideally there would be a single process watching and we'd have an incremental build flow from Reason into JS.
Right now, Rebel can't watch on OSX and there isn't a great way to integrate webpack and Rebel. Looking into ideas there.
|
1.0
|
Streamline build process - Ideally there would be a single process watching and we'd have an incremental build flow from Reason into JS.
Right now, Rebel can't watch on OSX and there isn't a great way to integrate webpack and Rebel. Looking into ideas there.
|
process
|
streamline build process ideally there would be a single process watching and we d have an incremental build flow from reason into js right now rebel can t watch on osx and there isn t a great way to integrate webpack and rebel looking into ideas there
| 1
|
10,608
| 13,434,951,913
|
IssuesEvent
|
2020-09-07 12:11:40
|
prisma/language-tools
|
https://api.github.com/repos/prisma/language-tools
|
closed
|
Abnormal CPU usage
|
bug/2-confirmed kind/bug process/candidate
|
I have a few traces, but not sure how useful they are:
1. Output of `code --status`:
<details>
<summary>With Prisma Insider v7.0.3:</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.12GB free)
Load (avg): 4, 5, 3
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47717 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 295 47524 window (schema.prisma — studio)
262 524 47525 extensionHost
0 82 47529 electron_node tsserver.js
0 131 47530 electron_node tsserver.js
0 82 47536 electron_node typingsInstaller.js typesMap.js
0 33 47531 electron_node cli.js
0 49 47532 electron_node server.js
0 33 47526 watcherService
0 33 47528 searchService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
<details>
<summary>With Prisma (Stable?) v2.6.1:</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.18GB free)
Load (avg): 4, 4, 4
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47789 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
46 295 47740 window (schema.prisma — studio)
209 541 47743 extensionHost
0 82 47748 electron_node tsserver.js
0 147 47749 electron_node tsserver.js
0 82 47755 electron_node typingsInstaller.js typesMap.js
0 33 47751 electron_node cli.js
0 49 47752 electron_node server.js
0 33 47744 watcherService
0 33 47747 searchService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
<details>
<summary>Without any Prisma Extension</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.56GB free)
Load (avg): 3, 4, 3
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47832 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 279 47799 window (schema.prisma — studio)
0 98 47800 extensionHost
0 82 47801 electron_node tsserver.js
0 131 47802 electron_node tsserver.js
0 82 47806 electron_node typingsInstaller.js typesMap.js
0 49 47804 electron_node server.js
0 33 47805 searchService
0 33 47809 watcherService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
The notable difference is the CPU usage by `extensionHost`.
2. Profiling info (from VSCode) (with Prisma Insider v7.0.3):
[CPU-20200903T140515.170Z.cpuprofile.txt](https://github.com/prisma/language-tools/files/5169119/CPU-20200903T140515.170Z.cpuprofile.txt)
I've been following this guide for this information: https://github.com/Microsoft/vscode/wiki/Performance-Issues
Also note that there isn't one single action that causes the slowdown; just having the extension enabled seems to be enough.
If you need any more info, please let me know!
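For comparing the three dumps above, the `extensionHost` CPU figure can be pulled out of each `code --status` output programmatically; a small sketch assuming the column layout shown (CPU %, Mem MB, PID, process name):

```python
import re

def extension_host_cpu(status_output: str):
    # Rows look like "262  524  47525  extensionHost"; capture the CPU %.
    for line in status_output.splitlines():
        m = re.match(r"\s*(\d+)\s+\d+\s+\d+\s+extensionHost\b", line)
        if m:
            return int(m.group(1))
    return None

# With the dumps above: 262 (Insider), 209 (stable), 0 (no Prisma extension).
```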
|
1.0
|
Abnormal CPU usage - I have a few traces, but not sure how useful they are:
1. Output of `code --status`:
<details>
<summary>With Prisma Insider v7.0.3:</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.12GB free)
Load (avg): 4, 5, 3
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47717 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 295 47524 window (schema.prisma — studio)
262 524 47525 extensionHost
0 82 47529 electron_node tsserver.js
0 131 47530 electron_node tsserver.js
0 82 47536 electron_node typingsInstaller.js typesMap.js
0 33 47531 electron_node cli.js
0 49 47532 electron_node server.js
0 33 47526 watcherService
0 33 47528 searchService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
<details>
<summary>With Prisma (Stable?) v2.6.1:</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.18GB free)
Load (avg): 4, 4, 4
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47789 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
46 295 47740 window (schema.prisma — studio)
209 541 47743 extensionHost
0 82 47748 electron_node tsserver.js
0 147 47749 electron_node tsserver.js
0 82 47755 electron_node typingsInstaller.js typesMap.js
0 33 47751 electron_node cli.js
0 49 47752 electron_node server.js
0 33 47744 watcherService
0 33 47747 searchService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
<details>
<summary>Without any Prisma Extension</summary>
<pre>
Version: Code 1.48.2 (a0479759d6e9ea56afa657e454193f72aef85bd0, 2020-08-25T10:09:08.021Z)
OS Version: Darwin x64 19.6.0
CPUs: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 x 2200)
Memory (System): 16.00GB (1.56GB free)
Load (avg): 3, 4, 3
VM: 0%
Screen Reader: no
Process Argv: --inspect-extensions=9993
GPU Status: 2d_canvas: enabled
flash_3d: enabled
flash_stage3d: enabled
flash_stage3d_baseline: enabled
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
oop_rasterization: disabled_off
protected_video_decode: unavailable_off
rasterization: enabled
skia_renderer: disabled_off_ok
video_decode: enabled
viz_display_compositor: enabled_on
viz_hit_test_surface_layer: disabled_off_ok
webgl: enabled
webgl2: enabled
CPU % Mem MB PID Process
0 98 47408 code main
0 66 47414 gpu-process
0 16 47417 utility
0 131 47435 shared-process
0 0 47832 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 279 47799 window (schema.prisma — studio)
0 98 47800 extensionHost
0 82 47801 electron_node tsserver.js
0 131 47802 electron_node tsserver.js
0 82 47806 electron_node typingsInstaller.js typesMap.js
0 49 47804 electron_node server.js
0 33 47805 searchService
0 33 47809 watcherService
Workspace Stats:
| Window (schema.prisma — studio)
| Folder (studio): 921 files
| File types: ts(233) svg(93) js(89) tsx(74) scss(72) map(55) json(34)
| DS_Store(31) sh(24) html(16)
| Conf files: package.json(10) tsconfig.json(6) github-actions(3)
| project.json(1) webpack.config.js(1)
</pre>
</details>
The notable difference is the CPU usage by `extensionHost`.
2. Profiling info (from VSCode) (with Prisma Insider v7.0.3):
[CPU-20200903T140515.170Z.cpuprofile.txt](https://github.com/prisma/language-tools/files/5169119/CPU-20200903T140515.170Z.cpuprofile.txt)
I've been following this guide for this information: https://github.com/Microsoft/vscode/wiki/Performance-Issues
Also note that there isn't one single action that causes the slowdown; just having the extension enabled seems to be enough.
If you need any more info, please let me know!
|
process
|
abnormal cpu usage i have a few traces but not sure how useful they are output of code status with prisma insider version code os version darwin cpus intel r core tm cpu x memory system free load avg vm screen reader no process argv inspect extensions gpu status canvas enabled flash enabled flash enabled flash baseline enabled gpu compositing enabled metal disabled off multiple raster threads enabled on oop rasterization disabled off protected video decode unavailable off rasterization enabled skia renderer disabled off ok video decode enabled viz display compositor enabled on viz hit test surface layer disabled off ok webgl enabled enabled cpu mem mb pid process code main gpu process utility shared process bin ps ax o pid ppid pcpu pmem command window schema prisma — studio extensionhost electron node tsserver js electron node tsserver js electron node typingsinstaller js typesmap js electron node cli js electron node server js watcherservice searchservice workspace stats window schema prisma — studio folder studio files file types ts svg js tsx scss map json ds store sh html conf files package json tsconfig json github actions project json webpack config js with prisma stable version code os version darwin cpus intel r core tm cpu x memory system free load avg vm screen reader no process argv inspect extensions gpu status canvas enabled flash enabled flash enabled flash baseline enabled gpu compositing enabled metal disabled off multiple raster threads enabled on oop rasterization disabled off protected video decode unavailable off rasterization enabled skia renderer disabled off ok video decode enabled viz display compositor enabled on viz hit test surface layer disabled off ok webgl enabled enabled cpu mem mb pid process code main gpu process utility shared process bin ps ax o pid ppid pcpu pmem command window schema prisma — studio extensionhost electron node tsserver js electron node tsserver js electron node typingsinstaller js typesmap js electron node cli js electron node server js watcherservice searchservice workspace stats window schema prisma — studio folder studio files file types ts svg js tsx scss map json ds store sh html conf files package json tsconfig json github actions project json webpack config js without any prisma extension version code os version darwin cpus intel r core tm cpu x memory system free load avg vm screen reader no process argv inspect extensions gpu status canvas enabled flash enabled flash enabled flash baseline enabled gpu compositing enabled metal disabled off multiple raster threads enabled on oop rasterization disabled off protected video decode unavailable off rasterization enabled skia renderer disabled off ok video decode enabled viz display compositor enabled on viz hit test surface layer disabled off ok webgl enabled enabled cpu mem mb pid process code main gpu process utility shared process bin ps ax o pid ppid pcpu pmem command window schema prisma — studio extensionhost electron node tsserver js electron node tsserver js electron node typingsinstaller js typesmap js electron node server js searchservice watcherservice workspace stats window schema prisma — studio folder studio files file types ts svg js tsx scss map json ds store sh html conf files package json tsconfig json github actions project json webpack config js notable difference is the cpu usage by extensionhost profiling info from vscode with prisma insider i ve been following this guide for this information also note that there isn t one single action that causes the slowdown 
just having the extension enabled seems to be enough if you need any more info please let me know
| 1
|
12,034
| 14,738,638,348
|
IssuesEvent
|
2021-01-07 05:19:54
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Phoenix SAB- Missing Cycle but invoices, duplicate cycles. Reports & Accounts not matching
|
anc-ops anc-process anp-urgent ant-bug ant-parent/primary ant-support
|
In GitLab by @kdjstudios on Jun 29, 2018, 10:16
**Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-29-67784/conversation
**Server:** Internal
**Client/Site:** Phoenix
**Account:** Multi
**Issue:**
Hi- There is no billing cycle for 2/1/18 in the drop-down list for the Phoenix site. When we go into the individual accounts, we can see 2/1/18 invoices. (There are only 4 accounts in this site.)
Example: Running the Service Code summary for the 3/1/18 cycle, Tri County was invoiced $13,102.19 plus another $2,000 = $15,102.19, but when we go into the Account, under billing history, this amount is listed under the 4/1/18 cycle.
Also, Concordia Law: in the Account, under billing history, Concordia Law has two 4/1/18 invoice cycles listed.
|
1.0
|
Phoenix SAB- Missing Cycle but invoices, duplicate cycles. Reports & Accounts not matching - In GitLab by @kdjstudios on Jun 29, 2018, 10:16
**Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-29-67784/conversation
**Server:** Internal
**Client/Site:** Phoenix
**Account:** Multi
**Issue:**
Hi- There is no billing cycle for 2/1/18 in the drop-down list for the Phoenix site. When we go into the individual accounts, we can see 2/1/18 invoices. (There are only 4 accounts in this site.)
Example: Running the Service Code summary for the 3/1/18 cycle, Tri County was invoiced $13,102.19 plus another $2,000 = $15,102.19, but when we go into the Account, under billing history, this amount is listed under the 4/1/18 cycle.
Also, Concordia Law: in the Account, under billing history, Concordia Law has two 4/1/18 invoice cycles listed.
|
process
|
phoenix sab missing cycle but invoices duplicate cycles reports accounts not matching in gitlab by kdjstudios on jun submitted by cori bartlett helpdesk server internal client site phoenix account mutli issue hi there is no billing cycle for in drop down list for phoenix site when we go into to the individual accounts we can see invoices there are only accounts in this site example running service code summary for cycle tri county was invoiced and another but when we go into the account under billing history this amount is listed under cycle also concorida law in the account under billing history concordia law has two invoice cycles listed
| 1
|
334,891
| 24,443,866,331
|
IssuesEvent
|
2022-10-06 16:19:19
|
SchwarzIT/terraform-provider-stackit
|
https://api.github.com/repos/SchwarzIT/terraform-provider-stackit
|
closed
|
Update docs to v0.2.0
|
documentation
|
Before release, doc version needs to be updated from v0.1.2 to v0.2.0
relevant files:
`README.md` and `examples/provider/provider.tf`
|
1.0
|
Update docs to v0.2.0 - Before release, doc version needs to be updated from v0.1.2 to v0.2.0
relevant files:
`README.md` and `examples/provider/provider.tf`
|
non_process
|
update docs to before release doc version needs to be updated from to relevant files readme md and examples provider provider tf
| 0
|
119,201
| 12,016,675,120
|
IssuesEvent
|
2020-04-10 16:37:42
|
SAPMarco/UI5-navigation-and-routing
|
https://api.github.com/repos/SAPMarco/UI5-navigation-and-routing
|
closed
|
New Router/Navigation-Wiki entry
|
documentation
|
Add a new wiki entry within this [Repository](https://github.com/SAPMarco/SAPMarco.github.io). The topics should include:
* Pattern Matching
* Mandatory/Optional parameters + query parameters
* Handling of matched routes in controller -> Basically just onRouteMatched implementations
Example in [here](https://sapui5.hana.ondemand.com/#/topic/b8561ff6f4c34c85a91ed06d20814cd3).
|
1.0
|
New Router/Navigation-Wiki entry - Add a new wiki entry within this [Repository](https://github.com/SAPMarco/SAPMarco.github.io). The topics should include:
* Pattern Matching
* Mandatory/Optional parameters + query parameters
* Handling of matched routes in controller -> Basically just onRouteMatched implementations
Example in [here](https://sapui5.hana.ondemand.com/#/topic/b8561ff6f4c34c85a91ed06d20814cd3).
|
non_process
|
new router navigation wiki entry add a new wiki entry within this the topics should include pattern matching mandatory optional parameters queryparameters handling of matched routes in controller basically just onroutematched implementations example in
| 0
|
22,294
| 30,848,032,101
|
IssuesEvent
|
2023-08-02 14:52:46
|
akuity/kargo
|
https://api.github.com/repos/akuity/kargo
|
opened
|
cli build
|
area/chore area/ci-process area/release-process priority/normal
|
The current CI process does not do a test build of the CLI, nor does the current release process compile the CLI for various common os/arch combos and publish the resulting binaries.
Both of these need to be addressed.
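A rough sketch of what the release step could do, shown in Python for illustration; the `./cmd/cli` package path, the output naming, and the target list are assumptions rather than the project's actual layout:

```python
import os
import subprocess

# Hypothetical cross-compilation matrix for common os/arch combos.
TARGETS = [("linux", "amd64"), ("linux", "arm64"),
           ("darwin", "amd64"), ("darwin", "arm64"),
           ("windows", "amd64")]

for goos, goarch in TARGETS:
    out = f"bin/kargo-{goos}-{goarch}" + (".exe" if goos == "windows" else "")
    subprocess.run(
        ["go", "build", "-o", out, "./cmd/cli"],  # package path is assumed
        env={**os.environ, "GOOS": goos, "GOARCH": goarch, "CGO_ENABLED": "0"},
        check=True,
    )
```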
|
2.0
|
cli build - The current CI process does not do a test build of the CLI, nor does the current release process compile the CLI for various common os/arch combos and publish the resulting binaries.
Both of these need to be addressed.
|
process
|
cli build the current ci process does not do a test build of the cli nor does the current release process compile the cli for various common os arch combos and publish the resulting binaries both of these need to be addressed
| 1
|
42,577
| 17,196,185,080
|
IssuesEvent
|
2021-07-16 17:45:20
|
hashicorp/terraform-provider-aws
|
https://api.github.com/repos/hashicorp/terraform-provider-aws
|
closed
|
Data source for aws_directory_service_directory
|
enhancement new-data-source service/directoryservice stale
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Please add a data source for **aws_directory_service_directory** resources.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_directory_service_directory
|
2.0
|
Data source for aws_directory_service_directory - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Please add a data source for **aws_directory_service_directory** resources.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_directory_service_directory
|
non_process
|
data source for aws directory service directory community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description please add a data source for aws directory service directory resources new or affected resource s aws directory service directory
| 0
|
10,290
| 13,145,083,881
|
IssuesEvent
|
2020-08-08 01:30:32
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
Filebeat - output pattern matched from list of include_lines/RegEx
|
:Processors Filebeat Stalled enhancement needs_team
|
Filebeat has an option to define an array of patterns to match through the include_lines feature. However, it doesn't tell us which pattern from the include_lines array actually matched.
(e.g.) Use case: exception monitor.
I define a list of patterns to match in a log file using include_lines; the current config captures the error and returns the line... but I want an error count for each type of pattern... unless Filebeat outputs the pattern it matched through include_lines, I can't make a count of each exception.
Even in the case of a RegEx, Filebeat could output the pattern that actually matched the message.
https://discuss.elastic.co/t/filebeat-to-ouput-which-pattern-matched-from-the-include-lines-list/88600
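Until Filebeat exposes the matched pattern, the per-pattern counts have to be recovered downstream by re-matching the lines; a minimal sketch of that idea (Python, with hypothetical patterns and log path):

```python
import re
from collections import Counter

# Hypothetical stand-ins for the include_lines patterns in filebeat.yml
# and for the monitored log file.
INCLUDE_LINES = [r"ERROR", r"Exception", r"OutOfMemoryError"]

def first_match(line, patterns=INCLUDE_LINES):
    # Mimic include_lines: return the first pattern that matches the line.
    for pattern in patterns:
        if re.search(pattern, line):
            return pattern
    return None

counts = Counter()
with open("app.log") as log:
    for line in log:
        pattern = first_match(line)
        if pattern is not None:
            counts[pattern] += 1
print(counts)  # per-pattern match counts
```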
|
1.0
|
Filebeat - output pattern matched from list of include_lines/RegEx - Filebeat has an option to define an array of patterns to match through the include_lines feature. However, it doesn't tell us which pattern from the include_lines array actually matched.
(e.g.) Use case: exception monitor.
I define a list of patterns to match in a log file using include_lines; the current config captures the error and returns the line... but I want an error count for each type of pattern... unless Filebeat outputs the pattern it matched through include_lines, I can't make a count of each exception.
Even in the case of a RegEx, Filebeat could output the pattern that actually matched the message.
https://discuss.elastic.co/t/filebeat-to-ouput-which-pattern-matched-from-the-include-lines-list/88600
|
process
|
filebeat output pattern matched from list of include lines regex filebeat has option to define array of patterns to match through the include lines feature however it doesn t let us know which pattern matched currently from the array of include lines e g usecase exception monitor i define list for patterns to match in a log file using include lines current config captures the error and returns me the line but i want to error count on each type of pattern unless filebeat outputs the pattern it has matched through include lines i can make a count of each exception even i case of regex we can output pattern which really matched on the message
| 1
|
15,625
| 20,143,442,638
|
IssuesEvent
|
2022-02-09 03:20:20
|
pepkit/geofetch
|
https://api.github.com/repos/pepkit/geofetch
|
opened
|
Metadata standardization
|
compatibility
|
At the moment, geofetch can download, filter, and save metadata for specific accessions in GEO. But metadata in GEO is stored in different, messy ways. Some of the information can be redundant and some can be stored in different places.
e.g. sample genome information may have 3 (or more) different keys (dictionary keys):
- 'Sample_description': ['assembly: hg19', ...]
- 'Sample_characteristics_ch1': ['genome build: hg19', ...]
- 'Sample_data_processing': ['Genome_build: hg19', ...]
To create a good, standardized PEP .csv metadata file, all the information has to be carefully preprocessed.
In my opinion we have to create a new class, or set of functions, separate from geofetch, that will standardize all GEO metadata.
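A sketch of the kind of normalization such a class could perform, collapsing the three genome-build spellings above into one value (the function and alias list are illustrative, not geofetch's API):

```python
# Hypothetical normalizer for the three GEO spellings of the assembly key.
GENOME_ALIASES = ("assembly", "genome build", "genome_build")

def extract_genome(sample_metadata):
    for field in ("Sample_description",
                  "Sample_characteristics_ch1",
                  "Sample_data_processing"):
        for entry in sample_metadata.get(field, []):
            key, _, value = entry.partition(":")
            if key.strip().lower() in GENOME_ALIASES:
                return value.strip()
    return None

print(extract_genome({"Sample_characteristics_ch1": ["genome build: hg19"]}))
# -> hg19
```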
|
True
|
Metadata standardization - At the moment, geofetch can download, filter, and save metadata for specific accessions in GEO. But metadata in GEO is stored in different, messy ways. Some of the information can be redundant and some can be stored in different places.
e.g. sample genome information may have 3 (or more) different keys (dictionary keys):
- 'Sample_description': ['assembly: hg19', ...]
- 'Sample_characteristics_ch1': ['genome build: hg19', ...]
- 'Sample_data_processing': ['Genome_build: hg19', ...]
To create a good, standardized PEP .csv metadata file, all the information has to be carefully preprocessed.
In my opinion we have to create a new class, or set of functions, separate from geofetch, that will standardize all GEO metadata.
|
non_process
|
metadata standardization at the moment geofetch can download filter save metadata for the specific accessions in geo but metadata in geo is stored in different messy ways some of the information can be redundant and some can be stored in different places e g sample genome information may be or more may have different keys dictionary keys sample description sample characteristics sample data processing to create good standardized pep csv metadata file all information has to be be carefuly preprocessed in my opinion we have to create new class or set of function that is separated from geofetch and will standardiz all geo metadata
| 0
|
2,153
| 5,002,602,154
|
IssuesEvent
|
2016-12-11 13:49:58
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
The property heap_size_limit from getHeapStatistics is incorrect for memory over 4032MB
|
process V8
|
* **Version**: 7.2.1
* **Platform**: Linux 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: v8
<!-- Enter your issue details below this comment. -->
When calling the v8 module's getHeapStatistics function, the heap_size_limit property is incorrect for memory sizes set over 4GB. Specifically, if you set max_old_space >= 4032, heap_size_limit is incorrect. To reproduce:
node --max_old_space_size=3072
> require('v8').getHeapStatistics().heap_size_limit
3288334336
This is as you would expect, but look at what 4096 does:
node --max_old_space_size=4096
> require('v8').getHeapStatistics().heap_size_limit
67108864
64 MB???? That's not right. After some trial and error, the magic breaking point is 4032:
node --max_old_space_size=4032
> require('v8').getHeapStatistics().heap_size_limit
0
Heap size of ZERO, when set to 4032.
This also happens on Mac OS X 10.11.6 and Node v5.11.0
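The three reported values are consistent with a simple hypothesis: the limit is computed as max_old_space plus a 64 MB young-generation allowance and then truncated to 32 bits somewhere on the reporting path (the 64 MB allowance is an assumption, not confirmed from the V8 sources):

```python
MB = 1024 * 1024

def reported_limit(max_old_space_mb, young_gen_mb=64):
    # Hypothesis: heap_size_limit = (max_old_space + young gen) bytes,
    # truncated to 32 bits when reported.
    return ((max_old_space_mb + young_gen_mb) * MB) % 2**32

print(reported_limit(3072))  # 3288334336 -> matches the sane case
print(reported_limit(4032))  # 0          -> the reported ZERO
print(reported_limit(4096))  # 67108864   -> the mysterious 64 MB
```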
|
1.0
|
The property heap_size_limit from getHeapStatistics is incorrect for memory over 4032MB - * **Version**: 7.2.1
* **Platform**: Linux 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: v8
<!-- Enter your issue details below this comment. -->
When calling the v8 module's getHeapStatistics function, the heap_size_limit property is incorrect for memory sizes set over 4GB. Specifically, if you set max_old_space >= 4032, heap_size_limit is incorrect. To reproduce:
node --max_old_space_size=3072
> require('v8').getHeapStatistics().heap_size_limit
3288334336
This is as you would expect, but look at what 4096 does:
node --max_old_space_size=4096
> require('v8').getHeapStatistics().heap_size_limit
67108864
64 MB???? That's not right. After some trial and error, the magic breaking point is 4032:
node --max_old_space_size=4032
> require('v8').getHeapStatistics().heap_size_limit
0
Heap size of ZERO, when set to 4032.
This also happens on Mac OS X 10.11.6 and Node v5.11.0
|
process
|
the property heap size limit from getheapstatistics is incorrect for memory over version platform linux smp tue feb utc gnu linux subsystem when calling the module s getheapstatistics function the heap size limit property is incorrect for memory sizes set over specifically if you set max old space heap size limit is incorrect to reproduce node max old space size require getheapstatistics heap size limit this is as you would expect but look at what does node max old space size require getheapstatistics heap size limit mb that s not right after some trial and error the magic breaking point is node max old space size require getheapstatistics heap size limit heap size of zero when set to this also happens on mac os x and node
| 1
|
8,649
| 11,789,966,455
|
IssuesEvent
|
2020-03-17 18:04:48
|
nearprotocol/NEPs
|
https://api.github.com/repos/nearprotocol/NEPs
|
opened
|
Spec release process
|
process
|
Original suggestion:
- Spec release every month, version X
- Clients release every month with 1 month offset from spec. E.g. 1 month from spec release X, clients vX are released.
This requires:
- Spec has a list of clients that are “supported” - e.g. whose release schedule is synchronized. If some client team is not on schedule, has issues, etc. -> they can be removed from the “supported” list. Teams can also apply via PR to the supported list if already following the requirements.
- Spec changes already have pull requests to clients that are enough to understand scope of work at least. E.g. if it’s something trivial, then only to one client PR is required, if it’s something substantial - then to all supported clients.
Questions from @bowenwang1996
Refined suggestion:
- Spec “stable” release every month, vX
- Clients release at the same time with vX spec.
Still same idea with “supported” clients.
You submit PR to spec repo, it gets reviewed, debated and marked as “implementing” after acceptance.
This means all clients start developing it. After changes are merged into “master” in all supported clients -> spec PR is merged into “master”. (e.g. there is a checklist in PR that is required to have all supported clients PRs linked and merged).
Note that while implementing spec might change due to discovered implementation details or due to testing / benchmarking.
After that have the usual stabilization month on clients side and released together spec + clients.
If some client implementation is consistently lagging - it will be cut from the “supported” list.
|
1.0
|
Spec release process - Original suggestion:
- Spec release every month, version X
- Clients release every month with 1 month offset from spec. E.g. 1 month from spec release X, clients vX are released.
This requires:
- Spec has a list of clients that are “supported” - e.g. whose release schedule is synchronized. If some client team is not on schedule, has issues, etc. -> they can be removed from the “supported” list. Teams can also apply via PR to the supported list if already following the requirements.
- Spec changes already have pull requests to clients that are enough to understand scope of work at least. E.g. if it’s something trivial, then only to one client PR is required, if it’s something substantial - then to all supported clients.
Questions from @bowenwang1996
Refined suggestion:
- Spec “stable” release every month, vX
- Clients release at the same time with vX spec.
Still same idea with “supported” clients.
You submit PR to spec repo, it gets reviewed, debated and marked as “implementing” after acceptance.
This means all clients start developing it. After changes are merged into “master” in all supported clients -> spec PR is merged into “master”. (e.g. there is a checklist in PR that is required to have all supported clients PRs linked and merged).
Note that while implementing spec might change due to discovered implementation details or due to testing / benchmarking.
After that have the usual stabilization month on clients side and released together spec + clients.
If some client implementation is consistently lagging - it will be cut from the “supported” list.
|
process
|
spec release process original suggestion spec release every month version x clients release every month with month offset from spec e g month from spec release x clients vx are released this requires spec has a list of clients that are “supported” e g which release schedule is syncronized if some client team is not on schedule issues etc they can be removed from “supported” list also can apply via pr to supported list if already following requirements spec changes already have pull requests to clients that are enough to understand scope of work at least e g if it’s something trivial then only to one client pr is required if it’s something substantial then to all supported clients questions from refined suggestion spec “stable” release every month vx clients release at the same time with vx spec still same idea with “supported” clients you submit pr to spec repo it gets reviewed debated and marked as “implementing” after acceptance this means all clients start developing it after changes are merged into “master” in all supported clients spec pr is merged into “master” e g there is a checklist in pr that is required to have all supported clients prs linked and merged note that while implementing spec might change due to discovered implementation details or due to testing benchmarking after that have the usual stabilization month on clients side and released together spec clients if some client implementation is consistently lagging they will be cut from “support” list
| 1
|
14,916
| 18,350,925,180
|
IssuesEvent
|
2021-10-08 12:27:11
|
ankidroid/Anki-Android
|
https://api.github.com/repos/ankidroid/Anki-Android
|
closed
|
[Bug] CI - Broken test: resetResetAllElementsFromOnboarding
|
Dev Test process
|
Works on my machine.
The following returns 4 on my machine, but 5 on CI:
```kotlin
val featuresAvailableForReset = OnboardingUtils.featureConstants.size
```
```
com.ichi2.anki.OnboardingUtilsTest > resetResetAllElementsFromOnboarding FAILED
java.lang.AssertionError: All onboarding identifiers are available for reset
Expected: <5>
but: was <4>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at com.ichi2.anki.OnboardingUtilsTest.resetResetAllElementsFromOnboarding(OnboardingUtilsTest.kt:37)
```
|
1.0
|
[Bug] CI - Broken test: resetResetAllElementsFromOnboarding - Works on my machine.
The following returns 4 on my machine, but 5 on CI:
```kotlin
val featuresAvailableForReset = OnboardingUtils.featureConstants.size
```
```
com.ichi2.anki.OnboardingUtilsTest > resetResetAllElementsFromOnboarding FAILED
java.lang.AssertionError: All onboarding identifiers are available for reset
Expected: <5>
but: was <4>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at com.ichi2.anki.OnboardingUtilsTest.resetResetAllElementsFromOnboarding(OnboardingUtilsTest.kt:37)
```
|
process
|
ci broken test resetresetallelementsfromonboarding works on my machine the following returns on my machine on ci kotlin val featuresavailableforreset onboardingutils featureconstants size com anki onboardingutilstest resetresetallelementsfromonboarding failed java lang assertionerror all onboarding identifiers are available for reset expected but was at org hamcrest matcherassert assertthat matcherassert java at com anki onboardingutilstest resetresetallelementsfromonboarding onboardingutilstest kt
| 1
|
17,728
| 23,630,167,429
|
IssuesEvent
|
2022-08-25 08:39:35
|
argosp/trialdash
|
https://api.github.com/repos/argosp/trialdash
|
closed
|
add new device(entity)
|
enhancement in process
|
- [x] dividing the TBP model and GIS model
- [x] visible entity on map button
- [x] styling the map entities
- [x] filter & search logic
- [x] show entity name checkbox
- [x] mark entity/multiple in TBP table
- [x] research of PUT ENTITIES button on SAVE button after DnD
- [x] add right click on map to open menu for add entity to TBP
- [x] #261
- [x] create add/remove entity (from TBP table on map) popup after right click on entity on map
- [x] #274
- [x] for each selected entity - place it on map by its coordinates
- [x] prev presentation of TBP entities on map
- [x] select map logic
|
1.0
|
add new device(entity) - - [x] dividing the TBP model and GIS model
- [x] visible entity on map button
- [x] styling the map entities
- [x] filter & search logic
- [x] show entity name checkbox
- [x] mark entity/multiple in TBP table
- [x] research of PUT ENTITIES button on SAVE button after DnD
- [x] add right click on map to open menu for add entity to TBP
- [x] #261
- [x] create add/remove entity (from TBP table on map) popup after right click on entity on map
- [x] #274
- [x] for each selected entity - place it on map by its coordinates
- [x] prev presentation of TBP entities on map
- [x] select map logic
|
process
|
add new device entity dividing the tbp model and gis model visible entity on map button styling the map entities filter search logic show entity name checkbox mark entity multiple in tbp table research of put entities button on save button after dnd add right click on map to open menu for add entity to tbp create add remove entity from tbp table on map popup after right click on entity on map for each selected entity place it on map by its coordinates prev presentation of tbp entities on map select map logic
| 1
|
750,395
| 26,200,462,770
|
IssuesEvent
|
2023-01-03 16:59:42
|
robotframework/robotframework
|
https://api.github.com/repos/robotframework/robotframework
|
opened
|
Possibility to give a custom name to a suite using `Name` setting
|
enhancement priority: medium acknowledge
|
Suite names are derived from file or directory names by default so that, for example, `example_suite.robot` creates a suite `Example Suite`. This works fine in general, but makes it inconvenient or even impossible to use special characters like `!` or `_`. Longer suite names can also be inconvenient as file/directory names. An easy solution to allow using whatever names is adding a new `Name` setting. The name would still be set based on the file/directory name by default, but this new setting would allow overriding it.
Being able to set a custom name for suites like this would make issue #4015 more powerful. Without this, you would need to use the `--name` option in addition to `__init__.robot` files to be able to fully configure the virtual top-level suite created when executing multiple files/directories.
This change only affects parsing and is fairly straightforward. This is a good issue for anyone interested to get more familiar with Robot's parser!
|
1.0
|
Possibility to give a custom name to a suite using `Name` setting - Suite names are derived from file or directory names by default so that, for example, `example_suite.robot` creates a suite `Example Suite`. This works fine in general, but makes it inconvenient or even impossible to use special characters like `!` or `_`. Longer suite names can also be inconvenient as file/directory names. An easy solution to allow using whatever names is adding a new `Name` setting. The name would still be set based on the file/directory name by default, but this new setting would allow overriding it.
Being able to set a custom name for suites like this would make issue #4015 more powerful. Without this, you would need to use the `--name` option in addition to `__init__.robot` files to be able to fully configure the virtual top-level suite created when executing multiple files/directories.
This change only affects parsing and is fairly straightforward. This is a good issue for anyone interested to get more familiar with Robot's parser!
|
non_process
|
possibility to give a custom name to a suite using name setting suite names are got from file or directory names by default so that for example example suite robot creates a suite example suite this works fine in general but makes it inconvenient or even impossible to use special characters like or longer suite names can also be inconvenient as file directory names an easy solution to allow using whatever names is adding a new name the name would still be set based on the file directory name by default but this new setting would allow overriding it being able to set a custom name for suites like this would make issue more powerful without this you needed to use the name option in addition to init robot files to be able to fully configure the virtual top level suite created when executing multiple files directories this change only affects parsing and is fairly straightforward this is a good issue for anyone interested to get more familiar with robot s parser
| 0
|
18,011
| 24,030,709,250
|
IssuesEvent
|
2022-09-15 14:50:20
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
Obsolete GO:0051673 membrane disruption in another organism
|
obsoletion multi-species process
|
Dear all,
The proposal has been made to obsolete GO:0051673 membrane disruption in another organism. The reason for obsoletion is that this term represents a molecular function, 'pore forming activity' (requested in #24007)
This term has no annotations, no mappings, and is not present in any subsets. You can comment on the ticket:
Thanks, Pascale
|
1.0
|
Obsolete GO:0051673 membrane disruption in another organism - Dear all,
The proposal has been made to obsolete GO:0051673 membrane disruption in another organism. The reason for obsoletion is that this term represents a molecular function, 'pore forming activity' (requested in #24007)
This term has no annotations, no mappings, and is not present in any subsets. You can comment on the ticket:
Thanks, Pascale
|
process
|
obsolete go membrane disruption in another organism dear all the proposal has been made to obsolete go membrane disruption in another organism the reason for obsoletion is that this term represents a molecular function pore forming activity requested in this term has no annotations no mappings and is not present in any subsets you can comment on the ticket thanks pascale
| 1
|
14,230
| 17,150,534,297
|
IssuesEvent
|
2021-07-13 19:57:02
|
googleapis/python-bigtable
|
https://api.github.com/repos/googleapis/python-bigtable
|
closed
|
samples.instanceadmin.test_instanceadmin: test_add_and_delete_cluster failed
|
api: bigtable flakybot: flaky flakybot: issue samples type: process
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 1e5128575d06096124e086366ba9c28bb6d2dbf6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69e93bac-2c35-44d4-bac3-970135272ec0), [Sponge](http://sponge2/69e93bac-2c35-44d4-bac3-970135272ec0)
status: failed
<details><summary>Test output</summary><br><pre>args = (parent: "projects/python-docs-samples-tests/instances/instanceadmin-636-1625735059"
cluster_id: "instanceadmin-920"
c...ocation: "projects/python-docs-samples-tests/locations/us-central1-a"
serve_nodes: 1
default_storage_type: SSD
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/python-docs-samples-tests/instances/instanceadmin-636-1625735059'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.30.0')]}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7efeafef4fd0>
request = parent: "projects/python-docs-samples-tests/instances/instanceadmin-636-1625735059"
cluster_id: "instanceadmin-920"
cl... location: "projects/python-docs-samples-tests/locations/us-central1-a"
serve_nodes: 1
default_storage_type: SSD
}
timeout = None
metadata = [('x-goog-request-params', 'parent=projects/python-docs-samples-tests/instances/instanceadmin-636-1625735059'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.30.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7efeafe0f978>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7efeadd1f508>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "The instance is currently being changed, please try again."
E debug_error_string = "{"created":"@1625735069.728455632","description":"Error received from peer ipv4:142.250.99.95:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"The instance is currently being changed, please try again.","grpc_status":14}"
E >
.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
capsys = <_pytest.capture.CaptureFixture object at 0x7efeafe89240>
dispose_of = <function dispose_of.<locals>.disposal at 0x7efeb0eee2f0>
def test_add_and_delete_cluster(capsys, dispose_of):
dispose_of(INSTANCE)
# This won't work, because the instance isn't created yet
instanceadmin.add_cluster(PROJECT, INSTANCE, CLUSTER2)
out = capsys.readouterr().out
assert f"Instance {INSTANCE} does not exist" in out
# Get the instance created
instanceadmin.run_instance_operations(PROJECT, INSTANCE, CLUSTER1)
capsys.readouterr() # throw away output
# Add a cluster to that instance
> instanceadmin.add_cluster(PROJECT, INSTANCE, CLUSTER2)
test_instanceadmin.py:131:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
instanceadmin.py:158: in add_cluster
cluster.create()
../../google/cloud/bigtable/cluster.py:301: in create
"cluster": cluster_pb,
../../google/cloud/bigtable_admin_v2/services/bigtable_instance_admin/client.py:1001: in create_cluster
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "The instance is currently...l.cc","file_line":1066,"grpc_message":"The instance is currently being changed, please try again.","grpc_status":14}"
>
> ???
E google.api_core.exceptions.ServiceUnavailable: 503 The instance is currently being changed, please try again.
<string>:3: ServiceUnavailable</pre></details>
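A hedged mitigation sketch for the 503 above (not the samples' actual fix): wrap the flaky call in a `google.api_core` `Retry` that retries only `ServiceUnavailable`, since the error is transient while the instance is being modified. `add_cluster_with_retry` is a hypothetical helper; `instanceadmin` and the `PROJECT`/`INSTANCE`/`CLUSTER2` names come from the test itself.
```python
# Sketch only: retry the flaky Bigtable call on transient 503 UNAVAILABLE.
from google.api_core import exceptions, retry

import instanceadmin  # the sample module under test (from the repo's samples)

retry_unavailable = retry.Retry(
    predicate=retry.if_exception_type(exceptions.ServiceUnavailable),
    initial=1.0,     # first backoff, in seconds
    multiplier=2.0,  # exponential backoff
    maximum=30.0,    # cap the per-attempt sleep
    deadline=120.0,  # give up after two minutes overall
)

def add_cluster_with_retry(project, instance, cluster_id):
    """Call instanceadmin.add_cluster, retrying while the instance settles."""
    return retry_unavailable(instanceadmin.add_cluster)(project, instance, cluster_id)
```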
|
1.0
|
samples.instanceadmin.test_instanceadmin: test_add_and_delete_cluster failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 1e5128575d06096124e086366ba9c28bb6d2dbf6
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69e93bac-2c35-44d4-bac3-970135272ec0), [Sponge](http://sponge2/69e93bac-2c35-44d4-bac3-970135272ec0)
status: failed
|
process
| 1
|
255,157
| 19,295,082,600
|
IssuesEvent
|
2021-12-12 13:01:52
|
HN-2021-Boccacius/git-project-Boccace
|
https://api.github.com/repos/HN-2021-Boccacius/git-project-Boccace
|
opened
|
Restructuring the repository to better document the individual work and collaborate.
|
documentation enhancement
|
After a briefing (12/12/2021) with Professor Clérice, we decided to reorganize the repository in a way that would allow two things:
- Better documentation of the entries and of each step of the work, so as to make the procedure **clear** for anyone who looks into the repository and wishes to be guided through it. This will be achieved by creating issues for every upload/step/enhancement/documentation procedure, and for every problem encountered, before any push/pull requests that correspond to it.
- Better structuring of the folders to facilitate external verification of the models, the training corpus and its initial transcription, and the verification corpus (ground truth). This allows for **easier collaboration** at a later stage if necessary for the Boccace project.
|
1.0
|
non_process
| 0
|
19,793
| 26,177,603,370
|
IssuesEvent
|
2023-01-02 11:45:36
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
reopened
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Sun Jan 1 03:32 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3816326102)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Sun Jan 1 05:46 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3816840649)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Sun Jan 1 03:47 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3816523942)**
|
1.0
|
process
| 1
|
791,011
| 27,846,594,012
|
IssuesEvent
|
2023-03-20 15:51:37
|
fractal-analytics-platform/fractal-server
|
https://api.github.com/repos/fractal-analytics-platform/fractal-server
|
opened
|
Review `SlurmConfig` to specify which attributes need to scale up with number of tasks
|
High Priority
|
Once we include many-tasks SLURM jobs (ref #572), some of the attributes in `SlurmConfig` (also ref #580) need to be treated in a special way - because they need to scale with the number of parallel tasks (the one which I typically call `n_parallel_ftasks_per_script`).
That is (a minimal sketch follows this list):
* They need to be stored internally as numbers (rather than strings);
* They need to have the correct units (MB, GB, ...);
* They need to scale up proportionally to `n_parallel_ftasks_per_script`, in the common SBATCH options of a script (that is, the ones in the preamble - rather than the one in the `srun` commands).
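A minimal sketch of the special treatment described above, under assumed names (`parse_mem_to_mb` and `scaled_sbatch_mem` are illustrative, not fractal-server's actual API): memory attributes are parsed into integer MB and multiplied by `n_parallel_ftasks_per_script` before landing in the script preamble.
```python
# Illustrative only: store memory as integer MB and scale it by the number
# of parallel tasks per SLURM script when writing the #SBATCH preamble.
_UNITS_TO_MB = {"M": 1, "G": 1024, "T": 1024 * 1024}

def parse_mem_to_mb(raw: str) -> int:
    """Convert strings like '64G' or '512M' into an integer number of MB."""
    raw = raw.strip().upper()
    if raw and raw[-1] in _UNITS_TO_MB:
        return int(raw[:-1]) * _UNITS_TO_MB[raw[-1]]
    return int(raw)  # bare numbers are treated as MB (SLURM's default unit)

def scaled_sbatch_mem(mem_per_task_mb: int, n_parallel_ftasks_per_script: int) -> str:
    """Memory line for the common preamble: scales with the task count."""
    return f"#SBATCH --mem={mem_per_task_mb * n_parallel_ftasks_per_script}M"

# Example: 8 parallel tasks at 16G each -> '#SBATCH --mem=131072M'
print(scaled_sbatch_mem(parse_mem_to_mb("16G"), 8))
```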
|
1.0
|
non_process
| 0
|
49,824
| 7,542,379,999
|
IssuesEvent
|
2018-04-17 12:51:21
|
molgenis/molgenis
|
https://api.github.com/repos/molgenis/molgenis
|
closed
|
Import API documentation broken example link
|
6.0 bug documentation priority-first
|
### How to Reproduce
Go to advanced data upload documentation: https://molgenis.gitbooks.io/molgenis/content/user_documentation/guide-upload.html
Scroll down to import api example
### Expected behavior
Working example with link to file in our repo, for instance:
https://rawgit.com/molgenis/molgenis/master/molgenis-data-vcf/src/test/resources/testFile.vcf
### Observed behavior

|
1.0
|
non_process
| 0
|
9,209
| 12,239,369,978
|
IssuesEvent
|
2020-05-04 21:32:48
|
nrnb/GoogleSummerOfCode
|
https://api.github.com/repos/nrnb/GoogleSummerOfCode
|
closed
|
Develop a system that generates various spatial SBML models using deep learning
|
Difficulty: 2 Image processing Java Machine learning Python SBML XML
|
### Background
[SBML (Systems Biology Markup Language)](http://sbml.org) has several extension packages that extend its capability. One of these extensions, [Spatial Processes (spatial)](http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/spatial), supports describing processes that involve a spatial component and the geometries involved. The SBML spatial extension enables users to build a spatial model and run a [spatial simulation](https://github.com/funasoul/docker-spatialsim/raw/master/images/sam2d_s1_cyt.gif).
We have been working on the development of a software tool, [XitoSBML](https://github.com/spatialsimulator/XitoSBML): a spatial model builder that will generate a spatial SBML model from microscopic images.
Although XitoSBML provides a user-friendly UI for creating a spatial SBML model, only a few spatial SBML models exist for spatial simulation. In order to use XitoSBML, cell regions in microscopic images need to be segmented beforehand by image processing. This process (called segmentation) is the bottleneck of creating spatial SBML models.
On the other hand, the accuracy of image processing using deep learning has been remarkable in recent years, and methods for [highly accurate segmentation](https://arxiv.org/abs/1505.04597) have been proposed.
This summer, we would like to mentor a student who will implement a system that automatically generates various spatial SBML models by comprehensively segmenting microscopic images of cells using deep learning and XitoSBML.
### Goal
As described in the "Background" section, the project can be broken down into the following 4 tasks.
- Collect images from a database in which microscopic images of cells have been published.
- Prepare some datasets which will be used by deep learning algorithm for performing segmentation.
- Train the learning machine (deep learning algorithm) to comprehensively segment microscopic images of cells (a minimal PyTorch training-step sketch follows the framework list below)
- Implement a software tool to automatically convert segmented images to Spatial SBML Models using XitoSBML
The following API / frameworks will be used for this project.
- [PyTorch](https://pytorch.org/) ... for deep learning
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x) ... for XitoSBML
- [JSBML](http://sbml.org/Software/JSBML) ... for XitoSBML
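As a hedged illustration of the segmentation training step (a toy stand-in, not the project's actual U-Net or XitoSBML code; the tiny network and the fake batch below are invented for demonstration):
```python
# Toy PyTorch training step for binary cell segmentation (illustrative only).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy stand-in for a U-Net-style segmentation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),  # one channel of mask logits
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise binary cross-entropy

# One training step on a fake batch of 4 grayscale 64x64 images and masks.
images = torch.rand(4, 1, 64, 64)
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()
optimizer.zero_grad()
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```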
### Difficulty Level 2
Although this project seems to have many tasks to solve, none of them requires an enormous amount of code or time, because convenient APIs, well-documented API docs, and an existing implementation of the learning machine are already available. The most important parts are understanding the specification of the SBML spatial extension and preparing the training datasets.
### Skills
Java and Python programming skills and some basic knowledge of handling XML documents are required. It is nice to have knowledge of, or experience with, SBML, image processing, and the mathematical background of machine learning.
- (essential) Java, Python, XML
- (nice to have) SBML, Image processing, mathematical background on machine learning
### Public Repository
- [XitoSBML](https://github.com/spatialsimulator/XitoSBML)
### References
- [SBML Spatial specification rel 0.94](https://sourceforge.net/p/sbml/code/HEAD/tree/trunk/specifications/sbml-level-3/version-1/spatial/specification/spatial-v1-sbml-l3v1-rel0.94.pdf)
- [JSBML Documentation](http://sbml.org/Software/JSBML/docs)
- [ImageJ API](https://imagej.net/Developing_Plugins_for_ImageJ_1.x)
- [PyTorch](https://pytorch.org/)
### Potential Mentors
- [Akira Funahashi](https://github.com/funasoul) Keio University, Japan
- [Yuta Tokuoka](https://github.com/tokkuman) Keio University, Japan
- [Kaito Ii](https://github.com/kaitoii11) Hewlett-Packard Japan, Ltd. , Japan
### Contact
[Akira Funahashi](mailto:funasoul@gmail.com)
[Yuta Tokuoka](mailto:tokuoka@fun.bio.keio.ac.jp)
[Kaito Ii](mailto:kaitoii1111@gmail.com)
|
1.0
|
process
| 1
|
3,356
| 6,487,654,988
|
IssuesEvent
|
2017-08-20 10:03:08
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
First time blockScraper runs it runs silently until block 47,000
|
apps-blockScrape status-inprocess type-enhancement
|
It feels weird since nothing is happening for quite a while.
Also, I should be able to predict how long it will take to slurp the whole chain, because I know how long it has taken to get where it currently is and what the total block count is.
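A minimal sketch of that ETA estimate, with hypothetical names (`eta_seconds` is not blockScraper's API), assuming a roughly constant scrape rate:
```python
# Illustrative ETA: extrapolate from blocks scraped so far and elapsed time.
import time

def eta_seconds(blocks_done: int, blocks_total: int, started_at: float) -> float:
    """Estimated seconds remaining, assuming a constant scrape rate."""
    elapsed = time.time() - started_at
    rate = blocks_done / elapsed if elapsed > 0 else 0.0
    remaining = blocks_total - blocks_done
    return remaining / rate if rate > 0 else float("inf")

# Example: 47,000 blocks done out of 4,000,000 after ten minutes of scraping.
start = time.time() - 600
print(f"ETA: {eta_seconds(47_000, 4_000_000, start) / 3600:.1f} hours")
```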
|
1.0
|
process
| 1
|
10,024
| 13,044,142,999
|
IssuesEvent
|
2020-07-29 03:43:48
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `OctString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `OctString` from TiDB to the coprocessor. (A Python sketch of the expected `OCT()` semantics follows the links below.)
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
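Not the Rust port itself, but a hedged Python sketch of the behavior the ported function must reproduce (assuming MySQL's `OCT(N)`, which returns the octal string of `N` treated as a 64-bit unsigned integer, with `NULL` mapping to `NULL`):
```python
# Reference semantics for OCT(), assuming MySQL's 64-bit unsigned behavior.
def oct_string(n):
    if n is None:
        return None  # NULL in, NULL out
    # Negative values wrap around as two's-complement 64-bit integers.
    return format(n & 0xFFFFFFFFFFFFFFFF, "o")

assert oct_string(12) == "14"
assert oct_string(None) is None
assert oct_string(-1) == "1777777777777777777777"  # 64-bit wrap, as in MySQL
```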
|
2.0
|
process
| 1
|
181,629
| 14,061,180,045
|
IssuesEvent
|
2020-11-03 07:39:57
|
eclipse/che
|
https://api.github.com/repos/eclipse/che
|
opened
|
Happy Path failing because of missing chectl parameter
|
e2e-test/failure kind/bug
|
### Describe the bug
The workspace for the Happy Path test is not created, so the Happy Path fails. The problem is that `chectl` expects an extra parameter which is not present in the command:
```
chectl workspace:create --start --access-token <token> --chenamespace=eclipse-che --devfile=https://raw.githubusercontent.com/eclipse/che/master/tests/e2e/files/happy-path/happy-path-workspace.yaml
› Current Kubernetes context: 'eclipse-che/192-168-42-57:8443/developer'
Error: Eclipse Che server API endpoint is required. Use
'--che-api-endpoint' to provide it.
```
### Che version
<!-- (if workspace is running, version can be obtained with help/about menu) -->
- [ ] latest
- [x] nightly
- [ ] other: please specify
### Steps to reproduce
Run the job for testing Happy Path on nightly Che with nightly chectl.
### Expected behavior
Happy Path should pass.
### Runtime
- [ ] kubernetes (include output of `kubectl version`)
- [ ] Openshift (include output of `oc version`)
- [x] minikube (include output of `minikube version` and `kubectl version`) [job](https://codeready-workspaces-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/che-pr-tests/view/K8S/job/Che-Theia-PR-check-E2E-Happy-path-tests-against-K8S__reserved/1834/console)
- [x] minishift (include output of `minishift version` and `oc version`) [job](https://ci.centos.org/view/Devtools/job/devtools-che-nightly-happy-path-test/269/console)
- [ ] docker-desktop + K8S (include output of `docker version` and `kubectl version`)
- [ ] other: (please specify)
### Screenshots

### Installation method
- [x] chectl
> chectl server:start --listr-renderer=verbose -a operator -p minishift --k8spodreadytimeout=360000 --che-operator-cr-patch-yaml=/tmp/custom-resource-patch.yaml --chenamespace=eclipse-che
- [ ] OperatorHub
- [ ] I don't know
### Environment
- [ ] my computer
- [ ] Windows
- [ ] Linux
- [ ] macOS
- [ ] Cloud
- [ ] Amazon
- [ ] Azure
- [ ] GCE
- [ ] other (please specify)
- [x] other: please specify Linux
|
1.0
|
non_process
| 0
|
2,793
| 5,723,398,767
|
IssuesEvent
|
2017-04-20 12:09:02
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Table does not refresh automatically when I search for data
|
bug inprocess
|
Hello,
When I search for data, the table does not refresh automatically to display the results found, and I have to change the pagination or the number of rows displayed to see the result.
I did not have this issue before, but it appeared once I started using Tabs containing multiple Tab components generated with a loop.
Are there any custom options to fix it?
Thanks in advance for your reply.
|
1.0
|
process
| 1
|
5,679
| 8,558,442,126
|
IssuesEvent
|
2018-11-08 18:16:19
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][needs-docs][processing] add parameter representing raster band
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/f96442884b248694b36405f6909bcb1740baed27 by web-flow
[FEATURE][needs-docs][processing] add parameter representing raster band
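A hedged sketch of how a Processing algorithm can declare the new band parameter (QGIS 3 Python API from memory; it only runs inside a QGIS environment, and the algorithm skeleton is deliberately incomplete):
```python
# Sketch: declare a band parameter bound to a raster-layer parameter.
from qgis.core import (
    QgsProcessingAlgorithm,
    QgsProcessingParameterBand,
    QgsProcessingParameterRasterLayer,
)

class BandDemoAlgorithm(QgsProcessingAlgorithm):
    def initAlgorithm(self, config=None):
        self.addParameter(
            QgsProcessingParameterRasterLayer("INPUT", "Input raster"))
        # The new parameter type: a band picker tied to the INPUT layer.
        self.addParameter(
            QgsProcessingParameterBand(
                "BAND", "Band number", parentLayerParameterName="INPUT"))
```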
|
1.0
|
process
| 1
|
46,556
| 5,821,142,846
|
IssuesEvent
|
2017-05-06 02:13:01
|
shelljs/shelljs
|
https://api.github.com/repos/shelljs/shelljs
|
closed
|
Echo tests unnecessarily run tests in own process
|
high priority test
|
To prevent writing to the process' own stdout, the `echo` tests were written to run each test in its own process, and the stdout of that process was captured. Because ava runs each test in its own process anyway, the `echo` tests no longer need to run in separate processes.
|
1.0
|
non_process
| 0
|
11,848
| 14,662,501,030
|
IssuesEvent
|
2020-12-29 07:22:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Study with 'Token validation and Eligibility test' > Number of invited count is not reducing if user not eligible for the study
|
Bug P1 Participant manager Process: Fixed Process: Tested QA
|
Steps
1. Join a study with 'Token validation and Eligibility test'
2. Withdraw from the study
3. Enable invitation and send invitation
4. Fail the eligibility test and observe the number of invited count
AR: Count is not reducing
ER: Number of invited count should be reduced
[Note: this should be handled in the Sites (including Site participant registry > All and Invited tabs), Studies, and Apps tabs]

|
2.0
|
process
| 1
|
710,830
| 24,437,268,279
|
IssuesEvent
|
2022-10-06 12:24:03
|
zitadel/zitadel
|
https://api.github.com/repos/zitadel/zitadel
|
closed
|
[Actions] add "customise token" flow types
|
type: enhancement lang: go category: backend category: frontend lang: angular priority: high
|
As a developer I want to add custom claims to userinfo and access tokens because my application needs additional information I want to retrieve from ZITADEL.
create action during set user info to add custom claims (only extend)
- [ ] new flow type "customise token"
- [ ] new trigger "pre user info creation"
- during user info is called (via endpoint or id token creation)
- allow http client, user- and org-metadata
- `AppendClaim` function on input param (userinfo-object, token) to add additional claims
- [ ] new trigger "pre access token creation"
- during creation of access token via API
- allow http client, user- and org-metadata
- `AppendClaim` function on input param (userinfo-object, token) to add additional claims
- [ ] check if login flow has to be extended as well
_Originally posted by @adlerhurst in https://github.com/zitadel/zitadel/issues/4265#issuecomment-1237065338_
|
1.0
|
non_process
| 0
|
216,043
| 7,300,630,862
|
IssuesEvent
|
2018-02-27 00:39:50
|
ZetaGlest/zetaglest-data
|
https://api.github.com/repos/ZetaGlest/zetaglest-data
|
closed
|
[submission]The Seventh Element
|
help wanted priority: normal scenarios
|
**Seventh Element:**
**Download Link:**
[https://github.com/KeithSammut/seventh_element](https://github.com/KeithSammut/seventh_element)
**Scenario Validation:**
[https://gist.github.com/KeithSammut/b2000f8823479977b3cf627c3ac7df1d](https://gist.github.com/KeithSammut/b2000f8823479977b3cf627c3ac7df1d)
|
1.0
|
non_process
| 0
|
18,294
| 6,574,215,353
|
IssuesEvent
|
2017-09-11 12:00:26
|
csmk/frabjous
|
https://api.github.com/repos/csmk/frabjous
|
closed
|
www-apps/spreed-webrtc: new package
|
in progress new ebuild
|
> Spreed WebRTC implements a WebRTC audio/video call and conferencing server and web client.
- https://www.spreed.me
- https://github.com/strukturag/spreed-webrtc
|
1.0
|
non_process
| 0
|
16,940
| 22,291,482,889
|
IssuesEvent
|
2022-06-12 12:34:25
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
How to implement pullhttpflv and then serve httpflv sub locally
|
#Bug #Question *In process *Indefinite delay
|
Thanks for such a great streaming-media library.
I looked carefully through the demos but could not find HTTP-FLV publishing.
I want to pull a stream from a remote HTTP-FLV source and then relay it to a local HTTP-FLV endpoint, relying only on the package, without shipping an extra file alongside it.
I originally tried using http to read the remote byte stream directly and then having an http server write the byte stream to a local address, but after a long time debugging I kept hitting all kinds of errors 😢
|
1.0
|
process
| 1
|
1,422
| 3,988,784,458
|
IssuesEvent
|
2016-05-09 11:23:34
|
thesgc/chembiohub_helpdesk
|
https://api.github.com/repos/thesgc/chembiohub_helpdesk
|
reopened
|
All references to $rootScope.projects and cbh.projects should be replaced as they can cause unexplainable crashes in the application
|
processed backlog suggestion Query - done?
|
If a page references these two objects, then bad things can happen if you link to that page from another page.
It is much better to reference the global variable projectList, as this will be loaded no matter what type of page refresh takes you to a particular page.
|
1.0
|
process
| 1
|
7,477
| 6,968,814,685
|
IssuesEvent
|
2017-12-11 00:22:16
|
chlorophyll/chlorophyll
|
https://api.github.com/repos/chlorophyll/chlorophyll
|
closed
|
Backend: File loader
|
backend feature infrastructure
|
Load the JSON blob description of a project and initialize all the relevant structures on the backend.
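A hedged, generic sketch of such a loader (the top-level keys checked here are hypothetical, not chlorophyll's actual schema):
```python
# Illustrative project-file loader: parse the JSON blob and fail fast on
# missing top-level sections before initializing backend structures.
import json

def load_project(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as fh:
        project = json.load(fh)
    for key in ("model", "mappings", "patterns"):  # hypothetical sections
        if key not in project:
            raise ValueError(f"project file missing required section: {key!r}")
    return project
```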
|
1.0
|
non_process
| 0
|
20,799
| 27,549,138,087
|
IssuesEvent
|
2023-03-07 13:52:55
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
closed
|
"Field reference lost" error if text is selected when clicking Add/Edit Citation
|
Word Processor Integration
|
https://forums.zotero.org/discussion/comment/429709/#Comment_429709
At least on macOS. If we can detect this, we should either unselect and just put the cursor at the end or show a clearer error if we can't unselect.
|
1.0
|
process
| 1
|
5,926
| 8,750,896,742
|
IssuesEvent
|
2018-12-13 20:36:24
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
vsphere-template post-processor issue
|
bug docs good first issue post-processor/vsphere-template
|
Packer Version: 1.3.1
The **vsphere-template** post-processor requires that you set `keep_registered` to `true` in the **vmware-iso** builder, even if you are using the artifact from the **vsphere** post-processor and not the artifact created directly from the **vmware-iso** builder.
Ideally this would not be a requirement because I would not like to manually delete a VM after each build. If this is the desired behavior then it should be mentioned in the documentation.
https://github.com/hashicorp/packer/blob/c8970b86eba142eac349c7ef292959386be9a6b4/post-processor/vsphere-template/post-processor.go#L91-L105
```
==> vmware-iso: Connected to SSH!
==> vmware-iso: Gracefully halting virtual machine...
vmware-iso: Waiting for VMware to clean up after itself...
==> vmware-iso: Deleting unnecessary VMware files...
vmware-iso: Deleting: /vmfs/volumes/datastore1/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0/vmware.log
==> vmware-iso: Compacting all attached virtual disks...
vmware-iso: Compacting virtual disk 1
==> vmware-iso: Cleaning VMX prior to finishing up...
vmware-iso: Unmounting floppy from VMX...
vmware-iso: Detaching ISO from CD-ROM device...
vmware-iso: Disabling VNC server...
vmware-iso: Removing Ethernet Interfaces...
==> vmware-iso: Exporting virtual machine...
vmware-iso: Executing: ovftool --noSSLVerify=true --skipManifestCheck -tt=ova vi://root:****@<ip>/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0 TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0
vmware-iso: Opening VI source: vi://root@<ip>:443/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0
vmware-iso: Opening VI source: vi://root@<ip>:443/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0
vmware-iso: Opening OVA target: TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0
vmware-iso: Writing OVA package: TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0.ova
Transfer Completed
vmware-iso: Completed successfully
vmware-iso:
==> vmware-iso: Destroying virtual machine...
==> vmware-iso: Running post-processor: vsphere
vmware-iso (vsphere): Uploading TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0/TEST-ubuntu18-5bb3ba57-1b31-b0b8-fc23-36f8e7827ee0.ova to vSphere
vmware-iso (vsphere):
==> vmware-iso: Running post-processor: vsphere-template
Build 'vmware-iso' errored: 1 error(s) occurred:
* Post-processor failed: To use this post-processor with exporting behavior you need set keep_registered as true
==> Some builds didn't complete successfully and had errors:
--> vmware-iso: 1 error(s) occurred:
* Post-processor failed: To use this post-processor with exporting behavior you need set keep_registered as true
==> Builds finished but no artifacts were created.
```
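A minimal sketch of the workaround the error message demands, expressed as the JSON fragment a template would need (printed from Python here; every other required builder field, such as iso_url, SSH settings, and vSphere credentials, is omitted for brevity):
```python
# Sketch: the keep_registered flag the vsphere-template post-processor checks.
import json

template = {
    "builders": [
        {
            "type": "vmware-iso",
            # Keep the source VM registered so vsphere-template can later
            # mark the uploaded VM as a template.
            "keep_registered": True,
        }
    ],
    "post-processors": [
        # A chained sequence: upload with vsphere, then convert to template.
        [
            {"type": "vsphere"},
            {"type": "vsphere-template"},
        ]
    ],
}
print(json.dumps(template, indent=2))
```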
|
1.0
|
process
| 1
|
753,466
| 26,347,725,322
|
IssuesEvent
|
2023-01-11 00:15:57
|
tendermint/tendermint
|
https://api.github.com/repos/tendermint/tendermint
|
closed
|
State sync from local snapshot: Implementation
|
stale backlog-priority
|
Tasks:
- [ ] Implementation (existing implementation of ADR083: #9541)
- [ ] Introduce necessary changes to documentation and/or spec to keep them up to date
- [ ] Check if #4642 can be closed once implementation merged
DoD:
- Implementation code complete and all PRs merged
- Any changes needed to the documentation/spec are merged
|
1.0
|
State sync from local snapshot: Implementation - Tasks:
- [ ] Implementation (existing implementation of ADR083: #9541)
- [ ] Introduce necessary changes to documentation and/or spec to keep them up to date
- [ ] Check if #4642 can be closed once implementation merged
DoD:
- Implementation code complete and all PRs merged
- Any changes needed to the documentation/spec are merged
|
non_process
|
state sync from local snapshot implementation tasks implementation existing implementation of introduce necessary changes to documentation and or spec to keep them up to date check if can be closed once implementation merged dod implementation code complete and all prs merged any changes needed to the documentation spec are merged
| 0
|
752,176
| 26,275,938,968
|
IssuesEvent
|
2023-01-06 22:01:29
|
literalpie/storybook-framework-qwik
|
https://api.github.com/repos/literalpie/storybook-framework-qwik
|
closed
|
Failed to find Storybook presets (Windows)
|
high-priority
|
I tried to use this extension, but I'm facing the following failure. Any idea?
```powershell
E:\git\qwik-test\qwik-app>npm run storybook
> storybook
> storybook dev -p 6006
@storybook/cli v7.0.0-beta.19
(node:21796) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
WARN Failed to load preset: "storybook-framework-qwik\\preset"
ERR! Error: Cannot find module 'storybook-framework-qwik\preset'
ERR! Require stack:
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\telemetry\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\bin\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\storybook\index.js
ERR! at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
ERR! at Module._load (node:internal/modules/cjs/loader:841:27)
ERR! at Module.require (node:internal/modules/cjs/loader:1061:19)
ERR! at require (node:internal/modules/cjs/helpers:103:18)
ERR! at interopRequireDefault (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:6:21)
ERR! at getContent (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:164)
ERR! at loadPreset (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:349)
ERR! at E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:409
ERR! at Array.map (<anonymous>)
ERR! at loadPresets (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:391)
ERR! Error: Cannot find module 'storybook-framework-qwik\preset'
ERR! Require stack:
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\telemetry\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\bin\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\storybook\index.js
ERR! at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
ERR! at Module._load (node:internal/modules/cjs/loader:841:27)
ERR! at Module.require (node:internal/modules/cjs/loader:1061:19)
ERR! at require (node:internal/modules/cjs/helpers:103:18)
ERR! at interopRequireDefault (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:6:21)
ERR! at getContent (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:164)
ERR! at loadPreset (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:349)
ERR! at E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:409
ERR! at Array.map (<anonymous>)
ERR! at loadPresets (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:391) {
ERR! code: 'MODULE_NOT_FOUND',
ERR! requireStack: [
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\core-common\\dist\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\telemetry\\dist\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\cli\\dist\\generate.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\cli\\bin\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\storybook\\index.js'
ERR! ]
ERR! }
ERR! TypeError: Cannot destructure property 'renderer' of '(intermediate value)' as it is undefined.
ERR! at buildDevStandalone (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:1795)
ERR! at async withTelemetry (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:5533)
ERR! at async dev (E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js:441:440)
ERR! TypeError: Cannot destructure property 'renderer' of '(intermediate value)' as it is undefined.
ERR! at buildDevStandalone (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:1795)
ERR! at async withTelemetry (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:5533)
ERR! at async dev (E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js:441:440)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser.
```
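The failing require of `storybook-framework-qwik\preset` points at a Windows path-separator problem in preset resolution. A commonly suggested workaround (a sketch only, assuming the standard `.storybook/main.ts` layout) is to hand Storybook an absolute, already-resolved path to the framework package:
```ts
// .storybook/main.ts -- hypothetical workaround sketch
import { dirname, join } from "path";

// Resolve a package to an absolute path so preset lookup does not
// stumble over Windows backslash separators.
function getAbsolutePath(value: string): string {
  return dirname(require.resolve(join(value, "package.json")));
}

const config = {
  stories: ["../src/**/*.stories.@(js|jsx|ts|tsx)"],
  framework: {
    name: getAbsolutePath("storybook-framework-qwik"),
    options: {},
  },
};
export default config;
```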
|
1.0
|
Failed to find Storybook presets (Windows) - I tried to use this extension, but I'm facing the following failure. Any idea?
```powershell
E:\git\qwik-test\qwik-app>npm run storybook
> storybook
> storybook dev -p 6006
@storybook/cli v7.0.0-beta.19
(node:21796) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
WARN Failed to load preset: "storybook-framework-qwik\\preset"
ERR! Error: Cannot find module 'storybook-framework-qwik\preset'
ERR! Require stack:
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\telemetry\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\bin\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\storybook\index.js
ERR! at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
ERR! at Module._load (node:internal/modules/cjs/loader:841:27)
ERR! at Module.require (node:internal/modules/cjs/loader:1061:19)
ERR! at require (node:internal/modules/cjs/helpers:103:18)
ERR! at interopRequireDefault (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:6:21)
ERR! at getContent (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:164)
ERR! at loadPreset (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:349)
ERR! at E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:409
ERR! at Array.map (<anonymous>)
ERR! at loadPresets (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:391)
ERR! Error: Cannot find module 'storybook-framework-qwik\preset'
ERR! Require stack:
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\telemetry\dist\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\bin\index.js
ERR! - E:\git\qwik-test\qwik-app\node_modules\storybook\index.js
ERR! at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
ERR! at Module._load (node:internal/modules/cjs/loader:841:27)
ERR! at Module.require (node:internal/modules/cjs/loader:1061:19)
ERR! at require (node:internal/modules/cjs/helpers:103:18)
ERR! at interopRequireDefault (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:6:21)
ERR! at getContent (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:164)
ERR! at loadPreset (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:10:349)
ERR! at E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:409
ERR! at Array.map (<anonymous>)
ERR! at loadPresets (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-common\dist\index.js:12:391) {
ERR! code: 'MODULE_NOT_FOUND',
ERR! requireStack: [
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\core-common\\dist\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\telemetry\\dist\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\cli\\dist\\generate.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\@storybook\\cli\\bin\\index.js',
ERR! 'E:\\git\\qwik-test\\qwik-app\\node_modules\\storybook\\index.js'
ERR! ]
ERR! }
ERR! TypeError: Cannot destructure property 'renderer' of '(intermediate value)' as it is undefined.
ERR! at buildDevStandalone (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:1795)
ERR! at async withTelemetry (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:5533)
ERR! at async dev (E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js:441:440)
ERR! TypeError: Cannot destructure property 'renderer' of '(intermediate value)' as it is undefined.
ERR! at buildDevStandalone (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:1795)
ERR! at async withTelemetry (E:\git\qwik-test\qwik-app\node_modules\@storybook\core-server\dist\index.js:33:5533)
ERR! at async dev (E:\git\qwik-test\qwik-app\node_modules\@storybook\cli\dist\generate.js:441:440)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser.
```
|
non_process
|
failed to find storybook presets windows i tried to use this extension but i m facing the following failure any idea powershell e git qwik test qwik app npm run storybook storybook storybook dev p storybook cli beta node experimentalwarning the fetch api is an experimental feature this feature could change at any time use node trace warnings to show where the warning was created warn failed to load preset storybook framework qwik preset err error cannot find module storybook framework qwik preset err require stack err e git qwik test qwik app node modules storybook core common dist index js err e git qwik test qwik app node modules storybook telemetry dist index js err e git qwik test qwik app node modules storybook cli dist generate js err e git qwik test qwik app node modules storybook cli bin index js err e git qwik test qwik app node modules storybook index js err at module resolvefilename node internal modules cjs loader err at module load node internal modules cjs loader err at module require node internal modules cjs loader err at require node internal modules cjs helpers err at interoprequiredefault e git qwik test qwik app node modules storybook core common dist index js err at getcontent e git qwik test qwik app node modules storybook core common dist index js err at loadpreset e git qwik test qwik app node modules storybook core common dist index js err at e git qwik test qwik app node modules storybook core common dist index js err at array map err at loadpresets e git qwik test qwik app node modules storybook core common dist index js err error cannot find module storybook framework qwik preset err require stack err e git qwik test qwik app node modules storybook core common dist index js err e git qwik test qwik app node modules storybook telemetry dist index js err e git qwik test qwik app node modules storybook cli dist generate js err e git qwik test qwik app node modules storybook cli bin index js err e git qwik test qwik app node modules storybook index js err at module resolvefilename node internal modules cjs loader err at module load node internal modules cjs loader err at module require node internal modules cjs loader err at require node internal modules cjs helpers err at interoprequiredefault e git qwik test qwik app node modules storybook core common dist index js err at getcontent e git qwik test qwik app node modules storybook core common dist index js err at loadpreset e git qwik test qwik app node modules storybook core common dist index js err at e git qwik test qwik app node modules storybook core common dist index js err at array map err at loadpresets e git qwik test qwik app node modules storybook core common dist index js err code module not found err requirestack err e git qwik test qwik app node modules storybook core common dist index js err e git qwik test qwik app node modules storybook telemetry dist index js err e git qwik test qwik app node modules storybook cli dist generate js err e git qwik test qwik app node modules storybook cli bin index js err e git qwik test qwik app node modules storybook index js err err err typeerror cannot destructure property renderer of intermediate value as it is undefined err at builddevstandalone e git qwik test qwik app node modules storybook core server dist index js err at async withtelemetry e git qwik test qwik app node modules storybook core server dist index js err at async dev e git qwik test qwik app node modules storybook cli dist generate js err typeerror cannot destructure property renderer of intermediate value as it is undefined err at builddevstandalone e git qwik test qwik app node modules storybook core server dist index js err at async withtelemetry e git qwik test qwik app node modules storybook core server dist index js err at async dev e git qwik test qwik app node modules storybook cli dist generate js warn broken build fix the error above warn you may need to refresh the browser
| 0
|
4,179
| 7,113,366,214
|
IssuesEvent
|
2018-01-17 20:12:03
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Discussion/Tracking SnapshotCreator support
|
discuss lib / src performance process
|
Relevant earlier discussion on https://github.com/nodejs/node/issues/9473 .
`v8::SnapshotCreator` is a means to capture a heap snapshot of JS, CodeGen, and C++ bindings and revive them without re-performing the loading/evaluation steps that produced them. This issue discusses what tools like `webpack`, which run many times and have significant startup cost, would need in order to utilize snapshots.
CC: @hashseed @refack
- [ ] All C++ bindings available as `intptr_t`
- [ ] C++ addon API/declaration to register external `intptr_t`s
- [ ] CLI flags for `--make-snapshot` and `--from-snapshot`
- [ ] JS API to declare `main()` functions for snapshots (save to `v8::Private::ForApi($main_symbol_name)`).
- [ ] JS API to declare `vm.Context` for snapshot
- [ ] Serializer for
- [ ] ArrayBuffer/TypedArrays
- [ ] HandleWrap
- [ ] ObjectWrap
- [ ] Timer
- [ ] STDIO
- [ ] `require.cache` paths?
- [ ] File format for snapshots
- [ ] C++ Addon inlining / symlinking
~~The v8 API might be able to have [some changes made](https://groups.google.com/forum/#!topic/v8-dev/P956F2VGmPo)~~ (LANDED)
Right now the v8 API would need a `--make-snapshot` CLI flag since `v8::SnapshotCreator` controls `Isolate` creation and node would need to use the created isolate.
Since all JS handles need to be closed when creating the snapshot, a `main()` function would need to be declared during snapshot creation after all possible preloading has occurred. The snapshot could then be taken when node exits if exiting normally (note, `unref`'d handles may still exist).
Some utility like `WarmUpSnapshotDataBlob` from v8, so that the JIT code is warm when the snapshot is loaded off disk, is also relevant.
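For orientation, the bare v8 API involved looks roughly like this (a sketch only; node's embedding would additionally have to register all the external `intptr_t` references listed above, and run its preload scripts before the blob is taken):
```cpp
// Sketch of creating a startup snapshot with v8::SnapshotCreator.
#include "v8.h"

v8::StartupData MakeSnapshot(const intptr_t* external_refs) {
  // The creator owns the isolate it creates.
  v8::SnapshotCreator creator(external_refs);
  v8::Isolate* isolate = creator.GetIsolate();
  {
    v8::HandleScope scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    // ... evaluate preload scripts in `context` here ...
    creator.SetDefaultContext(context);
  }
  // All local handles must be closed before the blob is created,
  // which is why a declared main() entry point is needed above.
  return creator.CreateBlob(
      v8::SnapshotCreator::FunctionCodeHandling::kKeep);
}
```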
|
1.0
|
Discussion/Tracking SnapshotCreator support - Relevant earlier discussion on https://github.com/nodejs/node/issues/9473 .
`v8::SnapshotCreator` is a means to capture a heap snapshot of JS, CodeGen, and C++ bindings and revive them without re-performing the loading/evaluation steps that produced them. This issue discusses what tools like `webpack`, which run many times and have significant startup cost, would need in order to utilize snapshots.
CC: @hashseed @refack
- [ ] All C++ bindings available as `intptr_t`
- [ ] C++ addon API/declaration to register external `intptr_t`s
- [ ] CLI flags for `--make-snapshot` and `--from-snapshot`
- [ ] JS API to declare `main()` functions for snapshots (save to `v8::Private::ForApi($main_symbol_name)`).
- [ ] JS API to declare `vm.Context` for snapshot
- [ ] Serializer for
- [ ] ArrayBuffer/TypedArrays
- [ ] HandleWrap
- [ ] ObjectWrap
- [ ] Timer
- [ ] STDIO
- [ ] `require.cache` paths?
- [ ] File format for snapshots
- [ ] C++ Addon inlining / symlinking
~~The v8 API might be able to have [some changes made](https://groups.google.com/forum/#!topic/v8-dev/P956F2VGmPo)~~ (LANDED)
Right now the v8 API would need a `--make-snapshot` CLI flag since `v8::SnapshotCreator` controls `Isolate` creation and node would need to use the created isolate.
Since all JS handles need to be closed when creating the snapshot, a `main()` function would need to be declared during snapshot creation after all possible preloading has occurred. The snapshot could then be taken when node exits if exiting normally (note, `unref`'d handles may still exist).
Some utility like `WarmUpSnapshotDataBlob` from v8, so that the JIT code is warm when the snapshot is loaded off disk, is also relevant.
|
process
|
discussion tracking snapshotcreator support relevant earlier discussion on snapshotcreator is a means to capture a heap snapshot of js codegen and c bindings and revive them w o performing loading evaluation steps that got to there this issue is discussing what would be needed for tools like webpack which run many times and have significant startup cost need in order to utilize snapshots cc hashseed refack all c bindings available as intptr t c addon api declaration to register external intptr t s cli flags for make snapshot and from snapshot js api to declare main functions for snapshots save to private forapi main symbol name js api to declare vm context for snapshot serializer for arraybuffer typedarrays handlewrap objectwrap timer stdio require cache paths file format for snapshots c addon inlining symlinking the api might be able to have landed right now the api would need a make snapshot cli flag since snapshotcreator controls isolate creation and node would need to use the created isolate since all js handles need to be closed when creating the snapshot a main function would need to be declared during snapshot creation after all possible preloading has occurred the snapshot could then be taken when node exits if exiting normally note unref d handles may still exist some utility like warmupsnapshotdatablob from so that the jit code is warm when loaded off disk also relevant
| 1
|
272,664
| 8,515,926,165
|
IssuesEvent
|
2018-10-31 23:43:10
|
projectcalico/calico-dcos
|
https://api.github.com/repos/projectcalico/calico-dcos
|
closed
|
Broken README link
|
priority/P2
|
> View the calico-dcos documentation for the latest release here.
The link on the README is broken for the above line.
|
1.0
|
Broken README link - > View the calico-dcos documentation for the latest release here.
The link on the README is broken for the above line.
|
non_process
|
broken readme link view the calico dcos documentation for the latest release here the link on the readme is broken for the above line
| 0
|
3,931
| 6,848,709,321
|
IssuesEvent
|
2017-11-13 19:27:45
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
Reformat code base to comply with both prettier & tslint
|
dev process refactoring
|
The two are not currently aligned: if you run `yarn format`, which is quite useful for complying with style guidelines, `yarn lint` will fail due to discrepancies in the rules (e.g. the semicolon rule). Also, much of the code base does not currently use the Prettier styling we have defined, so those files need to be updated. An illustrative config sketch follows below.
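As an illustration of the kind of alignment involved (the rule names are real Prettier/TSLint options, but the exact values here are a guess at the intended style, not the project's actual config), `.prettierrc`:
```json
{
  "semi": true,
  "singleQuote": true
}
```
and the matching rules in `tslint.json`:
```json
{
  "rules": {
    "semicolon": [true, "always"],
    "quotemark": [true, "single"]
  }
}
```
With both sides agreeing on semicolons and quote style, `yarn format` output passes `yarn lint`.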
|
1.0
|
Reformat code base to comply with both prettier & tslint - The two are not currently aligned: if you run `yarn format`, which is quite useful for complying with style guidelines, `yarn lint` will fail due to discrepancies in the rules (e.g. the semicolon rule). Also, much of the code base does not currently use the Prettier styling we have defined, so those files need to be updated.
|
process
|
reformat code base to comply with both prettier tslint as they are not currently aligned if you run yarn format which is quite useful for complying with style guidelines yarn lint will fail due to discrepancies in the rules e g semicolon also much of the code base does not currently use the prettier styling we have defined so they need to be updated
| 1
|
45,793
| 5,961,978,009
|
IssuesEvent
|
2017-05-29 19:57:01
|
fsr-itse/1327
|
https://api.github.com/repos/fsr-itse/1327
|
closed
|
Menu bar overlays the internal link creation dialog
|
[P] major [T] design/ui
|

There's not an awful lot to say about this ...
|
1.0
|
Menu bar overlays the internal link creation dialog - 
There's not an awful lot to say about this ...
|
non_process
|
menu bar overlays the internal link creation dialog there s not an awful lot to say about this
| 0
|
2,455
| 5,239,839,881
|
IssuesEvent
|
2017-01-31 11:06:42
|
MTG/freesound
|
https://api.github.com/repos/MTG/freesound
|
opened
|
Copy previews and uploaded sounds to secondary location
|
_Processing
|
For our internal testing purposes, we need to be able to have sound previews and sound original files duplicated in a different disk volume. In order to do that, previews should be copied to the alternative volume at the moment these are generated, and original files should also be copied to that volume once they are described.
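A minimal sketch of such a copy step in Python (the helper name and the volume roots are hypothetical, not actual Freesound code):
```python
import os
import shutil

def mirror_to_secondary(path, primary_root, secondary_root):
    """Copy a freshly written file to the secondary volume,
    preserving its path relative to the primary volume."""
    rel = os.path.relpath(path, primary_root)
    dest = os.path.join(secondary_root, rel)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(path, dest)  # copy2 keeps timestamps/metadata
```
It would be invoked once when a preview is generated and once when an uploaded original is described.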
|
1.0
|
Copy previews and uploaded sounds to secondary location - For our internal testing purposes, we need to be able to have sound previews and sound original files duplicated in a different disk volume. In order to do that, previews should be copied to the alternative volume at the moment these are generated, and original files should also be copied to that volume once they are described.
|
process
|
copy previews and uploaded sounds to secondary location for our internal testing purposes we need to be able to have sound previews and sound original files duplicated in a different disk volume in order to do that previews should be copied to the alternative volume at the moment these are generated and original files should also be copied to that volume once they are described
| 1
|
293,154
| 25,274,613,625
|
IssuesEvent
|
2022-11-16 11:44:26
|
gameserverapp/Platform
|
https://api.github.com/repos/gameserverapp/Platform
|
closed
|
Maps shown incorrectly
|
status: to be tested
|
Hi there, I'm using the rconconnect feature and we have 13 maps currently running. On the character screen of the dashboard section it's showing the island map for everyone.

|
1.0
|
Maps shown incorrectly - Hi there, I'm using the rconconnect feature and we have 13 maps currently running. On the character screen of the dashboard section it's showing the island map for everyone.

|
non_process
|
maps shown incorrectly hi there i m using the rconconnect feature and we have maps currently running on the character screen of the dashboard section it s showing the island map for everyone
| 0
|
788,105
| 27,743,037,418
|
IssuesEvent
|
2023-03-15 15:24:18
|
bivashy/MC-Auth-with-Link
|
https://api.github.com/repos/bivashy/MC-Auth-with-Link
|
closed
|
Query all linked account causes Exception
|
type: bug priority: medium
|
For example, if we use the admin panel, it'll throw that error. We should probably also select explicit columns for the grouped inner query in AuthAccountDao
```
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: java.sql.SQLException: Inner query must have only 1 select column specified instead of *
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at com.j256.ormlite.stmt.Where.in(Where.java:653)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at com.j256.ormlite.stmt.Where.in(Where.java:275)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.AuthAccountDao.lambda$queryAllLinkedAccounts$4(AuthAccountDao.java:57)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.SupplierExceptionCatcher.execute(SupplierExceptionCatcher.java:6)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.AuthAccountDao.queryAllLinkedAccounts(AuthAccountDao.java:56)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.AuthAccountDatabaseProxy.lambda$getAllLinkedAccounts$4(AuthAccountDatabaseProxy.java:56)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at java.base/java.lang.Thread.run(Thread.java:829)
```
Thanks to maksimka for providing this info. Issue started at: https://discord.com/channels/967375733443923978/1085500345410666506
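A sketch of the likely fix, assuming entity and column names (`AuthAccount`, `id`, `player_name`) guessed from the stack trace rather than taken from the actual code: ORMLite's `Where.in(...)` rejects an inner query that selects `*`, so the grouped sub-query needs an explicit single select column.
```java
import java.sql.SQLException;
import java.util.List;
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.stmt.QueryBuilder;

// Hypothetical sketch of the DAO query; names are guesses.
final class AuthAccountQueries {
    static List<AuthAccount> queryAllLinked(Dao<AuthAccount, Long> dao)
            throws SQLException {
        QueryBuilder<AuthAccount, Long> inner = dao.queryBuilder();
        inner.selectColumns("id")       // one column instead of "*"
             .groupBy("player_name");   // keep the existing grouping
        return dao.queryBuilder()
                  .where()
                  .in("id", inner)      // inner query is now legal for IN (...)
                  .query();
    }
}
```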
|
1.0
|
Query all linked account causes Exception - For example, if we use the admin panel, it'll throw that error. We should probably also select explicit columns for the grouped inner query in AuthAccountDao
```
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: java.sql.SQLException: Inner query must have only 1 select column specified instead of *
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at com.j256.ormlite.stmt.Where.in(Where.java:653)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at com.j256.ormlite.stmt.Where.in(Where.java:275)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.AuthAccountDao.lambda$queryAllLinkedAccounts$4(AuthAccountDao.java:57)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.SupplierExceptionCatcher.execute(SupplierExceptionCatcher.java:6)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.dao.AuthAccountDao.queryAllLinkedAccounts(AuthAccountDao.java:56)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at me.mastercapexd.auth.storage.AuthAccountDatabaseProxy.lambda$getAllLinkedAccounts$4(AuthAccountDatabaseProxy.java:56)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
[07:07:56] [Thread-203/ERROR] [com.velocitypowered.proxy.console.VelocityConsole]: at java.base/java.lang.Thread.run(Thread.java:829)
```
Thanks to maksimka for providing this info. Issue started at: https://discord.com/channels/967375733443923978/1085500345410666506
|
non_process
|
query all linked account causes exception for example if we use admin panel it ll throw that error probably we should also select columns with grouping in authaccountdao java sql sqlexception inner query must have only select column specified instead of at com ormlite stmt where in where java at com ormlite stmt where in where java at me mastercapexd auth storage dao authaccountdao lambda queryalllinkedaccounts authaccountdao java at me mastercapexd auth storage dao supplierexceptioncatcher execute supplierexceptioncatcher java at me mastercapexd auth storage dao authaccountdao queryalllinkedaccounts authaccountdao java at me mastercapexd auth storage authaccountdatabaseproxy lambda getalllinkedaccounts authaccountdatabaseproxy java at java base java util concurrent completablefuture asyncsupply run completablefuture java at java base java lang thread run thread java thanks to maksimka for providing this info issue started at
| 0
|
127,672
| 10,477,847,676
|
IssuesEvent
|
2019-09-23 21:57:09
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: tpccbench/nodes=3/cpu=4 failed
|
C-test-failure O-roachtest O-robot
|
SHA: https://github.com/cockroachdb/cockroach/commits/4784fe3c51545db5fb5d411937ec1db2ef2b9761
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=tpccbench/nodes=3/cpu=4 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1472753&tab=buildLog
```
The test failed on branch=provisional_201909060000_v19.2.0-beta.20190910, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190906-1472753/tpccbench/nodes=3/cpu=4/run_1
cluster.go:2114,tpcc.go:868,tpcc.go:579,test_runner.go:688: exit status 1
```
|
2.0
|
roachtest: tpccbench/nodes=3/cpu=4 failed - SHA: https://github.com/cockroachdb/cockroach/commits/4784fe3c51545db5fb5d411937ec1db2ef2b9761
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=tpccbench/nodes=3/cpu=4 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1472753&tab=buildLog
```
The test failed on branch=provisional_201909060000_v19.2.0-beta.20190910, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190906-1472753/tpccbench/nodes=3/cpu=4/run_1
cluster.go:2114,tpcc.go:868,tpcc.go:579,test_runner.go:688: exit status 1
```
|
non_process
|
roachtest tpccbench nodes cpu failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests tpccbench nodes cpu pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch provisional beta cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts tpccbench nodes cpu run cluster go tpcc go tpcc go test runner go exit status
| 0
|
22,239
| 30,790,272,897
|
IssuesEvent
|
2023-07-31 15:42:01
|
symfony/symfony-docs
|
https://api.github.com/repos/symfony/symfony-docs
|
closed
|
[Messenger][Process] add `RunProcessMessage` and `RunProcessMessageHand…
|
Process hasPR Messenger
|
| Q | A
| ------------ | ---
| Feature PR | symfony/symfony#49813
| PR author(s) | @kbond
| Merged in | 6.4
We created this issue to not forget to document this new feature. We would really appreciate if you can help us with this task. If you are not sure how to do it, please ask us and we will help you.
To fix this issue, please create a PR against the 6.4 branch in the [symfony-docs repository](https://github.com/symfony/symfony-docs).
Thank you! :smiley:
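For whoever picks this up, usage presumably boils down to dispatching the new message through the Messenger bus; a sketch based on the feature PR (class locations and constructor details should be verified against symfony/symfony#49813):
```php
<?php

use Symfony\Component\Messenger\MessageBusInterface;
use Symfony\Component\Process\Messenger\RunProcessMessage;

final class CacheWarmer
{
    public function __construct(private MessageBusInterface $bus) {}

    public function warmup(): void
    {
        // The message handler runs the command via symfony/process,
        // so the process executes wherever the consumer runs.
        $this->bus->dispatch(new RunProcessMessage(['ls', '-la']));
    }
}
```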
|
1.0
|
[Messenger][Process] add `RunProcessMessage` and `RunProcessMessageHand… - | Q | A
| ------------ | ---
| Feature PR | symfony/symfony#49813
| PR author(s) | @kbond
| Merged in | 6.4
We created this issue to not forget to document this new feature. We would really appreciate if you can help us with this task. If you are not sure how to do it, please ask us and we will help you.
To fix this issue, please create a PR against the 6.4 branch in the [symfony-docs repository](https://github.com/symfony/symfony-docs).
Thank you! :smiley:
|
process
|
add runprocessmessage and runprocessmessagehand… q a feature pr symfony symfony pr author s kbond merged in we created this issue to not forget to document this new feature we would really appreciate if you can help us with this task if you are not sure how to do it please ask us and we will help you to fix this issue please create a pr against the branch in the thank you smiley
| 1
|
53,084
| 13,110,974,761
|
IssuesEvent
|
2020-08-04 21:49:03
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Simples app for flutter build release ios is also getting build failed
|
a: build platform-ios severe: crash tool waiting for customer response
|
I have been facing a problem with the command `flutter build ios --release --verbose` for my other app, so I built a basic, bare-minimal Flutter app, just the counter example. I assigned the signing and capabilities etc., then ran `flutter build ios --release --verbose` again, and it also failed, meaning something else needs to be done for the build to succeed.
Here is the output of the logs, just the part where the failure shows.
```
Failed to build iOS app
[ ] Error output from Xcode build:
↳
[ ] ** BUILD FAILED **
The following build commands failed:
Ld /Users/mymac/Library/Developer/Xcode/DerivedData/Runner-cnsnreurrfqzvcbaxnkjmnpjzoqh/Build/Intermediates.noindex/Runner.build/Release-iphoneos/Runner.build/Objects-normal/armv7/Binary/Runner normal
armv7
Ld /Users/mymac/Library/Developer/Xcode/DerivedData/Runner-cnsnreurrfqzvcbaxnkjmnpjzoqh/Build/Intermediates.noindex/Runner.build/Release-iphoneos/Runner.build/Objects-normal/arm64/Binary/Runner normal
arm64
(2 failures)
```
So what should be done to get a successful release build, with obfuscation, so the app can be uploaded to the App Store?
|
1.0
|
Simples app for flutter build release ios is also getting build failed - I have been facing a problem with the command `flutter build ios --release --verbose` for my other app, so I built a basic, bare-minimal Flutter app, just the counter example. I assigned the signing and capabilities etc., then ran `flutter build ios --release --verbose` again, and it also failed, meaning something else needs to be done for the build to succeed.
Here is the output of the logs, just the part where the failure shows.
```
Failed to build iOS app
[ ] Error output from Xcode build:
↳
[ ] ** BUILD FAILED **
The following build commands failed:
Ld /Users/mymac/Library/Developer/Xcode/DerivedData/Runner-cnsnreurrfqzvcbaxnkjmnpjzoqh/Build/Intermediates.noindex/Runner.build/Release-iphoneos/Runner.build/Objects-normal/armv7/Binary/Runner normal
armv7
Ld /Users/mymac/Library/Developer/Xcode/DerivedData/Runner-cnsnreurrfqzvcbaxnkjmnpjzoqh/Build/Intermediates.noindex/Runner.build/Release-iphoneos/Runner.build/Objects-normal/arm64/Binary/Runner normal
arm64
(2 failures)
```
So what should be done to get a successful release build, with obfuscation, so the app can be uploaded to the App Store?
|
non_process
|
simples app for flutter build release ios is also getting build failed i have been facing problem with this command flutter build ios release verbose for my other app so what i did i build a basic bare minimal flutter app just the counter example so then i build it i assign the signing and capabilities etc then i tried to run this command flutter build ios release verbose it also failed so meaning some thing else need to be done is it for it to be successful here is the output of the logs just the part where it shows failed failed to build ios app error output from xcode build ↳ build failed the following build commands failed ld users mymac library developer xcode deriveddata runner cnsnreurrfqzvcbaxnkjmnpjzoqh build intermediates noindex runner build release iphoneos runner build objects normal binary runner normal ld users mymac library developer xcode deriveddata runner cnsnreurrfqzvcbaxnkjmnpjzoqh build intermediates noindex runner build release iphoneos runner build objects normal binary runner normal failures so what should be done to get a successful build release and also the obfuscate and to be able to upload app store
| 0
|
808,642
| 30,105,264,400
|
IssuesEvent
|
2023-06-30 00:11:48
|
apache/kyuubi
|
https://api.github.com/repos/apache/kyuubi
|
closed
|
[Bug] Remove default settings `spark.sql.execution.topKSortFallbackThreshold`
|
kind:bug priority:major
|
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.
### Describe the bug
The query results may be inaccurate.
Setting the topKSortFallbackThreshold value may lead to inaccurate results
https://issues.apache.org/jira/browse/SPARK-44240
https://github.com/apache/kyuubi/issues/1018
### Affects Version(s)
master/1.7/1.6/1.5
### Kyuubi Server Log Output
_No response_
### Kyuubi Engine Log Output
_No response_
### Kyuubi Server Configurations
_No response_
### Kyuubi Engine Configurations
_No response_
### Additional context
_No response_
### Are you willing to submit PR?
- [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
- [ ] No. I cannot submit a PR at this time.
|
1.0
|
[Bug] Remove default settings `spark.sql.execution.topKSortFallbackThreshold` - ### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no similar issues.
### Describe the bug
The query results may be inaccurate.
Setting the topKSortFallbackThreshold value may lead to inaccurate results
https://issues.apache.org/jira/browse/SPARK-44240
https://github.com/apache/kyuubi/issues/1018
### Affects Version(s)
master/1.7/1.6/1.5
### Kyuubi Server Log Output
_No response_
### Kyuubi Engine Log Output
_No response_
### Kyuubi Server Configurations
_No response_
### Kyuubi Engine Configurations
_No response_
### Additional context
_No response_
### Are you willing to submit PR?
- [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
- [ ] No. I cannot submit a PR at this time.
|
non_process
|
remove default settings spark sql execution topksortfallbackthreshold code of conduct i agree to follow this project s search before asking i have searched in the and found no similar issues describe the bug the query results may be inaccurate setting the topksortfallbackthreshold value may lead to inaccurate results affects version s master kyuubi server log output no response kyuubi engine log output no response kyuubi server configurations no response kyuubi engine configurations no response additional context no response are you willing to submit pr yes i would be willing to submit a pr with guidance from the kyuubi community to fix no i cannot submit a pr at this time
| 0
|
7,690
| 10,776,060,792
|
IssuesEvent
|
2019-11-03 18:17:25
|
IIIF/api
|
https://api.github.com/repos/IIIF/api
|
opened
|
What is the policy for updates to previous versions?
|
notes process
|
@scossu in #1876:
> Is there a life cycle policy for the specs? I find it reasonable to consider 2.x as a "stable" but not "deprecated" or "frozen" version, i.e. there are no new features added but non-breaking edits for major/critical issues are possible [...]. I understand that this may create additional work for the editors but it seems like more transparent communication.
There is an [editorial process](https://iiif.io/community/policy/editorial/) for creating new versions of specs. I think there should be a section 1.6 that documents when we would create a new version that is not the newly highest numbered one, and that it should be restricted to security/privacy fixes only. The right solution is to instead implement the new most recent, rather than patching a previous version.
As we have seen from other environments, once the door is open to stay in a previous version and still have updates, then the transition time is extremely long. Python 3.0 was released more than a decade ago, with updates to 2.6 and 2.7 over that time. Only now that 2.x is finally end of life is everyone making good on the transition. We don't have the user base of something like Python to be concerned about for backwards incompatible changes, thankfully.
Given the incompatibilities between 2 and 3 and the strangeness of cross-version references, I think it is best to do everything possible to ensure that organizations update as quickly as feasible. Given the timelines for new minor versions, the speediness of updating implementations to the most recent is likely in the same order of magnitude of updating the previous spec. Further, any issue significant enough to warrant a change to a previous version is highly likely to be backwards incompatible ... meaning that it should be in a new major version anyway. Finally, the number of possible privacy and security implications is very low and likely limited exclusively to auth. Given that no serious issues (that weren't already known, such as not using auth for presentation api) have been raised since its publication, it seems unlikely that this will ever come into play.
With these considerations, I would propose that previous versions will be updated for privacy or security issues. If the fix is backwards compatible, it will be via the creation of a new minor version (e.g. Presentation 2.2). If the fix is not backwards compatible, then the change will be in the form of a warning callout that the issue exists, and the recommendation is to instead update to the most recent version.
|
1.0
|
What is the policy for updates to previous versions? -
@scossu in #1876:
> Is there a life cycle policy for the specs? I find it reasonable to consider 2.x as a "stable" but not "deprecated" or "frozen" version, i.e. there are no new features added but non-breaking edits for major/critical issues are possible [...]. I understand that this may create additional work for the editors but it seems like more transparent communication.
There is an [editorial process](https://iiif.io/community/policy/editorial/) for creating new versions of specs. I think there should be a section 1.6 that documents when we would create a new version that is not the newly highest numbered one, and that it should be restricted to security/privacy fixes only. The right solution is to instead implement the new most recent, rather than patching a previous version.
As we have seen from other environments, once the door is open to stay in a previous version and still have updates, then the transition time is extremely long. Python 3.0 was released more than a decade ago, with updates to 2.6 and 2.7 over that time. Only now that 2.x is finally end of life is everyone making good on the transition. We don't have the user base of something like Python to be concerned about for backwards incompatible changes, thankfully.
Given the incompatibilities between 2 and 3 and the strangeness of cross-version references, I think it is best to do everything possible to ensure that organizations update as quickly as feasible. Given the timelines for new minor versions, the speediness of updating implementations to the most recent is likely in the same order of magnitude of updating the previous spec. Further, any issue significant enough to warrant a change to a previous version is highly likely to be backwards incompatible ... meaning that it should be in a new major version anyway. Finally, the number of possible privacy and security implications is very low and likely limited exclusively to auth. Given that no serious issues (that weren't already known, such as not using auth for presentation api) have been raised since its publication, it seems unlikely that this will ever come into play.
With these considerations, I would propose that previous versions will be updated for privacy or security issues. If the fix is backwards compatible, it will be via the creation of a new minor version (e.g. Presentation 2.2). If the fix is not backwards compatible, then the change will be in the form of a warning callout that the issue exists, and the recommendation is to instead update to the most recent version.
|
process
|
what is the policy for updates to previous versions scossu in is there a life cycle policy for the specs i find it reasonable to consider x as a stable but not deprecated or frozen version i e there are no new features added but non breaking edits for major critical issues are possible i understand that this may create additional work for the editors but it seems like more transparent communication there is an for creating new versions of specs i think there should be a section that documents when we would create a new version that is not the newly highest numbered one and that it should be restricted to security privacy fixes only the right solution is to instead implement the new most recent rather than patching a previous version as we have seen from other environments once the door is open to stay in a previous version and still have updates then the transition time is extremely long python was released more than a decade ago with updates to and over that time only now that x is finally end of life is everyone making good on the transition we don t have the user base of something like python to be concerned about for backwards incompatible changes thankfully given the incompatibilities between and and the strangeness of cross version references i think it is best to do everything possible to ensure that organizations update as quickly as feasible given the timelines for new minor versions the speediness of updating implementations to the most recent is likely in the same order of magnitude of updating the previous spec further any issue significant enough to warrant a change to a previous version is highly likely to be backwards incompatible meaning that it should be in a new major version anyway finally the number of possible privacy and security implications is very low and likely limited exclusively to auth given that no serious issues that weren t already known such as not using auth for presentation api have been raised since its publication it seems unlikely that this will ever come into play with these considerations i would propose that previous versions will be updated for privacy or security issues if the fix is backwards compatible it will be via the creation of a new minor version e g presentation if the fix is not backwards compatible then the change will be in the form of a warning callout that the issue exists and the recommendation is to instead update to the most recent version
| 1
|
4,880
| 7,758,604,100
|
IssuesEvent
|
2018-05-31 20:10:09
|
AlexsLemonade/refinebio
|
https://api.github.com/repos/AlexsLemonade/refinebio
|
closed
|
Implement tximport for RNA-seq pipeline
|
RNA-seq processor review salmon sci review
|
Use tximport along with the gene-to-transcript mapping already contained within the processor to implement this part of the salmon pipeline:

This code does essentially what we need; we just need to include it in the Data Refinery's salmon pipeline: https://github.com/jaclyn-taroni/ref-txome/blob/79e2f64ffe6a71c5103a150bd3159efb784cddeb/4-athaliana_tximport.R
(Note that this script contains a link to a tutorial.)
Note that this should be done on a per-experiment basis, rather than a per-sample basis.
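The core of the linked script is a single `tximport()` call; a minimal sketch in R, matching that script's approach (the sample names, file paths, and tx2gene file here are placeholders, not Data Refinery values):
```r
library(tximport)

# Hypothetical layout: one salmon quant.sf per sample in an experiment.
samples <- c("sample1", "sample2")
files <- file.path("salmon_out", samples, "quant.sf")
names(files) <- samples

# Two-column table mapping transcript IDs to gene IDs.
tx2gene <- read.table("tx2gene.tsv", header = FALSE,
                      col.names = c("TXNAME", "GENEID"))

# Summarize transcript-level salmon estimates to gene level,
# across all samples of the experiment at once.
txi <- tximport(files, type = "salmon", tx2gene = tx2gene)
head(txi$counts)
```
Running it over all of an experiment's `quant.sf` files at once is what makes this a per-experiment step rather than a per-sample one.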
|
1.0
|
Implement tximport for RNA-seq pipeline - Use tximport along with the gene-to-transcript mapping already contained within the processor to implement this part of the salmon pipeline:

This code does essentially what we need; we just need to include it in the Data Refinery's salmon pipeline: https://github.com/jaclyn-taroni/ref-txome/blob/79e2f64ffe6a71c5103a150bd3159efb784cddeb/4-athaliana_tximport.R
(Note that this script contains a link to a tutorial.)
Note that this should be done on a per-experiment basis, rather than a per-sample basis.
|
process
|
implement tximport for rna seq pipeline use tximport along with the gene to transcript mapping already contained within the processor to implement this part of the salmon pipeline this code does essentially what we need we just need to include this into the data refinery s salmon pipeline note that this script contains a link to a tutorial note that this should be done on a per experiment basis rather than a per sample basis
| 1
|
603
| 3,074,622,851
|
IssuesEvent
|
2015-08-20 08:35:22
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Indirect reference to keys erases one file, causes failures
|
bug obsolete P2 preprocess
|
I encountered an issue similar to this with a much larger sample, and hit this specific issue while trying to debug the original with a clean set of samples.
In my original case, the product defines a large set of plugins. Each set defines content in one set of maps, and creates a related map to define keys for topics in that plug-in.
All plug-ins pull in a common map from a peer directory, which in turn pulls in every key map. The result: every plug-in pulls in the common key map with a single topicref, which in turn pulls in key definitions from every plug-in; everybody can then use the common keys. No files are generated from outside the plug-in, but keys are retrieved.
When I construct a similar sample with the hierarchy samples, using 1.8.2, I get a java crash during the dit2xhtml stage. In this case all of my content and all of my keys are inside of samples/ but I go out and back in to define the keys. Checking the temp directory shows that tasks/washingthecar.xml is empty (zero length file), even though it is defined the same as other files. The only way this runs is if I set generate.copy.outer=3, which adjusts the output directory, even though no files are generated outside of that directory.
To reproduce, add this line to samples/hierarchy.ditamap:
```xml
<topicref href="../peer/allkeys.ditamap" format="ditamap"/>
```
Create this file peer/allkeys.ditamap:
```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"../dtd/technicalContent/dtd/map.dtd">
<map>
<topicref href="../samples/samplekeys.ditamap" format="ditamap"/>
</map>
```
And create samples/samplekeys.ditamap:
```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"../dtd/technicalContent/dtd/map.dtd">
<map title="keys">
<topicref href="tasks/garagetaskoverview.xml" type="concept" keys="overview"
processing-role="resource-only"/>
<topicref href="concepts/lawnmower.xml" type="concept" keys="lawnmower"
processing-role="resource-only"/>
</map>
```
I get the failure with this command:
C:\DITA-OT1.8.M2>ant -f build.xml -Dargs.input=samples/hierarchy.ditamap -Dtranstype=xhtml -Dclean.temp=no
I get a "clean" build with this:
C:\DITA-OT1.8.M2>ant -f build.xml -Dargs.input=samples/hierarchy.ditamap -Dtranstype=xhtml -Dclean.temp=no -Douter.control=quiet -Dgenerate.copy.outer=3
The clean build adjusts the output directory, which isn't actually needed, but is more of a minor annoyance than anything (I am not surprised by that when using -Dgenerate.copy.outer=3).
|
1.0
|
Indirect reference to keys erases one file, causes failures - I encountered an issue similar to this with a much larger sample, and hit this specific issue while trying to debug the original with a clean set of samples.
In my original case, the product defines a large set of plugins. Each set defines content in one set of maps, and creates a related map to define keys for topics in that plug-in.
All plug-ins pull in a common map from a peer directory, which in turn pulls in every key map. The result: every plug-in pulls in the common key map with a single topicref, which in turn pulls in key definitions from every plug-in; everybody can then use the common keys. No files are generated from outside the plug-in, but keys are retrieved.
When I construct a similar sample with the hierarchy samples, using 1.8.2, I get a java crash during the dit2xhtml stage. In this case all of my content and all of my keys are inside of samples/ but I go out and back in to define the keys. Checking the temp directory shows that tasks/washingthecar.xml is empty (zero length file), even though it is defined the same as other files. The only way this runs is if I set generate.copy.outer=3, which adjusts the output directory, even though no files are generated outside of that directory.
To reproduce, add this line to samples/hierarchy.ditamap:
```xml
<topicref href="../peer/allkeys.ditamap" format="ditamap"/>
```
Create this file peer/allkeys.ditamap:
```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"../dtd/technicalContent/dtd/map.dtd">
<map>
<topicref href="../samples/samplekeys.ditamap" format="ditamap"/>
</map>
```
And create samples/samplekeys.ditamap:
```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN"
"../dtd/technicalContent/dtd/map.dtd">
<map title="keys">
<topicref href="tasks/garagetaskoverview.xml" type="concept" keys="overview"
processing-role="resource-only"/>
<topicref href="concepts/lawnmower.xml" type="concept" keys="lawnmower"
processing-role="resource-only"/>
</map>
```
I get the failure with this command:
C:\DITA-OT1.8.M2>ant -f build.xml -Dargs.input=samples/hierarchy.ditamap -Dtranstype=xhtml -Dclean.temp=no
I get a "clean" build with this:
C:\DITA-OT1.8.M2>ant -f build.xml -Dargs.input=samples/hierarchy.ditamap -Dtranstype=xhtml -Dclean.temp=no -Douter.control=quiet -Dgenerate.copy.outer=3
The clean build adjusts the output directory, which isn't actually needed, but is more of a minor annoyance than anything (I am not surprised by that when using -Dgenerate.copy.outer=3).
|
process
|
indirect reference to keys erases one file causes failures encountered an issue similar to this with a much larger sample encountered this specific issue while trying to debug the original with a clean set of samples in my original case the product defines a large set of plugins each set defines content in one set of maps and creates a related map to define keys for topics in that plug in all plug ins pull in a common map from a peer directory which in turn pulls in every key map the result every plug in pulls in the common key map with a single topicref which in turn pulls in key definitions from every plug in everybody can then use the common keys no files are generated from outside the plug in but keys are retrieved when i construct a similar sample with the hierarchy samples using i get a java crash during the stage in this case all of my content and all of my keys are inside of samples but i go out and back in to define the keys checking the temp directory shows that tasks washingthecar xml is empty zero length file even though it is defined the same as other files the only way this runs is if i set generate copy outer which adjusts the output directory even though no files are generated outside of that directory to reproduce add this line to samples hierarchy ditamap xml create this file peer allkeys ditamap xml doctype map public oasis dtd dita map en dtd technicalcontent dtd map dtd and create samples samplekeys ditamap xml doctype map public oasis dtd dita map en dtd technicalcontent dtd map dtd topicref href tasks garagetaskoverview xml type concept keys overview processing role resource only topicref href concepts lawnmower xml type concept keys lawnmower processing role resource only i get the failure with this command c dita ant f build xml dargs input samples hierarchy ditamap dtranstype xhtml dclean temp no i get a clean build with this c dita ant f build xml dargs input samples hierarchy ditamap dtranstype xhtml dclean temp no douter control quiet dgenerate copy outer the clean build adjusts the output directory which isn t actually needed but is more of a minor annoyance than anything i am not surprised by that when using dgenerate copy outer
| 1
|
326,618
| 9,958,444,930
|
IssuesEvent
|
2019-07-05 21:09:58
|
ProgerXP/Notepad2e
|
https://api.github.com/repos/ProgerXP/Notepad2e
|
closed
|
Change main menu accelerators
|
high priority
|
Remove accelerators from menu items (of the top submenu only) which have another hotkey. This is needed because there are many menu items and we waste available letters on commands that can be called with a global hotkey.
This means removing accelerators from:
- File > New, Open, Revert, 3 Save [...], Rename, Print, Recent
- keep `E&xit` for traditional compatibility
- also change: `Save On &Lose Focus`
- submenus (Launch, etc.) here and in other menus are untouched
- Edit - all except Clear Clipboard and 5 commands with submenus
- View - untouched, it's rarely used, small enough and has no accelerator conflicts
- Settings - Tab Settings, Auto Close, Always On Top, Transparent Mode, File Change, 3 last settings-related commands
- change: `Evaluate &Math Expressions`, `C&trl+Wheel`, `S&how Toolbar`
- `?` - untouched
|
1.0
|
Change main menu accelerators - Remove accelerators from menu items (of the top submenu only) which have another hotkey. This is needed because there are many menu items and we waste available letters on commands that can be called with a global hotkey.
This means removing accelerators from:
- File > New, Open, Revert, 3 Save [...], Rename, Print, Recent
- keep `E&xit` for traditional compatibility
- also change: `Save On &Lose Focus`
- submenus (Launch, etc.) here and in other menus are untouched
- Edit - all except Clear Clipboard and 5 commands with submenus
- View - untouched, it's rarely used, small enough and has no accelerator conflicts
- Settings - Tab Settings, Auto Close, Always On Top, Transparent Mode, File Change, 3 last settings-related commands
- change: `Evaluate &Math Expressions`, `C&trl+Wheel`, `S&how Toolbar`
- `?` - untouched
|
non_process
|
change main menu accelerators remove accelerators from menu items of the top submenu only which have another hotkey this is needed because there are many menu items and we waste available letters on commands that can be called with a global hotkey this means removing accelerators from file new open revert save rename print recent keep e xit for traditional compatibility also change save on lose focus submenus launch etc here and in other menus are untouched edit all except clear clipboard and commands with submenus view untouched it s rarely used small enough and has no accelerator conflicts settings tab settings auto close always on top transparent mode file change last settings related commands change evaluate math expressions c trl wheel s how toolbar untouched
| 0
|
1,553
| 4,155,937,559
|
IssuesEvent
|
2016-06-16 16:20:22
|
altoxml/schema
|
https://api.github.com/repos/altoxml/schema
|
closed
|
Process Result tracking (IMPACT)
|
2 discussion processing history
|
Champion: Clemens Neudecker
Submitter: Impact
Submitted: 2013-02
Status: discussion
------------------------------------------------------------
submitted - initial status when proposal is submitted
discussion - proposal is being discussed within the board
review - xsd code is being reviewed
accepted - proposal is accepted
rejected - proposal is rejected
draft - accepted proposal is in public commenting period
published - proposal is published in a schema version
Backwards compatible ??
To ALTO version ?
Purpose
A lot of software tools and also human interactions are involved in different steps of the digitisation process. Each of them may affect an ALTO file by doing some refinements or corrections. From our point of view it would be desirable to keep track of the changes and verification done by the different agents which are involved in the digitisation process. This would allow a simple kind of document history and also gives important information about the trustworthiness of the whole document. If, for example, everything was verified by a service provider, then we can assume that the quality of the document is very high. Storing the old values as well as the new ones would increase the file size tremendously.
Correction and Validation are possible outcomes of the same process.
Implementation
The ALTO schema already defines a <postProcessingStep> element. The intention of this element is to record any details about those process steps that were carried out after the creation of the full text. The <postProcessingStep> element is optional and not part of the actual page’s definition in ALTO.
In order to store information about the correction and verification process for individual text lines, words etc. the following elements are added to the <postProcessingStep> section:
•<processingStepType> stores the type of process step. It is a free-text field, though IMPACT internal constraints require the element’s value to be set to “correction”.
•<processingResult> groups all elements regarding the result of the process. The element’s value attribute contains information about the outcome of the process. The <processingResult> element is repeatable. Each element represents a specific outcome of the process that is recorded in the element’s value attribute. This attribute may only contain two values: “corrected” or “verified”.
•<processedElements> is an element that wraps around all <pe> elements that were processed with the actual result as stated in the <processingResult> element’s value attribute.
•<pe> elements contain the ID value of an individual text line or word element. Unprocessed elements are not listed here.
If an element has not been processed, it is not listed within <processingResult>.
Example:
```xml
<postProcessingStep ID="0003">
<processingDateTime>2012-05-26T09:34:00+02:00</processingDateTime>
<processingAgency>ACME Agency</processingAgency>
<processingStepDescription>Proofreading</processingStepDescription>
<processingStepSettings>Double keying required</processingStepSettings>
<processingSoftware>
<softwareCreator>ACME Software Corp.</softwareCreator>
<softwareName>Proofer</softwareName>
<softwareVersion>12.1</softwareVersion>
<applicationDescription>Distributed proofreading software</applicationDescription>
</processingSoftware>
<processingResult value="Proof reading performed">
<processedElements>
<pe>P4_TB00003</pe>
<pe>P4_TB00002</pe>
<pe>P4_ST00004</pe>
</processedElements>
</processingResult>
<processingResult value="Uncorrected">
<processedElements>
<pe>P4_TB00003</pe>
<pe>P4_TB00002</pe>
<pe>P4_ST00004</pe>
</processedElements>
</processingResult>
</postProcessingStep>
```
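For a sense of how consumers would read this structure back, here is a minimal sketch (illustrative, not part of the proposal) using Python's standard xml.etree.ElementTree; the file name and the absence of an XML namespace are assumptions:
```python
# Minimal sketch: list each processingResult outcome and the IDs it covers.
# Assumptions (illustration only): the example above is saved as
# "postprocessing.xml" and carries no XML namespace.
import xml.etree.ElementTree as ET

step = ET.parse("postprocessing.xml").getroot()  # <postProcessingStep>

for result in step.findall("processingResult"):
    outcome = result.get("value")  # e.g. "Proof reading performed"
    ids = [pe.text for pe in result.findall("processedElements/pe")]
    print(f"{outcome}: {ids}")
```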
Schema changes draft
Current schema:
```xml
<xsd:complexType name="processingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Date or DateTime the image was processed.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingAgency" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Identifies the organizationlevel producer(s) of the processed image.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>An ordinal listing of the image processing steps performed. For example, "image despeckling."</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/>
</xsd:sequence>
</xsd:complexType>
```
Changed schema:
```xml
<xsd:complexType name="processingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processingStepType" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Type of processing step</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0"> <xsd:annotation> <xsd:documentation>Date or DateTime the image was processed.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingAgency" type="xsd:string" minOccurs="0"> <xsd:annotation> <xsd:documentation>Identifies the organizationlevel producer(s) of the
processed image.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded"> <xsd:annotation> <xsd:documentation>An ordinal listing of the image processing steps performed.
For example, "image despeckling."</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0"> <xsd:annotation> <xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/> <xsd:element name="processingResult" type="processingResultType" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence></xsd:complexType>
<xsd:complexType name="processingResultType">
<xsd:annotation> <xsd:documentation>List of processed elements.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processedElements" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>ID of processed element</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:sequence>
<xsd:element name="pe" type="xsd:IDREF" minOccurs="1" maxOccurs="unbounded"> </xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="value" type="xsd:string"></xsd:attribute>
</xsd:complexType>
```
|
1.0
|
Process Result tracking (IMPACT) - Champion: Clemens Neudecker
Submitter: Impact
Submitted: 2013-02
Status: discussion
------------------------------------------------------------
submitted - initial status when proposal is submitted
discussion - proposal is being discussed within the board
review - xsd code is being reviewed
accepted - proposal is accepted
rejected - proposal is rejected
draft - accepted proposal is in public commenting period
published - proposal is published in a schema version
Backwards compatible ??
To ALTO version ?
Purpose
A lot of software tools and also human interactions are involved in different steps of the digitisation process. Each of them may affect an ALTO file by doing some refinements or corrections. From our point of view it would be desirable to keep track of the changes and verification done by the different agents which are involved in the digitisation process. This would allow a simple kind of document history and also gives important information about the trustworthiness of the whole document. If, for example, everything was verified by a service provider, then we can assume that the quality of the document is very high. Storing the old values as well as the new ones would increase the file size tremendously.
Correction and Validation are possible outcomes of the same process.
Implementation
The ALTO schema already defines a <postProcessingStep> element. The intention of this element is to record any details about those process steps that were carried out after the creation of the full text. The <postProcessingStep> element is optional and not part of the actual page’s definition in ALTO.
In order to store information about the correction and verification process for individual text lines, words etc. the following elements are added to the <postProcessingStep> section:
•<processingStepType> stores the type of process step. It is a free-text field, though IMPACT internal constraints require the element’s value to be set to “correction”.
•<processingResult> groups all elements regarding the result of the process. The element’s value attribute contains information about the outcome of the process. The <processingResult> element is repeatable. Each element represents a specific outcome of the process that is recorded in the element’s value attribute. This attribute may only contain two values: “corrected” or “verified”.
•<processedElements> is an element that wraps around all <pe> elements that were processed with the actual result as stated in the <processingResult> element’s value attribute.
•<pe> elements contain the ID value of an individual text line or word element. Unprocessed elements are not listed here.
If an element has not been processed, it is not listed within <processingResult>.
Example:
```xml
<postProcessingStep ID="0003">
<processingDateTime>2012-05-26T09:34:00+02:00</processingDateTime>
<processingAgency>ACME Agency</processingAgency>
<processingStepDescription>Proofreading</processingStepDescription>
<processingStepSettings>Double keying required</processingStepSettings>
<processingSoftware>
<softwareCreator>ACME Software Corp.</softwareCreator>
<softwareName>Proofer</softwareName>
<softwareVersion>12.1</softwareVersion>
<applicationDescription>Distributed proofreading software</applicationDescription>
</processingSoftware>
<processingResult value="Proof reading performed">
<processedElements>
<pe>P4_TB00003</pe>
<pe>P4_TB00002</pe>
<pe>P4_ST00004</pe>
</processedElements>
</processingResult>
<processingResult value="Uncorrected">
<processedElements>
<pe>P4_TB00003</pe>
<pe>P4_TB00002</pe>
<pe>P4_ST00004</pe>
</processedElements>
</processingResult>
</postProcessingStep>
```
Schema changes draft
Current schema:
```xml
<xsd:complexType name="processingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Date or DateTime the image was processed.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingAgency" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Identifies the organizationlevel producer(s) of the processed image.</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>An ordinal listing of the image processing steps performed. For example, "image despeckling."</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0">
<xsd:annotation>
<xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.
</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/>
</xsd:sequence>
</xsd:complexType>
```
Changed schema:
```xml
<xsd:complexType name="processingStepType">
<xsd:annotation>
<xsd:documentation>A processing step.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processingStepType" type="dateTimeType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>Type of processing step</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="processingDateTime" type="dateTimeType" minOccurs="0"> <xsd:annotation> <xsd:documentation>Date or DateTime the image was processed.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingAgency" type="xsd:string" minOccurs="0"> <xsd:annotation> <xsd:documentation>Identifies the organizationlevel producer(s) of the
processed image.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingStepDescription" type="xsd:string" minOccurs="0" maxOccurs="unbounded"> <xsd:annotation> <xsd:documentation>An ordinal listing of the image processing steps performed.
For example, "image despeckling."</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingStepSettings" type="xsd:string" minOccurs="0"> <xsd:annotation> <xsd:documentation>A description of any setting of the processing application.
For example, for a multi-engine OCR application this might include the
engines which were used. Ideally, this description should be adequate so
that someone else using the same application can produce identical
results.</xsd:documentation> </xsd:annotation> </xsd:element> <xsd:element name="processingSoftware" type="processingSoftwareType" minOccurs="0"/> <xsd:element name="processingResult" type="processingResultType" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence></xsd:complexType>
<xsd:complexType name="processingResultType">
<xsd:annotation> <xsd:documentation>List of processed elements.</xsd:documentation>
</xsd:annotation>
<xsd:sequence>
<xsd:element name="processedElements" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>ID of processed element</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:sequence>
<xsd:element name="pe" type="xsd:IDREF" minOccurs="1" maxOccurs="unbounded"> </xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="value" type="xsd:string"></xsd:attribute>
</xsd:complexType>
```
|
process
|
process result tracking impact champion clemens neudecker submitter impact submitted status discussion submitted initial status when proposal is submitted discussion proposal is being discussed within the board review xsd code is being reviewed accepted proposal is accepted rejected proposal is rejected draft accepted proposal is in public commenting period published proposal is published in a schema version backwards compatible to alto version purpose a lot of software tools and also human interactions are involved in different steps of the digitisation process each of them may affect an alto file by doing some refinements or corrections from our point of view it would be desirable to keep track of the changes and verification done by the different agents which are involved in the digitisation process this would allow a simple kind of a document history and gives also important information about the trustworthily of the whole document if for example everything was verified by a service provider than we can asume that the quality of the document is very high storing the old values as well as the new ones would increase the filesize tremendously correction and validation are possible outcomes of the same process implementation the alto schema already defines a element the intention of this element is to record any details about those process steps that were carried out after the creation of the full text the element is optional and not part of the actual page’s definition in alto in order to store information about the correction and verification process for individual text lines words etc the following elements are added to the section • stores the type of process step it is a free text field though impact internal constraints require the element’s value to be set to “correction” • groups all elements regarding the result of the process the element’s value attribute contains information about the outcome of the process the element is repeatable each element represents a specific outcome of the process that is recorded in the element’s value attribute this attribute may only contain two values “corrected” or “verified” • is an element that wraps around all elements that were processed with the actual result as stated in the element’s value attribute • element contain the id value of an individual text line or word element unprocessed are not listed here if an element had not been processed the element is not listed within example xml acme agency proofreading double keying required acme software corp proofer distributed proofreading software schema changes draft current schema changed schema xml a processing step date or datetime the image was processed identifies the organizationlevel producer s of the processed image an ordinal listing of the image processing steps performed for example image despeckling a description of any setting of the processing application for example for a multi engine ocr application this might include the engines which were used ideally this description should be adequate so that someone else using the same application can produce identical results a processing step type of processing step date or datetime the image was processed identifies the organizationlevel producer s of the processed image an ordinal listing of the image processing steps performed for example image despeckling a description of any setting of the processing application for example for a multi engine ocr application this might include the engines which were used ideally this description should 
be adequate so that someone else using the same application can produce identical results list of processed elements id of processed element
| 1
|
1,838
| 4,643,892,893
|
IssuesEvent
|
2016-09-30 14:49:22
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
closed
|
Publication writers never update existing publications
|
4. Ready for Review bug Processors
|
We need to be able to update their data, especially now with #415
|
1.0
|
Publication writers never update existing publications - We need to be able to update their data, especially now with #415
|
process
|
publication writers never update existing publications we need to be able to update their data specially now with
| 1
|
6,530
| 9,622,115,470
|
IssuesEvent
|
2019-05-14 12:20:43
|
nextmoov/nextmoov
|
https://api.github.com/repos/nextmoov/nextmoov
|
closed
|
The Great Migration 🦜
|
#Dev Tools & Processes [zube]: In Progress
|
[EDIT by @thomashermine ]
**This is the beginning of The Great Migration 🦜**
Current status:
- [ ] Ops :
- [x] Update Drone, set to connect to GitHub instead of BitBucket
- [ ] Repository : Migrate all the things! ✊
- [x] cambio
- [x] cambio-start-web
- [x] cambio-start-app
- [x] cambio-registration-middleware
- [x] modalizy
- [x] modalizy-app-app
- [x] modalizy-services-api
- [x] modalizy-services-etl
- [x] modalizy-notifications-middleware
- [x] modalizy-signing
- [x] nextmoov-ticketing
- [x] nextmoov-ticketing-api
- [x] smb-pto
- [x] smb-pto-frontend
|
1.0
|
The Great Migration 🦜 - [EDIT by @thomashermine ]
**This is the beginning of The Great Migration 🦜**
Current status:
- [ ] Ops :
- [x] Update Drone, set to connect to GitHub instead of BitBucket
- [ ] Repository : Migrate all the things! ✊
- [x] cambio
- [x] cambio-start-web
- [x] cambio-start-app
- [x] cambio-registration-middleware
- [x] modalizy
- [x] modalizy-app-app
- [x] modalizy-services-api
- [x] modalizy-services-etl
- [x] modalizy-notifications-middleware
- [x] modalizy-signing
- [x] nextmoov-ticketing
- [x] nextmoov-ticketing-api
- [x] smb-pto
- [x] smb-pto-frontend
|
process
|
the great migration 🦜 this is the beginning of the great migration 🦜 current status ops update drone set to connect to github instead of bitbucket repository migrate all the things ✊ cambio cambio start web cambio start app cambio registration middleware modalizy modalizy app app modalizy services api modalizy services etl modalizy notifications middleware modalizy signing nextmoov ticketing nextmoov ticketing api smb pto smb pto frontend
| 1
|
19,866
| 26,278,132,378
|
IssuesEvent
|
2023-01-07 02:03:05
|
rusefi/rusefi_documentation
|
https://api.github.com/repos/rusefi/rusefi_documentation
|
closed
|
PROBLEM STATEMENT: wiki2-human https://github.com/rusefi/rusefi/wiki is ugly
|
wiki location & process change
|
As of October 6, 2021 https://github.com/rusefi/rusefi/wiki "wiki2-human" is the primary location of rusEFI knowledge.
As is https://github.com/rusefi/rusefi/wiki is definitely ugly.
We were thinking of moving https://github.com/rusefi/rusefi/wiki to wiki.rusefi.com (known as "wiki3") because of indexing issue #138, which is no longer an issue. It is still an open question whether we would have to move https://github.com/rusefi/rusefi/wiki to wiki.rusefi.com just in order to gain L&F freedom.
|
1.0
|
PROBLEM STATEMENT: wiki2-human https://github.com/rusefi/rusefi/wiki is ugly - As of October 6, 2021 https://github.com/rusefi/rusefi/wiki "wiki2-human" is the primary location of rusEFI knowledge.
As is https://github.com/rusefi/rusefi/wiki is definitely ugly.
We were thinking of moving https://github.com/rusefi/rusefi/wiki to wiki.rusefi.com (known as "wiki3") because of indexing issue #138, which is no longer an issue. It is still an open question whether we would have to move https://github.com/rusefi/rusefi/wiki to wiki.rusefi.com just in order to gain L&F freedom.
|
process
|
problem statement human is ugly as of october human is the primary location of rusefi knowledge as is is definitely ugly we were thinking to move to wiki rusefi com known as because of indexing issue that s no longer an issue yet open question if we would have to move to wiki rusefi com just in order to gain l f freedom
| 1
|
232,074
| 25,564,943,959
|
IssuesEvent
|
2022-11-30 13:39:45
|
Tim-Demo/IdentityServer4
|
https://api.github.com/repos/Tim-Demo/IdentityServer4
|
opened
|
CVE-2022-38900 (High) detected in decode-uri-component-0.2.0.tgz
|
security vulnerability
|
## CVE-2022-38900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p></summary>
<p>A better decodeURIComponent</p>
<p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p>
<p>Path to dependency file: /samples/Clients/old/MvcImplicitJwtRequest/package.json</p>
<p>Path to vulnerable library: /samples/Clients/old/MvcImplicitJwtRequest/node_modules/decode-uri-component/package.json,/samples/Clients/old/MvcImplicit/node_modules/decode-uri-component/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- liftoff-2.5.0.tgz
- findup-sync-2.0.0.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- source-map-resolve-0.5.2.tgz
- :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/IdentityServer4/commit/7d47835acfa4a3e45eb8050d9af2dbb2c6ad1df4">7d47835acfa4a3e45eb8050d9af2dbb2c6ad1df4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
True
|
CVE-2022-38900 (High) detected in decode-uri-component-0.2.0.tgz - ## CVE-2022-38900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p></summary>
<p>A better decodeURIComponent</p>
<p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p>
<p>Path to dependency file: /samples/Clients/old/MvcImplicitJwtRequest/package.json</p>
<p>Path to vulnerable library: /samples/Clients/old/MvcImplicitJwtRequest/node_modules/decode-uri-component/package.json,/samples/Clients/old/MvcImplicit/node_modules/decode-uri-component/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- liftoff-2.5.0.tgz
- findup-sync-2.0.0.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- source-map-resolve-0.5.2.tgz
- :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/IdentityServer4/commit/7d47835acfa4a3e45eb8050d9af2dbb2c6ad1df4">7d47835acfa4a3e45eb8050d9af2dbb2c6ad1df4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_process
|
cve high detected in decode uri component tgz cve high severity vulnerability vulnerable library decode uri component tgz a better decodeuricomponent library home page a href path to dependency file samples clients old mvcimplicitjwtrequest package json path to vulnerable library samples clients old mvcimplicitjwtrequest node modules decode uri component package json samples clients old mvcimplicit node modules decode uri component package json dependency hierarchy gulp tgz root library liftoff tgz findup sync tgz micromatch tgz snapdragon tgz source map resolve tgz x decode uri component tgz vulnerable library found in head commit a href found in base branch master vulnerability details decode uri component is vulnerable to improper input validation resulting in dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
106,580
| 13,311,149,852
|
IssuesEvent
|
2020-08-26 07:49:34
|
spotify/backstage
|
https://api.github.com/repos/spotify/backstage
|
opened
|
Another Example Plugin for Storybook
|
component design
|
## 🗒 General
<!--- Write a nice note to the community requesting the creation of a new component! -->
<!--- Include an image of your component. Bonus points for a GIF! -->
Hi! Would love some help getting the UI for this 'example plugin' built and incorporated into our storybook! This is just a static page example of how a plugin could be structured.

## 💻 Usage
This is a generic example that demonstrates how one might structure a plugin within Backstage.
## 📐 Specs
<!--- Include images that detail the redlines for your component.-->
<!--- Once we get our Figma workspace set up, we'll be posting the Figma files rather than doing specs by hand.-->
You can access the Figma file [here](https://www.figma.com/community/file/850673348101741100). Feel free to duplicate it and open the 'Create a Plugin' page on the left side panel. If you click into the different elements, you can see the component specs. We've also leveraged some existing components in this design - namely, the [header](https://backstage.io/storybook/?path=/story/header--home), the [sidebar](https://backstage.io/storybook/?path=/story/sidebar--sample-sidebar), [tabs](https://backstage.io/storybook/?path=/story/tabs--default), and the [progress card](https://backstage.io/storybook/?path=/story/progress-card--default).
<img width="1546" alt="Screenshot 2020-08-26 at 9 47 02 AM" src="https://user-images.githubusercontent.com/61153904/91275975-24a7de00-e781-11ea-9d83-472baecdab80.png">
DM @katz on Discord if you have any questions! Thank you!
|
1.0
|
Another Example Plugin for Storybook - ## 🗒 General
<!--- Write a nice note to the community requesting the creation of a new component! -->
<!--- Include an image of your component. Bonus points for a GIF! -->
Hi! Would love some help getting the UI for this 'example plugin' built and incorporated into our storybook! This is just a static page example of how a plugin could be structured.

## 💻 Usage
This is a generic example that demonstrates how one might structure a plugin within Backstage.
## 📐 Specs
<!--- Include images that detail the redlines for your component.-->
<!--- Once we get our Figma workspace set up, we'll be posting the Figma files rather than doing specs by hand.-->
You can access the Figma file [here](https://www.figma.com/community/file/850673348101741100). Feel free to duplicate it and open the 'Create a Plugin' page on the left side panel. If you click into the different elements, you can see the component specs. We've also leveraged some existing components in this design - namely, the [header](https://backstage.io/storybook/?path=/story/header--home), the [sidebar](https://backstage.io/storybook/?path=/story/sidebar--sample-sidebar), [tabs](https://backstage.io/storybook/?path=/story/tabs--default), and the [progress card](https://backstage.io/storybook/?path=/story/progress-card--default).
<img width="1546" alt="Screenshot 2020-08-26 at 9 47 02 AM" src="https://user-images.githubusercontent.com/61153904/91275975-24a7de00-e781-11ea-9d83-472baecdab80.png">
DM @katz on Discord if you have any questions! Thank you!
|
non_process
|
another example plugin for storybook 🗒 general hi would love some help getting the ui for this example plugin built and incorporated into our storybook this is just a static page example of how a plugin could be structured 💻 usage this is a generic example that demonstrates how one might structure a plugin within backstage 📐 specs you can access the figma file feel free to duplicate it and open the create a plugin page on the left side panel if you click into the different elements you can see the component specs we ve also leveraged some existing components in this design namely the the and the img width alt screenshot at am src dm katz on discord if you have any questions thank you
| 0
|
69,182
| 7,126,411,627
|
IssuesEvent
|
2018-01-20 09:56:07
|
openshiftio/openshift.io
|
https://api.github.com/repos/openshiftio/openshift.io
|
opened
|
Testing on production
|
team/test train/heather type/epic
|
As a developer, I want to test my service on production without affecting currently deployed versions and users, so I can gain more confidence when shipping new features.
- [ ] PoC for being able to deploy a new version of the service (with a stack similar to OSIO ones) without affecting existing version and normal users
|
1.0
|
Testing on production - As a developer, I want to test my service on production without affecting currently deployed versions and users, so I can gain more confidence when shipping new features.
- [ ] PoC for being able to deploy a new version of the service (with a stack similar to OSIO ones) without affecting existing version and normal users
|
non_process
|
testing on production as a developer i want to test my service on production without affecting currently deployed versions and users so i can gain more confidence when shipping new feature poc for being able to deploy a new version of the service with a stack similar to osio ones without affecting existing version and normal users
| 0
|
20,735
| 27,432,661,447
|
IssuesEvent
|
2023-03-02 03:29:34
|
cse442-at-ub/project_s23-team-infinity
|
https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity
|
opened
|
Create the login backend functionality of the login page
|
IO Task Processing Task Sprint 2
|
**Task Test**
*Test 1*
1) Add an email/username and password pair in our SQL database
2) Start the web application
3) Verify the page is on the login page
4) Verify there are two fields on the left for the user to type their email and password
5) Type the email/username and password we created in step 1 into the fields
6) Click the login button
7) Verify the page redirects to the home page
*Test 2*
1) Add an email/username and password pair in our SQL database
2) Start the web application
3) Verify the page is on the login page
4) Verify there are two fields on the left for the user to type their email and password
5) Type an email/username and password that is different from what we created in step 1 into the fields
6) Click the login button
7) Verify the page stays on the login page
8) Verify that an error message pops up on the login page (a minimal sketch of the backend check these tests exercise follows below)
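For reference, a minimal sketch of the backend check these tests exercise, assuming a Python backend, an sqlite3 database, and a users(email, salt, password_hash) table with salted SHA-256 hashes; the actual project may use a different stack, so every name here is illustrative:
```python
# Minimal sketch of the login check, not the project's actual code.
# Assumed schema (illustrative): users(email TEXT, salt TEXT, password_hash TEXT)
# with password_hash = sha256(salt + password) stored as hex.
import hashlib
import hmac
import sqlite3

def verify_login(email: str, password: str) -> bool:
    """Return True when the email/password pair matches a stored user."""
    with sqlite3.connect("app.db") as conn:
        row = conn.execute(
            "SELECT salt, password_hash FROM users WHERE email = ?",
            (email,),
        ).fetchone()
    if row is None:
        return False  # unknown user: the page stays on the login form
    salt, stored_hash = row
    candidate = hashlib.sha256((salt + password).encode()).hexdigest()
    # hmac.compare_digest avoids leaking information through timing
    return hmac.compare_digest(candidate, stored_hash)
```
On success the handler redirects to the home page (Test 1, step 7); on failure it re-renders the login page with an error message (Test 2, steps 7-8).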
|
1.0
|
Create the login backend functionality of the login page - **Task Test**
*Test 1*
1) Add an email/username and password pair in our SQL database
2) Start the web application
3) Verify the page is on the login page
4) Verify there are two fields on the left for the user to type their email and password
5) Type the email/username and password we created in step 1 into the fields
6) Click the login button
7) Verify the page redirects to the home page
*Test 2*
1) Add an email/username and password pair in our SQL database
2) Start the web application
3) Verify the page is on the login page
4) Verify there are two fields on the left for the user to type their email and password
5) Type an email/username and password that is different from what we created in step 1 into the fields
6) Click the login button
7) Verify the page stays on the login page
8) Verify that an error message pops up on the login page
|
process
|
create the login backend functionality of the login page task test add an email username and password pair in our sql database start the web application verify the page is in the login page verify there are two fields in the left for user to type their email and password type the email username and password we created in step in the fields click login button verify the page redirects to the home page test add an email username and password pair in our sql database start the web application verify the page is in the login page verify there are two fields in the left for user to type their email and password type an email username and password that is different from what we created in step in the fields click login button verify the page stay in the login page verify that a error message pops out in the login page
| 1
|
6,207
| 9,115,915,530
|
IssuesEvent
|
2019-02-22 07:14:00
|
reactor/reactor-core
|
https://api.github.com/repos/reactor/reactor-core
|
opened
|
Move master to a release branch 3.2.x
|
process
|
In order to prepare master for 3.3 commits, move the stable release into a dedicated 3.2 branch.
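A sketch of the branch move with plain git, assuming the branch is named 3.2.x and the remote is origin (both are assumptions; the release process may differ):
```
# illustrative commands, not an official runbook
git checkout master
git branch 3.2.x            # capture the current stable line
git push origin 3.2.x
# master then stays checked out and moves on to 3.3 development
```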
|
1.0
|
Move master to a release branch 3.2.x - In order to prepare master for 3.3 commits, move the stable release into a dedicated 3.2 branch.
|
process
|
move master to the a release branch x in order to prepare master for commits move the stable release into a dedicated branch
| 1
|
426,818
| 29,661,187,927
|
IssuesEvent
|
2023-06-10 07:01:08
|
unikraft/docs
|
https://api.github.com/repos/unikraft/docs
|
opened
|
USoC23: Update session 02: Baby Steps
|
documentation enhancement
|
The second session of [Unikraft Summer of Code](https://unikraft.org/community/hackathons/usoc23/) will be a short look behind the scenes. The participants should:
* Start using the `make`-based approach to configure and build applications
* Understand the configuration step, see the modularity of Unikraft
* Understand the concepts of `internal library vs external library`, `core`, `applications`
There will be no actual coding done, just poking around the config and build system.
The session can be picked up from the [hackathons sessions](https://unikraft.org/community/hackathons/2023-05-porto/behind-scenes/).
|
1.0
|
USoC23: Update session 02: Baby Steps - The second session of [Unikraft Summer of Code](https://unikraft.org/community/hackathons/usoc23/) will be a short look behind the scenes. The participants should:
* Start using the `make`-based approach to configure and build applications
* Understand the configuration step, see the modularity of Unikraft
* Understand the concepts of `internal library vs external library`, `core`, `applications`
There will be no actual coding done, just poking around the config and build system.
The session can be picked up from the [hackathons sessions](https://unikraft.org/community/hackathons/2023-05-porto/behind-scenes/).
|
non_process
|
update session baby steps the second session of will be a short look behind the scenes the participants should start using the make based approach to configure and build applications understand the configuration step see the modularity of unikraft understand the concepts of internal library vs external library core applications there will be no actual coding done just poking around the config and build system the session can be picked up from the
| 0
|
12,769
| 15,148,507,108
|
IssuesEvent
|
2021-02-11 10:42:57
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
closed
|
Panther should not fail for 0 size files
|
bug p1 team:data processing
|
### Describe the bug
Panther log processor will fail if it encounters a 0-size file in S3, leading to this:
```
{"level":"warn","ts":1611835832.5195754,"caller":"processor/stream.go:132","msg":"Skipping event due to error","application":"panther","requestId":"170b5710-bd3e-4446-9647-fb7d05d68dbf","error":"EOF"}
```
### Steps to reproduce
Steps to reproduce the behavior:
1. Onboard a new S3 source
2. Send an empty file to S3
3. See error
4. See notification in SQS DLQ
### Expected behavior
Panther would not log an error/warning and would delete the S3 notification for such an object.
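A minimal sketch of the kind of guard that achieves this, shown with boto3 for illustration (Panther's processor is Go; bucket, key, and parse_events are placeholders):
```python
# Illustrative guard against zero-byte S3 objects; not Panther's actual code.
import boto3

s3 = boto3.client("s3")

def process_object(bucket: str, key: str) -> None:
    head = s3.head_object(Bucket=bucket, Key=key)
    if head["ContentLength"] == 0:
        # Nothing to parse: return success so the S3 notification is
        # deleted instead of ending up in the SQS DLQ.
        return
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    parse_events(body)

def parse_events(data: bytes) -> None:
    ...  # placeholder for the real log parsing
```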
### Environment
How are you deploying or using Panther?
- Panther version or commit: 1.15.2
|
1.0
|
Panther should not fail for 0 size files - ### Describe the bug
Panther log processor will fail if it encounters a 0-size file in S3, leading to this:
```
{"level":"warn","ts":1611835832.5195754,"caller":"processor/stream.go:132","msg":"Skipping event due to error","application":"panther","requestId":"170b5710-bd3e-4446-9647-fb7d05d68dbf","error":"EOF"}
```
### Steps to reproduce
Steps to reproduce the behavior:
1. Onboard a new S3 source
2. Send an empty file to S3
3. See error
4. See notification in SQS DLQ
### Expected behavior
Panther would not log an error/warning and would delete the S3 notification for such an object.
### Environment
How are you deploying or using Panther?
- Panther version or commit: 1.15.2
|
process
|
panther should not fail for size files describe the bug panther log processor will fail if it encounters a size file in leading to this level warn ts caller processor stream go msg skipping event due to error application panther requestid error eof steps to reproduce steps to reproduce the behavior onboard a new source sent an empty file to see error see notification in sqs dlq expected behavior panther would not log an error warning and delete the notification for such an object environment how are you deploying or using panther panther version or commit
| 1
|
4,367
| 2,550,843,280
|
IssuesEvent
|
2015-02-01 23:51:00
|
mbutterworth/cloaked-spice
|
https://api.github.com/repos/mbutterworth/cloaked-spice
|
opened
|
Update Research Report content type/strategy for broader relevance
|
output: content type priority: should research research: finding research: report type: development
|
Research team had some trepidation about the current content type and would like to fine tune the section for max relevance for the broader digital team
##Ideas
- creating a new section that relates study findings to a broader & relevant topic
- make key findings less granular / bigger context
- create a new conclusion section (naming pending) to detail the key finding
|
1.0
|
Update Research Report content type/strategy for broader relevance - Research team had some trepidation about the current content type and would like to fine tune the section for max relevance for the broader digital team
##Ideas
- creating a new section that relates study findings to a broader & relevant topic
- make key findings less granular / bigger context
- create a new conclusion section (naming pending) to detail the key finding
|
non_process
|
update research report content type strategy for broader relevance research team had some trepidation about the current content type and would like to fine tune the section for max relevance for the broader digital team ideas creating a new section that relates study findings to a broader relevant topic make key findings less granular bigger context create a new conclusion section naming pending to detail the key finding
| 0
|
7,765
| 10,887,807,598
|
IssuesEvent
|
2019-11-18 15:11:22
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
closed
|
Tag releases doesn't work on Linux
|
type: process
|
Reported error: SslCryptographicException: error:0E076071:configuration file routines:MODULE_RUN:unknown module name
Will try to reproduce and then see whether a later version fixes this.
|
1.0
|
Tag releases doesn't work on Linux - Reported error: SslCryptographicException: error:0E076071:configuration file routines:MODULE_RUN:unknown module name
Will try to reproduce and then see whether a later version fixes this.
|
process
|
tag releases doesn t work on linux reported error sslcryptographicexception error configuration file routines module run unknown module name will try to reproduce and then see whether a later version fixes this
| 1
|
43,101
| 11,157,463,795
|
IssuesEvent
|
2019-12-25 12:58:53
|
neuronsimulator/nrn
|
https://api.github.com/repos/neuronsimulator/nrn
|
closed
|
Update travis CI to include latest GCC versions
|
CI building enhancement
|
To avoid issues like #365, travis CI should test multiple GCC versions including latest release like v9.
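A sketch of what such a matrix could look like in .travis.yml, using the commonly used ubuntu-toolchain-r-test APT source; the version list and build command are illustrative assumptions:
```yaml
# Illustrative .travis.yml fragment, not the project's actual CI config.
language: cpp
matrix:
  include:
    - env: GCC_VERSION=7
      addons: {apt: {sources: [ubuntu-toolchain-r-test], packages: [g++-7]}}
    - env: GCC_VERSION=9
      addons: {apt: {sources: [ubuntu-toolchain-r-test], packages: [g++-9]}}
script:
  - export CC=gcc-$GCC_VERSION CXX=g++-$GCC_VERSION
  - ./build.sh   # placeholder for the real build steps
```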
|
1.0
|
Update travis CI to include latest GCC versions - To avoid issues like #365, travis CI should test multiple GCC versions including latest release like v9.
|
non_process
|
update travis ci to include latest gcc versions to avoid issues like travis ci should test multiple gcc versions including latest release like
| 0
|
371
| 2,534,938,247
|
IssuesEvent
|
2015-01-25 14:22:00
|
radiowarwick/digiplay_legacy
|
https://api.github.com/repos/radiowarwick/digiplay_legacy
|
closed
|
Website form validation
|
defect Website
|
Need to ensure all forms validate correctly. Currently a lot of discrepancy with regard to directory/file permissions.
|
1.0
|
Website form validation - Need to ensure all forms validate correctly. Currently a lot of discrepancy with regard to directory/file permissions.
|
non_process
|
website form validation need to ensure all forms validate correctly currently a lot of discrepancy with regard to directory file permissions
| 0
|
3,738
| 6,733,146,720
|
IssuesEvent
|
2017-10-18 13:59:23
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
closed
|
Extra Work Report - Allow Different Unit Price for Each Item
|
enhancement process workflow
|
In the old system, the unit price had to be the same for all extra work items within a contract item.
|
1.0
|
Extra Work Report - Allow Different Unit Price for Each Item - In the old system, the unit price had to be the same for all extra work items within a contract item.
|
process
|
extra work report allow different unit price for each item in the old system the unit price must be the same for different extra work items in each contract item
| 1
|
5,485
| 8,358,882,751
|
IssuesEvent
|
2018-10-03 05:49:13
|
u-root/u-bmc
|
https://api.github.com/repos/u-root/u-bmc
|
opened
|
Upgrade U-boot
|
help wanted process
|
We're using OpenBMC's u-boot, which is based on a 2016 release. This means that there is no chance that we can upstream any u-boot modifications we would like to do.
u-boot mainline does not support the AST24xx series, so that would have to be ported, but the good news is that we don't require that much from the CPU - so it might not be that hard.
Still, collecting all the contributor material and making sure it's all licensed properly etc. is going to be a time sink.
|
1.0
|
Upgrade U-boot - We're using OpenBMC's u-boot, which is based on a 2016 release. This means that there is no chance that we can upstream any u-boot modifications we would like to do.
u-boot mainline does not support the AST24xx series, so that would have to be ported, but the good news is that we don't require that much from the CPU - so it might not be that hard.
Still, collecting all the contributor material and making sure it's all licensed properly etc. is going to be a time sink.
|
process
|
upgrade u boot we re using openbmc s u boot which is based on this means that there is no chance that we can upstream any u boot modifications we would like to do u boot mainline does not support series so that would have to be ported but the good thing is that it s not that much stuff we require from the cpu so it could maybe be not that hard still collecting all the contributor material and making sure it s all licensed properly etc is going to be a time sink
| 1
|
22,628
| 31,860,131,533
|
IssuesEvent
|
2023-09-15 10:17:08
|
mysociety/alaveteli
|
https://api.github.com/repos/mysociety/alaveteli
|
closed
|
Broken attachment masking - attachments encoded as uuencoded
|
x:uk bug f:attachments-processing
|
Attachments encoded as uuencoded are parsed outside of our `MailHandler` library, meaning the original attachment can never be found.
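For context, a uuencoded attachment is a plain-text block delimited by begin/end markers inside the message body, which is why a MIME-structure parser can miss it. A minimal sketch of spotting such blocks (illustrative Python; Alaveteli itself is Ruby):
```python
# Illustrative: find uuencoded blocks embedded in a plain-text mail body.
import re

UU_BLOCK = re.compile(
    r"^begin \d{3} (?P<name>.+?)$.*?^end$",
    re.MULTILINE | re.DOTALL,
)

def uuencoded_attachment_names(body: str) -> list[str]:
    """Return the file names of uuencoded blocks found in the body."""
    return [m.group("name") for m in UU_BLOCK.finditer(body)]
```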
|
1.0
|
Broken attachment masking - attachments encoded as uuencoded - Attachments encoded as uuencoded are parsed outside of our `MailHandler` library, meaning the original attachment can never be found.
|
process
|
broken attachment masking attachments encoded as uuencoded for attachments encoded as uuencoded these are parsed outside of our mailhandler library meaning the original attachment can never be found
| 1
|
5,359
| 8,188,410,733
|
IssuesEvent
|
2018-08-30 01:44:55
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Can I custom options of http_user_agent?
|
add log-processing question
|
env:
goaccess 1.2
The data column of the "OPERATING SYSTEMS" panel has too many "Unknown" entries.
I think the data column of "OPERATING SYSTEMS" corresponds to "http_user_agent"!
I see http_user_agent values such as Android, iOS, Windows...
Can I add custom options for http_user_agent?
For example:
I want to add an app-version-0.1 entry to http_user_agent.
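GoAccess can load extra user-agent tokens from a file via its --browsers-file option. A sketch, assuming the tab-delimited "token, category" format described in the GoAccess documentation (the "Application" category label is an illustrative choice):
```
# browsers.list — one tab-delimited entry per line: token<TAB>category
app-version-0.1	Application

# then run (illustrative):
goaccess access.log --log-format=COMBINED --browsers-file=browsers.list
```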
|
1.0
|
Can I custom options of http_user_agent? - env:
goaccess 1.2
The data column of the "OPERATING SYSTEMS" panel has too many "Unknown" entries.
I think the data column of "OPERATING SYSTEMS" corresponds to "http_user_agent"!
I see http_user_agent values such as Android, iOS, Windows...
Can I add custom options for http_user_agent?
For example:
I want to add an app-version-0.1 entry to http_user_agent.
|
process
|
can i custom options of http user agent env the data colume of operating systems panel have too many unknow i think data colume of operating systems is corresponds to http user agent i see http user agent have android,ios,windows can i custom options of http user agent for example i want to add a app version to http user agent
| 1
|
11,963
| 14,728,631,698
|
IssuesEvent
|
2021-01-06 10:13:17
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
App participant registry > Enrolled studies pop-up>Records should be displayed in descending order
|
Bug P2 Participant manager Process: Tested dev
|
AR (actual result): App participant registry > Enrolled studies pop-up > Site records present under studies are not displayed in descending order
ER (expected result): Records should be displayed in descending order

|
1.0
|
App participant registry > Enrolled studies pop-up>Records should be displayed in descending order - AR (actual result): App participant registry > Enrolled studies pop-up > Site records present under studies are not displayed in descending order
ER (expected result): Records should be displayed in descending order

|
process
|
app participant registry enrolled studies pop up records should be displayed in descending order ar app participant registry enrolled studies pop up sites records present under studies are not displayed in descending order er records should be displayed in descending order
| 1
|
3,457
| 4,323,437,439
|
IssuesEvent
|
2016-07-25 16:59:24
|
google/material-design-lite
|
https://api.github.com/repos/google/material-design-lite
|
opened
|
Decrement all v2 packages to v0.x before initial publishing
|
infrastructure
|
We should stabilize on 1.x versions once we are ready to release.
|
1.0
|
Decrement all v2 packages to v0.x before initial publishing - We should stabilize on 1.x versions once we are ready to release.
|
non_process
|
decrement all packages to x before initial publishing we should stabilize on x versions once we are ready to release
| 0
|
3,804
| 6,782,829,549
|
IssuesEvent
|
2017-10-30 09:43:48
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Improve citing interface search result order
|
Word Processor Integration
|
- "Ibid" doesn't sort items in [any reasonable order](https://forums.zotero.org/discussion/68183/z-word-plugin-search-behaviour-changed-counterintuitive-results). Should probably list last cited item, or the item above the current citation field as the first one.
- [Inconsistent results](https://forums.zotero.org/discussion/68183/z-word-plugin-search-behaviour-changed-counterintuitive-results) with "messy typing".
|
1.0
|
Improve citing interface search result order - - "Ibid" doesn't sort items in [any reasonable order](https://forums.zotero.org/discussion/68183/z-word-plugin-search-behaviour-changed-counterintuitive-results). It should probably list the last cited item, or the item above the current citation field, as the first one.
- [Inconsistent results](https://forums.zotero.org/discussion/68183/z-word-plugin-search-behaviour-changed-counterintuitive-results) with "messy typing".
|
process
|
improve citing interface search result order ibid doesn t sort items in should probably list last cited item or the item above the current citation field as the first one with messy typing
| 1
|
234,312
| 25,826,308,020
|
IssuesEvent
|
2022-12-12 13:12:20
|
SocialSchools/cmsplugin-socialschools
|
https://api.github.com/repos/SocialSchools/cmsplugin-socialschools
|
opened
|
underscore-min-1.4.4.js: 1 vulnerabilities (highest severity is: 7.2)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-min-1.4.4.js</b></p></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js">https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js</a></p>
<p>Path to vulnerable library: /build/lib/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js,/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/SocialSchools/cmsplugin-socialschools/commit/3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a">3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (underscore-min version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | underscore-min-1.4.4.js | Direct | underscore - 1.12.1,1.13.0-2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23358</summary>
### Vulnerable Library - <b>underscore-min-1.4.4.js</b>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js">https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js</a></p>
<p>Path to vulnerable library: /build/lib/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js,/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js</p>
<p>
Dependency Hierarchy:
- :x: **underscore-min-1.4.4.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SocialSchools/cmsplugin-socialschools/commit/3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a">3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package underscore, from 1.3.2 and before 1.12.1, and from 1.13.0-0 and before 1.13.0-2, is vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument, as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
<p></p>
</details>
|
True
|
underscore-min-1.4.4.js: 1 vulnerabilities (highest severity is: 7.2) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-min-1.4.4.js</b></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js">https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js</a></p>
<p>Path to vulnerable library: /build/lib/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js,/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/SocialSchools/cmsplugin-socialschools/commit/3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a">3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (underscore-min version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.2 | underscore-min-1.4.4.js | Direct | underscore - 1.12.1,1.13.0-2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23358</summary>
### Vulnerable Library - <b>underscore-min-1.4.4.js</b>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js">https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.4.4/underscore-min.js</a></p>
<p>Path to vulnerable library: /build/lib/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js,/cmsplugin_socialschools/static/cmsplugin_socialschools/js/lib/underscore-min.js</p>
<p>
Dependency Hierarchy:
- :x: **underscore-min-1.4.4.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SocialSchools/cmsplugin-socialschools/commit/3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a">3fc23ab56cd63380d4a9ded65e6f6aaaa40bc34a</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The package underscore, from 1.3.2 and before 1.12.1, and from 1.13.0-0 and before 1.13.0-2, is vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument, as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23358>CVE-2021-23358</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
<p></p>
</details>
|
non_process
|
underscore min js vulnerabilities highest severity is vulnerable library underscore min js javascript s functional programming helper library library home page a href path to vulnerable library build lib cmsplugin socialschools static cmsplugin socialschools js lib underscore min js cmsplugin socialschools static cmsplugin socialschools js lib underscore min js found in head commit a href vulnerabilities cve severity cvss dependency type fixed in underscore min version remediation available high underscore min js direct underscore details cve vulnerable library underscore min js javascript s functional programming helper library library home page a href path to vulnerable library build lib cmsplugin socialschools static cmsplugin socialschools js lib underscore min js cmsplugin socialschools static cmsplugin socialschools js lib underscore min js dependency hierarchy x underscore min js vulnerable library found in head commit a href found in base branch develop vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code injection via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution underscore
| 0
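For reference, the CVSS 3.0 base metrics listed in the record above (AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H) do reproduce the reported 7.2. A minimal TypeScript sketch of the CVSS 3.0 base-score formula for the scope-unchanged case, using the specification's standard metric weights:
```ts
// CVSS 3.0 base score for the scope-unchanged case, using the spec's weights.
// Metrics from the record: AV:N AC:L PR:H UI:N S:U C:H I:H A:H.
const AV = 0.85; // Attack Vector: Network
const AC = 0.77; // Attack Complexity: Low
const PR = 0.27; // Privileges Required: High (scope unchanged)
const UI = 0.85; // User Interaction: None
const C = 0.56, I = 0.56, A = 0.56; // Confidentiality/Integrity/Availability: High

const iss = 1 - (1 - C) * (1 - I) * (1 - A);        // impact sub-score
const impact = 6.42 * iss;                           // scope unchanged
const exploitability = 8.22 * AV * AC * PR * UI;
const roundUp = (x: number) => Math.ceil(x * 10) / 10;
const baseScore = impact <= 0 ? 0 : roundUp(Math.min(impact + exploitability, 10));
console.log(baseScore); // 7.2, matching the table above
```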
|
11,055
| 13,889,117,991
|
IssuesEvent
|
2020-10-19 07:25:35
|
zerolab-fe/awesome-nodejs
|
https://api.github.com/repos/zerolab-fe/awesome-nodejs
|
closed
|
nodemon
|
Process management
|
Fill in the package name in the 👆 Title field above, and add the information below:
```json
{
"repoUrl": "https://github.com/remy/nodemon",
"description": "监视 node.js 应用程序中的任何更改并自动重启服务器"
}
```
|
1.0
|
nodemon - Fill in the package name in the 👆 Title field above, and add the information below:
```json
{
"repoUrl": "https://github.com/remy/nodemon",
"description": "监视 node.js 应用程序中的任何更改并自动重启服务器"
}
```
|
process
|
nodemon fill in the package name in the 👆 title field above and add the information below json repourl description watch for any changes in a node js application and automatically restart the server
| 1
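As context for the entry above, nodemon also exposes a Node API; the sketch below follows the "using nodemon as a module" pattern from its README, with `app.js` as a placeholder script name:
```ts
// Minimal programmatic use of nodemon, per its documented module API.
// "app.js" is a placeholder for whatever entry script you want watched.
import nodemon = require("nodemon");

nodemon({ script: "app.js", ext: "js json" });

nodemon
  .on("start", () => console.log("app started"))
  .on("restart", (files: unknown) => console.log("app restarted due to:", files))
  .on("quit", () => process.exit());
```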
|
41,722
| 5,394,061,192
|
IssuesEvent
|
2017-02-27 00:47:13
|
Alamofire/AlamofireImage
|
https://api.github.com/repos/Alamofire/AlamofireImage
|
closed
|
Custom cache key on af_setImage
|
feature request needs investigation needs tests
|
We have an image URL that's uniquely generated every time we call an endpoint; the URL is short-lived and expires in about 10 mins. When I use the af_setImage function, the default identifier is generated based on the request.absoluteURL, which does not work well in this case. I'm looking at the Kingfisher library, where there is an option to set a custom cache key like
` let resource = ImageResource(downloadURL:imageURL, cacheKey: customKey)`
` imageView.kf.setImage(with: resource ....`
I would think AlamofireImage could do something similar by replacing the urlRequest parameter with a resource object that wraps the download URL and cache key?
|
1.0
|
Custom cache key on af_setImage - We have an image URL that's uniquely generated every time we call an endpoint; the URL is short-lived and expires in about 10 mins. When I use the af_setImage function, the default identifier is generated based on the request.absoluteURL, which does not work well in this case. I'm looking at the Kingfisher library, where there is an option to set a custom cache key like
` let resource = ImageResource(downloadURL:imageURL, cacheKey: customKey)`
` imageView.kf.setImage(with: resource ....`
I would think AlamofireImage could do something similar by replacing the urlRequest parameter with a resource object that wraps the download URL and cache key?
|
non_process
|
custom cache key on af setimage we have an image url that s unique generated every time we call an end point the url is short living and expires in about mins when i use the af setimage function the default identifier is generated base on the request absoluteurl which does not work well in this case i m looking at kingfisher library where they have option to set the custom cache key like let resource imageresource downloadurl imageurl cachekey customkey imageview kf setimage with resource i would think alamofireimage can do something similar by replacing the urlrequest parameter with a resource object that wraps around download url and cache key
| 0
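The record above asks for a resource object pairing a download URL with a stable cache key, as Kingfisher does. A language-agnostic sketch of that pattern in TypeScript; `ImageResource` and `ImageCache` are illustrative names, not AlamofireImage's API:
```ts
// A resource couples the (short-lived) download URL with a stable cache key,
// so the cache survives URL rotation. Names here are illustrative only.
interface ImageResource {
  downloadURL: string;
  cacheKey: string;
}

class ImageCache {
  private store = new Map<string, Uint8Array>();

  async image(resource: ImageResource): Promise<Uint8Array> {
    const hit = this.store.get(resource.cacheKey); // keyed by cacheKey, not URL
    if (hit) return hit;
    const res = await fetch(resource.downloadURL);
    const bytes = new Uint8Array(await res.arrayBuffer());
    this.store.set(resource.cacheKey, bytes);
    return bytes;
  }
}

// Usage: the URL can change on every call; the key keeps the cache warm.
const cache = new ImageCache();
void cache.image({ downloadURL: "https://example.com/img?sig=abc", cacheKey: "avatar-42" });
```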
|
526,843
| 15,302,787,580
|
IssuesEvent
|
2021-02-24 15:06:27
|
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
|
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2
|
closed
|
The Grand Mosque of Lordaeron
|
:beetle: bug :beetle: :exclamation: priority high
|
<!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**
`-`
**What expansions do you have installed?**
`-`
**Are you using any submods/mods? If so, which?**
`-`
**Please explain your issue in as much detail as possible:**
<details>
<summary>Click to expand</summary>

</details>
**Steps to reproduce the issue:**
`-`
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>

</details>
|
1.0
|
The Grand Mosque of Lordaeron - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is:**
`-`
**What expansions do you have installed?**
`-`
**Are you using any submods/mods? If so, which?**
`-`
**Please explain your issue in as much detail as possible:**
<details>
<summary>Click to expand</summary>

</details>
**Steps to reproduce the issue:**
`-`
**Upload an attachment below: .zip of your save, or screenshots:**
<details>
<summary>Click to expand</summary>

</details>
|
non_process
|
the grand mosque of lordaeron do not remove pre existing lines your mod version is what expansions do you have installed are you using any submods mods if so which please explain your issue in as much detail as possible click to expand steps to reproduce the issue upload an attachment below zip of your save or screenshots click to expand
| 0
|
15,216
| 19,072,395,594
|
IssuesEvent
|
2021-11-27 05:38:05
|
home-climate-control/dz
|
https://api.github.com/repos/home-climate-control/dz
|
opened
|
SwitchableHvacDevice blows up in heating mode
|
process control
|
### Expected Behavior
`SwitchableHvacDevice` works in heating mode and only refuses to act as a fan (when a fan feature is explicitly asked for)
### Actual Behavior
`SwitchableHvacDevice` blows up with `java.lang.IllegalArgumentException: fanSpeed=1.0 is not supported by this instance (not in cooling mode)`
### Root Cause
Unit controller produces non-zero fan signal whereas it's only supposed to produce non-zero demand
### Workaround
Since this is happening in an experimental branch, the exception will be downgraded to a warning until process control logic is fixed.
|
1.0
|
SwitchableHvacDevice blows up in heating mode - ### Expected Behavior
`SwitchableHvacDevice` works in heating mode and only refuses to act as a fan (when a fan feature is explicitly asked for)
### Actual Behavior
`SwitchableHvacDevice` blows up with `java.lang.IllegalArgumentException: fanSpeed=1.0 is not supported by this instance (not in cooling mode)`
### Root Cause
Unit controller produces non-zero fan signal whereas it's only supposed to produce non-zero demand
### Workaround
Since this is happening in an experimental branch, the exception will be downgraded to a warning until process control logic is fixed.
|
process
|
switchablehvacdevice blows up in heating mode expected behavior switchablehvacdevice works in heating mode and only refuses to act as a fan when a fan feature is explicitly asked for actual behavior switchablehvacdevice blows up with java lang illegalargumentexception fanspeed is not supported by this instance not in cooling mode root cause unit controller produces non zero fan signal whereas it s only supposed to produce non zero demand workaround since this is happening in an experimental branch the exception will be downgraded to a warning until process control logic is fixed
| 1
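A minimal TypeScript sketch of the guard described above and of the stated workaround (warning instead of throwing while the unit controller still emits a non-zero fan signal); the class and method names loosely mirror the record and are not the project's actual Java API:
```ts
// Sketch of a switch-backed HVAC device that only honors fan speed in cooling
// mode. The workaround: warn instead of throw until upstream logic is fixed.
type Mode = "heating" | "cooling";

class SwitchableHvacDevice {
  constructor(private mode: Mode, private strict = false) {}

  setState(demand: number, fanSpeed: number): void {
    if (fanSpeed > 0 && this.mode !== "cooling") {
      const msg = `fanSpeed=${fanSpeed} is not supported by this instance (not in cooling mode)`;
      if (this.strict) throw new Error(msg); // original behavior
      console.warn(msg);                     // downgraded per the workaround
    }
    // ... drive the underlying switch from `demand` ...
  }
}

new SwitchableHvacDevice("heating").setState(1, 1.0); // warns instead of blowing up
```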
|
637,273
| 20,624,507,078
|
IssuesEvent
|
2022-03-07 20:55:50
|
comp195/senior-project-spring-2022-blueprint-automation-tool
|
https://api.github.com/repos/comp195/senior-project-spring-2022-blueprint-automation-tool
|
closed
|
Project Snapshot 1
|
Normal Priority
|
(Copied from Canvas Assignment Description).
- [x] Use Git team assignment to track your progress
- [x] Create a Readme file in your repository that describes the project.
- [x] This document should include project title, name and email for all team members
- [x] Project Description
- [x] Project Components
- [x] Special Notes as needed (this may be added to throughout the semester)
- [x] Either upload or use the Wiki to create a project plan
- [x] Use the Git Wiki to create a page for each snapshot
- [x] Identify completed activities
- [x] List activities behind schedule
- [x] Specify tasks to be completed for next snapshot
- [x] List challenges, concerns, blockers impeding the project from moving forward
- [x] Weekly Stand Up Meeting with Professor
- [x] Project Artifacts - This should be checked into Git files
|
1.0
|
Project Snapshot 1 - (Copied from Canvas Assignment Description).
- [x] Use Git team assignment to track your progress
- [x] Create a Readme file in your repository that describes the project.
- [x] This document should include project title, name and email for all team members
- [x] Project Description
- [x] Project Components
- [x] Special Notes as needed (this may be added to throughout the semester)
- [x] Either upload or use the Wiki to create a project plan
- [x] Use the Git Wiki to create a page for each snapshot
- [x] Identify completed activities
- [x] List activities behind schedule
- [x] Specify tasks to be completed for next snapshot
- [x] List challenges, concerns, blockers impeding the project from moving forward
- [x] Weekly Stand Up Meeting with Professor
- [x] Project Artifacts - This should be checked into Git files
|
non_process
|
project snapshot copied from canvas assignment description use git team assignment to track your progress create a readme file in your repository that describes the project this document should include project title name and email for all team members project description project components special notes as needed this may be added to throughout the semester either upload or use the wiki to create a project plan use the git wiki to create a page for each snapshot identify completed activities list activities behind schedule specify tasks to be completed for next snapshot list challenges concerns blockers impeding the project from moving forward weekly stand up meeting with professor project artifacts this should be checked into git files
| 0
|
8,524
| 11,701,995,830
|
IssuesEvent
|
2020-03-06 20:58:16
|
MicrosoftDocs/vsts-docs
|
https://api.github.com/repos/MicrosoftDocs/vsts-docs
|
closed
|
When an approver approves a pending stage in the pipeline, can a pipeline get access to the approver + comment in a variable?
|
Pri1 cba devops-cicd-process/tech devops/prod duplicate
|
When an approver approves a pending stage in the pipeline, can a pipeline get access to the approver + comment in a variable?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @azooinmyluggage
* Microsoft Alias: **shashban**
|
1.0
|
When an approver approves a pending stage in the pipeline, can a pipeline get access to the approver + comment in a variable? - When an approver approves a pending stage in the pipeline, can a pipeline get access to the approver + comment in a variable?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @azooinmyluggage
* Microsoft Alias: **shashban**
|
process
|
when an approver approves a pending stage in the pipeline can a pipeline get access to the approver comment in a variable when an approver approves a pending stage in the pipeline can a pipeline get access to the approver comment in a variable document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login azooinmyluggage microsoft alias shashban
| 1
|
18,378
| 24,508,518,998
|
IssuesEvent
|
2022-10-10 18:50:04
|
cagov/design-system
|
https://api.github.com/repos/cagov/design-system
|
closed
|
Process: issue template cleanup
|
Process improvement
|
In order to simplify external feature requests, we will be moving to a non-git form in Coda to ensure issues added to GitHub are relevant to product and project needs.
Tasks:
- [ ] Keep the following templates: general, bug
- [ ] Remove all other templates
|
1.0
|
Process: issue template cleanup - In order to simplify external feature requests, we will be moving to a non-git form in Coda to ensure issues added to GitHub are relevant to product and project needs.
Tasks:
- [ ] Keep the following templates: general, bug
- [ ] Remove all other templates
|
process
|
process issue template cleanup i order to simplify external feature requests we will be moving to a non git form in coda to ensure issues added to github are relevant to product and project needs tasks keep the following templates general bug remove all other templates
| 1
|
204,936
| 7,092,723,975
|
IssuesEvent
|
2018-01-12 17:37:29
|
Proyecto-EGC-G1/AdminCensos-EGC-G1
|
https://api.github.com/repos/Proyecto-EGC-G1/AdminCensos-EGC-G1
|
closed
|
models.py modification
|
Priority: Medium Status: In Progress bug
|
A Role model is needed in order to run the project on the server. The Role class is required for userAccount.
|
1.0
|
models.py modification - A Role model is needed in order to run the project on the server. The Role class is required for userAccount.
|
non_process
|
models py modification a role model is needed in order to run the project on the server the role class is required for useraccount
| 0
|
11,424
| 14,248,117,823
|
IssuesEvent
|
2020-11-19 12:28:40
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
some un-pushed builtin UDFs
|
difficulty/medium sig/coprocessor status/help-wanted
|
All the builtin UDFs supported by TiDB are listed at:
https://github.com/pingcap/tidb/blob/master/expression/builtin.go#L292
All the builtin UDFs that can be pushed down to TiKV are listed at:
https://github.com/pingcap/tidb/blob/master/expression/expr_to_pb.go#L265
The following is the subset of UDFs that are still not pushed down to TiKV:
## cast functions
- [x] ast.Cast
## logic functions
- [x] ast.LogicXor
## arithmetical functions.
- [ ] ast.UnaryMinus
- [ ] ast.Mod
- [ ] ast.IntDiv
## bitwise functions
- [ ] ast.BitLength
- [ ] ast.BitNeg
- [ ] ast.BitCount
- [ ] ast.LeftShift
- [ ] ast.RightShift
- [x] ast.And
- [x] ast.Or
- [x] ast.Xor
## compare functions
- [ ] ast.IsTruth
- [ ] ast.IsFalsity
- [ ] ast.Regexp
- [ ] ast.Greatest
- [ ] ast.Least
- [ ] ast.Interval
## math functions
- [x] ast.Abs
- [ ] ast.Acos
- [ ] ast.Asin
- [ ] ast.Atan
- [ ] ast.Atan2
- [ ] ast.Ceil
- [ ] ast.Ceiling
- [ ] ast.Conv
- [ ] ast.Cos
- [ ] ast.Cot
- [ ] ast.CRC32
- [ ] ast.Degrees
- [ ] ast.Exp
- [ ] ast.Floor
- [ ] ast.Ln
- [ ] ast.Log
- [ ] ast.Log2
- [ ] ast.Log10
- [ ] ast.PI
- [ ] ast.Pow
- [ ] ast.Power
- [ ] ast.Radians
- [ ] ast.Rand
- [ ] ast.Round
- [ ] ast.Sign
- [ ] ast.Sin
- [ ] ast.Sqrt
- [ ] ast.Tan
- [ ] ast.Truncate
## time functions
- [ ] ast.AddDate
- [ ] ast.DateAdd
- [ ] ast.SubDate
- [ ] ast.DateSub
- [ ] ast.AddTime
- [ ] ast.ConvertTz
- [ ] ast.Curdate
- [ ] ast.CurrentDate
- [ ] ast.CurrentTime
- [ ] ast.CurrentTimestamp
- [ ] ast.Curtime
- [ ] ast.Date
- [ ] ast.DateLiteral
- [ ] ast.DateDiff
- [ ] ast.Day
- [ ] ast.DayName
- [ ] ast.DayOfMonth
- [ ] ast.DayOfWeek
- [ ] ast.DayOfYear
- [ ] ast.Extract
- [ ] ast.FromDays
- [ ] ast.FromUnixTime
- [ ] ast.GetFormat
- [ ] ast.Hour
- [ ] ast.LocalTime
- [ ] ast.LocalTimestamp
- [ ] ast.MakeDate
- [ ] ast.MakeTime
- [ ] ast.MicroSecond
- [ ] ast.Minute
- [ ] ast.Month
- [ ] ast.MonthName
- [ ] ast.Now
- [ ] ast.PeriodAdd
- [ ] ast.PeriodDiff
- [ ] ast.Quarter
- [ ] ast.SecToTime
- [ ] ast.Second
- [ ] ast.StrToDate
- [ ] ast.SubTime
- [ ] ast.Sysdate
- [ ] ast.Time
- [ ] ast.TimeLiteral
- [ ] ast.TimeFormat
- [ ] ast.TimeToSec
- [ ] ast.TimeDiff
- [ ] ast.Timestamp
- [ ] ast.TimestampLiteral
- [ ] ast.TimestampAdd
- [ ] ast.TimestampDiff
- [ ] ast.ToDays
- [ ] ast.ToSeconds
- [ ] ast.UnixTimestamp
- [ ] ast.UTCDate
- [ ] ast.UTCTime
- [ ] ast.UTCTimestamp
- [ ] ast.Week
- [ ] ast.Weekday
- [ ] ast.WeekOfYear
- [ ] ast.Year
- [ ] ast.YearWeek
- [ ] ast.LastDay
## string functions
- [ ] ast.ASCII
- [ ] ast.Bin
- [ ] ast.Concat
- [ ] ast.ConcatWS
- [ ] ast.Convert
- [ ] ast.Elt
- [ ] ast.ExportSet
- [ ] ast.Field
- [ ] ast.Format
- [ ] ast.FromBase64
- [ ] ast.InsertFunc
- [ ] ast.Instr
- [ ] ast.Lcase
- [ ] ast.Left
- [ ] ast.Right
- [ ] ast.Length
- [ ] ast.LoadFile
- [ ] ast.Locate
- [ ] ast.Lower
- [ ] ast.Lpad
- [ ] ast.LTrim
- [ ] ast.Mid
- [ ] ast.MakeSet
- [ ] ast.Oct
- [ ] ast.Ord
- [ ] ast.Position
- [ ] ast.Quote
- [ ] ast.Repeat
- [ ] ast.Replace
- [ ] ast.Reverse
- [ ] ast.RTrim
- [ ] ast.Space
- [ ] ast.Strcmp
- [ ] ast.Substring
- [ ] ast.Substr
- [ ] ast.SubstringIndex
- [ ] ast.ToBase64
- [ ] ast.Trim
- [ ] ast.Upper
- [ ] ast.Ucase
- [ ] ast.Hex
- [ ] ast.Unhex
- [ ] ast.Rpad
- [ ] ast.CharFunc
- [ ] ast.CharLength
- [ ] ast.CharacterLength
- [ ] ast.FindInSet
|
1.0
|
some un-pushed builtin UDFs - All the builtin UDFs supported by TiDB are listed at:
https://github.com/pingcap/tidb/blob/master/expression/builtin.go#L292
All the builtin UDFs that can be pushed down to TiKV are listed at:
https://github.com/pingcap/tidb/blob/master/expression/expr_to_pb.go#L265
The following is the subset of UDFs that are still not pushed down to TiKV:
## cast functions
- [x] ast.Cast
## logic functions
- [x] ast.LogicXor
## arithmetical functions.
- [ ] ast.UnaryMinus
- [ ] ast.Mod
- [ ] ast.IntDiv
## bitwise functions
- [ ] ast.BitLength
- [ ] ast.BitNeg
- [ ] ast.BitCount
- [ ] ast.LeftShift
- [ ] ast.RightShift
- [x] ast.And
- [x] ast.Or
- [x] ast.Xor
## compare functions
- [ ] ast.IsTruth
- [ ] ast.IsFalsity
- [ ] ast.Regexp
- [ ] ast.Greatest
- [ ] ast.Least
- [ ] ast.Interval
## math functions
- [x] ast.Abs
- [ ] ast.Acos
- [ ] ast.Asin
- [ ] ast.Atan
- [ ] ast.Atan2
- [ ] ast.Ceil
- [ ] ast.Ceiling
- [ ] ast.Conv
- [ ] ast.Cos
- [ ] ast.Cot
- [ ] ast.CRC32
- [ ] ast.Degrees
- [ ] ast.Exp
- [ ] ast.Floor
- [ ] ast.Ln
- [ ] ast.Log
- [ ] ast.Log2
- [ ] ast.Log10
- [ ] ast.PI
- [ ] ast.Pow
- [ ] ast.Power
- [ ] ast.Radians
- [ ] ast.Rand
- [ ] ast.Round
- [ ] ast.Sign
- [ ] ast.Sin
- [ ] ast.Sqrt
- [ ] ast.Tan
- [ ] ast.Truncate
## time functions
- [ ] ast.AddDate
- [ ] ast.DateAdd
- [ ] ast.SubDate
- [ ] ast.DateSub
- [ ] ast.AddTime
- [ ] ast.ConvertTz
- [ ] ast.Curdate
- [ ] ast.CurrentDate
- [ ] ast.CurrentTime
- [ ] ast.CurrentTimestamp
- [ ] ast.Curtime
- [ ] ast.Date
- [ ] ast.DateLiteral
- [ ] ast.DateDiff
- [ ] ast.Day
- [ ] ast.DayName
- [ ] ast.DayOfMonth
- [ ] ast.DayOfWeek
- [ ] ast.DayOfYear
- [ ] ast.Extract
- [ ] ast.FromDays
- [ ] ast.FromUnixTime
- [ ] ast.GetFormat
- [ ] ast.Hour
- [ ] ast.LocalTime
- [ ] ast.LocalTimestamp
- [ ] ast.MakeDate
- [ ] ast.MakeTime
- [ ] ast.MicroSecond
- [ ] ast.Minute
- [ ] ast.Month
- [ ] ast.MonthName
- [ ] ast.Now
- [ ] ast.PeriodAdd
- [ ] ast.PeriodDiff
- [ ] ast.Quarter
- [ ] ast.SecToTime
- [ ] ast.Second
- [ ] ast.StrToDate
- [ ] ast.SubTime
- [ ] ast.Sysdate
- [ ] ast.Time
- [ ] ast.TimeLiteral
- [ ] ast.TimeFormat
- [ ] ast.TimeToSec
- [ ] ast.TimeDiff
- [ ] ast.Timestamp
- [ ] ast.TimestampLiteral
- [ ] ast.TimestampAdd
- [ ] ast.TimestampDiff
- [ ] ast.ToDays
- [ ] ast.ToSeconds
- [ ] ast.UnixTimestamp
- [ ] ast.UTCDate
- [ ] ast.UTCTime
- [ ] ast.UTCTimestamp
- [ ] ast.Week
- [ ] ast.Weekday
- [ ] ast.WeekOfYear
- [ ] ast.Year
- [ ] ast.YearWeek
- [ ] ast.LastDay
## string functions
- [ ] ast.ASCII
- [ ] ast.Bin
- [ ] ast.Concat
- [ ] ast.ConcatWS
- [ ] ast.Convert
- [ ] ast.Elt
- [ ] ast.ExportSet
- [ ] ast.Field
- [ ] ast.Format
- [ ] ast.FromBase64
- [ ] ast.InsertFunc
- [ ] ast.Instr
- [ ] ast.Lcase
- [ ] ast.Left
- [ ] ast.Right
- [ ] ast.Length
- [ ] ast.LoadFile
- [ ] ast.Locate
- [ ] ast.Lower
- [ ] ast.Lpad
- [ ] ast.LTrim
- [ ] ast.Mid
- [ ] ast.MakeSet
- [ ] ast.Oct
- [ ] ast.Ord
- [ ] ast.Position
- [ ] ast.Quote
- [ ] ast.Repeat
- [ ] ast.Replace
- [ ] ast.Reverse
- [ ] ast.RTrim
- [ ] ast.Space
- [ ] ast.Strcmp
- [ ] ast.Substring
- [ ] ast.Substr
- [ ] ast.SubstringIndex
- [ ] ast.ToBase64
- [ ] ast.Trim
- [ ] ast.Upper
- [ ] ast.Ucase
- [ ] ast.Hex
- [ ] ast.Unhex
- [ ] ast.Rpad
- [ ] ast.CharFunc
- [ ] ast.CharLength
- [ ] ast.CharacterLength
- [ ] ast.FindInSet
|
process
|
some un pushed builtin udfs all the builtin udfs supported by tidb are all the builtin udfs can be pushed down to tikv are following is part of udfs which are still not being pushed to tikv cast functions ast cast logic functions ast logicxor arithmetical functions ast unaryminus ast mod ast intdiv bitwise functions ast bitlength ast bitneg ast bitcount ast leftshift ast rightshift ast and ast or ast xor compare functions ast istruth ast isfalsity ast regexp ast greatest ast least ast interval math functions ast abs ast acos ast asin ast atan ast ast ceil ast ceiling ast conv ast cos ast cot ast ast degrees ast exp ast floor ast ln ast log ast ast ast pi ast pow ast power ast radians ast rand ast round ast sign ast sin ast sqrt ast tan ast truncate time functions ast adddate ast dateadd ast subdate ast datesub ast addtime ast converttz ast curdate ast currentdate ast currenttime ast currenttimestamp ast curtime ast date ast dateliteral ast datediff ast day ast dayname ast dayofmonth ast dayofweek ast dayofyear ast extract ast fromdays ast fromunixtime ast getformat ast hour ast localtime ast localtimestamp ast makedate ast maketime ast microsecond ast minute ast month ast monthname ast now ast periodadd ast perioddiff ast quarter ast sectotime ast second ast strtodate ast subtime ast sysdate ast time ast timeliteral ast timeformat ast timetosec ast timediff ast timestamp ast timestampliteral ast timestampadd ast timestampdiff ast todays ast toseconds ast unixtimestamp ast utcdate ast utctime ast utctimestamp ast week ast weekday ast weekofyear ast year ast yearweek ast lastday string functions ast ascii ast bin ast concat ast concatws ast convert ast elt ast exportset ast field ast format ast ast insertfunc ast instr ast lcase ast left ast right ast length ast loadfile ast locate ast lower ast lpad ast ltrim ast mid ast makeset ast oct ast ord ast position ast quote ast repeat ast replace ast reverse ast rtrim ast space ast strcmp ast substring ast substr ast substringindex ast ast trim ast upper ast ucase ast hex ast unhex ast rpad ast charfunc ast charlength ast characterlength ast findinset
| 1
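The record above is essentially a diff between TiDB's builtin list and the pushdown allowlist in expr_to_pb.go. A small TypeScript sketch of that kind of allowlist check; the function names come from the record, while the helper itself is hypothetical:
```ts
// Pushdown decision as a simple allowlist lookup, mirroring the idea of
// expr_to_pb.go: only functions in the set are sent to TiKV.
const pushedDown = new Set(["ast.Cast", "ast.LogicXor", "ast.And", "ast.Or", "ast.Xor", "ast.Abs"]);

function canPushDown(fn: string): boolean {
  return pushedDown.has(fn);
}

console.log(canPushDown("ast.Abs"));   // true: already pushed
console.log(canPushDown("ast.Floor")); // false: still on the TODO list
```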
|
21,563
| 29,922,573,803
|
IssuesEvent
|
2023-06-22 00:38:09
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remote] FullStack Developer (PHP) at Coodesh
|
SALVADOR PJ BANCO DE DADOS PHP MYSQL JAVASCRIPT FULL-STACK HTML LARAVEL AGILE SQL MOBILE CAKEPHP REQUISITOS REMOTO PROCESSOS GITHUB UMA MANUTENÇÃO Stale
|
## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get access to the full information about the company and benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/desenvolvedor-full-stack-143422620?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>Loginfo is looking for a FullStack Developer (PHP) to join its team!</p>
<p>As specialists, we chose to deliver to the market a fully mobile solution, capable of covering intralogistics and our clients' day-to-day operations end to end. We are driven by technology and hungry for results. Our main goal is to transform the market by delivering improved logistics, supporting logistics and foreign trade with process optimization, cost reduction, and productivity gains in receiving, warehousing, and shipping.</p>
<p>Responsibilities:</p>
<ul>
<li>Work as a Full Stack developer on projects and improvements to the WMS management system for bonded areas and general warehouses;</li>
<li>Perform database mapping;</li>
<li>Maintain and improve the existing system;</li>
<li>Develop new features for all areas of the organization.</li>
</ul>
## Loginfo Tecnologia da Informação LTDA:
<p>Since 2014, the year of our founding, we have chosen to connect operational processes and communication. We decided to make the market of the logistics, port, and general-warehouse sectors increasingly digital, agile, and intuitive. We are innovators in everything we set out to do. As specialists, we chose to deliver to the market a fully mobile solution, capable of covering intralogistics and our clients' day-to-day operations end to end. We are driven by technology and hungry for results. Our main goal is to transform the market by delivering improved logistics, supporting logistics and foreign trade with process optimization, cost reduction, and productivity gains in receiving, warehousing, and shipping.</p>
## Skills:
- Agile
- PHP
- CSS
- Javascript
- Laravel
- MySQL
- HTML
## Location:
100% Remote
## Requirements:
- Mastery of PHP;
- Knowledge of a framework: Laravel or CakePHP;
- Experience with Github;
- Mastery of Javascript, HTML, CSS, and SQL.
## Benefits:
- Flexible hours;
- Gympass;
- Alura;
- Life insurance.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [FullStack Developer (PHP) at Loginfo Tecnologia da Informação LTDA](https://coodesh.com/jobs/desenvolvedor-full-stack-143422620?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Pedir Feedback** (Request Feedback) option between one stage and the next of the opening you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Contract type
PJ
#### Category
Full-Stack
|
1.0
|
[Remote] FullStack Developer (PHP) at Coodesh - ## Job description:
This is an opening from a partner of the Coodesh platform; by applying, you will get access to the full information about the company and benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/jobs/desenvolvedor-full-stack-143422620?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p>Loginfo is looking for a FullStack Developer (PHP) to join its team!</p>
<p>As specialists, we chose to deliver to the market a fully mobile solution, capable of covering intralogistics and our clients' day-to-day operations end to end. We are driven by technology and hungry for results. Our main goal is to transform the market by delivering improved logistics, supporting logistics and foreign trade with process optimization, cost reduction, and productivity gains in receiving, warehousing, and shipping.</p>
<p>Responsibilities:</p>
<ul>
<li>Work as a Full Stack developer on projects and improvements to the WMS management system for bonded areas and general warehouses;</li>
<li>Perform database mapping;</li>
<li>Maintain and improve the existing system;</li>
<li>Develop new features for all areas of the organization.</li>
</ul>
## Loginfo Tecnologia da Informação LTDA:
<p>Since 2014, the year of our founding, we have chosen to connect operational processes and communication. We decided to make the market of the logistics, port, and general-warehouse sectors increasingly digital, agile, and intuitive. We are innovators in everything we set out to do. As specialists, we chose to deliver to the market a fully mobile solution, capable of covering intralogistics and our clients' day-to-day operations end to end. We are driven by technology and hungry for results. Our main goal is to transform the market by delivering improved logistics, supporting logistics and foreign trade with process optimization, cost reduction, and productivity gains in receiving, warehousing, and shipping.</p>
## Skills:
- Agile
- PHP
- CSS
- Javascript
- Laravel
- MySQL
- HTML
## Location:
100% Remote
## Requirements:
- Mastery of PHP;
- Knowledge of a framework: Laravel or CakePHP;
- Experience with Github;
- Mastery of Javascript, HTML, CSS, and SQL.
## Benefits:
- Flexible hours;
- Gympass;
- Alura;
- Life insurance.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [FullStack Developer (PHP) at Loginfo Tecnologia da Informação LTDA](https://coodesh.com/jobs/desenvolvedor-full-stack-143422620?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Pedir Feedback** (Request Feedback) option between one stage and the next of the opening you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Contract type
PJ
#### Category
Full-Stack
|
process
|
fullstack developer php at coodesh job description this is an opening from a partner of the coodesh platform by applying you will get access to the full information about the company and benefits watch for the redirect that will take you to a url with the personalized application pop up 👋 loginfo is looking for a fullstack developer php to join its team as specialists we chose to deliver to the market a fully mobile solution capable of covering intralogistics and our clients day to day operations end to end we are driven by technology and hungry for results our main goal is to transform the market by delivering improved logistics supporting logistics and foreign trade with process optimization cost reduction and productivity gains in receiving warehousing and shipping responsibilities work as a full stack developer on projects and improvements to the wms management system for bonded areas and general warehouses perform database mapping maintain and improve the existing system develop new features for all areas of the organization loginfo tecnologia da informação ltda since the year of our founding we have chosen to connect operational processes and communication we decided to make the market of the logistics port and general warehouse sectors increasingly digital agile and intuitive we are innovators in everything we set out to do as specialists we chose to deliver to the market a fully mobile solution capable of covering intralogistics and our clients day to day operations end to end we are driven by technology and hungry for results our main goal is to transform the market by delivering improved logistics supporting logistics and foreign trade with process optimization cost reduction and productivity gains in receiving warehousing and shipping skills agile php css javascript laravel mysql html location remote requirements mastery of php knowledge of a framework laravel or cakephp experience with github mastery of javascript html css and sql benefits flexible hours gympass alura life insurance how to apply apply exclusively through the coodesh platform at the following link after applying via the coodesh platform and validating your login you can follow and receive every interaction of the process there use the pedir feedback request feedback option between one stage and the next of the opening you applied to this will notify the recruiter responsible for the process at the company labels allocation remote contract type pj category full stack
| 1
|
597,268
| 18,159,479,482
|
IssuesEvent
|
2021-09-27 07:58:38
|
wso2/product-microgateway
|
https://api.github.com/repos/wso2/product-microgateway
|
opened
|
Invoke APIs using an API key via Choreo Connect
|
Type/New Feature Priority/Normal
|
### Describe your problem(s)
Currently, Choreo Connect (CC) supports invoking APIs using OAuth2 tokens (obtained from the dev portal) and Internal API Keys (obtained from the APIM publisher), but CC doesn't support production- or sandbox-related API keys.
### Describe your solution
Add a new feature for CC to invoke APIs using an API key.
### How will you implement it
The section below describes how the API Key feature will work in CC.
- [ ] Support invoking APIs using an API key in the request header with the default name (`api_key`)
- [ ] Support invoking APIs by specifying an API key as a query parameter with the default name.
- [ ] Support invoking APIs using a user-specified API key name defined in the swagger definitions.
**API Key in request header**
<img width="1434" alt="Screenshot 2021-09-27 at 13 08 36" src="https://user-images.githubusercontent.com/40932779/134865314-f95e5b33-e460-4afb-8b3d-a14843aa4c04.png">
**API Key in query parameter**
<img width="1434" alt="Screenshot 2021-09-27 at 13 12 39" src="https://user-images.githubusercontent.com/40932779/134865451-5783af73-c2e4-455a-b401-33cbdc7fe717.png">
---
### Optional Fields
#### Related Issues:
N/A
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
N/A
|
1.0
|
Invoke APIs using an API key via Choreo Connect - ### Describe your problem(s)
Currently, Choreo Connect (CC) supports invoking APIs using OAuth2 tokens (obtained from the dev portal) and Internal API Keys (obtained from the APIM publisher), but CC doesn't support production- or sandbox-related API keys.
### Describe your solution
Add a new feature for CC to invoke APIs using an API key.
### How will you implement it
The section below describes how the API Key feature will work in CC.
- [ ] Support invoking APIs using an API key in the request header with the default name (`api_key`)
- [ ] Support invoking APIs by specifying an API key as a query parameter with the default name.
- [ ] Support invoking APIs using a user-specified API key name defined in the swagger definitions.
**API Key in request header**
<img width="1434" alt="Screenshot 2021-09-27 at 13 08 36" src="https://user-images.githubusercontent.com/40932779/134865314-f95e5b33-e460-4afb-8b3d-a14843aa4c04.png">
**API Key in query parameter**
<img width="1434" alt="Screenshot 2021-09-27 at 13 12 39" src="https://user-images.githubusercontent.com/40932779/134865451-5783af73-c2e4-455a-b401-33cbdc7fe717.png">
---
### Optional Fields
#### Related Issues:
N/A
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
N/A
|
non_process
|
invoke apis using an api key via choreo connect describe your problem s currently choreo connect cc supports to invoke apis using tokens obtained from dev portal and internal api keys obtained from apim publisher but cc doesn t support for production or sandbox related api keys describe your solution add a new feature for cc to invoke apis using an api key how will you implement it below section describes how api key feature is going to facilitate cc support to invoke apis using api key in the request header with the default name of api key support to invoke apis by specifying api key as a query parameter with the default name support to invoke apis by using a user specified api key name defined in the swagger definitions api key in request header img width alt screenshot at src api key in query parameter img width alt screenshot at src optional fields related issues n a suggested labels suggested assignees n a
| 0
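A minimal TypeScript sketch of the two invocation styles described in the record above, an API key sent in the default `api_key` request header or as an `api_key` query parameter; the gateway URL and key value are placeholders:
```ts
// Invoke an API through the gateway with an API key, either via the default
// "api_key" header or the default "api_key" query parameter.
// URL and key values below are placeholders, not real endpoints.
const GW = "https://gateway.example.com/petstore/v1/pets";
const API_KEY = "<api-key-from-devportal>";

async function invoke(): Promise<void> {
  // 1) Header-based invocation
  await fetch(GW, { headers: { api_key: API_KEY } });

  // 2) Query-parameter-based invocation
  await fetch(`${GW}?api_key=${encodeURIComponent(API_KEY)}`);
}

void invoke();
```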
|
25,006
| 4,120,541,277
|
IssuesEvent
|
2016-06-08 18:15:23
|
rstnewbies/Bluebird
|
https://api.github.com/repos/rstnewbies/Bluebird
|
closed
|
Who joined the chat
|
idea in testing
|
Right now I don't see whether someone has joined the chat. I want to see some kind of information e.g. "user45840 joined this room"
|
1.0
|
Who joined the chat - Right now I don't see whether someone has joined the chat. I want to see some kind of information e.g. "user45840 joined this room"
|
non_process
|
who joined the chat right now i don t see whether someone has joined the chat i want to see some kind of information e g joined this room
| 0
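A tiny TypeScript sketch of the behavior requested above, broadcasting a system message when a user joins a room; `broadcast` is a stand-in for whatever transport the chat app actually uses:
```ts
// Stand-in transport: replace with the chat app's real send path.
function broadcast(room: string, message: string): void {
  console.log(`[${room}] ${message}`);
}

// Emit a "user joined" system message to everyone in the room.
function onUserJoined(room: string, username: string): void {
  broadcast(room, `${username} joined this room`); // e.g. "user45840 joined this room"
}

onUserJoined("lobby", "user45840");
```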
|
15,564
| 19,703,504,265
|
IssuesEvent
|
2022-01-12 19:08:03
|
googleapis/java-shared-config
|
https://api.github.com/repos/googleapis/java-shared-config
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'library_type' in .repo-metadata.json
* must have required property 'client_documentation' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json must have required property client documentation in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
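A minimal TypeScript sketch of the three lint rules quoted above (required `library_type`, required `client_documentation`, and `release_level` restricted to an allowed set); the allowed values shown are an assumption, not the linter's actual list:
```ts
// Check a parsed .repo-metadata.json against the three reported rules.
// The allowed release levels below are assumed for illustration.
const ALLOWED_RELEASE_LEVELS = new Set(["stable", "preview"]);

function lintRepoMetadata(meta: Record<string, unknown>): string[] {
  const problems: string[] = [];
  if (!("library_type" in meta)) problems.push("must have required property 'library_type'");
  if (!("client_documentation" in meta)) problems.push("must have required property 'client_documentation'");
  if (!ALLOWED_RELEASE_LEVELS.has(String(meta.release_level)))
    problems.push("release_level must be equal to one of the allowed values");
  return problems;
}

console.log(lintRepoMetadata({ release_level: "ga" })); // reports all three problems
```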
|
118,252
| 15,263,285,653
|
IssuesEvent
|
2021-02-22 02:17:31
|
urbit/landscape
|
https://api.github.com/repos/urbit/landscape
|
opened
|
groups: provide feedback on an archived group
|
design development-stream
|

I'm currently in a group that was archived, but there was no indication of this from the group.
commit: urbit/urbit@e9a9863
|
1.0
|
groups: provide feedback on an archived group -

I'm currently in a group that was archived, but there was no indication of this from the group.
commit: urbit/urbit@e9a9863
|
non_process
|
groups provide feedback on an archived group i m currently in a group that was archived but there was no indication of this from the group commit urbit urbit
| 0