Dataset preview: GitHub issue events labeled `process` / `non_process`. Column schema:

| column | dtype | values / range |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
Row 74,347 | id 20,142,018,073 | IssuesEvent | 2022-02-09 00:54:32 | microsoft/PowerToys | https://api.github.com/repos/microsoft/PowerToys | closed
title: [Build] Build failed: Path to exceeds max length
labels: Issue-Bug Product-PowerToys Run Area-Build Needs-Triage
body:
### Microsoft PowerToys version
f2a3fa5ec68d1b07ab347beadc8f6e9160069cce
### Running as admin
- [ ] Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Build Launcher.
### ✔️ Expected Behavior
Build works.
### ❌ Actual Behavior
```
Severity Code Description Project File Line Suppression State
Error MSB3491 Could not write lines to file "obj\x64\Debug\net5.0-windows10.0.18362.0\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.UnitTests.GeneratedMSBuildEditorConfig.editorconfig". Path: obj\x64\Debug\net5.0-windows10.0.18362.0\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.UnitTests.GeneratedMSBuildEditorConfig.editorconfig exceeds the OS max path limit. The fully qualified file name must be less than 260 characters. Microsoft.PowerToys.Run.Plugin.WindowsTerminal.UnitTests C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\Roslyn\Microsoft.Managed.Core.targets 150
```
### Other Software
_No response_
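The 260-character limit in the error above is the classic Windows MAX_PATH restriction on fully qualified file names. A minimal sketch of a pre-build check for over-long generated paths (the checkout location below is hypothetical, not the reporter's actual path):

```python
# Flag fully qualified paths that hit the classic Windows MAX_PATH limit.
MAX_PATH = 260  # "The fully qualified file name must be less than 260 characters."

def over_limit(paths, limit=MAX_PATH):
    """Return the paths whose fully qualified length is >= limit."""
    return [p for p in paths if len(p) >= limit]

# Hypothetical checkout location; the deeper the checkout, the sooner obj\ paths overflow.
base = r"C:\very\deep\checkout\PowerToys\src\modules\launcher\plugins"
generated = (base
             + "\\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.UnitTests"
             + "\\obj\\x64\\Debug\\net5.0-windows10.0.18362.0"
             + "\\Microsoft.PowerToys.Run.Plugin.WindowsTerminal.UnitTests"
             + ".GeneratedMSBuildEditorConfig.editorconfig")
print(over_limit([generated]))
```

Shortening the checkout path (or enabling Windows long-path support) are the usual workarounds for this class of failure.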
index: 1.0 | label: non_process
text:
build failed path to exceeds max length microsoft powertoys version running as admin yes area s with issue powertoys run steps to reproduce build launcher ✔️ expected behavior build works ❌ actual behavior severity code description project file line suppression state error could not write lines to file obj debug microsoft powertoys run plugin windowsterminal unittests generatedmsbuildeditorconfig editorconfig path obj debug microsoft powertoys run plugin windowsterminal unittests generatedmsbuildeditorconfig editorconfig exceeds the os max path limit the fully qualified file name must be less than characters microsoft powertoys run plugin windowsterminal unittests c program files microsoft visual studio community msbuild current bin roslyn microsoft managed core targets other software no response
binary_label: 0
Row 9,038 | id 12,130,107,934 | IssuesEvent | 2020-04-23 00:30:40 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | closed
title: remove gcp-devrel-py-tools from appengine/standard/endpoints-frameworks-v2/quickstart/requirements-test.txt
labels: priority: p2 remove-gcp-devrel-py-tools type: process
body:
remove gcp-devrel-py-tools from appengine/standard/endpoints-frameworks-v2/quickstart/requirements-test.txt
index: 1.0 | label: process
text:
remove gcp devrel py tools from appengine standard endpoints frameworks quickstart requirements test txt remove gcp devrel py tools from appengine standard endpoints frameworks quickstart requirements test txt
binary_label: 1
Row 8,150 | id 11,354,731,938 | IssuesEvent | 2020-01-24 18:19:40 | googleapis/java-texttospeech | https://api.github.com/repos/googleapis/java-texttospeech | closed
title: Promote to GA
labels: type: process
body:
Package name: **google-cloud-texttospeech**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
index: 1.0 | label: process
text:
promote to ga package name google cloud texttospeech current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
binary_label: 1
Row 9,193 | id 12,229,616,961 | IssuesEvent | 2020-05-04 01:07:08 | chfor183/data_science_articles | https://api.github.com/repos/chfor183/data_science_articles | opened
title: Outliers
labels: Data Preprocessing
body:
## TL;DR
Yes
## Key Takeaways
- Remove 2% at each end
- Remove points beyond 1.5x the interquartile range (IQR = Q3 - Q1)
- Study individually
## Useful Code Snippets
```
function test() {
console.log("notice the blank line before this function?");
}
```
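The two removal rules in the takeaways above can be sketched in code. This is an illustration with a naive quartile estimate, not the article's own implementation:

```python
# Two outlier-handling rules: trim a fixed percentage at each end,
# and drop points outside 1.5x the interquartile range (IQR = Q3 - Q1).
def trim_percent(values, pct=0.02):
    """Drop pct of the sorted values from each end."""
    s = sorted(values)
    k = int(len(s) * pct)
    return s[k:len(s) - k] if k else s

def iqr_filter(values, k=1.5):
    """Keep values within [Q1 - k*IQR, Q3 + k*IQR], using a naive quartile estimate."""
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]
```

For example, `iqr_filter([1, 2, ..., 10, 100])` drops the 100 while keeping the rest.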
## Articles/Resources
index: 1.0 | label: process
text:
outliers tl dr yes key takeaways remove at each end remove the interquartile range study individually useful code snippets function test console log notice the blank line before this function articles ressources
binary_label: 1
Row 31,209 | id 11,887,383,012 | IssuesEvent | 2020-03-28 01:30:41 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed
title: mono test failure: System.Security.Cryptography.Csp.Tests
labels: area-System.Security untriaged
body:
Found failure in "Libraries Test Run release mono OSX x64 Debug" on PR https://github.com/dotnet/runtime/pull/34208:
https://dev.azure.com/dnceng/public/_build/results?buildId=577051
```
.packages/microsoft.dotnet.helix.sdk/5.0.0-beta.20171.1/tools/Microsoft.DotNet.Helix.Sdk.MultiQueue.targets(76,5): error : (NETCORE_ENGINEERING_TELEMETRY=Test) Work item c237d5d2-991c-4ac3-a630-f4b4c2a7cb69/System.Security.Cryptography.Csp.Tests in job c237d5d2-991c-4ac3-a630-f4b4c2a7cb69 has failed.
.packages/microsoft.dotnet.helix.sdk/5.0.0-beta.20171.1/tools/Microsoft.DotNet.Helix.Sdk.MultiQueue.targets(76,5): error : (NETCORE_ENGINEERING_TELEMETRY=Test) Failure log: https://helix.dot.net/api/2019-06-17/jobs/c237d5d2-991c-4ac3-a630-f4b4c2a7cb69/workitems/System.Security.Cryptography.Csp.Tests/console .
(NETCORE_ENGINEERING_TELEMETRY=Build) Build failed (exit code '1').
Bash exited with code '1'.
```
https://helix.dot.net/api/2019-06-17/jobs/c237d5d2-991c-4ac3-a630-f4b4c2a7cb69/workitems/System.Security.Cryptography.Csp.Tests/console
``` System.Security.Cryptography.Dsa.Tests.DsaArraySignatureFormatTests.SignDataVerifyData_SHA1(signatureFormat: Rfc3279DerSequence) [FAIL]
Assert.True() Failure
Expected: True
Actual: False
Stack Trace:
/_/src/libraries/Common/tests/System/Security/Cryptography/AlgorithmImplementations/DSA/DsaFamilySignatureFormatTests.cs(152,0): at System.Security.Cryptography.Algorithms.Tests.DsaFamilySignatureFormatTests.SignDataVerifyData_SHA1(DSASignatureFormat signatureFormat)
/_/src/mono/netcore/System.Private.CoreLib/src/System/Reflection/RuntimeMethodInfo.cs(360,0): at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
```
index: True | label: non_process
text:
mono test failure system security cryptography csp tests found failure in libraries test run release mono osx debug on pr packages microsoft dotnet helix sdk beta tools microsoft dotnet helix sdk multiqueue targets error netcore engineering telemetry test work item system security cryptography csp tests in job has failed packages microsoft dotnet helix sdk beta tools microsoft dotnet helix sdk multiqueue targets error netcore engineering telemetry test failure log netcore engineering telemetry build build failed exit code bash exited with code system security cryptography dsa tests dsaarraysignatureformattests signdataverifydata signatureformat assert true failure expected true actual false stack trace src libraries common tests system security cryptography algorithmimplementations dsa dsafamilysignatureformattests cs at system security cryptography algorithms tests dsafamilysignatureformattests signdataverifydata dsasignatureformat signatureformat src mono netcore system private corelib src system reflection runtimemethodinfo cs at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture
binary_label: 0
Row 14,735 | id 17,970,651,633 | IssuesEvent | 2021-09-14 01:14:05 | hashgraph/hedera-mirror-node | https://api.github.com/repos/hashgraph/hedera-mirror-node | closed
title: Add a rosetta helm chart
labels: enhancement P1 process rosetta
body:
**Problem**
Helm chart currently contains sub charts for all modules except rosetta
**Solution**
Add a sub chart that uses the rosetta docker image and exposes customizable configs.
As a chart it can be similar to the REST API chart
**Alternatives**
**Additional Context**
Depends on docker images deployed and available in GCR
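A rosetta sub-chart as proposed could start from a minimal chart scaffold, similar to the existing REST API chart. The names, image path, port, and versions below are hypothetical placeholders, not the project's actual chart:

```yaml
# charts/hedera-mirror-rosetta/Chart.yaml (hypothetical scaffold)
apiVersion: v2
name: hedera-mirror-rosetta
description: Hedera Mirror Node rosetta API sub-chart
version: 0.1.0

# charts/hedera-mirror-rosetta/values.yaml (hypothetical defaults)
# The image must already be published to GCR (see "Additional Context").
image:
  repository: gcr.io/mirrornode/hedera-mirror-rosetta
  tag: latest
service:
  port: 5700
config: {}
```

Exposing the module's configuration through `values.yaml` is what makes the configs customizable per deployment.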
index: 1.0 | label: process
text:
add a rosetta helm chart problem helm chart currently contains sub charts for all modules expect rosetta solution add a sub chart that uses the rosetta docker image and exposes customizable configs as a chart it can be similar to the rest api chart alternatives additional context depends on docker images deployed and available in gcr
binary_label: 1
Row 6,599 | id 9,682,230,970 | IssuesEvent | 2019-05-23 08:43:01 | brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine | https://api.github.com/repos/brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine | closed
title: !!!PLANS ARE CHANGING!!! Working with files
labels: C++ Work in process help wanted
body:
@goldmen4ik little job for u
## For comfortable work with the program, the rules and the source string need to be read from a file.
* In the input_rules function, add a loop in which the files will be read.
* Also add reading of the source string from a file in the input_main_row function
index: 1.0 | label: process
text:
plans are changing working with files little job for u for comfortable work with the program the rules and the source string must be read from a file in the input rules function add a loop in which the files will be read also add reading of the source string from a file in the input main row function
binary_label: 1
Row 4,712 | id 7,550,041,430 | IssuesEvent | 2018-04-18 15:44:46 | DevExpress/testcafe-hammerhead | https://api.github.com/repos/DevExpress/testcafe-hammerhead | reopened
title: Remove `integrity` attributes from resources
labels: !IMPORTANT! AREA: client AREA: server SYSTEM: resource processing TYPE: bug
body:
See: http://www.w3.org/TR/SRI/
This attribute is used to avoid MITM attacks. The attribute contains a SHA hash of the desired resource, which doesn't work for us. It is widely used and breaks, e.g., GitHub proxying. I suggest removing this attribute entirely. It applies only to `<script>` and `<link>` elements.
index: 1.0 | label: process
text:
remove integrity attributes from resources see this attribute is used to avoid mitm attacks attribute contains sha of the desired resource which doesn t work for us this is widely used currently and it brokes e g github proxying i suggest to remove this attribute entirely it applies only to and elements
binary_label: 1
Row 57,882 | id 24,258,072,494 | IssuesEvent | 2022-09-27 19:40:15 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed
title: [Enhancement] Ability to search by Project ID
labels: Service: Dev Need: 1-Must Have Type: Enhancement Product: Moped Project: Moped v2.0
body:
Now that we have project IDs rendering everywhere (#9551), users should easily be able to search by project ID with the non-advanced search features.
<img width="1447" alt="Screen Shot 2022-09-21 at 2 24 23 PM" src="https://user-images.githubusercontent.com/14793120/191582332-2e56aa5e-ea30-46f5-a3b9-940f0a269ff8.png">
index: 1.0 | label: non_process
text:
ability to search by project id now that we have project ids rendering everywhere users should easily be able to search by project id with the non advanced search features img width alt screen shot at pm src
binary_label: 0
Row 20,362 | id 6,034,229,861 | IssuesEvent | 2017-06-09 10:28:36 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed
title: Clean up code in MultipleFileProperty
labels: Quality: Code Quality
body:
#19826 was merged in time for the release but comments on the code are valid and need to be addressed.
I think other clean ups would be worthwhile such as making some variables, `constexpr` rather than constructing them each time through the methods, e.g. [here](https://github.com/mantidproject/mantid/blob/0488b194c4a00d1b7046d689c7cd076ef9d85844/Framework/API/src/MultipleFileProperty.cpp#L247).
index: 1.0 | label: non_process
text:
clean up code in multiplefileproperty was merged in time for the release but comments on the code are valid and need to be addressed i think other clean ups would be worthwhile such as making some variables constexpr rather than constructing them each time through the methods e g
binary_label: 0
Row 19,324 | id 25,472,107,774 | IssuesEvent | 2022-11-25 11:04:58 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed
title: [IDP] [PM] Getting an error message in the edit admin screen
labels: Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
body:
**Pre-condition:** mfa should be disabled in the PM
Getting an error message when editing an admin in the application.
**Steps:**
1. Login to PM
2. Click on 'Admins' tab
3. Edit admin in the list
4. Remove the phone number
5. Click on 'Save' button and Verify
**AR:** Getting an error message as attached in the below screenshot
**ER:** Admins details should be updated even without phone number field

index: 3.0 | label: process
text:
getting an error message in the edit admin screen pre condition mfa should be disabled in the pm getting an error message when edited admin in the application steps login to pm click on admins tab edit admin in the list remove the phone number click on save button and verify ar getting an error message as attached in the below screenshot er admins details should be updated even without phone number field
binary_label: 1
Row 14,708 | id 17,909,920,523 | IssuesEvent | 2021-09-09 02:50:37 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed
title: Cuda Tensor IPC Does Not Work Properly With P2P Access
labels: module: multiprocessing module: cuda triaged
body:
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
CUDA supports both memory IPC and P2P access, and also their combination: if I allocate a block of memory on device A and send it to a child process running on device B via the CUDA IPC API, and devices A and B support P2P access, the child process can directly access the data on device A through the pointer created from the cudaIpcMemHandle.
But in PyTorch, if I create a CUDA tensor on device A and send it through a queue to a child process running on device B (with P2P access already enabled), the child process fails to access the data through the pointer created from the cudaIpcMemHandle.
## To Reproduce
Steps to reproduce the behavior:
1. create cuda Tensor on device A and enable P2P access between device A and device B
2. send tensor to child process which runs on device B through torch.multiprocessing.Queue
3. launch a kernel on device B and try to access data through tensor's data ptr
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
Everything should work fine, but I encountered an illegal memory access error
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux) Linux :
- How you installed PyTorch (`conda`, `pip`, source) pip:
- Build command you used (if compiling from source):
- Python version 3.7.5 :
- CUDA/cuDNN version 11.2 :
## Additional context
<!-- Add any other context about the problem here. -->
cc @VitalyFedyunin @ngimel
index: 1.0 | label: process
text:
cuda tensor ipc does not work properly with access 🐛 bug cuda support both memory ipc and access and also combination of them which means if i allocate a block of memory on device a and send it to child process which runs on device b with cuda ipc api and device a and device b support access child process can directly access data on device a through ptr created from cudaipcmemhandle but in pytorch if i create a cuda tensor on deivce a and send it through queue to child process running on device b access enabled already child process failed to access data on device b through ptr created from cudaipcmemhandle to reproduce steps to reproduce the behavior create cuda tensor on device a and enable access between device a and device b send tensor to child process which runs on device b through torch multiprocessing queue launch a kernel on device b and try to access data through tensor s data ptr expected behavior everything should works fine but i encountered illegal memory access error environment please copy and paste the output from our or fill out the checklist below manually you can get the script and run it with wget for security purposes please check the contents of collect env py before running it python collect env py pytorch version e g os e g linux linux how you installed pytorch conda pip source pip build command you used if compiling from source python version cuda cudnn version additional context cc vitalyfedyunin ngimel
binary_label: 1
Row 29,254 | id 14,018,897,184 | IssuesEvent | 2020-10-29 17:25:56 | reima-ecom/reima-theme | https://api.github.com/repos/reima-ecom/reima-theme | closed
title: Eagerly load above-the-fold images
labels: performance
body:
Pages should load the hero image (first module) eagerly.
Product pages should load the main product picture eagerly.
See README for details.
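The requested behavior maps to standard HTML loading hints; a minimal sketch with hypothetical image paths (not the theme's actual markup):

```html
<!-- Hero image (first module): fetch eagerly and at high priority -->
<img src="/img/hero.jpg" alt="Hero" loading="eager" fetchpriority="high">
<!-- Below-the-fold imagery can keep lazy loading -->
<img src="/img/collection.jpg" alt="Collection" loading="lazy">
```

`loading="eager"` is the default, so the practical change is usually removing a blanket `loading="lazy"` from above-the-fold images.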
index: True | label: non_process
text:
eagerly load above the fold images pages should load the hero image first module eagerly product pages should load the main product picture eagerly see readme for details
binary_label: 0
Row 9,771 | id 12,758,656,627 | IssuesEvent | 2020-06-29 03:06:31 | didi/mpx | https://api.github.com/repos/didi/mpx | closed
title: [Bug report] ByteDance mini-program ad webpack validation error
labels: processing
body:
**Problem description**
When running npm run build:cross, it reports:
```
<ad> is not supported in bytedance environment!
```
**Environment**
At least the following:
1. OS (Mac or Windows): mac
2. Mpx dependency versions:
> "@mpxjs/api-proxy": "^2.5.20",
> "@mpxjs/core": "^2.5.28",
> "@mpxjs/fetch": "^2.3.9"
The ad tag component is already supported: https://microapp.bytedance.com/dev/cn/mini-app/develop/component/open-capacity/ad
index: 1.0 | label: process
text:
bytedance mini program ad webpack validation error problem description when running npm run build cross it reports is not supported in bytedance environment environment description at least the following os mac or windows mac mpx dependency versions mpxjs api proxy mpxjs core mpxjs fetch the ad tag component is already supported
binary_label: 1
Row 11,944 | id 14,708,853,095 | IssuesEvent | 2021-01-05 00:46:58 | yuta252/startlens_react_frontend | https://api.github.com/repos/yuta252/startlens_react_frontend | closed
title: Implement the user profile feature
labels: dev process
body:
## Overview
Implement a feature to display and edit the user's profile (basic information).
## Changes
- [ ] Display basic profile information
- [ ] Edit basic profile information
- [ ] Display and update the profile image
## Additional tasks
## Issues
---
## References
---
## Notes
--
index: 1.0 | label: process
text:
implement the user profile feature overview implement a feature to display and edit the user s profile basic information changes display basic profile information edit basic profile information display and update the profile image additional tasks issues references notes
binary_label: 1
Row 109,905 | id 4,415,288,533 | IssuesEvent | 2016-08-14 00:03:51 | AUV-IITK/auv | https://api.github.com/repos/AUV-IITK/auv | closed
title: (task_handler_layer) Launch file to launch all nodes together
labels: priority
body:
Also, on launch all nodes should be inactive. Only when they get a goal should they start doing image processing or sending motion goals to the motion library.
index: 1.0 | label: non_process
text:
task handler layer launch file to launch all nodes together also on launch all nodes should be inactive only when they get a goal should they start doing image processing or sending motion goals to motion library
binary_label: 0
Row 79,976 | id 23,080,359,963 | IssuesEvent | 2022-07-26 06:31:49 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed
title: Setup elasticsearch dependency monitoring with Snyk
labels: :Delivery/Build Team:Delivery
body:
We want to monitor the elasticsearch dependencies on a regular basis using Snyk, to recognise vulnerabilities faster and to react proactively to security issues within the production dependencies.
- [x] Setup Gradle plugin for monitoring dependencies in snyk ( [#88036](https://github.com/elastic/elasticsearch/pull/88036) )
- [x] Allow configuring snyk target reference and lifecycle properties ( [88220](https://github.com/elastic/elasticsearch/pull/88220) )
- [x] Setup CI job for automated update of snyk dependency monitoring of versions under development ( [88522](https://github.com/elastic/elasticsearch/pull/88522) )
- [x] Setup automation to update of snyk dependency monitoring of released versions
|
1.0
|
Setup elasticsearch dependency monitoring with Snyk - We want to monitor the elasticsearch dependencies on a regular basis using Snyk to be able to recognise vulnerabilities faster and being able to proactively react on security issues within the production dependencies.
- [x] Setup Gradle plugin for monitoring dependencies in snyk ( [#88036](https://github.com/elastic/elasticsearch/pull/88036) )
- [x] Allow configuring snyk target reference and lifecycle properties ( [88220](https://github.com/elastic/elasticsearch/pull/88220) )
- [x] Setup CI job for automated update of snyk dependency monitoring of versions under development ( [88522](https://github.com/elastic/elasticsearch/pull/88522) )
- [x] Setup automation to update snyk dependency monitoring of released versions
|
non_process
|
setup elasticsearch dependency monitoring with snyk we want to monitor the elasticsearch dependencies on a regular basis using snyk to be able to recognise vulnerabilities faster and being able to proactively react on security issues within the production dependencies setup gradle plugin for monitoring dependencies in snyk allow configuring snyk target reference and lifecycle properties setup ci job for automated update of snyk dependency monitoring of versions under development setup automation to update of snyk dependency monitoring of released versions
| 0
|
95,564
| 19,716,916,039
|
IssuesEvent
|
2022-01-13 11:56:19
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Consider special-casing `== ""` in the JIT?
|
area-CodeGen-coreclr untriaged in pr
|
I tried enabling CA1820 ("Test for empty strings using string length") and it flagged more than 100 cases in a libraries build. Many of them were of the form `s == ""` or `s != ""`. These have measurably worse codegen than just doing a null and length check, e.g.
https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAZgAJiAmcgYQFgAoAbyfPfIAcoBLAN2wYYlAIxJywCBAA25ALIiAFMREAGcgOkBXGAEpyAXgB8G7NuEGD5AETWA3Gw7d+g4SvGSZ8qsrWnz+sb+OuQ8uOQAdhAYkVrSsgBkCcEwAHQAMjARAOYYABaGVqoOjAC+QA===
```C#
public class C
{
private static bool M1(string value) => value == "";
private static bool M2(string value) => value is not null && value.Length == 0;
}
```
producing
```
C.M1(System.String)
L0000: push rax
L0001: mov rax, 0x201b0fc3020
L000b: cmp rcx, [rax]
L000e: je short L0053
L0010: test rcx, rcx
L0013: je short L001b
L0015: cmp dword ptr [rcx+8], 0
L0019: je short L0022
L001b: xor eax, eax
L001d: add rsp, 8
L0021: ret
L0022: lea rdx, [rcx+0xc]
L0026: mov r8, 0x201b0fc3020
L0030: mov r8, [r8]
L0033: add r8, 0xc
L0037: mov [rsp], r8
L003b: mov ecx, [rcx+8]
L003e: add ecx, ecx
L0040: mov r8d, ecx
L0043: mov rcx, rdx
L0046: mov rdx, [rsp]
L004a: add rsp, 8
L004e: jmp 0x00007ffb1b5eafa0
L0053: mov eax, 1
L0058: jmp short L001d
C.M2(System.String)
L0000: test rcx, rcx
L0003: je short L0010
L0005: cmp dword ptr [rcx+8], 0
L0009: sete al
L000c: movzx eax, al
L000f: ret
L0010: xor eax, eax
L0012: ret
```
While a developer could have written a better check, I wonder if it'd be worthwhile special-casing a comparison against an empty string literal (both "" and string.Empty) in the JIT?
cc: @egorbo
|
1.0
|
Consider special-casing `== ""` in the JIT? - I tried enabling CA1820 ("Test for empty strings using string length") and it flagged more than 100 cases in a libraries build. Many of them were of the form `s == ""` or `s != ""`. These have measurably worse codegen than just doing a null and length check, e.g.
https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAZgAJiAmcgYQFgAoAbyfPfIAcoBLAN2wYYlAIxJywCBAA25ALIiAFMREAGcgOkBXGAEpyAXgB8G7NuEGD5AETWA3Gw7d+g4SvGSZ8qsrWnz+sb+OuQ8uOQAdhAYkVrSsgBkCcEwAHQAMjARAOYYABaGVqoOjAC+QA===
```C#
public class C
{
private static bool M1(string value) => value == "";
private static bool M2(string value) => value is not null && value.Length == 0;
}
```
producing
```
C.M1(System.String)
L0000: push rax
L0001: mov rax, 0x201b0fc3020
L000b: cmp rcx, [rax]
L000e: je short L0053
L0010: test rcx, rcx
L0013: je short L001b
L0015: cmp dword ptr [rcx+8], 0
L0019: je short L0022
L001b: xor eax, eax
L001d: add rsp, 8
L0021: ret
L0022: lea rdx, [rcx+0xc]
L0026: mov r8, 0x201b0fc3020
L0030: mov r8, [r8]
L0033: add r8, 0xc
L0037: mov [rsp], r8
L003b: mov ecx, [rcx+8]
L003e: add ecx, ecx
L0040: mov r8d, ecx
L0043: mov rcx, rdx
L0046: mov rdx, [rsp]
L004a: add rsp, 8
L004e: jmp 0x00007ffb1b5eafa0
L0053: mov eax, 1
L0058: jmp short L001d
C.M2(System.String)
L0000: test rcx, rcx
L0003: je short L0010
L0005: cmp dword ptr [rcx+8], 0
L0009: sete al
L000c: movzx eax, al
L000f: ret
L0010: xor eax, eax
L0012: ret
```
While a developer could have written a better check, I wonder if it'd be worthwhile special-casing a comparison against an empty string literal (both "" and string.Empty) in the JIT?
cc: @egorbo
|
non_process
|
consider special casing in the jit i tried enabling test for empty strings using string length and it flagged more than cases in a libraries build many of them were of the form s or s these have measurably worse codegen than just doing a null and length check e g c public class c private static bool string value value private static bool string value value is not null value length producing c system string push rax mov rax cmp rcx je short test rcx rcx je short cmp dword ptr je short xor eax eax add rsp ret lea rdx mov mov add mov mov ecx add ecx ecx mov ecx mov rcx rdx mov rdx add rsp jmp mov eax jmp short c system string test rcx rcx je short cmp dword ptr sete al movzx eax al ret xor eax eax ret while a developer could have written a better check i wonder if it d be worthwhile special casing a comparison against an empty string literal both and string empty in the jit cc egorbo
| 0
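The dotnet/runtime record above hinges on one claim: `s == ""` and a null-and-length check are semantically interchangeable, which is what would justify the JIT rewriting one into the other. A minimal Python analogue (with `None` standing in for a null reference; not the C# implementation itself) shows the two forms agree on all inputs:

```python
def m1(value):
    # Direct comparison against the empty-string literal, mirroring the C# M1.
    return value == ""

def m2(value):
    # The null-and-length check the issue suggests as the cheaper form (C# M2).
    return value is not None and len(value) == 0

# The two checks agree for empty, non-empty, and null-like inputs.
for v in ("", "abc", None):
    assert m1(v) == m2(v)
```

In Python both forms are already cheap; the sketch only demonstrates that the rewrite preserves semantics, which is the precondition for the proposed JIT special case.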
|
20,073
| 26,564,375,935
|
IssuesEvent
|
2023-01-20 18:40:58
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Square is not a square when exported
|
scope: image processing bug: pending
|
**Describe the bug/issue**
When exporting an image that has been cropped to aspect ratio 1:1 with the crop tool, the result is not a square: one side is one pixel off.
**To Reproduce**
1. Open an image
2. Crop image to 1:1, keeping the full height
3. Return to darkroom
4. Export image full sized image
**Expected behavior**
Both sides are equal.
**Screenshots**


**Platform**
* darktable version : 3.6.0.1
* OS : Windows 10
* OpenCL : no impact
**Additional context**
The same error occurs also in my virtual Ubuntu (21.04) and in Windows with the latest master (3.7.0+649).
There are no modules with geometric transformations applied (e.g. crop & **rotate** or lens correction).
This looks similar as error #7465
**Example file**
[2021_07_30.zip](https://github.com/darktable-org/darktable/files/6941199/2021_07_30.zip)
|
1.0
|
Square is not a square when exported - **Describe the bug/issue**
When exporting an image that has been cropped to aspect ratio 1:1 with the crop tool, the result is not a square: one side is one pixel off.
**To Reproduce**
1. Open an image
2. Crop image to 1:1, keeping the full height
3. Return to darkroom
4. Export image full sized image
**Expected behavior**
Both sides are equal.
**Screenshots**


**Platform**
* darktable version : 3.6.0.1
* OS : Windows 10
* OpenCL : no impact
**Additional context**
The same error occurs also in my virtual Ubuntu (21.04) and in Windows with the latest master (3.7.0+649).
There are no modules with geometric transformations applied (e.g. crop & **rotate** or lens correction).
This looks similar as error #7465
**Example file**
[2021_07_30.zip](https://github.com/darktable-org/darktable/files/6941199/2021_07_30.zip)
|
process
|
square is not a square when exported describe the bug issue when exporting an image that has been cropped to aspect ratio with the crop the result is not a square the other side is one pixel off to reproduce open an image crop image to keeping the full height return to darkroom export image full sized image expected behavior both sides are equal screenshots platform darktable version os windows opencl no impact additional context the same error occurs also in my virtual ubuntu and in windows with the latest master there are no modules with geometric transformations applied e g crop rotate or lens correction this looks similar as error example file
| 1
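One plausible mechanism for the darktable record's 1 px mismatch (an assumption about the cause, not darktable's actual code) is that each crop endpoint is converted from normalized coordinates to pixels and rounded independently; two extents that are exactly equal in normalized units can then land on different integer widths:

```python
W = H = 6000  # hypothetical image size in pixels

# Normalized crop box; both extents are exactly 0.5441 wide and tall.
x1, x2 = 0.0001, 0.5442
y1, y2 = 0.0002, 0.5443

# Rounding each endpoint separately, as a pipeline might:
w = round(x2 * W) - round(x1 * W)   # 3265 - 1 = 3264
h = round(y2 * H) - round(y1 * H)   # 3266 - 1 = 3265
```

Rounding the extent as a whole, `round((x2 - x1) * W)`, gives 3265 for both sides, which is one way such off-by-one bugs get fixed.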
|
10,665
| 2,622,179,682
|
IssuesEvent
|
2015-03-04 00:18:15
|
byzhang/leveldb
|
https://api.github.com/repos/byzhang/leveldb
|
opened
|
correct library naming on mac os x
|
auto-migrated Priority-Medium Type-Defect
|
```
Would it be possible to name library as {name}.{version}.{extension} ?
And maybe add install prefix + make target ?
```
Original issue reported on code.google.com by `humdumde...@gmail.com` on 13 Nov 2012 at 2:01
|
1.0
|
correct library naming on mac os x - ```
Would it be possible to name library as {name}.{version}.{extension} ?
And maybe add install prefix + make target ?
```
Original issue reported on code.google.com by `humdumde...@gmail.com` on 13 Nov 2012 at 2:01
|
non_process
|
correct library naming on mac os x would it be possible to name library as name version extension and maybe add install prefix make target original issue reported on code google com by humdumde gmail com on nov at
| 0
|
12,939
| 15,303,671,873
|
IssuesEvent
|
2021-02-24 16:03:25
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
opened
|
Change term - subgenus
|
Class - Taxon Process - ready for public comment Term - change
|
## Change term
* Submitter: @tucotuco
* Justification (why is this change necessary?): Based on discussions in Issue #176, it was suggested that the examples for subgenus were not strictly correct. The examples were changed to remove the reference to the genus, but the definition of the term still includes "Values should include the genus to avoid homonym confusion." First of all, that should be a usage note and not a part of the definition and should have been pulled when work was done to divide comments from definitions. This is an interesting case, because it isn't actually a semantic change that is being proposed, despite the fact that the definition would change. Thus, it is a little unclear whether it would have to go through public review. To stay on the safe side, we should include it with a series of Taxon term change requests. The issue that inspired this change request (#319) was made by @Jegelewicz.
* Proponents (who needs this change): Everyone sharing or searching for information based on subgenus.
Proposed new attributes of the term:
* Term name (in lowerCamelCase): subgenus
* Organized in Class (e.g. Location, Taxon): Taxon
* Definition of the term: The full scientific name of the subgenus in which the taxon is classified.
* Usage comments (recommendations regarding content, etc.): Values should not include the genus, nor should they be in parentheses.
* Examples: `Strobus`, `Amerigo`, `Pilosella`
* Refines (identifier of the broader term this term refines, if applicable):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/subgenus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Identifications/Identification/TaxonIdentified/ScientificName/NameAtomised/Zoological/Subgenus
|
1.0
|
Change term - subgenus - ## Change term
* Submitter: @tucotuco
* Justification (why is this change necessary?): Based on discussions in Issue #176, it was suggested that the examples for subgenus were not strictly correct. The examples were changed to remove the reference to the genus, but the definition of the term still includes "Values should include the genus to avoid homonym confusion." First of all, that should be a usage note and not a part of the definition and should have been pulled when work was done to divide comments from definitions. This is an interesting case, because it isn't actually a semantic change that is being proposed, despite the fact that the definition would change. Thus, it is a little unclear whether it would have to go through public review. To stay on the safe side, we should include it with a series of Taxon term change requests. The issue that inspired this change request (#319) was made by @Jegelewicz.
* Proponents (who needs this change): Everyone sharing or searching for information based on subgenus.
Proposed new attributes of the term:
* Term name (in lowerCamelCase): subgenus
* Organized in Class (e.g. Location, Taxon): Taxon
* Definition of the term: The full scientific name of the subgenus in which the taxon is classified.
* Usage comments (recommendations regarding content, etc.): Values should not include the genus, nor should they be in parentheses.
* Examples: `Strobus`, `Amerigo`, `Pilosella`
* Refines (identifier of the broader term this term refines, if applicable):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/subgenus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Identifications/Identification/TaxonIdentified/ScientificName/NameAtomised/Zoological/Subgenus
|
process
|
change term subgenus change term submitter tucotuco justification why is this change necessary based on discussions in issue it was suggested that the examples for subgenus were not strictly correct the examples were changed to remove the reference to the genus but the definition of the term still includes values should include the genus to avoid homonym confusion first of all that should be a usage note and not a part of the definition and should have been pulled when work was done to divide comments from definitions this is an interesting case because it isn t actually a semantic change that is being proposed despite the fact that the definition would change thus it is a little unclear whether it would have to go through public review to stay on the safe side we should include it with a series of taxon term change requests the issue was made by jegelewicz proponents who needs this change everyone sharing or searching for information based on subgenus proposed new attributes of the term term name in lowercamelcase subgenus organized in class e g location taxon taxon definition of the term the full scientific name of the subgenus in which the taxon is classified usage comments recommendations regarding content etc values should not include the genus not should they be in parentheses examples strobus amerigo pilosella refines identifier of the broader term this term refines if applicable replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit identifications identification taxonidentified scientificname nameatomised zoological subgenus
| 1
|
12,506
| 14,961,860,201
|
IssuesEvent
|
2021-01-27 08:26:50
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Audit Logs] Event is not triggered for Response Datastore module in Android and iOS
|
Bug P1 Participant datastore Process: Reopened
|
**Event:**
STUDY_METADATA_RECEIVED
|
1.0
|
[Audit Logs] Event is not triggered for Response Datastore module in Android and iOS - **Event:**
STUDY_METADATA_RECEIVED
|
process
|
event is not triggered for response datastore module in android and ios event study metadata received
| 1
|
144,771
| 19,301,915,772
|
IssuesEvent
|
2021-12-13 07:07:26
|
SmartBear/soapui
|
https://api.github.com/repos/SmartBear/soapui
|
closed
|
CVE-2021-39152 (High) detected in xstream-1.4.13.jar - autoclosed
|
security vulnerability
|
## CVE-2021-39152 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.13.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: soapui/soapui/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/thoughtworks/xstream/1.4.13/xstream-1.4.13.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.13.jar** (Vulnerable Library)
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39152>CVE-2021-39152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-xw4p-crpj-vjx2">https://github.com/x-stream/xstream/security/advisories/GHSA-xw4p-crpj-vjx2</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.13","packageFilePaths":["/soapui/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.thoughtworks.xstream:xstream:1.4.13","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18","isBinary":false}],"baseBranches":["next"],"vulnerabilityIdentifier":"CVE-2021-39152","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39152","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-39152 (High) detected in xstream-1.4.13.jar - autoclosed - ## CVE-2021-39152 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.13.jar</b></p></summary>
<p></p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: soapui/soapui/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/thoughtworks/xstream/1.4.13/xstream-1.4.13.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.13.jar** (Vulnerable Library)
<p>Found in base branch: <b>next</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39152>CVE-2021-39152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-xw4p-crpj-vjx2">https://github.com/x-stream/xstream/security/advisories/GHSA-xw4p-crpj-vjx2</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.13","packageFilePaths":["/soapui/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.thoughtworks.xstream:xstream:1.4.13","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18","isBinary":false}],"baseBranches":["next"],"vulnerabilityIdentifier":"CVE-2021-39152","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. If you rely on XStream\u0027s default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39152","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar library home page a href path to dependency file soapui soapui pom xml path to vulnerable library canner repository thoughtworks xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in base branch next vulnerability details xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a java runtime version to no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com thoughtworks xstream xstream isminimumfixversionavailable true minimumfixversion com thoughtworks xstream xstream isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a java runtime version to no user is affected who followed the recommendation to setup xstream security framework with a whitelist limited to the minimal required types if you rely on xstream default blacklist of the you will have to use at least version vulnerabilityurl
| 0
|
17,008
| 23,425,067,088
|
IssuesEvent
|
2022-08-14 09:06:56
|
Sirttas/ElementalCraft
|
https://api.github.com/repos/Sirttas/ElementalCraft
|
closed
|
[Suggestion] Project MMO Compatibility
|
enhancement Compatibility
|
[Project MMO](https://www.curseforge.com/minecraft/mc-mods/project-mmo) is a mod where you can level up different skills (agility, combat, farming, cooking, smithing, etc.) and you get better and better perks as you level.
So specifically, it would be great if ElementalCraft furnaces gave smithing/cooking XP depending on what it's processing. Also, the perk of the smithing and cooking skills are that you get an increased chance of extra outputs as you level. So it would be great if EC furnaces also supported those.
There is an API listed [about 3/4 of the way down this page](https://harmonised7.github.io/minecraft/pmmo/Features.html) although I'm not sure how helpful it could be to you.
|
True
|
[Suggestion] Project MMO Compatibility - [Project MMO](https://www.curseforge.com/minecraft/mc-mods/project-mmo) is a mod where you can level up different skills (agility, combat, farming, cooking, smithing, etc.) and you get better and better perks as you level.
So specifically, it would be great if ElementalCraft furnaces gave smithing/cooking XP depending on what it's processing. Also, the perk of the smithing and cooking skills are that you get an increased chance of extra outputs as you level. So it would be great if EC furnaces also supported those.
There is an API listed [about 3/4 of the way down this page](https://harmonised7.github.io/minecraft/pmmo/Features.html) although I'm not sure how helpful it could be to you.
|
non_process
|
project mmo compatibility is a mod where you can level up different skills agility combat farming cooking smithing etc and you get better and better perks as you level so specifically it would be great if elementalcraft furnaces gave smithing cooking xp depending on what it s processing also the perk of the smithing and cooking skills are that you get an increased chance of extra outputs as you level so it would be great if ec furnaces also supported those there is an api listed although i m not sure how helpful it could be to you
| 0
|
13,330
| 15,790,017,906
|
IssuesEvent
|
2021-04-02 00:18:03
|
e4exp/paper_manager_abstract
|
https://api.github.com/repos/e4exp/paper_manager_abstract
|
opened
|
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
|
GAN Image Synthesis Natural Language Processing Vision-Language
|
- https://arxiv.org/abs/2103.17249
- 2021
Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use StyleGAN's latent space to manipulate generated and real images.
However, discovering meaningful latent manipulations requires painstaking human examination of the many degrees of freedom, or a collection of annotated images for each desired manipulation.
In this work, we explore leveraging the recently introduced CLIP (Contrastive Language-Image Pre-training) model to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort.
First, we introduce an optimization scheme that uses a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt.
Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, enabling faster and more stable text-based manipulation.
Finally, we present a method for mapping a text prompt to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation.
Extensive results and comparisons demonstrate the effectiveness of our approach.
|
1.0
|
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery - - https://arxiv.org/abs/2103.17249
- 2021
Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use StyleGAN's latent space to manipulate generated and real images.
However, discovering meaningful latent manipulations requires painstaking human examination of the many degrees of freedom, or a collection of annotated images for each desired manipulation.
In this work, we explore leveraging the recently introduced CLIP (Contrastive Language-Image Pre-training) model to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort.
First, we introduce an optimization scheme that uses a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt.
Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, enabling faster and more stable text-based manipulation.
Finally, we present a method for mapping a text prompt to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation.
Extensive results and comparisons demonstrate the effectiveness of our approach.
|
process
|
styleclip text driven manipulation of stylegan imagery styleganが様々な分野で非常にリアルな画像を生成できることに触発され、最近では、生成された画像や実物の画像を操作するためにstyleganの潜在空間をどのように使用するかを理解することに多くの研究が集中している。 しかし、意味のある潜在的な操作を発見するためには、多くの自由度を人間が丹念に調べたり、目的の操作ごとに画像を集めて注釈を付けたりする必要があります。 本研究では、最近導入されたclip(contrastive language image pre training)モデルを活用することで、このような手作業を必要としないstylegan画像操作のためのテキストベースのインターフェースを開発することを検討する。 まず、clipベースの損失を利用して、ユーザーが提供するテキストプロンプトに応じて入力潜在ベクトルを修正する最適化スキームを紹介します。 次に、与えられた入力画像に対するテキスト誘導の潜在的操作ステップを推論する潜在的マッパーについて説明し、より高速で安定したテキストベースの操作を可能にします。 最後に、テキストプロンプトをstyleganのスタイル空間における入力に依存しない指示にマッピングする方法を提示し、インタラクティブなテキスト駆動型の画像操作を可能にする。 広範な結果と比較により、我々のアプローチの有効性が実証された。
| 1
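The latent-optimization scheme in the StyleCLIP record above (a CLIP-matching term plus a term keeping the edit close to the source latent) can be caricatured with a one-dimensional toy problem. Everything here is a stand-in — the quadratic "CLIP loss", the step size, and the weights are illustrative assumptions, not the paper's actual objective:

```python
# Toy 1-D analogue of text-guided latent optimization:
#   minimize  L(w) = (w - t)^2 + lam * (w - w0)^2
# where t mimics the text-matching (CLIP) target and the second
# term keeps the edited latent close to the source latent w0.
t, w0, lam = 1.0, 0.0, 0.2

w, lr = w0, 0.1
for _ in range(300):
    grad = 2 * (w - t) + 2 * lam * (w - w0)  # closed-form gradient of L
    w -= lr * grad

# The minimizer is (t + lam * w0) / (1 + lam): the edit is pulled toward
# the target but held back in proportion to lam, just as a larger
# identity-preservation weight yields a more conservative image edit.
```

Raising `lam` moves the optimum back toward `w0`, which is the knob the abstract describes as trading edit strength against faithfulness to the original image.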
|
131,204
| 18,233,002,216
|
IssuesEvent
|
2021-10-01 01:02:39
|
jmacwhitesource/cloud-pipeline
|
https://api.github.com/repos/jmacwhitesource/cloud-pipeline
|
opened
|
CVE-2021-23446 (High) detected in handsontable-0.33.0.tgz
|
security vulnerability
|
## CVE-2021-23446 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handsontable-0.33.0.tgz</b></p></summary>
<p>Spreadsheet-like data grid editor that provides copy/paste functionality compatible with Excel/Google Docs</p>
<p>Library home page: <a href="https://registry.npmjs.org/handsontable/-/handsontable-0.33.0.tgz">https://registry.npmjs.org/handsontable/-/handsontable-0.33.0.tgz</a></p>
<p>Path to dependency file: cloud-pipeline/client/package.json</p>
<p>Path to vulnerable library: cloud-pipeline/client/node_modules/handsontable/package.json</p>
<p>
Dependency Hierarchy:
- react-handsontable-0.3.2.tgz (Root Library)
- :x: **handsontable-0.33.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handsontable before 10.0.0; the package handsontable from 0 and before 10.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) in Handsontable.helper.isNumeric function.
<p>Publish Date: 2021-09-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23446>CVE-2021-23446</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23446">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23446</a></p>
<p>Release Date: 2021-09-29</p>
<p>Fix Resolution: handsontable - 10.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handsontable","packageVersion":"0.33.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-handsontable:0.3.2;handsontable:0.33.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handsontable - 10.0.0"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-23446","vulnerabilityDetails":"The package handsontable before 10.0.0; the package handsontable from 0 and before 10.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) in Handsontable.helper.isNumeric function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23446","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23446 (High) detected in handsontable-0.33.0.tgz - ## CVE-2021-23446 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handsontable-0.33.0.tgz</b></summary>
<p>Spreadsheet-like data grid editor that provides copy/paste functionality compatible with Excel/Google Docs</p>
<p>Library home page: <a href="https://registry.npmjs.org/handsontable/-/handsontable-0.33.0.tgz">https://registry.npmjs.org/handsontable/-/handsontable-0.33.0.tgz</a></p>
<p>Path to dependency file: cloud-pipeline/client/package.json</p>
<p>Path to vulnerable library: cloud-pipeline/client/node_modules/handsontable/package.json</p>
<p>
Dependency Hierarchy:
- react-handsontable-0.3.2.tgz (Root Library)
- :x: **handsontable-0.33.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handsontable before 10.0.0; the package handsontable from 0 and before 10.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) in Handsontable.helper.isNumeric function.
<p>Publish Date: 2021-09-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23446>CVE-2021-23446</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23446">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23446</a></p>
<p>Release Date: 2021-09-29</p>
<p>Fix Resolution: handsontable - 10.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handsontable","packageVersion":"0.33.0","packageFilePaths":["/client/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-handsontable:0.3.2;handsontable:0.33.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handsontable - 10.0.0"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-23446","vulnerabilityDetails":"The package handsontable before 10.0.0; the package handsontable from 0 and before 10.0.0 are vulnerable to Regular Expression Denial of Service (ReDoS) in Handsontable.helper.isNumeric function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23446","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in handsontable tgz cve high severity vulnerability vulnerable library handsontable tgz spreadsheet like data grid editor that provides copy paste functionality compatible with excel google docs library home page a href path to dependency file cloud pipeline client package json path to vulnerable library cloud pipeline client node modules handsontable package json dependency hierarchy react handsontable tgz root library x handsontable tgz vulnerable library found in base branch develop vulnerability details the package handsontable before the package handsontable from and before are vulnerable to regular expression denial of service redos in handsontable helper isnumeric function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handsontable isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react handsontable handsontable isminimumfixversionavailable true minimumfixversion handsontable basebranches vulnerabilityidentifier cve vulnerabilitydetails the package handsontable before the package handsontable from and before are vulnerable to regular expression denial of service redos in handsontable helper isnumeric function vulnerabilityurl
| 0
|
262,254
| 8,257,682,953
|
IssuesEvent
|
2018-09-13 06:29:34
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.w3schools.com - see bug description
|
browser-firefox-mobile priority-important
|
<!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1.1; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.w3schools.com/tags/tryit.asp?filename=tryhtml_form_submit
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 5.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: a long click doesn't allow pasting text anymore
**Steps to Reproduce**:
Normally I long-click in a text field and it marks the text so I can paste the clipboard text. This doesn't work anymore.
But it has always behaved strangely: an empty text field never worked. I had to write at least a letter, then I could mark it and paste the link or the text.
In Chrome it works.
[](https://webcompat.com/uploads/2018/9/0b0c6909-afbf-46b7-9ba1-48f4b358ca59.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.w3schools.com - see bug description - <!-- @browser: Firefox Mobile 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1.1; Mobile; rv:64.0) Gecko/64.0 Firefox/64.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.w3schools.com/tags/tryit.asp?filename=tryhtml_form_submit
**Browser / Version**: Firefox Mobile 64.0
**Operating System**: Android 5.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: a long click doesn't allow pasting text anymore
**Steps to Reproduce**:
Normally I long-click in a text field and it marks the text so I can paste the clipboard text. This doesn't work anymore.
But it has always behaved strangely: an empty text field never worked. I had to write at least a letter, then I could mark it and paste the link or the text.
In Chrome it works.
[](https://webcompat.com/uploads/2018/9/0b0c6909-afbf-46b7-9ba1-48f4b358ca59.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description a long click doesn t allow pasting text anymore steps to reproduce normally i click long in a textfield and it marks the text and i can paste the clipboard text this doesn t work anymore but it always was a strange behavior as an empty textfield did never work i had to write at least a letter then i could mark it and paste the link or the text in chrome it works from with ❤️
| 0
|
68,275
| 9,166,016,701
|
IssuesEvent
|
2019-03-02 00:07:43
|
pkmgarcia/codefoo-9
|
https://api.github.com/repos/pkmgarcia/codefoo-9
|
opened
|
Document the front end application
|
documentation easy front end
|
## What's wrong?
There is no documentation (dependencies, how to build, etc.).
## How do I fix it?
Update or create frontend/README.md
|
1.0
|
Document the front end application - ## What's wrong?
There is no documentation (dependencies, how to build, etc.).
## How do I fix it?
Update or create frontend/README.md
|
non_process
|
document the front end application what s wrong there is no documentation dependencies how to build etc how do i fix it update or create frontend readme md
| 0
|
3,892
| 6,820,987,226
|
IssuesEvent
|
2017-11-07 15:31:56
|
WormBase/wormbase-pipeline
|
https://api.github.com/repos/WormBase/wormbase-pipeline
|
closed
|
Anatomy function hash text reassignment proposal
|
MODELS Processing
|
The conversion of leaf tags that are themselves data with an optional Text field has the consequence of losing the leaf tag and therefore data. It is proposed to remove the ?Text field in this instance, as this appears to be optional additional information about the leaf tag qualifier that has been selected.
Current Model:
```
#Anatomy_function_info Autonomous Text
Nonautonomous Text
Sufficient Text
Insufficient Text
Necessary Text
Unnecessary Text
Remark Text
```
Proposed:
```
#Anatomy_function_info Autonomous
Nonautonomous
Sufficient
Insufficient
Necessary
Unnecessary
Remark UNIQUE Text
```
The Remark Text could be a structured single Text field where the linkage to the leaf tag could be echoed
Associated issue: https://github.com/WormBase/datomic-to-catalyst/issues/142
|
1.0
|
Anatomy function hash text reassignment proposal - The conversion of leaf tags that are themselves data with an optional Text field has the consequence of losing the leaf tag and therefore data. It is proposed to remove the ?Text field in this instance, as this appears to be optional additional information about the leaf tag qualifier that has been selected.
Current Model:
```
#Anatomy_function_info Autonomous Text
Nonautonomous Text
Sufficient Text
Insufficient Text
Necessary Text
Unnecessary Text
Remark Text
```
Proposed:
```
#Anatomy_function_info Autonomous
Nonautonomous
Sufficient
Insufficient
Necessary
Unnecessary
Remark UNIQUE Text
```
The Remark Text could be a structured single Text field where the linkage to the leaf tag could be echoed
Associated issue: https://github.com/WormBase/datomic-to-catalyst/issues/142
|
process
|
anatomy function hash text reassignment proposal the conversion of leaf tags that are themselves data with an optional text field has the consequence of losing the leaf tag and therefore data it is proposed to remove the text field in this instance as this appears to be optional additional information about the leaf tag qualifier that has been selected current model anatomy function info autonomous text nonautonomous text sufficient text insufficient text necessary text unnecessary text remark text proposed anatomy function info autonomous nonautonomous sufficient insufficient necessary unnecessary remark unique text the remark text could be a structured single text field where the linkage to the leaf tag could be echoed associated issue
| 1
|
9,913
| 12,953,235,735
|
IssuesEvent
|
2020-07-19 23:48:24
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
Maybe support Memcached::OPT_PREFIX_KEY ?
|
7.1 8.0 8.1 >_< Working & Scheduled [-_-] In Process
|
**Configuration (optional)**
- **PhpFastCache version:** "7.1.1"
- **PhpFastCache API version:** "2.0.4"
- **PHP version:** "7.2"
- **Operating system:** "Ubuntu 18.04"
**My question**
> There are some projects using the same memcached on my server. I find the redis config has 'optPrefix', but memcached has none.
|
1.0
|
Maybe support Memcached::OPT_PREFIX_KEY ? - **Configuration (optional)**
- **PhpFastCache version:** "7.1.1"
- **PhpFastCache API version:** "2.0.4"
- **PHP version:** "7.2"
- **Operating system:** "Ubuntu 18.04"
**My question**
> There are some projects using the same memcached on my server. I find the redis config has 'optPrefix', but memcached has none.
|
process
|
maybe support memcached opt prefix key configuration optional phpfastcache version phpfastcache api version php version operating system ubuntu my question there are some projects with same memcached in my server i find the redis config has optprefix but memcached none
| 1
|
14,803
| 18,103,473,639
|
IssuesEvent
|
2021-09-22 16:29:04
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
closed
|
Process for decommissioning short-lived hubs
|
:label: team-process type: task needs: review
|
# Summary
<!-- What is the context needed to understand this task -->
For hubs we deploy for a short-term, such as to support workshops and conferences, we should have a decommission process that checks-in with the Community Representative/Hub Admins that removing the hub is ok, any data from NFS servers they'd like to keep is migrated, and the hub is gracefully removed.
# Actions
<!-- A list of actions to take to complete this task -->
- [ ] Decide and document how hubs should be decommissioned, including migrating data if required
- [ ] Create an issue template/form that tracks the decommission process and communications with Hub Representatives/Admins
|
1.0
|
Process for decommissioning short-lived hubs - # Summary
<!-- What is the context needed to understand this task -->
For hubs we deploy for a short-term, such as to support workshops and conferences, we should have a decommission process that checks-in with the Community Representative/Hub Admins that removing the hub is ok, any data from NFS servers they'd like to keep is migrated, and the hub is gracefully removed.
# Actions
<!-- A list of actions to take to complete this task -->
- [ ] Decide and document how hubs should be decommissioned, including migrating data if required
- [ ] Create an issue template/form that tracks the decommission process and communications with Hub Representatives/Admins
|
process
|
process for decommissioning short lived hubs summary for hubs we deploy for a short term such as to support workshops and conferences we should have a decommission process that checks in with the community representative hub admins that removing the hub is ok any data from nfs servers they d like to keep is migrated and the hub is gracefully removed actions decide and document how hubs should be decommissioned including migrating data if required create an issue template form that tracks the decommission process and communications with hub representatives admins
| 1
|
15,019
| 18,733,374,970
|
IssuesEvent
|
2021-11-04 02:09:53
|
MicrosoftDocs/windows-uwp
|
https://api.github.com/repos/MicrosoftDocs/windows-uwp
|
closed
|
ms-settings app network URI
|
product-question uwp/prod processes-and-threading/tech
|
Is there a URI which can be used to trigger the network 'reset now' button?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 987ec16c-9456-93a4-177a-dbd563be7eb7
* Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460
* Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app)
* Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu**
|
1.0
|
ms-settings app network URI - Is there a URI which can be used to trigger the network 'reset now' button?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 987ec16c-9456-93a4-177a-dbd563be7eb7
* Version Independent ID: f41f0344-f7f6-f092-a6bf-fc4184a9b460
* Content: [Launch the Windows Settings app - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/launch-settings-app)
* Content Source: [windows-apps-src/launch-resume/launch-settings-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/launch-settings-app.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @lastnameholiu
* Microsoft Alias: **alholiu**
|
process
|
ms settings app network uri is there a uri which can be used to trigger the network reset now button document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login lastnameholiu microsoft alias alholiu
| 1
|
11,987
| 14,737,151,885
|
IssuesEvent
|
2021-01-07 01:01:07
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Towne - posting payments - problem
|
anc-external anc-ops anc-process anc-ui anp-0.5 ant-bug ant-support
|
In GitLab by @kdjstudios on Apr 18, 2018, 16:06
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-18-12861/conversation
**Server:** External
**Client/Site:** ALL
**Account:** ALL
**Issue:**
Sooooo, I’m not sure why this would be happening again but:
When I am posting payments the same thing is happening as last week:
When I click ‘save payment’ it gives me the window that says ‘confirm payment amount’ and has the blue ‘ok’ button.
I click the ‘ok’ and it moves a bit but nothing happens and so I have to click it a second time to have it acknowledge the action.
THIS WAS WORKING CORRECTLY YESTERDAY!!
|
1.0
|
Towne - posting payments - problem - In GitLab by @kdjstudios on Apr 18, 2018, 16:06
**Submitted by:** Deb Crown <dcrown@towneanswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-18-12861/conversation
**Server:** External
**Client/Site:** ALL
**Account:** ALL
**Issue:**
Sooooo, I’m not sure why this would be happening again but:
When I am posting payments the same thing is happening as last week:
When I click ‘save payment’ it gives me the window that says ‘confirm payment amount’ and has the blue ‘ok’ button.
I click the ‘ok’ and it moves a bit but nothing happens and so I have to click it a second time to have it acknowledge the action.
THIS WAS WORKING CORRECTLY YESTERDAY!!
|
process
|
towne posting payments problem in gitlab by kdjstudios on apr submitted by deb crown helpdesk server external client site all account all issue sooooo i’m not sure why this would be happening again but when i am posting payments the same thing is happening as last week when i click ‘save payment’ it gives me the window that says ‘confirm payment amount’ and has the blue ‘ok’ button i click the ‘ok’ and it moves a bit but nothing happens and so i have to click it a second time to have it acknowledge the action this was working correctly yesterday
| 1
|
7,028
| 10,188,946,038
|
IssuesEvent
|
2019-08-11 15:24:42
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
process.title don't change process name
|
confirmed-bug libuv macos process
|
<!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: v10.16.1
* **Platform**: 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/RELEASE_X86_64 x86_64
* **Subsystem**: `process`
<!-- Please provide more details below this comment. -->
Hello
In `v10.16.1`
```js
process.title = "test";
```
stops changing process name
In `v10.15.3`
All works as expected
|
1.0
|
process.title don't change process name - <!--
Thank you for reporting a possible bug in Node.js.
Please fill in as much of the template below as you can.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify the affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you can.
-->
* **Version**: v10.16.1
* **Platform**: 18.7.0 Darwin Kernel Version 18.7.0: Thu Jun 20 18:42:21 PDT 2019; root:xnu-4903.270.47~4/RELEASE_X86_64 x86_64
* **Subsystem**: `process`
<!-- Please provide more details below this comment. -->
Hello
In `v10.16.1`
```js
process.title = "test";
```
stops changing process name
In `v10.15.3`
All works as expected
|
process
|
process title don t change process name thank you for reporting a possible bug in node js please fill in as much of the template below as you can version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version platform darwin kernel version thu jun pdt root xnu release subsystem process hello in js process title test stops changing process name in all works as expected
| 1
|
199,568
| 15,049,230,280
|
IssuesEvent
|
2021-02-03 11:12:07
|
lutraconsulting/input-manual-tests
|
https://api.github.com/repos/lutraconsulting/input-manual-tests
|
closed
|
Test Execution InputApp 0.7.8 (android)
|
test execution
|
## Test plan for Input manual testing
| Test environment | Value |
|---|---|
| Input Version: | dc69f04fdb2826c549b3304724b16aba9092e529 arm64-v8a |
| Mergin Version: | |
| Mergin URL: <> | |
| QGIS Version: | |
| Mergin plugin Version: | |
| Mobile OS: | Android arm64-v8a |
| Date of Execution: | 15/1/21 |
---
### Test Cases
- [x] ( #2 ) TC 01: Mergin & Projects Manipulation
C10 - the Home page is not empty: `Capturing field data tour` and `Start here!` are there - is it OK?
F: not available
- [x] ( #3 ) TC 02: Sync & Project Status
Consider changing the prerequisite to 'User has downloaded HIS project...' otherwise in A8. you won't see any project in My Projects.
- [ ] ( #4 ) TC 03: Map Canvas
- [ ] ( #5 ) TC 04: Recording
- [ ] ( #6 ) TC 05: Forms
- [ ] ( #7 ) TC 06: Data Providers
- [ ] ( #8 ) TC 07: Translations
- [ ] ( #18 ) TC 08: System Specifics
- [ ] ( #19 ) TC 09: Welcome Screen & Project
- [ ] ( #22 ) TC 10: Permissions
---
| Test Execution Outcome | |
|---|---|
| Issues Created During Testing: | LINK TO ISSUE(S) |
**Success** / **Bugs Created** (erase one)
|
1.0
|
Test Execution InputApp 0.7.8 (android) - ## Test plan for Input manual testing
| Test environment | Value |
|---|---|
| Input Version: | dc69f04fdb2826c549b3304724b16aba9092e529 arm64-v8a |
| Mergin Version: | |
| Mergin URL: <> | |
| QGIS Version: | |
| Mergin plugin Version: | |
| Mobile OS: | Android arm64-v8a |
| Date of Execution: | 15/1/21 |
---
### Test Cases
- [x] ( #2 ) TC 01: Mergin & Projects Manipulation
C10 - the Home page is not empty: `Capturing field data tour` and `Start here!` are there - is it OK?
F: not available
- [x] ( #3 ) TC 02: Sync & Project Status
Consider changing the prerequisite to 'User has downloaded HIS project...' otherwise in A8. you won't see any project in My Projects.
- [ ] ( #4 ) TC 03: Map Canvas
- [ ] ( #5 ) TC 04: Recording
- [ ] ( #6 ) TC 05: Forms
- [ ] ( #7 ) TC 06: Data Providers
- [ ] ( #8 ) TC 07: Translations
- [ ] ( #18 ) TC 08: System Specifics
- [ ] ( #19 ) TC 09: Welcome Screen & Project
- [ ] ( #22 ) TC 10: Permissions
---
| Test Execution Outcome | |
|---|---|
| Issues Created During Testing: | LINK TO ISSUE(S) |
**Success** / **Bugs Created** (erase one)
|
non_process
|
test execution inputapp android test plan for input manual testing test environment value input version mergin version mergin url qgis version mergin plugin version mobile os android date of execution test cases tc mergin projects manipulation the home page is not empty capturing field data tour and start here are there is it ok f not available tc sync project status consider changing the prerequisite to user has downloaded his project otherwise in you won t see any project in my projects tc map canvas tc recording tc forms tc data providers tc translations tc system specifics tc welcome screen project tc permissions test execution outcome issues created during testing link to issue s success bugs created erase one
| 0
|
9,500
| 12,488,909,272
|
IssuesEvent
|
2020-05-31 16:21:29
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Customizing the PDF extension doesn't work
|
bug plugin/pdf preprocess priority/high
|
## Expected Behavior
In a plug-in that extends the PDF capabilities, settings provided through customization.dir should take effect.
## Actual Behavior
The standard "org.dita.pdf2" plugin ignores the "customization.dir" setting.
## Steps to Reproduce
1. Copy folder "com.example.print-pdf" from dita-ot-3.5\docsrc\samples\plugins\ to DITA OT\plugins
2. Change string
` <xsl:variable name="page-height">297mm</xsl:variable>` in file plugins\com.example.print-pdf\cfg\fo\attrs\custom.xsl to `<xsl:variable name="page-height">100mm</xsl:variable>`.
3. Run dita-ot-dir/bin/dita --install
4. Run bin\dita --input=my.ditamap --format=print-pdf
The document is built, but the page size remains standard.
My own plugin that extends PDF in a similar way (using "customization.dir") worked in version 3.3.4, but doesn't work in 3.5.
* DITA-OT version: 3.5
* Operating system and version: Windows
* I run DITA OT: bin\dita --input=my.ditamap --format=print-pdf
* Transformation type: custom ("com.example.print-pdf")
|
1.0
|
Customizing the PDF extension doesn't work - ## Expected Behavior
In a plug-in that extends the PDF capabilities, settings provided through customization.dir should take effect.
## Actual Behavior
The standard "org.dita.pdf2" plugin ignores the "customization.dir" setting.
## Steps to Reproduce
1. Copy folder "com.example.print-pdf" from dita-ot-3.5\docsrc\samples\plugins\ to DITA OT\plugins
2. Change string
` <xsl:variable name="page-height">297mm</xsl:variable>` in file plugins\com.example.print-pdf\cfg\fo\attrs\custom.xsl to `<xsl:variable name="page-height">100mm</xsl:variable>`.
3. Run dita-ot-dir/bin/dita --install
4. Run bin\dita --input=my.ditamap --format=print-pdf
The document is built, but the page size remains standard.
My own plugin that extends PDF in a similar way (using "customization.dir") worked in version 3.3.4, but doesn't work in 3.5.
* DITA-OT version: 3.5
* Operating system and version: Windows
* I run DITA OT: bin\dita --input=my.ditamap --format=print-pdf
* Transformation type: custom ("com.example.print-pdf")
|
process
|
customizing the pdf extension doesn t work expected behavior in the plug in that extends the capabilities of pdf have to work set through customization dir actual behavior the standard org dita plugin ignores the customization dir setting steps to reproduce copy folder com example print pdf from dita ot docsrc samples plugins to dita ot plugins change string in file plugins com example print pdf cfg fo attrs custom xsl to run dita ot dir bin dita install run bin dita input my ditamap format print pdf document is built but the page size remains standard my own plugin that extends pdf in a similar way using customization dir worked in version but doesn t work in dita ot version operating system and version windows i run dita ot bin dita input my ditamap format print pdf transformation type custom com example print pdf
| 1
|
21,272
| 28,442,176,851
|
IssuesEvent
|
2023-04-16 02:40:42
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
OMBB BUG
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
Incorrect orientation of the polygons when running the OMBB algorithm. This is related to QGIS version 3.22.
The algorithm works perfectly on previous versions.
Here is an example

### Steps to reproduce the issue
TOOLBOX => OMBB
polygon reoriented by rotation
### Versions
QGIS 3.22.2
### Supported QGIS version
- [x] I'm running a supported QGIS version according to the roadmap.
### New profile
- [x] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
OMBB BUG - ### What is the bug or the crash?
Incorrect orientation of the polygons when running the OMBB algorithm. This is related to QGIS version 3.22.
The algorithm works perfectly on previous versions.
Here is an example

### Steps to reproduce the issue
TOOLBOX => OMBB
polygon reoriented by rotation
### Versions
QGIS 3.22.2
### Supported QGIS version
- [x] I'm running a supported QGIS version according to the roadmap.
### New profile
- [x] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
ombb bug what is the bug or the crash incorrect orientation of the polygons when running the ombb algorithm this is related to the qgis version the algorithm works perfectly on previous versions here is an example steps to reproduce the issue toolbox ombb polygon reoriented by rotation versions qgis supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
5,819
| 8,653,214,335
|
IssuesEvent
|
2018-11-27 10:14:50
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
reopened
|
in documents, adding a file doesn't work when on "files" tab
|
Process bug
|
in documents, adding a file doesn't work when on "files" tab

|
1.0
|
in documents, adding a file doesn't work when on "files" tab - in documents, adding a file doesn't work when on "files" tab

|
process
|
in documents adding a file doesnt work when on files tab in documents adding a file doesnt work when on files tab
| 1
|
15,124
| 18,858,190,007
|
IssuesEvent
|
2021-11-12 09:29:09
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][needs-docs][processing] add gdal_viewshed algorithm
|
Automatic new feature Processing Alg 3.12
|
Original commit: https://github.com/qgis/QGIS/commit/02fbe42a306a289aa48c6fe39a9c5603d00ac347 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
1.0
|
[FEATURE][needs-docs][processing] add gdal_viewshed algorithm - Original commit: https://github.com/qgis/QGIS/commit/02fbe42a306a289aa48c6fe39a9c5603d00ac347 by nyalldawson
Unfortunately this naughty coder did not write a description... :-(
|
process
|
add gdal viewshed algorithm original commit by nyalldawson unfortunately this naughty coder did not write a description
| 1
|
811,601
| 30,293,857,821
|
IssuesEvent
|
2023-07-09 16:02:11
|
matrixorigin/matrixone
|
https://api.github.com/repos/matrixorigin/matrixone
|
opened
|
[Subtask]: avoid conflict between compact block transactions and newly committed deletes happening after the compaction start
|
priority/p0 kind/subtask 1.0-perf-tp
|
### Parent Issue
#10208
### Detail of Subtask
avoid conflict between compact block transactions and newly committed deletes happening after the compaction start
### Describe implementation you've considered
Compaction transactions usually take a long time, and when the transaction is ready to commit, there may be new deletes committed in between, and these new deletes are not involved in the persistence of the Compaction transaction. Right now, we are rolling back the compaction transaction and try again later.
This kind of conflict lengthens the cycle of the entire compaction.
### Additional information
_No response_
|
1.0
|
[Subtask]: avoid conflict between compact block transactions and newly committed deletes happening after the compaction start - ### Parent Issue
#10208
### Detail of Subtask
avoid conflict between compact block transactions and newly committed deletes happening after the compaction start
### Describe implementation you've considered
Compaction transactions usually take a long time, and when the transaction is ready to commit, there may be new deletes committed in between, and these new deletes are not involved in the persistence of the Compaction transaction. Right now, we are rolling back the compaction transaction and try again later.
This kind of conflict lengthens the cycle of the entire compaction.
### Additional information
_No response_
|
non_process
|
avoid conflict between compact block transactions and newly committed deletes happening after the compaction start parent issue detail of subtask avoid conflict between compact block transactions and newly committed deletes happening after the compaction start describe implementation you ve considered compaction transactions usually take a long time and when the transaction is ready to commit there may be new deletes committed in between and these new deletes are not involved in the persistence of the compaction transaction right now we are rolling back the compaction transaction and try again later this kind of conflict lengthens the cycle of the entire compaction additional information no response
| 0
|
7,861
| 11,036,096,712
|
IssuesEvent
|
2019-12-07 18:16:34
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
`childProcess.killed` should be `true` after `process.kill(childProcess.pid)` is called
|
child_process feature request
|
**Is your feature request related to a problem? Please describe.**
`childProcess.killed` is inconsistent depending on how the child process was killed:
```js
const { spawn } = require('child_process')
const childProcess = spawn('sleep', [5e6])
childProcess.kill()
console.log(childProcess.killed) // true
```
```js
const { spawn } = require('child_process')
const childProcess = spawn('sleep', [5e6])
process.kill(childProcess.pid)
console.log(childProcess.killed) // false
```
**Describe the solution you'd like**
`killed` should be `true` when the process is killed through an external process (`process.kill()` or `kill` in the terminal).
**Versions**
Node `12.1.0`, Ubuntu `19.04`.
|
1.0
|
`childProcess.killed` should be `true` after `process.kill(childProcess.pid)` is called - **Is your feature request related to a problem? Please describe.**
`childProcess.killed` is inconsistent depending on how the child process was killed:
```js
const { spawn } = require('child_process')
const childProcess = spawn('sleep', [5e6])
childProcess.kill()
console.log(childProcess.killed) // true
```
```js
const { spawn } = require('child_process')
const childProcess = spawn('sleep', [5e6])
process.kill(childProcess.pid)
console.log(childProcess.killed) // false
```
**Describe the solution you'd like**
`killed` should be `true` when the process is killed through an external process (`process.kill()` or `kill` in the terminal).
**Versions**
Node `12.1.0`, Ubuntu `19.04`.
|
process
|
childprocess killed should be true after process kill childprocess pid is called is your feature request related to a problem please describe childprocess killed is inconsistent depending on how the child process was killed js const spawn require child process const childprocess spawn sleep childprocess kill console log childprocess killed true js const spawn require child process const childprocess spawn sleep process kill childprocess pid console log childprocess killed false describe the solution you d like killed should be true when the process is killed through an external process process kill or kill in the terminal versions node ubuntu
| 1
|
6,119
| 8,996,228,715
|
IssuesEvent
|
2019-02-02 00:11:15
|
bow-simulation/virtualbow
|
https://api.github.com/repos/bow-simulation/virtualbow
|
closed
|
Evaluate Qbs as an alternative build system to cmake
|
area: software process prio: normal type: idea
|
In GitLab by @spfeifer on Nov 25, 2017, 20:27
[http://doc.qt.io/qbs/](http://doc.qt.io/qbs/)
|
1.0
|
Evaluate Qbs as an alternative build system to cmake - In GitLab by @spfeifer on Nov 25, 2017, 20:27
[http://doc.qt.io/qbs/](http://doc.qt.io/qbs/)
|
process
|
evaluate qbs as an alternative build system to cmake in gitlab by spfeifer on nov
| 1
|
2,072
| 4,889,337,418
|
IssuesEvent
|
2016-11-18 09:52:36
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Wrong processing link that was created in non-top window and added to top window.
|
AREA: client COMPLEXITY: easy SYSTEM: resource processing TYPE: bug
|
Files to reproduce:
### Index.js
``` javascript
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
var content = '';
if (req.url === '/')
content = fs.readFileSync('index.html');
else if (req.url === '/iframe.html')
content = fs.readFileSync('iframe.html');
res.end(content);
}).listen(3000);
```
### Index.html
``` javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Main content</h1>
<iframe src="/iframe.html"></iframe>
</body>
</html>
```
### Iframe.html
``` javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Iframe content</h1>
<script>
var link = document.createElement('a');
link.href = '/url.html';
link.textContent = 'link';
window.top.document.body.appendChild(link);
</script>
</body>
</html>
```
|
1.0
|
Wrong processing link that was created in non-top window and added to top window. - Files to reproduce:
### Index.js
``` javascript
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
var content = '';
if (req.url === '/')
content = fs.readFileSync('index.html');
else if (req.url === '/iframe.html')
content = fs.readFileSync('iframe.html');
res.end(content);
}).listen(3000);
```
### Index.html
``` javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Main content</h1>
<iframe src="/iframe.html"></iframe>
</body>
</html>
```
### Iframe.html
``` javascript
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<h1>Iframe content</h1>
<script>
var link = document.createElement('a');
link.href = '/url.html';
link.textContent = 'link';
window.top.document.body.appendChild(link);
</script>
</body>
</html>
```
|
process
|
wrong processing link that was created in non top window and added to top window files to reproduce index js javascript var http require http var fs require fs http createserver function req res var content if req url content fs readfilesync index html else if req url iframe html content fs readfilesync iframe html res end content listen index html javascript title main content iframe html javascript title iframe content var link document createelement a link href url html link textcontent link window top document body appendchild link
| 1
|
13,513
| 16,049,237,934
|
IssuesEvent
|
2021-04-22 16:59:04
|
icra/ecam
|
https://api.github.com/repos/icra/ecam
|
closed
|
Units in "Compare Assessments"
|
discussed in process
|
Hi Lluis, the units might need to get adapted:

It only shows "kg" not even "kgCO2eq". It should be possible to select whether:

For our reporting in WaCCliM, kgCO2eq/ serviced population is most relevant :)
|
1.0
|
Units in "Compare Assessments" - Hi Lluis, the units might need to get adapted:

It only shows "kg" not even "kgCO2eq". It should be possible to select whether:

For our reporting in WaCCliM, kgCO2eq/ serviced population is most relevant :)
|
process
|
units in compare assessments hi lluis the units might need to get adapted it only shows kg not even it should be possible to select whether for our reporting in wacclim serviced population is most relevant
| 1
|
3,232
| 6,289,280,096
|
IssuesEvent
|
2017-07-19 18:51:20
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[S.D.Process] TestProcessStartTime throws NRE on UAP
|
area-System.Diagnostics.Process
|
(Test case will be added soon, creating issue so that I can disable that in the PR)
```
ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessStartTime [FAIL]
System.NullReferenceException : Object reference not set to an instance of an object.
```
|
1.0
|
[S.D.Process] TestProcessStartTime throws NRE on UAP - (Test case will be added soon, creating issue so that I can disable that in the PR)
```
ERROR: System.Diagnostics.Tests.ProcessTests.TestProcessStartTime [FAIL]
System.NullReferenceException : Object reference not set to an instance of an object.
```
|
process
|
testprocessstarttime throws nre on uap test case will be added soon creating issue so that i can disable that in the pr error system diagnostics tests processtests testprocessstarttime system nullreferenceexception object reference not set to an instance of an object
| 1
|
4,349
| 7,252,832,101
|
IssuesEvent
|
2018-02-16 01:00:13
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Problems with referers (KeyPrases from google)
|
log-processing
|
Hi Im looking for keyprases from google not empty. Attached are a report in JSON format.
querystrings from google.es and google it are striped it. In the original NGINX log they show.
Thanks in advance for the help.
[goaccess-1517393407555.txt](https://github.com/allinurl/goaccess/files/1681256/goaccess-1517393407555.txt)
|
1.0
|
Problems with referers (KeyPrases from google) - Hi Im looking for keyprases from google not empty. Attached are a report in JSON format.
querystrings from google.es and google it are striped it. In the original NGINX log they show.
Thanks in advance for the help.
[goaccess-1517393407555.txt](https://github.com/allinurl/goaccess/files/1681256/goaccess-1517393407555.txt)
|
process
|
problems with referers keyprases from google hi im looking for keyprases from google not empty attached are a report in json format querystrings from google es and google it are striped it in the original nginx log they show thanks in advance for the help
| 1
|
22,040
| 30,560,058,593
|
IssuesEvent
|
2023-07-20 14:06:01
|
threefoldtech/tfgrid-sdk-ts
|
https://api.github.com/repos/threefoldtech/tfgrid-sdk-ts
|
closed
|
Dashboard raises an error on connecting to the Polkadot extension, also the loading credentials loading forever
|
process_wontfix type_bug
|
### Description
I'm having trouble accessing the explore tab in the dashboard after connecting my account and refreshing the page. The nodes section seems to be stuck in an endless loading loop.
also, I'm seeing an error message that states "Can't get any account information from Polkadot extension. Please ensure that you have registered an account on it." However, I have registered six accounts already.
### Steps to reproduce
- Go to the dashboard on the local server
- Refresh the page
### Logs/Alerts
[Screencast from 06-07-23 11:23:08.webm](https://github.com/threefoldtech/tfgrid-sdk-ts/assets/57001890/7fccbc29-98df-4549-abff-27f9a0e67644)
[Screencast from 06-07-23 11:35:13.webm](https://github.com/threefoldtech/tfgrid-sdk-ts/assets/57001890/7312cdd4-1d17-4a95-bc00-04a4979e5760)
|
1.0
|
Dashboard raises an error on connecting to the Polkadot extension, also the loading credentials loading forever - ### Description
I'm having trouble accessing the explore tab in the dashboard after connecting my account and refreshing the page. The nodes section seems to be stuck in an endless loading loop.
also, I'm seeing an error message that states "Can't get any account information from Polkadot extension. Please ensure that you have registered an account on it." However, I have registered six accounts already.
### Steps to reproduce
- Go to the dashboard on the local server
- Refresh the page
### Logs/Alerts
[Screencast from 06-07-23 11:23:08.webm](https://github.com/threefoldtech/tfgrid-sdk-ts/assets/57001890/7fccbc29-98df-4549-abff-27f9a0e67644)
[Screencast from 06-07-23 11:35:13.webm](https://github.com/threefoldtech/tfgrid-sdk-ts/assets/57001890/7312cdd4-1d17-4a95-bc00-04a4979e5760)
|
process
|
dashboard raises an error on connecting to the polkadot extension also the loading credentials loading forever description i m having trouble accessing the explore tab in the dashboard after connecting my account and refreshing the page the nodes section seems to be stuck in an endless loading loop also i m seeing an error message that states can t get any account information from polkadot extension please ensure that you have registered an account on it however i have registered six accounts already steps to reproduce go to the dashboard on the local server refresh the page logs alerts
| 1
|
4,622
| 7,468,401,438
|
IssuesEvent
|
2018-04-02 18:50:51
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test failure: System.Diagnostics.Process.Tests
|
area-System.Diagnostics.Process
|
(I'm new, so maybe I'm totally missing something, but I'm following the instructions as best I can and I keep getting test errors.)
So I (successfully) ran the build, made some changes to some libraries and started experiencing some issues. I wanted to get back to a good state, so I did
> git checkout master
> clean -all
> build-tests.cmd # Failed with lots of errors, cause I should have run build first.
> build.cmd # Success
> build-tests.cmd
Error:
> D:\Repos\dotnet\corefx\dir.traversal.targets(77,5): error : (No message specified) [D:\Repos\dotnet\corefx\src\tests.builds]
>
> Build FAILED.
>
> D:\Repos\dotnet\corefx\Tools\tests.targets(492,5): warning : System.Diagnostics.Process.Tests Total: 254, Errors: 0, Failed: 1, Skipped: 2, Time: 68.485s [D:\Repos\dotnet\corefx\src\System.Diagn
> ostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\Tools\tests.targets(492,5): warning MSB3073: The command "D:\Repos\dotnet\corefx\bin/tests/System.Diagnostics.Process.Tests/netcoreapp-Windows_NT-Debug-x64//RunTests.cmd D:\Re
> pos\dotnet\corefx\bin/testhost/netcoreapp-Windows_NT-Debug-x64/" exited with code 1. [D:\Repos\dotnet\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\Tools\tests.targets(500,5): error : One or more tests failed while running tests from 'System.Diagnostics.Process.Tests' please check D:\Repos\dotnet\corefx\bin/tests/System.D
> iagnostics.Process.Tests/netcoreapp-Windows_NT-Debug-x64/testResults.xml for details! [D:\Repos\dotnet\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\dir.traversal.targets(77,5): error : (No message specified) [D:\Repos\dotnet\corefx\src\tests.builds]
> 2 Warning(s)
> 2 Error(s)
"testResults.xml" does not exist in the specified location.
With build.cmd taking 8 minutes and build-tests.cmd taking 14 minutes, debugging this blindly just isn't going to work. (On a Quad-core, 16 GB RAM, SSD)
I'm currently on
> commit 22d053f88c619199b38da4a098e47a087cf98c56 (HEAD -> master, origin/master, origin/HEAD)
> Author: Jan Kotas <jkotas@microsoft.com>
> Date: Wed Mar 28 15:46:00 2018 -0700
>
> Delete workaround for ComImport types on Unix (#28518)
>
> https://github.com/dotnet/coreclr/issues/16804 is fixed. We do not need the workaround anymore.
Ideas? In the meantime, I'll pull the latest and then run clean (do I need to do this?), build and build-tests again.
|
1.0
|
Test failure: System.Diagnostics.Process.Tests - (I'm new, so maybe I'm totally missing something, but I'm following the instructions as best I can and I keep getting test errors.)
So I (successfully) ran the build, made some changes to some libraries and started experiencing some issues. I wanted to get back to a good state, so I did
> git checkout master
> clean -all
> build-tests.cmd # Failed with lots of errors, cause I should have run build first.
> build.cmd # Success
> build-tests.cmd
Error:
> D:\Repos\dotnet\corefx\dir.traversal.targets(77,5): error : (No message specified) [D:\Repos\dotnet\corefx\src\tests.builds]
>
> Build FAILED.
>
> D:\Repos\dotnet\corefx\Tools\tests.targets(492,5): warning : System.Diagnostics.Process.Tests Total: 254, Errors: 0, Failed: 1, Skipped: 2, Time: 68.485s [D:\Repos\dotnet\corefx\src\System.Diagn
> ostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\Tools\tests.targets(492,5): warning MSB3073: The command "D:\Repos\dotnet\corefx\bin/tests/System.Diagnostics.Process.Tests/netcoreapp-Windows_NT-Debug-x64//RunTests.cmd D:\Re
> pos\dotnet\corefx\bin/testhost/netcoreapp-Windows_NT-Debug-x64/" exited with code 1. [D:\Repos\dotnet\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\Tools\tests.targets(500,5): error : One or more tests failed while running tests from 'System.Diagnostics.Process.Tests' please check D:\Repos\dotnet\corefx\bin/tests/System.D
> iagnostics.Process.Tests/netcoreapp-Windows_NT-Debug-x64/testResults.xml for details! [D:\Repos\dotnet\corefx\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
> D:\Repos\dotnet\corefx\dir.traversal.targets(77,5): error : (No message specified) [D:\Repos\dotnet\corefx\src\tests.builds]
> 2 Warning(s)
> 2 Error(s)
"testResults.xml" does not exist in the specified location.
With build.cmd taking 8 minutes and build-tests.cmd taking 14 minutes, debugging this blindly just isn't going to work. (On a Quad-core, 16 GB RAM, SSD)
I'm currently on
> commit 22d053f88c619199b38da4a098e47a087cf98c56 (HEAD -> master, origin/master, origin/HEAD)
> Author: Jan Kotas <jkotas@microsoft.com>
> Date: Wed Mar 28 15:46:00 2018 -0700
>
> Delete workaround for ComImport types on Unix (#28518)
>
> https://github.com/dotnet/coreclr/issues/16804 is fixed. We do not need the workaround anymore.
Ideas? In the meantime, I'll pull the latest and then run clean (do I need to do this?), build and build-tests again.
|
process
|
test failure system diagnostics process tests i m new so maybe i m totally missing something but i m following the instructions as best i can and i keep getting test errors so i successfully ran the build made some changes to some libraries and started experiencing some issues i wanted to get back to a good state so i did git checkout master clean all build tests cmd failed with lots of errors cause i should have run build first build cmd success build tests cmd error d repos dotnet corefx dir traversal targets error no message specified build failed d repos dotnet corefx tools tests targets warning system diagnostics process tests total errors failed skipped time d repos dotnet corefx src system diagn ostics process tests system diagnostics process tests csproj d repos dotnet corefx tools tests targets warning the command d repos dotnet corefx bin tests system diagnostics process tests netcoreapp windows nt debug runtests cmd d re pos dotnet corefx bin testhost netcoreapp windows nt debug exited with code d repos dotnet corefx tools tests targets error one or more tests failed while running tests from system diagnostics process tests please check d repos dotnet corefx bin tests system d iagnostics process tests netcoreapp windows nt debug testresults xml for details d repos dotnet corefx dir traversal targets error no message specified warning s error s testresults xml does not exist in the specified location with build cmd taking minutes and build tests cmd taking minutes debugging this blindly just isn t going to work on a quad core gb ram ssd i m currently on commit head master origin master origin head author jan kotas date wed mar delete workaround for comimport types on unix is fixed we do not need the workaround anymore ideas in the meantime i ll pull the latest and then run clean do i need to do this build and build tests again
| 1
|
5,943
| 8,767,727,275
|
IssuesEvent
|
2018-12-17 20:40:02
|
googleapis/google-auth-library-nodejs
|
https://api.github.com/repos/googleapis/google-auth-library-nodejs
|
closed
|
Can you do a new release so we can have the fix for #524? Revoke does not work in current release but is fixed in master.
|
type: process
|
Can you do a new release so we can have the fix for #524? Revoke does not work in current release but is fixed in master.
|
1.0
|
Can you do a new release so we can have the fix for #524? Revoke does not work in current release but is fixed in master. - Can you do a new release so we can have the fix for #524? Revoke does not work in current release but is fixed in master.
|
process
|
can you do a new release so we can have the fix for revoke does not work in current release but is fixed in master can you do a new release so we can have the fix for revoke does not work in current release but is fixed in master
| 1
|
146,032
| 5,592,333,748
|
IssuesEvent
|
2017-03-30 03:54:56
|
nus-mtp/nus-oracle
|
https://api.github.com/repos/nus-mtp/nus-oracle
|
opened
|
[Input Validation] Create a component that displays list of input mistakes
|
high priority UI
|
* More user friendly to list down validation mistakes
|
1.0
|
[Input Validation] Create a component that displays list of input mistakes - * More user friendly to list down validation mistakes
|
non_process
|
create a component that displays list of input mistakes more user friendly to list down validation mistakes
| 0
|
18,853
| 24,767,063,125
|
IssuesEvent
|
2022-10-22 17:09:58
|
fertadeo/ISPC-2do-Cuat-Proyecto
|
https://api.github.com/repos/fertadeo/ISPC-2do-Cuat-Proyecto
|
closed
|
#TK 14.3 Add map and social networks to the footer of the Administration view
|
in process
|
Add social networks and a location map to the footer
|
1.0
|
#TK 14.3 Add map and social networks to the footer of the Administration view - Add social networks and a location map to the footer
|
process
|
tk agregar mapa y redes en footer de la vista de administración agragar al footer redes sociales y mapa de ubicación
| 1
|
4,481
| 7,343,530,591
|
IssuesEvent
|
2018-03-07 11:42:06
|
fablabbcn/fablabs.io
|
https://api.github.com/repos/fablabbcn/fablabs.io
|
opened
|
Address autocomplete does not work in China
|
Approval Process bug
|
When adding a `Lab`, the address autocomplete that uses Google API seems to be not working, see #390. We should either:
- Use a service that works for all countries, including China
- Use Baidu just for Chinese users
- Move the autocomplete server side
|
1.0
|
Address autocomplete does not work in China - When adding a `Lab`, the address autocomplete that uses Google API seems to be not working, see #390. We should either:
- Use a service that works for all countries, including China
- Use Baidu just for Chinese users
- Move the autocomplete server side
|
process
|
address autocomplete does not work in china when adding a lab the address autocomplete that uses google api seems to be not working see we should either use a service that works for all countries including china use baidu just for chinese users move the autocomplete server side
| 1
|
31,289
| 11,906,022,862
|
IssuesEvent
|
2020-03-30 19:37:14
|
stefanfreitag/cicd_angular_s3
|
https://api.github.com/repos/stefanfreitag/cicd_angular_s3
|
opened
|
CVE-2015-9251 (Medium) detected in jquery-1.7.2.min.js
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/cicd_angular_s3/node_modules/jmespath/index.html</p>
<p>Path to vulnerable library: /cicd_angular_s3/node_modules/jmespath/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/stefanfreitag/cicd_angular_s3/commit/9dcacc7edbcf699889e69374564a8345626d43e4">9dcacc7edbcf699889e69374564a8345626d43e4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-9251 (Medium) detected in jquery-1.7.2.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.2.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/cicd_angular_s3/node_modules/jmespath/index.html</p>
<p>Path to vulnerable library: /cicd_angular_s3/node_modules/jmespath/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/stefanfreitag/cicd_angular_s3/commit/9dcacc7edbcf699889e69374564a8345626d43e4">9dcacc7edbcf699889e69374564a8345626d43e4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm cicd angular node modules jmespath index html path to vulnerable library cicd angular node modules jmespath index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
3,593
| 4,542,049,118
|
IssuesEvent
|
2016-09-09 19:53:14
|
Storj/bridge
|
https://api.github.com/repos/Storj/bridge
|
closed
|
vulnerability: allow weird character in buckets
|
security
|
### Package Versions
Replace the values below using the output from `npm list storj-bridge`.
```
3.1.0
```
```
v4.5.0
```
### Expected Behavior
```
bucket id should error out when invalid
```
### Actual Behavior
```
the application allows you to escape the bucket id, potentially allowing a slew of other commands to work;
again, it would have to be coded just right and is tricky to do, but the opening is there
```
### Steps to Reproduce
Please include the steps to reproduce the issue, numbered below. Include as
much detail as possible.
1. storj list-files \
2. storj list-files /
Log
PS C:\Users\storj_cli> storj list-files \ -d
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [debug] Request: {"baseUrl":"https://api.storj.io","uri":"
/buckets/\\/files","method":"GET","qs":{"__nonce":"db27dabf-81b2-40c6-a38e-6fbd8336e580"},"json":true,"headers":{"x-pubk
ey":"036ee38f5f8cb79efd07a993a32a6e1a2693bc728b41c486664eb7464d7cbcaa54","x-signature":"304502203bae42751d54c1a7d8e09be0
246244058984814e1f2c059bee606c93fce6e99002210094ce6a0efcbb9a6ae5de26be671b00ae0041268d9e5e2418131980680022bbc4"}}
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [debug] Body: "Cannot GET /buckets///files?__nonce=db27dab
f-81b2-40c6-a38e-6fbd8336e580\n"
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [error] Cannot GET /buckets///files?__nonce=db27dabf-81b2-
40c6-a38e-6fbd8336e580
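A server-side guard for this class of bug is to validate bucket ids before they are ever interpolated into a route. A minimal sketch (illustrative Python, not the bridge's actual Node code; the 24-character-hex assumption reflects Mongo ObjectIds and should be checked against the real schema):

```python
import re

# Hypothetical validator: assumes bucket ids are 24-character lowercase
# hex strings (Mongo ObjectIds); anything else -- including "\" or "/"
# -- is rejected before the value reaches URL construction.
BUCKET_ID_RE = re.compile(r"^[0-9a-f]{24}$")

def is_valid_bucket_id(bucket_id: str) -> bool:
    """Return True only for well-formed bucket ids."""
    return bool(BUCKET_ID_RE.match(bucket_id))
```

With a check like this in front of the route handler, `storj list-files \` would fail fast with a validation error instead of producing a malformed `/buckets///files` request.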
|
True
|
vulnerability: allow weird character in buckets - ### Package Versions
Replace the values below using the output from `npm list storj-bridge`.
```
3.1.0
```
```
v4.5.0
```
### Expected Behavior
```
bucket id should error out when invalid
```
### Actual Behavior
```
the application allows you to escape the bucket id, potentially allowing a slew of other commands to work;
again, it would have to be coded just right and is tricky to do, but the opening is there
```
### Steps to Reproduce
Please include the steps to reproduce the issue, numbered below. Include as
much detail as possible.
1. storj list-files \
2. storj list-files /
Log
PS C:\Users\storj_cli> storj list-files \ -d
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [debug] Request: {"baseUrl":"https://api.storj.io","uri":"
/buckets/\\/files","method":"GET","qs":{"__nonce":"db27dabf-81b2-40c6-a38e-6fbd8336e580"},"json":true,"headers":{"x-pubk
ey":"036ee38f5f8cb79efd07a993a32a6e1a2693bc728b41c486664eb7464d7cbcaa54","x-signature":"304502203bae42751d54c1a7d8e09be0
246244058984814e1f2c059bee606c93fce6e99002210094ce6a0efcbb9a6ae5de26be671b00ae0041268d9e5e2418131980680022bbc4"}}
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [debug] Body: "Cannot GET /buckets///files?__nonce=db27dab
f-81b2-40c6-a38e-6fbd8336e580\n"
[Tue Aug 30 2016 16:26:42 GMT-0400 (Eastern Daylight Time)] [error] Cannot GET /buckets///files?__nonce=db27dabf-81b2-
40c6-a38e-6fbd8336e580
|
non_process
|
vulnerability allow weird character in buckets package versions replace the values below using the output from npm list storj bridge expected behavior bucket id should error out when invalid actual behavior application allows you to escape bucket id potentially allowing a slew of other commands to work again it would have to be coded right and tricky to do but the opening is there steps to reproduce please include the steps the reproduce the issue numbered below include as much detail as possible storj list files storj list files log ps c users storj cli storj list files d request baseurl buckets files method get qs nonce json true headers x pubk ey x signature body cannot get buckets files nonce f n cannot get buckets files nonce
| 0
|
21,036
| 27,979,257,085
|
IssuesEvent
|
2023-03-26 00:18:59
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Color reconstruction fails to be applied during export
|
scope: image processing bug: pending no-issue-activity
|
**Describe the bug**
I had a photo which included a street lamp. I enabled the `color reconstruction` module and disabled `highlight reconstruction`. It fixed the highlight quite well. However, on the exported image the correction was gone. It looked like the reconstruction module was turned off. Moreover, re-opening the image in darktable I also didn't see the module's effect. I had to turn it off and on again for it to display correctly. It still didn't work with export, though.
It is present with both OpenCL and CPU processing. Curiously, the two pipelines produce slightly different results, but that is probably caused by some other module like `color correction`, as I can see a slight change of tint between the exported images.
**To Reproduce**
Regretfully, I can't reproduce it reliably. On another photo color reconstruction module was working as expected. Might again be some interaction of modules... I know it will be next to impossible to pinpoint the bug though.
**Screenshots**

**Platform:**
- Darktable Version: current HEAD: d50e30bf4cfb89383af419b06665a196d86a7df7
- OS: Gentoo Linux
- OpenCL activated
- AMD Fury, amdgpu, ROCm 4
|
1.0
|
Color reconstruction fails to be applied during export - **Describe the bug**
I had a photo which included a street lamp. I enabled the `color reconstruction` module and disabled `highlight reconstruction`. It fixed the highlight quite well. However, on the exported image the correction was gone. It looked like the reconstruction module was turned off. Moreover, re-opening the image in darktable I also didn't see the module's effect. I had to turn it off and on again for it to display correctly. It still didn't work with export, though.
It is present with both OpenCL and CPU processing. Curiously, the two pipelines produce slightly different results, but that is probably caused by some other module like `color correction`, as I can see a slight change of tint between the exported images.
**To Reproduce**
Regretfully, I can't reproduce it reliably. On another photo color reconstruction module was working as expected. Might again be some interaction of modules... I know it will be next to impossible to pinpoint the bug though.
**Screenshots**

**Platform:**
- Darktable Version: current HEAD: d50e30bf4cfb89383af419b06665a196d86a7df7
- OS: Gentoo Linux
- OpenCL activated
- AMD Fury, amdgpu, ROCm 4
|
process
|
color reconstruction fails to be applied during export describe the bug i had a photo which included a street lamp i enabled color reconstruction module and disabled highlight reconstruction it fixed the highlight quite well however on exported image the correction was gone it looked like the reconstruction module was turned off moreover re opening the image in darktable i also didn t see modules effect i had turn it off and on again for it to display correctly it still didn t work with export though it is present both with opencl and cpu processing curiously both those pipelines produce slightly different results but that is probably caused by some other modules like color correction as i can see a slight change of tint between exported images to reproduce regretfully i can t reproduce it reliably on another photo color reconstruction module was working as expected might again be some interaction of modules i know it will be next to impossible to pinpoint the bug though screenshots platform darktable version current head os gentoo linux opencl activated amd fury amdgpu rocm
| 1
|
21,115
| 28,078,847,949
|
IssuesEvent
|
2023-03-30 03:38:20
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Renaming metrics does not work on the metrics generated by spanmetrics connector
|
bug processor/metricstransform
|
### Component(s)
processor/metricstransform
### What happened?
## Description
I'm using the `metricstransform` processor and the `spanmetrics` connector in my otel collector as below. There are two transformations contained in the `metricstransform` processor. The first transformation, which contains operations to add the label `app` and update a label (rename the label `http.url` to `url`), works as expected. But the second transformation, which renames the metric `duration_bucket` (this metric is generated by the `spanmetrics` connector from received spans) to `duration_seconds`, does not work as expected. From the exported metrics, I can only find the original metric `duration_bucket` and cannot find the `duration_seconds` metric.
```
spec:
config: |
receivers:
otlp:
protocols:
grpc:
http:
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
processors:
batch:
timeout: 10s
send_batch_size: 10000
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets:
[10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 1800ms, 2s, 5s, 7s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
metricstransform:
transforms:
- include: duration_*
match_type: regexp
action: update
operations:
- action: add_label
new_label: app
new_value: assist
- action: update_label
label: http.url
new_label: url
- include: duration_bucket
match_type: strict
action: update
new_name: duration_seconds
exporters:
jaeger:
endpoint: my-jaeger-collector-headless.jaeger-demo.svc:14250
tls:
ca_file: "/etc/pki/ca-trust/source/service-ca/service-ca.crt"
prometheus:
endpoint: "0.0.0.0:8889"
send_timestamps: true
metric_expiration: 1440m
connectors:
spanmetrics:
histogram:
explicit:
buckets: [10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 2s, 5s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
- name: http.route
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [spanmetrics, jaeger]
metrics:
receivers: [spanmetrics]
processors: [batch, metricstransform]
exporters: [prometheus]
image: >-
ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.74.0
mode: statefulset
volumeMounts:
- mountPath: /etc/pki/ca-trust/source/service-ca
name: cabundle-volume
volumes:
- configMap:
name: my-otelcol-cabundle
name: cabundle-volume
```
## Steps to Reproduce
1. Create an otel collector with `spec` as described above in the description, and create an instrumentation for auto-instrumentation.
2. Check exported metrics data via `localhost:8889/metrics` within otel collector pod.
## Expected Result
I can find the `duration_seconds` metric after renaming with the `metricstransform` processor.
## Actual Result
I still find the original `duration_bucket` metrics before renaming, and can not find the `duration_seconds` metrics.
### Collector version
v0.74.0
### Environment information
## Environment
OpenShift Cluster: 4.10.50
Kubernetes version: v1.23.12+8a6bfe4
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
http:
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
processors:
batch:
timeout: 10s
send_batch_size: 10000
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets:
[10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 1800ms, 2s, 5s, 7s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
metricstransform:
transforms:
- include: duration_*
match_type: regexp
action: update
operations:
- action: add_label
new_label: app
new_value: assist
- action: update_label
label: http.url
new_label: url
- include: duration_bucket
match_type: strict
action: update
new_name: duration_seconds
exporters:
jaeger:
endpoint: my-jaeger-collector-headless.jaeger-demo.svc:14250
tls:
ca_file: "/etc/pki/ca-trust/source/service-ca/service-ca.crt"
prometheus:
endpoint: "0.0.0.0:8889"
send_timestamps: true
metric_expiration: 1440m
connectors:
spanmetrics:
histogram:
explicit:
buckets: [10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 2s, 5s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
- name: http.route
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [spanmetrics, jaeger]
metrics:
receivers: [spanmetrics]
processors: [batch, metricstransform]
exporters: [prometheus]
```
### Log output
_No response_
### Additional context
_No response_
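A plausible cause (an assumption, worth verifying against the connector source): the spanmetrics connector's internal histogram metric is named `duration`, and the `_bucket`/`_sum`/`_count` suffixes are only added later by the Prometheus exporter at exposition time. That would also explain why the `duration_*` regexp transform fires (the regexp `duration_*` means "duration" followed by zero or more underscores, so it matches the internal name `duration`) while the strict match on `duration_bucket` never sees a matching metric inside the pipeline. A sketch of the adjusted transform under that assumption:

```yaml
metricstransform:
  transforms:
    # strict-match the internal metric name; the Prometheus exporter
    # appends the _bucket/_sum/_count suffixes afterwards
    - include: duration
      match_type: strict
      action: update
      new_name: duration_seconds
```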
|
1.0
|
Renaming metrics does not work on the metrics generated by spanmetrics connector - ### Component(s)
processor/metricstransform
### What happened?
## Description
I'm using the `metricstransform` processor and the `spanmetrics` connector in my otel collector as below. There are two transformations contained in the `metricstransform` processor. The first transformation, which contains operations to add the label `app` and update a label (rename the label `http.url` to `url`), works as expected. But the second transformation, which renames the metric `duration_bucket` (this metric is generated by the `spanmetrics` connector from received spans) to `duration_seconds`, does not work as expected. From the exported metrics, I can only find the original metric `duration_bucket` and cannot find the `duration_seconds` metric.
```
spec:
config: |
receivers:
otlp:
protocols:
grpc:
http:
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
processors:
batch:
timeout: 10s
send_batch_size: 10000
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets:
[10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 1800ms, 2s, 5s, 7s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
metricstransform:
transforms:
- include: duration_*
match_type: regexp
action: update
operations:
- action: add_label
new_label: app
new_value: assist
- action: update_label
label: http.url
new_label: url
- include: duration_bucket
match_type: strict
action: update
new_name: duration_seconds
exporters:
jaeger:
endpoint: my-jaeger-collector-headless.jaeger-demo.svc:14250
tls:
ca_file: "/etc/pki/ca-trust/source/service-ca/service-ca.crt"
prometheus:
endpoint: "0.0.0.0:8889"
send_timestamps: true
metric_expiration: 1440m
connectors:
spanmetrics:
histogram:
explicit:
buckets: [10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 2s, 5s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
- name: http.route
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [spanmetrics, jaeger]
metrics:
receivers: [spanmetrics]
processors: [batch, metricstransform]
exporters: [prometheus]
image: >-
ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.74.0
mode: statefulset
volumeMounts:
- mountPath: /etc/pki/ca-trust/source/service-ca
name: cabundle-volume
volumes:
- configMap:
name: my-otelcol-cabundle
name: cabundle-volume
```
## Steps to Reproduce
1. Create an otel collector with `spec` as described above in the description, and create an instrumentation for auto-instrumentation.
2. Check exported metrics data via `localhost:8889/metrics` within otel collector pod.
## Expected Result
I can find the `duration_seconds` metric after renaming with the `metricstransform` processor.
## Actual Result
I still find the original `duration_bucket` metrics before renaming, and can not find the `duration_seconds` metrics.
### Collector version
v0.74.0
### Environment information
## Environment
OpenShift Cluster: 4.10.50
Kubernetes version: v1.23.12+8a6bfe4
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
http:
otlp/spanmetrics:
protocols:
grpc:
endpoint: 0.0.0.0:12346
processors:
batch:
timeout: 10s
send_batch_size: 10000
spanmetrics:
metrics_exporter: prometheus
latency_histogram_buckets:
[10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 1800ms, 2s, 5s, 7s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
metricstransform:
transforms:
- include: duration_*
match_type: regexp
action: update
operations:
- action: add_label
new_label: app
new_value: assist
- action: update_label
label: http.url
new_label: url
- include: duration_bucket
match_type: strict
action: update
new_name: duration_seconds
exporters:
jaeger:
endpoint: my-jaeger-collector-headless.jaeger-demo.svc:14250
tls:
ca_file: "/etc/pki/ca-trust/source/service-ca/service-ca.crt"
prometheus:
endpoint: "0.0.0.0:8889"
send_timestamps: true
metric_expiration: 1440m
connectors:
spanmetrics:
histogram:
explicit:
buckets: [10ms, 100ms, 200ms, 400ms, 800ms, 1s, 1200ms, 1400ms, 1600ms, 2s, 5s]
dimensions:
- name: http.method
- name: http.status_code
- name: http.target
- name: http.url
- name: http.route
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [spanmetrics, jaeger]
metrics:
receivers: [spanmetrics]
processors: [batch, metricstransform]
exporters: [prometheus]
```
### Log output
_No response_
### Additional context
_No response_
|
process
|
renaming metrics does not work on the metrics generated by spanmetrics connector component s processor metricstransform what happened description i m using metricstranform processor and spanmetrics connector in my otel collector as below there are two transformations contained in the metricstranform processor the first transformation which contains operations to add label app and update label rename the label http url to url works as expected but the second transformation which rename the metrics duration bucket this metrics is generated by the spanmetrics connector from received spans to duration seconds does not work as expected from the exported metrics i can only find the original metrics duration bucket and can not find the duration seconds metrics spec config receivers otlp protocols grpc http otlp spanmetrics protocols grpc endpoint processors batch timeout send batch size spanmetrics metrics exporter prometheus latency histogram buckets dimensions name http method name http status code name http target name http url metricstransform transforms include duration match type regexp action update operations action add label new label app new value assist action update label label http url new label url include duration bucket match type strict action update new name duration seconds exporters jaeger endpoint my jaeger collector headless jaeger demo svc tls ca file etc pki ca trust source service ca service ca crt prometheus endpoint send timestamps true metric expiration connectors spanmetrics histogram explicit buckets dimensions name http method name http status code name http target name http url name http route service pipelines traces receivers processors exporters metrics receivers processors exporters image ghcr io open telemetry opentelemetry collector releases opentelemetry collector contrib mode statefulset volumemounts mountpath etc pki ca trust source service ca name cabundle volume volumes configmap name my otelcol cabundle name cabundle volume 
steps to reproduce create an otel collector with spec as described above in the description and create an instrumentation for auto instrumentation check exported metrics data via localhost metrics within otel collector pod expected result i can find the duration seconds metrics after renaming with the metricstranform processor actual result i still find the original duration bucket metrics before renaming and can not find the duration seconds metrics collector version environment information environment openshift cluster kubernetes version opentelemetry collector configuration yaml receivers otlp protocols grpc http otlp spanmetrics protocols grpc endpoint processors batch timeout send batch size spanmetrics metrics exporter prometheus latency histogram buckets dimensions name http method name http status code name http target name http url metricstransform transforms include duration match type regexp action update operations action add label new label app new value assist action update label label http url new label url include duration bucket match type strict action update new name duration seconds exporters jaeger endpoint my jaeger collector headless jaeger demo svc tls ca file etc pki ca trust source service ca service ca crt prometheus endpoint send timestamps true metric expiration connectors spanmetrics histogram explicit buckets dimensions name http method name http status code name http target name http url name http route service pipelines traces receivers processors exporters metrics receivers processors exporters log output no response additional context no response
| 1
|
2,936
| 5,920,780,801
|
IssuesEvent
|
2017-05-22 21:07:57
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
reopened
|
Process tests failing in Portable run due to needing elevation
|
area-System.Diagnostics.Process os-linux test-run-portable
|
https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fportable~2Fcli~2F/build/20170521.02/workItem/System.Diagnostics.Process.Tests
These tests need elevation. Either the runs need to be elevated, the tests need to elevate themselves, or they need filtering out by trait and covered some other way.
|
1.0
|
Process tests failing in Portable run due to needing elevation -
https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fportable~2Fcli~2F/build/20170521.02/workItem/System.Diagnostics.Process.Tests
These tests need elevation. Either the runs need to be elevated, the tests need to elevate themselves, or they need filtering out by trait and covered some other way.
|
process
|
process tests failing in portable run due to needing elevation these tests need elevation either the runs need to be elevated the tests need to elevate themselves or they need filtering out by trait and covered some other way
| 1
|
22,752
| 32,072,426,365
|
IssuesEvent
|
2023-09-25 08:53:07
|
aws/sagemaker-python-sdk
|
https://api.github.com/repos/aws/sagemaker-python-sdk
|
closed
|
TypeError: can only concatenate str (not "list") to str
|
component: processing
|
This error appears when you run `ScriptProcessor.run()`. The script is at the same level as my notebook in SageMaker Studio. It doesn't change if I add input and output. I'm using a custom image on ECR. I also tried to save my script on S3 and pass the S3 path.
```
from sagemaker.processing import ScriptProcessor
processor = ScriptProcessor(role=role,
image_uri=processing_image_uri,
instance_count=processing_instance_count,
instance_type=processing_instance_type,
base_job_name=base_job_name,
sagemaker_session = sagemaker_session,
command="python3")
code = 'main.py'
processor.run(
code = code
)
```
**Screenshots or logs**

**System information**
A description of your system. Please provide:
- **SageMaker Python SDK version**: 2.88.1 and 2.93.1
- **Python version**:3.x
**Additional context**
Add any other context about the problem here.
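The traceback points at a plain string being concatenated with a list, which suggests `command` must be passed as a list (`command=["python3"]`) rather than the string `"python3"`. A small self-contained sketch of the failing concatenation (the `build_entrypoint` helper below is a stand-in for the SDK internals, not the real function):

```python
# The SDK builds the container entrypoint roughly as command + [code].
# If command is the string "python3", str + list raises exactly:
#   TypeError: can only concatenate str (not "list") to str
def build_entrypoint(command, code):
    """Simplified stand-in for the SDK's entrypoint assembly."""
    return command + [code]

# Passing a list works as intended:
entrypoint = build_entrypoint(["python3"], "main.py")
```

If this is the cause, instantiating with `ScriptProcessor(..., command=["python3"])` should avoid the error.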
|
1.0
|
TypeError: can only concatenate str (not "list") to str - This error appears when you run `ScriptProcessor.run()`. The script is at the same level as my notebook in SageMaker Studio. It doesn't change if I add input and output. I'm using a custom image on ECR. I also tried to save my script on S3 and pass the S3 path.
```
from sagemaker.processing import ScriptProcessor
processor = ScriptProcessor(role=role,
image_uri=processing_image_uri,
instance_count=processing_instance_count,
instance_type=processing_instance_type,
base_job_name=base_job_name,
sagemaker_session = sagemaker_session,
command="python3")
code = 'main.py'
processor.run(
code = code
)
```
**Screenshots or logs**

**System information**
A description of your system. Please provide:
- **SageMaker Python SDK version**: 2.88.1 and 2.93.1
- **Python version**:3.x
**Additional context**
Add any other context about the problem here.
|
process
|
typeerror can only concatenate str not list to str this error appears when you run the scriptprocessor run the script its at thew same level of my notebook in pagemaker studio it doesn t change if i add input and output i m using a custom image on ecr i also tried to save my script on and pass the path from sagemaker processing import processor processor scriptprocessor role role image uri processing image uri instance count processing instance count instance type processing instance type base job name base job name sagemaker session sagemaker session command code main py processor run code code screenshots or logs system information a description of your system please provide sagemaker python sdk version and python version x additional context add any other context about the problem here
| 1
|
17,721
| 23,625,566,544
|
IssuesEvent
|
2022-08-25 03:17:09
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add The Rusty Venture Show
|
suggested title in process
|
Title:
The Rusty Venture Show
Type (film/tv show):
TV
Film or show in which it appears:
The Venture Bros.
Is the parent film/show streaming anywhere?
HBO Max.
There are also clips on the Adult Swim page: https://www.adultswim.com/videos/the-venture-bros
About when in the parent film/show does it appear?
Sporadically throughout the run. s3e11 "ORB" is probably the best source for images, @ 0:00 & 9:11.
(I have the Blu-Rays, and can do captures if needed.)
Actual footage of the film/show can be seen (yes/no)?
Yes
Summary:
The animated adventures of boy adventurer Rusty Venture and Team Venture.
Pull Quote:
"Brought to you by: smoking!"
Cast:
Rusty Venture, Dr. Jonas Venture, Kano, Hector, H.E.L.P.eR.
Studio:
Venture-Industries
Categories:
Adventure, Travel, Animation
Number of Seasons:
(Canonically it was a short run. 2 or 3, maybe?)
Content Rating:
TV-PG
|
1.0
|
Add The Rusty Venture Show - Title:
The Rusty Venture Show
Type (film/tv show):
TV
Film or show in which it appears:
The Venture Bros.
Is the parent film/show streaming anywhere?
HBO Max.
There are also clips on the Adult Swim page: https://www.adultswim.com/videos/the-venture-bros
About when in the parent film/show does it appear?
Sporadically throughout the run. s3e11 "ORB" is probably the best source for images, @ 0:00 & 9:11.
(I have the Blu-Rays, and can do captures if needed.)
Actual footage of the film/show can be seen (yes/no)?
Yes
Summary:
The animated adventures of boy adventurer Rusty Venture and Team Venture.
Pull Quote:
"Brought to you by: smoking!"
Cast:
Rusty Venture, Dr. Jonas Venture, Kano, Hector, H.E.L.P.eR.
Studio:
Venture-Industries
Categories:
Adventure, Travel, Animation
Number of Seasons:
(Canonically it was a short run. 2 or 3, maybe?)
Content Rating:
TV-PG
|
process
|
add the rusty venture show title the rusty venture show type film tv show tv film or show in which it appears the venture bros is the parent film show streaming anywhere hbo max there are also clips on the adult swim page about when in the parent film show does it appear sporadically throughout the run orb is probably the best source for images i have the blu rays and can do captures if needed actual footage of the film show can be seen yes no yes summary the animated adventures of boy adventurer rusty venture and team venture pull quote brought to you by smoking cast rusty venture dr jonas venture kano hector h e l p er studio venture industries categories adventure travel animation number of seasons canonically it was a short run or maybe content rating tv pg
| 1
|
99,058
| 4,045,339,109
|
IssuesEvent
|
2016-05-21 23:07:43
|
Valenchak/sharpsense
|
https://api.github.com/repos/Valenchak/sharpsense
|
closed
|
RXD, RTS, DTR - Should be driven low before turning off the Sim900 power MOSFET
|
auto-migrated Category-Firmware Priority-Critical Type-Task
|
```
Since GSM_RXD, GSM_RTS, GSM_DTR pins will be pulled up internally to PSoC3 (or
driven strong), before turning off the power to SIM900 (using MOSFET), it has
to be ensured that these three pins are driven low, so that they won't back
power the sim900.
```
Original issue reported on code.google.com by `mysorena...@gmail.com` on 1 Feb 2012 at 4:21
|
1.0
|
RXD, RTS, DTR - Should be driven low before turning off the Sim900 power MOSFET - ```
Since GSM_RXD, GSM_RTS, GSM_DTR pins will be pulled up internally to PSoC3 (or
driven strong), before turning off the power to SIM900 (using MOSFET), it has
to be ensured that these three pins are driven low, so that they won't back
power the sim900.
```
Original issue reported on code.google.com by `mysorena...@gmail.com` on 1 Feb 2012 at 4:21
|
non_process
|
rxd rts dtr should be driven low before turning off the power mosfet since gsm rxd gsm rts gsm dtr pins will be pulled up internally to or driven strong before turning off the power to using mosfet it has to be ensured that these three pins are driven low so that they won t back power the original issue reported on code google com by mysorena gmail com on feb at
| 0
|
291
| 2,731,419,473
|
IssuesEvent
|
2015-04-16 20:15:16
|
cfpb/hmda-viz-prototype
|
https://api.github.com/repos/cfpb/hmda-viz-prototype
|
closed
|
report_list.json files
|
enhancement Processing
|
We're creating MSA-MD folders when no reports exist. The folder has the report_list.json file with just an empty object.
Here is an example:
https://github.com/cfpb/hmda-viz-prototype/tree/gh-pages/aggregate/2013/alabama/albertville
- [ ] don't create the msa-md folder if there are no reports
- [ ] rename the file to either reports.json or report-list.json (with a hyphen) to be consistent with other file/folder names
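The first checkbox can be handled by refusing to create the folder at all when the report list is empty. A minimal sketch (hypothetical helper name, not the project's actual code; it also uses the hyphenated `report-list.json` name from the second checkbox):

```python
import json
from pathlib import Path

def write_report_list(folder: Path, reports: list) -> bool:
    """Create the MSA-MD folder and report-list.json only when there
    is at least one report; return True if anything was written."""
    if not reports:
        return False  # skip empty MSA-MDs entirely -- no folder, no file
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "report-list.json").write_text(json.dumps(reports))
    return True
```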
|
1.0
|
report_list.json files - We're creating MSA-MD folders when no reports exist. The folder has the report_list.json file with just an empty object.
Here is an example:
https://github.com/cfpb/hmda-viz-prototype/tree/gh-pages/aggregate/2013/alabama/albertville
- [ ] don't create the msa-md folder if there are no reports
- [ ] rename the file to either reports.json or report-list.json (with a hyphen) to be consistent with other file/folder names
|
process
|
report list json files we re creating msa md folders when no reports exist the folder has the report list json file with just an empty object here is an example don t create the msa md folder if there are no reports rename the file to either reports json or report list json with a hyphen to be consistent with other file folder names
| 1
|
13,781
| 10,455,690,037
|
IssuesEvent
|
2019-09-19 22:03:14
|
OregonDigital/OD2
|
https://api.github.com/repos/OregonDigital/OD2
|
opened
|
WC7 - Setup on-premises k8s cluster
|
Epic Infrastructure Priority - High
|
During WC7 we need to build out the new on-premises staging environment at OSU.
- [ ] Setup control plane and worker VMs
- [ ] Setup Kubernetes cluster
- [ ] Integrate with OSULP Rancher
- [ ] Update CI/CD build/deploy pipeline
**lib-odcp1-3** (control plane nodes)
- 4 CPU
- 8GB RAM
- 100GB disk
**lib-odwork1-4** (worker nodes)
- 4 CPU
- 24GB RAM
- 100GB disk
**32 TB NFS Volume** (high capacity asset/derivative storage)
|
1.0
|
WC7 - Setup on-premises k8s cluster - During WC7 we need to build out the new on-premises staging environment at OSU.
- [ ] Setup control plane and worker VMs
- [ ] Setup Kubernetes cluster
- [ ] Integrate with OSULP Rancher
- [ ] Update CI/CD build/deploy pipeline
**lib-odcp1-3** (control plane nodes)
- 4 CPU
- 8GB RAM
- 100GB disk
**lib-odwork1-4** (worker nodes)
- 4 CPU
- 24GB RAM
- 100GB disk
**32 TB NFS Volume** (high capacity asset/derivative storage)
|
non_process
|
setup on premises cluster during we need to build out the new on premises staging environment at osu setup control plane and worker vms setup kubernetes cluster integrate with osulp rancher update ci cd build deploy pipeline lib control plane nodes cpu ram disk lib worker nodes cpu ram disk tb nfs volume high capacity asset derivative storage
| 0
|
28,006
| 22,751,488,864
|
IssuesEvent
|
2022-07-07 13:28:03
|
w3f/polkadot-wiki
|
https://api.github.com/repos/w3f/polkadot-wiki
|
closed
|
Audit local link references for 'Page not Found' errors
|
Wiki Infrastructure
|
This issue builds off #3136, which was recently closed. That task involved resolving all the external link URLs but did not include references to other local markdown files. While scanning the console output for build warnings and errors I noticed the following:
```
[INFO] Docusaurus found broken links!
Please check the pages of your site in the list below, and make sure you don't reference any path that does not exist.
Note: it's possible to ignore broken links with the 'onBrokenLinks' Docusaurus configuration, and let the build pass.
Exhaustive list of all broken links found:
- On source page path = /docs/build-wallets:
-> linking to learn-treasury.md (resolved as: /docs/learn-treasury.md)
```
I need to look more into how this was generated but it points to at least 1 real broken link:
https://wiki.polkadot.network/docs/learn-treasury.md
If we can validate that the Docusaurus scan is checking all the local links, the resolution to this issue is simply fixing the one path outlined above. If other local links are determined to be broken it may be worth looking into scanning the repo with another strategy or investigating how Docusaurus is performing the check.
|
1.0
|
Audit local link references for 'Page not Found' errors - This issue builds off #3136 which was recently closed. That task involved resolving all the external link urls but did not include references to other local markdown. While scanning the console output for build warnings and errors I noticed the following:
```
[INFO] Docusaurus found broken links!
Please check the pages of your site in the list below, and make sure you don't reference any path that does not exist.
Note: it's possible to ignore broken links with the 'onBrokenLinks' Docusaurus configuration, and let the build pass.
Exhaustive list of all broken links found:
- On source page path = /docs/build-wallets:
-> linking to learn-treasury.md (resolved as: /docs/learn-treasury.md)
```
I need to look more into how this was generated but it points to at least 1 real broken link:
https://wiki.polkadot.network/docs/learn-treasury.md
If we can validate that the Docusaurus scan is checking all the local links, the resolution to this issue is simply fixing the one path outlined above. If other local links are determined to be broken it may be worth looking into scanning the repo with another strategy or investigating how Docusaurus is performing the check.
|
non_process
|
audit local link references for page not found errors this issue builds off which was recently closed that task involved resolving all the external link urls but did not include references to other local markdown while scanning the console output for build warnings and errors i noticed the following docusaurus found broken links please check the pages of your site in the list below and make sure you don t reference any path that does not exist note it s possible to ignore broken links with the onbrokenlinks docusaurus configuration and let the build pass exhaustive list of all broken links found on source page path docs build wallets linking to learn treasury md resolved as docs learn treasury md i need to look more into how this was generated but it points to at least real broken link if we can validate that the docusaurus scan is checking all the local links the resolution to this issue is simply fixing the one path outlined above if other local links are determined to be broken it may be worth looking into scanning the repo with another strategy or investigating how docusaurus is performing the check
| 0
|
15,829
| 20,020,646,742
|
IssuesEvent
|
2022-02-01 16:06:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Implement datamodel parser validations for two-way embedded many-to-many relations on MongoDB
|
process/candidate team/migrations topic: mongodb
|
We want to enable relations looking like this:
```
model Book {
...
author_ids String[] @db.ObjectId
authors Author[] @relation(fields: [author_ids], references: id)
}
model Author {
...
book_ids String[] @db.ObjectId
books Book[] @relation(fields: [book_ids], references: id)
}
```
This is a type of many-to-many relations with the following rules:
- It is only possible on mongodb
- The `@relation` attributes on both sides have both `fields:` and `references:` as required arguments.
- The `references:` argument must reference a single-field ID on the opposite model. No multi-field ids, no referencing unique criteria other than a single field id.
- Both sides of the relation must be present
- Both sides of the relation must have the list arity.
**This will be the only type of many-to-many relations allowed on mongodb**. In other words, implicit many-to-many relations without `fields: [...]` with scalar backing fields are not valid. This should be validated.
More importantly,
- we will need to look at existing relation validations, and filter out which are general, and which are only valid on SQL connectors, then implement that separation.
- we will need to look at the reformatting logic and make sure
- there is no automatic reformatting for this type of relations, or that it produces valid code
- we do not break relation reformatting for other types of relations in the process
|
1.0
|
Implement datamodel parser validations for two-way embedded many-to-many relations on MongoDB - We want to enable relations looking like this:
```
model Book {
...
author_ids String[] @db.ObjectId
authors Author[] @relation(fields: [author_ids], references: id)
}
model Author {
...
book_ids String[] @db.ObjectId
books Book[] @relation(fields: [book_ids], references: id)
}
```
This is a type of many-to-many relations with the following rules:
- It is only possible on mongodb
- The `@relation` attributes on both sides have both `fields:` and `references:` as required arguments.
- The `references:` argument must reference a single-field ID on the opposite model. No multi-field ids, no referencing unique criteria other than a single field id.
- Both sides of the relation must be present
- Both sides of the relation must have the list arity.
**This will be the only type of many-to-many relations allowed on mongodb**. In other words, implicit many-to-many relations without `fields: [...]` with scalar backing fields are not valid. This should be validated.
More importantly,
- we will need to look at existing relation validations, and filter out which are general, and which are only valid on SQL connectors, then implement that separation.
- we will need to look at the reformatting logic and make sure
- there is no automatic reformatting for this type of relations, or that it produces valid code
- we do not break relation reformatting for other types of relations in the process
|
process
|
implement datamodel parser validations for two way embedded many to many relations on mongodb we want to enable relations looking like this model book author ids string db objectid authors author relation fields references id model author book ids string db objectid books book relation fields references id this is a type of many to many relations with the following rules it is only possible on mongodb the relation attributes on both sides have both fields and references as required arguments the references argument must reference a single field id on the opposite model no multi field ids no referencing unique criteria other than a single field id both sides of the relation must be present both sides of the relation must have the list arity this will be the only type of many to many relations allowed on mongodb in other words implicit many to many relations without fields with scalar backing fields are not valid this should be validated more importantly we will need to look at existing relation validations and filter out which are general and which are only valid on sql connectors then implement that separation we will need to look at the reformatting logic and make sure there is no automatic reformatting for this type of relations or that it produces valid code we do not break relation reformatting for other types of relations in the process
| 1
|
12,168
| 14,741,611,270
|
IssuesEvent
|
2021-01-07 10:53:26
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Rockland - TAC-6570
|
anc-process anp-important ant-bug ant-enhancement ant-parent/primary has attachment
|
In GitLab by @kdjstudios on Jan 29, 2019, 09:19
**Submitted by:** Michele Burns <michele.burns@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-29-24769/conversation
**Server:** Internal
**Client/Site:** Rockland
**Account:** TAC-6570
**Issue:**
Please see the attached ledger for the account listed in the subject.
The balance on the account in the ledger is $0.01, but the account
balance shows as $1000.00.
The account balance should be $0.01. Can you please look into why this
is not reflected in the account balance?
Please adjust accordingly.
[Rescue+Tees_Police+Tees_Ledger+_1_.pdf](/uploads/4a00b8631e3fce877bb671a2d506e695/Rescue+Tees_Police+Tees_Ledger+_1_.pdf)
**TEST CASES SHEET** - [Payment Apply in Draft Invoice (Test Cases)] (https://docs.google.com/spreadsheets/d/1dqUjmGK5HgBjd2xJt007C8DIMUPP5NA4q5bRfLQ7sMo/edit?usp=sharing)
|
1.0
|
Rockland - TAC-6570 - In GitLab by @kdjstudios on Jan 29, 2019, 09:19
**Submitted by:** Michele Burns <michele.burns@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-29-24769/conversation
**Server:** Internal
**Client/Site:** Rockland
**Account:** TAC-6570
**Issue:**
Please see the attached ledger for the account listed in the subject.
The balance on the account in the ledger is $0.01, but the account
balance shows as $1000.00.
The account balance should be $0.01. Can you please look into why this
is not reflected in the account balance?
Please adjust accordingly.
[Rescue+Tees_Police+Tees_Ledger+_1_.pdf](/uploads/4a00b8631e3fce877bb671a2d506e695/Rescue+Tees_Police+Tees_Ledger+_1_.pdf)
**TEST CASES SHEET** - [Payment Apply in Draft Invoice (Test Cases)] (https://docs.google.com/spreadsheets/d/1dqUjmGK5HgBjd2xJt007C8DIMUPP5NA4q5bRfLQ7sMo/edit?usp=sharing)
|
process
|
rockland tac in gitlab by kdjstudios on jan submitted by michele burns helpdesk server internal client site rockland account tac issue please see the attached ledger for the account listed in the subject the balance on the account in the ledger is but the account balance shows as the account balance should be can you please look into why this is not reflected in the account balance please adjust accordingly uploads rescue tees police tees ledger pdf test cases sheet
| 1
|
20,374
| 27,028,702,062
|
IssuesEvent
|
2023-02-11 23:19:02
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
getItemsByTag() - empty after one item has expired
|
0_0 Bug [-_-] In Process ~_~ Issue confirmed 8.1 9.1
|
### What type of issue is this?
Incorrect/unexpected/unexplainable behavior
### Operating system + version
Debian 11
### PHP version
8.1
### Connector/Database version (if applicable)
Memcached, Files
### Phpfastcache version
9.1.2 ✅
### Describe the issue you're facing
I create multiple keys with different expiration values using expiresAfter(). I'm using the addTags() function to search for the keys if necessary (e.g to manage them in a CMS). When a first item key expires, all other keys with the same tag name are immediately gone and are not callable anymore with getItemsByTag().
So basically if you have a 5 seconds key and another 10 seconds key (both initialized at the same time) with the same tag name, both items will be returned by getItemsByTag(). But if the 5 seconds key has expired, the 10 seconds key is also gone when calling getItemsByTag(). The 10 seconds key is still physically there but cannot be found by the previously declared tag anymore. So a 10 seconds timer would be gone within 5 seconds in this scenario. If the 5 seconds timer would not exist, the 10 seconds timer would of course counting down until 0.
### Expected behavior
All keys with the same tag name should be still callable, even if previous keys with the same tag has been expired.
### Code sample (optional)
_No response_
### Suggestion to fix the issue (optional)
_No response_
### References (optional)
_No response_
### Do you have anything more you want to share? (optional)
In the video you can see, that when the statistics key expires, the player object key is also gone while it should have 5 seconds left. At the same time, a new statistics key will be created (which is not important).
https://i.gyazo.com/ad584c00607032c41a83db2378ff51d8.mp4
### Have you searched in our Wiki before posting ?
- [X] I have searched over the Wiki
|
1.0
|
getItemsByTag() - empty after one item has expired - ### What type of issue is this?
Incorrect/unexpected/unexplainable behavior
### Operating system + version
Debian 11
### PHP version
8.1
### Connector/Database version (if applicable)
Memcached, Files
### Phpfastcache version
9.1.2 ✅
### Describe the issue you're facing
I create multiple keys with different expiration values using expiresAfter(). I'm using the addTags() function to search for the keys if necessary (e.g to manage them in a CMS). When a first item key expires, all other keys with the same tag name are immediately gone and are not callable anymore with getItemsByTag().
So basically if you have a 5 seconds key and another 10 seconds key (both initialized at the same time) with the same tag name, both items will be returned by getItemsByTag(). But if the 5 seconds key has expired, the 10 seconds key is also gone when calling getItemsByTag(). The 10 seconds key is still physically there but cannot be found by the previously declared tag anymore. So a 10 seconds timer would be gone within 5 seconds in this scenario. If the 5 seconds timer would not exist, the 10 seconds timer would of course counting down until 0.
### Expected behavior
All keys with the same tag name should be still callable, even if previous keys with the same tag has been expired.
### Code sample (optional)
_No response_
### Suggestion to fix the issue (optional)
_No response_
### References (optional)
_No response_
### Do you have anything more you want to share? (optional)
In the video you can see, that when the statistics key expires, the player object key is also gone while it should have 5 seconds left. At the same time, a new statistics key will be created (which is not important).
https://i.gyazo.com/ad584c00607032c41a83db2378ff51d8.mp4
### Have you searched in our Wiki before posting ?
- [X] I have searched over the Wiki
|
process
|
getitemsbytag empty after one item has expired what type of issue is this incorrect unexpected unexplainable behavior operating system version debian php version connector database version if applicable memcached files phpfastcache version ✅ describe the issue you re facing i create multiple keys with different expiration values using expiresafter i m using the addtags function to search for the keys if necessary e g to manage them in a cms when a first item key expires all other keys with the same tag name are immediately gone and are not callable anymore with getitemsbytag so basically if you have a seconds key and another seconds key both initialized at the same time with the same tag name both items will be returned by getitemsbytag but if the seconds key has expired the seconds key is also gone when calling getitemsbytag the seconds key is still physically there but cannot be found by the previously declared tag anymore so a seconds timer would be gone within seconds in this scenario if the seconds timer would not exist the seconds timer would of course counting down until expected behavior all keys with the same tag name should be still callable even if previous keys with the same tag has been expired code sample optional no response suggestion to fix the issue optional no response references optional no response do you have anything more you want to share optional in the video you can see that when the statistics key expires the player object key is also gone while it should have seconds left at the same time a new statistics key will be created which is not important have you searched in our wiki before posting i have searched over the wiki
| 1
|
364,803
| 10,773,460,570
|
IssuesEvent
|
2019-11-02 20:47:20
|
minj/foxtrick
|
https://api.github.com/repos/minj/foxtrick
|
reopened
|
Rewrite the player faces functionality in MatchOrderInterface
|
MatchOrder NT Priority-Medium accepted enhancement
|
Possible to use available data to generate faces:

Should fix NT player faces
|
1.0
|
Rewrite the player faces functionality in MatchOrderInterface - Possible to use available data to generate faces:

Should fix NT player faces
|
non_process
|
rewrite the player faces functionality in matchorderinterface possible to use available data to generate faces should fix nt player faces
| 0
|
434,798
| 12,528,000,988
|
IssuesEvent
|
2020-06-04 08:52:05
|
MLH-Fellowship/CodeVidLive
|
https://api.github.com/repos/MLH-Fellowship/CodeVidLive
|
closed
|
Covid 19 case dataset with geo locations
|
duplicate priority/high
|
Crawl news and merge?
Checking out existing Covid Data on Kaggle
|
1.0
|
Covid 19 case dataset with geo locations - Crawl news and merge?
Checking out existing Covid Data on Kaggle
|
non_process
|
covid case dataset with geo locations crawl news and merge checking out existing covid data on kaggle
| 0
|
613,413
| 19,089,533,447
|
IssuesEvent
|
2021-11-29 10:30:26
|
hashicorp/terraform-cdk
|
https://api.github.com/repos/hashicorp/terraform-cdk
|
closed
|
Bundle CLI for distribution
|
enhancement cdktf-cli committed priority/critical-urgent size/medium
|
<!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
It might be worthwhile to bundle the CLI for distribution and include all dependencies via esbuild
- We have a known, tested state in terms of dependencies which we're distributing
- It's probably faster to install
- Using the CLI as a dependency in other projects becomes less of a headache when thinking about conflicting package versions (react, graphql, express, ...)
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
|
1.0
|
Bundle CLI for distribution - <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
It might be worthwhile to bundle the CLI for distribution and include all dependencies via esbuild
- We have a known, tested state in terms of dependencies which we're distributing
- It's probably faster to install
- Using the CLI as a dependency in other projects becomes less of a headache when thinking about conflicting package versions (react, graphql, express, ...)
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
|
non_process
|
bundle cli for distribution community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description it might be worthwhile to bundle the cli for distribution and include all dependencies via esbuild we have a known tested state in terms of dependencies which we re distributing it s probably faster to install using the cli as a dependency in other projects becomes less of a headache when thinking about conflicting package versions react graphql express references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation
| 0
|
14,166
| 17,086,041,061
|
IssuesEvent
|
2021-07-08 11:59:42
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[SB] Anchor date based custom scheduling > Order of anchor runs is incorrect in GetStudyActivityList API
|
Bug P1 Process: Fixed Process: Tested dev Study datastore
|
Steps:
1. Configure a date based source questionnaire
2. Add a questionnaire and choose anchor based custom schdule
3. Configure some runs involving negative X/Y values
4. Publish updates
5. Observe the order in GetStudyActivityList API
Actual: Order is incorrect
Expected: Order should lowest X value run to Highest
Issue should be fixed for questionnaires and active tasks custom schedule
Issue should be fixed for `start `& `end `arrays and `anchorRuns `array


|
2.0
|
[SB] Anchor date based custom scheduling > Order of anchor runs is incorrect in GetStudyActivityList API - Steps:
1. Configure a date based source questionnaire
2. Add a questionnaire and choose anchor based custom schdule
3. Configure some runs involving negative X/Y values
4. Publish updates
5. Observe the order in GetStudyActivityList API
Actual: Order is incorrect
Expected: Order should lowest X value run to Highest
Issue should be fixed for questionnaires and active tasks custom schedule
Issue should be fixed for `start `& `end `arrays and `anchorRuns `array


|
process
|
anchor date based custom scheduling order of anchor runs is incorrect in getstudyactivitylist api steps configure a date based source questionnaire add a questionnaire and choose anchor based custom schdule configure some runs involving negative x y values publish updates observe the order in getstudyactivitylist api actual order is incorrect expected order should lowest x value run to highest issue should be fixed for questionnaires and active tasks custom schedule issue should be fixed for start end arrays and anchorruns array
| 1
|
163,292
| 13,914,694,298
|
IssuesEvent
|
2020-10-20 22:43:10
|
EOSIO/eosjs
|
https://api.github.com/repos/EOSIO/eosjs
|
closed
|
[docs] Suggestion / Change Request
|
bug documentation
|
```
const resp = await rpc.get_table_rows({
json: true, // Get the response as json
code: 'eosio.token', // Contract that we target
scope: 'testacc' // Account that owns the data
table: 'accounts' // Table name
limit: 10, // Maximum number of rows that we want to get
reverse = false, // Optional: Get reversed data
show_payer = false, // Optional: Show ram payer
});
console.log(resp.rows);
```
There are some `,` characters missing in the above code.
And the `=` in the JSON should be replaced with `:`.
<!--1. Please change the issue title to describe your documentation change request. 2. Accurrately describe your issue in the issue. 3. DO NOT remove this message or the file link below [dyn_doc_cr] -->File: [docs/03_reading-blockchain-examples.md](https://github.com/EOSIO/eosjs/blob/develop/docs/03_reading-blockchain-examples.md)
|
1.0
|
[docs] Suggestion / Change Request - ```
const resp = await rpc.get_table_rows({
json: true, // Get the response as json
code: 'eosio.token', // Contract that we target
scope: 'testacc' // Account that owns the data
table: 'accounts' // Table name
limit: 10, // Maximum number of rows that we want to get
reverse = false, // Optional: Get reversed data
show_payer = false, // Optional: Show ram payer
});
console.log(resp.rows);
```
There are some `,` characters missing in the above code.
And the `=` in the JSON should be replaced with `:`.
<!--1. Please change the issue title to describe your documentation change request. 2. Accurrately describe your issue in the issue. 3. DO NOT remove this message or the file link below [dyn_doc_cr] -->File: [docs/03_reading-blockchain-examples.md](https://github.com/EOSIO/eosjs/blob/develop/docs/03_reading-blockchain-examples.md)
|
non_process
|
suggestion change request const resp await rpc get table rows json true get the response as json code eosio token contract that we target scope testacc account that owns the data table accounts table name limit maximum number of rows that we want to get reverse false optional get reversed data show payer false optional show ram payer console log resp rows there are some characters missing in the above code and the in the json should be replaced with file
| 0
|
21,021
| 27,969,901,368
|
IssuesEvent
|
2023-03-25 00:16:21
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
[Feature request] Use the one or more selection areas to auto generate the parmetric mask
|
feature: enhancement difficulty: hard scope: image processing no-issue-activity
|
If you want to create a parametric mask, you have to move the slider manually for one (or more) channels. Color picker helps to estimate where the desired areas to be selected are located on each channel.
Can color picker area be used to auto generate the parametric mask?
The best way to explain what I mean is to give an example.
In this image I would like to mask red and blue berries:

Now I choose an area to mask the red berries. Color and brightness values are read out and the parametric mask is generated:

To this mask I add an additional value by selecting the blue berry:

In order to make the mask even more precise, I choose at the end an area that should be subtracted from the mask:

I have to admit that I'm not a developer and I can't judge whether such a thing would be achievable. However, from a user perspective, I think this would be a great enhancement.
|
1.0
|
[Feature request] Use the one or more selection areas to auto generate the parmetric mask - If you want to create a parametric mask, you have to move the slider manually for one (or more) channels. Color picker helps to estimate where the desired areas to be selected are located on each channel.
Can color picker area be used to auto generate the parametric mask?
The best way to explain what I mean is to give an example.
In this image I would like to mask red and blue berries:

Now I choose an area to mask the red berries. Color and brightness values are read out and the parametric mask is generated:

To this mask I add an additional value by selecting the blue berry:

In order to make the mask even more precise, I choose at the end an area that should be subtracted from the mask:

I have to admit that I'm not a developer and I can't judge whether such a thing would be achievable. However, from a user perspective, I think this would be a great enhancement.
|
process
|
use the one or more selection areas to auto generate the parmetric mask if you want to create a parametric mask you have to move the slider manually for one or more channels color picker helps to estimate where the desired areas to be selected are located on each channel can color picker area be used to auto generate the parametric mask the best way to explain what i mean is to give an example in this image i would like to mask red and blue berries now i choose an area to mask the red berries color and brightness values are read out and the parametric mask is generated to this mask i add an additional value by selecting the blue berry in order to make the mask even more precise i choose at the end an area that should be subtracted from the mask i have to admit that i m not a developer and i can t judge whether such a thing would be achievable however from a user perspective i think this would be a great enhancement
| 1
|
422,666
| 12,286,669,573
|
IssuesEvent
|
2020-05-09 08:33:39
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Stacked Bar Chart Incorrect Rendering with Negative Values
|
Priority:P2 Type:Bug Visualization/
|
**Describe the bug**
Stacked bar chart produces incorrect results when given a combination of positive and negative values to stack. Issue seems to be present for both - Custom/SQL and Native Query types of questions. Tested with Metabase versions v0.32.8 and v0.32.10. Database type seems to be irrelevant, tested with Postgres, H2 and MS SQL Server.
**To Reproduce**
1. Create a new Native Query or Custom question where underlying values contain combination of negatives and positives, for example for native query use this sample sql:
```
select *
from (values
('2019-07-30', 'A', 10)
,('2019-07-30', 'B', 5)
,('2019-07-30', 'C', 3)
,('2019-07-31', 'A', 10)
,('2019-07-31', 'B', -5)
,('2019-07-31', 'C', -3)
) t1 ("Date", "Category", "Net Sales")
```

2. Based on the data above build a visualisation of type Bar with X-axis set to 'Date' and 'Category', and Y-axis to 'Net Sales'
3. Under 'Display' tab select Stacking option 'Stack'
4. Observe visualization results

**Issues with the above visualization, 2nd stacked bar**:
- Total Net Sales for 2019-07-31 look like they are at 10 while actually it's 2 (= 10 - 5 - 3)
- 2019-07-31 Net Sales for Category A are at 2, while it should be 10
- Net sales for Categories B and C look like positive 5 and 3, while they are negative
**Expected behavior**
Expected result - something inline with the below:

**Information about your Metabase Installation:**
- Your databases:
Tested with Postgres, H2, MS SQL Server
- Metabase version:
Tested with v0.32.8 and v0.32.10
- Metabase hosting environment:
Docker
- Metabase internal database:
Postgres
**Severity**
Blocking usage of Metabase Stacked Bar chart visualization type.
Some existing reports are affected.
**Additional context**
There are potentially related or similar issues mentioned previously but they seem to lack details and response:
- https://github.com/metabase/metabase/issues/9147
- https://github.com/metabase/metabase/issues/5763
- https://discourse.metabase.com/t/how-to-make-a-correct-bar-plot-with-negative-values/4910
Please advise if this is not a bug and/or there is a known workaround.
:arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
|
1.0
|
Stacked Bar Chart Incorrect Rendering with Negative Values - **Describe the bug**
Stacked bar chart produces incorrect results when given a combination of positive and negative values to stack. Issue seems to be present for both - Custom/SQL and Native Query types of questions. Tested with Metabase versions v0.32.8 and v0.32.10. Database type seems to be irrelevant, tested with Postgres, H2 and MS SQL Server.
**To Reproduce**
1. Create a new Native Query or Custom question where underlying values contain combination of negatives and positives, for example for native query use this sample sql:
```
select *
from (values
('2019-07-30', 'A', 10)
,('2019-07-30', 'B', 5)
,('2019-07-30', 'C', 3)
,('2019-07-31', 'A', 10)
,('2019-07-31', 'B', -5)
,('2019-07-31', 'C', -3)
) t1 ("Date", "Category", "Net Sales")
```

2. Based on the data above build a visualisation of type Bar with X-axis set to 'Date' and 'Category', and Y-axis to 'Net Sales'
3. Under 'Display' tab select Stacking option 'Stack'
4. Observe visualization results

**Issues with the above visualization, 2nd stacked bar**:
- Total Net Sales for 2019-07-31 look like they are at 10 while actually it's 2 (= 10 - 5 - 3)
- 2019-07-31 Net Sales for Category A are at 2, while it should be 10
- Net sales for Categories B and C look like positive 5 and 3, while they are negative
**Expected behavior**
Expected result - something inline with the below:

**Information about your Metabase Installation:**
- Your databases:
Tested with Postgres, H2, MS SQL Server
- Metabase version:
Tested with v0.32.8 and v0.32.10
- Metabase hosting environment:
Docker
- Metabase internal database:
Postgres
**Severity**
Blocking usage of Metabase Stacked Bar chart visualization type.
Some existing reports are affected.
**Additional context**
There are potentially related or similar issues mentioned previously but they seem to lack details and response:
- https://github.com/metabase/metabase/issues/9147
- https://github.com/metabase/metabase/issues/5763
- https://discourse.metabase.com/t/how-to-make-a-correct-bar-plot-with-negative-values/4910
Please advise if this is not a bug and/or there is a known workaround.
:arrow_down: Please click the :+1: reaction instead of leaving a `+1` or `update?` comment
|
non_process
|
stacked bar chart incorrect rendering with negative values describe the bug stacked bar chart produces incorrect results when given a combination of positive and negative values to stack issue seems to be present for both custom sql and native query types of questions tested with metabase versions and database type seems to be irrelevant tested with postgres and ms sql server to reproduce create a new native query or custom question where underlying values contain combination of negatives and positives for example for native query use this sample sql select from values a b c a b c date category net sales based on the data above build a visualisation of type bar with x axis set to date and category and y axis to net sales under display tab select stacking option stack observe visualization results issues with the above visualization stacked bar total net sales for look like they are at while actually it s net sales for category a are at while it should be net sales for categories b and c look like positive and while they are negative expected behavior expected result something inline with the below information about your metabase installation your databases tested with postgres ms sql server metabase version tested with and metabase hosting environment docker metabase internal database postgres severity blocking usage of metabase stacked bar chart visualization type some existing reports are affected additional context there are potentially related or similar issues mentioned previously but they seem to lack details and response please advise if this is not a bug and or there is a known workaround arrow down please click the reaction instead of leaving a or update comment
| 0
|
12,325
| 14,882,272,713
|
IssuesEvent
|
2021-01-20 11:39:39
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile Apps] Signup > Email code expiry time for resent verification is set to 48 mins instead of 48 hours
|
Bug Hydra P1 Process: Fixed
|
**Steps:**
1. Navigate to singup in mobile apps
2. Enter email and password fields Eg. on 16th Jan 1PM
3. Navigated to verification step.
4. Expiry time for the email code will be on 18th Jan 1PM
5. Click on resend verification code
6. Observe that expiry time for resent verification is 48 mins i.e 16th Jan 1:48 PM
**Actual:** Email code expiry time for resent verification is incorrect and set to 48 minutes from the time when user clicks on resend verification code
**Expected:** Email code expiry time for resent verification should be always 48 hours
Issue not observed for PM
Before resend i.e after signup -

After resend i.e after clicking on resend verification -

|
1.0
|
[Mobile Apps] Signup > Email code expiry time for resent verification is set to 48 mins instead of 48 hours - **Steps:**
1. Navigate to singup in mobile apps
2. Enter email and password fields Eg. on 16th Jan 1PM
3. Navigated to verification step.
4. Expiry time for the email code will be on 18th Jan 1PM
5. Click on resend verification code
6. Observe that expiry time for resent verification is 48 mins i.e 16th Jan 1:48 PM
**Actual:** Email code expiry time for resent verification is incorrect and set to 48 minutes from the time when user clicks on resend verification code
**Expected:** Email code expiry time for resent verification should be always 48 hours
Issue not observed for PM
Before resend i.e after signup -

After resend i.e after clicking on resend verification -

|
process
|
signup email code expiry time for resent verification is set to mins instead of hours steps navigate to singup in mobile apps enter email and password fields eg on jan navigated to verification step expiry time for the email code will be on jan click on resend verification code observe that expiry time for resent verification is mins i e jan pm actual email code expiry time for resent verification is incorrect and set to minutes from the time when user clicks on resend verification code expected email code expiry time for resent verification should be always hours issue not observed for pm before resend i e after signup after resend i e after clicking on resend verification
| 1
|
13,183
| 15,610,833,228
|
IssuesEvent
|
2021-03-19 13:39:36
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Bad binary version according to synchronous/asynchronous task
|
Bug Process Stalled Status: Needs Review
|
**Symfony version(s) affected**: 5.0.5
**Description**
```
$process = new Process(
['curl', 'https://www.google.fr'],
base_path()
);
$process->disableOutput();
$process->start();
```
I work on MacOS, curl apple version is not yet up to date so I've installed with brew curl-openssl.
I've exported binary path on my PATH
**How to reproduce**
**On synchronous task :**
- Process run successfully
- cURL version used is good
**On asynchronous task :**
- Process exit
- cURL is an older version (using the default MacOS curl version at /usr/bin/curl)
Someone's already had the problem ?
|
1.0
|
Bad binary version according to synchronous/asynchronous task - **Symfony version(s) affected**: 5.0.5
**Description**
```
$process = new Process(
['curl', 'https://www.google.fr'],
base_path()
);
$process->disableOutput();
$process->start();
```
I work on MacOS, curl apple version is not yet up to date so I've installed with brew curl-openssl.
I've exported binary path on my PATH
**How to reproduce**
**On synchronous task :**
- Process run successfully
- cURL version used is good
**On asynchronous task :**
- Process exit
- cURL is an older version (using the default MacOS curl version at /usr/bin/curl)
Someone's already had the problem ?
|
process
|
bad binary version according to synchronous asynchronous task symfony version s affected description process new process base path process disableoutput process start i work on macos curl apple version is not yet up to date so i ve installed with brew curl openssl i ve exported binary path on my path how to reproduce on synchronous task process run successfully curl version used is good on asynchronous task process exit curl is an older version using the default macos curl version at usr bin curl someone s already had the problem
| 1
|
21,933
| 11,660,540,211
|
IssuesEvent
|
2020-03-03 03:41:34
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
opened
|
RW Permit Form Update
|
Product: AMANDA Project: ATD AMANDA Backlog Service: Apps Type: Enhancement Workgroup: ROW migrated
|
Description: Update all RW permit forms to incorporate the property info tab comments.
Request Date: 2018-11-29 16:28:00
Request ID: DTS18-103015
Status: Backlog
Customer Priority: 0
Level of Effort: Minor
DTS URL: https://atd.knack.com/dts#service-requests/view-issue-details/5c0067f6253cc617b53b9c78
*Migrated from [atd-amanda #33](https://github.com/cityofaustin/atd-amanda/issues/33)*
|
1.0
|
RW Permit Form Update - Description: Update all RW permit forms to incorporate the property info tab comments.
Request Date: 2018-11-29 16:28:00
Request ID: DTS18-103015
Status: Backlog
Customer Priority: 0
Level of Effort: Minor
DTS URL: https://atd.knack.com/dts#service-requests/view-issue-details/5c0067f6253cc617b53b9c78
*Migrated from [atd-amanda #33](https://github.com/cityofaustin/atd-amanda/issues/33)*
|
non_process
|
rw permit form update description update all rw permit forms to incorporate the property info tab comments request date request id status backlog customer priority level of effort minor dts url migrated from
| 0
|
12,048
| 14,738,830,420
|
IssuesEvent
|
2021-01-07 05:49:58
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Download Usage on No usage billign cycle
|
anc-ops anc-process anp-1.5 ant-bug ant-support
|
In GitLab by @kdjstudios on Aug 3, 2018, 09:00
The download usage button is still visible on a no usage billing cycle. This needs to be removed. Currently it will download the previous billing cycles usage which will certainly cause issues if anyone tries to use that.
|
1.0
|
Download Usage on No usage billign cycle - In GitLab by @kdjstudios on Aug 3, 2018, 09:00
The download usage button is still visible on a no usage billing cycle. This needs to be removed. Currently it will download the previous billing cycles usage which will certainly cause issues if anyone tries to use that.
|
process
|
download usage on no usage billign cycle in gitlab by kdjstudios on aug the download usage button is still visible on a no usage billing cycle this needs to be removed currently it will download the previous billing cycles usage which will certainly cause issues if anyone tries to use that
| 1
|
436,729
| 12,552,281,224
|
IssuesEvent
|
2020-06-06 17:34:27
|
TheNovi/Nui
|
https://api.github.com/repos/TheNovi/Nui
|
closed
|
Form rewriting
|
Gui enhancement high priority
|
The whole Field class system is unnecessary.
Just add set and get methods to all Widgets (use IMethods)
And to forms add Widgets with labels. Which will be saved in dict with their bind method.
|
1.0
|
Form rewriting - The whole Field class system is unnecessary.
Just add set and get methods to all Widgets (use IMethods)
And to forms add Widgets with labels. Which will be saved in dict with their bind method.
|
non_process
|
form rewriting the whole field class system is unnecessary just add set and get methods to all widgets use imethods and to forms add widgets with labels which will be saved in dict with their bind method
| 0
|
345,932
| 30,854,236,738
|
IssuesEvent
|
2023-08-02 19:12:20
|
unoplatform/uno.toolkit.ui
|
https://api.github.com/repos/unoplatform/uno.toolkit.ui
|
closed
|
[UITest] Add an UITest that launches Runtime Tests
|
kind/enhancement area/tests
|
Extract the [UI Test from Uno](https://github.com/unoplatform/uno/blob/master/src/SamplesApp/SamplesApp.UITests/RuntimeTests.cs) that is responsible for kicking off the runtime tests and add this as a test to the Toolkit UI tests
|
1.0
|
[UITest] Add an UITest that launches Runtime Tests - Extract the [UI Test from Uno](https://github.com/unoplatform/uno/blob/master/src/SamplesApp/SamplesApp.UITests/RuntimeTests.cs) that is responsible for kicking off the runtime tests and add this as a test to the Toolkit UI tests
|
non_process
|
add an uitest that launches runtime tests extract the that is responsible for kicking off the runtime tests and add this as a test to the toolkit ui tests
| 0
|
16,339
| 20,997,134,722
|
IssuesEvent
|
2022-03-29 14:21:20
|
sjmog/smartflix
|
https://api.github.com/repos/sjmog/smartflix
|
opened
|
Render shows to the homepage
|
Rails/File processing Rails/Haml
|
You have just set up a Rails application with a test-driven dummy view! ����
In this challenge, you will update the application so the root route renders the shows from the [provided CSV file](../training-data/netflix_titles.zip).
Here's how it should look by the end of this ticket:

## To complete this ticket, you will have to:
- [ ] Write a new acceptance test that asserts: when the user visits the homepage, the page content should include each show title in the [provided CSV file](../training-data/netflix_titles.csv).
- [ ] Configure your Rails app to use [Haml](https://haml.info/) for the views.
- [ ] Create a new controller to show all shows. Make sure you're following the [Rails naming conventions](https://guides.rubyonrails.org/action_controller_overview.html)!
- [ ] Create a new route so that users visiting the root of your application are directed to the index action of your new controller. Make sure you're following the [Rails routing conventions](https://guides.rubyonrails.org/routing.html)!
- [ ] Pass the acceptance test by displaying all shows from the [provided CSV file](../training-data/netflix_titles.zip) file.
## Tips
- There are a lot of shows in the [provided CSV file](../training-data/netflix_titles.zip)! You may need to limit the number you render to the view.
|
1.0
|
Render shows to the homepage - You have just set up a Rails application with a test-driven dummy view!
In this challenge, you will update the application so the root route renders the shows from the [provided CSV file](../training-data/netflix_titles.zip).
Here's how it should look by the end of this ticket:

## To complete this ticket, you will have to:
- [ ] Write a new acceptance test that asserts: when the user visits the homepage, the page content should include each show title in the [provided CSV file](../training-data/netflix_titles.csv).
- [ ] Configure your Rails app to use [Haml](https://haml.info/) for the views.
- [ ] Create a new controller to show all shows. Make sure you're following the [Rails naming conventions](https://guides.rubyonrails.org/action_controller_overview.html)!
- [ ] Create a new route so that users visiting the root of your application are directed to the index action of your new controller. Make sure you're following the [Rails routing conventions](https://guides.rubyonrails.org/routing.html)!
- [ ] Pass the acceptance test by displaying all shows from the [provided CSV file](../training-data/netflix_titles.zip) file.
## Tips
- There are a lot of shows in the [provided CSV file](../training-data/netflix_titles.zip)! You may need to limit the number you render to the view.
|
process
|
render shows to the homepage you have just set up a rails application with a test driven dummy view ���� in this challenge you will update the application so the root route renders the shows from the training data netflix titles zip here s how it should look by the end of this ticket images smartflix png to complete this ticket you will have to write a new acceptance test that asserts when the user visits the homepage the page content should include each show title in the training data netflix titles csv configure your rails app to use for the views create a new controller to show all shows make sure you re following the create a new route so that users visiting the root of your application are directed to the index action of your new controller make sure you re following the pass the acceptance test by displaying all shows from the training data netflix titles zip file tips there are a lot of shows in the training data netflix titles zip you may need to limit the number you render to the view
| 1
|
15,707
| 10,338,404,475
|
IssuesEvent
|
2019-09-03 16:50:44
|
jeakfrw/jeak-framework
|
https://api.github.com/repos/jeakfrw/jeak-framework
|
opened
|
[🚀] Custom event when significant client properties change
|
api enhancement performance service
|
**Describe the solution you'd like**
Nearly all property changes on users do not trigger events which makes detecting them rather hard to detect. The framework should provide a way for plugins to detect changes and act upon them precisely - instead of crawling over all clients each time. This would be similar to the ``channeledited`` event which already offers a map of changed properties.
The implementation approach could also feature some sort of change handler that could be registered on a property.
Lastly, this could also be used for channels or basically all data holders in general.
**Describe why you would like to see this implemented**
A reduction in boilerplate code would be the immediate result as all plugins that work with changing client properties will no longer have to crawl over all/watched channels after a cache-update.
**Additional context**
Feature requested by @genius42 with regard to being able to detect changes in assigned server groups.
|
1.0
|
[🚀] Custom event when significant client properties change - **Describe the solution you'd like**
Nearly all property changes on users do not trigger events which makes detecting them rather hard to detect. The framework should provide a way for plugins to detect changes and act upon them precisely - instead of crawling over all clients each time. This would be similar to the ``channeledited`` event which already offers a map of changed properties.
The implementation approach could also feature some sort of change handler that could be registered on a property.
Lastly, this could also be used for channels or basically all data holders in general.
**Describe why you would like to see this implemented**
A reduction in boilerplate code would be the immediate result as all plugins that work with changing client properties will no longer have to crawl over all/watched channels after a cache-update.
**Additional context**
Feature requested by @genius42 with regard to being able to detect changes in assigned server groups.
|
non_process
|
custom event when significant client properties change describe the solution you d like nearly all property changes on users do not trigger events which makes detecting them rather hard to detect the framework should provide a way for plugins to detect changes and act upon them precisely instead of crawling over all clients each time this would be similar to the channeledited event which already offers a map of changed properties the implementation approach could also feature some sort of change handler that could be registered on a property lastly this could also be used for channels or basically all data holders in general describe why you would like to see this implemented a reduction in boilerplate code would be the immediate result as all plugins that work with changing client properties will no longer have to crawl over all watched channels after a cache update additional context feature requested by with regard to being able to detect changes in assigned server groups
| 0
|
16,304
| 20,960,721,706
|
IssuesEvent
|
2022-03-27 19:05:26
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Capital Beat
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Capital Beat
Type (film/tv show): TV Show
Film or show in which it appears: _The West Wing_
* Season 1, Episode 1 "Pilot"
* Season 2, Episode 4 "In This White House"
* Season 2, Episode 13 "Bartlet's Third State of the Union"
* Season 3, Episode 9 "Bartlet for America"
Is the parent film/show streaming anywhere? HBO Max
About when in the parent film/show does it appear? I don't have HBO Max, so I can't break this one down.
Actual footage of the film/show can be seen (yes/no)? Yes, both on TV screens in the show, and from a behind-the-scenes perspective as the show is being recorded.
|
1.0
|
Add Capital Beat - Please add as much of the following info as you can:
Title: Capital Beat
Type (film/tv show): TV Show
Film or show in which it appears: _The West Wing_
* Season 1, Episode 1 "Pilot"
* Season 2, Episode 4 "In This White House"
* Season 2, Episode 13 "Bartlet's Third State of the Union"
* Season 3, Episode 9 "Bartlet for America"
Is the parent film/show streaming anywhere? HBO Max
About when in the parent film/show does it appear? I don't have HBO Max, so I can't break this one down.
Actual footage of the film/show can be seen (yes/no)? Yes, both on TV screens in the show, and from a behind-the-scenes perspective as the show is being recorded.
|
process
|
add capital beat please add as much of the following info as you can title capital beat type film tv show tv show film or show in which it appears the west wing season episode pilot season episode in this white house season episode bartlet s third state of the union season episode bartlet for america is the parent film show streaming anywhere hbo max about when in the parent film show does it appear i don t have hbo max so i can t break this one down actual footage of the film show can be seen yes no yes both on tv screens in the show and from a behind the scenes perspective as the show is being recorded
| 1
|
19,106
| 25,158,032,653
|
IssuesEvent
|
2022-11-10 14:53:17
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
The server failed to resume the transaction. Desc:3400000001.
|
bug/2-confirmed kind/bug process/candidate topic: transaction topic: sql server team/client
|
Hi Prisma Team! My Prisma Client just crashed. This is the report:
## Versions
| Name | Version |
|-----------------|--------------------|
| Node | v16.13.2 |
| OS | debian-openssl-1.1.x|
| Prisma Client | 0.0.0 |
| Query Engine | query-engine fd66403800591e7da0f4b941056ef3fb926fffea|
| Database | undefined|
## Logs
```
.1.x are fine
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine {"cwd":"/home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/tests/functional/issues/9678/.generated/issues.9678 (provider=sqlserver)/prisma"}
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/.prisma/client
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/runtime
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine {"flags":["--enable-raw-queries","--enable-metrics","--enable-open-telemetry","--port","34423"]}
prisma:engine stdout Starting a mssql pool with 9 connections.
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Started query engine http server on http://127.0.0.1:34423
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/.prisma/client
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/runtime
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Commit transaction
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine Client Version: 0.0.0
prisma:engine Engine Version: query-engine fd66403800591e7da0f4b941056ef3fb926fffea
prisma:engine Active provider: sqlserver
prisma:engine stdout Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P2034","clientVersion":"0.0.0","meta":{}}}
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P1018","clientVersion":"0.0.0","meta":{"message":"Transaction was already closed: The server failed to resume the transaction. Desc:3400000001."}}}
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P1018","clientVersion":"0.0.0","meta":{"message":"Transaction was already closed: The server failed to resume the transaction. Desc:3400000001."}}}
```
## Client Snippet
```ts
// PLEASE FILL YOUR CODE SNIPPET HERE
```
## Schema
```prisma
// PLEASE ADD YOUR SCHEMA HERE IF POSSIBLE
```
## Prisma Engine Query
```
{"X"}
```
|
1.0
|
The server failed to resume the transaction. Desc:3400000001. - Hi Prisma Team! My Prisma Client just crashed. This is the report:
## Versions
| Name | Version |
|-----------------|--------------------|
| Node | v16.13.2 |
| OS | debian-openssl-1.1.x|
| Prisma Client | 0.0.0 |
| Query Engine | query-engine fd66403800591e7da0f4b941056ef3fb926fffea|
| Database | undefined|
## Logs
```
.1.x are fine
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine {"cwd":"/home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/tests/functional/issues/9678/.generated/issues.9678 (provider=sqlserver)/prisma"}
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/.prisma/client
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/runtime
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine {"flags":["--enable-raw-queries","--enable-metrics","--enable-open-telemetry","--port","34423"]}
prisma:engine stdout Starting a mssql pool with 9 connections.
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Started query engine http server on http://127.0.0.1:34423
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/.prisma/client
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/runtime
prisma:engine Search for Query Engine in /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client
plusX Execution permissions of /home/ubuntu/actions-runner/_work/prisma/prisma/packages/client/query-engine-debian-openssl-1.1.x are fine
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server�� version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Commit transaction
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine stdout Begin transaction
prisma:engine stdout Performing a TLS handshake
prisma:engine stdout Trusting the server certificate without validation.
prisma:engine stdout TLS handshake successful
prisma:engine stdout Turning TLS off after a login. All traffic from here on is not encrypted.
prisma:engine stdout Database change from 'cl8j545mc0002vojx87z41auj' to 'master'
prisma:engine stdout Changed database context to 'cl8j545mc0002vojx87z41auj'.
prisma:engine stdout SQL collation changed to windows-1252
prisma:engine stdout Microsoft SQL Server version 2567962639
prisma:engine stdout Packet size change from '4096' to '4096'
prisma:engine stdout Begin transaction
prisma:engine Client Version: 0.0.0
prisma:engine Engine Version: query-engine fd66403800591e7da0f4b941056ef3fb926fffea
prisma:engine Active provider: sqlserver
prisma:engine stdout Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P2034","clientVersion":"0.0.0","meta":{}}}
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P1018","clientVersion":"0.0.0","meta":{"message":"Transaction was already closed: The server failed to resume the transaction. Desc:3400000001."}}}
prisma:engine stdout The transaction active in this session has been committed or aborted by another session.
prisma:engine stdout The server failed to resume the transaction. Desc:3400000001.
prisma:engine {"error":{"code":"P1018","clientVersion":"0.0.0","meta":{"message":"Transaction was already closed: The server failed to resume the transaction. Desc:3400000001."}}}
```
## Client Snippet
```ts
// PLEASE FILL YOUR CODE SNIPPET HERE
```
## Schema
```prisma
// PLEASE ADD YOUR SCHEMA HERE IF POSSIBLE
```
## Prisma Engine Query
```
{"X"}
```
|
process
|
the server failed to resume the transaction desc hi prisma team my prisma client just crashed this is the report versions name version node os debian openssl x prisma client query engine query engine database undefined logs x are fine plusx execution permissions of home ubuntu actions runner work prisma prisma packages client query engine debian openssl x are fine prisma engine cwd home ubuntu actions runner work prisma prisma packages client tests functional issues generated issues provider sqlserver prisma prisma engine search for query engine in home ubuntu actions runner work prisma prisma prisma client prisma engine search for query engine in home ubuntu actions runner work prisma prisma packages client runtime prisma engine search for query engine in home ubuntu actions runner work prisma prisma packages client plusx execution permissions of home ubuntu actions runner work prisma prisma packages client query engine debian openssl x are fine prisma engine flags prisma engine stdout starting a mssql pool with connections prisma engine stdout performing a tls handshake prisma engine stdout trusting the server certificate without validation prisma engine stdout tls handshake successful prisma engine stdout turning tls off after a login all traffic from here on is not encrypted prisma engine stdout database change from to master prisma engine stdout changed database context to prisma engine stdout sql collation changed to windows prisma engine stdout microsoft sql server�� version prisma engine stdout packet size change from to prisma engine stdout started query engine http server on prisma engine search for query engine in home ubuntu actions runner work prisma prisma prisma client prisma engine search for query engine in home ubuntu actions runner work prisma prisma packages client runtime prisma engine search for query engine in home ubuntu actions runner work prisma prisma packages client plusx execution permissions of home ubuntu actions runner work prisma 
prisma packages client query engine debian openssl x are fine prisma engine stdout begin transaction prisma engine stdout performing a tls handshake prisma engine stdout trusting the server certificate without validation prisma engine stdout tls handshake successful prisma engine stdout turning tls off after a login all traffic from here on is not encrypted prisma engine stdout database change from to master prisma engine stdout changed database context to prisma engine stdout sql collation changed to windows prisma engine stdout microsoft sql server�� version prisma engine stdout packet size change from to prisma engine stdout begin transaction prisma engine stdout performing a tls handshake prisma engine stdout trusting the server certificate without validation prisma engine stdout tls handshake successful prisma engine stdout turning tls off after a login all traffic from here on is not encrypted prisma engine stdout commit transaction prisma engine stdout database change from to master prisma engine stdout changed database context to prisma engine stdout sql collation changed to windows prisma engine stdout microsoft sql server�� version prisma engine stdout packet size change from to prisma engine stdout begin transaction prisma engine stdout begin transaction prisma engine stdout performing a tls handshake prisma engine stdout trusting the server certificate without validation prisma engine stdout tls handshake successful prisma engine stdout turning tls off after a login all traffic from here on is not encrypted prisma engine stdout database change from to master prisma engine stdout changed database context to prisma engine stdout sql collation changed to windows prisma engine stdout microsoft sql server�� version prisma engine stdout packet size change from to prisma engine stdout begin transaction prisma engine client version prisma engine engine version query engine prisma engine active provider sqlserver prisma engine stdout transaction process id was 
deadlocked on lock resources with another process and has been chosen as the deadlock victim rerun the transaction prisma engine stdout the transaction active in this session has been committed or aborted by another session prisma engine stdout the server failed to resume the transaction desc prisma engine error code clientversion meta prisma engine stdout the transaction active in this session has been committed or aborted by another session prisma engine stdout the server failed to resume the transaction desc prisma engine error code clientversion meta message transaction was already closed the server failed to resume the transaction desc prisma engine stdout the transaction active in this session has been committed or aborted by another session prisma engine stdout the server failed to resume the transaction desc prisma engine error code clientversion meta message transaction was already closed the server failed to resume the transaction desc client snippet ts please fill your code snippet here schema prisma please add your schema here if possible prisma engine query x
| 1
|
166,068
| 12,890,110,024
|
IssuesEvent
|
2020-07-13 15:29:43
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: tpcc/mixed-headroom/n5cpu16 failed
|
C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker
|
[(roachtest).tpcc/mixed-headroom/n5cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2054042&tab=buildLog) on [release-20.1@9c486c55b8a92cf2f1e441458b7a8c7ee2a55679](https://github.com/cockroachdb/cockroach/commits/9c486c55b8a92cf2f1e441458b7a8c7ee2a55679):
```
| 4990.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4990.0s 0 0.0 190.0 0.0 0.0 0.0 0.0 payment
| 4990.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4991.0s 0 0.0 189.4 0.0 0.0 0.0 0.0 newOrder
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4991.0s 0 0.0 189.9 0.0 0.0 0.0 0.0 payment
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4992.0s 0 0.0 189.3 0.0 0.0 0.0 0.0 newOrder
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4992.0s 0 0.0 189.9 0.0 0.0 0.0 0.0 payment
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4993.0s 0 0.0 189.3 0.0 0.0 0.0 0.0 newOrder
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4993.0s 0 0.0 189.8 0.0 0.0 0.0 0.0 payment
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
Wraps: (5) exit status 30
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2467,tpcc.go:168,tpcc.go:266,test_runner.go:757: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2455
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2463
| main.runTPCC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:168
| main.registerTPCC.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:266
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2511
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2425
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5420
| runtime.main
| /usr/local/go/src/runtime/proc.go:190
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpcc/mixed-headroom/n5cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2054042&tab=artifacts#/tpcc/mixed-headroom/n5cpu16)
Related:
- #50439 roachtest: tpcc/mixed-headroom/n5cpu16 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fmixed-headroom%2Fn5cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: tpcc/mixed-headroom/n5cpu16 failed - [(roachtest).tpcc/mixed-headroom/n5cpu16 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2054042&tab=buildLog) on [release-20.1@9c486c55b8a92cf2f1e441458b7a8c7ee2a55679](https://github.com/cockroachdb/cockroach/commits/9c486c55b8a92cf2f1e441458b7a8c7ee2a55679):
```
| 4990.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4990.0s 0 0.0 190.0 0.0 0.0 0.0 0.0 payment
| 4990.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4991.0s 0 0.0 189.4 0.0 0.0 0.0 0.0 newOrder
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4991.0s 0 0.0 189.9 0.0 0.0 0.0 0.0 payment
| 4991.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4992.0s 0 0.0 189.3 0.0 0.0 0.0 0.0 newOrder
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4992.0s 0 0.0 189.9 0.0 0.0 0.0 0.0 payment
| 4992.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
| _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 delivery
| 4993.0s 0 0.0 189.3 0.0 0.0 0.0 0.0 newOrder
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 orderStatus
| 4993.0s 0 0.0 189.8 0.0 0.0 0.0 0.0 payment
| 4993.0s 0 0.0 19.0 0.0 0.0 0.0 0.0 stockLevel
Wraps: (5) exit status 30
Error types: (1) *withstack.withStack (2) *safedetails.withSafeDetails (3) *errutil.withMessage (4) *main.withCommandDetails (5) *exec.ExitError
cluster.go:2467,tpcc.go:168,tpcc.go:266,test_runner.go:757: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
| main.(*monitor).WaitE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2455
| main.(*monitor).Wait
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2463
| main.runTPCC
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:168
| main.registerTPCC.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tpcc.go:266
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:757
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
| main.(*monitor).wait.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2511
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
| main.init
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2425
| runtime.doInit
| /usr/local/go/src/runtime/proc.go:5420
| runtime.main
| /usr/local/go/src/runtime/proc.go:190
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1373
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withMessage (3) *withstack.withStack (4) *errutil.withMessage (5) *withstack.withStack (6) *errors.errorString
```
<details><summary>More</summary><p>
Artifacts: [/tpcc/mixed-headroom/n5cpu16](https://teamcity.cockroachdb.com/viewLog.html?buildId=2054042&tab=artifacts#/tpcc/mixed-headroom/n5cpu16)
Related:
- #50439 roachtest: tpcc/mixed-headroom/n5cpu16 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Atpcc%2Fmixed-headroom%2Fn5cpu16.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_process
|
roachtest tpcc mixed headroom failed on orderstatus payment stocklevel delivery neworder orderstatus payment stocklevel delivery neworder orderstatus payment stocklevel elapsed errors ops sec inst ops sec cum ms ms ms pmax ms delivery neworder orderstatus payment stocklevel wraps exit status error types withstack withstack safedetails withsafedetails errutil withmessage main withcommanddetails exec exiterror cluster go tpcc go tpcc go test runner go monitor failure monitor task failed t fatal was called attached stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runtpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main registertpcc home agent work go src github com cockroachdb cockroach pkg cmd roachtest tpcc go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withmessage withstack withstack errutil withmessage withstack withstack errors errorstring more artifacts related roachtest tpcc mixed headroom failed powered by
| 0
|
601,110
| 18,372,262,633
|
IssuesEvent
|
2021-10-11 02:03:57
|
AY2122S1-CS2113T-F14-2/tp
|
https://api.github.com/repos/AY2122S1-CS2113T-F14-2/tp
|
closed
|
As a user, I would like to set my height, weight and net calorie goal
|
type.Story priority.High
|
so that I can start tracking my progress.
|
1.0
|
As a user, I would like to set my height, weight and net calorie goal - so that I can start tracking my progress.
|
non_process
|
as a user i would like to set my height weight and net calorie goal so that i can start tracking my progress
| 0
|
5,709
| 8,565,124,174
|
IssuesEvent
|
2018-11-09 18:51:41
|
easy-software-ufal/annotations_repos
|
https://api.github.com/repos/easy-software-ufal/annotations_repos
|
opened
|
autofac/Autofac When creating a metadata attribute that inherits from another metadata attribute, it's necessary to re-apply the [MetadataAttribute] attribute
|
C# RMA wrong processing
|
Issue: `https://github.com/autofac/Autofac/issues/442`
PR: `null`
|
1.0
|
autofac/Autofac When creating a metadata attribute that inherits from another metadata attribute, it's necessary to re-apply the [MetadataAttribute] attribute - Issue: `https://github.com/autofac/Autofac/issues/442`
PR: `null`
|
process
|
autofac autofac when creating a metadata attribute that inherits from another metadata attribute it s necessary to re apply the attribute issue pr null
| 1
|
13,075
| 15,418,870,477
|
IssuesEvent
|
2021-03-05 09:23:50
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: intracellular signal transduction involved in positive regulation of G2/M transition of mitotic cell cycle
|
New term request PomBase cell cycle and DNA processes mini-project signaling waiting for feedback
|
NTR: intracellular signal transduction involved in positive regulation of G2/M transition of mitotic cell cycle
Any intracellular signal transduction that is involved in positive regulation of G2/M transition of mitotic cell cycle
ref PMID:15917811
|
1.0
|
NTR: intracellular signal transduction involved in positive regulation of G2/M transition of mitotic cell cycle - NTR: intracellular signal transduction involved in positive regulation of G2/M transition of mitotic cell cycle
Any intracellular signal transduction that is involved in positive regulation of G2/M transition of mitotic cell cycle
ref PMID:15917811
|
process
|
ntr intracellular signal transduction involved in positive regulation of m transition of mitotic cell cycle ntr intracellular signal transduction involved in positive regulation of m transition of mitotic cell cycle any intracellular signal transduction that is involved in positive regulation of m transition of mitotic cell cycle ref pmid
| 1
|
45,182
| 2,920,674,075
|
IssuesEvent
|
2015-06-24 20:10:57
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
closed
|
UX: need a start-time for things like 'get pods'
|
area/usability component/CLI help-wanted kind/enhancement priority/P3 team/UX
|
Docker's "started: about an hour ago" UX is pretty nice. We are sorely lacking any indicator of age when we do things like "get pods"
|
1.0
|
UX: need a start-time for things like 'get pods' - Docker's "started: about an hour ago" UX is pretty nice. We are sorely lacking any indicator of age when we do things like "get pods"
|
non_process
|
ux need a start time for things like get pods docker s started about an hour ago ux is pretty nice we are sorely lacking any indicator of age when we do things like get pods
| 0
|
9,875
| 8,214,043,504
|
IssuesEvent
|
2018-09-04 21:34:44
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Pub get socket error trying to find package angular at https://pub.dartlang.org
|
area-infrastructure
|
I'm using dart 1.24.3 and I'm trying to resolve dependencies for the first time (angular package) I was trying out the example code (tour of hero) but I can't seem to get past this error message "got socket error trying to find package angular at https://pub.dartlang.org" I'm new to dart so I don't know what really is the problem... my internet connection does not have proxy settings
Operating system : windows 10
Dart SDK: 1.24.3
|
1.0
|
Pub get socket error trying to find package angular at https://pub.dartlang.org -
I'm using dart 1.24.3 and I'm trying to resolve dependencies for the first time (angular package) I was trying out the example code (tour of hero) but I can't seem to get past this error message "got socket error trying to find package angular at https://pub.dartlang.org" I'm new to dart so I don't know what really is the problem... my internet connection does not have proxy settings
Operating system : windows 10
Dart SDK: 1.24.3
|
non_process
|
pub get socket error trying to find package angular at i m using dart and i m trying to resolve dependencies for the first time angular package i was trying out the example code tour of hero but i can t seem to get past this error message got socket error trying to find package angular at i m new to dart so i don t know what really is the problem my internet connection does not have proxy settings operating system windows dart sdk
| 0
|
8,489
| 11,647,024,571
|
IssuesEvent
|
2020-03-01 12:54:13
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
[Console Error related to Vault]
|
Bug In Process Priority:Minor Server Bug
|
**Describe the bug**
It seems Vault is throwing the following error while trying to fetch the players name.
**To Reproduce**
Steps to reproduce the behavior:
1. Have Vault
2. Have Essentials X
3. Have QuickShop
4. Make Shop: See error
**Expected behavior**
Upon creating a shop, you will get the following errors;
[21:41:30 WARN]: java.lang.RuntimeException: Economy username cannot be null
**Paste link:**
https://pastebin.com/PwfyzHTs
|
1.0
|
[Console Error related to Vault] - **Describe the bug**
It seems Vault is throwing the following error while trying to fetch the players name.
**To Reproduce**
Steps to reproduce the behavior:
1. Have Vault
2. Have Essentials X
3. Have QuickShop
4. Make Shop: See error
**Expected behavior**
Upon creating a shop, you will get the following errors;
[21:41:30 WARN]: java.lang.RuntimeException: Economy username cannot be null
**Paste link:**
https://pastebin.com/PwfyzHTs
|
process
|
describe the bug it seems vault is throwing the following error while trying to fetch the players name to reproduce steps to reproduce the behavior have vault have essentials x have quickshop make shop see error expected behavior upon creating a shop you will get the following errors java lang runtimeexception economy username cannot be null paste link
| 1
|
12,917
| 15,293,358,263
|
IssuesEvent
|
2021-02-24 00:08:47
|
PHPSocialNetwork/phpfastcache
|
https://api.github.com/repos/PHPSocialNetwork/phpfastcache
|
closed
|
SQLite has duplicate folder in path
|
0_0 Bug 8.0 8.1 [-_-] In Process
|
**Configuration**
- **PhpFastCache version:** 8.0.4
- **PhpFastCache API version:** 3.0.0
- **PHP version:** >= 7.4
- **Operating system:** windows / linux
**Describe the bug**
Apparently sqlite has been using a path akin to `<config "path">/<config "securityKey">/<driver name, aka "Sqlite">/sqlite/<file name, ex. db1/indexing>` since v5 (according to git history) where `getPath()` gained the driver folder segment, but the sqlite driver still appending another "sqlite" to it
As the files driver seemed to have a similar history (it's `FILE_DIR` seemed to have been unused since v5 and finally removed with v8) I'd assume it's just an oversight from then...
**Expected behavior**
the path being `<config "path">/<config "securityKey">/<driver name, aka "Sqlite">/<file name, ex. db1/indexing>`
maybe with the driver name lowercased, but eh, details
While not strictly part of the behavior mentioned, I noticed that `securityKey` doesn't seem to be "nullable" (aka, setting it to an empty string), to avoid an additional folder from being created. (making the conditional check in `getPath()` almost, if never hit: `if ($securityKey !== '') $securityKey .= '/';`)
This seems to be caused by the `!$securityKey` check which not only considers unset or `null` variables as false, but also string "" and "0" (I actually had to look this up too because it's non-common behavior, coming from other languages)
I'm just not a fan of having multiple layers of almost empty folders, of which I know won't get any more content.
|
1.0
|
SQLite has duplicate folder in path - **Configuration**
- **PhpFastCache version:** 8.0.4
- **PhpFastCache API version:** 3.0.0
- **PHP version:** >= 7.4
- **Operating system:** windows / linux
**Describe the bug**
Apparently sqlite has been using a path akin to `<config "path">/<config "securityKey">/<driver name, aka "Sqlite">/sqlite/<file name, ex. db1/indexing>` since v5 (according to git history) where `getPath()` gained the driver folder segment, but the sqlite driver still appending another "sqlite" to it
As the files driver seemed to have a similar history (it's `FILE_DIR` seemed to have been unused since v5 and finally removed with v8) I'd assume it's just an oversight from then...
**Expected behavior**
the path being `<config "path">/<config "securityKey">/<driver name, aka "Sqlite">/<file name, ex. db1/indexing>`
maybe with the driver name lowercased, but eh, details
While not strictly part of the behavior mentioned, I noticed that `securityKey` doesn't seem to be "nullable" (aka, setting it to an empty string), to avoid an additional folder from being created. (making the conditional check in `getPath()` almost, if never hit: `if ($securityKey !== '') $securityKey .= '/';`)
This seems to be caused by the `!$securityKey` check which not only considers unset or `null` variables as false, but also string "" and "0" (I actually had to look this up too because it's non-common behavior, coming from other languages)
I'm just not a fan of having multiple layers of almost empty folders, of which I know won't get any more content.
|
process
|
sqlite has duplicate folder in path configuration phpfastcache version phpfastcache api version php version operating system windows linux describe the bug apparently sqlite has been using a path akin to sqlite since according to git history where getpath gained the driver folder segment but the sqlite driver still appending another sqlite to it as the files driver seemed to have a similar history it s file dir seemed to have been unused since and finally removed with i d assume it s just an oversight from then expected behavior the path being maybe with the driver name lowercased but eh details while not strictly part of the behavior mentioned i noticed that securitykey doesn t seem to be nullable aka setting it to an empty string to avoid an additional folder from being created making the conditional check in getpath almost if never hit if securitykey securitykey this seems to be caused by the securitykey check which not only considers unset or null variables as false but also string and i actually had to look this up too because it s non common behavior coming from other languages i m just not a fan of having multiple layers of almost empty folders of which i know won t get any more content
| 1
|
15,189
| 9,515,016,966
|
IssuesEvent
|
2019-04-26 03:22:30
|
zviaz/vortex
|
https://api.github.com/repos/zviaz/vortex
|
opened
|
account activation
|
enhancement feature security
|
require users to validate their email via an activation link before they can login
|
True
|
account activation - require users to validate their email via an activation link before they can login
|
non_process
|
account activation require users to validate their email via an activation link before they can login
| 0
|
58,838
| 6,621,494,156
|
IssuesEvent
|
2017-09-21 19:24:22
|
Andreas-Schoenefeldt/vt.at-drupal
|
https://api.github.com/repos/Andreas-Schoenefeldt/vt.at-drupal
|
closed
|
Capitalize input
|
prio2 ready for test
|
> Beim Anmeldeformular: Bitte fixe Voreinstellung beim Land – Österreich. Und dass alle Felder mit Großbuchstaben beginnend ausgefüllt werden müssen. Ich muss das sehr häufig händisch korrigieren. Manche füllen es auch alles in Großbuchstaben aus – das ist auch sehr unpraktisch.
- done as a js-widget, which will bring the input into the correct capitalized form.
|
1.0
|
Capitalize input - > Beim Anmeldeformular: Bitte fixe Voreinstellung beim Land – Österreich. Und dass alle Felder mit Großbuchstaben beginnend ausgefüllt werden müssen. Ich muss das sehr häufig händisch korrigieren. Manche füllen es auch alles in Großbuchstaben aus – das ist auch sehr unpraktisch.
- done as a js-widget, which will bring the input into the correct capitalized form.
|
non_process
|
capitalize input beim anmeldeformular bitte fixe voreinstellung beim land – österreich und dass alle felder mit großbuchstaben beginnend ausgefüllt werden müssen ich muss das sehr häufig händisch korrigieren manche füllen es auch alles in großbuchstaben aus – das ist auch sehr unpraktisch done as a js widget which will bring the input into the correct capitalized form
| 0
|
12,387
| 14,907,061,252
|
IssuesEvent
|
2021-01-22 02:11:06
|
nion-software/nionswift
|
https://api.github.com/repos/nion-software/nionswift
|
opened
|
Add a dialog to generate data items
|
f - IO f - computations f - organization f - processing f - user-interface feature type - enhancement
|
Menu items to generate basic images such as ramp, constants, radial, gaussian, movie, EELS spectra, SI, etc.
Dialog should be extensible.
Initial thoughts are that user should be able to specify sequence, collection dimensions, and datum dimensions. Then they should be able to select from a number of options for filling the datum dimension. Packages should be able to provide extensions to the dialog (mainly in data filling, e.g. EELS data). Some filling options may be stackable, e.g. adding Poisson noise. Also some filling options may be functions of sequence or collection indexes.
One additional idea is that the generator could be a computation, meaning it could be retroactively edited using the computation editor.
|
1.0
|
Add a dialog to generate data items - Menu items to generate basic images such as ramp, constants, radial, gaussian, movie, EELS spectra, SI, etc.
Dialog should be extensible.
Initial thoughts are that user should be able to specify sequence, collection dimensions, and datum dimensions. Then they should be able to select from a number of options for filling the datum dimension. Packages should be able to provide extensions to the dialog (mainly in data filling, e.g. EELS data). Some filling options may be stackable, e.g. adding Poisson noise. Also some filling options may be functions of sequence or collection indexes.
One additional idea is that the generator could be a computation, meaning it could be retroactively edited using the computation editor.
|
process
|
add a dialog to generate data items menu items to generate basic images such as ramp constants radial gaussian movie eels spectra si etc dialog should be extensible initial thoughts are that user should be able to specify sequence collection dimensions and datum dimensions then they should be able to select from a number of options for filling the datum dimension packages should be able to provide extensions to the dialog mainly in data filling e g eels data some filling options may be stackable e g adding poisson noise also some filling options may be functions of sequence or collection indexes one additional idea is that the generator could be a computation meaning it could be retroactively edited using the computation editor
| 1
|
6,068
| 8,902,733,725
|
IssuesEvent
|
2019-01-17 08:33:52
|
Juris-M/citeproc-js
|
https://api.github.com/repos/Juris-M/citeproc-js
|
closed
|
Treat "container-title-short" the same as "journalAbbreviation"
|
fix in process
|
Hi Frank, @zuphilip ran into the following issue (https://github.com/citation-style-language/styles/pull/3618#discussion_r208234949):
```xml
<bibliography>
<layout>
<group delimiter="/">
<text variable="container-title-short"/>
<text variable="container-title" form="short"/>
<text variable="container-title"/>
</group>
</layout>
</bibliography>
```
produces
> Comp. Polit. Stud./Comp. Polit. Stud./Comparative Political Studies
> TWO/The World is Open/The World is Open
The first line is a journal article, with
```
"container-title": "Comparative Political Studies",
"journalAbbreviation": "Comp. Polit. Stud."
```
So far, so good. Both `variable="container-title-short"` and `"container-title" form="short"` produce the abbreviated container-title. (see https://github.com/citation-style-language/csl-editor/blob/1319eeaebb5ca13f001d7773f8cc30a923718466/src/exampleData.js#L56 for CSL JSON)
However, for the second line, we have a chapter with:
```
"container-title": "The World is Open",
"container-title-short": "TWO"
```
Here, `"container-title" form="short"` gives the full title. Shouldn't it give the contents of "container-title-short"?
I was able to reproduce this in Zotero 5.0.54.
|
1.0
|
Treat "container-title-short" the same as "journalAbbreviation" - Hi Frank, @zuphilip ran into the following issue (https://github.com/citation-style-language/styles/pull/3618#discussion_r208234949):
```xml
<bibliography>
<layout>
<group delimiter="/">
<text variable="container-title-short"/>
<text variable="container-title" form="short"/>
<text variable="container-title"/>
</group>
</layout>
</bibliography>
```
produces
> Comp. Polit. Stud./Comp. Polit. Stud./Comparative Political Studies
> TWO/The World is Open/The World is Open
The first line is a journal article, with
```
"container-title": "Comparative Political Studies",
"journalAbbreviation": "Comp. Polit. Stud."
```
So far, so good. Both `variable="container-title-short"` and `"container-title" form="short"` produce the abbreviated container-title. (see https://github.com/citation-style-language/csl-editor/blob/1319eeaebb5ca13f001d7773f8cc30a923718466/src/exampleData.js#L56 for CSL JSON)
However, for the second line, we have a chapter with:
```
"container-title": "The World is Open",
"container-title-short": "TWO"
```
Here, `"container-title" form="short"` gives the full title. Shouldn't it give the contents of "container-title-short"?
I was able to reproduce this in Zotero 5.0.54.
|
process
|
treat container title short the same as journalabbreviation hi frank zuphilip ran into the following issue xml produces comp polit stud comp polit stud comparative political studies two the world is open the world is open the first line is a journal article with container title comparative political studies journalabbreviation comp polit stud so far so good both variable container title short and container title form short produce the abbreviated container title see for csl json however for the second line we have a chapter with container title the world is open container title short two here container title form short gives the full title shouldn t it give the contents of container title short i was able to reproduce this in zotero
| 1
|
19,077
| 25,117,151,181
|
IssuesEvent
|
2022-11-09 03:40:54
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Release checklist 0.67
|
enhancement process
|
### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on
relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open
for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.67.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deploy to Kubernetes
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
1.0
|
Release checklist 0.67 - ### Problem
We need a checklist to verify the release is rolled out successfully.
### Solution
## Preparation
- [x] Milestone field populated on
relevant [issues](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aclosed+no%3Amilestone+sort%3Aupdated-desc)
- [x] Nothing open
for [milestone](https://github.com/hashgraph/hedera-mirror-node/issues?q=is%3Aopen+sort%3Aupdated-desc+milestone%3A0.67.0)
- [x] GitHub checks for branch are passing
- [x] Automated Kubernetes deployment successful
- [x] Tag release
- [x] Upload release artifacts
- [x] Publish release
## Performance
- [x] Deploy to Kubernetes
- [x] Deploy to VM
- [x] gRPC API performance tests
- [x] Importer performance tests
- [x] REST API performance tests
## Previewnet
- [x] Deploy to Kubernetes
- [x] Deploy to VM
## Staging
- [x] Deploy to Kubernetes
## Testnet
- [x] Deploy to VM
## Mainnet
- [x] Deploy to Kubernetes EU
- [x] Deploy to Kubernetes NA
- [x] Deploy to VM
- [x] Deploy to ETL
### Alternatives
_No response_
|
process
|
release checklist problem we need a checklist to verify the release is rolled out successfully solution preparation milestone field populated on relevant nothing open for github checks for branch are passing automated kubernetes deployment successful tag release upload release artifacts publish release performance deploy to kubernetes deploy to vm grpc api performance tests importer performance tests rest api performance tests previewnet deploy to kubernetes deploy to vm staging deploy to kubernetes testnet deploy to vm mainnet deploy to kubernetes eu deploy to kubernetes na deploy to vm deploy to etl alternatives no response
| 1
|
17,836
| 23,776,437,683
|
IssuesEvent
|
2022-09-01 21:28:40
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Example for "split" not working
|
duplicate devops/prod product-feedback cba Pri1 devops-cicd-process/tech
|
The example described in the [split section](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#split) is not working. I copied the example into a fresh pipeline and encounter the following error: `Unrecognized value: 'split'. Located at position 1 within expression: split(variables.environments, ','). For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996`.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Example for "split" not working -
The example described in the [split section](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#split) is not working. I copied the example into a fresh pipeline and encounter the following error: `Unrecognized value: 'split'. Located at position 1 within expression: split(variables.environments, ','). For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996`.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6
* Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18
* Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops)
* Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/expressions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
example for split not working the example described in the is not working i copied the example into a fresh pipeline and encounter the following error unrecognized value split located at position within expression split variables environments for more help refer to document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
760,163
| 26,631,683,980
|
IssuesEvent
|
2023-01-24 18:21:24
|
hlxsites/merative
|
https://api.github.com/repos/hlxsites/merative
|
closed
|
Sachin's Design QA - Tablet
|
Top Priority Phase 1B QA
|
**Tablet: 768**
- Top navigation bar – Padding on left and right is wrong. It’s 96 right now, should be 36.
- Merative logo size is different.
- Extra space of 32px is added above the title merative blog. It should be 64 only.
- The title of the merative blog and bodycopy is on sand background.
- The spacing between the bodycopy and the edge of the sand background is 48px.
- From the sand background the filters have spacing of 32px.
- The spacing between the filters and the feature article should be 32px.
- Article image size should be 336x294.
- The spacing between the bodycopy and the author in the feature article should be 48px.
- The title font of feature article should be 18px Alliance no.1 light.
- The author font should also be Alliance no.1 light 12 px.
- Extra padding of 16px is seen on left side. The padding should 36px from both sides.
- The spacing between the card article title and its bodycopy should be 8px. It’s 16px now.
- The space between bodycopy and the author's name should be 24px.
- The author font should be Alliance no.1 light not regular.
- The spacing below the CTA should be 64px and not 160px.
- Filter font should be Mobile/heading/h4 - 18px medium and not light.
- Padding of the entire filter from left should be 36px and not 16px.
- The cross icon should be of 20px size.
- The cross icon spacing from the right should be 24px.
- The spacing between filter text and Merative blog container should be 32px and then an 8px space is added above merative blog text.
- The spacing below merative blog text and divider rule should be 24px.
- The spacing above categories (and divider rule) should be 16px and below should be 24px.
- The spacing between all the categories should be 16px.
- Same follows as above (Point 7 and 8) for the audience and topic categories.
**Tablet: Article 768**
- Top nav merative logo and hamburger padding is incorrect.
- Breadcrumb padding on left is wrong. It should be 36px as our grid padding.
- The article hierarchy is wrong.
- After breadcrumb Article title should come > Tags > Article image > Author name > Share links > Reading progress.
- Article image size should be 696x468.
- No padding seen on right hand side.
- Spacing above related article should be 64px.
- Spacing below related article should be 48px.
- The color of card should be white.
- The image size inside the card should be 336x224.
- The spacing of title and bodycopy inside the card should be 8px.
- The spacing from bodycopy to author name should be 24px.
- Authour font should be light.
- Leadgen grid is wrong.
- Ready for consultation font should be Mobile/heading/h2 32px sorce serif pro light.
- Bodycopy should be 16px alliance no.1 light
- Spacing from bodycopy to CTA should be 32px.
|
1.0
|
Sachin's Design QA - Tablet - **Tablet: 768**
- Top navigation bar – Padding on left and right is wrong. It’s 96 right now, should be 36.
- Merative logo size is different.
- Extra space of 32px is added above the title merative blog. It should be 64 only.
- The title of the merative blog and bodycopy is on sand background.
- The spacing between the bodycopy and the edge of the sand background is 48px.
- From the sand background the filters have spacing of 32px.
- The spacing between the filters and the feature article should be 32px.
- Article image size should be 336x294.
- The spacing between the bodycopy and the author in the feature article should be 48px.
- The title font of feature article should be 18px Alliance no.1 light.
- The author font should also be Alliance no.1 light 12 px.
- Extra padding of 16px is seen on left side. The padding should 36px from both sides.
- The spacing between the card article title and its bodycopy should be 8px. It’s 16px now.
- The space between bodycopy and the author's name should be 24px.
- The author font should be Alliance no.1 light not regular.
- The spacing below the CTA should be 64px and not 160px.
- Filter font should be Mobile/heading/h4 - 18px medium and not light.
- Padding of the entire filter from left should be 36px and not 16px.
- The cross icon should be of 20px size.
- The cross icon spacing from the right should be 24px.
- The spacing between filter text and Merative blog container should be 32px and then an 8px space is added above merative blog text.
- The spacing below merative blog text and divider rule should be 24px.
- The spacing above categories (and divider rule) should be 16px and below should be 24px.
- The spacing between all the categories should be 16px.
- Same follows as above (Point 7 and 8) for the audience and topic categories.
**Tablet: Article 768**
- Top nav merative logo and hamburger padding is incorrect.
- Breadcrumb padding on left is wrong. It should be 36px as our grid padding.
- The article hierarchy is wrong.
- After breadcrumb Article title should come > Tags > Article image > Author name > Share links > Reading progress.
- Article image size should be 696x468.
- No padding seen on right hand side.
- Spacing above related article should be 64px.
- Spacing below related article should be 48px.
- The color of card should be white.
- The image size inside the card should be 336x224.
- The spacing of title and bodycopy inside the card should be 8px.
- The spacing from bodycopy to author name should be 24px.
- Authour font should be light.
- Leadgen grid is wrong.
- Ready for consultation font should be Mobile/heading/h2 32px sorce serif pro light.
- Bodycopy should be 16px alliance no.1 light
- Spacing from bodycopy to CTA should be 32px.
|
non_process
|
sachin s design qa tablet tablet top navigation bar – padding on left and right is wrong it’s right now should be merative logo size is different extra space of is added above the title merative blog it should be only the title of the merative blog and bodycopy is on sand background the spacing between the bodycopy and the edge of the sand background is from the sand background the filters have spacing of the spacing between the filters and the feature article should be article image size should be the spacing between the bodycopy and the author in the feature article should be the title font of feature article should be alliance no light the author font should also be alliance no light px extra padding of is seen on left side the padding should from both sides the spacing between the card article title and its bodycopy should be it’s now the space between bodycopy and the author s name should be the author font should be alliance no light not regular the spacing below the cta should be and not filter font should be mobile heading medium and not light padding of the entire filter from left should be and not the cross icon should be of size the cross icon spacing from the right should be the spacing between filter text and merative blog container should be and then an space is added above merative blog text the spacing below merative blog text and divider rule should be the spacing above categories and divider rule should be and below should be the spacing between all the categories should be same follows as above point and for the audience and topic categories tablet article top nav merative logo and hamburger padding is incorrect breadcrumb padding on left is wrong it should be as our grid padding the article hierarchy is wrong after breadcrumb article title should come tags article image author name share links reading progress article image size should be no padding seen on right hand side spacing above related article should be spacing below related article 
should be the color of card should be white the image size inside the card should be the spacing of title and bodycopy inside the card should be the spacing from bodycopy to author name should be authour font should be light leadgen grid is wrong ready for consultation font should be mobile heading sorce serif pro light bodycopy should be alliance no light spacing from bodycopy to cta should be
| 0
|
543,660
| 15,884,220,197
|
IssuesEvent
|
2021-04-09 18:33:21
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Service Provider - Mandatory claims are not working as expected
|
Affected/5.4.0 Priority/Highest bug
|
**Operating System:** Ubuntu 16.04
**Java Version:** Oracle JDK 1.8.0_144
**Packs:** IS 5.4.0 GA
**Setup:** Standalone Pack
**Steps to reproduce:**
1. Implement scenario 17 [1] in this blog [2]. ( Simply this contains SP with OAuth 2.0 inbound authenticator)
2. Added few mandatory claims using Local Claim Dialect.

3. Then tried to log in to SPA through the user.
**Expected Result**
1. Initial login of the user, it should require all mandatory claims.
**Actual Result**
- Only asking about two claims as follows:

As there are more mandatory claims, it should request from the user.
**More Issues identified with claims**
- When we test out Travelocity app for other scenarios, it can be seen that it may ask that all mandatory claims not having previous sessions. But with this scenario, it may not gonna ask for mandatory claims whether there is no any session.
- With this solution IS gonna ask for different claims time to time. There is no any exact pattern. Sometime it may ask while sometimes not.
Please verify the functionality of mandatory local claim used in this scenario. Discussion on this can be found in mail thread "[WSO2 IS] Clarification on Claim Configuration in Service Provider".
[1] https://github.com/Dilshani/identity-test-integration/tree/master/solution17
[2] https://medium.facilelogin.com/thirty-solution-patterns-with-the-wso2-identity-server-16f9fd0c0389
|
1.0
|
Service Provider - Mandatory claims are not working as expected - **Operating System:** Ubuntu 16.04
**Java Version:** Oracle JDK 1.8.0_144
**Packs:** IS 5.4.0 GA
**Setup:** Standalone Pack
**Steps to reproduce:**
1. Implement scenario 17 [1] in this blog [2]. ( Simply this contains SP with OAuth 2.0 inbound authenticator)
2. Added few mandatory claims using Local Claim Dialect.

3. Then tried to log in to SPA through the user.
**Expected Result**
1. Initial login of the user, it should require all mandatory claims.
**Actual Result**
- Only asking about two claims as follows:

As there are more mandatory claims, it should request from the user.
**More Issues identified with claims**
- When we test out Travelocity app for other scenarios, it can be seen that it may ask that all mandatory claims not having previous sessions. But with this scenario, it may not gonna ask for mandatory claims whether there is no any session.
- With this solution IS gonna ask for different claims time to time. There is no any exact pattern. Sometime it may ask while sometimes not.
Please verify the functionality of mandatory local claim used in this scenario. Discussion on this can be found in mail thread "[WSO2 IS] Clarification on Claim Configuration in Service Provider".
[1] https://github.com/Dilshani/identity-test-integration/tree/master/solution17
[2] https://medium.facilelogin.com/thirty-solution-patterns-with-the-wso2-identity-server-16f9fd0c0389
|
non_process
|
service provider mandatory claims are not working as expected operating system ubuntu java version oracle jdk packs is ga setup standalone pack steps to reproduce implement scenario in this blog simply this contains sp with oauth inbound authenticator added few mandatory claims using local claim dialect then tried to log in to spa through the user expected result initial login of the user it should require all mandatory claims actual result only asking about two claims as follows as there are more mandatory claims it should request from the user more issues identified with claims when we test out travelocity app for other scenarios it can be seen that it may ask that all mandatory claims not having previous sessions but with this scenario it may not gonna ask for mandatory claims whether there is no any session with this solution is gonna ask for different claims time to time there is no any exact pattern sometime it may ask while sometimes not please verify the functionality of mandatory local claim used in this scenario discussion on this can be found in mail thread clarification on claim configuration in service provider
| 0
|
99,142
| 4,048,325,481
|
IssuesEvent
|
2016-05-23 09:51:54
|
markitus18/Project-II
|
https://api.github.com/repos/markitus18/Project-II
|
closed
|
Flying Units should be able to go to non-walkable tiles
|
enhancement low priority
|
It's kind of frustrating when using a flying unit not being able to go to a non-walkable tile. If enabling that would be too much work at least a "You can't go there" label would be okay
|
1.0
|
Flying Units should be able to go to non-walkable tiles - It's kind of frustrating when using a flying unit not being able to go to a non-walkable tile. If enabling that would be too much work at least a "You can't go there" label would be okay
|
non_process
|
flying units should be able to go to non walkable tiles it s kind of frustrating when using a flying unit not being able to go to a non walkable tile if enabling that would be too much work at least a you can t go there label would be okay
| 0
|