**Dataset schema**

| column | dtype | observed values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 4 – 112 |
| repo_url | string | length 33 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 1.02k |
| labels | string | length 4 – 1.54k |
| body | string | length 1 – 262k |
| index | string | 17 classes |
| text_combine | string | length 95 – 262k |
| label | string | 2 classes |
| text | string | length 96 – 252k |
| binary_label | int64 | 0 – 1 |
**Record 350,688 (id 31,931,944,540)**

- **type:** IssuesEvent
- **created_at:** 2023-09-19 08:02:28
- **repo:** unifyai/ivy
- **repo_url:** https://api.github.com/repos/unifyai/ivy
- **action:** reopened
- **title:** Fix math.test_tensorflow_reduce_logsumexp
- **labels:** TensorFlow Frontend Sub Task Failing Test

**body:**
| backend | CI status |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6194295732/job/16817071183"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6194295732/job/16817071183"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6194295732/job/16817071183"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6194295732/job/16817071183"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6195778261"><img src=https://img.shields.io/badge/-failure-red></a>
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 133,868 (id 10,865,389,282)**

- **type:** IssuesEvent
- **created_at:** 2019-11-14 18:53:38
- **repo:** rancher/rke
- **repo_url:** https://api.github.com/repos/rancher/rke
- **action:** closed
- **title:** full-cluster-state configmap is not updated on certificate rotation
- **labels:** [zube]: To Test kind/bug team/ca

**body:**
**RKE version:**
v0.3.2
**Steps to Reproduce:**
Create a cluster with 3 nodes each with all roles, run `rke up`, and run `rke cert rotate`.
**Results:**
The cluster state which is stored in the local `cluster.rkestate` file and the one stored in the configmap `full-cluster-state` are not identical.
Check md5sum of local `cluster.rkestate` (tested on MacOS):
```
echo "$(cat cluster.rkestate)" | md5
```
Check md5sum of configmap `full-cluster-state` (tested on controlplane node):
```
docker run --rm --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro --entrypoint bash $(docker inspect $(docker images -q --filter=label=org.label-schema.vcs-url=https://github.com/rancher/hyperkube.git) --format='{{index .RepoTags 0}}' | tail -1) -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml -n kube-system get configmap full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .' | md5sum
```
After running `rke up` the checksums match again, but the configmap update should happen as part of the certificate rotation itself.
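The manual md5 comparison above can be sketched as a small script; it assumes `cluster.rkestate` and a dump of the configmap have already been saved to local files (the paths are hypothetical):

```python
import hashlib

def md5_of(path):
    """Hex MD5 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def states_match(local_state_path, configmap_dump_path):
    """True when the local state file and the configmap dump are byte-identical."""
    return md5_of(local_state_path) == md5_of(configmap_dump_path)
```

Note that this compares raw bytes, so both dumps must use the same formatting (the original report pipes the configmap through `jq -r .` for exactly that reason).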
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 85,990 (id 8,015,708,744)**

- **type:** IssuesEvent
- **created_at:** 2018-07-25 10:55:23
- **repo:** telstra/open-kilda
- **repo_url:** https://api.github.com/repos/telstra/open-kilda
- **action:** opened
- **title:** [atdd-staging] Find a way to implement 'fail-fast' behavior for atdd-staging scenarios
- **labels:** area/testing

**body:**
We need to implement 'fail-fast' behavior (test execution stops after the first failure) in the atdd-staging module. In most cases, when one test fails, something is wrong with the test environment, so there is little point in running the remaining scenarios given their low chance of success.
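The requested fail-fast behavior can be sketched as a runner that aborts after the first failure (a Python illustration of the pattern, not the atdd-staging module's actual Java/Cucumber machinery):

```python
def run_fail_fast(tests):
    """Run (name, fn) pairs in order; stop at the first failure (fail-fast)."""
    results = []
    for name, fn in tests:
        try:
            fn()
            results.append((name, "passed"))
        except AssertionError:
            results.append((name, "failed"))
            break  # fail-fast: remaining scenarios are skipped
    return results
```

In practice most test runners already expose this as a flag (e.g. pytest's `-x`); the point is simply that execution stops at the first failure rather than continuing through a broken environment.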
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 266,989 (id 23,271,758,416)**

- **type:** IssuesEvent
- **created_at:** 2022-08-05 00:25:48
- **repo:** kubernetes/kubernetes
- **repo_url:** https://api.github.com/repos/kubernetes/kubernetes
- **action:** closed
- **title:** DNS ConfigMap tests sometimes panic in IPv6 envs
- **labels:** sig/network kind/flake sig/testing needs-triage

**body:**
### Which jobs are flaking?
We've seen the line at [test/e2e/network/dns_configmap.go:301](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/dns_configmap.go#L301) panic from time to time. The panic is caused by an index out of bounds; the array length check on that line appears to be incorrect.
We see this flakiness when running in ipv6 or dualstack-ipv6-primary environments.
We do not see this flakiness in ipv4 environments.
### Which tests are flaking?
[test/e2e/network/dns_configmap.go:301](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/dns_configmap.go#L301)
### Since when has it been flaking?
We've seen these flakes over several months.
### Testgrid link
_No response_
### Reason for failure (if possible)
An incorrect array length check on [test/e2e/network/dns_configmap.go:301](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/dns_configmap.go#L301); the rightmost `array[1]` access is out of bounds.
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig
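The fix the report points at is a guard before indexing. A minimal sketch of the pattern (illustrative only, not the actual Go code in dns_configmap.go):

```python
def second_field(parts):
    """Return parts[1] only when it exists, avoiding an index-out-of-bounds."""
    if len(parts) >= 2:
        return parts[1]
    return None
```

The bug class is a length check that tests for the wrong bound (e.g. `>= 1` when the code then reads index 1), which passes in some environments and panics in others depending on how many fields the input splits into.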
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 111,939 (id 14,173,251,690)**

- **type:** IssuesEvent
- **created_at:** 2020-11-12 18:04:02
- **repo:** flutter/flutter
- **repo_url:** https://api.github.com/repos/flutter/flutter
- **action:** reopened
- **title:** ModalBottomSheet with isScrollControlled doesn't respect SafeArea
- **labels:** a: layout f: material design found in release: 1.20 framework has reproducible steps

**body:**
## Steps to Reproduce
1. Create a modal bottom sheet with Scaffold as its child
2. Wrap the Scaffold into a SafeArea widget
3. Add a Text widget in Scaffold's body
4. Run the application on a device with a notch
5. Open the modal bottom sheet and notice that the text is under the status bar
## Code example
```dart
import 'package:flutter/material.dart';

class ModalBottomSheet extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(),
      body: Container(),
      floatingActionButton: FloatingActionButton(
        onPressed: () {
          showModalBottomSheet(
              context: context,
              isScrollControlled: true,
              builder: (context) {
                return SafeArea(
                  child: Scaffold(
                    backgroundColor: Colors.black45,
                    body: Text(
                      'Text in safe area',
                      style: TextStyle(backgroundColor: Colors.red),
                    ),
                  ),
                );
              });
        },
        child: Icon(Icons.add),
      ),
    );
  }
}
```
## Screenshots
<img width="390" alt="Screen Shot 2019-08-25 at 12 44 39 PM" src="https://user-images.githubusercontent.com/33702668/63648275-225a1080-c736-11e9-9717-7b11a549a633.png">
## Logs
```
[✓] Flutter (Channel stable, v1.7.8+hotfix.4, on Mac OS X 10.14.6 18G87, locale en-US)
• Flutter version 1.7.8+hotfix.4 at /Users/user/flutter
• Framework revision 20e59316b8 (5 weeks ago), 2019-07-18 20:04:33 -0700
• Engine revision fee001c93f
• Dart version 2.4.0
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/user/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-28, build-tools 28.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 10.3, Build version 10G8
• CocoaPods version 1.7.5
[✗] iOS tools - develop for iOS devices
✗ libimobiledevice and ideviceinstaller are not installed. To install with Brew, run:
brew update
brew install --HEAD usbmuxd
brew link usbmuxd
brew install --HEAD libimobiledevice
brew install ideviceinstaller
✗ ios-deploy not installed. To install:
brew install ios-deploy
! Brew can be used to install tools for iOS device development.
Download brew at https://brew.sh/.
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 37.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.37.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.3.0
[✓] Connected device (2 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
• iPhone Xs Max • 9C9C7B29-18A5-4D3A-8F53-CE6EB58C5411 • ios • com.apple.CoreSimulator.SimRuntime.iOS-12-4 (simulator)
! Doctor found issues in 1 category.
```
- **index:** 1.0
- **label:** non_test
- **binary_label:** 0
**Record 163,047 (id 12,702,228,873)**

- **type:** IssuesEvent
- **created_at:** 2020-06-22 19:45:59
- **repo:** nicolargo/glances
- **repo_url:** https://api.github.com/repos/nicolargo/glances
- **action:** closed
- **title:** Bug: [fs] plugin needs to reflect user disk space usage
- **labels:** needs test

**body:**
#### Bug description
[python-pystache](https://aur.archlinux.org/packages/python-pystache/) is required for the mustache templating.
When I start glances with a 2-minute refresh time, I want to execute an action that alerts me when my disk usage is above 90%.
If I use a single "%" character, everything breaks with the error **AttributeError: 'NoneType' object has no attribute 'get_stats_display'**, so I need to escape it as "%%".
The second issue: I want one alert email, but I am getting two alert emails about disk usage at the same time.
Looking at [`actions.py`](https://github.com/nicolargo/glances/blob/develop/glances/actions.py), I see this comment:
> Goal: avoid to execute the same command twice
Technically, the action is being executed twice, since I am getting two identical alert emails at the same time.
When I execute the echo command directly from the command line outside of glances and its conf, I get one email.
So there is definitely an issue within `glances actions`.
#### Versions
* Glances & psutil (glances -V): Glances v3.1.4.1 with PsUtil v5.7.0
* Operating System (lsb_release -a): LSB Version: 1.4, Arch Linux rolling
Packages: [glances](https://www.archlinux.org/packages/community/any/glances/) & [python-pystache](https://aur.archlinux.org/packages/python-pystache/)
#### Conf
```ini
[fs]
disable=False
# Define the list of hidden file system (comma-separated regexp)
hide=/boot.*,/snap.*
# Define filesystem space thresholds in %
# Default values if not defined: 50/70/90
# It is also possible to define per mount point value
# Example: /_careful=40
careful=50
warning=70
critical=90
critical_action_repeat=echo -e "Used filesystem disk space for {{device_name}} is at {{percent}}%%.\nPlease cleanup the filesystem to clear the alert.\nScaleway server: $(uname -rn)" | mail -s "CRITICAL: disk usage above 90%%" -r postmaster@example.com hlfh@example.com
# Allow additional file system types (comma-separated FS type)
#allow=zfs
```
#### systemd `/etc/systemd/system/glances.service`
```systemd
[Unit]
Description=Glances Server
[Service]
ExecStart=/usr/bin/glances --quiet -t 120
[Install]
WantedBy=multi-user.target
```
#### Logs
What I am getting in debug mode:
```log
2020-05-12 15:58:35,928 -- INFO -- Start Glances 3.1.4.1
2020-05-12 15:58:35,928 -- INFO -- CPython 3.8.2 and psutil 5.7.0 detected
```
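The `%%` escaping requirement matches the behavior of Python's `configparser`, which Glances appears to use for its config file: with the default `BasicInterpolation`, a literal `%` must be written as `%%`. A minimal sketch:

```python
import configparser

cp = configparser.ConfigParser()  # BasicInterpolation is the default
cp.read_string("[fs]\ncritical_action = echo disk usage above 90%%\n")
# interpolation turns the escaped %% back into a single literal %
print(cp["fs"]["critical_action"])
```

So the escaping is expected configparser behavior rather than a Glances bug; the duplicate-email symptom is the separate issue here.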
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 343,589 (id 24,775,544,886)**

- **type:** IssuesEvent
- **created_at:** 2022-10-23 17:34:18
- **repo:** Azure/NoOpsAccelerator
- **repo_url:** https://api.github.com/repos/Azure/NoOpsAccelerator
- **action:** closed
- **title:** Walkthrough for AKS Workload doesn't work
- **labels:** bug documentation good first issue

**body:**
**Describe the bug**
Following the walkthrough below doesn't result in a successful deployment, as there are prerequisites that aren't called out in the README.md, such as the need for an AKS Service Principal.
**To Reproduce**
Follow the steps in [Deploy the Workload](https://github.com/Azure/NoOpsAccelerator/tree/main/src/bicep/workloads/wl-aks-spoke#deploy-the-workload) and watch for the red stuff.
**Expected behavior**
No red stuff. Complete deployment of a working AKS cluster in the landing zone.
**Screenshots**
n/a
**Desktop (please complete the following information):**
n/a
**Smartphone (please complete the following information if applicable):**
n.a
**Software versions used:**
n/a
- **index:** 1.0
- **label:** non_test
- **binary_label:** 0
**Record 220,347 (id 17,190,015,686)**

- **type:** IssuesEvent
- **created_at:** 2021-07-16 09:34:34
- **repo:** Vividh25/Sign-Up-Flow
- **repo_url:** https://api.github.com/repos/Vividh25/Sign-Up-Flow
- **action:** closed
- **title:** Sign Up Page - Submit Button - Tests
- **labels:** Testing

**body:**
- [x] Test to check whether the button gets disabled if the user has not filled the fields properly.
- [x] Test to check if the button is disabled for 2-3 seconds after pressing.
- [x] Test to check if the button leads to the OTP page.
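The first checklist item, disabling the button until every field is filled in properly, reduces to a predicate like the following sketch (field names are hypothetical; the real project would validate formats, not just non-emptiness):

```python
def submit_enabled(fields):
    """Enable the Submit button only when every required field is non-blank."""
    return all(value.strip() for value in fields.values())
```

A test then asserts the predicate for a fully filled form and rejects one with a blank field.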
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 111,428 (id 11,732,360,311)**

- **type:** IssuesEvent
- **created_at:** 2020-03-11 03:23:38
- **repo:** UBC-MDS/pypuck
- **repo_url:** https://api.github.com/repos/UBC-MDS/pypuck
- **action:** closed
- **title:** Adhere to PEP-8 Styleguide
- **labels:** documentation

**body:**
Ensure all `.py` files adhere to the PEP-8 style guide using the Flake8 package.
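Flake8 would typically be run as `flake8 path/` in CI. The core of one common PEP 8 check it reports (E501, lines over 79 characters) looks roughly like this sketch:

```python
def overlong_lines(source, limit=79):
    """Return 1-based numbers of lines exceeding PEP 8's 79-character limit."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]
```

Flake8 itself bundles many such checks (pycodestyle, pyflakes, mccabe); this only illustrates the style-rule idea.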
- **index:** 1.0
- **label:** non_test
- **binary_label:** 0
**Record 223,291 (id 17,110,212,812)**

- **type:** IssuesEvent
- **created_at:** 2021-07-10 05:57:49
- **repo:** svelte-jp/svelte-site-jp
- **repo_url:** https://api.github.com/repos/svelte-jp/svelte-site-jp
- **action:** closed
- **title:** [Docs] Translation of 04-compile-time varsReport
- **labels:** documentation translation

**body:**
## Target document
- Target file
  - [the added portion of content/docs/ja/04-compile-time.ja.md](https://github.com/svelte-jp/svelte-site-jp/pull/460/files#diff-e3093898533feb897c17352992e386e61d5c7c18fd7a53376e1562aff1ee4f05)
  - Note: only line 70 needs translation
## I want to translate it / I'd like someone to translate it
- I'd like someone to translate it
## Notes
Nothing in particular
- **index:** 1.0
- **label:** non_test
- **binary_label:** 0
**Record 119,716 (id 10,062,046,453)**

- **type:** IssuesEvent
- **created_at:** 2019-07-22 23:25:25
- **repo:** tgstation/tgstation
- **repo_url:** https://api.github.com/repos/tgstation/tgstation
- **action:** closed
- **title:** Golems can put the bloodchiller on their belt slot and can't get it out
- **labels:** Bug Tested/Reproduced

**body:**
## Reproduction:
Be a golem, put the bloodchiller on your belt slot, never be able to get it back.
(The bloodchiller is that one xenobio crossbreed)
- **index:** 1.0
- **label:** test
- **binary_label:** 1
**Record 9,185 (id 8,554,065,337)**

- **type:** IssuesEvent
- **created_at:** 2018-11-08 04:05:08
- **repo:** MicrosoftDocs/azure-docs
- **repo_url:** https://api.github.com/repos/MicrosoftDocs/azure-docs
- **action:** closed
- **title:** Number in strictFilter Value results in "no good match"
- **labels:** cognitive-services/svc cxp product-question triaged

**body:**
we sync the qna maker with an internal system. so we added a metadata syncitemid with a number as value, e.g. "name":"syncitemid", "value":"123"
after we added this metadata to all our questions in the qnamaker, we were always getting a "no good match" result from the qna REST service - with and without using a strictFilter in the query. Changing the value to "_123" fixed that and we were getting the question/answer with and without using a strictFilter. Changing it back to "123", we again get "no good match".
Does the qna maker REST service check if the value is a number or try to parse it?
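The workaround described above can be sketched as a small request-body builder. `build_generate_answer_body` is a hypothetical helper, not part of the QnA Maker SDK; it only illustrates coercing a numeric sync id to the JSON string that the strictFilter comparison appears to expect:

```python
import json

# Hypothetical helper: metadata values in the GenerateAnswer payload are
# JSON strings, so a numeric id such as 123 is sent as the string "123".
def build_generate_answer_body(question, sync_item_id):
    return {
        "question": question,
        "strictFilters": [
            {"name": "syncitemid", "value": str(sync_item_id)},
        ],
    }

body = build_generate_answer_body("example question", 123)
print(json.dumps(body["strictFilters"][0]))
# → {"name": "syncitemid", "value": "123"}
```

Sending the value as a string on both the knowledge-base side and the query side sidesteps any number parsing in the service.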
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: cb1d6a8c-954c-9ce8-6101-c19b20724c91
* Version Independent ID: 5c219f6c-260b-c17a-7f41-1bf4095aa521
* Content: [Metadata with GenerateAnswer API - QnA Maker - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure//cognitive-services/qnamaker/how-to/metadata-generateanswer-usage)
* Content Source: [articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md)
* Service: **cognitive-services**
* GitHub Login: @tulasim88
* Microsoft Alias: **tulasim88**
|
1.0
|
Number in strictFilter Value results in "no good match" - we sync the qna maker with an internal system. so we added a metadata syncitemid with a number as value, e.g. "name":"syncitemid", "value":"123"
after we added this metadata to all our questions in the qnamaker, we were always getting a "no good match" result from the qna REST service - with and without using a strictFilter in the query. Changing the value to "_123" fixed that and we were getting the question/answer with and without using a strictFilter. Changing it back to "123", we again get "no good match".
Does the qna maker REST service check if the value is a number or try to parse it?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: cb1d6a8c-954c-9ce8-6101-c19b20724c91
* Version Independent ID: 5c219f6c-260b-c17a-7f41-1bf4095aa521
* Content: [Metadata with GenerateAnswer API - QnA Maker - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure//cognitive-services/qnamaker/how-to/metadata-generateanswer-usage)
* Content Source: [articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md)
* Service: **cognitive-services**
* GitHub Login: @tulasim88
* Microsoft Alias: **tulasim88**
|
non_test
|
number in strictfilter value results in no good match we sync the qna maker with an internal system so we added a metadata syncitemid with a number as value e g name syncitemid value after we added this metadata to all our questions in the qnamaker we were getting always no good match result from the qna rest service with and without using a strictfilter in query changing the value to fixed that and we were getting the question answer with and without using a strictfilter changing it back to and we are getting no good match does the qna maker rest service check if the value is a number or try to parse it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services github login microsoft alias
| 0
|
184,979
| 14,291,794,689
|
IssuesEvent
|
2020-11-23 23:28:04
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
opened
|
Storybook Tests: only the first code excerpt is loaded in search results
|
bug team/search testing
|
In storybook tests which show search results (both for GraphQL and streaming search), only the first code excerpt is loading:

After debugging, I found out that the code excerpt thinks it's not visible. The library we are using to determine visibility is react-visibility-sensor, which looks like it's dead: https://github.com/joshwnj/react-visibility-sensor/issues/179
We should consider switching to a newer library that uses IntersectionObserver instead, such as this one: https://github.com/thebuilder/react-intersection-observer
**Note that this does not happen in the actual web app, only in storybook tests.**
|
1.0
|
Storybook Tests: only the first code excerpt is loaded in search results - In storybook tests which show search results (both for GraphQL and streaming search), only the first code excerpt is loading:

After debugging, I found out that the code excerpt thinks it's not visible. The library we are using to determine visibility is react-visibility-sensor, which looks like it's dead: https://github.com/joshwnj/react-visibility-sensor/issues/179
We should consider switching to a newer library that uses IntersectionObserver instead, such as this one: https://github.com/thebuilder/react-intersection-observer
**Note that this does not happen in the actual web app, only in storybook tests.**
|
test
|
storybook tests only the first code excerpt is loaded in search results in storybook tests which show search results both for graphql and streaming search only the first code excerpt is loading after debugging i found out that the code excerpt thinks it s not visible the library we are using to determine visibility is react visibility sensor which looks like it s dead we should consider switching to a newer library that uses intersectionobserver instead such as this one note that this does not happen in the actual web app only in storybook tests
| 1
|
49,057
| 13,438,311,579
|
IssuesEvent
|
2020-09-07 17:45:49
|
AOSC-Dev/aosc-os-abbs
|
https://api.github.com/repos/AOSC-Dev/aosc-os-abbs
|
closed
|
sane-backends: security update to 1.0.30
|
security to-stable upgrade
|
<!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-12861, CVE-2020-12865
**Other security advisory IDs:** RHSA-2020:2902-01
**Description:**
This release fixes several security related issues and a build issue.
### Backends
- `epson2`: fixes CVE-2020-12867 (GHSL-2020-075) and several memory
management issues found while addressing that CVE
- `epsonds`: addresses out-of-bound memory access issues to fix
CVE-2020-12862 (GHSL-2020-082) and CVE-2020-12863 (GHSL-2020-083),
addresses a buffer overflow fixing CVE-2020-12865 (GHSL-2020-084)
and disables network autodiscovery to mitigate CVE-2020-12866
(GHSL-2020-079), CVE-2020-12861 (GHSL-2020-080) and CVE-2020-12864
(GHSL-2020-081). Note that this backend does not support network
scanners to begin with.
- `magicolor`: fixes a floating point exception and uninitialized data
read
- fixes an overflow in `sanei_tcp_read()`
**Patches:** N/A
**PoC(s):** <!-- Please list links to available PoCs (Proofs of Concept). -->
**Architectural progress (Mainline):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [x] AArch64 `arm64`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
True
|
sane-backends: security update to 1.0.30 - <!-- Please remove items do not apply. -->
**CVE IDs:** CVE-2020-12861, CVE-2020-12865
**Other security advisory IDs:** RHSA-2020:2902-01
**Description:**
This release fixes several security related issues and a build issue.
### Backends
- `epson2`: fixes CVE-2020-12867 (GHSL-2020-075) and several memory
management issues found while addressing that CVE
- `epsonds`: addresses out-of-bound memory access issues to fix
CVE-2020-12862 (GHSL-2020-082) and CVE-2020-12863 (GHSL-2020-083),
addresses a buffer overflow fixing CVE-2020-12865 (GHSL-2020-084)
and disables network autodiscovery to mitigate CVE-2020-12866
(GHSL-2020-079), CVE-2020-12861 (GHSL-2020-080) and CVE-2020-12864
(GHSL-2020-081). Note that this backend does not support network
scanners to begin with.
- `magicolor`: fixes a floating point exception and uninitialized data
read
- fixes an overflow in `sanei_tcp_read()`
**Patches:** N/A
**PoC(s):** <!-- Please list links to available PoCs (Proofs of Concept). -->
**Architectural progress (Mainline):**
<!-- Please remove any architecture to which the security vulnerabilities do not apply. -->
- [x] AMD64 `amd64`
- [x] AArch64 `arm64`
<!-- If the specified package is `noarch`, please use the stub below. -->
<!-- - [ ] Architecture-independent `noarch` -->
|
non_test
|
sane backends security update to cve ids cve cve other security advisory ids rhsa description this release fixes several security related issues and a build issue backends fixes cve ghsl and several memory management issues found while addressing that cve epsonds addresses out of bound memory access issues to fix cve ghsl and cve ghsl addresses a buffer overflow fixing cve ghsl and disables network autodiscovery to mitigate cve ghsl cve ghsl and cve ghsl note that this backend does not support network scanners to begin with magicolor fixes a floating point exception and uninitialized data read fixes an overflow in sanei tcp read patches n a poc s architectural progress mainline
| 0
|
222,867
| 17,497,869,716
|
IssuesEvent
|
2021-08-10 04:47:47
|
spacemeshos/go-spacemesh
|
https://api.github.com/repos/spacemeshos/go-spacemesh
|
closed
|
Hare inbound gossip message processing delays
|
bug Hare Protocol BLOCKER testnet concurrency
|
## Description
Nodes are taking as long as a minute to receive inbound Hare gossip messages and report them as valid (after which they're gossiped on to other peers).
I can't tell exactly what's going on under the hood without full debug logs. It could be an issue with the Hare broker priority queue, message validation, or something else. I suspect it's happening during the Hare broker `eventLoop`.
## Environment
Please complete the following information:
- OS: Linux
- Node Version: v0.1.41
## Additional Resources

from https://testnet-kibana.spacemesh.io/goto/29beacaf0fe3b5c3757deed8d6f8788a
|
1.0
|
Hare inbound gossip message processing delays - ## Description
Nodes are taking as long as a minute to receive inbound Hare gossip messages and report them as valid (after which they're gossiped on to other peers).
I can't tell exactly what's going on under the hood without full debug logs. It could be an issue with the Hare broker priority queue, message validation, or something else. I suspect it's happening during the Hare broker `eventLoop`.
## Environment
Please complete the following information:
- OS: Linux
- Node Version: v0.1.41
## Additional Resources

from https://testnet-kibana.spacemesh.io/goto/29beacaf0fe3b5c3757deed8d6f8788a
|
test
|
hare inbound gossip message processing delays description nodes are taking as long as a minute to receive inbound hare gossip messages and report them as valid after which they re gossiped on to other peers i can t tell exactly what s going on under the hood without full debug logs it could be an issue with the hare broker priority queue message validation or something else i suspect it s happening during the hare broker eventloop environment please complete the following information os linux node version additional resources from
| 1
|
831,219
| 32,041,546,945
|
IssuesEvent
|
2023-09-22 19:49:33
|
FlutterFlow/flutterflow-issues
|
https://api.github.com/repos/FlutterFlow/flutterflow-issues
|
closed
|
JSON Paths Bugs - Filtering Expressions and paths that select elements from arrays and more
|
status: confirmed priority: medium
|
### Has your issue been reported?
- [X] I have searched the existing issues and confirm it has not been reported.
- [X] I give permission for members of the FlutterFlow team to access and test my project for the sole purpose of investigating this issue.
### Current Behavior
I have 6 new bugs that showed up today and they all relate to JSON paths.
1. Filtering expressions that are named paths in the API tab
2. Any paths that use selecting elements from arrays
3. One more that I haven't pinpointed the source (see images - this is the one I used for the bug report code since the others can't be accessed through the widget tree to generate one)
Please note that this app is in production with users and works.
### Expected Behavior
The JSON paths should not show errors in the FlutterFlow Editor
### Steps to Reproduce
I'll try to reproduce for y'all when I have time
### Reproducible from Blank
- [X] The steps to reproduce above start from a blank project.
### Bug Report Code (Required)
ChJDb250YWluZXJfMnp6a3Y4N3QSiQIKD0NvbHVtbl82ZG9rdXBiNxKlAQoOSW1hZ2VfMDQweHJpajAYByKOATJ+Cjdwcm9qZWN0cy9teS1iYWx0by15djNvYXQvYXNzZXRzL3Q1Z3Nmc2Vhc3ZmZi9pY29uLTMucG5nEAIYAyIWCgkJAAAAAAAATEASCQkAAAAAAABMQCokCQAAAAAAACBAEQAAAAAAACBAGQAAAAAAACBAIQAAAAAAACBAaPQDWgkRAAAAAAAALkD6AwBiABJFCg1UZXh0X3NkYmJkZDNiGAIiMBIXCg5TaG9wIG15IApTdG9yZTACQAaoAQBaEhEAAAAAAAAuQCEAAAAAAAAuQPoDAGIAGAQiBSIA+gMAGAEigAEKTgoLCgkJAAAAAAAAWUASMhIkCQAAAAAAABhAEQAAAAAAABhAGQAAAAAAABhAIQAAAAAAABhAQgYI/////w9aAGIAIgBJAAAAAAAAWUBSAFobCQAAAAAAABBAEQAAAAAAADRAIQAAAAAAADRAegIYAfoDAPIFCQkAAAAAAADwP2IAigFeElgKCHhxdGFjZDN1EkwqPxI9CAxCECIOCgoKCHVzZXJEYXRhEAFKJyIlCiMkLnByaW1fdmV0X2Z1bmRyYWlzaW5nLmUtc3RvcmVfbGlua6oCCDk3cnZ3cXB0GgIIAQ==
### Context
This issue has stopped me from being able to use the platform.
I cannot delete and try to rebuild any of this as this is a huge project with multiple devs.
Also, all the snapshots have the same issue so I can't revert.
### Visual documentation


### Additional Info
_No response_
### Environment
```markdown
- FlutterFlow version: v3.1
- Platform: Web
- Browser name and version: Version 116.0.5845.97 (Official Build) (64-bit)
- Operating system and version affected: Windows 10
```
|
1.0
|
JSON Paths Bugs - Filtering Expressions and paths that select elements from arrays and more - ### Has your issue been reported?
- [X] I have searched the existing issues and confirm it has not been reported.
- [X] I give permission for members of the FlutterFlow team to access and test my project for the sole purpose of investigating this issue.
### Current Behavior
I have 6 new bugs that showed up today and they all relate to JSON paths.
1. Filtering expressions that are named paths in the API tab
2. Any paths that use selecting elements from arrays
3. One more that I haven't pinpointed the source (see images - this is the one I used for the bug report code since the others can't be accessed through the widget tree to generate one)
Please note that this app is in production with users and works.
### Expected Behavior
The JSON paths should not show errors in the FlutterFlow Editor
### Steps to Reproduce
I'll try to reproduce for y'all when I have time
### Reproducible from Blank
- [X] The steps to reproduce above start from a blank project.
### Bug Report Code (Required)
ChJDb250YWluZXJfMnp6a3Y4N3QSiQIKD0NvbHVtbl82ZG9rdXBiNxKlAQoOSW1hZ2VfMDQweHJpajAYByKOATJ+Cjdwcm9qZWN0cy9teS1iYWx0by15djNvYXQvYXNzZXRzL3Q1Z3Nmc2Vhc3ZmZi9pY29uLTMucG5nEAIYAyIWCgkJAAAAAAAATEASCQkAAAAAAABMQCokCQAAAAAAACBAEQAAAAAAACBAGQAAAAAAACBAIQAAAAAAACBAaPQDWgkRAAAAAAAALkD6AwBiABJFCg1UZXh0X3NkYmJkZDNiGAIiMBIXCg5TaG9wIG15IApTdG9yZTACQAaoAQBaEhEAAAAAAAAuQCEAAAAAAAAuQPoDAGIAGAQiBSIA+gMAGAEigAEKTgoLCgkJAAAAAAAAWUASMhIkCQAAAAAAABhAEQAAAAAAABhAGQAAAAAAABhAIQAAAAAAABhAQgYI/////w9aAGIAIgBJAAAAAAAAWUBSAFobCQAAAAAAABBAEQAAAAAAADRAIQAAAAAAADRAegIYAfoDAPIFCQkAAAAAAADwP2IAigFeElgKCHhxdGFjZDN1EkwqPxI9CAxCECIOCgoKCHVzZXJEYXRhEAFKJyIlCiMkLnByaW1fdmV0X2Z1bmRyYWlzaW5nLmUtc3RvcmVfbGlua6oCCDk3cnZ3cXB0GgIIAQ==
### Context
This issue has stopped me from being able to use the platform.
I cannot delete and try to rebuild any of this as this is a huge project with multiple devs.
Also, all the snapshots have the same issue so I can't revert.
### Visual documentation


### Additional Info
_No response_
### Environment
```markdown
- FlutterFlow version: v3.1
- Platform: Web
- Browser name and version: Version 116.0.5845.97 (Official Build) (64-bit)
- Operating system and version affected: Windows 10
```
|
non_test
|
json paths bugs filtering expressions and paths that select elements from arrays and more has your issue been reported i have searched the existing issues and confirm it has not been reported i give permission for members of the flutterflow team to access and test my project for the sole purpose of investigating this issue current behavior i have new bugs that showed up today and they all relate to json paths filtering expressions that are named paths in the api tab any paths that use selecting elements from arrays one more that i haven t pinpointed the source see images this is the one i used for the bug report code since the others can t be accessed through the widget tree to generate one please note that this app is in production with users and works expected behavior the json paths should not show errors in the flutterflow editor steps to reproduce i ll try to reproduce for y all when i have time reproducible from blank the steps to reproduce above start from a blank project bug report code required gmagaeigaektgolcgkjaaaaaaaawuasmhikcqaaaaaaabhaeqaaaaaaabhagqaaaaaaabhaiqaaaaaaabhaqgyi context this issue has stopped me from being able to use the platform i cannot delete and try to rebuild any of this as this is a huge project with multiple devs also all the snapshots have the same issue so i can t revert visual documentation additional info no response environment markdown flutterflow version platform web browser name and version version official build bit operating system and version affected windows
| 0
|
107,099
| 9,201,888,678
|
IssuesEvent
|
2019-03-07 20:51:50
|
open-apparel-registry/open-apparel-registry
|
https://api.github.com/repos/open-apparel-registry/open-apparel-registry
|
closed
|
Show country name instead of code on the list detail page
|
tested/verified
|
## Overview
We standardize on using the 2-character code internally, but the uploaded lists almost always use the full country name. Show the full country name on the list detail page.
### Describe the solution you'd like
* Replace the "Country Code" column with "Country"
|
1.0
|
Show country name instead of code on the list detail page - ## Overview
We standardize on using the 2-character code internally, but the uploaded lists almost always use the full country name. Show the full country name on the list detail page.
### Describe the solution you'd like
* Replace the "Country Code" column with "Country"
|
test
|
show country name instead of code on the list detail page overview we standardize on using the character code internally but the uploaded lists almost always use the full country name show the full country name on the list detail page describe the solution you d like replace the country code column with country
| 1
|
11,193
| 28,367,208,403
|
IssuesEvent
|
2023-04-12 14:35:41
|
OpenCTI-Platform/connectors
|
https://api.github.com/repos/OpenCTI-Platform/connectors
|
closed
|
Modularization of relation refs
|
feature solved architecture
|
## Information
* Linked to [issue 3012](https://github.com/OpenCTI-Platform/opencti/issues/3012)
## Bug resolution
* handle multiple x-opencti-linked-ref
|
1.0
|
Modularization of relation refs - ## Information
* Linked to [issue 3012](https://github.com/OpenCTI-Platform/opencti/issues/3012)
## Bug resolution
* handle multiple x-opencti-linked-ref
|
non_test
|
modularization of relation refs information linked to bug resolution handle multiple x opencti linked ref
| 0
|
416,459
| 28,083,307,005
|
IssuesEvent
|
2023-03-30 08:06:45
|
magang-mknows/cs
|
https://api.github.com/repos/magang-mknows/cs
|
closed
|
Week 1 : Base Component | Card
|
documentation enhancement
|
- [ ] Base Style Card
- [ ] Props for Text, Custom Size, Custom Icon or Image
- [ ] Custom Title on Card
- [ ] Custom Description on Card
|
1.0
|
Week 1 : Base Component | Card - - [ ] Base Style Card
- [ ] Props for Text, Custom Size, Custom Icon or Image
- [ ] Custom Title on Card
- [ ] Custom Description on Card
|
non_test
|
week base component card base style card props for text custom size custom icon or image custom title on card custom description on card
| 0
|
8,552
| 6,568,937,497
|
IssuesEvent
|
2017-09-09 00:12:06
|
opensim-org/opensim-core
|
https://api.github.com/repos/opensim-org/opensim-core
|
opened
|
Model::scale() calls initSystem() three times, which is expensive
|
Performance
|
It does not seem like it should be necessary to destroy and rebuild the underlying computational system 3 times when scaling a Model.
`Model::scale()` calls `SimbodyEngine::scale()` [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/Model/Model.cpp#L1478); `initSystem()` is called :one: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/SimbodyEngine/SimbodyEngine.cpp#L821) and :two: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/SimbodyEngine/SimbodyEngine.cpp#L839). On returning, `Model::scale()` calls `initSystem()` itself :three: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/Model/Model.cpp#L1504).
|
True
|
Model::scale() calls initSystem() three times, which is expensive - It does not seem like it should be necessary to destroy and rebuild the underlying computational system 3 times when scaling a Model.
`Model::scale()` calls `SimbodyEngine::scale()` [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/Model/Model.cpp#L1478); `initSystem()` is called :one: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/SimbodyEngine/SimbodyEngine.cpp#L821) and :two: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/SimbodyEngine/SimbodyEngine.cpp#L839). On returning, `Model::scale()` calls `initSystem()` itself :three: [here](https://github.com/opensim-org/opensim-core/blob/master/OpenSim/Simulation/Model/Model.cpp#L1504).
|
non_test
|
model scale calls initsystem three times which is expensive it does not seem like it should be necessary to destroy and rebuild the underlying computational system times when scaling a model model scale calls simbodyengine scale initsystem is called one and two on returning model scale calls initsystem itself three
| 0
|
315,942
| 27,120,697,592
|
IssuesEvent
|
2023-02-15 22:35:08
|
UWB-Biocomputing/Graphitti
|
https://api.github.com/repos/UWB-Biocomputing/Graphitti
|
closed
|
Generated unit test executables are not included in .gitignore
|
testing serialization
|
The generated unit test executables for serialization and deserialization are not included in `.gitignore` so `git` identifies them as untracked files.
|
1.0
|
Generated unit test executables are not included in .gitignore - The generated unit test executables for serialization and deserialization are not included in `.gitignore` so `git` identifies them as untracked files.
|
test
|
generated unit test executables are not included in gitignore the generated unit test executables for serialization and deserialization are not included in gitignore so git identifies them as untracked files
| 1
|
185,029
| 6,718,398,329
|
IssuesEvent
|
2017-10-15 12:23:28
|
johndeverall/BehaviourCoder
|
https://api.github.com/repos/johndeverall/BehaviourCoder
|
closed
|
A bunch of errors appear in the log when creating and cancelling trials / restarting sessions etc.
|
Priority: Critical Type: Bug
|
Here is an example of some. I haven't worked out exact replication steps yet. I think this is likely actually two defects.
```
'Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at de.bochumuniruhr.psy.bio.behaviourcoder.Main$5.actionPerformed(Main.java:399)
at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
at javax.swing.AbstractButton.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(Unknown Source)
at java.awt.Component.processMouseEvent(Unknown Source)
at javax.swing.JComponent.processMouseEvent(Unknown Source)
at java.awt.Component.processEvent(Unknown Source)
at java.awt.Container.processEvent(Unknown Source)
at java.awt.Component.dispatchEventImpl(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Window.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
at java.awt.EventQueue.access$500(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at de.bochumuniruhr.psy.bio.behaviourcoder.Main$6.actionPerformed(Main.java:427)
at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
at javax.swing.AbstractButton.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(Unknown Source)
at java.awt.Component.processMouseEvent(Unknown Source)
at javax.swing.JComponent.processMouseEvent(Unknown Source)
at java.awt.Component.processEvent(Unknown Source)
at java.awt.Container.processEvent(Unknown Source)
at java.awt.Component.dispatchEventImpl(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Window.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
at java.awt.EventQueue.access$500(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)
'
```
|
1.0
|
A bunch of errors appear in the log when creating and cancelling trials / restarting sessions etc. - Here is an example of some. I haven't worked out exact replication steps yet. I think this is likely actually two defects.
```
'Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at de.bochumuniruhr.psy.bio.behaviourcoder.Main$5.actionPerformed(Main.java:399)
at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
at javax.swing.AbstractButton.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(Unknown Source)
at java.awt.Component.processMouseEvent(Unknown Source)
at javax.swing.JComponent.processMouseEvent(Unknown Source)
at java.awt.Component.processEvent(Unknown Source)
at java.awt.Container.processEvent(Unknown Source)
at java.awt.Component.dispatchEventImpl(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Window.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
at java.awt.EventQueue.access$500(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at de.bochumuniruhr.psy.bio.behaviourcoder.Main$6.actionPerformed(Main.java:427)
at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
at javax.swing.AbstractButton$Handler.actionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
at javax.swing.AbstractButton.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(Unknown Source)
at java.awt.Component.processMouseEvent(Unknown Source)
at javax.swing.JComponent.processMouseEvent(Unknown Source)
at java.awt.Component.processEvent(Unknown Source)
at java.awt.Container.processEvent(Unknown Source)
at java.awt.Component.dispatchEventImpl(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
at java.awt.Container.dispatchEventImpl(Unknown Source)
at java.awt.Window.dispatchEventImpl(Unknown Source)
at java.awt.Component.dispatchEvent(Unknown Source)
at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
at java.awt.EventQueue.access$500(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.awt.EventQueue$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.awt.EventQueue$4.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(Unknown Source)
at java.awt.EventQueue.dispatchEvent(Unknown Source)
at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
at java.awt.EventDispatchThread.run(Unknown Source)
'
```
|
non_test
|
a bunch of errors appear in the log when creating and cancelling trials restarting sessions etc here is an example of some i haven t worked out exact replication steps yet i think this is likely actually two defects exception in thread awt eventqueue java lang nullpointerexception at de bochumuniruhr psy bio behaviourcoder main actionperformed main java at javax swing abstractbutton fireactionperformed unknown source at javax swing abstractbutton handler actionperformed unknown source at javax swing defaultbuttonmodel fireactionperformed unknown source at javax swing defaultbuttonmodel setpressed unknown source at javax swing abstractbutton doclick unknown source at javax swing plaf basic basicmenuitemui doclick unknown source at javax swing plaf basic basicmenuitemui handler mousereleased unknown source at java awt component processmouseevent unknown source at javax swing jcomponent processmouseevent unknown source at java awt component processevent unknown source at java awt container processevent unknown source at java awt component dispatcheventimpl unknown source at java awt container dispatcheventimpl unknown source at java awt component dispatchevent unknown source at java awt lightweightdispatcher retargetmouseevent unknown source at java awt lightweightdispatcher processmouseevent unknown source at java awt lightweightdispatcher dispatchevent unknown source at java awt container dispatcheventimpl unknown source at java awt window dispatcheventimpl unknown source at java awt component dispatchevent unknown source at java awt eventqueue dispatcheventimpl unknown source at java awt eventqueue access unknown source at java awt eventqueue run unknown source at java awt eventqueue run unknown source at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java awt 
eventqueue run unknown source at java awt eventqueue run unknown source at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java awt eventqueue dispatchevent unknown source at java awt eventdispatchthread pumponeeventforfilters unknown source at java awt eventdispatchthread pumpeventsforfilter unknown source at java awt eventdispatchthread pumpeventsforhierarchy unknown source at java awt eventdispatchthread pumpevents unknown source at java awt eventdispatchthread pumpevents unknown source at java awt eventdispatchthread run unknown source exception in thread awt eventqueue java lang nullpointerexception at de bochumuniruhr psy bio behaviourcoder main actionperformed main java at javax swing abstractbutton fireactionperformed unknown source at javax swing abstractbutton handler actionperformed unknown source at javax swing defaultbuttonmodel fireactionperformed unknown source at javax swing defaultbuttonmodel setpressed unknown source at javax swing abstractbutton doclick unknown source at javax swing plaf basic basicmenuitemui doclick unknown source at javax swing plaf basic basicmenuitemui handler mousereleased unknown source at java awt component processmouseevent unknown source at javax swing jcomponent processmouseevent unknown source at java awt component processevent unknown source at java awt container processevent unknown source at java awt component dispatcheventimpl unknown source at java awt container dispatcheventimpl unknown source at java awt component dispatchevent unknown source at java awt lightweightdispatcher retargetmouseevent unknown source at java awt lightweightdispatcher processmouseevent unknown source at java awt lightweightdispatcher dispatchevent unknown source at java awt container dispatcheventimpl unknown source at java awt window dispatcheventimpl unknown source at java awt component dispatchevent unknown source at java awt 
eventqueue dispatcheventimpl unknown source at java awt eventqueue access unknown source at java awt eventqueue run unknown source at java awt eventqueue run unknown source at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java awt eventqueue run unknown source at java awt eventqueue run unknown source at java security accesscontroller doprivileged native method at java security protectiondomain javasecurityaccessimpl dointersectionprivilege unknown source at java awt eventqueue dispatchevent unknown source at java awt eventdispatchthread pumponeeventforfilters unknown source at java awt eventdispatchthread pumpeventsforfilter unknown source at java awt eventdispatchthread pumpeventsforhierarchy unknown source at java awt eventdispatchthread pumpevents unknown source at java awt eventdispatchthread pumpevents unknown source at java awt eventdispatchthread run unknown source
| 0
|
14,588
| 25,198,659,451
|
IssuesEvent
|
2022-11-12 21:13:37
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
Failed to pick up right Node.js version in sub folder
|
type:bug status:requirements priority-5-triage
|
### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I have a mono repo. The sub folder `/grafana/hm-panel-plugin` expects using Node.js 16.
However, when Renovate tries to update package at `/grafana/hm-panel-plugin/package.json`,
I got [error](https://github.com/Hongbo-Miao/hongbomiao.com/pull/5997#issuecomment-1312449243) from Renovate:
```sh
npm ERR! code EBADENGINE
npm ERR! engine Unsupported engine
npm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0
npm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0
npm ERR! notsup Required: {"node":"16.x","npm":"8.x"}
npm ERR! notsup Actual: {"npm":"8.19.3","node":"v18.12.1"}
npm ERR! A complete log of this run can be found in:
npm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T18_31_35_449Z-debug-0.log
```
I think Renovate may use the Node.js version 18 in the root [/.nvmrc](https://github.com/Hongbo-Miao/hongbomiao.com/blob/main/.nvmrc) instead of Node.js version 16 at [/grafana/hm-panel-plugin/.nvmrc](https://github.com/Hongbo-Miao/hongbomiao.com/blob/main/grafana/hm-panel-plugin/.nvmrc)
### Relevant debug logs
<details><summary>Logs</summary>
Part of log at https://app.renovatebot.com/dashboard#github/Hongbo-Miao/hongbomiao.com/886333983
```sh
lock file error(branch="renovate/tanstack-query-monorepo")
{
"err": {
"name": "ExecError",
"cmd": "/bin/sh -c docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"",
"stderr": "npm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n",
"stdout": "installing v2 tool node v18.12.1\nlinking tool node v18.12.1\nnode: v18.12.1 /usr/local/bin/node\nnpm: 8.19.2 /usr/local/bin/npm\nInstalled v2 /usr/local/buildpack/tools/v2/node.sh in 7 seconds\nskip cleanup, not a docker build: 96b4d4cb4ab1\ninstalling v2 tool npm v8.19.3\n\nadded 1 package in 6s\nlinking tool npm v8.19.3\n8.19.3\nInstalled v2 /usr/local/buildpack/tools/v2/npm.sh in 8 seconds\nskip cleanup, not a docker build: 96b4d4cb4ab1\n",
"options": {
"cwd": "/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin",
"encoding": "utf-8",
"env": {
"NPM_CONFIG_CACHE": "/tmp/renovate-cache/others/npm",
"npm_config_store": "/tmp/renovate-cache/others/pnpm",
"HOME": "/home/user",
"PATH": "/home/user/.local/bin:/home/user/bin:/opt/buildpack/tools/python/3.9.3/bin:/home/user/.npm-global/bin:/home/ubuntu/renovateapp/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
},
"exitCode": 1,
"message": "Command failed: docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"\nnpm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n",
"stack": "ExecError: Command failed: docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"\nnpm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n\n at ChildProcess.<anonymous> (/home/ubuntu/renovateapp/node_modules/renovate/dist/util/exec/common.js:87:24)\n at ChildProcess.emit (node:events:525:35)\n at ChildProcess.emit (node:domain:489:12)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)"
},
"type": "npm"
}
```
</details>
### Have you created a minimal reproduction repository?
No reproduction, but I have linked to a public repo where it occurs
|
1.0
|
Failed to pick up right Node.js version in sub folder - ### How are you running Renovate?
Mend Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### If you're self-hosting Renovate, select which platform you are using.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I have a mono repo. The sub folder `/grafana/hm-panel-plugin` expects using Node.js 16.
However, when Renovate tries to update package at `/grafana/hm-panel-plugin/package.json`,
I got [error](https://github.com/Hongbo-Miao/hongbomiao.com/pull/5997#issuecomment-1312449243) from Renovate:
```sh
npm ERR! code EBADENGINE
npm ERR! engine Unsupported engine
npm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0
npm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0
npm ERR! notsup Required: {"node":"16.x","npm":"8.x"}
npm ERR! notsup Actual: {"npm":"8.19.3","node":"v18.12.1"}
npm ERR! A complete log of this run can be found in:
npm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T18_31_35_449Z-debug-0.log
```
I think Renovate may use the Node.js version 18 in the root [/.nvmrc](https://github.com/Hongbo-Miao/hongbomiao.com/blob/main/.nvmrc) instead of Node.js version 16 at [/grafana/hm-panel-plugin/.nvmrc](https://github.com/Hongbo-Miao/hongbomiao.com/blob/main/grafana/hm-panel-plugin/.nvmrc)
### Relevant debug logs
<details><summary>Logs</summary>
Part of log at https://app.renovatebot.com/dashboard#github/Hongbo-Miao/hongbomiao.com/886333983
```sh
lock file error(branch="renovate/tanstack-query-monorepo")
{
"err": {
"name": "ExecError",
"cmd": "/bin/sh -c docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"",
"stderr": "npm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n",
"stdout": "installing v2 tool node v18.12.1\nlinking tool node v18.12.1\nnode: v18.12.1 /usr/local/bin/node\nnpm: 8.19.2 /usr/local/bin/npm\nInstalled v2 /usr/local/buildpack/tools/v2/node.sh in 7 seconds\nskip cleanup, not a docker build: 96b4d4cb4ab1\ninstalling v2 tool npm v8.19.3\n\nadded 1 package in 6s\nlinking tool npm v8.19.3\n8.19.3\nInstalled v2 /usr/local/buildpack/tools/v2/npm.sh in 8 seconds\nskip cleanup, not a docker build: 96b4d4cb4ab1\n",
"options": {
"cwd": "/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin",
"encoding": "utf-8",
"env": {
"NPM_CONFIG_CACHE": "/tmp/renovate-cache/others/npm",
"npm_config_store": "/tmp/renovate-cache/others/pnpm",
"HOME": "/home/user",
"PATH": "/home/user/.local/bin:/home/user/bin:/opt/buildpack/tools/python/3.9.3/bin:/home/user/.npm-global/bin:/home/ubuntu/renovateapp/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LC_ALL": "C.UTF-8",
"LANG": "C.UTF-8",
"BUILDPACK_CACHE_DIR": "/tmp/containerbase",
"CONTAINERBASE_CACHE_DIR": "/tmp/containerbase"
},
"maxBuffer": 10485760,
"timeout": 900000
},
"exitCode": 1,
"message": "Command failed: docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"\nnpm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n",
"stack": "ExecError: Command failed: docker run --rm --name=renovate_sidecar --label=renovate_child -v \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\":\"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -v \"/tmp/containerbase\":\"/tmp/containerbase\" -e NPM_CONFIG_CACHE -e npm_config_store -e BUILDPACK_CACHE_DIR -e CONTAINERBASE_CACHE_DIR -w \"/mnt/renovate/gh/Hongbo-Miao/hongbomiao.com/grafana/hm-panel-plugin\" docker.io/renovate/sidecar bash -l -c \"install-tool node 18.12.1 && install-tool npm 8.19.3 && hash -d npm 2>/dev/null || true && npm install --package-lock-only --no-audit --ignore-scripts\"\nnpm ERR! code EBADENGINE\nnpm ERR! engine Unsupported engine\nnpm ERR! engine Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Not compatible with your version of node/npm: hm-panel-plugin@1.0.0\nnpm ERR! notsup Required: {\"node\":\"16.x\",\"npm\":\"8.x\"}\nnpm ERR! notsup Actual: {\"npm\":\"8.19.3\",\"node\":\"v18.12.1\"}\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /tmp/renovate-cache/others/npm/_logs/2022-11-12T08_26_30_850Z-debug-0.log\n\n at ChildProcess.<anonymous> (/home/ubuntu/renovateapp/node_modules/renovate/dist/util/exec/common.js:87:24)\n at ChildProcess.emit (node:events:525:35)\n at ChildProcess.emit (node:domain:489:12)\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)"
},
"type": "npm"
}
```
</details>
### Have you created a minimal reproduction repository?
No reproduction, but I have linked to a public repo where it occurs
|
non_test
|
failed to pick up right node js version in sub folder how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response if you re self hosting renovate select which platform you are using no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug i have a mono repo the sub folder grafana hm panel plugin expects using node js however when renovate tries to update package at grafana hm panel plugin package json i got from reonvate sh npm err code ebadengine npm err engine unsupported engine npm err engine not compatible with your version of node npm hm panel plugin npm err notsup not compatible with your version of node npm hm panel plugin npm err notsup required node x npm x npm err notsup actual npm node npm err a complete log of this run can be found in npm err tmp renovate cache others npm logs debug log i think renovate may use the node js version in the root instead of node js version at relevant debug logs logs part of log at sh lock file error branch renovate tanstack query monorepo err name execerror cmd bin sh c docker run rm name renovate sidecar label renovate child v mnt renovate gh hongbo miao hongbomiao com mnt renovate gh hongbo miao hongbomiao com v tmp renovate cache tmp renovate cache v tmp containerbase tmp containerbase e npm config cache e npm config store e buildpack cache dir e containerbase cache dir w mnt renovate gh hongbo miao hongbomiao com grafana hm panel plugin docker io renovate sidecar bash l c install tool node install tool npm hash d npm dev null true npm install package lock only no audit ignore scripts stderr npm err code ebadengine nnpm err engine unsupported engine nnpm err engine not compatible with your version of node npm hm panel plugin nnpm err notsup not compatible with your version of node 
npm hm panel plugin nnpm err notsup required node x npm x nnpm err notsup actual npm node n nnpm err a complete log of this run can be found in nnpm err tmp renovate cache others npm logs debug log n stdout installing tool node nlinking tool node nnode usr local bin node nnpm usr local bin npm ninstalled usr local buildpack tools node sh in seconds nskip cleanup not a docker build ninstalling tool npm n nadded package in nlinking tool npm ninstalled usr local buildpack tools npm sh in seconds nskip cleanup not a docker build n options cwd mnt renovate gh hongbo miao hongbomiao com grafana hm panel plugin encoding utf env npm config cache tmp renovate cache others npm npm config store tmp renovate cache others pnpm home home user path home user local bin home user bin opt buildpack tools python bin home user npm global bin home ubuntu renovateapp node modules bin usr local sbin usr local bin usr sbin usr bin sbin bin lc all c utf lang c utf buildpack cache dir tmp containerbase containerbase cache dir tmp containerbase maxbuffer timeout exitcode message command failed docker run rm name renovate sidecar label renovate child v mnt renovate gh hongbo miao hongbomiao com mnt renovate gh hongbo miao hongbomiao com v tmp renovate cache tmp renovate cache v tmp containerbase tmp containerbase e npm config cache e npm config store e buildpack cache dir e containerbase cache dir w mnt renovate gh hongbo miao hongbomiao com grafana hm panel plugin docker io renovate sidecar bash l c install tool node install tool npm hash d npm dev null true npm install package lock only no audit ignore scripts nnpm err code ebadengine nnpm err engine unsupported engine nnpm err engine not compatible with your version of node npm hm panel plugin nnpm err notsup not compatible with your version of node npm hm panel plugin nnpm err notsup required node x npm x nnpm err notsup actual npm node n nnpm err a complete log of this run can be found in nnpm err tmp renovate cache others npm logs debug 
log n stack execerror command failed docker run rm name renovate sidecar label renovate child v mnt renovate gh hongbo miao hongbomiao com mnt renovate gh hongbo miao hongbomiao com v tmp renovate cache tmp renovate cache v tmp containerbase tmp containerbase e npm config cache e npm config store e buildpack cache dir e containerbase cache dir w mnt renovate gh hongbo miao hongbomiao com grafana hm panel plugin docker io renovate sidecar bash l c install tool node install tool npm hash d npm dev null true npm install package lock only no audit ignore scripts nnpm err code ebadengine nnpm err engine unsupported engine nnpm err engine not compatible with your version of node npm hm panel plugin nnpm err notsup not compatible with your version of node npm hm panel plugin nnpm err notsup required node x npm x nnpm err notsup actual npm node n nnpm err a complete log of this run can be found in nnpm err tmp renovate cache others npm logs debug log n n at childprocess home ubuntu renovateapp node modules renovate dist util exec common js n at childprocess emit node events n at childprocess emit node domain n at process childprocess handle onexit node internal child process type npm have you created a minimal reproduction repository no reproduction but i have linked to a public repo where it occurs
| 0
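The Renovate row above hypothesizes that the root `/.nvmrc` (Node 18) was used instead of the nearer `/grafana/hm-panel-plugin/.nvmrc` (Node 16). As a hypothetical illustration only (not Renovate's actual implementation), nearest-config resolution means walking upward from the package directory toward the repo root and taking the first `.nvmrc` found:

```python
from pathlib import Path
from typing import Optional

def nearest_nvmrc(package_dir: str, repo_root: str) -> Optional[str]:
    """Walk upward from package_dir to repo_root and return the contents
    of the first .nvmrc encountered, i.e. the version the sub-folder
    expects. Returns None if no .nvmrc exists on the path."""
    current = Path(package_dir).resolve()
    root = Path(repo_root).resolve()
    while True:
        candidate = current / ".nvmrc"
        if candidate.is_file():
            return candidate.read_text().strip()
        if current == root:
            return None
        current = current.parent
```

Under this scheme, a lookup starting in `grafana/hm-panel-plugin` would find the sub-folder's `.nvmrc` (Node 16) before ever reaching the root one, which is the behavior the reporter expected.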
|
98,086
| 20,606,533,697
|
IssuesEvent
|
2022-03-07 01:25:47
|
inventree/InvenTree
|
https://api.github.com/repos/inventree/InvenTree
|
closed
|
[BUG] App QR scanner fails with server version 0.6
|
bug barcode app
|
**Describe the bug**
Barcode / QR code scanner in iPhone App does not work. App version 0.5.6 and Inventree version 0.6.1
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Installed a fresh docker version of 0.6.1.
2. Added a part category and a single new part and displayed the QR code for that part.
3. Opened the iPhone App (version 0.5.6), connected to server.
4. Scan barcode within the app.
5. App fails with "No match for barcode" message
**Expected behavior**
Expected to open the part in the App as it worked in 0.5.4 of Inventree with the App.
The App fails with the message "No match for barcode"
inventree-proxy log contains message "POST /api/barcode/ HTTP/1.1" 200 130 "-" "Dart/2.13 (dart:io)" "-"
**Deployment Method**
- [x ] Docker
- [ ] Bare Metal
**Version Information**
App version 0.5.6 and Inventree version 0.6.1
|
1.0
|
[BUG] App QR scanner fails with server version 0.6 - **Describe the bug**
Barcode / QR code scanner in iPhone App does not work. App version 0.5.6 and Inventree version 0.6.1
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Installed a fresh docker version of 0.6.1.
2. Added a part category and a single new part and displayed the QR code for that part.
3. Opened the iPhone App (version 0.5.6), connected to server.
4. Scan barcode within the app.
5. App fails with "No match for barcode" message
**Expected behavior**
Expected to open the part in the App as it worked in 0.5.4 of Inventree with the App.
The App fails with the message "No match for barcode"
inventree-proxy log contains message "POST /api/barcode/ HTTP/1.1" 200 130 "-" "Dart/2.13 (dart:io)" "-"
**Deployment Method**
- [x ] Docker
- [ ] Bare Metal
**Version Information**
App version 0.5.6 and Inventree version 0.6.1
|
non_test
|
app qr scanner fails with server version describe the bug barcode qr code scanner in iphone app does not work app version and inventree version steps to reproduce steps to reproduce the behavior installed a fresh docker version of added a part category and a single new part and displayed the qr code for that part opened the iphone app version connected to server scan barcode within the app fails with no match for barcode message expected behavior expected to open the part in the app as it worked in of inventree with the app the app fails with the message no match for barcode inventree proxy log contains message post api barcode http dart dart io deployment method docker bare metal version information app version and inventree version
| 0
|
41,726
| 5,394,725,052
|
IssuesEvent
|
2017-02-27 05:05:30
|
Microsoft/vsts-tasks
|
https://api.github.com/repos/Microsoft/vsts-tasks
|
closed
|
PublishTestResults .ts version handles wildcards differently than the .ps1 version
|
Area: Test
|
I have build definitions for a cross platform project that includes running googletest unit tests in both Linux and windows. I have the junit style xml test results on each of them being generated in the staging directory (e.g. BuildAgent/_work/1/a). On our windows build platform, **/TEST-*.xml works fine, and the test results are published properly.
Under Linux, it appears that if wildcards are used, it uses the System.DefaultWorkingDirectory (e.g. BuildAgent/_work/1/s), and so far I haven't managed to get it to publish results unless I name the files without wildcards.
The problem was in both the vsoagent version, and the latest vsts version -- though it appears I am using PublishTestResults 1.0.22. I haven't yet decoded the documentation or other search results to figure out how to upload a new version of a task to the server.
It does appear that the latest version on the master branch has different code that uses the find-files-legacy library to find the files, but it still references System.DefaultWorkingDirectory.
Nonetheless, this issue may be fixed in the latest version, but since I haven't seen any similar issue report, I am posting it here to help direct others facing the same issue.
Ultimately, I want to be able to use wildcards, and I want it to search the same directory relative to build variables on each system, not withstanding file system differences.
|
1.0
|
PublishTestResults .ts version handles wildcards differently than the .ps1 version - I have build definitions for a cross platform project that includes running googletest unit tests in both Linux and windows. I have the junit style xml test results on each of them being generated in the staging directory (e.g. BuildAgent/_work/1/a). On our windows build platform, **/TEST-*.xml works fine, and the test results are published properly.
Under Linux, it appears that if wildcards are used, it uses the System.DefaultWorkingDirectory (e.g. BuildAgent/_work/1/s), and so far I haven't managed to get it to publish results unless I name the files without wildcards.
The problem was in both the vsoagent version, and the latest vsts version -- though it appears I am using PublishTestResults 1.0.22. I haven't yet decoded the documentation or other search results to figure out how to upload a new version of a task to the server.
It does appear that the latest version on the master branch has different code that uses the find-files-legacy library to find the files, but it still references System.DefaultWorkingDirectory.
Nonetheless, this issue may be fixed in the latest version, but since I haven't seen any similar issue report, I am posting it here to help direct others facing the same issue.
Ultimately, I want to be able to use wildcards, and I want it to search the same directory relative to build variables on each system, not withstanding file system differences.
|
test
|
publishtestresults ts version handles wildcards differently than the version i have build definitions for a cross platform project that includes running googletest unit tests in both linux and windows i have the junit style xml test results on each of them being generated in the staging directory e g buildagent work a on our windows build platform test xml works fine and the test results are published properly under linux it appears that if wildcards are used it uses the system defaultworkingdirectory e g buildagent work s and so far i haven t managed to it to publish results unless i name the files without wildcards the problem was in both the vsoagent version and the latest vsts version though it appears i am using publishtestresults i haven t yet decoded the documentation or other search results to figure out how to upload a new version of a task to the server it does appear that the latest version on the master branch has different code that uses the find files legacy library to find the files but it still references system defaultworkingdirectory none the less this issue may be fixed in the latest version but since i haven t seen any similar issue report i am posting it here to help direct others facing the same issue ultimately i want to be able to use wildcards and i want it to search the same directory relative to build variables on each system not withstanding file system differences
| 1
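The path-resolution behavior described in the record above can be sketched in Python: a wildcard pattern only matches if it is resolved against the right root directory. All names and the directory layout here are hypothetical, chosen only to mirror the staging-directory ("a") versus default-working-directory ("s") split from the report.

```python
import os
import tempfile
from pathlib import Path

def find_results(root: str, pattern: str) -> list:
    """Resolve a wildcard pattern against an explicit root directory
    instead of an implicit current working directory."""
    return sorted(str(p) for p in Path(root).glob(pattern))

# Hypothetical agent layout: results are written to the staging
# directory "a", while "s" is the default working directory.
base = tempfile.mkdtemp()
staging = os.path.join(base, "a")
working = os.path.join(base, "s")
os.makedirs(staging)
os.makedirs(working)
Path(staging, "test.xml").write_text("<testsuite/>")

# Searching relative to the staging directory finds the results file;
# the same pattern rooted at the working directory matches nothing.
print(find_results(staging, "*.xml"))  # one match: .../a/test.xml
print(find_results(working, "*.xml"))  # []
```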
|
704,127
| 24,186,698,283
|
IssuesEvent
|
2022-09-23 13:52:53
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
bigcountypreps.com - Desktop layout displayed instead of mobile layout
|
os-ios browser-firefox-ios priority-normal type-trackingprotection severity-critical action-needssitepatch
|
<!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74605 -->
**URL**: https://bigcountypreps.com/big-board-forum/?p=%2F%3Fembedded%3D1%26
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.4.2
**Tested Another Browser**: Yes Safari
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
Can’t explore forum on mobile.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
bigcountypreps.com - Desktop layout displayed instead of mobile layout - <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74605 -->
**URL**: https://bigcountypreps.com/big-board-forum/?p=%2F%3Fembedded%3D1%26
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.4.2
**Tested Another Browser**: Yes Safari
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
Can’t explore forum on mobile.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_test
|
bigcountypreps com desktop layout displayed instead of mobile layout url browser version firefox ios operating system ios tested another browser yes safari problem type design is broken description items are overlapped steps to reproduce can’t explore forum on mobile browser configuration none from with ❤️
| 0
|
332,744
| 29,491,864,091
|
IssuesEvent
|
2023-06-02 14:01:30
|
dudykr/stc
|
https://api.github.com/repos/dudykr/stc
|
closed
|
Fix unit test: `tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts`
|
tsc-unit-test
|
Related test input: https://github.com/dudykr/stc/blob/main/crates/stc_ts_file_analyzer/tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts
Test file name: `tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts`
---
I (`@kdy1`) may expand this in the future.
If you want to work on this issue, please ping `@kdy1` in the comment.
---
This issue is created by sync script.
|
1.0
|
Fix unit test: `tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts` -
Related test input: https://github.com/dudykr/stc/blob/main/crates/stc_ts_file_analyzer/tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts
Test file name: `tests/pass-only/conformance/internalModules/DeclarationMerging/ModuleAndEnumWithSameNameAndCommonRoot/.1.ts`
---
I (`@kdy1`) may expand this in the future.
If you want to work on this issue, please ping `@kdy1` in the comment.
---
This issue is created by sync script.
|
test
|
fix unit test tests pass only conformance internalmodules declarationmerging moduleandenumwithsamenameandcommonroot ts related test input test file name tests pass only conformance internalmodules declarationmerging moduleandenumwithsamenameandcommonroot ts i may expand this in the future if you want to work on this issue please ping in the comment this issue is created by sync script
| 1
|
266,906
| 28,480,261,605
|
IssuesEvent
|
2023-04-18 01:39:00
|
artsking/linux-4.19.72
|
https://api.github.com/repos/artsking/linux-4.19.72
|
opened
|
CVE-2023-30772 (Medium) detected in linux-yoctov5.4.51
|
Mend: dependency security vulnerability
|
## CVE-2023-30772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.19.72/commit/8519fe4185f1a7567a708f01b476f195b0f1046c">8519fe4185f1a7567a708f01b476f195b0f1046c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/power/supply/da9150-charger.c if a physically proximate attacker unplugs a device.
<p>Publish Date: 2023-04-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-30772>CVE-2023-30772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-30772">https://www.linuxkernelcves.com/cves/CVE-2023-30772</a></p>
<p>Release Date: 2023-04-16</p>
<p>Fix Resolution: v4.14.312,v4.19.280,v5.4.240,v5.10.177,v5.15.105,v6.1.22,v6.2.9,v6.3-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2023-30772 (Medium) detected in linux-yoctov5.4.51 - ## CVE-2023-30772 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.19.72/commit/8519fe4185f1a7567a708f01b476f195b0f1046c">8519fe4185f1a7567a708f01b476f195b0f1046c</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/power/supply/da9150-charger.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/power/supply/da9150-charger.c if a physically proximate attacker unplugs a device.
<p>Publish Date: 2023-04-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-30772>CVE-2023-30772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-30772">https://www.linuxkernelcves.com/cves/CVE-2023-30772</a></p>
<p>Release Date: 2023-04-16</p>
<p>Fix Resolution: v4.14.312,v4.19.280,v5.4.240,v5.10.177,v5.15.105,v6.1.22,v6.2.9,v6.3-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in linux cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers power supply charger c drivers power supply charger c drivers power supply charger c vulnerability details the linux kernel before has a race condition and resultant use after free in drivers power supply charger c if a physically proximate attacker unplugs a device publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
5,527
| 2,945,354,555
|
IssuesEvent
|
2015-07-03 12:24:03
|
netty/netty
|
https://api.github.com/repos/netty/netty
|
closed
|
Example code in ByteBuf 4.0 javadoc uses 3.x API
|
documentation
|
https://netty.io/4.0/api/io/netty/buffer/ByteBuf.html
// Iterates the readable bytes of a buffer.
ByteBuf buffer = ...;
while (buffer.readable()) {
System.out.println(buffer.readByte());
}
But readable() was changed to isReadable() in 4.0
|
1.0
|
Example code in ByteBuf 4.0 javadoc uses 3.x API - https://netty.io/4.0/api/io/netty/buffer/ByteBuf.html
// Iterates the readable bytes of a buffer.
ByteBuf buffer = ...;
while (buffer.readable()) {
System.out.println(buffer.readByte());
}
But readable() was changed to isReadable() in 4.0
|
non_test
|
example code in bytebuf javadoc uses x api iterates the readable bytes of a buffer bytebuf buffer while buffer readable system out println buffer readbyte but readable was changed to isreadable in
| 0
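The API change noted in the record above (`readable()` was renamed to `isReadable()` in Netty 4.0) can be illustrated with a minimal Python analogue of ByteBuf's reader-index bookkeeping; the `MiniByteBuf` class below is purely hypothetical and not part of Netty.

```python
class MiniByteBuf:
    """Toy analogue of Netty's ByteBuf read-index bookkeeping (illustrative only)."""

    def __init__(self, data: bytes):
        self._data = data
        self._reader_index = 0

    def is_readable(self) -> bool:
        # Netty 4.x name; Netty 3.x called this readable()
        return self._reader_index < len(self._data)

    def read_byte(self) -> int:
        b = self._data[self._reader_index]
        self._reader_index += 1
        return b

# Iterate the readable bytes of a buffer, as in the javadoc example.
buf = MiniByteBuf(b"\x01\x02\x03")
read = []
while buf.is_readable():
    read.append(buf.read_byte())
print(read)  # -> [1, 2, 3]
```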
|
351,051
| 31,933,580,443
|
IssuesEvent
|
2023-09-19 09:01:44
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix activations.test_tensorflow_relu
|
TensorFlow Frontend Sub Task Failing Test
|
| | |
|---|---|
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix activations.test_tensorflow_relu - | | |
|---|---|
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/6200560441/job/16835488067"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix activations test tensorflow relu paddle a href src numpy a href src jax a href src tensorflow a href src torch a href src
| 1
|
66,159
| 12,729,129,570
|
IssuesEvent
|
2020-06-25 04:54:10
|
nopSolutions/nopCommerce
|
https://api.github.com/repos/nopSolutions/nopCommerce
|
opened
|
Add database type mapping for int64/long
|
refactoring / source code
|
There is no type mapping in MigrationManager.cs for long type.
A line should be added to _typeMapping :
[typeof(long)] = c => c.AsInt64(),
Source: https://www.nopcommerce.com/en/boards/topic/82849/43-missing-database-type-mapping-for-int64long
|
1.0
|
Add database type mapping for int64/long - There is no type mapping in MigrationManager.cs for long type.
A line should be added to _typeMapping :
[typeof(long)] = c => c.AsInt64(),
Source: https://www.nopcommerce.com/en/boards/topic/82849/43-missing-database-type-mapping-for-int64long
|
non_test
|
add database type mapping for long there is no type mapping in migrationmanager cs for long type a line should be added to typemapping c c source
| 0
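The suggested fix in the record above adds one entry to a type-to-column-builder dictionary. As a rough, hypothetical Python analogue of `_typeMapping` (the real code is C# in MigrationManager.cs), the missing 64-bit mapping amounts to one extra key:

```python
# Hypothetical analogue: type names map to functions that stamp a column
# definition with the corresponding database type.
type_mapping = {
    "int32": lambda col: dict(col, db_type="INT"),
    "string": lambda col: dict(col, db_type="NVARCHAR(MAX)"),
}

# Equivalent of the reported fix `[typeof(long)] = c => c.AsInt64()`:
type_mapping["int64"] = lambda col: dict(col, db_type="BIGINT")

column = type_mapping["int64"]({"name": "big_counter"})
print(column)  # {'name': 'big_counter', 'db_type': 'BIGINT'}
```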
|
746,752
| 26,043,821,852
|
IssuesEvent
|
2022-12-22 12:46:59
|
telerik/kendo-ui-core
|
https://api.github.com/repos/telerik/kendo-ui-core
|
closed
|
Wrong dropdown width when set through list.width
|
Bug C: DropDownList SEV: Low jQuery Priority 5 FP: Planned
|
### Bug report
When the width of the DropDownList is set using the list.width() and the DropDownList gets opened there is a slight delay in expanding the popup width.
### Reproduction of the problem
1. Open the [Dojo](https://dojo.telerik.com/@NeliKondova/EHobUqOV) and open the DropDownList
### Current behavior
There is a delay in expanding the popup width.
#### The issue is a regression starting with 2022 R1 SP1
#### Workaround
` $('.k-popup').width(600);
`
[Dojo](https://dojo.telerik.com/@NeliKondova/OgENipOl)
### Expected/desired behavior
There should be no delay when the width of the DropDownList is set.
### Environment
* **Kendo UI version:** 2022.2.802
* **Browser:** [all ]
|
1.0
|
Wrong dropdown width when set through list.width - ### Bug report
When the width of the DropDownList is set using the list.width() and the DropDownList gets opened there is a slight delay in expanding the popup width.
### Reproduction of the problem
1. Open the [Dojo](https://dojo.telerik.com/@NeliKondova/EHobUqOV) and open the DropDownList
### Current behavior
There is a delay in expanding the popup width.
#### The issue is a regression starting with 2022 R1 SP1
#### Workaround
` $('.k-popup').width(600);
`
[Dojo](https://dojo.telerik.com/@NeliKondova/OgENipOl)
### Expected/desired behavior
There should be no delay when the width of the DropDownList is set.
### Environment
* **Kendo UI version:** 2022.2.802
* **Browser:** [all ]
|
non_test
|
wrong dropdown width when set through list width bug report when the width of the dropdownlist is set using the list width and the dropdownlist gets opened there is a slight delay in expanding the popup width reproduction of the problem open the and open the dropdownlist current behavior there is a delay in expanding the popup width the issue is a regression starting with workaround k popup width expected desired behavior there should be no delay when the width od the dropdownlist is set environment kendo ui version browser
| 0
|
69,488
| 7,135,858,897
|
IssuesEvent
|
2018-01-23 03:25:47
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
Minor verbose issues during testing
|
fix:minor testing
|
Issue visible on [Travis](https://travis-ci.org/neuropoly/spinalcordtoolbox/jobs/330619681).
~~~
Checking sct_get_centerline........................./home/travis/build/neuropoly/spinalcordtoolbox/python/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
[OK]
Checking sct_image..................................[OK]
Checking sct_label_utils............................sct_label_utils -i test_centerofmass.nii.gz -display # in /tmp/sct-180119010104-sct_label_utils_sct_testing_data-6ABsWw
[OK]
~~~
|
1.0
|
Minor verbose issues during testing - Issue visible on [Travis](https://travis-ci.org/neuropoly/spinalcordtoolbox/jobs/330619681).
~~~
Checking sct_get_centerline........................./home/travis/build/neuropoly/spinalcordtoolbox/python/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
[OK]
Checking sct_image..................................[OK]
Checking sct_label_utils............................sct_label_utils -i test_centerofmass.nii.gz -display # in /tmp/sct-180119010104-sct_label_utils_sct_testing_data-6ABsWw
[OK]
~~~
|
test
|
minor verbose issues during testing issue visible on checking sct get centerline home travis build neuropoly spinalcordtoolbox python lib site packages matplotlib font manager py userwarning matplotlib is building the font cache using fc list this may take a moment warnings warn matplotlib is building the font cache using fc list this may take a moment checking sct image checking sct label utils sct label utils i test centerofmass nii gz display in tmp sct sct label utils sct testing data
| 1
|
8,692
| 3,779,507,633
|
IssuesEvent
|
2016-03-18 08:42:48
|
OData/odata.net
|
https://api.github.com/repos/OData/odata.net
|
closed
|
Strongname validation failure in a community build
|
3 - Resolved (code ready)
|
Enlist the ODataLib repo in a newly installed VS2013 computer, open Microsoft.OData.Lite.sln, build and run unit tests.
Unit tests failed with strong name validation.
To reduce the friction of community contributions:
* Skip strong name validation with a myget package for non-official build. Refer to the WebApi repo.
https://github.com/OData/WebApi/blob/master/OData/WebApiOData.msbuild
* Do not enable strong name validation if it is not a Release and official build. Refer to the RESTier repo.
https://github.com/OData/RESTier/commit/a68c779d184f028e896b3e572269ba100728e87d
|
1.0
|
Strongname validation failure in a community build - Enlist the ODataLib repo in a newly installed VS2013 computer, open Microsoft.OData.Lite.sln, build and run unit tests.
Unit tests failed with strong name validation.
To reduce the friction of community contributions:
* Skip strong name validation with a myget package for non-official build. Refer to the WebApi repo.
https://github.com/OData/WebApi/blob/master/OData/WebApiOData.msbuild
* Do not enable strong name validation if it is not a Release and official build. Refer to the RESTier repo.
https://github.com/OData/RESTier/commit/a68c779d184f028e896b3e572269ba100728e87d
|
non_test
|
strongname validation failure in a community build enlist the odatalib repo in a newly installed computer open microsoft odata lite sln build and run unit tests unit tests failed with strong name validation to reduce the friction of community contributions skip strong name validation with a myget package for non official build refer to the webapi repo do not enable strong name validation if it is not a release and official build refer to the restier repo
| 0
|
274,899
| 23,877,721,407
|
IssuesEvent
|
2022-09-07 20:49:56
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
[test-triage] partner_flash_ctrl
|
Component:TestTriage
|
### Hierarchy of regression failure
Block level
### Failure Description
```
* `Error-[XMRE] Cross-module reference resolution error` has 1 failures:
* Test default has 1 failures.
* default\
Line 1809, in log /workspaces/repo/scratch/master/flash_ctrl-sim-vcs/default/build.log
Error-[XMRE] Cross-module reference resolution error
../src/lowrisc_dv_flash_ctrl_sim_0.1/tb/tb.sv, 342
Error found while trying to resolve cross-module reference.
token 'gen_generic'. Originating module 'tb'.
Source info: uvm_config_db::set(null, "*.env", "flash_ctrl_mem_vif[0]",
```
### Steps to Reproduce
- Commit hash where failure was observed
`620c8c0c07dc0ab05448c0e1ad707d82072f58da`
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`dvsim hw/foundry/ip/flash_ctrl/dv/partner_flash_ctrl_sim_cfg.hjson -i partner_flash_ctrl_init_busy`
- Kokoro build number if applicable
### Tests with similar or related failures
_No response_
|
1.0
|
[test-triage] partner_flash_ctrl - ### Hierarchy of regression failure
Block level
### Failure Description
```
* `Error-[XMRE] Cross-module reference resolution error` has 1 failures:
* Test default has 1 failures.
* default\
Line 1809, in log /workspaces/repo/scratch/master/flash_ctrl-sim-vcs/default/build.log
Error-[XMRE] Cross-module reference resolution error
../src/lowrisc_dv_flash_ctrl_sim_0.1/tb/tb.sv, 342
Error found while trying to resolve cross-module reference.
token 'gen_generic'. Originating module 'tb'.
Source info: uvm_config_db::set(null, "*.env", "flash_ctrl_mem_vif[0]",
```
### Steps to Reproduce
- Commit hash where failure was observed
`620c8c0c07dc0ab05448c0e1ad707d82072f58da`
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
`dvsim hw/foundry/ip/flash_ctrl/dv/partner_flash_ctrl_sim_cfg.hjson -i partner_flash_ctrl_init_busy`
- Kokoro build number if applicable
### Tests with similar or related failures
_No response_
|
test
|
partner flash ctrl hierarchy of regression failure block level failure description error cross module reference resolution error has failures test default has failures default line in log workspaces repo scratch master flash ctrl sim vcs default build log error cross module reference resolution error src lowrisc dv flash ctrl sim tb tb sv error found while trying to resolve cross module reference token gen generic originating module tb source info uvm config db set null env flash ctrl mem vif steps to reproduce commit hash where failure was observed dvsim invocation command to reproduce the failure inclusive of build and run seeds dvsim hw foundry ip flash ctrl dv partner flash ctrl sim cfg hjson i partner flash ctrl init busy kokoro build number if applicable tests with similar or related failures no response
| 1
|
54,294
| 6,378,172,767
|
IssuesEvent
|
2017-08-02 12:03:29
|
astropy/astropy
|
https://api.github.com/repos/astropy/astropy
|
closed
|
FAILURES: TestUtilMode.test_mode_pil_image -- conda update astropy
|
Close? Duplicate io.fits testing
|
I have updated my astropy version running
```
> conda update astropy
```
Then, to test the installed version of astropy I ran the function astropy.test() and I got this error:
```
========================================================================== FAILURES ==========================================================================
______________________________________________________________ TestUtilMode.test_mode_pil_image ______________________________________________________________
self = <astropy.io.fits.tests.test_util.TestUtilMode object at 0x1181cb190>
@pytest.mark.skipif("not HAS_PIL")
def test_mode_pil_image(self):
img = np.random.randint(0, 255, (5, 5, 3)).astype(np.uint8)
result = Image.fromarray(img)
result.save(self.temp('test_simple.jpg'))
# PIL doesn't support append mode. So it will allways use binary read.
> with Image.open(self.temp('test_simple.jpg')) as fileobj:
anaconda/lib/python2.7/site-packages/astropy/io/fits/tests/test_util.py:91:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=5x5 at 0x112DBF8C0>, name = '__exit__'
def __getattr__(self, name):
if name == "__array_interface__":
# numpy array interface support
new = {}
shape, typestr = _conv_type_shape(self)
new['shape'] = shape
new['typestr'] = typestr
new['data'] = self.tostring()
return new
> raise AttributeError(name)
E AttributeError: __exit__
anaconda/lib/python2.7/site-packages/PIL/Image.py:512: AttributeError
======================================== 1 failed, 11357 passed, 177 skipped, 43 xfailed, 3 xpassed in 387.18 seconds ========================================
1
```
Is there something I can do to fix the problem?
|
1.0
|
FAILURES: TestUtilMode.test_mode_pil_image -- conda update astropy - I have updated my astropy version running
```
> conda update astropy
```
Then, to test the installed version of astropy I ran the function astropy.test() and I got this error:
```
========================================================================== FAILURES ==========================================================================
______________________________________________________________ TestUtilMode.test_mode_pil_image ______________________________________________________________
self = <astropy.io.fits.tests.test_util.TestUtilMode object at 0x1181cb190>
@pytest.mark.skipif("not HAS_PIL")
def test_mode_pil_image(self):
img = np.random.randint(0, 255, (5, 5, 3)).astype(np.uint8)
result = Image.fromarray(img)
result.save(self.temp('test_simple.jpg'))
# PIL doesn't support append mode. So it will allways use binary read.
> with Image.open(self.temp('test_simple.jpg')) as fileobj:
anaconda/lib/python2.7/site-packages/astropy/io/fits/tests/test_util.py:91:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=5x5 at 0x112DBF8C0>, name = '__exit__'
def __getattr__(self, name):
if name == "__array_interface__":
# numpy array interface support
new = {}
shape, typestr = _conv_type_shape(self)
new['shape'] = shape
new['typestr'] = typestr
new['data'] = self.tostring()
return new
> raise AttributeError(name)
E AttributeError: __exit__
anaconda/lib/python2.7/site-packages/PIL/Image.py:512: AttributeError
======================================== 1 failed, 11357 passed, 177 skipped, 43 xfailed, 3 xpassed in 387.18 seconds ========================================
1
```
Is there something I can do to fix the problem?
|
test
|
failures testutilmode test mode pil image conda update astropy i have updated my astropy version running conda update astropy then to test the installed version of astropy i ran the function astropy test and i got this error failures testutilmode test mode pil image self pytest mark skipif not has pil def test mode pil image self img np random randint astype np result image fromarray img result save self temp test simple jpg pil doesn t support append mode so it will allways use binary read with image open self temp test simple jpg as fileobj anaconda lib site packages astropy io fits tests test util py self name exit def getattr self name if name array interface numpy array interface support new shape typestr conv type shape self new shape new typestr new self tostring return new raise attributeerror name e attributeerror exit anaconda lib site packages pil image py attributeerror failed passed skipped xfailed xpassed in seconds is there something i can do to fix the problem
| 1
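The `AttributeError: __exit__` in the record above comes from an old PIL image object that has `close()` but no `__enter__`/`__exit__`, so it cannot be used directly in a with-statement. A generic workaround for such objects is `contextlib.closing`; the `LegacyImage` class below is a hypothetical stand-in, not PIL itself.

```python
from contextlib import closing

class LegacyImage:
    """Stand-in for an old PIL image object that has close() but no
    __enter__/__exit__, so `with LegacyImage():` would raise AttributeError."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

img = LegacyImage()
# contextlib.closing supplies the missing context-manager protocol
# and guarantees close() runs when the block exits.
with closing(img) as fileobj:
    assert not fileobj.closed
print(img.closed)  # -> True
```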
|
75,472
| 7,473,148,080
|
IssuesEvent
|
2018-04-03 14:37:31
|
EnMasseProject/enmasse
|
https://api.github.com/repos/EnMasseProject/enmasse
|
opened
|
system-tests: new test for auto scaleup after manual scaledown
|
component/systemtests test development
|
1. create a plan for queues which consumes 0.5 broker resource
2. create queue1, queue2, queue3, queue4
3. send 1000 messages into each of the queues above
4. manually scale the broker StatefulSet to 1 pod
5. wait until the StatefulSet is automatically scaled up to 2 pods
6. try to receive messages from all queues above
|
2.0
|
system-tests: new test for auto scaleup after manual scaledown - 1. create a plan for queues which consumes 0.5 broker resource
2. create queue1, queue2, queue3, queue4
3. send 1000 messages into each of the queues above
4. manually scale the broker StatefulSet to 1 pod
5. wait until the StatefulSet is automatically scaled up to 2 pods
6. try to receive messages from all queues above
|
test
|
system tests new test for auto scaleup after manual scaledown create plan for queue which consume broker resource create create send messages into each of queues above manually scale broker statefulset to pod wait until statefulset will be automatically scaled up to pods try to receive mesages from all queues above
| 1
|
61,267
| 6,731,480,977
|
IssuesEvent
|
2017-10-18 07:48:25
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
OL Feature Info marker disappear on second click
|
bug OL3 Priority: High Tested
|
FeatureInfo with OpenLayers map changes marker on second click.
### Steps to reproduce:
- Open an OpenLayers map (with at least one layer)
- Enable feature info tool
- Click on the map. You see the marker
- Click again in another point. Now you see a red circle instead of marker
|
1.0
|
OL Feature Info marker disappear on second click - FeatureInfo with OpenLayers map changes marker on second click.
### Steps to reproduce:
- Open an OpenLayers map (with at least one layer)
- Enable feature info tool
- Click on the map. You see the marker
- Click again in another point. Now you see a red circle instead of marker
|
test
|
ol feature info marker disappear on second click featureinfo with openlayers map changes marker on second click steps to reproduce open an openlayers map with at least one layer enable feature info tool click on the map you see the marker click again in another point now you see a red circle instead of marker
| 1
|
35,413
| 4,974,743,936
|
IssuesEvent
|
2016-12-06 08:05:42
|
wangding/courses
|
https://api.github.com/repos/wangding/courses
|
closed
|
13.3 Submit the black-box test design approach
|
Testing Learning
|
Program under test: ProcessOn file and folder management
Obtain the program's "requirements specification" through online use and exploration, paying attention to two questions:
what functions file management provides;
what functions folder management provides;
Design tool: ProcessOn online mind map
Design methods: equivalence classes, boundary values, etc.
Task requirements: use the mind-map tool to design black-box test cases for the program under test, and publish the finished mind map;
In the issue-update description, paste the URL of the published ProcessOn mind map.
|
1.0
|
13.3 Submit black-box test design ideas - Program under test: ProcessOn file and folder management
Obtain the program's "requirements specification" through online use and exploration; note two questions:
What features does file management offer;
What features does folder management offer;
Design tool: ProcessOn online mind map
Design methods: equivalence classes, boundary values, etc.
Task requirements: use the mind-map tool to design black-box test cases for the program under test, and publish the finished mind map;
In the updated issue description, paste the URL of the mind map published on ProcessOn.
|
test
|
submit black box test design ideas program under test processon file and folder management obtain the program s requirements specification through online use and exploration note two questions what features does file management offer what features does folder management offer design tool processon online mind map design methods equivalence classes boundary values etc task requirements use the mind map tool to design black box test cases for the program under test and publish the finished mind map in the updated issue description paste the url of the mind map published on processon
| 1
|
340,672
| 30,535,373,816
|
IssuesEvent
|
2023-07-19 16:56:58
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix paddle_tensor.test_paddle_instance_bitwise_xor
|
Sub Task Failing Test Paddle Frontend
|
| | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5599074213"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5600789863"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5600669312"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix paddle_tensor.test_paddle_instance_bitwise_xor - | | |
|---|---|
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5599074213"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5600789863"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5600669312"><img src=https://img.shields.io/badge/-success-success></a>
|
test
|
fix paddle tensor test paddle instance bitwise xor numpy a href src jax a href src torch a href src
| 1
|
7,877
| 2,938,761,815
|
IssuesEvent
|
2015-07-01 12:56:55
|
stan-dev/stan
|
https://api.github.com/repos/stan-dev/stan
|
closed
|
unnormalized min/max tests failing for I/O on Intel compilers
|
Bug testing
|
There's a normalization issue with the icpc compiler, which Ben Goodrich tracked down:
Message 0:
I now have an icpc version 15. I am getting a test failure on this branch at
```
[ RUN ] io_dump.reader_big_doubles
unknown file: Failure
C++ exception with description "data b value 2.22507e-308 beyond numeric range" thrown in the test body.
```
Message 1:
Yes, changing dmin to lowest() passes in C++11 mode, as does changing dmin
to -dmax without C++11.
Message 2:
Right now, it is
```
double dmax = std::numeric_limits<double>::max();
double dmin = std::numeric_limits<double>::min();
```
which fails, but changing dmin to either
```
double dmin = std::numeric_limits<double>::lowest();
```
or
```
double dmin = -dmax;
```
passes. So, I think it does have something to do with the normalization.
Message 3:
OK. This passes:
```
double dmin = 2.225075e-308;
```
whereas
```
double dmin = std::numeric_limits<double>::min(); // about 2.22507_4_e-308
```
fails. So, it looks as if icpc can't handle that edge case.
|
1.0
|
unnormalized min/max tests failing for I/O on Intel compilers - There's a normalization issue with the icpc compiler, which Ben Goodrich tracked down:
Message 0:
I now have an icpc version 15. I am getting a test failure on this branch at
```
[ RUN ] io_dump.reader_big_doubles
unknown file: Failure
C++ exception with description "data b value 2.22507e-308 beyond numeric range" thrown in the test body.
```
Message 1:
Yes, changing dmin to lowest() passes in C++11 mode, as does changing dmin
to -dmax without C++11.
Message 2:
Right now, it is
```
double dmax = std::numeric_limits<double>::max();
double dmin = std::numeric_limits<double>::min();
```
which fails, but changing dmin to either
```
double dmin = std::numeric_limits<double>::lowest();
```
or
```
double dmin = -dmax;
```
passes. So, I think it does have something to do with the normalization.
Message 3:
OK. This passes:
```
double dmin = 2.225075e-308;
```
whereas
```
double dmin = std::numeric_limits<double>::min(); // about 2.22507_4_e-308
```
fails. So, it looks as if icpc can't handle that edge case.
|
test
|
unnormalized min max tests failing for i o on intel compilers there s noramlization issue with the icpc compiler which ben goodrich tracked down message i now have an icpc version i am getting a test failure on this branch at io dump reader big doubles unknown file failure c exception with description data b value beyond numeric range thrown in the test body message yes changing dmin to lowest passes in c mode as does changing dmin to dmax without c message right now it is double dmax std numeric limits max double dmin std numeric limits min which fails but changing dmin to either double dmin std numeric limits lower or double dmin dmax passes so i think it does have something to do with the normalization message ok this passes double dmin whereas double dmin std numeric limits min about e fails so it looks as if icpc can t handle that edge case
| 1
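The Stan record above turns on a common C++ pitfall: `std::numeric_limits<double>::min()` is the smallest *positive* normalized double, not the most negative value; `lowest()` (or `-max()`) is. The sketch below illustrates the same IEEE-754 limits via Python's `sys.float_info`; it is an illustrative analogue, not the Stan test code.

```python
import sys

# sys.float_info.max/min mirror C++'s numeric_limits<double>::max()/min():
# min() is the tiniest positive normalized double, so using it as a "lower
# bound" silently excludes all negative numbers -- the confusion behind the
# icpc test failure quoted above.
dmax = sys.float_info.max          # largest finite double
dmin_cpp_min = sys.float_info.min  # analogue of numeric_limits<double>::min()
dmin_lowest = -dmax                # analogue of numeric_limits<double>::lowest()

assert dmin_cpp_min > 0.0          # min() is positive, near 2.22507e-308
assert dmin_lowest < dmin_cpp_min  # lowest() is the true most-negative value
print(dmin_cpp_min)                # 2.2250738585072014e-308
```

The quoted fix (`dmin = -dmax` or `lowest()`) works precisely because both give the most-negative representable double rather than the smallest positive one.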
|
365,056
| 25,518,963,824
|
IssuesEvent
|
2022-11-28 18:44:01
|
cloudflare/cloudflare-docs
|
https://api.github.com/repos/cloudflare/cloudflare-docs
|
closed
|
Add a note to mTLS page if using a custom Root CA
|
documentation content:edit
|
### Which Cloudflare product does this pertain to?
SSL
### Existing documentation URL(s)
https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/
### Section that requires update
Footnote addition.
### What needs to change?
The chapter doesn't differentiate cases when using a Cloudflare managed CA or a custom CA, which is done in the Zero Trust dashboard (at least as of nov 2022).
### How should it change?
Add a footnote: "if using your own CA, please check this documentation: https://developers.cloudflare.com/cloudflare-one/identity/devices/access-integrations/mutual-tls-authentication/"
### Additional information
_No response_
|
1.0
|
Add a note to mTLS page if using a custom Root CA - ### Which Cloudflare product does this pertain to?
SSL
### Existing documentation URL(s)
https://developers.cloudflare.com/ssl/client-certificates/enable-mtls/
### Section that requires update
Footnote addition.
### What needs to change?
The chapter doesn't differentiate cases when using a Cloudflare managed CA or a custom CA, which is done in the Zero Trust dashboard (at least as of nov 2022).
### How should it change?
Add a footnote: "if using your own CA, please check this documentation: https://developers.cloudflare.com/cloudflare-one/identity/devices/access-integrations/mutual-tls-authentication/"
### Additional information
_No response_
|
non_test
|
add a note to mtls page if using a custom root ca which cloudflare product does this pertain to ssl existing documentation url s section that requires update footnote addition what needs to change the chapter doesn t differentiate cases when using a cloudflare managed ca or a custom ca which is done in the zero trust dashboard at least as of nov how should it change add a footnote if using your own ca please check this documentation additional information no response
| 0
|
248,107
| 7,927,518,245
|
IssuesEvent
|
2018-07-06 08:20:19
|
canonical-websites/www.ubuntu.com
|
https://api.github.com/repos/canonical-websites/www.ubuntu.com
|
closed
|
Global nav - Enterprise dropdown- Kubernetes links are not correct - DEV
|
Priority: High
|
please check the [copydoc](https://docs.google.com/document/d/1YBdQvLuqEpEQr_QqyycxMhZr4OaLapMlNJyYKCmPJsQ/edit)

|
1.0
|
Global nav - Enterprise dropdown- Kubernetes links are not correct - DEV - please check the [copydoc](https://docs.google.com/document/d/1YBdQvLuqEpEQr_QqyycxMhZr4OaLapMlNJyYKCmPJsQ/edit)

|
non_test
|
global nav enterprise dropdown kubernetes links are not correct dev please check the
| 0
|
696,184
| 23,889,256,916
|
IssuesEvent
|
2022-09-08 10:07:33
|
Kong/kubernetes-ingress-controller
|
https://api.github.com/repos/Kong/kubernetes-ingress-controller
|
closed
|
Remove support for v1alpha2 GatewayClass,Gateway,HTTPRoute
|
priority/high area/gateway-api area/maintenance
|
### Problem Statement
Going forward there's no reason to maintain compatibility with the `v1alpha2` versions of `GatewayClass`, `Gateway`, and `HTTPRoute`, and in fact the [storage version for the APIs is now v1beta1 which stops new v1alpha2 resources from being usable (old resources remain usable but should be upgraded)](https://github.com/kubernetes-sigs/gateway-api/commit/5f296d8fd04e22dba8a3a84ffb6dce106714a837).
### Proposed Solution
- [x] #2890
- [x] #2899
- [x] #2895
### Acceptance Criteria
- [x] references and code support for `v1alpha2.{GatewayClass,Gateway,HTTPRoute}` are removed
- [x] don't forget to update the `examples/`!
- [x] as a user I can deploy and manage `v1beta1.GatewayClass` resources
- [x] as a user I can deploy and manage `v1beta1.Gateway` resources
- [x] as a user I can deploy and manage `v1beta1.HTTPRoute` resources
- [ ] documentation uses `v1beta1`, does not reference `v1alpha2` for the applicable APIs ([docs#4407](https://github.com/Kong/docs.konghq.com/issues/4407))
|
1.0
|
Remove support for v1alpha2 GatewayClass,Gateway,HTTPRoute - ### Problem Statement
Going forward there's no reason to maintain compatibility with the `v1alpha2` versions of `GatewayClass`, `Gateway`, and `HTTPRoute`, and in fact the [storage version for the APIs is now v1beta1 which stops new v1alpha2 resources from being usable (old resources remain usable but should be upgraded)](https://github.com/kubernetes-sigs/gateway-api/commit/5f296d8fd04e22dba8a3a84ffb6dce106714a837).
### Proposed Solution
- [x] #2890
- [x] #2899
- [x] #2895
### Acceptance Criteria
- [x] references and code support for `v1alpha2.{GatewayClass,Gateway,HTTPRoute}` are removed
- [x] don't forget to update the `examples/`!
- [x] as a user I can deploy and manage `v1beta1.GatewayClass` resources
- [x] as a user I can deploy and manage `v1beta1.Gateway` resources
- [x] as a user I can deploy and manage `v1beta1.HTTPRoute` resources
- [ ] documentation uses `v1beta1`, does not reference `v1alpha2` for the applicable APIs ([docs#4407](https://github.com/Kong/docs.konghq.com/issues/4407))
|
non_test
|
remove support for gatewayclass gateway httproute problem statement going forward there s no reason to maintain compatibility with the versions of gatewayclass gateway and httproute and in fact the proposed solution acceptance criteria references and code support for gatewayclass gateway httproute are removed don t forget to update the examples as a user i can deploy and manage gatewayclass resources as a user i can deploy and manage gateway resources as a user i can deploy and manage httproute resources documentation uses does not reference for the applicable apis
| 0
|
266,547
| 23,245,189,408
|
IssuesEvent
|
2022-08-03 19:24:11
|
umee-network/umee
|
https://api.github.com/repos/umee-network/umee
|
opened
|
Leverage functional test1
|
T:Test
|
<!-- markdownlint-disable MD041 -->
## Test scenario
We use the genesis file from: https://github.com/umee-network/umee/issues/1206
* oracle prices: Umee: 0.02usd, Atom: 1usd, Juno: 0.5usd
* `a_1, ..., a_100` supply and collateralize 10'000umee. `a_101, ... a_200` supply and collateralize 1000atoms and 200juno.
* Everything in parallel:
* `a_1, ..., a_20` borrow 10atoms
* `a_21, ..., a_40` borrow 100atoms
* `a_41, ..., a_60` borrow 150atoms
* `a_61, ..., a_100` borrow 300juno
* price of atom grows to 2usd, price of juno grows to 0.7 usd.
* Need to liquidate whatever possible in parallel.... send many competing transactions.
* use queries to make sure things were completed
---
## For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
|
1.0
|
Leverage functional test1 - <!-- markdownlint-disable MD041 -->
## Test scenario
We use the genesis file from: https://github.com/umee-network/umee/issues/1206
* oracle prices: Umee: 0.02usd, Atom: 1usd, Juno: 0.5usd
* `a_1, ..., a_100` supply and collateralize 10'000umee. `a_101, ... a_200` supply and collateralize 1000atoms and 200juno.
* Everything in parallel:
* `a_1, ..., a_20` borrow 10atoms
* `a_21, ..., a_40` borrow 100atoms
* `a_41, ..., a_60` borrow 150atoms
* `a_61, ..., a_100` borrow 300juno
* price of atom grows to 2usd, price of juno grows to 0.7 usd.
* Need to liquidate whatever possible in parallel.... send many competing transactions.
* use queries to make sure things were completed
---
## For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
|
test
|
leverage functional test scenario we use the genesis file from oracle prices umee atom juno a a supply and collateralize a a supply and collateralize and everything in parallel a a borrow a a borrow a a borrow a a borrow price of atom grows to price of juno grows to usd need to liquidate whatever possible in parallel send many competing transactions use queries to make sure things were completed for admin use not duplicate issue appropriate labels applied appropriate contributors tagged contributor assigned self assigned
| 1
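The umee scenario above fixes prices and positions precisely enough to check its arithmetic. The sketch below works through the USD values for one borrower class; collateral weights and the liquidation threshold are placeholders, not umee's real parameters.

```python
# Prices from the quoted scenario, before and after the move.
PRICES_BEFORE = {"umee": 0.02, "atom": 1.0, "juno": 0.5}
PRICES_AFTER = {"umee": 0.02, "atom": 2.0, "juno": 0.7}

def usd_value(positions, prices):
    """USD value of a {denom: amount} position map at the given prices."""
    return sum(amount * prices[denom] for denom, amount in positions.items())

# a_1..a_20: collateralize 10'000 umee and borrow 10 atoms.
collateral = {"umee": 10_000}
borrowed = {"atom": 10}

assert usd_value(collateral, PRICES_BEFORE) == 200.0  # 10'000 * 0.02
assert usd_value(borrowed, PRICES_BEFORE) == 10.0
assert usd_value(borrowed, PRICES_AFTER) == 20.0      # atom doubles to 2usd
```

Whether any of these accounts actually become liquidatable depends on the chain's collateral factors, which the scenario leaves to the genesis file.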
|
359,348
| 25,232,680,713
|
IssuesEvent
|
2022-11-14 21:15:57
|
chef/chef-web-docs
|
https://api.github.com/repos/chef/chef-web-docs
|
closed
|
"[INTAKE]" Documentation for New K8s Deployment for Automate
|
Status: Untriaged Documentation Type: Docs Intake
|
# Documentation for New K8s Deployment for Automate
We need to document how Automate can be deployed in K8s environment.
We need to make it simple for customers to take advantage of their K8s Cluster to use Automate at scale.
## Product
- [x] Chef Automate
Brief product description: [3-4 lines]
We need to document how Automate can be deployed in K8s environment.
We need to make it simple for customers to take advantage of their K8s Cluster to use Automate at scale.
Product manager: Ankur Mundhra
|
1.0
|
"[INTAKE]" Documentation for New K8s Deployment for Automate - # Documentation for New K8s Deployment for Automate
We need to document how Automate can be deployed in K8s environment.
We need to make it simple for customers to take advantage of their K8s Cluster to use Automate at scale.
## Product
- [x] Chef Automate
Brief product description: [3-4 lines]
We need to document how Automate can be deployed in K8s environment.
We need to make it simple for customers to take advantage of their K8s Cluster to use Automate at scale.
Product manager: Ankur Mundhra
|
non_test
|
documentation for new deployment for automate documentation for new deployment for automate we need to document how automate can be deployed in environment we need to make it simple for customers to take advantage of their cluster to use automate at scale product chef automate brief product description we need to document how automate can be deployed in environment we need to make it simple for customers to take advantage of their cluster to use automate at scale product manager ankur mundhra
| 0
|
132,366
| 10,742,544,979
|
IssuesEvent
|
2019-10-29 22:54:28
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
teamcity: failed test: TestHashJoinerAgainstProcessor
|
C-test-failure O-robot
|
The following tests appear to have failed on master (test): TestHashJoinerAgainstProcessor
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestHashJoinerAgainstProcessor).
[#1563297](https://teamcity.cockroachdb.com/viewLog.html?buildId=1563297):
```
TestHashJoinerAgainstProcessor
--- FAIL: test/TestHashJoinerAgainstProcessor (4.130s)
------- Stdout: -------
--- join type = FULL_OUTER onExpr = "" filter = "" seed = 3960962367447138496 run = 2 ---
--- lEqCols = [0] rEqCols = [0] ---
--- inputTypes = [{{StringFamily 0 0 [] 0x65613d0 0 <nil> [] [] 19 <nil>}}] ---
columnar_operators_test.go:267: unexpected meta &{Ranges:[] Err:assertion failure
- error with attached stack trace:
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError.func1
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:77
runtime.gopanic
/usr/local/go/src/runtime/panic.go:522
github.com/cockroachdb/cockroach/pkg/col/coldata.(*Bytes).UpdateOffsetsToBeNonDecreasing
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/bytes.go:75
github.com/cockroachdb/cockroach/pkg/col/coldata.(*MemBatch).SetLength
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/batch.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).emitUnmatched
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:330
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:267
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:149
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).nextAdapter
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:91
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:177
github.com/cockroachdb/cockroach/pkg/sql/distsql.verifyColOperator
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_utils_test.go:119
github.com/cockroachdb/cockroach/pkg/sql/distsql.TestHashJoinerAgainstProcessor
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_operators_test.go:256
testing.tRunner
/usr/local/go/src/testing/testing.go:865
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
- error with embedded safe details: unexpected error from the vectorized runtime: %+v
-- arg 1: <string>
- unexpected error from the vectorized runtime: unexpectedly found a decreasing non-zero offset: previous max=27, found=7 TraceData:[] TxnCoordMeta:<nil> RowNum:<nil> SamplerProgress:<nil> BulkProcessorProgress:<nil> Metrics:<nil>} from columnar operator
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed test: TestHashJoinerAgainstProcessor - The following tests appear to have failed on master (test): TestHashJoinerAgainstProcessor
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestHashJoinerAgainstProcessor).
[#1563297](https://teamcity.cockroachdb.com/viewLog.html?buildId=1563297):
```
TestHashJoinerAgainstProcessor
--- FAIL: test/TestHashJoinerAgainstProcessor (4.130s)
------- Stdout: -------
--- join type = FULL_OUTER onExpr = "" filter = "" seed = 3960962367447138496 run = 2 ---
--- lEqCols = [0] rEqCols = [0] ---
--- inputTypes = [{{StringFamily 0 0 [] 0x65613d0 0 <nil> [] [] 19 <nil>}}] ---
columnar_operators_test.go:267: unexpected meta &{Ranges:[] Err:assertion failure
- error with attached stack trace:
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError.func1
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:77
runtime.gopanic
/usr/local/go/src/runtime/panic.go:522
github.com/cockroachdb/cockroach/pkg/col/coldata.(*Bytes).UpdateOffsetsToBeNonDecreasing
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/bytes.go:75
github.com/cockroachdb/cockroach/pkg/col/coldata.(*MemBatch).SetLength
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/batch.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).emitUnmatched
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:330
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:267
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:149
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).nextAdapter
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:91
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:177
github.com/cockroachdb/cockroach/pkg/sql/distsql.verifyColOperator
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_utils_test.go:119
github.com/cockroachdb/cockroach/pkg/sql/distsql.TestHashJoinerAgainstProcessor
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_operators_test.go:256
testing.tRunner
/usr/local/go/src/testing/testing.go:865
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
- error with embedded safe details: unexpected error from the vectorized runtime: %+v
-- arg 1: <string>
- unexpected error from the vectorized runtime: unexpectedly found a decreasing non-zero offset: previous max=27, found=7 TraceData:[] TxnCoordMeta:<nil> RowNum:<nil> SamplerProgress:<nil> BulkProcessorProgress:<nil> Metrics:<nil>} from columnar operator
```
Please assign, take a look and update the issue accordingly.
|
test
|
teamcity failed test testhashjoineragainstprocessor the following tests appear to have failed on master test testhashjoineragainstprocessor you may want to check testhashjoineragainstprocessor fail test testhashjoineragainstprocessor stdout join type full outer onexpr filter seed run leqcols reqcols inputtypes columnar operators test go unexpected meta ranges err assertion failure error with attached stack trace github com cockroachdb cockroach pkg sql colexec execerror catchvectorizedruntimeerror go src github com cockroachdb cockroach pkg sql colexec execerror error go runtime gopanic usr local go src runtime panic go github com cockroachdb cockroach pkg col coldata bytes updateoffsetstobenondecreasing go src github com cockroachdb cockroach pkg col coldata bytes go github com cockroachdb cockroach pkg col coldata membatch setlength go src github com cockroachdb cockroach pkg col coldata batch go github com cockroachdb cockroach pkg sql colexec hashjoineqop emitunmatched go src github com cockroachdb cockroach pkg sql colexec hashjoiner go github com cockroachdb cockroach pkg sql colexec hashjoineqop next go src github com cockroachdb cockroach pkg sql colexec hashjoiner go github com cockroachdb cockroach pkg sql colexec materializer next go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql colexec materializer nextadapter go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql colexec execerror catchvectorizedruntimeerror go src github com cockroachdb cockroach pkg sql colexec execerror error go github com cockroachdb cockroach pkg sql colexec materializer next go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql distsql verifycoloperator go src github com cockroachdb cockroach pkg sql distsql columnar utils test go github com cockroachdb cockroach pkg sql distsql 
testhashjoineragainstprocessor go src github com cockroachdb cockroach pkg sql distsql columnar operators test go testing trunner usr local go src testing testing go runtime goexit usr local go src runtime asm s error with embedded safe details unexpected error from the vectorized runtime v arg unexpected error from the vectorized runtime unexpectedly found a decreasing non zero offset previous max found tracedata txncoordmeta rownum samplerprogress bulkprocessorprogress metrics from columnar operator please assign take a look and update the issue accordingly
| 1
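The CockroachDB panic above ("unexpectedly found a decreasing non-zero offset: previous max=27, found=7") concerns a flat bytes column: all values live in one buffer, and an offsets array marks value boundaries, so non-zero offsets must be non-decreasing. The checker below is a hypothetical sketch of that invariant; the function name and zero-means-unset convention are illustrative, not CockroachDB's actual implementation.

```python
def check_offsets_nondecreasing(offsets):
    """Raise ValueError if any non-zero offset is below the running max."""
    prev_max = 0
    for off in offsets:
        if off == 0:
            continue  # treat zero entries as "not yet set"
        if off < prev_max:
            raise ValueError(
                f"decreasing non-zero offset: previous max={prev_max}, found={off}"
            )
        prev_max = off

check_offsets_nondecreasing([0, 3, 3, 0, 7])  # valid layout, no error
try:
    check_offsets_nondecreasing([0, 27, 7])   # mirrors the failure in the trace
except ValueError as exc:
    print(exc)  # decreasing non-zero offset: previous max=27, found=7
```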
|
243,172
| 26,277,952,563
|
IssuesEvent
|
2023-01-07 01:35:11
|
venkateshreddypala/post-it-a4
|
https://api.github.com/repos/venkateshreddypala/post-it-a4
|
opened
|
WS-2018-0650 (High) detected in useragent-2.1.13.tgz
|
security vulnerability
|
## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.1.13.tgz</b></p></summary>
<p>Fastest, most accurate & efficient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz">https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz</a></p>
<p>Path to dependency file: /post-it-a4/package.json</p>
<p>Path to vulnerable library: /node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.0.tgz (Root Library)
- :x: **useragent-2.1.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/venkateshreddypala/post-it-a4/commits/c71d001cc56ebfa721446ecdffd026c4e7337310">c71d001cc56ebfa721446ecdffd026c4e7337310</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p>
<p>Release Date: 2018-02-27</p>
<p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2018-0650 (High) detected in useragent-2.1.13.tgz - ## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.1.13.tgz</b></p></summary>
<p>Fastest, most accurate & efficient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz">https://registry.npmjs.org/useragent/-/useragent-2.1.13.tgz</a></p>
<p>Path to dependency file: /post-it-a4/package.json</p>
<p>Path to vulnerable library: /node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.0.tgz (Root Library)
- :x: **useragent-2.1.13.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/venkateshreddypala/post-it-a4/commits/c71d001cc56ebfa721446ecdffd026c4e7337310">c71d001cc56ebfa721446ecdffd026c4e7337310</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0650">https://nvd.nist.gov/vuln/detail/WS-2018-0650</a></p>
<p>Release Date: 2018-02-27</p>
<p>Fix Resolution: NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.4;JetBrains.Rider.Frontend5 - 213.0.20211008.154703-eap03;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws high detected in useragent tgz ws high severity vulnerability vulnerable library useragent tgz fastest most accurate effecient user agent string parser uses browserscope s research for parsing library home page a href path to dependency file post it package json path to vulnerable library node modules useragent package json dependency hierarchy karma tgz root library x useragent tgz vulnerable library found in head commit a href vulnerability details regular expression denial of service redos vulnerability was found in useragent through publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nordron angulartemplate dotnetng template jetbrains rider midiator webclient step up your open source security game with mend
| 0
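The useragent advisory above is a ReDoS: a regex with nested quantifiers backtracks exponentially on a failing match. The sketch below shows the general pattern class with a toy regex kept deliberately tiny; it is not useragent's actual expression.

```python
import re

# (a+)+ nests quantifiers, so a failed overall match explores exponentially
# many ways to split the run of 'a's -- the class of bug behind WS-2018-0650.
vulnerable = re.compile(r"^(a+)+$")
safe = re.compile(r"^a+$")  # same language, linear-time behavior

subject = "a" * 10 + "b"  # kept tiny: ~2**10 backtracks is still instant,
                          # but each extra 'a' roughly doubles the work
assert vulnerable.match(subject) is None
assert safe.match(subject) is None
assert vulnerable.match("a" * 10) is not None
assert safe.match("a" * 10) is not None
```

The usual fix is exactly what the advisory's upgrade delivers: rewrite the pattern so no run of input can be partitioned between nested quantifiers.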
|
18,914
| 10,265,318,741
|
IssuesEvent
|
2019-08-22 18:33:10
|
flutter/devtools
|
https://api.github.com/repos/flutter/devtools
|
opened
|
CPU profiler start and end times should be able to be configured in user code
|
performance page
|
In order to profile a single method or sequence of events, a user should be able to specify start and end times in their code.
Maybe there is something in the dart:developer API that the user can use to do this. Then, when recording a CPU profile, we can pull samples from the interval specified by the user-triggered events.
This mode could be enabled via a setting "explicit profile interval" or something more clever.
cc @lambdabaa
|
True
|
CPU profiler start and end times should be able to be configured in user code - In order to profile a single method or sequence of events, a user should be able to specify start and end times in their code.
Maybe there is something in the dart:developer API that the user can use to do this. Then, when recording a CPU profile, we can pull samples from the interval specified by the user-triggered events.
This mode could be enabled via a setting "explicit profile interval" or something more clever.
cc @lambdabaa
|
non_test
|
cpu profiler start and end times should be able to be configured in user code in order to profile a single method or sequence of events a user should be able to specify start and end times in their code maybe there is something in the dart developer api that the user can use to do this then when recording a cpu profile we can pull samples from the interval specified by the user triggered events this mode could be enabled via a setting explicit profile interval or something more clever cc lambdabaa
| 0
|
198,697
| 14,993,286,350
|
IssuesEvent
|
2021-01-29 11:05:56
|
strictdoc-project/strictdoc
|
https://api.github.com/repos/strictdoc-project/strictdoc
|
opened
|
Enable testing on older Linux distributions
|
testing
|
One user has reported problems on Ubuntu 16, at least two issues:
1) Having troubles when installing Poetry.
2) Problem installing `xlsxwriter`.
|
1.0
|
Enable testing on older Linux distributions - One user has reported problems on Ubuntu 16, at least two issues:
1) Having troubles when installing Poetry.
2) Problem installing `xlsxwriter`.
|
test
|
enable testing on older linux distributions one user has reported problems on ubuntu at least two issues having troubles when installing poetry problem installing xlsxwriter
| 1
|
22,870
| 3,974,194,831
|
IssuesEvent
|
2016-05-04 21:16:12
|
PulpQE/pulp-smash
|
https://api.github.com/repos/PulpQE/pulp-smash
|
closed
|
Add new test for "Uploading the same Content Unit twice"
|
high priority test case
|
As per this [issue](https://pulp.plan.io/issues/1406), there is a new regression in Pulp 2.8 Beta when one tries to upload the same content (i.e. rpms, puppet modules, etc.) to an existing repository. We need a new automated test that does the following:
* Create a new feed-less repository
* Manually import a valid content type (i.e. RPM, etc)
* Assert that the content type was successfully uploaded into the repository
* Manually try to import the same exact content type to the same repository
* Assert that a PulpCodedException is raised
We should make sure that all valid content types (rpms, puppet modules, docker images, etc) are used for this test to make sure that we get the same, expected behavior across the board.
* [ ] docker v1
* [x] puppet
* [x] python
* [x] rpm
<ins>Not testable: docker v2 and ostree. See: <a href="https://github.com/PulpQE/pulp-smash/issues/81#issuecomment-214412605">here</a>.</ins>
|
1.0
|
Add new test for "Uploading the same Content Unit twice" - As per this [issue](https://pulp.plan.io/issues/1406), there is a new regression in Pulp 2.8 Beta when one tries to upload the same content (i.e. rpms, puppet modules, etc.) to an existing repository. We need a new automated test that does the following:
* Create a new feed-less repository
* Manually import a valid content type (i.e. RPM, etc)
* Assert that the content type was successfully uploaded into the repository
* Manually try to import the same exact content type to the same repository
* Assert that a PulpCodedException is raised
We should make sure that all valid content types (rpms, puppet modules, docker images, etc) are used for this test to make sure that we get the same, expected behavior across the board.
* [ ] docker v1
* [x] puppet
* [x] python
* [x] rpm
<ins>Not testable: docker v2 and ostree. See: <a href="https://github.com/PulpQE/pulp-smash/issues/81#issuecomment-214412605">here</a>.</ins>
|
test
|
add new test for uploading the same content unit twice as per this there is a new regression in pulp beta when one tries to upload the same content ie rpms puppet modules etc to an existing repository we need a new automated test that does the following create a new feed less repository manually import a valid content type i e rpm etc assert that the content type was successfully uploaded into the repository manually try to import the same exact content type to the same repository assert that a pulpcodedexception is raised we should make sure that all valid content types rpms puppet modules docker images etc are used for this test to make sure that we get the same expected behavior across the board docker puppet python rpm not testable docker and ostree see a href
| 1
|
140,200
| 11,305,151,601
|
IssuesEvent
|
2020-01-18 02:51:35
|
aristanetworks/atd-public
|
https://api.github.com/repos/aristanetworks/atd-public
|
opened
|
Update login script for latest
|
DC-Latest Topo bug labvm
|
Need to update the topology type check for media labs from:
`if topology == 'datacenter'`
To:
`if 'datacenter' in topology`
|
1.0
|
Update login script for latest - Need to update the topology type check for media labs from:
`if topology == 'datacenter'`
To:
`if 'datacenter' in topology`
|
test
|
update login script for latest need to update the topology type check for media labs from if topology ‘datacenter’ to if ‘datacenter’ in topology
| 1
|
290,721
| 25,089,658,721
|
IssuesEvent
|
2022-11-08 04:30:37
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Godot allowing adding files and folders with special characters (etc / \ : * " |)
|
bug platform:windows topic:editor needs testing
|
### Godot version
v3.4.4 stable official
### System information
Windows 10, Git and Github Desktop
### Issue description
When creating new files/scenes/scripts, Godot allows special characters which can break your VCS (you can't commit)
I expect it to disallow special characters in file names
### Steps to reproduce
1. Create a new godot project (or open an existing one)
2. Click "New Scenes" in the File System
3. Enter some special characters
4. Save It
### Minimal reproduction project
_No response_
|
1.0
|
Godot allowing adding files and folders with special characters (etc / \ : * " |) - ### Godot version
v3.4.4 stable official
### System information
Windows 10, Git and Github Desktop
### Issue description
When creating new files/scenes/scripts, Godot allows special characters which can break your VCS (you can't commit)
I expect it to disallow special characters in file names
### Steps to reproduce
1. Create a new godot project (or open an existing one)
2. Click "New Scenes" in the File System
3. Enter some special characters
4. Save It
### Minimal reproduction project
_No response_
|
test
|
godot allowing adding files and folders with special characters etc godot version stable official system information windows git and github desktop issue description when creating a new file scenes scripts godot allows special characters which let your vcs to broke you can t commit i expect it not allowing special characters in file name steps to reproduce create a new godot project or open an existed one click new scenes in the file system enter some special characters save it minimal reproduction project no response
| 1
|
484,706
| 13,943,983,667
|
IssuesEvent
|
2020-10-23 00:39:51
|
elementary/stylesheet
|
https://api.github.com/repos/elementary/stylesheet
|
closed
|
Extend the Gtk.STYLE_CLASS_FLAT to Gtk.ActionBars too
|
Bitesize Priority: Wishlist Status: Confirmed
|
This new Flat class could also be used in Gtk.ActionBars, since those are also part of the window controls, and could have the option to be styled as Flat similarly to Gtk.HeaderBars.
Here's a relevant CSS to get started to adding this:
```css
actionbar,
.action-bar {
border-top-color: transparent;
background: @colorPrimary;
color: @textColorPrimary;
box-shadow:
inset 1px 0 0 0 alpha (shade (@colorPrimary, 1.4), 0.6),
inset -1px 0 0 0 alpha (shade (@colorPrimary, 1.4), 0.6),
inset 0 -1px 0 0 alpha (shade (@colorPrimary, 1.4), 0.8);
}
```
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/51463644-extend-the-gtk-style_class_flat-to-gtk-actionbars-too?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Extend the Gtk.STYLE_CLASS_FLAT to Gtk.ActionBars too - This new Flat class could also be used in Gtk.ActionBars, since those are also part of the window controls, and could have the option to be styled as Flat similarly to Gtk.HeaderBars.
Here's a relevant CSS to get started to adding this:
```css
actionbar,
.action-bar {
border-top-color: transparent;
background: @colorPrimary;
color: @textColorPrimary;
box-shadow:
inset 1px 0 0 0 alpha (shade (@colorPrimary, 1.4), 0.6),
inset -1px 0 0 0 alpha (shade (@colorPrimary, 1.4), 0.6),
inset 0 -1px 0 0 alpha (shade (@colorPrimary, 1.4), 0.8);
}
```
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/51463644-extend-the-gtk-style_class_flat-to-gtk-actionbars-too?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F45189256&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_test
|
extend the gtk style class flat to gtk actionbars too this new flat class could also be used in gtk actionbars since those are also part of the window controls and could have the option to be styled as flat similarly to gtk headerbars here s a relevant css to get started to adding this css actionbar action bar border top color transparent background colorprimary color textcolorprimary box shadow inset alpha shade colorprimary inset alpha shade colorprimary inset alpha shade colorprimary want to back this issue we accept bounties via
| 0
|
125,341
| 12,258,094,406
|
IssuesEvent
|
2020-05-06 14:38:06
|
project-koku/koku-ui
|
https://api.github.com/repos/project-koku/koku-ui
|
closed
|
Update Azure Sources UI Wizard Instructions
|
cost model documentation
|
## User Story
As a user configuring an Azure source I want to include only the necessary permissions so that we create a minimal footprint.
## Impacts
- Sources UI Wizard
## Assumptions
- See https://github.com/project-koku/koku/issues/1768 for details on what was wrong
- The change is essentially implementing what @lcouzens mentioned here: https://github.com/project-koku/koku/issues/1768#issuecomment-589343710
## UI Details
## Acceptance Criteria
- [ ] The cost management sources wizard flow for Azure specifies these new commands
|
1.0
|
Update Azure Sources UI Wizard Instructions - ## User Story
As a user configuring an Azure source I want to include only the necessary permissions so that we create a minimal footprint.
## Impacts
- Sources UI Wizard
## Assumptions
- See https://github.com/project-koku/koku/issues/1768 for details on what was wrong
- The change is essentially implementing what @lcouzens mentioned here: https://github.com/project-koku/koku/issues/1768#issuecomment-589343710
## UI Details
## Acceptance Criteria
- [ ] The cost management sources wizard flow for Azure specifies these new commands
|
non_test
|
update azure sources ui wizard instructions user story as a user configuring an azure source i want to include only the necessary permissions so that we create a minimal footprint impacts sources ui wizard assumptions see for details on what was wrong the change is essentially implementing what lcouzens mentioned here ui details acceptance criteria the cost management sources wizard flow for azure specifies these new commands
| 0
|
21,487
| 6,157,181,079
|
IssuesEvent
|
2017-06-28 18:22:48
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
typo in the Flutter Codelab
|
dev: docs - codelab
|
There is a minor typo in the Flutter Codelab. "property" is typed twice, see the screenshot.

|
1.0
|
typo in the Flutter Codelab - There is a minor typo in the Flutter Codelab. "property" is typed twice, see the screenshot.

|
non_test
|
typo in the flutter codelab there is a minor typo in the flutter codelab property is typed twice see the screenshot
| 0
|
75,393
| 7,470,314,856
|
IssuesEvent
|
2018-04-03 04:10:08
|
s-newman/skitter
|
https://api.github.com/repos/s-newman/skitter
|
closed
|
PyLint Rating
|
required test
|
All python code should have a 10/10 rating from PyLint. An automated test should be created to perform this check.
|
1.0
|
PyLint Rating - All python code should have a 10/10 rating from PyLint. An automated test should be created to perform this check.
|
test
|
pylint rating all python code should have a rating from pylint an automated test should be created to perform this check
| 1
|
292,157
| 25,204,293,183
|
IssuesEvent
|
2022-11-13 13:47:09
|
TeamFogFog/FogFog-Server-Upptime
|
https://api.github.com/repos/TeamFogFog/FogFog-Server-Upptime
|
closed
|
🛑 Test 용 FogFog Server - DEV is down
|
status test-fog-fog-server-dev
|
In [`ff76110`](https://github.com/TeamFogFog/FogFog-Server-Upptime/commit/ff761102b62075e6c21ce5269b0423aacfa963b2
), Test 용 FogFog Server - DEV (https://klsjflskjdfljdslkfjsdlkjfl.com) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
1.0
|
🛑 Test 용 FogFog Server - DEV is down - In [`ff76110`](https://github.com/TeamFogFog/FogFog-Server-Upptime/commit/ff761102b62075e6c21ce5269b0423aacfa963b2
), Test 용 FogFog Server - DEV (https://klsjflskjdfljdslkfjsdlkjfl.com) was **down**:
- HTTP code: 0
- Response time: 0 ms
|
test
|
🛑 test 용 fogfog server dev is down in test 용 fogfog server dev was down http code response time ms
| 1
|
457,645
| 13,159,648,046
|
IssuesEvent
|
2020-08-10 16:11:08
|
ansible/galaxy_ng
|
https://api.github.com/repos/ansible/galaxy_ng
|
opened
|
UI: Filter collection search by repository
|
priority/high status/blocked status/new type/enhancement
|
- [ ] On the search page provide a filter for community, Red Hat certified, and private content
- [ ] On collection details, pull the detail from the correct repository
Subtask of #154
|
1.0
|
UI: Filter collection search by repository - - [ ] On the search page provide a filter for community, Red Hat certified, and private content
- [ ] On collection details, pull the detail from the correct repository
Subtask of #154
|
non_test
|
ui filter collection search by repository on the search page provide a filter for community red hat certified and private content on collection details pull the detail from the correct repository subtask of
| 0
|
41,253
| 5,345,354,943
|
IssuesEvent
|
2017-02-17 16:45:30
|
TheScienceMuseum/collectionsonline
|
https://api.github.com/repos/TheScienceMuseum/collectionsonline
|
closed
|
Fields not appearing on archive documents
|
bug please-test priority-2 T3h
|
We seem to have lost a couple of fields from Archive document pages (despite them being in the index). See also https://github.com/TheScienceMuseum/collectionsonline/issues/747
- **Date(s)** (at top of page):`lifecycle.creation.date`
- **Extent**:`measurements.dimensions`
CO: https://collection.sciencemuseum.org.uk/documents/aa110000003
Archive site: http://archives.sciencemuseumgroup.ac.uk/Details/archivescience/110000003
```
"lifecycle": {
"creation": [
{
"date": [
{
"earliest": 1821,
"from": {
"earliest": 1821,
"latest": 1821,
"value": "1821"
},
"latest": 1905,
"range": true,
"to": {
"earliest": 1905,
"latest": 1905,
"value": "1905"
}
}
],
```
```
"measurements": {
"dimensions": [
{
"value": "11 plan press drawers and 8 linear meters of shelving"
}
]
},
```
|
1.0
|
Fields not appearing on archive documents - We seem to have lost a couple of fields from Archive document pages (despite them being in the index). See also https://github.com/TheScienceMuseum/collectionsonline/issues/747
- **Date(s)** (at top of page):`lifecycle.creation.date`
- **Extent**:`measurements.dimensions`
CO: https://collection.sciencemuseum.org.uk/documents/aa110000003
Archive site: http://archives.sciencemuseumgroup.ac.uk/Details/archivescience/110000003
```
"lifecycle": {
"creation": [
{
"date": [
{
"earliest": 1821,
"from": {
"earliest": 1821,
"latest": 1821,
"value": "1821"
},
"latest": 1905,
"range": true,
"to": {
"earliest": 1905,
"latest": 1905,
"value": "1905"
}
}
],
```
```
"measurements": {
"dimensions": [
{
"value": "11 plan press drawers and 8 linear meters of shelving"
}
]
},
```
|
test
|
fields not appearing on archive documents we seem have lost a couple of fields from archive document pages despite them being in the index see also date s at top of page lifecycle creation date extent measurements dimensions co archive site lifecycle creation date earliest from earliest latest value latest range true to earliest latest value measurements dimensions value plan press drawers and linear meters of shelving
| 1
|
58,032
| 6,565,754,102
|
IssuesEvent
|
2017-09-08 09:36:58
|
RepoCamp/teapot
|
https://api.github.com/repos/RepoCamp/teapot
|
opened
|
Fix warning in Travis build
|
Component: Testing Type: Enhancement
|
```
Warning: the running version of Bundler (1.15.1) is older than the version that created the lockfile (1.15.4). We suggest you upgrade to the latest version of Bundler by running `gem install bundler`.
```
|
1.0
|
Fix warning in Travis build - ```
Warning: the running version of Bundler (1.15.1) is older than the version that created the lockfile (1.15.4). We suggest you upgrade to the latest version of Bundler by running `gem install bundler`.
```
|
test
|
fix warning in travis build warning the running version of bundler is older than the version that created the lockfile we suggest you upgrade to the latest version of bundler by running gem install bundler
| 1
|
695,858
| 23,874,239,294
|
IssuesEvent
|
2022-09-07 17:25:36
|
Rusi91/basisdokument
|
https://api.github.com/repos/Rusi91/basisdokument
|
closed
|
[Gliederungspunkte] Gliederungspunkte löschen
|
high priority user story
|
_As a user, I want to be able to delete outline items so that they are no longer displayed._ (No longer possible once the status is final)
**Related issue:** #63
- [x] Add delete icon
- [x] Add edit icon
- [x] A confirmation pop-up should appear before deleting.
- [x] After deleting, the outline item should no longer be displayed.
- [x] After deleting, the outline item should be removed from local storage.
|
1.0
|
[Gliederungspunkte] Gliederungspunkte löschen - _As a user, I want to be able to delete outline items so that they are no longer displayed._ (No longer possible once the status is final)
**Related issue:** #63
- [x] Add delete icon
- [x] Add edit icon
- [x] A confirmation pop-up should appear before deleting.
- [x] After deleting, the outline item should no longer be displayed.
- [x] After deleting, the outline item should be removed from local storage.
|
non_test
|
gliederungspunkte löschen als nutzer in möchte ich gliederungspunkte löschen können damit sie nicht mehr angezeigt werden nach endgültigen status nicht mehr zusammenhängendes issue lösch icon hinzufügen edit icon hinzufügen vor dem löschen sollte ein pop up zur bestätigung kommen nach dem löschen sollte der gliederungspunkt nicht mehr angezeigt werden nach dem löschen sollte der gliederungspunkt aus dem lokalen speicher entfernt werden
| 0
|
124,548
| 4,927,116,523
|
IssuesEvent
|
2016-11-26 15:10:11
|
GeoDiver/R-Core
|
https://api.github.com/repos/GeoDiver/R-Core
|
closed
|
Streamline gage analysis Script
|
Low Priority
|
Download_GEO.R L72-78
```R
entrez.gene.id <- featureData[, 'ENTREZ_GENE_ID']
go.bio <- featureData[, 'Gene Ontology Biological Process']
go.cell <- featureData[, 'Gene Ontology Cellular Component']
go.mol <- featureData[, 'Gene Ontology Molecular Function']
gene.titles <- featureData[, 'Gene Title']
genes <- data.frame(gene.names, entrez.gene.id, gene.titles, go.bio,
go.cell, go.mol)
```
So the above is already calculated and as such can be used to improve the gage script significantly....
Firstly this part can be improved by simply using the entrez gene id in the genes dataframe above instead of going through a complex method
```R
# Create two column table containing entrez IDs for geodataset
id.map.refseq <- id2eg(ids = gene.names, category = "SYMBOL",
pkg.name = package, org = as.character(keggcode.organism))
# Replace gene symbols with ENTREZ ID in dataset matrix
tryCatch({
rownames(X) <- id.map.refseq[, 2]
}, error = function(e) {
cat("ERROR: Gene symbols does not match with ENTREZ ID", file=stderr())
quit(save = "no", status = 1, runLast = FALSE)
})
# Remove rows without ENTREZ IDs
X <- X[which(is.na(rownames(X)) == FALSE), ]
geo.dataset <- X
```
Secondly, the type of gene ontology can also be determined from the genes dataframe - i.e. thereby improve the following part:
```R
#############################################################################
# Gage Data Loading #
#############################################################################
if (geneset.type == "KEGG") { # KEGG datasets
data(kegg.gs)
kg.org <- kegg.gsets(organism) # picks out orgamism gene sets
dbdata <- kg.org$kg.sets[kg.org$sigmet.idx]
} else { # GO Datasets
common.name <- as.character(bods[which(bods[, "kegg code"] == keggcode.organism), "species"])
go.hs <- go.gsets(species = common.name) # use species column of bods
if (geneset.type == "BP") { # BP = Biological Process
dbdata <- go.hs$go.sets[go.hs$go.subs$BP]
} else if (geneset.type == "MF") { # MF = molecular function
dbdata <- go.hs$go.sets[go.hs$go.subs$MF]
} else if (geneset.type == "CC") { # CC = cellular component
dbdata <- go.hs$go.sets[go.hs$go.subs$CC]
}
}
```
|
1.0
|
Streamline gage analysis Script - Download_GEO.R L72-78
```R
entrez.gene.id <- featureData[, 'ENTREZ_GENE_ID']
go.bio <- featureData[, 'Gene Ontology Biological Process']
go.cell <- featureData[, 'Gene Ontology Cellular Component']
go.mol <- featureData[, 'Gene Ontology Molecular Function']
gene.titles <- featureData[, 'Gene Title']
genes <- data.frame(gene.names, entrez.gene.id, gene.titles, go.bio,
go.cell, go.mol)
```
So the above is already calculated and as such can be used to improve the gage script significantly....
Firstly this part can be improved by simply using the entrez gene id in the genes dataframe above instead of going through a complex method
```R
# Create two column table containing entrez IDs for geodataset
id.map.refseq <- id2eg(ids = gene.names, category = "SYMBOL",
pkg.name = package, org = as.character(keggcode.organism))
# Replace gene symbols with ENTREZ ID in dataset matrix
tryCatch({
rownames(X) <- id.map.refseq[, 2]
}, error = function(e) {
cat("ERROR: Gene symbols does not match with ENTREZ ID", file=stderr())
quit(save = "no", status = 1, runLast = FALSE)
})
# Remove rows without ENTREZ IDs
X <- X[which(is.na(rownames(X)) == FALSE), ]
geo.dataset <- X
```
Secondly, the type of gene ontology can also be determined from the genes dataframe - i.e. thereby improve the following part:
```R
#############################################################################
# Gage Data Loading #
#############################################################################
if (geneset.type == "KEGG") { # KEGG datasets
data(kegg.gs)
kg.org <- kegg.gsets(organism) # picks out organism gene sets
dbdata <- kg.org$kg.sets[kg.org$sigmet.idx]
} else { # GO Datasets
common.name <- as.character(bods[which(bods[, "kegg code"] == keggcode.organism), "species"])
go.hs <- go.gsets(species = common.name) # use species column of bods
if (geneset.type == "BP") { # BP = Biological Process
dbdata <- go.hs$go.sets[go.hs$go.subs$BP]
} else if (geneset.type == "MF") { # MF = molecular function
dbdata <- go.hs$go.sets[go.hs$go.subs$MF]
} else if (geneset.type == "CC") { # CC = cellular component
dbdata <- go.hs$go.sets[go.hs$go.subs$CC]
}
}
```
|
non_test
|
streamline gage analysis script download geo r r entrez gene id featuredata go bio featuredata go cell featuredata go mol featuredata gene titles featuredata genes data frame gene names entrez gene id gene titles go bio go cell go mol so the above is already calculated and as such can be used to improve the gage script significantly firstly this part can be improved by simply using the entrez gene id in the genes dataframe above instead of going through a complex method r create two column table containing entrez ids for geodataset id map refseq ids gene names category symbol pkg name package org as character keggcode organism replace gene symbols with entrez id in dataset matrix trycatch rownames x id map refseq error function e cat error gene symbols does not match with entrez id file stderr quit save no status runlast false remove rows without entrez ids x x geo dataset x secondly the type of gene ontology can also be determined from the genes dataframe i e thereby improve the following part r gage data loading if geneset type kegg kegg datasets data kegg gs kg org kegg gsets organism picks out orgamism gene sets dbdata kg org kg sets else go datasets common name as character bods keggcode organism species go hs go gsets species common name use species column of bods if geneset type bp bp biological process dbdata go hs go sets else if geneset type mf mf molecular function dbdata go hs go sets else if geneset type cc cc cellular component dbdata go hs go sets
| 0
|
69,420
| 30,277,788,903
|
IssuesEvent
|
2023-07-07 21:36:02
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
Add netpol to devops-xray namespace
|
*team/ security* *team/ ops and shared services*
|
**Describe the issue**
The Compliance Operator has identified that some namespaces don't have any NetworkPolicies and is encouraging us to add them.
**Additional context**
https://github.com/bcgov/how-to-workshops/tree/master/labs/netpol-quickstart
**Definition of done**
Add KNP to `devops-xray` namespace in GOLD, GOLDDR and EMERALD clusters
|
1.0
|
Add netpol to devops-xray namespace - **Describe the issue**
The Compliance Operator has identified that some namespaces don't have any NetworkPolicies and is encouraging us to add them.
**Additional context**
https://github.com/bcgov/how-to-workshops/tree/master/labs/netpol-quickstart
**Definition of done**
Add KNP to `devops-xray` namespace in GOLD, GOLDDR and EMERALD clusters
|
non_test
|
add netpol to devops xray namespace describe the issue the compliance operator has identified that some namespaces don t have any networkpolicies and is encouraging us to add them additional context definition of done add knp to devops xray namespace in gold golddr and emerald clusters
| 0
|
60,666
| 8,453,584,200
|
IssuesEvent
|
2018-10-20 17:02:34
|
KratosMultiphysics/Kratos
|
https://api.github.com/repos/KratosMultiphysics/Kratos
|
closed
|
[Tutorials] Tutorials must be updated with the new Model
|
Documentation Invalid
|
Now, with the merge of the new Model, tutorials must be updated in order to continue working
|
1.0
|
[Tutorials] Tutorials must be updated with the new Model - Now, with the merge of the new Model, tutorials must be updated in order to continue working
|
non_test
|
tutorials must be updated with the new model now with the merge of the new model tutorials must be updated in order to continue working
| 0
|
137,927
| 11,167,750,685
|
IssuesEvent
|
2019-12-27 18:30:55
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
UI - Support creating storage class for local persistent volumes
|
[zube]: To Test kind/enhancement team/cn team/ui
|
<!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Enhancement
The use of Local Persistent Volumes requires creating a storage class with attribute `volumeBindingMode: WaitForFirstConsumer` to ensure pods are properly scheduled to nodes.
We should support creating this type of storage class in the GUI.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
|
1.0
|
UI - Support creating storage class for local persistent volumes - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Enhancement
The use of Local Persistent Volumes requires creating a storage class with attribute `volumeBindingMode: WaitForFirstConsumer` to ensure pods are properly scheduled to nodes.
We should support creating this type of storage class in the GUI.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
|
test
|
ui support creating storage class for local persistent volumes please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request enhancement the use of local persistent volumes requires creating a storage class with attribute volumebindingmode waitforfirstconsumer to ensure pods are properly scheduled to nodes we should support creating this type of storage class in the gui kind storageclass apiversion storage io metadata name local storage provisioner kubernetes io no provisioner volumebindingmode waitforfirstconsumer
| 1
|
165,227
| 12,833,695,670
|
IssuesEvent
|
2020-07-07 09:44:25
|
bebbo/gcc
|
https://api.github.com/repos/bebbo/gcc
|
closed
|
ScummVM error if fbbb is on.
|
bug please test
|
item variable will be too high if fbbb is on -> error.
https://github.com/mheyer32/scummvm-amigaos3/blob/29b08bf2c4e34f8b7b9a621474ca5841a45f07a3/engines/agos/items.cpp#L385
Happens with Simon1 (OCS/Amiga) demo.
|
1.0
|
ScummVM error if fbbb is on. - item variable will be too high if fbbb is on -> error.
https://github.com/mheyer32/scummvm-amigaos3/blob/29b08bf2c4e34f8b7b9a621474ca5841a45f07a3/engines/agos/items.cpp#L385
Happens with Simon1 (OCS/Amiga) demo.
|
test
|
scummvm error if fbbb is on item variable will be too high if fbbb is on error happens with ocs amiga demo
| 1
|
93,141
| 8,401,721,779
|
IssuesEvent
|
2018-10-11 02:37:46
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed test: TestDistSQLDrainingHosts
|
C-test-failure O-robot
|
The following tests appear to have failed on master (testrace): TestDistSQLDrainingHosts
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestDistSQLDrainingHosts).
[#957642](https://teamcity.cockroachdb.com/viewLog.html?buildId=957642):
```
TestDistSQLDrainingHosts
...sary migrations have run
I181010 23:15:34.485427 13323 server/server.go:1587 [n2] serving sql connections
I181010 23:15:34.505691 13875 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I181010 23:15:34.506661 13877 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:35775} Attrs: Locality: ServerVersion:2.1 BuildTag:v2.2.0-alpha.00000000-1546-g8e1a821 StartedAt:1539213334475555525 LocalityAddress:[]} ClusterID:21a6738a-d52b-476d-800e-30c4456f3710 StartedAt:1539213334475555525 LastUp:1539213334475555525}
I181010 23:15:34.510114 13759 sql/event_log.go:126 [n1,client=127.0.0.1:46668,user=root] Event: "create_database", target: 52, info: {DatabaseName:test Statement:CREATE DATABASE IF NOT EXISTS test User:root}
I181010 23:15:34.514570 13759 sql/event_log.go:126 [n1,client=127.0.0.1:46668,user=root] Event: "create_table", target: 53, info: {TableName:test.public.nums Statement:CREATE TABLE test.public.nums (num INT) User:root}
I181010 23:15:34.536304 13759 storage/replica_command.go:298 [n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/53/1/1 [r2]
I181010 23:15:34.554306 13759 storage/store_snapshot.go:615 [n1,s1,r2/1:/{Table/53/1/1-Max}] sending preemptive snapshot 9247b79e at applied index 11
I181010 23:15:34.554546 13759 storage/store_snapshot.go:657 [n1,s1,r2/1:/{Table/53/1/1-Max}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 1, rate-limit: 2.0 MiB/sec, 1ms
I181010 23:15:34.555038 13997 storage/replica_raftstorage.go:803 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=9247b79e, encoded size=380, 1 rocksdb batches, 1 log entries)
I181010 23:15:34.555384 13997 storage/replica_raftstorage.go:809 [n2,s2,r2/?:/{Table/53/1/1-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I181010 23:15:34.556240 13759 storage/replica_command.go:812 [n1,s1,r2/1:/{Table/53/1/1-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{Table/53/1/1-Max} [(n1,s1):1, next=2, gen=0]
I181010 23:15:34.560034 13759 storage/replica.go:3899 [n1,s1,r2/1:/{Table/53/1/1-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I181010 23:15:34.563109 13855 storage/replica_proposal.go:211 [n2,s2,r2/2:/{Table/53/1/1-Max}] new range lease repl=(n2,s2):2 seq=3 start=1539213334.560390701,0 epo=1 pro=1539213334.560392897,0 following repl=(n1,s1):1 seq=2 start=1539213334.339436732,0 exp=1539213343.339629423,0 pro=1539213334.339650808,0
I181010 23:15:34.564447 14013 storage/replica_command.go:812 [n2,s2,r2/2:/{Table/53/1/1-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:/{Table/53/1/1-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I181010 23:15:34.577124 14013 storage/replica.go:3899 [n2,s2,r2/2:/{Table/53/1/1-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I181010 23:15:34.578719 14077 storage/store.go:2744 [n1,replicaGC,s1,r2/1:/{Table/53/1/1-Max}] removing replica r2/1
I181010 23:15:34.578936 14077 storage/replica.go:878 [n1,replicaGC,s1,r2/1:/{Table/53/1/1-Max}] removed 9 (2+7) keys in 0ms [clear=0ms commit=0ms]
I181010 23:15:34.630347 14029 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I181010 23:15:34.630614 14028 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
W181010 23:15:34.630787 14060 storage/raft_transport.go:583 [n1] while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
W181010 23:15:34.632089 14009 storage/raft_transport.go:583 [n2] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = context canceled:
I181010 23:15:34.632910 14029 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
TestDistSQLDrainingHosts
...cH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:12.235908 13865 sql/distsql_physical_planner_test.go:513 SucceedsSoon:
expected:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyskT1rwzAQhvf-inJTCoJETrpoSumUoXbJBx2KCap1GEMsmZMELcH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:13.244508 13865 sql/distsql_physical_planner_test.go:513 SucceedsSoon:
expected:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyskT1rwzAQhvf-inJTCoJETrpoSumUoXbJBx2KCap1GEMsmZMELcH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:14.245933 14743 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I181010 23:20:14.246985 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
W181010 23:20:14.256709 14428 storage/raft_transport.go:583 [n2] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = context canceled:
W181010 23:20:14.257643 14270 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
W181010 23:20:14.256773 14478 storage/raft_transport.go:583 [n1] while processing outgoing Raft queue to node 2: rpc error: code = Canceled desc = grpc: the client connection is closing:
I181010 23:20:14.261308 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
1 [async] closedts-subscription
I181010 23:20:14.262297 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
I181010 23:20:14.353316 14504 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n2: context canceled
I181010 23:20:14.416667 14757 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n1: context canceled
I181010 23:20:14.712610 13937 kv/transport_race.go:113 transport race promotion: ran 77 iterations on up to 497 requests
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed test: TestDistSQLDrainingHosts - The following tests appear to have failed on master (testrace): TestDistSQLDrainingHosts
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestDistSQLDrainingHosts).
[#957642](https://teamcity.cockroachdb.com/viewLog.html?buildId=957642):
```
TestDistSQLDrainingHosts
...sary migrations have run
I181010 23:15:34.485427 13323 server/server.go:1587 [n2] serving sql connections
I181010 23:15:34.505691 13875 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I181010 23:15:34.506661 13877 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:35775} Attrs: Locality: ServerVersion:2.1 BuildTag:v2.2.0-alpha.00000000-1546-g8e1a821 StartedAt:1539213334475555525 LocalityAddress:[]} ClusterID:21a6738a-d52b-476d-800e-30c4456f3710 StartedAt:1539213334475555525 LastUp:1539213334475555525}
I181010 23:15:34.510114 13759 sql/event_log.go:126 [n1,client=127.0.0.1:46668,user=root] Event: "create_database", target: 52, info: {DatabaseName:test Statement:CREATE DATABASE IF NOT EXISTS test User:root}
I181010 23:15:34.514570 13759 sql/event_log.go:126 [n1,client=127.0.0.1:46668,user=root] Event: "create_table", target: 53, info: {TableName:test.public.nums Statement:CREATE TABLE test.public.nums (num INT) User:root}
I181010 23:15:34.536304 13759 storage/replica_command.go:298 [n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/53/1/1 [r2]
I181010 23:15:34.554306 13759 storage/store_snapshot.go:615 [n1,s1,r2/1:/{Table/53/1/1-Max}] sending preemptive snapshot 9247b79e at applied index 11
I181010 23:15:34.554546 13759 storage/store_snapshot.go:657 [n1,s1,r2/1:/{Table/53/1/1-Max}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 1, rate-limit: 2.0 MiB/sec, 1ms
I181010 23:15:34.555038 13997 storage/replica_raftstorage.go:803 [n2,s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=9247b79e, encoded size=380, 1 rocksdb batches, 1 log entries)
I181010 23:15:34.555384 13997 storage/replica_raftstorage.go:809 [n2,s2,r2/?:/{Table/53/1/1-Max}] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I181010 23:15:34.556240 13759 storage/replica_command.go:812 [n1,s1,r2/1:/{Table/53/1/1-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{Table/53/1/1-Max} [(n1,s1):1, next=2, gen=0]
I181010 23:15:34.560034 13759 storage/replica.go:3899 [n1,s1,r2/1:/{Table/53/1/1-Max}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I181010 23:15:34.563109 13855 storage/replica_proposal.go:211 [n2,s2,r2/2:/{Table/53/1/1-Max}] new range lease repl=(n2,s2):2 seq=3 start=1539213334.560390701,0 epo=1 pro=1539213334.560392897,0 following repl=(n1,s1):1 seq=2 start=1539213334.339436732,0 exp=1539213343.339629423,0 pro=1539213334.339650808,0
I181010 23:15:34.564447 14013 storage/replica_command.go:812 [n2,s2,r2/2:/{Table/53/1/1-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:/{Table/53/1/1-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I181010 23:15:34.577124 14013 storage/replica.go:3899 [n2,s2,r2/2:/{Table/53/1/1-Max}] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
I181010 23:15:34.578719 14077 storage/store.go:2744 [n1,replicaGC,s1,r2/1:/{Table/53/1/1-Max}] removing replica r2/1
I181010 23:15:34.578936 14077 storage/replica.go:878 [n1,replicaGC,s1,r2/1:/{Table/53/1/1-Max}] removed 9 (2+7) keys in 0ms [clear=0ms commit=0ms]
I181010 23:15:34.630347 14029 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I181010 23:15:34.630614 14028 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
W181010 23:15:34.630787 14060 storage/raft_transport.go:583 [n1] while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
W181010 23:15:34.632089 14009 storage/raft_transport.go:583 [n2] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = context canceled:
I181010 23:15:34.632910 14029 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
TestDistSQLDrainingHosts
...cH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:12.235908 13865 sql/distsql_physical_planner_test.go:513 SucceedsSoon:
expected:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyskT1rwzAQhvf-inJTCoJETrpoSumUoXbJBx2KCap1GEMsmZMELcH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:13.244508 13865 sql/distsql_physical_planner_test.go:513 SucceedsSoon:
expected:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyskT1rwzAQhvf-inJTCoJETrpoSumUoXbJBx2KCap1GEMsmZMELcH_vdgaEodYTSGjTn7uef3qCNooTGWNFsQncGCQQM6gIVOgtYa6cfhopb5BzBhUuvGuG-cMCkMI4giucgcEAVv5dcA1SoU0nQEDhU5Wh351Q1Ut6WepfW2BQeadeEyNRshbBsa701LrZIkgeMtuF7-UJWEpnaFpMvS-Zrt0u19nH5vJ06grGXWdFF4bUkioBvvzNp5mMUyz2b3tV-l2suTjYeaDMPz2xvldG_9DfPaP87s2fsW1RtsYbfGi-eubZ92LoCoxPJ81ngp8J1P0mnDMeq4fKLQu3PJwWOlw1QU8h3kUTgYwv4STKPwcN8-j8CIOL_4VO28ffgMAAP__nC9YuA==]]
got:[[https://cockroachdb.github.io/distsqlplan/decode.html#eJyUkEFL9DAQhu_fr_h4TwqBbfeYk-JpL63UFQ8SJDZDKLSZMklAWfrfpc1BV1jR47yT533CnBDYUWMnitDPqGEUZuGeYmRZo_Lg4N6gK4UhzDmtsVHoWQj6hDSkkaBxtK8jdWQdya6CgqNkh3GrnWWYrLzfhDxFKLQ56f8NB4JZFDinz9KYrCfoelG_F996L-RtYtnV59679rE5vnTt08PV9UXX_i-ujuLMIdKZ51JztRgFcp7KISNn6eleuN80ZWw3bgscxVS2dRkOoazWD36F6x_h_TfYLP8-AgAA__-zG6EE]]
I181010 23:20:14.245933 14743 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I181010 23:20:14.246985 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
W181010 23:20:14.256709 14428 storage/raft_transport.go:583 [n2] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = context canceled:
W181010 23:20:14.257643 14270 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
W181010 23:20:14.256773 14478 storage/raft_transport.go:583 [n1] while processing outgoing Raft queue to node 2: rpc error: code = Canceled desc = grpc: the client connection is closing:
I181010 23:20:14.261308 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
1 [async] closedts-subscription
I181010 23:20:14.262297 14742 util/stop/stopper.go:537 quiescing; tasks left:
1 [async] transport racer
I181010 23:20:14.353316 14504 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n2: context canceled
I181010 23:20:14.416667 14757 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n1: context canceled
I181010 23:20:14.712610 13937 kv/transport_race.go:113 transport race promotion: ran 77 iterations on up to 497 requests
```
Please assign, take a look and update the issue accordingly.
|
test
|
teamcity failed test testdistsqldraininghosts the following tests appear to have failed on master testrace testdistsqldraininghosts you may want to check testdistsqldraininghosts sary migrations have run server server go serving sql connections server server update go no need to upgrade cluster already at the newest version sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag alpha startedat localityaddress clusterid startedat lastup sql event log go event create database target info databasename test statement create database if not exists test user root sql event log go event create table target info tablename test public nums statement create table test public nums num int user root storage replica command go initiating a split of this range at key table storage store snapshot go sending preemptive snapshot at applied index storage store snapshot go streamed snapshot to kv pairs log entries rate limit mib sec storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas add replica read existing descriptor table max storage replica go proposing add replica updated next storage replica proposal go new range lease repl seq start epo pro following repl seq start exp pro storage replica command go change replicas remove replica read existing descriptor table max storage replica go proposing remove replica updated next storage store go removing replica storage replica go removed keys in util stop stopper go quiescing tasks left closedts subscription closedts rangefeed subscriber util stop stopper go quiescing tasks left closedts rangefeed subscriber storage raft transport go while processing outgoing raft queue to node rpc error code unavailable desc transport is closing storage raft transport go while processing outgoing raft 
queue to node rpc error code canceled desc context canceled util stop stopper go quiescing tasks left closedts subscription testdistsqldraininghosts ch ujwepnafpmvs got sql distsql physical planner test go succeedssoon expected got sql distsql physical planner test go succeedssoon expected got util stop stopper go quiescing tasks left closedts subscription closedts rangefeed subscriber util stop stopper go quiescing tasks left transport racer closedts subscription closedts rangefeed subscriber storage raft transport go while processing outgoing raft queue to node rpc error code canceled desc context canceled gossip gossip go no incoming or outgoing connections storage raft transport go while processing outgoing raft queue to node rpc error code canceled desc grpc the client connection is closing util stop stopper go quiescing tasks left transport racer closedts subscription util stop stopper go quiescing tasks left transport racer rpc nodedialer nodedialer go unable to connect to context canceled rpc nodedialer nodedialer go unable to connect to context canceled kv transport race go transport race promotion ran iterations on up to requests please assign take a look and update the issue accordingly
| 1
|
130,304
| 5,113,919,866
|
IssuesEvent
|
2017-01-06 16:47:34
|
hpi-swt2/wimi-portal
|
https://api.github.com/repos/hpi-swt2/wimi-portal
|
closed
|
Allow filtering on time_sheets#index
|
enhancement priority-3
|
Just like on `projects#index`, it should be possible to filter and search time sheets on the `time_sheets#index` page. The functionality should reside in the sidebar.

|
1.0
|
Allow filtering on time_sheets#index - Just like on `projects#index`, it should be possible to filter and search time sheets on the `time_sheets#index` page. The functionality should reside in the sidebar.

|
non_test
|
allow filtering on time sheets index just like on projects index it should be possible to filter and search time sheets on the time sheets index page the functionality should reside in the sidebar
| 0
|
365,781
| 10,797,696,366
|
IssuesEvent
|
2019-11-06 08:29:17
|
jenkins-x/jx
|
https://api.github.com/repos/jenkins-x/jx
|
closed
|
Istio addon suggests wrong flags to lock version
|
area/addon kind/bug priority/important-longterm
|
### Summary
During execution of `jx create addon istio` the two charts print that you should lock down the istio version using: jx step create version pr -k charts -n install/kubernetes/helm/istio-init
This command fails due to -k not being an acceptable flag:
`Error: unknown shorthand flag: 'k' in -k`
### Steps to reproduce the behavior
Add istio through the command above and observe output.
### Expected behavior
Command that is provided should work with my environment.
### Jx version
The output of `jx version` is:
2.0.821
### Jenkins type
<!--
Select which installation type are you using.
-->
- [ ] Serverless Jenkins X Pipelines (Tekton + Prow)
- [X] Classic Jenkins
### Kubernetes cluster
GKE
### Operating system / Environment
OSX
|
1.0
|
Istio addon suggests wrong flags to lock version - ### Summary
During execution of `jx create addon istio` the two charts print that you should lock down the istio version using: jx step create version pr -k charts -n install/kubernetes/helm/istio-init
This command fails due to -k not being an acceptable flag:
`Error: unknown shorthand flag: 'k' in -k`
### Steps to reproduce the behavior
Add istio through the command above and observe output.
### Expected behavior
Command that is provided should work with my environment.
### Jx version
The output of `jx version` is:
2.0.821
### Jenkins type
<!--
Select which installation type are you using.
-->
- [ ] Serverless Jenkins X Pipelines (Tekton + Prow)
- [X] Classic Jenkins
### Kubernetes cluster
GKE
### Operating system / Environment
OSX
|
non_test
|
istio addon suggests wrong flags to lock version summary during execution of jx create addon istio the two charts print that you should lock down the istio version using jx step create version pr k charts n install kubernetes helm istio init this command fails due to k not being an acceptable flag error unknown shorthand flag k in k steps to reproduce the behavior add istio through the command above and observe output expected behavior command that is provided should work with my environment jx version the output of jx version is jenkins type select which installation type are you using serverless jenkins x pipelines tekton prow classic jenkins kubernetes cluster gke operating system environment osx
| 0
|
122,614
| 4,837,779,968
|
IssuesEvent
|
2016-11-08 23:50:17
|
DoSomething/gambit
|
https://api.github.com/repos/DoSomething/gambit
|
closed
|
Move mobilecommons interface into its own library
|
NO priority-low refactor
|
And add back into this project through the `package.json`
|
1.0
|
Move mobilecommons interface into its own library - And add back into this project through the `package.json`
|
non_test
|
move mobilecommons interface into its own library and add back into this project through the package json
| 0
|
307,742
| 26,559,300,456
|
IssuesEvent
|
2023-01-20 14:38:18
|
extreme-rock/GetARock-iOS
|
https://api.github.com/repos/extreme-rock/GetARock-iOS
|
closed
|
[Setting] Create a branch for the Action test.
|
test 데이크
|
## 💡 Issue
<!-- Please describe the issue. -->
- Add a branch to test that the GitHub Action works correctly.
## 📝 todo
- [ ] Create the test branch
- [ ] Open a test PR
<!-- Please list the tasks to be done. -->
<!-- <img src="" width="30%" height="30%"> -->
|
1.0
|
[Setting] Create a branch for the Action test. - ## 💡 Issue
<!-- Please describe the issue. -->
- Add a branch to test that the GitHub Action works correctly.
## 📝 todo
- [ ] Create the test branch
- [ ] Open a test PR
<!-- Please list the tasks to be done. -->
<!-- <img src="" width="30%" height="30%"> -->
|
test
|
setting create a branch for the action test 💡 issue add a branch to test that the github action works correctly 📝 todo create the test branch open a test pr
| 1
|
100,408
| 16,489,877,385
|
IssuesEvent
|
2021-05-25 01:04:32
|
DWG61868/dagger-browser
|
https://api.github.com/repos/DWG61868/dagger-browser
|
opened
|
CVE-2021-23383 (High) detected in handlebars-4.4.5.tgz
|
security vulnerability
|
## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.5.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.5.tgz</a></p>
<p>Path to dependency file: dagger-browser/browser/package.json</p>
<p>Path to vulnerable library: dagger-browser/browser/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- jest-24.9.0.tgz
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.4.5","packageFilePaths":["/browser/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;jest:24.9.0;jest-cli:24.9.0;@jest/core:24.9.0;@jest/reporters:24.9.0;istanbul-reports:2.2.6;handlebars:4.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - v4.7.7"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23383","vulnerabilityDetails":"The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23383 (High) detected in handlebars-4.4.5.tgz - ## CVE-2021-23383 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.5.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.5.tgz</a></p>
<p>Path to dependency file: dagger-browser/browser/package.json</p>
<p>Path to vulnerable library: dagger-browser/browser/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- jest-24.9.0.tgz
- jest-cli-24.9.0.tgz
- core-24.9.0.tgz
- reporters-24.9.0.tgz
- istanbul-reports-2.2.6.tgz
- :x: **handlebars-4.4.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-05-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383>CVE-2021-23383</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23383</a></p>
<p>Release Date: 2021-05-04</p>
<p>Fix Resolution: handlebars - v4.7.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.4.5","packageFilePaths":["/browser/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;jest:24.9.0;jest-cli:24.9.0;@jest/core:24.9.0;@jest/reporters:24.9.0;istanbul-reports:2.2.6;handlebars:4.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - v4.7.7"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-23383","vulnerabilityDetails":"The package handlebars before 4.7.7 are vulnerable to Prototype Pollution when selecting certain compiling options to compile templates coming from an untrusted source.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23383","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file dagger browser browser package json path to vulnerable library dagger browser browser node modules handlebars package json dependency hierarchy react scripts tgz root library jest tgz jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library found in base branch main vulnerability details the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts jest jest cli jest core jest reporters istanbul reports handlebars isminimumfixversionavailable true minimumfixversion handlebars basebranches vulnerabilityidentifier cve vulnerabilitydetails the package handlebars before are vulnerable to prototype pollution when selecting certain compiling options to compile templates coming from an untrusted source vulnerabilityurl
| 0
|
345,813
| 30,845,086,894
|
IssuesEvent
|
2023-08-02 13:15:53
|
ita-social-projects/Space2Study-Client-mvp
|
https://api.github.com/repos/ita-social-projects/Space2Study-Client-mvp
|
closed
|
(SP: 1) Write unit test for "AccordionWithImage" component
|
FrontEnd part Unit test
|
### Component unit test
Unit test for "AccordionWithImage" component
Scenaries descriptions:
- [x] Imitate user click on title and it should open content of it
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/accordion-with-image/AccordionWithImage.spec.jsx)
Current coverage:

|
1.0
|
(SP: 1) Write unit test for "AccordionWithImage" component - ### Component unit test
Unit test for "AccordionWithImage" component
Scenaries descriptions:
- [x] Imitate user click on title and it should open content of it
[Link to component](https://github.com/ita-social-projects/Space2Study-Client-mvp/blob/develop/tests/unit/components/accordion-with-image/AccordionWithImage.spec.jsx)
Current coverage:

|
test
|
sp write unit test for accordionwithimage component component unit test unit test for accordionwithimage component scenaries descriptions imitate user click on title and it should open content of it current coverage
| 1
|
21,907
| 3,925,949,297
|
IssuesEvent
|
2016-04-22 21:03:05
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
SslStream tests: timeout
|
2 - In Progress System.Net test bug
|
http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_release_prtest/6135/console
```
System.Net.Security.Tests.SslStreamStreamToStreamTest.SslStream_StreamToStream_Authentication_Success [FAIL]
Handshake completed in the allotted time
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\SslStreamStreamToStreamTest.cs(36,0): at System.Net.Security.Tests.SslStreamStreamToStreamTest.SslStream_StreamToStream_Authentication_Success()
System.Net.Security.Tests.CertificateValidationClientServer.CertificateValidationClientServer_EndToEnd_Ok [FAIL]
Client/Server Authentication timed out.
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\CertificateValidationClientServer.cs(80,0): at System.Net.Security.Tests.CertificateValidationClientServer.<CertificateValidationClientServer_EndToEnd_Ok>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
System.Net.Security.Tests.ClientAsyncAuthenticateTest.ClientAsyncAuthenticate_EachProtocol_Success [FAIL]
Timed Out
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\ClientAsyncAuthenticateTest.cs(252,0): at System.Net.Security.Tests.ClientAsyncAuthenticateTest.<ClientAsyncSslHelper>d__5c.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\ClientAsyncAuthenticateTest.cs(108,0): at System.Net.Security.Tests.ClientAsyncAuthenticateTest.<ClientAsyncAuthenticate_EachProtocol_Success>d__16.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
```
|
1.0
|
SslStream tests: timeout - http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_release_prtest/6135/console
```
System.Net.Security.Tests.SslStreamStreamToStreamTest.SslStream_StreamToStream_Authentication_Success [FAIL]
Handshake completed in the allotted time
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\SslStreamStreamToStreamTest.cs(36,0): at System.Net.Security.Tests.SslStreamStreamToStreamTest.SslStream_StreamToStream_Authentication_Success()
System.Net.Security.Tests.CertificateValidationClientServer.CertificateValidationClientServer_EndToEnd_Ok [FAIL]
Client/Server Authentication timed out.
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\CertificateValidationClientServer.cs(80,0): at System.Net.Security.Tests.CertificateValidationClientServer.<CertificateValidationClientServer_EndToEnd_Ok>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
System.Net.Security.Tests.ClientAsyncAuthenticateTest.ClientAsyncAuthenticate_EachProtocol_Success [FAIL]
Timed Out
Expected: True
Actual: False
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\ClientAsyncAuthenticateTest.cs(252,0): at System.Net.Security.Tests.ClientAsyncAuthenticateTest.<ClientAsyncSslHelper>d__5c.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
d:\j\workspace\dotnet_corefx_windows_release_prtest\src\System.Net.Security\tests\FunctionalTests\ClientAsyncAuthenticateTest.cs(108,0): at System.Net.Security.Tests.ClientAsyncAuthenticateTest.<ClientAsyncAuthenticate_EachProtocol_Success>d__16.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
```
|
test
|
sslstream tests timeout system net security tests sslstreamstreamtostreamtest sslstream streamtostream authentication success handshake completed in the allotted time expected true actual false stack trace d j workspace dotnet corefx windows release prtest src system net security tests functionaltests sslstreamstreamtostreamtest cs at system net security tests sslstreamstreamtostreamtest sslstream streamtostream authentication success system net security tests certificatevalidationclientserver certificatevalidationclientserver endtoend ok client server authentication timed out expected true actual false stack trace d j workspace dotnet corefx windows release prtest src system net security tests functionaltests certificatevalidationclientserver cs at system net security tests certificatevalidationclientserver d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task system net security tests clientasyncauthenticatetest clientasyncauthenticate eachprotocol success timed out expected true actual false stack trace d j workspace dotnet corefx windows release prtest src system net security tests functionaltests clientasyncauthenticatetest cs at system net security tests clientasyncauthenticatetest d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task d j workspace dotnet corefx windows release prtest src system net security tests functionaltests clientasyncauthenticatetest cs at system net security tests clientasyncauthenticatetest d movenext end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task
| 1
|
42,937
| 17,373,019,825
|
IssuesEvent
|
2021-07-30 16:27:18
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Missing storage URL's for azurefiles StorageClass
|
Pri1 assigned-to-author container-service/svc doc-enhancement triaged
|
Hello,
we have limited egress connectivity for AKS clusters and during operations we figured out that this document missing URL for azure storage services, if PVC requested by cluster:
```
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //f6728f91dc4234cbcaf7b2c.file.core.windows.net/kubernetes-dynamic-pvc-239ff7d6-daa8-4515-952f-6754b958682e /var/lib/kubelet/pods/a0158db6-65c8-43e2-ad7e-a4c02c826d12/volumes/kubernetes.io~azure-file/pvc-239ff7d6-daa8-4515-952f-6754b958682e
Output: mount error(2): No such file or directory
```
I think it's worth to add wildcard `*.file.core.windows.net` to the list of required if there is default StorageClass `azurefile` in place.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a38439d0-0125-3445-1c0a-a03b940e315d
* Version Independent ID: c6436a35-94b9-6396-3b13-9291f0a17b21
* Content: [Restrict egress traffic in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic)
* Content Source: [articles/aks/limit-egress-traffic.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/limit-egress-traffic.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **jpalma**
|
1.0
|
Missing storage URL's for azurefiles StorageClass -
Hello,
we have limited egress connectivity for AKS clusters and during operations we figured out that this document missing URL for azure storage services, if PVC requested by cluster:
```
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //f6728f91dc4234cbcaf7b2c.file.core.windows.net/kubernetes-dynamic-pvc-239ff7d6-daa8-4515-952f-6754b958682e /var/lib/kubelet/pods/a0158db6-65c8-43e2-ad7e-a4c02c826d12/volumes/kubernetes.io~azure-file/pvc-239ff7d6-daa8-4515-952f-6754b958682e
Output: mount error(2): No such file or directory
```
I think it's worth to add wildcard `*.file.core.windows.net` to the list of required if there is default StorageClass `azurefile` in place.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a38439d0-0125-3445-1c0a-a03b940e315d
* Version Independent ID: c6436a35-94b9-6396-3b13-9291f0a17b21
* Content: [Restrict egress traffic in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic)
* Content Source: [articles/aks/limit-egress-traffic.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/limit-egress-traffic.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **jpalma**
|
non_test
|
missing storage url s for azurefiles storageclass hello we have limited egress connectivity for aks clusters and during operations we figured out that this document missing url for azure storage services if pvc requested by cluster mounting arguments t cifs o file mode dir mode vers actimeo mfsymlinks file core windows net kubernetes dynamic pvc var lib kubelet pods volumes kubernetes io azure file pvc output mount error no such file or directory i think it s worth to add wildcard file core windows net to the list of required if there is default storageclass azurefile in place document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login microsoft alias jpalma
| 0
|
2,032
| 4,162,623,308
|
IssuesEvent
|
2016-06-17 21:08:57
|
htwg-cloud-application-development/static-code-analytics-application
|
https://api.github.com/repos/htwg-cloud-application-development/static-code-analytics-application
|
closed
|
Implement logging and save log output into logfiles
|
all-microservices
|
example:
*In your class*
```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MyClass{
private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
...
```
*use*
```
LOG.debug("check repositoryUrl: " + repositoryUrl);
LOG.error(....);
...
```
*In application.yml: * _before: ---_
```
logging:
file: myservice.log
---
```
|
1.0
|
Implement logging and save log output into logfiles -
example:
*In your class*
```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MyClass{
private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
...
```
*use*
```
LOG.debug("check repositoryUrl: " + repositoryUrl);
LOG.error(....);
...
```
*In application.yml: * _before: ---_
```
logging:
file: myservice.log
---
```
|
non_test
|
implement logging and save log output into logfiles example in your class import org logger import org loggerfactory public class myclass private static final logger log loggerfactory getlogger myclass class use log debug check repositoryurl repositoryurl log error in application yml before logging file myservice log
| 0
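The logging record above quotes a deliberately truncated Java/slf4j snippet (get a named logger, log at debug/error level, route output to a `myservice.log` file via configuration). As an illustration of the same pattern in a self-contained form, here is a minimal Python sketch using the standard-library `logging` module; the logger name `MyClass`, the sample messages, and the `myservice.log` filename are taken from the record, while the format string and example repository URL are arbitrary assumptions for the demo.

```python
import logging

def get_logger(name: str, logfile: str = "myservice.log") -> logging.Logger:
    """Return a named logger that writes DEBUG-and-above records to logfile."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    # Guard against attaching duplicate handlers on repeated calls.
    if not logger.handlers:
        handler = logging.FileHandler(logfile)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s")
        )
        logger.addHandler(handler)
    return logger

# Mirrors the usage shown in the record's snippet.
log = get_logger("MyClass")
log.debug("check repositoryUrl: %s", "https://example.org/repo.git")  # hypothetical URL
log.error("failed to analyze repository")
```

Both log lines end up in `myservice.log`, which is the Python analogue of the `logging.file: myservice.log` setting in the quoted `application.yml` fragment.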
|
336,425
| 30,193,204,701
|
IssuesEvent
|
2023-07-04 17:27:00
|
MohistMC/Mohist
|
https://api.github.com/repos/MohistMC/Mohist
|
closed
|
[1.16.5] Guardvillagers Ticking entity crash
|
1.16.5 Needs Testing
|
**Minecraft Version :** 1.16.5
**Mohist Version :** 707
**Operating System :** Unbuntu 20
**Concerned mod / plugin** : Guardvillagers
**Logs :** https://haste.mohistmc.com/garegigovo.coffeescript
**Steps to Reproduce :**
1. I don’t know, it works correctly and didn’t give such failures before, apparently something influenced it, but I don’t know, the server is big, it’s impossible to keep track of all the players.
|
1.0
|
[1.16.5] Guardvillagers Ticking entity crash - **Minecraft Version :** 1.16.5
**Mohist Version :** 707
**Operating System :** Unbuntu 20
**Concerned mod / plugin** : Guardvillagers
**Logs :** https://haste.mohistmc.com/garegigovo.coffeescript
**Steps to Reproduce :**
1. I don’t know, it works correctly and didn’t give such failures before, apparently something influenced it, but I don’t know, the server is big, it’s impossible to keep track of all the players.
|
test
|
guardvillagers ticking entity crash minecraft version mohist version operating system unbuntu concerned mod plugin guardvillagers logs steps to reproduce i don’t know it works correctly and didn’t give such failures before apparently something influenced it but i don’t know the server is big it’s impossible to keep track of all the players
| 1
|
197,997
| 14,953,083,623
|
IssuesEvent
|
2021-01-26 16:16:22
|
pints-team/pints
|
https://api.github.com/repos/pints-team/pints
|
closed
|
Add value-based (numerical) tests for all samplers / optimisers
|
unit-testing
|
E.g.
- Seed
- Run 100 iterations
- Check that there's sufficient change within those iterations (and reduce n if possible)
- Store output, either in CSV or in code
- Compare
This would be _in addition to_ functional testing, and would be slightly annoying because you'd need to update the stored results any time you made changes. But probably still good to have to check the impact of e.g. refactoring
Thoughts @ben18785 @fcooper8472 @martinjrobins @DavAug @rcw5890 ?
|
1.0
|
Add value-based (numerical) tests for all samplers / optimisers - E.g.
- Seed
- Run 100 iterations
- Check that there's sufficient change within those iterations (and reduce n if possible)
- Store output, either in CSV or in code
- Compare
This would be _in addition to_ functional testing, and would be slightly annoying because you'd need to update the stored results any time you made changes. But probably still good to have to check the impact of e.g. refactoring
Thoughts @ben18785 @fcooper8472 @martinjrobins @DavAug @rcw5890 ?
|
test
|
add value based numerical tests for all samplers optimisers e g seed run iterations check that there s sufficient change within those iterations and reduce n if possible store output either in csv or in code compare this would be in addition to functional testing and would be slightly annoying because you d need to update the stored results any time you made changes but probably still good to have to check the impact of e g refactoring thoughts martinjrobins davaug
| 1
|
101,600
| 8,791,282,900
|
IssuesEvent
|
2018-12-21 12:05:28
|
SME-Issues/issues
|
https://api.github.com/repos/SME-Issues/issues
|
closed
|
Compound Query Tests Balance Payment - 21/12/18 11:01 - 5004
|
NLP Api PETEDEV pulse_tests
|
**Compound Query Tests Balance Payment**
- Total: 29
- Passed: 25
- **Pass: 24 (86%)**
- Not Understood: 1
- Error (not understood): 0
- Failed but Understood: 4 (14%)
|
1.0
|
Compound Query Tests Balance Payment - 21/12/18 11:01 - 5004 - **Compound Query Tests Balance Payment**
- Total: 29
- Passed: 25
- **Pass: 24 (86%)**
- Not Understood: 1
- Error (not understood): 0
- Failed but Understood: 4 (14%)
|
test
|
compound query tests balance payment compound query tests balance payment total passed pass not understood error not understood failed but understood
| 1
|
5,297
| 5,621,941,542
|
IssuesEvent
|
2017-04-04 11:26:20
|
Cadasta/cadasta-platform
|
https://api.github.com/repos/Cadasta/cadasta-platform
|
opened
|
No password verification when changing account's mail address
|
bug needs discussion security
|
### Steps to reproduce the error
1. Go to Edit Profile
2. Change mail address
### Actual behavior
User is not prompted to verify their password to confirm identity.
All the other fixes described in bug #1140 are properly included though:
1. Confirmation mail to new email address does not include username
2. The mail address is not actually updated until the verification link is clicked
3. Correct notification is sent to the old address
But I wonder if we should ask for password verification as well, as we do when changing the password for instance.
### Expected behavior
Ask for password verification?
@adri, what do you think?
|
True
|
No password verification when changing account's mail address - ### Steps to reproduce the error
1. Go to Edit Profile
2. Change mail address
### Actual behavior
User is not prompted to verify their password to confirm identity.
All the other fixes described in bug #1140 are properly included though:
1. Confirmation mail to new email address does not include username
2. The mail address is not actually updated until the verification link is clicked
3. Correct notification is sent to the old address
But I wonder if we should ask for password verification as well, as we do when changing the password for instance.
### Expected behavior
Ask for password verification?
@adri, what do you think?
|
non_test
|
no password verification when changing account s mail address steps to reproduce the error go to edit profile change mail address actual behavior user is not prompted to verify their password to confirm identity all the other fixes described in bug are properly included though confirmation mail to new email address does not include username the mail address is not actually updated until the verification link is clicked correct notification is sent to the old address but i wonder if we should ask for password verification as well as we do when changing the password for instance expected behavior ask for password verification adri what do you think
| 0
|
520,788
| 15,093,605,292
|
IssuesEvent
|
2021-02-07 01:30:41
|
rich-iannone/pointblank
|
https://api.github.com/repos/rich-iannone/pointblank
|
closed
|
Add tests for multiagent workflows
|
Difficulty: [2] Intermediate Effort: [2] Medium Priority: ♨︎ Critical Type: ★ Enhancement
|
Add tests for all functionality related to *multiagent* workflows. And, add a number of R scripts that can serve as manual tests.
|
1.0
|
Add tests for multiagent workflows - Add tests for all functionality related to *multiagent* workflows. And, add a number of R scripts that can serve as manual tests.
|
non_test
|
add tests for multiagent workflows add tests for all functionality related to multiagent workflows and add a number of r scripts that can serve as manual tests
| 0
|
81,248
| 15,608,239,699
|
IssuesEvent
|
2021-03-19 10:21:39
|
yamamoto42/multiappx
|
https://api.github.com/repos/yamamoto42/multiappx
|
opened
|
CVE-2020-7746 (High) detected in Chart-2.7.1.min.js
|
security vulnerability
|
## CVE-2020-7746 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Chart-2.7.1.min.js</b></p></summary>
<p>Simple HTML5 charts using the canvas element.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.min.js">https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.min.js</a></p>
<p>Path to dependency file: multiappx/dashboard.html</p>
<p>Path to vulnerable library: multiappx/dashboard.html</p>
<p>
Dependency Hierarchy:
- :x: **Chart-2.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yamamoto42/multiappx/commit/de90ee33d1e0929f45f4b6c40aa3135b09c86a20">de90ee33d1e0929f45f4b6c40aa3135b09c86a20</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package chart.js before 2.9.4. The options parameter is not properly sanitized when it is processed. When the options are processed, the existing options (or the defaults options) are deeply merged with provided options. However, during this operation, the keys of the object being set are not checked, leading to a prototype pollution.
<p>Publish Date: 2020-10-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7746>CVE-2020-7746</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7746">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7746</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: chart.js - 2.9.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7746 (High) detected in Chart-2.7.1.min.js - ## CVE-2020-7746 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Chart-2.7.1.min.js</b></p></summary>
<p>Simple HTML5 charts using the canvas element.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.min.js">https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.1/Chart.min.js</a></p>
<p>Path to dependency file: multiappx/dashboard.html</p>
<p>Path to vulnerable library: multiappx/dashboard.html</p>
<p>
Dependency Hierarchy:
- :x: **Chart-2.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yamamoto42/multiappx/commit/de90ee33d1e0929f45f4b6c40aa3135b09c86a20">de90ee33d1e0929f45f4b6c40aa3135b09c86a20</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package chart.js before 2.9.4. The options parameter is not properly sanitized when it is processed. When the options are processed, the existing options (or the defaults options) are deeply merged with provided options. However, during this operation, the keys of the object being set are not checked, leading to a prototype pollution.
<p>Publish Date: 2020-10-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7746>CVE-2020-7746</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7746">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7746</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: chart.js - 2.9.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in chart min js cve high severity vulnerability vulnerable library chart min js simple charts using the canvas element library home page a href path to dependency file multiappx dashboard html path to vulnerable library multiappx dashboard html dependency hierarchy x chart min js vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package chart js before the options parameter is not properly sanitized when it is processed when the options are processed the existing options or the defaults options are deeply merged with provided options however during this operation the keys of the object being set are not checked leading to a prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution chart js step up your open source security game with whitesource
| 0
|
101,679
| 21,766,817,215
|
IssuesEvent
|
2022-05-13 03:33:24
|
withfig/fig
|
https://api.github.com/repos/withfig/fig
|
closed
|
Figterm socket: No file at path ...
|
codebase:shell-integrations awaiting user reply
|
### Description:
> Please include a detailed description of the issue (and an image or screen recording, if applicable)

### Details:
|OS|Fig|Shell|
|-|-|-|
|macOS 12.2.1 (21D62)|1.0.56|-zsh|
<details><summary><code>fig diagnostic</code></summary>
<p>
# Fig Diagnostics
## Fig details:
- Fig version: Version 1.0.56 (B416) [U.S.]
- Bundle path: /Applications/Fig.app
- Autocomplete: true
- Settings.json: true
- Accessibility: true
- Number of specs: 0
- Symlinked dotfiles: false
- Only insert on tab: false
- Keybindings path:
- Installation Script: true
- PseudoTerminal Path: /Users/feel/.nvm/versions/node/v16.5.0/bin:/opt/homebrew/Cellar/zplug/2.4.2/bin:/opt/homebrew/opt/zplug/bin:/Users/feel/go/pkg:/Users/feel/go/bin:/Users/feel/bin:/Users/feel/Library/Python/3.8/bin:/Users/feel/.zplug:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feel/.cargo/bin:/opt/homebrew/opt/fzf/bin:/Users/feel/zk/bin:/Users/feel/code/Nim/bin:/Users/feel/.local/bin:/Users/feel/.fig/bin:/opt/homebrew/opt/zplug/repos/kazhala/dotbare
- SecureKeyboardInput: false
- SecureKeyboardProcess: <none>
## Hardware Info:
- Model Name: MacBook Air
- Model Identifier: MacBookAir10,1
- Chip:
- Cores: 8
- Memory: 8 GB
## OS Info:
- macOS 12.2.1 (21D62)
## Environment:
- User Shell: /bin/zsh
- Current Directory: /Users/feel
- CLI Installed: true
- Executable Location: /opt/homebrew/bin/fig
- Current Window ID: 4628/% (com.apple.Terminal)
- Active Process: ??? (???) - ???
- Installed via Brew: true
- Environment Variables:
- TERM=xterm-256color
- TERM_SESSION_ID=28F9454A-F948-499E-AA7B-43228933A4AD
- PATH=/Users/feel/.nvm/versions/node/v16.5.0/bin:/opt/homebrew/Cellar/zplug/2.4.2/bin:/opt/homebrew/opt/zplug/bin:/Users/feel/go/pkg:/Users/feel/go/bin:/Users/feel/bin:/Users/feel/Library/Python/3.8/bin:/Users/feel/.zplug:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feel/.cargo/bin:/opt/homebrew/opt/fzf/bin:/Users/feel/zk/bin:/Users/feel/code/Nim/bin:/Users/feel/.local/bin:/Users/feel/.fig/bin:/opt/homebrew/opt/zplug/repos/kazhala/dotbare
## Integrations:
- SSH: false
- TMUX: false
- iTerm: application is not present.
- Hyper: application is not present.
- Visual Studio Code: application is not present.
- Docker: false
</p>
</details>
|
1.0
|
Figterm socket: No file at path ... - ### Description:
> Please include a detailed description of the issue (and an image or screen recording, if applicable)

### Details:
|OS|Fig|Shell|
|-|-|-|
|macOS 12.2.1 (21D62)|1.0.56|-zsh|
<details><summary><code>fig diagnostic</code></summary>
<p>
# Fig Diagnostics
## Fig details:
- Fig version: Version 1.0.56 (B416) [U.S.]
- Bundle path: /Applications/Fig.app
- Autocomplete: true
- Settings.json: true
- Accessibility: true
- Number of specs: 0
- Symlinked dotfiles: false
- Only insert on tab: false
- Keybindings path:
- Installation Script: true
- PseudoTerminal Path: /Users/feel/.nvm/versions/node/v16.5.0/bin:/opt/homebrew/Cellar/zplug/2.4.2/bin:/opt/homebrew/opt/zplug/bin:/Users/feel/go/pkg:/Users/feel/go/bin:/Users/feel/bin:/Users/feel/Library/Python/3.8/bin:/Users/feel/.zplug:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feel/.cargo/bin:/opt/homebrew/opt/fzf/bin:/Users/feel/zk/bin:/Users/feel/code/Nim/bin:/Users/feel/.local/bin:/Users/feel/.fig/bin:/opt/homebrew/opt/zplug/repos/kazhala/dotbare
- SecureKeyboardInput: false
- SecureKeyboardProcess: <none>
## Hardware Info:
- Model Name: MacBook Air
- Model Identifier: MacBookAir10,1
- Chip:
- Cores: 8
- Memory: 8 GB
## OS Info:
- macOS 12.2.1 (21D62)
## Environment:
- User Shell: /bin/zsh
- Current Directory: /Users/feel
- CLI Installed: true
- Executable Location: /opt/homebrew/bin/fig
- Current Window ID: 4628/% (com.apple.Terminal)
- Active Process: ??? (???) - ???
- Installed via Brew: true
- Environment Variables:
- TERM=xterm-256color
- TERM_SESSION_ID=28F9454A-F948-499E-AA7B-43228933A4AD
- PATH=/Users/feel/.nvm/versions/node/v16.5.0/bin:/opt/homebrew/Cellar/zplug/2.4.2/bin:/opt/homebrew/opt/zplug/bin:/Users/feel/go/pkg:/Users/feel/go/bin:/Users/feel/bin:/Users/feel/Library/Python/3.8/bin:/Users/feel/.zplug:/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feel/.cargo/bin:/opt/homebrew/opt/fzf/bin:/Users/feel/zk/bin:/Users/feel/code/Nim/bin:/Users/feel/.local/bin:/Users/feel/.fig/bin:/opt/homebrew/opt/zplug/repos/kazhala/dotbare
## Integrations:
- SSH: false
- TMUX: false
- iTerm: application is not present.
- Hyper: application is not present.
- Visual Studio Code: application is not present.
- Docker: false
</p>
</details>
|
non_test
|
figterm socket no file at path description please include a detailed description of the issue and an image or screen recording if applicable details os fig shell macos zsh fig diagnostic fig diagnostics fig details fig version version bundle path applications fig app autocomplete true settings json true accessibility true number of specs symlinked dotfiles false only insert on tab false keybindings path installation script true pseudoterminal path users feel nvm versions node bin opt homebrew cellar zplug bin opt homebrew opt zplug bin users feel go pkg users feel go bin users feel bin users feel library python bin users feel zplug opt homebrew bin usr local bin usr bin bin usr sbin sbin opt bin library apple usr bin users feel cargo bin opt homebrew opt fzf bin users feel zk bin users feel code nim bin users feel local bin users feel fig bin opt homebrew opt zplug repos kazhala dotbare securekeyboardinput false securekeyboardprocess hardware info model name macbook air model identifier chip cores memory gb os info macos environment user shell bin zsh current directory users feel cli installed true executable location opt homebrew bin fig current window id com apple terminal active process installed via brew true environment variables term xterm term session id path users feel nvm versions node bin opt homebrew cellar zplug bin opt homebrew opt zplug bin users feel go pkg users feel go bin users feel bin users feel library python bin users feel zplug opt homebrew bin usr local bin usr bin bin usr sbin sbin opt bin library apple usr bin users feel cargo bin opt homebrew opt fzf bin users feel zk bin users feel code nim bin users feel local bin users feel fig bin opt homebrew opt zplug repos kazhala dotbare integrations ssh false tmux false iterm application is not present hyper application is not present visual studio code application is not present docker false
| 0
|
311,774
| 26,813,020,253
|
IssuesEvent
|
2023-02-02 00:34:17
|
ipfs/kubo
|
https://api.github.com/repos/ipfs/kubo
|
reopened
|
Flaky test: ci/gh-experiment: go test: pubsub_msg_seen_cache_test
|
topic/test failure need/analysis
|
Creating issue so others don't spend time investigating.
`TestMessageSeenCacheTTL` sometimes fails on GitHub version of our CI (log below).
@galargh I suggest we wait with investigation until https://github.com/ipfs/kubo/pull/9543 lands, as these tests will be refactored by the mentioned PR anyway.
https://github.com/ipfs/kubo/actions/runs/3926432440/jobs/6712202879#step:5:11887:
```
=== FAIL: test/integration TestMessageSeenCacheTTL (1.54s)
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
pubsub_msg_seen_cache_test.go:136: bootstrap peer=QmeDXb79V25BrSP4ENDvqZMd7qBr7SXuKSNwxxQawjdbwK, consumer peer=QmaxMa42XFxgG8Rn6TPCvsSKfZ4dbxxwnstt4AK8xaqUqS, producer peer=QmZWxchjiixjTNgh5TA8XZFrYfQPSaDkiRRH8AFCkmy41e
pubsub_msg_seen_cache_test.go:114: sending [msg_1] with duplicate message ID at [Jan 16 02:10:31.625]
pubsub_msg_seen_cache_test.go:191: did not receive [msg_1] by [Jan 16 02:10:32.626]
pubsub_msg_seen_cache_test.go:192: context deadline exceeded
```
|
1.0
|
Flaky test: ci/gh-experiment: go test: pubsub_msg_seen_cache_test - Creating issue so others don't spend time investigating.
`TestMessageSeenCacheTTL` sometimes fails on GitHub version of our CI (log below).
@galargh I suggest we wait with investigation until https://github.com/ipfs/kubo/pull/9543 lands, as these tests will be refactored by the mentioned PR anyway.
https://github.com/ipfs/kubo/actions/runs/3926432440/jobs/6712202879#step:5:11887:
```
=== FAIL: test/integration TestMessageSeenCacheTTL (1.54s)
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
Computing default go-libp2p Resource Manager limits based on:
- 'Swarm.ResourceMgr.MaxMemory': "1.8 GB"
- 'Swarm.ResourceMgr.MaxFileDescriptors': 32768
Applying any user-supplied overrides on top.
Run 'ipfs swarm limit all' to see the resulting limits.
pubsub_msg_seen_cache_test.go:136: bootstrap peer=QmeDXb79V25BrSP4ENDvqZMd7qBr7SXuKSNwxxQawjdbwK, consumer peer=QmaxMa42XFxgG8Rn6TPCvsSKfZ4dbxxwnstt4AK8xaqUqS, producer peer=QmZWxchjiixjTNgh5TA8XZFrYfQPSaDkiRRH8AFCkmy41e
pubsub_msg_seen_cache_test.go:114: sending [msg_1] with duplicate message ID at [Jan 16 02:10:31.625]
pubsub_msg_seen_cache_test.go:191: did not receive [msg_1] by [Jan 16 02:10:32.626]
pubsub_msg_seen_cache_test.go:192: context deadline exceeded
```
|
test
|
flaky test ci gh experiment go test pubsub msg seen cache test creating issue so others don t spend time investigating testmessageseencachettl sometimes fails on github version of our ci log below galargh i suggest we wait with investigation until lands as these tests will be refactored by the mentioned pr anyway fail test integration testmessageseencachettl computing default go resource manager limits based on swarm resourcemgr maxmemory gb swarm resourcemgr maxfiledescriptors applying any user supplied overrides on top run ipfs swarm limit all to see the resulting limits computing default go resource manager limits based on swarm resourcemgr maxmemory gb swarm resourcemgr maxfiledescriptors applying any user supplied overrides on top run ipfs swarm limit all to see the resulting limits computing default go resource manager limits based on swarm resourcemgr maxmemory gb swarm resourcemgr maxfiledescriptors applying any user supplied overrides on top run ipfs swarm limit all to see the resulting limits pubsub msg seen cache test go bootstrap peer consumer peer producer peer pubsub msg seen cache test go sending with duplicate message id at pubsub msg seen cache test go did not receive by pubsub msg seen cache test go context deadline exceeded
| 1
|
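The flaky test above exercises a seen-messages cache with a TTL: a duplicate message ID is suppressed only while its cache entry is younger than the TTL, after which the same ID is delivered again. A minimal sketch of that behavior (names are illustrative, not taken from the kubo/go-libp2p code):

```javascript
// Hypothetical TTL-based "seen messages" cache, as exercised by
// TestMessageSeenCacheTTL: entries expire after ttlMs, so a duplicate
// message ID is only deduplicated within the TTL window.
class SeenCache {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;            // injectable clock for testing
    this.entries = new Map();  // msgId -> insertion timestamp
  }
  // True if msgId was marked seen within the last ttlMs.
  hasSeen(msgId) {
    const t = this.entries.get(msgId);
    if (t === undefined) return false;
    if (this.now() - t >= this.ttlMs) {
      this.entries.delete(msgId); // lazily drop expired entries
      return false;
    }
    return true;
  }
  markSeen(msgId) {
    this.entries.set(msgId, this.now());
  }
}

// Simulated clock shows expiry without real waiting.
let clock = 0;
const cache = new SeenCache(1000, () => clock);
cache.markSeen("msg_1");
console.log(cache.hasSeen("msg_1")); // → true (within TTL)
clock += 1500;
console.log(cache.hasSeen("msg_1")); // → false (entry expired)
```

Timing-dependent assertions like "did not receive [msg_1] by [deadline]" are exactly where such tests get flaky on slow CI runners, since a real clock stands in for the simulated one above.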
48,244
| 10,225,410,148
|
IssuesEvent
|
2019-08-16 15:05:24
|
canonical-web-and-design/maas.io
|
https://api.github.com/repos/canonical-web-and-design/maas.io
|
closed
|
Add "Copy this" button (?) to logs
|
Review: Code +1 Review: Design +1 Review: QA +1
|
## Summary
Logs in MAAS can be very long and difficult to copy, add the "copy" button to the top right corner of each log.
## Current and expected result
Currently: hard to copy logs
Expected: one-click to copy logs
|
1.0
|
Add "Copy this" button (?) to logs - ## Summary
Logs in MAAS can be very long and difficult to copy, add the "copy" button to the top right corner of each log.
## Current and expected result
Currently: hard to copy logs
Expected: one-click to copy logs
|
non_test
|
add copy this button to logs summary logs in maas can be very long and difficult to copy add the copy button to the top right corner of each log current and expected result currently hard to copy logs expected one click to copy logs
| 0
|
135,900
| 5,266,666,188
|
IssuesEvent
|
2017-02-04 15:03:27
|
TauCetiStation/TauCetiClassic
|
https://api.github.com/repos/TauCetiStation/TauCetiClassic
|
closed
|
Error when examining a pool of blood
|
bug priority: low
|
#### Detailed description of the problem
Examining a pool of blood gives "This is bloody blood."
#### What should have happened
It shows "This is blood."
#### What actually happened
It shows "This is bloody blood."
#### How to reproduce
Find blood and examine it.
#### Additional information:
http://prnt.sc/d3t3hp
|
1.0
|
Error when examining a pool of blood -
#### Detailed description of the problem
Examining a pool of blood gives "This is bloody blood."
#### What should have happened
It shows "This is blood."
#### What actually happened
It shows "This is bloody blood."
#### How to reproduce
Find blood and examine it.
#### Additional information:
http://prnt.sc/d3t3hp
|
non_test
|
error when examining a pool of blood detailed description of the problem examining a pool of blood gives this is bloody blood what should have happened it shows this is blood what actually happened it shows this is bloody blood how to reproduce find blood and examine it additional information
| 0
|
7,336
| 2,894,099,816
|
IssuesEvent
|
2015-06-15 21:27:26
|
luchanz/ext-med-angular
|
https://api.github.com/repos/luchanz/ext-med-angular
|
closed
|
In the ambulance association, the "other" type is ignored
|
test
|
They don't want to lose it in the migration
|
1.0
|
In the ambulance association, the "other" type is ignored - They don't want to lose it in the migration
|
test
|
in the ambulance association the other type is ignored they don t want to lose it in the migration
| 1
|
169,977
| 13,168,385,987
|
IssuesEvent
|
2020-08-11 12:02:21
|
appium/appium
|
https://api.github.com/repos/appium/appium
|
closed
|
Can not set picker wheel on iOS
|
ThirdParty XCUITest
|
xcode beta4
latest appium master code
when try to set value on picker wheel for year ,
Context.iOSDriver.findElementByIosClassChain("**/XCUIElementTypePickerWheel[wdVisible==1][2]")).sendKeys("2021");
Enqueue Failure: Unsupported picker wheel "2020" PickerWheel of type 6 /Users/test/appium_master/master/node_modules/appium-webdriveragent/WebDriverAgentRunner/UITestingUITests.m 38 1
what does type 6 mean ?
2020-08-07 09:58:34:406 - [HTTP] --> POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element
2020-08-07 09:58:34:406 - [HTTP] {"using":"-ios class chain","value":"/XCUIElementTypePickerWheel[wdVisible==1][2]"}
2020-08-07 09:58:34:406 - [debug] [W3C (93a5c218)] Calling AppiumDriver.findElement() with args: ["-ios class chain","/XCUIElementTypePickerWheel[wdVisible==1][2]","93a5c218-d3a6-49c3-a59a-035f91993e64"]
2020-08-07 09:58:34:406 - [debug] [XCUITest] Executing command 'findElement'
2020-08-07 09:58:34:406 - [debug] [BaseDriver] Valid locator strategies for this request: xpath, id, name, class name, -ios predicate string, -ios class chain, accessibility id
2020-08-07 09:58:34:406 - [debug] [BaseDriver] Waiting up to 30000 ms for condition
2020-08-07 09:58:34:407 - [debug] [WD Proxy] Matched '/element' to command name 'findElement'
2020-08-07 09:58:34:407 - [debug] [WD Proxy] Proxying [POST /element] to [POST http://127.0.0.1:8100/session/B9A936C8-1CF6-47A2-B3D5-48DD5D64048F/element] with body: {"using":"class chain","value":"**/XCUIElementTypePickerWheel[wdVisible==1][2]"}
2020-08-07 09:58:34:414 - [Xcode] 2020-08-07 09:58:34.414021-0400 WebDriverAgentRunner-Runner[28658:1072377] Getting the most recent active application (out of 1 total items)
2020-08-07 09:58:34:414 - [Xcode]
2020-08-07 09:58:34:415 - [Xcode] t = 69647.95s Get all elements bound by accessibility element for: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:415 - [Xcode]
2020-08-07 09:58:34:416 - [Xcode] t = 69647.95s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:416 - [Xcode]
2020-08-07 09:58:34:591 - [Xcode] t = 69648.13s Find: Descendants matching type PickerWheel
2020-08-07 09:58:34:591 - [Xcode]
2020-08-07 09:58:34:592 - [Xcode] t = 69648.13s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:593 - [Xcode]
2020-08-07 09:58:34:616 - [Xcode] t = 69648.15s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:616 - [Xcode]
2020-08-07 09:58:34:791 - [Xcode] t = 69648.33s Find: Descendants matching type PickerWheel
2020-08-07 09:58:34:791 - [Xcode]
2020-08-07 09:58:34:792 - [Xcode] t = 69648.33s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:792 - [Xcode]
2020-08-07 09:58:34:809 - [Xcode] t = 69648.35s Find: Identity Binding
2020-08-07 09:58:34:809 - [Xcode]
2020-08-07 09:58:34:816 - [debug] [WD Proxy] Got response with status 200: {"value":{"ELEMENT":"98040000-0000-0000-098F-000000000000","element-6066-11e4-a52e-4f735466cecf":"98040000-0000-0000-098F-000000000000"},"sessionId":"B9A936C8-1CF6-47A2-B3D5-48DD5D64048F"}
2020-08-07 09:58:34:816 - [debug] [W3C (93a5c218)] Responding to client with driver.findElement() result: {"element-6066-11e4-a52e-4f735466cecf":"98040000-0000-0000-098F-000000000000","ELEMENT":"98040000-0000-0000-098F-000000000000"}
2020-08-07 09:58:34:816 - [HTTP] <-- POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element 200 410 ms - 137
2020-08-07 09:58:34:816 - [HTTP]
2020-08-07 09:58:34:842 - [HTTP] --> POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element/98040000-0000-0000-098F-000000000000/value
2020-08-07 09:58:34:842 - [HTTP] {"id":"98040000-0000-0000-098F-000000000000","text":"2021","value":["2","0","2","1"]}
2020-08-07 09:58:34:843 - [debug] [W3C (93a5c218)] Calling AppiumDriver.setValue() with args: [["2","0","2","1"],"98040000-0000-0000-098F-000000000000","93a5c218-d3a6-49c3-a59a-035f91993e64"]
2020-08-07 09:58:34:843 - [debug] [XCUITest] Executing command 'setValue'
2020-08-07 09:58:34:843 - [debug] [WD Proxy] Matched '/element/98040000-0000-0000-098F-000000000000/value' to command name 'setValue'
2020-08-07 09:58:34:843 - [debug] [Protocol Converter] Added 'text' property "2021" to 'setValue' request body
2020-08-07 09:58:34:843 - [debug] [WD Proxy] Proxying [POST /element/98040000-0000-0000-098F-000000000000/value] to [POST http://127.0.0.1:8100/session/B9A936C8-1CF6-47A2-B3D5-48DD5D64048F/element/98040000-0000-0000-098F-000000000000/value] with body: {"value":["2","0","2","1"],"text":"2021"}
2020-08-07 09:58:34:846 - [Xcode] t = 69648.38s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:846 - [Xcode]
2020-08-07 09:58:35:001 - [Xcode] t = 69648.54s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:002 - [Xcode]
2020-08-07 09:58:35:003 - [Xcode] t = 69648.54s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:003 - [Xcode]
2020-08-07 09:58:35:020 - [Xcode] t = 69648.56s Find: Identity Binding
2020-08-07 09:58:35:020 - [Xcode]
2020-08-07 09:58:35:026 - [Xcode] t = 69648.56s Find the "2020" PickerWheel
2020-08-07 09:58:35:026 - [Xcode]
2020-08-07 09:58:35:028 - [Xcode] t = 69648.56s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:028 - [Xcode]
2020-08-07 09:58:35:203 - [Xcode] t = 69648.74s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:203 - [Xcode]
2020-08-07 09:58:35:204 - [Xcode] t = 69648.74s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:204 - [Xcode]
2020-08-07 09:58:35:220 - [Xcode] t = 69648.76s Find: Identity Binding
2020-08-07 09:58:35:220 - [Xcode]
2020-08-07 09:58:35:228 - [Xcode] t = 69648.76s Set value of "2020" PickerWheel to 2021
2020-08-07 09:58:35:228 - [Xcode]
2020-08-07 09:58:35:228 - [Xcode] t = 69648.77s Wait for com.dayforce.Dayforce.development to idle
2020-08-07 09:58:35:228 - [Xcode]
2020-08-07 09:58:35:232 - [Xcode] t = 69648.77s Find the "2020" PickerWheel
2020-08-07 09:58:35:232 - [Xcode]
2020-08-07 09:58:35:234 - [Xcode] t = 69648.77s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:234 - [Xcode]
2020-08-07 09:58:35:398 - [Xcode] t = 69648.93s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:398 - [Xcode]
2020-08-07 09:58:35:400 - [Xcode] t = 69648.94s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:400 - [Xcode]
2020-08-07 09:58:35:418 - [Xcode] t = 69648.95s Find: Identity Binding
2020-08-07 09:58:35:418 - [Xcode]
2020-08-07 09:58:35:426 - [Xcode] t = 69648.96s Check for interrupting elements affecting "2020" PickerWheel
2020-08-07 09:58:35:426 - [Xcode]
2020-08-07 09:58:35:434 - [Xcode] t = 69648.97s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:434 - [Xcode]
2020-08-07 09:58:35:554 - [Xcode] t = 69649.09s Find: Descendants matching predicate identifier == "NotificationShortLookView" OR elementType == 7
2020-08-07 09:58:35:555 - [Xcode]
2020-08-07 09:58:35:561 - [Xcode] t = 69649.10s Synthesize event
2020-08-07 09:58:35:561 - [Xcode]
2020-08-07 09:58:35:565 - [Xcode] 2020-08-07 09:58:35.564964-0400 WebDriverAgentRunner-Runner[28658:1072377] Enqueue Failure: Unsupported picker wheel "2020" PickerWheel of type 6 /Users/test/appium_master/master/node_modules/appium-webdriveragent/WebDriverAgentRunner/UITestingUITests.m 38 1
2020-08-07 09:58:35:565 - [Xcode]
|
1.0
|
Can not set picker wheel on iOS - xcode beta4
latest appium master code
when try to set value on picker wheel for year ,
Context.iOSDriver.findElementByIosClassChain("**/XCUIElementTypePickerWheel[wdVisible==1][2]")).sendKeys("2021");
Enqueue Failure: Unsupported picker wheel "2020" PickerWheel of type 6 /Users/test/appium_master/master/node_modules/appium-webdriveragent/WebDriverAgentRunner/UITestingUITests.m 38 1
what does type 6 mean ?
2020-08-07 09:58:34:406 - [HTTP] --> POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element
2020-08-07 09:58:34:406 - [HTTP] {"using":"-ios class chain","value":"/XCUIElementTypePickerWheel[wdVisible==1][2]"}
2020-08-07 09:58:34:406 - [debug] [W3C (93a5c218)] Calling AppiumDriver.findElement() with args: ["-ios class chain","/XCUIElementTypePickerWheel[wdVisible==1][2]","93a5c218-d3a6-49c3-a59a-035f91993e64"]
2020-08-07 09:58:34:406 - [debug] [XCUITest] Executing command 'findElement'
2020-08-07 09:58:34:406 - [debug] [BaseDriver] Valid locator strategies for this request: xpath, id, name, class name, -ios predicate string, -ios class chain, accessibility id
2020-08-07 09:58:34:406 - [debug] [BaseDriver] Waiting up to 30000 ms for condition
2020-08-07 09:58:34:407 - [debug] [WD Proxy] Matched '/element' to command name 'findElement'
2020-08-07 09:58:34:407 - [debug] [WD Proxy] Proxying [POST /element] to [POST http://127.0.0.1:8100/session/B9A936C8-1CF6-47A2-B3D5-48DD5D64048F/element] with body: {"using":"class chain","value":"**/XCUIElementTypePickerWheel[wdVisible==1][2]"}
2020-08-07 09:58:34:414 - [Xcode] 2020-08-07 09:58:34.414021-0400 WebDriverAgentRunner-Runner[28658:1072377] Getting the most recent active application (out of 1 total items)
2020-08-07 09:58:34:414 - [Xcode]
2020-08-07 09:58:34:415 - [Xcode] t = 69647.95s Get all elements bound by accessibility element for: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:415 - [Xcode]
2020-08-07 09:58:34:416 - [Xcode] t = 69647.95s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:416 - [Xcode]
2020-08-07 09:58:34:591 - [Xcode] t = 69648.13s Find: Descendants matching type PickerWheel
2020-08-07 09:58:34:591 - [Xcode]
2020-08-07 09:58:34:592 - [Xcode] t = 69648.13s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:593 - [Xcode]
2020-08-07 09:58:34:616 - [Xcode] t = 69648.15s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:616 - [Xcode]
2020-08-07 09:58:34:791 - [Xcode] t = 69648.33s Find: Descendants matching type PickerWheel
2020-08-07 09:58:34:791 - [Xcode]
2020-08-07 09:58:34:792 - [Xcode] t = 69648.33s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:34:792 - [Xcode]
2020-08-07 09:58:34:809 - [Xcode] t = 69648.35s Find: Identity Binding
2020-08-07 09:58:34:809 - [Xcode]
2020-08-07 09:58:34:816 - [debug] [WD Proxy] Got response with status 200: {"value":{"ELEMENT":"98040000-0000-0000-098F-000000000000","element-6066-11e4-a52e-4f735466cecf":"98040000-0000-0000-098F-000000000000"},"sessionId":"B9A936C8-1CF6-47A2-B3D5-48DD5D64048F"}
2020-08-07 09:58:34:816 - [debug] [W3C (93a5c218)] Responding to client with driver.findElement() result: {"element-6066-11e4-a52e-4f735466cecf":"98040000-0000-0000-098F-000000000000","ELEMENT":"98040000-0000-0000-098F-000000000000"}
2020-08-07 09:58:34:816 - [HTTP] <-- POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element 200 410 ms - 137
2020-08-07 09:58:34:816 - [HTTP]
2020-08-07 09:58:34:842 - [HTTP] --> POST /wd/hub/session/93a5c218-d3a6-49c3-a59a-035f91993e64/element/98040000-0000-0000-098F-000000000000/value
2020-08-07 09:58:34:842 - [HTTP] {"id":"98040000-0000-0000-098F-000000000000","text":"2021","value":["2","0","2","1"]}
2020-08-07 09:58:34:843 - [debug] [W3C (93a5c218)] Calling AppiumDriver.setValue() with args: [["2","0","2","1"],"98040000-0000-0000-098F-000000000000","93a5c218-d3a6-49c3-a59a-035f91993e64"]
2020-08-07 09:58:34:843 - [debug] [XCUITest] Executing command 'setValue'
2020-08-07 09:58:34:843 - [debug] [WD Proxy] Matched '/element/98040000-0000-0000-098F-000000000000/value' to command name 'setValue'
2020-08-07 09:58:34:843 - [debug] [Protocol Converter] Added 'text' property "2021" to 'setValue' request body
2020-08-07 09:58:34:843 - [debug] [WD Proxy] Proxying [POST /element/98040000-0000-0000-098F-000000000000/value] to [POST http://127.0.0.1:8100/session/B9A936C8-1CF6-47A2-B3D5-48DD5D64048F/element/98040000-0000-0000-098F-000000000000/value] with body: {"value":["2","0","2","1"],"text":"2021"}
2020-08-07 09:58:34:846 - [Xcode] t = 69648.38s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:34:846 - [Xcode]
2020-08-07 09:58:35:001 - [Xcode] t = 69648.54s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:002 - [Xcode]
2020-08-07 09:58:35:003 - [Xcode] t = 69648.54s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:003 - [Xcode]
2020-08-07 09:58:35:020 - [Xcode] t = 69648.56s Find: Identity Binding
2020-08-07 09:58:35:020 - [Xcode]
2020-08-07 09:58:35:026 - [Xcode] t = 69648.56s Find the "2020" PickerWheel
2020-08-07 09:58:35:026 - [Xcode]
2020-08-07 09:58:35:028 - [Xcode] t = 69648.56s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:028 - [Xcode]
2020-08-07 09:58:35:203 - [Xcode] t = 69648.74s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:203 - [Xcode]
2020-08-07 09:58:35:204 - [Xcode] t = 69648.74s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:204 - [Xcode]
2020-08-07 09:58:35:220 - [Xcode] t = 69648.76s Find: Identity Binding
2020-08-07 09:58:35:220 - [Xcode]
2020-08-07 09:58:35:228 - [Xcode] t = 69648.76s Set value of "2020" PickerWheel to 2021
2020-08-07 09:58:35:228 - [Xcode]
2020-08-07 09:58:35:228 - [Xcode] t = 69648.77s Wait for com.dayforce.Dayforce.development to idle
2020-08-07 09:58:35:228 - [Xcode]
2020-08-07 09:58:35:232 - [Xcode] t = 69648.77s Find the "2020" PickerWheel
2020-08-07 09:58:35:232 - [Xcode]
2020-08-07 09:58:35:234 - [Xcode] t = 69648.77s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:234 - [Xcode]
2020-08-07 09:58:35:398 - [Xcode] t = 69648.93s Find: Descendants matching type PickerWheel
2020-08-07 09:58:35:398 - [Xcode]
2020-08-07 09:58:35:400 - [Xcode] t = 69648.94s Find: Elements matching predicate 'isWDVisible == 1 AND (1 == 1 OR identifier == 0 OR frame == 0 OR value == 0 OR title == 0 OR label == 0 OR elementType == 0 OR enabled == 0 OR placeholderValue == 0 OR selected == 0)'
2020-08-07 09:58:35:400 - [Xcode]
2020-08-07 09:58:35:418 - [Xcode] t = 69648.95s Find: Identity Binding
2020-08-07 09:58:35:418 - [Xcode]
2020-08-07 09:58:35:426 - [Xcode] t = 69648.96s Check for interrupting elements affecting "2020" PickerWheel
2020-08-07 09:58:35:426 - [Xcode]
2020-08-07 09:58:35:434 - [Xcode] t = 69648.97s Requesting snapshot of accessibility hierarchy for app with pid 36617
2020-08-07 09:58:35:434 - [Xcode]
2020-08-07 09:58:35:554 - [Xcode] t = 69649.09s Find: Descendants matching predicate identifier == "NotificationShortLookView" OR elementType == 7
2020-08-07 09:58:35:555 - [Xcode]
2020-08-07 09:58:35:561 - [Xcode] t = 69649.10s Synthesize event
2020-08-07 09:58:35:561 - [Xcode]
2020-08-07 09:58:35:565 - [Xcode] 2020-08-07 09:58:35.564964-0400 WebDriverAgentRunner-Runner[28658:1072377] Enqueue Failure: Unsupported picker wheel "2020" PickerWheel of type 6 /Users/test/appium_master/master/node_modules/appium-webdriveragent/WebDriverAgentRunner/UITestingUITests.m 38 1
2020-08-07 09:58:35:565 - [Xcode]
|
test
|
can not set picker wheel on ios xcode latest appium master code when try to set value on picker wheel for year context iosdriver findelementbyiosclasschain xcuielementtypepickerwheel sendkeys enqueue failure unsupported picker wheel pickerwheel of type users test appium master master node modules appium webdriveragent webdriveragentrunner uitestinguitests m what does type mean post wd hub session element using ios class chain value xcuielementtypepickerwheel calling appiumdriver findelement with args executing command findelement valid locator strategies for this request xpath id name class name ios predicate string ios class chain accessibility id waiting up to ms for condition matched element to command name findelement proxying to with body using class chain value xcuielementtypepickerwheel webdriveragentrunner runner getting the most recent active application out of total items t get all elements bound by accessibility element for elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching type pickerwheel t find elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching type pickerwheel t find elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t find identity binding got response with status value element element sessionid responding to client with driver findelement result element element post wd hub session element ms post wd hub session element value id text value calling appiumdriver setvalue with args executing command setvalue matched element value to command name setvalue added text 
property to setvalue request body proxying to with body value text t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching type pickerwheel t find elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t find identity binding t find the pickerwheel t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching type pickerwheel t find elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t find identity binding t set value of pickerwheel to t wait for com dayforce dayforce development to idle t find the pickerwheel t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching type pickerwheel t find elements matching predicate iswdvisible and or identifier or frame or value or title or label or elementtype or enabled or placeholdervalue or selected t find identity binding t check for interrupting elements affecting pickerwheel t requesting snapshot of accessibility hierarchy for app with pid t find descendants matching predicate identifier notificationshortlookview or elementtype t synthesize event webdriveragentrunner runner enqueue failure unsupported picker wheel pickerwheel of type users test appium master master node modules appium webdriveragent webdriveragentrunner uitestinguitests m
| 1
|
48,886
| 6,111,864,068
|
IssuesEvent
|
2017-06-21 18:01:05
|
WikiWatershed/model-my-watershed
|
https://api.github.com/repos/WikiWatershed/model-my-watershed
|
closed
|
IE11 Content Cutoff in Full Screen
|
design
|
When running IE11 in maximized mode, content is cut off at the bottom:

This doesn't happen when running in windowed mode:

Find and correct the mistake.
|
1.0
|
IE11 Content Cutoff in Full Screen - When running IE11 in maximized mode, content is cut off at the bottom:

This doesn't happen when running in windowed mode:

Find and correct the mistake.
|
non_test
|
content cutoff in full screen when running in maximized mode content is cut off at the bottom this doesn t happen when running in windowed mode find and correct the mistake
| 0
|
90,716
| 26,173,055,364
|
IssuesEvent
|
2023-01-02 04:54:59
|
CGNS/CGNS
|
https://api.github.com/repos/CGNS/CGNS
|
opened
|
[CGNS-210] Confirm that configure autotools works on both platforms and all build options and document on web page either it or cmake can be used
|
Task Build Major To Do
|
> This issue has been imported from JIRA. Read the [original ticket here](https://cgnsorg.atlassian.net/browse/CGNS-210).
- _**Created at:**_ Mon, 27 Apr 2020 02:26:42 -0500
|
1.0
|
[CGNS-210] Confirm that configure autotools works on both platforms and all build options and document on web page either it or cmake can be used -
> This issue has been imported from JIRA. Read the [original ticket here](https://cgnsorg.atlassian.net/browse/CGNS-210).
- _**Created at:**_ Mon, 27 Apr 2020 02:26:42 -0500
|
non_test
|
confirm that configure autotools works on both platforms and all build options and document on web page either it or cmake can be used this issue has been imported from jira read the created at mon apr
| 0
|
120,981
| 10,145,151,454
|
IssuesEvent
|
2019-08-05 02:45:19
|
matomo-org/matomo
|
https://api.github.com/repos/matomo-org/matomo
|
closed
|
Running UI tests removes tables from dev database
|
Bug c: Tests & QA
|
Seems to happen whenever I run any UI tests, e.g.
```
./console tests:run-ui ScheduledReports
```
I have my `[database_tests]` set up to use a different database to my dev one, and the console output suggests this is being used:
```
Setting up fixture Piwik\Plugins\ScheduledReports\tests\Fixtures\ReportSubscription...
Dropping database 'matomotest'...
Database matomotest marked as successfully set up.
```
When I next visit Matomo in the browser, I get an error because the `user_dashboard` table doesn't exist in my dev DB. I've also seen errors due to missing `user_language` and `custom_dimensions` tables when trying to run console commands.
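For reference, the split being relied on looks roughly like this in Matomo's `config/config.ini.php` (the `[database]` / `[database_tests]` section names are Matomo's convention; the host and database values below are placeholders):
```ini
[database]
host = "localhost"
dbname = "matomo_dev"

[database_tests]
host = "localhost"
dbname = "matomotest"
```
With this in place, only `matomotest` should ever be dropped by the test runner, which is what makes the missing tables in the dev database surprising.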
|
1.0
|
Running UI tests removes tables from dev database - Seems to happen whenever I run any UI tests, e.g.
```
./console tests:run-ui ScheduledReports
```
I have my `[database_tests]` set up to use a different database to my dev one, and the console output suggests this is being used:
```
Setting up fixture Piwik\Plugins\ScheduledReports\tests\Fixtures\ReportSubscription...
Dropping database 'matomotest'...
Database matomotest marked as successfully set up.
```
When I next visit Matomo in the browser, I get an error because the `user_dashboard` table doesn't exist in my dev DB. I've also seen errors due to missing `user_language` and `custom_dimensions` tables when trying to run console commands.
|
test
|
running ui tests removes tables from dev database seems to happen whenever i run any ui tests e g console tests run ui scheduledreports i have my set up to use a different database to my dev one and the console output suggests this is being used setting up fixture piwik plugins scheduledreports tests fixtures reportsubscription dropping database matomotest database matomotest marked as successfully set up when i next visit matomo in the browser i get an error because the user dashboard table doesn t exist in my dev db i ve also seen errors due to missing user language and custom dimensions tables when trying to run console commands
| 1
|
35,403
| 7,733,493,349
|
IssuesEvent
|
2018-05-26 12:32:08
|
petasis/tkdnd
|
https://api.github.com/repos/petasis/tkdnd
|
closed
|
error: garbage collection is no longer supported
|
Priority-Medium Type-Defect auto-migrated
|
```
Hello,
I just tried to compile this on OSX 10.10 and I get the error:
error: garbage collection is no longer supported
which stops the build with error.
here is the whole output:
megrimm-mbpro:tkdnd-read-only megrimm$ cmake .
-- The C compiler identification is AppleClang 6.0.0.6000054
-- The CXX compiler identification is AppleClang 6.0.0.6000054
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- ===========================================================
-- Welcome to the tkdnd 2.7 build system!
-- * Selected generator: Unix Makefiles
-- * Operating System ID: Darwin-14.0.0-x86_64
-- * Installation Directory: /usr/local
-- ===========================================================
-- Searching for Tcl/Tk...
-- Found Tclsh: /usr/bin/tclsh (found version "8.5")
-- Found TCL:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tcl.framework
-- Found TCLTK:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tcl.framework
-- Found TK:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tk.framework
-- TCL_INCLUDE_PATH:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/Tcl.framework/Headers
-- TCL_STUB_LIBRARY:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/usr/lib/libtclstub8.5.a
-- TK_INCLUDE_PATH:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/Tk.framework/Headers
-- TK_STUB_LIBRARY:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/usr/lib/libtkstub8.5.a
-- + Shared Library: tkdnd
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/megrimm/Desktop/tkdnd-read-only
megrimm-mbpro:tkdnd-read-only megrimm$ make
Scanning dependencies of target tkdnd2.7
[100%] Building CXX object CMakeFiles/tkdnd2.7.dir/macosx/macdnd.m.o
error: garbage collection is no longer supported
make[2]: *** [CMakeFiles/tkdnd2.7.dir/macosx/macdnd.m.o] Error 1
make[1]: *** [CMakeFiles/tkdnd2.7.dir/all] Error 2
make: *** [all] Error 2
```
Original issue reported on code.google.com by `megr...@gmail.com` on 19 Nov 2014 at 8:12
|
1.0
|
error: garbage collection is no longer supported - ```
Hello,
I just tried to compile this on OSX 10.10 and I get the error:
error: garbage collection is no longer supported
which stops the build with error.
here is the whole output:
megrimm-mbpro:tkdnd-read-only megrimm$ cmake .
-- The C compiler identification is AppleClang 6.0.0.6000054
-- The CXX compiler identification is AppleClang 6.0.0.6000054
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- ===========================================================
-- Welcome to the tkdnd 2.7 build system!
-- * Selected generator: Unix Makefiles
-- * Operating System ID: Darwin-14.0.0-x86_64
-- * Installation Directory: /usr/local
-- ===========================================================
-- Searching for Tcl/Tk...
-- Found Tclsh: /usr/bin/tclsh (found version "8.5")
-- Found TCL:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tcl.framework
-- Found TCLTK:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tcl.framework
-- Found TK:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/tk.framework
-- TCL_INCLUDE_PATH:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/Tcl.framework/Headers
-- TCL_STUB_LIBRARY:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/usr/lib/libtclstub8.5.a
-- TK_INCLUDE_PATH:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/System/Library/Frameworks/Tk.framework/Headers
-- TK_STUB_LIBRARY:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/S
DKs/MacOSX10.10.sdk/usr/lib/libtkstub8.5.a
-- + Shared Library: tkdnd
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/megrimm/Desktop/tkdnd-read-only
megrimm-mbpro:tkdnd-read-only megrimm$ make
Scanning dependencies of target tkdnd2.7
[100%] Building CXX object CMakeFiles/tkdnd2.7.dir/macosx/macdnd.m.o
error: garbage collection is no longer supported
make[2]: *** [CMakeFiles/tkdnd2.7.dir/macosx/macdnd.m.o] Error 1
make[1]: *** [CMakeFiles/tkdnd2.7.dir/all] Error 2
make: *** [all] Error 2
```
Original issue reported on code.google.com by `megr...@gmail.com` on 19 Nov 2014 at 8:12
|
non_test
|
error garbage collection is no longer supported hello i just tried to compile this on osx and i get the error error garbage collection is no longer supported which stops the build with error here is the whole output megrimm mbpro tkdnd read only megrimm cmake the c compiler identification is appleclang the cxx compiler identification is appleclang check for working c compiler usr bin cc check for working c compiler usr bin cc works detecting c compiler abi info detecting c compiler abi info done check for working cxx compiler usr bin c check for working cxx compiler usr bin c works detecting cxx compiler abi info detecting cxx compiler abi info done welcome to the tkdnd build system selected generator unix makefiles operating system id darwin installation directory usr local searching for tcl tk found tclsh usr bin tclsh found version found tcl applications xcode app contents developer platforms macosx platform developer s dks sdk system library frameworks tcl framework found tcltk applications xcode app contents developer platforms macosx platform developer s dks sdk system library frameworks tcl framework found tk applications xcode app contents developer platforms macosx platform developer s dks sdk system library frameworks tk framework tcl include path applications xcode app contents developer platforms macosx platform developer s dks sdk system library frameworks tcl framework headers tcl stub library applications xcode app contents developer platforms macosx platform developer s dks sdk usr lib a tk include path applications xcode app contents developer platforms macosx platform developer s dks sdk system library frameworks tk framework headers tk stub library applications xcode app contents developer platforms macosx platform developer s dks sdk usr lib a shared library tkdnd configuring done generating done build files have been written to users megrimm desktop tkdnd read only megrimm mbpro tkdnd read only megrimm make scanning dependencies of target 
building cxx object cmakefiles dir macosx macdnd m o error garbage collection is no longer supported make error make error make error original issue reported on code google com by megr gmail com on nov at
| 0
|
194,183
| 14,670,462,219
|
IssuesEvent
|
2020-12-30 05:00:32
|
wprig/wprig
|
https://api.github.com/repos/wprig/wprig
|
closed
|
entry-header rules too specific
|
bug css has PR needs-testing
|
The selector for `.entry-title` seems too specific in `./assets/css/src/_typography.css`:
https://github.com/wprig/wprig/blob/v2.0/assets/css/src/_typography.css#L31-L32
I think it should be just
```css
.entry-title,
.page-title {
...
}
```
Is there a reason for this level of specificity?
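To illustrate the concern (the extra ancestor shown here is hypothetical; the actual rule is whatever sits at L31-L32 of `_typography.css`): one extra class in the selector raises its specificity, so plain child-theme overrides silently lose.
```css
/* specificity 0,2,0 — a child theme's bare `.entry-title` rule cannot override this */
.entry-header .entry-title {
	font-weight: 700;
}

/* specificity 0,1,0 — the proposed simpler form stays easy to override */
.entry-title {
	font-weight: 700;
}
```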
|
1.0
|
entry-header rules too specific - The selector for `.entry-title` seems too specific in `./assets/css/src/_typography.css`:
https://github.com/wprig/wprig/blob/v2.0/assets/css/src/_typography.css#L31-L32
I think it should be just
```css
.entry-title,
.page-title {
...
}
```
Is there a reason for this level of specificity?
|
test
|
entry header rules too specific the selector for entry title seems too specific in assets css src typography css i think it should be just css entry title page title is there a reason for this level of specificity
| 1
|
306,508
| 26,474,721,797
|
IssuesEvent
|
2023-01-17 10:14:56
|
wazuh/wazuh
|
https://api.github.com/repos/wazuh/wazuh
|
opened
|
Different checksum forever when comparing agent-groups
|
type/bug module/cluster team/framework release test/4.4.0
|
|Wazuh version|Component|
|---|---|
| 4.4.0 | Wazuh cluster |
## Description
During the workload testing of 4.4-beta1 (https://github.com/wazuh/wazuh/issues/15938#issuecomment-1385106895), the following problem has been found. It turns out that manager_3 has been showing logs like these (they are seen up to 64 more times):
```
2023/01/16 13:57:13 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] The master's checksum and the worker's checksum are different. Local checksum: 394e44a39235255b70848c1a39abe2e5161edad9 | Master checksum: fa83a177d6dc7836b77d40012d3913f9d2d9cc7b.
2023/01/16 13:57:13 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Sent request to obtain all agent-groups information from the master node.
2023/01/16 13:57:13 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Finished in 0.074s. Updated 1 chunks.
```
However, in the master it is only seen 6 times in total:
```
2023/01/16 14:07:25 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Obtained 26 chunks of data in 1.156s.
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Requested entire agent-groups information by the worker node. Starting.
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Sending all agent-groups information from the master node database.
2023/01/16 14:07:25 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Sending chunks.
2023/01/16 14:07:27 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Main] Command received: b'syn_wgc_e'
2023/01/16 14:07:27 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Finished in 3.118s. Updated 26 chunks.
```
Also, despite the fact that the worker receives a response for at least those 6 `Agent-groups recv full` requests:
```
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] Starting.
2023/01/16 14:07:27 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] 26/26 chunks updated in wazuh-db in 1.849s.
2023/01/16 14:07:27 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] Finished in 1.866s. Updated 26 chunks.
```
It keeps forever (until the end of the test) saying that the checksum of the received and existing `agent-groups` is different:
```
2023/01/16 14:07:54 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Starting.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] 1/1 chunks updated in wazuh-db in 0.000s.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Obtained 1 chunks of data in 0.020s.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] The master's checksum and the worker's checksum are different. Local checksum: 394e44a39235255b70848c1a39abe2e5161edad9 | Master checksum: fa83a177d6dc7836b77d40012d3913f9d2d9cc7b.
2023/01/16 14:07:54 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Sent request to obtain all agent-groups information from the master node.
```
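Abstractly, one `Agent-groups recv` cycle as described in these logs amounts to the following (a hypothetical simplification for illustration, not Wazuh's actual implementation; `checksum` here is a toy stand-in): after a full sync the two checksums should match, which is exactly the step that never converges above.

```python
import hashlib

def checksum(groups):
    # Order-independent hash of an agent -> groups mapping (toy stand-in).
    return hashlib.sha1(repr(sorted(groups.items())).encode()).hexdigest()

master_db = {"001": "default,group1", "002": "default"}
worker_db = {"001": "default"}  # stale view on the worker

# A checksum mismatch triggers a request for the full agent-groups set;
# applying every received chunk should make the hashes agree again.
if checksum(worker_db) != checksum(master_db):
    worker_db = dict(master_db)
```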
## Checks
<!-- Do not modify, this will be ticked during development -->
The following elements have been updated or reviewed (should also be checked if no modification is required):
- [ ] Tests (unit tests, API integration tests).
- [ ] Changelog.
- [ ] Documentation.
- [ ] Integration test mapping (using `api/test/integration/mapping/_test_mapping.py`).
|
1.0
|
Different checksum forever when comparing agent-groups - |Wazuh version|Component|
|---|---|
| 4.4.0 | Wazuh cluster |
## Description
During the workload testing of 4.4-beta1 (https://github.com/wazuh/wazuh/issues/15938#issuecomment-1385106895), the following problem has been found. It turns out that manager_3 has been showing logs like these (they are seen up to 64 more times):
```
2023/01/16 13:57:13 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] The master's checksum and the worker's checksum are different. Local checksum: 394e44a39235255b70848c1a39abe2e5161edad9 | Master checksum: fa83a177d6dc7836b77d40012d3913f9d2d9cc7b.
2023/01/16 13:57:13 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Sent request to obtain all agent-groups information from the master node.
2023/01/16 13:57:13 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Finished in 0.074s. Updated 1 chunks.
```
However, in the master it is only seen 6 times in total:
```
2023/01/16 14:07:25 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Obtained 26 chunks of data in 1.156s.
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Requested entire agent-groups information by the worker node. Starting.
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Sending all agent-groups information from the master node database.
2023/01/16 14:07:25 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Sending chunks.
2023/01/16 14:07:27 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Main] Command received: b'syn_wgc_e'
2023/01/16 14:07:27 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups send full] Finished in 3.118s. Updated 26 chunks.
```
Also, despite the fact that the worker receives a response for at least those 6 `Agent-groups recv full` requests:
```
2023/01/16 14:07:25 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] Starting.
2023/01/16 14:07:27 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] 26/26 chunks updated in wazuh-db in 1.849s.
2023/01/16 14:07:27 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv full] Finished in 1.866s. Updated 26 chunks.
```
It keeps forever (until the end of the test) saying that the checksum of the received and existing `agent-groups` is different:
```
2023/01/16 14:07:54 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Starting.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] 1/1 chunks updated in wazuh-db in 0.000s.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Obtained 1 chunks of data in 0.020s.
2023/01/16 14:07:54 DEBUG: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] The master's checksum and the worker's checksum are different. Local checksum: 394e44a39235255b70848c1a39abe2e5161edad9 | Master checksum: fa83a177d6dc7836b77d40012d3913f9d2d9cc7b.
2023/01/16 14:07:54 INFO: [Worker CLUSTER-Workload_benchmarks_metrics_B235_manager_3] [Agent-groups recv] Sent request to obtain all agent-groups information from the master node.
```
## Checks
<!-- Do not modify, this will be ticked during development -->
The following elements have been updated or reviewed (should also be checked if no modification is required):
- [ ] Tests (unit tests, API integration tests).
- [ ] Changelog.
- [ ] Documentation.
- [ ] Integration test mapping (using `api/test/integration/mapping/_test_mapping.py`).
|
test
|
different checksum forever when comparing agent groups wazuh version component wazuh cluster description during the workload testing of the following problem has been found it turns out that manager has been showing logs like these they are seen up to more times debug the master s checksum and the worker s checksum are different local checksum master checksum info sent request to obtain all agent groups information from the master node info finished in updated chunks however in the master it is only seen times in total debug obtained chunks of data in info requested entire agent groups information by the worker node starting info sending all agent groups information from the master node database debug sending chunks debug command received b syn wgc e info finished in updated chunks also despite the fact that the worker receives a response for at least those agent groups recv full requests info starting debug chunks updated in wazuh db in info finished in updated chunks it keeps forever until the end of the test saying that the checksum of the received and existing agent groups is different info starting debug chunks updated in wazuh db in debug obtained chunks of data in debug the master s checksum and the worker s checksum are different local checksum master checksum info sent request to obtain all agent groups information from the master node checks the following elements have been updated or reviewed should also be checked if no modification is required tests unit tests api integration tests changelog documentation integration test mapping using api test integration mapping test mapping py
| 1
|
91,965
| 8,329,129,981
|
IssuesEvent
|
2018-09-27 04:50:13
|
jemaineosia/rfoverdose
|
https://api.github.com/repos/jemaineosia/rfoverdose
|
closed
|
Enable to sell weapon drops to the npc (intense and normal)
|
Fixed. Need to test.
|
iwkna55 C0072135 Dark Beam Saber(Normal)
iwswa55 C0072137 Dark Beam Sword(Normal)
iwaxa55 C0072139 Dark Bullova(Normal)
iwmaa55 C007213B Dark Beam Pressure(Normal)
iwfia55 C0072141 Dark Gatling(Normal)
iwlua55 C0072143 Dark Flame Launcher(Normal)
iwdaa55 C0072145 Dark Wing(Normal)
iwsta55 C0072146 Dark Staff(Normal)
iwspa55 C007213D Black Lance(Normal)
iwboa55 C007213F Black Siege Bow(Normal)
iwgea55 C0072149 Black Grenade Launcher(Normal)
iwknb55 Intense Dark Beam Saber(B)
iwswb55 Intense Dark Beam Sword(B)
iwaxb55 Intense Dark Bullova(B)
iwmab55 Intense Dark Beam Pressure(B)
iwfib55 Intense Dark Gatling(B)
iwlub55 Intense Dark Flame Launcher(B)
iwstb55 Intense Dark Staff(B)
iwgeb55 Intense Black Grenade Launcher(B)
iwbob55 Intense Black Siege Bow(B)
iwspb55 Intense Black Lance(B)
|
1.0
|
Enable to sell weapon drops to the npc (intense and normal) - iwkna55 C0072135 Dark Beam Saber(Normal)
iwswa55 C0072137 Dark Beam Sword(Normal)
iwaxa55 C0072139 Dark Bullova(Normal)
iwmaa55 C007213B Dark Beam Pressure(Normal)
iwfia55 C0072141 Dark Gatling(Normal)
iwlua55 C0072143 Dark Flame Launcher(Normal)
iwdaa55 C0072145 Dark Wing(Normal)
iwsta55 C0072146 Dark Staff(Normal)
iwspa55 C007213D Black Lance(Normal)
iwboa55 C007213F Black Siege Bow(Normal)
iwgea55 C0072149 Black Grenade Launcher(Normal)
iwknb55 Intense Dark Beam Saber(B)
iwswb55 Intense Dark Beam Sword(B)
iwaxb55 Intense Dark Bullova(B)
iwmab55 Intense Dark Beam Pressure(B)
iwfib55 Intense Dark Gatling(B)
iwlub55 Intense Dark Flame Launcher(B)
iwstb55 Intense Dark Staff(B)
iwgeb55 Intense Black Grenade Launcher(B)
iwbob55 Intense Black Siege Bow(B)
iwspb55 Intense Black Lance(B)
|
test
|
enable to sell weapon drops to the npc intense and normal dark beam saber normal dark beam sword normal dark bullova normal dark beam pressure normal dark gatling normal dark flame launcher normal dark wing normal dark staff normal black lance normal black siege bow normal black grenade launcher normal intense dark beam saber b intense dark beam sword b intense dark bullova b intense dark beam pressure b intense dark gatling b intense dark flame launcher b intense dark staff b intense black grenade launcher b intense black siege bow b intense black lance b
| 1
|
297,746
| 25,760,269,837
|
IssuesEvent
|
2022-12-08 19:52:15
|
IntellectualSites/FastAsyncWorldEdit
|
https://api.github.com/repos/IntellectualSites/FastAsyncWorldEdit
|
opened
|
Error with //snow command with a radius greater than 80
|
Requires Testing
|
### Server Implementation
Paper
### Server Version
1.19.2
### Describe the bug
When using the snow command with a radius greater than 80 (personally I did //snow 160), only part of the area is covered with snow (the northern half) and FAWE reports an error and asks to report the console error.
I tried on the last version of FAWE and the penultimate one, same error.
I did tests with different radii, but as soon as you try 81 and beyond it gives this error.
### To Reproduce
Type //snow [with a radius greater than 80]
### Expected behaviour
I want to snow cover an area with a radius of 160 blocks as I have been doing every winter for many years to cover my spawn city with snow. Before I used Worldedit, this is my first winter with FAWE.
### Screenshots / Videos
_No response_
### Error log (if applicable)
https://paste.gg/p/anonymous/57a87ca9447d447a91793b08b6d4cb10
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/ba4373c3c65a4d4daaadca34f1cdc2a4
### Fawe Version
FastAsyncWorldEdit version 2.4.10
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
_No response_
|
1.0
|
Error with //snow command with a radius greater than 80 - ### Server Implementation
Paper
### Server Version
1.19.2
### Describe the bug
When using the snow command with a radius greater than 80 (personally I did //snow 160), only part of the area is covered with snow (the northern half) and FAWE reports an error and asks to report the console error.
I tried on the last version of FAWE and the penultimate one, same error.
I did tests with different radii, but as soon as you try 81 and beyond it gives this error.
### To Reproduce
Type //snow [with a radius greater than 80]
### Expected behaviour
I want to snow cover an area with a radius of 160 blocks as I have been doing every winter for many years to cover my spawn city with snow. Before I used Worldedit, this is my first winter with FAWE.
### Screenshots / Videos
_No response_
### Error log (if applicable)
https://paste.gg/p/anonymous/57a87ca9447d447a91793b08b6d4cb10
### Fawe Debugpaste
https://athion.net/ISPaster/paste/view/ba4373c3c65a4d4daaadca34f1cdc2a4
### Fawe Version
FastAsyncWorldEdit version 2.4.10
### Checklist
- [X] I have included a Fawe debugpaste.
- [X] I am using the newest build from https://ci.athion.net/job/FastAsyncWorldEdit/ and the issue still persists.
### Anything else?
_No response_
|
test
|
error with snow command with a radius greater than server implementation paper server version describe the bug when using the snow command with a radius greater than personally i did snow only part of the area is covered with snow the northern half and fawe reports an error and asks to report the console error i tried on the last version of fawe and the penultimate one same error i did tests with different radii but as soon as you try and beyond it gives this error to reproduce type snow expected behaviour i want to snow cover an area with a radius of blocks as i have been doing every winter for many years to cover my spawn city with snow before i used worldedit this is my first winter with fawe screenshots videos no response error log if applicable fawe debugpaste fawe version fastasyncworldedit version checklist i have included a fawe debugpaste i am using the newest build from and the issue still persists anything else no response
| 1
|
63,684
| 7,738,138,769
|
IssuesEvent
|
2018-05-28 10:48:49
|
Joshrlear/Sprynamics
|
https://api.github.com/repos/Joshrlear/Sprynamics
|
closed
|
Zillow comps API
|
Maybe? enhancement user-designer
|
Can we add this API from Zillow: <https://www.zillow.com/howto/api/GetDeepComps.htm> which gives the ability to pull comparable property sales? I’d like to allow users to not only show where the property is on the map but also be able to show nearby sold properties (also if they could choose which comparables showed that would be a plus but not necessary now). This would add an additional charge to the total cost of the design so we would need to add the extra cost and make it known to the user that it will cost extra (don’t know how much yet… probably $5).
|
1.0
|
Zillow comps API - Can we add this API from Zillow: <https://www.zillow.com/howto/api/GetDeepComps.htm> which gives the ability to pull comparable property sales? I’d like to allow users to not only show where the property is on the map but also be able to show nearby sold properties (also if they could choose which comparables showed that would be a plus but not necessary now). This would add an additional charge to the total cost of the design so we would need to add the extra cost and make it known to the user that it will cost extra (don’t know how much yet… probably $5).
|
non_test
|
zillow comps api can we add this api from zillow which gives the ability to pull comparable property sales i’d like to allow users to not only show where the property is on the map but also be able to show nearby sold properties also if they could choose which comparables showed that would be a plus but not necessary now this would add an additional charge to the total cost of the design so we would need to add the extra cost and make it known to the user that it will cost extra don’t know how much yet… probably
| 0
|
15,076
| 3,440,225,969
|
IssuesEvent
|
2015-12-14 13:41:14
|
centreon/centreon
|
https://api.github.com/repos/centreon/centreon
|
closed
|
[2.7.0-RC2] Match the default timezone in Centreon UI to PHP.
|
BetaTest Kind/enhancement Status/Validated
|
Hello,
Would it be possible to match the default timezone Centreon to PHP date.timezone option ?
The default in Centreon UI is the timezone of Abidjan.
Regards.
|
1.0
|
[2.7.0-RC2] Match the default timezone in Centreon UI to PHP. - Hello,
Would it be possible to match the default timezone Centreon to PHP date.timezone option ?
The default in Centreon UI is the timezone of Abidjan.
Regards.
|
test
|
match the default timezone in centreon ui to php hello would it be possible to match the default timezone centreon to php date timezone option the default in centreon ui is the timezone of abidjan regards
| 1
|
142,572
| 21,786,720,633
|
IssuesEvent
|
2022-05-14 08:43:27
|
prql/prql
|
https://api.github.com/repos/prql/prql
|
closed
|
Window functions over groups
|
language-design
|
I've worked with SQL a lot and now already 5 years with Pandas, and sometimes both drive me absolutely mad. In some aspects Pandas in Python is nicer, but things like window functions are absolutely horrible.
Example:
orders
------
cell (a cell on a map)
hour (datetime but rounded to hours)
amount
with some calculations we should produce this:
orders
------
cell
hour_index (unix timestamp // 3600)
hour
day_of_year
day_of_week
week_of_year
year
And then for each row, calculate
1. A = orders for the same day of week and same hour for the previous 10 weeks.
2. B = calculate mean orders per whole day for the same days of week for the previous 10 weeks.
3. C = calculate total orders yesterday.
4. A * C / B
I've done only part 1 in Pandas, and it's awful.
dc = data.copy()
data['mean_orders'] = 0
for i in tqdm(range(1, 11)):
dc.week += 1
data = data.merge(dc[['cell', 'week', 'weekday', 'orders']], on=['cell', 'weekday', 'week'], suffixes=('', '_other'))
data['mean_orders'] += data['orders_other'] / 10
data.drop('orders_other', inplace=True, axis=1)
I can think of PostgreSQL equivalent (though window function names and params might not be exactly these):
select cell, first(orders),
sum(orders, 10) over (partition by cell, day_of_week, hour order by week) / 10 mean_same_hour_over_week,
sum(orders, 168) over (partition by cell order by hour) / 7 mean_per_day_over_week,
sum(b.orders)
from orders a inner join orders y on y.hour_index between (a.hour_index - a.hour) and (a.hour_index - a.hour - 24)
group by cell, orders;
...and then put it into a CTE and do the multiplication & division.
|
1.0
|
Window functions over groups - I've worked with SQL a lot and now already 5 years with Pandas, and sometimes both drive me absolutely mad. In some aspects Pandas in Python is nicer, but things like window functions are absolutely horrible.
Example:
orders
------
cell (a cell on a map)
hour (datetime but rounded to hours)
amount
with some calculations we should produce this:
orders
------
cell
hour_index (unix timestamp // 3600)
hour
day_of_year
day_of_week
week_of_year
year
And then for each row, calculate
1. A = orders for the same day of week and same hour for the previous 10 weeks.
2. B = calculate mean orders per whole day for the same days of week for the previous 10 weeks.
3. C = calculate total orders yesterday.
4. A * C / B
I've done only part 1 in Pandas, and it's awful.
dc = data.copy()
data['mean_orders'] = 0
for i in tqdm(range(1, 11)):
dc.week += 1
data = data.merge(dc[['cell', 'week', 'weekday', 'orders']], on=['cell', 'weekday', 'week'], suffixes=('', '_other'))
data['mean_orders'] += data['orders_other'] / 10
data.drop('orders_other', inplace=True, axis=1)
I can think of PostgreSQL equivalent (though window function names and params might not be exactly these):
select cell, first(orders),
sum(orders, 10) over (partition by cell, day_of_week, hour order by week) / 10 mean_same_hour_over_week,
sum(orders, 168) over (partition by cell order by hour) / 7 mean_per_day_over_week,
sum(b.orders)
from orders a inner join orders y on y.hour_index between (a.hour_index - a.hour) and (a.hour_index - a.hour - 24)
group by cell, orders;
...and then put it into a CTE and do the multiplication & division.
|
non_test
|
window functions over groups i ve worked with sql a lot and now already years with pandas and sometimes both drive me absolutely mad in some aspects pandas in python is nicer but things like window functions are absolutely horrible example orders cell a cell on a map hour datetime but rounded to hours amount with some calculations we should produce this orders cell hour index unix timestamp hour day of year day of week week of year year and then for each row calculate a orders for the same day of week and same hour for the previous weeks b calculate mean orders per whole day for the same days of week for the previous weeks c calculate total orders yesterday a c b i ve done only part in pandas and it s awful dc data copy data for i in tqdm range dc week data data merge dc on suffixes other data data data drop orders other inplace true axis i can think of postgresql equivalent though window function names and params might not be exactly these select cell first orders sum orders over partition by cell day of week hour order by week mean same hour over week sum orders over partition by cell order by hour mean per day over week sum b orders from orders a inner join orders y on y hour index between a hour index a hour and a hour index a hour group by cell orders and then put it into a cte and do the multiplication division
| 0
|