Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,018
| 20,188,227,598
|
IssuesEvent
|
2022-02-11 01:19:45
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Leverage a cloud application security broker (CASB)
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Networking & Connectivity Data flow
|
<a href="https://docs.microsoft.com/cloud-app-security/what-is-cloud-app-security">Leverage a cloud application security broker (CASB)</a>
<p><b>Why Consider This?</b></p>
CASBs can provide rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across Microsoft and third-party cloud services.
<p><b>Context</b></p>
<p><span>Moving to the cloud increases flexibility for employees and IT alike. However, it also introduces new challenges and complexities for keeping your organization secure. To get the full benefit of cloud apps and services, an IT team must find the right balance of supporting access while maintaining control to protect critical data."nbsp; </span></p><p><span>Microsoft Cloud App Security is a Cloud Access Security Broker that supports various deployment modes including log collection, API connectors, and reverse proxy. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your Microsoft and third-party cloud services.</span></p><p><span>Microsoft Cloud App Security natively integrates with leading Microsoft solutions and is designed with security professionals in mind. It provides simple deployment, centralized management, and innovative automation capabilities.</span></p><p><span>An effective CASB solution provides the following capabilities:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Discover and control the use of Shadow IT</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Protect your sensitive information anywhere in the cloud</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Protect against cyberthreats and anomalies</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Assess the compliance of your cloud apps</span></li></ul>
<p><b>Suggested Actions</b></p>
<p><span>Implement or enhance existing CASB to integrate with other cloud native solutions to improve threat protection capabilities.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/cloud-app-security/" target="_blank"><span>Microsoft Cloud App Security documentation</span></a><span /></p>
|
1.0
|
Leverage a cloud application security broker (CASB) - <a href="https://docs.microsoft.com/cloud-app-security/what-is-cloud-app-security">Leverage a cloud application security broker (CASB)</a>
<p><b>Why Consider This?</b></p>
CASBs can provide rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across Microsoft and third-party cloud services.
<p><b>Context</b></p>
<p><span>Moving to the cloud increases flexibility for employees and IT alike. However, it also introduces new challenges and complexities for keeping your organization secure. To get the full benefit of cloud apps and services, an IT team must find the right balance of supporting access while maintaining control to protect critical data."nbsp; </span></p><p><span>Microsoft Cloud App Security is a Cloud Access Security Broker that supports various deployment modes including log collection, API connectors, and reverse proxy. It provides rich visibility, control over data travel, and sophisticated analytics to identify and combat cyberthreats across all your Microsoft and third-party cloud services.</span></p><p><span>Microsoft Cloud App Security natively integrates with leading Microsoft solutions and is designed with security professionals in mind. It provides simple deployment, centralized management, and innovative automation capabilities.</span></p><p><span>An effective CASB solution provides the following capabilities:</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span>Discover and control the use of Shadow IT</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span>Protect your sensitive information anywhere in the cloud</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span>Protect against cyberthreats and anomalies</span></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><span>Assess the compliance of your cloud apps</span></li></ul>
<p><b>Suggested Actions</b></p>
<p><span>Implement or enhance existing CASB to integrate with other cloud native solutions to improve threat protection capabilities.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/cloud-app-security/" target="_blank"><span>Microsoft Cloud App Security documentation</span></a><span /></p>
|
process
|
leverage a cloud application security broker casb why consider this casbs can provide rich visibility control over data travel and sophisticated analytics to identify and combat cyberthreats across microsoft and third party cloud services context moving to the cloud increases flexibility for employees and it alike however it also introduces new challenges and complexities for keeping your organization secure to get the full benefit of cloud apps and services an it team must find the right balance of supporting access while maintaining control to protect critical data nbsp microsoft cloud app security is a cloud access security broker that supports various deployment modes including log collection api connectors and reverse proxy it provides rich visibility control over data travel and sophisticated analytics to identify and combat cyberthreats across all your microsoft and third party cloud services microsoft cloud app security natively integrates with leading microsoft solutions and is designed with security professionals in mind it provides simple deployment centralized management and innovative automation capabilities an effective casb solution provides the following capabilities discover and control the use of shadow it protect your sensitive information anywhere in the cloud protect against cyberthreats and anomalies assess the compliance of your cloud apps suggested actions implement or enhance existing casb to integrate with other cloud native solutions to improve threat protection capabilities learn more microsoft cloud app security documentation
| 1
|
75,068
| 15,391,330,370
|
IssuesEvent
|
2021-03-03 14:27:01
|
Madhusuthanan-B/FOO
|
https://api.github.com/repos/Madhusuthanan-B/FOO
|
opened
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: FOO/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: FOO/node_modules/sockjs/examples/hapi/html/index.html,FOO/node_modules/sockjs/examples/echo/index.html,FOO/node_modules/sockjs/examples/multiplex/index.html,FOO/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Madhusuthanan-B/FOO/commit/b157124c24c1ddf938f36ca47f9212b09527a6a9">b157124c24c1ddf938f36ca47f9212b09527a6a9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: FOO/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: FOO/node_modules/sockjs/examples/hapi/html/index.html,FOO/node_modules/sockjs/examples/echo/index.html,FOO/node_modules/sockjs/examples/multiplex/index.html,FOO/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Madhusuthanan-B/FOO/commit/b157124c24c1ddf938f36ca47f9212b09527a6a9">b157124c24c1ddf938f36ca47f9212b09527a6a9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file foo node modules sockjs examples hapi html index html path to vulnerable library foo node modules sockjs examples hapi html index html foo node modules sockjs examples echo index html foo node modules sockjs examples multiplex index html foo node modules sockjs examples express x index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
16,480
| 21,427,989,317
|
IssuesEvent
|
2022-04-23 00:47:57
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
opened
|
Cleanup job fails with multi-arch manifests
|
bug process
|
### Description
https://github.com/hashgraph/hedera-mirror-node/runs/6136689862
### Steps to reproduce
Run cleanup workflow
### Additional context
_No response_
### Hedera network
other
### Version
main
### Operating system
_No response_
|
1.0
|
Cleanup job fails with multi-arch manifests - ### Description
https://github.com/hashgraph/hedera-mirror-node/runs/6136689862
### Steps to reproduce
Run cleanup workflow
### Additional context
_No response_
### Hedera network
other
### Version
main
### Operating system
_No response_
|
process
|
cleanup job fails with multi arch manifests description steps to reproduce run cleanup workflow additional context no response hedera network other version main operating system no response
| 1
|
5,518
| 8,380,770,929
|
IssuesEvent
|
2018-10-07 18:01:32
|
bitshares/bitshares-community-ui
|
https://api.github.com/repos/bitshares/bitshares-community-ui
|
closed
|
Basic Login to Bitshares
|
Login feature process
|
Basic Login to Bitshares
+ prevent unauthorized access
+ validate account name
+ validate fields: both required & show errors

|
1.0
|
Basic Login to Bitshares - Basic Login to Bitshares
+ prevent unauthorized access
+ validate account name
+ validate fields: both required & show errors

|
process
|
basic login to bitshares basic login to bitshares prevent unauthorized access validate account name validate fields both required show errors
| 1
|
547,656
| 16,044,492,591
|
IssuesEvent
|
2021-04-22 12:07:43
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Armhf Runtime problem
|
kind/bug lang/C# priority/P2
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
v1.37.x + C# (Net 5.0)
### What operating system (Linux, Windows,...) and version?
Linux Arm (armhf / Raspbian) [Raspberry3]
### What runtime / compiler are you using (e.g. python version or version of gcc)
cmake
### What did you do?
Here is my script for compilation, and it compiles fine enough on some machines:
```
#!/bin/bash
###
#Install Tool Chain
###
sudo apt-get install build-essential autoconf libtool pkg-config
sudo apt-get install libgflags-dev libgtest-dev
sudo apt-get install clang libc++-dev
sudo apt-get install cmake
###
#Get gRPC Source Code
###
git clone https://github.com/grpc/grpc.git --branch v1.37.x
cd grpc
git submodule update --init
###
#Compile libgrpc_csharp_ext target
###
#idk what its for, but on RPI4 it breaks the script
set -ex
#intended to be run from the repository root
cd grpc/
#create location for build files
mkdir -p cmake/build
cd cmake/build
#setup the build files
cmake -DgRPC_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE="${MSBUILD_CONFIG}" ../..
#compile libgrpc_csharp_ext.so
make -j4 grpc_csharp_ext
#Copy the file back to repository root and rename it to what gRPC.Core currently looks for
cp libgrpc_csharp_ext.so ../../../libgrpc_csharp_ext.x86.so
```
### What did you expect to see?
Good Compilation + Working C# GRPC
### What did you see instead?
**Compilation [V]
C# GRPC ->**
_2021-04-20 17:46:01.7947|ERROR|LibPP54Lite.PP54LiteManager|System.InvalidOperationException: Unsupported architecture "Unknown".
at Grpc.Core.Internal.NativeExtension.GetArchitectureString()
at Grpc.Core.Internal.NativeExtension.GetNativeLibraryFilename()
at Grpc.Core.Internal.NativeExtension.LoadNativeMethodsUsingExplicitLoad()
at Grpc.Core.Internal.NativeExtension.LoadNativeMethods()
at Grpc.Core.Internal.NativeExtension..ctor()
at Grpc.Core.Internal.NativeExtension.Get()
at Grpc.Core.Internal.NativeMethods.Get()
at Grpc.Core.GrpcEnvironment.GrpcNativeInit()
at Grpc.Core.GrpcEnvironment..ctor()
at Grpc.Core.GrpcEnvironment.AddRef()
at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials, IEnumerable`1 options)
at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials)
at SharedFunctions_NetCore.ClientServer.GetCoreConnector(String corePort)_
Where GetCoreConnector is:
```
public static CoreProto.CoreProtoClient GetCoreConnector(string corePort)
{
log.Debug("Trying to connect to: 127.0.0.1:" + corePort);
var channel = new Channel("127.0.0.1:" + corePort, ChannelCredentials.Insecure);
var client = new CoreProto.CoreProtoClient(channel);
//no need to close channel
return client;
}
```
### Anything else we should know about your project / environment?
Last working and tested branch: V2.27.0
|
1.0
|
Armhf Runtime problem - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
v1.37.x + C# (Net 5.0)
### What operating system (Linux, Windows,...) and version?
Linux Arm (armhf / Raspbian) [Raspberry3]
### What runtime / compiler are you using (e.g. python version or version of gcc)
cmake
### What did you do?
Here is my script for compilation, and it compiles fine enough on some machines:
```
#!/bin/bash
###
#Install Tool Chain
###
sudo apt-get install build-essential autoconf libtool pkg-config
sudo apt-get install libgflags-dev libgtest-dev
sudo apt-get install clang libc++-dev
sudo apt-get install cmake
###
#Get gRPC Source Code
###
git clone https://github.com/grpc/grpc.git --branch v1.37.x
cd grpc
git submodule update --init
###
#Compile libgrpc_csharp_ext target
###
#idk what its for, but on RPI4 it breaks the script
set -ex
#intended to be run from the repository root
cd grpc/
#create location for build files
mkdir -p cmake/build
cd cmake/build
#setup the build files
cmake -DgRPC_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE="${MSBUILD_CONFIG}" ../..
#compile libgrpc_csharp_ext.so
make -j4 grpc_csharp_ext
#Copy the file back to repository root and rename it to what gRPC.Core currently looks for
cp libgrpc_csharp_ext.so ../../../libgrpc_csharp_ext.x86.so
```
### What did you expect to see?
Good Compilation + Working C# GRPC
### What did you see instead?
**Compilation [V]
C# GRPC ->**
_2021-04-20 17:46:01.7947|ERROR|LibPP54Lite.PP54LiteManager|System.InvalidOperationException: Unsupported architecture "Unknown".
at Grpc.Core.Internal.NativeExtension.GetArchitectureString()
at Grpc.Core.Internal.NativeExtension.GetNativeLibraryFilename()
at Grpc.Core.Internal.NativeExtension.LoadNativeMethodsUsingExplicitLoad()
at Grpc.Core.Internal.NativeExtension.LoadNativeMethods()
at Grpc.Core.Internal.NativeExtension..ctor()
at Grpc.Core.Internal.NativeExtension.Get()
at Grpc.Core.Internal.NativeMethods.Get()
at Grpc.Core.GrpcEnvironment.GrpcNativeInit()
at Grpc.Core.GrpcEnvironment..ctor()
at Grpc.Core.GrpcEnvironment.AddRef()
at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials, IEnumerable`1 options)
at Grpc.Core.Channel..ctor(String target, ChannelCredentials credentials)
at SharedFunctions_NetCore.ClientServer.GetCoreConnector(String corePort)_
Where GetCoreConnector is:
```
public static CoreProto.CoreProtoClient GetCoreConnector(string corePort)
{
log.Debug("Trying to connect to: 127.0.0.1:" + corePort);
var channel = new Channel("127.0.0.1:" + corePort, ChannelCredentials.Insecure);
var client = new CoreProto.CoreProtoClient(channel);
//no need to close channel
return client;
}
```
### Anything else we should know about your project / environment?
Last working and tested branch: V2.27.0
|
non_process
|
armhf runtime problem please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using x c net what operating system linux windows and version linux arm armhf raspbian what runtime compiler are you using e g python version or version of gcc cmake what did you do here is my script for compilation and it compiles fine enough on some machines bin bash install tool chain sudo apt get install build essential autoconf libtool pkg config sudo apt get install libgflags dev libgtest dev sudo apt get install clang libc dev sudo apt get install cmake get grpc source code git clone branch x cd grpc git submodule update init compile libgrpc csharp ext target idk what its for but on it breaks the script set ex intended to be run from the repository root cd grpc create location for build files mkdir p cmake build cd cmake build setup the build files cmake dgrpc build tests off dcmake build type msbuild config compile libgrpc csharp ext so make grpc csharp ext copy the file back to repository root and rename it to what grpc core currently looks for cp libgrpc csharp ext so libgrpc csharp ext so what did you expect to see good compilation working c grpc what did you see instead compilation c grpc error system invalidoperationexception unsupported architecture unknown at grpc core internal nativeextension getarchitecturestring at grpc core internal nativeextension getnativelibraryfilename at grpc core internal nativeextension loadnativemethodsusingexplicitload at grpc core internal nativeextension loadnativemethods at grpc core internal nativeextension ctor at grpc core internal nativeextension get at grpc core internal nativemethods get at grpc core grpcenvironment grpcnativeinit at grpc core grpcenvironment ctor at grpc core grpcenvironment addref at grpc core channel ctor string target channelcredentials credentials ienumerable options at grpc core channel ctor string target channelcredentials credentials at sharedfunctions netcore clientserver getcoreconnector string coreport where getcoreconnector is public static coreproto coreprotoclient getcoreconnector string coreport log debug trying to connect to coreport var channel new channel coreport channelcredentials insecure var client new coreproto coreprotoclient channel no need to close channel return client anything else we should know about your project environment last working and tested branch
| 0
|
340,806
| 10,278,787,926
|
IssuesEvent
|
2019-08-25 17:16:07
|
krzychu124/Cities-Skylines-Traffic-Manager-President-Edition
|
https://api.github.com/repos/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition
|
opened
|
[EPIC] WIP: Collating issues relating to pathfinding tweaks
|
EPIC JUNCTION RESTRICTIONS LANE ROUTING PARKING PATHFINDER PRIORITY SIGNS investigating
|
Will tidy this up later but just jotting down a few initial links (there are probably more that need adding, will search later)
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/19
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/189
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/135
|
1.0
|
[EPIC] WIP: Collating issues relating to pathfinding tweaks - Will tidy this up later but just jotting down a few initial links (there are probably more that need adding, will search later)
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/19
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/189
https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/135
|
non_process
|
wip collating issues relating to pathfinding tweaks will tidy this up later but just jotting down a few initial links there are probably more that need adding will search later
| 0
|
20,787
| 27,525,489,883
|
IssuesEvent
|
2023-03-06 17:44:20
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[Epic] Time zone metadata improvements for 46
|
Querying/Processor .Backend .Epic
|
A handful of improvements to the way we we deal with time zones in the Query Processor.
### Shovel-Ready
- [ ] #14056
- [ ] #27177
- [x] ~#6439~
### Details Being Hashed Out
- [ ] #4284
- [ ] Support per-Query overrides
### Related
- [ ] #5927
### Internal Documents
[Brain dump](https://www.notion.so/metabase/Sameer-s-brain-dump-on-timezones-7e5c4fce1f78482faa00a59638b3e53d)
[Open questions](https://www.notion.so/metabase/Splitting-report-timezone-open-questions-2511e67e45e448e6bbc21a8cba158f41)
[Timezone snapshot](https://www.notion.so/metabase/Timezones-snapshot-6124cdfba58447feadc69b53d7a38787)
[Product doc](https://www.notion.so/metabase/Unlock-advanced-date-time-manipulation-038f5cf80f364d7cb93341784169743c)
[Learnings from trying to split display timezone](https://www.notion.so/metabase/Splitting-report-timezone-open-questions-2511e67e45e448e6bbc21a8cba158f41#b340c45e058a4f58848a21903f0d9d24)
|
1.0
|
[Epic] Time zone metadata improvements for 46 - A handful of improvements to the way we we deal with time zones in the Query Processor.
### Shovel-Ready
- [ ] #14056
- [ ] #27177
- [x] ~#6439~
### Details Being Hashed Out
- [ ] #4284
- [ ] Support per-Query overrides
### Related
- [ ] #5927
### Internal Documents
[Brain dump](https://www.notion.so/metabase/Sameer-s-brain-dump-on-timezones-7e5c4fce1f78482faa00a59638b3e53d)
[Open questions](https://www.notion.so/metabase/Splitting-report-timezone-open-questions-2511e67e45e448e6bbc21a8cba158f41)
[Timezone snapshot](https://www.notion.so/metabase/Timezones-snapshot-6124cdfba58447feadc69b53d7a38787)
[Product doc](https://www.notion.so/metabase/Unlock-advanced-date-time-manipulation-038f5cf80f364d7cb93341784169743c)
[Learnings from trying to split display timezone](https://www.notion.so/metabase/Splitting-report-timezone-open-questions-2511e67e45e448e6bbc21a8cba158f41#b340c45e058a4f58848a21903f0d9d24)
|
process
|
time zone metadata improvements for a handful of improvements to the way we we deal with time zones in the query processor shovel ready details being hashed out support per query overrides related internal documents
| 1
|
451,574
| 13,038,743,162
|
IssuesEvent
|
2020-07-28 15:39:12
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Dashboards with questions, that user doesn't haven't permissions to, breaks completely
|
Administration/Permissions Priority:P1 Reporting/Dashboards Type:Bug
|
**Describe the bug**
If a dashboard contains a single question, which a user does not have permissions to, then the entire dashboard breaks for that user.
**To Reproduce**
1. Create a collection "C1" under root
2. Create dashboard "D1" in "C1"
3. Simple question > Sample Dataset > Orders - save as "Q1" in "C1" and add to "D1"
4. Simple question > Sample Dataset > Orders - save as "Q2" in root (not "C1") and add to "D1"
5. Create a user "U1" that only has permissions to "C1"
6. Login as "U1" and go to "D1" in "C1" - the entire dashboard fails with `e is undefined` (Firefox) or `Cannot read property 'type' of undefined` (Chromium)

7. Reverting back to 0.35.4 shows a nice little error for that card `Sorry, you don't have permission to see this card.`

**Information about your Metabase Installation:**
Metabase 0.36.0
|
1.0
|
Dashboards with questions, that user doesn't haven't permissions to, breaks completely - **Describe the bug**
If a dashboard contains a single question, which a user does not have permissions to, then the entire dashboard breaks for that user.
**To Reproduce**
1. Create a collection "C1" under root
2. Create dashboard "D1" in "C1"
3. Simple question > Sample Dataset > Orders - save as "Q1" in "C1" and add to "D1"
4. Simple question > Sample Dataset > Orders - save as "Q2" in root (not "C1") and add to "D1"
5. Create a user "U1" that only has permissions to "C1"
6. Login as "U1" and go to "D1" in "C1" - the entire dashboard fails with `e is undefined` (Firefox) or `Cannot read property 'type' of undefined` (Chromium)

7. Reverting back to 0.35.4 shows a nice little error for that card `Sorry, you don't have permission to see this card.`

**Information about your Metabase Installation:**
Metabase 0.36.0
|
non_process
|
dashboards with questions that user doesn t haven t permissions to breaks completely describe the bug if a dashboard contains a single question which a user does not have permissions to then the entire dashboard breaks for that user to reproduce create a collection under root create dashboard in simple question sample dataset orders save as in and add to simple question sample dataset orders save as in root not and add to create a user that only has permissions to login as and go to in the entire dashboard fails with e is undefined firefox or cannot read property type of undefined chromium reverting back to shows a nice little error for that card sorry you don t have permission to see this card information about your metabase installation metabase
| 0
|
765,969
| 26,867,032,122
|
IssuesEvent
|
2023-02-04 01:59:45
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Entity Components Search Not Working
|
Priority-High (Needed for work) Bug
|
**Describe the bug**
In an Entity record, the See Components in Results is returning inappropriate records.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to for example https://arctos.database.museum/guid/Arctos:Entity:235
2. Note that this is an Elephant at the Albuquerque Biopark, with multiple blood and serum samples over time.
3. Click on See Components in Results
4. See many more, non-elephant results
**Expected behavior**
I should only see the component records for Alice the Elephant, Albuquerque Biopark Local ID 10113, not additional records from Comoros Islands snakes etc.
**Screenshots**
**Data**
Component search results:
[ArctosDatarMAuOu6i9f.zip](https://github.com/ArctosDB/arctos/files/10574613/ArctosDatarMAuOu6i9f.zip)
**Desktop (please complete the following information):**
- OS: Microsoft
- Browser Chrome
- Version [e.g. 22]
**Additional context**
It would be helpful to have the total number of component parts show up on the Entity Page above components, so that when you do a search on components you can make sure the numbers in the results are the same as what is on the component page.
Priority**
High. We are increasingly having to manage entities for endangered species recovery programs and zoos. This problem has caused a delay in loan processing.
|
1.0
|
Entity Components Search Not Working - **Describe the bug**
In an Entity record, the See Components in Results is returning inappropriate records.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to for example https://arctos.database.museum/guid/Arctos:Entity:235
2. Note that this is an Elephant at the Albuquerque Biopark, with multiple blood and serum samples over time.
3. Click on See Components in Results
4. See many more, non-elephant results
**Expected behavior**
I should only see the component records for Alice the Elephant, Albuquerque Biopark Local ID 10113, not additional records from Comoros Islands snakes etc.
**Screenshots**
**Data**
Component search results:
[ArctosDatarMAuOu6i9f.zip](https://github.com/ArctosDB/arctos/files/10574613/ArctosDatarMAuOu6i9f.zip)
**Desktop (please complete the following information):**
- OS: Microsoft
- Browser Chrome
- Version [e.g. 22]
**Additional context**
It would be helpful to have the total number of component parts show up on the Entity Page above components, so that when you do a search on components you can make sure the numbers in the results are the same as what is on the component page.
Priority**
High. We are increasingly having to manage entities for endangered species recovery programs and zoos. This problem has caused a delay in loan processing.
|
non_process
|
entity components search not working describe the bug in an entity record the see components in results is returning inappropriate records to reproduce steps to reproduce the behavior go to for example note that this is an elephant at the albuquerque biopark with multiple blood and serum samples over time click on see components in results see many more non elephant results expected behavior i should only see the component records for alice the elephant albuquerque biopark local id not additional records from comoros islands snakes etc screenshots data component search results desktop please complete the following information os microsoft browser chrome version additional context it would be helpful to have the total number of component parts show up on the entity page above components so that when you do a search on components you can make sure the numbers in the results are the same as what is on the component page priority high we are increasingly having to manage entities for endangered species recovery programs and zoos this problem has caused a delay in loan processing
| 0
|
75,613
| 9,878,826,021
|
IssuesEvent
|
2019-06-24 08:35:58
|
ktbs/ktbs
|
https://api.github.com/repos/ktbs/ktbs
|
closed
|
ktbs:hasSubject is required
|
documentation
|
In the [kTBS documentation](http://kernel-for-trace-based-systems.readthedocs.org/en/v0.4/tutorials/rest-turtle.html#add-obsels-to-trace) the first POST trace example (on a kTBS 0.4 installed via PIP) lead to:
```
403 Forbidden
403 Forbidden - Invalid data
check_new_graph: ko
* Property <http://liris.cnrs.fr/silex/2009/ktbs#hasSubject> of <http://localhost:8001/base1/t01/obs43> should have at least 1 objects; it only has 0
```
It's like `<t01/> :hasDefaultSubject "me" .` doesn't behave as we would expect.
Maybe this should be explained in the documentation ?
Thanks
|
1.0
|
ktbs:hasSubject is required - In the [kTBS documentation](http://kernel-for-trace-based-systems.readthedocs.org/en/v0.4/tutorials/rest-turtle.html#add-obsels-to-trace) the first POST trace example (on a kTBS 0.4 installed via PIP) lead to:
```
403 Forbidden
403 Forbidden - Invalid data
check_new_graph: ko
* Property <http://liris.cnrs.fr/silex/2009/ktbs#hasSubject> of <http://localhost:8001/base1/t01/obs43> should have at least 1 objects; it only has 0
```
It's like `<t01/> :hasDefaultSubject "me" .` doesn't behave as we would expect.
Maybe this should be explained in the documentation ?
Thanks
|
non_process
|
ktbs hassubject is required in the the first post trace example on a ktbs installed via pip lead to forbidden forbidden invalid data check new graph ko property of should have at least objects it only has it s like hasdefaultsubject me doesn t behave as we would expect maybe this should be explained in the documentation thanks
| 0
|
3,598
| 2,683,745,976
|
IssuesEvent
|
2015-03-28 08:34:03
|
Jasig/cas
|
https://api.github.com/repos/Jasig/cas
|
closed
|
[CAS-721] Functional Testing
|
Compatibility Testing Future Major Task
|
Create functional tests to ensure CAS features. These tests can be Selenium tests, etc.
Reported by: Scott Battaglia, id: battags
Created: Fri, 17 Oct 2008 11:56:54 -0700
Updated: Fri, 17 Oct 2008 11:56:54 -0700
JIRA: https://issues.jasig.org/browse/CAS-721
|
1.0
|
[CAS-721] Functional Testing - Create functional tests to ensure CAS features. These tests can be Selenium tests, etc.
Reported by: Scott Battaglia, id: battags
Created: Fri, 17 Oct 2008 11:56:54 -0700
Updated: Fri, 17 Oct 2008 11:56:54 -0700
JIRA: https://issues.jasig.org/browse/CAS-721
|
non_process
|
functional testing create functional tests to ensure cas features these tests can be selenium tests etc reported by scott battaglia id battags created fri oct updated fri oct jira
| 0
|
212,677
| 16,492,866,491
|
IssuesEvent
|
2021-05-25 07:04:42
|
bounswe/2021SpringGroup5
|
https://api.github.com/repos/bounswe/2021SpringGroup5
|
closed
|
Arranging RAM section for Milestone 1
|
documentation
|
I will prepare a draft for RAM (Responsibility Assignment Matrix) and list the responsibilities and group them. Then, the group members will fill their columns with the corresponding letters. If there exists a responsibility I missed, group members may add it. When finished, I will organize it and prepare for the Milestone Report 1.
|
1.0
|
Arranging RAM section for Milestone 1 - I will prepare a draft for RAM (Responsibility Assignment Matrix) and list the responsibilities and group them. Then, the group members will fill their columns with the corresponding letters. If there exists a responsibility I missed, group members may add it. When finished, I will organize it and prepare for the Milestone Report 1.
|
non_process
|
arranging ram section for milestone i will prepare a draft for ram responsibility assignment matrix and list the responsibilities and group them then the group members will fill their columns with the corresponding letters if there exists a responsibility i missed group members may add it when finished i will organize it and prepare for the milestone report
| 0
|
325,601
| 24,055,017,965
|
IssuesEvent
|
2022-09-16 15:59:40
|
lightdash/lightdash
|
https://api.github.com/repos/lightdash/lightdash
|
closed
|
Upgrade dbt to 1.2.0
|
📖 documentation ⚙️ backend
|
Unable to run lightdash on top of jaffle_shop_metrics:
https://github.com/dbt-labs/jaffle_shop_metrics/
I tried to downgrade the project's version but some of the features used there are unsupported by previous DBT versions.
I think lightdash on top of dbt-metrics can be a huge win for lightdash.
|
1.0
|
Upgrade dbt to 1.2.0 - Unable to run lightdash on top of jaffle_shop_metrics:
https://github.com/dbt-labs/jaffle_shop_metrics/
I tried to downgrade the project's version but some of the features used there are unsupported by previous DBT versions.
I think lightdash on top of dbt-metrics can be a huge win for lightdash.
|
non_process
|
upgrade dbt to unable to run lightdash on top of jaffle shop metrics i tried to downgrade the project s version but some of the features used there are unsupported by previous dbt versions i think lightdash on top of dbt metrics can be a huge win for lightdash
| 0
|
333,103
| 29,508,132,841
|
IssuesEvent
|
2023-06-03 15:19:17
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
closed
|
Fix jax_numpy_math.test_jax_numpy_around
|
JAX Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
1.0
|
Fix jax_numpy_math.test_jax_numpy_around - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5163696541/jobs/9302175979" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|
non_process
|
fix jax numpy math test jax numpy around tensorflow img src torch img src numpy img src jax img src paddle img src
| 0
|
22,360
| 31,075,054,684
|
IssuesEvent
|
2023-08-12 11:31:58
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
meadow-connection-mssql 1.0.7 has 60 guarddog issues
|
npm-install-script shady-links npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/@isaacs/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/@npmcli/git/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/retold-harness/node_modules/@sindresorhus/is/package.json:22","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/@szmarczak/http-timer/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/retold-harness/node_modules/acorn/package.json:31","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/retold-harness/node_modules/acorn-node/node_modules/acorn/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:walk\"","location":"package/retold-harness/node_modules/acorn-walk/package.json:31","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/cacache/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/cacheable-request/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/ci-info/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/defer-to-connect/package.json:14","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"install\": \"node-gyp rebuild || node suppress-error.js\",","location":"package/retold-harness/node_modules/dtrace-provider/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"postinstall\": \" node -e \\\"try{require('./_postinstall')}catch(e){}\\\" || exit 0\",","location":"package/retold-harness/node_modules/es5-ext/package.json:115","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/foreground-child/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run cleanup \u0026\u0026 npm run build\",","location":"package/retold-harness/node_modules/form-data-encoder/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"install\": \"node install.js\"","location":"package/retold-harness/node_modules/fsevents/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash fixup.sh\",","location":"package/retold-harness/node_modules/glob/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/retold-harness/node_modules/got/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/hosted-git-info/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c\"","location":"package/retold-harness/node_modules/is-plain-object/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"webpack --config lib/html-spa/webpack.config.js --mode production\",","location":"package/retold-harness/node_modules/istanbul-reports/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/jackspeak/package.json:28","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"yarn build\",","location":"package/retold-harness/node_modules/keyv/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c rollup.conf.js\",","location":"package/retold-harness/node_modules/loupe/package.json:30","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/lru-cache/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/make-fetch-happen/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js \u0026\u0026 runmd --output README.md src/README_js.md\",","location":"package/retold-harness/node_modules/mime/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/minimatch/package.json:33","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/minipass/package.json:35","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/mocha/node_modules/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/retold-harness/node_modules/mocha/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/mocha/node_modules/y18n/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/retold-harness/node_modules/mocha/node_modules/yargs/package.json:86","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/named-placeholders/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/normalize-package-data/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/npm-check-updates/build/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/npm-check-updates/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/npm-package-arg/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/retold-harness/node_modules/nyc/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/retold-harness/node_modules/nyc/node_modules/yargs/package.json:65","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/path-scurry/node_modules/lru-cache/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/path-scurry/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/pino-abstract-transport/package.json:7","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"git config --local core.hooksPath .githooks\"","location":"package/retold-harness/node_modules/rc-config-loader/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/retold-harness/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -b .\",","location":"package/retold-harness/node_modules/resolve-package-path/package.json:35","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/restify/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd $( git rev-parse --show-toplevel ) \u0026\u0026 husky install\",","location":"package/retold-harness/node_modules/restify/node_modules/uuid/package.json:91","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/rimraf/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js\",","location":"package/retold-harness/node_modules/send/node_modules/mime/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/signal-exit/package.json:74","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/sonic-boom/package.json:12","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main\"","location":"package/retold-harness/node_modules/terser/node_modules/acorn/package.json:45","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/terser/package.json:72","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/thread-stream/package.json:30","message":"The package.json has a script automatically running when the package is installed"},{"code":"{\"name\":\"type-detect\",\"description\":\"Improved typeof detection for node.js and the browser.\",\"keywords\":[\"type\",\"typeof\",\"types\"],\"license\":\"MIT\",\"author\":\"Jake Luer \u003cjake@alogicalparadox.com\u003e (http://alogicalparadox.com)\",\"contributors\":[\"...\":\"4.0.8\"}","location":"package/retold-harness/node_modules/type-detect/package.json:1","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/yargs-parser/package.json:30","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/retold-harness/node_modules/update-notifier/update-notifier.js:111","message":"This package is silently executing another executable"}],"shady-links":[{"code":" 'http://➡.ws/➡': {","location":"package/retold-harness/node_modules/url/test/index.js:568","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" href: 'http://xn--hgi.ws/➡',","location":"package/retold-harness/node_modules/url/test/index.js:569","message":"This package contains an URL to a domain with a suspicious extension"}]}```
|
1.0
|
meadow-connection-mssql 1.0.7 has 60 guarddog issues - ```{"npm-install-script":[{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/@isaacs/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/@npmcli/git/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/retold-harness/node_modules/@sindresorhus/is/package.json:22","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/@szmarczak/http-timer/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/retold-harness/node_modules/acorn/package.json:31","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main \u0026\u0026 npm run build:bin\"","location":"package/retold-harness/node_modules/acorn-node/node_modules/acorn/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:walk\"","location":"package/retold-harness/node_modules/acorn-walk/package.json:31","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/cacache/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/cacheable-request/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/ci-info/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/defer-to-connect/package.json:14","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"install\": \"node-gyp rebuild || node suppress-error.js\",","location":"package/retold-harness/node_modules/dtrace-provider/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"postinstall\": \" node -e \\\"try{require('./_postinstall')}catch(e){}\\\" || exit 0\",","location":"package/retold-harness/node_modules/es5-ext/package.json:115","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/foreground-child/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run cleanup \u0026\u0026 npm run build\",","location":"package/retold-harness/node_modules/form-data-encoder/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"install\": \"node install.js\"","location":"package/retold-harness/node_modules/fsevents/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash fixup.sh\",","location":"package/retold-harness/node_modules/glob/package.json:34","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"npm run build\"","location":"package/retold-harness/node_modules/got/package.json:18","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/hosted-git-info/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c\"","location":"package/retold-harness/node_modules/is-plain-object/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"webpack --config lib/html-spa/webpack.config.js --mode production\",","location":"package/retold-harness/node_modules/istanbul-reports/package.json:13","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/jackspeak/package.json:28","message":"The package.json has a script automatically running when the package is installed"},{"code":"\t\t\"prepare\": \"yarn build\",","location":"package/retold-harness/node_modules/keyv/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"rollup -c rollup.conf.js\",","location":"package/retold-harness/node_modules/loupe/package.json:30","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/lru-cache/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/make-fetch-happen/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js \u0026\u0026 runmd --output README.md src/README_js.md\",","location":"package/retold-harness/node_modules/mime/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/minimatch/package.json:33","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/minipass/package.json:35","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/mocha/node_modules/cliui/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/retold-harness/node_modules/mocha/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/mocha/node_modules/y18n/package.json:41","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/retold-harness/node_modules/mocha/node_modules/yargs/package.json:86","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/named-placeholders/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/normalize-package-data/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/npm-check-updates/build/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/npm-check-updates/package.json:36","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/npm-package-arg/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc\",","location":"package/retold-harness/node_modules/nyc/node_modules/get-caller-file/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\",","location":"package/retold-harness/node_modules/nyc/node_modules/yargs/package.json:65","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/path-scurry/node_modules/lru-cache/package.json:15","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/path-scurry/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\",","location":"package/retold-harness/node_modules/pino-abstract-transport/package.json:7","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"git config --local core.hooksPath .githooks\"","location":"package/retold-harness/node_modules/rc-config-loader/package.json:40","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"postinstall\": \"lerna bootstrap\",","location":"package/retold-harness/node_modules/resolve/test/resolver/multirepo/package.json:8","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -b .\",","location":"package/retold-harness/node_modules/resolve-package-path/package.json:35","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node ./scripts/transpile-to-esm.js\",","location":"package/retold-harness/node_modules/restify/node_modules/lru-cache/package.json:16","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd $( git rev-parse --show-toplevel ) \u0026\u0026 husky install\",","location":"package/retold-harness/node_modules/restify/node_modules/uuid/package.json:91","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json\",","location":"package/retold-harness/node_modules/rimraf/package.json:32","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"node src/build.js\",","location":"package/retold-harness/node_modules/send/node_modules/mime/package.json:29","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"tsc -p tsconfig.json \u0026\u0026 tsc -p tsconfig-esm.json \u0026\u0026 bash ./scripts/fixup.sh\",","location":"package/retold-harness/node_modules/signal-exit/package.json:74","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/sonic-boom/package.json:12","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"cd ..; npm run build:main\"","location":"package/retold-harness/node_modules/terser/node_modules/acorn/package.json:45","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run build\",","location":"package/retold-harness/node_modules/terser/package.json:72","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"husky install\"","location":"package/retold-harness/node_modules/thread-stream/package.json:30","message":"The package.json has a script automatically running when the package is installed"},{"code":"{\"name\":\"type-detect\",\"description\":\"Improved typeof detection for node.js and the browser.\",\"keywords\":[\"type\",\"typeof\",\"types\"],\"license\":\"MIT\",\"author\":\"Jake Luer \u003cjake@alogicalparadox.com\u003e (http://alogicalparadox.com)\",\"contributors\":[\"...\":\"4.0.8\"}","location":"package/retold-harness/node_modules/type-detect/package.json:1","message":"The package.json has a script automatically running when the package is installed"},{"code":" \"prepare\": \"npm run compile\"","location":"package/retold-harness/node_modules/yargs-parser/package.json:30","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"\t\tspawn(process.execPath, [path.join(__dirname, 'check.js'), JSON.stringify(this.#options)], {\n\t\t\tdetached: true,\n\t\t\tstdio: 'ignore',\n\t\t}).unref();","location":"package/retold-harness/node_modules/update-notifier/update-notifier.js:111","message":"This package is silently executing another executable"}],"shady-links":[{"code":" 'http://➡.ws/➡': {","location":"package/retold-harness/node_modules/url/test/index.js:568","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" href: 'http://xn--hgi.ws/➡',","location":"package/retold-harness/node_modules/url/test/index.js:569","message":"This package contains an URL to a domain with a suspicious extension"}]}```
|
process
|
meadow connection mssql has guarddog issues npm install script license mit author jake luer alogicalparadox com npm silent process execution n t t tdetached true n t t tstdio ignore n t t unref location package retold harness node modules update notifier update notifier js message this package is silently executing another executable shady links
| 1
|
10,673
| 13,460,674,867
|
IssuesEvent
|
2020-09-09 13:54:38
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Add physical machines to environment?
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
Is it possible to define environment as set of physical machines
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Add physical machines to environment? - Is it possible to define environment as set of physical machines
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
add physical machines to environment is it possible to define environment as set of physical machines document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
2,220
| 5,070,761,910
|
IssuesEvent
|
2016-12-26 08:20:48
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
mirrored and striped configurations for gvinum
|
component:data processing enhancement priority: high
|
work left from #167
* mirrored gvinum configuration
* striped gvinum configuration
|
1.0
|
mirrored and striped configurations for gvinum - work left from #167
* mirrored gvinum configuration
* striped gvinum configuration
|
process
|
mirrored and striped configurations for gvinum work left from mirrored gvinum configuration striped gvinum configuration
| 1
|
1,807
| 4,541,933,728
|
IssuesEvent
|
2016-09-09 19:28:18
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
ValueError: min() arg is an empty sequence
|
bug priority: high sct_process_segmentation
|
batch_processing:
~~~
sct_process_segmentation -i t2_seg.nii.gz -p csa -vert 3:4
--
Spinal Cord Toolbox (version dev-26e7a1e783bed24b6cd5cee29244a28bd60e7508)
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py -i t2_seg.nii.gz -p csa -vert 3:4
Check parameters:
.. segmentation file: t2_seg.nii.gz
Create temporary folder...
mkdir tmp.160909152025_975041/
Copying input data to tmp folder and convert to nii...
sct_convert -i /Users/julien/sct_example_data/t2/t2_seg.nii.gz -o tmp.160909152025_975041/segmentation.nii.gz
Change orientation to RPI...
sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz
Open segmentation volume...
Get data dimensions...
52 x 384 x 185
Smooth centerline/segmentation...
.. Get center of mass of the centerline/segmentation...
.. Computing physical coordinates of centerline/segmentation...
.. Smoothing algo = nurbs
/Users/julien/code/spinalcordtoolbox/scripts/msct_smooth.py:279: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if z == None:
Fitting centerline using B-spline approximation...
Test: # of control points = 5
Error on approximation = 2.86 mm
Test: # of control points = 6
WARNING: NURBS instability -> wrong reconstruction
Test: # of control points = 7
WARNING: NURBS instability -> wrong reconstruction
Test: # of control points = 8
Error on approximation = 0.27 mm
Test: # of control points = 9
Error on approximation = 0.18 mm
Test: # of control points = 10
Error on approximation = 0.16 mm
Test: # of control points = 11
Error on approximation = 0.15 mm
Test: # of control points = 12
Error on approximation = 0.15 mm
Test: # of control points = 13
Error on approximation = 0.15 mm
Test: # of control points = 14
Error on approximation = 0.14 mm
Number of control points of the optimal NURBS = 14
Compute CSA...
Smooth CSA across slices...
.. No smoothing!
Compute CSA per slice...
z=0: 32.3208355417 mm^2
z=1: 30.4554836395 mm^2
z=2: 28.5876828942 mm^2
z=3: 30.528470602 mm^2
z=4: 33.4249324449 mm^2
z=5: 34.4124012632 mm^2
z=6: 31.5709234328 mm^2
z=7: 31.5927903905 mm^2
z=8: 33.5245051389 mm^2
z=9: 34.4930441764 mm^2
z=10: 34.496526981 mm^2
z=11: 34.4920323297 mm^2
z=12: 34.4783183103 mm^2
z=13: 36.3690077724 mm^2
z=14: 38.2457384208 mm^2
z=15: 35.3300026646 mm^2
z=16: 34.3196292838 mm^2
z=17: 37.1185739919 mm^2
z=18: 38.0043577613 mm^2
z=19: 36.0382340738 mm^2
z=20: 34.0799888495 mm^2
z=21: 35.9074677613 mm^2
z=22: 33.9523376525 mm^2
z=23: 36.7101376979 mm^2
z=24: 34.7589322835 mm^2
z=25: 35.6271758429 mm^2
z=26: 35.5553459806 mm^2
z=27: 37.3504402382 mm^2
z=28: 37.2712036818 mm^2
z=29: 37.1913516845 mm^2
z=30: 38.0386976911 mm^2
z=31: 37.9531265823 mm^2
z=32: 38.7904871716 mm^2
z=33: 36.8585371255 mm^2
z=34: 35.8537168531 mm^2
z=35: 36.6867755164 mm^2
z=36: 38.4269424855 mm^2
z=37: 36.5069026931 mm^2
z=38: 36.4174315007 mm^2
z=39: 37.2346938312 mm^2
z=40: 38.0489396169 mm^2
z=41: 37.9568542793 mm^2
z=42: 40.5716683423 mm^2
z=43: 37.7769449461 mm^2
z=44: 43.074016981 mm^2
z=45: 39.396429503 mm^2
z=46: 42.8831189791 mm^2
z=47: 38.3370331202 mm^2
z=48: 40.0413480231 mm^2
z=49: 42.6289464717 mm^2
z=50: 42.5518464612 mm^2
z=51: 39.8245957813 mm^2
z=52: 40.6449970033 mm^2
z=53: 40.5850994073 mm^2
z=54: 43.1716131716 mm^2
z=55: 39.5949505519 mm^2
z=56: 42.1810711985 mm^2
z=57: 38.6177887406 mm^2
z=58: 41.2004377873 mm^2
z=59: 40.2757287256 mm^2
z=60: 41.976778424 mm^2
z=61: 43.6752801244 mm^2
z=62: 42.7551833176 mm^2
z=63: 45.3252256439 mm^2
z=64: 46.1502635882 mm^2
z=65: 43.4948394622 mm^2
z=66: 47.7990925881 mm^2
z=67: 46.8889247398 mm^2
z=68: 47.7165870707 mm^2
z=69: 49.4120095272 mm^2
z=70: 45.9106888813 mm^2
z=71: 49.3407233404 mm^2
z=72: 46.7136523715 mm^2
z=73: 46.6879243328 mm^2
z=74: 49.2594432934 mm^2
z=75: 45.7869624671 mm^2
z=76: 48.3665728228 mm^2
z=77: 46.6322654593 mm^2
z=78: 47.4936913046 mm^2
z=79: 48.3599651858 mm^2
z=80: 45.7765859703 mm^2
z=81: 48.3803462024 mm^2
z=82: 47.5336606 mm^2
z=83: 51.0142089315 mm^2
z=84: 50.1777037789 mm^2
z=85: 51.9420702971 mm^2
z=86: 52.8464040396 mm^2
z=87: 50.2897330701 mm^2
z=88: 51.2045280904 mm^2
z=89: 50.3927709064 mm^2
z=90: 50.4718480317 mm^2
z=91: 47.9573903254 mm^2
z=92: 49.8194980781 mm^2
z=93: 48.2051603175 mm^2
z=94: 50.1165713279 mm^2
z=95: 52.0591476016 mm^2
z=96: 50.4909911832 mm^2
z=97: 51.5950459691 mm^2
z=98: 52.7242495453 mm^2
z=99: 55.6743604856 mm^2
z=100: 55.049458043 mm^2
z=101: 57.1507533043 mm^2
z=102: 58.3717119502 mm^2
z=103: 56.8626028701 mm^2
z=104: 59.031190496 mm^2
z=105: 56.5898166756 mm^2
z=106: 56.9189808911 mm^2
z=107: 60.0438769059 mm^2
z=108: 56.579997601 mm^2
z=109: 57.8005552367 mm^2
z=110: 59.0144603053 mm^2
z=111: 61.1704958664 mm^2
z=112: 60.4491055184 mm^2
z=113: 63.5655606626 mm^2
z=114: 63.7831945873 mm^2
z=115: 63.0183161974 mm^2
z=116: 63.2075867152 mm^2
z=117: 66.3053453547 mm^2
z=118: 64.5240931842 mm^2
z=119: 64.6818384129 mm^2
z=120: 65.8069057992 mm^2
z=121: 61.020864394 mm^2
z=122: 62.1261460317 mm^2
z=123: 65.2014581071 mm^2
z=124: 67.2846998999 mm^2
z=125: 67.38248907 mm^2
z=126: 68.4615285715 mm^2
z=127: 69.5321857437 mm^2
z=128: 69.5987127869 mm^2
z=129: 70.6516081114 mm^2
z=130: 72.6953062261 mm^2
z=131: 73.7374071648 mm^2
z=132: 72.7785668414 mm^2
z=133: 72.8103054819 mm^2
z=134: 73.8346895024 mm^2
z=135: 73.8565012316 mm^2
z=136: 72.8763021311 mm^2
z=137: 71.8925102162 mm^2
z=138: 71.9043725723 mm^2
z=139: 74.9104014967 mm^2
z=140: 74.9182181746 mm^2
z=141: 76.9224403375 mm^2
z=142: 79.9246641652 mm^2
z=143: 79.9285855885 mm^2
z=144: 79.9316083538 mm^2
z=145: 77.935334551 mm^2
z=146: 77.9370583069 mm^2
z=147: 77.9395585835 mm^2
z=148: 78.9419219028 mm^2
z=149: 77.9463038276 mm^2
z=150: 77.9503223536 mm^2
z=151: 78.9538117507 mm^2
z=152: 79.9574086576 mm^2
z=153: 78.9618734479 mm^2
z=154: 76.9662711969 mm^2
z=155: 75.9693975025 mm^2
z=156: 76.9709373818 mm^2
z=157: 76.9718813938 mm^2
z=158: 76.9716429491 mm^2
z=159: 74.9708517703 mm^2
z=160: 74.9678294041 mm^2
z=161: 73.9636026405 mm^2
z=162: 74.9564926943 mm^2
z=163: 74.9477711238 mm^2
z=164: 75.9358986086 mm^2
z=165: 73.9247926994 mm^2
z=166: 72.9107460577 mm^2
z=167: 74.8901569841 mm^2
z=168: 76.8665860765 mm^2
z=169: 76.8430875249 mm^2
z=170: 77.8151549072 mm^2
z=171: 77.7983832904 mm^2
z=172: 77.7983859588 mm^2
z=173: 80.8044442156 mm^2
z=174: 78.8296521829 mm^2
z=175: 79.8484228958 mm^2
z=176: 76.8699600478 mm^2
z=177: 79.8739981747 mm^2
z=178: 81.8717905878 mm^2
z=179: 81.8646875804 mm^2
z=180: 78.8546612957 mm^2
z=181: 79.832638815 mm^2
z=182: 80.8058779783 mm^2
z=183: 81.7734234725 mm^2
z=184: 80.7547362906 mm^2
Save results in: csa_per_slice.txt
Create volume of CSA values...
Create volume of angle values...
sct_image -i csa_volume_RPI.nii.gz -setorient AIL -o csa_volume_in_initial_orientation.nii.gz
sct_image -i angle_volume_RPI.nii.gz -setorient AIL -o angle_volume_in_initial_orientation.nii.gz
Generate output files...
WARNING: File csa_volume.nii.gz already exists. Deleting it...
File created: csa_volume.nii.gz
WARNING: File angle_volume.nii.gz already exists. Deleting it...
File created: angle_volume.nii.gz
Selected vertebral levels... 3:4
OK: ./label/template/PAM50_levels.nii.gz
Find slices corresponding to vertebral levels based on the centerline...
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:915: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
if vertebral_labeling_data[np.round(x_centerline_fit[i_z]), np.round(y_centerline_fit[i_z]), z_centerline[i_z]] in range(vert_levels_list[0], vert_levels_list[1]+1):
Traceback (most recent call last):
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 1035, in <module>
main(sys.argv[1:])
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 262, in main
compute_csa(fname_segmentation, output_prefix, param_default.suffix_csa_output_files, output_type, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length, angle_correction=angle_correction)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 647, in compute_csa
slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, x_centerline_fit, y_centerline_fit, z_centerline)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 919, in get_slices_matching_with_vertebral_levels_based_centerline
slices = str(min(matching_slices_centerline_vert_labeling))+':'+str(max(matching_slices_centerline_vert_labeling))
ValueError: min() arg is an empty sequence
~~~
|
1.0
|
ValueError: min() arg is an empty sequence - batch_processing:
~~~
sct_process_segmentation -i t2_seg.nii.gz -p csa -vert 3:4
--
Spinal Cord Toolbox (version dev-26e7a1e783bed24b6cd5cee29244a28bd60e7508)
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py -i t2_seg.nii.gz -p csa -vert 3:4
Check parameters:
.. segmentation file: t2_seg.nii.gz
Create temporary folder...
mkdir tmp.160909152025_975041/
Copying input data to tmp folder and convert to nii...
sct_convert -i /Users/julien/sct_example_data/t2/t2_seg.nii.gz -o tmp.160909152025_975041/segmentation.nii.gz
Change orientation to RPI...
sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz
Open segmentation volume...
Get data dimensions...
52 x 384 x 185
Smooth centerline/segmentation...
.. Get center of mass of the centerline/segmentation...
.. Computing physical coordinates of centerline/segmentation...
.. Smoothing algo = nurbs
/Users/julien/code/spinalcordtoolbox/scripts/msct_smooth.py:279: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
if z == None:
Fitting centerline using B-spline approximation...
Test: # of control points = 5
Error on approximation = 2.86 mm
Test: # of control points = 6
WARNING: NURBS instability -> wrong reconstruction
Test: # of control points = 7
WARNING: NURBS instability -> wrong reconstruction
Test: # of control points = 8
Error on approximation = 0.27 mm
Test: # of control points = 9
Error on approximation = 0.18 mm
Test: # of control points = 10
Error on approximation = 0.16 mm
Test: # of control points = 11
Error on approximation = 0.15 mm
Test: # of control points = 12
Error on approximation = 0.15 mm
Test: # of control points = 13
Error on approximation = 0.15 mm
Test: # of control points = 14
Error on approximation = 0.14 mm
Number of control points of the optimal NURBS = 14
Compute CSA...
Smooth CSA across slices...
.. No smoothing!
Compute CSA per slice...
z=0: 32.3208355417 mm^2
z=1: 30.4554836395 mm^2
z=2: 28.5876828942 mm^2
z=3: 30.528470602 mm^2
z=4: 33.4249324449 mm^2
z=5: 34.4124012632 mm^2
z=6: 31.5709234328 mm^2
z=7: 31.5927903905 mm^2
z=8: 33.5245051389 mm^2
z=9: 34.4930441764 mm^2
z=10: 34.496526981 mm^2
z=11: 34.4920323297 mm^2
z=12: 34.4783183103 mm^2
z=13: 36.3690077724 mm^2
z=14: 38.2457384208 mm^2
z=15: 35.3300026646 mm^2
z=16: 34.3196292838 mm^2
z=17: 37.1185739919 mm^2
z=18: 38.0043577613 mm^2
z=19: 36.0382340738 mm^2
z=20: 34.0799888495 mm^2
z=21: 35.9074677613 mm^2
z=22: 33.9523376525 mm^2
z=23: 36.7101376979 mm^2
z=24: 34.7589322835 mm^2
z=25: 35.6271758429 mm^2
z=26: 35.5553459806 mm^2
z=27: 37.3504402382 mm^2
z=28: 37.2712036818 mm^2
z=29: 37.1913516845 mm^2
z=30: 38.0386976911 mm^2
z=31: 37.9531265823 mm^2
z=32: 38.7904871716 mm^2
z=33: 36.8585371255 mm^2
z=34: 35.8537168531 mm^2
z=35: 36.6867755164 mm^2
z=36: 38.4269424855 mm^2
z=37: 36.5069026931 mm^2
z=38: 36.4174315007 mm^2
z=39: 37.2346938312 mm^2
z=40: 38.0489396169 mm^2
z=41: 37.9568542793 mm^2
z=42: 40.5716683423 mm^2
z=43: 37.7769449461 mm^2
z=44: 43.074016981 mm^2
z=45: 39.396429503 mm^2
z=46: 42.8831189791 mm^2
z=47: 38.3370331202 mm^2
z=48: 40.0413480231 mm^2
z=49: 42.6289464717 mm^2
z=50: 42.5518464612 mm^2
z=51: 39.8245957813 mm^2
z=52: 40.6449970033 mm^2
z=53: 40.5850994073 mm^2
z=54: 43.1716131716 mm^2
z=55: 39.5949505519 mm^2
z=56: 42.1810711985 mm^2
z=57: 38.6177887406 mm^2
z=58: 41.2004377873 mm^2
z=59: 40.2757287256 mm^2
z=60: 41.976778424 mm^2
z=61: 43.6752801244 mm^2
z=62: 42.7551833176 mm^2
z=63: 45.3252256439 mm^2
z=64: 46.1502635882 mm^2
z=65: 43.4948394622 mm^2
z=66: 47.7990925881 mm^2
z=67: 46.8889247398 mm^2
z=68: 47.7165870707 mm^2
z=69: 49.4120095272 mm^2
z=70: 45.9106888813 mm^2
z=71: 49.3407233404 mm^2
z=72: 46.7136523715 mm^2
z=73: 46.6879243328 mm^2
z=74: 49.2594432934 mm^2
z=75: 45.7869624671 mm^2
z=76: 48.3665728228 mm^2
z=77: 46.6322654593 mm^2
z=78: 47.4936913046 mm^2
z=79: 48.3599651858 mm^2
z=80: 45.7765859703 mm^2
z=81: 48.3803462024 mm^2
z=82: 47.5336606 mm^2
z=83: 51.0142089315 mm^2
z=84: 50.1777037789 mm^2
z=85: 51.9420702971 mm^2
z=86: 52.8464040396 mm^2
z=87: 50.2897330701 mm^2
z=88: 51.2045280904 mm^2
z=89: 50.3927709064 mm^2
z=90: 50.4718480317 mm^2
z=91: 47.9573903254 mm^2
z=92: 49.8194980781 mm^2
z=93: 48.2051603175 mm^2
z=94: 50.1165713279 mm^2
z=95: 52.0591476016 mm^2
z=96: 50.4909911832 mm^2
z=97: 51.5950459691 mm^2
z=98: 52.7242495453 mm^2
z=99: 55.6743604856 mm^2
z=100: 55.049458043 mm^2
z=101: 57.1507533043 mm^2
z=102: 58.3717119502 mm^2
z=103: 56.8626028701 mm^2
z=104: 59.031190496 mm^2
z=105: 56.5898166756 mm^2
z=106: 56.9189808911 mm^2
z=107: 60.0438769059 mm^2
z=108: 56.579997601 mm^2
z=109: 57.8005552367 mm^2
z=110: 59.0144603053 mm^2
z=111: 61.1704958664 mm^2
z=112: 60.4491055184 mm^2
z=113: 63.5655606626 mm^2
z=114: 63.7831945873 mm^2
z=115: 63.0183161974 mm^2
z=116: 63.2075867152 mm^2
z=117: 66.3053453547 mm^2
z=118: 64.5240931842 mm^2
z=119: 64.6818384129 mm^2
z=120: 65.8069057992 mm^2
z=121: 61.020864394 mm^2
z=122: 62.1261460317 mm^2
z=123: 65.2014581071 mm^2
z=124: 67.2846998999 mm^2
z=125: 67.38248907 mm^2
z=126: 68.4615285715 mm^2
z=127: 69.5321857437 mm^2
z=128: 69.5987127869 mm^2
z=129: 70.6516081114 mm^2
z=130: 72.6953062261 mm^2
z=131: 73.7374071648 mm^2
z=132: 72.7785668414 mm^2
z=133: 72.8103054819 mm^2
z=134: 73.8346895024 mm^2
z=135: 73.8565012316 mm^2
z=136: 72.8763021311 mm^2
z=137: 71.8925102162 mm^2
z=138: 71.9043725723 mm^2
z=139: 74.9104014967 mm^2
z=140: 74.9182181746 mm^2
z=141: 76.9224403375 mm^2
z=142: 79.9246641652 mm^2
z=143: 79.9285855885 mm^2
z=144: 79.9316083538 mm^2
z=145: 77.935334551 mm^2
z=146: 77.9370583069 mm^2
z=147: 77.9395585835 mm^2
z=148: 78.9419219028 mm^2
z=149: 77.9463038276 mm^2
z=150: 77.9503223536 mm^2
z=151: 78.9538117507 mm^2
z=152: 79.9574086576 mm^2
z=153: 78.9618734479 mm^2
z=154: 76.9662711969 mm^2
z=155: 75.9693975025 mm^2
z=156: 76.9709373818 mm^2
z=157: 76.9718813938 mm^2
z=158: 76.9716429491 mm^2
z=159: 74.9708517703 mm^2
z=160: 74.9678294041 mm^2
z=161: 73.9636026405 mm^2
z=162: 74.9564926943 mm^2
z=163: 74.9477711238 mm^2
z=164: 75.9358986086 mm^2
z=165: 73.9247926994 mm^2
z=166: 72.9107460577 mm^2
z=167: 74.8901569841 mm^2
z=168: 76.8665860765 mm^2
z=169: 76.8430875249 mm^2
z=170: 77.8151549072 mm^2
z=171: 77.7983832904 mm^2
z=172: 77.7983859588 mm^2
z=173: 80.8044442156 mm^2
z=174: 78.8296521829 mm^2
z=175: 79.8484228958 mm^2
z=176: 76.8699600478 mm^2
z=177: 79.8739981747 mm^2
z=178: 81.8717905878 mm^2
z=179: 81.8646875804 mm^2
z=180: 78.8546612957 mm^2
z=181: 79.832638815 mm^2
z=182: 80.8058779783 mm^2
z=183: 81.7734234725 mm^2
z=184: 80.7547362906 mm^2
Save results in: csa_per_slice.txt
Create volume of CSA values...
Create volume of angle values...
sct_image -i csa_volume_RPI.nii.gz -setorient AIL -o csa_volume_in_initial_orientation.nii.gz
sct_image -i angle_volume_RPI.nii.gz -setorient AIL -o angle_volume_in_initial_orientation.nii.gz
Generate output files...
WARNING: File csa_volume.nii.gz already exists. Deleting it...
File created: csa_volume.nii.gz
WARNING: File angle_volume.nii.gz already exists. Deleting it...
File created: angle_volume.nii.gz
Selected vertebral levels... 3:4
OK: ./label/template/PAM50_levels.nii.gz
Find slices corresponding to vertebral levels based on the centerline...
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:915: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
if vertebral_labeling_data[np.round(x_centerline_fit[i_z]), np.round(y_centerline_fit[i_z]), z_centerline[i_z]] in range(vert_levels_list[0], vert_levels_list[1]+1):
Traceback (most recent call last):
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 1035, in <module>
main(sys.argv[1:])
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 262, in main
compute_csa(fname_segmentation, output_prefix, param_default.suffix_csa_output_files, output_type, overwrite, verbose, remove_temp_files, step, smoothing_param, figure_fit, slices, vert_lev, fname_vertebral_labeling, algo_fitting = param.algo_fitting, type_window= param.type_window, window_length=param.window_length, angle_correction=angle_correction)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 647, in compute_csa
slices, vert_levels_list, warning = get_slices_matching_with_vertebral_levels_based_centerline(vert_levels, im_vertebral_labeling.data, x_centerline_fit, y_centerline_fit, z_centerline)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 919, in get_slices_matching_with_vertebral_levels_based_centerline
slices = str(min(matching_slices_centerline_vert_labeling))+':'+str(max(matching_slices_centerline_vert_labeling))
ValueError: min() arg is an empty sequence
~~~
|
process
|
valueerror min arg is an empty sequence batch processing sct process segmentation i seg nii gz p csa vert spinal cord toolbox version dev users julien code spinalcordtoolbox scripts sct process segmentation py i seg nii gz p csa vert check parameters segmentation file seg nii gz create temporary folder mkdir tmp copying input data to tmp folder and convert to nii sct convert i users julien sct example data seg nii gz o tmp segmentation nii gz change orientation to rpi sct image i segmentation nii gz setorient rpi o segmentation rpi nii gz open segmentation volume get data dimensions x x smooth centerline segmentation get center of mass of the centerline segmentation computing physical coordinates of centerline segmentation smoothing algo nurbs users julien code spinalcordtoolbox scripts msct smooth py futurewarning comparison to none will result in an elementwise object comparison in the future if z none fitting centerline using b spline approximation test of control points error on approximation mm test of control points warning nurbs instability wrong reconstruction test of control points warning nurbs instability wrong reconstruction test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm test of control points error on approximation mm number of control points of the optimal nurbs compute csa smooth csa across slices no smoothing compute csa per slice z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm z mm save results in csa per slice txt create volume of csa values create volume of angle values sct image i csa volume rpi nii gz setorient ail o csa volume in initial orientation nii gz sct image i angle volume rpi nii gz setorient ail o angle volume in initial orientation nii gz generate output files warning file csa volume nii gz already exists deleting it file created csa volume nii gz warning file angle volume nii gz already exists deleting it file created angle volume nii gz selected vertebral levels ok label template levels nii gz find slices corresponding to vertebral levels based on the centerline users julien code spinalcordtoolbox scripts sct process segmentation py visibledeprecationwarning using a non integer number instead of an integer will result in an error in the future if vertebral labeling data np round y centerline fit z centerline in range vert levels list vert levels list traceback most recent call last file users julien code spinalcordtoolbox scripts sct process segmentation py line in main sys argv file users julien code spinalcordtoolbox scripts sct process segmentation py line in main compute csa fname segmentation output prefix param default suffix csa output files output type overwrite verbose remove temp files step smoothing param figure fit slices vert lev fname vertebral labeling algo fitting param algo fitting type window param type window window length param window length angle correction angle correction file users julien code spinalcordtoolbox scripts sct process segmentation py line in compute csa slices vert levels list warning get slices matching with vertebral levels based centerline vert levels im vertebral labeling data x centerline fit y centerline fit z centerline file users julien code spinalcordtoolbox scripts sct process segmentation py line in get slices matching with vertebral levels based centerline slices str min matching slices centerline vert labeling str max matching slices centerline vert labeling valueerror min arg is an empty sequence
| 1
|
81,370
| 23,449,111,774
|
IssuesEvent
|
2022-08-15 23:26:30
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[wasm] Perftracing build broken due to trimming errors
|
arch-wasm blocking-release blocking-clean-ci area-Build-mono in-pr
|
[Build](https://dev.azure.com/dnceng/public/_build/results?buildId=1943226&view=logs&jobId=7f41eb05-1c5f-5922-38da-d308f495fb31&j=92c81310-cb26-5923-f96e-b7c73963fa3d&t=11318e79-3fe2-58cb-1121-19fe8295c832):
```
/__w/1/s/artifacts/bin/microsoft.netcore.app.runtime.browser-wasm/Release/runtimes/browser-wasm/native/System.Private.CoreLib.dll : error IL2104: Assembly 'System.Private.CoreLib' produced trim warnings. For more information see https://aka.ms/dotnet-illink/libraries [/__w/1/s/src/mono/sample/wasm/browser-eventpipe/Wasm.Browser.EventPipe.Sample.csproj]
##[error]artifacts/bin/microsoft.netcore.app.runtime.browser-wasm/Release/runtimes/browser-wasm/native/System.Private.CoreLib.dll(0,0): error IL2104: (NETCORE_ENGINEERING_TELEMETRY=Build) Assembly 'System.Private.CoreLib' produced trim warnings. For more information see https://aka.ms/dotnet-illink/libraries
/__w/1/s/.dotnet/sdk/7.0.100-preview.5.22307.18/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.ILLink.targets(110,5): error NETSDK1144: Optimizing assemblies for size failed. Optimization can be disabled by setting the PublishTrimmed property to false. [/__w/1/s/src/mono/sample/wasm/browser-eventpipe/Wasm.Browser.EventPipe.Sample.csproj]
##[error].dotnet/sdk/7.0.100-preview.5.22307.18/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.ILLink.targets(110,5): error NETSDK1144: (NETCORE_ENGINEERING_TELEMETRY=Build) Optimizing assemblies for size failed. Optimization can be disabled by setting the PublishTrimmed property to false.
```
This is broken on `main` too.
|
1.0
|
[wasm] Perftracing build broken due to trimming errors - [Build](https://dev.azure.com/dnceng/public/_build/results?buildId=1943226&view=logs&jobId=7f41eb05-1c5f-5922-38da-d308f495fb31&j=92c81310-cb26-5923-f96e-b7c73963fa3d&t=11318e79-3fe2-58cb-1121-19fe8295c832):
```
/__w/1/s/artifacts/bin/microsoft.netcore.app.runtime.browser-wasm/Release/runtimes/browser-wasm/native/System.Private.CoreLib.dll : error IL2104: Assembly 'System.Private.CoreLib' produced trim warnings. For more information see https://aka.ms/dotnet-illink/libraries [/__w/1/s/src/mono/sample/wasm/browser-eventpipe/Wasm.Browser.EventPipe.Sample.csproj]
##[error]artifacts/bin/microsoft.netcore.app.runtime.browser-wasm/Release/runtimes/browser-wasm/native/System.Private.CoreLib.dll(0,0): error IL2104: (NETCORE_ENGINEERING_TELEMETRY=Build) Assembly 'System.Private.CoreLib' produced trim warnings. For more information see https://aka.ms/dotnet-illink/libraries
/__w/1/s/.dotnet/sdk/7.0.100-preview.5.22307.18/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.ILLink.targets(110,5): error NETSDK1144: Optimizing assemblies for size failed. Optimization can be disabled by setting the PublishTrimmed property to false. [/__w/1/s/src/mono/sample/wasm/browser-eventpipe/Wasm.Browser.EventPipe.Sample.csproj]
##[error].dotnet/sdk/7.0.100-preview.5.22307.18/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.ILLink.targets(110,5): error NETSDK1144: (NETCORE_ENGINEERING_TELEMETRY=Build) Optimizing assemblies for size failed. Optimization can be disabled by setting the PublishTrimmed property to false.
```
This is broken on `main` too.
|
non_process
|
perftracing build broken due to trimming errors w s artifacts bin microsoft netcore app runtime browser wasm release runtimes browser wasm native system private corelib dll error assembly system private corelib produced trim warnings for more information see artifacts bin microsoft netcore app runtime browser wasm release runtimes browser wasm native system private corelib dll error netcore engineering telemetry build assembly system private corelib produced trim warnings for more information see w s dotnet sdk preview sdks microsoft net sdk targets microsoft net illink targets error optimizing assemblies for size failed optimization can be disabled by setting the publishtrimmed property to false dotnet sdk preview sdks microsoft net sdk targets microsoft net illink targets error netcore engineering telemetry build optimizing assemblies for size failed optimization can be disabled by setting the publishtrimmed property to false this is broken on main too
| 0
|
92,576
| 3,872,560,185
|
IssuesEvent
|
2016-04-11 14:17:26
|
cs2103jan2016-w14-3j/main
|
https://api.github.com/repos/cs2103jan2016-w14-3j/main
|
closed
|
A user can set reminder intervals
|
not done priority.medium type.story
|
so that he will not forget about the task if he choose to ignore it the first time.
|
1.0
|
A user can set reminder intervals - so that he will not forget about the task if he choose to ignore it the first time.
|
non_process
|
a user can set reminder intervals so that he will not forget about the task if he choose to ignore it the first time
| 0
|
184,187
| 21,784,826,000
|
IssuesEvent
|
2022-05-14 01:27:47
|
bsbtd/Teste
|
https://api.github.com/repos/bsbtd/Teste
|
opened
|
CVE-2022-1650 (High) detected in eventsource-1.0.7.tgz
|
security vulnerability
|
## CVE-2022-1650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eventsource-1.0.7.tgz</b></p></summary>
<p>W3C compliant EventSource client for Node.js and browser (polyfill)</p>
<p>Library home page: <a href="https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz">https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz</a></p>
<p>Path to dependency file: /aws-mobile-appsync-chat-starter-angular/package.json</p>
<p>Path to vulnerable library: /pro-table/node_modules/eventsource/package.json,/pro-table/node_modules/eventsource/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.7.5.tgz (Root Library)
- webpack-dev-server-3.10.3.tgz
- sockjs-client-1.4.0.tgz
- :x: **eventsource-1.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/50a539d66e7a2f790cf8a8d8d1471993698c9adc">50a539d66e7a2f790cf8a8d8d1471993698c9adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650>CVE-2022-1650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/">https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: eventsource - 2.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-1650 (High) detected in eventsource-1.0.7.tgz - ## CVE-2022-1650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eventsource-1.0.7.tgz</b></p></summary>
<p>W3C compliant EventSource client for Node.js and browser (polyfill)</p>
<p>Library home page: <a href="https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz">https://registry.npmjs.org/eventsource/-/eventsource-1.0.7.tgz</a></p>
<p>Path to dependency file: /aws-mobile-appsync-chat-starter-angular/package.json</p>
<p>Path to vulnerable library: /pro-table/node_modules/eventsource/package.json,/pro-table/node_modules/eventsource/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.7.5.tgz (Root Library)
- webpack-dev-server-3.10.3.tgz
- sockjs-client-1.4.0.tgz
- :x: **eventsource-1.0.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/50a539d66e7a2f790cf8a8d8d1471993698c9adc">50a539d66e7a2f790cf8a8d8d1471993698c9adc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository eventsource/eventsource prior to v2.0.2.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1650>CVE-2022-1650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/">https://huntr.dev/bounties/dc9e467f-be5d-4945-867d-1044d27e9b8e/</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: eventsource - 2.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in eventsource tgz cve high severity vulnerability vulnerable library eventsource tgz compliant eventsource client for node js and browser polyfill library home page a href path to dependency file aws mobile appsync chat starter angular package json path to vulnerable library pro table node modules eventsource package json pro table node modules eventsource package json dependency hierarchy build angular tgz root library webpack dev server tgz sockjs client tgz x eventsource tgz vulnerable library found in head commit a href found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in github repository eventsource eventsource prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution eventsource step up your open source security game with whitesource
| 0
|
104,084
| 8,960,943,135
|
IssuesEvent
|
2019-01-28 08:08:04
|
fedora-infra/bodhi
|
https://api.github.com/repos/fedora-infra/bodhi
|
closed
|
bodhi-ci clean should clean up the integration test images
|
RFE Tests
|
The ```bodhi-ci``` clean should clean up the integration test images as well.
|
1.0
|
bodhi-ci clean should clean up the integration test images - The ```bodhi-ci``` clean should clean up the integration test images as well.
|
non_process
|
bodhi ci clean should clean up the integration test images the bodhi ci clean should clean up the integration test images as well
| 0
|
855
| 3,316,511,329
|
IssuesEvent
|
2015-11-06 17:12:30
|
technofreaky/woocomerce-quick-donation
|
https://api.github.com/repos/technofreaky/woocomerce-quick-donation
|
closed
|
Automatic confirmation email not sent when donation made through payment gateway
|
Feature Request Issue Processing
|
When I make a new donation using <b>cheque/DD method</b>, the status of the donation is <b>"on-hold"</b>, I am getting the automatic confirmation email in this scenario.
But, when I make a donation using <b>payment gateway</b>, the status of the donation is <b>"processing"</b>. I am not getting the automatic confirmation email in this scenario.
Kindly fix this issue.
|
1.0
|
Automatic confirmation email not sent when donation made through payment gateway - When I make a new donation using <b>cheque/DD method</b>, the status of the donation is <b>"on-hold"</b>, I am getting the automatic confirmation email in this scenario.
But, when I make a donation using <b>payment gateway</b>, the status of the donation is <b>"processing"</b>. I am not getting the automatic confirmation email in this scenario.
Kindly fix this issue.
|
process
|
automatic confirmation email not sent when donation made through payment gateway when i make a new donation using cheque dd method the status of the donation is on hold i am getting the automatic confirmation email in this scenario but when i make a donation using payment gateway the status of the donation is processing i am not getting the automatic confirmation email in this scenario kindly fix this issue
| 1
|
7,440
| 10,554,571,886
|
IssuesEvent
|
2019-10-03 19:47:00
|
pelias/pelias
|
https://api.github.com/repos/pelias/pelias
|
closed
|
pelias testing module
|
processed question
|
it might be nice to have all our repositories linking to a testing module we control, so `npm test` would run the same thing in every module.
having an npm module called something like `pelias-testing` would allow us to add new functions to that repo and update our testing tools in a single place, we could even use the semver of `*` to ensure that the latest version was always used in each dependant repo.
|
1.0
|
pelias testing module - it might be nice to have all our repositories linking to a testing module we control, so `npm test` would run the same thing in every module.
having an npm module called something like `pelias-testing` would allow us to add new functions to that repo and update our testing tools in a single place, we could even use the semver of `*` to ensure that the latest version was always used in each dependant repo.
|
process
|
pelias testing module it might be nice to have all our repositories linking to a testing module we control so npm test would run the same thing in every module having an npm module called something like pelias testing would allow us to add new functions to that repo and update our testing tools in a single place we could even use the semver of to ensure that the latest version was always used in each dependant repo
| 1
|
10,926
| 13,726,724,979
|
IssuesEvent
|
2020-10-04 01:44:45
|
fluent/fluent-bit
|
https://api.github.com/repos/fluent/fluent-bit
|
closed
|
Fluent Bit v1.5.6 stops outputting logs to stackdriver
|
work-in-process
|
## Bug Report
**Describe the bug**
Fluent Bit v1.5.6 stops outputting logs to stackdriver. The `kubectl top pods` command shows CPU stuck at 1m. The fluentbit_output_proc_records_total metric would show 0 log sent to stackdriver. But the pod would show that it is running. Once this happens, the fluent-bit pod seems to stuck that way for hours until I delete the pod. It happens every day.
**To Reproduce**
Deploy fluent-bit as Daemonset this helm chart:
https://github.com/helm/charts/tree/master/stable/fluent-bit
- Example log message:
```
A log shows the following sequence of events:
[2020/09/29 20:43:44] [ info] [oauth2] access token from 'www.googleapis.com:443' retrieved
[2020/09/29 21:00:31] [ warn] [output:stackdriver:stackdriver.0] error
{
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 10485760 bytes.",
"status": "INVALID_ARGUMENT"
}
}
[2020/09/29 21:40:00] [ warn] [output:stackdriver:stackdriver.0] error
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 502 (Server Error)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>502.</b> <ins>That’s an error.</ins>
<p>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds. <ins>That’s all we know.</ins>
[2020/09/29 22:35:52] [ warn] [output:stackdriver:stackdriver.0] error
{
"error": {
"code": 503,
"message": "Authentication backend unavailable.",
"status": "UNAVAILABLE"
}
}
```
- Steps to reproduce the problem:
**Expected behavior**
If stackdriver backend disconnect fluent-bit for what ever reason, fluent-bit should continue to try to reconnect indefinitely. I don't expect it to halt.
**Your Environment**
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 1.5.6
* Configuration:
```
[INPUT]
Name tail
Path /var/log/containers/*.log
Exclude_Path /var/log/containers/fluentd*.log,/var/log/containers/istio*.log,/var/log/containers/metrics-server*.log,/var/log/containers/datadog*.log,/var/log/containers/private-location-worker*.log,/var/log/containers/pulsar*.log,/var/log/containers/fluent*.log,/var/log/containers/kubecost*.log,/var/log/containers/aws-alb-ingress-controller*.log,/var/log/containers/kubernetes-dashboard-*.log
Parser docker
Tag ${STACK}.kube.*
DB /var/log/fbk8slogs.db
Mem_Buf_Limit 6MB
Skip_Long_Lines On
Alias ${STACK}.kubernetes.tail
[FILTER]
Name grep
Match ${STACK}.kube.*
Exclude log .*tag\":\s?\"noflake.*
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name log
# For Streamlio Pulsar function logs
Parser streamlio-log
Reserve_Data On
Preserve_Key On
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name log
Parser standard-log
Reserve_Data On
Preserve_Key Off
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name streamliolog
Parser embedded-json
Reserve_Data On
Preserve_Key Off
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name standardlog
Parser embedded-json
Reserve_Data Off
Preserve_Key Off
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename levelname severity
Rename level severity
[FILTER]
Name kubernetes
Match ${STACK}.kube.*
Kube_Tag_Prefix ${STACK}.kube.var.log.containers.
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Annotations Off
Merge_Log On
K8S-Logging.Parser On
K8S-Logging.Exclude On
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation lift
Nested_under kubernetes
Prefix_with tmp:
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation lift
Nested_under tmp:labels
Prefix_with tmp:labels:
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename tmp:namespace_name tmp:labels:k8s_namespace
Rename tmp:pod_name tmp:labels:pod_name
Rename tmp:labels:app tmp:labels:app_name
[FILTER]
Name record_modifier
Match ${STACK}.kube.*
Record tmp:labels:stack ${STACK}
# https://github.com/fluent/fluent-bit/pull/2297
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation nest
Wildcard tmp:labels:k8s_namespace
Wildcard tmp:labels:pod_name
Wildcard tmp:labels:stack
Wildcard tmp:labels:app_name
Wildcard tmp:labels:name
Wildcard tmp:labels:sku
Nest_under labels
# Nested_under logging.googleapis.com/labels
Remove_prefix tmp:labels:
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation nest
Wildcard tmp:namespace_name*
Wildcard tmp:pod_name*
Nest_under k8s
Remove_prefix tmp:
[FILTER]
Name modify
Match ${STACK}.kube.*
Remove_wildcard tmp:
Remove log
Remove stream
# Rename key name level to severity
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename level severity
# Shorten output log tags
[FILTER]
Name rewrite_tag
Match ${STACK}.kube.*
Rule $labels['app_name'] ^(.*)$ ${STACK}.kubernetes.$labels['app_name'] false
Rule $message ^.*$ ${STACK}.kubernetes.unknown_app false
Emitter_Name standard_log_emitted
[OUTPUT]
Name stackdriver
Match ${STACK}.kubernetes.*
resource global
net.keepalive off
Retry_Limit 3
```
* Environment name and version (e.g. Kubernetes? What version?): K8S 1.15
* Server type and version: See helm chart
* Operating System and version: See helm chart
* Filters and plugins: stackdriver
**Additional context**
I had to delete and recreate the pods when the fluent-bit pod stuck. A solution for me is fluent-bit if it encounter problems with stackdriver, restart the stackdriver connection would be better than just stop sending logs.
|
1.0
|
Fluent Bit v1.5.6 stops outputting logs to stackdriver - ## Bug Report
**Describe the bug**
Fluent Bit v1.5.6 stops outputting logs to stackdriver. The `kubectl top pods` command shows CPU stuck at 1m. The fluentbit_output_proc_records_total metric would show 0 log sent to stackdriver. But the pod would show that it is running. Once this happens, the fluent-bit pod seems to stuck that way for hours until I delete the pod. It happens every day.
**To Reproduce**
Deploy fluent-bit as Daemonset this helm chart:
https://github.com/helm/charts/tree/master/stable/fluent-bit
- Example log message:
```
A log shows the following sequence of events:
[2020/09/29 20:43:44] [ info] [oauth2] access token from 'www.googleapis.com:443' retrieved
[2020/09/29 21:00:31] [ warn] [output:stackdriver:stackdriver.0] error
{
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 10485760 bytes.",
"status": "INVALID_ARGUMENT"
}
}
[2020/09/29 21:40:00] [ warn] [output:stackdriver:stackdriver.0] error
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 502 (Server Error)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>502.</b> <ins>That’s an error.</ins>
<p>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds. <ins>That’s all we know.</ins>
[2020/09/29 22:35:52] [ warn] [output:stackdriver:stackdriver.0] error
{
"error": {
"code": 503,
"message": "Authentication backend unavailable.",
"status": "UNAVAILABLE"
}
}
```
- Steps to reproduce the problem:
**Expected behavior**
If stackdriver backend disconnect fluent-bit for what ever reason, fluent-bit should continue to try to reconnect indefinitely. I don't expect it to halt.
**Your Environment**
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 1.5.6
* Configuration:
```
[INPUT]
Name tail
Path /var/log/containers/*.log
Exclude_Path /var/log/containers/fluentd*.log,/var/log/containers/istio*.log,/var/log/containers/metrics-server*.log,/var/log/containers/datadog*.log,/var/log/containers/private-location-worker*.log,/var/log/containers/pulsar*.log,/var/log/containers/fluent*.log,/var/log/containers/kubecost*.log,/var/log/containers/aws-alb-ingress-controller*.log,/var/log/containers/kubernetes-dashboard-*.log
Parser docker
Tag ${STACK}.kube.*
DB /var/log/fbk8slogs.db
Mem_Buf_Limit 6MB
Skip_Long_Lines On
Alias ${STACK}.kubernetes.tail
[FILTER]
Name grep
Match ${STACK}.kube.*
Exclude log .*tag\":\s?\"noflake.*
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name log
# For Streamlio Pulsar function logs
Parser streamlio-log
Reserve_Data On
Preserve_Key On
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name log
Parser standard-log
Reserve_Data On
Preserve_Key Off
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name streamliolog
Parser embedded-json
Reserve_Data On
Preserve_Key Off
[FILTER]
Name parser
Match ${STACK}.kube.*
Key_Name standardlog
Parser embedded-json
Reserve_Data Off
Preserve_Key Off
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename levelname severity
Rename level severity
[FILTER]
Name kubernetes
Match ${STACK}.kube.*
Kube_Tag_Prefix ${STACK}.kube.var.log.containers.
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
Annotations Off
Merge_Log On
K8S-Logging.Parser On
K8S-Logging.Exclude On
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation lift
Nested_under kubernetes
Prefix_with tmp:
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation lift
Nested_under tmp:labels
Prefix_with tmp:labels:
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename tmp:namespace_name tmp:labels:k8s_namespace
Rename tmp:pod_name tmp:labels:pod_name
Rename tmp:labels:app tmp:labels:app_name
[FILTER]
Name record_modifier
Match ${STACK}.kube.*
Record tmp:labels:stack ${STACK}
# https://github.com/fluent/fluent-bit/pull/2297
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation nest
Wildcard tmp:labels:k8s_namespace
Wildcard tmp:labels:pod_name
Wildcard tmp:labels:stack
Wildcard tmp:labels:app_name
Wildcard tmp:labels:name
Wildcard tmp:labels:sku
Nest_under labels
# Nested_under logging.googleapis.com/labels
Remove_prefix tmp:labels:
[FILTER]
Name nest
Match ${STACK}.kube.*
Operation nest
Wildcard tmp:namespace_name*
Wildcard tmp:pod_name*
Nest_under k8s
Remove_prefix tmp:
[FILTER]
Name modify
Match ${STACK}.kube.*
Remove_wildcard tmp:
Remove log
Remove stream
# Rename key name level to severity
[FILTER]
Name modify
Match ${STACK}.kube.*
Rename level severity
# Shorten output log tags
[FILTER]
Name rewrite_tag
Match ${STACK}.kube.*
Rule $labels['app_name'] ^(.*)$ ${STACK}.kubernetes.$labels['app_name'] false
Rule $message ^.*$ ${STACK}.kubernetes.unknown_app false
Emitter_Name standard_log_emitted
[OUTPUT]
Name stackdriver
Match ${STACK}.kubernetes.*
resource global
net.keepalive off
Retry_Limit 3
```
* Environment name and version (e.g. Kubernetes? What version?): K8S 1.15
* Server type and version: See helm chart
* Operating System and version: See helm chart
* Filters and plugins: stackdriver
**Additional context**
I had to delete and recreate the pods when the fluent-bit pod stuck. A solution for me is fluent-bit if it encounter problems with stackdriver, restart the stackdriver connection would be better than just stop sending logs.
|
process
|
fluent bit stops outputting logs to stackdriver bug report describe the bug fluent bit stops outputting logs to stackdriver the kubectl top pods command shows cpu stuck at the fluentbit output proc records total metric would show log sent to stackdriver but the pod would show that it is running once this happens the fluent bit pod seems to stuck that way for hours until i delete the pod it happens every day to reproduce deploy fluent bit as daemonset this helm chart example log message a log shows the following sequence of events access token from retrieved error error code message request payload size exceeds the limit bytes status invalid argument error error server error margin padding html code font arial sans serif html background fff color padding body margin auto max width min height padding body background url no repeat padding right p margin overflow hidden ins color text decoration none a img border media screen and max width body background none margin top max width none padding right logo background url no repeat margin left media only screen and min resolution logo background url no repeat moz border image url media only screen and webkit min device pixel ratio logo background url no repeat webkit background size logo display inline block height width that’s an error the server encountered a temporary error and could not complete your request please try again in seconds that’s all we know error error code message authentication backend unavailable status unavailable steps to reproduce the problem expected behavior if stackdriver backend disconnect fluent bit for what ever reason fluent bit should continue to try to reconnect indefinitely i don t expect it to halt your environment version used configuration name tail path var log containers log exclude path var log containers fluentd log var log containers istio log var log containers metrics server log var log containers datadog log var log containers private location worker log var log containers pulsar log var log containers fluent log var log containers kubecost log var log containers aws alb ingress controller log var log containers kubernetes dashboard log parser docker tag stack kube db var log db mem buf limit skip long lines on alias stack kubernetes tail name grep match stack kube exclude log tag s noflake name parser match stack kube key name log for streamlio pulsar function logs parser streamlio log reserve data on preserve key on name parser match stack kube key name log parser standard log reserve data on preserve key off name parser match stack kube key name streamliolog parser embedded json reserve data on preserve key off name parser match stack kube key name standardlog parser embedded json reserve data off preserve key off name modify match stack kube rename levelname severity rename level severity name kubernetes match stack kube kube tag prefix stack kube var log containers kube url kube ca file var run secrets kubernetes io serviceaccount ca crt kube token file var run secrets kubernetes io serviceaccount token annotations off merge log on logging parser on logging exclude on name nest match stack kube operation lift nested under kubernetes prefix with tmp name nest match stack kube operation lift nested under tmp labels prefix with tmp labels name modify match stack kube rename tmp namespace name tmp labels namespace rename tmp pod name tmp labels pod name rename tmp labels app tmp labels app name name record modifier match stack kube record tmp labels stack stack name nest match stack kube operation nest wildcard tmp labels namespace wildcard tmp labels pod name wildcard tmp labels stack wildcard tmp labels app name wildcard tmp labels name wildcard tmp labels sku nest under labels nested under logging googleapis com labels remove prefix tmp labels name nest match stack kube operation nest wildcard tmp namespace name wildcard tmp pod name nest under remove prefix tmp name modify match stack kube remove wildcard tmp remove log remove stream rename key name level to severity name modify match stack kube rename level severity shorten output log tags name rewrite tag match stack kube rule labels stack kubernetes labels false rule message stack kubernetes unknown app false emitter name standard log emitted name stackdriver match stack kubernetes resource global net keepalive off retry limit environment name and version e g kubernetes what version server type and version see helm chart operating system and version see helm chart filters and plugins stackdriver additional context i had to delete and recreate the pods when the fluent bit pod stuck a solution for me is fluent bit if it encounter problems with stackdriver restart the stackdriver connection would be better than just stop sending logs
| 1
|
2,499
| 5,272,152,537
|
IssuesEvent
|
2017-02-06 11:58:00
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
I spawn through the creation of sub-thread How non-stdio: 'inherit' set tty to true
|
child_process question tty
|
[I spawn through the creation of sub-thread How non-stdio: 'inherit' set tty to true](https://github.com/nodejs/node/issues/10573)


Detached: true, making tty.isatty true [Tty to true]
|
1.0
|
I spawn through the creation of sub-thread How non-stdio: 'inherit' set tty to true - [I spawn through the creation of sub-thread How non-stdio: 'inherit' set tty to true](https://github.com/nodejs/node/issues/10573)


Detached: true, making tty.isatty true [Tty to true]
|
process
|
i spawn through the creation of sub thread how non stdio inherit set tty to true detached true making tty isatty true
| 1
|
4,628
| 7,473,329,925
|
IssuesEvent
|
2018-04-03 15:05:08
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
Process graph validation
|
feature jobs process graphs
|
Maybe it would be helpful to have an endpoint to validate process graphs and get detailed error messages. That could help with debugging process graphs. Or are user expected to send them to /execute always? That might produce costs. Sending them to POST /jobs is not a good idea as these "tests" currently can't be deleted.
|
1.0
|
Process graph validation - Maybe it would be helpful to have an endpoint to validate process graphs and get detailed error messages. That could help with debugging process graphs. Or are user expected to send them to /execute always? That might produce costs. Sending them to POST /jobs is not a good idea as these "tests" currently can't be deleted.
|
process
|
process graph validation maybe it would be helpful to have an endpoint to validate process graphs and get detailed error messages that could help with debugging process graphs or are user expected to send them to execute always that might produce costs sending them to post jobs is not a good idea as these tests currently can t be deleted
| 1
|
103,483
| 8,916,260,929
|
IssuesEvent
|
2019-01-19 14:56:37
|
Azure/acs-engine
|
https://api.github.com/repos/Azure/acs-engine
|
closed
|
Test/CI subscription quotas are monitored
|
kind/chore test pipeline
|
> As a product manager,
> I want to be notified when our test subscription is at or near Azure quota limits
> so I can take action before it affects CI
## Acceptance Criteria
- Key Azure quota limits are continuously monitored and when above thresholds, are reported to the ACS Engine product team
- There is a dashboard to see current quotas
I would like to use a preexisting solution already in place at Microsoft/ACS.
TBD:
- what metrics to monitor? `MaxStorageAccountsCountPerSubscriptionExceeded`, etc.?
|
1.0
|
Test/CI subscription quotas are monitored - > As a product manager,
> I want to be notified when our test subscription is at or near Azure quota limits
> so I can take action before it affects CI
## Acceptance Criteria
- Key Azure quota limits are continuously monitored and when above thresholds, are reported to the ACS Engine product team
- There is a dashboard to see current quotas
I would like to use a preexisting solution already in place at Microsoft/ACS.
TBD:
- what metrics to monitor? `MaxStorageAccountsCountPerSubscriptionExceeded`, etc.?
|
non_process
|
test ci subscription quotas are monitored as a product manager i want to be notified when our test subscription is at or near azure quota limits so i can take action before it affects ci acceptance criteria key azure quota limits are continuously monitored and when above thresholds are reported to the acs engine product team there is a dashboard to see current quotas i would like to use a preexisting solution already in place at microsoft acs tbd what metrics to monitor maxstorageaccountscountpersubscriptionexceeded etc
| 0
|
445,173
| 12,827,234,821
|
IssuesEvent
|
2020-07-06 18:04:43
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
opened
|
Add details of promoted status to xpi when signing
|
priority: p3
|
> When an add-on is added to either of the Verified or Line lists (not Spotlight), it should be reflected in the mozilla-recommendations.json file when signed, under “states” (values: “sponsored” for Verified tier 1, “verified” for Verified tier 2, “line” for Line).
Probable Autograph side changes needed too.
|
1.0
|
Add details of promoted status to xpi when signing - > When an add-on is added to either of the Verified or Line lists (not Spotlight), it should be reflected in the mozilla-recommendations.json file when signed, under “states” (values: “sponsored” for Verified tier 1, “verified” for Verified tier 2, “line” for Line).
Probable Autograph side changes needed too.
|
non_process
|
add details of promoted status to xpi when signing when an add on is added to either of the verified or line lists not spotlight it should be reflected in the mozilla recommendations json file when signed under “states” values “sponsored” for verified tier “verified” for verified tier “line” for line probable autograph side changes needed too
| 0
|
17,092
| 22,600,776,272
|
IssuesEvent
|
2022-06-29 08:56:16
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
opened
|
[processor/resourcedetection] 'docker' detector does not work in official contrib images
|
bug comp: resourcedetectionprocessor
|
**Describe the bug**
The [docker detector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#docker-metadata) from the resource detection processor does not work on official opentelemetry-collector-contrib images, or any other image that runs the Collector under a user other than root.
**Steps to reproduce**
Run the resource detection processor docker detector, while mounting the `/var/run/docker.sock` socket:
```
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v <mount config here> otel/opentelemetry-collector-contrib
```
**What did you expect to see?**
The Docker detector should add the `host.name` of the host machine, and its operating system.
**What did you see instead?**
The Docker detector fails because of lack of permissions
**What version did you use?**
Can be reproduced on the latest version, happens since v0.40.0 (more specifically, since #6380).
**What config did you use?**
The default configuration on the README can reproduce this
```
processors:
resourcedetection/docker:
detectors: [env, docker]
timeout: 2s
override: false
```
**Environment**
This happens on every Docker version and every Collector image since v0.40.0
**Additional context**
This happens since #6380, because of a permissions issue: the mounted socket is only readable by root. AFAICT, Docker does not currently allow mounting volumes with permissions for a specific user (see moby/moby#2259), and we can't `chown` the socket at build time, so we have to choose between running as rootless or supporting this.
This is not a problem on downstream or custom distributions that run as root.
For the hostname, a workaround is to override the OS hostname on the Docker image using something like ` --hostname $(hostname)`. I don't know of a workaround for getting the hosts' operating system.
|
1.0
|
[processor/resourcedetection] 'docker' detector does not work in official contrib images - **Describe the bug**
The [docker detector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#docker-metadata) from the resource detection processor does not work on official opentelemetry-collector-contrib images, or any other image that runs the Collector under a user other than root.
**Steps to reproduce**
Run the resource detection processor docker detector, while mounting the `/var/run/docker.sock` socket:
```
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro -v <mount config here> otel/opentelemetry-collector-contrib
```
**What did you expect to see?**
The Docker detector should add the `host.name` of the host machine, and its operating system.
**What did you see instead?**
The Docker detector fails because of lack of permissions
**What version did you use?**
Can be reproduced on the latest version, happens since v0.40.0 (more specifically, since #6380).
**What config did you use?**
The default configuration on the README can reproduce this
```
processors:
resourcedetection/docker:
detectors: [env, docker]
timeout: 2s
override: false
```
**Environment**
This happens on every Docker version and every Collector image since v0.40.0
**Additional context**
This happens since #6380, because of a permissions issue: the mounted socket is only readable by root. AFAICT, Docker does not currently allow mounting volumes with permissions for a specific user (see moby/moby#2259), and we can't `chown` the socket at build time, so we have to choose between running as rootless or supporting this.
This is not a problem on downstream or custom distributions that run as root.
For the hostname, a workaround is to override the OS hostname on the Docker image using something like ` --hostname $(hostname)`. I don't know of a workaround for getting the hosts' operating system.
|
process
|
docker detector does not work in official contrib images describe the bug the from the resource detection processor does not work on official opentelemetry collector contrib images or any other image that runs the collector under a user other than root steps to reproduce run the resource detection processor docker detector while mounting the var run docker sock socket docker run v var run docker sock var run docker sock ro v otel opentelemetry collector contrib what did you expect to see the docker detector should add the host name of the host machine and its operating system what did you see instead the docker detector fails because of lack of permissions what version did you use can be reproduced on the latest version happens since more specifically since what config did you use the default configuration on the readme can reproduce this processors resourcedetection docker detectors timeout override false environment this happens on every docker version and every collector image since additional context this happens since because of a permissions issue the mounted socket is only readable by root afaict docker does not currently allow mounting volumes with permissions for a specific user see moby moby and we can t chown the socket at build time so we have to choose between running as rootless or supporting this this is not a problem on downstream or custom distributions that run as root for the hostname a workaround is to override the os hostname on the docker image using something like hostname hostname i don t know of a workaround for getting the hosts operating system
| 1
|
785,660
| 27,621,710,714
|
IssuesEvent
|
2023-03-10 01:05:54
|
responsible-ai-collaborative/aiid
|
https://api.github.com/repos/responsible-ai-collaborative/aiid
|
closed
|
Production Minified React Error
|
Type:Bug Priority:High
|
Production has a series of what are likely hydration errors. The app behaves fine, but this has been popping up periodically for a while and we need to take care of it.
`react-dom.production.min.js:131 Uncaught Error: Minified React error #418; visit https://reactjs.org/docs/error-decoder.html?invariant=418 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.`
|
1.0
|
Production Minified React Error - Production has a series of what are likely hydration errors. The app behaves fine, but this has been popping up periodically for a while and we need to take care of it.
`react-dom.production.min.js:131 Uncaught Error: Minified React error #418; visit https://reactjs.org/docs/error-decoder.html?invariant=418 for the full message or use the non-minified dev environment for full errors and additional helpful warnings.`
|
non_process
|
production minified react error production has a series of what are likely hydration errors the app behaves fine but this has been popping up periodically for a while and we need to take care of it react dom production min js uncaught error minified react error visit for the full message or use the non minified dev environment for full errors and additional helpful warnings
| 0
|
18,039
| 24,049,836,250
|
IssuesEvent
|
2022-09-16 11:46:55
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Rename OneHotEncoder option sparse to sparse_output
|
module:preprocessing
|
### Task
Introduce new parameter `sparse_output` in `OneHotEncoder` and deprecate the then old `sparse` parameter.
### Background
Several estimators have an option to return sparse output.
- `RandomTreesEmbedding(sparse_output=True)`
- `LabelBinarizer(sparse_output=True)`
- `MultiLabelBinarizer(sparse_output=True)`
`OneHotEncoder(sparse=True)` seems to be the only one to deviate from `sparse_output`.
|
1.0
|
Rename OneHotEncoder option sparse to sparse_output - ### Task
Introduce new parameter `sparse_output` in `OneHotEncoder` and deprecate the then old `sparse` parameter.
### Background
Several estimators have an option to return sparse output.
- `RandomTreesEmbedding(sparse_output=True)`
- `LabelBinarizer(sparse_output=True)`
- `MultiLabelBinarizer(sparse_output=True)`
`OneHotEncoder(sparse=True)` seems to be the only one to deviate from `sparse_output`.
|
process
|
rename onehotencoder option sparse to sparse output task introduce new parameter sparse output in onehotencoder and deprecate the then old sparse parameter background several estimators have an option to return sparse output randomtreesembedding sparse output true labelbinarizer sparse output true multilabelbinarizer sparse output true onehotencoder sparse true seems to be the only one to deviate from sparse output
| 1
|
811,729
| 30,297,901,835
|
IssuesEvent
|
2023-07-10 01:44:54
|
ppy/osu
|
https://api.github.com/repos/ppy/osu
|
closed
|
Add length limit to chat text box
|
type:behavioural area:overlay-chat priority:2
|
Currently there is no length limit applied client-side, and any message that exceed the server-side limit will get thrown away. Ideally such errors would be elegantly handled and the message would be returned back to the user, but I've attempted at least adding a client-side length limit of 100 characters (which is the default value from web source), and I wasn't if that's what the web currently has (it's envvar-based, and it seems to exceed 100 characters).
### Discussed in https://github.com/ppy/osu/discussions/21245
<div type='discussions-op-text'>
<sup>Originally posted by **developomp** November 15, 2022</sup>
When I try to send a lengthy message in the osu!lazer client, it clears the text input regardless of whether the message was successfully delivered or not, forcing me to retype the whole thing. Adding a character length limit feature and doing the following for message input area would feel like a much better user experience in my opinion:
1. When a user sends a message, lock the text input.
2. If the message is successfully delivered, clear the text input. Notify user otherwise (give the text input a little shake maybe).</div>
|
1.0
|
Add length limit to chat text box - Currently there is no length limit applied client-side, and any message that exceed the server-side limit will get thrown away. Ideally such errors would be elegantly handled and the message would be returned back to the user, but I've attempted at least adding a client-side length limit of 100 characters (which is the default value from web source), and I wasn't if that's what the web currently has (it's envvar-based, and it seems to exceed 100 characters).
### Discussed in https://github.com/ppy/osu/discussions/21245
<div type='discussions-op-text'>
<sup>Originally posted by **developomp** November 15, 2022</sup>
When I try to send a lengthy message in the osu!lazer client, it clears the text input regardless of whether the message was successfully delivered or not, forcing me to retype the whole thing. Adding a character length limit feature and doing the following for message input area would feel like a much better user experience in my opinion:
1. When a user sends a message, lock the text input.
2. If the message is successfully delivered, clear the text input. Notify user otherwise (give the text input a little shake maybe).</div>
|
non_process
|
add length limit to chat text box currently there is no length limit applied client side and any message that exceed the server side limit will get thrown away ideally such errors would be elegantly handled and the message would be returned back to the user but i ve attempted at least adding a client side length limit of characters which is the default value from web source and i wasn t if that s what the web currently has it s envvar based and it seems to exceed characters discussed in originally posted by developomp november when i try to send a lengthy message in the osu lazer client it clears the text input regardless of whether the message was successfully delivered or not forcing me to retype the whole thing adding a character length limit feature and doing the following for message input area would feel like a much better user experience in my opinion when a user sends a message lock the text input if the message is successfully delivered clear the text input notify user otherwise give the text input a little shake maybe
| 0
|
13,388
| 2,755,261,347
|
IssuesEvent
|
2015-04-26 13:51:39
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Linq API help error: XML comment contains invalid XML
|
defect in progress
|
For example, by mouse over the `Intersect()` method below you get the following help message "XML comment contains invalid XML: Enat tag 'param' does not match the start tag 'T'"
```
using System;
using System.Collections.Generic;
using System.Linq;
using Bridge;
namespace ClientTestLibrary.Linq
{
class TestLinqSetOperators
{
public void Test()
{
IEnumerable<int> i;
i.Intersect();
}
}
}
```
|
1.0
|
Linq API help error: XML comment contains invalid XML - For example, by mouse over the `Intersect()` method below you get the following help message "XML comment contains invalid XML: Enat tag 'param' does not match the start tag 'T'"
```
using System;
using System.Collections.Generic;
using System.Linq;
using Bridge;
namespace ClientTestLibrary.Linq
{
class TestLinqSetOperators
{
public void Test()
{
IEnumerable<int> i;
i.Intersect();
}
}
}
```
|
non_process
|
linq api help error xml comment contains invalid xml for example by mouse over the intersect method below you get the following help message xml comment contains invalid xml enat tag param does not match the start tag t using system using system collections generic using system linq using bridge namespace clienttestlibrary linq class testlinqsetoperators public void test ienumerable i i intersect
| 0
|
652,587
| 21,556,491,307
|
IssuesEvent
|
2022-04-30 14:05:58
|
bounswe/bounswe2022group2
|
https://api.github.com/repos/bounswe/bounswe2022group2
|
closed
|
Practice App: Initialization Steps of the Project
|
priority-high status-needreview practice-app
|
### Issue Description
We will implement a practice app which will be a practice for the main project we will develop. This is the first issue of our practice app project and we will define the initialization steps that should be performed under this issue before starting to code implementation of the project. Steps of this issue will be related to settings/main files-folders/project/repository setup.
### Step Details
Steps that will be performed:
- [x] Create the "practice-app" folder @bahricanyesil
- [x] Create a template .gitignore file @bahricanyesil
- [x] Create a Readme file
- [x] Create a pull request
- [x] Assign 2 reviewers (1 from the team and 1 from our TAs) for each pull request
- [x] Link related issues (e.g. this issue) in the pull request titles/descriptions
- [x] Add branch protection rules for the master branch (by communicating with our TA) @bahricanyesil
### Final Actions
After all of the items mentioned above are done by the responsible, the reviewer will be reviewing the results of each section and provide final comments.
### Deadline of the Issue
24.04.2022 - Sunday - 23:59
### Reviewer
@xltvy
### Deadline for the Review
25.04.2022 - Monday - 20:00
|
1.0
|
Practice App: Initialization Steps of the Project - ### Issue Description
We will implement a practice app which will be a practice for the main project we will develop. This is the first issue of our practice app project and we will define the initialization steps that should be performed under this issue before starting to code implementation of the project. Steps of this issue will be related to settings/main files-folders/project/repository setup.
### Step Details
Steps that will be performed:
- [x] Create the "practice-app" folder @bahricanyesil
- [x] Create a template .gitignore file @bahricanyesil
- [x] Create a Readme file
- [x] Create a pull request
- [x] Assign 2 reviewers (1 from the team and 1 from our TAs) for each pull request
- [x] Link related issues (e.g. this issue) in the pull request titles/descriptions
- [x] Add branch protection rules for the master branch (by communicating with our TA) @bahricanyesil
### Final Actions
After all of the items mentioned above are done by the responsible, the reviewer will be reviewing the results of each section and provide final comments.
### Deadline of the Issue
24.04.2022 - Sunday - 23:59
### Reviewer
@xltvy
### Deadline for the Review
25.04.2022 - Monday - 20:00
|
non_process
|
practice app initialization steps of the project issue description we will implement a practice app which will be a practice for the main project we will develop this is the first issue of our practice app project and we will define the initialization steps that should be performed under this issue before starting to code implementation of the project steps of this issue will be related to settings main files folders project repository setup step details steps that will be performed create the practice app folder bahricanyesil create a template gitignore file bahricanyesil create a readme file create a pull request assign reviewers from the team and from our tas for each pull request link related issues e g this issue in the pull request titles descriptions add branch protection rules for the master branch by communicating with our ta bahricanyesil final actions after all of the items mentioned above are done by the responsible the reviewer will be reviewing the results of each section and provide final comments deadline of the issue sunday reviewer xltvy deadline for the review monday
| 0
|
150,747
| 5,786,665,992
|
IssuesEvent
|
2017-05-01 12:15:54
|
esikachev/abc-server
|
https://api.github.com/repos/esikachev/abc-server
|
opened
|
Encrypt of password for github account
|
feature request priority/P2
|
Now we have a password in a config file in a not encrypted state. This is not secure. We need to implement the method for safer storing of passwords.
|
1.0
|
Encrypt of password for github account - Now we have a password in a config file in a not encrypted state. This is not secure. We need to implement the method for safer storing of passwords.
|
non_process
|
encrypt of password for github account now we have a password in a config file in a not encrypted state this is not secure we need to implement the method for safer storing of passwords
| 0
|
569,607
| 17,015,495,532
|
IssuesEvent
|
2021-07-02 11:24:28
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
opened
|
Show users of a tag
|
Component: taginfo Priority: major Type: enhancement
|
**[Submitted to the original trac issue database at 3.58pm, Tuesday, 19th April 2011]**
It would be helpful to show users of a given key or key/value combination, sorted by percentage. This will help to indicate if a certain tag is only being used by a few individuals, or by a large group of people. I'm assuming that this would use the last user to edit an object, rather than considering the history of edits, which I think is perfectly acceptable.
|
1.0
|
Show users of a tag - **[Submitted to the original trac issue database at 3.58pm, Tuesday, 19th April 2011]**
It would be helpful to show users of a given key or key/value combination, sorted by percentage. This will help to indicate if a certain tag is only being used by a few individuals, or by a large group of people. I'm assuming that this would use the last user to edit an object, rather than considering the history of edits, which I think is perfectly acceptable.
|
non_process
|
show users of a tag it would be helpful to show users of a given key or key value combination sorted by percentage this will help to indicate if a certain tag is only being used by a few individuals or by a large group of people i m assuming that this would use the last user to edit an object rather than considering the history of edits which i think is perfectly acceptable
| 0
|
145,089
| 11,648,327,666
|
IssuesEvent
|
2020-03-01 20:05:54
|
Sleep-tracker-1/Back_End
|
https://api.github.com/repos/Sleep-tracker-1/Back_End
|
closed
|
Staging and CI
|
deployment testing
|
Setup a staging server runs tests and merges with master if all tests pass.
- [x] Staging branch
- [x] Staging DB
- [x] Staging environment config
- [ ] Auto merge into master on test pass
|
1.0
|
Staging and CI - Setup a staging server runs tests and merges with master if all tests pass.
- [x] Staging branch
- [x] Staging DB
- [x] Staging environment config
- [ ] Auto merge into master on test pass
|
non_process
|
staging and ci setup a staging server runs tests and merges with master if all tests pass staging branch staging db staging environment config auto merge into master on test pass
| 0
|
155,889
| 12,281,106,513
|
IssuesEvent
|
2020-05-08 15:15:54
|
d-r-q/qbit
|
https://api.github.com/repos/d-r-q/qbit
|
opened
|
Add child to parent tree test
|
api choose enhancement refactoring research tests
|
Requires API change to allow user to persist several entities in single call
Add test for factoring of tree of
```kotlin
data class ChildToParent(val id: Long?, parent: ChildToParent?)
```
|
1.0
|
Add child to parent tree test - Requires API change to allow user to persist several entities in single call
Add test for factoring of tree of
```kotlin
data class ChildToParent(val id: Long?, parent: ChildToParent?)
```
|
non_process
|
add child to parent tree test requires api change to allow user to persist several entities in single call add test for factoring of tree of kotlin data class childtoparent val id long parent childtoparent
| 0
|
229,299
| 25,319,001,549
|
IssuesEvent
|
2022-11-18 01:03:47
|
tlkh/transformers-benchmarking
|
https://api.github.com/repos/tlkh/transformers-benchmarking
|
opened
|
CVE-2022-45198 (High) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2022-45198 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- streamlit-0.55.2-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tlkh/transformers-benchmarking/commit/39aecb84de42ed39fed4b34551a944fc9d98183f">39aecb84de42ed39fed4b34551a944fc9d98183f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pillow before 9.2.0 performs Improper Handling of Highly Compressed GIF Data (Data Amplification).
<p>Publish Date: 2022-11-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45198>CVE-2022-45198</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-11-14</p>
<p>Fix Resolution: Pillow - 9.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-45198 (High) detected in Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-45198 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/12/ad/61f8dfba88c4e56196bf6d056cdbba64dc9c5dfdfbc97d02e6472feed913/Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- streamlit-0.55.2-py2.py3-none-any.whl (Root Library)
- :x: **Pillow-6.2.2-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tlkh/transformers-benchmarking/commit/39aecb84de42ed39fed4b34551a944fc9d98183f">39aecb84de42ed39fed4b34551a944fc9d98183f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pillow before 9.2.0 performs Improper Handling of Highly Compressed GIF Data (Data Amplification).
<p>Publish Date: 2022-11-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45198>CVE-2022-45198</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-11-14</p>
<p>Fix Resolution: Pillow - 9.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in pillow whl cve high severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy streamlit none any whl root library x pillow whl vulnerable library found in head commit a href found in base branch main vulnerability details pillow before performs improper handling of highly compressed gif data data amplification publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution pillow step up your open source security game with mend
| 0
|
13,591
| 16,163,269,078
|
IssuesEvent
|
2021-05-01 02:57:51
|
tdwg/chrono
|
https://api.github.com/repos/tdwg/chrono
|
closed
|
New Term - materialDatedRelationship
|
Process - prepare for Executive review Term - add
|
## New term
* Submitter: Laura Brenskelle
* Justification (why is this term necessary?): See https://github.com/tdwg/chrono/issues/20.
* Proponents (at least two independent parties who need this term): Public review.
Proposed attributes of the new term:
* Organized in Class: ChronometricAge
* Term name (in lowerCamelCase): materialDatedRelationship
* Definition of the term: The relationship of the materialDated to the subject of the ChronometricAge record, from which the ChronometricAge of the subject is inferred.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary.
* Examples: `sameAs` (cases where the subject material was completely destructively subsampled to get the ChronometricAge), `subsampleOf` (cases where part of the original specimen was extracted as the material used to determine the ChronometricAge), `inContextWith` (cases where the ChronometricAge is inferred from materialDated, such as sediments or cultural objects, in related temporal context), `stratigraphicallyCorrelatedWith` (cases where the ChronometricAge is inferred from materialDated in a stratigraphically correlated context).
* Refines (identifier of the broader term this term refines, if applicable):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable):
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable):
|
1.0
|
New Term - materialDatedRelationship - ## New term
* Submitter: Laura Brenskelle
* Justification (why is this term necessary?): See https://github.com/tdwg/chrono/issues/20.
* Proponents (at least two independent parties who need this term): Public review.
Proposed attributes of the new term:
* Organized in Class: ChronometricAge
* Term name (in lowerCamelCase): materialDatedRelationship
* Definition of the term: The relationship of the materialDated to the subject of the ChronometricAge record, from which the ChronometricAge of the subject is inferred.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary.
* Examples: `sameAs` (cases where the subject material was completely destructively subsampled to get the ChronometricAge), `subsampleOf` (cases where part of the original specimen was extracted as the material used to determine the ChronometricAge), `inContextWith` (cases where the ChronometricAge is inferred from materialDated, such as sediments or cultural objects, in related temporal context), `stratigraphicallyCorrelatedWith` (cases where the ChronometricAge is inferred from materialDated in a stratigraphically correlated context).
* Refines (identifier of the broader term this term refines, if applicable):
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable):
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable):
|
process
|
new term materialdatedrelationship new term submitter laura brenskelle justification why is this term necessary see proponents at least two independent parties who need this term public review proposed attributes of the new term organized in class chronometricage term name in lowercamelcase materialdatedrelationship definition of the term the relationship of the materialdated to the subject of the chronometricage record from which the chronometricage of the subject is inferred usage comments recommendations regarding content etc recommended best practice is to use a controlled vocabulary examples sameas cases where the subject material was completely destructively subsampled to get the chronometricage subsampleof cases where part of the original specimen was extracted as the material used to determine the chronometricage incontextwith cases where the chronometricage is inferred from materialdated such as sediments or cultural objects in related temporal context stratigraphicallycorrelatedwith cases where the chronometricage is inferred from materialdated in a stratigraphically correlated context refines identifier of the broader term this term refines if applicable replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable
| 1
|
13,097
| 15,495,138,350
|
IssuesEvent
|
2021-03-11 00:20:29
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
Should be able to run a card with another card as its source query with just perms for the former
|
Administration/Permissions Priority:P1 Querying/Processor Type:Bug
|
Suppose we have two Cards, Card 1 and Card 2. Card 2 has a query like `{:query {:source-table "card__1}}` (i.e., Card 2 uses Card 1 as a source query). Now suppose the current User has permissions to see Card 2, but no perms for Card 1. They should still be able to run Card 2 -- we don't need to check perms for Card 1 as well.
However, we should require perms for Card 1 to save a Card with a query like `{:query {:source-table "card__1}}`, so people can't give themselves access to arbitrary Cards.
I think this is a regression in `master` (39-SNAPSHOT) -- I'll fix before the release goes out.
|
1.0
|
Should be able to run a card with another card as its source query with just perms for the former - Suppose we have two Cards, Card 1 and Card 2. Card 2 has a query like `{:query {:source-table "card__1}}` (i.e., Card 2 uses Card 1 as a source query). Now suppose the current User has permissions to see Card 2, but no perms for Card 1. They should still be able to run Card 2 -- we don't need to check perms for Card 1 as well.
However, we should require perms for Card 1 to save a Card with a query like `{:query {:source-table "card__1}}`, so people can't give themselves access to arbitrary Cards.
I think this is a regression in `master` (39-SNAPSHOT) -- I'll fix before the release goes out.
|
process
|
should be able to run a card with another card as its source query with just perms for the former suppose we have two cards card and card card has a query like query source table card i e card uses card as a source query now suppose the current user has permissions to see card but no perms for card they should still be able to run card we don t need to check perms for card as well however we should require perms for card to save a card with a query like query source table card so people can t give themselves access to arbitrary cards i think this is a regression in master snapshot i ll fix before the release goes out
| 1
|
2,630
| 5,410,114,991
|
IssuesEvent
|
2017-03-01 07:29:57
|
FujiXeroxNZ-Wellington/Indigo
|
https://api.github.com/repos/FujiXeroxNZ-Wellington/Indigo
|
closed
|
form data is displaying feedback even after saving the data in the contract processing modal
|
0-4-Contract Processing 0-Contract Management Client Side KB Article v1.0
|
The labels and input elements are all green and are not resetting their colors to defaults.
|
1.0
|
form data is displaying feedback even after saving the data in the contract processing modal - The labels and input elements are all green and are not resetting their colors to defaults.
|
process
|
form data is displaying feedback even after saving the data in the contract processing modal the labels and input elements are all green and are not resetting their colors to defaults
| 1
|
509,875
| 14,750,711,514
|
IssuesEvent
|
2021-01-08 02:56:45
|
kubesphere/kubesphere
|
https://api.github.com/repos/kubesphere/kubesphere
|
closed
|
How to compile multiple languages in jenkins by switching node
|
area/devops kind/feature priority/medium
|
## Devops in microservices
1. Most companies use kubernetes because of microservices.Kubernetes is a natural fit for microservices
2. Jenkins supports devops on kubernetes, but it's very troublesome to configure. You need to write a yaml file or configure your docker image in casc.
3. There may be dependencies between the devops of microservices.
4. If I have 100 microservices. I don’t want to build them one by one.
## Solution
We can use a yaml files like drone and gitci to build multi-tasks pipelines.
But drone and gitci build pipelines in one git-repo.
tekton can build pipelines across repo.
But tekton only supports git.
=======================================================
August 13th update
## How to compile multiple languages in jenkins by switching node.
Only one node is used in the kubesphere documentation. Unsuccessful using multiple nodes in the official jenkins documentation
|
1.0
|
How to compile multiple languages in jenkins by switching node - ## Devops in microservices
1. Most companies use kubernetes because of microservices.Kubernetes is a natural fit for microservices
2. Jenkins supports devops on kubernetes, but it's very troublesome to configure. You need to write a yaml file or configure your docker image in casc.
3. There may be dependencies between the devops of microservices.
4. If I have 100 microservices. I don’t want to build them one by one.
## Solution
We can use a yaml files like drone and gitci to build multi-tasks pipelines.
But drone and gitci build pipelines in one git-repo.
tekton can build pipelines across repo.
But tekton only supports git.
=======================================================
August 13th update
## How to compile multiple languages in jenkins by switching node.
Only one node is used in the kubesphere documentation. Unsuccessful using multiple nodes in the official jenkins documentation
|
non_process
|
how to compile multiple languages in jenkins by switching node devops in microservices most companies use kubernetes because of microservices kubernetes is a natural fit for microservices jenkins supports devops on kubernetes but it s very troublesome to configure you need to write a yaml file or configure your docker image in casc there may be dependencies between the devops of microservices if i have microservices i don’t want to build them one by one solution we can use a yaml files like drone and gitci to build multi tasks pipelines but drone and gitci build pipelines in one git repo tekton can build pipelines across repo but tekton only supports git august update how to compile multiple languages in jenkins by switching node only one node is used in the kubesphere documentation unsuccessful using multiple nodes in the official jenkins documentation
| 0
|
2,029
| 4,847,083,247
|
IssuesEvent
|
2016-11-10 13:59:24
|
woesterduolf/Mission-reisbureau
|
https://api.github.com/repos/woesterduolf/Mission-reisbureau
|
closed
|
Reisstad selecteren
|
Boekingsprocess priority: highest Type:Feature
|
**Mockup design (page 2)**
When the consumer gets to this page, he is greeted by the text in the top asking him where he wants to travel. Below that are all the possible cities he can choose from, represented by an image of a well-known building in the city. Since the customer needs to be able to see more than just these six, a scroll bar is visible to the right of the screen to scroll down to the rest of the cities.
Once a customer has selected a city, the image representing that city will be highlighted for easy visibility.
Once the customer has decided on the city, he can press the confirm button on the bottom right of the screen. This will make the confirmation window pop up, where the window asks for confirmation if this really is the city the consumer wants to go to. He can then click no to pick a different one, or yes to confirm the city and go on to the hotel preference window.
|
1.0
|
Reisstad selecteren - **Mockup design (page 2)**
When the consumer gets to this page, he is greeted by the text in the top asking him where he wants to travel. Below that are all the possible cities he can choose from, represented by an image of a well-known building in the city. Since the customer needs to be able to see more than just these six, a scroll bar is visible to the right of the screen to scroll down to the rest of the cities.
Once a customer has selected a city, the image representing that city will be highlighted for easy visibility.
Once the customer has decided on the city, he can press the confirm button on the bottom right of the screen. This will make the confirmation window pop up, where the window asks for confirmation if this really is the city the consumer wants to go to. He can then click no to pick a different one, or yes to confirm the city and go on to the hotel preference window.
|
process
|
reisstad selecteren mockup design page when the consumer gets to this page he is greeted by the text in the top asking him where he wants to travel below that are all the possible cities he can choose from represented by an image of a well known building in the city since the customer needs to be able to see more than just these six a scroll bar is visible to the right of the screen to scroll down to the rest of the cities once a customer has selected a city the image representing that city will be highlighted for easy visibility once the customer has decided on the city he can press the confirm button on the bottom right of the screen this will make the confirmation window pop up where the window asks for confirmation if this really is the city the consumer wants to go to he can then click no to pick a different one or yes to confirm the city and go on to the hotel preference window
| 1
|
175,709
| 27,963,323,892
|
IssuesEvent
|
2023-03-24 17:17:16
|
MNK-photoday/photoday
|
https://api.github.com/repos/MNK-photoday/photoday
|
closed
|
[Design] User Page 구현
|
아현 Design
|
Description
> User Page 구현
Progress
- [x] 프로필 부분 구현
- [x] 프로필 소개 부분 구현
- [x] 수정 클릭 시, 보이는 input 박스 구현
- [x] 비밀 번호 변경 클릭 시, 보이는 input 박스 구현
- [x] 팔로우 클릭 시 나오는 모달 구현
|
1.0
|
[Design] User Page 구현 - Description
> User Page 구현
Progress
- [x] 프로필 부분 구현
- [x] 프로필 소개 부분 구현
- [x] 수정 클릭 시, 보이는 input 박스 구현
- [x] 비밀 번호 변경 클릭 시, 보이는 input 박스 구현
- [x] 팔로우 클릭 시 나오는 모달 구현
|
non_process
|
user page 구현 description user page 구현 progress 프로필 부분 구현 프로필 소개 부분 구현 수정 클릭 시 보이는 input 박스 구현 비밀 번호 변경 클릭 시 보이는 input 박스 구현 팔로우 클릭 시 나오는 모달 구현
| 0
|
249,515
| 7,962,811,113
|
IssuesEvent
|
2018-07-13 15:24:05
|
containous/traefik
|
https://api.github.com/repos/containous/traefik
|
closed
|
v1.7 RC Basic Auth KV and Web UI
|
area/provider/kv area/webui kind/bug/confirmed priority/P1
|
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
The old format for KV Basic Auth was
`traefik/frontends/frontend1/basicauth/0 'user:password'`
The new value is
`traefik/frontends/frontend1/auth/basic/users/0 'user:password'`
When I remove the old format and replace it with the new I do not get prompted for a password.
Traefik Dashboard does not show either format in the frontend details tab.
### What did you expect to see?
I expected to see a password for the frontend and a value displayed on the details tab
### What did you see instead?
Nothing.
### Output of `traefik version`: (_What version of Traefik are you using?_)
```
traefik version
Version: v1.7.0-rc1
Codename: maroilles
Go version: go1.10.3
Built: 2018-07-09_03:03:16PM
OS/Arch: linux/amd64
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
Using Consul on Ubuntu 16.04
```toml
################################################################
# Global configuration
################################################################
# Enable debug mode
debug = true
logLevel = "WARN"
# EntryPoints are the ports that are open for Træfik to respond to.
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.api]
address = ":8081"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# Acme setting are used for letsencrypt to get the wildcard cert.
[acme]
email = "admin@example.com"
storage = "traefik/acme/account"
caServer = "https://acme-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.dnsChallenge]
provider = "digitalocean" # DNS Provider name (cloudflare, OVH, gandi...)
delayBeforeCheck = 0
[[acme.domains]]
main = "*.example.com"
# Consul entry point settings, where Træfik looks for the KV store.
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
[traefikLog]
filePath = "/var/log/traefik/traefik.log"
[accessLog]
filePath = "/var/log/traefik/access.log"
[api]
entrypoint = "api"
```
```bash
# New
SITE='php7a'
consul kv put traefik/frontends/$SITE/backend php7a
consul kv put traefik/frontends/$SITE/entrypoints http,https
consul kv put traefik/frontends/$SITE/routes/route0/rule Host:$SITE.example.com
consul kv put traefik/frontends/$SITE/priority 100
consul kv put traefik/frontends/$SITE/passhostheader true
consul kv put traefik/frontends/$SITE/auth/basic/users/0 'user:password'
# Old
SITE='php7a'
consul kv put traefik/frontends/$SITE/backend php7a
consul kv put traefik/frontends/$SITE/entrypoints http,https
consul kv put traefik/frontends/$SITE/routes/route0/rule Host:$SITE.example.com
consul kv put traefik/frontends/$SITE/priority 100
consul kv put traefik/frontends/$SITE/passhostheader true
consul kv put traefik/frontends/$SITE/basicauth/0 'user:password'
```
### If applicable, please paste the log output in DEBUG level (`--logLevel=DEBUG` switch)
```
time="2018-07-12T12:57:37-05:00" level=warning msg="Deprecated configuration found: /basicauth. Please use /auth/basic/.
```
Edited to remove real domains
|
1.0
|
v1.7 RC Basic Auth KV and Web UI - ### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
The old format for KV Basic Auth was
`traefik/frontends/frontend1/basicauth/0 'user:password'`
The new value is
`traefik/frontends/frontend1/auth/basic/users/0 'user:password'`
When I remove the old format and replace it with the new I do not get prompted for a password.
Traefik Dashboard does not show either format in the frontend details tab.
### What did you expect to see?
I expected to see a password for the frontend and a value displayed on the details tab
### What did you see instead?
Nothing.
### Output of `traefik version`: (_What version of Traefik are you using?_)
```
traefik version
Version: v1.7.0-rc1
Codename: maroilles
Go version: go1.10.3
Built: 2018-07-09_03:03:16PM
OS/Arch: linux/amd64
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
Using Consul on Ubuntu 16.04
```toml
################################################################
# Global configuration
################################################################
# Enable debug mode
debug = true
logLevel = "WARN"
# EntryPoints are the ports that are open for Træfik to respond to.
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.api]
address = ":8081"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
# Acme setting are used for letsencrypt to get the wildcard cert.
[acme]
email = "admin@example.com"
storage = "traefik/acme/account"
caServer = "https://acme-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.dnsChallenge]
provider = "digitalocean" # DNS Provider name (cloudflare, OVH, gandi...)
delayBeforeCheck = 0
[[acme.domains]]
main = "*.example.com"
# Consul entry point settings, where Træfik looks for the KV store.
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
[traefikLog]
filePath = "/var/log/traefik/traefik.log"
[accessLog]
filePath = "/var/log/traefik/access.log"
[api]
entrypoint = "api"
```
```bash
# New
SITE='php7a'
consul kv put traefik/frontends/$SITE/backend php7a
consul kv put traefik/frontends/$SITE/entrypoints http,https
consul kv put traefik/frontends/$SITE/routes/route0/rule Host:$SITE.example.com
consul kv put traefik/frontends/$SITE/priority 100
consul kv put traefik/frontends/$SITE/passhostheader true
consul kv put traefik/frontends/$SITE/auth/basic/users/0 'user:password'
# Old
SITE='php7a'
consul kv put traefik/frontends/$SITE/backend php7a
consul kv put traefik/frontends/$SITE/entrypoints http,https
consul kv put traefik/frontends/$SITE/routes/route0/rule Host:$SITE.example.com
consul kv put traefik/frontends/$SITE/priority 100
consul kv put traefik/frontends/$SITE/passhostheader true
consul kv put traefik/frontends/$SITE/basicauth/0 'user:password'
```
### If applicable, please paste the log output in DEBUG level (`--logLevel=DEBUG` switch)
```
time="2018-07-12T12:57:37-05:00" level=warning msg="Deprecated configuration found: /basicauth. Please use /auth/basic/.
```
Edited to remove real domains
|
non_process
|
rc basic auth kv and web ui do you want to request a feature or report a bug bug what did you do the old format for kv basic auth was traefik frontends basicauth user password the new value is traefik frontends auth basic users user password when i remove the old format and replace it with the new i do not get prompted for a password traefik dashboard does not show either format in the frontend details tab what did you expect to see i expected to see a password for the frontend and a value displayed on the details tab what did you see instead nothing output of traefik version what version of traefik are you using traefik version version codename maroilles go version built os arch linux what is your environment configuration arguments toml provider platform using consul on ubuntu toml global configuration enable debug mode debug true loglevel warn entrypoints are the ports that are open for træfik to respond to defaultentrypoints address address entrypoint https address acme setting are used for letsencrypt to get the wildcard cert email admin example com storage traefik acme account caserver entrypoint https provider digitalocean dns provider name cloudflare ovh gandi delaybeforecheck main example com consul entry point settings where træfik looks for the kv store endpoint watch true prefix traefik filepath var log traefik traefik log filepath var log traefik access log entrypoint api bash new site consul kv put traefik frontends site backend consul kv put traefik frontends site entrypoints http https consul kv put traefik frontends site routes rule host site example com consul kv put traefik frontends site priority consul kv put traefik frontends site passhostheader true consul kv put traefik frontends site auth basic users user password old site consul kv put traefik frontends site backend consul kv put traefik frontends site entrypoints http https consul kv put traefik frontends site routes rule host site example com consul kv put traefik frontends site priority consul kv put traefik frontends site passhostheader true consul kv put traefik frontends site basicauth user password if applicable please paste the log output in debug level loglevel debug switch time level warning msg deprecated configuration found basicauth please use auth basic edited to remove real domains
| 0
|
83,403
| 7,870,086,743
|
IssuesEvent
|
2018-06-24 21:29:12
|
SunwellTracker/issues
|
https://api.github.com/repos/SunwellTracker/issues
|
closed
|
[Quest - Nekrum's Medallion]
|
Works locally | Requires testing question
|
Decription: The quest [Nekrum's Medallion] is a chain from the hinterlands
How it works: Nekrum is suppose to drop the quest item
How it should work: Nekrum did not drop the quest item
Source (you should point out proofs of your report, please give us some source): His body disappeared before I could take a screenshot and I also didnt know you need to provide proof. My first Bug i am reporting
|
1.0
|
[Quest - Nekrum's Medallion] - Decription: The quest [Nekrum's Medallion] is a chain from the hinterlands
How it works: Nekrum is suppose to drop the quest item
How it should work: Nekrum did not drop the quest item
Source (you should point out proofs of your report, please give us some source): His body disappeared before I could take a screenshot and I also didnt know you need to provide proof. My first Bug i am reporting
|
non_process
|
decription the quest is a chain from the hinterlands how it works nekrum is suppose to drop the quest item how it should work nekrum did not drop the quest item source you should point out proofs of your report please give us some source his body disappeared before i could take a screenshot and i also didnt know you need to provide proof my first bug i am reporting
| 0
|
12,636
| 15,016,577,936
|
IssuesEvent
|
2021-02-01 09:46:05
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
Deploying new VDC failing - failed to fund provisioning wallet error
|
process_wontfix type_bug
|
Was trying to deploy a new VDC , after the payment , it took a lot of time and in the end , got the following error :

3bot : masid.3bot
|
1.0
|
Deploying new VDC failing - failed to fund provisioning wallet error - Was trying to deploy a new VDC , after the payment , it took a lot of time and in the end , got the following error :

3bot : masid.3bot
|
process
|
deploying new vdc failing failed to fund provisioning wallet error was trying to deploy a new vdc after the payment it took a lot of time and in the end got the following error masid
| 1
|
59,337
| 24,733,723,938
|
IssuesEvent
|
2022-10-20 19:56:21
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
define a cosmos DB connection string into an Azure function app_settings
|
enhancement good first issue service/cosmosdb
|
I feel the need to document the use case where you need to link an azure function to the cosmos DB database using the connection string.
Assume you have declared a cosmosdb resource in your TF script like this
```
resource "azurerm_cosmosdb_account" "db" {
..
}
```
and now you want to define the connection string in you your azure function:
```
resource "azurerm_function_app" "function_name" {
..
app_settings = {
....
"cosmosdb_readwrite_primary_key" = azurerm_cosmosdb_account.db.connection_strings[0],
"cosmosdb_readwrite_secondary_key" = azurerm_cosmosdb_account.db.connection_strings[1],
}
```
So, obviously this is assuming the array respects the current following description by MSFT:
az cosmosdb keys list --type connection-strings
```
{
"connectionStrings": [
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Primary SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Secondary SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Primary Read-Only SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Secondary Read-Only SQL Connection String"
}
]
}
```
|
1.0
|
define a cosmos DB connection string into an Azure function app_settings - I feel the need to document the use case where you need to link an azure function to the cosmos DB database using the connection string.
Assume you have declared a cosmosdb resource in your TF script like this
```
resource "azurerm_cosmosdb_account" "db" {
..
}
```
and now you want to define the connection string in you your azure function:
```
resource "azurerm_function_app" "function_name" {
..
app_settings = {
....
"cosmosdb_readwrite_primary_key" = azurerm_cosmosdb_account.db.connection_strings[0],
"cosmosdb_readwrite_secondary_key" = azurerm_cosmosdb_account.db.connection_strings[1],
}
```
So, obviously this is assuming the array respects the current following description by MSFT:
az cosmosdb keys list --type connection-strings
```
{
"connectionStrings": [
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Primary SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Secondary SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Primary Read-Only SQL Connection String"
},
{
"connectionString": "AccountEndpoint=https://<redacted>.documents.azure.com:443/;AccountKey=<redacted>",
"description": "Secondary Read-Only SQL Connection String"
}
]
}
```
|
non_process
|
define a cosmos db connection string into an azure function app settings i feel the need to document the use case where you need to link an azure function to the cosmos db database using the connection string assume you have declared a cosmosdb resource in your tf script like this resource azurerm cosmosdb account db and now you want to define the connection string in you your azure function resource azurerm function app function name app settings cosmosdb readwrite primary key azurerm cosmosdb account db connection strings cosmosdb readwrite secondary key azurerm cosmosdb account db connection strings so obviously this is assuming the array respects the current following description by msft az cosmosdb keys list type connection strings connectionstrings connectionstring accountendpoint description primary sql connection string connectionstring accountendpoint description secondary sql connection string connectionstring accountendpoint description primary read only sql connection string connectionstring accountendpoint description secondary read only sql connection string
| 0
|
802,095
| 28,633,767,796
|
IssuesEvent
|
2023-04-25 00:02:36
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Make Stream settings and Create stream UIs more consistent
|
help wanted good first issue area: stream settings priority: high
|
Following up on #19519 / #23013, we should make the following changes to make the UIs of the Stream settings > General panel and the Create stream panel more consistent:
- [ ] Move the "Announce stream" option just below "Stream description" (with no changes to the logic for whether it's shown).
- [ ] While we're here, make the following changes to the "Announce stream" option (see #22892):
- [ ] Do not show the "Announce stream" option if the user creating the stream does not have access to the announcement stream name. When the option is not shown, we should default to announcing public (or web-public) streams. (This would come up if the announcement stream is private and the user is not an admin.)
- [ ] Change the option to "Announce new stream in **#[announcement stream]**."
- [ ] Replace the "i" with a question mark linking to `/help/configure-notification-bot#new-stream-announcements`
- [ ] Reorder the stream options in `/help/create-a-stream#stream-options` to reflect the updated ordering.
- [ ] Add a "Stream permissions" section heading just above "Who can access the stream?"
|
1.0
|
Make Stream settings and Create stream UIs more consistent - Following up on #19519 / #23013, we should make the following changes to make the UIs of the Stream settings > General panel and the Create stream panel more consistent:
- [ ] Move the "Announce stream" option just below "Stream description" (with no changes to the logic for whether it's shown).
- [ ] While we're here, make the following changes to the "Announce stream" option (see #22892):
- [ ] Do not show the "Announce stream" option if the user creating the stream does not have access to the announcement stream name. When the option is not shown, we should default to announcing public (or web-public) streams. (This would come up if the announcement stream is private and the user is not an admin.)
- [ ] Change the option to "Announce new stream in **#[announcement stream]**."
- [ ] Replace the "i" with a question mark linking to `/help/configure-notification-bot#new-stream-announcements`
- [ ] Reorder the stream options in `/help/create-a-stream#stream-options` to reflect the updated ordering.
- [ ] Add a "Stream permissions" section heading just above "Who can access the stream?"
|
non_process
|
make stream settings and create stream uis more consistent following up on we should make the following changes to make the uis of the stream settings general panel and the create stream panel more consistent move the announce stream option just below stream description with no changes to the logic for whether it s shown while we re here make the following changes to the announce stream option see do not show the announce stream option if the user creating the stream does not have access to the announcement stream name when the option is not shown we should default to announcing public or web public streams this would come up if the announcement stream is private and the user is not an admin change the option to announce new stream in replace the i with a question mark linking to help configure notification bot new stream announcements reorder the stream options in help create a stream stream options to reflect the updated ordering add a stream permissions section heading just above who can access the stream
| 0
|
30,222
| 4,568,399,811
|
IssuesEvent
|
2016-09-15 14:22:06
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
[k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod
|
area/platform/gke area/test kind/upgrade-test-failure priority/P0 team/gke
|
[k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod
https://k8s-testgrid.appspot.com/release-1.4-blocking#gke-1.3-1.4-upgrade-cluster
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-1.3-1.4-upgrade-cluster/559
|
2.0
|
[k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod - [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod
https://k8s-testgrid.appspot.com/release-1.4-blocking#gke-1.3-1.4-upgrade-cluster
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-1.3-1.4-upgrade-cluster/559
|
non_process
|
pods should not start app containers if init containers fail on a restartalways pod pods should not start app containers if init containers fail on a restartalways pod
| 0
|
3,240
| 6,302,337,392
|
IssuesEvent
|
2017-07-21 10:34:18
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Can not getting complete stdout
|
child_process windows
|
Version: v4.4.7
Platform: windows 10 (64 bit)
``` javascript
const spawn = require('child_process').spawn;
const ls = spawn("D:\\Documents\\Nauman Umer\\New folder\\electron-quick-start\\PC-BASIC\\a.bat", []);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
```
**a.bat**
``` batch
"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
```
its ouput when executed from command line:
```
D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC>"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
940 REM The IBM Personal Computer Art
950 REM Version 1.10 (C)Copyright IBM Corp 1981, 1982
960 REM Licensed Material - Program Property of IBM
970 REM Author - Glenn Stuart Dardick
975 DEF SEG: POKE 106,0
980 SAMPLES$ = "NO"
990 GOTO 1010
[AND SO ON]
```
pcbasic is written in python and using `sys.stdout.write()` instead of `print`.
but when I run this bat file with from my electron app using to about javascript code it results
```
stdout:
D:\Documents\Nauman Umer\New folder\electron-quick-start>"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
[PROGRAM OUTPUT MISSING]
child process exited with code 0
```
I also tried `spawn`, `exec` and `execFile` but getting same issue. and i also tried to execute pcbasic directly from program instead of from batch file but getting same problem. when i tried to redirect pcbasic output to file i noticed that a file is created but it is empty.
I tried following to redirect:
- in bat file
``` batch
"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A > a.txt
```
- in electron app
``` javascript
spawn(`"D:\\Documents\\Nauman Umer\\New folder\\electron-quick-start\\PC-BASIC\\a.bat" > a.bat`, []);
```
|
1.0
|
Can not getting complete stdout - Version: v4.4.7
Platform: windows 10 (64 bit)
``` javascript
const spawn = require('child_process').spawn;
const ls = spawn("D:\\Documents\\Nauman Umer\\New folder\\electron-quick-start\\PC-BASIC\\a.bat", []);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
```
**a.bat**
``` batch
"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
```
its ouput when executed from command line:
```
D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC>"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
940 REM The IBM Personal Computer Art
950 REM Version 1.10 (C)Copyright IBM Corp 1981, 1982
960 REM Licensed Material - Program Property of IBM
970 REM Author - Glenn Stuart Dardick
975 DEF SEG: POKE 106,0
980 SAMPLES$ = "NO"
990 GOTO 1010
[AND SO ON]
```
pcbasic is written in python and using `sys.stdout.write()` instead of `print`.
but when I run this bat file with from my electron app using to about javascript code it results
```
stdout:
D:\Documents\Nauman Umer\New folder\electron-quick-start>"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A
[PROGRAM OUTPUT MISSING]
child process exited with code 0
```
I also tried `spawn`, `exec` and `execFile` but getting same issue. and i also tried to execute pcbasic directly from program instead of from batch file but getting same problem. when i tried to redirect pcbasic output to file i noticed that a file is created but it is empty.
I tried following to redirect:
- in bat file
``` batch
"D:\Documents\Nauman Umer\New folder\electron-quick-start\PC-BASIC\pcbasic.com" --load="art.bas" --convert=A > a.txt
```
- in electron app
``` javascript
spawn(`"D:\\Documents\\Nauman Umer\\New folder\\electron-quick-start\\PC-BASIC\\a.bat" > a.bat`, []);
```
|
process
|
can not getting complete stdout version platform windows bit javascript const spawn require child process spawn const ls spawn d documents nauman umer new folder electron quick start pc basic a bat ls stdout on data data console log stdout data ls stderr on data data console log stderr data ls on close code console log child process exited with code code a bat batch d documents nauman umer new folder electron quick start pc basic pcbasic com load art bas convert a its ouput when executed from command line d documents nauman umer new folder electron quick start pc basic d documents nauman umer new folder electron quick start pc basic pcbasic com load art bas convert a rem the ibm personal computer art rem version c copyright ibm corp rem licensed material program property of ibm rem author glenn stuart dardick def seg poke samples no goto pcbasic is written in python and using sys stdout write instead of print but when i run this bat file with from my electron app using to about javascript code it results stdout d documents nauman umer new folder electron quick start d documents nauman umer new folder electron quick start pc basic pcbasic com load art bas convert a child process exited with code i also tried spawn exec and execfile but getting same issue and i also tried to execute pcbasic directly from program instead of from batch file but getting same problem when i tried to redirect pcbasic output to file i noticed that a file is created but it is empty i tried following to redirect in bat file batch d documents nauman umer new folder electron quick start pc basic pcbasic com load art bas convert a a txt in electron app javascript spawn d documents nauman umer new folder electron quick start pc basic a bat a bat
| 1
|
10,760
| 13,549,206,279
|
IssuesEvent
|
2020-09-17 07:51:30
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
New `sha1` remap function
|
domain: mapping domain: processing type: feature
|
As requested in #3691, the `sha1` remap function hashes the provided argument with the SHA1 algorithm.
## Examples
For all examples assume the following event:
```js
{
"message": "Hello world",
"remote_addr": "54.23.22.123"
}
```
### Path
```
.fingerprint = sha1(.message)
```
### String literal
```
.fingerprint = sha1("my string")
```
### Operators
```
.fingerprint = sha1(.message + .remote_addr)
```
I realize this example is outside of the scope of this function, but I wanted to include it for completeness.
|
1.0
|
New `sha1` remap function - As requested in #3691, the `sha1` remap function hashes the provided argument with the SHA1 algorithm.
## Examples
For all examples assume the following event:
```js
{
"message": "Hello world",
"remote_addr": "54.23.22.123"
}
```
### Path
```
.fingerprint = sha1(.message)
```
### String literal
```
.fingerprint = sha1("my string")
```
### Operators
```
.fingerprint = sha1(.message + .remote_addr)
```
I realize this example is outside of the scope of this function, but I wanted to include it for completeness.
|
process
|
new remap function as requested in the remap function hashes the provided argument with the algorithm examples for all examples assume the following event js message hello world remote addr path fingerprint message string literal fingerprint my string operators fingerprint message remote addr i realize this example is outside of the scope of this function but i wanted to include it for completeness
| 1
|
10,717
| 13,520,231,870
|
IssuesEvent
|
2020-09-15 04:11:20
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Updating placeholder k8s services fails with Service is invalid: spec.clusterIP: Invalid value: ""
|
area/API area/networking kind/bug kind/process
|
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
/area API
/kind process
## What version of Knative?
<!-- Delete all but your choice -->
0.13.x
## Expected Behavior
<!-- Briefly describe what you expect to happen -->
Route reconciliation to successfully complete when updating placeholder k8s services.
## Actual Behavior
<!-- Briefly describe what is actually happening -->
Route reconciliation makes it as far as marking the ingress ready on the route. Reconciliation fails after `Updating placeholder k8s services with ingress information`. This is the event logged:
```
Event(v1.ObjectReference{Kind:"Route", Namespace:"knative-applications", Name:"pubsub-event-mapper", UID:"67bb5fd2-c794-48cf-af99-14a58f41d8c5", APIVersion:"serving.knative.dev/v1", ResourceVersion:"208378251", FieldPath:""}): type: 'Warning' reason: 'InternalError' Service "pubsub-event-mapper" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
## Steps to Reproduce the Problem
<!-- How can a maintainer reproduce this issue (be detailed) -->
We just followed the instructions in the docs to install knative-serving, knative-eventing, and in-memory-channel. A broker was installed in the knative-applications namespace by labeling the namespace with `knative-eventing-injection=enabled` and then a knative-serving Service was deployed to the namespace knative-applications. In our logs we saw the route reconciliation process fail.
|
1.0
|
Updating placeholder k8s services fails with Service is invalid: spec.clusterIP: Invalid value: "" - <!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
/area API
/kind process
## What version of Knative?
<!-- Delete all but your choice -->
0.13.x
## Expected Behavior
<!-- Briefly describe what you expect to happen -->
Route reconciliation to successfully complete when updating placeholder k8s services.
## Actual Behavior
<!-- Briefly describe what is actually happening -->
Route reconciliation makes it as far as marking the ingress ready on the route. Reconciliation fails after `Updating placeholder k8s services with ingress information`. This is the event logged:
```
Event(v1.ObjectReference{Kind:"Route", Namespace:"knative-applications", Name:"pubsub-event-mapper", UID:"67bb5fd2-c794-48cf-af99-14a58f41d8c5", APIVersion:"serving.knative.dev/v1", ResourceVersion:"208378251", FieldPath:""}): type: 'Warning' reason: 'InternalError' Service "pubsub-event-mapper" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
## Steps to Reproduce the Problem
<!-- How can a maintainer reproduce this issue (be detailed) -->
We just followed the instructions in the docs to install knative-serving, knative-eventing, and in-memory-channel. A broker was installed in the knative-applications namespace by labeling the namespace with `knative-eventing-injection=enabled` and then a knative-serving Service was deployed to the namespace knative-applications. In our logs we saw the route reconciliation process fail.
|
process
|
updating placeholder services fails with service is invalid spec clusterip invalid value area api kind process what version of knative x expected behavior route reconciliation to successfully complete when updating placeholder services actual behavior route reconciliation makes it as far as marking the ingress ready on the route reconciliation fails after updating placeholder services with ingress information this is the event logged event objectreference kind route namespace knative applications name pubsub event mapper uid apiversion serving knative dev resourceversion fieldpath type warning reason internalerror service pubsub event mapper is invalid spec clusterip invalid value field is immutable steps to reproduce the problem we just followed the instructions in the docs to install knative serving knative eventing and in memory channel a broker was installed in the knative applications namespace by labeling the namespace with knative eventing injection enabled and then a knative serving service was deployed to the namespace knative applications in our logs we saw the route reconciliation process fail
| 1
|
10,380
| 13,193,976,496
|
IssuesEvent
|
2020-08-13 16:02:49
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
"Only a few tasks are supported in a server job at present." which ones?
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
> Only a few tasks are supported in a server job at present.
Please list which are supported.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4
* Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce
* Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
"Only a few tasks are supported in a server job at present." which ones? -
> Only a few tasks are supported in a server job at present.
Please list which are supported.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 67504b34-d64b-02a4-2e10-ab99f3b8cfe4
* Version Independent ID: 2cf63b2e-184b-7726-3b8a-d8baffd6fcce
* Content: [Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/phases.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
only a few tasks are supported in a server job at present which ones only a few tasks are supported in a server job at present please list which are supported document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
682,652
| 23,351,709,680
|
IssuesEvent
|
2022-08-10 01:11:05
|
Jonius7/SteamUI-OldGlory
|
https://api.github.com/repos/Jonius7/SteamUI-OldGlory
|
closed
|
[28 July] JS TWEAKS NOT WORKING
|
JS Priority
|
With the Steam Update on 28 July, using any of the **JS Tweaks** in OldGlory will cause the library to blackscreen.
I have not found a fix yet. In the meantime you can use the Reset button and install CSS tweaks only (don't touch the JS ones)
|
1.0
|
[28 July] JS TWEAKS NOT WORKING - With the Steam Update on 28 July, using any of the **JS Tweaks** in OldGlory will cause the library to blackscreen.
I have not found a fix yet. In the meantime you can use the Reset button and install CSS tweaks only (don't touch the JS ones)
|
non_process
|
js tweaks not working with the steam update on july using any of the js tweaks in oldglory will cause the library to blackscreen i have not found a fix yet in the meantime you can use the reset button and install css tweaks only don t touch the js ones
| 0
|
11,078
| 13,920,285,765
|
IssuesEvent
|
2020-10-21 10:13:05
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Cleaning up the 'GO:0007534 gene conversion at mating-type locus'
|
cell cycle and DNA processes obsoletion ready term merge
|
From #18777 - other changes needed in the area of homologous recombination, specifically under 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0034624 DNA recombinase assembly involved in gene conversion at mating-type locus' -> 0 annotations -> obsolete, GO-CAM model
- [x] 'GO:0000728 gene conversion at mating-type locus, DNA double-strand break formation' -> 2 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0031292 gene conversion at mating-type locus, DNA double-strand break processing' -> 6 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0000734 gene conversion at mating-type locus, DNA repair synthesis' -> 2 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0061500 gene conversion at mating-type locus, termination of copy-synthesis' -> 1 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0010708 heteroduplex formation involved in gene conversion at mating-type locus' -> 0 annotations -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0034636 strand invasion involved in gene conversion at mating-type locus ' -> 1 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
Thanks, Pascale
|
1.0
|
Cleaning up the 'GO:0007534 gene conversion at mating-type locus' - From #18777 - other changes needed in the area of homologous recombination, specifically under 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0034624 DNA recombinase assembly involved in gene conversion at mating-type locus' -> 0 annotations -> obsolete, GO-CAM model
- [x] 'GO:0000728 gene conversion at mating-type locus, DNA double-strand break formation' -> 2 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0031292 gene conversion at mating-type locus, DNA double-strand break processing' -> 6 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0000734 gene conversion at mating-type locus, DNA repair synthesis' -> 2 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0061500 gene conversion at mating-type locus, termination of copy-synthesis' -> 1 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0010708 heteroduplex formation involved in gene conversion at mating-type locus' -> 0 annotations -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
- [x] 'GO:0034636 strand invasion involved in gene conversion at mating-type locus ' -> 1 EXP -> substep, merge into 'GO:0007534 gene conversion at mating-type locus'
Thanks, Pascale
|
process
|
cleaning up the go gene conversion at mating type locus from other changes needed in the area of homologous recombination specifically under go gene conversion at mating type locus go dna recombinase assembly involved in gene conversion at mating type locus annotations obsolete go cam model go gene conversion at mating type locus dna double strand break formation exp substep merge into go gene conversion at mating type locus go gene conversion at mating type locus dna double strand break processing exp substep merge into go gene conversion at mating type locus go gene conversion at mating type locus dna repair synthesis exp substep merge into go gene conversion at mating type locus go gene conversion at mating type locus termination of copy synthesis exp substep merge into go gene conversion at mating type locus go heteroduplex formation involved in gene conversion at mating type locus annotations substep merge into go gene conversion at mating type locus go strand invasion involved in gene conversion at mating type locus exp substep merge into go gene conversion at mating type locus thanks pascale
| 1
|
23,427
| 3,851,750,482
|
IssuesEvent
|
2016-04-06 04:28:46
|
ysdn-2016/ysdn-2016.github.io
|
https://api.github.com/repos/ysdn-2016/ysdn-2016.github.io
|
closed
|
Mobile ribbon "Attend the Show" -> "Attend the Grad Show"
|
design
|

The mobile nav still has the older copy. We should change it to "Attend the Grad Show"
|
1.0
|
Mobile ribbon "Attend the Show" -> "Attend the Grad Show" - 
The mobile nav still has the older copy. We should change it to "Attend the Grad Show"
|
non_process
|
mobile ribbon attend the show attend the grad show the mobile nav still has the older copy we should change it to attend the grad show
| 0
|
8,371
| 11,520,305,916
|
IssuesEvent
|
2020-02-14 14:34:27
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: phagocytosis modulation by symbiont terms
|
multi-species process obsoletion ready term merge
|
Very strange.
The corresponding positive and negative regulation terms are in different parts of the ontology.
<img width="779" alt="phagocytosis" src="https://user-images.githubusercontent.com/7359272/72616541-1c580c80-392f-11ea-8553-d2650b04a4ce.png">
|
1.0
|
Obsoletion notice: phagocytosis modulation by symbiont terms -
Very strange.
The corresponding positive and negative regulation terms are in different parts of the ontology.
<img width="779" alt="phagocytosis" src="https://user-images.githubusercontent.com/7359272/72616541-1c580c80-392f-11ea-8553-d2650b04a4ce.png">
|
process
|
obsoletion notice phagocytosis modulation by symbiont terms very strange the corresponding positive and negative regulation terms are in different parts of the ontology img width alt phagocytosis src
| 1
|
248,818
| 21,075,170,356
|
IssuesEvent
|
2022-04-02 03:16:20
|
Uuvana-Studios/longvinter-windows-client
|
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
|
closed
|
NON-PVP server very very serious bug
|
Bug Not Tested
|
**Describe the bug**
I have found some serious bug in NON-PK server setting.
i have set my server as PVP=false
so my server should not able to kill each other and should indestroiable other player's stucture
and I have found the way to destroy other player's structure in PVP=false server setting.
1. unequip weapon
2. go to other player's house( not belongs to mine )
3. try to right click on any of other player's stuctures as screenshot.
4. normally u will unable to disassemble other user's structure with 'you are trying to pick up an item on someone elses property' message ( yeah it is normal )
5. but dont give up! just look around keep try to disassemble other player's structure nearby (without weapon)
6. you will successfully disassemble (some times can , but sometimes can not just keep try to right click it with any items. you will easily destroy huge size of town in few minute in NON-PVP server ! now you are good troller!!
7. this is very serious bug. server is PVP=false setting so user cant even kill troller just follow and watch behind troller break all town
what a mess
**Screenshots**
**Desktop (please complete the following information):**
- OS: window
- Game Version 1.0.2 beta
- Steam Version [e.g. 1.0]
**Additional context**
Add any other context about the problem here.



|
1.0
|
NON-PVP server very very serious bug - **Describe the bug**
I have found some serious bug in NON-PK server setting.
i have set my server as PVP=false
so my server should not able to kill each other and should indestroiable other player's stucture
and I have found the way to destroy other player's structure in PVP=false server setting.
1. unequip weapon
2. go to other player's house( not belongs to mine )
3. try to right click on any of other player's stuctures as screenshot.
4. normally u will unable to disassemble other user's structure with 'you are trying to pick up an item on someone elses property' message ( yeah it is normal )
5. but dont give up! just look around keep try to disassemble other player's structure nearby (without weapon)
6. you will successfully disassemble (some times can , but sometimes can not just keep try to right click it with any items. you will easily destroy huge size of town in few minute in NON-PVP server ! now you are good troller!!
7. this is very serious bug. server is PVP=false setting so user cant even kill troller just follow and watch behind troller break all town
what a mess
**Screenshots**
**Desktop (please complete the following information):**
- OS: window
- Game Version 1.0.2 beta
- Steam Version [e.g. 1.0]
**Additional context**
Add any other context about the problem here.



|
non_process
|
non pvp server very very serious bug describe the bug i have found some serious bug in non pk server setting i have set my server as pvp false so my server should not able to kill each other and should indestroiable other player s stucture and i have found the way to destroy other player s structure in pvp false server setting unequip weapon go to other player s house not belongs to mine try to right click on any of other player s stuctures as screenshot normally u will unable to disassemble other user s structure with you are trying to pick up an item on someone elses property message yeah it is normal but dont give up just look around keep try to disassemble other player s structure nearby without weapon you will successfully disassemble some times can but sometimes can not just keep try to right click it with any items you will easily destroy huge size of town in few minute in non pvp server now you are good troller this is very serious bug server is pvp false setting so user cant even kill troller just follow and watch behind troller break all town what a mess screenshots desktop please complete the following information os window game version beta steam version additional context add any other context about the problem here
| 0
|
130
| 2,570,625,905
|
IssuesEvent
|
2015-02-10 10:56:37
|
FG-Team/HCJ-Website-Builder
|
https://api.github.com/repos/FG-Team/HCJ-Website-Builder
|
closed
|
Team organization Team 2
|
No Processing
|
betrifft allgemeine Aufgaben (Merging ...)
-> evtl. im Moment nicht notwendig
|
1.0
|
Team organization Team 2 - betrifft allgemeine Aufgaben (Merging ...)
-> evtl. im Moment nicht notwendig
|
process
|
team organization team betrifft allgemeine aufgaben merging evtl im moment nicht notwendig
| 1
|
108,698
| 23,649,227,760
|
IssuesEvent
|
2022-08-26 03:54:04
|
MarlinFirmware/Marlin
|
https://api.github.com/repos/MarlinFirmware/Marlin
|
closed
|
[BUG] Some arguments in G-code are parsed as Hexadecimal values
|
Bug: Confirmed ! C: G-code Parser
|
### Did you test the latest `bugfix-2.1.x` code?
Yes, and the problem still exists.
### Bug Description
Sending the G-code `G0Y0X10' results as Y target position parsed as 16 and X as 10.
G0Y0X20 is parsed as Y=32 and X=20.
### Bug Timeline
_No response_
### Expected behavior
If a hexadecimal value is intended to be parsed, then the X axis should be kept unchanged, if the hexadecimal value is not to be parsed, then Y and X should keep only the decimal values.
### Actual behavior
Hexadecimal value is parsed if a figure 0Xnn is found in the argument.
### Steps to Reproduce
Send to printer the G-code `G0Y0X10' the printer moves to Y=16 X = 10
### Version of Marlin Firmware
Latest bugfix 2.1.x
### Printer model
_No response_
### Electronics
_No response_
### Add-ons
_No response_
### Bed Leveling
No Bed Leveling
### Your Slicer
_No response_
### Host Software
OctoPrint
### Don't forget to include
- [X] A ZIP file containing your `Configuration.h` and `Configuration_adv.h`.
### Additional information & file uploads
_No response_
|
1.0
|
[BUG] Some arguments in G-code are parsed as Hexadecimal values - ### Did you test the latest `bugfix-2.1.x` code?
Yes, and the problem still exists.
### Bug Description
Sending the G-code `G0Y0X10' results as Y target position parsed as 16 and X as 10.
G0Y0X20 is parsed as Y=32 and X=20.
### Bug Timeline
_No response_
### Expected behavior
If a hexadecimal value is intended to be parsed, then the X axis should be kept unchanged, if the hexadecimal value is not to be parsed, then Y and X should keep only the decimal values.
### Actual behavior
Hexadecimal value is parsed if a figure 0Xnn is found in the argument.
### Steps to Reproduce
Send to printer the G-code `G0Y0X10' the printer moves to Y=16 X = 10
### Version of Marlin Firmware
Latest bugfix 2.1.x
### Printer model
_No response_
### Electronics
_No response_
### Add-ons
_No response_
### Bed Leveling
No Bed Leveling
### Your Slicer
_No response_
### Host Software
OctoPrint
### Don't forget to include
- [X] A ZIP file containing your `Configuration.h` and `Configuration_adv.h`.
### Additional information & file uploads
_No response_
|
non_process
|
some arguments in g code are parsed as hexadecimal values did you test the latest bugfix x code yes and the problem still exists bug description sending the g code results as y target position parsed as and x as is parsed as y and x bug timeline no response expected behavior if a hexadecimal value is intended to be parsed then the x axis should be kept unchanged if the hexadecimal value is not to be parsed then y and x should keep only the decimal values actual behavior hexadecimal value is parsed if a figure is found in the argument steps to reproduce send to printer the g code the printer moves to y x version of marlin firmware latest bugfix x printer model no response electronics no response add ons no response bed leveling no bed leveling your slicer no response host software octoprint don t forget to include a zip file containing your configuration h and configuration adv h additional information file uploads no response
| 0
|
285,090
| 24,642,299,826
|
IssuesEvent
|
2022-10-17 12:35:47
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
com.hazelcast.client.ClientReconnectTest.testCallbackAfterServerShutdown
|
Team: Client Type: Test-Failure Source: Internal
|
_5.2.z_ (commit ecf1b1a7c4572a759b5b4d0f277b646e2a9ccfab)
Failed on CorrettoJDK8: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-5.maintenance-CorrettoJDK8/55/testReport/junit/com.hazelcast.client/ClientReconnectTest/testCallbackAfterServerShutdown/
<details><summary>Stacktrace:</summary>
```
java.lang.NullPointerException
at com.hazelcast.client.test.ClientTestSupport.lambda$makeSureDisconnectedFromServer$0(ClientTestSupport.java:93)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1236)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1338)
at com.hazelcast.client.test.ClientTestSupport.makeSureDisconnectedFromServer(ClientTestSupport.java:92)
at com.hazelcast.client.ClientReconnectTest.testCallbackAfterServerShutdown(ClientReconnectTest.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
</details>
<details><summary>Standard output:</summary>
```
Finished Running Test: testExceptionAfterClientShutdown in 2.687 seconds.
04:47:00,135 INFO |testClientReconnectOnClusterDown| - [MetricsConfigHelper] testClientReconnectOnClusterDown - [LOCAL] [dev] [5.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [LifecycleService] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [Node] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] Node is already shutting down... Waiting for shutdown process to complete...
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [LifecycleService] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN
BuildInfo right after testCallbackAfterServerShutdown(com.hazelcast.client.ClientReconnectTest): BuildInfo{version='5.2.1-SNAPSHOT', build='20221015', buildNumber=20221015, revision=ecf1b1a, enterprise=false, serializationVersion=1}
Hiccups measured while running test 'testCallbackAfterServerShutdown(com.hazelcast.client.ClientReconnectTest):'
04:46:55, accumulated pauses: 2535 ms, max pause: 2287 ms, pauses over 1000 ms: 1
04:47:00, accumulated pauses: 0 ms, max pause: 0 ms, pauses over 1000 ms: 0
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [logo] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT]
+ + o o o o---o o----o o o---o o o----o o--o--o
+ + + + | | / \ / | | / / \ | |
+ + + + + o----o o o o o----o | o o o o----o |
+ + + + | | / \ / | | \ / \ | |
+ + o o o o o---o o----o o----o o---o o o o----o o
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Hazelcast Platform 5.2.1-SNAPSHOT (20221015 - ecf1b1a) starting at [127.0.0.1]:5702
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Cluster name: dev
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed. For more information, see the documentation for Integrity Checker.
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Jet is enabled
No metrics recorded during the test
04:47:00,146 INFO |testClientReconnectOnClusterDown| - [MetricsConfigHelper] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
```
</details>
|
1.0
|
com.hazelcast.client.ClientReconnectTest.testCallbackAfterServerShutdown - _5.2.z_ (commit ecf1b1a7c4572a759b5b4d0f277b646e2a9ccfab)
Failed on CorrettoJDK8: https://jenkins.hazelcast.com/view/Official%20Builds/job/Hazelcast-5.maintenance-CorrettoJDK8/55/testReport/junit/com.hazelcast.client/ClientReconnectTest/testCallbackAfterServerShutdown/
<details><summary>Stacktrace:</summary>
```
java.lang.NullPointerException
at com.hazelcast.client.test.ClientTestSupport.lambda$makeSureDisconnectedFromServer$0(ClientTestSupport.java:93)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1236)
at com.hazelcast.test.HazelcastTestSupport.assertTrueEventually(HazelcastTestSupport.java:1338)
at com.hazelcast.client.test.ClientTestSupport.makeSureDisconnectedFromServer(ClientTestSupport.java:92)
at com.hazelcast.client.ClientReconnectTest.testCallbackAfterServerShutdown(ClientReconnectTest.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
```
</details>
<details><summary>Standard output:</summary>
```
Finished Running Test: testExceptionAfterClientShutdown in 2.687 seconds.
04:47:00,135 INFO |testClientReconnectOnClusterDown| - [MetricsConfigHelper] testClientReconnectOnClusterDown - [LOCAL] [dev] [5.2.1-SNAPSHOT] Overridden metrics configuration with system property 'hazelcast.metrics.collection.frequency'='1' -> 'MetricsConfig.collectionFrequencySeconds'='1'
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [LifecycleService] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] [127.0.0.1]:5701 is SHUTTING_DOWN
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [Node] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] Node is already shutting down... Waiting for shutdown process to complete...
04:47:00,136 INFO |testCallbackAfterServerShutdown| - [LifecycleService] Thread-32618 - [127.0.0.1]:5701 [dev] [5.2.1-SNAPSHOT] [127.0.0.1]:5701 is SHUTDOWN
BuildInfo right after testCallbackAfterServerShutdown(com.hazelcast.client.ClientReconnectTest): BuildInfo{version='5.2.1-SNAPSHOT', build='20221015', buildNumber=20221015, revision=ecf1b1a, enterprise=false, serializationVersion=1}
Hiccups measured while running test 'testCallbackAfterServerShutdown(com.hazelcast.client.ClientReconnectTest):'
04:46:55, accumulated pauses: 2535 ms, max pause: 2287 ms, pauses over 1000 ms: 1
04:47:00, accumulated pauses: 0 ms, max pause: 0 ms, pauses over 1000 ms: 0
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [logo] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT]
+ + o o o o---o o----o o o---o o o----o o--o--o
+ + + + | | / \ / | | / / \ | |
+ + + + + o----o o o o o----o | o o o o----o |
+ + + + | | / \ / | | \ / \ | |
+ + o o o o o---o o----o o----o o---o o o o----o o
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Hazelcast Platform 5.2.1-SNAPSHOT (20221015 - ecf1b1a) starting at [127.0.0.1]:5702
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Cluster name: dev
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed. For more information, see the documentation for Integrity Checker.
04:47:00,137 INFO |testClientReconnectOnClusterDown| - [system] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Jet is enabled
No metrics recorded during the test
04:47:00,146 INFO |testClientReconnectOnClusterDown| - [MetricsConfigHelper] testClientReconnectOnClusterDown - [127.0.0.1]:5702 [dev] [5.2.1-SNAPSHOT] Collecting debug metrics and sending to diagnostics is enabled
```
</details>
|
non_process
|
com hazelcast client clientreconnecttest testcallbackafterservershutdown z commit failed on stacktrace java lang nullpointerexception at com hazelcast client test clienttestsupport lambda makesuredisconnectedfromserver clienttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast test hazelcasttestsupport asserttrueeventually hazelcasttestsupport java at com hazelcast client test clienttestsupport makesuredisconnectedfromserver clienttestsupport java at com hazelcast client clientreconnecttest testcallbackafterservershutdown clientreconnecttest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at com hazelcast test failontimeoutstatement callablestatement call failontimeoutstatement java at java util concurrent futuretask run futuretask java at java lang thread run thread java standard output finished running test testexceptionafterclientshutdown in seconds info testclientreconnectonclusterdown testclientreconnectonclusterdown overridden metrics configuration with system property hazelcast metrics collection frequency metricsconfig collectionfrequencyseconds info testcallbackafterservershutdown thread is shutting down info testcallbackafterservershutdown thread node is already shutting down waiting for shutdown process to complete info testcallbackafterservershutdown thread is shutdown buildinfo right after testcallbackafterservershutdown com hazelcast client clientreconnecttest buildinfo version snapshot build buildnumber revision enterprise false serializationversion hiccups measured while running test testcallbackafterservershutdown com hazelcast client clientreconnecttest accumulated pauses ms max pause ms pauses over ms accumulated pauses ms max pause ms pauses over ms info testclientreconnectonclusterdown testclientreconnectonclusterdown o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o info testclientreconnectonclusterdown testclientreconnectonclusterdown copyright c hazelcast inc all rights reserved info testclientreconnectonclusterdown testclientreconnectonclusterdown hazelcast platform snapshot starting at info testclientreconnectonclusterdown testclientreconnectonclusterdown cluster name dev info testclientreconnectonclusterdown testclientreconnectonclusterdown integrity checker is disabled fail fast on corrupted executables will not be performed for more information see the documentation for integrity checker info testclientreconnectonclusterdown testclientreconnectonclusterdown jet is enabled no metrics recorded during the test info testclientreconnectonclusterdown testclientreconnectonclusterdown collecting debug metrics and sending to diagnostics is enabled
| 0
|
13,378
| 15,839,096,952
|
IssuesEvent
|
2021-04-07 00:01:08
|
googleapis/gapic-generator-go
|
https://api.github.com/repos/googleapis/gapic-generator-go
|
closed
|
bazel: generate go_gapic_repositories based on google-cloud-go go.mod
|
type: process
|
Currently, [`go_gapic_repositories.bzl`](https://github.com/googleapis/gapic-generator-go/blob/master/rules_go_gapic/go_gapic_repositories.bzl) imports the extra dependencies for the `go_gapic_library` targets. This is updated manually and ad hoc. Instead, it should be updated based on the google-cloud-go `go.mod` and via `gazelle`, just like this project's `repositories.bzl` is with the [`update-bazel-deps`](https://github.com/googleapis/gapic-generator-go/blob/master/Makefile#L28-L30) Make target.
|
1.0
|
bazel: generate go_gapic_repositories based on google-cloud-go go.mod - Currently, [`go_gapic_repositories.bzl`](https://github.com/googleapis/gapic-generator-go/blob/master/rules_go_gapic/go_gapic_repositories.bzl) imports the extra dependencies for the `go_gapic_library` targets. This is updated manually and ad hoc. Instead, it should be updated based on the google-cloud-go `go.mod` and via `gazelle`, just like this project's `repositories.bzl` is with the [`update-bazel-deps`](https://github.com/googleapis/gapic-generator-go/blob/master/Makefile#L28-L30) Make target.
|
process
|
bazel generate go gapic repositories based on google cloud go go mod currently imports the extra dependencies for the go gapic library targets this is updated manually and ad hoc instead it should be updated based on the google cloud go go mod and via gazelle just like this project s repositories bzl is with the make target
| 1
|
118,409
| 11,968,287,089
|
IssuesEvent
|
2020-04-06 08:23:19
|
cjolowicz/cookiecutter-hypermodern-python
|
https://api.github.com/repos/cjolowicz/cookiecutter-hypermodern-python
|
closed
|
Override theme on Read the Docs
|
documentation
|
The Cookiecutter documentation on RTD gets their theme instead of Alabaster with the configured logo.
|
1.0
|
Override theme on Read the Docs - The Cookiecutter documentation on RTD gets their theme instead of Alabaster with the configured logo.
|
non_process
|
override theme on read the docs the cookiecutter documentation on rtd gets their theme instead of alabaster with the configured logo
| 0
|
270,002
| 8,445,436,039
|
IssuesEvent
|
2018-10-18 21:30:06
|
jjplay175/Project-N2DE
|
https://api.github.com/repos/jjplay175/Project-N2DE
|
closed
|
Project - Losing integrity after cloning
|
Medium Priority Review bug
|
It appears that if you download the project fresh, the files are not recognised as part of the project
|
1.0
|
Project - Losing integrity after cloning - It appears that if you download the project fresh, the files are not recognised as part of the project
|
non_process
|
project losing integrity after cloning it appears that if you download the project fresh the files are not recognised as part of the project
| 0
|
42,037
| 12,867,876,131
|
IssuesEvent
|
2020-07-10 07:48:51
|
benchmarkdebricked/sentry
|
https://api.github.com/repos/benchmarkdebricked/sentry
|
opened
|
CVE-2020-10994 (Medium) detected in Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2020-10994 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/43/5a/904f2cc20ef9f9ba05f9ff1fb3dfadb1e6923e3bf6f8c8363d5dc3a179ab/Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/43/5a/904f2cc20ef9f9ba05f9ff1fb3dfadb1e6923e3bf6f8c8363d5dc3a179ab/Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/sentry</p>
<p>Path to vulnerable library: /sentry</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/sentry/commit/3f63a21af8bb1c66de6f859ad79ababfd2c21a46">3f63a21af8bb1c66de6f859ad79ababfd2c21a46</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In libImaging/Jpeg2KDecode.c in Pillow before 7.0.0, there are multiple out-of-bounds reads via a crafted JP2 file.
<p>Publish Date: 2020-06-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10994>CVE-2020-10994</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8">https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8</a></p>
<p>Release Date: 2020-06-25</p>
<p>Fix Resolution: 7.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10994 (Medium) detected in Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2020-10994 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/43/5a/904f2cc20ef9f9ba05f9ff1fb3dfadb1e6923e3bf6f8c8363d5dc3a179ab/Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/43/5a/904f2cc20ef9f9ba05f9ff1fb3dfadb1e6923e3bf6f8c8363d5dc3a179ab/Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/sentry</p>
<p>Path to vulnerable library: /sentry</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-4.2.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/sentry/commit/3f63a21af8bb1c66de6f859ad79ababfd2c21a46">3f63a21af8bb1c66de6f859ad79ababfd2c21a46</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In libImaging/Jpeg2KDecode.c in Pillow before 7.0.0, there are multiple out-of-bounds reads via a crafted JP2 file.
<p>Publish Date: 2020-06-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10994>CVE-2020-10994</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8">https://github.com/python-pillow/Pillow/commit/41b554bc56982ee4f30238a7677c0f4ff90a73a8</a></p>
<p>Release Date: 2020-06-25</p>
<p>Fix Resolution: 7.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in pillow whl cve medium severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to dependency file tmp ws scm sentry path to vulnerable library sentry dependency hierarchy x pillow whl vulnerable library found in head commit a href vulnerability details in libimaging c in pillow before there are multiple out of bounds reads via a crafted file publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
138,154
| 20,362,090,437
|
IssuesEvent
|
2022-02-20 20:40:33
|
executablebooks/sphinx-book-theme
|
https://api.github.com/repos/executablebooks/sphinx-book-theme
|
closed
|
Improvements to PDF printing CSS
|
enhancement :label: design
|
### Description / Summary
Currently the CSS that is applied when we print a page to PDF is a little bit hacky. It doesn't quite match the design principles you'd want with a printed PDF, and there is likely some low-hanging fruit that we can tackle to improve this.
### Value / benefit
The most common way for a person to get a single-page PDF is likely by printing it directly, so this would be a good way to make some quick improvements to this workflow.
### Implementation details
Here's the CSS that gets applied when the "print" action is triggered, this is where we'd need to make changes to make the output look nicer:
https://github.com/executablebooks/sphinx-book-theme/blob/39aaaa3e4d908fe7be8b7378ad474d8c6b86aa16/src/scss/_print.scss
### Tasks to complete
_No response_
|
1.0
|
Improvements to PDF printing CSS - ### Description / Summary
Currently the CSS that is applied when we print a page to PDF is a little bit hacky. It doesn't quite match the design principles you'd want with a printed PDF, and there is likely some low-hanging fruit that we can tackle to improve this.
### Value / benefit
The most common way for a person to get a single-page PDF is likely by printing it directly, so this would be a good way to make some quick improvements to this workflow.
### Implementation details
Here's the CSS that gets applied when the "print" action is triggered, this is where we'd need to make changes to make the output look nicer:
https://github.com/executablebooks/sphinx-book-theme/blob/39aaaa3e4d908fe7be8b7378ad474d8c6b86aa16/src/scss/_print.scss
### Tasks to complete
_No response_
|
non_process
|
improvements to pdf printing css description summary currently the css that is applied when we print a page to pdf is a little bit hacky it doesn t quite match the design principles you d want with a printed pdf and there is likely some low hanging fruit that we can tackle to improve this value benefit the most common way for a person to get a single page pdf is likely by printing it directly so this would be a good way to make some quick improvements to this workflow implementation details here s the css that gets applied when the print action is triggered this is where we d need to make changes to make the output look nicer tasks to complete no response
| 0
|
4,864
| 7,748,484,092
|
IssuesEvent
|
2018-05-30 08:28:56
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
dvrResourcePrefix/dvrResourceSuffix not used [DOT 2.x develop branch]
|
DITA 1.3 feature preprocess preprocess/branch-filtering
|
If my DITA Map has a construct like this:
```
<topicref href="UserManual.ditamap" format="ditamap" keyscope="A1">
<ditavalref href="author.ditaval">
<ditavalmeta>
<dvrResourcePrefix>mr</dvrResourcePrefix>
<dvrKeyscopePrefix>ks</dvrKeyscopePrefix>
</ditavalmeta>
</ditavalref>
</topicref>
```
the topics on disk either have the original name or they have the suffix "-1" but the dvrResourcePrefix/dvrResourceSuffix do not seem to be used.
|
2.0
|
dvrResourcePrefix/dvrResourceSuffix not used [DOT 2.x develop branch] - If my DITA Map has a construct like this:
```
<topicref href="UserManual.ditamap" format="ditamap" keyscope="A1">
<ditavalref href="author.ditaval">
<ditavalmeta>
<dvrResourcePrefix>mr</dvrResourcePrefix>
<dvrKeyscopePrefix>ks</dvrKeyscopePrefix>
</ditavalmeta>
</ditavalref>
</topicref>
```
the topics on disk either have the original name or they have the suffix "-1" but the dvrResourcePrefix/dvrResourceSuffix do not seem to be used.
|
process
|
dvrresourceprefix dvrresourcesuffix not used if my dita map has a construct like this mr ks the topics on disk either have the original name or they have the suffix but the dvrresourceprefix dvrresourcesuffix do not seem to be used
| 1
|
755,340
| 26,425,763,935
|
IssuesEvent
|
2023-01-14 06:00:21
|
GLEF1X/glQiwiApi
|
https://api.github.com/repos/GLEF1X/glQiwiApi
|
closed
|
Compatibility with python 3.11
|
bug help wanted good first issue priority level: medium
|
**[Reported in the official telegram group ](https://t.me/glQiwiAPIOfficial/947)**
Detected issues(fixed / not fixed):
- [X] QIWI API webhook config that is using `dataclasses` have to have only immutable defaults due to changes in python3.11
|
1.0
|
Compatibility with python 3.11 - **[Reported in the official telegram group ](https://t.me/glQiwiAPIOfficial/947)**
Detected issues(fixed / not fixed):
- [X] QIWI API webhook config that is using `dataclasses` have to have only immutable defaults due to changes in python3.11
|
non_process
|
compatibility with python detected issues fixed not fixed qiwi api webhook config that is using dataclasses have to have only immutable defaults due to changes in
| 0
|
184,907
| 14,290,116,021
|
IssuesEvent
|
2020-11-23 20:21:48
|
github-vet/rangeclosure-findings
|
https://api.github.com/repos/github-vet/rangeclosure-findings
|
closed
|
hongdoctor/magiccube-wsserv: wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go; 21 LoC
|
fresh small test
|
Found a possible issue in [hongdoctor/magiccube-wsserv](https://www.github.com/hongdoctor/magiccube-wsserv) at [wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go](https://github.com/hongdoctor/magiccube-wsserv/blob/93f30dec73ea9df939c4a208f56445deacdf8382/wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go#L916-L936)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/hongdoctor/magiccube-wsserv/blob/93f30dec73ea9df939c4a208f56445deacdf8382/wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go#L916-L936)
<details>
<summary>Click here to show the 21 line(s) of Go which triggered the analyzer.</summary>
```go
for name, testf := range hammer32 {
c := make(chan int)
var val uint32
for i := 0; i < p; i++ {
go func() {
defer func() {
if err := recover(); err != nil {
t.Error(err.(string))
}
c <- 1
}()
testf(&val, n)
}()
}
for i := 0; i < p; i++ {
<-c
}
if !strings.HasPrefix(name, "Swap") && val != uint32(n)*p {
t.Fatalf("%s: val=%d want %d", name, val, n*p)
}
}
```
</details>
commit ID: 93f30dec73ea9df939c4a208f56445deacdf8382
|
1.0
|
hongdoctor/magiccube-wsserv: wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go; 21 LoC -
Found a possible issue in [hongdoctor/magiccube-wsserv](https://www.github.com/hongdoctor/magiccube-wsserv) at [wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go](https://github.com/hongdoctor/magiccube-wsserv/blob/93f30dec73ea9df939c4a208f56445deacdf8382/wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go#L916-L936)
The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements
which capture loop variables.
[Click here to see the code in its original context.](https://github.com/hongdoctor/magiccube-wsserv/blob/93f30dec73ea9df939c4a208f56445deacdf8382/wsserv/go-v1.4.linux-amd64/src/sync/atomic/atomic_test.go#L916-L936)
<details>
<summary>Click here to show the 21 line(s) of Go which triggered the analyzer.</summary>
```go
for name, testf := range hammer32 {
c := make(chan int)
var val uint32
for i := 0; i < p; i++ {
go func() {
defer func() {
if err := recover(); err != nil {
t.Error(err.(string))
}
c <- 1
}()
testf(&val, n)
}()
}
for i := 0; i < p; i++ {
<-c
}
if !strings.HasPrefix(name, "Swap") && val != uint32(n)*p {
t.Fatalf("%s: val=%d want %d", name, val, n*p)
}
}
```
</details>
commit ID: 93f30dec73ea9df939c4a208f56445deacdf8382
|
non_process
|
hongdoctor magiccube wsserv wsserv go linux src sync atomic atomic test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for name testf range c make chan int var val for i i p i go func defer func if err recover err nil t error err string c testf val n for i i p i c if strings hasprefix name swap val n p t fatalf s val d want d name val n p commit id
| 0
|
12,806
| 15,182,090,255
|
IssuesEvent
|
2021-02-15 05:28:20
|
yuta252/startlens_learning
|
https://api.github.com/repos/yuta252/startlens_learning
|
closed
|
pytestによるテストの実施とバグ修正
|
dev process
|
## 概要
テストとしてpytestを導入しUnittestを実施する。併せてコードのリファクタリングとバグ修正を行う。
ただし、TripletNetworkやKNNモデルによる訓練・推論過程は除く。
## 変更点
- testsディレクトリでunitテストを実施した。
- resource.pyによるS3のファイルハンドリングはmotoモジュールを利用しmockする
```
pip install moto
```
- model/knn.pyにおけるファイルの書き込み読み込み処理をfixtureを利用しながらテストを実施する。併せてテストがしやすいようなモジュールに分けて関数をリファクタリングする。
## 参照
- [テスト駆動Python](https://www.amazon.co.jp/%E3%83%86%E3%82%B9%E3%83%88%E9%A7%86%E5%8B%95Python-BrianOkken-ebook/dp/B07F65PFZN)
- [Python: motoでS3・DynamoDB・SQSのモックを作る](https://ohke.hateblo.jp/entry/2020/04/11/230000)
|
1.0
|
pytestによるテストの実施とバグ修正 - ## 概要
テストとしてpytestを導入しUnittestを実施する。併せてコードのリファクタリングとバグ修正を行う。
ただし、TripletNetworkやKNNモデルによる訓練・推論過程は除く。
## 変更点
- testsディレクトリでunitテストを実施した。
- resource.pyによるS3のファイルハンドリングはmotoモジュールを利用しmockする
```
pip install moto
```
- model/knn.pyにおけるファイルの書き込み読み込み処理をfixtureを利用しながらテストを実施する。併せてテストがしやすいようなモジュールに分けて関数をリファクタリングする。
## 参照
- [テスト駆動Python](https://www.amazon.co.jp/%E3%83%86%E3%82%B9%E3%83%88%E9%A7%86%E5%8B%95Python-BrianOkken-ebook/dp/B07F65PFZN)
- [Python: motoでS3・DynamoDB・SQSのモックを作る](https://ohke.hateblo.jp/entry/2020/04/11/230000)
|
process
|
pytestによるテストの実施とバグ修正 概要 テストとしてpytestを導入しunittestを実施する。併せてコードのリファクタリングとバグ修正を行う。 ただし、tripletnetworkやknnモデルによる訓練・推論過程は除く。 変更点 testsディレクトリでunitテストを実施した。 resource pip install moto model knn pyにおけるファイルの書き込み読み込み処理をfixtureを利用しながらテストを実施する。併せてテストがしやすいようなモジュールに分けて関数をリファクタリングする。 参照
| 1
|
4,537
| 7,373,525,183
|
IssuesEvent
|
2018-03-13 17:30:39
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Problem with included file
|
app-service-web assigned-to-author doc-bug in-process triaged
|
I wanted to fix a couple lines in the include:
[!INCLUDE [Create resource group](../../../includes/app-service-web-create-resource-group.md)]
but could not figure out how to navigate to the embedded page. The two problems I see are that for the command:
az appservice list-locations
you must specify the --SDK option or it does not work.
Also, when you enter anything with your git url, since that includes your username, you are prompted NOT for your credentials but for your "Password". I'd suggest changing "Credentials" to "Password"
Also, In the paragraph where you say:
A resource group is a logical container into which Azure resources like web >apps, databases, and storage accounts are deployed and managed.
I suggest you add some additional motivation for the reader by adding the sentence:
By doing this, you can easily delete all assets associated with your work here with one simple step. That is to delete this resource group.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 116d6e28-471b-2cf4-3eb8-6249b4ef3016
* Version Independent ID: f199d3a2-0c11-e2d6-18f0-d081c76d511d
* Content: [Create a Node.js in Azure App Service on Linux | Microsoft Docs](https://docs.microsoft.com/en-us/azure/app-service/containers/quickstart-nodejs)
* Content Source: [articles/app-service/containers/quickstart-nodejs.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/containers/quickstart-nodejs.md)
* Service: **app-service-web**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
1.0
|
Problem with included file - I wanted to fix a couple lines in the include:
[!INCLUDE [Create resource group](../../../includes/app-service-web-create-resource-group.md)]
but could not figure out how to navigate to the embedded page. The two problems I see are that for the command:
az appservice list-locations
you must specify the --SDK option or it does not work.
Also, when you enter anything with your git url, since that includes your username, you are prompted NOT for your credentials but for your "Password". I'd suggest changing "Credentials" to "Password"
Also, In the paragraph where you say:
A resource group is a logical container into which Azure resources like web >apps, databases, and storage accounts are deployed and managed.
I suggest you add some additional motivation for the reader by adding the sentence:
By doing this, you can easily delete all assets associated with your work here with one simple step. That is to delete this resource group.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 116d6e28-471b-2cf4-3eb8-6249b4ef3016
* Version Independent ID: f199d3a2-0c11-e2d6-18f0-d081c76d511d
* Content: [Create a Node.js in Azure App Service on Linux | Microsoft Docs](https://docs.microsoft.com/en-us/azure/app-service/containers/quickstart-nodejs)
* Content Source: [articles/app-service/containers/quickstart-nodejs.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/containers/quickstart-nodejs.md)
* Service: **app-service-web**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**
|
process
|
problem with included file i wanted to fix a couple lines in the include includes app service web create resource group md but could not figure out how to navigate to the embedded page the two problems i see are that for the command az appservice list locations you must specify the sdk option or it does not work also when you enter anything with your git url since that includes your username you are prompted not for your credentials but for your password i d suggest changing credentials to password also in the paragraph where you say a resource group is a logical container into which azure resources like web apps databases and storage accounts are deployed and managed i suggest you add some additional motivation for the reader by adding the sentence by doing this you can easily delete all assets associated with your work here with one simple step that is to delete this resource group document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service web github login cephalin microsoft alias cephalin
| 1
|
3,922
| 2,937,997,943
|
IssuesEvent
|
2015-07-01 07:51:43
|
ndomar/megasoft-13
|
https://api.github.com/repos/ndomar/megasoft-13
|
closed
|
C4S5 Versioning
|
code-verified Component-4 doc-verified Points-13 Priority-Low scenario-verified Status-new
|
Success:
As a designer: I can load a previous version of my created project.
I can save the current version of my project.
Failure:
Not a signed in user, not his project, flashed with error.
|
1.0
|
C4S5 Versioning - Success:
As a designer: I can load a previous version of my created project.
I can save the current version of my project.
Failure:
Not a signed in user, not his project, flashed with error.
|
non_process
|
versioning success as a designer i can load a previous version of my created project i can save the current version of my project failure not a signed in user not his project flashed with error
| 0
|
59,516
| 8,367,720,626
|
IssuesEvent
|
2018-10-04 13:04:12
|
cilium/cilium
|
https://api.github.com/repos/cilium/cilium
|
closed
|
1.2 docs already point to cilium/cilium:v1.2.4
|
area/documentation kind/bug
|
It's a side effect of the docs being built from the `v1.2` branch instead of the latest tagged release.
|
1.0
|
1.2 docs already point to cilium/cilium:v1.2.4 - It's a side effect of the docs being built from the `v1.2` branch instead of the latest tagged release.
|
non_process
|
docs already point to cilium cilium it s a side effect of the docs being built from the branch instead of the latest tagged release
| 0
|
20,495
| 27,154,189,107
|
IssuesEvent
|
2023-02-17 05:48:50
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Spin the Java Starlark interpreter into a stand-alone library
|
P4 type: process team-Starlark-Interpreter stale
|
there are many use case for the skylark and the build language.
It would be great to have the BUILD language for many configuration problems on the JVM. It would be cool for this code to be available as a maven jar, or if that is not possible as a bazel repository we can cleanly depend on.
I imagine the user plugs in the standard library of functions that can be called. So, bazel proper would give access to all the built-ins of skylark's current implementation.
Basically, at this point skylark looks like a total, dynamically typed programming language (one could even imagine a project to add type inference to skylark, which I think may be possible pretty easily since it lacks recursion).
/cc @jart @laurentlb
|
1.0
|
Spin the Java Starlark interpreter into a stand-alone library - there are many use case for the skylark and the build language.
It would be great to have the BUILD language for many configuration problems on the JVM. It would be cool for this code to be available as a maven jar, or if that is not possible as a bazel repository we can cleanly depend on.
I imagine the user plugs in the standard library of functions that can be called. So, bazel proper would give access to all the built-ins of skylark's current implementation.
Basically, at this point skylark looks like a total, dynamically typed programming language (one could even imagine a project to add type inference to skylark, which I think may be possible pretty easily since it lacks recursion).
/cc @jart @laurentlb
|
process
|
spin the java starlark interpreter into a stand alone library there are many use case for the skylark and the build language it would be great to have the build language for many configuration problems on the jvm it would be cool for this code to be available as a maven jar or if that is not possible as a bazel repository we can cleanly depend on i imagine the user plugs in the standard library of functions that can be called so bazel proper would give access to all the built ins of skylark s current implementation basically at this point skylark looks like a total dynamically typed programming language one could even imagine a project to add type inference to skylark which i think may be possible pretty easily since it lacks recursion cc jart laurentlb
| 1
|
125,258
| 10,339,641,426
|
IssuesEvent
|
2019-09-03 19:51:21
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Browser Unit Tests.CheckXPackInfoChange Factory - CheckXPackInfoChange Factory "before each" hook: workFn for "does not show "license expired" banner if license is not expired."
|
failed-test
|
A test failed on a tracked branch
```
[object Object]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+6.7/JOB=x-pack-intake,node=immutable/2/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.CheckXPackInfoChange Factory","test.name":"CheckXPackInfoChange Factory \"before each\" hook: workFn for \"does not show \"license expired\" banner if license is not expired.\"","test.failCount":12}} -->
|
1.0
|
Failing test: Browser Unit Tests.CheckXPackInfoChange Factory - CheckXPackInfoChange Factory "before each" hook: workFn for "does not show "license expired" banner if license is not expired." - A test failed on a tracked branch
```
[object Object]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+6.7/JOB=x-pack-intake,node=immutable/2/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.CheckXPackInfoChange Factory","test.name":"CheckXPackInfoChange Factory \"before each\" hook: workFn for \"does not show \"license expired\" banner if license is not expired.\"","test.failCount":12}} -->
|
non_process
|
failing test browser unit tests checkxpackinfochange factory checkxpackinfochange factory before each hook workfn for does not show license expired banner if license is not expired a test failed on a tracked branch first failure
| 0
|
328,568
| 9,996,640,839
|
IssuesEvent
|
2019-07-12 00:31:06
|
Sage-Bionetworks/dccvalidator-app
|
https://api.github.com/repos/Sage-Bionetworks/dccvalidator-app
|
closed
|
Restrict metadata to CSV files
|
high priority validation
|
Currently the app fails ungracefully when people upload metadata in .xlsx. There should be a file format check and a useful message informing people that the file format is incorrect.
This will also affect subsequent checks, since some of them will not work if the app can't read the file.
|
1.0
|
Restrict metadata to CSV files - Currently the app fails ungracefully when people upload metadata in .xlsx. There should be a file format check and a useful message informing people that the file format is incorrect.
This will also affect subsequent checks, since some of them will not work if the app can't read the file.
|
non_process
|
restrict metadata to csv files currently the app fails ungracefully when people upload metadata in xlsx there should be a file format check and a useful message informing people that the file format is incorrect this will also affect subsequent checks since some of them will not work if the app can t read the file
| 0
|
22,249
| 30,801,855,222
|
IssuesEvent
|
2023-08-01 02:30:19
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@vendia/share-cli 0.12.0 has 4 guarddog issues
|
npm-install-script shady-links npm-silent-process-execution
|
```{"npm-install-script":[{"code":" \"postinstall\": \"node ./src/_post-install.js\"","location":"package/package.json:149","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" spawn(process.execPath, [requestFile, options], {\n detached: true,\n stdio: 'ignore',\n }).unref()","location":"package/src/utils/telemetry/index.js:81","message":"This package is silently executing another executable"}],"shady-links":[{"code":"// Global \"fix\" for cognito https://bit.ly/3mGPBbk \u0026 https://bit.ly/35XBqYM","location":"package/src/utils/cognito/index.js:1","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" // Ignore http://bit.ly/3nZ002k + https://github.com/oclif/command/issues/34#issuecomment-377388132","location":"package/src/utils/errors/error-details.js:16","message":"This package contains an URL to a domain with a suspicious extension"}]}```
|
1.0
|
@vendia/share-cli 0.12.0 has 4 guarddog issues - ```{"npm-install-script":[{"code":" \"postinstall\": \"node ./src/_post-install.js\"","location":"package/package.json:149","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":" spawn(process.execPath, [requestFile, options], {\n detached: true,\n stdio: 'ignore',\n }).unref()","location":"package/src/utils/telemetry/index.js:81","message":"This package is silently executing another executable"}],"shady-links":[{"code":"// Global \"fix\" for cognito https://bit.ly/3mGPBbk \u0026 https://bit.ly/35XBqYM","location":"package/src/utils/cognito/index.js:1","message":"This package contains an URL to a domain with a suspicious extension"},{"code":" // Ignore http://bit.ly/3nZ002k + https://github.com/oclif/command/issues/34#issuecomment-377388132","location":"package/src/utils/errors/error-details.js:16","message":"This package contains an URL to a domain with a suspicious extension"}]}```
|
process
|
vendia share cli has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n unref location package src utils telemetry index js message this package is silently executing another executable shady links
| 1
|
101,615
| 21,727,405,595
|
IssuesEvent
|
2022-05-11 08:54:57
|
informalsystems/ibc-rs
|
https://api.github.com/repos/informalsystems/ibc-rs
|
closed
|
ChainEndpoint trait refactor (first phase)
|
good first issue relayer code-hygiene
|
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Crate
`ibc`
## Summary
We would like to make the methods in the `Chain` trait more consistent. Refactoring this trait will probably take multiple rounds of work. This issue covers a first phase, which comprises two elements that we'd like to visit:
1. Fix some of the returned values for `query_*` methods, to make them consistent in the way in which they report non-existent values versus how they report a runtime/connection error.
2. Ensure consistency across methods with respect to their input arguments.
Both of these are described in more detail below.
## Problem Definition
The [`Chain` trait](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L68) is a major interface in the `ibc-relayer` library. The intention behind this trait is to capture all the dependencies which Hermes has towards any kind of chain (be it Cosmos SDK or something else). As it is a major connecting point between relaying logic and chain logic, this trait gets changed relatively often, and its design drifted to become less consistent over time.
Here are some examples of inconsistencies.
### Return values for query methods
The [`query_connection`](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L170) method returns a domain type that is possibly in state `Uninitialized`, and this signifies that the Connection object does not exist on-chain. We currently have to do this domain type interpretation to report to the user correctly that an object does not exist, e.g.:
https://github.com/informalsystems/ibc-rs/blob/26ef2c6fcc2527998be1a206292ad870191b42c0/relayer-cli/src/commands/query/connection.rs#L52-L65
An alternative to returning an `Uninitialized` domain type is to return a value of type `Result<Option<ConnectionEnd>, Error`.
A returned `Ok(None)` indicates that the query suceedeed but nothing was found on-chain. It's not clear if this alternative design fits all queries, however, so the other queries methods should be visited in light of this proposal, and if the design is sound then we should modify the queries correspondingly.
### Consistency of input arguments
Some improvements can be done for channel-related methods, specifically when both a `ChannelId` and a `PortId` are provided as input (for example [here](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L254)), and instead of these two types a single [`PortChannelId`](https://github.com/informalsystems/ibc-rs/blob/master/modules/src/ics24_host/identifier.rs#L390) should be used.
The method [`query_packet_commitments`](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L207) can also be modified to accept the `PortChannelId` input, instead of `QueryPacketCommitmentsRequest`, but this is not clear. If the pagination field in `QueryPacketCommitmentsRequest` is used, then we should not change it.
## Acceptance Criteria
- [ ] The `Chain` trait should be more consistent across its method signatures.
- [ ] [low-hanging TODO](https://github.com/informalsystems/ibc-rs/blob/26ef2c6fcc2527998be1a206292ad870191b42c0/relayer/src/chain.rs#L195) that should also be fixed
- [ ] follow-up work should be identified and an issue should be opened
____
#### For Admin Use
- [X] Not duplicate issue
- [X] Appropriate labels applied
- [X] Appropriate milestone (priority) applied
- [X] Appropriate contributors tagged
- [X] Contributor assigned/self-assigned
|
1.0
|
ChainEndpoint trait refactor (first phase) - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: poorly thought-out proposals may be rejected
v without deliberation
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Crate
`ibc`
## Summary
We would like to make the methods in the `Chain` trait more consistent. Refactoring this trait will probably take multiple rounds of work. This issue covers a first phase, which comprises two elements that we'd like to visit:
1. Fix some of the returned values for `query_*` methods, to make them consistent in the way in which they report non-existent values versus how they report a runtime/connection error.
2. Ensure consistency across methods with respect to their input arguments.
Both of these are described in more detail below.
## Problem Definition
The [`Chain` trait](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L68) is a major interface in the `ibc-relayer` library. The intention behind this trait is to capture all the dependencies which Hermes has towards any kind of chain (be it Cosmos SDK or something else). As it is a major connecting point between relaying logic and chain logic, this trait gets changed relatively often, and its design drifted to become less consistent over time.
Here are some examples of inconsistencies.
### Return values for query methods
The [`query_connection`](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L170) method returns a domain type that is possibly in state `Uninitialized`, and this signifies that the Connection object does not exist on-chain. We currently have to do this domain type interpretation to report to the user correctly that an object does not exist, e.g.:
https://github.com/informalsystems/ibc-rs/blob/26ef2c6fcc2527998be1a206292ad870191b42c0/relayer-cli/src/commands/query/connection.rs#L52-L65
An alternative to returning an `Uninitialized` domain type is to return a value of type `Result<Option<ConnectionEnd>, Error`.
A returned `Ok(None)` indicates that the query suceedeed but nothing was found on-chain. It's not clear if this alternative design fits all queries, however, so the other queries methods should be visited in light of this proposal, and if the design is sound then we should modify the queries correspondingly.
### Consistency of input arguments
Some improvements can be done for channel-related methods, specifically when both a `ChannelId` and a `PortId` are provided as input (for example [here](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L254)), and instead of these two types a single [`PortChannelId`](https://github.com/informalsystems/ibc-rs/blob/master/modules/src/ics24_host/identifier.rs#L390) should be used.
The method [`query_packet_commitments`](https://github.com/informalsystems/ibc-rs/blob/041ef642f7080e1faea60fc1325abde5afcf5c0a/relayer/src/chain.rs#L207) can also be modified to accept the `PortChannelId` input, instead of `QueryPacketCommitmentsRequest`, but this is not clear. If the pagination field in `QueryPacketCommitmentsRequest` is used, then we should not change it.
## Acceptance Criteria
- [ ] The `Chain` trait should be more consistent across its method signatures.
- [ ] [low-hanging TODO](https://github.com/informalsystems/ibc-rs/blob/26ef2c6fcc2527998be1a206292ad870191b42c0/relayer/src/chain.rs#L195) that should also be fixed
- [ ] follow-up work should be identified and an issue should be opened
____
#### For Admin Use
- [X] Not duplicate issue
- [X] Appropriate labels applied
- [X] Appropriate milestone (priority) applied
- [X] Appropriate contributors tagged
- [X] Contributor assigned/self-assigned
|
non_process
|
chainendpoint trait refactor first phase ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v word of caution poorly thought out proposals may be rejected v without deliberation ☺ crate ibc summary we would like to make the methods in the chain trait more consistent refactoring this trait will probably take multiple rounds of work this issue covers a first phase which comprises two elements that we d like to visit fix some of the returned values for query methods to make them consistent in the way in which they report non existent values versus how they report a runtime connection error ensure consistency across methods with respect to their input arguments both of these are described in more detail below problem definition the is a major interface in the ibc relayer library the intention behind this trait is to capture all the dependencies which hermes has towards any kind of chain be it cosmos sdk or something else as it is a major connecting point between relaying logic and chain logic this trait gets changed relatively often and its design drifted to become less consistent over time here are some examples of inconsistencies return values for query methods the method returns a domain type that is possibly in state uninitialized and this signifies that the connection object does not exist on chain we currently have to do this domain type interpretation to report to the user correctly that an object does not exist e g an alternative to returning an uninitialized domain type is to return a value of type result error a returned ok none indicates that the query suceedeed but nothing was found on chain it s not clear if this alternative design fits all queries however so the other queries methods should be visited in light of this proposal and if the design is sound then we should modify the queries correspondingly consistency of input arguments some improvements can be done for channel related methods specifically when both a channelid and a portid are provided as input for example and instead of these two types a single should be used the method can also be modified to accept the portchannelid input instead of querypacketcommitmentsrequest but this is not clear if the pagination field in querypacketcommitmentsrequest is used then we should not change it acceptance criteria the chain trait should be more consistent across its method signatures that should also be fixed follow up work should be identified and an issue should be opened for admin use not duplicate issue appropriate labels applied appropriate milestone priority applied appropriate contributors tagged contributor assigned self assigned
| 0
|
22,304
| 30,859,670,713
|
IssuesEvent
|
2023-08-03 01:08:02
|
emily-writes-poems/emily-writes-poems-processing
|
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
|
closed
|
modal to display poems in collection
|
processing
|
like the modal created in #49, just to display current poems in the collection.
edit functionality will be added in #16
|
1.0
|
modal to display poems in collection - like the modal created in #49, just to display current poems in the collection.
edit functionality will be added in #16
|
process
|
modal to display poems in collection like the modal created in just to display current poems in the collection edit functionality will be added in
| 1
|
243,233
| 18,679,401,445
|
IssuesEvent
|
2021-11-01 02:08:37
|
element-plus/element-plus
|
https://api.github.com/repos/element-plus/element-plus
|
reopened
|
[Feature Request] Virtualized Select: Filterable -> ignore case when filtering
|
documentation
|
<!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
### Existing Component
Yes
### Component Name
ElSelectV2
### Description
When filtering ElSelectV2 the search only yeilds results if you match the case of the search object.
it would be nice if it ignored case when searching, or if a new prop was added to toggle case insensitivity in filtering.
<!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
|
1.0
|
[Feature Request] Virtualized Select: Filterable -> ignore case when filtering - <!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
### Existing Component
Yes
### Component Name
ElSelectV2
### Description
When filtering ElSelectV2 the search only yeilds results if you match the case of the search object.
it would be nice if it ignored case when searching, or if a new prop was added to toggle case insensitivity in filtering.
<!-- generated by https://elementui.github.io/issue-generator DO NOT REMOVE -->
|
non_process
|
virtualized select filterable ignore case when filtering existing component yes component name description when filtering the search only yeilds results if you match the case of the search object it would be nice if it ignored case when searching or if a new prop was added to toggle case insensitivity in filtering
| 0
|
10,078
| 13,044,161,965
|
IssuesEvent
|
2020-07-29 03:47:27
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `SubDateAndString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `SubDateAndString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `SubDateAndString` from TiDB -
## Description
Port the scalar function `SubDateAndString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function subdateandstring from tidb description port the scalar function subdateandstring from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
12,853
| 2,722,937,608
|
IssuesEvent
|
2015-04-14 08:59:09
|
BlackCodec/tint2
|
https://api.github.com/repos/BlackCodec/tint2
|
closed
|
Battery remaining shows garbage
|
auto-migrated Component-Battery Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
It can be reproduced on a laptop.
What is the expected output? What do you see instead?
Negatives numbers in % (or numbers over 100) and wrong time left (sometimes
even negative time)
https://dl.dropbox.com/u/71236259/2012-11-23-185407_300x61_scrot.png
https://dl.dropbox.com/u/71236259/2012-11-23-185433_320x55_scrot.png
I can't manage to find any logical explanation for these numbers. Most of the
time they're random.
What version of the product are you using? On what operating system?
Crunchbang waldorf
$ tint2 --version
tint2 version 0.11-svn
```
Original issue reported on code.google.com by `sysa...@gmail.com` on 23 Nov 2012 at 5:18
|
1.0
|
Battery remaining shows garbage - ```
What steps will reproduce the problem?
It can be reproduced on a laptop.
What is the expected output? What do you see instead?
Negatives numbers in % (or numbers over 100) and wrong time left (sometimes
even negative time)
https://dl.dropbox.com/u/71236259/2012-11-23-185407_300x61_scrot.png
https://dl.dropbox.com/u/71236259/2012-11-23-185433_320x55_scrot.png
I can't manage to find any logical explanation for these numbers. Most of the
time they're random.
What version of the product are you using? On what operating system?
Crunchbang waldorf
$ tint2 --version
tint2 version 0.11-svn
```
Original issue reported on code.google.com by `sysa...@gmail.com` on 23 Nov 2012 at 5:18
|
non_process
|
battery remaining shows garbage what steps will reproduce the problem it can be reproduced on a laptop what is the expected output what do you see instead negatives numbers in or numbers over and wrong time left sometimes even negative time i can t manage to find any logical explanation for these numbers most of the time they re random what version of the product are you using on what operating system crunchbang waldorf version version svn original issue reported on code google com by sysa gmail com on nov at
| 0
|
15,343
| 19,490,871,777
|
IssuesEvent
|
2021-12-27 06:03:32
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Backward Compatible API
|
work-in-progress issue-processing-state-04
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
The latest release from pypi has a breaking API change.
https://github.com/MobSF/Mobile-Security-Framework-MobSF/pull/1770
```
Traceback (most recent call last):
File "/home/runner/work/Mobile-Security-Framework-MobSF/Mobile-Security-Framework-MobSF/mobsf/StaticAnalyzer/views/android/static_analyzer.py", line 227, in static_analyzer
quark_results = quark_analysis(
File "/home/runner/work/Mobile-Security-Framework-MobSF/Mobile-Security-Framework-MobSF/mobsf/MalwareAnalyzer/views/quark.py", line 27, in quark_analysis
from quark.Objects.quark import Quark
ModuleNotFoundError: No module named 'quark.Objects'
Error: 19/Jul/2021 18:23:08 - No module named 'quark.Objects'
Error: 19/Jul/2021 18:23:08 - Internal Server Error: /api/v1/scan
Error: 19/Jul/2021 18:23:08 - Performing Static Analysis: android.apk
```
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
It's ideal if the APIs used by other tools for integrating with quark engine is consistent across release.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
Backward Compatible API - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
The latest release from pypi has a breaking API change.
https://github.com/MobSF/Mobile-Security-Framework-MobSF/pull/1770
```
Traceback (most recent call last):
File "/home/runner/work/Mobile-Security-Framework-MobSF/Mobile-Security-Framework-MobSF/mobsf/StaticAnalyzer/views/android/static_analyzer.py", line 227, in static_analyzer
quark_results = quark_analysis(
File "/home/runner/work/Mobile-Security-Framework-MobSF/Mobile-Security-Framework-MobSF/mobsf/MalwareAnalyzer/views/quark.py", line 27, in quark_analysis
from quark.Objects.quark import Quark
ModuleNotFoundError: No module named 'quark.Objects'
Error: 19/Jul/2021 18:23:08 - No module named 'quark.Objects'
Error: 19/Jul/2021 18:23:08 - Internal Server Error: /api/v1/scan
Error: 19/Jul/2021 18:23:08 - Performing Static Analysis: android.apk
```
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
It's ideal if the APIs used by other tools for integrating with quark engine is consistent across release.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
backward compatible api is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when the latest release from pypi has a breaking api change traceback most recent call last file home runner work mobile security framework mobsf mobile security framework mobsf mobsf staticanalyzer views android static analyzer py line in static analyzer quark results quark analysis file home runner work mobile security framework mobsf mobile security framework mobsf mobsf malwareanalyzer views quark py line in quark analysis from quark objects quark import quark modulenotfounderror no module named quark objects error jul no module named quark objects error jul internal server error api scan error jul performing static analysis android apk describe the solution you d like a clear and concise description of what you want to happen it s ideal if the apis used by other tools for integrating with quark engine is consistent across release describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
| 1
|
139,772
| 18,853,814,088
|
IssuesEvent
|
2021-11-12 01:47:20
|
LalithK90/wasityInstitute
|
https://api.github.com/repos/LalithK90/wasityInstitute
|
opened
|
CVE-2021-41079 (High) detected in tomcat-embed-core-9.0.30.jar
|
security vulnerability
|
## CVE-2021-41079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: wasityInstitute/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Tomcat 8.5.0 to 8.5.63, 9.0.0-M1 to 9.0.43 and 10.0.0-M1 to 10.0.2 did not properly validate incoming TLS packets. When Tomcat was configured to use NIO+OpenSSL or NIO2+OpenSSL for TLS, a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service.
<p>Publish Date: 2021-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079>CVE-2021-41079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/security-10.html">https://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2021-09-16</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.64,9.0.44,10.0.4;org.apache.tomcat.embed:tomcat-embed-core:8.5.64,9.0.44,10.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41079 (High) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2021-41079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: wasityInstitute/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Tomcat 8.5.0 to 8.5.63, 9.0.0-M1 to 9.0.43 and 10.0.0-M1 to 10.0.2 did not properly validate incoming TLS packets. When Tomcat was configured to use NIO+OpenSSL or NIO2+OpenSSL for TLS, a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service.
<p>Publish Date: 2021-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41079>CVE-2021-41079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/security-10.html">https://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2021-09-16</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.64,9.0.44,10.0.4;org.apache.tomcat.embed:tomcat-embed-core:8.5.64,9.0.44,10.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file wasityinstitute build gradle path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in base branch master vulnerability details apache tomcat to to and to did not properly validate incoming tls packets when tomcat was configured to use nio openssl or openssl for tls a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core step up your open source security game with whitesource
| 0
|
22,349
| 31,027,446,722
|
IssuesEvent
|
2023-08-10 10:05:35
|
DxytJuly3/gitalk_blog
|
https://api.github.com/repos/DxytJuly3/gitalk_blog
|
opened
|
[Linux] 详析进程控制:fork子进程运行规则?怎么回收子进程?什么是进程替换?进程替换怎么操作? - July.cc Blogs
|
Gitalk /posts/Linux-Process-Control
|
https://www.julysblog.cn/posts/Linux-Process-Control
这次, 是第三次正式的对fork()系统调用进行介绍、补充
|
1.0
|
[Linux] 详析进程控制:fork子进程运行规则?怎么回收子进程?什么是进程替换?进程替换怎么操作? - July.cc Blogs - https://www.julysblog.cn/posts/Linux-Process-Control
这次, 是第三次正式的对fork()系统调用进行介绍、补充
|
process
|
详析进程控制:fork子进程运行规则?怎么回收子进程?什么是进程替换?进程替换怎么操作? july cc blogs 这次 是第三次正式的对fork 系统调用进行介绍、补充
| 1
|
5,847
| 8,672,964,083
|
IssuesEvent
|
2018-11-30 00:07:23
|
HumanCellAtlas/dcp-community
|
https://api.github.com/repos/HumanCellAtlas/dcp-community
|
closed
|
Rename charter-proposed and charter-final-review
|
charter-process
|
- [x] Rename *charter-proposed* to *charter-community-review*
- [x] Rename *charter-final-review* to *charter-oversight-review*
- [x] Clarify that *charter-oversight-review* is limited to oversight reviewers and not the general community
|
1.0
|
Rename charter-proposed and charter-final-review - - [x] Rename *charter-proposed* to *charter-community-review*
- [x] Rename *charter-final-review* to *charter-oversight-review*
- [x] Clarify that *charter-oversight-review* is limited to oversight reviewers and not the general community
|
process
|
rename charter proposed and charter final review rename charter proposed to charter community review rename charter final review to charter oversight review clarify that charter oversight review is limited to oversight reviewers and not the general community
| 1
|
10,076
| 13,044,161,957
|
IssuesEvent
|
2020-07-29 03:47:27
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `Quarter` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `Quarter` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `Quarter` from TiDB -
## Description
Port the scalar function `Quarter` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function quarter from tidb description port the scalar function quarter from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
454,909
| 13,109,193,659
|
IssuesEvent
|
2020-08-04 18:13:40
|
OregonDigital/OD2
|
https://api.github.com/repos/OregonDigital/OD2
|
closed
|
Works without identifier can not be added to collection (or otherwise saved)
|
Bug Features Migration Priority - High Ready for Development
|
### Descriptive summary
Adding a work to a collection causes the work to be saved without going through the form. This causes problems when the work is missing required metadata. The only field that seems to be causing this issue from time-to-time are some worms are missing an identifier.
These are either old works that had problems with migration early on (before form validation checks) OR recently saved works that encountered an error with our Hyrax 3.0-RC1 upgrade. Depending on the root of the problem, either we should do #815 or develop new code to make sure identifiers are acting properly and remediate the existing works.
### Expected behavior
Exploration is done to see what the root problem is and a solution is devised and implemented accordingly. Works will not exist without an identifier and other required metadata. Saving a work will not cause a crash
### Related work
This blocks/impedes Service Manager work review
### Accessibility Concerns
|
1.0
|
Works without identifier can not be added to collection (or otherwise saved) - ### Descriptive summary
Adding a work to a collection causes the work to be saved without going through the form. This causes problems when the work is missing required metadata. The only field that seems to be causing this issue from time-to-time are some worms are missing an identifier.
These are either old works that had problems with migration early on (before form validation checks) OR recently saved works that encountered an error with our Hyrax 3.0-RC1 upgrade. Depending on the root of the problem, either we should do #815 or develop new code to make sure identifiers are acting properly and remediate the existing works.
### Expected behavior
Exploration is done to see what the root problem is and a solution is devised and implemented accordingly. Works will not exist without an identifier and other required metadata. Saving a work will not cause a crash
### Related work
This blocks/impedes Service Manager work review
### Accessibility Concerns
|
non_process
|
works without identifier can not be added to collection or otherwise saved descriptive summary adding a work to a collection causes the work to be saved without going through the form this causes problems when the work is missing required metadata the only field that seems to be causing this issue from time to time are some worms are missing an identifier these are either old works that had problems with migration early on before form validation checks or recently saved works that encountered an error with our hyrax upgrade depending on the root of the problem either we should do or develop new code to make sure identifiers are acting properly and remediate the existing works expected behavior exploration is done to see what the root problem is and a solution is devised and implemented accordingly works will not exist without an identifier and other required metadata saving a work will not cause a crash related work this blocks impedes service manager work review accessibility concerns
| 0
|
18,280
| 24,369,370,249
|
IssuesEvent
|
2022-10-03 17:51:34
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
opened
|
Add National Records Scotland as a source
|
data processing back end
|
Sadly, there doesn't appear to be any API. glhf.
https://www.nrscotland.gov.uk/statistics-and-data
|
1.0
|
Add National Records Scotland as a source - Sadly, there doesn't appear to be any API. glhf.
https://www.nrscotland.gov.uk/statistics-and-data
|
process
|
add national records scotland as a source sadly there doesn t appear to be any api glhf
| 1
|
4,899
| 7,779,900,899
|
IssuesEvent
|
2018-06-05 18:17:52
|
StackSavingsTeam/stacksavings.com_templates
|
https://api.github.com/repos/StackSavingsTeam/stacksavings.com_templates
|
opened
|
Refactoring json code of template
|
On Process
|
Hay que cambiar el json de la siguiente plantilla:
http://stacksavings.com/detail-post/getting-started-on-a-new-web-development-project-23l7Tj/en
Actualmente por cada parrafo hay una variable json, se requiere que todo el texto este en una sola variable json
|
1.0
|
Refactoring json code of template - Hay que cambiar el json de la siguiente plantilla:
http://stacksavings.com/detail-post/getting-started-on-a-new-web-development-project-23l7Tj/en
Actualmente por cada parrafo hay una variable json, se requiere que todo el texto este en una sola variable json
|
process
|
refactoring json code of template hay que cambiar el json de la siguiente plantilla actualmente por cada parrafo hay una variable json se requiere que todo el texto este en una sola variable json
| 1
|
141
| 2,535,003,535
|
IssuesEvent
|
2015-01-25 15:54:08
|
lbradstreet/onyx-dashboard
|
https://api.github.com/repos/lbradstreet/onyx-dashboard
|
closed
|
Performance is getting bogged down for long running deployments (i.e. big log)
|
frontend performance
|
As the log entries come in the UI is bogging down.
|
True
|
Performance is getting bogged down for long running deployments (i.e. big log) - As the log entries come in the UI is bogging down.
|
non_process
|
performance is getting bogged down for long running deployments i e big log as the log entries come in the ui is bogging down
| 0
|
2,236
| 7,875,840,510
|
IssuesEvent
|
2018-06-25 21:52:00
|
react-navigation/react-navigation
|
https://api.github.com/repos/react-navigation/react-navigation
|
closed
|
Constructor called twice when navigate with both routeName and key specified
|
needs response from maintainer
|
### Current Behavior
the constructor called twice when nav.
<img src="https://user-images.githubusercontent.com/11990205/40289426-c586b888-5cea-11e8-84f2-be31b47fd8bd.png" width="300"/>
**i tried specify routeName only, and it works well.**
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.52.0
| node | 10.0.0
| npm or yarn | 5.6.0
### my route stack construct
```js
rootStack = createSwitchNavigator({
authStack,
mainStack
})
mainStack = createStackNavigator({
tabStack,
...others
})
tabStack = createBottomTabNavigator({
home,
user
})
home = createStackNavigator({
homeView
})
user = createStackNavigator({
userView
})
```
- HomeView
```jsx
...
navigation.navigate({
routeName: 'UserView',
key: 'UserView',
params: { initialPage: 2 },
});
...
```
- UserView
```jsx
...
constructor(props) {
super(props);
console.log('------>I\'m constructor');
}
...
```
there is a button in homeView, when it clicked ,i want to nav from homeView to userView with some params.
|
True
|
Constructor called twice when navigate with both routeName and key specified -
### Current Behavior
the constructor called twice when nav.
<img src="https://user-images.githubusercontent.com/11990205/40289426-c586b888-5cea-11e8-84f2-be31b47fd8bd.png" width="300"/>
**i tried specify routeName only, and it works well.**
### Your Environment
| software | version
| ---------------- | -------
| react-navigation | 2.0.1
| react-native | 0.52.0
| node | 10.0.0
| npm or yarn | 5.6.0
### my route stack construct
```js
rootStack = createSwitchNavigator({
authStack,
mainStack
})
mainStack = createStackNavigator({
tabStack,
...others
})
tabStack = createBottomTabNavigator({
home,
user
})
home = createStackNavigator({
homeView
})
user = createStackNavigator({
userView
})
```
- HomeView
```jsx
...
navigation.navigate({
routeName: 'UserView',
key: 'UserView',
params: { initialPage: 2 },
});
...
```
- UserView
```jsx
...
constructor(props) {
super(props);
console.log('------>I\'m constructor');
}
...
```
there is a button in homeView, when it clicked ,i want to nav from homeView to userView with some params.
|
non_process
|
constructor called twice when navigate with both routename and key specified current behavior the constructor called twice when nav i tried specify routename only and it works well your environment software version react navigation react native node npm or yarn my route stack construct js rootstack createswitchnavigator authstack mainstack mainstack createstacknavigator tabstack others tabstack createbottomtabnavigator home user home createstacknavigator homeview user createstacknavigator userview homeview jsx navigation navigate routename userview key userview params initialpage userview jsx constructor props super props console log i m constructor there is a button in homeview when it clicked i want to nav from homeview to userview with some params
| 0
|
10,916
| 13,691,292,357
|
IssuesEvent
|
2020-09-30 15:22:14
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
When CLI and Client are not in sync, generate errors with TypeError: Cannot read property 'type' of undefined
|
bug/2-confirmed kind/bug process/candidate team/typescript topic: cli-generate
|
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
When CLI and Client are not in sync, generate errors with
`TypeError: Cannot read property 'type' of undefined`
## How to reproduce
1. package.json
```
{
"name": "p2-dev",
"version": "1.0.0",
"main": "index.js",
"license": "MIT",
"devDependencies": {
"@prisma/cli": "2.8.0-dev.27",
"@prisma/client": "2.8.0-dev.9"
}
}
```
2. schema.prisma
```
datasource db {
provider = "postgresql"
url = "postgresql://this-will-not-be-used-but-overwritten-in-the-prisma-client-constructor"
}
generator client {
provider = "prisma-client-js"
}
model User {
email String @default("")
id String @id
name String @default("")
}
```
3. Run `yarn; yarn prisma generate`
4. Observe the error
```
divyendusingh [p2-dev]$ yarn; yarn prisma generate
yarn install v1.22.4
[1/4] 🔍 Resolving packages...
success Already up-to-date.
✨ Done in 0.07s.
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma generate
Prisma Schema loaded from schema.prisma
Error:
TypeError: Cannot read property 'type' of undefined
at /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9439:45
at Array.map (<anonymous>)
at /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9436:43
at Array.map (<anonymous>)
at transformInputTypes (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9424:61)
at transformDmmf (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9383:15)
at getPrismaClientDMMF (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9521:10)
at buildClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11092:21)
at generateClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11148:45)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
## Expected behavior
It should not crash
## Prisma information
```
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma --version
@prisma/cli : 2.8.0-dev.27
Current platform : darwin
Query Engine : query-engine 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/prisma-fmt-darwin)
Studio : 0.291.0
Done in 2.05s.
```
|
1.0
|
When CLI and Client are not in sync, generate errors with TypeError: Cannot read property 'type' of undefined - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
When CLI and Client are not in sync, generate errors with
`TypeError: Cannot read property 'type' of undefined`
## How to reproduce
1. package.json
```
{
"name": "p2-dev",
"version": "1.0.0",
"main": "index.js",
"license": "MIT",
"devDependencies": {
"@prisma/cli": "2.8.0-dev.27",
"@prisma/client": "2.8.0-dev.9"
}
}
```
2. schema.prisma
```
datasource db {
provider = "postgresql"
url = "postgresql://this-will-not-be-used-but-overwritten-in-the-prisma-client-constructor"
}
generator client {
provider = "prisma-client-js"
}
model User {
email String @default("")
id String @id
name String @default("")
}
```
3. Run `yarn; yarn prisma generate`
4. Observe the error
```
divyendusingh [p2-dev]$ yarn; yarn prisma generate
yarn install v1.22.4
[1/4] 🔍 Resolving packages...
success Already up-to-date.
✨ Done in 0.07s.
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma generate
Prisma Schema loaded from schema.prisma
Error:
TypeError: Cannot read property 'type' of undefined
at /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9439:45
at Array.map (<anonymous>)
at /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9436:43
at Array.map (<anonymous>)
at transformInputTypes (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9424:61)
at transformDmmf (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9383:15)
at getPrismaClientDMMF (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:9521:10)
at buildClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11092:21)
at generateClient (/Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/@prisma/client/generator-build/index.js:11148:45)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
```
## Expected behavior
It should not crash
## Prisma information
```
yarn run v1.22.4
$ /Users/divyendusingh/Documents/prisma/triage/p2-dev/node_modules/.bin/prisma --version
@prisma/cli : 2.8.0-dev.27
Current platform : darwin
Query Engine : query-engine 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/query-engine-darwin)
Migration Engine : migration-engine-cli 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/migration-engine-darwin)
Introspection Engine : introspection-core 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/introspection-engine-darwin)
Format Binary : prisma-fmt 7aef029819840cd88e6333b5037105264c82e2f4 (at node_modules/@prisma/cli/prisma-fmt-darwin)
Studio : 0.291.0
Done in 2.05s.
```
|
process
|
when cli and client are not in sync generate errors with typeerror cannot read property type of undefined thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description when cli and client are not in sync generate errors with typeerror cannot read property type of undefined how to reproduce package json name dev version main index js license mit devdependencies prisma cli dev prisma client dev schema prisma datasource db provider postgresql url postgresql this will not be used but overwritten in the prisma client constructor generator client provider prisma client js model user email string default id string id name string default run yarn yarn prisma generate observe the error divyendusingh yarn yarn prisma generate yarn install 🔍 resolving packages success already up to date ✨ done in yarn run users divyendusingh documents prisma triage dev node modules bin prisma generate prisma schema loaded from schema prisma error typeerror cannot read property type of undefined at users divyendusingh documents prisma triage dev node modules prisma client generator build index js at array map at users divyendusingh documents prisma triage dev node modules prisma client generator build index js at array map at transforminputtypes users divyendusingh documents prisma triage dev node modules prisma client generator build index js at transformdmmf users divyendusingh documents prisma triage dev node modules prisma client generator build index js at getprismaclientdmmf users divyendusingh documents prisma triage dev node modules prisma client generator build index js at buildclient users divyendusingh documents prisma triage dev node modules prisma client generator build index js at generateclient users divyendusingh documents prisma triage dev node modules prisma client generator build index js at processticksandrejections internal process task queues js error command failed with exit code info visit for documentation about this command expected behavior it should not crash prisma information yarn run users divyendusingh documents prisma triage dev node modules bin prisma version prisma cli dev current platform darwin query engine query engine at node modules prisma cli query engine darwin migration engine migration engine cli at node modules prisma cli migration engine darwin introspection engine introspection core at node modules prisma cli introspection engine darwin format binary prisma fmt at node modules prisma cli prisma fmt darwin studio done in
| 1
|
395,484
| 11,687,058,690
|
IssuesEvent
|
2020-03-05 12:04:56
|
MelbourneHighSchoolRobotics/RCJA_Registration_System
|
https://api.github.com/repos/MelbourneHighSchoolRobotics/RCJA_Registration_System
|
closed
|
Make login background photo not look like a potatoe
|
Priority
|
Originally was 3mb photo which was crazy big for these purposes, although have gone too far with the compression.
|
1.0
|
Make login background photo not look like a potatoe - Originally was 3mb photo which was crazy big for these purposes, although have gone too far with the compression.
|
non_process
|
make login background photo not look like a potatoe originally was photo which was crazy big for these purposes although have gone too far with the compression
| 0
|
18,670
| 24,584,864,825
|
IssuesEvent
|
2022-10-13 18:45:13
|
sysflow-telemetry/sysflow
|
https://api.github.com/repos/sysflow-telemetry/sysflow
|
closed
|
Implement journaling mechanism when exporting data in the SysFlow Processor
|
enhancement sf-processor
|
Implement a journaling mechanism to buffer exported records when the connection to backend storage is unavailable. Need to support configurable buffer sizes (number of records, size, etc.), reattempt timeouts, and optional failover (backup to disk/alternative storage).
|
1.0
|
Implement journaling mechanism when exporting data in the SysFlow Processor - Implement a journaling mechanism to buffer exported records when the connection to backend storage is unavailable. Need to support configurable buffer sizes (number of records, size, etc.), reattempt timeouts, and optional failover (backup to disk/alternative storage).
|
process
|
implement journaling mechanism when exporting data in the sysflow processor implement a journaling mechanism to buffer exported records when the connection to backend storage is unavailable need to support configurable buffer sizes number of records size etc reattempt timeouts and optional failover backup to disk alternative storage
| 1
|
20,576
| 27,238,499,893
|
IssuesEvent
|
2023-02-21 18:12:22
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
change register names in mips
|
Feature: Processor/MIPS Status: Waiting on customer
|
How can (in mips) change register names, in order to display fp instead of s8?
|
1.0
|
change register names in mips - How can (in mips) change register names, in order to display fp instead of s8?
|
process
|
change register names in mips how can in mips change register names in order to display fp instead of
| 1
|
391,244
| 11,571,128,978
|
IssuesEvent
|
2020-02-20 20:53:25
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
opened
|
amp-delight-player: Error: Cannot read property 'type' of null
|
P1: High Priority Type: Bug
|
Error: Cannot read property 'type' of null
at (https://raw.githubusercontent.com/ampproject/amphtml/2002200031230/extensions/amp-delight-player/0.1/amp-delight-player.js:248)
at (https://raw.githubusercontent.com/ampproject/amphtml/2002200031230/src/event-helper-listen.js:52)
go/ampe/CMLIxJmY9dr82AE
|
1.0
|
amp-delight-player: Error: Cannot read property 'type' of null - Error: Cannot read property 'type' of null
at (https://raw.githubusercontent.com/ampproject/amphtml/2002200031230/extensions/amp-delight-player/0.1/amp-delight-player.js:248)
at (https://raw.githubusercontent.com/ampproject/amphtml/2002200031230/src/event-helper-listen.js:52)
go/ampe/CMLIxJmY9dr82AE
|
non_process
|
amp delight player error cannot read property type of null error cannot read property type of null at at go ampe
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.