| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19) | repo (stringlengths, 7-112) | repo_url (stringlengths, 36-141) | action (stringclasses, 3 values) | title (stringlengths, 1-744) | labels (stringlengths, 4-574) | body (stringlengths, 9-211k) | index (stringclasses, 10 values) | text_combine (stringlengths, 96-211k) | label (stringclasses, 2 values) | text (stringlengths, 96-188k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20,153
| 11,402,220,711
|
IssuesEvent
|
2020-01-31 02:18:18
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
7000 Failed AcceptMessageSession dependencies in AppInsight per hour
|
Bug Client Service Attention Service Bus customer-reported
|
Crossposting original bug from https://github.com/Azure/azure-service-bus-dotnet/issues/588
> **Actual Behavior**
> Ensure AppInsight is configured and dependency tracking is enabled.
> Construct a SubscriptionClient for a topic with sessions enabled.
> Call RegisterSessionHandler on the client.
> (When nothing is published on the topic): every hour 7000 "AcceptMessageSession" dependencies with "Dependency call status" as False are logged in AppInsight.
> **Expected Behavior**
> When nothing is published there should not be anything logged in AppInsight. Or at least it should not log a failed dependency.
> **Versions**
> OS platform and version: Windows 10 Pro, 1803
> .NET Version: .NET 4.6
> NuGet package version or commit ID:
> Comments:
> Using RegisterMessageHandler instead of RegisterSessionHandler also causes a dependency "Receive" to be logged every 1 minute, but this has a status of True, which is much less noisy in AppInsight.
- Credits to @oletolshave, changed a bit to reflect our situation
We've been facing this same issue for a while now, and it's generating roughly 7k (!) false exceptions in Application Insights PER hour. This not only makes it difficult to find relevant exceptions in AI, it also consumes a lot of storage space compared to our other logs.
|
2.0
|
7000 Failed AcceptMessageSession dependencies in AppInsight per hour - Crossposting original bug from https://github.com/Azure/azure-service-bus-dotnet/issues/588
> **Actual Behavior**
> Ensure AppInsight is configured and dependency tracking is enabled.
> Construct a SubscriptionClient for a topic with sessions enabled.
> Call RegisterSessionHandler on the client.
> (When nothing is published on the topic): every hour 7000 "AcceptMessageSession" dependencies with "Dependency call status" as False are logged in AppInsight.
> **Expected Behavior**
> When nothing is published there should not be anything logged in AppInsight. Or at least it should not log a failed dependency.
> **Versions**
> OS platform and version: Windows 10 Pro, 1803
> .NET Version: .NET 4.6
> NuGet package version or commit ID:
> Comments:
> Using RegisterMessageHandler instead of RegisterSessionHandler also causes a dependency "Receive" to be logged every 1 minute, but this has a status of True, which is much less noisy in AppInsight.
- Credits to @oletolshave, changed a bit to reflect our situation
We've been facing this same issue for a while now, and it's generating roughly 7k (!) false exceptions in Application Insights PER hour. This not only makes it difficult to find relevant exceptions in AI, it also consumes a lot of storage space compared to our other logs.
|
non_process
|
failed acceptmessagesession dependencies in appinsight per hour crossposting original bug from url actual behavior ensure appinisight is configured and dependency tracking is enabled construct a subscriptionclient for a topic with sessions enabled call registersessionhandler on the client when nothing is published on the topic every hour acceptmessagesession dependencies with dependency call status as false are logged in appinsight expected behavior when nothing is published there should not be anything logged in appinsight or at least it should not log a failed dependency versions os platform and version windows pro net version net nuget package version or commit id comments using registermessagehandler instead of registersessionhandler also causes a dependency receive to be logged every minute but this has a status of true which is much less noisy in appinsight credits to oletolshave changed a bit to reflect our situation we re facing the same issue for a while now and it s generating roughly false exceptions in application insights per hour this not only makes it difficult to find relevant exceptions in ai it also consumes a lot of storage space compared to our other logs
| 0
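The first record reports roughly 7000 failed `AcceptMessageSession` dependencies per hour from an idle subscriber. A quick sanity check of that rate (pure arithmetic, independent of any Azure SDK):

```python
# Back-of-the-envelope check of the reported telemetry rate: 7000 failed
# "AcceptMessageSession" dependency items per hour is roughly one every half
# second, consistent with a session-accept call timing out and retrying in a
# tight loop while nothing is published on the topic.
events_per_hour = 7000
seconds_between_events = 3600 / events_per_hour
print(round(seconds_between_events, 2))  # ~0.51 s between logged failures

# Daily volume, for the storage-cost complaint in the report:
events_per_day = events_per_hour * 24
print(events_per_day)  # 168000 telemetry items per day from one idle subscriber
```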
|
15,301
| 19,340,492,192
|
IssuesEvent
|
2021-12-15 03:29:52
|
alexrp/system-terminal
|
https://api.github.com/repos/alexrp/system-terminal
|
closed
|
Figure out a `System.Diagnostics.Process` story
|
type: feature state: blocked area: drivers area: processes
|
Right now, executing a `System.Diagnostics.Process` will mess up the `termios` state on Unix systems. We need to figure out a way of dealing with this. It's not clear what the right thing to do actually *is* given that we could be in raw mode, while a child process expects cooked mode.
Further, on Windows, we need to consider how we'll deal with child processes that alter the console mode via `System.Terminal` and leave it in a state that the parent doesn't expect.
|
1.0
|
Figure out a `System.Diagnostics.Process` story - Right now, executing a `System.Diagnostics.Process` will mess up the `termios` state on Unix systems. We need to figure out a way of dealing with this. It's not clear what the right thing to do actually *is* given that we could be in raw mode, while a child process expects cooked mode.
Further, on Windows, we need to consider how we'll deal with child processes that alter the console mode via `System.Terminal` and leave it in a state that the parent doesn't expect.
|
process
|
figure out a system diagnostics process story right now executing a system diagnostics process will mess up the termios state on unix systems we need to figure out a way of dealing with this it s not clear what the right thing to do actually is given that we could be in raw mode while a child process expects cooked mode further on windows we need to consider how we ll deal with child processes that alter the console mode via system terminal and leave it in a state that the parent doesn t expect
| 1
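The system-terminal record above concerns `termios` state being clobbered when a child process runs. A minimal Python sketch of the snapshot-and-restore mitigation the issue hints at (illustrative only; the actual project is a .NET library, and the function name here is hypothetical):

```python
import subprocess
import sys

def run_preserving_termios(cmd):
    """Run a child process, snapshotting and restoring terminal state on Unix.

    If stdin is not a terminal (e.g. in CI), the command just runs normally.
    """
    saved = None
    if sys.platform != "win32" and sys.stdin.isatty():
        import termios  # Unix-only module
        fd = sys.stdin.fileno()
        saved = termios.tcgetattr(fd)  # snapshot cooked/raw mode settings
    try:
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    finally:
        if saved is not None:
            # Restore whatever mode the parent was in before the child ran.
            termios.tcsetattr(fd, termios.TCSADRAIN, saved)
```

This only addresses the Unix half of the issue; the Windows console-mode half would need the equivalent `GetConsoleMode`/`SetConsoleMode` dance.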
|
8,333
| 11,493,902,016
|
IssuesEvent
|
2020-02-12 00:09:19
|
xatkit-bot-platform/xatkit-runtime
|
https://api.github.com/repos/xatkit-bot-platform/xatkit-runtime
|
opened
|
Xatkit shouldn't crash if a processor fails its initialization
|
Bug Processors
|
This is particularly true for processors like Stanford NLP ones that require external libraries. If the library cannot be found we should log an error, but Xatkit should still start.
|
1.0
|
Xatkit shouldn't crash if a processor fails its initialization - This is particularly true for processors like Stanford NLP ones that require external libraries. If the library cannot be found we should log an error, but Xatkit should still start.
|
process
|
xatkit shouldn t crash if a processor failed his initialization this is particularly true for processors like stanford nlp ones that require external libraries if the library cannot be found we should log an error but xatkit should still start
| 1
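The Xatkit record asks that a failed processor initialization be logged rather than fatal. A generic sketch of that pattern (hypothetical function and data shapes, not Xatkit's actual Java API):

```python
import logging

def init_processors(factories):
    """Initialize each processor; log and skip failures instead of crashing.

    `factories` is a list of (name, zero-arg constructor) pairs. Any exception
    raised by a constructor (e.g. a missing native library) is logged as an
    error and that processor is skipped, so startup still completes.
    """
    active = []
    for name, factory in factories:
        try:
            active.append((name, factory()))
        except Exception as exc:
            logging.error("Processor %s failed to initialize: %s", name, exc)
    return active
```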
|
17,228
| 22,915,757,825
|
IssuesEvent
|
2022-07-17 00:09:26
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Append Changelog to KIC Base Image build and ISO Image build process
|
help wanted priority/important-longterm lifecycle/rotten kind/process
|
each time we comment ok-to-build-iso or ok-to-build-image, our bots create a Docker image and an ISO image
we should add a changelog.txt file to both the ISO and the KIC image with the PR number and commit number
so we have an audit of everything merged into that ISO / KIC image
This task can be done in two PRs, one for KIC and one for ISO
Example PR that pushes a new image via ok-to-build-image:
https://github.com/kubernetes/minikube/pull/13302#issuecomment-1010400304
|
1.0
|
Append Changelog to KIC Base Image build and ISO Image build process - each time we comment ok-to-build-iso or ok-to-build-image, our bots create a Docker image and an ISO image
we should add a changelog.txt file to both the ISO and the KIC image with the PR number and commit number
so we have an audit of everything merged into that ISO / KIC image
This task can be done in two PRs, one for KIC and one for ISO
Example PR that pushes a new image via ok-to-build-image:
https://github.com/kubernetes/minikube/pull/13302#issuecomment-1010400304
|
process
|
append changelog to kic base image build and iso image build process each time we comment ok to build iso or ok to build image our bots creates an docker image and iso image we should add a changelog txt to file both iso and kic image with pr number and commit number so we see and audit of all merged things in that iso kic image this task can be done by two prs one for kic and one for iso example pr that pushes new image by ok to build image
| 1
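The minikube record proposes writing a changelog.txt with the PR and commit of each build. A sketch of the appending step (hypothetical line format; the real implementation would live in the minikube build bots):

```python
def append_changelog(path, pr_number, commit):
    """Append one audit line per image build to a changelog file.

    The "PR #<n> @ <commit>" format is an assumption for illustration; the
    issue only asks that both identifiers end up in changelog.txt.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"PR #{pr_number} @ {commit}\n")
```

Called once per ok-to-build trigger, the file accumulates an audit trail of everything merged into that ISO / KIC image.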
|
207,550
| 23,458,623,516
|
IssuesEvent
|
2022-08-16 11:10:28
|
Gal-Doron/Baragon-test-6
|
https://api.github.com/repos/Gal-Doron/Baragon-test-6
|
opened
|
async-http-client-1.9.38.jar: 1 vulnerability (highest severity is: 9.1)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-http-client-1.9.38.jar</b></summary>
<p></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-6/commit/10c20000fec2ffa6628601aaf45fbbd85b996de2">10c20000fec2ffa6628601aaf45fbbd85b996de2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2019-20444](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | netty-3.10.6.Final.jar | Transitive | 1.9.39 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-20444</summary>
### Vulnerable Library - <b>netty-3.10.6.Final.jar</b>
<p>The Netty project is an effort to provide an asynchronous event-driven
network application framework and tools for rapid development of
maintainable high performance and high scalability protocol servers and
clients. In other words, Netty is a NIO client server framework which
enables quick and easy development of network applications such as protocol
servers and clients. It greatly simplifies and streamlines network
programming such as TCP and UDP socket server.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar</p>
<p>
Dependency Hierarchy:
- async-http-client-1.9.38.jar (Root Library)
- :x: **netty-3.10.6.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-6/commit/10c20000fec2ffa6628601aaf45fbbd85b996de2">10c20000fec2ffa6628601aaf45fbbd85b996de2</a></p>
<p>Found in base branch: <b>basepom-upgrade</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution (io.netty:netty): 4.0.0.Alpha1</p>
<p>Direct dependency fix Resolution (com.ning:async-http-client): 1.9.39</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
async-http-client-1.9.38.jar: 1 vulnerability (highest severity is: 9.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>async-http-client-1.9.38.jar</b></summary>
<p></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-6/commit/10c20000fec2ffa6628601aaf45fbbd85b996de2">10c20000fec2ffa6628601aaf45fbbd85b996de2</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2019-20444](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | netty-3.10.6.Final.jar | Transitive | 1.9.39 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2019-20444</summary>
### Vulnerable Library - <b>netty-3.10.6.Final.jar</b>
<p>The Netty project is an effort to provide an asynchronous event-driven
network application framework and tools for rapid development of
maintainable high performance and high scalability protocol servers and
clients. In other words, Netty is a NIO client server framework which
enables quick and easy development of network applications such as protocol
servers and clients. It greatly simplifies and streamlines network
programming such as TCP and UDP socket server.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /BaragonService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar</p>
<p>
Dependency Hierarchy:
- async-http-client-1.9.38.jar (Root Library)
- :x: **netty-3.10.6.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-6/commit/10c20000fec2ffa6628601aaf45fbbd85b996de2">10c20000fec2ffa6628601aaf45fbbd85b996de2</a></p>
<p>Found in base branch: <b>basepom-upgrade</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."
<p>Publish Date: 2020-01-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20444>CVE-2019-20444</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20444</a></p>
<p>Release Date: 2020-01-29</p>
<p>Fix Resolution (io.netty:netty): 4.0.0.Alpha1</p>
<p>Direct dependency fix Resolution (com.ning:async-http-client): 1.9.39</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
async http client jar vulnerabilities highest severity is vulnerable library async http client jar path to dependency file baragonservice pom xml path to vulnerable library home wss scanner repository io netty netty final netty final jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high netty final jar transitive details cve vulnerable library netty final jar the netty project is an effort to provide an asynchronous event driven network application framework and tools for rapid development of maintainable high performance and high scalability protocol servers and clients in other words netty is a nio client server framework which enables quick and easy development of network applications such as protocol servers and clients it greatly simplifies and streamlines network programming such as tcp and udp socket server library home page a href path to dependency file baragonservice pom xml path to vulnerable library home wss scanner repository io netty netty final netty final jar dependency hierarchy async http client jar root library x netty final jar vulnerable library found in head commit a href found in base branch basepom upgrade vulnerability details httpobjectdecoder java in netty before allows an http header that lacks a colon which might be interpreted as a separate header with an incorrect syntax or might be interpreted as an invalid fold publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty direct dependency fix resolution com ning async http client rescue worker helmet automatic remediation is available for this issue rescue worker helmet 
automatic remediation is available for this issue
| 0
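The CVSS 3 metrics listed in the vulnerability record (Attack Vector: Network, Attack Complexity: Low, Privileges Required: None, User Interaction: None, Scope: Unchanged, C: High, I: High, A: None) reproduce the stated 9.1 base score. A sketch using the metric weights from the CVSS v3.0 specification:

```python
import math

def roundup(x):
    """CVSS v3 'round up to one decimal' helper."""
    return math.ceil(x * 10) / 10

# Weights from the CVSS v3.0 spec for AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N
iss = 1 - (1 - 0.56) * (1 - 0.56) * (1 - 0.0)      # C:High, I:High, A:None
impact = 6.42 * iss                                # scope unchanged
exploitability = 8.22 * 0.85 * 0.77 * 0.85 * 0.85  # AV:N, AC:L, PR:N, UI:N
base = roundup(min(impact + exploitability, 10))
print(base)  # 9.1, matching the score in the report
```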
|
6,934
| 10,101,619,498
|
IssuesEvent
|
2019-07-29 09:10:28
|
CurtinFRC/ModularVisionTracking
|
https://api.github.com/repos/CurtinFRC/ModularVisionTracking
|
opened
|
add priority starting threads in the vision map
|
Processes Threading enhancement visionMap
|
This allows the user to prioritise which threads start up first and/or which functions from that thread, e.g. tape tracking is number 1, then ball tracking is number 2.
The hope is that if we want to track a retro-reflective ball, you want the retro tape tracking before the circular tracking.
Just a thought I had; it seems easy enough to implement.
|
1.0
|
add priority starting threads in the vision map - This allows the user to prioritise which threads start up first and/or which functions from that thread, e.g. tape tracking is number 1, then ball tracking is number 2.
The hope is that if we want to track a retro-reflective ball, you want the retro tape tracking before the circular tracking.
Just a thought I had; it seems easy enough to implement.
|
process
|
add priority starting threads in the vision map this allows to user to prioritise which threads start up first and or which functions from that thread e g tape tracking is number then ball tracking is number the hopes is that if we want to track a retro reflective ball you want the retro tape tracking before the circular tracking i dunno just a thought i had seems easy enough to implement
| 1
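The vision-map record asks for priority-ordered thread startup (tape tracking before ball tracking). A sketch of the idea (hypothetical names; the actual project is C++, and waiting for each tracker before starting the next is one possible interpretation of "starts up first"):

```python
import threading

def start_by_priority(tasks):
    """Start tracker threads in ascending priority order (priority 1 first).

    `tasks` is a list of (priority, target) pairs. Each thread is joined
    before the next starts, so a priority-1 tracker is fully up before a
    priority-2 tracker begins.
    """
    started = []
    for priority, target in sorted(tasks, key=lambda t: t[0]):
        t = threading.Thread(target=target, name=f"tracker-p{priority}")
        t.start()
        t.join()  # block until this tracker's startup work completes
        started.append(t)
    return started
```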
|
639,985
| 20,770,589,469
|
IssuesEvent
|
2022-03-16 03:56:45
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
[Improvement] Refactor Solace Developer Portal Related implementation
|
Type/Improvement Priority/Normal APIM - 4.1.0
|
### Describe your problem(s)
The new feature of Solace broker integration with WSO2 API Manager was introduced in the APIM 4.1.0 release. For maintainability, most of the Solace implementations were written in extension mode to support decoupling. But due to some UI complications, the Developer Portal related Solace implementations are bound to the other implementations.
It is better if we can do a refactoring and decouple these code implementations as well.
### Affected Products
APIM 4.1.0
|
1.0
|
[Improvement] Refactor Solace Developer Portal Related implementation - ### Describe your problem(s)
The new feature of Solace broker integration with WSO2 API Manager was introduced in the APIM 4.1.0 release. For maintainability, most of the Solace implementations were written in extension mode to support decoupling. But due to some UI complications, the Developer Portal related Solace implementations are bound to the other implementations.
It is better if we can do a refactoring and decouple these code implementations as well.
### Affected Products
APIM 4.1.0
|
non_process
|
refactor solace developer portal related implementation describe your problem s the new feature of solace broker integration with api manager was introduced in the apim release for maintainability most of the solace implementations were written in extension mode to support decoupling but due to some ui complications the developer portal related solace implementations are binded with the other implementations it is better if we can do a refactoring and decouple these code implementations as well affected products apim
| 0
|
297,914
| 22,408,198,970
|
IssuesEvent
|
2022-06-18 09:50:05
|
Lu1z-Gust4v0/Fup-Final_Project
|
https://api.github.com/repos/Lu1z-Gust4v0/Fup-Final_Project
|
closed
|
Refactor args_parser file
|
bug documentation Story
|
Proposal to refactor the args_parser file because of extraneous parsing and some logic errors
@Lu1z-Gust4v0
- [ ] Change the parse_value fn condition to `or`
- [ ] Remove the line arg in the check fn and pass the hint_counter instead
- [ ] Pass hint_counter as the line number for the motive line
- [ ] Document each fn for a better understanding
|
1.0
|
Refactor args_parser file - Proposal to refactor the args_parser file because of extraneous parsing and some logic errors
@Lu1z-Gust4v0
- [ ] Change the parse_value fn condition to `or`
- [ ] Remove the line arg in the check fn and pass the hint_counter instead
- [ ] Pass hint_counter as the line number for the motive line
- [ ] Document each fn for a better understanding
|
non_process
|
refactor args parser file propose to refactor of args parser file because of extra shit parsing and some logic errors change parse value fn condition to or remove line arg in check fn and pass the hint counter on the order hand pass hint counter as line number for motive line document each fn to a better undestanding
| 0
|
18,893
| 5,730,214,458
|
IssuesEvent
|
2017-04-21 08:49:04
|
gaymers-discord/DiscoBot
|
https://api.github.com/repos/gaymers-discord/DiscoBot
|
closed
|
!role command doesn't accept roles unless they are specifically named
|
bug code-improvement pr-submitted
|
!role command doesn't accept roles unless they are specifically named, usually in Title Case. Recently happened with the creation of the `CS:GO` role that can't be added due to the way we transform the role name.
We should refactor how roles are looked up from Discord so we can avoid this.
|
1.0
|
!role command doesn't accept roles unless they are specifically named - !role command doesn't accept roles unless they are specifically named, usually in Title Case. Recently happened with the creation of the `CS:GO` role that can't be added due to the way we transform the role name.
We should refactor how roles are looked up from Discord so we can avoid this.
|
non_process
|
role command doesn t accept roles unless they are specifically named role command doesn t accept roles unless they are specifically named usually in title case recently happened with the creation of the cs go role that can t be added due to the way we transform the role name we should refactor how roles are looked up from discord so we can avoid this
| 0
|
7,709
| 10,818,264,558
|
IssuesEvent
|
2019-11-08 11:37:33
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
prisma2 init flow is throwing "File name too long"
|
bug/2-confirmed kind/regression process/candidate
|
After installing preview 16 and following this init flow:
1. prisma2 init
2. Blank project
3. SQLite
4. Selecting both Photon and Lift > Confirm
5. Typescript
6. Demo script
It throws this Rust panic:

My suspicion is that this is caused by https://github.com/prisma/prisma2/pull/841 so I am going to classify this as a regression. I use that init flow a lot to reproduce issues, so I will appreciate this being fixed relatively quickly.
|
1.0
|
prisma2 init flow is throwing "File name too long" - After installing preview 16 and following this init flow:
1. prisma2 init
2. Blank project
3. SQLite
4. Selecting both Photon and Lift > Confirm
5. Typescript
6. Demo script
It throws this Rust panic:

My suspicion is that this is caused by https://github.com/prisma/prisma2/pull/841 so I am going to classify this as a regression. I use that init flow a lot to reproduce issues, so I will appreciate this being fixed relatively quickly.
|
process
|
init flow is throwing file name too long after installing preview after following the following init flow init blank project sqlite selecting both photon and lift confirm typescript demo script it is throwing the this rust panic my suspicion is that this is caused by so i am going to classify this as a regression i use that init flow a lot to reproduce issues so i will appreciate this being fixed relatively quickly
| 1
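The prisma2 record's "File name too long" error typically means a single path component exceeded the filesystem's NAME_MAX limit (commonly 255 bytes on Linux). A quick way to inspect the limit and test a candidate name (assumes a Unix host; this illustrates the OS limit, not prisma2's internals):

```python
import os

# NAME_MAX bounds one path component; the whole path is bounded by PATH_MAX.
limit = os.pathconf("/", "PC_NAME_MAX")  # typically 255 on Linux filesystems

candidate = "a" * 300
# True on typical Linux filesystems (300 bytes > 255), meaning creating a
# file with this name would fail with "File name too long" (ENAMETOOLONG).
print(len(candidate.encode("utf-8")) > limit)
```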
|
13,638
| 16,328,192,817
|
IssuesEvent
|
2021-05-12 05:35:28
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Signup > Incorrect error message is displayed on trying to signup with already registered unverified user
|
Bug P1 Process: Tested QA iOS
|
**Steps:**
1. Signup with a new user with valid email and password
2. Don't enter the verification code
3. Navigate back to signup
4. Enter same email ID entered in step 1
5. Observe the error message
**Actual:** 'Your Session is Expired' message is displayed
**Expected:** 'Your account is pending activation. Please check your email for details and signin to complete activation.'
**iOS screenshot:**

Refer Android screenshot for reference:

|
1.0
|
[iOS] Signup > Incorrect error message is displayed on trying to signup with already registered unverified user - **Steps:**
1. Signup with a new user with valid email and password
2. Don't enter the verification code
3. Navigate back to signup
4. Enter same email ID entered in step 1
5. Observe the error message
**Actual:** 'Your Session is Expired' message is displayed
**Expected:** 'Your account is pending activation. Please check your email for details and signin to complete activation.'
**iOS screenshot:**

Refer Android screenshot for reference:

|
process
|
signup incorrect error message is displayed on trying to signup with already registered unverified user steps signup with a new user with valid email and password don t enter the verification code navigate back to signup enter same email id entered in step observe the error message actual your session is expired message is displayed expected your account is pending activation please check your email for details and signin to complete activation ios screenshot refer android screenshot for reference
| 1
|
200,535
| 15,801,735,435
|
IssuesEvent
|
2021-04-03 06:20:52
|
sibirrer/hierArc
|
https://api.github.com/repos/sibirrer/hierArc
|
closed
|
Minor documentation
|
documentation
|
I don't really understand what is being done here. I think, it would be helpful to add a little more description or some reference in the comment line.
https://github.com/sibirrer/hierArc/blob/b4771d98da5c7e3aab937fe0bbe1714100e3abbc/hierarc/Likelihood/SneLikelihood/sne_likelihood.py#L103
|
1.0
|
Minor documentation - I don't really understand what is being done here. I think, it would be helpful to add a little more description or some reference in the comment line.
https://github.com/sibirrer/hierArc/blob/b4771d98da5c7e3aab937fe0bbe1714100e3abbc/hierarc/Likelihood/SneLikelihood/sne_likelihood.py#L103
|
non_process
|
minor documentation i don t really understand what is being done here i think it would be helpful to add a little more description or some reference in the comment line
| 0
|
12,720
| 3,088,274,667
|
IssuesEvent
|
2015-08-25 15:49:38
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
opened
|
Quandl IAs: Remove `{{url}}` from content.handlebars
|
Bug Design Low-Hanging Fruit
|
It seems like these IAs have an unnecessary link in the title of the result that simply links back to the page (because there's no defined `url` property).
@brianrisk did/do we have something to link to instead or is removing the links OK?
|
1.0
|
Quandl IAs: Remove `{{url}}` from content.handlebars - It seems like these IAs have an unnecessary link in the title of the result that simply links back to the page (because there's no defined `url` property).
@brianrisk did/do we have something to link to instead or is removing the links OK?
|
non_process
|
quandl ias remove url from content handlebars it seems like these ia s have an unnecessary link in the title of the result that simply links back to the page because there s no defined url property brianrisk did do we have something to link to instead or is removing the links ok
| 0
|
1,081
| 3,541,666,703
|
IssuesEvent
|
2016-01-19 02:50:11
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
closed
|
use transforms instead of resolution changes in the redux module
|
enhancement video processing
|
Resizing the image buffer seems to have a sizable performance impact. Instead, scale the image down with a transform and then back up to the original size
|
1.0
|
use transforms instead of resolution changes in the redux module - Resizing the image buffer seems to have a sizable performance impact. Instead, scale the image down with a transform and then back up to the original size
|
process
|
use transforms instead of resolution changes in the redux module resizing the image buffer seems to have a sizable performance impact instead scale the image down with a transform and then back up to the original size
| 1
|
293,268
| 8,974,307,937
|
IssuesEvent
|
2019-01-29 23:50:02
|
nanowrimo/nanowrimo_frontend
|
https://api.github.com/repos/nanowrimo/nanowrimo_frontend
|
closed
|
Address unavailable links in the mini-nav
|
priority
|
In the mini-nav at the very top, can we change the link for "Help Center" to read "Email Help Desk" and link it to "help@nanowrimo.org"?
Then, let's hide:
- [x] search bar
- [x] "Get Involved"
|
1.0
|
Address unavailable links in the mini-nav - In the mini-nav at the very top, can we change the link for "Help Center" to read "Email Help Desk" and link it to "help@nanowrimo.org"?
Then, let's hide:
- [x] search bar
- [x] "Get Involved"
|
non_process
|
address unavailable links in the mini nav in the mini nav at the very top can we change the link for help center to read email help desk and link it to help nanowrimo org then let s hide search bar get involved
| 0
|
11,108
| 13,956,400,975
|
IssuesEvent
|
2020-10-24 00:59:08
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Expose XYZM dimension with Ogr2ogr export to PostgreSQL algorithms
|
Easy fix Feature Request Processing
|
# Problem
The documentation of https://gdal.org/programs/ogr2ogr.html clearly states that :

However when running the processing algorithm `Export to PostgreSQL (Available Connections)` It only shows two options

This prevents users from being able to import data types that are `XYZM` into the database.
|
1.0
|
Expose XYZM dimension with Ogr2ogr export to PostgreSQL algorithms - # Problem
The documentation of https://gdal.org/programs/ogr2ogr.html clearly states that :

However when running the processing algorithm `Export to PostgreSQL (Available Connections)` It only shows two options

This prevents users from being able to import data types that are `XYZM` into the database.
|
process
|
expose xyzm dimension with export to postgresql algorithms problem the documentation of clearly states that however when running the processing algorithm export to postgresql available connections it only shows two options this prevents users from being able to import data types that are xyzm into the database
| 1
|
3,304
| 6,401,348,672
|
IssuesEvent
|
2017-08-05 20:01:11
|
facebook/osquery
|
https://api.github.com/repos/facebook/osquery
|
closed
|
Linux Audit publisher could implement a fast-dequeue thread
|
Linux process auditing wishlist
|
The current Linux Audit implementation dequeues from the Audit Netlink socket, parses, broadcasts to all subscribers, then writes JSON to RocksDB synchronously. This leads to queue drops and backlog stalls for >2k events/s. You can simulate Audit queue drops using:
```
./tools/analysis/system_stress.py -n 10 -i lo0
```
A better implementation would use two threads, one to dequeue from Netlink as fast as possible, the other to parse and broadcast. Implementing these two threads without requiring additional CPU resources during idle and load is required.
|
1.0
|
Linux Audit publisher could implement a fast-dequeue thread - The current Linux Audit implementation dequeues from the Audit Netlink socket, parses, broadcasts to all subscribers, then writes JSON to RocksDB synchronously. This leads to queue drops and backlog stalls for >2k events/s. You can simulate Audit queue drops using:
```
./tools/analysis/system_stress.py -n 10 -i lo0
```
A better implementation would use two threads, one to dequeue from Netlink as fast as possible, the other to parse and broadcast. Implementing these two threads without requiring additional CPU resources during idle and load is required.
|
process
|
linux audit publisher could implement a fast dequeue thread the current linux audit implementation dequeues from the audit netlink socket parses broadcasts to all subscribers then writes json to rocksdb synchronously this leads to queue drops and backlog stalls for events s you can simulate audit queue drops using tools analysis system stress py n i a better implementation would use two threads one to dequeue from netlink as fast as possible the other to parse and broadcast implementing these two threads without requiring additional cpu resources during idle and load is required
| 1
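The osquery record above describes a two-thread design: one thread drains the Audit Netlink socket as fast as possible into an in-memory buffer, while a second thread does the slower parse-and-broadcast work. A minimal Python sketch of that hand-off pattern (illustrative only — osquery's real implementation is C++ against Netlink; the uppercase "parse" step and all names here are stand-ins) could look like:

```python
import queue
import threading

def run_pipeline(source_events, broadcast):
    """Two-thread pipeline: a fast dequeue thread drains the source into
    an in-memory queue; a second thread parses and broadcasts. The
    blocking Queue.get() parks the consumer when idle, so neither
    thread burns CPU while waiting for events."""
    buf = queue.Queue()            # unbounded hand-off buffer
    SENTINEL = object()            # end-of-stream marker

    def dequeue():                 # fast path: only drain the source
        for raw in source_events:
            buf.put(raw)
        buf.put(SENTINEL)

    def parse_and_broadcast():     # slow path: parse, then fan out
        while True:
            raw = buf.get()        # blocks until data arrives
            if raw is SENTINEL:
                return
            broadcast(raw.upper())  # stand-in for real parsing

    t1 = threading.Thread(target=dequeue)
    t2 = threading.Thread(target=parse_and_broadcast)
    t1.start(); t2.start()
    t1.join(); t2.join()
```

The key property the issue asks for — no extra CPU at idle — comes from the blocking `get()`: the consumer thread sleeps inside the queue's condition variable rather than polling.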
|
4,405
| 7,298,729,405
|
IssuesEvent
|
2018-02-26 17:50:18
|
jamesfulford/fulford.data
|
https://api.github.com/repos/jamesfulford/fulford.data
|
opened
|
Use "from Queue import Queue" instead of Trickle
|
.processing
|
Maybe more efficient in that it won't clutter threads with sleeping for 0.05 seconds. Probably better implementation of asynchronous queueing.
|
1.0
|
Use "from Queue import Queue" instead of Trickle - Maybe more efficient in that it won't clutter threads with sleeping for 0.05 seconds. Probably better implementation of asynchronous queueing.
|
process
|
use from queue import queue instead of trickle maybe more efficient in that it won t clutter threads with sleeping for seconds probably better implementation of asynchronous queueing
| 1
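The record above proposes replacing a sleep-based "Trickle" loop with the standard-library `Queue`, whose blocking `get()` avoids waking a thread every 0.05 s to poll. A small sketch of that consumer pattern (generic, not the fulford.data codebase — the `None` sentinel and handler are illustrative):

```python
import queue
import threading

def consume(q, handle):
    """Blocking consumer: Queue.get() parks the thread until an item
    arrives, instead of sleeping and re-checking on a timer."""
    while True:
        item = q.get()          # no sleep loop; wakes only on data
        if item is None:        # None is the shutdown sentinel
            return
        handle(item)

q = queue.Queue()
results = []
worker = threading.Thread(target=consume, args=(q, results.append))
worker.start()
for item in (1, 2, 3):
    q.put(item)                 # producer side is thread-safe
q.put(None)                     # signal shutdown
worker.join()
```

`queue.Queue` handles the locking internally, so producer and consumer need no explicit synchronization beyond the sentinel.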
|
217,710
| 24,348,945,452
|
IssuesEvent
|
2022-10-02 17:50:29
|
venkateshreddypala/NeverNote
|
https://api.github.com/repos/venkateshreddypala/NeverNote
|
opened
|
CVE-2022-38750 (Medium) detected in snakeyaml-1.25.jar
|
security vulnerability
|
## CVE-2022-38750 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.25.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.25/snakeyaml-1.25.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-2.2.4.RELEASE.jar
- :x: **snakeyaml-1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/venkateshreddypala/NeverNote/commit/5221eede5346114f03f90b149d4c90e211459561">5221eede5346114f03f90b149d4c90e211459561</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750>CVE-2022-38750</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-38750 (Medium) detected in snakeyaml-1.25.jar - ## CVE-2022-38750 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.25.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.25/snakeyaml-1.25.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-2.2.4.RELEASE.jar
- :x: **snakeyaml-1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/venkateshreddypala/NeverNote/commit/5221eede5346114f03f90b149d4c90e211459561">5221eede5346114f03f90b149d4c90e211459561</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38750>CVE-2022-38750</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027">https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=47027</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy spring boot starter data mongodb release jar root library spring boot starter release jar x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stackoverflow publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend
| 0
|
12,962
| 15,341,585,710
|
IssuesEvent
|
2021-02-27 12:39:03
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
AbortSignal doesn't abort child_process.spawn
|
child_process
|
* **Version**: v15.5.0, v15.8.0 (works fine on v15.6, v15.7)
* **Platform**: macOS 10.15.7
* **Subsystem**: child_process
### What steps will reproduce the bug?
Create a child process using spawn, and abort the controller. The signal doesn't abort child process.
### How often does it reproduce? Is there a required condition?
This occurs every time, while running the following file:
```js
const { spawn } = require('child_process');
const ac = new AbortController();
const { signal } = ac;
const cp = spawn(process.execPath, ['./infinity-demo.js'], { signal });
const stillRunningTimeout = setTimeout(() => { console.log('still running!'); cp.kill('SIGTERM'); }, 5000);
cp.on('exit', () => { clearTimeout(stillRunningTimeout); console.log('exited') });
cp.on('error', (e) => console.log('AN ERROR HAS HAPPENED', e.name));
setTimeout(()=>ac.abort(),5);
```
This is infinity-demo.js:
```js
setInterval(()=>{},1000);
```
### What is the expected behaviour?
The cp should die correctly, with the program printing:
```
AN ERROR HAS HAPPENED AbortError
exited
```
### What do you see instead?
The cp still running (even though it emits an error) with the program printing:
```
AN ERROR HAS HAPPENED AbortError
still running!
exited
```
### Additional information
This works correctly on 15.6, 15.7 but fails on 15.5 and 15.8. I believe that the tests missed this because they run on short-lived tasks (`echo` etc.) that die anyway, and in addition the cp still emits the error.
I'd be happy to provide a PR for this.
|
1.0
|
AbortSignal doesn't abort child_process.spawn - * **Version**: v15.5.0, v15.8.0 (works fine on v15.6, v15.7)
* **Platform**: macOS 10.15.7
* **Subsystem**: child_process
### What steps will reproduce the bug?
Create a child process using spawn, and abort the controller. The signal doesn't abort child process.
### How often does it reproduce? Is there a required condition?
This occurs every time, while running the following file:
```js
const { spawn } = require('child_process');
const ac = new AbortController();
const { signal } = ac;
const cp = spawn(process.execPath, ['./infinity-demo.js'], { signal });
const stillRunningTimeout = setTimeout(() => { console.log('still running!'); cp.kill('SIGTERM'); }, 5000);
cp.on('exit', () => { clearTimeout(stillRunningTimeout); console.log('exited') });
cp.on('error', (e) => console.log('AN ERROR HAS HAPPENED', e.name));
setTimeout(()=>ac.abort(),5);
```
This is infinity-demo.js:
```js
setInterval(()=>{},1000);
```
### What is the expected behaviour?
The cp should die correctly, with the program printing:
```
AN ERROR HAS HAPPENED AbortError
exited
```
### What do you see instead?
The cp still running (even though it emits an error) with the program printing:
```
AN ERROR HAS HAPPENED AbortError
still running!
exited
```
### Additional information
This works correctly on 15.6, 15.7 but fails on 15.5 and 15.8. I believe that the tests missed this because they run on short-lived tasks (`echo` etc.) that die anyway, and in addition the cp still emits the error.
I'd be happy to provide a PR for this.
|
process
|
abortsignal doesn t abort child process spawn version works fine on platform macos subsystem child process what steps will reproduce the bug create a child process using spawn and abort the controller the signal doesn t abort child process how often does it reproduce is there a required condition this occurs every time while running the following file js const spawn require child process const ac new abortcontroller const signal ac const cp spawn process execpath signal const stillrunningtimeout settimeout console log still running cp kill sigterm cp on exit cleartimeout stillrunningtimeout console log exited cp on error e console log an error has happened e name settimeout ac abort this is infinity demo js js setinterval what is the expected behaviour the cp should die correctly with the program printing an error has happened aborterror exited what do you see instead the cp still running even though it emits an error with the program printing an error has happened aborterror still running exited additional information this works correctly on but fails on and i believe that the tests missed this because they run on short lived tasks echo etc that die anyway and in addition the cp still emits the error i d be happy to provide a pr for this
| 1
|
11,595
| 14,448,621,216
|
IssuesEvent
|
2020-12-08 06:37:16
|
A01731346/5a
|
https://api.github.com/repos/A01731346/5a
|
closed
|
fill_size_estimating_template
|
process-dashboard
|
- Llenado de template de estimación de líneas de código en process dashboard
- Correr el PROBE wizard
|
1.0
|
fill_size_estimating_template - - Llenado de template de estimación de líneas de código en process dashboard
- Correr el PROBE wizard
|
process
|
fill size estimating template llenado de template de estimación de líneas de código en process dashboard correr el probe wizard
| 1
|
10,593
| 13,401,061,950
|
IssuesEvent
|
2020-09-03 16:43:05
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
opened
|
need "how to install bikeshed in one's local webauthn repo clone" instructions
|
type:process
|
I was attempting to run the ./update-bikeshed-cache.sh on my local webauthn repo clone (following the directions here: https://github.com/w3c/webauthn#updating-copies-of-bikeshed-data-files-stored-in-this-repo) and this is what I got:
```
$ ./update-bikeshed-cache.sh \
&& git add .spec-data .bikeshed-include \
&& git commit -m "Bikeshed spec data update" .
Precondition failure: expecting a bikeshed installation in ./bikeshed/
```
So that is saying that it is expecting a bikeshed install to be in `<my local path>/webauthn/bikeshed`.
In looking at https://tabatkins.github.io/bikeshed/#installing it is not clear to me how to place a "bikeshed installation" in that directory, and I'm afraid of messing up my present local bikeshed install (which is sort of a baroque mess because of the mess of python installations and environments I seem to have...).
Anyone have clean & concise instructions for how to place a "bikeshed installation" in `<my local path>/webauthn/bikeshed` **_without messing up anything_** on the local machine?
|
1.0
|
need "how to install bikeshed in one's local webauthn repo clone" instructions - I was attempting to run the ./update-bikeshed-cache.sh on my local webauthn repo clone (following the directions here: https://github.com/w3c/webauthn#updating-copies-of-bikeshed-data-files-stored-in-this-repo) and this is what I got:
```
$ ./update-bikeshed-cache.sh \
&& git add .spec-data .bikeshed-include \
&& git commit -m "Bikeshed spec data update" .
Precondition failure: expecting a bikeshed installation in ./bikeshed/
```
So that is saying that it is expecting a bikeshed install to be in `<my local path>/webauthn/bikeshed`.
In looking at https://tabatkins.github.io/bikeshed/#installing it is not clear to me how to place a "bikeshed installation" in that directory, and I'm afraid of messing up my present local bikeshed install (which is sort of a baroque mess because of the mess of python installations and environments I seem to have...).
Anyone have clean & concise instructions for how to place a "bikeshed installation" in `<my local path>/webauthn/bikeshed` **_without messing up anything_** on the local machine?
|
process
|
need how to install bikeshed in one s local webauthn repo clone instructions i was attempting to run the update bikeshed cache sh on my local webauthn repo clone following the directions here and this is what i got update bikeshed cache sh git add spec data bikeshed include git commit m bikeshed spec data update precondition failure expecting a bikeshed installation in bikeshed so that is saying that it is expecting a bikeshed install to be in webauthn bikeshed in looking at it is not clear to me how to place a bikeshed installation in that directory and i m afraid of messing up my present local bikeshed install which is sort of a baroque mess because of the mess of python installations and environments i seem to have anyone have clean concise instructions for how to place a bikeshed installation in webauthn bikeshed without messing up anything on the local machine
| 1
|
153,388
| 13,504,255,864
|
IssuesEvent
|
2020-09-13 17:10:44
|
nextjs-starter/nextjs-webapp-starter
|
https://api.github.com/repos/nextjs-starter/nextjs-webapp-starter
|
opened
|
Update repo documentation files to reflect changes in project's documentation
|
type: documentation
|
The structure of the project's documentation has changed, which includes content changes, navigation structure, and page links. The in-repo documentation files such as the README will therefore need to be updated.
|
1.0
|
Update repo documentation files to reflect changes in project's documentation - The structure of the project's documentation has changed, which includes content changes, navigation structure, and page links. The in-repo documentation files such as the README will therefore need to be updated.
|
non_process
|
update repo documentation files to reflect changes in project s documentation the structure of the project s documentation has changed which include content changes navigation structure and page links the in repo documentation files such as the readme will therefore need to be updated
| 0
|
440,829
| 30,760,934,747
|
IssuesEvent
|
2023-07-29 17:37:32
|
oksana-mlynska/homepage
|
https://api.github.com/repos/oksana-mlynska/homepage
|
closed
|
Скласти інтро
|
documentation BSA-hometask-level4
|
Currently, I work as a CRM tester and want to develop further in the field of QA engineering. I like to plan everything, learn about technology. I want to get more practical skills and learn best practices in testing from professionals. I know that in IT sphere continuous learning is very important, that is why I am happy to become a student of BSA.
|
1.0
|
Скласти інтро - Currently, I work as a CRM tester and want to develop further in the field of QA engineering. I like to plan everything, learn about technology. I want to get more practical skills and learn best practices in testing from professionals. I know that in IT sphere continuous learning is very important, that is why I am happy to become a student of BSA.
|
non_process
|
скласти інтро currently i work as a crm tester and want to develop further in the field of qa engineering i like to plan everything learn about technology i want to get more practical skills and learn best practices in testing from professionals i know that in it sphere continuous learning is very important that is why i am happy to become a student of bsa
| 0
|
11,949
| 14,712,698,303
|
IssuesEvent
|
2021-01-05 09:18:25
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
opened
|
Support AWS Network Firewall logs
|
story team:data processing
|
### Description
Add support for AWS Network Firewall logs: https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-logging.html
### Acceptance Criteria
- Users can select AWS Network Firewall logs when onboarding a new S3 source
- Users can write rules for AWS Network Firewall logs
|
1.0
|
Support AWS Network Firewall logs - ### Description
Add support for AWS Network Firewall logs: https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-logging.html
### Acceptance Criteria
- Users can select AWS Network Firewall logs when onboarding a new S3 source
- Users can write rules for AWS Network Firewall logs
|
process
|
support aws network firewall logs description add support for aws network firewall logs acceptance criteria users can select aws network firewall logs when onboarding a new source users can write rules for aws network firewall logs
| 1
|
1,214
| 3,420,384,068
|
IssuesEvent
|
2015-12-08 14:37:44
|
NAFITH/IraqWeb
|
https://api.github.com/repos/NAFITH/IraqWeb
|
opened
|
Manifest Duplication, specify new rule to the duplicate manifests
|
Critical Missing Requirement question
|
The system was deciding if the new manifest already exist in the database by checking the following values :
• Voyage number
• Manifest number
• Shipping line
If the values above (together) matches the values of an active manifest, then the system will not allow the user to save it and a notification message will appear to inform the user that he needs to change the value(s) .
In the current design, the shipping line has been removed from the manifest’s data and moved to the BOL’s page as part of its data .
The Question, What is the new rule to catch the duplication in manifests?
|
1.0
|
Manifest Duplication, specify new rule to the duplicate manifests - The system was deciding if the new manifest already exist in the database by checking the following values :
• Voyage number
• Manifest number
• Shipping line
If the values above (together) matches the values of an active manifest, then the system will not allow the user to save it and a notification message will appear to inform the user that he needs to change the value(s) .
In the current design, the shipping line has been removed from the manifest’s data and moved to the BOL’s page as part of its data .
The Question, What is the new rule to catch the duplication in manifests?
|
non_process
|
manifest duplication specify new rule to the duplicate manifests the system was deciding if the new manifest already exist in the database by checking the following values • voyage number • manifest number • shipping line if the values above together matches the values of an active manifest then the system will not allow the user to save it and a notification message will appear to inform the user that he needs to change the value s in the current design the shipping line has been removed from the manifest’s data and moved to the bol’s page as part of its data the question what is the new rule to catch the duplication in manifests
| 0
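The manifest record above describes the old duplicate rule as a composite-key match on voyage number, manifest number, and shipping line against active manifests. That check can be sketched as follows (a hypothetical illustration of the described rule — the field names and the "Maersk" value are invented, and the record's open question of what replaces the shipping-line component is left unanswered):

```python
def is_duplicate(new_manifest, active_manifests):
    """Composite-key duplicate check as described in the record: a new
    manifest is a duplicate when voyage number, manifest number, and
    shipping line all match some active manifest together."""
    key = (new_manifest["voyage"],
           new_manifest["manifest_no"],
           new_manifest["shipping_line"])
    return any(
        (m["voyage"], m["manifest_no"], m["shipping_line"]) == key
        for m in active_manifests
    )
```

Moving the shipping line onto the BOL, as the record notes, removes one component of this key, which is exactly why a new rule is needed.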
|
21,084
| 6,130,484,158
|
IssuesEvent
|
2017-06-24 05:43:45
|
Tilana/Classification
|
https://api.github.com/repos/Tilana/Classification
|
closed
|
remove Collection.py class
|
bug code refactoring
|
lda/Collection.py is outdated as a pandas dataframe is used to store a collection of documents.
remove class and dependencies
|
1.0
|
remove Collection.py class - lda/Collection.py is outdated as a pandas dataframe is used to store a collection of documents.
remove class and dependencies
|
non_process
|
remove collection py class lda collection py is outdated as a pandas dataframe is used to store a collection of documents remove class and dependencies
| 0
|
29,547
| 11,759,834,319
|
IssuesEvent
|
2020-03-13 18:06:10
|
01binary/elevator
|
https://api.github.com/repos/01binary/elevator
|
opened
|
WS-2018-0347 (Medium) detected in eslint-4.10.0.tgz
|
security vulnerability
|
## WS-2018-0347 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-4.10.0.tgz</b></p></summary>
<p>An AST-based pattern checker for JavaScript.</p>
<p>Library home page: <a href="https://registry.npmjs.org/eslint/-/eslint-4.10.0.tgz">https://registry.npmjs.org/eslint/-/eslint-4.10.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/elevator/ClientApp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/elevator/ClientApp/node_modules/react-scripts/node_modules/eslint/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.5.tgz (Root Library)
- :x: **eslint-4.10.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/01binary/elevator/commit/c03855450ce69cbe684e2d0017a95692e42f929f">c03855450ce69cbe684e2d0017a95692e42f929f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was descovered in eslint before 4.18.2. One of the regexes in eslint is vulnerable to catastrophic backtracking.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://github.com/eslint/eslint/commit/f6901d0bcf6c918ac4e5c6c7c4bddeb2cb715c09>WS-2018-0347</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eslint/eslint/issues/10002">https://github.com/eslint/eslint/issues/10002</a></p>
<p>Release Date: 2019-06-16</p>
<p>Fix Resolution: 4.18.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2018-0347 (Medium) detected in eslint-4.10.0.tgz - ## WS-2018-0347 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-4.10.0.tgz</b></p></summary>
<p>An AST-based pattern checker for JavaScript.</p>
<p>Library home page: <a href="https://registry.npmjs.org/eslint/-/eslint-4.10.0.tgz">https://registry.npmjs.org/eslint/-/eslint-4.10.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/elevator/ClientApp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/elevator/ClientApp/node_modules/react-scripts/node_modules/eslint/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.1.5.tgz (Root Library)
- :x: **eslint-4.10.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/01binary/elevator/commit/c03855450ce69cbe684e2d0017a95692e42f929f">c03855450ce69cbe684e2d0017a95692e42f929f</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was descovered in eslint before 4.18.2. One of the regexes in eslint is vulnerable to catastrophic backtracking.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://github.com/eslint/eslint/commit/f6901d0bcf6c918ac4e5c6c7c4bddeb2cb715c09>WS-2018-0347</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eslint/eslint/issues/10002">https://github.com/eslint/eslint/issues/10002</a></p>
<p>Release Date: 2019-06-16</p>
<p>Fix Resolution: 4.18.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in eslint tgz ws medium severity vulnerability vulnerable library eslint tgz an ast based pattern checker for javascript library home page a href path to dependency file tmp ws scm elevator clientapp package json path to vulnerable library tmp ws scm elevator clientapp node modules react scripts node modules eslint package json dependency hierarchy react scripts tgz root library x eslint tgz vulnerable library found in head commit a href vulnerability details a vulnerability was descovered in eslint before one of the regexes in eslint is vulnerable to catastrophic backtracking publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
21,646
| 30,083,029,195
|
IssuesEvent
|
2023-06-29 06:13:41
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Xilica Audio Processors
|
NOT YET PROCESSED
|
- [x] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Xilica Solaro
What you would like to be able to make it do from Companion:
Control audio functions such as volume sliders, toggle mixer items on and off etc.
Direct links or attachments to the ethernet control protocol or API:
[Xilica-Third-Party-Control-Manual.pdf](https://github.com/bitfocus/companion-module-requests/files/11902519/Xilica-Third-Party-Control-Manual.pdf)
|
1.0
|
Xilica Audio Processors - - [x] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Xilica Solaro
What you would like to be able to make it do from Companion:
Control audio functions such as volume sliders, toggle mixer items on and off etc.
Direct links or attachments to the ethernet control protocol or API:
[Xilica-Third-Party-Control-Manual.pdf](https://github.com/bitfocus/companion-module-requests/files/11902519/Xilica-Third-Party-Control-Manual.pdf)
|
process
|
xilica audio processors i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control xilica solaro what you would like to be able to make it do from companion control audio functions such as volume sliders toggle mixer items on and off etc direct links or attachments to the ethernet control protocol or api
| 1
|
2,914
| 5,905,106,158
|
IssuesEvent
|
2017-05-19 11:50:52
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Parser Error when method has 0 or >1 parameters in parentheses on continued line
|
bug parse-tree-processing
|
MCVEs:
```vb
'This causes Parse Error, because 0 parameter)
Debug.Print Now _
()
'This causes Parse Error, because > 1 parameter)
Debug.Print Round _
(1, 2)
```
I'm unsure whether passing 1 parameter resolves the parentheses as a Value cast of the parameter.
```vb
'This parses because exactly 1 parameter, but maybe isn't resolved correctly?
Debug.Print Round _
(1)
```
|
1.0
|
Parser Error when method has 0 or >1 parameters in parentheses on continued line - MCVEs:
```vb
'This causes Parse Error, because 0 parameter)
Debug.Print Now _
()
'This causes Parse Error, because > 1 parameter)
Debug.Print Round _
(1, 2)
```
I'm unsure whether passing 1 parameter resolves the parentheses as a Value cast of the parameter.
```vb
'This parses because exactly 1 parameter, but maybe isn't resolved correctly?
Debug.Print Round _
(1)
```
|
process
|
parser error when method has or parameters in parentheses on continued line mcves vb this causes parse error because parameter debug print now this causes parse error because parameter debug print round i m unsure if passing parameter is resolving the parentheses as value cast of the parameter vb this parses because exactly parameter but maybe isn t resolved correctly debug print round
| 1
|
13,156
| 15,574,657,520
|
IssuesEvent
|
2021-03-17 10:06:20
|
googleapis/gax-dotnet
|
https://api.github.com/repos/googleapis/gax-dotnet
|
opened
|
Update gRPC dependencies before releasing 3.3.0
|
type: process
|
We need to check whether Grpc.Core 2.36.1 has any known deployment issues - there have been a few changes there.
|
1.0
|
Update gRPC dependencies before releasing 3.3.0 - We need to check whether Grpc.Core 2.36.1 has any known deployment issues - there have been a few changes there.
|
process
|
update grpc dependencies before releasing we need to check whether grpc core has any known deployment issues there have been a few changes there
| 1
|
722,609
| 24,868,984,411
|
IssuesEvent
|
2022-10-27 13:59:28
|
enjoythecode/scrum-wizards-cs321
|
https://api.github.com/repos/enjoythecode/scrum-wizards-cs321
|
closed
|
SUPER ADMIN: Change users read permissions
|
high priority @Super Admin
|
**As a**
Super Admin,
**I want to be able to**
Change the read permissions of any C-CAMS user, giving or revoking permission to read and view certain data.
**so that**
I can control the privacy of this system, which leverages massive amounts of data.
|
1.0
|
SUPER ADMIN: Change users read permissions - **As a**
Super Admin,
**I want to be able to**
Change the read permissions of any C-CAMS user, giving or revoking permission to read and view certain data.
**so that**
I can control the privacy of this system, which leverages massive amounts of data.
|
non_process
|
super admin change users read permissions as a super admin i want to be able to change the read permissions of any c cams user giving or revoking permission to read and view certain data so that i can control the privacy of this system which leverages massive amounts of data
| 0
|
4,662
| 5,221,841,837
|
IssuesEvent
|
2017-01-27 04:07:53
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[arm32/Linux] Make clang 3.6 the default toolset for cross arm32 build
|
area-Infrastructure arm32 os-linux
|
CoreCLR repo specifies clang 3.6 for cross builds (https://github.com/dotnet/coreclr/blob/master/build.sh#L747) but CoreFX does not (see https://github.com/dotnet/corefx/blob/master/src/Native/build-native.sh).
CC @hqueue @hseok-oh @jyoungyun
|
1.0
|
[arm32/Linux] Make clang 3.6 the default toolset for cross arm32 build - CoreCLR repo specifies clang 3.6 for cross builds (https://github.com/dotnet/coreclr/blob/master/build.sh#L747) but CoreFX does not (see https://github.com/dotnet/corefx/blob/master/src/Native/build-native.sh).
CC @hqueue @hseok-oh @jyoungyun
|
non_process
|
make clang the default toolset for cross build coreclr repo specifies clang for cross builds but corefx does not see cc hqueue hseok oh jyoungyun
| 0
|
395,725
| 27,084,339,780
|
IssuesEvent
|
2023-02-14 15:59:44
|
supabase/supabase
|
https://api.github.com/repos/supabase/supabase
|
closed
|
Add docs on Testing + Debugging Postgres Functions
|
documentation good first issue
|
Note this is a similar request to [#7311](https://github.com/supabase/supabase/issues/7311)
## Context
I'm getting a postgres error for a function that is triggered whenever a new auth user is created
## The issue
I'm finding it frustrating to debug the issue. Currently it's very difficult to debug a postgres function when things go wrong. The current Postgres Logs tell what the error was, but besides that it's a guessing game when it comes to debugging it.
It's also quite painful to debug, since you need to manually trigger the event. In my case, since the trigger fires whenever a new auth user is inserted, I need to manually complete my sign-up flow for this (I'm using signin via Slack)
To be honest this is enough to make me stop using Supabase. I really like Supabase and want to use it, but I'm already struggling to make a simple app. I shudder to think what would happen when things get more complex.
|
1.0
|
Add docs on Testing + Debugging Postgres Functions - Note this is a similar request to [#7311](https://github.com/supabase/supabase/issues/7311)
## Context
I'm getting a postgres error for a function that is triggered whenever a new auth user is created
## The issue
I'm finding it frustrating to debug the issue. Currently it's very difficult to debug a postgres function when things go wrong. The current Postgres Logs tell what the error was, but besides that it's a guessing game when it comes to debugging it.
It's also quite painful to debug, since you need to manually trigger the event. In my case, since the trigger fires whenever a new auth user is inserted, I need to manually complete my sign-up flow for this (I'm using signin via Slack)
To be honest this is enough to make me stop using Supabase. I really like Supabase and want to use it, but I'm already struggling to make a simple app. I shudder to think what would happen when things get more complex.
|
non_process
|
add docs on testing debugging postgres functions note this is a similar request to context i m getting a postgres error for a function that is triggered whenever a new auth user is created the issue i m finding it frustrating to debug the issue currently it s very difficult to debug postgres function when things go wrong the current postgres logs tell what the error was but besides from that it s a guessing game when it comes to debugging in it s also quite painful to debug since you need to manually trigger the event in my case since the trigger is whenever a new auth user is insert i need to manually complete my sign up flow for this i m using signin via slack to be honest this is enough to make me stop using supabase i really like supabase and want to use it but i m already struggling to make a simple app i shudder to think what would happen when things get more complex
| 0
|
22,452
| 31,199,747,035
|
IssuesEvent
|
2023-08-18 01:15:07
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/k8sattributes] Allow specifying that all labels/annotations should be copied
|
enhancement processor/k8sattributes
|
### Component(s)
processor/k8sattributes
### Is your feature request related to a problem? Please describe.
Today the processor allows configuring which specific labels and annotations should be added as resource attributes. For users who want all the labels and/or annotations, this requires them to:
1. Know all the possible keys
2. List all the possible keys in the configuration
In my opinion this is a poor user experience.
### Describe the solution you'd like
As a quality of life improvement I'd like to be able to specify the "Add all the labels from the pod" or "Add all the annotations from the namespace" and so on. I think the configuration API would look something like:
```yaml
k8sattributes/:
extract:
allAnnotationsFrom:
- pod
- namespace
allLabelsFrom:
- pod
```
I also don't believe this needs to be mutually exclusive with the `annotations` and `labels` configuration options. Users should be allowed to use `allAnnotationsFrom` and `annotations` so that they can specify regexp extractions to get even more attributes. It does sound like a good idea to validate that if both options are used (`allAnnotationsFrom` and `annotations` or `allLabelsFrom` and `labels`) each extraction in `annotations`/`labels` uses regexp.
|
1.0
|
[processor/k8sattributes] Allow specifying that all labels/annotations should be copied - ### Component(s)
processor/k8sattributes
### Is your feature request related to a problem? Please describe.
Today the processor allows configuring which specific labels and annotations should be added as resource attributes. For users who want all the labels and/or annotations, this requires them to:
1. Know all the possible keys
2. List all the possible keys in the configuration
In my opinion this is a poor user experience.
### Describe the solution you'd like
As a quality of life improvement I'd like to be able to specify the "Add all the labels from the pod" or "Add all the annotations from the namespace" and so on. I think the configuration API would look something like:
```yaml
k8sattributes/:
extract:
allAnnotationsFrom:
- pod
- namespace
allLabelsFrom:
- pod
```
I also don't believe this needs to be mutually exclusive with the `annotations` and `labels` configuration options. Users should be allowed to use `allAnnotationsFrom` and `annotations` so that they can specify regexp extractions to get even more attributes. It does sound like a good idea to validate that if both options are used (`allAnnotationsFrom` and `annotations` or `allLabelsFrom` and `labels`) each extraction in `annotations`/`labels` uses regexp.
|
process
|
allow specifying that all labels annotations should be copied component s processor is your feature request related to a problem please describe today the processor allows configuring which specific labels and annotations should be added as resource attributes for users who want all the labels and or annotations this requires them to know all the possible keys list all the possible keys in the configuration in my opinion this is a poor user experience describe the solution you d like as a quality of life improvement i d like to be able to specify the add all the labels from the pod or add all the annotations from the namespace and so on i think the configuration api would look something like yaml extract allannotationsfrom pod namespace alllabelsfrom pod i also don t believe this needs to be mutually exclusive from annotations and labels configuration option users should be allowed to use allannotationsfrom and annotations so that they can specify regexp extractions to get even more attributes i does sound like a good idea to validate that if both options are used allannotationsfrom and annotations or alllabelsfrom and labels that each extraction in annotations labels uses regexp
| 1
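The extraction semantics proposed in that issue can be sketched in a few lines. This is a hypothetical TypeScript illustration, not the processor's actual implementation: the `allLabelsFrom`/`allAnnotationsFrom` names come from the issue's suggested config, while the `k8s.<kind>.labels.<key>` attribute-key format is an assumption made for the sketch.

```typescript
// Hypothetical sketch of the proposed "copy everything" extraction:
// when a source kind is listed in allLabelsFrom / allAnnotationsFrom,
// every label / annotation is copied into resource attributes.
interface Metadata {
  labels: Record<string, string>;
  annotations: Record<string, string>;
}

interface ExtractConfig {
  allLabelsFrom?: string[];
  allAnnotationsFrom?: string[];
}

function extractAttributes(
  kind: "pod" | "namespace",
  meta: Metadata,
  cfg: ExtractConfig,
): Record<string, string> {
  const attrs: Record<string, string> = {};
  if (cfg.allLabelsFrom?.includes(kind)) {
    for (const [k, v] of Object.entries(meta.labels)) {
      attrs[`k8s.${kind}.labels.${k}`] = v; // key format is illustrative
    }
  }
  if (cfg.allAnnotationsFrom?.includes(kind)) {
    for (const [k, v] of Object.entries(meta.annotations)) {
      attrs[`k8s.${kind}.annotations.${k}`] = v;
    }
  }
  return attrs;
}

// Example: copy all pod labels, but no annotations.
const attrs = extractAttributes(
  "pod",
  { labels: { app: "web", tier: "frontend" }, annotations: { note: "x" } },
  { allLabelsFrom: ["pod"] },
);
console.log(attrs); // { "k8s.pod.labels.app": "web", "k8s.pod.labels.tier": "frontend" }
```

Combining this with the existing key-list options would then just be a union of the two result maps.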
|
322,131
| 23,892,961,523
|
IssuesEvent
|
2022-09-08 12:54:41
|
tidyverse/purrr
|
https://api.github.com/repos/tidyverse/purrr
|
closed
|
Add article for row-oriented workflow (or variations thereupon)
|
documentation tidy-dev-day :nerd_face:
|
Since this doesn't really fit in any one tidyverse repo:
https://community.rstudio.com/t/missing-workflow-in-tidyverse/20578
|
1.0
|
Add article for row-oriented workflow (or variations thereupon) - Since this doesn't really fit in any one tidyverse repo:
https://community.rstudio.com/t/missing-workflow-in-tidyverse/20578
|
non_process
|
add article for row oriented workflow or variations thereupon since this doesn t really fit in any one tidyverse repo
| 0
|
14,688
| 17,798,493,642
|
IssuesEvent
|
2021-09-01 03:09:18
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Caged Heat
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Caged Heat
Type (film/tv show): TV Show
Film or show in which it appears: All Hail the King (Marvel Short)
Is the parent film/show streaming anywhere? Yes
About when in the parent film/show does it appear? 6:10
Actual footage of the film/show can be seen (yes/no)? Yes
|
1.0
|
Caged Heat - Please add as much of the following info as you can:
Title: Caged Heat
Type (film/tv show): TV Show
Film or show in which it appears: All Hail the King (Marvel Short)
Is the parent film/show streaming anywhere? Yes
About when in the parent film/show does it appear? 6:10
Actual footage of the film/show can be seen (yes/no)? Yes
|
process
|
caged heat please add as much of the following info as you can title caged heat type film tv show tv show film or show in which it appears all hail the king marvel short is the parent film show streaming anywhere yes about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
| 1
|
325,544
| 9,932,741,615
|
IssuesEvent
|
2019-07-02 10:32:56
|
xwikisas/application-googleapps
|
https://api.github.com/repos/xwikisas/application-googleapps
|
closed
|
Automatically import user photo from google account
|
Priority: Major Type: Improvement
|
When logging in with Google account a wiki account is created but the user photo is not added from it.
|
1.0
|
Automatically import user photo from google account - When logging in with Google account a wiki account is created but the user photo is not added from it.
|
non_process
|
automatically import user photo from google account when logging in with google account a wiki account is created but the user photo is not added from it
| 0
|
10,779
| 13,607,803,978
|
IssuesEvent
|
2020-09-23 00:30:17
|
googleapis/google-auth-library-java
|
https://api.github.com/repos/googleapis/google-auth-library-java
|
closed
|
Security review on Google Client_id and Client_secret
|
type: process
|
*[Following this discussion](https://github.com/googleapis/google-auth-library-java/pull/469#discussion_r479640965)*
To allow the library to generate an Id_token based on the User Credential, I reuse the client_id and the client_secret provided by the gcloud SDK. *I got them like this*
```
gcloud config set log_http_redact_token false
gcloud auth print-identity-token --log-http
```
*The request body print in plain text these values*
Therefore, these values are quite easy to find, and I don't think they need special security to protect them in this library or even on GitHub.
However, a security review on this would be great to define exactly what to do.
|
1.0
|
Security review on Google Client_id and Client_secret - *[Following this discussion](https://github.com/googleapis/google-auth-library-java/pull/469#discussion_r479640965)*
To allow the library to generate an Id_token based on the User Credential, I reuse the client_id and the client_secret provided by the gcloud SDK. *I got them like this*
```
gcloud config set log_http_redact_token false
gcloud auth print-identity-token --log-http
```
*The request body print in plain text these values*
Therefore, these values are quite easy to find, and I don't think they need special security to protect them in this library or even on GitHub.
However, a security review on this would be great to define exactly what to do.
|
process
|
security review on google client id and client secret to allow the library to generate an id token based on the user credential i reuse the client id and the client secret provided by the gcloud sdk i got them like this gcloud config set log http redact token false gcloud auth print identity token log http the request body print in plain text these values therefore these values are quite easy to find and i don t think they need a special security to protect them in this library or even on github however to have a security review on this can be great to define what to do exactly
| 1
|
99,543
| 16,447,550,048
|
IssuesEvent
|
2021-05-20 21:41:25
|
turkdevops/prism
|
https://api.github.com/repos/turkdevops/prism
|
reopened
|
CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz
|
security vulnerability
|
## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: prism/package.json</p>
<p>Path to vulnerable library: prism/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/prism/commit/5db2688a34c193ee9e41db7d1582c723828b54a5">5db2688a34c193ee9e41db7d1582c723828b54a5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-5.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p>
<p>Path to dependency file: prism/package.json</p>
<p>Path to vulnerable library: prism/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- gulp-4.0.2.tgz (Root Library)
- gulp-cli-2.2.0.tgz
- yargs-7.1.0.tgz
- :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/prism/commit/5db2688a34c193ee9e41db7d1582c723828b54a5">5db2688a34c193ee9e41db7d1582c723828b54a5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in yargs parser tgz cve medium severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file prism package json path to vulnerable library prism node modules yargs parser package json dependency hierarchy gulp tgz root library gulp cli tgz yargs tgz x yargs parser tgz vulnerable library found in head commit a href found in base branch master vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
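The prototype-pollution mechanism described in CVE-2020-7608 can be illustrated with a deliberately naive deep-set helper — a simplified stand-in for the vulnerable option-parsing logic, not yargs-parser's actual code:

```typescript
// Naive deep-set: follows dotted paths without guarding special keys.
// A "__proto__" segment walks onto Object.prototype, so the final
// assignment pollutes every plain object in the process.
function naiveSet(obj: Record<string, unknown>, path: string, value: unknown): void {
  const keys = path.split(".");
  let cur: any = obj;
  for (const key of keys.slice(0, -1)) {
    if (cur[key] === undefined) cur[key] = {};
    cur = cur[key]; // on "__proto__" this becomes Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
}

naiveSet({}, "__proto__.polluted", "yes");
console.log(({} as any).polluted); // "yes" — Object.prototype was modified
```

The fixed versions listed in the advisory guard against `__proto__` (and similar special keys) before assigning.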
|
7,928
| 11,103,420,908
|
IssuesEvent
|
2019-12-17 03:47:14
|
swoft-cloud/swoft
|
https://api.github.com/repos/swoft-cloud/swoft
|
closed
|
Invalid debug code t.txt
|
bug: fixed swoft: process
|
There is a piece of test code in the WorkerStopListener code under Swoft\Process\Swoole that should be removed:
file_put_contents('t.txt', 'stop');
|
1.0
|
Invalid debug code t.txt - There is a piece of test code in the WorkerStopListener code under Swoft\Process\Swoole that should be removed:
file_put_contents('t.txt', 'stop');
|
process
|
无效的调试代码 t txt 在swoft process swoole下的workerstoplistener代码里有一段测试代码建议去掉。 ile put contents t txt stop
| 1
|
257,898
| 8,148,375,211
|
IssuesEvent
|
2018-08-22 05:26:46
|
MyMICDS/MyMICDS-v2
|
https://api.github.com/repos/MyMICDS/MyMICDS-v2
|
opened
|
Part of default schedule not underlaid
|
bug effort: medium priority: urgent work length: medium
|
This is the current schedule:

It says Block B first on the schedule even though it is a free period (as expected), but neither lunch nor collaborative shows from 11:50 to 1:10. These should be there.
|
1.0
|
Part of default schedule not underlaid - This is the current schedule:

It says Block B first on the schedule even though it is a free period (as expected), but neither lunch nor collaborative shows from 11:50 to 1:10. These should be there.
|
non_process
|
part of default schedule not underlaid this is the current schedule it says block b first on the schedule even though it is a free period as expected but lunch nor collaborative show from to this should be there
| 0
|
137,169
| 12,747,077,138
|
IssuesEvent
|
2020-06-26 17:10:32
|
streamnative/pulsar
|
https://api.github.com/repos/streamnative/pulsar
|
closed
|
ISSUE-6927: Add C# client documentation
|
area/documentation component/documentation component/website size: 1 triage/week-19 type/feature workflow::in-review
|
Original Issue: apache/pulsar#6927
---
**Is your feature request related to a problem? Please describe.**
Since we have official c# client https://github.com/apache/pulsar-dotpulsar, we'd better add the C# client documentation in the client documentation.

|
2.0
|
ISSUE-6927: Add C# client documentation - Original Issue: apache/pulsar#6927
---
**Is your feature request related to a problem? Please describe.**
Since we have official c# client https://github.com/apache/pulsar-dotpulsar, we'd better add the C# client documentation in the client documentation.

|
non_process
|
issue add c client documentation original issue apache pulsar is your feature request related to a problem please describe since we have official c client we d better add the c client documentation in the client documentation
| 0
|
13,603
| 16,190,245,214
|
IssuesEvent
|
2021-05-04 07:23:02
|
osstotalsoft/nbb
|
https://api.github.com/repos/osstotalsoft/nbb
|
closed
|
Microservices orchestration Sample
|
process manager
|
In the Microservices sample, use Orchestration via NBB.ProcessManager instead of Choreography.
If necessary, move integration event handler logic into Command/CommandHandlers.
Add the additional fields to commands/events required for process correlation.
|
1.0
|
Microservices orchestration Sample - In the Microservices sample, use Orchestration via NBB.ProcessManager instead of Choreography.
If necessary, move integration event handler logic into Command/CommandHandlers.
Add the additional fields to commands/events required for process correlation.
|
process
|
microservices orchestration sample in microservices sample use orchestration via nbb processmanager instead of choreography if necessary move integration event handlers logic in command commandhandlers add additional fields in commands events required by process correlation
| 1
|
7,943
| 11,137,523,511
|
IssuesEvent
|
2019-12-20 19:35:38
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Change EST to ET on system pages
|
Apply Process Requirements Ready State Dept.
|
Who: All users
What: Change EST to ET
Why: To be consistent and avoid confusion
Acceptance Criteria:
Change instances of EST to ET
Reason: EST applies for only half of the year (the other half is EDT), so ET should be used
Changes should be made:
- State: Next steps page

- State: What's next page

- State: Update application page

- State: Search page in the info box that tells applicants when they can apply

|
1.0
|
Change EST to ET on system pages - Who: All users
What: Change EST to ET
Why: To be consistent and avoid confusion
Acceptance Criteria:
Change instances of EST to ET
Reason: EST applies for only half of the year (the other half is EDT), so ET should be used
Changes should be made:
- State: Next steps page

- State: What's next page

- State: Update application page

- State: Search page in the info box that tells applicants when they can apply

|
process
|
change est to et on system pages who all users what change est to et why to be consistent and avoid confusion acceptance criteria change instances of est to be et reason est is only half of the year as well as edt so et should be used changes should be made state next steps page state what s next page state update application page state search page in the info box that tells applicants when they can apply
| 1
|
17,379
| 23,200,382,894
|
IssuesEvent
|
2022-08-01 20:47:33
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] ns2.elhacker.net
|
whitelisting process
|
>**Domains or links**
not sure if is listed right now
ns2.elhacker.net
>**More Information**
>How did you discover your web site or domain was listed here?
AdGuard
>**Have you requested removal from other sources?**
Yes:
https://github.com/blocklistproject/Lists/issues/599
elhacker.net has been a registered domain since 2001; it contains a big forum, web, and blog, and ns2 is mainly an e-mail server hosting downloads like manuals, tutorials, ISOs, etc.
Thank you in advance.
|
1.0
|
[FALSE-POSITIVE?] ns2.elhacker.net - >**Domains or links**
not sure if is listed right now
ns2.elhacker.net
>**More Information**
>How did you discover your web site or domain was listed here?
AdGuard
>**Have you requested removal from other sources?**
Yes:
https://github.com/blocklistproject/Lists/issues/599
elhacker.net has been a registered domain since 2001; it contains a big forum, web, and blog, and ns2 is mainly an e-mail server hosting downloads like manuals, tutorials, ISOs, etc.
Thank you in advance.
|
process
|
elhacker net domains or links not sure if is listed right now elhacker net more information how did you discover your web site or domain was listed here adguard have you requested removal from other sources yes elhacker net is registered domain since contains big forum web and blog and is mainly a e mail server and hosting downloads like manuals tutorials iso s etc thank you in advance
| 1
|
2,940
| 5,921,685,822
|
IssuesEvent
|
2017-05-23 00:05:24
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat fails on Desktop
|
area-System.Diagnostics.Process test-run-desktop
|
This test is marked as `[ConditionalFact]` which is currently not running in Desktop. I have the local fix already for that but I can't update it in Corefx until I disable the failing tests to have a green CI, so I will go ahead and disable it.
It fails with error:
```
System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat [FAIL]
Assert.Throws() Failure
Expected: typeof(System.ComponentModel.Win32Exception)
Actual: (No exception was thrown)
Stack Trace:
D:\repos\corefxCopy\corefx\src\System.Diagnostics.Process\tests\ProcessTests.cs(1028,0): at System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat()
```
cc: @Priya91 @stephentoub
|
1.0
|
System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat fails on Desktop - This test is marked as `[ConditionalFact]`, which currently does not run on Desktop. I already have the local fix for that, but I can't update it in Corefx until I disable the failing tests to get a green CI, so I will go ahead and disable it.
It fails with error:
```
System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat [FAIL]
Assert.Throws() Failure
Expected: typeof(System.ComponentModel.Win32Exception)
Actual: (No exception was thrown)
Stack Trace:
D:\repos\corefxCopy\corefx\src\System.Diagnostics.Process\tests\ProcessTests.cs(1028,0): at System.Diagnostics.Tests.ProcessTests.TestStartOnWindowsWithBadFileFormat()
```
cc: @Priya91 @stephentoub
|
process
|
system diagnostics tests processtests teststartonwindowswithbadfileformat fails on desktop this test is marked as which is currently not running in desktop i have the local fix already for that but i can t update it in corefx until i disable the failing tests to have a green ci so i will go ahead and disable it it fails with error system diagnostics tests processtests teststartonwindowswithbadfileformat assert throws failure expected typeof system componentmodel actual no exception was thrown stack trace d repos corefxcopy corefx src system diagnostics process tests processtests cs at system diagnostics tests processtests teststartonwindowswithbadfileformat cc stephentoub
| 1
|
9,968
| 13,012,593,952
|
IssuesEvent
|
2020-07-25 06:36:45
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
closed
|
Can't shutdown stdin of process
|
A-tokio C-bug M-process
|
## Version
0.2.11
## Platform
Ubuntu 18.04 x64
## Subcrates
process, io
## Description
So I'm piping into and out of a process using tokio process. The input goes fine, but the output pipe gets to the last byte and then hangs (when I read it in chunks), or just hangs if I try to use `read_to_end`.
I ended up diagnosing this further and found that stdin wasn't being closed, so I added a shutdown() to my code, but then the write hung there (write function included below).
```Rust
async fn write_buffer<W>(data: &[u8], mut w: W) -> io::Result<()> where W: AsyncWriteExt + Unpin {
    let mut buffer = Cursor::new(data);
    while buffer.has_remaining() {
        w.write_buf(&mut buffer).await?;
    }
    w.shutdown().await?; // It hangs here
    Ok(())
}
```
To solve this temporarily I've removed the shutdown and in my read I read chunks into a buffer and have a timeout to spot the end of stream since the program I'm piping to/from is very fast. But I'm wondering if there's an issue there
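For comparison, the same EOF behaviour can be reproduced with Python's `subprocess` module: a piped child only sees end-of-input once the write end of the pipe is closed, regardless of how many bytes were written. This is a minimal sketch (the helper name and the use of `cat` are assumptions for illustration, not from the issue):

```python
import subprocess

def pipe_through_cat(data: bytes) -> bytes:
    """Pipe bytes through `cat`, closing stdin so the child sees EOF."""
    proc = subprocess.Popen(
        ["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE
    )
    proc.stdin.write(data)
    proc.stdin.close()  # without this, the read below would hang forever
    out = proc.stdout.read()
    proc.wait()
    return out
```

Closing (or dropping) the write handle, rather than calling a shutdown method, is what actually signals EOF to the child, which is why keeping the stdin handle alive makes the final read hang.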
|
1.0
|
Can't shutdown stdin of process - ## Version
0.2.11
## Platform
Ubuntu 18.04 x64
## Subcrates
process, io
## Description
So I'm piping into and out of a process using tokio process. The input goes fine, but the output pipe gets to the last byte and then hangs (when I read it in chunks), or just hangs if I try to use `read_to_end`.
I ended up diagnosing this further and found that stdin wasn't being closed, so I added a shutdown() to my code, but then the write hung there (write function included below).
```Rust
async fn write_buffer<W>(data: &[u8], mut w: W) -> io::Result<()> where W: AsyncWriteExt + Unpin {
    let mut buffer = Cursor::new(data);
    while buffer.has_remaining() {
        w.write_buf(&mut buffer).await?;
    }
    w.shutdown().await?; // It hangs here
    Ok(())
}
```
To solve this temporarily I've removed the shutdown and in my read I read chunks into a buffer and have a timeout to spot the end of stream since the program I'm piping to/from is very fast. But I'm wondering if there's an issue there
|
process
|
can t shutdown stdin of process version platform ubuntu subcrates process io description so i m piping into and out of a process using tokio process and the input goes fine and the output pipe was getting to the last byte then hanging when i did it in chunks or just hanging if i tried to use read to end i ended up diagnosing this further and finding that the stdin wasn t being closed so i added a shutdown to my code but then the write hung there write function included rust async fn write buffer data mut w w io result where w asyncwriteext unpin let mut buffer cursor new data while buffer has remaining w write buf mut buffer await w shutdown await it hangs here to solve this temporarily i ve removed the shutdown and in my read i read chunks into a buffer and have a timeout to spot the end of stream since the program i m piping to from is very fast but i m wondering if there s an issue there
| 1
|
17,759
| 23,676,288,167
|
IssuesEvent
|
2022-08-28 05:54:28
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
closed
|
tdesign-miniprogram: how do I change the theme color?
|
good first issue Stale in process
|
### tdesign version
WeChat Mini Program 0.13.2
### Reproduction link
_No response_
### Reproduction steps
_No response_
### Expected result
_No response_
### Actual result
_No response_
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
1.0
|
tdesign-miniprogram: how do I change the theme color? - ### tdesign version
WeChat Mini Program 0.13.2
### Reproduction link
_No response_
### Reproduction steps
_No response_
### Expected result
_No response_
### Actual result
_No response_
### Framework version
_No response_
### Browser version
_No response_
### OS version
_No response_
### Node version
_No response_
### Additional notes
_No response_
|
process
|
tdesign miniprogram how do i change the theme color tdesign version wechat mini program reproduction link no response reproduction steps no response expected result no response actual result no response framework version no response browser version no response os version no response node version no response additional notes no response
| 1
|
5,242
| 3,911,776,935
|
IssuesEvent
|
2016-04-20 07:47:09
|
elastic/rally
|
https://api.github.com/repos/elastic/rally
|
opened
|
Allow to specify the target host(s) when using the "benchmark-only" pipeline
|
:Usability enhancement
|
The (rather dangerous) "benchmark-only" pipeline allows running benchmarks against clusters that were not provisioned by Rally. However, we bolted this on a bit, and so far it only allows targeting localhost:9200. We should allow a user to define a list of target hosts and ports that we then pass to the client driver.
|
True
|
Allow to specify the target host(s) when using the "benchmark-only" pipeline - The (rather dangerous) "benchmark-only" pipeline allows running benchmarks against clusters that were not provisioned by Rally. However, we bolted this on a bit, and so far it only allows targeting localhost:9200. We should allow a user to define a list of target hosts and ports that we then pass to the client driver.
|
non_process
|
allow to specify the target host s when using the benchmark only pipeline the rather dangerous benchmark only pipeline allows to run benchmarks against clusters which were not provisioned by rally however we bolted this on a bit and it only allows to target localhost so far we should allow a user to define a list of target hosts and ports that we pass then to the client driver
| 0
|
14,024
| 16,824,076,689
|
IssuesEvent
|
2021-06-17 16:11:14
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
closed
|
need "how to install bikeshed in one's local webauthn repo clone" instructions
|
priority:low type:process
|
I was attempting to run the `./update-bikeshed-cache.sh` on my local webauthn repo clone (following the directions here: https://github.com/w3c/webauthn#updating-copies-of-bikeshed-data-files-stored-in-this-repo) and this is what I got:
```
$ ./update-bikeshed-cache.sh \
&& git add .spec-data .bikeshed-include \
&& git commit -m "Bikeshed spec data update" .
Precondition failure: expecting a bikeshed installation in ./bikeshed/
```
So that is saying that it is expecting a bikeshed install to be in `<my local path>/webauthn/bikeshed`.
In looking at https://tabatkins.github.io/bikeshed/#installing it is not clear to me how to place a "bikeshed installation" in that directory, and I'm afraid of messing up my present local bikeshed install (which is sort of a baroque mess because of the mess of python installations and environments I seem to have...).
Anyone have clean & concise instructions for how to place a "bikeshed installation" in `<my local path>/webauthn/bikeshed` **_without messing up anything_** on the local machine?
|
1.0
|
need "how to install bikeshed in one's local webauthn repo clone" instructions - I was attempting to run the `./update-bikeshed-cache.sh` on my local webauthn repo clone (following the directions here: https://github.com/w3c/webauthn#updating-copies-of-bikeshed-data-files-stored-in-this-repo) and this is what I got:
```
$ ./update-bikeshed-cache.sh \
&& git add .spec-data .bikeshed-include \
&& git commit -m "Bikeshed spec data update" .
Precondition failure: expecting a bikeshed installation in ./bikeshed/
```
So that is saying that it is expecting a bikeshed install to be in `<my local path>/webauthn/bikeshed`.
In looking at https://tabatkins.github.io/bikeshed/#installing it is not clear to me how to place a "bikeshed installation" in that directory, and I'm afraid of messing up my present local bikeshed install (which is sort of a baroque mess because of the mess of python installations and environments I seem to have...).
Anyone have clean & concise instructions for how to place a "bikeshed installation" in `<my local path>/webauthn/bikeshed` **_without messing up anything_** on the local machine?
|
process
|
need how to install bikeshed in one s local webauthn repo clone instructions i was attempting to run the update bikeshed cache sh on my local webauthn repo clone following the directions here and this is what i got update bikeshed cache sh git add spec data bikeshed include git commit m bikeshed spec data update precondition failure expecting a bikeshed installation in bikeshed so that is saying that it is expecting a bikeshed install to be in webauthn bikeshed in looking at it is not clear to me how to place a bikeshed installation in that directory and i m afraid of messing up my present local bikeshed install which is sort of a baroque mess because of the mess of python installations and environments i seem to have anyone have clean concise instructions for how to place a bikeshed installation in webauthn bikeshed without messing up anything on the local machine
| 1
|
43,731
| 5,696,645,195
|
IssuesEvent
|
2017-04-16 14:05:18
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
API: add testing functions to public API ?
|
API Design Docs Testing
|
A lot of other projects that use pandas will (like to) use pandas testing functionality like `assert_frame_equal` in their test suite. Although the pandas testing functions are available in the namespace (#6188), they are not really 'officially' labeled as public API that other projects can use (and rely upon).
Numpy has a similar submodule `numpy.testing` (http://docs.scipy.org/doc/numpy/reference/routines.testing.html)
Some things we could do:
- make a selection of the functions in `util.testing` that we want to label as public
- add this list somewhere to the docs
- write docstrings for these public ones (the other could use that as well of course ..)
- add some tests for the public API
- I would also import them into a `pandas.testing` module, so it is this one we can publicly advertise (and users are less tempted to use other non-public functions in the `pandas.util.testing` namespace)
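For illustration, this is how such a public `pandas.testing` module could be used in another project's test suite (a sketch assuming a pandas version where the module exists; modern releases do ship it):

```python
import pandas as pd
from pandas.testing import assert_frame_equal

# Two frames built independently, as a downstream test suite might do.
left = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})
right = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# Returns None when the frames match, raises AssertionError otherwise.
assert_frame_equal(left, right)
```

Exposing the helpers under a short, documented path like this is what lets other projects rely on them without reaching into `pandas.util.testing` internals.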
|
1.0
|
API: add testing functions to public API ? - A lot of other projects that use pandas will (like to) use pandas testing functionality like `assert_frame_equal` in their test suite. Although the pandas testing functions are available in the namespace (#6188), they are not really 'officially' labeled as public API that other projects can use (and rely upon).
Numpy has a similar submodule `numpy.testing` (http://docs.scipy.org/doc/numpy/reference/routines.testing.html)
Some things we could do:
- make a selection of the functions in `util.testing` that we want to label as public
- add this list somewhere to the docs
- write docstrings for these public ones (the other could use that as well of course ..)
- add some tests for the public API
- I would also import them into a `pandas.testing` module, so it is this one we can publicly advertise (and users are less tempted to use other non-public functions in the `pandas.util.testing` namespace)
|
non_process
|
api add testing functions to public api a lot of other projects that use pandas will like to use pandas testing functionality like assert frame equal in their test suite although the pandas testing functions are available in the namespace they are not really officially labeled as public api that other projects can use and rely upon numpy has a similar submodule numpy testing some things we could do make a selection of the functions in util testing that we want to label as public add this list somewhere to the docs write docstrings for these public ones the other could use that as well of course add some tests for the public api i would also import them into a pandas testing module so it is this one we can publicly advertise and users are less tempted to use other non public functions in the pandas util testing namespace
| 0
|
133,692
| 12,551,084,335
|
IssuesEvent
|
2020-06-06 13:32:34
|
svsticky/static-sticky
|
https://api.github.com/repos/svsticky/static-sticky
|
closed
|
Manual needs update
|
documentation
|
Explain the steps for setting up the site better and in more detail. The current format gives the impression that users need to make a new Contentful account.
|
1.0
|
Manual needs update - Explain the steps for setting up the site better and in more detail. The current format gives the impression that users need to make a new Contentful account.
|
non_process
|
manual needs update explain the steps for setting up the site better and in more detail the current format gives the impression that the users need to make a new contentful account
| 0
|
9,244
| 12,270,574,925
|
IssuesEvent
|
2020-05-07 15:42:30
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Move Issues from old repo to new with library-specific label
|
api: spanner type: process
|
Move all the ISSUES in the old repo to -cpp with the correct `api: ???` label. https://help.github.com/en/github/managing-your-work-on-github/transferring-an-issue-to-another-repository
|
1.0
|
Move Issues from old repo to new with library-specific label - Move all the ISSUES in the old repo to -cpp with the correct `api: ???` label. https://help.github.com/en/github/managing-your-work-on-github/transferring-an-issue-to-another-repository
|
process
|
move issues from old repo to new with library specific label move all the issues in the old repo to cpp with the correct api label
| 1
|
20,219
| 26,809,763,528
|
IssuesEvent
|
2023-02-01 21:17:24
|
ESMValGroup/ESMValCore
|
https://api.github.com/repos/ESMValGroup/ESMValCore
|
opened
|
Possibly running out of memory when using a lot of preprocessors.
|
help wanted preprocessor
|
Hi all,
In the July 2022 Meeting, I raised an issue, that I can't process the data with a lot of preprocessors (see https://github.com/ESMValGroup/Community/discussions/33#discussioncomment-3121419). When I tried it back there, it all seemed to work just fine, however, I finally ran into a problem.
I was running [this recipe](https://github.com/ESMValGroup/ESMValCore/files/10561556/recipe_bc_extremes_tx3x.txt) (it looks ugly, since it's still being developed, so please no judgement), and it works OK for the groups 'all', 'obs_abs' and 'obs_ano', but it can't process 'nat' and 'ssp245'. I thought, OK, I'll split the recipe, process 'nat' and 'ssp245' separately in their own recipes, and recombine them for the diagnostic, but no, I seem to not be able to process them either. The computers I am working on allow only 6h jobs, but there aren't many limitations on the number of checked-out processors. For this job, I checked out 27 cpus and allocated 180Gb of memory, and I don't allow more than one process per cpu. (I also tried 20, and it didn't work.) My hunch is that the finer-resolution models are not processed and just keep hanging.
A small note: I think the issue here is my `rolling_window_statistics` preprocessor, because the `iris` function it uses realizes the data. I processed everything quite well without `rolling_window_statistics` on 20 cpus.
I'm not sure what's the best way of handling that here. For now, I will process the data as I did it before, but someone might be interested in it.
[main_log_debug_tx3x.txt](https://github.com/ESMValGroup/ESMValCore/files/10561559/main_log_debug_tx3x.txt)
Here are the computers specifications if that matters:

|
1.0
|
Possibly running out of memory when using a lot of preprocessors. - Hi all,
In the July 2022 Meeting, I raised an issue, that I can't process the data with a lot of preprocessors (see https://github.com/ESMValGroup/Community/discussions/33#discussioncomment-3121419). When I tried it back there, it all seemed to work just fine, however, I finally ran into a problem.
I was running [this recipe](https://github.com/ESMValGroup/ESMValCore/files/10561556/recipe_bc_extremes_tx3x.txt) (it looks ugly, since it's still being developed, so please no judgement), and it works OK for the groups 'all', 'obs_abs' and 'obs_ano', but it can't process 'nat' and 'ssp245'. I thought, OK, I'll split the recipe, process 'nat' and 'ssp245' separately in their own recipes, and recombine them for the diagnostic, but no, I seem to not be able to process them either. The computers I am working on allow only 6h jobs, but there aren't many limitations on the number of checked-out processors. For this job, I checked out 27 cpus and allocated 180Gb of memory, and I don't allow more than one process per cpu. (I also tried 20, and it didn't work.) My hunch is that the finer-resolution models are not processed and just keep hanging.
A small note: I think the issue here is my `rolling_window_statistics` preprocessor, because the `iris` function it uses realizes the data. I processed everything quite well without `rolling_window_statistics` on 20 cpus.
I'm not sure what's the best way of handling that here. For now, I will process the data as I did it before, but someone might be interested in it.
[main_log_debug_tx3x.txt](https://github.com/ESMValGroup/ESMValCore/files/10561559/main_log_debug_tx3x.txt)
Here are the computers specifications if that matters:

|
process
|
possibly running out of memory when using a lot of preprocessors hi all in the july meeting i raised an issue that i can t process the data with a lot of preprocessors see when i tried it back there it all seemed to work just fine however i finally ran into a problem i was running it looks ugly since it s still being developed so please no judgement and it works ok for the groups all obs abs and obs ano but it can t process nat and i thought ok i split the recipe and process separately nat and in their own recipes and recombine them for the diagnostic but no i seem to not be able to process them either the computers i am working on allow only jobs but there s not much limitations for the number of the checked out processors for this job i checked out cpus and allocated of memory i don t allow more than one process per cpu i also tried and it didn t work my hunch is that the more fine resolution models are not processed and just keep hanging a small note i think the issue here is my preprocessor rolling window statistics because i think the iris function it uses realizes data i processed everything quite well without rolling window statistics on cpus i m not sure what s the best way of handling that here for now i will process the data as i did it before but someone might be interested in it here are the computers specifications if that matters
| 1
|
80,009
| 3,549,528,513
|
IssuesEvent
|
2016-01-20 18:22:08
|
GalliumOS/galliumos-distro
|
https://api.github.com/repos/GalliumOS/galliumos-distro
|
closed
|
Notifications Behind Full Screen Applications
|
bug priority:medium
|
Notifications for volume, brightness and battery are not visible when running applications in full screen. I found this xfce bug report and was wondering if the patch could be included with galliumos.
https://bugzilla.xfce.org/show_bug.cgi?id=7928
|
1.0
|
Notifications Behind Full Screen Applications - Notifications for volume, brightness and battery are not visible when running applications in full screen. I found this xfce bug report and was wondering if the patch could be included with galliumos.
https://bugzilla.xfce.org/show_bug.cgi?id=7928
|
non_process
|
notifications behind full screen applications notifications for volume brightness and battery are not visible when running applications in full screen i found this xfce bug report and was wondering if the patch could be included with galliumos
| 0
|
76,857
| 15,496,218,048
|
IssuesEvent
|
2021-03-11 02:16:30
|
hiucimon/react-hooks-redux-template
|
https://api.github.com/repos/hiucimon/react-hooks-redux-template
|
opened
|
CVE-2019-10744 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash.template-4.4.0.tgz</b>, <b>lodash-es-4.17.11.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash.template-4.4.0.tgz</b></p></summary>
<p>The lodash method `_.template` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.template/-/lodash.template-4.4.0.tgz">https://registry.npmjs.org/lodash.template/-/lodash.template-4.4.0.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash.template/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.3.tgz (Root Library)
- postcss-preset-env-6.3.1.tgz
- postcss-initial-3.0.0.tgz
- :x: **lodash.template-4.4.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-es-4.17.11.tgz</b></p></summary>
<p>Lodash exported as ES modules.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.11.tgz">https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.11.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash-es/package.json</p>
<p>
Dependency Hierarchy:
- redux-3.7.2.tgz (Root Library)
- :x: **lodash-es-4.17.11.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- react-redux-4.4.10.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10744 (High) detected in multiple libraries - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash.template-4.4.0.tgz</b>, <b>lodash-es-4.17.11.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash.template-4.4.0.tgz</b></p></summary>
<p>The lodash method `_.template` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.template/-/lodash.template-4.4.0.tgz">https://registry.npmjs.org/lodash.template/-/lodash.template-4.4.0.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash.template/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.3.tgz (Root Library)
- postcss-preset-env-6.3.1.tgz
- postcss-initial-3.0.0.tgz
- :x: **lodash.template-4.4.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-es-4.17.11.tgz</b></p></summary>
<p>Lodash exported as ES modules.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.11.tgz">https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.11.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash-es/package.json</p>
<p>
Dependency Hierarchy:
- redux-3.7.2.tgz (Root Library)
- :x: **lodash-es-4.17.11.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /react-hooks-redux-template/package.json</p>
<p>Path to vulnerable library: react-hooks-redux-template/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- react-redux-4.4.10.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries lodash template tgz lodash es tgz lodash tgz lodash template tgz the lodash method template exported as a module library home page a href path to dependency file react hooks redux template package json path to vulnerable library react hooks redux template node modules lodash template package json dependency hierarchy react scripts tgz root library postcss preset env tgz postcss initial tgz x lodash template tgz vulnerable library lodash es tgz lodash exported as es modules library home page a href path to dependency file react hooks redux template package json path to vulnerable library react hooks redux template node modules lodash es package json dependency hierarchy redux tgz root library x lodash es tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file react hooks redux template package json path to vulnerable library react hooks redux template node modules lodash package json dependency hierarchy react redux tgz root library x lodash tgz vulnerable library vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template step up your open source security game with whitesource
| 0
|
165,374
| 12,839,200,237
|
IssuesEvent
|
2020-07-07 18:53:05
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Kubernetes failed to start after installation throwing errors
|
kind/failing-test sig/node triage/support
|
<!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI -->
**Which jobs are failing**:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) (Result: exit-code) since Tue 2020-07-07 14:13:10 EDT; 11s ago
Docs: https://kubernetes.io/docs/
Process: 58112 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 58112 (code=exited, status=255)
Jul 07 14:13:08 master-node systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 07 14:13:08 master-node systemd[1]: Unit kubelet.service entered failed state.
Jul 07 14:13:08 master-node systemd[1]: kubelet.service failed.
Jul 07 14:13:10 master-node systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
**Which test(s) are failing**:
I did below steps to fix the above errors
swapoff -a
kubeadm reset
Still I encountered errors
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:
Please help here to fix this issue
|
1.0
|
Kubernetes failed to start after installation throwing errors - <!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI -->
**Which jobs are failing**:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) (Result: exit-code) since Tue 2020-07-07 14:13:10 EDT; 11s ago
Docs: https://kubernetes.io/docs/
Process: 58112 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 58112 (code=exited, status=255)
Jul 07 14:13:08 master-node systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 07 14:13:08 master-node systemd[1]: Unit kubelet.service entered failed state.
Jul 07 14:13:08 master-node systemd[1]: kubelet.service failed.
Jul 07 14:13:10 master-node systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
**Which test(s) are failing**:
I did below steps to fix the above errors
swapoff -a
kubeadm reset
Still I encountered errors
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
**Anything else we need to know**:
Please help here to fix this issue
|
non_process
|
kubernetes failed to start after installation throwing errors which jobs are failing kubelet service kubelet the kubernetes node agent loaded loaded usr lib systemd system kubelet service enabled vendor preset disabled drop in usr lib systemd system kubelet service d └─ kubeadm conf active inactive dead result exit code since tue edt ago docs process execstart usr bin kubelet kubelet kubeconfig args kubelet config args kubelet kubeadm args kubelet extra args code exited status main pid code exited status jul master node systemd kubelet service main process exited code exited status n a jul master node systemd unit kubelet service entered failed state jul master node systemd kubelet service failed jul master node systemd stopped kubelet the kubernetes node agent which test s are failing i did below steps to fix the above errors swapoff a kubeadm reset still i encountered errors since when has it been failing testgrid link reason for failure anything else we need to know please help here to fix this issue
| 0
|
390,629
| 11,551,087,199
|
IssuesEvent
|
2020-02-19 00:19:37
|
googleapis/java-spanner-jdbc
|
https://api.github.com/repos/googleapis/java-spanner-jdbc
|
closed
|
Synthesis failed for java-spanner-jdbc
|
api: spanner autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate java-spanner-jdbc. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 22, in <module>
templates = common_templates.java_library()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 75, in java_library
return self._generic_library("java_library", **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 43, in _generic_library
if not kwargs["metadata"]["samples"]:
KeyError: 'samples'
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/7769bc45-4de7-4c4e-8e43-b07e6b6eba12).
|
1.0
|
Synthesis failed for java-spanner-jdbc - Hello! Autosynth couldn't regenerate java-spanner-jdbc. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--']
synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py.
On branch autosynth
nothing to commit, working tree clean
HEAD detached at FETCH_HEAD
nothing to commit, working tree clean
synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/synth.py", line 22, in <module>
templates = common_templates.java_library()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 75, in java_library
return self._generic_library("java_library", **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/gcp/common.py", line 43, in _generic_library
if not kwargs["metadata"]["samples"]:
KeyError: 'samples'
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/7769bc45-4de7-4c4e-8e43-b07e6b6eba12).
|
non_process
|
synthesis failed for java spanner jdbc hello autosynth couldn t regenerate java spanner jdbc broken heart here s the output from running synth py cloning into working repo switched to branch autosynth running synthtool synthtool executing tmpfs src git autosynth working repo synth py on branch autosynth nothing to commit working tree clean head detached at fetch head nothing to commit working tree clean synthtool wrote metadata to synth metadata traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo synth py line in templates common templates java library file tmpfs src git autosynth env lib site packages synthtool gcp common py line in java library return self generic library java library kwargs file tmpfs src git autosynth env lib site packages synthtool gcp common py line in generic library if not kwargs keyerror samples synthesis failed google internal developers can see the full log
| 0
|
14,257
| 17,192,666,123
|
IssuesEvent
|
2021-07-16 13:15:09
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] Add algorithms for raising warnings and exceptions from models
|
3.14 Automatic new feature Processing Alg
|
Original commit: https://github.com/qgis/QGIS/commit/5f533e561c903e37fb6d6498d62278a3ee3b9669 by nyalldawson
These algorithms raise either a custom warning in the processing log, OR raise
an exception which causes the model execution to terminate.
An optional condition expression can be specified to control whether or not
the warning/exception is raised, allowing logic like "if the output layer from
another algorithm contains more than 10 features, then abort the model execution"
Sponsored by Fisel + König
|
1.0
|
[FEATURE][processing] Add algorithms for raising warnings and exceptions from models - Original commit: https://github.com/qgis/QGIS/commit/5f533e561c903e37fb6d6498d62278a3ee3b9669 by nyalldawson
These algorithms raise either a custom warning in the processing log, OR raise
an exception which causes the model execution to terminate.
An optional condition expression can be specified to control whether or not
the warning/exception is raised, allowing logic like "if the output layer from
another algorithm contains more than 10 features, then abort the model execution"
Sponsored by Fisel + König
|
process
|
add algorithms for raising warnings and exceptions from models original commit by nyalldawson these algorithms raise either a custom warning in the processing log or raise an exception which causes the model execution to terminate an optional condition expression can be specified to control whether or not the warning exception is raised allowing logic like if the output layer from another algorithm contains more than features then abort the model execution sponsored by fisel könig
| 1
|
13,463
| 15,950,007,931
|
IssuesEvent
|
2021-04-15 08:09:01
|
2020mt93213/Pune_BusRoutes
|
https://api.github.com/repos/2020mt93213/Pune_BusRoutes
|
closed
|
Feature: Buses in motion : Need different filename for distinct timestamps
|
enhancement process
|
## Feature: Buses in motion
Image names should be appended with timestamp.
This will help in sequential arrangements of source images to generate a GIF output
### Sample -
1. for 03 April 2021 01:51:00AM filename should be 20210403015100.png
|
1.0
|
Feature: Buses in motion : Need different filename for distinct timestamps - ## Feature: Buses in motion
Image names should be appended with timestamp.
This will help in sequential arrangements of source images to generate a GIF output
### Sample -
1. for 03 April 2021 01:51:00AM filename should be 20210403015100.png
|
process
|
feature buses in motion need different filename for distinct timestamps feature buses in motion image names should be appended with timestamp this will help in sequential arrangements of source images to generate a gif output sample for april filename should be png
| 1
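The naming rule in the record above (03 April 2021 01:51:00 AM becoming `20210403015100.png`) can be sketched with Python's standard `strftime`; the function name here is illustrative, not the repository's actual code:

```python
from datetime import datetime

def frame_filename(captured_at: datetime) -> str:
    """Build a PNG filename from a capture timestamp.

    Zero-padded YYYYMMDDHHMMSS makes lexicographic order match
    chronological order, so source frames sort correctly for GIF assembly.
    """
    return captured_at.strftime("%Y%m%d%H%M%S") + ".png"

# 03 April 2021, 01:51:00 AM -> "20210403015100.png"
print(frame_filename(datetime(2021, 4, 3, 1, 51, 0)))
```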
|
85,223
| 10,432,703,679
|
IssuesEvent
|
2019-09-17 11:57:09
|
vtex-apps/io-documentation
|
https://api.github.com/repos/vtex-apps/io-documentation
|
closed
|
vtex-apps/store-component-template has no documentation yet
|
no-documentation
|
[vtex-apps/store-component-template](https://github.com/vtex-apps/store-component-template) hasn't created any README file yet or is not using Docs Builder
|
1.0
|
vtex-apps/store-component-template has no documentation yet - [vtex-apps/store-component-template](https://github.com/vtex-apps/store-component-template) hasn't created any README file yet or is not using Docs Builder
|
non_process
|
vtex apps store component template has no documentation yet hasn t created any readme file yet or is not using docs builder
| 0
|
11,045
| 13,864,863,122
|
IssuesEvent
|
2020-10-16 02:40:49
|
aws-cloudformation/cloudformation-cli
|
https://api.github.com/repos/aws-cloudformation/cloudformation-cli
|
closed
|
"Model name conflict" when embedded objects share same base name
|
enhancement schema processing
|
When subobjects within an object share the same parent name, their values conflict when the rewriting occurs. This should ideally generate a definition name that doesn't conflict with the other (perhaps with an iterator?).
Example debug log and schema attached.
[debug-log.txt](https://github.com/aws-cloudformation/cloudformation-cli/files/4348176/debug-log.txt)
[terraform-aws-alblistener.json.txt](https://github.com/aws-cloudformation/cloudformation-cli/files/4348179/terraform-aws-alblistener.json.txt)
|
1.0
|
"Model name conflict" when embedded objects share same base name - When subobjects within an object share the same parent name, their values conflict when the rewriting occurs. This should ideally generate a definition name that doesn't conflict with the other (perhaps with an iterator?).
Example debug log and schema attached.
[debug-log.txt](https://github.com/aws-cloudformation/cloudformation-cli/files/4348176/debug-log.txt)
[terraform-aws-alblistener.json.txt](https://github.com/aws-cloudformation/cloudformation-cli/files/4348179/terraform-aws-alblistener.json.txt)
|
process
|
model name conflict when embedded objects share same base name when subobjects within an object share the same parent name their values conflict when the rewriting occurs this should ideally generate a definition name that doesn t conflict with the other perhaps with an iterator example debug log and schema attached
| 1
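The fix the issue proposes — suffixing an iterator onto colliding definition names rather than failing — could look roughly like the sketch below. The function and naming scheme are hypothetical, not the actual cloudformation-cli implementation:

```python
def unique_model_name(base: str, taken: set) -> str:
    """Return base if unused; otherwise base2, base3, ... (first free suffix)."""
    if base not in taken:
        taken.add(base)
        return base
    i = 2
    while f"{base}{i}" in taken:
        i += 1
    name = f"{base}{i}"
    taken.add(name)
    return name

taken = set()
print(unique_model_name("DefaultAction", taken))  # DefaultAction
print(unique_model_name("DefaultAction", taken))  # DefaultAction2
```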
|
544,525
| 15,894,326,182
|
IssuesEvent
|
2021-04-11 09:55:27
|
AY2021S2-CS2103T-T12-3/tp
|
https://api.github.com/repos/AY2021S2-CS2103T-T12-3/tp
|
closed
|
Refactor code to not use SortedFilteredPersonsList
|
priority.High type.Bug
|
View is using `SortedFilteredPersonsList` whereas commands are operating on `FilteredPersonsList`
|
1.0
|
Refactor code to not use SortedFilteredPersonsList - View is using `SortedFilteredPersonsList` whereas commands are operating on `FilteredPersonsList`
|
non_process
|
refactor code to not use sortedfilteredpersonslist view is using sortedfilteredpersonslist whereas commands are operating on filteredpersonslist
| 0
|
17,912
| 3,013,586,259
|
IssuesEvent
|
2015-07-29 09:52:48
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
InterfaceX updateWorkitemData Problem
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. develop a custom service
2. override the handleWorkItemStatusChangeEvent method and update the workitem
data when the status of workitem changes from Fired to Executing.
What is the expected output? What do you see instead?
I expect that the workitem could be started afterwards by updated value.
However, it does not.
The value in work_items table is updated, but the data is not updated in
rs_workitemcache table. I guess there is a problem in updating cache data in
YAWL.
Please use labels and text to provide additional information.
my code:
@Override public void handleWorkItemStatusChangeEvent(WorkItemRecord workItem,
String oldStatus, String newStatus) {
if( oldStatus.equalsIgnoreCase("Fired") && newStatus.equalsIgnoreCase("Executing"))
updateWorkItemData(workItem);
}
I also posted a topic here:
http://www.yawlfoundation.org/forum/viewtopic.php?f=6&t=888&sid=cdefed5402f1ebd3667b211478bd0a58
but after a lot of effort with yawl, I think it should be a bug!
```
Original issue reported on code.google.com by `Jalali.A...@gmail.com` on 7 Feb 2013 at 9:03
|
1.0
|
InterfaceX updateWorkitemData Problem - ```
What steps will reproduce the problem?
1. develop a custom service
2. override the handleWorkItemStatusChangeEvent method and update the workitem
data when the status of workitem changes from Fired to Executing.
What is the expected output? What do you see instead?
I expect that the workitem could be started afterwards by updated value.
However, it does not.
The value in work_items table is updated, but the data is not updated in
rs_workitemcache table. I guess there is a problem in updating cache data in
YAWL.
Please use labels and text to provide additional information.
my code:
@Override public void handleWorkItemStatusChangeEvent(WorkItemRecord workItem,
String oldStatus, String newStatus) {
if( oldStatus.equalsIgnoreCase("Fired") && newStatus.equalsIgnoreCase("Executing"))
updateWorkItemData(workItem);
}
I also posted a topic here:
http://www.yawlfoundation.org/forum/viewtopic.php?f=6&t=888&sid=cdefed5402f1ebd3667b211478bd0a58
but after a lot of effort with yawl, I think it should be a bug!
```
Original issue reported on code.google.com by `Jalali.A...@gmail.com` on 7 Feb 2013 at 9:03
|
non_process
|
interfacex updateworkitemdata problem what steps will reproduce the problem develop a custom service override the handleworkitemstatuschangeevent method and update the workitem data when the status of workitem changes from fired to executing what is the expected output what do you see instead i expect that the workitem could be started afterwards by updated value however it does not the value in work items table is updated but the data is not updated in rs workitemcache table i guess there is a problem in updating cache data in yawl please use labels and text to provide additional information my code override public void handleworkitemstatuschangeevent workitemrecord workitem string oldstatus string newstatus if oldstatus equalsignorecase fired newstatus equalsignorecase executing updateworkitemdata workitem i also posted a topic here but after a lot of effort with yawl i think it should be a bug original issue reported on code google com by jalali a gmail com on feb at
| 0
|
19,711
| 26,053,749,345
|
IssuesEvent
|
2022-12-22 21:50:03
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
closed
|
Provide a type conversion / cast processor
|
enhancement plugin - processor
|
**Is your feature request related to a problem? Please describe.**
Some pipelines have Event values in one type (e.g. string), but want to convert them to another type (e.g. integer).
**Describe the solution you'd like**
Provide a new convert processor along with the other Mutate Event Processors.
```
processor
- convert_entries:
entries:
- from_key: "mySource"
to_key: "myTarget"
type: integer
```
The default value for `to_key` can be the `from_key`. So this could be simplified in some cases:
```
processor
- convert_entries:
entries:
- from_key: "http_status"
type: integer
```
**Additional context**
With conditional routing and expressions this can help pipeline authors perform better comparisons. It also allows for sending data to OpenSearch in a more desirable format.
See #2009 for a grok-based solution for a similar problem.
|
1.0
|
Provide a type conversion / cast processor - **Is your feature request related to a problem? Please describe.**
Some pipelines have Event values in one type (e.g. string), but want to convert them to another type (e.g. integer).
**Describe the solution you'd like**
Provide a new convert processor along with the other Mutate Event Processors.
```
processor
- convert_entries:
entries:
- from_key: "mySource"
to_key: "myTarget"
type: integer
```
The default value for `to_key` can be the `from_key`. So this could be simplified in some cases:
```
processor
- convert_entries:
entries:
- from_key: "http_status"
type: integer
```
**Additional context**
With conditional routing and expressions this can help pipeline authors perform better comparisons. It also allows for sending data to OpenSearch in a more desirable format.
See #2009 for a grok-based solution for a similar problem.
|
process
|
provide a type conversion cast processor is your feature request related to a problem please describe some pipelines have event values in one type e g string but want to convert them to another type e g integer describe the solution you d like provide a new convert processor along with the other mutate event processors processor convert entries entries from key mysource to key mytarget type integer the default value for to key can be the from key so this could be simplified in some cases processor convert entries entries from key http status type integer additional context with conditional routing and expressions this can help pipeline authors perform better comparisons it also allows for sending data to opensearch in a more desirable format see for a grok based solution for a similar problem
| 1
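The semantics the proposal describes — per-entry type casts, with `to_key` defaulting to `from_key` — can be mimicked in a few lines of Python. This is an illustration of the requested behaviour on a flat event, not Data Prepper's implementation:

```python
# Supported target types, keyed by the names used in the proposed config.
CASTS = {"integer": int, "double": float, "string": str}

def convert_entries(event: dict, entries: list) -> dict:
    """Apply the proposed convert_entries semantics to a flat event dict."""
    for entry in entries:
        from_key = entry["from_key"]
        to_key = entry.get("to_key", from_key)  # default: convert in place
        event[to_key] = CASTS[entry["type"]](event[from_key])
    return event

event = convert_entries({"http_status": "404"},
                        [{"from_key": "http_status", "type": "integer"}])
print(event)  # {'http_status': 404}
```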
|
1,762
| 4,469,240,142
|
IssuesEvent
|
2016-08-25 12:25:16
|
pelias/text-analyzer
|
https://api.github.com/repos/pelias/text-analyzer
|
closed
|
functions declared during execution
|
processed
|
I just noticed that we are declaring a bunch of functions inline during execution:
```javascript
module.exports.parse = function parse(query) {
var getAdminPartsBySplittingOnDelim = function(queryParts) {
...
var getAddressParts = function(query) {
...
```
It would be cleaner/more performant to declare these functions *once* outside of the `parse()` function and simply execute them from inside that scope, as it's currently written those functions are re-declared on every incoming HTTP request.
|
1.0
|
functions declared during execution - I just noticed that we are declaring a bunch of functions inline during execution:
```javascript
module.exports.parse = function parse(query) {
var getAdminPartsBySplittingOnDelim = function(queryParts) {
...
var getAddressParts = function(query) {
...
```
It would be cleaner/more performant to declare these functions *once* outside of the `parse()` function and simply execute them from inside that scope, as it's currently written those functions are re-declared on every incoming HTTP request.
|
process
|
functions declared during execution i just noticed that we are declaring a bunch of functions inline during execution javascript module exports parse function parse query var getadminpartsbysplittingondelim function queryparts var getaddressparts function query it would be cleaner more performant to declare these functions once outside of the parse function and simply execute them from inside that scope as it s currently written those functions are re declared on every incoming http request
| 1
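The hoisting pattern that issue asks for, sketched here in Python for brevity: helpers defined at module level are created once at import time, whereas helpers nested inside the exported function are rebuilt on every call. Names mirror the JavaScript snippet but are illustrative:

```python
# Created once, at import time.
def get_admin_parts(query_parts):
    return [p.strip() for p in query_parts]

def parse(query):
    # parse() only *calls* the module-level helper; nothing is re-declared
    # per incoming request, unlike a closure defined inside this body.
    return get_admin_parts(query.split(","))

print(parse("Lancaster, PA"))  # ['Lancaster', 'PA']
```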
|
6,390
| 9,473,795,278
|
IssuesEvent
|
2019-04-19 03:58:24
|
hackcambridge/hack-cambridge-website
|
https://api.github.com/repos/hackcambridge/hack-cambridge-website
|
opened
|
Set up Heroku Postgres instances for staging / PR apps
|
Epic: Dev process
|
This will allow us to run these apps with a DB, and also means we can run the full release script.
|
1.0
|
Set up Heroku Postgres instances for staging / PR apps - This will allow us to run these apps with a DB, and also means we can run the full release script.
|
process
|
set up heroku postgres instances for staging pr apps this will allow us to run these apps with a db and also means we can run the full release script
| 1
|
20,990
| 27,853,684,959
|
IssuesEvent
|
2023-03-20 20:48:57
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
opened
|
Support segmentation masks in MixUp layer
|
contribution-welcome preprocessing augmentation
|
This should follow the same structure as segmentation mask augmentation in our other preprocessing layers.
Treating masks like 1-channel images for the purpose of applying MixUp should be a simple and effective approach for this.
|
1.0
|
Support segmentation masks in MixUp layer - This should follow the same structure as segmentation mask augmentation in our other preprocessing layers.
Treating masks like 1-channel images for the purpose of applying MixUp should be a simple and effective approach for this.
|
process
|
support segmentation masks in mixup layer this should follow the same structure as segmentation mask augmentation in our other preprocessing layers treating masks like channel images for the purpose of applying mixup should be a simple and effective approach for this
| 1
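Treating a segmentation mask as a 1-channel image means blending it with the same coefficient used for the image pair. A dependency-free sketch on nested lists (KerasCV's actual layer operates on batched tensors):

```python
def mixup(a, b, lam):
    """Elementwise lam*a + (1-lam)*b over a 2-D grid (image or 1-channel mask)."""
    return [[lam * x + (1 - lam) * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

mask_a = [[0.0, 1.0]]
mask_b = [[1.0, 0.0]]
print(mixup(mask_a, mask_b, 0.5))  # [[0.5, 0.5]]
```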
|
22,655
| 31,895,827,985
|
IssuesEvent
|
2023-09-18 01:32:00
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - latestGeochronologicalEra
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: This is a dwciri: term.
Current Term definition: https://dwc.tdwg.org/list/#dwciri_latestGeochronologicalEra
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestGeochronologicalEra
* Term label (English, not normative): Latest Geochronological Era
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): dwciri
* Definition of the term (normative): Use to link a dwc:GeologicalContext instance to chronostratigraphic time periods at the lowest possible level in a standardized hierarchy. Use this property to point to the latest possible geological time period from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use an IRI from a controlled vocabulary. A "convenience property" that replaces Darwin Core literal-value terms related to geological context. See Section 2.7.6 of the Darwin Core RDF Guide for details.
* Examples (not normative):
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - latestGeochronologicalEra - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: This is a dwciri: term.
Current Term definition: https://dwc.tdwg.org/list/#dwciri_latestGeochronologicalEra
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): latestGeochronologicalEra
* Term label (English, not normative): Latest Geochronological Era
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): dwciri
* Definition of the term (normative): Use to link a dwc:GeologicalContext instance to chronostratigraphic time periods at the lowest possible level in a standardized hierarchy. Use this property to point to the latest possible geological time period from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use an IRI from a controlled vocabulary. A "convenience property" that replaces Darwin Core literal-value terms related to geological context. See Section 2.7.6 of the Darwin Core RDF Guide for details.
* Examples (not normative):
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term latestgeochronologicalera term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version this is a dwciri term current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes latestgeochronologicalera term label english not normative latest geochronological era organized in class e g occurrence event location taxon dwciri definition of the term normative use to link a dwc geologicalcontext instance to chronostratigraphic time periods at the lowest possible level in a standardized hierarchy use this property to point to the latest possible geological time period from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative recommended best practice is to use an iri from a controlled vocabulary a convenience property that replaces darwin core literal value terms related to geological context see section of the darwin core rdf guide for details examples not normative refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
19,699
| 26,049,472,494
|
IssuesEvent
|
2022-12-22 17:11:11
|
usdigitalresponse/usdr-gost
|
https://api.github.com/repos/usdigitalresponse/usdr-gost
|
closed
|
[Process] New user signup request.
|
process signup request
|
A partner has asked either for access to the Demo tenant, or for a new Tenant to be set up for their government. Please set this up within a week.
See https://www.jotform.com/tables/222236011470138 for details.
|
1.0
|
[Process] New user signup request. - A partner has asked either for access to the Demo tenant, or for a new Tenant to be set up for their government. Please set this up within a week.
See https://www.jotform.com/tables/222236011470138 for details.
|
process
|
new user signup request a partner has asked either for access to the demo tenant or for a new tenant to be set up for their government please set this up within a week see for details
| 1
|
3,753
| 6,733,154,208
|
IssuesEvent
|
2017-10-18 14:00:40
|
york-region-tpss/stp
|
https://api.github.com/repos/york-region-tpss/stp
|
closed
|
Load previous year's pricing data
|
process workflow
|
Load previous year's pricing data into this year's `Unit Price (LY)`.
Procedure should directly update the database column.
|
1.0
|
Load previous year's pricing data - Load previous year's pricing data into this year's `Unit Price (LY)`.
Procedure should directly update the database column.
|
process
|
load previous year s pricing data load previous year s pricing data into this year s unit price ly procedure should directly update the database column
| 1
|
455,471
| 13,127,697,595
|
IssuesEvent
|
2020-08-06 10:51:59
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
m.alibaba.com - see bug description
|
browser-firefox-mobile engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:68.0) Gecko/20100101 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/56264 -->
**URL**: https://m.alibaba.com/myalibaba.htm?templateName=wap2-my-alibaba&from=header#/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Opera
**Problem type**: Something else
**Description**: Child Porn being sold , Human Trafficking
**Steps to Reproduce**:
REPORTED CHILD ABUSE FBI CIA NEEDS TO BE INVOLVED NOW
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/8/79aeebb9-9bd9-40e7-8585-9b90c871da2d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200501050101</li><li>channel: alpha</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/8/93b77de8-9b09-48a8-8271-a3d457321cf4)
Submitted in the name of `@Watching`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
m.alibaba.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 6.0; Mobile; rv:68.0) Gecko/20100101 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/56264 -->
**URL**: https://m.alibaba.com/myalibaba.htm?templateName=wap2-my-alibaba&from=header#/
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 6.0
**Tested Another Browser**: Yes Opera
**Problem type**: Something else
**Description**: Child Porn being sold , Human Trafficking
**Steps to Reproduce**:
REPORTED CHILD ABUSE FBI CIA NEEDS TO BE INVOLVED NOW
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/8/79aeebb9-9bd9-40e7-8585-9b90c871da2d.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200501050101</li><li>channel: alpha</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/8/93b77de8-9b09-48a8-8271-a3d457321cf4)
Submitted in the name of `@Watching`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
m alibaba com see bug description url browser version firefox mobile operating system android tested another browser yes opera problem type something else description child porn being sold human trafficking steps to reproduce reported child abuse fbi cia needs to be involved now view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel alpha hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false submitted in the name of watching from with ❤️
| 0
|
75,675
| 9,308,545,802
|
IssuesEvent
|
2019-03-25 14:46:05
|
brymut/quotet.co.ke
|
https://api.github.com/repos/brymut/quotet.co.ke
|
opened
|
Develop API
|
design enhancement
|
Development of API for data delivery from database & admin console to visitor side of the website.
|
1.0
|
Develop API - Development of API for data delivery from database & admin console to visitor side of the website.
|
non_process
|
develop api development of api for data delivery from database admin console to visitor side of the website
| 0
|
19,889
| 26,335,285,374
|
IssuesEvent
|
2023-01-10 13:55:44
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
reopened
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Tue Jan 10 03:54 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3882229465)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Mon Jan 9 05:50 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3873755708)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Tue Jan 10 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3882672896)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Tue Jan 10 03:54 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3882229465)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Mon Jan 9 05:50 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3873755708)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit e61d6bb264633c720b1ce857717f4e9638f40279
Last updated: Tue Jan 10 03:45 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3882672896)**
|
process
|
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated mon jan pst ✅ nbsp integration test succeeded requested by on commit last updated tue jan pst
| 1
|
10,071
| 13,044,161,896
|
IssuesEvent
|
2020-07-29 03:47:27
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `UTCTimeWithArg` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `UTCTimeWithArg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `UTCTimeWithArg` from TiDB -
## Description
Port the scalar function `UTCTimeWithArg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function utctimewitharg from tidb description port the scalar function utctimewitharg from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
606,666
| 18,767,427,788
|
IssuesEvent
|
2021-11-06 06:47:08
|
PokemonAutomation/ComputerControl
|
https://api.github.com/repos/PokemonAutomation/ComputerControl
|
opened
|
Investigate non-shiny hunts.
|
enhancement P4 - Low Priority
|
Moved From: https://github.com/PokemonAutomation/SwSh-Arduino/issues/11
Which of these are possible and can we do them?
- [x] Poipole and Cosmog: Stat hunting
- [ ] Keldeo: Stat and mark hunting
- [ ] Galar Birds: Stat and mark hunting
- [x] Calyrex: Stat hunting
- [x] Regis: Stat hunting
|
1.0
|
Investigate non-shiny hunts. - Moved From: https://github.com/PokemonAutomation/SwSh-Arduino/issues/11
Which of these are possible and can we do them?
- [x] Poipole and Cosmog: Stat hunting
- [ ] Keldeo: Stat and mark hunting
- [ ] Galar Birds: Stat and mark hunting
- [x] Calyrex: Stat hunting
- [x] Regis: Stat hunting
|
non_process
|
investigate non shiny hunts moved from which of these are possible and can we do them poipole and cosmog stat hunting keldeo stat and mark hunting galar birds stat and mark hunting calyrex stat hunting regis stat hunting
| 0
|
165,464
| 26,175,512,985
|
IssuesEvent
|
2023-01-02 09:13:47
|
zitadel/zitadel
|
https://api.github.com/repos/zitadel/zitadel
|
closed
|
Helpvideos ZITADEL
|
category: design
|
The idea is, to provide short videos with how-to's for specific features/workflows in zitadel (console).
for example:
- [ ] User management
- [ ] Setting up a project
- [ ] Setting up an application
- [ ] Handling 'Actions'
- [ ] How to manage Authorizations
- [ ] Setup ZITADEL with your corporate design/brand
etc.
|
1.0
|
Helpvideos ZITADEL - The idea is, to provide short videos with how-to's for specific features/workflows in zitadel (console).
for example:
- [ ] User management
- [ ] Setting up a project
- [ ] Setting up an application
- [ ] Handling 'Actions'
- [ ] How to manage Authorizations
- [ ] Setup ZITADEL with your corporate design/brand
etc.
|
non_process
|
helpvideos zitadel the idea is to provide short videos with how to s for specific features workflows in zitadel console for example user management setting up a project setting up an application handling actions how to manage authorizations setup zitadel with your corporate design brand etc
| 0
|
259,463
| 27,621,909,964
|
IssuesEvent
|
2023-03-10 01:20:51
|
nidhi7598/linux-3.0.35
|
https://api.github.com/repos/nidhi7598/linux-3.0.35
|
closed
|
CVE-2017-10911 (Medium) detected in linuxlinux-3.0.40 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2017-10911 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The make_response function in drivers/block/xen-blkback/blkback.c in the Linux kernel before 4.11.8 allows guest OS users to obtain sensitive information from host OS (or other guest OS) kernel memory by leveraging the copying of uninitialized padding fields in Xen block-interface response structures, aka XSA-216.
<p>Publish Date: 2017-07-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-10911>CVE-2017-10911</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-10911">https://nvd.nist.gov/vuln/detail/CVE-2017-10911</a></p>
<p>Release Date: 2017-07-05</p>
<p>Fix Resolution: 4.11.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-10911 (Medium) detected in linuxlinux-3.0.40 - autoclosed - ## CVE-2017-10911 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The make_response function in drivers/block/xen-blkback/blkback.c in the Linux kernel before 4.11.8 allows guest OS users to obtain sensitive information from host OS (or other guest OS) kernel memory by leveraging the copying of uninitialized padding fields in Xen block-interface response structures, aka XSA-216.
<p>Publish Date: 2017-07-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-10911>CVE-2017-10911</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-10911">https://nvd.nist.gov/vuln/detail/CVE-2017-10911</a></p>
<p>Release Date: 2017-07-05</p>
<p>Fix Resolution: 4.11.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the make response function in drivers block xen blkback blkback c in the linux kernel before allows guest os users to obtain sensitive information from host os or other guest os kernel memory by leveraging the copying of uninitialized padding fields in xen block interface response structures aka xsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
13,731
| 16,488,337,189
|
IssuesEvent
|
2021-05-24 21:42:53
|
DSpace/DSpace
|
https://api.github.com/repos/DSpace/DSpace
|
closed
|
Index-Discovery commands not listed
|
bug e/2 interface: command-line medium priority tools: processes
|
**Describe the bug**
It seems `index-discovery` is missing or omitted from the CLI list.
**To Reproduce**
Steps to reproduce the behavior:
1. When executing `[/dspace]/bin/dspace ` a list is displayed
**Expected behavior**
In the displayed list, it's missing `index-discovery`, but regardless, if you try to execute it the process script will be executed.
```
# /dspace/bin/dspace index-discovery -b
The script has started
(Re)building index from scratch.
```
|
1.0
|
Index-Discovery commands not listed - **Describe the bug**
It seems `index-discovery` is missing or omitted from the CLI list.
**To Reproduce**
Steps to reproduce the behavior:
1. When executing `[/dspace]/bin/dspace ` a list is displayed
**Expected behavior**
In the displayed list, it's missing `index-discovery`, but regardless, if you try to execute it the process script will be executed.
```
# /dspace/bin/dspace index-discovery -b
The script has started
(Re)building index from scratch.
```
|
process
|
index discovery commands not listed describe the bug it seems index discovery is missing or omitted from the cli list to reproduce steps to reproduce the behavior when executing bin dspace a list is displayed expected behavior in the displayed list it s missing index discovery but regardless if you try to execute it the process script will be executed dspace bin dspace index discovery b the script has started re building index from scratch
| 1
|
327,510
| 28,068,011,620
|
IssuesEvent
|
2023-03-29 16:51:46
|
ossf/scorecard-action
|
https://api.github.com/repos/ossf/scorecard-action
|
closed
|
Failing e2e tests - scorecard-latest-release on ossf-tests/scorecard-action-branch-protection-e2e
|
e2e automated-tests
|
Matrix: null
Repo: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/tree/main
Run: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/actions/runs/3511112490
Workflow name: scorecard-latest-release
Workflow file: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/tree/main/.github/workflows/scorecards-latest-release.yml
Trigger: schedule
Branch: main
|
1.0
|
Failing e2e tests - scorecard-latest-release on ossf-tests/scorecard-action-branch-protection-e2e - Matrix: null
Repo: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/tree/main
Run: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/actions/runs/3511112490
Workflow name: scorecard-latest-release
Workflow file: https://github.com/ossf-tests/scorecard-action-branch-protection-e2e/tree/main/.github/workflows/scorecards-latest-release.yml
Trigger: schedule
Branch: main
|
non_process
|
failing tests scorecard latest release on ossf tests scorecard action branch protection matrix null repo run workflow name scorecard latest release workflow file trigger schedule branch main
| 0
|
16,161
| 20,599,209,613
|
IssuesEvent
|
2022-03-06 01:18:29
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
opened
|
'Join attributes by field value' throws error because of field types (when there is a field name conflict)
|
Processing Bug
|
### What is the bug or the crash?
It seems field values are shifted during a 'join attributes by field value' operation.
If by chance two consecutive fields have types that don't match, QGIS can easily throw a value error, like this:
```
Feature could not be written to Joined_layer_5d3154a1_0be4_4c9d_9ce7_fef6aa794dcb:
Could not store attribute "TOTAL_PUNTAJE": Value "RESIDENCIAL" is not a number
Could not write feature into OUTPUT
Execution failed after 0.16 seconds
```
Note this only happens when a prefix is not given as parameter value.
### Steps to reproduce the issue
Base layer: ([base_data.gpkg.zip](https://github.com/qgis/QGIS/files/8191770/base_data.gpkg.zip))

Secondary layer: ([secondary_data.gpkg.zip](https://github.com/qgis/QGIS/files/8191771/secondary_data.gpkg.zip))

Now, let's join them in this way:

Oops! We get an error.

### Versions
Tested in QGIS master.
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
'Join attributes by field value' throws error because of field types (when there is a field name conflict) - ### What is the bug or the crash?
It seems field values are shifted during a 'join attributes by field value' operation.
If by chance two consecutive fields have types that don't match, QGIS can easily throw a value error, like this:
```
Feature could not be written to Joined_layer_5d3154a1_0be4_4c9d_9ce7_fef6aa794dcb:
Could not store attribute "TOTAL_PUNTAJE": Value "RESIDENCIAL" is not a number
Could not write feature into OUTPUT
Execution failed after 0.16 seconds
```
Note this only happens when a prefix is not given as parameter value.
### Steps to reproduce the issue
Base layer: ([base_data.gpkg.zip](https://github.com/qgis/QGIS/files/8191770/base_data.gpkg.zip))

Secondary layer: ([secondary_data.gpkg.zip](https://github.com/qgis/QGIS/files/8191771/secondary_data.gpkg.zip))

Now, let's join them in this way:

Oops! We get an error.

### Versions
Tested in QGIS master.
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
join attributes by field value throws error because of field types when there is a field name conflict what is the bug or the crash it seems field values are shifted during a join attributes by field value operation if by chance two consecutive fields have types that don t match qgis can easily throw a value error like this feature could not be written to joined layer could not store attribute total puntaje value residencial is not a number could not write feature into output execution failed after seconds note this only happens when a prefix is not given as parameter value steps to reproduce the issue base layer secondary layer now let s join them in this way oops we get an error versions tested in qgis master supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
11,079
| 13,920,474,064
|
IssuesEvent
|
2020-10-21 10:29:29
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
opened
|
Python error when using "translate (convert format)" and input is a VRT file
|
Bug Processing
|
QGIS 3.10.10 on Ubuntu 20.04
```
QGIS version: 3.10.10-A Coruña
QGIS code revision: 1869829378
Qt version: 5.12.8
GDAL version: 3.0.4
GEOS version: 3.8.0-CAPI-1.13.1
PROJ version: Rel. 6.3.1, February 10th, 2020
Processing algorithm…
Algorithm 'Translate (convert format)' starting…
Input parameters:
{ 'COPY_SUBDATASETS' : False, 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/home/giovanni/Downloads/bug_report_virtual_raster/virtual_raster_(wrong_histogram).vrt', 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'TARGET_CRS' : None }
GDAL command:
gdal_translate -of GTiff /home/giovanni/Downloads/bug_report_virtual_raster/virtual_raster_(wrong_histogram).vrt /tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif
GDAL command output:
**/bin/sh: 1: Syntax error: "(" unexpected**
Execution completed in 0.16 seconds
Results:
{'OUTPUT': '/tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif'}
Loading resulting layers
The following layers were not correctly generated.<ul><li>/tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
```
|
1.0
|
Python error when using "translate (convert format)" and input is a VRT file - QGIS 3.10.10 on Ubuntu 20.04
```
QGIS version: 3.10.10-A Coruña
QGIS code revision: 1869829378
Qt version: 5.12.8
GDAL version: 3.0.4
GEOS version: 3.8.0-CAPI-1.13.1
PROJ version: Rel. 6.3.1, February 10th, 2020
Processing algorithm…
Algorithm 'Translate (convert format)' starting…
Input parameters:
{ 'COPY_SUBDATASETS' : False, 'DATA_TYPE' : 0, 'EXTRA' : '', 'INPUT' : '/home/giovanni/Downloads/bug_report_virtual_raster/virtual_raster_(wrong_histogram).vrt', 'NODATA' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'TARGET_CRS' : None }
GDAL command:
gdal_translate -of GTiff /home/giovanni/Downloads/bug_report_virtual_raster/virtual_raster_(wrong_histogram).vrt /tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif
GDAL command output:
**/bin/sh: 1: Syntax error: "(" unexpected**
Execution completed in 0.16 seconds
Results:
{'OUTPUT': '/tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif'}
Loading resulting layers
The following layers were not correctly generated.<ul><li>/tmp/processing_62536c3448594153848dae0e2a08b294/38c810e10ce649cd80c0c476e9f4a655/OUTPUT.tif</li></ul>You can check the 'Log Messages Panel' in QGIS main window to find more information about the execution of the algorithm.
```
|
process
|
python error when using translate convert format and input is a vrt file qgis on ubuntu qgis version a coruña qgis code revision qt version gdal version geos version capi proj version rel february processing algorithm… algorithm translate convert format starting… input parameters copy subdatasets false data type extra input home giovanni downloads bug report virtual raster virtual raster wrong histogram vrt nodata none options output temporary output target crs none gdal command gdal translate of gtiff home giovanni downloads bug report virtual raster virtual raster wrong histogram vrt tmp processing output tif gdal command output bin sh syntax error unexpected execution completed in seconds results output tmp processing output tif loading resulting layers the following layers were not correctly generated tmp processing output tif you can check the log messages panel in qgis main window to find more information about the execution of the algorithm
| 1
|
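The GDAL command output in the record above — `/bin/sh: 1: Syntax error: "(" unexpected` — is a shell-quoting failure: the `.vrt` path contains parentheses, which `/bin/sh` parses as syntax when the command is passed as one unquoted string. A minimal sketch of the general fix, using only the filename taken from the log (`gdal_translate` itself is not invoked here):

```python
import shlex

# A path containing shell metacharacters, like the one in the log above.
path = "virtual_raster_(wrong_histogram).vrt"

# Unquoted, '(' and ')' are shell syntax; quoted, they are literal.
unsafe = f"gdal_translate -of GTiff {path} OUTPUT.tif"
safe = "gdal_translate -of GTiff {} OUTPUT.tif".format(shlex.quote(path))

# Safer still: pass an argv list to subprocess.run and skip the shell entirely.
argv = ["gdal_translate", "-of", "GTiff", path, "OUTPUT.tif"]
```

Passing an argv list to `subprocess.run` avoids the shell entirely, which is the more robust fix for code that assembles GDAL command lines from user-supplied paths.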
94,334
| 19,531,673,419
|
IssuesEvent
|
2021-12-30 18:07:18
|
google/web-stories-wp
|
https://api.github.com/repos/google/web-stories-wp
|
closed
|
Design System: Update Toggle Component to Avoid aXe Contrast False Positive
|
Accessibility Type: Code Quality Pod: Pea Package: Design System
|
## Context
While adding some no violation checks to karma, the document tab flagged a bad color contrast for the active radio option in the toggle button:

It's flagging as `color contrast of 1.59 (foreground color: #131516, background color: #373a3b,` the[ real contrast is 7.02](https://webaim.org/resources/contrastchecker/?fcolor=C8CBCC&bcolor=373A3B)

So the ask here is to take a look at the toggle button in the design system to adjust it and get the real color contrast to be picked up in aXe so we can enable this check on the entire document tab and avoid other false positives where this component is used.
You can check this by enabling the disabled test in components/inspector/karma that correlates to this ticket number.
|
1.0
|
Design System: Update Toggle Component to Avoid aXe Contrast False Positive - ## Context
While adding some no violation checks to karma, the document tab flagged a bad color contrast for the active radio option in the toggle button:

It's flagging as `color contrast of 1.59 (foreground color: #131516, background color: #373a3b,` the[ real contrast is 7.02](https://webaim.org/resources/contrastchecker/?fcolor=C8CBCC&bcolor=373A3B)

So the ask here is to take a look at the toggle button in the design system to adjust it and get the real color contrast to be picked up in aXe so we can enable this check on the entire document tab and avoid other false positives where this component is used.
You can check this by enabling the disabled test in components/inspector/karma that correlates to this ticket number.
|
non_process
|
design system update toggle component to avoid axe contrast false positive context while adding some no violation checks to karma the document tab flagged a bad color contrast for the active radio option in the toggle button it s flagging as color contrast of foreground color background color the so the ask here is to take a look at the toggle button in the design system to adjust it and get the real color contrast to be picked up in axe so we can enable this check on the entire document tab and avoid other false positives where this component is used you can check this by enabling the disabled test in components inspector karma that correlates to this ticket number
| 0
|
15,736
| 19,910,373,023
|
IssuesEvent
|
2022-01-25 16:35:22
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Unification App: Write E2E tests around "Specs Page" - New component testing project
|
stage: pending release process: tests type: chore
|
### What would you like?
Write end-to-end tests to cover the new Unification work in 10.0-release branch for "[Choose a Browser](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App around what the specs page when on a new component testing project.
### Why is this needed?
_No response_
### Other
_No response_
|
1.0
|
Unification App: Write E2E tests around "Specs Page" - New component testing project - ### What would you like?
Write end-to-end tests to cover the new Unification work in 10.0-release branch for "[Choose a Browser](https://docs.google.com/spreadsheets/d/1iPwi89aW6aYeA0VT1XOhYdAWLuScW0okrlfcL9fzh3s/edit#gid=0)" in the App around what the Specs page shows when on a new component testing project.
### Why is this needed?
_No response_
### Other
_No response_
|
process
|
unification app write tests around specs page new component testing project what would you like write end to end tests to cover the new unification work in release branch for in the app around what the specs page shows when on a new component testing project why is this needed no response other no response
| 1
|
100
| 2,537,890,503
|
IssuesEvent
|
2015-01-26 23:52:23
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
closed
|
teleport()-step (fun thought -- proposal).
|
enhancement process
|
```groovy
g.V.out.out.teleport
```
What does this do? When a `Traverser<S>` reaches `teleport()` a random step in the traversal is selected and the `Traverser<S>` is sent there to continue its walk. If `teleport()` selects itself, it is passed onto the next stage for processing (in other words, `break`).
```groovy
g.V.out.teleport.out.out.teleport.path
```
The above would yield some crazy paths (varying lengths, cycles, indeterminable halt, etc.).
I don't know the practical application, but its easy to implement with the new `BranchStep` class :D. Any good thoughts on uses? @dkuppitz @mbroecheler @spmallette ... One thought:
```groovy
g.V.out.simplePath.teleport
```
The above would yield a random, non-cycling walk. Wikipedia calls it a "loop erased random walk": http://en.wikipedia.org/wiki/Loop-erased_random_walk
|
1.0
|
teleport()-step (fun thought -- proposal). - ```groovy
g.V.out.out.teleport
```
What does this do? When a `Traverser<S>` reaches `teleport()` a random step in the traversal is selected and the `Traverser<S>` is sent there to continue its walk. If `teleport()` selects itself, it is passed onto the next stage for processing (in other words, `break`).
```groovy
g.V.out.teleport.out.out.teleport.path
```
The above would yield some crazy paths (varying lengths, cycles, indeterminable halt, etc.).
I don't know the practical application, but its easy to implement with the new `BranchStep` class :D. Any good thoughts on uses? @dkuppitz @mbroecheler @spmallette ... One thought:
```groovy
g.V.out.simplePath.teleport
```
The above would yield a random, non-cycling walk. Wikipedia calls it a "loop erased random walk": http://en.wikipedia.org/wiki/Loop-erased_random_walk
|
process
|
teleport step fun thought proposal groovy g v out out teleport what does this do when a traverser reaches teleport a random step in the traversal is selected and the traverser is sent there to continue its walk if teleport selects itself it is passed onto the next stage for processing in other words break groovy g v out teleport out out teleport path the above would yield some crazy paths varying lengths cycles indeterminable halt etc i don t know the practical application but its easy to implement with the new branchstep class d any good thoughts on uses dkuppitz mbroecheler spmallette one thought groovy g v out simplepath teleport the above would yield a random non cycling walk wikipedia calls it a loop erased random walk
| 1
|
6,796
| 9,935,487,022
|
IssuesEvent
|
2019-07-02 16:40:13
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
`run_plugins_spec` test is leaking into other tests
|
process: tests stage: needs review type: chore
|
`@ipc.send` in `run_plugins_spec` is somehow leading to failing tests later on:
https://github.com/cypress-io/cypress/blob/bbff24dc6799a2e9464eebff8e1f9ad468fa5635/packages/server/test/unit/plugins/child/run_plugins_spec.coffee#L67-L80
https://circleci.com/gh/cypress-io/cypress/123148#tests/containers/1 -
```
AssertionError: expected 'error' to equal 'load:error'
at Object.ipc.send (/root/cypress/packages/server/test/unit/plugins/child/run_plugins_spec.coffee:76:26)
at process.on (/root/cypress/packages/server/lib/plugins/child/run_plugins.js:1:1)
at emitOne (events.js:1:1)
at process.emit (events.js:1:1)
at process.emit (/root/cypress/packages/ts/node_modules/source-map-support/source-map-support.js:1:1)
at processEmit (/root/cypress/packages/server/node_modules/signal-exit/index.js:1:1)
at processEmit [as emit] (/root/cypress/packages/launcher/node_modules/signal-exit/index.js:1:1)
at process._fatalException (bootstrap_node.js:1:1)
```
https://circleci.com/gh/cypress-io/cypress/128235#tests/containers/1
|
1.0
|
`run_plugins_spec` test is leaking into other tests - `@ipc.send` in `run_plugins_spec` is somehow leading to failing tests later on:
https://github.com/cypress-io/cypress/blob/bbff24dc6799a2e9464eebff8e1f9ad468fa5635/packages/server/test/unit/plugins/child/run_plugins_spec.coffee#L67-L80
https://circleci.com/gh/cypress-io/cypress/123148#tests/containers/1 -
```
AssertionError: expected 'error' to equal 'load:error'
at Object.ipc.send (/root/cypress/packages/server/test/unit/plugins/child/run_plugins_spec.coffee:76:26)
at process.on (/root/cypress/packages/server/lib/plugins/child/run_plugins.js:1:1)
at emitOne (events.js:1:1)
at process.emit (events.js:1:1)
at process.emit (/root/cypress/packages/ts/node_modules/source-map-support/source-map-support.js:1:1)
at processEmit (/root/cypress/packages/server/node_modules/signal-exit/index.js:1:1)
at processEmit [as emit] (/root/cypress/packages/launcher/node_modules/signal-exit/index.js:1:1)
at process._fatalException (bootstrap_node.js:1:1)
```
https://circleci.com/gh/cypress-io/cypress/128235#tests/containers/1
|
process
|
run plugins spec test is leaking into other tests ipc send in run plugins spec is somehow leading to failing tests later on assertionerror expected error to equal load error at object ipc send root cypress packages server test unit plugins child run plugins spec coffee at process on root cypress packages server lib plugins child run plugins js at emitone events js at process emit events js at process emit root cypress packages ts node modules source map support source map support js at processemit root cypress packages server node modules signal exit index js at processemit root cypress packages launcher node modules signal exit index js at process fatalexception bootstrap node js
| 1
|
5,809
| 8,644,718,142
|
IssuesEvent
|
2018-11-26 04:37:53
|
gfrebello/qs-trip-planning-procedure
|
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
|
closed
|
Update Use Cases
|
Priority:High Process:Create/Update UseCase Model
|
Use Cases need to be updated so as to reflect what has been learned during the projects and possible deviations of the initial vision.
|
1.0
|
Update Use Cases - Use Cases need to be updated so as to reflect what has been learned during the projects and possible deviations of the initial vision.
|
process
|
update use cases use cases need to be updated so as to reflect what has been learned during the projects and possible deviations of the initial vision
| 1
|
492,962
| 14,223,559,385
|
IssuesEvent
|
2020-11-17 18:22:36
|
aims-group/metagrid
|
https://api.github.com/repos/aims-group/metagrid
|
closed
|
Data node status source URL is not responsive to parent container in browser
|
Platform: React Priority: Low Type: Bug
|
**Describe the bug**
A clear and concise description of what the bug is.
In the node status page, the 'Source' column is not responsive when decreasing the width of the browser. This causes it to overflow beyond the parent container.
**Desktop (please complete the following information):**
- OS: MacOS
- Browser: chrome
- Version
**To Reproduce**
Steps to reproduce the behavior:
1. Click 'Node Status' in nav bar
2. Adjust the width of the browser
3. See URLs in 'Source' column overflow
**Expected behavior**
A clear and concise description of what you expected to happen.
The URLs should be responsive and stay within the parent container's boundaries.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
|
1.0
|
Data node status source URL is not responsive to parent container in browser - **Describe the bug**
A clear and concise description of what the bug is.
In the node status page, the 'Source' column is not responsive when decreasing the width of the browser. This causes it to overflow beyond the parent container.
**Desktop (please complete the following information):**
- OS: MacOS
- Browser: chrome
- Version
**To Reproduce**
Steps to reproduce the behavior:
1. Click 'Node Status' in nav bar
2. Adjust the width of the browser
3. See URLs in 'Source' column overflow
**Expected behavior**
A clear and concise description of what you expected to happen.
The URLs should be responsive and stay within the parent container's boundaries.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
|
non_process
|
data node status source url is not responsive to parent container in browser describe the bug a clear and concise description of what the bug is in the node status page the source column is not responsive when decreasing the width of the browser this causes it to overflow beyond the parent container desktop please complete the following information os macos browser chrome version to reproduce steps to reproduce the behavior click node status in nav bar adjust the width of the browser see urls in source column overflow expected behavior a clear and concise description of what you expected to happen the urls should be responsive and stay within the parent container s boundaries screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here
| 0
|
17,132
| 22,659,456,827
|
IssuesEvent
|
2022-07-02 00:33:24
|
google/data-transfer-project
|
https://api.github.com/repos/google/data-transfer-project
|
opened
|
one-off idempotent-ID computation is obsolete with newer DownloadableItem work
|
process
|
need to continue the cleanup in #1079 by deleting IdempotentImportExecutorHelper and converting its usages to [`ImportableItem#getIdempotentId`](https://github.com/google/data-transfer-project/blob/1fdf324b7be73a7bcb1e9f6f9ec5166217a861db/portability-types-common/src/main/java/org/datatransferproject/types/common/ImportableItem.java#L11).
essentially we're switching now from having one-off implementations of ID building to an interface (I'm filing this because it's actually important to me as a open-closed-principle issue, since I'm extending via a new Media model and don't want to keep modifying the [`IdempotentImportExecutorHelper` class](https://github.com/google/data-transfer-project/blob/02e9918a0182795666b21414f296a6c256eed963/portability-spi-transfer/src/main/java/org/datatransferproject/spi/transfer/idempotentexecutor/IdempotentImportExecutorHelper.java)).
|
1.0
|
one-off idempotent-ID computation is obsolete with newer DownloadableItem work - need to continue the cleanup in #1079 by deleting IdempotentImportExecutorHelper and converting its usages to [`ImportableItem#getIdempotentId`](https://github.com/google/data-transfer-project/blob/1fdf324b7be73a7bcb1e9f6f9ec5166217a861db/portability-types-common/src/main/java/org/datatransferproject/types/common/ImportableItem.java#L11).
essentially we're switching now from having one-off implementations of ID building to an interface (I'm filing this because it's actually important to me as a open-closed-principle issue, since I'm extending via a new Media model and don't want to keep modifying the [`IdempotentImportExecutorHelper` class](https://github.com/google/data-transfer-project/blob/02e9918a0182795666b21414f296a6c256eed963/portability-spi-transfer/src/main/java/org/datatransferproject/spi/transfer/idempotentexecutor/IdempotentImportExecutorHelper.java)).
|
process
|
one off idempotent id computation is obsolete with newer downloadableitem work need to continue the cleanup in by deleting idempotentimportexecutorhelper and converting its usages to essentially we re switching now from having one off implementations of id building to an interface i m filing this because it s actually important to me as a open closed principle issue since i m extending via a new media model and don t want to keep modifying the
| 1
|
19,891
| 26,339,921,498
|
IssuesEvent
|
2023-01-10 16:53:56
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
rh850/e1m-s2 file will not disassemble properly
|
Feature: Processor/v850 Status: Internal
|
Hello
I hope you are well.
I am trying to disassemble a Renesas rh850/e1m-s2 processor file. I am using the v850 language. Most of the file disassembles correctly. There are some instructions that do not disassemble. See the attached code below. Line 8043d66c.
804d334c 40 96 80 00 movhi 0x80 ,r0,r18
804d3350 f2 99 cmp r18 ,r19
804d3352 96 3d blt LAB_804d33c4
804d3354 04 8d sld.w 0x8 [ep],r17
804d3356 e0 0f 42 94 cvtf.ws r1,r18
804d335a 40 86 80 3f movhi 0x3f80 ,r0,r16
804d335e e6 8f 20 74 cmpf.s le,r17 ,r6,0x0
804d3362 f0 97 62 84 subf.s r16 ,r18 ,r16
804d3366 04 95 sld.w 0x8 [ep],r18
804d3368 e0 07 00 04 trfsr 0x0
804d336c f0 ?? F0h
804d336d 9f ?? 9Fh
804d336e e0 94 sst.h r18 ,0xc0 [ep]
804d3370 b2 05 be LAB_804d3376
804d3372 06 55 sld.w 0xc [ep],r10
804d3374 7f 00 jmp [lp]
|
1.0
|
rh850/e1m-s2 file will not disassemble properly - Hello
I hope you are well.
I am trying to disassemble a Renesas rh850/e1m-s2 processor file. I am using the v850 language. Most of the file disassembles correctly. There are some instructions that do not disassemble. See the attached code below. Line 8043d66c.
804d334c 40 96 80 00 movhi 0x80 ,r0,r18
804d3350 f2 99 cmp r18 ,r19
804d3352 96 3d blt LAB_804d33c4
804d3354 04 8d sld.w 0x8 [ep],r17
804d3356 e0 0f 42 94 cvtf.ws r1,r18
804d335a 40 86 80 3f movhi 0x3f80 ,r0,r16
804d335e e6 8f 20 74 cmpf.s le,r17 ,r6,0x0
804d3362 f0 97 62 84 subf.s r16 ,r18 ,r16
804d3366 04 95 sld.w 0x8 [ep],r18
804d3368 e0 07 00 04 trfsr 0x0
804d336c f0 ?? F0h
804d336d 9f ?? 9Fh
804d336e e0 94 sst.h r18 ,0xc0 [ep]
804d3370 b2 05 be LAB_804d3376
804d3372 06 55 sld.w 0xc [ep],r10
804d3374 7f 00 jmp [lp]
|
process
|
file will not disassemble properly hello i hope you are well i am trying to disassemble a renesas processor file i am using the language most of the file disassembles correctly there are some instructions that do not disassemble see the attached code below line movhi cmp blt lab sld w cvtf ws movhi cmpf s le subf s sld w trfsr sst h be lab sld w jmp
| 1
|
5,617
| 8,476,084,790
|
IssuesEvent
|
2018-10-24 20:49:01
|
nion-software/nionswift
|
https://api.github.com/repos/nion-software/nionswift
|
opened
|
Processing requirements should be based on datum rank, not total rank
|
f - processing f - sequences level - easy p2 - high type - enhancement w4 - ready
|
i.e. align "sequence" should work for any 1d or 2d data.
|
1.0
|
Processing requirements should be based on datum rank, not total rank - i.e. align "sequence" should work for any 1d or 2d data.
|
process
|
processing requirements should be based on datum rank not total rank i e align sequence should work for any or data
| 1
|
285,567
| 21,524,144,593
|
IssuesEvent
|
2022-04-28 16:40:48
|
USGS-R/regional-hydrologic-forcings-ml
|
https://api.github.com/repos/USGS-R/regional-hydrologic-forcings-ml
|
opened
|
Documentation for attributes
|
documentation
|
Create a table that defines the variable names for all watershed attributes.
Most of these variables are already defined in . We can copy that table as a starting point. The next steps are:
1. Add other variables to the table
2. Remove columns that are unnecessary for the description key
3. Add a short, easy to understand description of each variable that can be used in plots
|
1.0
|
Documentation for attributes - Create a table that defines the variable names for all watershed attributes.
Most of these variables are already defined in . We can copy that table as a starting point. The next steps are:
1. Add other variables to the table
2. Remove columns that are unnecessary for the description key
3. Add a short, easy to understand description of each variable that can be used in plots
|
non_process
|
documentation for attributes create a table that defines the variable names for all watershed attributes most of these variables are already defined in we can copy that table as a starting point the next steps are add other variables to the table remove columns that are unnecessary for the description key add a short easy to understand description of each variable that can be used in plots
| 0
|
199,058
| 15,022,222,427
|
IssuesEvent
|
2021-02-01 16:42:11
|
ImagingDataCommons/IDC-WebApp
|
https://api.github.com/repos/ImagingDataCommons/IDC-WebApp
|
closed
|
Inconsistent number of items in the cohort between portal and direct BQ query
|
bug cohorts merged:dev production testing needed testing passed
|
The total number of rows in our public view is 3075233.
```
SELECT count(SOPInstanceUID) FROM `canceridc-data.idc_views.dicom_all`
```
If I select all of the collections in IDC portal, and export the resulting cohort into BQ, I get 2017036 rows.
```
SELECT count(SOPInstanceUID) FROM `canceridc-user-data.user_manifests.manifest_cohort_82_20201222_194659`
```
The underlying query for the above is just selecting all of the collections, and does not touch any other attributes.

|
2.0
|
Inconsistent number of items in the cohort between portal and direct BQ query - The total number of rows in our public view is 3075233.
```
SELECT count(SOPInstanceUID) FROM `canceridc-data.idc_views.dicom_all`
```
If I select all of the collections in IDC portal, and export the resulting cohort into BQ, I get 2017036 rows.
```
SELECT count(SOPInstanceUID) FROM `canceridc-user-data.user_manifests.manifest_cohort_82_20201222_194659`
```
The underlying query for the above is just selecting all of the collections, and does not touch any other attributes.

|
non_process
|
inconsistent number of items in the cohort between portal and direct bq query the total number of rows in our public view is select count sopinstanceuid from canceridc data idc views dicom all if i select all of the collections in idc portal and export the resulting cohort into bq i get rows select count sopinstanceuid from canceridc user data user manifests manifest cohort the underlying query for the above is just selecting all of the collections and does not touch any other attributes
| 0
|
6,140
| 9,012,019,300
|
IssuesEvent
|
2019-02-05 15:57:56
|
EthVM/ethvm
|
https://api.github.com/repos/EthVM/ethvm
|
closed
|
Tokens: Add Token Exchange Rates to API
|
enhancement milestone:1 priority:high project:ethvm project:processing project:server
|
Currently we're fetching token information from ethplorer api directly in the front-end (see #263).
This task should add to our mongo a collection called exchange-rates that our processing engine will take care of updating each 5 minutes.
Once this task is finished, we can proceed to update the front-end to update the queries (and remove axios library).
Tasks:
- [x] Add Exchange Rates to Mongo with Kafka
- [x] Add corresponding API calls
- [ ] Update front-end to query directly our collection instead of calling Ethplorer API
|
1.0
|
Tokens: Add Token Exchange Rates to API - Currently we're fetching token information from ethplorer api directly in the front-end (see #263).
This task should add to our mongo a collection called exchange-rates that our processing engine will take care of updating each 5 minutes.
Once this task is finished, we can proceed to update the front-end to update the queries (and remove axios library).
Tasks:
- [x] Add Exchange Rates to Mongo with Kafka
- [x] Add corresponding API calls
- [ ] Update front-end to query directly our collection instead of calling Ethplorer API
|
process
|
tokens add token exchange rates to api currently we re fetching token information from ethplorer api directly in the front end see this task should add to our mongo a collection called exchange rates that our processing engine will take care of updating each minutes once this task is finished we can proceed to update the front end to update the queries and remove axios library tasks add exchange rates to mongo with kafka add corresponding api calls update front end to query directly our collection instead of calling ethplorer api
| 1
|
105,193
| 22,953,875,656
|
IssuesEvent
|
2022-07-19 09:46:50
|
Azure/autorest.go
|
https://api.github.com/repos/Azure/autorest.go
|
closed
|
Add remove non-reference type feature
|
CodeGen
|
Related issue: https://github.com/Azure/autorest.go/issues/757
We found the following case will cause breaking change in mgmt. plane SDK:
1. Discriminator model change from user-defined model to swagger common model
2. Orphan model removal
3. Add new orphan model
We'd like to add feature to remove all the non-reference type to the codegen. For now, only mgmt. new packages will enable such feature, existed packages will enable such feature when swagger breaking happens. Data plane will not use this as non-reference types have other usage in some RPs.
|
1.0
|
Add remove non-reference type feature - Related issue: https://github.com/Azure/autorest.go/issues/757
We found the following case will cause breaking change in mgmt. plane SDK:
1. Discriminator model change from user-defined model to swagger common model
2. Orphan model removal
3. Add new orphan model
We'd like to add feature to remove all the non-reference type to the codegen. For now, only mgmt. new packages will enable such feature, existed packages will enable such feature when swagger breaking happens. Data plane will not use this as non-reference types have other usage in some RPs.
|
non_process
|
add remove non reference type feature related issue we found the following case will cause breaking change in mgmt plane sdk discriminator model change from user defined model to swagger common model orphan model removal add new orphan model we d like to add feature to remove all the non reference type to the codegen for now only mgmt new packages will enable such feature existed packages will enable such feature when swagger breaking happens data plane will not use this as non reference types have other usage in some rps
| 0
|
1,616
| 2,516,611,461
|
IssuesEvent
|
2015-01-16 06:07:30
|
centre-for-educational-technology/edidaktikum
|
https://api.github.com/repos/centre-for-educational-technology/edidaktikum
|
closed
|
Group member search searches among all users who are not group members
|
bug High Priority
|
I.e. the user wants to find a member of the group, but the referenced search queries among the users who are not group members:

|
1.0
|
Group member search searches among all users who are not group members - I.e. the user wants to find a member of the group, but the referenced search queries among the users who are not group members:

|
non_process
|
group member search searches among all users who are not group members i e the user wants to find a member of the group but the referenced search queries among the users who are not group members
| 0
|
3,174
| 6,226,572,390
|
IssuesEvent
|
2017-07-10 18:45:11
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
child_process,Windows: deprecate explicit use of `cmd.exe`
|
child_process windows
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: Windows
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
https://github.com/nodejs/node/blob/master/lib/child_process.js#L444 has a fallback to use `cmd.exe` in case `process.env.ComSpec` is falsy.
This is redundant, fragile, and generally covers-up an invalid state (`%ComSpec%` should always be defined and point to a valid shell executable).
This code path should be deprecated according to the guide at https://github.com/nodejs/node/blob/master/COLLABORATOR_GUIDE.md#deprecations
and https://github.com/nodejs/node/blob/master/doc/api/deprecations.md
Ref: https://github.com/nodejs/node/pull/14149#discussion_r126477309
|
1.0
|
child_process,Windows: deprecate explicit use of `cmd.exe` - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: *
* **Platform**: Windows
* **Subsystem**: child_process
<!-- Enter your issue details below this comment. -->
https://github.com/nodejs/node/blob/master/lib/child_process.js#L444 has a fallback to use `cmd.exe` in case `process.env.ComSpec` is falsy.
This is redundant, fragile, and generally covers-up an invalid state (`%ComSpec%` should always be defined and point to a valid shell executable).
This code path should be deprecated according to the guide at https://github.com/nodejs/node/blob/master/COLLABORATOR_GUIDE.md#deprecations
and https://github.com/nodejs/node/blob/master/doc/api/deprecations.md
Ref: https://github.com/nodejs/node/pull/14149#discussion_r126477309
|
process
|
child process windows deprecate explicit use of cmd exe thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform windows subsystem child process has a fallback to use cmd exe in case process env comspec is falsy this is redundant fragile and generally covers up an invalid state comspec should always be defined and point to a valid shell executable this code path should be deprecated according to the guide at and ref
| 1
|
197,237
| 22,584,830,731
|
IssuesEvent
|
2022-06-28 14:28:56
|
diennea/carapaceproxy
|
https://api.github.com/repos/diennea/carapaceproxy
|
opened
|
Jetty > Disable TRACE http method
|
Security
|
We performed a security check on the admin interface of Carapace and we noticed a vulnerability caused by the presence of the http TRACE method.
It would be advisable to disable it.
Check info:
```
THREAT:
The remote Web server supports the TRACE and/or TRACK HTTP methods, which makes it easier for remote attackers to steal cookies and
authentication credentials or bypass the HttpOnly protection mechanism.
Track / Trace are required to be disabled to be PCI compliance.
IMPACT:
If this vulnerability is successfully exploited, attackers can potentially steal cookies and authentication credentials, or bypass the HttpOnly
protection mechanism.
SOLUTION:
Disable these methods in your web server's configuration file.
```
|
True
|
Jetty > Disable TRACE http method - We performed a security check on the admin interface of Carapace and we noticed a vulnerability caused by the presence of the http TRACE method.
It would be advisable to disable it.
Check info:
```
THREAT:
The remote Web server supports the TRACE and/or TRACK HTTP methods, which makes it easier for remote attackers to steal cookies and
authentication credentials or bypass the HttpOnly protection mechanism.
Track / Trace are required to be disabled to be PCI compliance.
IMPACT:
If this vulnerability is successfully exploited, attackers can potentially steal cookies and authentication credentials, or bypass the HttpOnly
protection mechanism.
SOLUTION:
Disable these methods in your web server's configuration file.
```
|
non_process
|
jetty disable trace http method we performed a security check on the admin interface of carapace and we noticed a vulnerability caused by the presence of the http trace method it would be advisable to disable it check info threat the remote web server supports the trace and or track http methods which makes it easier for remote attackers to steal cookies and authentication credentials or bypass the httponly protection mechanism track trace are required to be disabled to be pci compliance impact if this vulnerability is successfully exploited attackers can potentially steal cookies and authentication credentials or bypass the httponly protection mechanism solution disable these methods in your web server s configuration file
| 0
|
19,147
| 25,215,758,490
|
IssuesEvent
|
2022-11-14 09:02:47
|
googleapis/google-cloud-node
|
https://api.github.com/repos/googleapis/google-cloud-node
|
opened
|
Your .repo-metadata.json files have a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* release_level must be equal to one of the allowed values in packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* api_shortname field missing from packages/gapic-node-templating/templates/bootstrap-templates/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-api-apikeys/.repo-metadata.json
* api_shortname field missing from packages/google-api-apikeys/.repo-metadata.json
* api_shortname 'asset' invalid in packages/google-cloud-asset/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-batch/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-batch/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnections/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appconnectors/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-appgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientconnectorservices/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-beyondcorp-clientgateways/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-analyticshub/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-bigquery-analyticshub/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-bigquery-dataexchange/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-bigquery-datapolicies/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-bigquery-datapolicies/.repo-metadata.json
* api_shortname 'dms' invalid in packages/google-cloud-clouddms/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-contentwarehouse/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-contentwarehouse/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-discoveryengine/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-discoveryengine/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-gkemulticloud/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-gkemulticloud/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-cloud-security-publicca/.repo-metadata.json
* api_shortname field missing from packages/google-cloud-security-publicca/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-iam/.repo-metadata.json
* api_shortname field missing from packages/google-iam/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-maps-addressvalidation/.repo-metadata.json
* api_shortname field missing from packages/google-maps-addressvalidation/.repo-metadata.json
* release_level must be equal to one of the allowed values in packages/google-maps-routing/.repo-metadata.json
* api_shortname field missing from packages/google-maps-routing/.repo-metadata.json
☝️ Once you address these problems, you can close this issue.
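The two recurring findings above (`release_level` outside the allowed values, and a missing `api_shortname` field) can be pre-checked locally before pushing. Below is a minimal sketch of such a check; the allowed `release_level` values (`stable`, `preview`) are an assumption here — confirm them against the schema definition linked under "Need help?".

```python
import json

# Assumed allowed values; verify against repo-metadata-schema.json.
ALLOWED_RELEASE_LEVELS = {"stable", "preview"}

def lint_repo_metadata(path):
    """Return a list of problems found in one .repo-metadata.json file,
    mirroring the two checks reported in the scan above."""
    with open(path) as f:
        metadata = json.load(f)

    problems = []
    if metadata.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append(
            f"release_level must be equal to one of the allowed values in {path}"
        )
    if "api_shortname" not in metadata:
        problems.append(f"api_shortname field missing from {path}")
    return problems
```

Running this over each flagged `packages/*/.repo-metadata.json` path should reproduce the corresponding lines of the scan result.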
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.
index: 1.0
label: process
binary_label: 1