Dataset schema (column name, dtype, value statistics):

- `Unnamed: 0` (int64): 0 to 832k
- `id` (float64): 2.49B to 32.1B
- `type` (string): 1 distinct value
- `created_at` (string): length 19
- `repo` (string): length 7 to 112
- `repo_url` (string): length 36 to 141
- `action` (string): 3 distinct values
- `title` (string): length 1 to 744
- `labels` (string): length 4 to 574
- `body` (string): length 9 to 211k
- `index` (string): 10 distinct values
- `text_combine` (string): length 96 to 211k
- `label` (string): 2 distinct values
- `text` (string): length 96 to 188k
- `binary_label` (int64): 0 or 1

Each record below gives one value per column, in this order.
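The `label`/`binary_label` pairing is consistent across the rows below: `binary_label` is 1 when `label` is `process` and 0 when it is `non_process`. A minimal sketch of that invariant, using one record's scalar fields as a plain dict (values copied from the first row of the dump):

```python
# Scalar fields of the first record, copied from the dump below.
record = {
    "type": "IssuesEvent",
    "created_at": "2023-01-27 02:37:08",
    "repo": "RalphHightower/RalphHightower",
    "action": "opened",
    "label": "non_process",
    "binary_label": 0,
}

def label_is_consistent(rec: dict) -> bool:
    # binary_label should mirror the string label: process -> 1, non_process -> 0.
    return rec["binary_label"] == (1 if rec["label"] == "process" else 0)

print(label_is_consistent(record))  # -> True
```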
Unnamed: 0: 386,558
id: 26,689,588,351
type: IssuesEvent
created_at: 2023-01-27 02:37:08
repo: RalphHightower/RalphHightower
repo_url: https://api.github.com/repos/RalphHightower/RalphHightower
action: opened
title: MurdaughAlex_TimeLine: P&C Podcasts
labels: documentation
body:
**What page should this be added to?**<br>
MurdaughAlex_TimeLine.md
**What section/heading should this be added to?**<br>
Understanding Murdaugh Podcasts
**Include the Markdown text that is to be added below:**<br>
| JANUARY 25TH, 2023 | 18 | [Murder trial officially begins with opening statements](https://understand-murdaugh.simplecast.com/episodes/murder-trial-officially-begins-with-opening-statements) |
| JANUARY 26TH, 2023 | 19 | [Understand Murdaugh: First witnesses in Alex Murdaugh murder trial take the stand](https://understand-murdaugh.simplecast.com/episodes/understand-murdaugh-first-witnesses-in-alex-murdaugh-murder-trial-take-the-stand) |
**Describe alternatives you've considered**<br>
Bookmarks in browsers are not portable.
**Additional context**<br>
Add any other context or screenshots about the feature request here.
index: 1.0
text_combine:
MurdaughAlex_TimeLine: P&C Podcasts - **What page should this be added to?**<br>
MurdaughAlex_TimeLine.md
**What section/heading should this be added to?**<br>
Understanding Murdaugh Podcasts
**Include the Markdown text that is to be added below:**<br>
| JANUARY 25TH, 2023 | 18 | [Murder trial officially begins with opening statements](https://understand-murdaugh.simplecast.com/episodes/murder-trial-officially-begins-with-opening-statements) |
| JANUARY 26TH, 2023 | 19 | [Understand Murdaugh: First witnesses in Alex Murdaugh murder trial take the stand](https://understand-murdaugh.simplecast.com/episodes/understand-murdaugh-first-witnesses-in-alex-murdaugh-murder-trial-take-the-stand) |
**Describe alternatives you've considered**<br>
Bookmarks in browsers are not portable.
**Additional context**<br>
Add any other context or screenshots about the feature request here.
label: non_process
text:
murdaughalex timeline p c podcasts what page should this be added to murdaughalex timeline md what section heading should this be added to understanding murdaugh podcasts include the markdown text that is to be added below january january describe alternatives you ve considered bookmarks in browsers are not portable additional context add any other context or screenshots about the feature request here
binary_label: 0
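Comparing `text` with `text_combine` in this and the following rows suggests the `text` column is derived by lowercasing, stripping URLs, punctuation, and digits, and collapsing whitespace. A rough reconstruction of such a cleanup step (an assumption for illustration; this is not necessarily the dataset's actual preprocessing pipeline):

```python
import re

def normalize(text: str) -> str:
    """Assumed cleanup: lowercase, drop URLs, keep letters only,
    collapse whitespace - matching how `text` appears to relate
    to `text_combine` in the records."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters only
    return " ".join(text.split())              # collapse whitespace

print(normalize("MurdaughAlex_TimeLine: P&C Podcasts"))
# -> murdaughalex timeline p c podcasts
```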
Unnamed: 0: 7,574
id: 10,685,851,408
type: IssuesEvent
created_at: 2019-10-22 13:27:11
repo: linnovate/root
repo_url: https://api.github.com/repos/linnovate/root
action: closed
title: creating a task in my tasks doesnt assign the user as the assignee automatically
labels: 2.0.7 Fixed Process bug
body:
go to my tasks
create a task
the user that created the task isnt assigned to it automatically
index: 1.0
text_combine:
creating a task in my tasks doesnt assign the user as the assignee automatically - go to my tasks
create a task
the user that created the task isnt assigned to it automatically
label: process
text:
creating a task in my tasks doesnt assign the user as the assignee automatically go to my tasks create a task the user that created the task isnt assigned to it automatically
binary_label: 1
Unnamed: 0: 6,774
id: 9,913,991,491
type: IssuesEvent
created_at: 2019-06-28 13:21:33
repo: googleapis/google-cloud-java
repo_url: https://api.github.com/repos/googleapis/google-cloud-java
action: closed
title: mvn versions:set warns duplicate dependencies declaration
labels: dependencies type: process
body:
(Because usually I don't need to run "mvn versions:set", this is not blocking me)
#### Environment details
Git HEAD
#### Steps to reproduce
Run `mvn versions:set -DnextSnapshot=true -B` at root directory
#### Code example
N/A
#### Stack trace
```
suztomo@suxtomo24:~/google-cloud-java/google-cloud-clients$ mvn versions:set -DnextSnapshot=true -B
[INFO] Scanning for projects...
[INFO] Downloading from google-maven-central: https://maven-central.storage-download.googleapis.com/repos/central/data/com/google/cloud/google-cloud-bom/0.97.0-alpha/google-cloud-bom-0.97.0-alpha.pom
[INFO] Downloaded from google-maven-central: https://maven-central.storage-download.googleapis.com/repos/central/data/com/google/cloud/google-cloud-bom/0.97.0-alpha/google-cloud-bom-0.97.0-alpha.pom (56 kB at 141 kB/s)
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.google.cloud:google-cloud-bigquerystorage:jar:0.97.0-beta
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.api:gax:jar -> duplicate declaration of version (?) @ line 88, column 21
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.api:gax-grpc:jar -> duplicate declaration of version (?) @ line 92, column 21
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
```
#### External references such as API reference guides used
google-cloud-clients/google-cloud-bigquerystorage/pom.xml

#### Any additional information below
index: 1.0
text_combine:
mvn versions:set warns duplicate dependencies declaration - (Because usually I don't need to run "mvn versions:set", this is not blocking me)
#### Environment details
Git HEAD
#### Steps to reproduce
Run `mvn versions:set -DnextSnapshot=true -B` at root directory
#### Code example
N/A
#### Stack trace
```
suztomo@suxtomo24:~/google-cloud-java/google-cloud-clients$ mvn versions:set -DnextSnapshot=true -B
[INFO] Scanning for projects...
[INFO] Downloading from google-maven-central: https://maven-central.storage-download.googleapis.com/repos/central/data/com/google/cloud/google-cloud-bom/0.97.0-alpha/google-cloud-bom-0.97.0-alpha.pom
[INFO] Downloaded from google-maven-central: https://maven-central.storage-download.googleapis.com/repos/central/data/com/google/cloud/google-cloud-bom/0.97.0-alpha/google-cloud-bom-0.97.0-alpha.pom (56 kB at 141 kB/s)
[WARNING]
[WARNING] Some problems were encountered while building the effective model for com.google.cloud:google-cloud-bigquerystorage:jar:0.97.0-beta
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.api:gax:jar -> duplicate declaration of version (?) @ line 88, column 21
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: com.google.api:gax-grpc:jar -> duplicate declaration of version (?) @ line 92, column 21
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
```
#### External references such as API reference guides used
google-cloud-clients/google-cloud-bigquerystorage/pom.xml

#### Any additional information below
label: process
text:
mvn versions set warns duplicate dependencies declaration because usually i don t need to run mvn versions set this is not blocking me environment details git head steps to reproduce run mvn versions set dnextsnapshot true b at root directory code example n a stack trace suztomo google cloud java google cloud clients mvn versions set dnextsnapshot true b scanning for projects downloading from google maven central downloaded from google maven central kb at kb s some problems were encountered while building the effective model for com google cloud google cloud bigquerystorage jar beta dependencies dependency groupid artifactid type classifier must be unique com google api gax jar duplicate declaration of version line column dependencies dependency groupid artifactid type classifier must be unique com google api gax grpc jar duplicate declaration of version line column it is highly recommended to fix these problems because they threaten the stability of your build for this reason future maven versions might no longer support building such malformed projects external references such as api reference guides used google cloud clients google cloud bigquerystorage pom xml any additional information below
binary_label: 1
Unnamed: 0: 12,009
id: 5,132,709,456
type: IssuesEvent
created_at: 2017-01-11 00:03:19
repo: Draccoz/twc
repo_url: https://api.github.com/repos/Draccoz/twc
action: closed
title: Preserve documentation
labels: bug builder parser
body:
twc should preserve element and property documentation. For elements, [Polymer documentation](https://www.polymer-project.org/1.0/docs/tools/documentation#element-summaries) gives two possible ways:
1. as HTML comment above `<dom-module>`
1. if absent, as JS comment before `Polymer()` call
With twc, there is no `<dom-module>`, just the template. It may work to always have just the JS comment, even when `<dom-module>` is used. If not, the comment will have to be moved from JS to HTML
Second, individual properties can be [documented](https://www.polymer-project.org/1.0/docs/tools/documentation#properties). Currently these comments are completely dropped by TypeScript compiler. They will have to be extracted from ts code and moved to the final output
index: 1.0
text_combine:
Preserve documentation - twc should preserve element and property documentation. For elements, [Polymer documentation](https://www.polymer-project.org/1.0/docs/tools/documentation#element-summaries) gives two possible ways:
1. as HTML comment above `<dom-module>`
1. if absent, as JS comment before `Polymer()` call
With twc, there is no `<dom-module>`, just the template. It may work to always have just the JS comment, even when `<dom-module>` is used. If not, the comment will have to be moved from JS to HTML
Second, individual properties can be [documented](https://www.polymer-project.org/1.0/docs/tools/documentation#properties). Currently these comments are completely dropped by TypeScript compiler. They will have to be extracted from ts code and moved to the final output
label: non_process
text:
preserve documentation twc should preserve element and property documentation for elements gives two possible ways as html comment above if absent as js comment before polymer call with twc there is no just the template it may work to always have just the js comment even when is used if not the comment will have to be moved from js to html second individual properties can be currently these comments are completely dropped by typescript compiler they will have to be extracted from ts code and moved to the final output
binary_label: 0
Unnamed: 0: 11,261
id: 14,046,812,648
type: IssuesEvent
created_at: 2020-11-02 05:46:37
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: Standardize the errors relate to coprocessor
labels: sig/coprocessor type/enhancement
body:
## Development Task
Standardize the errors relate to coprocessor
index: 1.0
text_combine:
Standardize the errors relate to coprocessor - ## Development Task
Standardize the errors relate to coprocessor
label: process
text:
standardize the errors relate to coprocessor development task standardize the errors relate to coprocessor
binary_label: 1
Unnamed: 0: 17,507
id: 23,317,687,431
type: IssuesEvent
created_at: 2022-08-08 13:51:31
repo: prisma/prisma
repo_url: https://api.github.com/repos/prisma/prisma
action: closed
title: referentialIntegrity=prisma: Delete parent relation failing when using composite id
labels: bug/2-confirmed kind/bug process/candidate tech/engines team/client topic: database-provider/planetscale topic: referential actions topic: referentialIntegrity
body:
### Bug description
I am trying to move to referential actions when deleting but get failures. If i don't add anything to `onDelete` the event gets deleted but not the sessions (whats the default action when using `referentialIntegrity = "prisma"`?), adding `onDelete: Cascade` I get an error `FailedPrecondition desc = Operand should contain 2 column(s) (errno 1241)`.
Looking at the query logged (shown below) it looks like only the `id` is passed to delete the session when it needs the `id` and `event_id` because of the composite id
```DELETE FROM `sessions` WHERE (`sessions`.`id`,`sessions`.`event_id`) IN ((?),(?))```
### How to reproduce
```js
const prisma = new PrismaClient({
log: ['query', 'info', 'warn', 'error'],
})
// step 2 create snippet
prisma.event.create({
data: {
id: 'prisma',
name: 'prisma-bug',
sessions: {
createMany: {
data: [
{ id: 'g', name: 'github' },
{ id: 'i', name: 'issue' },
]
}
}
}
})
//step 3 delete snippet
prisma.event.delete({ where: { id: 'prisma' }})
```
1. Create the DB with Prisma Migrate using the provided Prisma schema
2. Create an event with sessions
3. Delete an event, see the error (recommend adding logging to client)
### Expected behavior
When using cascade on a model with a composite id the delete should use all parameters of the composite id
### Prisma information
```prisma
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
shadowDatabaseUrl = env("SHADOW_DATABASE_URL")
referentialIntegrity = "prisma"
}
generator client {
provider = "prisma-client-js"
previewFeatures = ["referentialIntegrity"]
}
model Event {
id String @id
name String
sessions Session[]
@@map("events")
}
model Session {
id String
name String
event Event @relation(fields: [eventId], references: [id], onDelete: Cascade)
eventId String @map("event_id")
@@id([id, eventId])
@@map("sessions")
}
```
### Environment & setup
- OS: Windows
- Database: PlanetScale
- Node.js version: 15.3.0
### Prisma Version
```
prisma : 3.2.1
@prisma/client : 3.2.1
Current platform : windows
Query Engine (Node-API) : libquery-engine b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c
Studio : 0.435.0
Preview Features : referentialIntegrity
```
index: 1.0
text_combine:
referentialIntegrity=prisma: Delete parent relation failing when using composite id - ### Bug description
I am trying to move to referential actions when deleting but get failures. If i don't add anything to `onDelete` the event gets deleted but not the sessions (whats the default action when using `referentialIntegrity = "prisma"`?), adding `onDelete: Cascade` I get an error `FailedPrecondition desc = Operand should contain 2 column(s) (errno 1241)`.
Looking at the query logged (shown below) it looks like only the `id` is passed to delete the session when it needs the `id` and `event_id` because of the composite id
```DELETE FROM `sessions` WHERE (`sessions`.`id`,`sessions`.`event_id`) IN ((?),(?))```
### How to reproduce
```js
const prisma = new PrismaClient({
log: ['query', 'info', 'warn', 'error'],
})
// step 2 create snippet
prisma.event.create({
data: {
id: 'prisma',
name: 'prisma-bug',
sessions: {
createMany: {
data: [
{ id: 'g', name: 'github' },
{ id: 'i', name: 'issue' },
]
}
}
}
})
//step 3 delete snippet
prisma.event.delete({ where: { id: 'prisma' }})
```
1. Create the DB with Prisma Migrate using the provided Prisma schema
2. Create an event with sessions
3. Delete an event, see the error (recommend adding logging to client)
### Expected behavior
When using cascade on a model with a composite id the delete should use all parameters of the composite id
### Prisma information
```prisma
datasource db {
provider = "mysql"
url = env("DATABASE_URL")
shadowDatabaseUrl = env("SHADOW_DATABASE_URL")
referentialIntegrity = "prisma"
}
generator client {
provider = "prisma-client-js"
previewFeatures = ["referentialIntegrity"]
}
model Event {
id String @id
name String
sessions Session[]
@@map("events")
}
model Session {
id String
name String
event Event @relation(fields: [eventId], references: [id], onDelete: Cascade)
eventId String @map("event_id")
@@id([id, eventId])
@@map("sessions")
}
```
### Environment & setup
- OS: Windows
- Database: PlanetScale
- Node.js version: 15.3.0
### Prisma Version
```
prisma : 3.2.1
@prisma/client : 3.2.1
Current platform : windows
Query Engine (Node-API) : libquery-engine b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : b71d8cb16c4ddc7e3e9821f42fd09b0f82d7934c
Studio : 0.435.0
Preview Features : referentialIntegrity
```
label: process
text:
referentialintegrity prisma delete parent relation failing when using composite id bug description i am trying to move to referential actions when deleting but get failures if i don t add anything to ondelete the event gets deleted but not the sessions whats the default action when using referentialintegrity prisma adding ondelete cascade i get an error failedprecondition desc operand should contain column s errno looking at the query logged shown below it looks like only the id is passed to delete the session when it needs the id and event id because of the composite id delete from sessions where sessions id sessions event id in how to reproduce js const prisma new prismaclient log step create snippet prisma event create data id prisma name prisma bug sessions createmany data id g name github id i name issue step delete snippet prisma event delete where id prisma create the db with prisma migrate using the provided prisma schema create an event with sessions delete an event see the error recommend adding logging to client expected behavior when using cascade on a model with a composite id the delete should use all parameters of the composite id prisma information prisma datasource db provider mysql url env database url shadowdatabaseurl env shadow database url referentialintegrity prisma generator client provider prisma client js previewfeatures model event id string id name string sessions session map events model session id string name string event event relation fields references ondelete cascade eventid string map event id id map sessions environment setup os windows database planetscale node js version prisma version prisma prisma client current platform windows query engine node api libquery engine at node modules prisma engines query engine windows dll node migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines introspection engine windows exe 
format binary prisma fmt at node modules prisma engines prisma fmt windows exe default engines hash studio preview features referentialintegrity
binary_label: 1
Unnamed: 0: 210,237
id: 7,187,164,758
type: IssuesEvent
created_at: 2018-02-02 03:20:28
repo: xcat2/xcat-core
repo_url: https://api.github.com/repos/xcat2/xcat-core
action: closed
title: [OpenBMC] openbmc in python framework refactor
labels: priority:high sprint1 type:feature
body:
* [ ] redefine the interface, framework for openbmc in python
design and coding
* [ ] move rpower with new framework
index: 1.0
text_combine:
[OpenBMC] openbmc in python framework refactor - * [ ] redefine the interface, framework for openbmc in python
design and coding
* [ ] move rpower with new framework
label: non_process
text:
openbmc in python framework refactor redefine the interface framework for openbmc in python design and coding move rpower with new framework
binary_label: 0
Unnamed: 0: 17,885
id: 23,846,678,553
type: IssuesEvent
created_at: 2022-09-06 14:30:39
repo: open-telemetry/opentelemetry-collector-contrib
repo_url: https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
action: closed
title: [processor/tailsampling] SpanCount sampler makes incorrect decision on high volume of spans
labels: bug processor/tailsampling
body:
**Describe the bug**
Sometimes, the SpanCount sampler makes the incorrect decision even though there's many spans coming in for the same trace id. This is more likely to happen when there's a high volume of spans and traces coming in to the processor.
After looking at the code, it seems this might be caused by the internal batching behaviour of the processor.
**What version did you use?**
v0.54.0
**What config did you use?**
``` yaml
processors:
tail_sampling:
decision_wait: 10s
num_traces: 100
expected_new_traces_per_sec: 10
policies:
[
{
name: test-policy-8,
type: span_count,
span_count: {min_spans: 2}
}
]
```
**Environment**
Docker image
OS: Alpine
Compiler(if manually compiled): go 1.18.3
index: 1.0
text_combine:
[processor/tailsampling] SpanCount sampler makes incorrect decision on high volume of spans - **Describe the bug**
Sometimes, the SpanCount sampler makes the incorrect decision even though there's many spans coming in for the same trace id. This is more likely to happen when there's a high volume of spans and traces coming in to the processor.
After looking at the code, it seems this might be caused by the internal batching behaviour of the processor.
**What version did you use?**
v0.54.0
**What config did you use?**
``` yaml
processors:
tail_sampling:
decision_wait: 10s
num_traces: 100
expected_new_traces_per_sec: 10
policies:
[
{
name: test-policy-8,
type: span_count,
span_count: {min_spans: 2}
}
]
```
**Environment**
Docker image
OS: Alpine
Compiler(if manually compiled): go 1.18.3
label: process
text:
spancount sampler makes incorrect decision on high volume of spans describe the bug sometimes the spancount sampler makes the incorrect decision even though there s many spans coming in for the same trace id this is more likely to happen when there s a high volume of spans and traces coming in to the processor after looking at the code it seems this might be caused by the internal batching behaviour of the processor what version did you use what config did you use yaml processors tail sampling decision wait num traces expected new traces per sec policies name test policy type span count span count min spans environment docker image os alpine compiler if manually compiled go
binary_label: 1
Unnamed: 0: 326,655
id: 9,958,975,804
type: IssuesEvent
created_at: 2019-07-06 01:14:06
repo: default51400/familynet
repo_url: https://api.github.com/repos/default51400/familynet
action: opened
title: Create database using EF
labels: Database high priority question task
body:
Need to write all models with their data annotation (for example, reqiured, etc.). Create a database with the entire context, use the template repository (you need to clarify what Slava meant)
index: 1.0
text_combine:
Create database using EF - Need to write all models with their data annotation (for example, reqiured, etc.). Create a database with the entire context, use the template repository (you need to clarify what Slava meant)
label: non_process
text:
create database using ef need to write all models with their data annotation for example reqiured etc create a database with the entire context use the template repository you need to clarify what slava meant
binary_label: 0
Unnamed: 0: 602,362
id: 18,467,561,315
type: IssuesEvent
created_at: 2021-10-17 06:27:10
repo: AY2122S1-CS2113-T16-3/tp
repo_url: https://api.github.com/repos/AY2122S1-CS2113-T16-3/tp
action: opened
title: List possible recipes I can cook based on ingredients I have
labels: type.Story priority.High
body:
As a user, I can check which recipes I can cook based on the ingredients I currently have, so that I can save time on manually checking the ingredients and deciding on the recipe.
index: 1.0
text_combine:
List possible recipes I can cook based on ingredients I have - As a user, I can check which recipes I can cook based on the ingredients I currently have, so that I can save time on manually checking the ingredients and deciding on the recipe.
label: non_process
text:
list possible recipes i can cook based on ingredients i have as a user i can check which recipes i can cook based on the ingredients i currently have so that i can save time on manually checking the ingredients and deciding on the recipe
binary_label: 0
Unnamed: 0: 17,544
id: 23,356,530,047
type: IssuesEvent
created_at: 2022-08-10 07:57:13
repo: inmanta/inmanta-dashboard
repo_url: https://api.github.com/repos/inmanta/inmanta-dashboard
action: opened
title: implement package-cleanup script
labels: process
body:
This script in `package.json` should clean up dev builds older than 30 days. This can probably just be copied over from the web-console, with perhaps a few name changes.
index: 1.0
text_combine:
implement package-cleanup script - This script in `package.json` should clean up dev builds older than 30 days. This can probably just be copied over from the web-console, with perhaps a few name changes.
label: process
text:
implement package cleanup script this script in package json should clean up dev builds older than days this can probably just be copied over from the web console with perhaps a few name changes
binary_label: 1
Unnamed: 0: 2,041
id: 4,848,468,534
type: IssuesEvent
created_at: 2016-11-10 17:35:29
repo: Alfresco/alfresco-ng2-components
repo_url: https://api.github.com/repos/Alfresco/alfresco-ng2-components
action: opened
title: After cancelling a process list is not refreshed
labels: browser: all bug comp: activiti-processList
body:
1. Cancel a process
2. List is not refreshed
3. Navigate away from page and back
4. Process cancelled is now not in list
index: 1.0
text_combine:
After cancelling a process list is not refreshed - 1. Cancel a process
2. List is not refreshed
3. Navigate away from page and back
4. Process cancelled is now not in list
label: process
text:
after cancelling a process list is not refreshed cancel a process list is not refreshed navigate away from page and back process cancelled is now not in list
binary_label: 1
Unnamed: 0: 43,310
id: 11,202,134,943
type: IssuesEvent
created_at: 2020-01-04 09:56:39
repo: helix-toolkit/helix-toolkit
repo_url: https://api.github.com/repos/helix-toolkit/helix-toolkit
action: closed
title: MeshGeometryHelper.FindSharpEdges() does not work
labels: MeshBuilder bug you take it
body:
This test fails:
var meshBuilder = new MeshBuilder();
meshBuilder.AddCube();
var meshGeometry3D = meshBuilder.ToMeshGeometry3D();
IntCollection edges = meshGeometry3D.FindSharpEdges(60);
// A cube has 12 edges, 2 points for one edge, there should be 24 indices in the collection.
Assert.That(edges, Has.Count.EqualTo(24)); // edges.Count is equal to 0.
When you look at the code, `FindSharpEdges()` use a combination of the indices of the edge end points as a key for storing the normal. But an edge can appears several times with different indices. So the code is unable to match the edges and it can not compute the angle.
The dictionary key should be based on the positions instead of the indices. Something like this:
struct EdgeKey
{
readonly Vector3 position0;
readonly Vector3 position1;
public EdgeKey(Vector3 position0, Vector3 position1)
{
this.position0 = position0;
this.position1 = position1;
}
}
And `FindSharpdEdges()` should be rewritten:
public static IntCollection FindSharpEdges(this MeshGeometry3D mesh, double minimumAngle)
{
var edgeIndices = new IntCollection();
var edgeNormals = new Dictionary<EdgeKey, Vector3>();
for (int i = 0; i < mesh.TriangleIndices.Count / 3; i++)
{
int i0 = i * 3;
Vector3 p0 = mesh.Positions[mesh.TriangleIndices[i0]];
Vector3 p1 = mesh.Positions[mesh.TriangleIndices[i0 + 1]];
Vector3 p2 = mesh.Positions[mesh.TriangleIndices[i0 + 2]];
Vector3 triangleNormal = Vector3.Cross(p1 - p0, p2 - p0);
// Handle degenerated triangles.
if (triangleNormal.LengthSquared() < 0.001f) continue;
triangleNormal.Normalize();
for (int j = 0; j < 3; j++)
{
int index0 = mesh.TriangleIndices[i0 + j];
int index1 = mesh.TriangleIndices[i0 + (j + 1) % 3];
Vector3 position0 = mesh.Positions[index0];
Vector3 position1 = mesh.Positions[index1];
var edgeKey = new EdgeKey(position0, position1);
var reverseEdgeKey = new EdgeKey(position1, position0);
if (edgeNormals.TryGetValue(edgeKey, out Vector3 value) ||
edgeNormals.TryGetValue(reverseEdgeKey, out value))
{
float rawDot = Vector3.Dot(triangleNormal, value);
// Acos returns NaN if rawDot > 1 or rawDot < -1
float dot = Math.Max(-1, Math.Min(rawDot, 1));
double angle = 180 / Math.PI * Math.Acos(dot);
if (angle > minimumAngle)
{
edgeIndices.Add(index0);
edgeIndices.Add(index1);
}
}
else
edgeNormals.Add(edgeKey, triangleNormal);
}
}
return edgeIndices;
}
Here is a dodecahedraon where the edges have been drawn with this method:

index: 1.0
text_combine:
MeshGeometryHelper.FindSharpEdges() does not work - This test fails:
var meshBuilder = new MeshBuilder();
meshBuilder.AddCube();
var meshGeometry3D = meshBuilder.ToMeshGeometry3D();
IntCollection edges = meshGeometry3D.FindSharpEdges(60);
// A cube has 12 edges, 2 points for one edge, there should be 24 indices in the collection.
Assert.That(edges, Has.Count.EqualTo(24)); // edges.Count is equal to 0.
When you look at the code, `FindSharpEdges()` use a combination of the indices of the edge end points as a key for storing the normal. But an edge can appears several times with different indices. So the code is unable to match the edges and it can not compute the angle.
The dictionary key should be based on the positions instead of the indices. Something like this:
struct EdgeKey
{
readonly Vector3 position0;
readonly Vector3 position1;
public EdgeKey(Vector3 position0, Vector3 position1)
{
this.position0 = position0;
this.position1 = position1;
}
}
And `FindSharpdEdges()` should be rewritten:
public static IntCollection FindSharpEdges(this MeshGeometry3D mesh, double minimumAngle)
{
var edgeIndices = new IntCollection();
var edgeNormals = new Dictionary<EdgeKey, Vector3>();
for (int i = 0; i < mesh.TriangleIndices.Count / 3; i++)
{
int i0 = i * 3;
Vector3 p0 = mesh.Positions[mesh.TriangleIndices[i0]];
Vector3 p1 = mesh.Positions[mesh.TriangleIndices[i0 + 1]];
Vector3 p2 = mesh.Positions[mesh.TriangleIndices[i0 + 2]];
Vector3 triangleNormal = Vector3.Cross(p1 - p0, p2 - p0);
// Handle degenerated triangles.
if (triangleNormal.LengthSquared() < 0.001f) continue;
triangleNormal.Normalize();
for (int j = 0; j < 3; j++)
{
int index0 = mesh.TriangleIndices[i0 + j];
int index1 = mesh.TriangleIndices[i0 + (j + 1) % 3];
Vector3 position0 = mesh.Positions[index0];
Vector3 position1 = mesh.Positions[index1];
var edgeKey = new EdgeKey(position0, position1);
var reverseEdgeKey = new EdgeKey(position1, position0);
if (edgeNormals.TryGetValue(edgeKey, out Vector3 value) ||
edgeNormals.TryGetValue(reverseEdgeKey, out value))
{
float rawDot = Vector3.Dot(triangleNormal, value);
// Acos returns NaN if rawDot > 1 or rawDot < -1
float dot = Math.Max(-1, Math.Min(rawDot, 1));
double angle = 180 / Math.PI * Math.Acos(dot);
if (angle > minimumAngle)
{
edgeIndices.Add(index0);
edgeIndices.Add(index1);
}
}
else
edgeNormals.Add(edgeKey, triangleNormal);
}
}
return edgeIndices;
}
Here is a dodecahedraon where the edges have been drawn with this method:

label: non_process
text:
meshgeometryhelper findsharpedges does not work this test fails var meshbuilder new meshbuilder meshbuilder addcube var meshbuilder intcollection edges findsharpedges a cube has edges points for one edge there should be indices in the collection assert that edges has count equalto edges count is equal to when you look at the code findsharpedges use a combination of the indices of the edge end points as a key for storing the normal but an edge can appears several times with different indices so the code is unable to match the edges and it can not compute the angle the dictionary key should be based on the positions instead of the indices something like this struct edgekey readonly readonly public edgekey this this and findsharpdedges should be rewritten public static intcollection findsharpedges this mesh double minimumangle var edgeindices new intcollection var edgenormals new dictionary for int i i mesh triangleindices count i int i mesh positions mesh positions mesh positions trianglenormal cross handle degenerated triangles if trianglenormal lengthsquared continue trianglenormal normalize for int j j j int mesh triangleindices int mesh triangleindices mesh positions mesh positions var edgekey new edgekey var reverseedgekey new edgekey if edgenormals trygetvalue edgekey out value edgenormals trygetvalue reverseedgekey out value float rawdot dot trianglenormal value acos returns nan if rawdot or rawdot float dot math max math min rawdot double angle math pi math acos dot if angle minimumangle edgeindices add edgeindices add else edgenormals add edgekey trianglenormal return edgeindices here is a dodecahedraon where the edges have been drawn with this method
| 0
|
11,017
| 13,804,135,002
|
IssuesEvent
|
2020-10-11 07:24:25
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Mono.Linker.LinkerFatalErrorException: "Error processing method" at GetValueNodeFromGenericArgument
|
area-System.Diagnostics.Process untriaged
|
Hello,
Can't publish Blazor WebAssembly application with some specific code.
### Steps To Reproduce
0. Install .NET 5 RC1
1. Create some project:
```
dotnet new blazorwasm
```
2. Add some new *.cs file with following C# code:
```c#
using System.Text.Json;
namespace SomeNamespace
{
public static class StaticClass
{
public static void GenericMethod<T>()
{
T[] value = null;
JsonSerializer.Serialize(value);
}
}
}
```
3. Try to publish
```
dotnet publish
```
### Exception
```
Class.cs(8,9): error IL1005: SomeNamespace.StaticClass.GenericMethod<T>(): Error processing method 'SomeNamespace.StaticClass.GenericMethod<T>
()' in assembly 'TrimmingIssue.dll' [C:\projectsBlazor\Net5Publish\TrimmingIssue\TrimmingIssue.csproj]
Mono.Linker.LinkerFatalErrorException: C:\projectsBlazor\Net5Publish\TrimmingIssue\Class.cs(8,9): error IL1005: SomeNamespace.StaticClass.GenericMethod<T>(): Error processing method 'S
omeNamespace.StaticClass.GenericMethod<T>()' in assembly 'TrimmingIssue.dll'
---> System.InvalidOperationException: Operation is not valid due to the current state of the object.
at Mono.Linker.Dataflow.ReflectionMethodBodyScanner.GetValueNodeFromGenericArgument(TypeReference genericArgument)
at Mono.Linker.Dataflow.ReflectionMethodBodyScanner.ProcessGenericArgumentDataFlow(GenericParameter genericParameter, TypeReference genericArgument, IMemberDefinition source)
at Mono.Linker.Steps.MarkStep.MarkGenericArgumentConstructors(IGenericInstance instance, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkGenericArguments(IGenericInstance instance, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.GetOriginalMethod(MethodReference method, DependencyInfo reason, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkMethod(MethodReference reference, DependencyInfo reason, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkInstruction(Instruction instruction, MethodDefinition method, Boolean& requiresReflectionMethodBodyScanner)
at Mono.Linker.Steps.MarkStep.MarkMethodBody(MethodBody body)
at Mono.Linker.Steps.MarkStep.ProcessMethod(MethodDefinition method, DependencyInfo& reason)
at Mono.Linker.Steps.MarkStep.ProcessQueue()
--- End of inner exception stack trace ---
at Mono.Linker.Steps.MarkStep.ProcessQueue()
at Mono.Linker.Steps.MarkStep.ProcessPrimaryQueue()
at Mono.Linker.Steps.MarkStep.Process()
at Mono.Linker.Steps.MarkStep.Process(LinkContext context)
at Mono.Linker.Pipeline.ProcessStep(LinkContext context, IStep step)
at Mono.Linker.Pipeline.Process(LinkContext context)
at Mono.Linker.Driver.Run(ILogger customLogger)
Optimizing assemblies for size, which may change the behavior of the app. Be sure to test after publishing. See: https://aka.ms/dotnet-illink
```
### Further technical details
* .NET 5 RC1 is installed
* Project targets .NET 5
* Turning the trimming off for whole the app helps: (dotnet publish -p:PublishTrimmed=False)
* Preservation for the assembly - does not help (\<linker> \<assembly fullname="..." preserve="all" /> \</linker>)
Please, feel free to ask for the details.
Thank you in advance for your help.
|
1.0
|
Mono.Linker.LinkerFatalErrorException: "Error processing method" at GetValueNodeFromGenericArgument - Hello,
Can't publish Blazor WebAssembly application with some specific code.
### Steps To Reproduce
0. Install .NET 5 RC1
1. Create some project:
```
dotnet new blazorwasm
```
2. Add some new *.cs file with following C# code:
```c#
using System.Text.Json;
namespace SomeNamespace
{
public static class StaticClass
{
public static void GenericMethod<T>()
{
T[] value = null;
JsonSerializer.Serialize(value);
}
}
}
```
3. Try to publish
```
dotnet publish
```
### Exception
```
Class.cs(8,9): error IL1005: SomeNamespace.StaticClass.GenericMethod<T>(): Error processing method 'SomeNamespace.StaticClass.GenericMethod<T>
()' in assembly 'TrimmingIssue.dll' [C:\projectsBlazor\Net5Publish\TrimmingIssue\TrimmingIssue.csproj]
Mono.Linker.LinkerFatalErrorException: C:\projectsBlazor\Net5Publish\TrimmingIssue\Class.cs(8,9): error IL1005: SomeNamespace.StaticClass.GenericMethod<T>(): Error processing method 'S
omeNamespace.StaticClass.GenericMethod<T>()' in assembly 'TrimmingIssue.dll'
---> System.InvalidOperationException: Operation is not valid due to the current state of the object.
at Mono.Linker.Dataflow.ReflectionMethodBodyScanner.GetValueNodeFromGenericArgument(TypeReference genericArgument)
at Mono.Linker.Dataflow.ReflectionMethodBodyScanner.ProcessGenericArgumentDataFlow(GenericParameter genericParameter, TypeReference genericArgument, IMemberDefinition source)
at Mono.Linker.Steps.MarkStep.MarkGenericArgumentConstructors(IGenericInstance instance, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkGenericArguments(IGenericInstance instance, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.GetOriginalMethod(MethodReference method, DependencyInfo reason, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkMethod(MethodReference reference, DependencyInfo reason, IMemberDefinition sourceLocationMember)
at Mono.Linker.Steps.MarkStep.MarkInstruction(Instruction instruction, MethodDefinition method, Boolean& requiresReflectionMethodBodyScanner)
at Mono.Linker.Steps.MarkStep.MarkMethodBody(MethodBody body)
at Mono.Linker.Steps.MarkStep.ProcessMethod(MethodDefinition method, DependencyInfo& reason)
at Mono.Linker.Steps.MarkStep.ProcessQueue()
--- End of inner exception stack trace ---
at Mono.Linker.Steps.MarkStep.ProcessQueue()
at Mono.Linker.Steps.MarkStep.ProcessPrimaryQueue()
at Mono.Linker.Steps.MarkStep.Process()
at Mono.Linker.Steps.MarkStep.Process(LinkContext context)
at Mono.Linker.Pipeline.ProcessStep(LinkContext context, IStep step)
at Mono.Linker.Pipeline.Process(LinkContext context)
at Mono.Linker.Driver.Run(ILogger customLogger)
Optimizing assemblies for size, which may change the behavior of the app. Be sure to test after publishing. See: https://aka.ms/dotnet-illink
```
### Further technical details
* .NET 5 RC1 is installed
* Project targets .NET 5
* Turning the trimming off for whole the app helps: (dotnet publish -p:PublishTrimmed=False)
* Preservation for the assembly - does not help (\<linker> \<assembly fullname="..." preserve="all" /> \</linker>)
Please, feel free to ask for the details.
Thank you in advance for your help.
|
process
|
mono linker linkerfatalerrorexception error processing method at getvaluenodefromgenericargument hello can t publish blazor webassembly application with some specific code steps to reproduce install net create some project dotnet new blazorwasm add some new cs file with following c code c using system text json namespace somenamespace public static class staticclass public static void genericmethod t value null jsonserializer serialize value try to publish dotnet publish exception class cs error somenamespace staticclass genericmethod error processing method somenamespace staticclass genericmethod in assembly trimmingissue dll mono linker linkerfatalerrorexception c projectsblazor trimmingissue class cs error somenamespace staticclass genericmethod error processing method s omenamespace staticclass genericmethod in assembly trimmingissue dll system invalidoperationexception operation is not valid due to the current state of the object at mono linker dataflow reflectionmethodbodyscanner getvaluenodefromgenericargument typereference genericargument at mono linker dataflow reflectionmethodbodyscanner processgenericargumentdataflow genericparameter genericparameter typereference genericargument imemberdefinition source at mono linker steps markstep markgenericargumentconstructors igenericinstance instance imemberdefinition sourcelocationmember at mono linker steps markstep markgenericarguments igenericinstance instance imemberdefinition sourcelocationmember at mono linker steps markstep getoriginalmethod methodreference method dependencyinfo reason imemberdefinition sourcelocationmember at mono linker steps markstep markmethod methodreference reference dependencyinfo reason imemberdefinition sourcelocationmember at mono linker steps markstep markinstruction instruction instruction methoddefinition method boolean requiresreflectionmethodbodyscanner at mono linker steps markstep markmethodbody methodbody body at mono linker steps markstep processmethod methoddefinition 
method dependencyinfo reason at mono linker steps markstep processqueue end of inner exception stack trace at mono linker steps markstep processqueue at mono linker steps markstep processprimaryqueue at mono linker steps markstep process at mono linker steps markstep process linkcontext context at mono linker pipeline processstep linkcontext context istep step at mono linker pipeline process linkcontext context at mono linker driver run ilogger customlogger optimizing assemblies for size which may change the behavior of the app be sure to test after publishing see further technical details net is installed project targets net turning the trimming off for whole the app helps dotnet publish p publishtrimmed false preservation for the assembly does not help please feel free to ask for the details thank you in advance for you help
| 1
|
316,453
| 23,632,648,043
|
IssuesEvent
|
2022-08-25 10:39:08
|
input-output-hk/cardano-db-sync
|
https://api.github.com/repos/input-output-hk/cardano-db-sync
|
opened
|
Possible breaking changes in config files for node path
|
documentation enhancement
|
From this Slack thread: https://input-output-rnd.slack.com/archives/CA919E6Q6/p1661278131542139
it looks like `cardano-node` configuration files under https://github.com/input-output-hk/cardano-node/tree/master/configuration
may be removed in the near future. Some of our `db-sync` configs rely on those files, like `mainnet`:
https://github.com/input-output-hk/cardano-db-sync/blob/master/config/mainnet-config.yaml#L15
The current source of truth for config files is: https://book.world.dev.cardano.org/environments.html
|
1.0
|
Possible breaking changes in config files for node path - From this Slack thread: https://input-output-rnd.slack.com/archives/CA919E6Q6/p1661278131542139
it looks like `cardano-node` configuration files under https://github.com/input-output-hk/cardano-node/tree/master/configuration
may be removed in the near future. Some of our `db-sync` configs rely on those files, like `mainnet`:
https://github.com/input-output-hk/cardano-db-sync/blob/master/config/mainnet-config.yaml#L15
The current source of truth for config files is: https://book.world.dev.cardano.org/environments.html
|
non_process
|
possible breaking changes in config files for node path from this slack thread it looks like cardano node configuration files under may be removed in the near future some of our db sync configs rely on those files like mainnet the current source of truth for config files is
| 0
|
117,176
| 9,915,419,535
|
IssuesEvent
|
2019-06-28 16:49:27
|
jahshaka/Studio
|
https://api.github.com/repos/jahshaka/Studio
|
closed
|
open and save scene object selection
|
fixed and waiting to be tested
|
when you open a scene for the first time in the editor, can the world item be selected by default? we have an empty right column otherwise, not a big deal...
when you save a scene can you also save the last selected object, so when you open it you open with that object selected?
|
1.0
|
open and save scene object selection - when you open a scene for the first time in the editor, can the world item be selected by default? we have an empty right column otherwise, not a big deal...
when you save a scene can you also save the last selected object, so when you open it you open with that object selected?
|
non_process
|
open and save scene object selection when you open a scene for the first time in the editor can the world item be selected by default we have a empty right column otherwise not a bg deal when you save a scene can you also save the last selected object so when you open it you open with that object selected
| 0
|
702,255
| 24,120,801,517
|
IssuesEvent
|
2022-09-20 18:31:18
|
googleapis/gax-nodejs
|
https://api.github.com/repos/googleapis/gax-nodejs
|
closed
|
Run system tests for some libraries kms: should pass samples tests failed
|
priority: p1 type: bug samples flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: fc7e03f7a3d3f805d0e9b132c1b34b814d3ac598
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/926a2d50-d50e-47ff-9fdf-fee6c2e4efb2), [Sponge](http://sponge2/926a2d50-d50e-47ff-9fdf-fee6c2e4efb2)
status: failed
<details><summary>Test output</summary><br><pre>Command failed with exit code 1: npm run samples-test
Error: Command failed with exit code 1: npm run samples-test
at makeError (node_modules/execa/lib/error.js:60:11)
at handlePromise (node_modules/execa/index.js:118:26)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async runSamplesTest (build/test/system-test/test.clientlibs.js:78:5)
-> /workspace/test/system-test/test.clientlibs.ts:88:3
at async Context.<anonymous> (build/test/system-test/test.clientlibs.js:119:13)
-> /workspace/test/system-test/test.clientlibs.ts:133:7</pre></details>
|
1.0
|
Run system tests for some libraries kms: should pass samples tests failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: fc7e03f7a3d3f805d0e9b132c1b34b814d3ac598
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/926a2d50-d50e-47ff-9fdf-fee6c2e4efb2), [Sponge](http://sponge2/926a2d50-d50e-47ff-9fdf-fee6c2e4efb2)
status: failed
<details><summary>Test output</summary><br><pre>Command failed with exit code 1: npm run samples-test
Error: Command failed with exit code 1: npm run samples-test
at makeError (node_modules/execa/lib/error.js:60:11)
at handlePromise (node_modules/execa/index.js:118:26)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async runSamplesTest (build/test/system-test/test.clientlibs.js:78:5)
-> /workspace/test/system-test/test.clientlibs.ts:88:3
at async Context.<anonymous> (build/test/system-test/test.clientlibs.js:119:13)
-> /workspace/test/system-test/test.clientlibs.ts:133:7</pre></details>
|
non_process
|
run system tests for some libraries kms should pass samples tests failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output command failed with exit code npm run samples test error command failed with exit code npm run samples test at makeerror node modules execa lib error js at handlepromise node modules execa index js at processticksandrejections internal process task queues js at async runsamplestest build test system test test clientlibs js workspace test system test test clientlibs ts at async context build test system test test clientlibs js workspace test system test test clientlibs ts
| 0
|
8,493
| 2,873,418,481
|
IssuesEvent
|
2015-06-08 16:55:57
|
IBM-Watson/design-library
|
https://api.github.com/repos/IBM-Watson/design-library
|
closed
|
Remove Vagrant Dependency
|
design guide enhancement
|
One of the biggest pieces of feedback we had in the retrospective was that getting set up was hard. Back when we needed Ruby and Node to work across different platforms, Vagrant made a lot of sense, but having dropped the Ruby dependency and needing Node already installed for Bower, having the overhead of Vagrant is most likely unnecessary.
- [x] Replace Vagrant code with Node code in Design Library repo (https://github.com/IBM-Watson/design-library/pull/413)
- [x] Make Runner consumable as a Node Module (https://github.com/IBM-Watson/runner/pull/14)
- [x] Update documentation
|
1.0
|
Remove Vagrant Dependency - One of the biggest pieces of feedback we had in the retrospective was that getting set up was hard. Back when we needed Ruby and Node to work across different platforms, Vagrant made a lot of sense, but having dropped the Ruby dependency and needing Node already installed for Bower, having the overhead of Vagrant is most likely unnecessary.
- [x] Replace Vagrant code with Node code in Design Library repo (https://github.com/IBM-Watson/design-library/pull/413)
- [x] Make Runner consumable as a Node Module (https://github.com/IBM-Watson/runner/pull/14)
- [x] Update documentation
|
non_process
|
remove vagrant dependency one of the biggest pieces of feedback we had in the retrospective was that getting set up was hard back when we needed ruby and node to work across different platforms vagrant made a lot of sense but having dropped the ruby dependency and needing node already installed for bower having the overhead of vagrant is most likely unnecessary replace vagrant code with node code in design library repo make runner consumable as a node module update documentation
| 0
|
15,749
| 9,049,666,855
|
IssuesEvent
|
2019-02-12 05:49:44
|
nearprotocol/nearcore
|
https://api.github.com/repos/nearprotocol/nearcore
|
opened
|
Benchmark TestNet
|
enhancement performance
|
Create benchmark for testnet:
Testing modes:
- run locally with N nodes
- run on AWS in 1 locale
- run on AWS in N locales
Testing strategy:
- Send money tx
- Execute simple WASM
Collect stats:
- transaction per second
- blocks per second
|
True
|
Benchmark TestNet - Create benchmark for testnet:
Testing modes:
- run locally with N nodes
- run on AWS in 1 locale
- run on AWS in N locales
Testing strategy:
- Send money tx
- Execute simple WASM
Collect stats:
- transaction per second
- blocks per second
|
non_process
|
benchmark testnet create benchmark for testnet testing modes run locally with n nodes run on aws in locale run on aws in n locales testing strategy send money tx execute simple wasm collect stats transaction per second blocks per second
| 0
|
869
| 3,329,640,442
|
IssuesEvent
|
2015-11-11 03:46:32
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
opened
|
spatial distortion of grid with mapped input texture
|
enhancement geometry video processing
|
base the distortion in 3d on a video source selector
|
1.0
|
spatial distortion of grid with mapped input texture - base the distortion in 3d on a video source selector
|
process
|
spatial distortion of grid with mapped input texture base the distortion in on a video source selector
| 1
|
18,500
| 24,551,163,011
|
IssuesEvent
|
2022-10-12 12:43:15
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] [Offline indicator] Error message is not displayed in dashboard screen when participant is offline
|
Bug P1 iOS Process: Fixed Process: Tested dev
|
**Steps:**
1. Install the app
2. Sign in/signup
3. Enroll into any study
4. Navigated to study activities
5. Switch off the internet
6. Navigate to dashboard
7. Observe
**Actual:** Error message is not displayed in dashboard screen when participant is offline
**Expected:** Toast message should display in dashboard screen when participant is offline

|
2.0
|
[iOS] [Offline indicator] Error message is not displayed in dashboard screen when participant is offline - **Steps:**
1. Install the app
2. Sign in/signup
3. Enroll into any study
4. Navigated to study activities
5. Switch off the internet
6. Navigate to dashboard
7. Observe
**Actual:** Error message is not displayed in dashboard screen when participant is offline
**Expected:** Toast message should display in dashboard screen when participant is offline

|
process
|
error message is not displayed in dashboard screen when participant is offline steps install the app sign in signup enroll into any study navigated to study activities switch off the internet navigate to dashboard observe actual error message is not displayed in dashboard screen when participant is offline expected toast message should display in dashboard screen when participant is offline
| 1
|
18,198
| 24,253,518,448
|
IssuesEvent
|
2022-09-27 15:50:18
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
opened
|
Preprocessing: Warning when max doc length significantly exceeds preprocessor split length
|
type:feature topic:preprocessing
|
**Is your feature request related to a problem? Please describe.**
When running a retriever-reader pipeline, we noticed that certain inference times were much longer than others, and were even running into timeout problems. We thought this was odd, since we had a preprocessor splitting files into 250-word docs in order to avoid this. However, since `split_respect_sentence_boundary` was set to `True` by default, and since there were large passages in our files without a period (hence were parsed as a single sentence and not split by the preprocessor), this meant that some of the documents were much, much longer than 250 words (e.g., 62,000 words), hence the slowdown. It took us quite some time to figure this out, which admittedly falls partly on our shoulders, but it would have saved us considerable time and effort if we had received some sort of warning indicating that the max doc length in our document store was many multiples greater than the preprocessor's split length.
**Describe the solution you'd like**
It would be helpful to receive some kind of warning whenever the max document length after preprocessing is significantly greater than the preprocessor's split length. (I don't have an exact definition of "significantly greater" pinned down; apologies for the imprecision.)
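A warning of the kind requested could be sketched as a small post-preprocessing check. This is an illustrative helper, not Haystack's API: the function name, the `factor` threshold standing in for "significantly greater", and treating documents as whitespace-split strings are all my assumptions:

```python
import warnings

def warn_on_oversized_docs(docs, split_length, factor=2.0):
    """Warn if any document is much longer than the preprocessor's
    split_length -- e.g. because split_respect_sentence_boundary=True
    encountered a long passage with no sentence boundary, so it was
    never split. `factor` defines "significantly greater" (assumed 2x).
    Returns the maximum document length in words."""
    max_words = max((len(d.split()) for d in docs), default=0)
    if max_words > factor * split_length:
        warnings.warn(
            f"Longest document has {max_words} words, more than "
            f"{factor}x the split_length of {split_length}; "
            "retriever/reader inference may be slow or time out."
        )
    return max_words
```

With `split_length=250`, a 62,000-word document would trip the warning immediately, surfacing the situation described above instead of leaving it to be discovered through timeouts.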
|
1.0
|
Preprocessing: Warning when max doc length significantly exceeds preprocessor split length - **Is your feature request related to a problem? Please describe.**
When running a retriever-reader pipeline, we noticed that certain inference times were much longer than others, and were even running into timeout problems. We thought this was odd, since we had a preprocessor splitting files into 250-word docs in order to avoid this. However, since `split_respect_sentence_boundary` was set to `True` by default, and since there were large passages in our files without a period (hence were parsed as a single sentence and not split by the preprocessor), this meant that some of the documents were much, much longer than 250 words (e.g., 62,000 words), hence the slowdown. It took us quite some time to figure this out, which admittedly falls partly on our shoulders, but it would have saved us considerable time and effort if we had received some sort of warning indicating that the max doc length in our document store was many multiples greater than the preprocessor's split length.
**Describe the solution you'd like**
It would be helpful to receive some kind of warning whenever the max document length after preprocessing is significantly greater than the preprocessor's split length. (I don't have an exact definition of "significantly greater" pinned down; apologies for the imprecision.)
|
process
|
preprocessing warning when max doc length significantly exceeds preprocessor split length is your feature request related to a problem please describe when running a retriever reader pipeline we noticed that certain inference times were much longer than others and were even running into timeout problems we thought this was odd since we had a preprocessor splitting files into word docs in order to avoid this however since split respect sentence boundary was set to true by default and since there were large passages in our files without a period hence were parsed as a single sentence and not split by the preprocessor this meant that some of the documents were much much longer than words e g words hence the slowdown it took us quite some time to figure this out which admittedly falls partly on our shoulders but it would have saved us considerable time and effort if we had received some sort of warning indicating that the max doc length in our document store was many multiples greater than the preprocessor s split length describe the solution you d like it would be helpful to receive some kind of warning whenever the max document length after preprocessing is significantly greater than the preprocessor s split length i don t have an exact definition of significantly greater pinned down apologies for the imprecision
| 1
|
52,461
| 10,865,153,004
|
IssuesEvent
|
2019-11-14 18:22:12
|
twilio/twilio-node
|
https://api.github.com/repos/twilio/twilio-node
|
closed
|
Sync Map Mutate
|
code-generation difficulty: medium status: help wanted type: community enhancement
|
There should be a mutate method for Sync Map items that is similar to the update method, but takes a mutator function instead of a new value.
|
1.0
|
Sync Map Mutate - There should be a mutate method for Sync Map items that is similar to the update method, but takes a mutator function instead of a new value.
|
non_process
|
sync map mutate there should be a mutate method for sync maps items that is similar to the update method but pass a mutator function instead of a new value
| 0
|
5,996
| 8,805,571,047
|
IssuesEvent
|
2018-12-26 20:32:49
|
GoogleCloudPlatform/google-cloud-cpp
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-cpp
|
closed
|
Compilation fails on Ubuntu 18.04 with undefined reference to libldap with "external" libcurl
|
type: bug type: process
|
I spent some time trying to fix it, but I haven't figured it out yet - I'll need more time.
Workaround: supply `-DGOOGLE_CLOUD_CPP_CURL_PROVIDER=package` to cmake.
```
$ cat /etc/issue
Ubuntu 18.04.1 LTS \n \l
$ cmake -H. -Bbuild-output
$ cd build-output
$ make
[...]
../../../../external/lib/libcurl.a(ldap.c.o): In function `Curl_ldap':
ldap.c:(.text+0xa6): undefined reference to `ldap_url_parse'
ldap.c:(.text+0x112): undefined reference to `ldap_set_option'
ldap.c:(.text+0x121): undefined reference to `ldap_set_option'
ldap.c:(.text+0x12f): undefined reference to `ldap_init'
ldap.c:(.text+0x14d): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x169): undefined reference to `ldap_set_option'
ldap.c:(.text+0x177): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x1b2): undefined reference to `ldap_search_s'
ldap.c:(.text+0x1d3): undefined reference to `ldap_first_entry'
ldap.c:(.text+0x209): undefined reference to `ldap_get_dn'
ldap.c:(.text+0x281): undefined reference to `ldap_memfree'
ldap.c:(.text+0x294): undefined reference to `ldap_first_attribute'
ldap.c:(.text+0x2c6): undefined reference to `ldap_get_values_len'
ldap.c:(.text+0x47a): undefined reference to `ldap_set_option'
ldap.c:(.text+0x48b): undefined reference to `ldap_set_option'
ldap.c:(.text+0x496): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x4bd): undefined reference to `ldap_err2string'
ldap.c:(.text+0x4e8): undefined reference to `ldap_msgfree'
ldap.c:(.text+0x502): undefined reference to `ldap_free_urldesc'
ldap.c:(.text+0x513): undefined reference to `ldap_unbind_s'
ldap.c:(.text+0x58a): undefined reference to `ldap_value_free_len'
ldap.c:(.text+0x592): undefined reference to `ldap_memfree'
ldap.c:(.text+0x5a7): undefined reference to `ber_free'
ldap.c:(.text+0x5b4): undefined reference to `ldap_value_free_len'
ldap.c:(.text+0x5bc): undefined reference to `ldap_memfree'
ldap.c:(.text+0x601): undefined reference to `ldap_next_attribute'
ldap.c:(.text+0x61e): undefined reference to `ber_free'
ldap.c:(.text+0x62c): undefined reference to `ldap_next_entry'
ldap.c:(.text+0x67b): undefined reference to `ldap_err2string'
ldap.c:(.text+0x6bb): undefined reference to `ldap_err2string'
ldap.c:(.text+0x729): undefined reference to `ldap_memfree'
ldap.c:(.text+0x73f): undefined reference to `ldap_memfree'
ldap.c:(.text+0x779): undefined reference to `ldap_set_option'
ldap.c:(.text+0x788): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7ac): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7bd): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7ca): undefined reference to `ldap_simple_bind_s'
collect2: error: ld returned 1 exit status
google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/build.make:102: recipe for target 'google/cloud/storage/examples/storage_quickstart' failed
make[2]: *** [google/cloud/storage/examples/storage_quickstart] Error 1
CMakeFiles/Makefile2:10782: recipe for target 'google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/all' failed
make[1]: *** [google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
|
1.0
|
Compilation fails on Ubuntu 18.04 with undefined reference to libldap with "external" libcurl - I spent some time trying to fix it, but I haven't figured it out yet - I'll need more time.
Workaround: supply `-DGOOGLE_CLOUD_CPP_CURL_PROVIDER=package` to cmake.
```
$ cat /etc/issue
Ubuntu 18.04.1 LTS \n \l
$ cmake -H. -Bbuild-output
$ cd build-output
$ make
[...]
../../../../external/lib/libcurl.a(ldap.c.o): In function `Curl_ldap':
ldap.c:(.text+0xa6): undefined reference to `ldap_url_parse'
ldap.c:(.text+0x112): undefined reference to `ldap_set_option'
ldap.c:(.text+0x121): undefined reference to `ldap_set_option'
ldap.c:(.text+0x12f): undefined reference to `ldap_init'
ldap.c:(.text+0x14d): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x169): undefined reference to `ldap_set_option'
ldap.c:(.text+0x177): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x1b2): undefined reference to `ldap_search_s'
ldap.c:(.text+0x1d3): undefined reference to `ldap_first_entry'
ldap.c:(.text+0x209): undefined reference to `ldap_get_dn'
ldap.c:(.text+0x281): undefined reference to `ldap_memfree'
ldap.c:(.text+0x294): undefined reference to `ldap_first_attribute'
ldap.c:(.text+0x2c6): undefined reference to `ldap_get_values_len'
ldap.c:(.text+0x47a): undefined reference to `ldap_set_option'
ldap.c:(.text+0x48b): undefined reference to `ldap_set_option'
ldap.c:(.text+0x496): undefined reference to `ldap_simple_bind_s'
ldap.c:(.text+0x4bd): undefined reference to `ldap_err2string'
ldap.c:(.text+0x4e8): undefined reference to `ldap_msgfree'
ldap.c:(.text+0x502): undefined reference to `ldap_free_urldesc'
ldap.c:(.text+0x513): undefined reference to `ldap_unbind_s'
ldap.c:(.text+0x58a): undefined reference to `ldap_value_free_len'
ldap.c:(.text+0x592): undefined reference to `ldap_memfree'
ldap.c:(.text+0x5a7): undefined reference to `ber_free'
ldap.c:(.text+0x5b4): undefined reference to `ldap_value_free_len'
ldap.c:(.text+0x5bc): undefined reference to `ldap_memfree'
ldap.c:(.text+0x601): undefined reference to `ldap_next_attribute'
ldap.c:(.text+0x61e): undefined reference to `ber_free'
ldap.c:(.text+0x62c): undefined reference to `ldap_next_entry'
ldap.c:(.text+0x67b): undefined reference to `ldap_err2string'
ldap.c:(.text+0x6bb): undefined reference to `ldap_err2string'
ldap.c:(.text+0x729): undefined reference to `ldap_memfree'
ldap.c:(.text+0x73f): undefined reference to `ldap_memfree'
ldap.c:(.text+0x779): undefined reference to `ldap_set_option'
ldap.c:(.text+0x788): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7ac): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7bd): undefined reference to `ldap_set_option'
ldap.c:(.text+0x7ca): undefined reference to `ldap_simple_bind_s'
collect2: error: ld returned 1 exit status
google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/build.make:102: recipe for target 'google/cloud/storage/examples/storage_quickstart' failed
make[2]: *** [google/cloud/storage/examples/storage_quickstart] Error 1
CMakeFiles/Makefile2:10782: recipe for target 'google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/all' failed
make[1]: *** [google/cloud/storage/examples/CMakeFiles/storage_quickstart.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
|
process
|
compilation fails on ubuntu with undefined reference to libldap with external libcurl i spent some time trying to fix it but i haven t figured it out yet i ll need more time workaround supply dgoogle cloud cpp curl provider package to cmake cat etc issue ubuntu lts n l cmake h bbuild output cd build output make external lib libcurl a ldap c o in function curl ldap ldap c text undefined reference to ldap url parse ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap init ldap c text undefined reference to ldap simple bind s ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap simple bind s ldap c text undefined reference to ldap search s ldap c text undefined reference to ldap first entry ldap c text undefined reference to ldap get dn ldap c text undefined reference to ldap memfree ldap c text undefined reference to ldap first attribute ldap c text undefined reference to ldap get values len ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap simple bind s ldap c text undefined reference to ldap ldap c text undefined reference to ldap msgfree ldap c text undefined reference to ldap free urldesc ldap c text undefined reference to ldap unbind s ldap c text undefined reference to ldap value free len ldap c text undefined reference to ldap memfree ldap c text undefined reference to ber free ldap c text undefined reference to ldap value free len ldap c text undefined reference to ldap memfree ldap c text undefined reference to ldap next attribute ldap c text undefined reference to ber free ldap c text undefined reference to ldap next entry ldap c text undefined reference to ldap ldap c text undefined reference to ldap ldap c text undefined reference to ldap memfree ldap c text undefined reference to ldap memfree ldap c text undefined reference to ldap set 
option ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap set option ldap c text undefined reference to ldap simple bind s error ld returned exit status google cloud storage examples cmakefiles storage quickstart dir build make recipe for target google cloud storage examples storage quickstart failed make error cmakefiles recipe for target google cloud storage examples cmakefiles storage quickstart dir all failed make error make waiting for unfinished jobs
| 1
|
1,984
| 2,581,279,317
|
IssuesEvent
|
2015-02-13 23:47:58
|
chwangaa/Charlie
|
https://api.github.com/repos/chwangaa/Charlie
|
opened
|
Mal Positioning of Cloudmap
|
enhancement Graphic Design
|
The current display is too left, needs to be centered, and with a legible space between the heading and the map
|
1.0
|
Mal Positioning of Cloudmap - The current display is too left, needs to be centered, and with a legible space between the heading and the map
|
non_process
|
mal positioning of cloudmap the current display is too left needs to be centered and with a legible space between the heading and the map
| 0
|
8,229
| 11,415,575,158
|
IssuesEvent
|
2020-02-02 12:02:09
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
Font url import fails to resolve at runtime
|
:bug: Bug CSS Preprocessing Stale
|
**This a 🐛 bug report**
### 🎛 Configuration (.babelrc, package.json, cli command)
**tsconfig.json**
```js
{
"compilerOptions": {
"strict": true,
"target": "es2015",
"jsx": "react"
}
}
```
**package.json**
```
{
"name": "wtfloat",
"version": "0.1.0",
"description": "An interactive guide to floating point numbers",
"scripts": {
"dev": "parcel ./src/root.html",
"dev-debug": "node %NODE_DEBUG_OPTION% ./node_modules/parcel-bundler/bin/cli.js ./src/root.html",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "W. Brian Gourlie",
"license": "MIT",
"devDependencies": {
"@types/flux": "^3.1.7",
"@types/react": "^16.3.5",
"@types/react-dom": "^16.0.4",
"@types/react-router": "^4.0.23",
"@types/react-router-dom": "^4.2.6",
"@types/shelljs": "^0.7.8",
"node-sass": "^4.8.3",
"parcel-bundler": "^1.7.0",
"parcel-plugin-react-markdown": "0.0.1",
"shelljs": "^0.8.1",
"typescript": "^2.8.1"
},
"dependencies": {
"flux": "^3.1.3",
"immutable": "^3.8.2",
"react": "^16.3.1",
"react-dom": "^16.3.1",
"react-router": "^4.2.0",
"react-router-dom": "^4.2.2"
}
}
```
I'm importing a font via a scss file. Specifically:
```
@font-face {
font-family: "Besley Medium";
src: url("./static/Besley-Medium.otf") format("opentype");
}
body {
font-family: "Besley Medium", serif;
}
```
### 🤔 Expected Behavior
I expect that the font will load successfully when bundled.
### 😯 Current Behavior
I'm receiving the following runtime error: `Error: Cannot find module 'Besley-Medium.cf0cb4dc.otf,11'`
### 🔦 Context
You can clone my repository [here](https://github.com/bgourlie/wtfloat)
`npm install && npm run dev`
Open the development server in the browser and look at the console.
### 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.7.0
| Node | 8.9.4
| npm/Yarn | 5.6.0
| Operating System | Windows 10 x64
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
1.0
|
Font url import fails to resolve at runtime - **This a 🐛 bug report**
### 🎛 Configuration (.babelrc, package.json, cli command)
**tsconfig.json**
```js
{
"compilerOptions": {
"strict": true,
"target": "es2015",
"jsx": "react"
}
}
```
**package.json**
```
{
"name": "wtfloat",
"version": "0.1.0",
"description": "An interactive guide to floating point numbers",
"scripts": {
"dev": "parcel ./src/root.html",
"dev-debug": "node %NODE_DEBUG_OPTION% ./node_modules/parcel-bundler/bin/cli.js ./src/root.html",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "W. Brian Gourlie",
"license": "MIT",
"devDependencies": {
"@types/flux": "^3.1.7",
"@types/react": "^16.3.5",
"@types/react-dom": "^16.0.4",
"@types/react-router": "^4.0.23",
"@types/react-router-dom": "^4.2.6",
"@types/shelljs": "^0.7.8",
"node-sass": "^4.8.3",
"parcel-bundler": "^1.7.0",
"parcel-plugin-react-markdown": "0.0.1",
"shelljs": "^0.8.1",
"typescript": "^2.8.1"
},
"dependencies": {
"flux": "^3.1.3",
"immutable": "^3.8.2",
"react": "^16.3.1",
"react-dom": "^16.3.1",
"react-router": "^4.2.0",
"react-router-dom": "^4.2.2"
}
}
```
I'm importing a font via a scss file. Specifically:
```
@font-face {
font-family: "Besley Medium";
src: url("./static/Besley-Medium.otf") format("opentype");
}
body {
font-family: "Besley Medium", serif;
}
```
### 🤔 Expected Behavior
I expect that the font will load successfully when bundled.
### 😯 Current Behavior
I'm receiving the following runtime error: `Error: Cannot find module 'Besley-Medium.cf0cb4dc.otf,11'`
### 🔦 Context
You can clone my repository [here](https://github.com/bgourlie/wtfloat)
`npm install && npm run dev`
Open the development server in the browser and look at the console.
### 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.7.0
| Node | 8.9.4
| npm/Yarn | 5.6.0
| Operating System | Windows 10 x64
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
process
|
font url import fails to resolve at runtime this a 🐛 bug report 🎛 configuration babelrc package json cli command tsconfig json js compileroptions strict true target jsx react package json name wtfloat version description an interactive guide to floating point numbers scripts dev parcel src root html dev debug node node debug option node modules parcel bundler bin cli js src root html test echo error no test specified exit keywords author w brian gourlie license mit devdependencies types flux types react types react dom types react router types react router dom types shelljs node sass parcel bundler parcel plugin react markdown shelljs typescript dependencies flux immutable react react dom react router react router dom i m importing a font via a scss file specifically font face font family besley medium src url static besley medium otf format opentype body font family besley medium serif 🤔 expected behavior i expect that the font will load successfully when bundled 😯 current behavior i m receiving the following runtime error error cannot find module besley medium otf 🔦 context you can clone my repository npm install npm run dev open the development server in the browser and look at the console 🌍 your environment software version s parcel node npm yarn operating system windows love parcel please consider supporting our collective 👉
| 1
|
30,410
| 13,240,968,704
|
IssuesEvent
|
2020-08-19 07:24:27
|
Azure/azure-powershell
|
https://api.github.com/repos/Azure/azure-powershell
|
closed
|
$Resource.Properties.remoteDebuggingEnabled command changing value of previous execution
|
App Services Service Attention customer-reported question
|
<!--
- Make sure you are able to reproduce this issue on the latest released version of Az
- https://www.powershellgallery.com/packages/Az
- Please search the existing issues to see if there has been a similar issue filed
- For issue related to importing a module, please refer to our troubleshooting guide:
- https://github.com/Azure/azure-powershell/blob/master/documentation/troubleshoot-module-load.md
-->
## Description
I have a simple script to set $Resource.Properties.remoteDebuggingEnabled to true and false as need be (I had to simplify my script to get to the root problem)
I noticed that when I run one command against a web app, it changes the value of the previous execution I ran.
**For instance**
- Remote debugging for Webapp A is set to true
- When I run command to turn remote debugging for web app B to true it does that but sets remote debugging of Webapp A back to false which is quite strange 'cos I am not talking to WebApp A any longer.
Sample script below. (Kindly run a repro)- create 5 web apps in a resource group
$resourceGroupName = "40-web"
$appServiceName = "seun1"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01"
$resourceGroupName = "40-web"
$appServiceName = "seun2"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun3"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun4"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun5"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01"
## Steps to reproduce
Sample script above. I initially had a for each loop to do the job but had to remove it to understand the problem after several hours.
## Environment data
<!-- Please run $PSVersionTable and paste the output in the below code block
If running the Docker container image, indicate the tag of the image used and the version of Docker engine-->
Name Value
---- -----
PSVersion 5.1.17763.1007
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.17763.1007
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
## Module versions
<!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block -->
It's a lot so I pasted the important ones.
Script 1.11.0 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServicePlan, Remove-AzAppServicePlan...}
Script 1.5.1 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServicePlan, Remove-AzAppServicePlan...}
## Debug output
<!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block -->
```
```
## Error output
<!-- Please run Resolve-AzError and paste the output in the below code block -->
No error for commands pasted
Kindly assist.
|
2.0
|
$Resource.Properties.remoteDebuggingEnabled command changing value of previous execution - <!--
- Make sure you are able to reproduce this issue on the latest released version of Az
- https://www.powershellgallery.com/packages/Az
- Please search the existing issues to see if there has been a similar issue filed
- For issue related to importing a module, please refer to our troubleshooting guide:
- https://github.com/Azure/azure-powershell/blob/master/documentation/troubleshoot-module-load.md
-->
## Description
I have a simple script to set $Resource.Properties.remoteDebuggingEnabled to true and false as need be (I had to simplify my script to get to the root problem)
I noticed that when I run one command against a web app, it changes the value of the previous execution I ran.
**For instance**
- Remote debugging for Webapp A is set to true
- When I run command to turn remote debugging for web app B to true it does that but sets remote debugging of Webapp A back to false which is quite strange 'cos I am not talking to WebApp A any longer.
Sample script below. (Kindly run a repro)- create 5 web apps in a resource group
$resourceGroupName = "40-web"
$appServiceName = "seun1"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01"
$resourceGroupName = "40-web"
$appServiceName = "seun2"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun3"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun4"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01" -Force
$resourceGroupName = "40-web"
$appServiceName = "seun5"
$Resource = Get-AzResource -ResourceGroupName $resourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName $appServiceName -ApiVersion "2018-02-01"
$Resource.Properties.remoteDebuggingEnabled = "True"
$Resource | Set-AzResource -ApiVersion "2018-02-01"
## Steps to reproduce
Sample script above. I initially had a for each loop to do the job but had to remove it to understand the problem after several hours.
## Environment data
<!-- Please run $PSVersionTable and paste the output in the below code block
If running the Docker container image, indicate the tag of the image used and the version of Docker engine-->
Name Value
---- -----
PSVersion 5.1.17763.1007
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.17763.1007
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
## Module versions
<!-- Please run (Get-Module -ListAvailable) and paste the output in the below code block -->
It's a lot so I pasted the important ones.
Script 1.11.0 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServicePlan, Remove-AzAppServicePlan...}
Script 1.5.1 Az.Websites {Get-AzAppServicePlan, Set-AzAppServicePlan, New-AzAppServicePlan, Remove-AzAppServicePlan...}
## Debug output
<!-- Set $DebugPreference='Continue' before running the repro and paste the resulting debug stream in the below code block -->
```
```
## Error output
<!-- Please run Resolve-AzError and paste the output in the below code block -->
No error for commands pasted
Kindly assist.
|
non_process
|
resource properties remotedebuggingenabled command changing value of previous execution make sure you are able to reproduce this issue on the latest released version of az please search the existing issues to see if there has been a similar issue filed for issue related to importing a module please refer to our troubleshooting guide description i have a simple script to set resource properties remotedebuggingenabled to true and false as need be i had to simplify my script to get to the root problem i noticed that when i run one command against a web app it changes the value of the previous execution i ran for instance remote debugging for webapp a is set to true when i run command to turn remote debugging for web app b to true it does that but sets remote debugging of webapp a back to false which is quite strange cos i am not talking to webapp a any longer sample script below kindly run a repro create web apps in a resource group resourcegroupname web appservicename resource get azresource resourcegroupname resourcegroupname resourcetype microsoft web sites config resourcename appservicename apiversion resource properties remotedebuggingenabled true resource set azresource apiversion resourcegroupname web appservicename resource get azresource resourcegroupname resourcegroupname resourcetype microsoft web sites config resourcename appservicename apiversion resource properties remotedebuggingenabled true resource set azresource apiversion force resourcegroupname web appservicename resource get azresource resourcegroupname resourcegroupname resourcetype microsoft web sites config resourcename appservicename apiversion resource properties remotedebuggingenabled true resource set azresource apiversion force resourcegroupname web appservicename resource get azresource resourcegroupname resourcegroupname resourcetype microsoft web sites config resourcename appservicename apiversion resource properties remotedebuggingenabled true resource set azresource apiversion force 
resourcegroupname web appservicename resource get azresource resourcegroupname resourcegroupname resourcetype microsoft web sites config resourcename appservicename apiversion resource properties remotedebuggingenabled true resource set azresource apiversion steps to reproduce sample script above i initially had a for each loop to do the job but had to remove it to understand the problem after several hours environment data please run psversiontable and paste the output in the below code block if running the docker container image indicate the tag of the image used and the version of docker engine name value psversion psedition desktop pscompatibleversions buildversion clrversion wsmanstackversion psremotingprotocolversion serializationversion module versions it s a lot so i pasted the important ones script az websites get azappserviceplan set azappserviceplan new azappserviceplan remove azappserviceplan script az websites get azappserviceplan set azappserviceplan new azappserviceplan remove azappserviceplan debug output error output no error for commands pasted kindly assist
| 0
|
1,697
| 4,346,240,036
|
IssuesEvent
|
2016-07-29 15:21:57
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
reopened
|
Make OSF integration
|
3. In Development Processors
|
# Goal
Export data to OSF.
# Analysis
JamDB is a schema-less, immutable database that can optionally enforce a schema and stores provenance. It supports efficient full-text search, filtering by nested keys, and is accessible via a REST API.
http://jamdb.readthedocs.io/en/latest/
We need to export our trials data to OSF using their API.
# Development
- [x] read and understand OSF documentation
- [x] write code for reading all trials with related entities from `database` (like the API returns for the `trials/<id>` endpoint)
- [x] write code for writing trials to OSF using batch requests
Estimated time: 4d
|
1.0
|
Make OSF integration - # Goal
Export data to OSF.
# Analysis
JamDB is a schema-less, immutable database that can optionally enforce a schema and stores provenance. It supports efficient full-text search, filtering by nested keys, and is accessible via a REST API.
http://jamdb.readthedocs.io/en/latest/
We need to export our trials data to OSF using their API.
# Development
- [x] read and understand OSF documentation
- [x] write code for reading all trials with related entities from `database` (like the API returns for the `trials/<id>` endpoint)
- [x] write code for writing trials to OSF using batch requests
Estimated time: 4d
|
process
|
make osf integration goal export data to osf analysis jamdb is a schema less immutable database that can optionally enforce a schema and stores provenance it supports efficient full text search filtering by nested keys and is accessible a rest api we need to export our trials data to osf using their api development read and understand osf documentation write code for reading all trials with related entities from database like api returns for trials endpoint write code for writing trials to osf using batch requests estimated time
| 1
|
4,143
| 4,932,035,949
|
IssuesEvent
|
2016-11-28 12:17:37
|
idno/Known
|
https://api.github.com/repos/idno/Known
|
closed
|
.htaccess should block everything by default and only allow through requests to "the right things"
|
New feature Security
|
should I be able to request arbitrary .php files under Idno*, external or templates?
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/1246844-htaccess-should-block-everything-by-default-and-only-allow-through-requests-to-the-right-things?utm_campaign=plugin&utm_content=tracker%2F300028&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F300028&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
True
|
.htaccess should block everything by default and only allow through requests to "the right things" - should I be able to request arbitrary .php files under Idno*, external or templates?
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/1246844-htaccess-should-block-everything-by-default-and-only-allow-through-requests-to-the-right-things?utm_campaign=plugin&utm_content=tracker%2F300028&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F300028&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_process
|
htaccess should block everything by default and only allow through requests to the right things should i be able to request arbitrary php files under idno external or templates want to back this issue we accept bounties via
| 0
|
131,762
| 12,489,821,444
|
IssuesEvent
|
2020-05-31 20:41:41
|
edwardtheharris/machines-wat-learn-good
|
https://api.github.com/repos/edwardtheharris/machines-wat-learn-good
|
closed
|
Regression - Intro and Data
|
documentation
|
Welcome to the introduction to the regression section of the Machine Learning with Python tutorial series. By this point, you should have Scikit-Learn already installed. If not, get it, along with Pandas and matplotlib!
If you have a pre-compiled scientific distribution of Python like ActivePython from our sponsor, you should already have numpy, scipy, scikit-learn, matplotlib, and pandas installed. If not, do:
```bash
pip install numpy scipy scikit-learn matplotlib pandas
```
Along with those tutorial-wide imports, we're also going to be making use of Quandl here, which you may need to separately install, with:
```bash
pip install quandl
```
I will note again in the first part of the code, but the Quandl module used to be imported with an upper-case Q, but is now imported with a lower-cased q. In the video and sample codes, it is upper-cased.
To begin, what is regression in terms of us using it with machine learning? The goal is to take continuous data, find the equation that best fits the data, and be able to forecast out a specific value. With simple linear regression, you are just simply doing this by creating a best-fit line:

From here, we can use the equation of that line to forecast out into the future, where the 'date' is the x-axis, what the price will be.
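As a toy sketch of that idea (made-up numbers, not data from this series), you can fit the best-fit line with NumPy's `polyfit` and extend the line one step past the data:

```python
import numpy as np

# Hypothetical points that roughly follow y = 2x + 1
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([1.0, 3.1, 4.9, 7.2, 9.0])

m, b = np.polyfit(x, y, 1)  # least-squares slope and intercept
forecast = m * 5 + b        # the line's value one step beyond the data
```

Here `forecast` is just the fitted line evaluated at the next x value, which is exactly what we will do with dates on the x-axis.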
A popular use with regression is to predict stock prices. This is done because we are considering the fluidity of price over time, and attempting to forecast the next fluid price in the future using a continuous dataset.
Regression is a form of supervised machine learning, which is where the scientist teaches the machine by presenting features and then presenting the correct answer, over and over, to teach the machine. Once the machine is taught, the scientist will usually "test" the machine on some unseen data, where the scientist still knows what the correct answer is, but the machine doesn't. The machine's answers are compared to the known answers, and the machine's accuracy can be measured. If the accuracy is high enough, the scientist may consider actually employing the algorithm in the real world.
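That teach-then-test loop can be sketched with plain NumPy on synthetic data (hypothetical numbers, not the tutorial's dataset): fit on points whose answers the machine sees, then score on held-out points whose answers we still know.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(50, dtype=float)
y = 3.0 * x + 7.0 + rng.normal(scale=0.5, size=50)  # noisy line

x_train, y_train = x[:40], y[:40]  # the machine is "taught" on these
x_test, y_test = x[40:], y[40:]    # then tested on unseen points

m, b = np.polyfit(x_train, y_train, 1)
pred = m * x_test + b

# R^2 on the held-out data: near 1.0 means the fit generalized well
ss_res = np.sum((y_test - pred) ** 2)
ss_tot = np.sum((y_test - y_test.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

If `r2` were high enough, that is the point at which, as described above, you might consider employing the model on truly unknown data.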
Since regression is so popularly used with stock prices, we can start there with an example. To begin, we need data. Sometimes the data is easy to acquire, and sometimes you have to go out and scrape it together, like what we did in an older tutorial series using machine learning with stock fundamentals for investing. In our case, we're able to at least start with simple stock price and volume information from Quandl. To begin, we'll start with data that grabs the stock price for Alphabet (previously Google), with the ticker of GOOGL:
```python
import pandas as pd
import Quandl
df = Quandl.get("WIKI/GOOGL")
print(df.head())
# Note: when filmed, Quandl's module was referenced with an upper-case Q, now it is a lower-case q, so
import quandl
```
At this point, we have:
| Date | Open | High | Low | Close | Volume | Ex-Dividend |
|------------|--------|--------|--------|--------|----------|-------------|
| 2004-08-19 | 100.00 | 104.06 | 95.96 | 100.34 | 44659000 | 0 |
| 2004-08-20 | 101.01 | 109.08 | 100.50 | 108.31 | 22834300 | 0 |
| 2004-08-23 | 110.75 | 113.48 | 109.05 | 109.40 | 18256100 | 0 |
| 2004-08-24 | 111.24 | 111.60 | 103.57 | 104.87 | 15247300 | 0 |
| 2004-08-25 | 104.96 | 108.00 | 103.88 | 106.00 | 9188600 | 0 |
| Date | Split Ratio | Adj. Open | Adj. High | Adj. Low | Adj. Close |
| ---------- | ----------- | --------- | --------- | -------- | ---------- |
| 2004-08-19 | 1 | 50.000 | 52.03 | 47.980 | 50.170 |
| 2004-08-20 | 1 | 50.505 | 54.54 | 50.250 | 54.155 |
| 2004-08-23 | 1 | 55.375 | 56.74 | 54.525 | 54.700 |
| 2004-08-24 | 1 | 55.620 | 55.80 | 51.785 | 52.435 |
| 2004-08-25 | 1 | 52.480 | 54.00 | 51.940 | 53.000 |
| Date | Adj. Volume |
| ---------- | ----------- |
| 2004-08-19 | 44659000 |
| 2004-08-20 | 22834300 |
| 2004-08-23 | 18256100 |
| 2004-08-24 | 15247300 |
| 2004-08-25 | 9188600 |
Awesome, off to a good start, we have the data, but maybe a bit much. To reference the intro, there exists an entire machine learning category that aims to reduce the amount of input that we process. In our case, we have quite a few columns, many are redundant, a couple don't really change. We can most likely agree that having both the regular columns and adjusted columns is redundant. Adjusted columns are the most ideal ones. Regular columns here are prices on the day, but stocks have things called stock splits, where suddenly 1 share becomes something like 2 shares, thus the value of a share is halved, but the value of the company has not halved. Adjusted columns are adjusted for stock splits over time, which makes them more reliable for doing analysis.
Thus, let's go ahead and pare down our original dataframe a bit:
```python
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close', 'Adj. Volume']]
```
Now we just have the adjusted columns, and the volume column. A couple major points to make here. Many people talk about or hear about machine learning as if it is some sort of dark art that somehow generates value from nothing. Machine learning can highlight value if it is there, but it has to actually be there. You need meaningful data. So how do you know if you have meaningful data? My best suggestion is to just simply use your brain. Think about it. Are historical prices indicative of future prices? Some people think so, but this has been continually disproven over time. What about historical patterns? This has a bit more merit when taken to the extremes (which machine learning can help with), but is overall fairly weak. What about the relationship between price changes and volume over time, along with historical patterns? Probably a bit better. So, as you can already see, it is not the case that the more data the merrier, but we instead want to use useful data. At the same time, raw data sometimes should be transformed.
Consider daily volatility, such as with the high minus low % change? How about daily percent change? Would you consider data that is simply the Open, High, Low, Close or data that is the Close, Spread/Volatility, %change daily to be better? I would expect the latter to be more ideal. The former is all very similar data points. The latter is created based on identical data from the former, but it brings far more valuable information to the table.
Thus, not all of the data you have is useful, and sometimes you need to do further manipulation on your data to make it even more valuable before feeding it through a machine learning algorithm. Let's go ahead and transform our data next:
```python
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
```
I went ahead and recorded the video version of this, not realizing my mistake: it was high minus low divided by close. I meant to do High - Low, divided by the low. Feel free to fix that if you like.
This creates a new column that is the % spread based on the closing price, which is our crude measure of volatility. Next, we'll do daily percent change:
```python
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0
```
Now we will define a new dataframe as:
```python
df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df.head())
```
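As a quick sanity check of the two derived features, here is the same arithmetic on a single row of toy prices (made-up numbers, not the GOOGL data): an 11-point high-low spread on a 105 close is about a 10.48% spread, and closing at 105 after opening at 100 is a 5% daily change.

```python
import pandas as pd

# One row of hypothetical prices
df = pd.DataFrame({
    'Adj. Open':   [100.0],
    'Adj. High':   [110.0],
    'Adj. Low':    [99.0],
    'Adj. Close':  [105.0],
    'Adj. Volume': [1_000_000],
})

# crude volatility: high-low spread as a percent of the close
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
# daily percent change from open to close
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0

df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
```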
|
1.0
|
Regression - Intro and Data - Welcome to the introduction to the regression section of the Machine Learning with Python tutorial series. By this point, you should have Scikit-Learn already installed. If not, get it, along with Pandas and matplotlib!
If you have a pre-compiled scientific distribution of Python like ActivePython from our sponsor, you should already have numpy, scipy, scikit-learn, matplotlib, and pandas installed. If not, do:
```bash
pip install numpy scipy scikit-learn matplotlib pandas
```
Along with those tutorial-wide imports, we're also going to be making use of Quandl here, which you may need to separately install, with:
```bash
pip install quandl
```
I will note this again in the first part of the code: the Quandl module used to be imported with an upper-case Q, but is now imported with a lower-case q. In the video and sample code, it is upper-cased.
To begin, what is regression in terms of us using it with machine learning? The goal is to take continuous data, find the equation that best fits the data, and be able to forecast out a specific value. With simple linear regression, you are just simply doing this by creating a best-fit line:

From here, we can use the equation of that line to forecast out into the future, where the 'date' is the x-axis, what the price will be.
A popular use with regression is to predict stock prices. This is done because we are considering the fluidity of price over time, and attempting to forecast the next fluid price in the future using a continuous dataset.
Regression is a form of supervised machine learning, which is where the scientist teaches the machine by presenting features and then presenting the correct answer, over and over, to teach the machine. Once the machine is taught, the scientist will usually "test" the machine on some unseen data, where the scientist still knows what the correct answer is, but the machine doesn't. The machine's answers are compared to the known answers, and the machine's accuracy can be measured. If the accuracy is high enough, the scientist may consider actually employing the algorithm in the real world.
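That teach-then-test workflow can be sketched with scikit-learn. The data here is made up (noise around a line, not stock prices), and the 80/20 split and tolerance are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Made-up continuous data: y is roughly 3x plus a little noise
rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = 3 * X.ravel() + 0.1 * rng.randn(100)

# Hold out unseen data so the "test" answers are known to us, not the machine
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LinearRegression()
clf.fit(X_train, y_train)             # teach with features and correct answers
accuracy = clf.score(X_test, y_test)  # compare predictions against known answers (R^2)
print(accuracy)
```

If the score on the held-out data is high enough, you might consider using the model on genuinely new data.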
Since regression is so popularly used with stock prices, we can start there with an example. To begin, we need data. Sometimes the data is easy to acquire, and sometimes you have to go out and scrape it together, like what we did in an older tutorial series using machine learning with stock fundamentals for investing. In our case, we're able to at least start with simple stock price and volume information from Quandl. To begin, we'll start with data that grabs the stock price for Alphabet (previously Google), with the ticker of GOOGL:
```python
import pandas as pd
import Quandl
df = Quandl.get("WIKI/GOOGL")
print(df.head())
# Note: when filmed, Quandl's module was referenced with an upper-case Q, now it is a lower-case q, so
import quandl
```
At this point, we have:
| Date | Open | High | Low | Close | Volume | Ex-Dividend |
|------------|--------|--------|--------|--------|----------|-------------|
| 2004-08-19 | 100.00 | 104.06 | 95.96 | 100.34 | 44659000 | 0 |
| 2004-08-20 | 101.01 | 109.08 | 100.50 | 108.31 | 22834300 | 0 |
| 2004-08-23 | 110.75 | 113.48 | 109.05 | 109.40 | 18256100 | 0 |
| 2004-08-24 | 111.24 | 111.60 | 103.57 | 104.87 | 15247300 | 0 |
| 2004-08-25 | 104.96 | 108.00 | 103.88 | 106.00 | 9188600 | 0 |
| Date | Split Ratio | Adj. Open | Adj. High | Adj. Low | Adj. Close |
| ---------- | ----------- | --------- | --------- | -------- | ---------- |
| 2004-08-19 | 1 | 50.000 | 52.03 | 47.980 | 50.170 |
| 2004-08-20 | 1 | 50.505 | 54.54 | 50.250 | 54.155 |
| 2004-08-23 | 1 | 55.375 | 56.74 | 54.525 | 54.700 |
| 2004-08-24 | 1 | 55.620 | 55.80 | 51.785 | 52.435 |
| 2004-08-25 | 1 | 52.480 | 54.00 | 51.940 | 53.000 |
| Date | Adj. Volume |
| ---------- | ----------- |
| 2004-08-19 | 44659000 |
| 2004-08-20 | 22834300 |
| 2004-08-23 | 18256100 |
| 2004-08-24 | 15247300 |
| 2004-08-25 | 9188600 |
Awesome, off to a good start, we have the data, but maybe a bit much. To reference the intro, there exists an entire machine learning category that aims to reduce the amount of input that we process. In our case, we have quite a few columns, many are redundant, a couple don't really change. We can most likely agree that having both the regular columns and adjusted columns is redundant. Adjusted columns are the most ideal ones. Regular columns here are prices on the day, but stocks have things called stock splits, where suddenly 1 share becomes something like 2 shares, thus the value of a share is halved, but the value of the company has not halved. Adjusted columns are adjusted for stock splits over time, which makes them more reliable for doing analysis.
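As a toy illustration of why the adjusted columns matter, consider a hypothetical 2-for-1 split (the numbers are invented):

```python
# Raw close halves overnight after a 2-for-1 split -- it looks like a 50% crash
raw_close = [100.0, 50.0]
split_factor = 2  # each old share became two new shares

# Back-adjust the pre-split price by the split factor
adj_close = [raw_close[0] / split_factor, raw_close[1]]
print(adj_close)  # no phantom crash in the adjusted series
```

The company's value did not halve, so the adjusted series is the one you want a model learning from.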
Thus, let's go ahead and pare down our original dataframe a bit:
```python
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close', 'Adj. Volume']]
```
Now we just have the adjusted columns, and the volume column. A couple major points to make here. Many people talk about or hear about machine learning as if it is some sort of dark art that somehow generates value from nothing. Machine learning can highlight value if it is there, but it has to actually be there. You need meaningful data. So how do you know if you have meaningful data? My best suggestion is to just simply use your brain. Think about it. Are historical prices indicative of future prices? Some people think so, but this has been continually disproven over time. What about historical patterns? This has a bit more merit when taken to the extremes (which machine learning can help with), but is overall fairly weak. What about the relationship between price changes and volume over time, along with historical patterns? Probably a bit better. So, as you can already see, it is not the case that the more data the merrier, but we instead want to use useful data. At the same time, raw data sometimes should be transformed.
Consider daily volatility, such as the high minus low percent change, or the daily percent change. Would you consider data that is simply the Open, High, Low, and Close, or data that is the Close, spread/volatility, and daily percent change, to be better? I would expect the latter to be more ideal. The former is all very similar data points. The latter is created from the identical data of the former, but it brings far more valuable information to the table.
Thus, not all of the data you have is useful, and sometimes you need to do further manipulation on your data to make it even more valuable before feeding it through a machine learning algorithm. Let's go ahead and transform our data next:
```python
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
```
I went ahead and recorded the video version of this, not realizing my mistake: the calculation divides high minus low by the close. I meant to do high minus low, divided by the low. Feel free to fix that if you like.
This creates a new column that is the % spread based on the closing price, which is our crude measure of volatility. Next, we'll do daily percent change:
```python
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0
```
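As mentioned above, the intended spread divides by the low rather than the close. A tiny sketch of that intended formula, with hypothetical prices:

```python
def hl_pct(high, low):
    """Daily spread as a percent of the low -- the intended formula."""
    return (high - low) / low * 100.0

# Hypothetical day: high of 104.06, low of 95.96
print(hl_pct(104.06, 95.96))
```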
Now we will define a new dataframe as:
```python
df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df.head())
```
|
non_process
|
regression intro and data welcome to the introduction to the regression section of the machine learning with python tutorial series by this point you should have scikit learn already installed if not get it along with pandas and matplotlib if you have a pre compiled scientific distribution of python like activepython from our sponsor you should already have numpy scipy scikit learn matplotlib and pandas installed if not do bash pip install numpy scipy scikit learn matplotlib pandas along with those tutorial wide imports we re also going to be making use of quandl here which you may need to separately install with bash pip install quandl i will note again in the first part of the code but the quandl module used to be imported with an upper case q but is now imported with a lower cased q in the video and sample codes it is upper cased to begin what is regression in terms of us using it with machine learning the goal is to take continuous data find the equation that best fits the data and be able forecast out a specific value with simple linear regression you are just simply doing this by creating a best fit line from here we can use the equation of that line to forecast out into the future where the date is the x axis what the price will be a popular use with regression is to predict stock prices this is done because we are considering the fluidity of price over time and attempting to forecast the next fluid price in the future using a continuous dataset regression is a form of supervised machine learning which is where the scientist teaches the machine by presenting features and then presenting the correct answer over and over to teach the machine once the machine is taught the scientist will usually test the machine on some unseen data where the scientist still knows what the correct answer is but the machine doesn t the machine s answers are compared to the known answers and the machine s accuracy can be measured if the accuracy is high enough the scientist may 
consider actually employing the algorithm in the real world since regression is so popularly used with stock prices we can start there with an example to begin we need data sometimes the data is easy to acquire and sometimes you have to go out and scrape it together like what we did in an older tutorial series using machine learning with stock fundamentals for investing in our case we re able to at least start with simple stock price and volume information from quandl to begin we ll start with data that grabs the stock price for alphabet previously google with the ticker of googl python import pandas as pd import quandl df quandl get wiki googl print df head note when filmed quandl s module was referenced with an upper case q now it is a lower case q so import quandl at this point we have date open high low close volume ex dividend date split ratio adj open adj high adj low adj close date adj volume awesome off to a good start we have the data but maybe a bit much to reference the intro there exists an entire machine learning category that aims to reduce the amount of input that we process in our case we have quite a few columns many are redundant a couple don t really change we can most likely agree that having both the regular columns and adjusted columns is redundant adjusted columns are the most ideal ones regular columns here are prices on the day but stocks have things called stock splits where suddenly share becomes something like shares thus the value of a share is halved but the value of the company has not halved adjusted columns are adjusted for stock splits over time which makes them more reliable for doing analysis thus let s go ahead and pair down our original dataframe a bit python df df now we just have the adjusted columns and the volume column a couple major points to make here many people talk about or hear about machine learning as if it is some sort of dark art that somehow generates value from nothing machine learning can highlight value if it 
is there but it has to actually be there you need meaningful data so how do you know if you have meaningful data my best suggestion is to just simply use your brain think about it are historical prices indicative of future prices some people think so but this has been continually disproven over time what about historical patterns this has a bit more merit when taken to the extremes which machine learning can help with but is overall fairly weak what about the relationship between price changes and volume over time along with historical patterns probably a bit better so as you can already see it is not the case that the more data the merrier but we instead want to use useful data at the same time raw data sometimes should be transformed consider daily volatility such as with the high minus low change how about daily percent change would you consider data that is simply the open high low close or data that is the close spread volatility change daily to be better i would expect the latter to be more ideal the former is all very similar data points the latter is created based on identical data from the former but it brings far more valuable information to the table thus not all of the data you have is useful and sometimes you need to do further manipulation on your data to make it even more valuable before feeding it through a machine learning algorithm let s go ahead and transform our data next python df df df df i went ahead and recorded the video version of this not realizing my stake that it was high minus low divided by close i meant to do high low divided by the low feel free to fix that if you like this creates a new column that is the spread based on the closing price which is our crude measure of volatility next we ll do daily percent change python df df df df now we will define a new dataframe as python df df print df head
| 0
|
7,878
| 19,761,013,736
|
IssuesEvent
|
2022-01-16 12:17:30
|
graphhopper/graphhopper
|
https://api.github.com/repos/graphhopper/graphhopper
|
closed
|
Check if moving TestAlgoCollector into test package of osm module is possible
|
improvement architecture
|
Then we could also use the TranslationMapTest.SINGLETON instead of the newly created instance.
|
1.0
|
Check if moving TestAlgoCollector into test package of osm module is possible - Then we could also use the TranslationMapTest.SINGLETON instead of the newly created instance.
|
non_process
|
check if moving testalgocollector into test package of osm module is possible then we could also use the translationmaptest singleton instead of the newly created instance
| 0
|
10,588
| 13,397,737,001
|
IssuesEvent
|
2020-09-03 12:06:14
|
panther-labs/panther
|
https://api.github.com/repos/panther-labs/panther
|
opened
|
Classifier fails to distinguish between Gitlab.API logs and Gitlab.Production logs
|
bug p1 team:data processing
|
### Describe the bug
Depending on the state of the classifier queue Gitlab.API logs can end up into the Gitlab.Production table
### Steps to reproduce
Not easy to reproduce with certainty because of the randomness in the classifier queue
### Expected behavior
There should be a distinction between the two
|
1.0
|
Classifier fails to distinguish between Gitlab.API logs and Gitlab.Production logs - ### Describe the bug
Depending on the state of the classifier queue Gitlab.API logs can end up into the Gitlab.Production table
### Steps to reproduce
Not easy to reproduce with certainty because of the randomness in the classifier queue
### Expected behavior
There should be a distinction between the two
|
process
|
classifier fails to distinguish between gitlab api logs and gitlab production logs describe the bug depending on the state of the classifier queue gitlab api logs can end up into the gitlab production table steps to reproduce not easy to reproduce with certainty because of the randomness in the classifier queue expected behavior there should be a distinction between the two
| 1
|
10,523
| 13,305,329,519
|
IssuesEvent
|
2020-08-25 18:21:49
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Implement an oncall rotation for Knative
|
area/productivity area/test-and-release epic kind/process
|
<!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
<!--
## In what area(s)?
Remove the '> ' to select
/area API
> /area autoscale
> /area build
> /area monitoring
> /area networking
> /area test-and-release
Other classifications:
> /kind good-first-issue
/kind process
> /kind spec
-->
## Describe the feature
As proposed in https://docs.google.com/document/d/1woxDpdw6TrwwF-Y987OYVUFrSgGpyBiR3U8ehf0BX9o/edit , we would like to implement a rotation to spread the load of user supports & fighting test flakiness.
This bug track the implementation of the linked proposal.
|
1.0
|
Implement an oncall rotation for Knative - <!-- If you need to report a security issue with Knative, send an email to knative-security@googlegroups.com. -->
<!--
## In what area(s)?
Remove the '> ' to select
/area API
> /area autoscale
> /area build
> /area monitoring
> /area networking
> /area test-and-release
Other classifications:
> /kind good-first-issue
/kind process
> /kind spec
-->
## Describe the feature
As proposed in https://docs.google.com/document/d/1woxDpdw6TrwwF-Y987OYVUFrSgGpyBiR3U8ehf0BX9o/edit , we would like to implement a rotation to spread the load of user supports & fighting test flakiness.
This bug track the implementation of the linked proposal.
|
process
|
implement an oncall rotation for knative in what area s remove the to select area api area autoscale area build area monitoring area networking area test and release other classifications kind good first issue kind process kind spec describe the feature as proposed in we would like to implement a rotation to spread the load of user supports fighting test flakiness this bug track the implementation of the linked proposal
| 1
|
34,554
| 7,453,901,911
|
IssuesEvent
|
2018-03-29 13:32:23
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
Importeri vead
|
P: highest R: fixed T: defect
|
**Reported by sven syld on 9 Sep 2013 06:55 UTC**
1) admin_unit/manor/manor/1 juures tekib viga:
''
[ SQLSTATE[23505](PDOException]
): Unique violation: 7 ERROR: duplicate key value violates unique constraint "x_admin_unit_pkey"
DETAIL: Key (id)=(11136502) already exists.
''
Tabel on enne import tühjaks tehtud. au_id=11136502 esineb jah csv-s mitu korda. Kuidas see nüüd tekkida saab? Kas see võib olla seoses du.seq välja muutustega?
2) maps/description_unit annab sellise vea, see on peaaegu kindlalt seq väljast tingitud:
Unable to execute INSERT statement [INTO "dira"."description_unit" ("id", "root_id", "parent_id", "lft", "rgt", "lvl", "level_id", "visibility_id
", "ref_code", "fns", "fns_archival_file_code", "fns_archival_item_code", "seq", "archive_unit_id", "created_at", "modified_at", "import_id") VALUES (:p
0, :p1, :p2, :p3, :p4, :p5, :p6, :p7, :p8, :p9, :p10, :p11, :p12, :p13, :p14, :p15, :p16)](INSERT) [SQLSTATE[22001](wrapped:): String data, right truncated: 7 ER
ROR: value too long for type character varying(100)]
|
1.0
|
Importeri vead - **Reported by sven syld on 9 Sep 2013 06:55 UTC**
1) admin_unit/manor/manor/1 juures tekib viga:
''
[ SQLSTATE[23505](PDOException]
): Unique violation: 7 ERROR: duplicate key value violates unique constraint "x_admin_unit_pkey"
DETAIL: Key (id)=(11136502) already exists.
''
Tabel on enne import tühjaks tehtud. au_id=11136502 esineb jah csv-s mitu korda. Kuidas see nüüd tekkida saab? Kas see võib olla seoses du.seq välja muutustega?
2) maps/description_unit annab sellise vea, see on peaaegu kindlalt seq väljast tingitud:
Unable to execute INSERT statement [INTO "dira"."description_unit" ("id", "root_id", "parent_id", "lft", "rgt", "lvl", "level_id", "visibility_id
", "ref_code", "fns", "fns_archival_file_code", "fns_archival_item_code", "seq", "archive_unit_id", "created_at", "modified_at", "import_id") VALUES (:p
0, :p1, :p2, :p3, :p4, :p5, :p6, :p7, :p8, :p9, :p10, :p11, :p12, :p13, :p14, :p15, :p16)](INSERT) [SQLSTATE[22001](wrapped:): String data, right truncated: 7 ER
ROR: value too long for type character varying(100)]
|
non_process
|
importeri vead reported by sven syld on sep utc admin unit manor manor juures tekib viga pdoexception unique violation error duplicate key value violates unique constraint x admin unit pkey detail key id already exists tabel on enne import tühjaks tehtud au id esineb jah csv s mitu korda kuidas see nüüd tekkida saab kas see võib olla seoses du seq välja muutustega maps description unit annab sellise vea see on peaaegu kindlalt seq väljast tingitud unable to execute insert statement into dira description unit id root id parent id lft rgt lvl level id visibility id ref code fns fns archival file code fns archival item code seq archive unit id created at modified at import id values p insert wrapped string data right truncated er ror value too long for type character varying
| 0
|
51,187
| 13,624,574,014
|
IssuesEvent
|
2020-09-24 08:16:01
|
Techini/juice-shop
|
https://api.github.com/repos/Techini/juice-shop
|
opened
|
CVE-2020-24977 (Medium) detected in gettextv0.20
|
security vulnerability
|
## CVE-2020-24977 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gettextv0.20</b></p></summary>
<p>
<p>git://git.savannah.gnu.org/gettext.git </p>
<p>Library home page: <a href=https://github.com/autotools-mirror/gettext.git>https://github.com/autotools-mirror/gettext.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Techini/juice-shop/commit/1bdb53d1aa51e161c311aa585161e27782583fd1">1bdb53d1aa51e161c311aa585161e27782583fd1</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>juice-shop/node_modules/libxmljs2/vendor/libxml/xmlschemastypes.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
GNOME project libxml2 v2.9.10 and earlier have a global buffer over-read vulnerability in xmlEncodeEntitiesInternal at libxml2/entities.c. The issue has been fixed in commit 8e7c20a1 (20910-GITv2.9.10-103-g8e7c20a1).
<p>Publish Date: 2020-09-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24977>CVE-2020-24977</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/GNOME/libxml2/commit/8e7c20a1af8776677d7890f30b7a180567701a49">https://github.com/GNOME/libxml2/commit/8e7c20a1af8776677d7890f30b7a180567701a49</a></p>
<p>Release Date: 2020-08-03</p>
<p>Fix Resolution: Replace or update the following file: xmlschemastypes.c</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-24977 (Medium) detected in gettextv0.20 - ## CVE-2020-24977 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gettextv0.20</b></p></summary>
<p>
<p>git://git.savannah.gnu.org/gettext.git </p>
<p>Library home page: <a href=https://github.com/autotools-mirror/gettext.git>https://github.com/autotools-mirror/gettext.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Techini/juice-shop/commit/1bdb53d1aa51e161c311aa585161e27782583fd1">1bdb53d1aa51e161c311aa585161e27782583fd1</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>juice-shop/node_modules/libxmljs2/vendor/libxml/xmlschemastypes.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
GNOME project libxml2 v2.9.10 and earlier have a global buffer over-read vulnerability in xmlEncodeEntitiesInternal at libxml2/entities.c. The issue has been fixed in commit 8e7c20a1 (20910-GITv2.9.10-103-g8e7c20a1).
<p>Publish Date: 2020-09-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-24977>CVE-2020-24977</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/GNOME/libxml2/commit/8e7c20a1af8776677d7890f30b7a180567701a49">https://github.com/GNOME/libxml2/commit/8e7c20a1af8776677d7890f30b7a180567701a49</a></p>
<p>Release Date: 2020-08-03</p>
<p>Fix Resolution: Replace or update the following file: xmlschemastypes.c</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in cve medium severity vulnerability vulnerable library git git savannah gnu org gettext git library home page a href found in head commit a href vulnerable source files juice shop node modules vendor libxml xmlschemastypes c vulnerability details gnome project and earlier have a global buffer over read vulnerability in xmlencodeentitiesinternal at entities c the issue has been fixed in commit publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following file xmlschemastypes c step up your open source security game with whitesource
| 0
|
243,875
| 7,868,156,147
|
IssuesEvent
|
2018-06-23 17:51:23
|
cdgco/VestaWebInterface
|
https://api.github.com/repos/cdgco/VestaWebInterface
|
closed
|
Edit Backup Exclusions
|
Backend Priority: Medium Status: Accepted Type: Enhancement
|
Support for editing of backup exclusions. Data must be uploaded from frontend to backend, added to temp file, then directory linked to user.
Frontend framework in place.
|
1.0
|
Edit Backup Exclusions - Support for editing of backup exclusions. Data must be uploaded from frontend to backend, added to temp file, then directory linked to user.
Frontend framework in place.
|
non_process
|
edit backup exclusions support for editing of backup exclusions data must be uploaded from frontend to backend added to temp file then directory linked to user frontend framework in place
| 0
|
325,982
| 27,972,288,007
|
IssuesEvent
|
2023-03-25 06:21:48
|
qutebrowser/qutebrowser
|
https://api.github.com/repos/qutebrowser/qutebrowser
|
closed
|
iframe BDD tests flaky
|
component: tests priority: 1 - middle
|
See https://github.com/The-Compiler/qutebrowser/pull/1433#issuecomment-220830022
I'll mark them as skipped for now to avoid failing unrelated builds, but this should be fixed properly. Probably `quteproc.click_element` could be refactored to take any xpath, and that could be used to focus the iframe.
cc @lahwaacz
|
1.0
|
iframe BDD tests flaky - See https://github.com/The-Compiler/qutebrowser/pull/1433#issuecomment-220830022
I'll mark them as skipped for now to avoid failing unrelated builds, but this should be fixed properly. Probably `quteproc.click_element` could be refactored to take any xpath, and that could be used to focus the iframe.
cc @lahwaacz
|
non_process
|
iframe bdd tests flaky see i ll mark them as skipped for now to avoid failing unrelated builds but this should be fixed properly probably quteproc click element could be refactored to take any xpath and that could be used to focus the iframe cc lahwaacz
| 0
|
3,009
| 6,010,107,508
|
IssuesEvent
|
2017-06-06 12:24:31
|
openvstorage/arakoon
|
https://api.github.com/repos/openvstorage/arakoon
|
closed
|
Arakoon cluster status
|
process_wontfix type_feature
|
Arakoon should expose call to request the status of the Arakoon cluster (f.e 2/3 nodes are working fine so the healthcheck can query this status. (see https://github.com/openvstorage/openvstorage-health-check/issues/324)
|
1.0
|
Arakoon cluster status - Arakoon should expose call to request the status of the Arakoon cluster (f.e 2/3 nodes are working fine so the healthcheck can query this status. (see https://github.com/openvstorage/openvstorage-health-check/issues/324)
|
process
|
arakoon cluster status arakoon should expose call to request the status of the arakoon cluster f e nodes are working fine so the healthcheck can query this status see
| 1
|
144,084
| 13,095,375,014
|
IssuesEvent
|
2020-08-03 14:01:23
|
pvcraven/arcade
|
https://api.github.com/repos/pvcraven/arcade
|
closed
|
arcade/examples/frametime_plotter.py has no corresponding rst in doc/examples
|
documentation
|
Examples in /examples need rst's to link provide documentation.
|
1.0
|
arcade/examples/frametime_plotter.py has no corresponding rst in doc/examples - Examples in /examples need rst's to link provide documentation.
|
non_process
|
arcade examples frametime plotter py has no corresponding rst in doc examples examples in examples need rst s to link provide documentation
| 0
|
6,527
| 9,621,469,409
|
IssuesEvent
|
2019-05-14 10:42:46
|
kangarko/Boss
|
https://api.github.com/repos/kangarko/Boss
|
closed
|
********* SUPPORT DELAYED ANNOUNCEMENT *********
|
Improvement Issue: cannot reproduce Issue: complaint Issue: confirmed Issue: won't fix Question Status: in progress Status: invalid Status: not started Status: processed Status: queued
|
Hello everyone,
I sincerely apology for the delay in support. I had some tough times getting all plugins up for Minecraft 1.14 but the good news is that we now fully support that version!
I will slowly get back to all of you as soon as possible, the most critical issues tomorrow morning and the rest starting Monday next week.
Thank you for your patience and understanding.
Matej
|
1.0
|
********* SUPPORT DELAYED ANNOUNCEMENT ********* - Hello everyone,
I sincerely apology for the delay in support. I had some tough times getting all plugins up for Minecraft 1.14 but the good news is that we now fully support that version!
I will slowly get back to all of you as soon as possible, the most critical issues tomorrow morning and the rest starting Monday next week.
Thank you for your patience and understanding.
Matej
|
process
|
support delayed announcement hello everyone i sincerely apology for the delay in support i had some tough times getting all plugins up for minecraft but the good news is that we now fully support that version i will slowly get back to all of you as soon as possible the most critical issues tomorrow morning and the rest starting monday next week thank you for your patience and understanding matej
| 1
|
816,379
| 30,597,510,524
|
IssuesEvent
|
2023-07-22 01:17:05
|
dhruvkb/pls
|
https://api.github.com/repos/dhruvkb/pls
|
closed
|
Exception thrown when listing non-readable files
|
🟨 priority: medium 🛠 goal: fix 🐛 versioning: patch 🖥️ os: linux
|
## Description
<!-- Concisely describe the bug. -->
An exception is thrown when trying to list directories that the user does not have access to view.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
Create the temporary files
```sh
mkdir /tmp/test
touch /tmp/test/test
sudo chown -R root:root /tmp/test
sudo chmod -R 400 /tmp/test
```
Checkout from `ls`
```sh
ls /tmp/test
ls: cannot open directory '/tmp/test': Permission denied
```
When trying the same with `pls`, an exception is thrown.
```sh
pls /tmp/test
Traceback (most recent call last):
File "/home/boomatang/.local/bin/pls", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/main.py", line 160, in main
node_specific_init(node, cli_prefs)
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/main.py", line 64, in node_specific_init
state.state.setup_git(node)
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/globals/state.py", line 120, in setup_git
self.git_root = get_git_root(directory)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/fs/git.py", line 73, in get_git_root
proc = exec_git(
^^^^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/fs/git.py", line 52, in exec_git
proc = subprocess.run(
^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib64/python3.11/subprocess.py", line 1950, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: PosixPath('/tmp/test')
```
## Expectation
<!-- Concisely describe what you expected to happen. -->
Would have expected a better error message.
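A friendlier failure could mirror `ls` by checking access up front. A minimal sketch, assuming a hypothetical pre-flight helper (`ensure_readable` is not part of pls's actual code):

```python
import os
import sys
from pathlib import Path

def ensure_readable(path: Path) -> None:
    # Hypothetical guard: emit an ls-style one-liner and exit instead of
    # letting a PermissionError traceback reach the user.
    if not os.access(path, os.R_OK | os.X_OK):
        sys.exit(f"pls: cannot open directory '{path}': Permission denied")

ensure_readable(Path("."))  # readable directory: returns without exiting
```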
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->
## Environment
<!-- Please complete this, unless you are certain the problem is not environment specific. -->
- Device: <!-- (_eg._ iPhone Xs; laptop) -->
- OS: <!-- (_eg._ iOS 13.5; Fedora 32) --> Fedora 38
- Browser: <!-- (_eg._ Safari; Firefox) -->
- Version: <!-- (_eg._ 13; 73) --> 5.4.0
- Other info: <!-- (_eg._ display resolution, ease-of-access settings) -->
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in resolving this bug.
|
1.0
|
Exception thrown when listing non-readable files - ## Description
<!-- Concisely describe the bug. -->
An exception is thrown when trying to list directories that the user does not have access to view.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
Create the temporary files
```sh
mkdir /tmp/test
touch /tmp/test/test
sudo chown -R root:root /tmp/test
sudo chmod -R 400 /tmp/test
```
Checkout from `ls`
```sh
ls /tmp/test
ls: cannot open directory '/tmp/test': Permission denied
```
When trying the same with `pls`, an exception is thrown.
```sh
pls /tmp/test
Traceback (most recent call last):
File "/home/boomatang/.local/bin/pls", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/main.py", line 160, in main
node_specific_init(node, cli_prefs)
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/main.py", line 64, in node_specific_init
state.state.setup_git(node)
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/globals/state.py", line 120, in setup_git
self.git_root = get_git_root(directory)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/fs/git.py", line 73, in get_git_root
proc = exec_git(
^^^^^^^^^
File "/home/boomatang/.local/pipx/venvs/pls/lib64/python3.11/site-packages/pls/fs/git.py", line 52, in exec_git
proc = subprocess.run(
^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/subprocess.py", line 548, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/subprocess.py", line 1026, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib64/python3.11/subprocess.py", line 1950, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: PosixPath('/tmp/test')
```
## Expectation
<!-- Concisely describe what you expected to happen. -->
Would have expected a better error message.
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->
## Environment
<!-- Please complete this, unless you are certain the problem is not environment specific. -->
- Device: <!-- (_eg._ iPhone Xs; laptop) -->
- OS: <!-- (_eg._ iOS 13.5; Fedora 32) --> Fedora 38
- Browser: <!-- (_eg._ Safari; Firefox) -->
- Version: <!-- (_eg._ 13; 73) --> 5.4.0
- Other info: <!-- (_eg._ display resolution, ease-of-access settings) -->
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in resolving this bug.
|
non_process
|
exception throw when listing non readable files description exception is throw when trying to list directories that the user has not access to view reproduction create the temporary files sh mkdir tmp test touch tmp test test sudo chown r root root tmp test sudo chmod r tmp test checkout from ls sh ls tmp test ls cannot open directory tmp test permission denied when trying the same with pls an exception is throw sh pls tmp test traceback most recent call last file home boomatang local bin pls line in sys exit main file home boomatang local pipx venvs pls site packages pls main py line in main node specific init node cli prefs file home boomatang local pipx venvs pls site packages pls main py line in node specific init state state setup git node file home boomatang local pipx venvs pls site packages pls globals state py line in setup git self git root get git root directory file home boomatang local pipx venvs pls site packages pls fs git py line in get git root proc exec git file home boomatang local pipx venvs pls site packages pls fs git py line in exec git proc subprocess run file usr subprocess py line in run with popen popenargs kwargs as process file usr subprocess py line in init self execute child args executable preexec fn close fds file usr subprocess py line in execute child raise child exception type errno num err msg err filename permissionerror permission denied posixpath tmp test expectation would have expected a better error message screenshots environment device os fedora browser version other info additional context resolution 🙋 i would be interested in resolving this bug
| 0
|
3,425
| 6,526,077,451
|
IssuesEvent
|
2017-08-29 18:15:32
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
SSN: latest submission failed to parse
|
ontology processing problem ready
|
Submission upload date: April 17th.
Submission ID: 2
Error from parsing log file:
```
E, [2017-07-11T13:07:14.292054 #17044] ERROR -- : ["Exception: Rapper cannot parse rdfxml file at /srv/ncbo/repository/SSN/2/owlapi.xrdf:
rapper: Parsing URI file:///srv/ncbo/repository/SSN/2/owlapi.xrdf with parser rdfxml
rapper: Serializing with serializer ntriples\nrapper: Error - URI file:///srv/ncbo/repository/SSN/2/owlapi.xrdf:18 - Illegal rdf:nodeID value '_:genid2'
```
Snippet from owlapi.xrdf file that causes parsing issues:
```
<terms:creator>
<rdf:Description rdf:nodeID="_:genid2">
<foaf:name xml:lang="en">W3C/OGC Spatial Data on the Web Working Group</foaf:name>
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
</rdf:Description>
</terms:creator>
<terms:creator>
<rdf:Description rdf:nodeID="_:genid3">
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
<foaf:name xml:lang="en">W3C/OGC Spatial Data on the Web Working Group</foaf:name>
</rdf:Description>
</terms:creator>
```
@graybeal - these are ontology level annotations. Although Protege parses this ontology, the display of these annotations looks a bit odd:

Notice the dcterms:creator annotation with the empty brackets. I would need to check with someone in the Protege group to know whether these annotations are properly constructed. I'm guessing if they were changed to simple string values, we might get farther in the parsing process.
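Since `rdf:nodeID` values must be XML NCNames, the `_:` prefix itself is what rapper rejects; a sketch of a form it would accept (assuming a plain blank-node label was intended, not a change to the ontology's meaning):

```
<terms:creator>
  <rdf:Description rdf:nodeID="genid2">
    <foaf:name xml:lang="en">W3C/OGC Spatial Data on the Web Working Group</foaf:name>
    <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
  </rdf:Description>
</terms:creator>
```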
|
1.0
|
SSN: latest submission failed to parse - Submission upload date: April 17th.
Submission ID: 2
Error from parsing log file:
```
E, [2017-07-11T13:07:14.292054 #17044] ERROR -- : ["Exception: Rapper cannot parse rdfxml file at /srv/ncbo/repository/SSN/2/owlapi.xrdf:
rapper: Parsing URI file:///srv/ncbo/repository/SSN/2/owlapi.xrdf with parser rdfxml
rapper: Serializing with serializer ntriples\nrapper: Error - URI file:///srv/ncbo/repository/SSN/2/owlapi.xrdf:18 - Illegal rdf:nodeID value '_:genid2'
```
Snippet from owlapi.xrdf file that causes parsing issues:
```
<terms:creator>
<rdf:Description rdf:nodeID="_:genid2">
<foaf:name xml:lang="en">W3C/OGC Spatial Data on the Web Working Group</foaf:name>
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
</rdf:Description>
</terms:creator>
<terms:creator>
<rdf:Description rdf:nodeID="_:genid3">
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"/>
<foaf:name xml:lang="en">W3C/OGC Spatial Data on the Web Working Group</foaf:name>
</rdf:Description>
</terms:creator>
```
@graybeal - these are ontology level annotations. Although Protege parses this ontology, the display of these annotations looks a bit odd:

Notice the dcterms:creator annotation with the empty brackets. I would need to check with someone in the Protege group to know whether these annotations are properly constructed. I'm guessing if they were changed to simple string values, we might get farther in the parsing process.
|
process
|
ssn latest submission failed to parse submission upload date april submission id error from parsing log file e error exception rapper cannot parse rdfxml file at srv ncbo repository ssn owlapi xrdf rapper parsing uri file srv ncbo repository ssn owlapi xrdf with parser rdfxml rapper serializing with serializer ntriples nrapper error uri file srv ncbo repository ssn owlapi xrdf illegal rdf nodeid value snippet from owlapi xrdf file that causes parsing issues ogc spatial data on the web working group rdf type rdf resource rdf type rdf resource ogc spatial data on the web working group graybeal these are ontology level annotations although protege parses this ontology the display of these annotations looks a bit odd notice the dcterms creator annotation with the empty brackets i would need to check with someone in the protege group to know whether these annotations are properly constructed i m guessing if they were changed to simple string values we might get farther in the parsing process
| 1
|
11,850
| 14,662,942,372
|
IssuesEvent
|
2020-12-29 08:33:18
|
encode/uvicorn
|
https://api.github.com/repos/encode/uvicorn
|
closed
|
Subprocess returncode is not detected when running Gunicorn with Uvicorn (with fix PR companion)
|
bug multiprocessing
|
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
When starting Gunicorn with Uvicorn worker(s), if the app uses `subprocess` to start other processes and captures the output, their `returncode` is in most cases `0`, even if the actual exit code was `1`.
### To reproduce
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum* .
In other words, remove anything that doesn't make the bug go away.
-->
Take this minimal FastAPI app (or replace with Starlette), `main.py`:
```Python
import subprocess
from fastapi import FastAPI
app = FastAPI()
@app.post("/run")
def run_subprocess():
    result = subprocess.run(
        ["python", "-c", "import sys; sys.exit(1)"], capture_output=True
    )
    return {"returncode": result.returncode}
```
Then run it with:
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
Open the browser at http://127.0.0.1:8000/docs and send a request to `/run`.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The detected `returncode` should always be `1`, as the subprocess always exits with `1`.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
In most of the cases it will return a `returncode` of `0`. Strangely enough, in some cases, it will return a `returncode` of `1`.
### Debugging material
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
This is because the `UvicornWorker`, which inherits from the base Gunicorn worker, declares a method `init_signals()` (overriding the parent method) but doesn't do anything. I suspect it's because the signal handlers are declared in the `Server.install_signal_handlers()` with compatibility with `asyncio`.
But the `UvicornWorker` process is started with `os.fork()` by Gunicorn (if I understand correctly) and by the point it is forked, the Gunicorn "Arbiter" class (that handles worker processes) already set its own signal handlers.
And the signal handlers in the Gunicorn base worker reset those handlers, but the `UvicornWorker` doesn't. So, when a process started with `subprocessing` is terminated, the `SIGCHLD` signal is handled by the Gunicorn `Arbiter` (as if the terminated process was a worker) instead of by the `UvicornWorker`.
Disclaimer: why the `SIGCHLD` signal handling in the Gunicorn `Arbiter` alters the `returncode` of a process run with `subprocess`, when capturing output, is still a mystery to me. But I realized the signal handler in the `Arbiter` is expected to handle dead worker processes. And worker subclasses all seem to reset the signal handlers to revert those signals set by the `Arbiter`.
I'm also submitting a PR to fix this: https://github.com/encode/uvicorn/pull/895. It's just 3 lines of code. But debugging it and finding it took me almost a week. :sweat_smile:
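The interaction can be sketched in isolation. Assuming CPython on a POSIX system, restoring the default `SIGCHLD` disposition (as a worker's `init_signals()` would) lets `subprocess.run` see the child's real exit code; this is an illustration of the mechanism, not the actual PR code:

```python
import signal
import subprocess
import sys

# Restore the default SIGCHLD disposition before spawning. With a reaping
# handler installed by a parent process (like Gunicorn's Arbiter described
# above), subprocess.run() can lose the child's exit status and report 0.
signal.signal(signal.SIGCHLD, signal.SIG_DFL)

result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(1)"],
    capture_output=True,
)
print(result.returncode)  # 1 with the default handler in place
```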
### Environment
- OS / Python / Uvicorn version: just run `uvicorn --version`: `Running uvicorn 0.13.1 with CPython 3.8.5 on Linux` (it's actually installed from source, for debugging)
- Gunicorn version (also installed from source, for debugging): `gunicorn (version 20.0.4)`
- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
### Additional context
<!-- Any additional information that can help understanding the problem.
Eg. linked issues, or a description of what you were trying to achieve. -->
I'm pretty sure this issue https://github.com/encode/uvicorn/issues/584 is related to the same problem.
|
1.0
|
Subprocess returncode is not detected when running Gunicorn with Uvicorn (with fix PR companion) - ### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
When starting Gunicorn with Uvicorn worker(s), if the app uses `subprocess` to start other processes and captures the output, their `returncode` is in most cases `0`, even if the actual exit code was `1`.
### To reproduce
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum* .
In other words, remove anything that doesn't make the bug go away.
-->
Take this minimal FastAPI app (or replace with Starlette), `main.py`:
```Python
import subprocess
from fastapi import FastAPI
app = FastAPI()
@app.post("/run")
def run_subprocess():
    result = subprocess.run(
        ["python", "-c", "import sys; sys.exit(1)"], capture_output=True
    )
    return {"returncode": result.returncode}
```
Then run it with:
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
Open the browser at http://127.0.0.1:8000/docs and send a request to `/run`.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The detected `returncode` should always be `1`, as the subprocess always exits with `1`.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
In most of the cases it will return a `returncode` of `0`. Strangely enough, in some cases, it will return a `returncode` of `1`.
### Debugging material
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
This is because the `UvicornWorker`, which inherits from the base Gunicorn worker, declares a method `init_signals()` (overriding the parent method) but doesn't do anything. I suspect it's because the signal handlers are declared in the `Server.install_signal_handlers()` with compatibility with `asyncio`.
But the `UvicornWorker` process is started with `os.fork()` by Gunicorn (if I understand correctly) and by the point it is forked, the Gunicorn "Arbiter" class (that handles worker processes) already set its own signal handlers.
And the signal handlers in the Gunicorn base worker reset those handlers, but the `UvicornWorker` doesn't. So, when a process started with `subprocessing` is terminated, the `SIGCHLD` signal is handled by the Gunicorn `Arbiter` (as if the terminated process was a worker) instead of by the `UvicornWorker`.
Disclaimer: why the `SIGCHLD` signal handling in the Gunicorn `Arbiter` alters the `returncode` of a process run with `subprocess`, when capturing output, is still a mystery to me. But I realized the signal handler in the `Arbiter` is expected to handle dead worker processes. And worker subclasses all seem to reset the signal handlers to revert those signals set by the `Arbiter`.
I'm also submitting a PR to fix this: https://github.com/encode/uvicorn/pull/895. It's just 3 lines of code. But debugging it and finding it took me almost a week. :sweat_smile:
### Environment
- OS / Python / Uvicorn version: just run `uvicorn --version`: `Running uvicorn 0.13.1 with CPython 3.8.5 on Linux` (it's actually installed from source, for debugging)
- Gunicorn version (also installed from source, for debugging): `gunicorn (version 20.0.4)`
- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
### Additional context
<!-- Any additional information that can help understanding the problem.
Eg. linked issues, or a description of what you were trying to achieve. -->
I'm pretty sure this issue https://github.com/encode/uvicorn/issues/584 is related to the same problem.
|
process
|
subprocess returncode is not detected when running gunicorn with uvicorn with fix pr companion checklist the bug is reproducible against the latest release and or master there are no similar issues or pull requests to fix it yet describe the bug when starting gunicorn with uvicorn worker s if the app uses subprocess to start other processes and captures the output their returncode is in most cases even if the actual exit code was to reproduce provide a minimal example with steps to reproduce the bug locally note try to keep any external dependencies at an absolute minimum in other words remove anything that doesn t make the bug go away take this minimal fastapi app or replace with starlette main py python import subprocess from fastapi import fastapi app fastapi app post run def run subprocess result subprocess run capture output true return returncode result returncode then run it with console gunicorn k uvicorn workers uvicornworker main app open the browser at http docs and send a request to run expected behavior the detected returncode should always be as the subprocess always exits with actual behavior in most of the cases it will return a returncode of strangely enough in some cases it will return a returncode of debugging material any tracebacks screenshots etc that can help understanding the problem note please list tracebacks in full don t truncate them if relevant consider turning on debug or trace logs for additional details see the logging section on specifically the log level flag consider using to make tracebacks logs collapsible if they re very large see this is because the uvicornworker which inherits from the base gunicorn worker declares a method init signals overriding the parent method but doesn t do anything i suspect it s because the signal handlers are declared in the server install signal handlers with compatibility with asyncio but the uvicornworker process is started with os fork by gunicorn if i understand correctly and by the point it is 
forked the gunicorn arbiter class that handles worker processes already set its own signal handlers and the signal handlers in the gunicorn base worker reset those handlers but the uvicornworker doesn t so when a process started with subprocessing is terminated the sigchld signal is handled by the gunicorn arbiter as if the terminated process was a worker instead of by the uvicornworker disclaimer why the sigchld signal handling in the gunicorn arbiter alters the returncode of a process run with subprocess when capturing output is still a mystery to me but i realized the signal handler in the arbiter is expected to handle dead worker processes and worker subclasses all seem to reset the signal handlers to revert those signals set by the arbiter i m also submitting a pr to fix this it s just lines of code but debugging it and finding it took me almost a week sweat smile environment os python uvicorn version just run uvicorn version running uvicorn with cpython on linux it s actually installed from source for debugging gunicorn version also installed from source for debugging gunicorn version the exact command you re running uvicorn with all flags you passed included if you run it with gunicorn please do the same if there is a reverse proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce console gunicorn k uvicorn workers uvicornworker main app additional context any additional information that can help understanding the problem eg linked issues or a description of what you were trying to achieve i m pretty sure this issue is related to the same problem
| 1
|
6,806
| 9,954,480,731
|
IssuesEvent
|
2019-07-05 08:31:32
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Windows: Bazel client is slow
|
P3 platform: windows team-Windows type: process
|
### Description of the problem / feature request:
This is an umbrella bug for Bazel client performance tuning on Windows.
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Bazel on Windows feels a lot slower than on other platforms. Let's find out why and speed it up.
### What operating system are you running Bazel on?
Windows 10, v1709
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
Built from https://github.com/bazelbuild/bazel/commit/dc986d290bad8a76d3429f73c8244376b4c59494
|
1.0
|
Windows: Bazel client is slow - ### Description of the problem / feature request:
This is an umbrella bug for Bazel client performance tuning on Windows.
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Bazel on Windows feels a lot slower than on other platforms. Let's find out why and speed it up.
### What operating system are you running Bazel on?
Windows 10, v1709
### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel.
Built from https://github.com/bazelbuild/bazel/commit/dc986d290bad8a76d3429f73c8244376b4c59494
|
process
|
windows bazel client is slow description of the problem feature request this is an umbrella bug for bazel client performance tuning on windows bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible bazel on windows feels a lot slower than on other platforms let s find out why and speed it up what operating system are you running bazel on windows if bazel info release returns development version or non git tell us how you built bazel built from
| 1
|
36,494
| 12,414,418,507
|
IssuesEvent
|
2020-05-22 14:30:39
|
trellis-ldp/trellis-extensions
|
https://api.github.com/repos/trellis-ldp/trellis-extensions
|
closed
|
CVE-2017-15708 (High) detected in commons-collections-3.2.2.jar
|
security vulnerability
|
## CVE-2017-15708 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.2.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar,/root/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar</p>
<p>
Dependency Hierarchy:
- checkstyle-8.29.jar (Root Library)
- commons-beanutils-1.9.4.jar
- :x: **commons-collections-3.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/trellis-ldp/trellis-extensions/commit/665b3ef7952948208e50f79426692376e939ad17">665b3ef7952948208e50f79426692376e939ad17</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Synapse, by default no authentication is required for Java Remote Method Invocation (RMI). So Apache Synapse 3.0.1 or all previous releases (3.0.0, 2.1.0, 2.0.0, 1.2, 1.1.2, 1.1.1) allows remote code execution attacks that can be performed by injecting specially crafted serialized objects. And the presence of Apache Commons Collections 3.2.1 (commons-collections-3.2.1.jar) or previous versions in Synapse distribution makes this exploitable. To mitigate the issue, we need to limit RMI access to trusted users only. Further upgrading to 3.0.1 version will eliminate the risk of having said Commons Collection version. In Synapse 3.0.1, Commons Collection has been updated to 3.2.2 version.
<p>Publish Date: 2017-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15708>CVE-2017-15708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708</a></p>
<p>Release Date: 2017-12-11</p>
<p>Fix Resolution: org.apache.synapse:Apache-Synapse:3.0.1;commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-15708 (High) detected in commons-collections-3.2.2.jar - ## CVE-2017-15708 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.2.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar,/root/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.2/8ad72fe39fa8c91eaaf12aadb21e0c3661fe26d5/commons-collections-3.2.2.jar</p>
<p>
Dependency Hierarchy:
- checkstyle-8.29.jar (Root Library)
- commons-beanutils-1.9.4.jar
- :x: **commons-collections-3.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/trellis-ldp/trellis-extensions/commit/665b3ef7952948208e50f79426692376e939ad17">665b3ef7952948208e50f79426692376e939ad17</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Synapse, by default no authentication is required for Java Remote Method Invocation (RMI). So Apache Synapse 3.0.1 or all previous releases (3.0.0, 2.1.0, 2.0.0, 1.2, 1.1.2, 1.1.1) allows remote code execution attacks that can be performed by injecting specially crafted serialized objects. And the presence of Apache Commons Collections 3.2.1 (commons-collections-3.2.1.jar) or previous versions in Synapse distribution makes this exploitable. To mitigate the issue, we need to limit RMI access to trusted users only. Further upgrading to 3.0.1 version will eliminate the risk of having said Commons Collection version. In Synapse 3.0.1, Commons Collection has been updated to 3.2.2 version.
<p>Publish Date: 2017-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15708>CVE-2017-15708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-15708</a></p>
<p>Release Date: 2017-12-11</p>
<p>Fix Resolution: org.apache.synapse:Apache-Synapse:3.0.1;commons-collections:commons-collections:3.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to vulnerable library root gradle caches modules files commons collections commons collections commons collections jar root gradle caches modules files commons collections commons collections commons collections jar dependency hierarchy checkstyle jar root library commons beanutils jar x commons collections jar vulnerable library found in head commit a href vulnerability details in apache synapse by default no authentication is required for java remote method invocation rmi so apache synapse or all previous releases allows remote code execution attacks that can be performed by injecting specially crafted serialized objects and the presence of apache commons collections commons collections jar or previous versions in synapse distribution makes this exploitable to mitigate the issue we need to limit rmi access to trusted users only further upgrading to version will eliminate the risk of having said commons collection version in synapse commons collection has been updated to version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache synapse apache synapse commons collections commons collections step up your open source security game with whitesource
| 0
|
27,864
| 8,048,437,439
|
IssuesEvent
|
2018-08-01 06:44:13
|
libevent/libevent
|
https://api.github.com/repos/libevent/libevent
|
closed
|
release-2.1.8-stable MINGW64 automake vs cmake
|
build:cmake os:win32
|
I am use msys2 and tried compiling release-2.1.8-stable.
Seems that compiling is working with automake but not with cmake.
(with MSYS Makefiles and MinGW Generator)
[cmake.log](https://github.com/libevent/libevent/files/1789427/cmake.log)
[automake.log](https://github.com/libevent/libevent/files/1789428/automake.log)
|
1.0
|
release-2.1.8-stable MINGW64 automake vs cmake - I am use msys2 and tried compiling release-2.1.8-stable.
Seems that compiling is working with automake but not with cmake.
(with MSYS Makefiles and MinGW Generator)
[cmake.log](https://github.com/libevent/libevent/files/1789427/cmake.log)
[automake.log](https://github.com/libevent/libevent/files/1789428/automake.log)
|
non_process
|
release stable automake vs cmake i am use and tried compiling release stable seems that compiling is working with automake but not with cmake with msys makefiles and mingw generator
| 0
|
356,917
| 25,176,289,152
|
IssuesEvent
|
2022-11-11 09:33:10
|
cadencjk/pe
|
https://api.github.com/repos/cadencjk/pe
|
opened
|
User stories not sorted according to priority in DG
|
type.DocumentationBug severity.VeryLow
|

Some of the 2 stars are sorted at different positions in the table.
<!--session: 1668153883859-067a3b5a-f3a6-4e30-ab2a-24007eccd024-->
<!--Version: Web v3.4.4-->
|
1.0
|
User stories not sorted according to priority in DG - 
Some of the 2 stars are sorted at different positions in the table.
<!--session: 1668153883859-067a3b5a-f3a6-4e30-ab2a-24007eccd024-->
<!--Version: Web v3.4.4-->
|
non_process
|
user stories not sorted according to priority in dg some of the stars are sorted at different positions in the table
| 0
|
25,287
| 4,282,457,836
|
IssuesEvent
|
2016-07-15 09:14:14
|
Cockatrice/Cockatrice
|
https://api.github.com/repos/Cockatrice/Cockatrice
|
closed
|
Layout bug when pasting deck
|
App - Cockatrice Defect - Basic UI / UX
|
Running the latest master branch (81006d5342c95e3aa25e6bfce790875e9aebf9c7).
When opening the program, have a decklist in your clipboard and press the paste key combination (ctrl/cmd + V) without clicking on anything else.
<img width="1680" alt="screen shot 2016-07-15 at 10 42 33" src="https://cloud.githubusercontent.com/assets/2134793/16868659/32a44e2a-4a79-11e6-923a-9b9f4f8134e9.png">
Here was the deck I used. Looks like all of the items went into the `Deck Name` edit.
```
2 Abrupt Decay
2 Blood Crypt
2 Bloodstained Mire
1 Clifftop Retreat
1 Damnation
1 Elesh Norn, Grand Cenobite
4 Faithless Looting
1 Forest
1 Godless Shrine
1 Grave Titan
3 Kitchen Finks
3 Lingering Souls
2 Marsh Flats
1 Mountain
2 Murderous Cut
1 Overgrown Tomb
3 Path to Exile
1 Plains
2 Rootbound Crag
1 Sacred Foundry
2 Satyr Wayfinder
2 Siege Rhino
1 Stomping Ground
1 Sunpetal Grove
1 Swamp
4 Tarmogoyf
2 Temple Garden
2 Thoughtseize
2 Thragtusk
3 Unburial Rites
1 Vault of the Archangel
2 Wooded Foothills
1 Woodland Cemetery
1 Wurmcoil Engine
sideboard
1 Ancient Grudge
1 Anger of the Gods
1 Elesh Norn, Grand Cenobite
1 Golgari Charm
2 Inquisition of Kozilek
1 Iona, Shield of Emeria
1 Maelstrom Pulse
1 Rakdos Charm
1 Ruric Thar, the Unbowed
1 Sigarda, Host of Herons
2 Slaughter Games
2 Timely Reinforcements
```
|
1.0
|
Layout bug when pasting deck - Running the latest master branch (81006d5342c95e3aa25e6bfce790875e9aebf9c7).
When opening the program, have a decklist in your clipboard and press the paste key combination (ctrl/cmd + V) without clicking on anything else.
<img width="1680" alt="screen shot 2016-07-15 at 10 42 33" src="https://cloud.githubusercontent.com/assets/2134793/16868659/32a44e2a-4a79-11e6-923a-9b9f4f8134e9.png">
Here was the deck I used. Looks like all of the items went into the `Deck Name` edit.
```
2 Abrupt Decay
2 Blood Crypt
2 Bloodstained Mire
1 Clifftop Retreat
1 Damnation
1 Elesh Norn, Grand Cenobite
4 Faithless Looting
1 Forest
1 Godless Shrine
1 Grave Titan
3 Kitchen Finks
3 Lingering Souls
2 Marsh Flats
1 Mountain
2 Murderous Cut
1 Overgrown Tomb
3 Path to Exile
1 Plains
2 Rootbound Crag
1 Sacred Foundry
2 Satyr Wayfinder
2 Siege Rhino
1 Stomping Ground
1 Sunpetal Grove
1 Swamp
4 Tarmogoyf
2 Temple Garden
2 Thoughtseize
2 Thragtusk
3 Unburial Rites
1 Vault of the Archangel
2 Wooded Foothills
1 Woodland Cemetery
1 Wurmcoil Engine
sideboard
1 Ancient Grudge
1 Anger of the Gods
1 Elesh Norn, Grand Cenobite
1 Golgari Charm
2 Inquisition of Kozilek
1 Iona, Shield of Emeria
1 Maelstrom Pulse
1 Rakdos Charm
1 Ruric Thar, the Unbowed
1 Sigarda, Host of Herons
2 Slaughter Games
2 Timely Reinforcements
```
|
non_process
|
layout bug when pasting deck running the latest master branch when opening the program have a decklist in your clipboard and press the paste key combination ctrl cmd v without clicking on anything else img width alt screen shot at src here was the deck i used looks like all of the items went into the deck name edit abrupt decay blood crypt bloodstained mire clifftop retreat damnation elesh norn grand cenobite faithless looting forest godless shrine grave titan kitchen finks lingering souls marsh flats mountain murderous cut overgrown tomb path to exile plains rootbound crag sacred foundry satyr wayfinder siege rhino stomping ground sunpetal grove swamp tarmogoyf temple garden thoughtseize thragtusk unburial rites vault of the archangel wooded foothills woodland cemetery wurmcoil engine sideboard ancient grudge anger of the gods elesh norn grand cenobite golgari charm inquisition of kozilek iona shield of emeria maelstrom pulse rakdos charm ruric thar the unbowed sigarda host of herons slaughter games timely reinforcements
| 0
|
13,910
| 16,668,318,139
|
IssuesEvent
|
2021-06-07 07:49:55
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
Rewritten document of an iframe with the srcdoc attribute remains unproxied
|
SYSTEM: iframe processing TYPE: bug
|
After an iframe with the `srcdoc` attribute created from script and its content dynamically written (using the document's write method), the document remains unproxied and it leads to many errors like the following:
> SyntaxError: Unexpected token = in JSON at position`
It happens when the unproxied document tries to send a post message to the parent document.
Can be reproduced with the following example:
```html
<!doctype html>
<html lang="en">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Hello, world!</title>
</head>
<body>
<script>
var iframe = document.createElement('iframe');
iframe.srcdoc = '<h1>test</h1>';
document.body.appendChild(iframe);
// iframe.contentWindow.addEventListener('unload', function (e) {
// console.trace('unload', e);
// });
iframe.contentDocument.open();
iframe.contentDocument.write('<script>parent.postMessage("myevent", "*")<\/script>');
iframe.contentDocument.close();
</script>
</body>
</html>
```
Also can be reproduced on [this](https://googleads.github.io/google-publisher-tag-samples/basic/display-test-ad/demo.html) example which displays a test google ad.
Based on analysis of errors which arise on the w3school.com website (separated from https://github.com/DevExpress/testcafe-hammerhead/issues/1921).
|
1.0
|
Rewritten document of an iframe with the srcdoc attribute remains unproxied - After an iframe with the `srcdoc` attribute created from script and its content dynamically written (using the document's write method), the document remains unproxied and it leads to many errors like the following:
> SyntaxError: Unexpected token = in JSON at position`
It happens when the unproxied document tries to send a post message to the parent document.
Can be reproduced with the following example:
```html
<!doctype html>
<html lang="en">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Hello, world!</title>
</head>
<body>
<script>
var iframe = document.createElement('iframe');
iframe.srcdoc = '<h1>test</h1>';
document.body.appendChild(iframe);
// iframe.contentWindow.addEventListener('unload', function (e) {
// console.trace('unload', e);
// });
iframe.contentDocument.open();
iframe.contentDocument.write('<script>parent.postMessage("myevent", "*")<\/script>');
iframe.contentDocument.close();
</script>
</body>
</html>
```
Also can be reproduced on [this](https://googleads.github.io/google-publisher-tag-samples/basic/display-test-ad/demo.html) example which displays a test google ad.
Based on analysis of errors which arise on the w3school.com website (separated from https://github.com/DevExpress/testcafe-hammerhead/issues/1921).
|
process
|
rewritten document of an iframe with the srcdoc attribute remains unproxied after an iframe with the srcdoc attribute created from script and its content dynamically written using the document s write method the document remains unproxied and it leads to many errors like the following syntaxerror unexpected token in json at position it happens when the unproxied document tries to send a post message to the parent document can be reproduced with the following example html hello world var iframe document createelement iframe iframe srcdoc test document body appendchild iframe iframe contentwindow addeventlistener unload function e console trace unload e iframe contentdocument open iframe contentdocument write parent postmessage myevent iframe contentdocument close also can be reproduced on example which displays a test google ad based on analysis of errors which arise on the com website separated from
| 1
|
17,668
| 23,491,339,091
|
IssuesEvent
|
2022-08-17 19:03:54
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
opened
|
Create a Bazel build
|
Process
|
As part of #13, we'll create a Blaze build for StableHLO (internal to Google3), and we can use that to bootstrap the Bazel build.
|
1.0
|
Create a Bazel build - As part of #13, we'll create a Blaze build for StableHLO (internal to Google3), and we can use that to bootstrap the Bazel build.
|
process
|
create a bazel build as part of we ll create a blaze build for stablehlo internal to and we can use that to bootstrap the bazel build
| 1
|
194,826
| 6,899,579,011
|
IssuesEvent
|
2017-11-24 14:23:40
|
rucio/rucio
|
https://api.github.com/repos/rucio/rucio
|
closed
|
Fix current flake8 errors on master and next
|
bug patch PRIORITY Release management
|
Motivation
----------
There are several flake8 errors both on master and next which need to be fixed.
`flake8 --ignore=E501 --exclude="*.cfg" bin/* lib/ tools/*.py tools/probes/common/*`
|
1.0
|
Fix current flake8 errors on master and next - Motivation
----------
There are several flake8 errors both on master and next which need to be fixed.
`flake8 --ignore=E501 --exclude="*.cfg" bin/* lib/ tools/*.py tools/probes/common/*`
|
non_process
|
fix current errors on master and next motivation there are several errors both on master and next which need to be fixed ignore exclude cfg bin lib tools py tools probes common
| 0
|
21,900
| 30,346,907,063
|
IssuesEvent
|
2023-07-11 16:00:33
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] rules_go 0.41.0 and Gazelle 0.32.0
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.32.0/bazel-gazelle-v0.32.0.tar.gz
https://github.com/bazelbuild/rules_go/releases/download/v0.41.0/rules_go-v0.41.0.zip
|
1.0
|
[Mirror] rules_go 0.41.0 and Gazelle 0.32.0 - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/bazel-gazelle/releases/download/v0.32.0/bazel-gazelle-v0.32.0.tar.gz
https://github.com/bazelbuild/rules_go/releases/download/v0.41.0/rules_go-v0.41.0.zip
|
process
|
rules go and gazelle please list the urls of the archives you d like to mirror
| 1
|
9,119
| 12,196,918,574
|
IssuesEvent
|
2020-04-29 19:51:07
|
jsamr/bootiso
|
https://api.github.com/repos/jsamr/bootiso
|
closed
|
Windows 10 missing drivers during install, possibly because of MBR + UEFI instead of GPT
|
bug installation-process
|
I think a GPT option is needed for Windows 10 ISOs. Currently mbr+rsync is used, but this causes some kind of weird missing driver error in the installation setup. I tried [windows2usb](https://github.com/ValdikSS/windows2usb) with mbr and it showed the same issue, but with gpt it worked.
|
1.0
|
Windows 10 missing drivers during install, possibly because of MBR + UEFI instead of GPT - I think a GPT option is needed for Windows 10 ISOs. Currently mbr+rsync is used, but this causes some kind of weird missing driver error in the installation setup. I tried [windows2usb](https://github.com/ValdikSS/windows2usb) with mbr and it showed the same issue, but with gpt it worked.
|
process
|
windows missing drivers during install possibly because of mbr uefi instead of gpt i think a gpt option is needed for windows isos currently mbr rsync is used but this causes some kind of weird missing driver error in the installation setup i tried with mbr and it showed the same issue but with gpt it worked
| 1
|
2,819
| 5,766,999,365
|
IssuesEvent
|
2017-04-27 08:51:34
|
USC-CSSL/TACIT
|
https://api.github.com/repos/USC-CSSL/TACIT
|
closed
|
preprocessing: correct spelling option
|
Preprocessing
|
In preprocessing, can we provide an option to automatically correct misspellings?
|
1.0
|
preprocessing: correct spelling option - In preprocessing, can we provide an option to automatically correct misspellings?
|
process
|
preprocessing correct spelling option in preprocessing can we provide an option to automatically correct misspellings
| 1
|
2,254
| 5,088,655,375
|
IssuesEvent
|
2017-01-01 00:07:46
|
sw4j-org/tool-jpa-processor
|
https://api.github.com/repos/sw4j-org/tool-jpa-processor
|
opened
|
Handle @SecondaryTables Annotation
|
annotation processor task
|
Handle the `@SecondaryTables` annotation for an entity.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.47 SecondaryTables Annotation
|
1.0
|
Handle @SecondaryTables Annotation - Handle the `@SecondaryTables` annotation for an entity.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.47 SecondaryTables Annotation
|
process
|
handle secondarytables annotation handle the secondarytables annotation for an entity see secondarytables annotation
| 1
|
14,988
| 18,531,966,753
|
IssuesEvent
|
2021-10-21 07:18:09
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [Permission] App status is not getting displayed in the below mentioned screens
|
Bug P1 Participant manager Process: Fixed Process: Tested dev
|
**Login as Site admin**
1. Site > Enrollment registry.
2. Studies > Enrollment registry.
3. Studies > Study participant registry.
**Note:**
1. Issue is observed only in above three screens for the admins who has other than super admin permission.
2. Issue is observed only for open studies in Site > Enrollment registry
**Site > Enrollment registry.**

**Studies > Study participant registry.**

**Studies > Enrollment registry.**

|
2.0
|
[PM] [Permission] App status is not getting displayed in the below mentioned screens - **Login as Site admin**
1. Site > Enrollment registry.
2. Studies > Enrollment registry.
3. Studies > Study participant registry.
**Note:**
1. Issue is observed only in above three screens for the admins who has other than super admin permission.
2. Issue is observed only for open studies in Site > Enrollment registry
**Site > Enrollment registry.**

**Studies > Study participant registry.**

**Studies > Enrollment registry.**

|
process
|
app status is not getting displayed in the below mentioned screens login as site admin site enrollment registry studies enrollment registry studies study participant registry note issue is observed only in above three screens for the admins who has other than super admin permission issue is observed only for open studies in site enrollment registry site enrollment registry studies study participant registry studies enrollment registry
| 1
|
14,978
| 18,504,529,783
|
IssuesEvent
|
2021-10-19 16:59:00
|
googleapis/python-api-core
|
https://api.github.com/repos/googleapis/python-api-core
|
closed
|
Tests should pass or be skipped when 'grcp' is not installed
|
type: process
|
From [this discussion](https://github.com/googleapis/python-api-core/pull/286#discussion_r725302077).
`grpcio` and related files are [optional dependencies](https://github.com/googleapis/python-api-core/blob/e6a3489a79da00a5de919a4a41782cc3b1d7f583/setup.py#L38-L42), but we don't have any CI which ensures that the "optional" part is really true. We should test at least minimally without `grpcio` installed, and ensure that tests (or whole test modules) which depend on it are skipped.
|
1.0
|
Tests should pass or be skipped when 'grcp' is not installed - From [this discussion](https://github.com/googleapis/python-api-core/pull/286#discussion_r725302077).
`grpcio` and related files are [optional dependencies](https://github.com/googleapis/python-api-core/blob/e6a3489a79da00a5de919a4a41782cc3b1d7f583/setup.py#L38-L42), but we don't have any CI which ensures that the "optional" part is really true. We should test at least minimally without `grpcio` installed, and ensure that tests (or whole test modules) which depend on it are skipped.
|
process
|
tests should pass or be skipped when grcp is not installed from grpcio and related files are but we don t have any ci which ensures that the optional part is really true we should test at least minimally without grpcio installed and ensure that tests or whole test modules which depend on it are skipped
| 1
|
17,788
| 23,715,346,821
|
IssuesEvent
|
2022-08-30 11:18:06
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
Preload counter not incrementing
|
type: bug module: preload tool: background process priority: high status: blocked future severity: minor
|
**Describe the bug**
In specific cases, the preload counter is not incrementing. There are two separate things that are happening to the customer and they're causing the preload to stuck.
The first one is related to this `If` statement:
https://github.com/wp-media/wp-rocket/blob/8b2a567efcab3925dba6fff72c5a622ec2d16386/inc/Engine/Preload/FullProcess.php#L46
If `$item` is a string, we're not passing it. It means that the URLs are preloaded but the counter is not updated.
**To Reproduce**
Steps to reproduce the behavior:
1. Using an environment where `$item` is a string
2. Preload the cache
3. Confirm that the counter is not updated
**Expected behavior**
The counter should be updated.
**Additional context**
I used the code provided by Greg to check if it solves the issue:
http://snippi.com/s/fpmu2v6
The transient here was updated correctly then:
https://github.com/wp-media/wp-rocket/blob/8b2a567efcab3925dba6fff72c5a622ec2d16386/inc/Engine/Preload/FullProcess.php#L49
Unfortunately, the UI wasn't updated as the results weren't correct here:
https://github.com/wp-media/wp-rocket/blob/9b175725a5c568f27808875c041fc67cba264ae7/inc/Engine/Preload/PreloadSubscriber.php#L273-L274
I'm waiting to confirm that the Object Caching is not enabled in this environment.
Affected environment:
https://secure.helpscout.net/conversation/1107778087/151265?folderId=2415573
**Backlog Grooming**
- [ ] Reproduce the problem
- [x] Identify the root cause
- [x] Scope a solution
- [x] Estimate the effort
|
1.0
|
Preload counter not incrementing - **Describe the bug**
In specific cases, the preload counter is not incrementing. There are two separate things that are happening to the customer and they're causing the preload to stuck.
The first one is related to this `If` statement:
https://github.com/wp-media/wp-rocket/blob/8b2a567efcab3925dba6fff72c5a622ec2d16386/inc/Engine/Preload/FullProcess.php#L46
If `$item` is a string, we're not passing it. It means that the URLs are preloaded but the counter is not updated.
**To Reproduce**
Steps to reproduce the behavior:
1. Using an environment where `$item` is a string
2. Preload the cache
3. Confirm that the counter is not updated
**Expected behavior**
The counter should be updated.
**Additional context**
I used the code provided by Greg to check if it solves the issue:
http://snippi.com/s/fpmu2v6
The transient here was updated correctly then:
https://github.com/wp-media/wp-rocket/blob/8b2a567efcab3925dba6fff72c5a622ec2d16386/inc/Engine/Preload/FullProcess.php#L49
Unfortunately, the UI wasn't updated as the results weren't correct here:
https://github.com/wp-media/wp-rocket/blob/9b175725a5c568f27808875c041fc67cba264ae7/inc/Engine/Preload/PreloadSubscriber.php#L273-L274
I'm waiting to confirm that the Object Caching is not enabled in this environment.
Affected environment:
https://secure.helpscout.net/conversation/1107778087/151265?folderId=2415573
**Backlog Grooming**
- [ ] Reproduce the problem
- [x] Identify the root cause
- [x] Scope a solution
- [x] Estimate the effort
|
process
|
preload counter not incrementing describe the bug in specific cases the preload counter is not incrementing there are two separate things that are happening to the customer and they re causing the preload to stuck the first one is related to this if statement if item is a string we re not passing it it means that the urls are preloaded but the counter is not updated to reproduce steps to reproduce the behavior using an environment where item is a string preload the cache confirm that the counter is not updated expected behavior the counter should be updated additional context i used the code provided by greg to check if it solves the issue the transient here was updated correctly then unfortunately the ui wasn t updated as the results weren t correct here i m waiting to confirm that the object caching is not enabled in this environment affected environment backlog grooming reproduce the problem identify the root cause scope a solution estimate the effort
| 1
|
315,776
| 27,106,112,499
|
IssuesEvent
|
2023-02-15 12:14:26
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
pkg/sql/logictest/tests/local-vec-off/local-vec-off_test: TestLogic_crdb_internal failed
|
C-test-failure O-robot branch-release-22.2
|
pkg/sql/logictest/tests/local-vec-off/local-vec-off_test.TestLogic_crdb_internal [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8714671?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8714671?buildTab=artifacts#/) on release-22.2 @ [4ae29e143ad60de4ae66314055ab43e8a976c85a](https://github.com/cockroachdb/cockroach/commits/4ae29e143ad60de4ae66314055ab43e8a976c85a):
```
=== RUN TestLogic_crdb_internal
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/146d3729a0163b774c44604903a78ba7/logTestLogic_crdb_internal624837636
test_log_scope.go:79: use -show-logs to present logs inline
[12:04:25] rng seed: 4793407191780754687
logic.go:2762: let $testdb_id = 106
logic.go:2762: let $testdb_foo_id = 108
logic.go:2762: let $schema_bar_id = 111
=== CONT TestLogic_crdb_internal
logic.go:3927: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/146d3729a0163b774c44604903a78ba7/logTestLogic_crdb_internal624837636
--- FAIL: TestLogic_crdb_internal (47.14s)
=== RUN TestLogic_crdb_internal/max_retry_counter
logic.go:2705:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal:791: SELECT start_pretty, end_pretty FROM crdb_internal.ranges
WHERE split_enforced_until IS NOT NULL
AND (start_pretty LIKE '/Table/112/1%' OR start_pretty LIKE '/Table/112/2%')
expected:
/Table/112/1/1 /Table/112/1/2
/Table/112/1/2 /Table/112/1/3
/Table/112/1/3 /Table/112/2/1
/Table/112/2/1 /Table/112/2/2
/Table/112/2/2 /Table/112/2/3
/Table/112/2/3 /Table/112/3/1
but found (query options: "retry") :
[12:05:12] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal: 107 statements
logic.go:2019:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal:803: too many errors encountered, skipping the rest of the input
[12:05:12] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal with config local-vec-off: 107 tests, 2 failures
[12:05:12] --- total progress: 107 statements
--- total: 107 tests, 2 failures
--- FAIL: TestLogic_crdb_internal/max_retry_counter (45.71s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_crdb_internal.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
pkg/sql/logictest/tests/local-vec-off/local-vec-off_test: TestLogic_crdb_internal failed - pkg/sql/logictest/tests/local-vec-off/local-vec-off_test.TestLogic_crdb_internal [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8714671?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8714671?buildTab=artifacts#/) on release-22.2 @ [4ae29e143ad60de4ae66314055ab43e8a976c85a](https://github.com/cockroachdb/cockroach/commits/4ae29e143ad60de4ae66314055ab43e8a976c85a):
```
=== RUN TestLogic_crdb_internal
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/146d3729a0163b774c44604903a78ba7/logTestLogic_crdb_internal624837636
test_log_scope.go:79: use -show-logs to present logs inline
[12:04:25] rng seed: 4793407191780754687
logic.go:2762: let $testdb_id = 106
logic.go:2762: let $testdb_foo_id = 108
logic.go:2762: let $schema_bar_id = 111
=== CONT TestLogic_crdb_internal
logic.go:3927: -- test log scope end --
test logs left over in: /artifacts/tmp/_tmp/146d3729a0163b774c44604903a78ba7/logTestLogic_crdb_internal624837636
--- FAIL: TestLogic_crdb_internal (47.14s)
=== RUN TestLogic_crdb_internal/max_retry_counter
logic.go:2705:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal:791: SELECT start_pretty, end_pretty FROM crdb_internal.ranges
WHERE split_enforced_until IS NOT NULL
AND (start_pretty LIKE '/Table/112/1%' OR start_pretty LIKE '/Table/112/2%')
expected:
/Table/112/1/1 /Table/112/1/2
/Table/112/1/2 /Table/112/1/3
/Table/112/1/3 /Table/112/2/1
/Table/112/2/1 /Table/112/2/2
/Table/112/2/2 /Table/112/2/3
/Table/112/2/3 /Table/112/3/1
but found (query options: "retry") :
[12:05:12] --- progress: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal: 107 statements
logic.go:2019:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal:803: too many errors encountered, skipping the rest of the input
[12:05:12] --- done: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/2694/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-fastbuild/bin/pkg/sql/logictest/tests/local-vec-off/local-vec-off_test_/local-vec-off_test.runfiles/com_github_cockroachdb_cockroach/pkg/sql/logictest/testdata/logic_test/crdb_internal with config local-vec-off: 107 tests, 2 failures
[12:05:12] --- total progress: 107 statements
--- total: 107 tests, 2 failures
--- FAIL: TestLogic_crdb_internal/max_retry_counter (45.71s)
```
<p>Parameters: <code>TAGS=bazel,gss</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_crdb_internal.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
pkg sql logictest tests local vec off local vec off test testlogic crdb internal failed pkg sql logictest tests local vec off local vec off test testlogic crdb internal with on release run testlogic crdb internal test log scope go test logs captured to artifacts tmp tmp logtestlogic crdb test log scope go use show logs to present logs inline rng seed logic go let testdb id logic go let testdb foo id logic go let schema bar id cont testlogic crdb internal logic go test log scope end test logs left over in artifacts tmp tmp logtestlogic crdb fail testlogic crdb internal run testlogic crdb internal max retry counter logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql logictest tests local vec off local vec off test local vec off test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test crdb internal select start pretty end pretty from crdb internal ranges where split enforced until is not null and start pretty like table or start pretty like table expected table table table table table table table table table table table table but found query options retry progress home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql logictest tests local vec off local vec off test local vec off test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test crdb internal statements logic go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out fastbuild bin pkg sql logictest tests local vec off local vec off test local vec off test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test crdb internal too many errors encountered skipping the rest of the input done home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach 
bazel out fastbuild bin pkg sql logictest tests local vec off local vec off test local vec off test runfiles com github cockroachdb cockroach pkg sql logictest testdata logic test crdb internal with config local vec off tests failures total progress statements total tests failures fail testlogic crdb internal max retry counter parameters tags bazel gss help see also cc cockroachdb sql queries
| 0
|
46,365
| 5,798,620,342
|
IssuesEvent
|
2017-05-03 02:47:00
|
crkellen/bands
|
https://api.github.com/repos/crkellen/bands
|
closed
|
invisible blocks
|
bug critical needs testing
|
Some blocks get placed but don't show that they've been placed until you rejoin
|
1.0
|
invisible blocks - Some blocks get placed but don't show that they've been placed until you rejoin
|
non_process
|
invisible blocks some blocks get placed but don t show that they ve been placed until you rejoin
| 0
|
168,361
| 20,754,721,821
|
IssuesEvent
|
2022-03-15 11:03:58
|
arngrimur/computersaysno
|
https://api.github.com/repos/arngrimur/computersaysno
|
closed
|
CVE-2021-41103 (High) detected in github.com/docker/docker-v20.10.9 - autoclosed
|
security vulnerability
|
## CVE-2021-41103 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/docker-v20.10.9</b></p></summary>
<p>Moby Project - a collaborative project for the container ecosystem to assemble container-based systems</p>
<p>
Dependency Hierarchy:
- github.com/ory/dockertest/v3-v3.8.1 (Root Library)
- github.com/docker/cli-v20.10.11
- :x: **github.com/docker/docker-v20.10.9** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
containerd is an open source container runtime with an emphasis on simplicity, robustness and portability. A bug was found in containerd where container root directories and some plugins had insufficiently restricted permissions, allowing otherwise unprivileged Linux users to traverse directory contents and execute programs. When containers included executable programs with extended permission bits (such as setuid), unprivileged Linux users could discover and execute those programs. When the UID of an unprivileged Linux user on the host collided with the file owner or group inside a container, the unprivileged Linux user on the host could discover, read, and modify those files. This vulnerability has been fixed in containerd 1.4.11 and containerd 1.5.7. Users should update to these versions when they are released and may restart containers or update directory permissions to mitigate the vulnerability. Users unable to update should limit access to the host to trusted users. Update directory permissions on container bundle directories.
<p>Publish Date: 2021-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41103>CVE-2021-41103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-c2h3-6mxw-7mvq">https://github.com/advisories/GHSA-c2h3-6mxw-7mvq</a></p>
<p>Release Date: 2021-10-04</p>
<p>Fix Resolution: v1.4.11,v1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41103 (High) detected in github.com/docker/docker-v20.10.9 - autoclosed - ## CVE-2021-41103 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/docker-v20.10.9</b></p></summary>
<p>Moby Project - a collaborative project for the container ecosystem to assemble container-based systems</p>
<p>
Dependency Hierarchy:
- github.com/ory/dockertest/v3-v3.8.1 (Root Library)
- github.com/docker/cli-v20.10.11
- :x: **github.com/docker/docker-v20.10.9** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/arngrimur/computersaysno/commit/c8980a5bef352bb4b9477331dcc940aca400e10b">c8980a5bef352bb4b9477331dcc940aca400e10b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
containerd is an open source container runtime with an emphasis on simplicity, robustness and portability. A bug was found in containerd where container root directories and some plugins had insufficiently restricted permissions, allowing otherwise unprivileged Linux users to traverse directory contents and execute programs. When containers included executable programs with extended permission bits (such as setuid), unprivileged Linux users could discover and execute those programs. When the UID of an unprivileged Linux user on the host collided with the file owner or group inside a container, the unprivileged Linux user on the host could discover, read, and modify those files. This vulnerability has been fixed in containerd 1.4.11 and containerd 1.5.7. Users should update to these versions when they are released and may restart containers or update directory permissions to mitigate the vulnerability. Users unable to update should limit access to the host to trusted users. Update directory permissions on container bundle directories.
<p>Publish Date: 2021-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41103>CVE-2021-41103</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-c2h3-6mxw-7mvq">https://github.com/advisories/GHSA-c2h3-6mxw-7mvq</a></p>
<p>Release Date: 2021-10-04</p>
<p>Fix Resolution: v1.4.11,v1.5.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in github com docker docker autoclosed cve high severity vulnerability vulnerable library github com docker docker moby project a collaborative project for the container ecosystem to assemble container based systems dependency hierarchy github com ory dockertest root library github com docker cli x github com docker docker vulnerable library found in head commit a href found in base branch main vulnerability details containerd is an open source container runtime with an emphasis on simplicity robustness and portability a bug was found in containerd where container root directories and some plugins had insufficiently restricted permissions allowing otherwise unprivileged linux users to traverse directory contents and execute programs when containers included executable programs with extended permission bits such as setuid unprivileged linux users could discover and execute those programs when the uid of an unprivileged linux user on the host collided with the file owner or group inside a container the unprivileged linux user on the host could discover read and modify those files this vulnerability has been fixed in containerd and containerd users should update to these version when they are released and may restart containers or update directory permissions to mitigate the vulnerability users unable to update should limit access to the host to trusted users update directory permission on container bundles directories publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
786,679
| 27,661,993,029
|
IssuesEvent
|
2023-03-12 16:25:10
|
kubernetes/release
|
https://api.github.com/repos/kubernetes/release
|
closed
|
Dependency update - Golang 1.20.2/1.19.7
|
kind/feature sig/release area/release-eng area/dependency needs-priority
|
### Tracking info
<!-- Search query: https://github.com/kubernetes/release/issues?q=is%3Aissue+Dependency+update+-+Golang -->
<!-- Example: https://github.com/kubernetes/release/issues/2694 -->
Link to any previous tracking issue: https://github.com/kubernetes/release/issues/2909
<!-- golang-announce mailing list: https://groups.google.com/g/golang-announce -->
Golang mailing list announcement: https://groups.google.com/g/golang-announce/c/3-TpUx48iQY/m/Ujnw4fFHAQAJ
SIG Release Slack thread: https://kubernetes.slack.com/archives/CJH2GBF7Y/p1678208751242099
### Work items
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `kube-cross` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4312 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4895
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `go-runner` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4313 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4896
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `releng-ci` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4314 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4897
#### After kube-cross image promotion
<!-- Example: https://github.com/kubernetes/kubernetes/pull/112900 -->
- [x] kubernetes/kubernetes update (`master`): https://github.com/kubernetes/kubernetes/pull/116404
Ensure the following have been updated within the PR:
- [x] kube-cross image
- [x] go-runner image
- [x] publishing bot rules
- [x] test image
- [x] `.go-version` file
#### After go-runner image promotion
<!-- Example: https://github.com/kubernetes/release/pull/2920 -->
- [x] `debian-iptables` image update: https://github.com/kubernetes/release/pull/2954
<!-- Example: https://github.com/kubernetes/k8s.io/pull/3597 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4917
<!-- Example: https://github.com/kubernetes/release/pull/2920 -->
- [x] `distroless-iptables` image update: https://github.com/kubernetes/release/pull/2954
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4263 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4918
#### After distroless-iptables image promotion
<!-- Example: https://github.com/kubernetes/kubernetes/pull/112892 -->
- [x] kubernetes/kubernetes update (`master`): https://github.com/kubernetes/kubernetes/pull/116509
Ensure the following have been updated within the PR:
- [x] distroless-iptables image
- [x] test image
#### After kubernetes/kubernetes (master) has been updated
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-cloud-builder` image update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-ci-builder` image variants update: https://github.com/kubernetes/release/pull/2955
### Cherry picks
- [x] Kubernetes 1.23: not doing because it is EOL
- [x] Kubernetes 1.24: https://github.com/kubernetes/kubernetes/pull/116405
- [x] Kubernetes 1.25: https://github.com/kubernetes/kubernetes/pull/116406
- [x] Kubernetes 1.26: https://github.com/kubernetes/kubernetes/pull/116407
- [x] publishing bot rule updates for active Golang versions: https://github.com/kubernetes/kubernetes/pull/116456
#### After kubernetes/kubernetes (release branches) has been updated
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-cloud-builder` image update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-ci-builder` image variants update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/test-infra/pull/27712 -->
- [x] `kubekins`/`krte` image variants update: https://github.com/kubernetes/test-infra/pull/28971
### Follow-up items
<!--
Use this section to list out process improvements or items that need to be
addressed before the next Golang update.
-->
- [ ] Ensure the Golang issue template is updated with any new requirements
- [ ] <Any other follow up items>
/assign
cc: @kubernetes/release-engineering
|
1.0
|
Dependency update - Golang 1.20.2/1.19.7 - ### Tracking info
<!-- Search query: https://github.com/kubernetes/release/issues?q=is%3Aissue+Dependency+update+-+Golang -->
<!-- Example: https://github.com/kubernetes/release/issues/2694 -->
Link to any previous tracking issue: https://github.com/kubernetes/release/issues/2909
<!-- golang-announce mailing list: https://groups.google.com/g/golang-announce -->
Golang mailing list announcement: https://groups.google.com/g/golang-announce/c/3-TpUx48iQY/m/Ujnw4fFHAQAJ
SIG Release Slack thread: https://kubernetes.slack.com/archives/CJH2GBF7Y/p1678208751242099
### Work items
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `kube-cross` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4312 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4895
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `go-runner` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4313 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4896
<!-- Example: https://github.com/kubernetes/release/pull/2696 -->
- [x] `releng-ci` image update: https://github.com/kubernetes/release/pull/2949
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4314 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4897
#### After kube-cross image promotion
<!-- Example: https://github.com/kubernetes/kubernetes/pull/112900 -->
- [x] kubernetes/kubernetes update (`master`): https://github.com/kubernetes/kubernetes/pull/116404
Ensure the following have been updated within the PR:
- [x] kube-cross image
- [x] go-runner image
- [x] publishing bot rules
- [x] test image
- [x] `.go-version` file
#### After go-runner image promotion
<!-- Example: https://github.com/kubernetes/release/pull/2920 -->
- [x] `debian-iptables` image update: https://github.com/kubernetes/release/pull/2954
<!-- Example: https://github.com/kubernetes/k8s.io/pull/3597 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4917
<!-- Example: https://github.com/kubernetes/release/pull/2920 -->
- [x] `distroless-iptables` image update: https://github.com/kubernetes/release/pull/2954
<!-- Example: https://github.com/kubernetes/k8s.io/pull/4263 -->
- [x] image promotion: https://github.com/kubernetes/k8s.io/pull/4918
#### After distroless-iptables image promotion
<!-- Example: https://github.com/kubernetes/kubernetes/pull/112892 -->
- [x] kubernetes/kubernetes update (`master`): https://github.com/kubernetes/kubernetes/pull/116509
Ensure the following have been updated within the PR:
- [x] distroless-iptables image
- [x] test image
#### After kubernetes/kubernetes (master) has been updated
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-cloud-builder` image update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-ci-builder` image variants update: https://github.com/kubernetes/release/pull/2955
### Cherry picks
- [x] Kubernetes 1.23: not doing because it is EOL
- [x] Kubernetes 1.24: https://github.com/kubernetes/kubernetes/pull/116405
- [x] Kubernetes 1.25: https://github.com/kubernetes/kubernetes/pull/116406
- [x] Kubernetes 1.26: https://github.com/kubernetes/kubernetes/pull/116407
- [x] publishing bot rule updates for active Golang versions: https://github.com/kubernetes/kubernetes/pull/116456
#### After kubernetes/kubernetes (release branches) has been updated
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-cloud-builder` image update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/release/pull/2699 -->
- [x] `k8s-ci-builder` image variants update: https://github.com/kubernetes/release/pull/2955
<!-- Example: https://github.com/kubernetes/test-infra/pull/27712 -->
- [x] `kubekins`/`krte` image variants update: https://github.com/kubernetes/test-infra/pull/28971
### Follow-up items
<!--
Use this section to list out process improvements or items that need to be
addressed before the next Golang update.
-->
- [ ] Ensure the Golang issue template is updated with any new requirements
- [ ] <Any other follow up items>
/assign
cc: @kubernetes/release-engineering
|
non_process
|
dependency update golang tracking info link to any previous tracking issue golang mailing list announcement sig release slack thread work items kube cross image update image promotion go runner image update image promotion releng ci image update image promotion after kube cross image promotion kubernetes kubernetes update master ensure the following have been updated within the pr kube cross image go runner image publishing bot rules test image go version file after go runner image promotion debian iptables image update image promotion distroless iptables image update image promotion after distroless iptables image promotion kubernetes kubernetes update master ensure the following have been updated within the pr distroless iptables image test image after kubernetes kubernetes master has been updated cloud builder image update ci builder image variants update cherry picks kubernetes not doing because it is eol kubernetes kubernetes kubernetes publishing bot rule updates for active golang versions after kubernetes kubernetes release branches has been updated cloud builder image update ci builder image variants update kubekins krte image variants update follow up items use this section to list out process improvements or items that need to be addressed before the next golang update ensure the golang issue template is updated with any new requirements assign cc kubernetes release engineering
| 0
|
8,340
| 11,497,801,404
|
IssuesEvent
|
2020-02-12 10:42:31
|
18F/tts-tech-portfolio
|
https://api.github.com/repos/18F/tts-tech-portfolio
|
closed
|
TTS Tech Portfolio agile approach -- update documentation to match approach
|
Jan2020-inperson epic: internal workflow/procedures needs grooming workflow: process
|
## Background information
<!-- description, links, etc. -->
## User stories
<!-- one or more -->
- As a ..., I want ... so that ...
- As a ..., I want ... so that ...
## Implementation
- [ ] grooming sections in doc
- [ ] around assignment
- [ ] when labeling happens
- [ ] epics should be ordered before ordering Backlog
- [ ] when do cards move from icebox to backlog
- [ ] when things need feedback/consensus, then the how
- [ ] Person running retro moves Retro Action items to issues
## Acceptance criteria
- [definitive thing]
- [other definitive thing]
|
1.0
|
TTS Tech Portfolio agile approach -- update documentation to match approach - ## Background information
<!-- description, links, etc. -->
## User stories
<!-- one or more -->
- As a ..., I want ... so that ...
- As a ..., I want ... so that ...
## Implementation
- [ ] grooming sections in doc
- [ ] around assignment
- [ ] when labeling happens
- [ ] epics should be ordered before ordering Backlog
- [ ] when do cards move from icebox to backlog
- [ ] when things need feedback/consensus, then the how
- [ ] Person running retro moves Retro Action items to issues
## Acceptance criteria
- [definitive thing]
- [other definitive thing]
|
process
|
tts tech portfolio agile approach update documentation to match approach background information user stories as a i want so that as a i want so that implementation grooming sections in doc around assignment when labeling happens epics should be ordered before ordering backlog when do cards move from icebox to backlog when things need feedback consensus then the how person running retro moves retro action items to issues acceptance criteria
| 1
|
220,946
| 16,991,013,043
|
IssuesEvent
|
2021-06-30 20:28:46
|
zephyrj/sim-racing-tools
|
https://api.github.com/repos/zephyrj/sim-racing-tools
|
opened
|
Update documentation for downloading latest stable release from pypi
|
documentation
|
Currently the docs only reference the git repo as a source for installing sim-racing-tools; add in the option to download the latest stable version from pypi with a standard `pip install`
|
1.0
|
Update documentation for downloading latest stable release from pypi - Currently the docs only reference the git repo as a source for installing sim-racing-tools; add in the option to download the latest stable version from pypi with a standard `pip install`
|
non_process
|
update documentation for downloading latest stable release from pypi currently the docs only reference the git repo as a source for installing sim racing tools add in the option to download the latest stable version from pypi with a standard pip install
| 0
|
12,800
| 15,180,854,777
|
IssuesEvent
|
2021-02-15 01:30:39
|
Geonovum/disgeo-arch
|
https://api.github.com/repos/Geonovum/disgeo-arch
|
closed
|
1. Inleiding
|
In behandeling - voorstel processen e.d. In behandeling - voorstel scope doel Processen Functies Componenten Scope Doel
|
The introduction states: "A coherent object registration is a uniform registration containing basic data about objects in physical reality that behaves as a single registration towards its users."
This is already a rather strange definition, a patchwork of undefined terms. - Uniform. What, then, is a uniform registration? - Objects (the coherence presumably ought to sit in the objects) - Behaviour. Why must the uniform registration behave as a single registration towards users? Is that where the coherence lies?
Here "ICT components" is used where previously only "component" appeared. Given the nature of the document, "component" fits better and "ICT" can be dropped throughout.
It is also remarkable that the organisational setup of the ICT facilities is determined on the basis of the application layer.
|
2.0
|
1. Inleiding - The introduction states: "A coherent object registration is a uniform registration containing basic data about objects in physical reality that behaves as a single registration towards its users."
This is already a rather strange definition, a patchwork of undefined terms. - Uniform. What, then, is a uniform registration? - Objects (the coherence presumably ought to sit in the objects) - Behaviour. Why must the uniform registration behave as a single registration towards users? Is that where the coherence lies?
Here "ICT components" is used where previously only "component" appeared. Given the nature of the document, "component" fits better and "ICT" can be dropped throughout.
It is also remarkable that the organisational setup of the ICT facilities is determined on the basis of the application layer.
|
process
|
inleiding in de inleiding staat ´ een samenhangende objectenregistratie is een uniforme registratie met daarin basisgegevens over objecten in de fysieke werkelijkheid die zich voor gebruikers als één registratie gedraagt ´ dit is eigenlijk al een hele vreemde definitie met een samenraapsel van ongedefinieerde begrippen uniform wat is dan een uniforme registratie objecten waarschijnlijk hoort de samenhang in de objecten te zitten gedrag waarom moet de uniforme registratie zich voor gebruikers als een registratie gedragen zit daar de samenhang dan in er wordt hier verwezen naar ict componenten waar eerder slechts component werd gebruikt gezien de aard van het document past component beter en kan ict overal worden weggelaten verder is het bijzonder dat op basis van de applicatielaag de organisatorische inrichting voor de ict voorzieningen bepaald wordt
| 1
|
237,826
| 7,767,106,454
|
IssuesEvent
|
2018-06-03 00:40:24
|
openshift/origin
|
https://api.github.com/repos/openshift/origin
|
closed
|
Top allocator in 3.7 controller in large cluster
|
area/performance component/kubernetes component/storage lifecycle/rotten priority/P2 sig/master sig/scalability sig/storage
|
Object allocation hotspots in the controller manager in a very large AWS cluster:
```
(pprof) top30
Showing nodes accounting for 55791681178, 89.00% of 62687568655 total
Dropped 3608 nodes (cum <= 313437843)
Showing top 30 nodes out of 255
flat flat% sum% cum cum%
24313857977 38.79% 38.79% 24313857977 38.79% runtime.convT2E /usr/lib/golang/src/runtime/iface.go
11801130041 18.83% 57.61% 11801130041 18.83% runtime.rawstringtmp /usr/lib/golang/src/runtime/string.go
2970754286 4.74% 62.35% 2970754286 4.74% reflect.unsafe_New /usr/lib/golang/src/runtime/malloc.go
|
1.0
|
Top allocator in 3.7 controller in large cluster - Object allocation hotspots in the controller manager in a very large AWS cluster:
```
(pprof) top30
Showing nodes accounting for 55791681178, 89.00% of 62687568655 total
Dropped 3608 nodes (cum <= 313437843)
Showing top 30 nodes out of 255
flat flat% sum% cum cum%
24313857977 38.79% 38.79% 24313857977 38.79% runtime.convT2E /usr/lib/golang/src/runtime/iface.go
11801130041 18.83% 57.61% 11801130041 18.83% runtime.rawstringtmp /usr/lib/golang/src/runtime/string.go
2970754286 4.74% 62.35% 2970754286 4.74% reflect.unsafe_New /usr/lib/golang/src/runtime/malloc.go
2660585300 4.24% 66.59% 2660585300 4.24% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/volume/attachdetach/cache.getPodsFromMap /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/volume/attachdetach/cache/desired_state_of_world.go
```
^ needs to be fixed (reduce allocations)
```
2517120512 4.02% 70.61% 2517120512 4.02% runtime.makemap /usr/lib/golang/src/runtime/hashmap.go
1993008379 3.18% 73.79% 1993008379 3.18% runtime.convT2I /usr/lib/golang/src/runtime/iface.go
1710577009 2.73% 76.52% 1862888207 2.97% runtime.mapassign /usr/lib/golang/src/runtime/hashmap.go
1297108218 2.07% 78.59% 1297108218 2.07% reflect.(*structType).Field /usr/lib/golang/src/reflect/type.go
```
^ we're using the default (reflection-based) converter, which is bad; not sure who is calling it
```
1133628390 1.81% 80.40% 3502433907 5.59% github.com/openshift/origin/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.XMLToStruct /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil/xml_to_struct.go
```
^ yuck, AWS calls are a significant portion of our total allocations
```
944836732 1.51% 81.90% 1881777273 3.00% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationGenerator).GenerateAttachVolumeFunc /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go
900336272 1.44% 83.34% 900336272 1.44% strings.genSplit /usr/lib/golang/src/strings/strings.go
472543400 0.75% 84.09% 472543400 0.75% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/util.OperationCompleteHook /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/util/metrics.go
```
^ also ugly
```
472217674 0.75% 84.85% 472217674 0.75% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/aws_ebs.(*awsElasticBlockStorePlugin).NewAttacher /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/aws_ebs/attacher.go
469539836 0.75% 85.59% 495050312 0.79% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume.(*VolumePluginMgr).FindPluginBySpec /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/volume/plugins.go
311799271 0.5% 86.09% 335145206 0.53% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.DeepCopy_v1_NodeStatus /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/zz_generated.deepcopy.go
305472590 0.49% 86.58% 601388133 0.96% reflect.MakeSlice /usr/lib/golang/src/reflect/value.go
287757264 0.46% 87.04% 578085866 0.92% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.(*ContainerImage).Unmarshal /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/generated.pb.go
222260101 0.35% 87.39% 733337896 1.17% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.DeepCopy_v1_Container /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/zz_generated.deepcopy.go
215721675 0.34% 87.74% 1034216975 1.65% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.DeepCopy_v1_PodSpec /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/zz_generated.deepcopy.go
166990811 0.27% 88.00% 334109768 0.53% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.DeepCopy_v1_EnvVar /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/zz_generated.deepcopy.go
130546907 0.21% 88.21% 1251787708 2.00% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.(*NodeStatus).Unmarshal /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/generated.pb.go
114269904 0.18% 88.39% 1322617933 2.11% crypto/x509.parseCertificate /usr/lib/golang/src/crypto/x509/x509.go
91245262 0.15% 88.54% 1657129752 2.64% reflect.Value.call /usr/lib/golang/src/reflect/value.go
82011938 0.13% 88.67% 1555118198 2.48% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/quota.Intersection /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/quota/resources.go
55239321 0.088% 88.76% 329816728 0.53% reflect.Value.MapKeys /usr/lib/golang/src/reflect/value.go
45744820 0.073% 88.83% 672678660 1.07% encoding/asn1.parseFieldParameters /usr/lib/golang/src/encoding/asn1/common.go
29987946 0.048% 88.88% 1523614717 2.43% github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/conversion.(*Converter).doConversion /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/conversion/converter.go
27867864 0.044% 88.92% 611110555 0.97% github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.DeepCopy_v1_ObjectMeta /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.deepcopy.go
26996566 0.043% 88.97% 545187758 0.87% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/replication.(*ReplicationManager).syncReplicationController /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/replication/replication_controller.go
20524912 0.033% 89.00% 1130365268 1.80% github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/conversion.(*Converter).defaultConvert /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/conversion/converter.go
```
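The top entry, `runtime.convT2E`, fires whenever a non-pointer value is converted to an empty interface: the value is copied to the heap. A minimal sketch (not from the issue; `point` and `sink` are hypothetical names) shows the cost is measurable per conversion:

```go
package main

import (
	"fmt"
	"testing"
)

// point is a hypothetical stand-in for any small non-pointer value that
// gets passed through an interface{}-typed API on a hot path.
type point struct{ x, y int }

var sink interface{} // package-level, so the converted value always escapes

func main() {
	// testing.AllocsPerRun averages heap allocations over repeated runs.
	allocs := testing.AllocsPerRun(1000, func() {
		sink = point{1, 2} // struct -> interface{}: copied to the heap
	})
	fmt.Printf("allocations per conversion: %.0f\n", allocs)
}
```

Passing such values by pointer, or avoiding `interface{}`-typed APIs on hot paths, removes the conversion allocation entirely.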
For in-use memory, the top cumulative consumers were:
```
Showing nodes accounting for 13.17GB, 86.82% of 15.17GB total
0 0% 0% 6.53GB 43.03% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.(*Pod).Unmarshal /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/generated.pb.go
3.89GB 25.67% 25.67% 3.89GB 25.67% runtime.rawstringtmp /usr/lib/golang/src/runtime/string.go
3.15GB 20.75% 69.76% 3.41GB 22.46% runtime.mapassign /usr/lib/golang/src/runtime/hashmap.go
0.08GB 0.53% 70.28% 3.36GB 22.13% github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.(*ObjectMeta).Unmarshal /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.pb.go
0 0% 72.10% 1.42GB 9.33% github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1.(*ResourceRequirements).Unmarshal /builddir/build/BUILD/atomic-openshift-git-0.7c71a2d/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/api/v1/generated.pb.go
```
This list is where the bulk of the in-use memory is. 6GB for pods is... a lot.
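The `getPodsFromMap` hotspot flagged above ("reduce allocations") is the classic grow-a-nil-slice pattern. A hedged sketch (the `pod` type and both function names are hypothetical, not the real attach/detach cache code) of the fix — preallocating from the map length — looks like this:

```go
package main

import (
	"fmt"
	"testing"
)

// pod is a hypothetical stand-in for the attach/detach cache's pod entries.
type pod struct{ name string }

// naive appends into a nil slice, so the backing array is reallocated
// several times per call — the pattern behind the reported hotspot.
func naive(m map[string]pod) []pod {
	var out []pod
	for _, p := range m {
		out = append(out, p)
	}
	return out
}

// prealloc sizes the result once from len(m): a single allocation per call.
func prealloc(m map[string]pod) []pod {
	out := make([]pod, 0, len(m))
	for _, p := range m {
		out = append(out, p)
	}
	return out
}

func main() {
	m := make(map[string]pod)
	for i := 0; i < 1024; i++ {
		name := fmt.Sprintf("pod-%d", i)
		m[name] = pod{name: name}
	}
	fmt.Printf("naive: %.0f allocs/call, prealloc: %.0f allocs/call\n",
		testing.AllocsPerRun(100, func() { naive(m) }),
		testing.AllocsPerRun(100, func() { prealloc(m) }))
}
```

For a 1024-entry map the naive version pays one allocation per doubling of the slice, while the preallocated version pays exactly one.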
|
non_process
|
| 0
|
37,280
| 12,477,438,562
|
IssuesEvent
|
2020-05-29 14:57:43
|
LibrIT/passhport
|
https://api.github.com/repos/LibrIT/passhport
|
closed
|
CVE-2020-11022 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.0.min.js</b>, <b>jquery-3.3.1.min.js</b>, <b>jquery-2.1.4.min.js</b>, <b>jquery-1.11.0.min.js</b>, <b>jquery-3.4.0.min.js</b>, <b>jquery-1.8.3.js</b>, <b>jquery-2.2.4.min.js</b>, <b>jquery-2.1.3.min.js</b>, <b>jquery-1.11.3.min.js</b>, <b>jquery-1.8.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.0/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/jquery-knob/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/jquery-knob/index.html,/passhport/passhweb/app/static/bower_components/jquery-slimscroll/examples/scroll-events.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/bootstrap-colorpicker/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/bootstrap-colorpicker/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/.attic/test-moment/index.html,/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/node_modules/js-base64/.attic/test-moment/index.html,/passhport/passhweb/app/static/bower_components/bootstrap/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/test/index.html,/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/node_modules/js-base64/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/Flot/examples/stacking/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/Flot/examples/stacking/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/image/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/selection/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/navigate/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-time/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/interacting/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/tracking/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/zooming/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/basic-usage/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-toggle/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-time-zones/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-multiple/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/ajax/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/symbols/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-errorbars/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-pie/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-types/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/canvas/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/visitors/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/categories/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/resize/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/basic-options/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/annotating/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/percentiles/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/threshold/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/realtime/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-interacting/../../jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/templates/scripts/minimal.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/templates/scripts/../../static/bower_components/jquery/dist/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.1.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/chart.js/samples/line-customTooltips.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/chart.js/samples/line-customTooltips.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/demo.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/demo.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/morris.js/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/morris.js/node_modules/redeyed/examples/browser/index.html,/passhport/passhweb/app/static/bower_components/jquery-ui/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/d3681a204773a725ef34dd377726d1c711a3d27a">d3681a204773a725ef34dd377726d1c711a3d27a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.0/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/ckeditor/samples/old/jquery.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/ckeditor/samples/old/jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.4.0.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/select2/node_modules/js-base64/test/index.html,/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/node_modules/js-base64/test/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.4.0.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.3.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/Flot/examples/stacking/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/Flot/examples/stacking/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/image/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/selection/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/navigate/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-time/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/interacting/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/tracking/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/zooming/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/basic-usage/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-toggle/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-time-zones/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-multiple/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/ajax/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/symbols/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-errorbars/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-pie/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/series-types/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/canvas/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/visitors/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/categories/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/resize/../../jquery.js,/passhport/passhweb/app/static/bower_co
mponents/Flot/examples/basic-options/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/annotating/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/percentiles/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/threshold/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/realtime/../../jquery.js,/passhport/passhweb/app/static/bower_components/Flot/examples/axes-interacting/../../jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.3.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/templates/scripts/minimal.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/templates/scripts/../../static/bower_components/jquery/dist/jquery.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.1.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/chart.js/samples/line-customTooltips.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/chart.js/samples/line-customTooltips.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.11.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/demo.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/bootstrap-daterangepicker/demo.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.11.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/passhport/passhweb/app/static/bower_components/morris.js/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: /passhport/passhweb/app/static/bower_components/morris.js/node_modules/redeyed/examples/browser/index.html,/passhport/passhweb/app/static/bower_components/jquery-ui/node_modules/tap/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LibrIT/passhport/commit/d3681a204773a725ef34dd377726d1c711a3d27a">d3681a204773a725ef34dd377726d1c711a3d27a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
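Since this report lists many bundled jQuery copies, a small triage helper can flag which of them predate the fix. This is a hypothetical utility, not part of any WhiteSource tooling; it simply encodes "patched in 3.5.0" from the advisory above.

```javascript
// Hypothetical triage helper: flags jQuery versions that predate the
// 3.5.0 fix for CVE-2020-11022.
function isVulnerableJQuery(version) {
  const [major = 0, minor = 0] = version.split(".").map(Number);
  if (major > 3) return false;
  if (major < 3) return true;   // the 1.x and 2.x lines never received the fix
  return minor < 5;             // 3.0-3.4 affected, 3.5+ patched
}

// Versions bundled in the scan above:
const bundled = ["1.9.0", "3.3.1", "2.1.4", "1.11.0", "3.4.0",
                 "1.8.3", "2.2.4", "2.1.3", "1.11.3", "1.8.1"];
console.log(bundled.every(isVulnerableJQuery)); // true — all predate 3.5.0
```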
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js jquery min js jquery min js jquery js jquery min js jquery min js jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components jquery knob index html path to vulnerable library passhport passhweb app static bower components jquery knob index html passhport passhweb app static bower components jquery slimscroll examples scroll events html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components bootstrap colorpicker index html path to vulnerable library passhport passhweb app static bower components bootstrap colorpicker index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components node modules js attic test moment index html path to vulnerable library passhport passhweb app static bower components node modules js attic test moment index html passhport passhweb app static bower components bootstrap daterangepicker node modules js attic test moment index html passhport passhweb app static bower components bootstrap node modules js attic test moment index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components ckeditor samples old jquery html path to vulnerable library passhport passhweb app static bower components ckeditor samples old jquery html dependency hierarchy x jquery min js vulnerable library jquery min js javascript 
library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components node modules js test index html path to vulnerable library passhport passhweb app static bower components node modules js test index html passhport passhweb app static bower components bootstrap daterangepicker node modules js test index html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components flot examples stacking index html path to vulnerable library passhport passhweb app static bower components flot examples stacking jquery js passhport passhweb app static bower components flot examples image jquery js passhport passhweb app static bower components flot examples selection jquery js passhport passhweb app static bower components flot examples navigate jquery js passhport passhweb app static bower components flot examples axes time jquery js passhport passhweb app static bower components flot examples interacting jquery js passhport passhweb app static bower components flot examples jquery js passhport passhweb app static bower components flot examples tracking jquery js passhport passhweb app static bower components flot examples zooming jquery js passhport passhweb app static bower components flot examples basic usage jquery js passhport passhweb app static bower components flot examples series toggle jquery js passhport passhweb app static bower components flot examples axes time zones jquery js passhport passhweb app static bower components flot examples axes multiple jquery js passhport passhweb app static bower components flot examples ajax jquery js passhport passhweb app static bower components flot examples symbols jquery js passhport passhweb app static bower components flot examples series errorbars jquery js passhport passhweb app static bower components flot 
examples series pie jquery js passhport passhweb app static bower components flot examples series types jquery js passhport passhweb app static bower components flot examples canvas jquery js passhport passhweb app static bower components flot examples visitors jquery js passhport passhweb app static bower components flot examples categories jquery js passhport passhweb app static bower components flot examples resize jquery js passhport passhweb app static bower components flot examples basic options jquery js passhport passhweb app static bower components flot examples annotating jquery js passhport passhweb app static bower components flot examples percentiles jquery js passhport passhweb app static bower components flot examples threshold jquery js passhport passhweb app static bower components flot examples realtime jquery js passhport passhweb app static bower components flot examples axes interacting jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app templates scripts minimal html path to vulnerable library passhport passhweb app templates scripts static bower components jquery dist jquery min js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components chart js samples line customtooltips html path to vulnerable library passhport passhweb app static bower components chart js samples line customtooltips html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components bootstrap daterangepicker demo html path to vulnerable library passhport passhweb app static bower components bootstrap daterangepicker demo 
html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm passhport passhweb app static bower components morris js node modules redeyed examples browser index html path to vulnerable library passhport passhweb app static bower components morris js node modules redeyed examples browser index html passhport passhweb app static bower components jquery ui node modules tap node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details in jquery before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
15,394
| 19,579,449,243
|
IssuesEvent
|
2022-01-04 19:12:59
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
closed
|
Processor to filter out (remove/drop) entire events
|
plugin - processor proposal
|
Data Prepper should provide a processor which removes entire events. This processor need not take any special input, but does require conditional support from #522. (Otherwise every event would be dropped).
**Outstanding Questions**
What name should this processor have?
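For illustration, one possible pipeline shape is sketched below. All names are placeholders, since the issue explicitly leaves the processor's name open, and the condition syntax assumes the conditional support proposed in #522.

```yaml
# Hypothetical sketch only: the processor name and condition syntax are
# placeholders pending the naming question and conditional support (#522).
log-pipeline:
  source:
    http:
  processor:
    - drop_events:                    # candidate name, not decided in the issue
        drop_when: '/status == 404'   # without a condition, every event would be dropped
  sink:
    - stdout:
```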
|
1.0
|
Processor to filter out (remove/drop) entire events - Data Prepper should provide a processor which removes entire events. This processor need not take any special input, but does require conditional support from #522. (Otherwise every event would be dropped).
**Outstanding Questions**
What name should this processor have?
|
process
|
processor to filter out remove drop entire events data prepper should provide a processor which removes entire events this processor need not take any special input but does require conditional support from otherwise every event would be dropped outstanding questions what name should this processor have
| 1
|
592
| 3,067,152,961
|
IssuesEvent
|
2015-08-18 08:39:13
|
maraujop/django-crispy-forms
|
https://api.github.com/repos/maraujop/django-crispy-forms
|
closed
|
Remove `tests_require` block from setup.py
|
Cleanup Testing/Process
|
Historically the Make file was using this to set up the test environment but we're now favouring the pip requirements file. It can go.
|
1.0
|
Remove `tests_require` block from setup.py - Historically the Make file was using this to set up the test environment but we're now favouring the pip requirements file. It can go.
|
process
|
remove tests require block from setup py historically the make file was using this to set up the test environment but we re now favouring the pip requirements file it can go
| 1
|
12,790
| 15,169,204,366
|
IssuesEvent
|
2021-02-12 20:43:43
|
esmero/strawberryfield
|
https://api.github.com/repos/esmero/strawberryfield
|
closed
|
Big bad bad boy bug: "as:classification" using class level constants failed my sanity check!
|
Digital Preservation Events and Subscriber JSON Postprocessors bug
|
# What?
Previously we had double as:structure checking (as:image, etc.): once per webform submit and once per Object Ingest via an Event Subscriber. Once I removed the webform one (double processing made little sense), my bug got exposed.
I have a constant in the JSON Helper class here
https://github.com/esmero/strawberryfield/blame/master/src/Tools/StrawberryfieldJsonHelper.php#L21-L30
Which funny, lacks a few!!!
I'm the worst. Thanks @alliomeria for testing. Correction going RC1's way. As fast as I can
@giancarlobi you want to update this too. All this is RC1 related.
|
1.0
|
Big bad bad boy bug: "as:classification" using class level constants failed my sanity check! - # What?
Previously we had double as:structure checking (as:image, etc.): once per webform submit and once per Object Ingest via an Event Subscriber. Once I removed the webform one (double processing made little sense), my bug got exposed.
I have a constant in the JSON Helper class here
https://github.com/esmero/strawberryfield/blame/master/src/Tools/StrawberryfieldJsonHelper.php#L21-L30
Which funny, lacks a few!!!
I'm the worst. Thanks @alliomeria for testing. Correction going RC1's way. As fast as I can
@giancarlobi you want to update this too. All this is RC1 related.
|
process
|
big bad bad boy bug as classification using class level constants failed my sanity check what previously we have double as structure checking as image etc once per webform submit and once per object ingest via event subscriber once i removed the webform one double processing made little sense my bug got exposed i have a constant in the json helper class here which funny lacks a few i m the worst thanks alliomeria for testing correction going s way as fast as i can giancarlobi you want to update this too all this is related
| 1
|
96,228
| 3,966,389,177
|
IssuesEvent
|
2016-05-03 12:49:33
|
daronco/test-issue-migrate2
|
https://api.github.com/repos/daronco/test-issue-migrate2
|
closed
|
Allow room keys to be changed while a meeting is running
|
Priority: High Status: Resolved Type: Feature
|
---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 828, http://dev.mconf.org/redmine/issues/828
Original Date: 2013-06-07
Original Assignee: Gustavo Miotto
---
If the keys (passwords) of a room are changed while a meeting is running, any further API call to that meeting is going to fail, since the keys are used in most of the API calls.
We should isolate the keys used in API calls from the keys attributed by a user. Keys in API calls can be random strings generated automatically by the gem and can always be the same. The keys attributed by the user should be used only to define the permissions and roles of users in a room, but shouldn't be passed to the API. This way the user can change the keys while a meeting is running and the API calls will still work.
|
1.0
|
Allow room keys to be changed while a meeting is running - ---
Author Name: **Leonardo Daronco** (@daronco)
Original Redmine Issue: 828, http://dev.mconf.org/redmine/issues/828
Original Date: 2013-06-07
Original Assignee: Gustavo Miotto
---
If the keys (passwords) of a room are changed while a meeting is running, any further API call to that meeting is gonna fail, since the keys are used in most of the API calls.
We should isolate the keys used in API calls from the keys attributed by a user. Keys in API calls can be random strings generated automatically by the gem and can be always the same. The keys attributed by the user should be used only to define the permissions and roles of users in a room, but shouldn't be passed to the API. This way the user can change the keys while a meeting is running and the API calls will still work.
|
non_process
|
allow room keys to be changed while a meeting is running author name leonardo daronco daronco original redmine issue original date original assignee gustavo miotto if the keys passwords of a room are changed while a meeting is running any further api call to that meeting is gonna fail since the keys are used in most of the api calls we should isolate the keys used in api calls from the keys attributed by a user keys in api calls can be random strings generated automatically by the gem and can be always the same the keys attributed by the user should be used only to define the permissions and roles of users in a room but shouldn t be passed to the api this way the user can change the keys while a meeting is running and the api calls will still work
| 0
|
16,849
| 22,100,710,314
|
IssuesEvent
|
2022-06-01 13:34:16
|
green-coder/girouette
|
https://api.github.com/repos/green-coder/girouette
|
opened
|
Add a recent version of Preflight
|
enhancement contribution welcome good first issue tailwind-compat processor-tool
|
We need [to port this file](https://unpkg.com/tailwindcss@3.0.24/src/css/preflight.css) to the garden format, similarly to what was done for [this file](https://unpkg.com/tailwindcss@2.0.3/dist/base.css) in https://github.com/green-coder/girouette/blob/ab419782c932889d9e9ddeb5bbb92941063c593e/lib/girouette/src/girouette/tw/preflight.cljc#L8-L301
The comments present in the source file must be preserved.
The preprocessor is affected; it must accept a parameter saying which version of Preflight to use.
|
1.0
|
Add a recent version of Preflight - We need [to port this file](https://unpkg.com/tailwindcss@3.0.24/src/css/preflight.css) to the garden format, similarly to what was done for [this file](https://unpkg.com/tailwindcss@2.0.3/dist/base.css) in https://github.com/green-coder/girouette/blob/ab419782c932889d9e9ddeb5bbb92941063c593e/lib/girouette/src/girouette/tw/preflight.cljc#L8-L301
The comments present in the source file must be preserved.
The preprocessor is affected; it must accept a parameter saying which version of Preflight to use.
|
process
|
add a recent version of preflight we need to the garden format similarly of what was done for in the comments presents in the source file must be preserved the preprocessor is affected it must accept a parameter saying which version of preflight to use
| 1
|
42,916
| 5,487,257,105
|
IssuesEvent
|
2017-03-14 03:37:02
|
yarnpkg/yarn
|
https://api.github.com/repos/yarnpkg/yarn
|
closed
|
yarn-e2e/label=docker,os=ubuntu-12.04 #129 failed
|
failure test
|
Build 'yarn-e2e/label=docker,os=ubuntu-12.04' is failing!
Last 50 lines of build output:
```
[...truncated 7.53 KB...]
Selecting previously unselected package openssl.
(Reading database ... 7578 files and directories currently installed.)
Unpacking openssl (from .../openssl_1.0.1-4ubuntu5.39_amd64.deb) ...
Selecting previously unselected package ca-certificates.
Unpacking ca-certificates (from .../ca-certificates_20160104ubuntu0.12.04.1_all.deb) ...
Selecting previously unselected package yarn.
Unpacking yarn (from .../yarn_0.23.0-20170311.0529-1_all.deb) ...
Selecting previously unselected package nodejs.
Unpacking nodejs (from .../nodejs_6.10.0-1nodesource1~precise1_amd64.deb) ...
Setting up openssl (1.0.1-4ubuntu5.39) ...
Setting up ca-certificates (20160104ubuntu0.12.04.1) ...
Updating certificates in /etc/ssl/certs... 173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
Setting up yarn (0.23.0-20170311.0529-1) ...
Setting up nodejs (6.10.0-1nodesource1~precise1) ...
+ ./run-yarn-test.sh
+ cd /tmp
+ mkdir yarntest
+ cd yarntest
+ echo '{}'
+ yarn --version
0.23.0-20170311.0529
+ rm -rf /root/.cache/yarn
+ yarn add react
yarn add v0.23.0-20170311.0529
info No lockfile found.
warning No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
/usr/share/yarn/lib/fetchers/tarball-fetcher.js:141
entry.props.uid = entry.uid = 0;
^
TypeError: Cannot set property 'uid' of undefined
at Extract.extractorStream.pipe.on.on.entry (/usr/share/yarn/lib/fetchers/tarball-fetcher.js:141:25)
at emitThree (events.js:121:20)
at Extract.emit (events.js:194:7)
at Extract.onheader (/usr/share/yarn/node_modules/tar-stream/extract.js:177:10)
at Extract._write (/usr/share/yarn/node_modules/tar-stream/extract.js:243:8)
at doWrite (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:347:64)
at writeOrBuffer (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:336:5)
at Extract.Writable.write (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:274:11)
at UnpackStream.ondata (_stream_readable.js:555:20)
at emitOne (events.js:96:13)
+ fail_with_log
+ exitcode=1
+ '[' -s yarn-error.log ']'
+ exit 1
Build step 'Execute shell' marked build as failure
```
Changes since last successful build:
- [bestander] d093c6cc1d2e28d906e0972b1d9a5ad5cd757bff - Move ROOT_USER to a separate file, fixes #2807 (#2855)
- [noreply] ddff4c5ae8f8ef65284746931c851cc642177816 - Refactored integrity generation and check (#2818)
- [noreply] 157a34aa5928792b596b1da99fc12027c9e00032 - switched to tar-fs from node-tar to unpack tars (#2826)
- [bestander] 2f49dcfe0119a960e28958679be15ace5c527ca3 - use process.mainModule.filename as npm_execpath (#2843)
- [bestander] 8ed57c0de6a8c29ac2f44a599cd39a6c2b7082f9 - Upgrade to flow v0.41.0 (#2862)
- [bestander] 2072978a0ffc2a65512e0336491485cce9bb6691 - Use release infra rather than ghr to create GitHub releases (#2839)
- [bestander] 1c36baff78fdd5dfab932f7e4d71f1082a775622 - Add optional offline mirror pruning (#2836)
- [bestander] 1650f5cbc0217833906f5e6d1628099c80cf234d - Add test for pruning on install when mirror pruning has been enabled
- [bestander] 7466829cc509b40f4873cd5ea3432e91c3c949ce - Remove Git conflict markers (#2873)
- [Daniel Lo Nigro] 1b89ee877338873473fdee0ca20571ac51686b6e - Add console.warn to process.stdin catch block
- [Daniel Lo Nigro] 664b6a4f6beeb2e1f69447e1186a8aa889e0980c - Generate coverage for uncovered source files
- [bestander] 6fb3ba5090c9c5723c6329b412a1e3a3194e9817 - Fix User Namespace issues with untaring (#2898)
[View full output](https://build.dan.cx/job/yarn-e2e/label=docker,os=ubuntu-12.04/129/)
cc @Daniel15
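For illustration only, and not the actual upstream fix: a defensive guard for the failing line in the stack trace above, assuming some tar entries arrive without a `props` object. The function name is hypothetical.

```javascript
// Sketch of a null-guard for the crash in the log above: the extractor
// callback assumed every tar entry carries a `props` object, and threw
// "Cannot set property 'uid' of undefined" when one did not.
function normalizeEntry(entry) {
  if (!entry || typeof entry !== "object") return entry;
  entry.props = entry.props || {};   // tolerate headers without props
  entry.props.uid = entry.uid = 0;   // same ownership reset as the original line
  entry.props.gid = entry.gid = 0;
  return entry;
}

console.log(normalizeEntry({ uid: 501 })); // props added, uid/gid reset to 0
```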
|
1.0
|
yarn-e2e/label=docker,os=ubuntu-12.04 #129 failed - Build 'yarn-e2e/label=docker,os=ubuntu-12.04' is failing!
Last 50 lines of build output:
```
[...truncated 7.53 KB...]
Selecting previously unselected package openssl.
(Reading database ... 7578 files and directories currently installed.)
Unpacking openssl (from .../openssl_1.0.1-4ubuntu5.39_amd64.deb) ...
Selecting previously unselected package ca-certificates.
Unpacking ca-certificates (from .../ca-certificates_20160104ubuntu0.12.04.1_all.deb) ...
Selecting previously unselected package yarn.
Unpacking yarn (from .../yarn_0.23.0-20170311.0529-1_all.deb) ...
Selecting previously unselected package nodejs.
Unpacking nodejs (from .../nodejs_6.10.0-1nodesource1~precise1_amd64.deb) ...
Setting up openssl (1.0.1-4ubuntu5.39) ...
Setting up ca-certificates (20160104ubuntu0.12.04.1) ...
Updating certificates in /etc/ssl/certs... 173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
Setting up yarn (0.23.0-20170311.0529-1) ...
Setting up nodejs (6.10.0-1nodesource1~precise1) ...
+ ./run-yarn-test.sh
+ cd /tmp
+ mkdir yarntest
+ cd yarntest
+ echo '{}'
+ yarn --version
0.23.0-20170311.0529
+ rm -rf /root/.cache/yarn
+ yarn add react
yarn add v0.23.0-20170311.0529
info No lockfile found.
warning No license field
[1/4] Resolving packages...
[2/4] Fetching packages...
/usr/share/yarn/lib/fetchers/tarball-fetcher.js:141
entry.props.uid = entry.uid = 0;
^
TypeError: Cannot set property 'uid' of undefined
at Extract.extractorStream.pipe.on.on.entry (/usr/share/yarn/lib/fetchers/tarball-fetcher.js:141:25)
at emitThree (events.js:121:20)
at Extract.emit (events.js:194:7)
at Extract.onheader (/usr/share/yarn/node_modules/tar-stream/extract.js:177:10)
at Extract._write (/usr/share/yarn/node_modules/tar-stream/extract.js:243:8)
at doWrite (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:347:64)
at writeOrBuffer (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:336:5)
at Extract.Writable.write (/usr/share/yarn/node_modules/readable-stream/lib/_stream_writable.js:274:11)
at UnpackStream.ondata (_stream_readable.js:555:20)
at emitOne (events.js:96:13)
+ fail_with_log
+ exitcode=1
+ '[' -s yarn-error.log ']'
+ exit 1
Build step 'Execute shell' marked build as failure
```
Changes since last successful build:
- [bestander] d093c6cc1d2e28d906e0972b1d9a5ad5cd757bff - Move ROOT_USER to a separate file, fixes #2807 (#2855)
- [noreply] ddff4c5ae8f8ef65284746931c851cc642177816 - Refactored integrity generation and check (#2818)
- [noreply] 157a34aa5928792b596b1da99fc12027c9e00032 - switched to tar-fs from node-tar to unpack tars (#2826)
- [bestander] 2f49dcfe0119a960e28958679be15ace5c527ca3 - use process.mainModule.filename as npm_execpath (#2843)
- [bestander] 8ed57c0de6a8c29ac2f44a599cd39a6c2b7082f9 - Upgrade to flow v0.41.0 (#2862)
- [bestander] 2072978a0ffc2a65512e0336491485cce9bb6691 - Use release infra rather than ghr to create GitHub releases (#2839)
- [bestander] 1c36baff78fdd5dfab932f7e4d71f1082a775622 - Add optional offline mirror pruning (#2836)
- [bestander] 1650f5cbc0217833906f5e6d1628099c80cf234d - Add test for pruning on install when mirror pruning has been enabled
- [bestander] 7466829cc509b40f4873cd5ea3432e91c3c949ce - Remove Git conflict markers (#2873)
- [Daniel Lo Nigro] 1b89ee877338873473fdee0ca20571ac51686b6e - Add console.warn to process.stdin catch block
- [Daniel Lo Nigro] 664b6a4f6beeb2e1f69447e1186a8aa889e0980c - Generate coverage for uncovered source files
- [bestander] 6fb3ba5090c9c5723c6329b412a1e3a3194e9817 - Fix User Namespace issues with untaring (#2898)
[View full output](https://build.dan.cx/job/yarn-e2e/label=docker,os=ubuntu-12.04/129/)
cc @Daniel15
|
non_process
|
yarn label docker os ubuntu failed build yarn label docker os ubuntu is failing last lines of build output selecting previously unselected package openssl reading database files and directories currently installed unpacking openssl from openssl deb selecting previously unselected package ca certificates unpacking ca certificates from ca certificates all deb selecting previously unselected package yarn unpacking yarn from yarn all deb selecting previously unselected package nodejs unpacking nodejs from nodejs deb setting up openssl setting up ca certificates updating certificates in etc ssl certs added removed done running hooks in etc ca certificates update d done setting up yarn setting up nodejs run yarn test sh cd tmp mkdir yarntest cd yarntest echo yarn version rm rf root cache yarn yarn add react yarn add info no lockfile found warning no license field resolving packages fetching packages usr share yarn lib fetchers tarball fetcher js entry props uid entry uid typeerror cannot set property uid of undefined at extract extractorstream pipe on on entry usr share yarn lib fetchers tarball fetcher js at emitthree events js at extract emit events js at extract onheader usr share yarn node modules tar stream extract js at extract write usr share yarn node modules tar stream extract js at dowrite usr share yarn node modules readable stream lib stream writable js at writeorbuffer usr share yarn node modules readable stream lib stream writable js at extract writable write usr share yarn node modules readable stream lib stream writable js at unpackstream ondata stream readable js at emitone events js fail with log exitcode exit build step execute shell marked build as failure changes since last successful build move root user to a separate file fixes refactored integrity generation and check switched to tar fs from node tar to unpack tars use process mainmodule filename as npm execpath upgrade to flow use release infra rather than ghr to create github releases add 
optional offline mirror pruning add test for pruning on install when mirror pruning has been enabled remove git conflict markers add console warn to process stdin catch block generate coverage for uncovered source files fix user namespace issues with untaring cc
| 0
|
13,077
| 15,419,420,404
|
IssuesEvent
|
2021-03-05 10:08:02
|
ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
https://api.github.com/repos/ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
opened
|
🛑 Processing failed: PermissionError
|
process
|
## Overview
`PermissionError` found in `processing_task` task during run ended on 2021-03-05T10:08:01.957585.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `PermissionError`
Error message: The difference between the request time and the current time is too large.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 305, in finalize_zarr
array_plan.execute()
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/api.py", line 76, in execute
self._executor.execute_plan(self._plan, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/executors/dask.py", line 24, in execute_plan
return plan.compute(**kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 281, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 563, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/threaded.py", line 76, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 487, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 317, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 222, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/core.py", line 121, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3924, in store_chunk
return load_store_chunk(x, out, index, lock, return_stored, False)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3913, in load_store_chunk
out[index] = np.asanyarray(x)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1122, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1217, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1508, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1580, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1709, in _chunk_setitems
self.chunk_store.setitems({k: v for k, v in zip(ckeys, cdatas)})
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 111, in setitems
self.fs.pipe(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 121, in wrapper
return maybe_sync(func, self, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 224, in _pipe
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 753, in _pipe_file
return await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
PermissionError: The difference between the request time and the current time is too large.
```
</details>
|
1.0
|
🛑 Processing failed: PermissionError - ## Overview
`PermissionError` found in `processing_task` task during run ended on 2021-03-05T10:08:01.957585.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `PermissionError`
Error message: The difference between the request time and the current time is too large.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 234, in _call_s3
return await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 305, in finalize_zarr
array_plan.execute()
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/api.py", line 76, in execute
self._executor.execute_plan(self._plan, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/rechunker/executors/dask.py", line 24, in execute_plan
return plan.compute(**kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 281, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/base.py", line 563, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/threaded.py", line 76, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 487, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 317, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/local.py", line 222, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/core.py", line 121, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3924, in store_chunk
return load_store_chunk(x, out, index, lock, return_stored, False)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/dask/array/core.py", line 3913, in load_store_chunk
out[index] = np.asanyarray(x)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1122, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1217, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1508, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1580, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/zarr/core.py", line 1709, in _chunk_setitems
self.chunk_store.setitems({k: v for k, v in zip(ckeys, cdatas)})
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/mapping.py", line 111, in setitems
self.fs.pipe(values)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 121, in wrapper
return maybe_sync(func, self, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 100, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 71, in sync
raise exc.with_traceback(tb)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 55, in f
result[0] = await future
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 224, in _pipe
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 753, in _pipe_file
return await self._call_s3(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err) from err
PermissionError: The difference between the request time and the current time is too large.
```
</details>
|
process
|
🛑 processing failed permissionerror overview permissionerror found in processing task task during run ended on details flow name recovered wfp dpc flnturtd instrument recovered task name processing task error type permissionerror error message the difference between the request time and the current time is too large traceback traceback most recent call last file srv conda envs notebook lib site packages core py line in call return await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call raise error class parsed response operation name botocore exceptions clienterror an error occurred requesttimetooskewed when calling the putobject operation the difference between the request time and the current time is too large the above exception was the direct cause of the following exception traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr array plan execute file srv conda envs notebook lib site packages rechunker api py line in execute self executor execute plan self plan kwargs file srv conda envs notebook lib site packages rechunker executors dask py line in execute plan return plan compute kwargs file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file 
srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in store chunk return load store chunk x out index lock return stored false file srv conda envs notebook lib site packages dask array core py line in load store chunk out np asanyarray x file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems self chunk store setitems k v for k v in zip ckeys cdatas file srv conda envs notebook lib site packages fsspec mapping py line in setitems self fs pipe values file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return maybe sync func self args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in maybe sync return sync loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise exc with traceback tb file srv conda envs notebook lib site packages fsspec asyn py line in f result await future file srv conda envs notebook lib site packages fsspec asyn py line in pipe await asyncio gather file srv conda envs notebook lib site packages core py line in pipe file return await self call file srv conda envs notebook lib site packages core py line in call raise translate boto error err from err permissionerror the difference between the request time and the current time is 
too large
| 1
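The `RequestTimeTooSkewed` error in the record above is raised by S3 when the client machine's clock has drifted more than about 15 minutes from the server's. A quick way to check for this condition is to compare local UTC time with the `Date` header of any S3/HTTP response. This is an illustrative sketch: names like `clock_skew_seconds` and `is_skewed` are hypothetical helpers, not part of s3fs or botocore.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from typing import Optional

# AWS rejects requests whose timestamp is more than ~15 minutes off.
MAX_SKEW_SECONDS = 15 * 60

def clock_skew_seconds(server_date_header: str, now: Optional[datetime] = None) -> float:
    """Absolute skew, in seconds, between local UTC time and an HTTP Date header."""
    server_time = parsedate_to_datetime(server_date_header)  # tz-aware for "GMT"
    local = now if now is not None else datetime.now(timezone.utc)
    return abs((local - server_time).total_seconds())

def is_skewed(server_date_header: str, now: Optional[datetime] = None) -> bool:
    return clock_skew_seconds(server_date_header, now) > MAX_SKEW_SECONDS
```

If the reported skew exceeds the threshold, syncing the host clock (e.g. via NTP) typically resolves the error.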
|
110,995
| 13,996,339,299
|
IssuesEvent
|
2020-10-28 05:39:51
|
ajency/Dhanda-App
|
https://api.github.com/repos/ajency/Dhanda-App
|
closed
|
No categories for DAILY,WORK BASIS,WEEKLY,WORKBASIS
|
Assigned to QA Design Issue bug
|
There are no categories for DAILY,WORK BASIS,WEEKLY,WORKBASIS
Description: The staff should be segregated into categories, as is done for monthly and hourly staff, but Daily, Weekly, and Work Basis staff have no categories of their own and are all placed under the single Monthly category.

|
1.0
|
No categories for DAILY,WORK BASIS,WEEKLY,WORKBASIS - There are no categories for DAILY,WORK BASIS,WEEKLY,WORKBASIS
Description: The staff should be segregated into categories, as is done for monthly and hourly staff, but Daily, Weekly, and Work Basis staff have no categories of their own and are all placed under the single Monthly category.

|
non_process
|
no categories for daily work basis weekly workbasis there are no categories for daily work basis weekly workbasis description as the staff must be segregated into such category like in case of monthly and hourly staff but the daily weekly and work basis dont have category and are put under one category of monthly
| 0
|
221,527
| 17,020,186,115
|
IssuesEvent
|
2021-07-02 17:39:41
|
cloudflare/cloudflare-docs
|
https://api.github.com/repos/cloudflare/cloudflare-docs
|
closed
|
How to measure specificity in route pattern
|
documentation product:workers
|
### Expected Behavior
https://github.com/cloudflare/cloudflare-docs/blob/production/products/workers/src/content/platform/routes.md#matching-behavior
Section _When more than one route pattern could match a request URL, the most specific route pattern wins. For example, the pattern www.example.com/* would take precedence over *.example.com/* when matching a request for https://www.example.com/._
Could we please clarify how specificity is calculated? Is it the number of wildcards, and/or the number of URL segments? Will `https://www.example.com/path1/*` win over `https://www.example.com/*`? It's a struggle that several Workers cannot be applied in order (the way Java Servlets can), and defining this better would really help; otherwise we can only guess, through repeated testing, which behaviors are rules and which are random coincidences.
### Actual Behavior
Expect documentation to describe a deterministic behaviour for more scenarios.
### Section that requires update
The section about specificity under "Matching Behavior", as linked above.
### Additional information
-
|
1.0
|
How to measure specificity in route pattern - ### Expected Behavior
https://github.com/cloudflare/cloudflare-docs/blob/production/products/workers/src/content/platform/routes.md#matching-behavior
Section _When more than one route pattern could match a request URL, the most specific route pattern wins. For example, the pattern www.example.com/* would take precedence over *.example.com/* when matching a request for https://www.example.com/._
Could we please clarify how specificity is calculated? Is it the number of wildcards, and/or the number of URL segments? Will `https://www.example.com/path1/*` win over `https://www.example.com/*`? It's a struggle that several Workers cannot be applied in order (the way Java Servlets can), and defining this better would really help; otherwise we can only guess, through repeated testing, which behaviors are rules and which are random coincidences.
### Actual Behavior
Expect documentation to describe a deterministic behaviour for more scenarios.
### Section that requires update
The section about specificity under "Matching Behavior", as linked above.
### Additional information
-
|
non_process
|
how to measure specificity in route pattern expected behavior section when more than one route pattern could match a request url the most specific route pattern wins for example the pattern would take precedence over example com when matching a request for could we please clarify how specificity is calculated if it is the number of wildcards and or also the number of url segments will win over it s a struggle that several workers cannot be applied in order such as java servlets and defining this better would really help we can just guess by testing several times what are rules and what is random coincidences actual behavior expect documentation to describe a deterministic behaviour for more scenarios section that requires update the section about specificity under matching behaviour as linked above additional information
| 0
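The issue above asks exactly because Cloudflare does not document the tie-break rule, so any model here is a guess. One plausible ordering, consistent with the documented `www.example.com/*` vs `*.example.com/*` example, prefers the pattern with the longest run of literal characters before the first wildcard. The helper names below are hypothetical, not a Cloudflare API.

```python
from typing import Iterable

def literal_prefix_len(pattern: str) -> int:
    """Number of literal characters before the first '*' (whole length if none)."""
    star = pattern.find("*")
    return len(pattern) if star == -1 else star

def most_specific(patterns: Iterable[str]) -> str:
    """Pick the pattern this model considers most specific."""
    return max(patterns, key=literal_prefix_len)
```

Under this model, `www.example.com/path1/*` would indeed win over `www.example.com/*`; whether Cloudflare's actual router agrees is the open question in the issue.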
|
16,286
| 20,911,143,054
|
IssuesEvent
|
2022-03-24 09:25:53
|
bianjieai/opb-faq
|
https://api.github.com/repos/bianjieai/opb-faq
|
opened
|
如何接入文昌链【快速低成本】开发可信分布式应用?(SDK or API)
|
Access process Description
|
### **OPB-SDK**
目前文昌链(基于IRITA OPB)拥有完善的 SDK ,包括 NFT、大数据隐私保护、跨链服务调用等模块,及运维工具支持,在性能、安全可靠性、认证及权限、可维护性、可扩展性和运维监控等多方面都满足企业级应用需求,为实体经济提供基于区块链信任机器的价值赋能。
如何接入?
1、完成BSN账户注册流程并充值;2、通过技术文档配置环境;3、通过接口调用 SDK,连接业务端系统与文昌链平台,使用链上功能。
接入流程与参考文档:https://wenchang.bianjie.ai/wenchangchain.html
### **AVATA:多链 NFT/元宇宙应用 API 服务平台**
为方便应用快速接入,边界智能基于区块链底层核心技术以及支持复杂分布式商业应用的经验自主研发了**【多链和跨链 API 服务平台Avata】**。
可支持多元资产数字化、链上链下可信交互,为复杂异构系统跨链协作提供一键式对接,助力企业简便快捷的构建分布式商业应用,将更多精力专注于业务创新与推广。
2022年3月 Avata 成功发布并上线,首批对接 BSN-文昌链,通过 API 服务封装复杂的区块链底层交互逻辑,为应用开发者开放了首批支持 NFT/元宇宙应用场景的核心服务接口。
如何接入?
1、完成Avata账户注册流程并充值;2、通过Avata技术文档调用API服务;
接入流程:https://wenchang.bianjie.ai/wenchangchain.html
参考文档:https://apis.avata.bianjie.ai
### **SDK or API 如何选择?**
1.不同:通过SDK开发dapp,需要开发与链交互的功能,通过API平台,可以直接调用API接口使用相关功能;
|
1.0
|
如何接入文昌链【快速低成本】开发可信分布式应用?(SDK or API) - ### **OPB-SDK**
目前文昌链(基于IRITA OPB)拥有完善的 SDK ,包括 NFT、大数据隐私保护、跨链服务调用等模块,及运维工具支持,在性能、安全可靠性、认证及权限、可维护性、可扩展性和运维监控等多方面都满足企业级应用需求,为实体经济提供基于区块链信任机器的价值赋能。
如何接入?
1、完成BSN账户注册流程并充值;2、通过技术文档配置环境;3、通过接口调用 SDK,连接业务端系统与文昌链平台,使用链上功能。
接入流程与参考文档:https://wenchang.bianjie.ai/wenchangchain.html
### **AVATA:多链 NFT/元宇宙应用 API 服务平台**
为方便应用快速接入,边界智能基于区块链底层核心技术以及支持复杂分布式商业应用的经验自主研发了**【多链和跨链 API 服务平台Avata】**。
可支持多元资产数字化、链上链下可信交互,为复杂异构系统跨链协作提供一键式对接,助力企业简便快捷的构建分布式商业应用,将更多精力专注于业务创新与推广。
2022年3月 Avata 成功发布并上线,首批对接 BSN-文昌链,通过 API 服务封装复杂的区块链底层交互逻辑,为应用开发者开放了首批支持 NFT/元宇宙应用场景的核心服务接口。
如何接入?
1、完成Avata账户注册流程并充值;2、通过Avata技术文档调用API服务;
接入流程:https://wenchang.bianjie.ai/wenchangchain.html
参考文档:https://apis.avata.bianjie.ai
### **SDK or API 如何选择?**
1.不同:通过SDK开发dapp,需要开发与链交互的功能,通过API平台,可以直接调用API接口使用相关功能;
|
process
|
如何接入文昌链【快速低成本】开发可信分布式应用?(sdk or api) opb sdk 目前文昌链(基于irita opb)拥有完善的 sdk ,包括 nft、大数据隐私保护、跨链服务调用等模块,及运维工具支持,在性能、安全可靠性、认证及权限、可维护性、可扩展性和运维监控等多方面都满足企业级应用需求,为实体经济提供基于区块链信任机器的价值赋能。 如何接入? 、完成bsn账户注册流程并充值; 、通过技术文档配置环境; 、通过接口调用 sdk,连接业务端系统与文昌链平台,使用链上功能。 接入流程与参考文档: avata:多链 nft 元宇宙应用 api 服务平台 为方便应用快速接入,边界智能基于区块链底层核心技术以及支持复杂分布式商业应用的经验自主研发了 【多链和跨链 api 服务平台avata】 。 可支持多元资产数字化、链上链下可信交互,为复杂异构系统跨链协作提供一键式对接,助力企业简便快捷的构建分布式商业应用,将更多精力专注于业务创新与推广。 avata 成功发布并上线,首批对接 bsn 文昌链,通过 api 服务封装复杂的区块链底层交互逻辑,为应用开发者开放了首批支持 nft 元宇宙应用场景的核心服务接口。 如何接入? 、完成avata账户注册流程并充值; 、通过avata技术文档调用api服务; 接入流程: 参考文档: sdk or api 如何选择? 不同:通过sdk开发dapp,需要开发与链交互的功能,通过api平台,可以直接调用api接口使用相关功能;
| 1
|
315,243
| 27,058,973,663
|
IssuesEvent
|
2023-02-13 18:11:15
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
closed
|
Regression on i686: testI64Cast fails (mypy v1.0.0)
|
bug topic-tests topic-mypyc
|
**Bug Report**
Hello, while preparing the Debian package for mypy 1.0 (:tada: congratulations!), we are getting a `testI64Cast` failure on i686
**To Reproduce**
``` Dockerfile
FROM --platform=linux/i386 debian:unstable-slim
RUN apt-get update && apt-get install -y python3-pip python3-lxml wget make
WORKDIR /tmp
RUN wget https://github.com/python/mypy/archive/refs/tags/v1.0.0.tar.gz && \
tar xzf v*tar.gz && \
rm *.tar.gz
WORKDIR /tmp/mypy-1.0.0
RUN python3 -m pip install -r test-requirements.txt
RUN python3 -m pytest -v -n 0 -k testI64Cast
```
**Expected Behavior**
```
mypyc/test/test_irbuild.py::TestGenOps::irbuild-i64.test::testI64Cast PASSED [100%]
```
**Actual Behavior**
```
mypyc/test/test_irbuild.py::TestGenOps::irbuild-i64.test::testI64Cast FAILED [100%]
=================================== FAILURES ===================================
_________________________________ testI64Cast __________________________________
data: /tmp/mypy-1.0.0/mypyc/test-data/irbuild-i64.test:1734:
../mypy-1.0.0/mypyc/test/test_irbuild.py:83: in run_case
assert_test_output(testcase, actual, "Invalid source code output", expected_output)
E AssertionError: Invalid source code output (/tmp/mypy-1.0.0/mypyc/test-data/irbuild-i64.test, line 1734)
----------------------------- Captured stderr call -----------------------------
Expected:
...
def cast_int(x):
x :: int
r0 :: native_int
r1 :: bit
r2, r3 :: int64 (diff)
r4 :: ptr (diff)
r5 :: c_ptr (diff)
r6 :: int64 (diff)
L0:
r0 = x & 1
r1 = r0 == 0
if r1 goto L1 else goto L2 :: bool
L1:
r2 = x >> 1 (diff)
r3 = r2 (diff)
goto L3 (diff)
L2: (diff)
r4 = x ^ 1 (diff)
r5 = r4 (diff)
r6 = CPyLong_AsInt64(r5) (diff)
r3 = r6 (diff)
keep_alive x (diff)
L3: (diff)
return r3 (diff)
Actual:
...
def cast_int(x):
x :: int
r0 :: native_int
r1 :: bit
r2, r3, r4 :: int64 (diff)
r5 :: ptr (diff)
r6 :: c_ptr (diff)
r7 :: int64 (diff)
L0:
r0 = x & 1
r1 = r0 == 0
if r1 goto L1 else goto L2 :: bool
L1:
r2 = extend signed x: builtins.int to int64 (diff)
r3 = r2 >> 1 (diff)
r4 = r3 (diff)
goto L3 (diff)
L2: (diff)
r5 = x ^ 1 (diff)
r6 = r5 (diff)
r7 = CPyLong_AsInt64(r6) (diff)
r4 = r7 (diff)
keep_alive x (diff)
L3: (diff)
return r4 (diff)
Alignment of first line difference:
E: r2, r3 :: int64
A: r2, r3, r4 :: int64
^
```
**Your Environment**
- Mypy version used: v1.0.0
- Mypy command-line flags:
- Mypy configuration options from `mypy.ini` (and other config files):
- Python version used: 3.11.1
|
1.0
|
Regression on i686: testI64Cast fails (mypy v1.0.0) - **Bug Report**
Hello, while preparing the Debian package for mypy 1.0 (:tada: congratulations!), we are getting a `testI64Cast` failure on i686
**To Reproduce**
``` Dockerfile
FROM --platform=linux/i386 debian:unstable-slim
RUN apt-get update && apt-get install -y python3-pip python3-lxml wget make
WORKDIR /tmp
RUN wget https://github.com/python/mypy/archive/refs/tags/v1.0.0.tar.gz && \
tar xzf v*tar.gz && \
rm *.tar.gz
WORKDIR /tmp/mypy-1.0.0
RUN python3 -m pip install -r test-requirements.txt
RUN python3 -m pytest -v -n 0 -k testI64Cast
```
**Expected Behavior**
```
mypyc/test/test_irbuild.py::TestGenOps::irbuild-i64.test::testI64Cast PASSED [100%]
```
**Actual Behavior**
```
mypyc/test/test_irbuild.py::TestGenOps::irbuild-i64.test::testI64Cast FAILED [100%]
=================================== FAILURES ===================================
_________________________________ testI64Cast __________________________________
data: /tmp/mypy-1.0.0/mypyc/test-data/irbuild-i64.test:1734:
../mypy-1.0.0/mypyc/test/test_irbuild.py:83: in run_case
assert_test_output(testcase, actual, "Invalid source code output", expected_output)
E AssertionError: Invalid source code output (/tmp/mypy-1.0.0/mypyc/test-data/irbuild-i64.test, line 1734)
----------------------------- Captured stderr call -----------------------------
Expected:
...
def cast_int(x):
x :: int
r0 :: native_int
r1 :: bit
r2, r3 :: int64 (diff)
r4 :: ptr (diff)
r5 :: c_ptr (diff)
r6 :: int64 (diff)
L0:
r0 = x & 1
r1 = r0 == 0
if r1 goto L1 else goto L2 :: bool
L1:
r2 = x >> 1 (diff)
r3 = r2 (diff)
goto L3 (diff)
L2: (diff)
r4 = x ^ 1 (diff)
r5 = r4 (diff)
r6 = CPyLong_AsInt64(r5) (diff)
r3 = r6 (diff)
keep_alive x (diff)
L3: (diff)
return r3 (diff)
Actual:
...
def cast_int(x):
x :: int
r0 :: native_int
r1 :: bit
r2, r3, r4 :: int64 (diff)
r5 :: ptr (diff)
r6 :: c_ptr (diff)
r7 :: int64 (diff)
L0:
r0 = x & 1
r1 = r0 == 0
if r1 goto L1 else goto L2 :: bool
L1:
r2 = extend signed x: builtins.int to int64 (diff)
r3 = r2 >> 1 (diff)
r4 = r3 (diff)
goto L3 (diff)
L2: (diff)
r5 = x ^ 1 (diff)
r6 = r5 (diff)
r7 = CPyLong_AsInt64(r6) (diff)
r4 = r7 (diff)
keep_alive x (diff)
L3: (diff)
return r4 (diff)
Alignment of first line difference:
E: r2, r3 :: int64
A: r2, r3, r4 :: int64
^
```
**Your Environment**
- Mypy version used: v1.0.0
- Mypy command-line flags:
- Mypy configuration options from `mypy.ini` (and other config files):
- Python version used: 3.11.1
|
non_process
|
regression on fails mypy bug report hello while preparing the debian package for mypy tada congratulations we are getting a failure on to reproduce dockerfile from platform linux debian unstable slim run apt get update apt get install y pip lxml wget make workdir tmp run wget tar xzf v tar gz rm tar gz workdir tmp mypy run m pip install r test requirements txt run m pytest v n k expected behavior mypyc test test irbuild py testgenops irbuild test passed actual behavior mypyc test test irbuild py testgenops irbuild test failed failures data tmp mypy mypyc test data irbuild test mypy mypyc test test irbuild py in run case assert test output testcase actual invalid source code output expected output e assertionerror invalid source code output tmp mypy mypyc test data irbuild test line captured stderr call expected def cast int x x int native int bit diff ptr diff c ptr diff diff x if goto else goto bool x diff diff goto diff diff x diff diff cpylong diff diff keep alive x diff diff return diff actual def cast int x x int native int bit diff ptr diff c ptr diff diff x if goto else goto bool extend signed x builtins int to diff diff diff goto diff diff x diff diff cpylong diff diff keep alive x diff diff return diff alignment of first line difference e a your environment mypy version used mypy command line flags mypy configuration options from mypy ini and other config files python version used
| 0
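The IR diff in the mypy failure above reflects platform-dependent code generation rather than a miscompilation: on i686 the tagged integer lives in a 32-bit native int, so untagging it to an `int64` needs an extra sign-extension before the shift, which is the `extend signed x: builtins.int to int64` register the expected output lacks. A rough model of the two untagging paths (illustrative Python, not mypyc's actual code; the short-int path is taken when the low tag bit is 0, as in the IR):

```python
def sign_extend_32_to_64(x: int) -> int:
    """Interpret the low 32 bits of x as a signed 32-bit value."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x & (1 << 31) else x

def untag_i686(tagged32: int) -> int:
    # matches the i686 IR: r2 = extend signed x; r3 = r2 >> 1
    return sign_extend_32_to_64(tagged32) >> 1

def untag_x64(tagged64: int) -> int:
    # matches the 64-bit IR: r2 = x >> 1
    return tagged64 >> 1
```

Both paths recover the same value; the test failure is simply that the expected IR in `irbuild-i64.test` was written for 64-bit targets only.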
|
212,899
| 7,243,825,857
|
IssuesEvent
|
2018-02-14 13:13:13
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Signing certificates for cluster addons using apiserver CA
|
area/images-registry area/security lifecycle/rotten priority/backlog sig/cluster-lifecycle team/cluster (deprecated - do not use) team/control-plane (deprecated - do not use)
|
I've been looking into getting a cluster-local docker registry up as a cluster addon (#1319) and the problem comes down to creating the certificate for the registry and trusting it. The apiserver CA cert is already visible from every node and every pod, so signing the registry certificate with the apiserver CA would eliminate any CA cert distribution issues.
Right now, we create the CA and apiserver certificate and key in a number of places (we have provider-specific ways to create them, or use one of two scripts in Salt). We don't copy over the CA key, which is needed to sign new certificates.
One approach to fixing this is copying the CA key (might as well copy all of the pki/ directory used by easyrsa), but I'm not sure that this is a good idea. Alternatively, we could add certificate creation to every place where the CA cert is created, but this is tedious and error-prone.
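As a rough illustration of the signing step itself, here is a hedged Go sketch using the standard crypto/x509 package. The CA is generated in-process and only stands in for the apiserver CA; the function name and subject names are illustrative, and none of this is Kubernetes code:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// signAndVerify creates a toy CA (standing in for the apiserver CA), signs a
// leaf certificate for a registry service with it, and checks that the leaf
// chains back to the CA. All names are hypothetical.
func signAndVerify() error {
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "apiserver-ca"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Self-signed: the CA template is both subject and issuer.
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return err
	}

	// Leaf certificate for the registry, signed by the CA's key.
	leafKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "registry.kube-system.svc"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	leafCert, err := x509.ParseCertificate(leafDER)
	if err != nil {
		return err
	}

	// Verify the leaf chains to the CA — the property that makes CA-signed
	// addon certificates trusted by anything holding the CA cert.
	roots := x509.NewCertPool()
	roots.AddCert(caCert)
	_, err = leafCert.Verify(x509.VerifyOptions{Roots: roots})
	return err
}

func main() {
	fmt.Println("verify error:", signAndVerify())
}
```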
|
1.0
|
Signing certificates for cluster addons using apiserver CA - I've been looking into getting a cluster-local docker registry up as a cluster addon (#1319) and the problem comes down to creating the certificate for the registry and trusting it. The apiserver CA cert is already visible from every node and every pod, so signing the registry certificate with the apiserver CA would eliminate any CA cert distribution issues.
Right now, we create the CA and apiserver certificate and key in a number of places (we have provider-specific ways to create them, or use one of two scripts in Salt). We don't copy over the CA key, which is needed to sign new certificates.
One approach to fixing this is copying the CA key (might as well copy all of the pki/ directory used by easyrsa), but I'm not sure that this is a good idea. Alternatively, we could add certificate creation to every place where the CA cert is created, but this is tedious and error-prone.
|
non_process
|
signing certificates for cluster addons using apiserver ca i ve been looking into getting a cluster local docker registry up as a cluster addon and the problem comes down to creating the certificate for the registry and trusting it the apiserver ca cert is already visible from every node and every pod so signing the registry certificate with the apiserver ca would eliminate any ca cert distribution issues right now we create the ca and apiserver certificate and key in a number of places we have provider specific ways to create them or use one of two scripts in salt we don t copy over the ca key which is needed to sign new certificates one approach to fixing this is copying the ca key might as well copy all of the pki directory used by easyrsa but i m not sure that this is a good idea alternative we could add creation to every instance where the ca cert is created but this is tedious and error prone
| 0
|
9,355
| 12,366,367,763
|
IssuesEvent
|
2020-05-18 10:19:31
|
DiSSCo/user-stories
|
https://api.github.com/repos/DiSSCo/user-stories
|
opened
|
contribution indicators as validator citizen
|
1. NH museum 4. Data processing ICEDIG-SURVEY Research Specimen level
|
As a citizen scientist, I want to be recognized as a contributor so that I can identify my contribution to validating data from external sources; for this I need contribution indicators as a validator
|
1.0
|
contribution indicators as validator citizen - As a citizen scientist, I want to be recognized as a contributor so that I can identify my contribution to validating data from external sources; for this I need contribution indicators as a validator
|
process
|
contribution indicators as validator citizen as a citizen scientist i want to be recognized as contributor so that i can identify my contribution on validating data from external sources for this i need contribution indicators as validator
| 1
|
259,423
| 27,621,881,494
|
IssuesEvent
|
2023-03-10 01:18:43
|
nidhi7598/linux-3.0.35
|
https://api.github.com/repos/nidhi7598/linux-3.0.35
|
closed
|
CVE-2016-3707 (High) detected in linuxlinux-3.0.40 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2016-3707 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The icmp_check_sysrq function in net/ipv4/icmp.c in the kernel.org projects/rt patches for the Linux kernel, as used in the kernel-rt package before 3.10.0-327.22.1 in Red Hat Enterprise Linux for Real Time 7 and other products, allows remote attackers to execute SysRq commands via crafted ICMP Echo Request packets, as demonstrated by a brute-force attack to discover a cookie, or an attack that occurs after reading the local icmp_echo_sysrq file.
<p>Publish Date: 2016-06-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3707>CVE-2016-3707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3707">https://nvd.nist.gov/vuln/detail/CVE-2016-3707</a></p>
<p>Release Date: 2016-06-27</p>
<p>Fix Resolution: 3.10.0-327.22.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-3707 (High) detected in linuxlinux-3.0.40 - autoclosed - ## CVE-2016-3707 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.40</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.0/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35/commit/4cc6d4a22f88b8effe1090492c1a242ce587b492">4cc6d4a22f88b8effe1090492c1a242ce587b492</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/icmp.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The icmp_check_sysrq function in net/ipv4/icmp.c in the kernel.org projects/rt patches for the Linux kernel, as used in the kernel-rt package before 3.10.0-327.22.1 in Red Hat Enterprise Linux for Real Time 7 and other products, allows remote attackers to execute SysRq commands via crafted ICMP Echo Request packets, as demonstrated by a brute-force attack to discover a cookie, or an attack that occurs after reading the local icmp_echo_sysrq file.
<p>Publish Date: 2016-06-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3707>CVE-2016-3707</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-3707">https://nvd.nist.gov/vuln/detail/CVE-2016-3707</a></p>
<p>Release Date: 2016-06-27</p>
<p>Fix Resolution: 3.10.0-327.22.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files net icmp c net icmp c net icmp c vulnerability details the icmp check sysrq function in net icmp c in the kernel org projects rt patches for the linux kernel as used in the kernel rt package before in red hat enterprise linux for real time and other products allows remote attackers to execute sysrq commands via crafted icmp echo request packets as demonstrated by a brute force attack to discover a cookie or an attack that occurs after reading the local icmp echo sysrq file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
10,004
| 13,042,457,945
|
IssuesEvent
|
2020-07-28 22:33:39
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
post-processor/vsphere: ovftool only shows error messages from STDERR
|
bug post-processor/vsphere
|
For background see [Troubleshooting ovftool, unable to get specific errors](https://groups.google.com/d/msgid/packer-tool/536e527e-ba53-4e59-a8fc-9300661c5004%40googlegroups.com?utm_medium=email&utm_source=footer)
```golang
if err := cmd.Run(); err != nil {
err := fmt.Errorf("Error uploading virtual machine: %s\n%s\n", err, p.filterLog(errOut.String()))
return nil, false, err
}
```
If ovftool fails, Packer only shows STDERR, but it seems that at least some error messages are sent to STDOUT.
|
1.0
|
post-processor/vsphere: ovftool only shows error messages from STDERR - For background see [Troubleshooting ovftool, unable to get specific errors](https://groups.google.com/d/msgid/packer-tool/536e527e-ba53-4e59-a8fc-9300661c5004%40googlegroups.com?utm_medium=email&utm_source=footer)
```golang
if err := cmd.Run(); err != nil {
err := fmt.Errorf("Error uploading virtual machine: %s\n%s\n", err, p.filterLog(errOut.String()))
return nil, false, err
}
```
If ovftool fails, Packer only shows STDERR, but it seems that at least some error messages are sent to STDOUT.
|
process
|
post processor vsphere ovftool only shows error messages from stderr for background see golang if err cmd run err nil err fmt errorf error uploading virtual machine s n s n err p filterlog errout string return nil false err if ovftool fails packer only shows the stderr but it seems like at least some error messages are sent to stdout
| 1
|
233,031
| 25,722,530,299
|
IssuesEvent
|
2022-12-07 14:37:42
|
LynRodWS/alcor
|
https://api.github.com/repos/LynRodWS/alcor
|
opened
|
CVE-2022-42252 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2022-42252 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.36.jar</b>, <b>tomcat-embed-core-9.0.31.jar</b>, <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.33.jar</b>, <b>tomcat-embed-core-9.0.21.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/network_acl_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.1.RELEASE.jar
- :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.31.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/route_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.31/tomcat-embed-core-9.0.31.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.31.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/security_group_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.0.RELEASE.jar
- :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.33.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/elastic_ip_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.33.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.21.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/vpc_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.21/tomcat-embed-core-9.0.21.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.21.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LynRodWS/alcor/commit/3c4e28556738e020da91fd03c3aaa5d9a7c1cfed">3c4e28556738e020da91fd03c3aaa5d9a7c1cfed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If Apache Tomcat 8.5.0 to 8.5.82, 9.0.0-M1 to 9.0.67, 10.0.0-M1 to 10.0.26 or 10.1.0-M1 to 10.1.0 was configured to ignore invalid HTTP headers via setting rejectIllegalHeader to false (the default for 8.5.x only), Tomcat did not reject a request containing an invalid Content-Length header making a request smuggling attack possible if Tomcat was located behind a reverse proxy that also failed to reject the request with the invalid header.
<p>Publish Date: 2022-11-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42252>CVE-2022-42252</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p22x-g9px-3945">https://github.com/advisories/GHSA-p22x-g9px-3945</a></p>
<p>Release Date: 2022-11-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:8.5.83,9.0.68,10.0.27,10.1.1</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-42252 (High) detected in multiple libraries - ## CVE-2022-42252 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.36.jar</b>, <b>tomcat-embed-core-9.0.31.jar</b>, <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.33.jar</b>, <b>tomcat-embed-core-9.0.21.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/network_acl_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.1.RELEASE.jar
- :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.31.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/route_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.31/tomcat-embed-core-9.0.31.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.31.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/security_group_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.0.RELEASE.jar
- :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.33.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/elastic_ip_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.33.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.21.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/vpc_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.21/tomcat-embed-core-9.0.21.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.21.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/LynRodWS/alcor/commit/3c4e28556738e020da91fd03c3aaa5d9a7c1cfed">3c4e28556738e020da91fd03c3aaa5d9a7c1cfed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If Apache Tomcat 8.5.0 to 8.5.82, 9.0.0-M1 to 9.0.67, 10.0.0-M1 to 10.0.26 or 10.1.0-M1 to 10.1.0 was configured to ignore invalid HTTP headers via setting rejectIllegalHeader to false (the default for 8.5.x only), Tomcat did not reject a request containing an invalid Content-Length header making a request smuggling attack possible if Tomcat was located behind a reverse proxy that also failed to reject the request with the invalid header.
<p>Publish Date: 2022-11-01
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42252>CVE-2022-42252</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p22x-g9px-3945">https://github.com/advisories/GHSA-p22x-g9px-3945</a></p>
<p>Release Date: 2022-11-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:8.5.83,9.0.68,10.0.27,10.1.1</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar core tomcat implementation library home page a href path to dependency file services network acl manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services route manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services security group manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services elastic ip manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed 
tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services vpc manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details if apache tomcat to to to or to was configured to ignore invalid http headers via setting rejectillegalheader to false the default for x only tomcat did not reject a request containing an invalid content length header making a request smuggling attack possible if tomcat was located behind a reverse proxy that also failed to reject the request with the invalid header publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org 
apache tomcat tomcat
| 0
|
18,833
| 24,736,608,632
|
IssuesEvent
|
2022-10-20 22:43:36
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
closed
|
Translation of the `step-by-step` module tests
|
[1] Bug [0] Desenvolvimento [2] Média Prioridade [3] Processamento Dinâmico
|
## Expected Behavior
The tests of the steps module with dynamic processing are expected to be using Playwright instead of Pyppeteer.
## Current Behavior
The tests in this module use the Pyppeteer library. In addition, they rely on the existence of old test cases that were deleted from the step-by-step module.
## Steps to reproduce the error
Run pytest tests and observe the errors generated.
## Collection Specifications
Not applicable
## System (if needed)
- MP or local: local
- Specific branch: dev
- Different system: distributed system with playwright
## Screenshots (if needed)

|
1.0
|
Translation of the `step-by-step` module tests - ## Expected Behavior
The tests of the steps module with dynamic processing are expected to be using Playwright instead of Pyppeteer.
## Current Behavior
The tests in this module use the Pyppeteer library. In addition, they rely on the existence of old test cases that were deleted from the step-by-step module.
## Steps to reproduce the error
Run pytest tests and observe the errors generated.
## Collection Specifications
Not applicable
## System (if needed)
- MP or local: local
- Specific branch: dev
- Different system: distributed system with playwright
## Screenshots (if needed)

|
process
|
tradução dos testes do módulo step by step comportamento esperado espera se que os testes do módulo de passos com processamento dinâmico esteja usando playwright ao invés de pyppeteer comportamento atual os testes desse módulo usam a biblioteca pyppeteer além disso contam com a existência de casos de testes antigos que foram deletados do módulo step by step passos para reproduzir o erro executar pytest tests e observar os erros gerados especificações da coleta não se aplica sistema caso necessário mp ou local local branch específica dev sistema diferente sistema distribuído com playwright screenshots caso necessário
| 1
|
18,840
| 24,745,181,272
|
IssuesEvent
|
2022-10-21 09:06:31
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
r.sim.water parameters
|
GRASS Processing Bug
|
### What is the bug or the crash?
The input parameters *infil* and *flow_control* for the GRASS algorithm *r.sim.water* are mandatory in QGIS, but should be optional. The Boolean parameter *time series* should be set to *No* by default. The time series option does not seem to work in QGIS, although I have not tested this extensively yet.
Documentation: [https://grass.osgeo.org/grass-stable/manuals/r.sim.water.html](https://grass.osgeo.org/grass-stable/manuals/r.sim.water.html)
### Steps to reproduce the issue
1. Run r.sim.water

### Versions
QGIS version
3.26.3-Buenos Aires
QGIS code revision
65e4edfdada
Qt version
5.15.3
Python version
3.9.5
GDAL/OGR version
3.5.1
PROJ version
9.1.0
EPSG Registry database version
v10.074 (2022-08-01)
GEOS version
3.10.3-CAPI-1.16.1
SQLite version
3.38.1
PDAL version
2.4.3
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
Active Python plugins
DEMto3D
3.4
ee_plugin
0.0.4
LAStools
1.4
mapswipetool_plugin
1.2
mmqgis
2021.9.10
processing_r
3.0.0
qlyrx
0.3.3
QuickOSM
2.1.1
quick_map_services
0.19.26
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
r.sim.water parameters - ### What is the bug or the crash?
The input parameters *infil* and *flow_control* for the GRASS algorithm *r.sim.water* are mandatory in QGIS, but should be optional. The Boolean parameter *time series* should be set to *No* by default. The time series option does not seem to work in QGIS, although I have not tested this extensively yet.
Documentation: [https://grass.osgeo.org/grass-stable/manuals/r.sim.water.html](https://grass.osgeo.org/grass-stable/manuals/r.sim.water.html)
### Steps to reproduce the issue
1. Run r.sim.water

### Versions
QGIS version
3.26.3-Buenos Aires
QGIS code revision
65e4edfdada
Qt version
5.15.3
Python version
3.9.5
GDAL/OGR version
3.5.1
PROJ version
9.1.0
EPSG Registry database version
v10.074 (2022-08-01)
GEOS version
3.10.3-CAPI-1.16.1
SQLite version
3.38.1
PDAL version
2.4.3
PostgreSQL client version
unknown
SpatiaLite version
5.0.1
QWT version
6.1.6
QScintilla2 version
2.13.1
OS version
Windows 10 Version 2009
Active Python plugins
DEMto3D
3.4
ee_plugin
0.0.4
LAStools
1.4
mapswipetool_plugin
1.2
mmqgis
2021.9.10
processing_r
3.0.0
qlyrx
0.3.3
QuickOSM
2.1.1
quick_map_services
0.19.26
db_manager
0.1.20
grassprovider
2.12.99
MetaSearch
0.3.6
processing
2.12.99
sagaprovider
2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
r sim water parameters what is the bug or the crash the input parameters infil and flow control for the grass algorithm r sim water are mandatory in qgis but should be optional the boolean parameter time series should be set to no by default the time series option does not seem to work in qgis although i have not tested this extensively yet documentation steps to reproduce the issue run r sim water versions qgis version buenos aires qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version unknown spatialite version qwt version version os version windows version active python plugins ee plugin lastools mapswipetool plugin mmqgis processing r qlyrx quickosm quick map services db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
680,720
| 23,283,468,247
|
IssuesEvent
|
2022-08-05 14:15:39
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
closed
|
Contact request message and very next message are separated with [Today] timestamp
|
bug Chat priority 2: medium E:Bugfixes S:2
|
### Description
1. create user one
2. create user two (separate instance)
3. as user one, send a message in public chat
4. as user two, click the message and send contact request
5. as user one, accept the contact request
6. as user one, send a message to 1x1 chat with user two
7. as user two, open 1x1 chat with user one
As result, the contact request message and newly received message are divided with [Today] timestamp
<img width="927" alt="Screenshot 2022-07-20 at 10 41 04" src="https://user-images.githubusercontent.com/82375995/179925782-b17a6d2f-65d2-45a1-a279-f9fbb40f9ae3.png">
|
1.0
|
Contact request message and very next message are separated with [Today] timestamp - ### Description
1. create user one
2. create user two (separate instance)
3. as user one, send a message in public chat
4. as user two, click the message and send contact request
5. as user one, accept the contact request
6. as user one, send a message to 1x1 chat with user two
7. as user two, open 1x1 chat with user one
As result, the contact request message and newly received message are divided with [Today] timestamp
<img width="927" alt="Screenshot 2022-07-20 at 10 41 04" src="https://user-images.githubusercontent.com/82375995/179925782-b17a6d2f-65d2-45a1-a279-f9fbb40f9ae3.png">
|
non_process
|
contact request message and very next message are separated with timestamp description create user one create user two separate instance as user one send a message in public chat as user two click the message and send contact request as user one accept the contact request as user one send a message to chat with user two as user two open chat with user one as result the contact request message and newly received message are divided with timestamp img width alt screenshot at src
| 0
|
5,913
| 8,735,560,294
|
IssuesEvent
|
2018-12-11 17:04:41
|
habitat-sh/habitat
|
https://api.github.com/repos/habitat-sh/habitat
|
closed
|
Habitat Supervisor may manage multiple processes
|
A-process-management C-feature Epic V-sup
|
Originally a Habitat Supervisor managed a single process. This is work related to changing the Habitat Supervisor to managed multiple processes.
|
1.0
|
Habitat Supervisor may manage multiple processes - Originally a Habitat Supervisor managed a single process. This is work related to changing the Habitat Supervisor to managed multiple processes.
|
process
|
habitat supervisor may manage multiple processes originally a habitat supervisor managed a single process this is work related to changing the habitat supervisor to managed multiple processes
| 1
|
18,547
| 24,555,309,138
|
IssuesEvent
|
2022-10-12 15:25:58
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] [Offline Indicator] App is crashing in the following scenario
|
Bug Blocker P0 Android Process: Fixed Process: Tested QA Process: Tested dev
|
**Steps:**
1. Sign in and complete the passcode process
2. When navigated to 'Study list' screen, Turn off the internet/mobile data
3. Click on 'Hamburger icon' and Click on 'ok' button in the offline indicator popup message
4. Then , Click on the enrolled study and Verify
**AR:** App is crashing as soon as we click on enrolled study
**ER:** App should not crash
https://user-images.githubusercontent.com/86007179/178471038-a98cfa0a-e988-4359-b6eb-8e90b27d0be0.mp4
|
3.0
|
[Android] [Offline Indicator] App is crashing in the following scenario - **Steps:**
1. Sign in and complete the passcode process
2. When navigated to 'Study list' screen, Turn off the internet/mobile data
3. Click on 'Hamburger icon' and Click on 'ok' button in the offline indicator popup message
4. Then , Click on the enrolled study and Verify
**AR:** App is crashing as soon as we click on enrolled study
**ER:** App should not crash
https://user-images.githubusercontent.com/86007179/178471038-a98cfa0a-e988-4359-b6eb-8e90b27d0be0.mp4
|
process
|
app is crashing in the following scenario steps sign in and complete the passcode process when navigated to study list screen turn off the internet mobile data click on hamburger icon and click on ok button in the offline indicator popup message then click on the enrolled study and verify ar app is crashing as soon as we click on enrolled study er app should not crash
| 1
|
282,324
| 30,889,276,633
|
IssuesEvent
|
2023-08-04 02:29:36
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
reopened
|
CVE-2022-1974 (Medium) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2022-1974 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel's NFC core functionality due to a race condition between kobject creation and delete. This vulnerability allows a local attacker with CAP_NET_ADMIN privilege to leak kernel information.
<p>Publish Date: 2022-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1974>CVE-2022-1974</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1974">https://www.linuxkernelcves.com/cves/CVE-2022-1974</a></p>
<p>Release Date: 2022-08-31</p>
<p>Fix Resolution: v4.9.313,v4.14.278,v4.19.242,v5.4.193,v5.10.115,v5.15.39,v5.17.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-1974 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2022-1974 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A use-after-free flaw was found in the Linux kernel's NFC core functionality due to a race condition between kobject creation and delete. This vulnerability allows a local attacker with CAP_NET_ADMIN privilege to leak kernel information.
<p>Publish Date: 2022-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1974>CVE-2022-1974</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1974">https://www.linuxkernelcves.com/cves/CVE-2022-1974</a></p>
<p>Release Date: 2022-08-31</p>
<p>Fix Resolution: v4.9.313,v4.14.278,v4.19.242,v5.4.193,v5.10.115,v5.15.39,v5.17.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details a use after free flaw was found in the linux kernel s nfc core functionality due to a race condition between kobject creation and delete this vulnerability allows a local attacker with cap net admin privilege to leak kernel information publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
10,698
| 13,493,068,486
|
IssuesEvent
|
2020-09-11 19:02:43
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
opened
|
Unicode Conference 44 Presentation
|
C-meta C-process T-docs T-task
|
At IUC44 we're going to make the initial presentation of ICU4X project.
Goals:
* Introduce ICU4X
* Explain the Problem and Solution
* Outline 0.1 release
* Attract community to test, give feedback and help with 0.2
Since the presentation is going to be fully virtual, we need to prepare slides and record a video of the talk by **September 25**.
The plan is to prepare a slide-deck and narrative for the presentation, delegate parts of the talk to different ICU4X contributors and then @mihnita offered to stitch the video together.
The lead presenter will be @zbraniecki.
Outline:
* Introduction
* Who are we (Zibi)
* Unicode, ICU, CLDR, ECMA402 (Nebojsa?)
* ICU4C vs ICU4J (Shane? Mihai? Nebojsa?)
* Problem Statement
* Client Side Needs (Shane)
* Data Management (Shane)
* Modularity (Zibi)
* ICU4X
* Rust ecosystem (Zibi)
* DataProvider (Shane)
* Crate system (Zibi)
* FFI/WASM target (Nebojsa?)
* ICU4X 0.1 (Zibi)
* Scope
* Locale (Zibi)
* UnicodeSet (Elango)
* PluralRules (Zibi)
* DateTime (Mihai)
* Features
* Sync DataProvider (JSON, Bincode) (Shane)
* Modular design (Zibi)
* ECMA402 traits (Filip, Konstantin?)
* Performance/Memory (Zibi, Elango)
* ICU4X 2021 (Zibi)
I'll keep this comment up to date as we adjust the assignees and outline.
The timeline is:
* Sept 11 - Sept 18 - Work together on the slide deck
* Sept 18 - Evaluate the slide deck at the weekly call
* Sept 18 - Sept 20 - Record the clips for Mihai to stitch
* Sept 20 - Sept 24 - Mihai combines them together
* Sept 25 - Review of the complete video
* Sept 26 - Submission
|
1.0
|
Unicode Conference 44 Presentation - At IUC44 we're going to make the initial presentation of ICU4X project.
Goals:
* Introduce ICU4X
* Explain the Problem and Solution
* Outline 0.1 release
* Attract community to test, give feedback and help with 0.2
Since the presentation is going to be fully virtual, we need to prepare slides and record a video of the talk by **September 25**.
The plan is to prepare a slide-deck and narrative for the presentation, delegate parts of the talk to different ICU4X contributors and then @mihnita offered to stitch the video together.
The lead presenter will be @zbraniecki.
Outline:
* Introduction
* Who are we (Zibi)
* Unicode, ICU, CLDR, ECMA402 (Nebojsa?)
* ICU4C vs ICU4J (Shane? Mihai? Nebojsa?)
* Problem Statement
* Client Side Needs (Shane)
* Data Management (Shane)
* Modularity (Zibi)
* ICU4X
* Rust ecosystem (Zibi)
* DataProvider (Shane)
* Crate system (Zibi)
* FFI/WASM target (Nebojsa?)
* ICU4X 0.1 (Zibi)
* Scope
* Locale (Zibi)
* UnicodeSet (Elango)
* PluralRules (Zibi)
* DateTime (Mihai)
* Features
* Sync DataProvider (JSON, Bincode) (Shane)
* Modular design (Zibi)
* ECMA402 traits (Filip, Konstantin?)
* Performance/Memory (Zibi, Elango)
* ICU4X 2021 (Zibi)
I'll keep this comment up to date as we adjust the assignees and outline.
The timeline is:
* Sept 11 - Sept 18 - Work together on the slide deck
* Sept 18 - Evaluate the slide deck at the weekly call
* Sept 18 - Sept 20 - Record the clips for Mihai to stitch
* Sept 20 - Sept 24 - Mihai combines them together
* Sept 25 - Review of the complete video
* Sept 26 - Submission
|
process
|
unicode conference presentation at we re going to make the initial presentation of project goals introduce explain the problem and solution outline release attract community to test give feedback and help with since the presentation is going to be fully virtual we need to prepare slides and record a video of the talk by september the plan is to prepare a slide deck and narrative for the presentation delegate parts of the talk to different contributors and then mihnita offered to stitch the video together the lead presenter will be zbraniecki outline introduction who are we zibi unicode icu cldr nebojsa vs shane mihai nebojsa problem statement client side needs shane data management shane modularity zibi rust ecosystem zibi dataprovider shane crate system zibi ffi wasm target nebojsa zibi scope locale zibi unicodeset elango pluralrules zibi datetime mihai features sync dataprovider json bincode shane modular design zibi traits filip konstantin performance memory zibi elango zibi i ll keep this comment up to date as we adjust the assignees and outline the timeline is sept sept work together on the slide deck sept evaluate the slide deck at the weekly call sept sept record the clips for mihai to stitch sept sept mihai combines them together sept review of the complete video sept submission
| 1
|
53,815
| 23,073,694,570
|
IssuesEvent
|
2022-07-25 20:44:12
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
opened
|
SMO Sprint Review Updates 7/25/22
|
Type: Meeting Impact: 2-Major Service: Apps Need: 1-Must Have Workgroup: SMO Type: Enhancement Project: Public-Private Partnership P3 Management
|
[Meeting Notes](https://docs.google.com/document/d/1rhXv86hmMy15DCANdq3v2lgdEwnQPge_wUPkPl5gk_k/edit#heading=h.3sdgu3b3w33l)
**Things to update**
- [ ] Change to Y/N questions
- Is this truly a pilot to demonstrate emerging tech for mobility outcomes (i.e. not a pitch?)
- Does the company have a quality market presence (website, brochure, history)?
- Does the solution require changes to the transportation network or infrastructure?
- [ ] Add groupings (SMO will provide)
- Does the solution fit within one of the SMO groups of technology? If so, which
- [ ] Provide text box for If Yes, how so
- Does solution promote racial equity?
- Is the solution environmentally sustainable and have a path towards carbon neutrality?
- [ ] Add an editable meeting notes section for follow up to questions that Steering Committee has for Proposer
|
1.0
|
SMO Sprint Review Updates 7/25/22 - [Meeting Notes](https://docs.google.com/document/d/1rhXv86hmMy15DCANdq3v2lgdEwnQPge_wUPkPl5gk_k/edit#heading=h.3sdgu3b3w33l)
**Things to update**
- [ ] Change to Y/N questions
- Is this truly a pilot to demonstrate emerging tech for mobility outcomes (i.e. not a pitch?)
- Does the company have a quality market presence (website, brochure, history)?
- Does the solution require changes to the transportation network or infrastructure?
- [ ] Add groupings (SMO will provide)
- Does the solution fit within one of the SMO groups of technology? If so, which
- [ ] Provide text box for If Yes, how so
- Does solution promote racial equity?
- Is the solution environmentally sustainable and have a path towards carbon neutrality?
- [ ] Add an editable meeting notes section for follow up to questions that Steering Committee has for Proposer
|
non_process
|
smo sprint review updates things to update change to y n questions is this truly a pilot to demonstrate emerging tech for mobility outcomes i e not a pitch does the company have a quality market presence website brochure history does the solution require changes to the transportation network or infrastructure add groupings smo will provide does the solution fit within one of the smo groups of technology if so which provide text box for if yes how so does solution promote racial equity is the solution environmentally sustainable and have a path towards carbon neutrality add an editable meeting notes section for follow up to questions that steering committee has for proposer
| 0
|
241,926
| 26,256,994,190
|
IssuesEvent
|
2023-01-06 02:14:21
|
Guillerbr/escola-angu-app
|
https://api.github.com/repos/Guillerbr/escola-angu-app
|
opened
|
CVE-2021-37713 (High) detected in tar-4.4.10.tgz
|
security vulnerability
|
## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.10.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.10.tgz">https://registry.npmjs.org/tar/-/tar-4.4.10.tgz</a></p>
<p>Path to dependency file: /escola-angu-app/package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- cli-8.0.6.tgz (Root Library)
- pacote-9.5.0.tgz
- :x: **tar-4.4.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (@angular/cli): 8.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-37713 (High) detected in tar-4.4.10.tgz - ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.10.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.10.tgz">https://registry.npmjs.org/tar/-/tar-4.4.10.tgz</a></p>
<p>Path to dependency file: /escola-angu-app/package.json</p>
<p>Path to vulnerable library: /node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- cli-8.0.6.tgz (Root Library)
- pacote-9.5.0.tgz
- :x: **tar-4.4.10.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (@angular/cli): 8.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file escola angu app package json path to vulnerable library node modules tar package json dependency hierarchy cli tgz root library pacote tgz x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted this is in part accomplished by sanitizing absolute paths of entries within the archive skipping archive entries that contain path portions and resolving the sanitized paths against the extraction target directory this logic was insufficient on windows systems when extracting tar files that contained a path that was not an absolute path but specified a drive letter different from the extraction target such as c some path if the drive letter does not match the extraction target for example d extraction dir then the result of path resolve extractiondirectory entrypath would resolve against the current working directory on the c drive rather than the extraction target directory additionally a portion of the path could occur immediately after the drive letter such as c foo and was not properly sanitized by the logic that checked for within the normalized and split portions of the path this only affects users of node tar on windows systems these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar there is no reasonable way to work around this issue without performing the same path normalization procedures that node tar now does users are encouraged to upgrade to the latest patched versions of node tar rather than attempt to sanitize paths themselves publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution angular cli step up your open source security game with mend
| 0
|
83,471
| 10,354,722,337
|
IssuesEvent
|
2019-09-05 14:17:04
|
nextcloud/server
|
https://api.github.com/repos/nextcloud/server
|
closed
|
Voice in Italian not centered in the button stores the credentials
|
1. to develop bug design good first issue
|
In version 16.0.4 localized in Italian, when you go to the setting menu -> basic setting -> mail server the voice saves credentials does not come out completely written in the button area can you fix it? I attach screenshot

|
1.0
|
Voice in Italian not centered in the button stores the credentials - In version 16.0.4 localized in Italian, when you go to the setting menu -> basic setting -> mail server the voice saves credentials does not come out completely written in the button area can you fix it? I attach screenshot

|
non_process
|
voice in italian not centered in the button stores the credentials in version localized in italian when you go to the setting menu basic setting mail server the voice saves credentials does not come out completely written in the button area can you fix it i attach screenshot
| 0
|
47,303
| 13,056,109,437
|
IssuesEvent
|
2020-07-30 03:41:09
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
I3DSTExtractor11 does not have a DAQ method (Trac #308)
|
Migrated from Trac combo reconstruction defect
|
The 'I3DSTExtractor11' produces 'segmentation fault' when used with Q-frames in icerec_V04-00-01. Complains about libdst-extractor.dylib.
Looking at the code, It doesn't seem to have a DAQ method.
dst-extractor is at releases/V00-03-03. Only change from icerecv4 meta-projects is jeb-filter-2011 which is at the latest branch.
Im using Mac OS X version 10.6.8, xcode version: 3.2.6
Migrated from https://code.icecube.wisc.edu/ticket/308
```json
{
"status": "closed",
"changetime": "2012-07-12T16:03:29",
"description": "The 'I3DSTExtractor11' produces 'segmentation fault' when used with Q-frames in icerec_V04-00-01. Complains about libdst-extractor.dylib. \nLooking at the code, It doesn't seem to have a DAQ method.\n\ndst-extractor is at releases/V00-03-03. Only change from icerecv4 meta-projects is jeb-filter-2011 which is at the latest branch.\nIm using Mac OS X version 10.6.8, xcode version: 3.2.6",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1342109009000000",
"component": "combo reconstruction",
"summary": "I3DSTExtractor11 does not have a DAQ method",
"priority": "normal",
"keywords": "dst, superdst, DAQ, Q",
"time": "2011-09-13T13:07:06",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
|
1.0
|
I3DSTExtractor11 does not have a DAQ method (Trac #308) - The 'I3DSTExtractor11' produces 'segmentation fault' when used with Q-frames in icerec_V04-00-01. Complains about libdst-extractor.dylib.
Looking at the code, It doesn't seem to have a DAQ method.
dst-extractor is at releases/V00-03-03. Only change from icerecv4 meta-projects is jeb-filter-2011 which is at the latest branch.
Im using Mac OS X version 10.6.8, xcode version: 3.2.6
Migrated from https://code.icecube.wisc.edu/ticket/308
```json
{
"status": "closed",
"changetime": "2012-07-12T16:03:29",
"description": "The 'I3DSTExtractor11' produces 'segmentation fault' when used with Q-frames in icerec_V04-00-01. Complains about libdst-extractor.dylib. \nLooking at the code, It doesn't seem to have a DAQ method.\n\ndst-extractor is at releases/V00-03-03. Only change from icerecv4 meta-projects is jeb-filter-2011 which is at the latest branch.\nIm using Mac OS X version 10.6.8, xcode version: 3.2.6",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1342109009000000",
"component": "combo reconstruction",
"summary": "I3DSTExtractor11 does not have a DAQ method",
"priority": "normal",
"keywords": "dst, superdst, DAQ, Q",
"time": "2011-09-13T13:07:06",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
|
non_process
|
does not have a daq method trac the produces segmentation fault when used with q frames in icerec complains about libdst extractor dylib looking at the code it doesn t seem to have a daq method dst extractor is at releases only change from meta projects is jeb filter which is at the latest branch im using mac os x version xcode version migrated from json status closed changetime description the produces segmentation fault when used with q frames in icerec complains about libdst extractor dylib nlooking at the code it doesn t seem to have a daq method n ndst extractor is at releases only change from meta projects is jeb filter which is at the latest branch nim using mac os x version xcode version reporter icecube cc resolution fixed ts component combo reconstruction summary does not have a daq method priority normal keywords dst superdst daq q time milestone owner juancarlos type defect
| 0
|
20,754
| 27,488,873,762
|
IssuesEvent
|
2023-03-04 11:18:32
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Cannot exec or spawn node, npm or yarn [node snap]
|
child_process linux
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: `v14.16.0`
* **Platform**: `Linux fabio-XPS-15-7590 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux`
* **Subsystem**: `child_process`
* **Distributed as**: [snap](https://snapcraft.io/node)
### What steps will reproduce the bug?
Install node snap: `sudo snap install node --classic`
*Optional*: fix permission issues https://npm.github.io/installation-setup-docs/installing/a-note-on-permissions.html
Then run the following:
```js
const { exec } = require('child_process');
// Test
exec('echo "exec test is running"', (error, stdout, stderr) => {
if (error) {
console.error(`exec test error: ${error}`);
return;
}
console.log(`exec test stdout: ${stdout}`);
console.error(`exec test stderr: ${stderr}`);
});
// Node
exec('node -v', (error, stdout, stderr) => {
if (error) {
console.error(`exec node error: ${error}`);
return;
}
console.log(`exec node stdout: ${stdout}`);
console.error(`exec node stderr: ${stderr}`);
});
// NPM
exec('npm --version', (error, stdout, stderr) => {
if (error) {
console.error(`exec npm error: ${error}`);
return;
}
console.log(`exec npm stdout: ${stdout}`);
console.error(`exec npm stderr: ${stderr}`);
});
// Yarn
exec('yarn --version', (error, stdout, stderr) => {
if (error) {
console.error(`exec yarn error: ${error}`);
return;
}
console.log(`exec yarn stdout: ${stdout}`);
console.error(`exec yarn stderr: ${stderr}`);
});
// -------------------------------------------------------
const { spawn } = require('child_process');
// Test
const testSpawn = spawn('echo', ['"test spawn is running"']);
testSpawn.stdout.on('data', (data) => {
console.log(`spawn test stdout: ${data}`);
});
testSpawn.stderr.on('data', (data) => {
console.error(`spawn test stderr: ${data}`);
});
testSpawn.on('close', (code) => {
console.log(`spawn test child process exited with code ${code}`);
});
// Node
const nodeSpawn = spawn('node', ['-v']);
nodeSpawn.stdout.on('data', (data) => {
console.log(`spawn node stdout: ${data}`);
});
nodeSpawn.stderr.on('data', (data) => {
console.error(`spawn node stderr: ${data}`);
});
nodeSpawn.on('close', (code) => {
console.log(`spawn node child process exited with code ${code}`);
});
// NPM
const npmSpawn = spawn('npm', ['--version']);
npmSpawn.stdout.on('data', (data) => {
console.log(`spawn npm stdout: ${data}`);
});
npmSpawn.stderr.on('data', (data) => {
console.error(`spawn npm stderr: ${data}`);
});
npmSpawn.on('close', (code) => {
console.log(`spawn npm child process exited with code ${code}`);
});
// Yarn
const yarnSpawn = spawn('yarn', ['--version']);
yarnSpawn.stdout.on('data', (data) => {
console.log(`spawn yarn stdout: ${data}`);
});
yarnSpawn.stderr.on('data', (data) => {
console.error(`spawn yarn stderr: ${data}`);
});
yarnSpawn.on('close', (code) => {
console.log(`spawn yarn child process exited with code ${code}`);
});
```
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
### How often does it reproduce? Is there a required condition?
It always happens to me. I think a requirement to be using node from the snap.
### What is the expected behaviour?
<!--
If possible please provide textual output instead of screenshots.
-->
```
$ node index.js
spawn test stdout: "test spawn is running"
spawn test child process exited with code 0
exec test stdout: exec test is running
exec test stderr:
exec node stdout: v14.16.0
exec node stderr:
spawn node stdout: v14.16.0
spawn node child process exited with code 0
spawn yarn stdout: 1.22.5
spawn yarn child process exited with code 0
exec yarn stdout: 1.22.5
exec yarn stderr:
exec npm stdout: 7.7.5
exec npm stderr:
spawn npm stdout: 7.7.5
spawn npm child process exited with code 0
```
### What do you see instead?
<!--
If possible please provide textual output instead of screenshots.
-->
```
$ node index.js
spawn test stdout: "test spawn is running"
spawn test child process exited with code 0
exec test stdout: exec test is running
exec test stderr:
exec node stdout:
exec node stderr:
spawn node child process exited with code 0
spawn yarn child process exited with code 1
exec yarn error: Error: Command failed: yarn --version
exec npm stdout:
exec npm stderr:
spawn npm child process exited with code 0
```
The output of node and npm commands is empty, while yarn fails.
However, running each of those commands from the terminal works correctly:
```
$ node -v
v14.16.0
$ npm --version
7.7.5
$ yarn --version
1.22.5
```
### Additional information
<!--
Tell us anything else you think we should know.
-->
Running a command like `which npm` returns the correct binary path both from the terminal and from the script. I also attempted to write the full path in the script, instead of the command, (e.g. `/home/fabio/.npm-global/bin/npm`) but nothing changes.
The node binary is provided by the node snap. However, I could not find clear issues in the logs. Also npm and yarn are provided in the snap, but I installed them globally under `/home/fabio/.npm-global`.
✅ I'm willing to help with debugging and getting this solved. 👨💻
|
1.0
|
Cannot exec or spawn node, npm or yarn [node snap] - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: `v14.16.0`
* **Platform**: `Linux fabio-XPS-15-7590 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux`
* **Subsystem**: `child_process`
* **Distributed as**: [snap](https://snapcraft.io/node)
### What steps will reproduce the bug?
Install node snap: `sudo snap install node --classic`
*Optional*: fix permission issues https://npm.github.io/installation-setup-docs/installing/a-note-on-permissions.html
Then run the following:
```js
const { exec } = require('child_process');
// Test
exec('echo "exec test is running"', (error, stdout, stderr) => {
if (error) {
console.error(`exec test error: ${error}`);
return;
}
console.log(`exec test stdout: ${stdout}`);
console.error(`exec test stderr: ${stderr}`);
});
// Node
exec('node -v', (error, stdout, stderr) => {
if (error) {
console.error(`exec node error: ${error}`);
return;
}
console.log(`exec node stdout: ${stdout}`);
console.error(`exec node stderr: ${stderr}`);
});
// NPM
exec('npm --version', (error, stdout, stderr) => {
if (error) {
console.error(`exec npm error: ${error}`);
return;
}
console.log(`exec npm stdout: ${stdout}`);
console.error(`exec npm stderr: ${stderr}`);
});
// Yarn
exec('yarn --version', (error, stdout, stderr) => {
if (error) {
console.error(`exec yarn error: ${error}`);
return;
}
console.log(`exec yarn stdout: ${stdout}`);
console.error(`exec yarn stderr: ${stderr}`);
});
// -------------------------------------------------------
const { spawn } = require('child_process');
// Test
const testSpawn = spawn('echo', ['"test spawn is running"']);
testSpawn.stdout.on('data', (data) => {
console.log(`spawn test stdout: ${data}`);
});
testSpawn.stderr.on('data', (data) => {
console.error(`spawn test stderr: ${data}`);
});
testSpawn.on('close', (code) => {
console.log(`spawn test child process exited with code ${code}`);
});
// Node
const nodeSpawn = spawn('node', ['-v']);
nodeSpawn.stdout.on('data', (data) => {
console.log(`spawn node stdout: ${data}`);
});
nodeSpawn.stderr.on('data', (data) => {
console.error(`spawn node stderr: ${data}`);
});
nodeSpawn.on('close', (code) => {
console.log(`spawn node child process exited with code ${code}`);
});
// NPM
const npmSpawn = spawn('npm', ['--version']);
npmSpawn.stdout.on('data', (data) => {
console.log(`spawn npm stdout: ${data}`);
});
npmSpawn.stderr.on('data', (data) => {
console.error(`spawn npm stderr: ${data}`);
});
npmSpawn.on('close', (code) => {
console.log(`spawn npm child process exited with code ${code}`);
});
// Yarn
const yarnSpawn = spawn('yarn', ['--version']);
yarnSpawn.stdout.on('data', (data) => {
console.log(`spawn yarn stdout: ${data}`);
});
yarnSpawn.stderr.on('data', (data) => {
console.error(`spawn yarn stderr: ${data}`);
});
yarnSpawn.on('close', (code) => {
console.log(`spawn yarn child process exited with code ${code}`);
});
```
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
### How often does it reproduce? Is there a required condition?
It always happens to me. I think a requirement to be using node from the snap.
### What is the expected behaviour?
<!--
If possible please provide textual output instead of screenshots.
-->
```
$ node index.js
spawn test stdout: "test spawn is running"
spawn test child process exited with code 0
exec test stdout: exec test is running
exec test stderr:
exec node stdout: v14.16.0
exec node stderr:
spawn node stdout: v14.16.0
spawn node child process exited with code 0
spawn yarn stdout: 1.22.5
spawn yarn child process exited with code 0
exec yarn stdout: 1.22.5
exec yarn stderr:
exec npm stdout: 7.7.5
exec npm stderr:
spawn npm stdout: 7.7.5
spawn npm child process exited with code 0
```
### What do you see instead?
<!--
If possible please provide textual output instead of screenshots.
-->
```
$ node index.js
spawn test stdout: "test spawn is running"
spawn test child process exited with code 0
exec test stdout: exec test is running
exec test stderr:
exec node stdout:
exec node stderr:
spawn node child process exited with code 0
spawn yarn child process exited with code 1
exec yarn error: Error: Command failed: yarn --version
exec npm stdout:
exec npm stderr:
spawn npm child process exited with code 0
```
The output of node and npm commands is empty, while yarn fails.
However, running each of those commands from the terminal works correctly:
```
$ node -v
v14.16.0
$ npm --version
7.7.5
$ yarn --version
1.22.5
```
### Additional information
<!--
Tell us anything else you think we should know.
-->
Running a command like `which npm` returns the correct binary path both from the terminal and from the script. I also attempted to write the full path in the script, instead of the command, (e.g. `/home/fabio/.npm-global/bin/npm`) but nothing changes.
The node binary is provided by the node snap. However, I could not find clear issues in the logs. Also npm and yarn are provided in the snap, but I installed them globally under `/home/fabio/.npm-global`.
✅ I'm willing to help with debugging and getting this solved. 👨💻
|
process
|
cannot exec or spawn node npm or yarn thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or output of osversion foreach object versionstring if else in powershell console windows subsystem if known please specify affected core module name version platform linux fabio xps generic ubuntu smp fri mar utc gnu linux subsystem child process distributed as what steps will reproduce the bug install node snap sudo snap install node classic optional fix permission issues then run the following js const exec require child process test exec echo exec test is running error stdout stderr if error console error exec test error error return console log exec test stdout stdout console error exec test stderr stderr node exec node v error stdout stderr if error console error exec node error error return console log exec node stdout stdout console error exec node stderr stderr npm exec npm version error stdout stderr if error console error exec npm error error return console log exec npm stdout stdout console error exec npm stderr stderr yarn exec yarn version error stdout stderr if error console error exec yarn error error return console log exec yarn stdout stdout console error exec yarn stderr stderr const spawn require child process test const testspawn spawn echo testspawn stdout on data data console log spawn test stdout data testspawn stderr on data data console error spawn test stderr data testspawn on close code console log spawn test child process exited with code code node const nodespawn spawn node nodespawn stdout on data data console log spawn node stdout data nodespawn stderr on data data console error spawn node stderr data nodespawn on close code console log spawn node child process exited with code code npm const npmspawn spawn 
npm npmspawn stdout on data data console log spawn npm stdout data npmspawn stderr on data data console error spawn npm stderr data npmspawn on close code console log spawn npm child process exited with code code yarn const yarnspawn spawn yarn yarnspawn stdout on data data console log spawn yarn stdout data yarnspawn stderr on data data console error spawn yarn stderr data yarnspawn on close code console log spawn yarn child process exited with code code enter details about your bug preferably a simple code snippet that can be run using node directly without installing third party dependencies how often does it reproduce is there a required condition it always happens to me i think a requirement to be using node from the snap what is the expected behaviour if possible please provide textual output instead of screenshots node index js spawn test stdout test spawn is running spawn test child process exited with code exec test stdout exec test is running exec test stderr exec node stdout exec node stderr spawn node stdout spawn node child process exited with code spawn yarn stdout spawn yarn child process exited with code exec yarn stdout exec yarn stderr exec npm stdout exec npm stderr spawn npm stdout spawn npm child process exited with code what do you see instead if possible please provide textual output instead of screenshots node index js spawn test stdout test spawn is running spawn test child process exited with code exec test stdout exec test is running exec test stderr exec node stdout exec node stderr spawn node child process exited with code spawn yarn child process exited with code exec yarn error error command failed yarn version exec npm stdout exec npm stderr spawn npm child process exited with code the output of node and npm commands is empty while yarn fails however running each of those commands from the terminal works correctly node v npm version yarn version additional information tell us anything else you think we should know running a command 
like which npm returns the correct binary path both from the terminal and from the script i also attempted to write the full path in the script instead of the command e g home fabio npm global bin npm but nothing changes the node binary is provided by the node snap however i could not find clear issues in the logs also npm and yarn are provided in the snap but i installed them globally under home fabio npm global ✅ i m willing to help with debugging and getting this solved 👨💻
| 1
|
326,946
| 24,109,426,323
|
IssuesEvent
|
2022-09-20 10:05:26
|
100mslive/react-native-hms
|
https://api.github.com/repos/100mslive/react-native-hms
|
closed
|
Add docs explaining usage of isTerminal HMSException
|
documentation
|
- Add explaination of isTerminal property
- Add suggestion on app UI behaviour on receiving terminal errors
- Explain difference between Terminal & Non-Terminal errors with examples
- Screenshots/snippet of few HMSExceptions
- Update new error codes
|
1.0
|
Add docs explaining usage of isTerminal HMSException - - Add explaination of isTerminal property
- Add suggestion on app UI behaviour on receiving terminal errors
- Explain difference between Terminal & Non-Terminal errors with examples
- Screenshots/snippet of few HMSExceptions
- Update new error codes
|
non_process
|
add docs explaining usage of isterminal hmsexception add explaination of isterminal property add suggestion on app ui behaviour on receiving terminal errors explain difference between terminal non terminal errors with examples screenshots snippet of few hmsexceptions update new error codes
| 0
|
432,804
| 30,295,528,878
|
IssuesEvent
|
2023-07-09 20:10:52
|
Dekamik/DvBCrud
|
https://api.github.com/repos/Dekamik/DvBCrud
|
closed
|
Clean up past releases
|
documentation
|
DoD:
- [x] All past tags and releases are deleted on Github
- [x] All past packages has been deprecated on NuGet.org
|
1.0
|
Clean up past releases - DoD:
- [x] All past tags and releases are deleted on Github
- [x] All past packages has been deprecated on NuGet.org
|
non_process
|
clean up past releases dod all past tags and releases are deleted on github all past packages has been deprecated on nuget org
| 0
|
2,117
| 4,955,594,244
|
IssuesEvent
|
2016-12-01 20:52:44
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [FR] Intervention au 20 h de TF1 du 01/12/2016
|
Language: French Process: [0] Awaiting subtitles
|
# Video title
MÉLENCHON - NOTRE PROGRAMME : L'AVENIR EN COMMUN
# URL
https://www.youtube.com/watch?v=96uH7RFbFTo
# Youtube subtitles language
Français
# Duration
7:47
# URL subtitles
?
|
1.0
|
[subtitles] [FR] Intervention au 20 h de TF1 du 01/12/2016 - # Video title
MÉLENCHON - NOTRE PROGRAMME : L'AVENIR EN COMMUN
# URL
https://www.youtube.com/watch?v=96uH7RFbFTo
# Youtube subtitles language
Français
# Duration
7:47
# URL subtitles
?
|
process
|
intervention au h de du video title mélenchon notre programme l avenir en commun url youtube subtitles language français duration url subtitles
| 1
|
4,186
| 7,127,116,483
|
IssuesEvent
|
2018-01-20 18:06:05
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
NullPointerException in DITA OT 3.x
|
P1 bug preprocess2
|
One of our clients seems to have found a NPE in DITA OT 3.x:
BUILD FAILED
/Users/......../tools/dita-ot-3.0/build.xml:45: The following error occurred while executing this line:
/Users/.........../dita-ot-3.0/plugins/org.dita.base/build_preprocess2.xml:176: java.lang.NullPointerException
at org.dita.dost.module.reader.TopicReaderModule.getStartDocuments(TopicReaderModule.java:169)
at org.dita.dost.module.reader.TopicReaderModule.readStartFile(TopicReaderModule.java:133)
at org.dita.dost.module.reader.TopicReaderModule.execute(TopicReaderModule.java:70)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:163)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.Project.executeTarget(Project.java:1376)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.Main.runBuild(Main.java:857)
at org.apache.tools.ant.Main.startAnt(Main.java:236)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:287)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:113)
|
1.0
|
NullPointerException in DITA OT 3.x - One of our clients seems to have found a NPE in DITA OT 3.x:
BUILD FAILED
/Users/......../tools/dita-ot-3.0/build.xml:45: The following error occurred while executing this line:
/Users/.........../dita-ot-3.0/plugins/org.dita.base/build_preprocess2.xml:176: java.lang.NullPointerException
at org.dita.dost.module.reader.TopicReaderModule.getStartDocuments(TopicReaderModule.java:169)
at org.dita.dost.module.reader.TopicReaderModule.readStartFile(TopicReaderModule.java:133)
at org.dita.dost.module.reader.TopicReaderModule.execute(TopicReaderModule.java:70)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:163)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.Project.executeTarget(Project.java:1376)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.Main.runBuild(Main.java:857)
at org.apache.tools.ant.Main.startAnt(Main.java:236)
at org.apache.tools.ant.launch.Launcher.run(Launcher.java:287)
at org.apache.tools.ant.launch.Launcher.main(Launcher.java:113)
|
process
|
nullpointerexception in dita ot x one of our clients seems to have found a npe in dita ot x build failed users tools dita ot build xml the following error occurred while executing this line users dita ot plugins org dita base build xml java lang nullpointerexception at org dita dost module reader topicreadermodule getstartdocuments topicreadermodule java at org dita dost module reader topicreadermodule readstartfile topicreadermodule java at org dita dost module reader topicreadermodule execute topicreadermodule java at org dita dost ant extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant helper singlecheckexecutor executetargets singlecheckexecutor java at org apache tools ant project executetargets project java at org apache tools ant taskdefs ant execute ant java at org apache tools ant taskdefs calltarget execute calltarget java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org apache tools ant dispatch dispatchutils execute dispatchutils java at org apache tools ant task perform task java at org apache tools ant target execute target java at org apache tools ant target performtasks target java at org apache tools ant project executesortedtargets project java at org apache tools ant project executetarget project java at org 
apache tools ant helper defaultexecutor executetargets defaultexecutor java at org apache tools ant project executetargets project java at org apache tools ant main runbuild main java at org apache tools ant main startant main java at org apache tools ant launch launcher run launcher java at org apache tools ant launch launcher main launcher java
| 1
|
21,896
| 30,345,108,009
|
IssuesEvent
|
2023-07-11 14:57:34
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
[MLv2] Parameters
|
.Epic .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
```[tasklist]
# Tasks
- [ ] Opaque parameters and helpers
- [ ] Migrate `FieldValuesWidget`
- [ ] Migrate `ParametersList` component
- [ ] Migrate native query template tag editor
- [ ] Migrate dashboard parameters
- [ ] Migrate public question, dashboard, etc.
- [ ] Migrate actions
- [ ] Migrate dashboard subscriptions
```
|
1.0
|
[MLv2] Parameters - ```[tasklist]
# Tasks
- [ ] Opaque parameters and helpers
- [ ] Migrate `FieldValuesWidget`
- [ ] Migrate `ParametersList` component
- [ ] Migrate native query template tag editor
- [ ] Migrate dashboard parameters
- [ ] Migrate public question, dashboard, etc.
- [ ] Migrate actions
- [ ] Migrate dashboard subscriptions
```
|
process
|
parameters tasks opaque parameters and helpers migrate fieldvalueswidget migrate parameterslist component migrate native query template tag editor migrate dashboard parameters migrate public question dashboard etc migrate actions migrate dashboard subscriptions
| 1
|