Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18,009
| 24,025,354,905
|
IssuesEvent
|
2022-09-15 11:02:19
|
COS301-SE-2022/Pure-LoRa-Tracking
|
https://api.github.com/repos/COS301-SE-2022/Pure-LoRa-Tracking
|
closed
|
(processing): message queue CRON service
|
(system) Server (bus) processing
|
Check message queue for data and store in db
Consider the case of what time the data came in.
It may be required to be matched with previous data to complete a row in the database
|
1.0
|
(processing): message queue CRON service - Check message queue for data and store in db
Consider the case of what time the data came in.
It may be required to be matched with previous data to complete a row in the database
|
process
|
processing message queue cron service check message queue for data and store in db consider the case of what time the data came in it may be required to be matched with previous data to complete a row in the database
| 1
|
17,456
| 23,277,703,575
|
IssuesEvent
|
2022-08-05 08:53:23
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Missing breaking change upgrade to 4.0 client - null values in seed files
|
bug/0-unknown kind/bug process/candidate tech/typescript team/client topic: seeding 4.0.0
|
### Bug description
In the client 3.x.x i was able to generate seed files with mockaroo that contained null values.
e..g.
```
[
{ "title": "Melursus ursinus", "parent_id": null, "parent_index": 1 },
{ "title": "Choriotis kori", "parent_id": 1, "parent_index": 2 },
{ "title": "Oryx gazella", "parent_id": 2, "parent_index": 3 },
{ "title": "Merops bullockoides", "parent_id": 3, "parent_index": 4 }
]
```
Now I am getting a error `Argument parent_id must not be null. Please use undefined instead.`
This is not present in the release notes and breaks 50+ seeders!!!!! And thus 7 cicd pipelines.
Reporting as a bug as its is not in the change log.
### How to reproduce
1. create a seed.json file
2. copy the json example as above
3. read the file with const data =require('seed.json')
4. load with await prisma.<model>.createMany({data})
### Expected behavior
Should work without a problem
### Prisma information
```
model Post {
id Int @id @default(autoincrement())
title String @db.VarChar(255)
parent_id Int?
parent_index Int
}
```
### Environment & setup
- OS: Mac OS
- Database: PostgreSQL
- Node.js version: 18.4.0
### Prisma Version
```
4.1.1
```
|
1.0
|
Missing breaking change upgrade to 4.0 client - null values in seed files - ### Bug description
In the client 3.x.x i was able to generate seed files with mockaroo that contained null values.
e..g.
```
[
{ "title": "Melursus ursinus", "parent_id": null, "parent_index": 1 },
{ "title": "Choriotis kori", "parent_id": 1, "parent_index": 2 },
{ "title": "Oryx gazella", "parent_id": 2, "parent_index": 3 },
{ "title": "Merops bullockoides", "parent_id": 3, "parent_index": 4 }
]
```
Now I am getting a error `Argument parent_id must not be null. Please use undefined instead.`
This is not present in the release notes and breaks 50+ seeders!!!!! And thus 7 cicd pipelines.
Reporting as a bug as its is not in the change log.
### How to reproduce
1. create a seed.json file
2. copy the json example as above
3. read the file with const data =require('seed.json')
4. load with await prisma.<model>.createMany({data})
### Expected behavior
Should work without a problem
### Prisma information
```
model Post {
id Int @id @default(autoincrement())
title String @db.VarChar(255)
parent_id Int?
parent_index Int
}
```
### Environment & setup
- OS: Mac OS
- Database: PostgreSQL
- Node.js version: 18.4.0
### Prisma Version
```
4.1.1
```
|
process
|
missing breaking change upgrade to client null values in seed files bug description in the client x x i was able to generate seed files with mockaroo that contained null values e g title melursus ursinus parent id null parent index title choriotis kori parent id parent index title oryx gazella parent id parent index title merops bullockoides parent id parent index now i am getting a error argument parent id must not be null please use undefined instead this is not present in the release notes and breaks seeders and thus cicd pipelines reporting as a bug as its is not in the change log how to reproduce create a seed json file copy the json example as above read the file with const data require seed json load with await prisma createmany data expected behavior should work without a problem prisma information model post id int id default autoincrement title string db varchar parent id int parent index int environment setup os mac os database postgresql node js version prisma version
| 1
|
18,226
| 24,290,566,619
|
IssuesEvent
|
2022-09-29 05:23:16
|
altillimity/SatDump
|
https://api.github.com/repos/altillimity/SatDump
|
closed
|
Raspberry Pi 4 not seeing GOES 16/17
|
bug Processing
|
Howdy
For the last month, I have been trying to set up a 24/7 Satdump setup on a Raspberry Pi 4 model B but it never receives the GOES 16 and 17 signals. I have a RTL-SDR V3, a NooElec GOES SAWbird, and NooElec Goes dish. I have downloaded the new beta and is using:
"./satdump live goes_hrit /media/regs1/REGS11/GOES_output --source rtlsdr --samplerate 1792000 --frequency 1694.1e6 --general_gain 40 --bias"
as my command on my Pi. The reason I know it doesnt work is that I set up the dish with satdump on my windows computer. Then, without moving the dish, I plug in the SDR into my Pi. After running the command, I get 0 SNR. I did double check, the SAWbird does receive power from the Bias tee. I tried many different things to fix it but nothing working, including installing it many times.
|
1.0
|
Raspberry Pi 4 not seeing GOES 16/17 - Howdy
For the last month, I have been trying to set up a 24/7 Satdump setup on a Raspberry Pi 4 model B but it never receives the GOES 16 and 17 signals. I have a RTL-SDR V3, a NooElec GOES SAWbird, and NooElec Goes dish. I have downloaded the new beta and is using:
"./satdump live goes_hrit /media/regs1/REGS11/GOES_output --source rtlsdr --samplerate 1792000 --frequency 1694.1e6 --general_gain 40 --bias"
as my command on my Pi. The reason I know it doesnt work is that I set up the dish with satdump on my windows computer. Then, without moving the dish, I plug in the SDR into my Pi. After running the command, I get 0 SNR. I did double check, the SAWbird does receive power from the Bias tee. I tried many different things to fix it but nothing working, including installing it many times.
|
process
|
raspberry pi not seeing goes howdy for the last month i have been trying to set up a satdump setup on a raspberry pi model b but it never receives the goes and signals i have a rtl sdr a nooelec goes sawbird and nooelec goes dish i have downloaded the new beta and is using satdump live goes hrit media goes output source rtlsdr samplerate frequency general gain bias as my command on my pi the reason i know it doesnt work is that i set up the dish with satdump on my windows computer then without moving the dish i plug in the sdr into my pi after running the command i get snr i did double check the sawbird does receive power from the bias tee i tried many different things to fix it but nothing working including installing it many times
| 1
|
818,673
| 30,699,345,896
|
IssuesEvent
|
2023-07-26 21:27:10
|
Rexo99/MoneyMate
|
https://api.github.com/repos/Rexo99/MoneyMate
|
closed
|
Ausgaben Foto Attribut (Geschäftslogik)
|
Almost finished low priority
|
* Datenmodell anpassen
* Informieren wie und wo das Foto abgespeichert wird
|
1.0
|
Ausgaben Foto Attribut (Geschäftslogik) - * Datenmodell anpassen
* Informieren wie und wo das Foto abgespeichert wird
|
non_process
|
ausgaben foto attribut geschäftslogik datenmodell anpassen informieren wie und wo das foto abgespeichert wird
| 0
|
10,935
| 13,750,327,954
|
IssuesEvent
|
2020-10-06 11:53:18
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
Colored working_visualisations on the bio processor
|
Angels Bio Processing Impact: Enhancement
|
**Describe the bug**
Similar to how the ingots are colored on the induction furnace, angel wants to have colors working on the bio processor.
**Additional context**
The graphic files are already present inside the mod, however this was not possible at that time: https://forums.factorio.com/viewtopic.php?f=65&t=52633&sid=131b8bf8ddcb2e4e140ebe02faf80062
|
1.0
|
Colored working_visualisations on the bio processor - **Describe the bug**
Similar to how the ingots are colored on the induction furnace, angel wants to have colors working on the bio processor.
**Additional context**
The graphic files are already present inside the mod, however this was not possible at that time: https://forums.factorio.com/viewtopic.php?f=65&t=52633&sid=131b8bf8ddcb2e4e140ebe02faf80062
|
process
|
colored working visualisations on the bio processor describe the bug similar to how the ingots are colored on the induction furnace angel wants to have colors working on the bio processor additional context the graphic files are already present inside the mod however this was not possible at that time
| 1
|
21,473
| 29,506,628,547
|
IssuesEvent
|
2023-06-03 11:52:55
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
closed
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 974fb0704b555906be19c3a892fbbabf34f112d1
Last updated: Fri Jun 2 04:51 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5154293208)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: macos] [2/4 android_device: emulator_ftl_latest android_target]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 974fb0704b555906be19c3a892fbbabf34f112d1
Last updated: Fri Jun 2 07:55 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5155852243)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 70989bc3fb8cdf0476a97f6d1a60109cd9464b7d
Last updated: Sat Jun 3 04:50 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5163309312)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### [build against repo] Integration test with FLAKINESS (succeeded after retry)
Requested by @sunmou99 on commit 974fb0704b555906be19c3a892fbbabf34f112d1
Last updated: Fri Jun 2 04:51 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5154293208)**
| Failures | Configs |
|----------|---------|
| firestore | [TEST] [FLAKINESS] [Android] [1/3 os: macos] [2/4 android_device: emulator_ftl_latest android_target]<details><summary>(1 failed tests)</summary> CRASH/TIMEOUT</details> |
Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 974fb0704b555906be19c3a892fbbabf34f112d1
Last updated: Fri Jun 2 07:55 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5155852243)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit 70989bc3fb8cdf0476a97f6d1a60109cd9464b7d
Last updated: Sat Jun 3 04:50 PDT 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5163309312)**
|
process
|
nightly integration testing report for firestore integration test with flakiness succeeded after retry requested by on commit last updated fri jun pdt failures configs firestore failed tests nbsp nbsp crash timeout add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated fri jun pdt ✅ nbsp integration test succeeded requested by on commit last updated sat jun pdt
| 1
|
6,018
| 8,822,871,941
|
IssuesEvent
|
2019-01-02 11:12:30
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
select all in search shows and selects deleted items
|
2.0.6 Process bug
|
in search
enter multiple selection mode
select everything
it selects even the deleted items that have the same name

|
1.0
|
select all in search shows and selects deleted items - in search
enter multiple selection mode
select everything
it selects even the deleted items that have the same name

|
process
|
select all in search shows and selects deleted items in search enter multiple selection mode select everything it selects even the deleted items that have the same name
| 1
|
6,252
| 9,213,798,611
|
IssuesEvent
|
2019-03-10 15:00:54
|
chuminh712/BookStorage---Group-2
|
https://api.github.com/repos/chuminh712/BookStorage---Group-2
|
closed
|
Architecture Design
|
In Process
|
Design sequence diagram for Use Case Manage Book
Design sequence diagram for Use Case Manage Book Category
|
1.0
|
Architecture Design - Design sequence diagram for Use Case Manage Book
Design sequence diagram for Use Case Manage Book Category
|
process
|
architecture design design sequence diagram for use case manage book design sequence diagram for use case manage book category
| 1
|
19,657
| 26,017,047,843
|
IssuesEvent
|
2022-12-21 09:23:27
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Multi-schema support for `db pull` for MySQL
|
kind/feature process/candidate topic: introspection topic: re-introspection tech/engines tech/engines/introspection engine topic: mysql team/schema topic: prisma db pull topic: multiSchema
|
Within the `multiSchema` preview feature, `db pull` should by default introspect all schemas in the database.
|
1.0
|
Multi-schema support for `db pull` for MySQL - Within the `multiSchema` preview feature, `db pull` should by default introspect all schemas in the database.
|
process
|
multi schema support for db pull for mysql within the multischema preview feature db pull should by default introspect all schemas in the database
| 1
|
15,088
| 18,941,873,325
|
IssuesEvent
|
2021-11-18 04:34:33
|
ClickHouse/ClickHouse
|
https://api.github.com/repos/ClickHouse/ClickHouse
|
closed
|
Why a non nullable field can be with null value?
|
question sql-compatibility question-answered
|
> Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
> If you still prefer GitHub issues, remove all this text and ask your question here.
The table structure is as below:
CREATE TABLE stdfdb.tb_file
(
`id` UUID,
`created` DateTime DEFAULT now(),
`updated` Nullable(DateTime),
`remark` Nullable(FixedString(255)),
`filename` FixedString(255),
`type` FixedString(20)
)
ENGINE = MergeTree()
PARTITION BY id
ORDER BY id
SETTINGS index_granularity = 8192
the field type is non nullable, can add new data without type value successfully with dbeaver IDE
id |created |updated |remark |filename |type |
+--------------------------------------------+----------------------------+----------+--------+-----------------
00000000-0000-0000-0000-000000000000|2021-10-14 11:26:57.000| | |test2.stdf | |
00000000-0000-0000-0000-000000000000|2021-10-14 11:26:31.000| | |test1.stdf | |
|
True
|
Why a non nullable field can be with null value? - > Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
> If you still prefer GitHub issues, remove all this text and ask your question here.
The table structure is as below:
CREATE TABLE stdfdb.tb_file
(
`id` UUID,
`created` DateTime DEFAULT now(),
`updated` Nullable(DateTime),
`remark` Nullable(FixedString(255)),
`filename` FixedString(255),
`type` FixedString(20)
)
ENGINE = MergeTree()
PARTITION BY id
ORDER BY id
SETTINGS index_granularity = 8192
the field type is non nullable, can add new data without type value successfully with dbeaver IDE
id |created |updated |remark |filename |type |
+--------------------------------------------+----------------------------+----------+--------+-----------------
00000000-0000-0000-0000-000000000000|2021-10-14 11:26:57.000| | |test2.stdf | |
00000000-0000-0000-0000-000000000000|2021-10-14 11:26:31.000| | |test1.stdf | |
|
non_process
|
why a non nullable field can be with null value make sure to check documentation first if the question is concise and probably has a short answer asking it in telegram chat is probably the fastest way to find the answer for more complicated questions consider asking them on stackoverflow with clickhouse tag if you still prefer github issues remove all this text and ask your question here the table structure is as below create table stdfdb tb file id uuid created datetime default now updated nullable datetime remark nullable fixedstring filename fixedstring type fixedstring engine mergetree partition by id order by id settings index granularity the field type is non nullable can add new data without type value successfully with dbeaver ide id created updated remark filename type stdf stdf
| 0
|
61,034
| 8,483,891,058
|
IssuesEvent
|
2018-10-25 23:29:01
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Out of date comments in System.Numerics.Vectors
|
area-System.Numerics documentation up-for-grabs
|
There is a comment at the beginning of Vectors.cs [https://github.com/dotnet/corefx/blob/master/src/System.Numerics.Vectors/src/System/Numerics/Vector.cs#L22-L32](url)
> PATTERN:
* if (Vector.IsHardwareAccelerated) { ... }
* else { ... }
* EXPLANATION
* This pattern solves two problems:
* 1. Allows us to unroll loops when we know the size (when no hardware acceleration is present)
* 2. Allows reflection to work:
* - If a method is called via reflection, it will not be "intrinsified", which would cause issues if we did
* not provide an implementation for that case (i.e. if it only included a case which assumed 16-byte registers)
* (NOTE: It is assumed that Vector.IsHardwareAccelerated will be a compile-time constant, eliminating these checks
* from the JIT'd code.)
This means that the Vector.IsHardwareAccelerated true path of each SIMD intrinsic fallback would execute in reflection calls.
However, according to my experiment, CoreCLR’s behavior is different from the comment. The Vector.IsHardwareAccelerated true path cannot execute, even though it called by reflection. Usually, reflection calls of SIMD intrinsics also go to jited intrinsic.
I detected that the true path only executes when the current base type is not supported by current SIMD intrinsic (in CoreCLR). For example, there is a SIMD intrinsic “DotProduct” in CoreCLR, which just supports Int, Long, Float, and Double. If we pass two vectors of Byte or Short, DotProduct would go to the C# fallback (the Vector.IsHardwareAccelerated true path).
Consequently, here are two issues:
1. The comment is different from code's behavior (out of date comments?).
2. For most of the SIMD intrinsics, the code pattern "if (Vector.IsHardwareAccelerated) { ... }" looks useless.
|
1.0
|
Out of date comments in System.Numerics.Vectors - There is a comment at the beginning of Vectors.cs [https://github.com/dotnet/corefx/blob/master/src/System.Numerics.Vectors/src/System/Numerics/Vector.cs#L22-L32](url)
> PATTERN:
* if (Vector.IsHardwareAccelerated) { ... }
* else { ... }
* EXPLANATION
* This pattern solves two problems:
* 1. Allows us to unroll loops when we know the size (when no hardware acceleration is present)
* 2. Allows reflection to work:
* - If a method is called via reflection, it will not be "intrinsified", which would cause issues if we did
* not provide an implementation for that case (i.e. if it only included a case which assumed 16-byte registers)
* (NOTE: It is assumed that Vector.IsHardwareAccelerated will be a compile-time constant, eliminating these checks
* from the JIT'd code.)
This means that the Vector.IsHardwareAccelerated true path of each SIMD intrinsic fallback would execute in reflection calls.
However, according to my experiment, CoreCLR’s behavior is different from the comment. The Vector.IsHardwareAccelerated true path cannot execute, even though it called by reflection. Usually, reflection calls of SIMD intrinsics also go to jited intrinsic.
I detected that the true path only executes when the current base type is not supported by current SIMD intrinsic (in CoreCLR). For example, there is a SIMD intrinsic “DotProduct” in CoreCLR, which just supports Int, Long, Float, and Double. If we pass two vectors of Byte or Short, DotProduct would go to the C# fallback (the Vector.IsHardwareAccelerated true path).
Consequently, here are two issues:
1. The comment is different from code's behavior (out of date comments?).
2. For most of the SIMD intrinsics, the code pattern "if (Vector.IsHardwareAccelerated) { ... }" looks useless.
|
non_process
|
out of date comments in system numerics vectors there is a comment at the beginning of vectors cs url pattern if vector ishardwareaccelerated else explanation this pattern solves two problems allows us to unroll loops when we know the size when no hardware acceleration is present allows reflection to work if a method is called via reflection it will not be intrinsified which would cause issues if we did not provide an implementation for that case i e if it only included a case which assumed byte registers note it is assumed that vector ishardwareaccelerated will be a compile time constant eliminating these checks from the jit d code this means that the vector ishardwareaccelerated true path of each simd intrinsic fallback would execute in reflection calls however according to my experiment coreclr’s behavior is different from the comment the vector ishardwareaccelerated true path cannot execute even though it called by reflection usually reflection calls of simd intrinsics also go to jited intrinsic i detected that the true path only executes when the current base type is not supported by current simd intrinsic in coreclr for example there is a simd intrinsic “dotproduct” in coreclr which just supports int long float and double if we pass two vectors of byte or short dotproduct would go to the c fallback the vector ishardwareaccelerated true path consequently here are two issues the comment is different from code s behavior out of date comments for most of the simd intrinsics the code pattern if vector ishardwareaccelerated looks useless
| 0
|
19,728
| 26,074,242,756
|
IssuesEvent
|
2022-12-24 08:37:59
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Roland 4k V\ideo scaler VC-100UHD Region of Interest
|
NOT YET PROCESSED
|
The name of the device, hardware, or software you would like to control:
What you would like to be able to make it do from Companion:
create controls to zoom in, zoom out (digital zoom), and modify the positions of the ROI boxes.
Direct links or attachments to the ethernet control protocol or API:
"- I am not sure whether this is helpful but here is what the owners manual says about it:
- This product contains eParts integrated software platform of eSOL Co.,Ltd. eParts is a trademark of eSOL Co., Ltd. in Japan. • This Product uses the Source Code of μT-Kernel under T-License 2.0 granted by the T-Engine Forum (www. tron.org). • This product uses Free RTOS"
Here is the link to the owners manual:
https://static.roland.com/assets/media/pdf/VC-100UHD_eng01_W.pdf
Here is the program control application:
https://proav.roland.com/global/support/by_product/vc-100uhd/updates_drivers/f60b77bb-4b45-4408-9f9d-9e45465b6e4b/
|
1.0
|
Roland 4k V\ideo scaler VC-100UHD Region of Interest - The name of the device, hardware, or software you would like to control:
What you would like to be able to make it do from Companion:
create controls to zoom in, zoom out (digital zoom), and modify the positions of the ROI boxes.
Direct links or attachments to the ethernet control protocol or API:
"- I am not sure whether this is helpful but here is what the owners manual says about it:
- This product contains eParts integrated software platform of eSOL Co.,Ltd. eParts is a trademark of eSOL Co., Ltd. in Japan. • This Product uses the Source Code of μT-Kernel under T-License 2.0 granted by the T-Engine Forum (www. tron.org). • This product uses Free RTOS"
Here is the link to the owners manual:
https://static.roland.com/assets/media/pdf/VC-100UHD_eng01_W.pdf
Here is the program control application:
https://proav.roland.com/global/support/by_product/vc-100uhd/updates_drivers/f60b77bb-4b45-4408-9f9d-9e45465b6e4b/
|
process
|
roland v ideo scaler vc region of interest the name of the device hardware or software you would like to control what you would like to be able to make it do from companion create controls to zoom in zoom out digital zoom and modify the positions of the roi boxes direct links or attachments to the ethernet control protocol or api i am not sure whether this is helpful but here is what the owners manual says about it this product contains eparts integrated software platform of esol co ltd eparts is a trademark of esol co ltd in japan • this product uses the source code of μt kernel under t license granted by the t engine forum www tron org • this product uses free rtos here is the link to the owners manual here is the program control application
| 1
|
1,785
| 10,755,967,299
|
IssuesEvent
|
2019-10-31 10:13:44
|
elastic/apm-integration-testing
|
https://api.github.com/repos/elastic/apm-integration-testing
|
closed
|
Opbeans node is broken after Docker images changes
|
[zube]: In Review automation bug ci
|
After lates changes on Opbeans-node Docker image, the Docker image in the integration testing is broken
`./scripts/compose.py start 7.4.0 --release --with-opbeans-node`
```
2019-10-29T11:04:59: PM2 log: Launching in no daemon mode
2019-10-29T11:04:59: PM2 log: App [server:0] starting in -cluster mode-
2019-10-29T11:04:59: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:04:59: PM2 log: App [workload:1] online
2019-10-29T11:04:59: PM2 log: App [server:0] online
/app/.workload.js
Sending error to Elastic APM { id: '5b75e9b2ee1b4a8588134e270a10aeb0' }
2019-10-29T11:05:01: PM2 log: App name:workload id:1 disconnected
2019-10-29T11:05:01: PM2 log: App [workload:1] exited with code [1] via signal [SIGINT]
{"level":30,"time":1572347101967,"pid":19,"hostname":"6879da2b4a34","msg":"server is listening on port 3000","v":1}
2019-10-29T11:05:03: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:05:03: PM2 log: App [workload:1] online
/app/.workload.js
{"level":20,"time":1572347105480,"pid":19,"hostname":"6879da2b4a34","req":{"id":1,"method":"GET","url":"/throw-async-error","headers":{"user-agent":"workload/2.
4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35658},"msg":"request received","v":1}
{"level":30,"time":1572347105509,"pid":19,"hostname":"6879da2b4a34","msg":"Sending error to Elastic APM {\"id\":\"6d3471552be164764679f35d81771283\"}","v":1}
{"level":50,"time":1572347105504,"pid":19,"hostname":"6879da2b4a34","msg":"this will not get captured by express","stack":"Error: this will not get captured by
express\n at /app/server/coffee.js:42:11\n at _combinedTickCallback (internal/process/next_tick.js:132:7)\n at process._tickCallback (internal/process/
next_tick.js:181:9)","type":"Error","v":1}
{"level":50,"time":1572347105504,"pid":19,"hostname":"6879da2b4a34","msg":"Application encountered an uncaught exception. Flushing Elastic APM queue and exiting
...","v":1}
{"level":50,"time":1572347105506,"pid":19,"hostname":"6879da2b4a34","msg":"Elastic APM queue flushed!","v":1}
2019-10-29T11:05:05: PM2 log: App name:server id:0 disconnected
2019-10-29T11:05:05: PM2 log: App [server:0] exited with code [1] via signal [SIGINT]
2019-10-29T11:05:05: PM2 log: App [server:0] starting in -cluster mode-
Sending error to Elastic APM { id: 'b61b466a0c36ab38037887d878fd6cfe' }
2019-10-29T11:05:05: PM2 log: App name:workload id:1 disconnected
2019-10-29T11:05:05: PM2 log: App [workload:1] exited with code [1] via signal [SIGINT]
2019-10-29T11:05:05: PM2 log: App [server:0] online
{"level":30,"time":1572347106930,"pid":51,"hostname":"6879da2b4a34","msg":"server is listening on port 3000","v":1}
2019-10-29T11:05:07: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:05:07: PM2 log: App [workload:1] online
{"level":20,"time":1572347107828,"pid":51,"hostname":"6879da2b4a34","req":{"id":1,"method":"GET","url":"/","headers":{"host":"opbeans-node:3000","user-agent":"W
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
T http://opbeans-node:3000/api/products/2/customers
{"level":30,"time":1572347109390,"pid":51,"hostname":"6879da2b4a34","req":{"id":2,"method":"GET","url":"/api/products/2/customers","headers":{"user-agent":"workload/2.4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35682},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"application/json; charset=utf-8","content-length":"21284","etag":"W/\"5324-NP5pUxBZCR7v9As8GdbIdG4jqPc\""}},"responseTime":157,"msg":"request completed","v":1}
{"level":20,"time":1572347109732,"pid":51,"hostname":"6879da2b4a34","req":{"id":3,"method":"GET","url":"/api/stats","headers":{"user-agent":"workload/2.4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35686},"msg":"request received","v":1}
:Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
```
|
1.0
|
Opbeans node is broken after Docker images changes - After lates changes on Opbeans-node Docker image, the Docker image in the integration testing is broken
`./scripts/compose.py start 7.4.0 --release --with-opbeans-node`
```
2019-10-29T11:04:59: PM2 log: Launching in no daemon mode
2019-10-29T11:04:59: PM2 log: App [server:0] starting in -cluster mode-
2019-10-29T11:04:59: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:04:59: PM2 log: App [workload:1] online
2019-10-29T11:04:59: PM2 log: App [server:0] online
/app/.workload.js
Sending error to Elastic APM { id: '5b75e9b2ee1b4a8588134e270a10aeb0' }
2019-10-29T11:05:01: PM2 log: App name:workload id:1 disconnected
2019-10-29T11:05:01: PM2 log: App [workload:1] exited with code [1] via signal [SIGINT]
{"level":30,"time":1572347101967,"pid":19,"hostname":"6879da2b4a34","msg":"server is listening on port 3000","v":1}
2019-10-29T11:05:03: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:05:03: PM2 log: App [workload:1] online
/app/.workload.js
{"level":20,"time":1572347105480,"pid":19,"hostname":"6879da2b4a34","req":{"id":1,"method":"GET","url":"/throw-async-error","headers":{"user-agent":"workload/2.
4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35658},"msg":"request received","v":1}
{"level":30,"time":1572347105509,"pid":19,"hostname":"6879da2b4a34","msg":"Sending error to Elastic APM {\"id\":\"6d3471552be164764679f35d81771283\"}","v":1}
{"level":50,"time":1572347105504,"pid":19,"hostname":"6879da2b4a34","msg":"this will not get captured by express","stack":"Error: this will not get captured by
express\n at /app/server/coffee.js:42:11\n at _combinedTickCallback (internal/process/next_tick.js:132:7)\n at process._tickCallback (internal/process/
next_tick.js:181:9)","type":"Error","v":1}
{"level":50,"time":1572347105504,"pid":19,"hostname":"6879da2b4a34","msg":"Application encountered an uncaught exception. Flushing Elastic APM queue and exiting
...","v":1}
{"level":50,"time":1572347105506,"pid":19,"hostname":"6879da2b4a34","msg":"Elastic APM queue flushed!","v":1}
2019-10-29T11:05:05: PM2 log: App name:server id:0 disconnected
2019-10-29T11:05:05: PM2 log: App [server:0] exited with code [1] via signal [SIGINT]
2019-10-29T11:05:05: PM2 log: App [server:0] starting in -cluster mode-
Sending error to Elastic APM { id: 'b61b466a0c36ab38037887d878fd6cfe' }
2019-10-29T11:05:05: PM2 log: App name:workload id:1 disconnected
2019-10-29T11:05:05: PM2 log: App [workload:1] exited with code [1] via signal [SIGINT]
2019-10-29T11:05:05: PM2 log: App [server:0] online
{"level":30,"time":1572347106930,"pid":51,"hostname":"6879da2b4a34","msg":"server is listening on port 3000","v":1}
2019-10-29T11:05:07: PM2 log: App [workload:1] starting in -cluster mode-
2019-10-29T11:05:07: PM2 log: App [workload:1] online
{"level":20,"time":1572347107828,"pid":51,"hostname":"6879da2b4a34","req":{"id":1,"method":"GET","url":"/","headers":{"host":"opbeans-node:3000","user-agent":"W
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
T http://opbeans-node:3000/api/products/2/customers
{"level":30,"time":1572347109390,"pid":51,"hostname":"6879da2b4a34","req":{"id":2,"method":"GET","url":"/api/products/2/customers","headers":{"user-agent":"workload/2.4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35682},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"application/json; charset=utf-8","content-length":"21284","etag":"W/\"5324-NP5pUxBZCR7v9As8GdbIdG4jqPc\""}},"responseTime":157,"msg":"request completed","v":1}
{"level":20,"time":1572347109732,"pid":51,"hostname":"6879da2b4a34","req":{"id":3,"method":"GET","url":"/api/stats","headers":{"user-agent":"workload/2.4.3","host":"opbeans-node:3000","connection":"close"},"remoteAddress":"::ffff:172.20.0.7","remotePort":35686},"msg":"request received","v":1}
:Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
Error: connect ECONNREFUSED 172.20.0.7:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
```
|
non_process
|
opbeans node is broken after docker images changes after lates changes on opbeans node docker image the docker image in the integration testing is broken scripts compose py start release with opbeans node log launching in no daemon mode log app starting in cluster mode log app starting in cluster mode log app online log app online app workload js sending error to elastic apm id log app name workload id disconnected log app exited with code via signal level time pid hostname msg server is listening on port v log app starting in cluster mode log app online app workload js level time pid hostname req id method get url throw async error headers user agent workload host opbeans node connection close remoteaddress ffff remoteport msg request received v level time pid hostname msg sending error to elastic apm id v level time pid hostname msg this will not get captured by express stack error this will not get captured by express n at app server coffee js n at combinedtickcallback internal process next tick js n at process tickcallback internal process next tick js type error v level time pid hostname msg application encountered an uncaught exception flushing elastic apm queue and exiting v level time pid hostname msg elastic apm queue flushed v log app name server id disconnected log app exited with code via signal log app starting in cluster mode sending error to elastic apm id log app name workload id disconnected log app exited with code via signal log app online level time pid hostname msg server is listening on port v log app starting in cluster mode log app online level time pid hostname req id method get url headers host opbeans node user agent w at tcpconnectwrap afterconnect net js error connect econnrefused at tcpconnectwrap afterconnect net js t level time pid hostname req id method get url api products customers headers user agent workload host opbeans node connection close remoteaddress ffff remoteport res statuscode headers x powered by express content type application json charset utf content length etag w responsetime msg request completed v level time pid hostname req id method get url api stats headers user agent workload host opbeans node connection close remoteaddress ffff remoteport msg request received v error connect econnrefused at tcpconnectwrap afterconnect net js error connect econnrefused at tcpconnectwrap afterconnect net js
| 0
|
174,227
| 27,598,214,553
|
IssuesEvent
|
2023-03-09 08:15:13
|
saving-satoshi/saving-satoshi
|
https://api.github.com/repos/saving-satoshi/saving-satoshi
|
closed
|
Revise the about page copy
|
copy design
|
Don't think the information is really accurate anymore. Things to consider:
- Is the story intro still accurate?
- Let's describe the demo state, what's included and what to expect going forward
- Let's expand on "How to contribute". Just opening an issue is fine for small bugs, but not helpful if someone wants to help with design, localization, story, etc
- Update list of contributors
- Include a simple way to give feedback on the learning experience
What else?
I suggest starting a Google doc for drafting up a new and improved version of this page.
|
1.0
|
Revise the about page copy - Don't think the information is really accurate anymore. Things to consider:
- Is the story intro still accurate?
- Let's describe the demo state, what's included and what to expect going forward
- Let's expand on "How to contribute". Just opening an issue is fine for small bugs, but not helpful if someone wants to help with design, localization, story, etc
- Update list of contributors
- Include a simple way to give feedback on the learning experience
What else?
I suggest starting a Google doc for drafting up a new and improved version of this page.
|
non_process
|
revise the about page copy don t think the information is really accurate anymore things to consider is the story intro still accurate let s describe the demo state what s included and what to expect going forward let s expand on how to contribute just opening an issue is fine for small bugs but not helpful if someone wants to help with design localization story etc update list of contributors include a simple way to give feedback on the learning experience what else i suggest starting a google doc for drafting up a new and improved version of this page
| 0
|
86,256
| 3,704,312,576
|
IssuesEvent
|
2016-02-29 23:35:35
|
BlinkUX/ISB-LSDF
|
https://api.github.com/repos/BlinkUX/ISB-LSDF
|
closed
|
Plotted user feature axis render terrible axis labels
|
Awaiting Confirmation bug Priority 1
|
<img width="828" alt="screen shot 2016-02-27 at 12 12 50 pm" src="https://cloud.githubusercontent.com/assets/4040084/13375094/89f60a24-dd4b-11e5-989e-7456090e1c28.png">
|
1.0
|
Plotted user feature axis render terrible axis labels - <img width="828" alt="screen shot 2016-02-27 at 12 12 50 pm" src="https://cloud.githubusercontent.com/assets/4040084/13375094/89f60a24-dd4b-11e5-989e-7456090e1c28.png">
|
non_process
|
plotted user feature axis render terrible axis labels img width alt screen shot at pm src
| 0
|
49
| 2,513,878,256
|
IssuesEvent
|
2015-01-15 04:33:35
|
GsDevKit/zinc
|
https://api.github.com/repos/GsDevKit/zinc
|
closed
|
socket error while snapping off continuation
|
inprocess
|
See the discussion and attached stack in http://forum.world.st/GS-SS-Beta-Setup-a-new-copy-of-Glass-tp4757105p4757522.html for details of the error ...
To reproduce generate an error using the WAExceptionFunctionalTest test: http://localhost:8383/tests/functional/WAExceptionFunctionalTest
GemStone3.1.0.5 and Seaside 3.1.0, loaded using https://github.com/glassdb/webEditionHome/blob/master/docs/install/installSeaside3.1.md
|
1.0
|
socket error while snapping off continuation - See the discussion and attached stack in http://forum.world.st/GS-SS-Beta-Setup-a-new-copy-of-Glass-tp4757105p4757522.html for details of the error ...
To reproduce generate an error using the WAExceptionFunctionalTest test: http://localhost:8383/tests/functional/WAExceptionFunctionalTest
GemStone3.1.0.5 and Seaside 3.1.0, loaded using https://github.com/glassdb/webEditionHome/blob/master/docs/install/installSeaside3.1.md
|
process
|
socket error while snapping off continuation see the discussion and attached stack in for details of the error to reproduce generate an error using the waexceptionfunctionaltest test and seaside loaded using
| 1
|
17,440
| 23,265,882,762
|
IssuesEvent
|
2022-08-04 17:19:42
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Transparência - Detalhes do coletor/Filtragem URLs e RegEx
|
[1] Requisito [0] Desenvolvimento [2] Média Prioridade [3] Processamento Dinâmico
|
## Comportamento Esperado
Espera-se que as configurações de filtrar links por URLs e RegEx se apliquem também aos passos do processamento dinâmico.
## Comportamento Atual
Ao configurar um coletor dinâmico com essa ferramenta, os links filtrados são, basicamente, o que podem ser filtrados através do processamento do Scrapy. Para filtrar links dinamicamente, apesar de haver a opções extras para filtrar xpaths em alguns passos do mecanismo, não há uma opção nos detalhes do coletor para filtrar os links "dinâmicos" disponíveis através de especificação de URLs e RegEx desejados. Isso pode ser pouco intuitivo para o usuário.
## Passos para reproduzir o erro
Não se aplica.
## Sistema
- MP ou local: ambos
- Branch específica: master
- Sistema diferente: não
## Screenshots
Não se aplica.
|
1.0
|
Transparência - Detalhes do coletor/Filtragem URLs e RegEx - ## Comportamento Esperado
Espera-se que as configurações de filtrar links por URLs e RegEx se apliquem também aos passos do processamento dinâmico.
## Comportamento Atual
Ao configurar um coletor dinâmico com essa ferramenta, os links filtrados são, basicamente, o que podem ser filtrados através do processamento do Scrapy. Para filtrar links dinamicamente, apesar de haver a opções extras para filtrar xpaths em alguns passos do mecanismo, não há uma opção nos detalhes do coletor para filtrar os links "dinâmicos" disponíveis através de especificação de URLs e RegEx desejados. Isso pode ser pouco intuitivo para o usuário.
## Passos para reproduzir o erro
Não se aplica.
## Sistema
- MP ou local: ambos
- Branch específica: master
- Sistema diferente: não
## Screenshots
Não se aplica.
|
process
|
transparência detalhes do coletor filtragem urls e regex comportamento esperado espera se que as configurações de filtrar links por urls e regex se apliquem também aos passos do processamento dinâmico comportamento atual ao configurar um coletor dinâmico com essa ferramenta os links filtrados são basicamente o que podem ser filtrados através do processamento do scrapy para filtrar links dinamicamente apesar de haver a opções extras para filtrar xpaths em alguns passos do mecanismo não há uma opção nos detalhes do coletor para filtrar os links dinâmicos disponíveis através de especificação de urls e regex desejados isso pode ser pouco intuitivo para o usuário passos para reproduzir o erro não se aplica sistema mp ou local ambos branch específica master sistema diferente não screenshots não se aplica
| 1
|
288,578
| 24,917,374,802
|
IssuesEvent
|
2022-10-30 15:08:08
|
FuelLabs/fuel-indexer
|
https://api.github.com/repos/FuelLabs/fuel-indexer
|
closed
|
Add tests for new Receipt types
|
testing
|
- Now that we have support for several receipt types, it would be nice to have this functionality tested
- We initially punted on testing in the relevant PRs because we wanted to make sure we were on track to complete the milestone
- Not only this, but we also don't really know how to trigger some of these events in a Sway smart contract
- We should add a test for all added receipts functionality
Tests for:
- [x] Log
- [x] LogData
- [x] Transfer
- [ ] TransferOut
- [x] MessageOut
- [x] ScriptResult
|
1.0
|
Add tests for new Receipt types - - Now that we have support for several receipt types, it would be nice to have this functionality tested
- We initially punted on testing in the relevant PRs because we wanted to make sure we were on track to complete the milestone
- Not only this, but we also don't really know how to trigger some of these events in a Sway smart contract
- We should add a test for all added receipts functionality
Tests for:
- [x] Log
- [x] LogData
- [x] Transfer
- [ ] TransferOut
- [x] MessageOut
- [x] ScriptResult
|
non_process
|
add tests for new receipt types now that we have support for several receipt types it would be nice to have this functionality tested we initially punted on testing in the relevant prs because we wanted to make sure we were on track to complete the milestone not only this but we also don t really know how to trigger some of these events in a sway smart contract we should add a test for all added receipts functionality tests for log logdata transfer transferout messageout scriptresult
| 0
|
42,521
| 6,987,545,613
|
IssuesEvent
|
2017-12-14 09:38:25
|
raiden-network/microraiden
|
https://api.github.com/repos/raiden-network/microraiden
|
opened
|
Update documentation
|
channel manager documentation m2m client sprint candidate
|
Quite a lot has changed (and will change during this sprint) concerning usage of the proxy and python client/lib. We should consolidate README-like information into a single top-level README.md file and update usage instructions on basic examples.
We should also document all classes and methods (using docstrings) that are expected to be used by applications building on top of µRaiden.
|
1.0
|
Update documentation - Quite a lot has changed (and will change during this sprint) concerning usage of the proxy and python client/lib. We should consolidate README-like information into a single top-level README.md file and update usage instructions on basic examples.
We should also document all classes and methods (using docstrings) that are expected to be used by applications building on top of µRaiden.
|
non_process
|
update documentation quite a lot has changed and will change during this sprint concerning usage of the proxy and python client lib we should consolidate readme like information into a single top level readme md file and update usage instructions on basic examples we should also document all classes and methods using docstrings that are expected to be used by applications building on top of µraiden
| 0
|
11,833
| 14,655,443,642
|
IssuesEvent
|
2020-12-28 11:01:58
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM[ [Dev] Participant details page >Withdrawal date is displaying
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Participant details page >Withdrawal date is displaying
Steps
1. Join a study and withdrawn from the study
2. Enable invitation for the same participant
3. Navigate to Participant details page and Observe the Withdrawal date
AR : Withdrawal date is displaying
ER : Withdrawal date should be displayed as 'NA'
[ Note : It should also be fixed when user Invited the participant]

|
3.0
|
[PM[ [Dev] Participant details page >Withdrawal date is displaying - Participant details page >Withdrawal date is displaying
Steps
1. Join a study and withdrawn from the study
2. Enable invitation for the same participant
3. Navigate to Participant details page and Observe the Withdrawal date
AR : Withdrawal date is displaying
ER : Withdrawal date should be displayed as 'NA'
[ Note : It should also be fixed when user Invited the participant]

|
process
|
participant details page withdrawal date is displaying participant details page withdrawal date is displaying steps join a study and withdrawn from the study enable invitation for the same participant navigate to participant details page and observe the withdrawal date ar withdrawal date is displaying er withdrawal date should be displayed as na
| 1
|
35,765
| 5,006,624,221
|
IssuesEvent
|
2016-12-12 14:41:44
|
d3athrow/vgstation13
|
https://api.github.com/repos/d3athrow/vgstation13
|
closed
|
Parapen disappearing
|
Bug / Fix Needs Moar Testing
|
(WEB REPORT BY: brolaire REMOTE: 158.69.55.2:7777)
> Revision (Should be above if you're viewing this from ingame!)
> ce5a82256bbcd88f97bf112624142b6f9e608981
>
> General description of the issue
> Parapen put into left pocket, disappeared from the HUD, later on during search, HoS pulled it OUT OF MY EYES-slot.
>
> What you expected to happen
> Put parapen into right pocket, it goes in there.
>
> What actually happened
> Parapen turns invisible and was apparently equipped to my glasses slot instead of pockets.
>
> Steps to reproduce if possible
> -Get parapen
> -Put normal pen into LEFT pocket
> -PDA in LEFT hand, parapen in RIGHT hand
> -Active hand: RIGHT
> -Press E twice in hotkey-mode
> -Click with active RIGHT hand on RIGHT pocket.
|
1.0
|
Parapen disappearing - (WEB REPORT BY: brolaire REMOTE: 158.69.55.2:7777)
> Revision (Should be above if you're viewing this from ingame!)
> ce5a82256bbcd88f97bf112624142b6f9e608981
>
> General description of the issue
> Parapen put into left pocket, disappeared from the HUD, later on during search, HoS pulled it OUT OF MY EYES-slot.
>
> What you expected to happen
> Put parapen into right pocket, it goes in there.
>
> What actually happened
> Parapen turns invisible and was apparently equipped to my glasses slot instead of pockets.
>
> Steps to reproduce if possible
> -Get parapen
> -Put normal pen into LEFT pocket
> -PDA in LEFT hand, parapen in RIGHT hand
> -Active hand: RIGHT
> -Press E twice in hotkey-mode
> -Click with active RIGHT hand on RIGHT pocket.
|
non_process
|
parapen disappearing web report by brolaire remote revision should be above if you re viewing this from ingame general description of the issue parapen put into left pocket disappeared from the hud later on during search hos pulled it out of my eyes slot what you expected to happen put parapen into right pocket it goes in there what actually happened parapen turns invisible and was apparently equipped to my glasses slot instead of pockets steps to reproduce if possible get parapen put normal pen into left pocket pda in left hand parapen in right hand active hand right press e twice in hotkey mode click with active right hand on right pocket
| 0
|
240,234
| 18,296,196,373
|
IssuesEvent
|
2021-10-05 20:42:42
|
tc39/proposal-pipeline-operator
|
https://api.github.com/repos/tc39/proposal-pipeline-operator
|
closed
|
Readme indents change
|
question documentation
|
There are no indents all around the whole readme, there should be indents, i.e. 2 spaces.
This is a general js tendency, for example here:
```js
context
.call1()
.call2()
.call3()
```
So in the readme it should be:
```js
return xf
|> bind(^['@@transducer/step'], ^)
|> obj[methodName](^, ACC)
|> xf['@@transducer/result'](^);
```
This is not critical but default js formaters make indents before the accessors, so I think we should follow it.
|
1.0
|
Readme indents change - There are no indents all around the whole readme, there should be indents, i.e. 2 spaces.
This is a general js tendency, for example here:
```js
context
.call1()
.call2()
.call3()
```
So in the readme it should be:
```js
return xf
|> bind(^['@@transducer/step'], ^)
|> obj[methodName](^, ACC)
|> xf['@@transducer/result'](^);
```
This is not critical but default js formaters make indents before the accessors, so I think we should follow it.
|
non_process
|
readme indents change there are no indents all around the whole readme there should be indents i e spaces this is a general js tendency for example here js context so in the readme it should be js return xf bind obj acc xf this is not critical but default js formaters make indents before the accessors so i think we should follow it
| 0
|
38,689
| 19,503,378,118
|
IssuesEvent
|
2021-12-28 08:38:30
|
ARK-Builders/ARK-Navigator
|
https://api.github.com/repos/ARK-Builders/ARK-Navigator
|
opened
|
Asynchronous writing to tags storage
|
performance
|
At the moment, after tags are changed for a resource, we wait till new state of tags storage is written into filesystem.
We can update the storage file in background allowing user to continue working. Right now there is slight delay less than 0.5 sec after applying new tags which can be removed. We must ensure that such coroutines complete even in case of a crash, because we can't afford to lose user's data. So this features depends on #173.
|
True
|
Asynchronous writing to tags storage - At the moment, after tags are changed for a resource, we wait till new state of tags storage is written into filesystem.
We can update the storage file in background allowing user to continue working. Right now there is slight delay less than 0.5 sec after applying new tags which can be removed. We must ensure that such coroutines complete even in case of a crash, because we can't afford to lose user's data. So this features depends on #173.
|
non_process
|
asynchronous writing to tags storage at the moment after tags are changed for a resource we wait till new state of tags storage is written into filesystem we can update the storage file in background allowing user to continue working right now there is slight delay less than sec after applying new tags which can be removed we must ensure that such coroutines complete even in case of a crash because we can t afford to lose user s data so this features depends on
| 0
|
20,120
| 26,658,397,179
|
IssuesEvent
|
2023-01-25 18:47:25
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Ghidra cannot disassemble a LDS instruction in real mode
|
Type: Bug Feature: Processor/x86 Status: Internal
|
**Describe the bug**
C5B70D03 is supposed to disassemble to LDS SI,DWORD PTR [BX+030D]
However in ghidra I get an error:
Error | Bad Instruction | Unable to resolve constructor at f000:0026 | f000:0026 | | ?? C5h
**To Reproduce**
Create a project in real mode and import hex C5B70D03 in the bytes view, and try to disassemble it
**Expected behavior**
Ghidra disassembles a LDS instruction
**Screenshots**

**Attachments**
If applicable, please attach any files that caused problems or log files generated by the software.
**Environment (please complete the following information):**
- OS: ubuntu 20.04
- Java Version: 17.0.4
- Ghidra Version: 10.1.5_PUBLIC
- Ghidra Origin: official GitHub distro
|
1.0
|
Ghidra cannot disassemble a LDS instruction in real mode - **Describe the bug**
C5B70D03 is supposed to disassemble to LDS SI,DWORD PTR [BX+030D]
However in ghidra I get an error:
Error | Bad Instruction | Unable to resolve constructor at f000:0026 | f000:0026 | | ?? C5h
**To Reproduce**
Create a project in real mode and import hex C5B70D03 in the bytes view, and try to disassemble it
**Expected behavior**
Ghidra disassembles a LDS instruction
**Screenshots**

**Attachments**
If applicable, please attach any files that caused problems or log files generated by the software.
**Environment (please complete the following information):**
- OS: ubuntu 20.04
- Java Version: 17.0.4
- Ghidra Version: 10.1.5_PUBLIC
- Ghidra Origin: official GitHub distro
|
process
|
ghidra cannot disassemble a lds instruction in real mode describe the bug is supposed to disassemble to lds si dword ptr however in ghidra i get an error error bad instruction unable to resolve constructor at to reproduce create a project in real mode and import hex in the bytes view and try to disassemble it expected behavior ghidra disassembles a lds instruction screenshots attachments if applicable please attach any files that caused problems or log files generated by the software environment please complete the following information os ubuntu java version ghidra version public ghidra origin official github distro
| 1
|
77,683
| 3,507,212,594
|
IssuesEvent
|
2016-01-08 11:56:25
|
OregonCore/OregonCore
|
https://api.github.com/repos/OregonCore/OregonCore
|
closed
|
[Bug] Out of range use the melee ability (BB #663)
|
migrated Priority: Medium Type: Bug
|
This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 22.08.2014 05:08:15 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** closed
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/663
<hr>
Attack ability are only possible if you stand close to the enemy. If a little away from him - that are only auto attacks, and when you try to use the ability (Melee) to display the message - Out of range.
|
1.0
|
[Bug] Out of range use the melee ability (BB #663) - This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 22.08.2014 05:08:15 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** closed
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/663
<hr>
Attack ability are only possible if you stand close to the enemy. If a little away from him - that are only auto attacks, and when you try to use the ability (Melee) to display the message - Out of range.
|
non_process
|
out of range use the melee ability bb this issue was migrated from bitbucket original reporter alex step original date gmt original priority major original type bug original state closed direct link attack ability are only possible if you stand close to the enemy if a little away from him that are only auto attacks and when you try to use the ability melee to display the message out of range
| 0
|
320,680
| 27,450,085,042
|
IssuesEvent
|
2023-03-02 16:45:46
|
allure-framework/allure-python
|
https://api.github.com/repos/allure-framework/allure-python
|
reopened
|
pytest-bdd 5.0.0: error on generating report with failed tests with scenario outlines
|
type:bug theme:pytest-bdd
|
[//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [V] bug report
- [V] feature request
#### What is the current behavior?
report doesn't include failed tests with scenario outline (pytest-bdd 5.0.0)
it fails with
com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.String` out of START_OBJECT token
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
1. pytest-bdd 5.0.0
2. run scenario outlines with examples which fails
3. generate report
#### What is the expected behavior?
failed tests appears in report
#### Please tell us about your environment:
- Test framework: pytest-bdd@5.0.0
- Allure adaptor: allure-pytest-bdd@2.9.45 (and other lower versions)
#### Other information
it worked OK with previous pytest-bdd versions
pytest-bdd 5.0.0 changes https://pytest-bdd.readthedocs.io/en/latest/#id1
|
1.0
|
pytest-bdd 5.0.0: error on generating report with failed tests with scenario outlines - [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [V] bug report
- [V] feature request
#### What is the current behavior?
report doesn't include failed tests with scenario outline (pytest-bdd 5.0.0)
it fails with
com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of `java.lang.String` out of START_OBJECT token
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
1. pytest-bdd 5.0.0
2. run scenario outlines with examples which fails
3. generate report
#### What is the expected behavior?
failed tests appears in report
#### Please tell us about your environment:
- Test framework: pytest-bdd@5.0.0
- Allure adaptor: allure-pytest-bdd@2.9.45 (and other lower versions)
#### Other information
it worked OK with previous pytest-bdd versions
pytest-bdd 5.0.0 changes https://pytest-bdd.readthedocs.io/en/latest/#id1
|
non_process
|
pytest bdd error on generating report with failed tests with scenario outlines note for support questions please use stackoverflow or gitter this repository s issues are reserved for feature requests and bug reports in case of any problems with allure jenkins plugin please use the following repository to create an issue make sure you have a clear name for your issue the name should start with a capital letter and no dot is required in the end of the sentence an example of good issue names the report is broken in add an ability to disable default plugins support emoji in test descriptions i m submitting a bug report feature request what is the current behavior report doesn t include failed tests with scenario outline pytest bdd it fails with com fasterxml jackson databind exc mismatchedinputexception cannot deserialize instance of java lang string out of start object token if the current behavior is a bug please provide the steps to reproduce and if possible a minimal demo of the problem pytest bdd run scenario outlines with examples which fails generate report what is the expected behavior failed tests appears in report please tell us about your environment test framework pytest bdd allure adaptor allure pytest bdd and other lower versions other information it worked ok with previous pytest bdd versions pytest bdd changes
| 0
|
243,143
| 18,677,078,989
|
IssuesEvent
|
2021-10-31 18:42:47
|
AY2122S1-CS2103T-T10-1/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T10-1/tp
|
closed
|
[PE-D] inconsisten prefix for tele
|
type.Documentation priority.High mustfix
|


many of the tele prefix in the UG uses @.
However, the app error messages range from using @ and te/
<!--session: 1635494420984-8d1a8229-d028-430a-bd08-8035c5129d31-->
<!--Version: Web v3.4.1-->
-------------
Labels: `type.DocumentationBug` `severity.Medium`
original: hpkoh/ped#3
|
1.0
|
[PE-D] inconsisten prefix for tele - 

many of the tele prefix in the UG uses @.
However, the app error messages range from using @ and te/
<!--session: 1635494420984-8d1a8229-d028-430a-bd08-8035c5129d31-->
<!--Version: Web v3.4.1-->
-------------
Labels: `type.DocumentationBug` `severity.Medium`
original: hpkoh/ped#3
|
non_process
|
inconsisten prefix for tele many of the tele prefix in the ug uses however the app error messages range from using and te labels type documentationbug severity medium original hpkoh ped
| 0
|
43,162
| 5,579,933,182
|
IssuesEvent
|
2017-03-28 15:33:55
|
open-horizon/anax
|
https://api.github.com/repos/open-horizon/anax
|
opened
|
Add resource checking / mgmt. support to anax that can be used before registration, agreement, and container start
|
bug design
|
This has been ignored for a long time in the platform and causes instability and other ugly problems.
|
1.0
|
Add resource checking / mgmt. support to anax that can be used before registration, agreement, and container start - This has been ignored for a long time in the platform and causes instability and other ugly problems.
|
non_process
|
add resource checking mgmt support to anax that can be used before registration agreement and container start this has been ignored for a long time in the platform and causes instability and other ugly problems
| 0
|
15,621
| 19,767,644,328
|
IssuesEvent
|
2022-01-17 05:54:35
|
pkgjs/parseargs
|
https://api.github.com/repos/pkgjs/parseargs
|
closed
|
Transfer iansu/eslint-plugin-node-core to pkgjs org
|
process
|
I extracted the ESLint config used by Node core into a standalone package here: https://github.com/iansu/eslint-plugin-node-core. The idea is that we can use this config in projects like parseargs to make the eventual porting of the code into Node core as easy as possible.
We would like to transfer this repo from my GitHub account into the pkgjs org. I think I will need permission to create repos in pkgjs to do this. I would also like the ability to publish `@pkgjs/eslint-plugin-node-core` to npm.
|
1.0
|
Transfer iansu/eslint-plugin-node-core to pkgjs org - I extracted the ESLint config used by Node core into a standalone package here: https://github.com/iansu/eslint-plugin-node-core. The idea is that we can use this config in projects like parseargs to make the eventual porting of the code into Node core as easy as possible.
We would like to transfer this repo from my GitHub account into the pkgjs org. I think I will need permission to create repos in pkgjs to do this. I would also like the ability to publish `@pkgjs/eslint-plugin-node-core` to npm.
|
process
|
transfer iansu eslint plugin node core to pkgjs org i extracted the eslint config used by node core into a standalone package here the idea is that we can use this config in projects like parseargs to make the eventual porting of the code into node core as easy as possible we would like to transfer this repo from my github account into the pkgjs org i think i will need permission to create repos in pkgjs to do this i would also like the ability to publish pkgjs eslint plugin node core to npm
| 1
|
20,679
| 27,350,548,366
|
IssuesEvent
|
2023-02-27 09:16:02
|
haddocking/haddock3
|
https://api.github.com/repos/haddocking/haddock3
|
opened
|
create detailed documentation for haddock3-analyse
|
analysis/postprocessing
|
haddock3-analyse should now have an adequate, detailed documentation
|
1.0
|
create detailed documentation for haddock3-analyse - haddock3-analyse should now have an adequate, detailed documentation
|
process
|
create detailed documentation for analyse analyse should now have an adequate detailed documentation
| 1
|
127,925
| 12,343,269,587
|
IssuesEvent
|
2020-05-15 03:26:57
|
COVID-19-electronic-health-system/Corona-tracker
|
https://api.github.com/repos/COVID-19-electronic-health-system/Corona-tracker
|
opened
|
[DOCS] README for translations python script
|
documentation
|
# ⚠️ IMPORTANT: Please fill out this template to give us as much information as possible to consider/implement this update.
### Summary
<!-- One paragraph explanation of the feature. -->
In reference to #702, and to have a modular issue, this issue is to provide an improved README that more precisely details the script execution and algorithm and especially details how to create a google credential file.
### Motivation
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Possible Alternatives
<!-- A clear and concise description of the alternative solutions you've considered. Be sure to explain why the current documentation isn't suitable for this feature. -->
### Additional Context
<!-- Add any other context or screenshots about the documentation update here. -->
|
1.0
|
[DOCS] README for translations python script - # ⚠️ IMPORTANT: Please fill out this template to give us as much information as possible to consider/implement this update.
### Summary
<!-- One paragraph explanation of the feature. -->
In reference to #702, and to have a modular issue, this issue is to provide an improved README that more precisely details the script execution and algorithm and especially details how to create a google credential file.
### Motivation
<!-- Why are we doing this? What use cases does it support? What is the expected outcome? -->
### Possible Alternatives
<!-- A clear and concise description of the alternative solutions you've considered. Be sure to explain why the current documentation isn't suitable for this feature. -->
### Additional Context
<!-- Add any other context or screenshots about the documentation update here. -->
|
non_process
|
readme for translations python script ⚠️ important please fill out this template to give us as much information as possible to consider implement this update summary in reference to and to have a modular issue this issue is to provide an improved readme that more precisely details the script execution and algorithm and especially details how to create a google credential file motivation possible alternatives additional context
| 0
|
5,975
| 8,794,017,079
|
IssuesEvent
|
2018-12-21 22:41:38
|
babel/babel
|
https://api.github.com/repos/babel/babel
|
closed
|
@babel/polyfill was published yesterday without pushing browser.js
|
area: publishing process i: bug i: regression pkg: polyfill
|
## Bug Report
**Current Behavior**
A clear and concise description of the behavior.
In version 7.0.0 there was a file, browser.js, which was distributed when published.
**Input Code**
- REPL or Repo link if applicable:
https://github.com/babel/babel/blob/master/packages/babel-polyfill/scripts/postpublish.js#L7
```js
var your => (code) => here;
```
**Expected behavior/code**
A clear and concise description of what you expected to happen (or code).
**Babel Configuration (.babelrc, package.json, cli command)**
```js
{
"your": { "config": "here" }
}
```
**Environment**
- Babel version(s): [v7.0.0-beta.34]
- Node/npm version: [e.g. Node 8/npm 5]
- OS: [e.g. OSX 10.13.4, Windows 10]
- Monorepo: [e.g. yes/no/Lerna]
- How you are using Babel: [e.g. `cli`, `register`, `loader`]
**Possible Solution**
<!--- Only if you have suggestions on a fix for the bug -->
Re-publish the polyfill package with the browser.js file.
**Additional context/Screenshots**
Add any other context about the problem here. If applicable, add screenshots to help explain.
|
1.0
|
@babel/polyfill was published yesterday without pushing browser.js - ## Bug Report
**Current Behavior**
A clear and concise description of the behavior.
In version 7.0.0 there was a file, browser.js, which was distributed when published.
**Input Code**
- REPL or Repo link if applicable:
https://github.com/babel/babel/blob/master/packages/babel-polyfill/scripts/postpublish.js#L7
```js
var your => (code) => here;
```
**Expected behavior/code**
A clear and concise description of what you expected to happen (or code).
**Babel Configuration (.babelrc, package.json, cli command)**
```js
{
"your": { "config": "here" }
}
```
**Environment**
- Babel version(s): [v7.0.0-beta.34]
- Node/npm version: [e.g. Node 8/npm 5]
- OS: [e.g. OSX 10.13.4, Windows 10]
- Monorepo: [e.g. yes/no/Lerna]
- How you are using Babel: [e.g. `cli`, `register`, `loader`]
**Possible Solution**
<!--- Only if you have suggestions on a fix for the bug -->
Re-publish the polyfill package with the browser.js file.
**Additional context/Screenshots**
Add any other context about the problem here. If applicable, add screenshots to help explain.
|
process
|
babel polyfill was published yesterday without pushing browser js bug report current behavior a clear and concise description of the behavior in version there was a file browser js which was distributed when published input code repl or repo link if applicable js var your code here expected behavior code a clear and concise description of what you expected to happen or code babel configuration babelrc package json cli command js your config here environment babel version s node npm version os monorepo how you are using babel possible solution re publish the polyfill package with the browser js file additional context screenshots add any other context about the problem here if applicable add screenshots to help explain
| 1
|
96,681
| 16,163,106,432
|
IssuesEvent
|
2021-05-01 02:09:27
|
Heavencraft/heavencraft-legacy-201404-201603
|
https://api.github.com/repos/Heavencraft/heavencraft-legacy-201404-201603
|
closed
|
CVE-2011-1498 Medium Severity Vulnerability detected by WhiteSource
|
security vulnerability
|
## CVE-2011-1498 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>httpclient-4.0.1.jar</b></p></summary>
<p>HttpComponents Client (base module)</p>
<p>path: /root/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar</p>
<p>
<p>Library home page: <a href=http://hc.apache.org/httpcomponents-client>http://hc.apache.org/httpcomponents-client</a></p>
Dependency Hierarchy:
- geoip2-0.7.1.jar (Root Library)
- google-http-client-1.17.0-rc.jar
- :x: **httpclient-4.0.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache HttpClient 4.x before 4.1.1 in Apache HttpComponents, when used with an authenticating proxy server, sends the Proxy-Authorization header to the origin server, which allows remote web servers to obtain sensitive information by logging this header.
<p>Publish Date: 2011-07-07
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-1498>CVE-2011-1498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=709531">https://bugzilla.redhat.com/show_bug.cgi?id=709531</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Upgrade to version httpcomponents-client 4.1.1 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2011-1498 Medium Severity Vulnerability detected by WhiteSource - ## CVE-2011-1498 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>httpclient-4.0.1.jar</b></p></summary>
<p>HttpComponents Client (base module)</p>
<p>path: /root/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar</p>
<p>
<p>Library home page: <a href=http://hc.apache.org/httpcomponents-client>http://hc.apache.org/httpcomponents-client</a></p>
Dependency Hierarchy:
- geoip2-0.7.1.jar (Root Library)
- google-http-client-1.17.0-rc.jar
- :x: **httpclient-4.0.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache HttpClient 4.x before 4.1.1 in Apache HttpComponents, when used with an authenticating proxy server, sends the Proxy-Authorization header to the origin server, which allows remote web servers to obtain sensitive information by logging this header.
<p>Publish Date: 2011-07-07
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-1498>CVE-2011-1498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=709531">https://bugzilla.redhat.com/show_bug.cgi?id=709531</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Upgrade to version httpcomponents-client 4.1.1 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium severity vulnerability detected by whitesource cve medium severity vulnerability vulnerable library httpclient jar httpcomponents client base module path root repository org apache httpcomponents httpclient httpclient jar library home page a href dependency hierarchy jar root library google http client rc jar x httpclient jar vulnerable library vulnerability details apache httpclient x before in apache httpcomponents when used with an authenticating proxy server sends the proxy authorization header to the origin server which allows remote web servers to obtain sensitive information by logging this header publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution upgrade to version httpcomponents client or greater step up your open source security game with whitesource
| 0
|
227,929
| 18,110,911,529
|
IssuesEvent
|
2021-09-23 03:43:31
|
apache/shardingsphere
|
https://api.github.com/repos/apache/shardingsphere
|
closed
|
Remove useless assertions in unit test
|
in: test
|
DistSQL should be able to be executed in any mode, there are some unit tests need to be cleared,
Search the code `No Registry center to execute` in all unit test classes, remove and refactor them.
|
1.0
|
Remove useless assertions in unit test - DistSQL should be able to be executed in any mode, there are some unit tests need to be cleared,
Search the code `No Registry center to execute` in all unit test classes, remove and refactor them.
|
non_process
|
remove useless assertions in unit test distsql should be able to be executed in any mode there are some unit tests need to be cleared search the code no registry center to execute in all unit test classes remove and refactor them
| 0
|
22,173
| 30,723,908,679
|
IssuesEvent
|
2023-07-27 18:01:59
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
ServiceController's API got wrong service status
|
question area-System.ServiceProcess no-recent-activity needs-author-action
|
My env is Windows 11/.net 6 sdk/VS 2022.
I am trying to start my service if found the service is stopped.
What I manually to make the service to stop is stop the service and delete a critical file(a DLL file), so then the service cannot be started.
And then code will be go into the if statement. but after I call start() and WaitForStatus(ServiceControllerStatus.Running), the status got running. and if I change the call to WaitForStatus(ServiceControllerStatus.Stopped), it got stopped.
And the timeout never wait for 5 sec or 30 sec.
This is weird.
Do I call this API in a incorrect way?
To short the description, I just put the code here for you to reproduce.
`
static void Main(string[] args)
{
Console.WriteLine("Hello, World!");
ServiceController serviceController = null;
try
{
serviceController = new("service name");
serviceController.Refresh();
// here value is Stopped
if (serviceController.Status == ServiceControllerStatus.Stopped)
{
serviceController.Start();
serviceController.Refresh();
serviceController.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(5));
serviceController.Refresh();
var s = serviceController.Status;// here value is Running
}
}
catch (Exception ex)
{
}
}
`
|
1.0
|
ServiceController's API got wrong service status - My env is Windows 11/.net 6 sdk/VS 2022.
I am trying to start my service if found the service is stopped.
What I manually to make the service to stop is stop the service and delete a critical file(a DLL file), so then the service cannot be started.
And then code will be go into the if statement. but after I call start() and WaitForStatus(ServiceControllerStatus.Running), the status got running. and if I change the call to WaitForStatus(ServiceControllerStatus.Stopped), it got stopped.
And the timeout never wait for 5 sec or 30 sec.
This is weird.
Do I call this API in a incorrect way?
To short the description, I just put the code here for you to reproduce.
`
static void Main(string[] args)
{
Console.WriteLine("Hello, World!");
ServiceController serviceController = null;
try
{
serviceController = new("service name");
serviceController.Refresh();
// here value is Stopped
if (serviceController.Status == ServiceControllerStatus.Stopped)
{
serviceController.Start();
serviceController.Refresh();
serviceController.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(5));
serviceController.Refresh();
var s = serviceController.Status;// here value is Running
}
}
catch (Exception ex)
{
}
}
`
|
process
|
servicecontroller s api got wrong service status my env is windows net sdk vs i am trying to start my service if found the service is stopped what i manually to make the service to stop is stop the service and delete a critical file a dll file so then the service cannot be started and then code will be go into the if statement but after i call start and waitforstatus servicecontrollerstatus running the status got running and if i change the call to waitforstatus servicecontrollerstatus stopped it got stopped and the timeout never wait for sec or sec this is weird do i call this api in a incorrect way to short the description i just put the code here for you to reproduce static void main string args console writeline hello world servicecontroller servicecontroller null try servicecontroller new service name servicecontroller refresh here value is stopped if servicecontroller status servicecontrollerstatus stopped servicecontroller start servicecontroller refresh servicecontroller waitforstatus servicecontrollerstatus running timespan fromseconds servicecontroller refresh var s servicecontroller status here value is running catch exception ex
| 1
|
16,730
| 21,891,887,936
|
IssuesEvent
|
2022-05-20 03:13:09
|
sgaxun/sgaxun.github.io
|
https://api.github.com/repos/sgaxun/sgaxun.github.io
|
closed
|
Spring Boot 启动流程 | sgaxun's blog
|
Gitalk /post/spring-boot-start-process/
|
https://blog.sgaxun.me/post/spring-boot-start-process/
Spring Boot 的启动很简单,代码如下:
@SpringBootApplication
public class Application implements CommandLineRunner {
public stat...
|
1.0
|
Spring Boot 启动流程 | sgaxun's blog - https://blog.sgaxun.me/post/spring-boot-start-process/
Spring Boot 的启动很简单,代码如下:
@SpringBootApplication
public class Application implements CommandLineRunner {
public stat...
|
process
|
spring boot 启动流程 sgaxun s blog spring boot 的启动很简单,代码如下: springbootapplication public class application implements commandlinerunner public stat
| 1
|
7,803
| 19,426,430,219
|
IssuesEvent
|
2021-12-21 06:26:12
|
MicrosoftDocs/architecture-center
|
https://api.github.com/repos/MicrosoftDocs/architecture-center
|
closed
|
Enterprise integration using message broker and events
|
doc-enhancement cxp triaged architecture-center/svc reference-architecture/subsvc Pri2
|
Hi,
First of all, thank you. These pages are a very useful reference, and I'm really pleased they exist!
In regards to this page:
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/queues-events
I see that a Service Bus Queue feeds into Event Grid. I am curious if this is still the best advice?
I assume that Event Grid is used to trigger the logic app. However, the Peek-Lock Service Bus trigger will trigger a Logic App without the need for Event Grid, and without the need for polling (I think).
https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-servicebus
I think directly coupling the Logic App to Service Bus provides a more simple and robust architecture.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 58f4706b-af61-c661-279d-05e9e9abb79a
* Version Independent ID: d5a4b726-4c50-0851-edc5-04b60ee5ce89
* Content: [Enterprise integration with message broker and events - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/queues-events)
* Content Source: [docs/reference-architectures/enterprise-integration/queues-events.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/reference-architectures/enterprise-integration/queues-events.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @MattFarm
* Microsoft Alias: **pnp**
|
2.0
|
Enterprise integration using message broker and events - Hi,
First of all, thank you. These pages are a very useful reference, and I'm really pleased they exist!
In regards to this page:
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/queues-events
I see that a Service Bus Queue feeds into Event Grid. I am curious if this is still the best advice?
I assume that Event Grid is used to trigger the logic app. However, the Peek-Lock Service Bus trigger will trigger a Logic App without the need for Event Grid, and without the need for polling (I think).
https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-servicebus
I think directly coupling the Logic App to Service Bus provides a more simple and robust architecture.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 58f4706b-af61-c661-279d-05e9e9abb79a
* Version Independent ID: d5a4b726-4c50-0851-edc5-04b60ee5ce89
* Content: [Enterprise integration with message broker and events - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/enterprise-integration/queues-events)
* Content Source: [docs/reference-architectures/enterprise-integration/queues-events.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/reference-architectures/enterprise-integration/queues-events.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @MattFarm
* Microsoft Alias: **pnp**
|
non_process
|
enterprise integration using message broker and events hi first of all thank you these pages are a very useful reference and i m really pleased they exist in regards to this page i see that a service bus queue feeds into event grid i am curious if this is still the best advice i assume that event grid is used to trigger the logic app however the peek lock service bus trigger will trigger a logic app without the need for event grid and without the need for polling i think i think directly coupling the logic app to service bus provides a more simple and robust architecture document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service reference architecture github login mattfarm microsoft alias pnp
| 0
|
5,130
| 7,917,091,894
|
IssuesEvent
|
2018-07-04 08:46:09
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
Uniform process_graph_id, and job_id generation across different backends
|
in discussion job management process graph management
|
As addressed during the last biweekly development prints it would be a useful feature for the handling of processes if the ID’s, in particular the process_graph_id and eventually also the job_id would be uniformly generated across the different backends. This would make the handling of distributed processes somehow simpler in particular also when performing comparison tests across different backends, i.e a given process can be generated a accessed via the same id on all backends.
In a scenario where a process is created and launched on different backends, that may reside on different clouds for example, having a uniformly created UUID would be also easier to handle or debug in a kind of Task scheduler implementation.
One possible combination would be to use the following scheme
process_graph_id = hash(sha256(process_graph + [*user_id?]))
Where some fields of the process_graph, if necessary, could also be omitted in order to generalise further, for example "from":"2017-01-01", "to":"2017-01-31"
job_id = hash(process_graph_id + #instance_sequence_nr))
This would allow for example a unique, uniform and directional mapping between process_graph (eventually per user) and job_id.
|
1.0
|
Uniform process_graph_id, and job_id generation across different backends - As addressed during the last biweekly development prints it would be a useful feature for the handling of processes if the ID’s, in particular the process_graph_id and eventually also the job_id would be uniformly generated across the different backends. This would make the handling of distributed processes somehow simpler in particular also when performing comparison tests across different backends, i.e a given process can be generated a accessed via the same id on all backends.
In a scenario where a process is created and launched on different backends, that may reside on different clouds for example, having a uniformly created UUID would be also easier to handle or debug in a kind of Task scheduler implementation.
One possible combination would be to use the following scheme
process_graph_id = hash(sha256(process_graph + [*user_id?]))
Where some fields of the process_graph, if necessary, could also be omitted in order to generalise further, for example "from":"2017-01-01", "to":"2017-01-31"
job_id = hash(process_graph_id + #instance_sequence_nr))
This would allow for example a unique, uniform and directional mapping between process_graph (eventually per user) and job_id.
|
process
|
uniform process graph id and job id generation across different backends as addressed during the last biweekly development prints it would be a useful feature for the handling of processes if the id’s in particular the process graph id and eventually also the job id would be uniformly generated across the different backends this would make the handling of distributed processes somehow simpler in particular also when performing comparison tests across different backends i e a given process can be generated a accessed via the same id on all backends in a scenario where a process is created and launched on different backends that may reside on different clouds for example having a uniformly created uuid would be also easier to handle or debug in a kind of task scheduler implementation one possible combination would be to use the following scheme process graph id hash process graph where some fields of the process graph if necessary could also be omitted in order to generalise further for example from to job id hash process graph id instance sequence nr this would allow for example a unique uniform and directional mapping between process graph eventually per user and job id
| 1
|
18,367
| 24,495,644,165
|
IssuesEvent
|
2022-10-10 08:26:14
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
reopened
|
TPV regulation of mitotic metaphase/anaphase transition
|
cell cycle and DNA processes parent relationship query
|
remove the parent
GO:0010965 regulation of mitotic sister chromatid separation
from
GO:0030071 regulation of mitotic metaphase/anaphase transition
they are related but the relationship is odd.
|
1.0
|
TPV regulation of mitotic metaphase/anaphase transition - remove the parent
GO:0010965 regulation of mitotic sister chromatid separation
from
GO:0030071 regulation of mitotic metaphase/anaphase transition
they are related but the relationship is odd.
|
process
|
tpv regulation of mitotic metaphase anaphase transition remove the parent go regulation of mitotic sister chromatid separation from go regulation of mitotic metaphase anaphase transition they are related but the relationship is odd
| 1
|
393,573
| 11,621,713,097
|
IssuesEvent
|
2020-02-27 03:57:00
|
dhowe/spectre
|
https://api.github.com/repos/dhowe/spectre
|
opened
|
Dark-ad 'share' button has broken text display
|
high-priority needs-quick-fix
|
Please make sure this is fixed for ALL buttons

|
1.0
|
Dark-ad 'share' button has broken text display - Please make sure this is fixed for ALL buttons

|
non_process
|
dark ad share button has broken text display please make sure this is fixed for all buttons
| 0
|
3,582
| 6,620,700,826
|
IssuesEvent
|
2017-09-21 16:23:35
|
cptechinc/soft-6-ecomm
|
https://api.github.com/repos/cptechinc/soft-6-ecomm
|
closed
|
Fix Includes for head and foot
|
PHP Processwire
|
```
<?php
include('./_head.php');
?>
<?php
include('./_foot.php');
?>
```
Should look like
```
<?php include('./_head.php'); ?>
<?php include('./_foot.php'); ?>
```
|
1.0
|
Fix Includes for head and foot - ```
<?php
include('./_head.php');
?>
<?php
include('./_foot.php');
?>
```
Should look like
```
<?php include('./_head.php'); ?>
<?php include('./_foot.php'); ?>
```
|
process
|
fix includes for head and foot php include head php php include foot php should look like
| 1
|
657,231
| 21,789,220,100
|
IssuesEvent
|
2022-05-14 16:16:12
|
misskey-dev/misskey
|
https://api.github.com/repos/misskey-dev/misskey
|
closed
|
Crash when processing certain PNGs in ARM64
|
🐛Bug ⚙️Server 🔥high priority
|
<!--
Thanks for reporting!
First, in order to avoid duplicate Issues, please search to see if the problem you found has already been reported.
-->
## 💡 Summary
ARM64上で稼働しているインスタンスで特定のPNGを処理しようとするとクラッシュします。
```
malloc(): corrupted top size
ERR * [core cluster] [1] died :(
```
特定のPNGとして Pixelfed のデフォルト [favicon.png](https://github.com/pixelfed/pixelfed/raw/28bbd63300fbe81b969039373eed8a70a77b4938/public/img/favicon.png) などが該当するため、URLプレビューで Pixelfed インスタンスを表示しようとした時などにクラッシュします。
起きる環境の例
```
Ubuntu 20.04.4 on AWS t4g
Linux hostname 5.13.0-1022-aws #24~20.04.1-Ubuntu SMP Thu Apr 7 22:14:11 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
Ubuntu 20.04.4 on OCI A1
Linux hostname 5.13.0-1027-oracle #32~20.04.1-Ubuntu SMP Fri Apr 15 06:01:57 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
```
sharpが0.30.0以降だと起きる
```
OK sharp 0.29.3
NG sharp 0.30.0 - 0.30.4
```
<!-- Tell us what the bug is -->
## 🥰 Expected Behavior
クラッシュしない
<!--- Tell us what should happen -->
## 🤬 Actual Behavior
クラッシュする
<!--- Tell us what happens instead of the expected behavior -->
## 📝 Steps to Reproduce
1. ARM64上でMisskeyインスタンスを構築する
2. PixelfedインスタンスのURLを投稿してURLプレビューを表示させようとする
## 📌 Environment
Misskey version: 12.110.1
Summary を参照
|
1.0
|
Crash when processing certain PNGs in ARM64 - <!--
Thanks for reporting!
First, in order to avoid duplicate Issues, please search to see if the problem you found has already been reported.
-->
## 💡 Summary
ARM64上で稼働しているインスタンスで特定のPNGを処理しようとするとクラッシュします。
```
malloc(): corrupted top size
ERR * [core cluster] [1] died :(
```
特定のPNGとして Pixelfed のデフォルト [favicon.png](https://github.com/pixelfed/pixelfed/raw/28bbd63300fbe81b969039373eed8a70a77b4938/public/img/favicon.png) などが該当するため、URLプレビューで Pixelfed インスタンスを表示しようとした時などにクラッシュします。
起きる環境の例
```
Ubuntu 20.04.4 on AWS t4g
Linux hostname 5.13.0-1022-aws #24~20.04.1-Ubuntu SMP Thu Apr 7 22:14:11 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
Ubuntu 20.04.4 on OCI A1
Linux hostname 5.13.0-1027-oracle #32~20.04.1-Ubuntu SMP Fri Apr 15 06:01:57 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
```
sharpが0.30.0以降だと起きる
```
OK sharp 0.29.3
NG sharp 0.30.0 - 0.30.4
```
<!-- Tell us what the bug is -->
## 🥰 Expected Behavior
クラッシュしない
<!--- Tell us what should happen -->
## 🤬 Actual Behavior
クラッシュする
<!--- Tell us what happens instead of the expected behavior -->
## 📝 Steps to Reproduce
1. ARM64上でMisskeyインスタンスを構築する
2. PixelfedインスタンスのURLを投稿してURLプレビューを表示させようとする
## 📌 Environment
Misskey version: 12.110.1
Summary を参照
|
non_process
|
crash when processing certain pngs in thanks for reporting first in order to avoid duplicate issues please search to see if the problem you found has already been reported 💡 summary 。 malloc corrupted top size err died 特定のpngとして pixelfed のデフォルト などが該当するため、urlプレビューで pixelfed インスタンスを表示しようとした時などにクラッシュします。 起きる環境の例 ubuntu on aws linux hostname aws ubuntu smp thu apr utc gnu linux ubuntu on oci linux hostname oracle ubuntu smp fri apr utc gnu linux ok sharp ng sharp 🥰 expected behavior クラッシュしない 🤬 actual behavior クラッシュする 📝 steps to reproduce pixelfedインスタンスのurlを投稿してurlプレビューを表示させようとする 📌 environment misskey version summary を参照
| 0
|
8,747
| 11,872,768,428
|
IssuesEvent
|
2020-03-26 16:17:38
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
closed
|
Upgrade Bikeshed to Python 3
|
type:process
|
Bikeshed is about to port to Python 3 soon, so it's likely we'll need to update our build scripts to match.
|
1.0
|
Upgrade Bikeshed to Python 3 - Bikeshed is about to port to Python 3 soon, so it's likely we'll need to update our build scripts to match.
|
process
|
upgrade bikeshed to python bikeshed is about to port to python soon so it s likely we ll need to update our build scripts to match
| 1
|
512
| 2,982,201,076
|
IssuesEvent
|
2015-07-17 09:25:30
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
На главном портале в документах, починить диалог открытия доступа к документу.
|
hi priority In process of testing test version
|
еще несколько дней назад работало.
Работает на бэте:

не работает на альфе:

|
1.0
|
На главном портале в документах, починить диалог открытия доступа к документу. - еще несколько дней назад работало.
Работает на бэте:

не работает на альфе:

|
process
|
на главном портале в документах починить диалог открытия доступа к документу еще несколько дней назад работало работает на бэте не работает на альфе
| 1
|
9,934
| 12,970,231,194
|
IssuesEvent
|
2020-07-21 09:01:28
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Add a few sanity tests for Studio in the CLI pipeline
|
kind/improvement process/candidate
|
Curretly, the only test for Studio in the CLI pipeline is one where it is launched, and HTTP request's response code is checked. This essentially tests if Client was generated correctly and that Studio launched. It does not check if Studio shows data correctly though.
In an ideal world, the entirety of Studio's test suite would run on our Buildkite agents, but that is unrealistic (and unnecessary), since it would bloat build times. So my plan is to run a few sanity tests without Cypress on the pipeline as follows.
1. Start studio-server
2. Send a Prisma Client request to it
3. See how it responds.
This means we do not test Studio's UI, but rather the requests that go out of it. I think this is a pretty good compromise, since the UI itself is tested in Studio's CI pipeline anyway (and block releases). As such, if something in Client changes that has the potential to break Studio, these tests will likely catch it.
|
1.0
|
Add a few sanity tests for Studio in the CLI pipeline - Curretly, the only test for Studio in the CLI pipeline is one where it is launched, and HTTP request's response code is checked. This essentially tests if Client was generated correctly and that Studio launched. It does not check if Studio shows data correctly though.
In an ideal world, the entirety of Studio's test suite would run on our Buildkite agents, but that is unrealistic (and unnecessary), since it would bloat build times. So my plan is to run a few sanity tests without Cypress on the pipeline as follows.
1. Start studio-server
2. Send a Prisma Client request to it
3. See how it responds.
This means we do not test Studio's UI, but rather the requests that go out of it. I think this is a pretty good compromise, since the UI itself is tested in Studio's CI pipeline anyway (and block releases). As such, if something in Client changes that has the potential to break Studio, these tests will likely catch it.
|
process
|
add a few sanity tests for studio in the cli pipeline curretly the only test for studio in the cli pipeline is one where it is launched and http request s response code is checked this essentially tests if client was generated correctly and that studio launched it does not check if studio shows data correctly though in an ideal world the entirety of studio s test suite would run on our buildkite agents but that is unrealistic and unnecessary since it would bloat build times so my plan is to run a few sanity tests without cypress on the pipeline as follows start studio server send a prisma client request to it see how it responds this means we do not test studio s ui but rather the requests that go out of it i think this is a pretty good compromise since the ui itself is tested in studio s ci pipeline anyway and block releases as such if something in client changes that has the potential to break studio these tests will likely catch it
| 1
|
219,940
| 16,857,322,750
|
IssuesEvent
|
2021-06-21 08:31:48
|
DAGsHub/fds
|
https://api.github.com/repos/DAGsHub/fds
|
closed
|
Add About section for FDS
|
documentation
|
There is no about section for FDS. For example refer https://gitless.com/#main for inspiration
|
1.0
|
Add About section for FDS - There is no about section for FDS. For example refer https://gitless.com/#main for inspiration
|
non_process
|
add about section for fds there is no about section for fds for example refer for inspiration
| 0
|
165,099
| 26,096,988,100
|
IssuesEvent
|
2022-12-26 21:58:11
|
sboxgame/issues
|
https://api.github.com/repos/sboxgame/issues
|
closed
|
Add `DamageInfo.WithTags` method
|
api design
|
### What it is?
You have to add tags to damage info one at a time with `WithTag(sometag)`.
### What should it be?
Allow adding multiple tags with a single line using `WithTags(my, tags, are, cool)`, either as a param list or IEnumerable
|
1.0
|
Add `DamageInfo.WithTags` method - ### What it is?
You have to add tags to damage info one at a time with `WithTag(sometag)`.
### What should it be?
Allow adding multiple tags with a single line using `WithTags(my, tags, are, cool)`, either as a param list or IEnumerable
|
non_process
|
add damageinfo withtags method what it is you have to add tags to damage info one at a time with withtag sometag what should it be allow adding multiple tags with a single line using withtags my tags are cool either as a param list or ienumerable
| 0
|
417,741
| 28,110,900,965
|
IssuesEvent
|
2023-03-31 07:05:21
|
Sheemo/ped
|
https://api.github.com/repos/Sheemo/ped
|
opened
|
Inconsistencies in User Guide for T/ tag
|
severity.VeryLow type.DocumentationBug
|
Sometimes it reads TEAM_NAME (under show), sometimes it reads TEAMNAME (under remove), sometimes it reads TEAMTAG (under edit)
<!--session: 1680242662877-68f1f0b8-49bf-45a2-b0d6-9c11a4d5249c-->
<!--Version: Web v3.4.7-->
|
1.0
|
Inconsistencies in User Guide for T/ tag - Sometimes it reads TEAM_NAME (under show), sometimes it reads TEAMNAME (under remove), sometimes it reads TEAMTAG (under edit)
<!--session: 1680242662877-68f1f0b8-49bf-45a2-b0d6-9c11a4d5249c-->
<!--Version: Web v3.4.7-->
|
non_process
|
inconsistencies in user guide for t tag sometimes it reads team name under show sometimes it reads teamname under remove sometimes it reads teamtag under edit
| 0
|
20,778
| 27,515,563,304
|
IssuesEvent
|
2023-03-06 11:43:28
|
HPSCTerrSys/TSMP
|
https://api.github.com/repos/HPSCTerrSys/TSMP
|
closed
|
ToDo for TSMP on github: see what we do with the standard pre and postpro tools from Shrestha that are still on git in bonn (Bugzilla Bug 54)
|
pre/post processing
|
This issue was created automatically with bugzilla2github
# Bugzilla Bug 54
Date: 2020-02-04 14:14:19 +0100
From: Klaus Goergen <<k.goergen@fz-juelich.de>>
To: Abouzar Ghasemi, <<a.ghasemi@fz-juelich.de>>
CC: a.ghasemi@fz-juelich.de, n.wagner@fz-juelich.de
Last updated: 2020-02-04 14:14:19 +0100
## Comment 90
Date: 2020-02-04 14:14:19 +0100
From: Klaus Goergen <<k.goergen@fz-juelich.de>>
find a solution for this and stage the tools also on github
double chekc with prabhakar whether this is OK
|
1.0
|
ToDo for TSMP on github: see what we do with the standard pre and postpro tools from Shrestha that are still on git in bonn (Bugzilla Bug 54) - This issue was created automatically with bugzilla2github
# Bugzilla Bug 54
Date: 2020-02-04 14:14:19 +0100
From: Klaus Goergen <<k.goergen@fz-juelich.de>>
To: Abouzar Ghasemi, <<a.ghasemi@fz-juelich.de>>
CC: a.ghasemi@fz-juelich.de, n.wagner@fz-juelich.de
Last updated: 2020-02-04 14:14:19 +0100
## Comment 90
Date: 2020-02-04 14:14:19 +0100
From: Klaus Goergen <<k.goergen@fz-juelich.de>>
find a solution for this and stage the tools also on github
double chekc with prabhakar whether this is OK
|
process
|
todo for tsmp on github see what we do with the standard pre and postpro tools from shrestha that are still on git in bonn bugzilla bug this issue was created automatically with bugzilla bug date from klaus goergen lt gt to abouzar ghasemi lt gt cc a ghasemi fz juelich de n wagner fz juelich de last updated comment date from klaus goergen lt gt find a solution for this and stage the tools also on github double chekc with prabhakar whether this is ok
| 1
|
14,981
| 18,511,961,387
|
IssuesEvent
|
2021-10-20 05:08:04
|
juzi5201314/maop
|
https://api.github.com/repos/juzi5201314/maop
|
closed
|
改进password的配置
|
http processing
|
为了不储存密码明文,将密码hash后放到data path中。
1. 在终端与用户交互获取密码
2. 通过`--no-password`启动,禁用身份验证(禁用需要验证的操作
3. 通过`password`子命令手动设置密码
|
1.0
|
改进password的配置 - 为了不储存密码明文,将密码hash后放到data path中。
1. 在终端与用户交互获取密码
2. 通过`--no-password`启动,禁用身份验证(禁用需要验证的操作
3. 通过`password`子命令手动设置密码
|
process
|
改进password的配置 为了不储存密码明文,将密码hash后放到data path中。 在终端与用户交互获取密码 通过 no password 启动,禁用身份验证 禁用需要验证的操作 通过 password 子命令手动设置密码
| 1
|
19,128
| 25,183,502,195
|
IssuesEvent
|
2022-11-11 15:43:27
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
opened
|
Provide a type conversion / cast processor
|
enhancement plugin - processor
|
**Is your feature request related to a problem? Please describe.**
Some pipelines have Event values in one type (e.g. string), but want to convert them to another type (e.g. int).
**Describe the solution you'd like**
Provide a new convert processor along with the other Mutate Event Processors.
```
processor
- convert_entries:
entries:
- from_key: "mySource"
to_key: "myTarget"
type: integer
```
The default value for `to_key` can be the `from_key`. So this could be simplified in some cases:
```
processor
- convert_entries:
entries:
- from_key: "http_status"
type: integer
```
**Additional context**
With conditional routing and expressions this can help pipeline authors perform better comparisons. It also allows for sending data to OpenSearch in a more desirable format.
See #2009 for a grok-based solution for a similar problem.
|
1.0
|
Provide a type conversion / cast processor - **Is your feature request related to a problem? Please describe.**
Some pipelines have Event values in one type (e.g. string), but want to convert them to another type (e.g. int).
**Describe the solution you'd like**
Provide a new convert processor along with the other Mutate Event Processors.
```
processor
- convert_entries:
entries:
- from_key: "mySource"
to_key: "myTarget"
type: integer
```
The default value for `to_key` can be the `from_key`. So this could be simplified in some cases:
```
processor
- convert_entries:
entries:
- from_key: "http_status"
type: integer
```
**Additional context**
With conditional routing and expressions this can help pipeline authors perform better comparisons. It also allows for sending data to OpenSearch in a more desirable format.
See #2009 for a grok-based solution for a similar problem.
|
process
|
provide a type conversion cast processor is your feature request related to a problem please describe some pipelines have event values in one type e g string but want to convert them to another type e g int describe the solution you d like provide a new convert processor along with the other mutate event processors processor convert entries entries from key mysource to key mytarget type integer the default value for to key can be the from key so this could be simplified in some cases processor convert entries entries from key http status type integer additional context with conditional routing and expressions this can help pipeline authors perform better comparisons it also allows for sending data to opensearch in a more desirable format see for a grok based solution for a similar problem
| 1
|
6,606
| 9,692,946,092
|
IssuesEvent
|
2019-05-24 14:55:35
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Storage: 'tearDownModule' flakes with 409
|
api: storage flaky testing type: process
|
From: https://source.cloud.google.com/results/invocations/b09e6e84-ca6e-427f-8014-887b811578bc/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log
```python
def tearDownModule():
errors = (exceptions.Conflict, exceptions.TooManyRequests)
retry = RetryErrors(errors, max_tries=9)
> retry(Config.TEST_BUCKET.delete)(force=True)
...
E Conflict: 409 DELETE https://www.googleapis.com/storage/v1/b/new_1552405494808: The bucket you tried to delete was not empty.
```
|
1.0
|
Storage: 'tearDownModule' flakes with 409 - From: https://source.cloud.google.com/results/invocations/b09e6e84-ca6e-427f-8014-887b811578bc/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fstorage/log
```python
def tearDownModule():
errors = (exceptions.Conflict, exceptions.TooManyRequests)
retry = RetryErrors(errors, max_tries=9)
> retry(Config.TEST_BUCKET.delete)(force=True)
...
E Conflict: 409 DELETE https://www.googleapis.com/storage/v1/b/new_1552405494808: The bucket you tried to delete was not empty.
```
|
process
|
storage teardownmodule flakes with from python def teardownmodule errors exceptions conflict exceptions toomanyrequests retry retryerrors errors max tries retry config test bucket delete force true e conflict delete the bucket you tried to delete was not empty
| 1
|
13,986
| 16,761,219,546
|
IssuesEvent
|
2021-06-13 20:36:07
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
color balance rgb gamut clipping
|
scope: image processing
|
Hi
I am not sure if this is a bug, or if I am just too stupid to get my head around the color science. For some learning exercise I've loaded some artificial test images and played with the color balance rgb module.
According to the manual:
> At its output, color balance RGB checks that the graded colors fit inside the pipeline RGB color space (Rec 2020 by default) and applies a soft chroma clipping at constant luminance and hue. This prevents the out-of-gamut colors that can quickly be produced when increasing chroma and saturation.
But to me, the color changes are rather surprising. @aurelienpierre I hope you don't mind to elaborate what's happening and whether this is the intended behaviour.
I've downloaded one of the graphs from https://eng.aurelienpierre.com/2021/04/the-srgb-book-of-color/ and pushed chroma respectively saturation (beyond reason, I know).
What surprises me, is that blue (hue 243) turns cyan (hue 185) before it eventually turns black when pushing chroma. Isn't that contrary to what's written in the manual?
Pushing saturation skips the cyan conversion and just goes black.
Activating filmic with default settings mitigates the cyan issue, since everything high luminance is desaturrated.
I am not sure if my approach was doomed from the start due to testing with a png, but I can also reproduce with a tif file.
EDIT: There are actually two things I am wondering:
1. Why is there a hue shift to cyan? This doesn't botter me really, since it seems to not happen when filmic is switched on
2. Shouldn't increased chroma clipping result in white instead of black? Looking at the 2nd image, right corner, this would avoid the harsh transition.
darktable 3.5.0 98ba99f784736c7c261e5f28209e33588142db48 (recent master)
Linux, OpenCL active - but can reproduce without.
**Screenshots**
chroma pushed - filmic off

chroma pushed - filmic on

saturation pushed - filmic off (filmic on does not change the clipping)

**Test Images**
[test-images.zip](https://github.com/darktable-org/darktable/files/6563763/test-images.zip)
|
1.0
|
color balance rgb gamut clipping - Hi
I am not sure if this is a bug, or if I am just too stupid to get my head around the color science. For some learning exercise I've loaded some artificial test images and played with the color balance rgb module.
According to the manual:
> At its output, color balance RGB checks that the graded colors fit inside the pipeline RGB color space (Rec 2020 by default) and applies a soft chroma clipping at constant luminance and hue. This prevents the out-of-gamut colors that can quickly be produced when increasing chroma and saturation.
But to me, the color changes are rather surprising. @aurelienpierre I hope you don't mind to elaborate what's happening and whether this is the intended behaviour.
I've downloaded one of the graphs from https://eng.aurelienpierre.com/2021/04/the-srgb-book-of-color/ and pushed chroma respectively saturation (beyond reason, I know).
What surprises me, is that blue (hue 243) turns cyan (hue 185) before it eventually turns black when pushing chroma. Isn't that contrary to what's written in the manual?
Pushing saturation skips the cyan conversion and just goes black.
Activating filmic with default settings mitigates the cyan issue, since everything high luminance is desaturrated.
I am not sure if my approach was doomed from the start due to testing with a png, but I can also reproduce with a tif file.
EDIT: There are actually two things I am wondering:
1. Why is there a hue shift to cyan? This doesn't botter me really, since it seems to not happen when filmic is switched on
2. Shouldn't increased chroma clipping result in white instead of black? Looking at the 2nd image, right corner, this would avoid the harsh transition.
darktable 3.5.0 98ba99f784736c7c261e5f28209e33588142db48 (recent master)
Linux, OpenCL active - but can reproduce without.
**Screenshots**
chroma pushed - filmic off

chroma pushed - filmic on

saturation pushed - filmic off (filmic on does not change the clipping)

**Test Images**
[test-images.zip](https://github.com/darktable-org/darktable/files/6563763/test-images.zip)
|
process
|
color balance rgb gamut clipping hi i am not sure if this is a bug or if i am just too stupid to get my head around the color science for some learning exercise i ve loaded some artificial test images and played with the color balance rgb module according to the manual at its output color balance rgb checks that the graded colors fit inside the pipeline rgb color space rec by default and applies a soft chroma clipping at constant luminance and hue this prevents the out of gamut colors that can quickly be produced when increasing chroma and saturation but to me the color changes are rather surprising aurelienpierre i hope you don t mind to elaborate what s happening and whether this is the intended behaviour i ve downloaded one of the graphs from and pushed chroma respectively saturation beyond reason i know what surprises me is that blue hue turns cyan hue before it eventually turns black when pushing chroma isn t that contrary to what s written in the manual pushing saturation skips the cyan conversion and just goes black activating filmic with default settings mitigates the cyan issue since everything high luminance is desaturrated i am not sure if my approach was doomed from the start due to testing with a png but i can also reproduce with a tif file edit there are actually two things i am wondering why is there a hue shift to cyan this doesn t botter me really since it seems to not happen when filmic is switched on shouldn t increased chroma clipping result in white instead of black looking at the image right corner this would avoid the harsh transition darktable recent master linux opencl active but can reproduce without screenshots chroma pushed filmic off chroma pushed filmic on saturation pushed filmic off filmic on does not change the clipping test images
| 1
|
17,087
| 22,595,997,155
|
IssuesEvent
|
2022-06-29 03:06:29
|
pyanodon/pybugreports
|
https://api.github.com/repos/pyanodon/pybugreports
|
closed
|
248k mod error
|
postprocess-fail
|
### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [x] Pypostprocessing failure
- [ ] Other
### What is the problem?

### Steps to reproduce
just loading it will give an error.
### Additional context
it just all py mods and 248k and it throws that error.
### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9006671/factorio-current.log)
|
1.0
|
248k mod error - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [X] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [x] Pypostprocessing failure
- [ ] Other
### What is the problem?

### Steps to reproduce
just loading it will give an error.
### Additional context
it just all py mods and 248k and it throws that error.
### Log file
[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9006671/factorio-current.log)
|
process
|
mod error mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem steps to reproduce just loading it will give an error additional context it just all py mods and and it throws that error log file
| 1
|
769,318
| 27,001,470,122
|
IssuesEvent
|
2023-02-10 08:11:47
|
WavesHQ/bridge
|
https://api.github.com/repos/WavesHQ/bridge
|
closed
|
chore(EvmTransactionConfirmer): Follow up tasks from PR #256
|
needs/area needs/triage kind/feature needs/priority server
|
<!-- Please only use this template for submitting enhancement/feature requests -->
#### What would you like to be added:
- [x] Keeping test code DRY
- [x] Shift `TestingExampleModule` into its own file
- [x] Shift test/app fixture logic into the `BridgeServerTestingApp` constructor (or wherever appropriate)
- [x] Persistence of 'last block number' instead of passing it in via an endpoint
- [ ] Implementation of the confirmer as a `service` that runs every X minutes (ie. cron job-ed)
#### Why is this needed:
Follow up on comments from #256 . To merge a baseline in first - nits can come later
|
1.0
|
chore(EvmTransactionConfirmer): Follow up tasks from PR #256 - <!-- Please only use this template for submitting enhancement/feature requests -->
#### What would you like to be added:
- [x] Keeping test code DRY
- [x] Shift `TestingExampleModule` into its own file
- [x] Shift test/app fixture logic into the `BridgeServerTestingApp` constructor (or wherever appropriate)
- [x] Persistence of 'last block number' instead of passing it in via an endpoint
- [ ] Implementation of the confirmer as a `service` that runs every X minutes (ie. cron job-ed)
#### Why is this needed:
Follow up on comments from #256 . To merge a baseline in first - nits can come later
|
non_process
|
chore evmtransactionconfirmer follow up tasks from pr what would you like to be added keeping test code dry shift testingexamplemodule into its own file shift test app fixture logic into the bridgeservertestingapp constructor or wherever appropriate persistence of last block number instead of passing it in via an endpoint implementation of the confirmer as a service that runs every x minutes ie cron job ed why is this needed follow up on comments from to merge a baseline in first nits can come later
| 0
|
239,282
| 7,788,324,319
|
IssuesEvent
|
2018-06-07 03:54:07
|
fossasia/open-event-server
|
https://api.github.com/repos/fossasia/open-event-server
|
closed
|
Travis build is failing
|
Priority: URGENT help-wanted
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Since the last merge, travis has been failing, it has to do with the SQL being offline or being unable to connect with the database, it'd be a nice idea to add exception handling to such cases, and maybe figure out how to make it work again.
|
1.0
|
Travis build is failing - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Since the last merge, travis has been failing, it has to do with the SQL being offline or being unable to connect with the database, it'd be a nice idea to add exception handling to such cases, and maybe figure out how to make it work again.
|
non_process
|
travis build is failing describe the bug since the last merge travis has been failing it has to do with the sql being offline or being unable to connect with the database it d be a nice idea to add exception handling to such cases and maybe figure out how to make it work again
| 0
|
8,107
| 11,300,637,598
|
IssuesEvent
|
2020-01-17 14:02:44
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
penetration peg and penetration hyphae
|
multi-species process
|
GO:0075201 formation of symbiont penetration hypha for entry into host
has the definition
The assembly by the symbiont of a threadlike, tubular structure, which may contain multiple nuclei and may or may not be divided internally by septa or cross-walls, for the purpose of penetration into its host organism. In the case of an appressorium existing, **this term is defined in further details as the process in which the symbiont penetration peg expands to form a hypha** which traverses the epidermal cell and emerges into the intercellular space of the mesophyll tissue. The host is defined as the larger of the organisms involved in a symbiotic interaction.
@CuzickA what is the difference between a penetration hyphae and a penetration peg?
|
1.0
|
penetration peg and penetration hyphae - GO:0075201 formation of symbiont penetration hypha for entry into host
has the definition
The assembly by the symbiont of a threadlike, tubular structure, which may contain multiple nuclei and may or may not be divided internally by septa or cross-walls, for the purpose of penetration into its host organism. In the case of an appressorium existing, **this term is defined in further details as the process in which the symbiont penetration peg expands to form a hypha** which traverses the epidermal cell and emerges into the intercellular space of the mesophyll tissue. The host is defined as the larger of the organisms involved in a symbiotic interaction.
@CuzickA what is the difference between a penetration hyphae and a penetration peg?
|
process
|
penetration peg and penetration hyphae go formation of symbiont penetration hypha for entry into host has the definition the assembly by the symbiont of a threadlike tubular structure which may contain multiple nuclei and may or may not be divided internally by septa or cross walls for the purpose of penetration into its host organism in the case of an appressorium existing this term is defined in further details as the process in which the symbiont penetration peg expands to form a hypha which traverses the epidermal cell and emerges into the intercellular space of the mesophyll tissue the host is defined as the larger of the organisms involved in a symbiotic interaction cuzicka what is the difference between a penetration hyphae and a penetration peg
| 1
|
237,032
| 26,078,753,689
|
IssuesEvent
|
2022-12-25 01:05:09
|
gabriel-milan/denoising-autoencoder
|
https://api.github.com/repos/gabriel-milan/denoising-autoencoder
|
opened
|
CVE-2022-40898 (Medium) detected in wheel-0.36.2-py2.py3-none-any.whl
|
security vulnerability
|
## CVE-2022-40898 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>wheel-0.36.2-py2.py3-none-any.whl</b></p></summary>
<p>A built-package format for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/65/63/39d04c74222770ed1589c0eaba06c05891801219272420b40311cd60c880/wheel-0.36.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/65/63/39d04c74222770ed1589c0eaba06c05891801219272420b40311cd60c880/wheel-0.36.2-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /training/requirements.txt</p>
<p>Path to vulnerable library: /training/requirements.txt,/server/app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **wheel-0.36.2-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/denoising-autoencoder/commit/22186005a9ff5cf052b53f8bb5aa092b9ea8a670">22186005a9ff5cf052b53f8bb5aa092b9ea8a670</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue discovered in Python Packaging Authority (PyPA) Wheel 0.37.1 and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli.
<p>Publish Date: 2022-12-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40898>CVE-2022-40898</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-12-23</p>
<p>Fix Resolution: wheel 0.38.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-40898 (Medium) detected in wheel-0.36.2-py2.py3-none-any.whl - ## CVE-2022-40898 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>wheel-0.36.2-py2.py3-none-any.whl</b></p></summary>
<p>A built-package format for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/65/63/39d04c74222770ed1589c0eaba06c05891801219272420b40311cd60c880/wheel-0.36.2-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/65/63/39d04c74222770ed1589c0eaba06c05891801219272420b40311cd60c880/wheel-0.36.2-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /training/requirements.txt</p>
<p>Path to vulnerable library: /training/requirements.txt,/server/app/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **wheel-0.36.2-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/denoising-autoencoder/commit/22186005a9ff5cf052b53f8bb5aa092b9ea8a670">22186005a9ff5cf052b53f8bb5aa092b9ea8a670</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue discovered in Python Packaging Authority (PyPA) Wheel 0.37.1 and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli.
<p>Publish Date: 2022-12-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40898>CVE-2022-40898</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-12-23</p>
<p>Fix Resolution: wheel 0.38.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in wheel none any whl cve medium severity vulnerability vulnerable library wheel none any whl a built package format for python library home page a href path to dependency file training requirements txt path to vulnerable library training requirements txt server app requirements txt dependency hierarchy x wheel none any whl vulnerable library found in head commit a href found in base branch master vulnerability details an issue discovered in python packaging authority pypa wheel and earlier allows remote attackers to cause a denial of service via attacker controlled input to wheel cli publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution wheel step up your open source security game with mend
| 0
|
7,642
| 10,737,536,146
|
IssuesEvent
|
2019-10-29 13:17:34
|
dotnet/coreclr
|
https://api.github.com/repos/dotnet/coreclr
|
opened
|
Getting process information is few times slower on Linux
|
area-System.Diagnostics.Process os-linux tenet-performance
|
Getting process information is few times slower on Linux:
| Slower | Lin/Win | Win Median (ns) | Lin Median (ns) | Modality|
| -------------------------------------------------- | -------:| ---------------:| ---------------:| --------:|
| System.Diagnostics.Perf_Process.GetProcesses | 12.94 | 1048505.00 | 13562765.47 | |
| System.Diagnostics.Perf_Process.GetProcessesByName | 4.73 | 1059057.08 | 5010892.56 | |
| System.Diagnostics.Perf_Process.GetCurrentProcess | 4.49 | 101.52 | 455.68 | |
Important: `System.Diagnostics.Perf_Process.GetProcesses` fetches the data for all running processes and comparing two different OSes which run different number of processes is not apples-to-apples comparison. However, `GetProcessesByName` and `GetCurrentProcess` perform the same work and moreover `GetCurrentProcess` is frequently used API.
How to run the [benchmarks](https://github.com/dotnet/performance/blob/400abfd1c1a7a21ca4787f6dbd0a268e29626b56/src/benchmarks/micro/corefx/System.Diagnostics/Perf_Process.cs):
```cmd
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Diagnostics.Perf_Process.Get*'
```
Recommended profilers are [PerfCollect](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-corefx.md#PerfCollect) and [VTune](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-corefx.md#VTune).
There is a high chance that we are already using the best available API and the limiting factor is OS here.
|
1.0
|
Getting process information is few times slower on Linux - Getting process information is few times slower on Linux:
| Slower | Lin/Win | Win Median (ns) | Lin Median (ns) | Modality|
| -------------------------------------------------- | -------:| ---------------:| ---------------:| --------:|
| System.Diagnostics.Perf_Process.GetProcesses | 12.94 | 1048505.00 | 13562765.47 | |
| System.Diagnostics.Perf_Process.GetProcessesByName | 4.73 | 1059057.08 | 5010892.56 | |
| System.Diagnostics.Perf_Process.GetCurrentProcess | 4.49 | 101.52 | 455.68 | |
Important: `System.Diagnostics.Perf_Process.GetProcesses` fetches the data for all running processes and comparing two different OSes which run different number of processes is not apples-to-apples comparison. However, `GetProcessesByName` and `GetCurrentProcess` perform the same work and moreover `GetCurrentProcess` is frequently used API.
How to run the [benchmarks](https://github.com/dotnet/performance/blob/400abfd1c1a7a21ca4787f6dbd0a268e29626b56/src/benchmarks/micro/corefx/System.Diagnostics/Perf_Process.cs):
```cmd
git clone https://github.com/dotnet/performance.git
python3 ./performance/scripts/benchmarks_ci.py -f netcoreapp5.0 --filter 'System.Diagnostics.Perf_Process.Get*'
```
Recommended profilers are [PerfCollect](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-corefx.md#PerfCollect) and [VTune](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-corefx.md#VTune).
There is a high chance that we are already using the best available API and the limiting factor is OS here.
|
process
|
getting process information is few times slower on linux getting process information is few times slower on linux slower lin win win median ns lin median ns modality system diagnostics perf process getprocesses system diagnostics perf process getprocessesbyname system diagnostics perf process getcurrentprocess important system diagnostics perf process getprocesses fetches the data for all running processes and comparing two different oses which run different number of processes is not apples to apples comparison however getprocessesbyname and getcurrentprocess perform the same work and moreover getcurrentprocess is frequently used api how to run the cmd git clone performance scripts benchmarks ci py f filter system diagnostics perf process get recommended profilers are and there is a high chance that we are already using the best available api and the limiting factor is os here
| 1
|
275,001
| 23,887,646,194
|
IssuesEvent
|
2022-09-08 08:57:47
|
opencurve/curve
|
https://api.github.com/repos/opencurve/curve
|
closed
|
The length of the file written by the mount point is not updated
|
bug need test
|
**Describe the bug (描述bug)**
The length of the file written by the mount point is not updated
**To Reproduce (复现方法)**
1、mount fs1 -> test1 fs1 -> test2
2、echo a > test1/a
3、cat test2/a -> a
4、echo b >> test1/a
sleep 5mins
5、cat test2/a -> a
6、umount test5 test6 / mount test5 test6
**Expected behavior (期望行为)**
**Versions (各种版本)**
OS:
Compiler:
branch:
commit id:
**Additional context/screenshots (更多上下文/截图)**
|
1.0
|
The length of the file written by the mount point is not updated - **Describe the bug (描述bug)**
The length of the file written by the mount point is not updated
**To Reproduce (复现方法)**
1、mount fs1 -> test1 fs1 -> test2
2、echo a > test1/a
3、cat test2/a -> a
4、echo b >> test1/a
sleep 5mins
5、cat test2/a -> a
6、umount test5 test6 / mount test5 test6
**Expected behavior (期望行为)**
**Versions (各种版本)**
OS:
Compiler:
branch:
commit id:
**Additional context/screenshots (更多上下文/截图)**
|
non_process
|
the length of the file written by the mount point is not updated describe the bug 描述bug the length of the file written by the mount point is not updated to reproduce 复现方法 、mount 、echo a a 、cat a a 、echo b a sleep 、cat a a 、umount mount expected behavior 期望行为 versions 各种版本 os compiler branch commit id additional context screenshots 更多上下文 截图
| 0
|
10,123
| 13,044,162,280
|
IssuesEvent
|
2020-07-29 03:47:31
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `StringTimeTimeDiff` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `StringTimeTimeDiff` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `StringTimeTimeDiff` from TiDB -
## Description
Port the scalar function `StringTimeTimeDiff` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function stringtimetimediff from tidb description port the scalar function stringtimetimediff from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
41,766
| 5,396,446,890
|
IssuesEvent
|
2017-02-27 11:44:38
|
RestComm/jain-slee.sip
|
https://api.github.com/repos/RestComm/jain-slee.sip
|
closed
|
Make the parameter of port listerning configurable
|
2. Enhancement Testing
|
Now the parameter of SIP RA listerning port is hardcored now. Let's do it easier configurable.
|
1.0
|
Make the parameter of port listerning configurable - Now the parameter of SIP RA listerning port is hardcored now. Let's do it easier configurable.
|
non_process
|
make the parameter of port listerning configurable now the parameter of sip ra listerning port is hardcored now let s do it easier configurable
| 0
|
612,948
| 19,060,158,625
|
IssuesEvent
|
2021-11-26 06:12:46
|
oilshell/oil
|
https://api.github.com/repos/oilshell/oil
|
closed
|
shopt -s parse_paren could allow function call syntax
|
oil-language low-priority maybe-new-syntax
|
Right now this is OK in `bin/oil`:
```
f() { echo hi }
```
but it should be
```
proc f { echo hi }
```
And that frees up `f()` for a function call.
We can get rid of `parse_do` as well and add the "modeless" `call`:
So
```
call print(x + 42) # bin/osh
print(x + 42) # bin/oil with shopt -s parse_paren
```
I wanted `do` instead of `call` for cases like:
```
do obj.method()
```
But I no longer think that's important.
|
1.0
|
shopt -s parse_paren could allow function call syntax - Right now this is OK in `bin/oil`:
```
f() { echo hi }
```
but it should be
```
proc f { echo hi }
```
And that frees up `f()` for a function call.
We can get rid of `parse_do` as well and add the "modeless" `call`:
So
```
call print(x + 42) # bin/osh
print(x + 42) # bin/oil with shopt -s parse_paren
```
I wanted `do` instead of `call` for cases like:
```
do obj.method()
```
But I no longer think that's important.
|
non_process
|
shopt s parse paren could allow function call syntax right now this is ok in bin oil f echo hi but it should be proc f echo hi and that frees up f for a function call we can get rid of parse do as well and add the modeless call so call print x bin osh print x bin oil with shopt s parse paren i wanted do instead of call for cases like do obj method but i no longer think that s important
| 0
|
8,045
| 11,218,512,324
|
IssuesEvent
|
2020-01-07 11:41:09
|
Starcounter/Home
|
https://api.github.com/repos/Starcounter/Home
|
closed
|
Database instances limit
|
hosting hosting-single-process question windows
|
Starcounter version: `<2.3.1.8415>`.
### Issue type
- [x] Bug
- [ ] Feature request
- [ ] Suggestion
- [x] Question
- [ ] Cannot reproduce
- [ ] Urgent
### Issue description
Is there a limit in how many Starcounter databases that can be run simultaneously on a server?
When I start the 16th database, the server crashes.
### Reproduction code snippet
```batch
:: Create databases
staradmin --database=8080 new db DefaultUserHttpPort=8080
staradmin --database=8081 new db DefaultUserHttpPort=8081
staradmin --database=8082 new db DefaultUserHttpPort=8082
staradmin --database=8083 new db DefaultUserHttpPort=8083
staradmin --database=8084 new db DefaultUserHttpPort=8084
staradmin --database=8085 new db DefaultUserHttpPort=8085
staradmin --database=8086 new db DefaultUserHttpPort=8086
staradmin --database=8087 new db DefaultUserHttpPort=8087
staradmin --database=8088 new db DefaultUserHttpPort=8088
staradmin --database=8089 new db DefaultUserHttpPort=8089
staradmin --database=8090 new db DefaultUserHttpPort=8090
staradmin --database=8091 new db DefaultUserHttpPort=8091
staradmin --database=8092 new db DefaultUserHttpPort=8092
staradmin --database=8093 new db DefaultUserHttpPort=8093
staradmin --database=8094 new db DefaultUserHttpPort=8094
staradmin --database=8095 new db DefaultUserHttpPort=8095
:: Start the first 15 database
staradmin --database=8080 start db
staradmin --database=8081 start db
staradmin --database=8082 start db
staradmin --database=8083 start db
staradmin --database=8084 start db
staradmin --database=8085 start db
staradmin --database=8086 start db
staradmin --database=8087 start db
staradmin --database=8088 start db
staradmin --database=8089 start db
staradmin --database=8090 start db
staradmin --database=8091 start db
staradmin --database=8092 start db
staradmin --database=8093 start db
staradmin --database=8094 start db
:: Start the 16th database, the server crashes
staradmin --database=8095 start db
```
### Exception stack trace
```
C:\> staradmin --database=8095 start db
Starting 8095 (starting database ..)
[10:53:58, Starcounter (8095)]
System.Exception: ScErrUnspecified (SCERR999): An unspecified error caused the operation to fail. System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at Starcounter.NodeTask.PerformSyncRequest() in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Internal\Rest\NodeTask.cs:line 514
Version: 2.3.1.8415.
Help page: https://docs.starcounter.io/v/2.3.1/?q=SCERR999.
at Starcounter.Internal.AppsBootstrapper.RegisterNewCodehostInGateway() in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Apps.JsonPatch\AppsBootstrapper.cs:line 64
at Starcounter.Internal.AppsBootstrapper.InitAppsBootstrapper(Byte numSchedulers, UInt16 defaultUserHttpPort, UInt16 defaultSystemHttpPort, UInt32 sessionTimeoutMinutes, String dbName, Boolean noNetworkGateway) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Apps.JsonPatch\AppsBootstrapper.cs:line 139
at StarcounterInternal.Bootstrap.Control.Setup(String[] args) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Bootstrap\Control.cs:line 216
at StarcounterInternal.Bootstrap.Control.Main(String[] args) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Bootstrap\Control.cs:line 55
HResult=-2146233088
HelpLink=https://docs.starcounter.io/v/2.3.1/?q=SCERR999
```
### Event log stack trace
```
Faulting application name: scnetworkgateway.exe, version: 0.0.0.0, time stamp: 0x5ac61add
Faulting module name: ucrtbase.dll, version: 10.0.17763.404, time stamp: 0x490b0aeb
Exception code: 0xc0000409
Fault offset: 0x000000000006e91e
Faulting process id: 0x23b8
Faulting application start time: 0x01d50f0a0eb7309a
Faulting application path: C:\Program Files\Starcounter\scnetworkgateway.exe
Faulting module path: C:\Windows\System32\ucrtbase.dll
Report Id: 3bb34588-2331-473f-92f9-060b5863b3b9
Faulting package full name:
Faulting package-relative application ID:
```
|
1.0
|
Database instances limit - Starcounter version: `<2.3.1.8415>`.
### Issue type
- [x] Bug
- [ ] Feature request
- [ ] Suggestion
- [x] Question
- [ ] Cannot reproduce
- [ ] Urgent
### Issue description
Is there a limit in how many Starcounter databases that can be run simultaneously on a server?
When I start the 16th database, the server crashes.
### Reproduction code snippet
```batch
:: Create databases
staradmin --database=8080 new db DefaultUserHttpPort=8080
staradmin --database=8081 new db DefaultUserHttpPort=8081
staradmin --database=8082 new db DefaultUserHttpPort=8082
staradmin --database=8083 new db DefaultUserHttpPort=8083
staradmin --database=8084 new db DefaultUserHttpPort=8084
staradmin --database=8085 new db DefaultUserHttpPort=8085
staradmin --database=8086 new db DefaultUserHttpPort=8086
staradmin --database=8087 new db DefaultUserHttpPort=8087
staradmin --database=8088 new db DefaultUserHttpPort=8088
staradmin --database=8089 new db DefaultUserHttpPort=8089
staradmin --database=8090 new db DefaultUserHttpPort=8090
staradmin --database=8091 new db DefaultUserHttpPort=8091
staradmin --database=8092 new db DefaultUserHttpPort=8092
staradmin --database=8093 new db DefaultUserHttpPort=8093
staradmin --database=8094 new db DefaultUserHttpPort=8094
staradmin --database=8095 new db DefaultUserHttpPort=8095
:: Start the first 15 database
staradmin --database=8080 start db
staradmin --database=8081 start db
staradmin --database=8082 start db
staradmin --database=8083 start db
staradmin --database=8084 start db
staradmin --database=8085 start db
staradmin --database=8086 start db
staradmin --database=8087 start db
staradmin --database=8088 start db
staradmin --database=8089 start db
staradmin --database=8090 start db
staradmin --database=8091 start db
staradmin --database=8092 start db
staradmin --database=8093 start db
staradmin --database=8094 start db
:: Start the 16th database, the server crashes
staradmin --database=8095 start db
```
### Exception stack trace
```
C:\> staradmin --database=8095 start db
Starting 8095 (starting database ..)
[10:53:58, Starcounter (8095)]
System.Exception: ScErrUnspecified (SCERR999): An unspecified error caused the operation to fail. System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at Starcounter.NodeTask.PerformSyncRequest() in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Internal\Rest\NodeTask.cs:line 514
Version: 2.3.1.8415.
Help page: https://docs.starcounter.io/v/2.3.1/?q=SCERR999.
at Starcounter.Internal.AppsBootstrapper.RegisterNewCodehostInGateway() in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Apps.JsonPatch\AppsBootstrapper.cs:line 64
at Starcounter.Internal.AppsBootstrapper.InitAppsBootstrapper(Byte numSchedulers, UInt16 defaultUserHttpPort, UInt16 defaultSystemHttpPort, UInt32 sessionTimeoutMinutes, String dbName, Boolean noNetworkGateway) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Apps.JsonPatch\AppsBootstrapper.cs:line 139
at StarcounterInternal.Bootstrap.Control.Setup(String[] args) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Bootstrap\Control.cs:line 216
at StarcounterInternal.Bootstrap.Control.Main(String[] args) in C:\TeamCity\BuildAgent\work\sc-17027\Level1\src\Starcounter.Bootstrap\Control.cs:line 55
HResult=-2146233088
HelpLink=https://docs.starcounter.io/v/2.3.1/?q=SCERR999
```
### Event log stack trace
```
Faulting application name: scnetworkgateway.exe, version: 0.0.0.0, time stamp: 0x5ac61add
Faulting module name: ucrtbase.dll, version: 10.0.17763.404, time stamp: 0x490b0aeb
Exception code: 0xc0000409
Fault offset: 0x000000000006e91e
Faulting process id: 0x23b8
Faulting application start time: 0x01d50f0a0eb7309a
Faulting application path: C:\Program Files\Starcounter\scnetworkgateway.exe
Faulting module path: C:\Windows\System32\ucrtbase.dll
Report Id: 3bb34588-2331-473f-92f9-060b5863b3b9
Faulting package full name:
Faulting package-relative application ID:
```
|
process
|
database instances limit starcounter version issue type bug feature request suggestion question cannot reproduce urgent issue description is there a limit in how many starcounter databases that can be run simultaneously on a server when i start the database the server crashes reproduction code snippet batch create databases staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport staradmin database new db defaultuserhttpport start the first database staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db staradmin database start db start the database the server crashes staradmin database start db exception stack trace c staradmin database start db starting starting database system exception scerrunspecified an unspecified error caused the operation to fail system net sockets socketexception an existing connection was forcibly closed by the remote host at system net sockets socket receive byte buffer offset size socketflags socketflags at starcounter nodetask performsyncrequest in c teamcity buildagent work sc src starcounter internal rest nodetask cs line version help page at starcounter internal appsbootstrapper registernewcodehostingateway in c teamcity buildagent work sc src starcounter apps jsonpatch appsbootstrapper cs line at starcounter internal appsbootstrapper initappsbootstrapper byte numschedulers defaultuserhttpport defaultsystemhttpport sessiontimeoutminutes string dbname boolean nonetworkgateway in c teamcity buildagent work sc src starcounter apps jsonpatch appsbootstrapper cs line at starcounterinternal bootstrap control setup string args in c teamcity buildagent work sc src starcounter bootstrap control cs line at starcounterinternal bootstrap control main string args in c teamcity buildagent work sc src starcounter bootstrap control cs line hresult helplink event log stack trace faulting application name scnetworkgateway exe version time stamp faulting module name ucrtbase dll version time stamp exception code fault offset faulting process id faulting application start time faulting application path c program files starcounter scnetworkgateway exe faulting module path c windows ucrtbase dll report id faulting package full name faulting package relative application id
| 1
|
22,070
| 30,593,969,596
|
IssuesEvent
|
2023-07-21 19:49:43
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] [Bug] Error when using `replace-clause` with joins
|
.metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
Probably related to [#32026](https://github.com/metabase/metabase/issues/32026)
Calling `replace-clause` to replace a join, can fail with the following error:
```js
Cannot read properties of null (reading 'lastIndexOf')
at goog.string.internal.startsWith (cljs_env.js:3208:1)
at Object.clojure$string$starts_with_QMARK_ [as starts_with_QMARK_] (string.cljs:280:1)
at convert.cljc:406:1
at convert.cljc:406:1
at Object.cljs$core$IFn$_invoke$arity$1 (core.cljs:11344:1)
at core.cljs:5285:1
at core.cljs:5686:1
at core.cljs:5686:1
at Object.cljs$core$IReduce$_reduce$arity$3 (core.cljs:5690:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:2570:1)
```
### To Reproduce
> **Warning**
> The issue can't be reproduced with FE (it's WIP), please use a REPL
1. Create a query from the sample Orders table
2. Join the Products table on Orders.PRODUCT_ID = Products.ID
3. Create another join clause on Orders.PRODUCT_ID = Products.ID and Orders.CREATED_AT = Products.CREATED_AT
4. Use `replace-clause` to replace a join from step 2 with a join from step 3
5. Call `toLegacyQuery` on the query you got on step 4
Code:
```js
function findCols(columns, columnName) {
return columns.find(col => Lib.displayInfo(q, -1, col).name === columnName)
}
const products = Lib.tableOrCardMetadata(q, 1)
const [strategy] = Lib.availableJoinStrategies(q, -1)
const [operator] = Lib.joinConditionOperators(q, -1)
const lhsColumns = Lib.joinConditionLHSColumns(q, -1)
const rhsColumns = Lib.joinConditionRHSColumns(q, -1)
const orders_productId = findCols(lhsColumns, "PRODUCT_ID")
const products_id = findCols(rhsColumns, "ID")
const orders_createdAt = findCols(lhsColumns, "CREATED_AT")
const products_createdAt = findCols(rhsColumns, "CREATED_AT")
const cond_1 = Lib.filterClause(operator, orders_productId, products_id)
const join_cls_1 = Lib.joinClause(products, [cond_1])
const q2 = Lib.join(q, -1, join_cls_1)
const [join] = Lib.joins(q2, -1)
const cond_2 = Lib.filterClause(operator, orders_createdAt, products_createdAt)
const join_cls_2 = Lib.joinClause(products, [cond_1, cond_2])
const q3 = Lib.replaceClause(q2, -1, join, join_cls_2)
Lib.toLegacyQuery(q3) // will throw an error
```
|
1.0
|
[MLv2] [Bug] Error when using `replace-clause` with joins - Probably related to [#32026](https://github.com/metabase/metabase/issues/32026)
Calling `replace-clause` to replace a join, can fail with the following error:
```js
Cannot read properties of null (reading 'lastIndexOf')
at goog.string.internal.startsWith (cljs_env.js:3208:1)
at Object.clojure$string$starts_with_QMARK_ [as starts_with_QMARK_] (string.cljs:280:1)
at convert.cljc:406:1
at convert.cljc:406:1
at Object.cljs$core$IFn$_invoke$arity$1 (core.cljs:11344:1)
at core.cljs:5285:1
at core.cljs:5686:1
at core.cljs:5686:1
at Object.cljs$core$IReduce$_reduce$arity$3 (core.cljs:5690:1)
at Function.cljs$core$IFn$_invoke$arity$3 (core.cljs:2570:1)
```
### To Reproduce
> **Warning**
> The issue can't be reproduced with FE (it's WIP), please use a REPL
1. Create a query from the sample Orders table
2. Join the Products table on Orders.PRODUCT_ID = Products.ID
3. Create another join clause on Orders.PRODUCT_ID = Products.ID and Orders.CREATED_AT = Products.CREATED_AT
4. Use `replace-clause` to replace a join from step 2 with a join from step 3
5. Call `toLegacyQuery` on the query you got on step 4
Code:
```js
function findCols(columns, columnName) {
return columns.find(col => Lib.displayInfo(q, -1, col).name === columnName)
}
const products = Lib.tableOrCardMetadata(q, 1)
const [strategy] = Lib.availableJoinStrategies(q, -1)
const [operator] = Lib.joinConditionOperators(q, -1)
const lhsColumns = Lib.joinConditionLHSColumns(q, -1)
const rhsColumns = Lib.joinConditionRHSColumns(q, -1)
const orders_productId = findCols(lhsColumns, "PRODUCT_ID")
const products_id = findCols(rhsColumns, "ID")
const orders_createdAt = findCols(lhsColumns, "CREATED_AT")
const products_createdAt = findCols(rhsColumns, "CREATED_AT")
const cond_1 = Lib.filterClause(operator, orders_productId, products_id)
const join_cls_1 = Lib.joinClause(products, [cond_1])
const q2 = Lib.join(q, -1, join_cls_1)
const [join] = Lib.joins(q2, -1)
const cond_2 = Lib.filterClause(operator, orders_createdAt, products_createdAt)
const join_cls_2 = Lib.joinClause(products, [cond_1, cond_2])
const q3 = Lib.replaceClause(q2, -1, join, join_cls_2)
Lib.toLegacyQuery(q3) // will throw an error
```
|
process
|
error when using replace clause with joins probably related to calling replace clause to replace a join can fail with the following error js cannot read properties of null reading lastindexof at goog string internal startswith cljs env js at object clojure string starts with qmark string cljs at convert cljc at convert cljc at object cljs core ifn invoke arity core cljs at core cljs at core cljs at core cljs at object cljs core ireduce reduce arity core cljs at function cljs core ifn invoke arity core cljs to reproduce warning the issue can t be reproduced with fe it s wip please use a repl create a query from the sample orders table join the products table on orders product id products id create another join clause on orders product id products id and orders created at products created at use replace clause to replace a join from step with a join from step call tolegacyquery on the query you got on step code js function findcols columns columnname return columns find col lib displayinfo q col name columnname const products lib tableorcardmetadata q const lib availablejoinstrategies q const lib joinconditionoperators q const lhscolumns lib joinconditionlhscolumns q const rhscolumns lib joinconditionrhscolumns q const orders productid findcols lhscolumns product id const products id findcols rhscolumns id const orders createdat findcols lhscolumns created at const products createdat findcols rhscolumns created at const cond lib filterclause operator orders productid products id const join cls lib joinclause products const lib join q join cls const lib joins const cond lib filterclause operator orders createdat products createdat const join cls lib joinclause products const lib replaceclause join join cls lib tolegacyquery will throw an error
| 1
|
44
| 2,512,262,030
|
IssuesEvent
|
2015-01-14 15:18:27
|
tinkerpop/tinkerpop3
|
https://api.github.com/repos/tinkerpop/tinkerpop3
|
opened
|
Consider going full Optional semantics on TraversalSideEffects
|
enhancement process
|
Right now if you do `TraversalSideEffects.get(key)` and the value doesn't exist, you get an exception. The reason we did this was that we don't want `null` and we don't want `Optional` because users will interact with the sideEffects via `Traverser`. For example:
```java
g.V().out().map{it.sideEffects(it.get().value('age'))+1}
```
However, note that `Traverser.sideEffects()` is a helper method to the more core `Traverser.Admin.getSideEffects()`. Thus, we can keep the non-Optional "user friendly" `Traverser.sideEffects(key)` method that throws the exception if the sideEffect doesn't exist. However, for vendors/developers, `Optional` seems the better choice so people don't have to do the awkard `TraversalSideEffects.exists()` method.
|
1.0
|
Consider going full Optional semantics on TraversalSideEffects - Right now if you do `TraversalSideEffects.get(key)` and the value doesn't exist, you get an exception. The reason we did this was that we don't want `null` and we don't want `Optional` because users will interact with the sideEffects via `Traverser`. For example:
```java
g.V().out().map{it.sideEffects(it.get().value('age'))+1}
```
However, note that `Traverser.sideEffects()` is a helper method to the more core `Traverser.Admin.getSideEffects()`. Thus, we can keep the non-Optional "user friendly" `Traverser.sideEffects(key)` method that throws the exception if the sideEffect doesn't exist. However, for vendors/developers, `Optional` seems the better choice so people don't have to do the awkard `TraversalSideEffects.exists()` method.
|
process
|
consider going full optional semantics on traversalsideeffects right now if you do traversalsideeffects get key and the value doesn t exist you get an exception the reason we did this was that we don t want null and we don t want optional because users will interact with the sideeffects via traverser for example java g v out map it sideeffects it get value age however note that traverser sideeffects is a helper method to the more core traverser admin getsideeffects thus we can keep the non optional user friendly traverser sideeffects key method that throws the exception if the sideeffect doesn t exist however for vendors developers optional seems the better choice so people don t have to do the awkard traversalsideeffects exists method
| 1
|
470,692
| 13,542,866,220
|
IssuesEvent
|
2020-09-16 18:01:58
|
GEOSX/GEOSX
|
https://api.github.com/repos/GEOSX/GEOSX
|
closed
|
VTK output error
|
priority: low type: bug
|
**Describe the bug**
I got this error running hi24l case using 64 cores on Quartz. This occurs at 1.6e6 seconds. The previous 15 vtk output writes seem fine.
Received signal 8: Floating point exception
Floating point exception:
** StackTrace of 24 frames **
Frame 1: LvArray::stackTraceHandler(int, bool)
Frame 2:
Frame 3:
Frame 4: vtkDataArray::ComputeVectorRange(double*)
Frame 5: vtkDataArray::ComputeRange(double*, int)
Frame 6: vtkXMLWriter::WriteArrayInline(vtkAbstractArray*, vtkIndent, char const*, int)
Frame 7: vtkXMLWriter::WritePointsInline(vtkPoints*, vtkIndent)
Frame 8: vtkXMLUnstructuredDataWriter::WriteInlinePiece(vtkIndent)
Frame 9: vtkXMLUnstructuredGridWriter::WriteInlinePiece(vtkIndent)
Frame 10: vtkXMLUnstructuredDataWriter::WriteInlineMode(vtkIndent)
Frame 11: vtkXMLUnstructuredDataWriter::WriteAPiece()
Frame 12: vtkXMLUnstructuredDataWriter::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 13: vtkExecutive::CallAlgorithm(vtkInformation*, int, vtkInformationVector**, vtkInformationVector*)
Frame 14: vtkDemandDrivenPipeline::ExecuteData(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 15: vtkCompositeDataPipeline::ExecuteData(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 16: vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 17: vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 18: vtkDemandDrivenPipeline::UpdateData(int)
Frame 19: vtkStreamingDemandDrivenPipeline::Update(int, vtkInformationVector*)
Frame 20: vtkXMLWriter::Write()
Frame 21: geosx::vtk::VTKPolyDataWriterInterface::WriteUnstructuredGrid(vtkSmartPointer<vtkUnstructuredGrid>, double, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
Frame 22: geosx::vtk::VTKPolyDataWriterInterface::WriteWellElementRegions(double, geosx::ElementRegionManager const&, geosx::NodeManager const&) const
Frame 23: geosx::vtk::VTKPolyDataWriterInterface::Write(double, int, geosx::DomainPartition const&)
Frame 24: geosx::EventBase::Execute(double, double, int, int, double, geosx::dataRepository::Group*)
=====
|
1.0
|
VTK output error - **Describe the bug**
I got this error running hi24l case using 64 cores on Quartz. This occurs at 1.6e6 seconds. The previous 15 vtk output writes seem fine.
Received signal 8: Floating point exception
Floating point exception:
** StackTrace of 24 frames **
Frame 1: LvArray::stackTraceHandler(int, bool)
Frame 2:
Frame 3:
Frame 4: vtkDataArray::ComputeVectorRange(double*)
Frame 5: vtkDataArray::ComputeRange(double*, int)
Frame 6: vtkXMLWriter::WriteArrayInline(vtkAbstractArray*, vtkIndent, char const*, int)
Frame 7: vtkXMLWriter::WritePointsInline(vtkPoints*, vtkIndent)
Frame 8: vtkXMLUnstructuredDataWriter::WriteInlinePiece(vtkIndent)
Frame 9: vtkXMLUnstructuredGridWriter::WriteInlinePiece(vtkIndent)
Frame 10: vtkXMLUnstructuredDataWriter::WriteInlineMode(vtkIndent)
Frame 11: vtkXMLUnstructuredDataWriter::WriteAPiece()
Frame 12: vtkXMLUnstructuredDataWriter::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 13: vtkExecutive::CallAlgorithm(vtkInformation*, int, vtkInformationVector**, vtkInformationVector*)
Frame 14: vtkDemandDrivenPipeline::ExecuteData(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 15: vtkCompositeDataPipeline::ExecuteData(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 16: vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 17: vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*)
Frame 18: vtkDemandDrivenPipeline::UpdateData(int)
Frame 19: vtkStreamingDemandDrivenPipeline::Update(int, vtkInformationVector*)
Frame 20: vtkXMLWriter::Write()
Frame 21: geosx::vtk::VTKPolyDataWriterInterface::WriteUnstructuredGrid(vtkSmartPointer<vtkUnstructuredGrid>, double, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
Frame 22: geosx::vtk::VTKPolyDataWriterInterface::WriteWellElementRegions(double, geosx::ElementRegionManager const&, geosx::NodeManager const&) const
Frame 23: geosx::vtk::VTKPolyDataWriterInterface::Write(double, int, geosx::DomainPartition const&)
Frame 24: geosx::EventBase::Execute(double, double, int, int, double, geosx::dataRepository::Group*)
=====
|
non_process
|
vtk output error describe the bug i got this error running case using cores on quartz this occurs at seconds the previous vtk output writes seem fine received signal floating point exception floating point exception stacktrace of frames frame lvarray stacktracehandler int bool frame frame frame vtkdataarray computevectorrange double frame vtkdataarray computerange double int frame vtkxmlwriter writearrayinline vtkabstractarray vtkindent char const int frame vtkxmlwriter writepointsinline vtkpoints vtkindent frame vtkxmlunstructureddatawriter writeinlinepiece vtkindent frame vtkxmlunstructuredgridwriter writeinlinepiece vtkindent frame vtkxmlunstructureddatawriter writeinlinemode vtkindent frame vtkxmlunstructureddatawriter writeapiece frame vtkxmlunstructureddatawriter processrequest vtkinformation vtkinformationvector vtkinformationvector frame vtkexecutive callalgorithm vtkinformation int vtkinformationvector vtkinformationvector frame vtkdemanddrivenpipeline executedata vtkinformation vtkinformationvector vtkinformationvector frame vtkcompositedatapipeline executedata vtkinformation vtkinformationvector vtkinformationvector frame vtkdemanddrivenpipeline processrequest vtkinformation vtkinformationvector vtkinformationvector frame vtkstreamingdemanddrivenpipeline processrequest vtkinformation vtkinformationvector vtkinformationvector frame vtkdemanddrivenpipeline updatedata int frame vtkstreamingdemanddrivenpipeline update int vtkinformationvector frame vtkxmlwriter write frame geosx vtk vtkpolydatawriterinterface writeunstructuredgrid vtksmartpointer double std basic string std allocator const const frame geosx vtk vtkpolydatawriterinterface writewellelementregions double geosx elementregionmanager const geosx nodemanager const const frame geosx vtk vtkpolydatawriterinterface write double int geosx domainpartition const frame geosx eventbase execute double double int int double geosx datarepository group
| 0
|
5,494
| 8,362,788,316
|
IssuesEvent
|
2018-10-03 17:51:40
|
facebook/graphql
|
https://api.github.com/repos/facebook/graphql
|
closed
|
Enforce HTTPS for github.io pages?
|
🐝 Process
|
At the moment HTTPS is not forced for http://graphql.github.io/ and http://facebook.github.io/graphql/ the GraphQL readme even links to the "unsecure" http://facebook.github.io/graphql/.
Would it not be cleaner if http://graphql.github.io/ and http://facebook.github.io/graphql/ were forced over [HTTPS](https://help.github.com/articles/securing-your-github-pages-site-with-https/)?
|
1.0
|
Enforce HTTPS for github.io pages? - At the moment HTTPS is not forced for http://graphql.github.io/ and http://facebook.github.io/graphql/ the GraphQL readme even links to the "unsecure" http://facebook.github.io/graphql/.
Would it not be cleaner if http://graphql.github.io/ and http://facebook.github.io/graphql/ were forced over [HTTPS](https://help.github.com/articles/securing-your-github-pages-site-with-https/)?
|
process
|
enforce https for github io pages at the moment https is not forced for and the graphql readme even links to the unsecure would it not be cleaner if and were forced over
| 1
|
17,952
| 3,013,817,468
|
IssuesEvent
|
2015-07-29 11:26:59
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
YAWL looses Workflow after Database Foreign Key Violation
|
auto-migrated Priority-Medium Type-Defect
|
```
To make it short, the engine looses cases, every time on different points. The
case is still visible in the case management window, but is away from all other
views.
-------------------------------------
1) The database constraint violation:
-------------------------------------
ERROR - FEHLER: Aktualisieren oder L├Âschen in Tabelle
»rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »r
s_queueitems«
Detail: Auf Schlüssel (wir_id)=(176) wird noch aus Tabelle »rs_queueitems« verwiesen.
2014-05-16 11:46:52,142 [ERROR] HibernateEngine :- Handled Exception:
Error persisting object (delete): 1:HandOutItems
org.hibernate.exception.ConstraintViolationException: FEHLER: Aktualisieren
oder L├Âschen in Tabelle ┬╗rs_workitemcache┬½ verletzt
Fremdschl├╝ssel-Con
straint »fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
obviously problem in the workitem cache table
--------------------------------------------------------
2. Stack Trace in Browser, cannot be reproduced any time
--------------------------------------------------------
Exception Handler
Description: An unhandled exception occurred during the execution of the web
application. Please review the following stack trace for more information
regarding the error.
Exception Details: java.lang.NullPointerException
null
Possible Source of Error:
Class Name: org.yawlfoundation.yawl.resourcing.ResourceManager
File Name: ResourceManager.java
Method Name: removeFromAll
Line Number: 943
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
org.yawlfoundation.yawl.resourcing.ResourceManager.removeFromAll(ResourceManager
.java:943)
org.yawlfoundation.yawl.resourcing.ResourceManager.cancelCase(ResourceManager.ja
va:2200)
org.yawlfoundation.yawl.resourcing.jsf.caseMgt.cancelCase(caseMgt.java:742)
org.yawlfoundation.yawl.resourcing.jsf.caseMgt.btnCancelCase_action(caseMgt.java
:716)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.jav
a:43)
java.lang.reflect.Method.invoke(Method.java:601)
com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:126)
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:72)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: javax.faces.el.EvaluationException
java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.faces.el.MethodBindingImpl
File Name: MethodBindingImpl.java
Method Name: invoke
Line Number: 130
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:130)
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:72)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: javax.faces.FacesException
#{caseMgt.btnCancelCase_action}: javax.faces.el.EvaluationException: java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.faces.application.ActionListenerImpl
File Name: ActionListenerImpl.java
Method Name: processAction
Line Number: 78
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:78)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: com.sun.rave.web.ui.appbase.ApplicationException
#{caseMgt.btnCancelCase_action}: javax.faces.el.EvaluationException: java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl
File Name: ViewHandlerImpl.java
Method Name: destroy
Line Number: 601
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.destroy(ViewHandlerImpl.java:6
01)
com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.renderView(ViewHandlerImpl.jav
a:302)
com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:117)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:198)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
-------------------------
3. the yawl hibernate log
-------------------------
ERROR] 2014-05-16 06:29:02,035 org.hibernate.engine.jdbc.spi.SqlExceptionHelper
- FEHLER: Aktualisieren oder Löschen in Tabelle »rs_workitemcache« verletzt
Fremdschlüssel-Constraint »fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(169) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:10,888
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(191) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:12,354
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(420) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:13,854
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(130) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 10:01:10,327
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(1226) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:03:26,218
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(1438) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:40:54,895
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(102) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:46:52,140
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(176) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:46:52,181
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(171) wird noch aus Tabelle »rs_queueitems« verwiesen.
```
Original issue reported on code.google.com by `anwim...@gmail.com` on 16 May 2014 at 11:59
|
1.0
|
YAWL looses Workflow after Database Foreign Key Violation - ```
To make it short, the engine looses cases, every time on different points. The
case is still visible in the case management window, but is away from all other
views.
-------------------------------------
1) The database constraint violation:
-------------------------------------
ERROR - FEHLER: Aktualisieren oder L├Âschen in Tabelle
»rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »r
s_queueitems«
Detail: Auf Schlüssel (wir_id)=(176) wird noch aus Tabelle »rs_queueitems« verwiesen.
2014-05-16 11:46:52,142 [ERROR] HibernateEngine :- Handled Exception:
Error persisting object (delete): 1:HandOutItems
org.hibernate.exception.ConstraintViolationException: FEHLER: Aktualisieren
oder L├Âschen in Tabelle ┬╗rs_workitemcache┬½ verletzt
Fremdschl├╝ssel-Con
straint »fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
obviously problem in the workitem cache table
--------------------------------------------------------
2. Stack Trace in Browser, cannot be reproduced any time
--------------------------------------------------------
Exception Handler
Description: An unhandled exception occurred during the execution of the web
application. Please review the following stack trace for more information
regarding the error.
Exception Details: java.lang.NullPointerException
null
Possible Source of Error:
Class Name: org.yawlfoundation.yawl.resourcing.ResourceManager
File Name: ResourceManager.java
Method Name: removeFromAll
Line Number: 943
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
org.yawlfoundation.yawl.resourcing.ResourceManager.removeFromAll(ResourceManager
.java:943)
org.yawlfoundation.yawl.resourcing.ResourceManager.cancelCase(ResourceManager.ja
va:2200)
org.yawlfoundation.yawl.resourcing.jsf.caseMgt.cancelCase(caseMgt.java:742)
org.yawlfoundation.yawl.resourcing.jsf.caseMgt.btnCancelCase_action(caseMgt.java
:716)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.jav
a:43)
java.lang.reflect.Method.invoke(Method.java:601)
com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:126)
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:72)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: javax.faces.el.EvaluationException
java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.faces.el.MethodBindingImpl
File Name: MethodBindingImpl.java
Method Name: invoke
Line Number: 130
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:130)
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:72)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: javax.faces.FacesException
#{caseMgt.btnCancelCase_action}: javax.faces.el.EvaluationException: java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.faces.application.ActionListenerImpl
File Name: ActionListenerImpl.java
Method Name: processAction
Line Number: 78
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.ja
va:78)
com.sun.rave.web.ui.appbase.faces.ActionListenerImpl.processAction(ActionListene
rImpl.java:57)
javax.faces.component.UICommand.broadcast(UICommand.java:312)
javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:267)
javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:381)
com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.ja
va:75)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
Exception Details: com.sun.rave.web.ui.appbase.ApplicationException
#{caseMgt.btnCancelCase_action}: javax.faces.el.EvaluationException: java.lang.NullPointerException
Possible Source of Error:
Class Name: com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl
File Name: ViewHandlerImpl.java
Method Name: destroy
Line Number: 601
Source not available. Information regarding the location of the exception can
be identified using the exception stack trace below.
Stack Trace:
com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.destroy(ViewHandlerImpl.java:6
01)
com.sun.rave.web.ui.appbase.faces.ViewHandlerImpl.renderView(ViewHandlerImpl.jav
a:302)
com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:87)
com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:221)
com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:117)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:198)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:305)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
com.sun.rave.web.ui.util.UploadFilter.doFilter(UploadFilter.java:184)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.resourcing.jsf.SessionTimeoutFilter.doFilter(SessionTime
outFilter.java:71)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.yawlfoundation.yawl.util.CharsetFilter.doFilter(CharsetFilter.java:51)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilt
erChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.
java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:2
22)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:1
23)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.jav
a:502)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118
)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor
.java:1023)
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractPro
tocol.java:589)
org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1852
)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)
-------------------------
3. the yawl hibernate log
-------------------------
ERROR] 2014-05-16 06:29:02,035 org.hibernate.engine.jdbc.spi.SqlExceptionHelper
- FEHLER: Aktualisieren oder Löschen in Tabelle »rs_workitemcache« verletzt
Fremdschlüssel-Constraint »fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(169) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:10,888
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(191) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:12,354
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(420) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 06:29:13,854
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(130) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 10:01:10,327
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(1226) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:03:26,218
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(1438) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:40:54,895
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(102) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:46:52,140
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(176) wird noch aus Tabelle »rs_queueitems« verwiesen.
[ERROR] 2014-05-16 11:46:52,181
org.hibernate.engine.jdbc.spi.SqlExceptionHelper - FEHLER: Aktualisieren oder
Löschen in Tabelle »rs_workitemcache« verletzt Fremdschlüssel-Constraint
»fk5d5a65edb319e1cc« von Tabelle »rs_queueitems«
Detail: Auf Schlüssel (wir_id)=(171) wird noch aus Tabelle »rs_queueitems« verwiesen.
```
Original issue reported on code.google.com by `anwim...@gmail.com` on 16 May 2014 at 11:59
|
non_process
|
yawl looses workflow after database foreign key violation to make it short the engine looses cases every time on different points the case is still visible in the case management window but is away from all other views the database constraint violation error fehler aktualisieren oder l├âschen in tabelle ┬╗rs workitemcache┬½ verletzt fremdschl├╝ssel constraint ┬╗ ┬½ von tabelle ┬╗r s queueitems┬½ detail auf schl├╝ssel wir id wird noch aus tabelle ┬╗rs queueitems┬½ verwiesen hibernateengine handled exception error persisting object delete handoutitems org hibernate exception constraintviolationexception fehler aktualisieren oder l├âschen in tabelle ┬╗rs workitemcache┬½ verletzt fremdschl├╝ssel con straint ┬╗ ┬½ von tabelle ┬╗rs queueitems┬½ obviously problem in the workitem cache table stack trace in browser cannot be reproduced any time exception handler description an unhandled exception occurred during the execution of the web application please review the following stack trace for more information regarding the error exception details java lang nullpointerexception null possible source of error class name org yawlfoundation yawl resourcing resourcemanager file name resourcemanager java method name removefromall line number source not available information regarding the location of the exception can be identified using the exception stack trace below stack trace org yawlfoundation yawl resourcing resourcemanager removefromall resourcemanager java org yawlfoundation yawl resourcing resourcemanager cancelcase resourcemanager ja va org yawlfoundation yawl resourcing jsf casemgt cancelcase casemgt java org yawlfoundation yawl resourcing jsf casemgt btncancelcase action casemgt java sun reflect nativemethodaccessorimpl nativemethodaccessorimpl java sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl jav a java lang reflect method invoke method java com sun faces el methodbindingimpl invoke methodbindingimpl java com sun faces application actionlistenerimpl processaction actionlistenerimpl ja va com sun rave web ui appbase faces actionlistenerimpl processaction actionlistene rimpl java javax faces component uicommand broadcast uicommand java javax faces component uiviewroot broadcastevents uiviewroot java javax faces component uiviewroot processapplication uiviewroot java com sun faces lifecycle invokeapplicationphase execute invokeapplicationphase ja va com sun faces lifecycle lifecycleimpl phase lifecycleimpl java com sun faces lifecycle lifecycleimpl execute lifecycleimpl java javax faces webapp facesservlet service facesservlet java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java com sun rave web ui util uploadfilter dofilter uploadfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl resourcing jsf sessiontimeoutfilter dofilter sessiontime outfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl util charsetfilter dofilter charsetfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org apache catalina core standardwrappervalve invoke standardwrappervalve java org apache catalina core standardcontextvalve invoke standardcontextvalve java org apache catalina authenticator authenticatorbase invoke authenticatorbase jav a org apache catalina core standardhostvalve invoke standardhostvalve java org apache catalina valves errorreportvalve invoke errorreportvalve java org apache catalina valves accesslogvalve invoke accesslogvalve java org apache catalina core standardenginevalve invoke standardenginevalve java org apache catalina connector coyoteadapter service coyoteadapter java org apache coyote process java org apache coyote abstractprotocol abstractconnectionhandler process abstractpro tocol java org apache tomcat util net aprendpoint socketprocessor run aprendpoint java java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java java lang thread run thread java exception details javax faces el evaluationexception java lang nullpointerexception possible source of error class name com sun faces el methodbindingimpl file name methodbindingimpl java method name invoke line number source not available information regarding the location of the exception can be identified using the exception stack trace below stack trace com sun faces el methodbindingimpl invoke methodbindingimpl java com sun faces application actionlistenerimpl processaction actionlistenerimpl ja va com sun rave web ui appbase faces actionlistenerimpl processaction actionlistene rimpl java javax faces component uicommand broadcast uicommand java javax faces component uiviewroot broadcastevents uiviewroot java javax faces component uiviewroot processapplication uiviewroot java com sun faces lifecycle invokeapplicationphase execute invokeapplicationphase ja va com sun faces lifecycle lifecycleimpl phase lifecycleimpl java com sun faces lifecycle lifecycleimpl execute lifecycleimpl java javax faces webapp facesservlet service facesservlet java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java com sun rave web ui util uploadfilter dofilter uploadfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl resourcing jsf sessiontimeoutfilter dofilter sessiontime outfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl util charsetfilter dofilter charsetfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org apache catalina core standardwrappervalve invoke standardwrappervalve java org apache catalina core standardcontextvalve invoke standardcontextvalve java org apache catalina authenticator authenticatorbase invoke authenticatorbase jav a org apache catalina core standardhostvalve invoke standardhostvalve java org apache catalina valves errorreportvalve invoke errorreportvalve java org apache catalina valves accesslogvalve invoke accesslogvalve java org apache catalina core standardenginevalve invoke standardenginevalve java org apache catalina connector coyoteadapter service coyoteadapter java org apache coyote process java org apache coyote abstractprotocol abstractconnectionhandler process abstractpro tocol java org apache tomcat util net aprendpoint socketprocessor run aprendpoint java java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java java lang thread run thread java exception details javax faces facesexception casemgt btncancelcase action javax faces el evaluationexception java lang nullpointerexception possible source of error class name com sun faces application actionlistenerimpl file name actionlistenerimpl java method name processaction line number source not available information regarding the location of the exception can be identified using the exception stack trace below stack trace com sun faces application actionlistenerimpl processaction actionlistenerimpl ja va com sun rave web ui appbase faces actionlistenerimpl processaction actionlistene rimpl java javax faces component uicommand broadcast uicommand java javax faces component uiviewroot broadcastevents uiviewroot java javax faces component uiviewroot processapplication uiviewroot java com sun faces lifecycle invokeapplicationphase execute invokeapplicationphase ja va com sun faces lifecycle lifecycleimpl phase lifecycleimpl java com sun faces lifecycle lifecycleimpl execute lifecycleimpl java javax faces webapp facesservlet service facesservlet java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java com sun rave web ui util uploadfilter dofilter uploadfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl resourcing jsf sessiontimeoutfilter dofilter sessiontime outfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl util charsetfilter dofilter charsetfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org apache catalina core standardwrappervalve invoke standardwrappervalve java org apache catalina core standardcontextvalve invoke standardcontextvalve java org apache catalina authenticator authenticatorbase invoke authenticatorbase jav a org apache catalina core standardhostvalve invoke standardhostvalve java org apache catalina valves errorreportvalve invoke errorreportvalve java org apache catalina valves accesslogvalve invoke accesslogvalve java org apache catalina core standardenginevalve invoke standardenginevalve java org apache catalina connector coyoteadapter service coyoteadapter java org apache coyote process java org apache coyote abstractprotocol abstractconnectionhandler process abstractpro tocol java org apache tomcat util net aprendpoint socketprocessor run aprendpoint java java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java java lang thread run thread java exception details com sun rave web ui appbase applicationexception casemgt btncancelcase action javax faces el evaluationexception java lang nullpointerexception possible source of error class name com sun rave web ui appbase faces viewhandlerimpl file name viewhandlerimpl java method name destroy line number source not available information regarding the location of the exception can be identified using the exception stack trace below stack trace com sun rave web ui appbase faces viewhandlerimpl destroy viewhandlerimpl java com sun rave web ui appbase faces viewhandlerimpl renderview viewhandlerimpl jav a com sun faces lifecycle renderresponsephase execute renderresponsephase java com sun faces lifecycle lifecycleimpl phase lifecycleimpl java com sun faces lifecycle lifecycleimpl render lifecycleimpl java javax faces webapp facesservlet service facesservlet java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java com sun rave web ui util uploadfilter dofilter uploadfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl resourcing jsf sessiontimeoutfilter dofilter sessiontime outfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org yawlfoundation yawl util charsetfilter dofilter charsetfilter java org apache catalina core applicationfilterchain internaldofilter applicationfilt erchain java org apache catalina core applicationfilterchain dofilter applicationfilterchain java org apache catalina core standardwrappervalve invoke standardwrappervalve java org apache catalina core standardcontextvalve invoke standardcontextvalve java org apache catalina authenticator authenticatorbase invoke authenticatorbase jav a org apache catalina core standardhostvalve invoke standardhostvalve java org apache catalina valves errorreportvalve invoke errorreportvalve java org apache catalina valves accesslogvalve invoke accesslogvalve java org apache catalina core standardenginevalve invoke standardenginevalve java org apache catalina connector coyoteadapter service coyoteadapter java org apache coyote process java org apache coyote abstractprotocol abstractconnectionhandler process abstractpro tocol java org apache tomcat util net aprendpoint socketprocessor run aprendpoint java java util concurrent threadpoolexecutor runworker threadpoolexecutor java java util concurrent threadpoolexecutor worker run threadpoolexecutor java java lang thread run thread java the yawl hibernate log error org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen org hibernate engine jdbc spi sqlexceptionhelper fehler aktualisieren oder löschen in tabelle »rs workitemcache« verletzt fremdschlüssel constraint » « von tabelle »rs queueitems« detail auf schlüssel wir id wird noch aus tabelle »rs queueitems« verwiesen original issue reported on code google com by anwim gmail com on may at
| 0
|
14,673
| 17,790,743,536
|
IssuesEvent
|
2021-08-31 15:54:01
|
dotnet/csharpstandard
|
https://api.github.com/repos/dotnet/csharpstandard
|
opened
|
Plan for integrating C# 7.x - C# 10.0 feature specifications into the ECMA standard
|
type: process
|
Representatives from the ECMA committee will attend an upcoming C# LDM (language design meeting) to create a plan and process to integrate the latest features in the shipped C# compiler into the ECMA standard text.
The purpose of the discussion is to develop a plan and processes the have the committee and the LDM collaborate efficiently as the committee catches up to the current shipping version (both moving targets).
**Current status**
- The ECMA committee is working on adding the features described in the [draft C# 6 spec](https://github.com/dotnet/csharplang/tree/main/spec) into the [ECMA C# 5 standard text](https://github.com/dotnet/csharpstandard/tree/standard-v5#table-of-contents). There are [four features remaining](https://github.com/dotnet/csharpstandard/pulls?q=is%3Apr+is%3Aopen+%22C%23+6%22) represented by 5 PRs. One additional feature is tracked in issue #200.
- Several PRs have been created for [C# 7.x features](https://github.com/dotnet/csharpstandard/pulls?q=is%3Apr+is%3Aopen+%22C%23+7%22) using the following proposal folders:
- [C# 7.0](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.0)
- [C# 7.1](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.1)
- [C# 7.2](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.2)
- [C# 7.3](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.3)
- The C# Language Design Team has created feature specs for features developed from C# 7.0 through C# 10.0. (There are also drafts in progress for proposed future features).
- The Microsoft draft C# 6.0 spec is published on [docs.microsoft.com](https://docs.microsoft.com/dotnet/csharp/language-reference/language-specification/introduction).
**Goals**
- Upon merging the remaining C# 6.0 features, the docs platform should publish the ECMA committee draft spec instead of the Microsoft draft spec.
- Once C# 6.0 has been drafted, the dotnet/csharplang/spec folder can be deleted, with links updated to the version in dotnet/csharpstandard/standard.
- As each updated feature is incorporated into the draft standard, the feature spec will be removed from docs.microsoft.com, and those links updated to the latest standard draft.
- The ECMA committee and the LDM team develop a process to incorporate the new features as quickly as possible.
**Discussion items**
- How to facilitate reviews from LDM members? Tag for review? Assign PRs to them?
- How to get questions answered from LDM / compiler team members?
|
1.0
|
Plan for integrating C# 7.x - C# 10.0 feature specifications into the ECMA standard - Representatives from the ECMA committee will attend an upcoming C# LDM (language design meeting) to create a plan and process to integrate the latest features in the shipped C# compiler into the ECMA standard text.
The purpose of the discussion is to develop a plan and processes the have the committee and the LDM collaborate efficiently as the committee catches up to the current shipping version (both moving targets).
**Current status**
- The ECMA committee is working on adding the features described in the [draft C# 6 spec](https://github.com/dotnet/csharplang/tree/main/spec) into the [ECMA C# 5 standard text](https://github.com/dotnet/csharpstandard/tree/standard-v5#table-of-contents). There are [four features remaining](https://github.com/dotnet/csharpstandard/pulls?q=is%3Apr+is%3Aopen+%22C%23+6%22) represented by 5 PRs. One additional feature is tracked in issue #200.
- Several PRs have been created for [C# 7.x features](https://github.com/dotnet/csharpstandard/pulls?q=is%3Apr+is%3Aopen+%22C%23+7%22) using the following proposal folders:
- [C# 7.0](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.0)
- [C# 7.1](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.1)
- [C# 7.2](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.2)
- [C# 7.3](https://github.com/dotnet/csharplang/tree/main/proposals/csharp-7.3)
- The C# Language Design Team has created feature specs for features developed from C# 7.0 through C# 10.0. (There are also drafts in progress for proposed future features).
- The Microsoft draft C# 6.0 spec is published on [docs.microsoft.com](https://docs.microsoft.com/dotnet/csharp/language-reference/language-specification/introduction).
**Goals**
- Upon merging the remaining C# 6.0 features, the docs platform should publish the ECMA committee draft spec instead of the Microsoft draft spec.
- Once C# 6.0 has been drafted, the dotnet/csharplang/spec folder can be deleted, with links updated to the version in dotnet/csharpstandard/standard.
- As each updated feature is incorporated into the draft standard, the feature spec will be removed from docs.microsoft.com, and those links updated to the latest standard draft.
- The ECMA committee and the LDM team develop a process to incorporate the new features as quickly as possible.
**Discussion items**
- How to facilitate reviews from LDM members? Tag for review? Assign PRs to them?
- How to get questions answered from LDM / compiler team members?
|
process
|
plan for integrating c x c feature specifications into the ecma standard representatives from the ecma committee will attend an upcoming c ldm language design meeting to create a plan and process to integrate the latest features in the shipped c compiler into the ecma standard text the purpose of the discussion is to develop a plan and processes the have the committee and the ldm collaborate efficiently as the committee catches up to the current shipping version both moving targets current status the ecma committee is working on adding the features described in the into the there are represented by prs one additional feature is tracked in issue several prs have been created for using the following proposal folders the c language design team has created feature specs for features developed from c through c there are also drafts in progress for proposed future features the microsoft draft c spec is published on goals upon merging the remaining c features the docs platform should publish the ecma committee draft spec instead of the microsoft draft spec once c has been drafted the dotnet csharplang spec folder can be deleted with links updated to the version in dotnet csharpstandard standard as each updated feature is incorporated into the draft standard the feature spec will be removed from docs microsoft com and those links updated to the latest standard draft the ecma committee and the ldm team develop a process to incorporate the new features as quickly as possible discussion items how to facilitate reviews from ldm members tag for review assign prs to them how to get questions answered from ldm compiler team members
| 1
|
21,775
| 30,289,582,016
|
IssuesEvent
|
2023-07-09 05:19:55
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[exporter/prometheus] Expired metrics were not be deleted
|
bug Stale processor/spanmetrics connector/spanmetrics closed as inactive
|
### Component(s)
exporter/prometheus
### What happened?
## Description
Hi, I am trying to use spanmetrics processor and prometheus exporter to transform spans to metrics. But I found some expired metrics seems to be appeared repeatedly when new metrics received. And also the memory usage continued to rise. So is this a bug of prometheus exporter?
## Steps to Reproduce
examples:
when I posted span to collector, the prometheus exporter exported the metric like this
```
calls_total{db_instance="N/A",db_name="name-KSDORKdOKV",db_sql_table="table-IWstkE",db_system="redis",operation="get",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
```
after 5 seconds, the metric will disappear. Then I posted another span to collector, the prometheus exporter exported two metrics included the expired one
```
calls_total{db_instance="N/A",db_name="name-KSDORKdOKV",db_sql_table="table-IWstkE",db_system="redis",operation="get",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
calls_total{db_instance="N/A",db_name="name-QSvMJKDYso",db_sql_table="table-ZHdGvF",db_system="redis",operation="set",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
```
## Expected Result
Expired metrics will be deleted.
## Actual Result
Expired metrics seems to be still stored in memory cache.
### Collector version
2ada50fd4a
### Environment information
## Environment
OS: MacOS 13.0.1
Compiler(if manually compiled): go 1.19.3
### OpenTelemetry Collector configuration
```yaml
receivers:
# Dummy receiver that's never used, because a pipeline is required to have one.
otlp/spanmetrics:
protocols:
grpc:
endpoint: "localhost:12345"
otlp:
protocols:
grpc:
endpoint: "localhost:55677"
processors:
batch:
spanmetrics:
metrics_exporter: otlp/spanmetrics
latency_histogram_buckets: [10ms, 100ms]
dimensions:
- name: db.system
default: N/A
- name: db.name
default: N/A
- name: db.sql.table
default: N/A
- name: db.instance
default: N/A
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
exporters:
logging:
verbosity: basic
otlp/spanmetrics:
endpoint: "localhost:55677"
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 5s
service:
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics, batch]
exporters: [logging]
# The exporter name must match the metrics_exporter name.
# The receiver is just a dummy and never used; added to pass validation requiring at least one receiver in a pipeline.
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [otlp/spanmetrics]
metrics:
receivers: [otlp]
exporters: [prometheus]
```
### Log output
_No response_
### Additional context
_No response_
|
1.0
|
[exporter/prometheus] Expired metrics were not be deleted - ### Component(s)
exporter/prometheus
### What happened?
## Description
Hi, I am trying to use spanmetrics processor and prometheus exporter to transform spans to metrics. But I found some expired metrics seems to be appeared repeatedly when new metrics received. And also the memory usage continued to rise. So is this a bug of prometheus exporter?
## Steps to Reproduce
examples:
when I posted span to collector, the prometheus exporter exported the metric like this
```
calls_total{db_instance="N/A",db_name="name-KSDORKdOKV",db_sql_table="table-IWstkE",db_system="redis",operation="get",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
```
after 5 seconds, the metric will disappear. Then I posted another span to collector, the prometheus exporter exported two metrics included the expired one
```
calls_total{db_instance="N/A",db_name="name-KSDORKdOKV",db_sql_table="table-IWstkE",db_system="redis",operation="get",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
calls_total{db_instance="N/A",db_name="name-QSvMJKDYso",db_sql_table="table-ZHdGvF",db_system="redis",operation="set",service_name="go-project.examples",span_kind="SPAN_KIND_CLIENT",status_code="STATUS_CODE_UNSET"} 1
```
## Expected Result
Expired metrics will be deleted.
## Actual Result
Expired metrics seems to be still stored in memory cache.
### Collector version
2ada50fd4a
### Environment information
## Environment
OS: MacOS 13.0.1
Compiler(if manually compiled): go 1.19.3
### OpenTelemetry Collector configuration
```yaml
receivers:
# Dummy receiver that's never used, because a pipeline is required to have one.
otlp/spanmetrics:
protocols:
grpc:
endpoint: "localhost:12345"
otlp:
protocols:
grpc:
endpoint: "localhost:55677"
processors:
batch:
spanmetrics:
metrics_exporter: otlp/spanmetrics
latency_histogram_buckets: [10ms, 100ms]
dimensions:
- name: db.system
default: N/A
- name: db.name
default: N/A
- name: db.sql.table
default: N/A
- name: db.instance
default: N/A
dimensions_cache_size: 1000
aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
exporters:
logging:
verbosity: basic
otlp/spanmetrics:
endpoint: "localhost:55677"
tls:
insecure: true
prometheus:
endpoint: "0.0.0.0:8889"
metric_expiration: 5s
service:
pipelines:
traces:
receivers: [otlp]
processors: [spanmetrics, batch]
exporters: [logging]
# The exporter name must match the metrics_exporter name.
# The receiver is just a dummy and never used; added to pass validation requiring at least one receiver in a pipeline.
metrics/spanmetrics:
receivers: [otlp/spanmetrics]
exporters: [otlp/spanmetrics]
metrics:
receivers: [otlp]
exporters: [prometheus]
```
### Log output
_No response_
### Additional context
_No response_
|
process
|
expired metrics were not be deleted component s exporter prometheus what happened description hi i am trying to use spanmetrics processor and prometheus exporter to transform spans to metrics but i found some expired metrics seems to be appeared repeatedly when new metrics received and also the memory usage continued to rise so is this a bug of prometheus exporter steps to reproduce examples when i posted span to collector the prometheus exporter exported the metric like this calls total db instance n a db name name ksdorkdokv db sql table table iwstke db system redis operation get service name go project examples span kind span kind client status code status code unset after seconds the metric will disappear then i posted another span to collector the prometheus exporter exported two metrics included the expired one calls total db instance n a db name name ksdorkdokv db sql table table iwstke db system redis operation get service name go project examples span kind span kind client status code status code unset calls total db instance n a db name name qsvmjkdyso db sql table table zhdgvf db system redis operation set service name go project examples span kind span kind client status code status code unset expected result expired metrics will be deleted actual result expired metrics seems to be still stored in memory cache collector version environment information environment os macos compiler if manually compiled go opentelemetry collector configuration yaml receivers dummy receiver that s never used because a pipeline is required to have one otlp spanmetrics protocols grpc endpoint localhost otlp protocols grpc endpoint localhost processors batch spanmetrics metrics exporter otlp spanmetrics latency histogram buckets dimensions name db system default n a name db name default n a name db sql table default n a name db instance default n a dimensions cache size aggregation temporality aggregation temporality cumulative exporters logging verbosity basic otlp spanmetrics endpoint localhost tls insecure true prometheus endpoint metric expiration service pipelines traces receivers processors exporters the exporter name must match the metrics exporter name the receiver is just a dummy and never used added to pass validation requiring at least one receiver in a pipeline metrics spanmetrics receivers exporters metrics receivers exporters log output no response additional context no response
| 1
|
14,556
| 17,685,709,392
|
IssuesEvent
|
2021-08-24 01:01:36
|
googleapis/java-websecurityscanner
|
https://api.github.com/repos/googleapis/java-websecurityscanner
|
opened
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #155
* #168
* #190
* #155
* #168
* #190
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #155
* #168
* #190
* #155
* #168
* #190
|
process
|
warning a recent release failed the following release prs may have failed
| 1
|
9,204
| 12,238,802,507
|
IssuesEvent
|
2020-05-04 20:27:54
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Do environment deployments respect the ##vso[build.updatebuildnumber] command?
|
Pri1 devops-cicd-process/tech devops/prod
|
If you use:
echo ##vso[build.updatebuildnumber]1.2.3.4
Either in the same or previous stage as a deployment job in some pipeline, it appears the environment still uses the 'original' build number of something like 20200501.1 rather than a custom one of something like 1.2.3.4 when showing the little "label" in the deployment summary list page for the environment. If you click on the label, it will take you to the pipeline, which has the correct custom build number displayed.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Do environment deployments respect the ##vso[build.updatebuildnumber] command? - If you use:
echo ##vso[build.updatebuildnumber]1.2.3.4
Either in the same or previous stage as a deployment job in some pipeline, it appears the environment still uses the 'original' build number of something like 20200501.1 rather than a custom one of something like 1.2.3.4 when showing the little "label" in the deployment summary list page for the environment. If you click on the label, it will take you to the pipeline, which has the correct custom build number displayed.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 77d95db6-9983-7346-d0eb-4b7443e4e252
* Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087
* Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops)
* Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
do environment deployments respect the vso command if you use echo vso either in the same or previous stage as a deployment job in some pipeline it appears the environment still uses the original build number of something like rather than a custom one of something like when showing the little label in the deployment summary list page for the environment if you click on the label it will take you to the pipeline which has the correct custom build number displayed document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
8,199
| 11,394,992,237
|
IssuesEvent
|
2020-01-30 10:28:43
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Make using DATABASE_URL as default DB connection URL a best practice
|
kind/docs process/candidate
|
- `DATABASE_URL` should be our new default connection URL env var name (→ works out of the box with Heroku)
- Docs + examples should point out that we strongly recommend to not check in the `.env` file to their VCS
- We need to do an extra good job of explaining how env vars are handled so developers can 100% understand how things are working.
|
1.0
|
Make using DATABASE_URL as default DB connection URL a best practice - - `DATABASE_URL` should be our new default connection URL env var name (→ works out of the box with Heroku)
- Docs + examples should point out that we strongly recommend to not check in the `.env` file to their VCS
- We need to do an extra good job of explaining how env vars are handled so developers can 100% understand how things are working.
|
process
|
make using database url as default db connection url a best practice database url should be our new default connection url env var name → works out of the box with heroku docs examples should point out that we strongly recommend to not check in the env file to their vcs we need to do an extra good job of explaining how env vars are handled so developers can understand how things are working
| 1
|
9,694
| 12,699,496,748
|
IssuesEvent
|
2020-06-22 14:56:37
|
prisma/vscode
|
https://api.github.com/repos/prisma/vscode
|
closed
|
Run integration tests on multiple platforms
|
kind/improvement process/candidate topic: automation topic: tests
|
Using the GH action matrix feature, we should run the integration tests os windows, Linux and Mac.
Currently, we only test Linux in the integration tests.
|
1.0
|
Run integration tests on multiple platforms - Using the GH action matrix feature, we should run the integration tests os windows, Linux and Mac.
Currently, we only test Linux in the integration tests.
|
process
|
run integration tests on multiple platforms using the gh action matrix feature we should run the integration tests os windows linux and mac currently we only test linux in the integration tests
| 1
|
117,034
| 17,401,814,032
|
IssuesEvent
|
2021-08-02 20:53:33
|
transloadit/uppy
|
https://api.github.com/repos/transloadit/uppy
|
closed
|
@uppy/transloadit Depends on old package with major security vulnerability
|
🔐 Security
|
The latest version of @uppy/transloadit contains a dependency for `socket.io-client "~2.2.0"`, which in turn depends on `engine.io-client "~3.3.1"`, which depends on a version of xmlhttprequest-ssl that has this major security vulnerability: https://www.npmjs.com/advisories/1665
Later versions of `socket.io-client` don't depend on xmlhttprequest-ssl at all. Is it possible to upgrade this dependency without breaking anything?
|
True
|
@uppy/transloadit Depends on old package with major security vulnerability - The latest version of @uppy/transloadit contains a dependency for `socket.io-client "~2.2.0"`, which in turn depends on `engine.io-client "~3.3.1"`, which depends on a version of xmlhttprequest-ssl that has this major security vulnerability: https://www.npmjs.com/advisories/1665
Later versions of `socket.io-client` don't depend on xmlhttprequest-ssl at all. Is it possible to upgrade this dependency without breaking anything?
|
non_process
|
uppy transloadit depends on old package with major security vulnerability the latest version of uppy transloadit contains a dependency for socket io client which in turn depends on engine io client which depends on a version of xmlhttprequest ssl that has this major security vulnerability later versions of socket io client don t depend on xmlhttprequest ssl at all is it possible to upgrade this dependency without breaking anything
| 0
|
19,524
| 25,834,805,925
|
IssuesEvent
|
2022-12-12 18:44:55
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
Bob's Alien Artifact Tiers
|
Impact: Duplicate Impact: Enhancement Angels Bio Processing Angels Exploration Impact: Mod compatibility
|
Alien Artifacts in Bob's mods are tiered. Adjust metals required to make each to match the tiers.
**T0**
- Generic - Purple - Iron
**T1**
- Blue - Piercing - Nickel (now Cobalt)
- Electric - Orange - Aluminum (now Tungsten)
**T2**
- Poison - Purple - Titanium
- Explosive - Yellow - Gold
**T3**
- Fire - Red - Tungsten (now Copper)
- Poison - Green - Chrome (now Zinc)
|
1.0
|
Bob's Alien Artifact Tiers - Alien Artifacts in Bob's mods are tiered. Adjust metals required to make each to match the tiers.
**T0**
- Generic - Purple - Iron
**T1**
- Blue - Piercing - Nickel (now Cobalt)
- Electric - Orange - Aluminum (now Tungsten)
**T2**
- Poison - Purple - Titanium
- Explosive - Yellow - Gold
**T3**
- Fire - Red - Tungsten (now Copper)
- Poison - Green - Chrome (now Zinc)
|
process
|
bob s alien artifact tiers alien artifacts in bob s mods are tiered adjust metals required to make each to match the tiers generic purple iron blue piercing nickel now cobalt electric orange aluminum now tungsten poison purple titanium explosive yellow gold fire red tungsten now copper poison green chrome now zinc
| 1
|
58,319
| 11,864,177,312
|
IssuesEvent
|
2020-03-25 21:08:40
|
microsoft/vscode-python
|
https://api.github.com/repos/microsoft/vscode-python
|
opened
|
Create full kernel instead of partial proxy kernel
|
data science type-code health
|
* Track methods invoked that aren't supported
This helps us determine widgets calling methods we do not support, and probably need to be.
|
1.0
|
Create full kernel instead of partial proxy kernel - * Track methods invoked that aren't supported
This helps us determine widgets calling methods we do not support, and probably need to be.
|
non_process
|
create full kernel instead of partial proxy kernel track methods invoked that aren t supported this helps us determine widgets calling methods we do not support and probably need to be
| 0
|
15,159
| 3,317,575,988
|
IssuesEvent
|
2015-11-06 22:24:26
|
openhealthcare/elcid
|
https://api.github.com/repos/openhealthcare/elcid
|
opened
|
In advanced search, the 4th box ALWAYS goes onto a second line now.
|
bug Design
|
Caused by new category called Haematology Background Information - made that box too wide.
Solutions:
a) narrow Saved Searches
b) change name to Haem background info
|
1.0
|
In advanced search, the 4th box ALWAYS goes onto a second line now. - Caused by new category called Haematology Background Information - made that box too wide.
Solutions:
a) narrow Saved Searches
b) change name to Haem background info
|
non_process
|
in advanced search the box always goes onto a second line now caused by new category called haematology background information made that box too wide solutions a narrow saved searches b change name to haem background info
| 0
|
19,440
| 25,708,477,357
|
IssuesEvent
|
2022-12-07 03:41:19
|
vesoft-inc/nebula
|
https://api.github.com/repos/vesoft-inc/nebula
|
closed
|
should verify the type mismatch when insert or update
|
type/bug wontfix need to discuss severity/minor auto-sync type/bug/correctness find/automation affects/master process/fixed
|
if the type of property is `datetime`, and then insert or update with `string`, should be prevented by graph validator.
actual:
send to storaged.
should be SemanticError not storage error
```
(root@nebula) [sf100]> update vertex on Person 933 set firstName=23 when firstName == "harris"
Execution succeeded (time spent 592/14224 us)
Wed, 09 Nov 2022 16:31:08 CST
(root@nebula) [sf100]> update vertex on Person 933 set firstName=23
[ERROR (-1005)]: Storage Error: Invalid data, may be wrong value type.
Wed, 09 Nov 2022 16:31:51 CST
(root@nebula) [sf100]> desc tag Person
+----------------+----------+-------+---------+---------+
| Field | Type | Null | Default | Comment |
+----------------+----------+-------+---------+---------+
| "firstName" | "string" | "YES" | | |
| "lastName" | "string" | "YES" | | |
| "gender" | "string" | "YES" | | |
| "birthday" | "string" | "YES" | | |
| "creationDate" | "string" | "YES" | | |
| "locationIP" | "string" | "YES" | | |
| "browserUsed" | "string" | "YES" | | |
+----------------+----------+-------+---------+---------+
Got 7 rows (time spent 629/13654 us)
Wed, 09 Nov 2022 16:41:35 CST
```
```
(root@nebula) [sf100]> INSERT VERTEX t5(p1, p2, p3) VALUES 1:("Abe", 2, 3);
Execution succeeded (time spent 1070/13328 us)
Wed, 09 Nov 2022 16:34:06 CST
(root@nebula) [sf100]> INSERT VERTEX t5(p1, p2, p3) VALUES 1:("Abe", "2", 3);
[ERROR (-1005)]: Storage Error: The data type does not meet the requirements. Use the correct type of data.
```
|
1.0
|
should verify the type mismatch when insert or update - if the type of property is `datetime`, and then insert or update with `string`, should be prevented by graph validator.
actual:
send to storaged.
should be SemanticError not storage error
```
(root@nebula) [sf100]> update vertex on Person 933 set firstName=23 when firstName == "harris"
Execution succeeded (time spent 592/14224 us)
Wed, 09 Nov 2022 16:31:08 CST
(root@nebula) [sf100]> update vertex on Person 933 set firstName=23
[ERROR (-1005)]: Storage Error: Invalid data, may be wrong value type.
Wed, 09 Nov 2022 16:31:51 CST
(root@nebula) [sf100]> desc tag Person
+----------------+----------+-------+---------+---------+
| Field | Type | Null | Default | Comment |
+----------------+----------+-------+---------+---------+
| "firstName" | "string" | "YES" | | |
| "lastName" | "string" | "YES" | | |
| "gender" | "string" | "YES" | | |
| "birthday" | "string" | "YES" | | |
| "creationDate" | "string" | "YES" | | |
| "locationIP" | "string" | "YES" | | |
| "browserUsed" | "string" | "YES" | | |
+----------------+----------+-------+---------+---------+
Got 7 rows (time spent 629/13654 us)
Wed, 09 Nov 2022 16:41:35 CST
```
```
(root@nebula) [sf100]> INSERT VERTEX t5(p1, p2, p3) VALUES 1:("Abe", 2, 3);
Execution succeeded (time spent 1070/13328 us)
Wed, 09 Nov 2022 16:34:06 CST
(root@nebula) [sf100]> INSERT VERTEX t5(p1, p2, p3) VALUES 1:("Abe", "2", 3);
[ERROR (-1005)]: Storage Error: The data type does not meet the requirements. Use the correct type of data.
```
|
process
|
should verify the type mismatch when insert or update if the type of property is datetime and then insert or update with string should be prevented by graph validator actual send to storaged should be semanticerror not storage error root nebula update vertex on person set firstname when firstname harris execution succeeded time spent us wed nov cst root nebula update vertex on person set firstname storage error invalid data may be wrong value type wed nov cst root nebula desc tag person field type null default comment firstname string yes lastname string yes gender string yes birthday string yes creationdate string yes locationip string yes browserused string yes got rows time spent us wed nov cst root nebula insert vertex values abe execution succeeded time spent us wed nov cst root nebula insert vertex values abe storage error the data type does not meet the requirements use the correct type of data
| 1
|
53,600
| 6,337,359,881
|
IssuesEvent
|
2017-07-26 23:41:01
|
rust-lang/rust
|
https://api.github.com/repos/rust-lang/rust
|
closed
|
ICE: Conflicting trait impls with custom default-implemented trait bound
|
C-bug E-needstest I-ICE
|
This stems from my attempt to extend the `NotSame` trait for type inequality as described in #29499 to tuples of arbitrary size. The implementation for (A, B) conflicts with the implementation for A because the (A) tuple lacks a trailing comma.
``` rust
#![feature(optin_builtin_traits)]
trait NotSame {}
impl NotSame for .. {}
impl<A> !NotSame for (A, A) {}
trait OneOfEach {}
impl <A> OneOfEach for (A) { }
impl <A, B> OneOfEach for (A, B) where (B): OneOfEach, (A, B): NotSame { }
// ...
fn main() {}
```
```
robert@laptop:~/projects/one_of_each$ RUST_BACKTRACE=1 rustc one_of_each.rs
one_of_each.rs:11:1: 11:75 error: internal compiler error: coherence failed to report ambiguity: cannot locate the impl of the trait `OneOfEach` for the type `(A, B)`
one_of_each.rs:11 impl <A, B> OneOfEach for (A, B) where (B): OneOfEach, (A, B): NotSame { }
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
thread 'rustc' panicked at 'Box<Any>', ../src/libsyntax/diagnostic.rs:176
stack backtrace:
1: 0x7f8e6d26fd20 - sys::backtrace::tracing::imp::write::h5839347184a363c1Tnt
2: 0x7f8e6d2765e5 - panicking::log_panic::_<closure>::closure.39955
3: 0x7f8e6d276055 - panicking::log_panic::hcde6d42710304abbWnx
4: 0x7f8e6d239683 - sys_common::unwind::begin_unwind_inner::h4039843fef6bffefYgs
5: 0x7f8e670c63c7 - sys_common::unwind::begin_unwind::begin_unwind::h17811388397816602357
6: 0x7f8e670c6386 - diagnostic::_<impl>::span_bug::h517cd016f9a1ed4fqGA
7: 0x7f8e6b04f4a0 - middle::traits::error_reporting::report_fulfillment_errors::h7b40fdbf7b69efa1DqR
8: 0x7f8e6bdf69b5 - check::_<impl>::select_all_obligations_or_error::h68d2b62f895fda24Nar
9: 0x7f8e6be4d9a4 - check::wf::_<impl>::check_item_well_formed::h294fa0c9be11cb26O4j
10: 0x7f8e6be8bfbe - check::check_wf_old::h2896130387effa64m9o
11: 0x7f8e6bf3b8ab - check_crate::hbd4bd8f4f3340ed1BrD
12: 0x7f8e6d7438b9 - driver::phase_3_run_analysis_passes::_<closure>::closure.21990
13: 0x7f8e6d729213 - middle::ty::context::_<impl>::create_and_enter::create_and_enter::h7648551165536465273
14: 0x7f8e6d72420e - driver::phase_3_run_analysis_passes::h13807885485637783892
15: 0x7f8e6d704c92 - driver::compile_input::he5c7814d86abe8678ba
16: 0x7f8e6d85badb - run_compiler::h3e946a4e9bc089bfvqc
17: 0x7f8e6d858b56 - sys_common::unwind::try::try_fn::try_fn::h18225326314371046170
18: 0x7f8e6d26da48 - __rust_try
19: 0x7f8e6d261abb - sys_common::unwind::try::inner_try::h81e998e565f2181dwds
20: 0x7f8e6d858ea4 - boxed::_<impl>::call_box::call_box::h16507595822000360715
21: 0x7f8e6d2750b3 - sys::thread::_<impl>::new::thread_start::h0be42f811434f5398Fw
22: 0x7f8e6691c181 - start_thread
23: 0x7f8e6ceef47c - __clone
24: 0x0 - <unknown>
```
On rustc 1.6.0-nightly (1a2eaffb6 2015-10-31)
|
1.0
|
ICE: Conflicting trait impls with custom default-implemented trait bound - This stems from my attempt to extend the `NotSame` trait for type inequality as described in #29499 to tuples of arbitrary size. The implementation for (A, B) conflicts with the implementation for A because the (A) tuple lacks a trailing comma.
``` rust
#![feature(optin_builtin_traits)]
trait NotSame {}
impl NotSame for .. {}
impl<A> !NotSame for (A, A) {}
trait OneOfEach {}
impl <A> OneOfEach for (A) { }
impl <A, B> OneOfEach for (A, B) where (B): OneOfEach, (A, B): NotSame { }
// ...
fn main() {}
```
```
robert@laptop:~/projects/one_of_each$ RUST_BACKTRACE=1 rustc one_of_each.rs
one_of_each.rs:11:1: 11:75 error: internal compiler error: coherence failed to report ambiguity: cannot locate the impl of the trait `OneOfEach` for the type `(A, B)`
one_of_each.rs:11 impl <A, B> OneOfEach for (A, B) where (B): OneOfEach, (A, B): NotSame { }
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
thread 'rustc' panicked at 'Box<Any>', ../src/libsyntax/diagnostic.rs:176
stack backtrace:
1: 0x7f8e6d26fd20 - sys::backtrace::tracing::imp::write::h5839347184a363c1Tnt
2: 0x7f8e6d2765e5 - panicking::log_panic::_<closure>::closure.39955
3: 0x7f8e6d276055 - panicking::log_panic::hcde6d42710304abbWnx
4: 0x7f8e6d239683 - sys_common::unwind::begin_unwind_inner::h4039843fef6bffefYgs
5: 0x7f8e670c63c7 - sys_common::unwind::begin_unwind::begin_unwind::h17811388397816602357
6: 0x7f8e670c6386 - diagnostic::_<impl>::span_bug::h517cd016f9a1ed4fqGA
7: 0x7f8e6b04f4a0 - middle::traits::error_reporting::report_fulfillment_errors::h7b40fdbf7b69efa1DqR
8: 0x7f8e6bdf69b5 - check::_<impl>::select_all_obligations_or_error::h68d2b62f895fda24Nar
9: 0x7f8e6be4d9a4 - check::wf::_<impl>::check_item_well_formed::h294fa0c9be11cb26O4j
10: 0x7f8e6be8bfbe - check::check_wf_old::h2896130387effa64m9o
11: 0x7f8e6bf3b8ab - check_crate::hbd4bd8f4f3340ed1BrD
12: 0x7f8e6d7438b9 - driver::phase_3_run_analysis_passes::_<closure>::closure.21990
13: 0x7f8e6d729213 - middle::ty::context::_<impl>::create_and_enter::create_and_enter::h7648551165536465273
14: 0x7f8e6d72420e - driver::phase_3_run_analysis_passes::h13807885485637783892
15: 0x7f8e6d704c92 - driver::compile_input::he5c7814d86abe8678ba
16: 0x7f8e6d85badb - run_compiler::h3e946a4e9bc089bfvqc
17: 0x7f8e6d858b56 - sys_common::unwind::try::try_fn::try_fn::h18225326314371046170
18: 0x7f8e6d26da48 - __rust_try
19: 0x7f8e6d261abb - sys_common::unwind::try::inner_try::h81e998e565f2181dwds
20: 0x7f8e6d858ea4 - boxed::_<impl>::call_box::call_box::h16507595822000360715
21: 0x7f8e6d2750b3 - sys::thread::_<impl>::new::thread_start::h0be42f811434f5398Fw
22: 0x7f8e6691c181 - start_thread
23: 0x7f8e6ceef47c - __clone
24: 0x0 - <unknown>
```
On rustc 1.6.0-nightly (1a2eaffb6 2015-10-31)
|
non_process
|
ice conflicting trait impls with custom default implemented trait bound this stems from my attempt to extend the notsame trait for type inequality as described in to tuples of arbitrary size the implementation for a b conflicts with the implementation for a because the a tuple lacks a trailing comma rust trait notsame impl notsame for impl notsame for a a trait oneofeach impl oneofeach for a impl oneofeach for a b where b oneofeach a b notsame fn main robert laptop projects one of each rust backtrace rustc one of each rs one of each rs error internal compiler error coherence failed to report ambiguity cannot locate the impl of the trait oneofeach for the type a b one of each rs impl oneofeach for a b where b oneofeach a b notsame note the compiler unexpectedly panicked this is a bug note we would appreciate a bug report thread rustc panicked at box src libsyntax diagnostic rs stack backtrace sys backtrace tracing imp write panicking log panic closure panicking log panic sys common unwind begin unwind inner sys common unwind begin unwind begin unwind diagnostic span bug middle traits error reporting report fulfillment errors check select all obligations or error check wf check item well formed check check wf old check crate driver phase run analysis passes closure middle ty context create and enter create and enter driver phase run analysis passes driver compile input run compiler sys common unwind try try fn try fn rust try sys common unwind try inner try boxed call box call box sys thread new thread start start thread clone on rustc nightly
| 0
|
56,377
| 15,046,891,742
|
IssuesEvent
|
2021-02-03 08:08:21
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
SelectOneMenu: Escaping issue when using multiple SelectItemGroups with the same SelectItem
|
defect
|
**Describe the defect**
Escapes certain SelectItems if they appear multiple times in different SelectGroups, although they are not meant to be escaped. The problem seems to lie in the calculation of the index when checking against the "escape" property of the SelectItem. In this example there is an offset of -1, which when selecting the first not escaped option ("0"), results in the application checking against the first SelectItem ("Escape") which is to be escaped.
**Environment:**
- PF Version: _8.0_
- JSF + version: _Mojarra 2.3.14_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'http://localhost:8080/primefaces-test/'
2. Open the SelectOneMenu
3. Click on "0"
4. "0" shows up escaped
**Expected behavior**
"0" should not show up escaped, just like the other options "1" and "2".
**Example XHTML**
```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:ui="http://java.sun.com/jsf/facelets"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:p="http://primefaces.org/ui"
xmlns:h="http://java.sun.com/jsf/html">
<h:head>
<title>PrimeFaces Test</title>
</h:head>
<h:body>
<h:form id="frmTest">
<p:selectOneMenu id="selectOneMenu" value="#{testView.text}">
<f:selectItems value="#{testView.items}"/>
</p:selectOneMenu>
</h:form>
</h:body>
</html>
```
**Example Bean**
```java
@Named
@ViewScoped
public class TestView implements Serializable {
private String text;
private List<SelectItem> items;
@PostConstruct
public void init() {
SelectItemGroup group;
items = new LinkedList<>();
items.add(new SelectItem("Escape", "Escape", null, false, true));
for(int g = 0; g<2; g++) {
String example = String.format("<span>%s</span>", "Example");
group = new SelectItemGroup(example);
group.setEscape(false);
group.setSelectItems(new SelectItem[0]);
items.add(group);
for(int i = 0; i<3; i++) {
String text = String.format("<span>%s</span>", i);
items.add(new SelectItem(i, text, null, false, false));
}
}
}
public List<SelectItem> getItems() {
return items;
}
public void setItems(List<SelectItem> items) {
this.items = items;
}
public String getText() {
return this.text;
}
public void setText(String text) {
this.text = text;
}
}
```
|
1.0
|
SelectOneMenu: Escaping issue when using multiple SelectItemGroups with the same SelectItem - **Describe the defect**
Escapes certain SelectItems if they appear multiple times in different SelectGroups, although they are not meant to be escaped. The problem seems to lie in the calculation of the index when checking against the "escape" property of the SelectItem. In this example there is an offset of -1, which when selecting the first not escaped option ("0"), results in the application checking against the first SelectItem ("Escape") which is to be escaped.
**Environment:**
- PF Version: _8.0_
- JSF + version: _Mojarra 2.3.14_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'http://localhost:8080/primefaces-test/'
2. Open the SelectOneMenu
3. Click on "0"
4. "0" shows up escaped
**Expected behavior**
"0" should not show up escaped, just like the other options "1" and "2".
**Example XHTML**
```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:ui="http://java.sun.com/jsf/facelets"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:p="http://primefaces.org/ui"
xmlns:h="http://java.sun.com/jsf/html">
<h:head>
<title>PrimeFaces Test</title>
</h:head>
<h:body>
<h:form id="frmTest">
<p:selectOneMenu id="selectOneMenu" value="#{testView.text}">
<f:selectItems value="#{testView.items}"/>
</p:selectOneMenu>
</h:form>
</h:body>
</html>
```
**Example Bean**
```java
@Named
@ViewScoped
public class TestView implements Serializable {
private String text;
private List<SelectItem> items;
@PostConstruct
public void init() {
SelectItemGroup group;
items = new LinkedList<>();
items.add(new SelectItem("Escape", "Escape", null, false, true));
for(int g = 0; g<2; g++) {
String example = String.format("<span>%s</span>", "Example");
group = new SelectItemGroup(example);
group.setEscape(false);
group.setSelectItems(new SelectItem[0]);
items.add(group);
for(int i = 0; i<3; i++) {
String text = String.format("<span>%s</span>", i);
items.add(new SelectItem(i, text, null, false, false));
}
}
}
public List<SelectItem> getItems() {
return items;
}
public void setItems(List<SelectItem> items) {
this.items = items;
}
public String getText() {
return this.text;
}
public void setText(String text) {
this.text = text;
}
}
```
|
non_process
|
selectonemenu escaping issue when using multiple selectitemgroups with the same selectitem describe the defect escapes certain selectitems if they appear multiple times in different selectgroups although they are not meant to be escaped the problem seems to lie in the calculation of the index when checking against the escape property of the selectitem in this example there is an offset of which when selecting the first not escaped option results in the application checking against the first selectitem escape which is to be escaped environment pf version jsf version mojarra affected browsers all to reproduce steps to reproduce the behavior go to open the selectonemenu click on shows up escaped expected behavior should not show up escaped just like the other options and example xhtml html html xmlns xmlns ui xmlns f xmlns p xmlns h primefaces test example bean java named viewscoped public class testview implements serializable private string text private list items postconstruct public void init selectitemgroup group items new linkedlist items add new selectitem escape escape null false true for int g g g string example string format s example group new selectitemgroup example group setescape false group setselectitems new selectitem items add group for int i i i string text string format s i items add new selectitem i text null false false public list getitems return items public void setitems list items this items items public string gettext return this text public void settext string text this text text
| 0
|
790,906
| 27,841,584,349
|
IssuesEvent
|
2023-03-20 13:06:55
|
Wizleap-Inc/wiz-ui
|
https://api.github.com/repos/Wizleap-Inc/wiz-ui
|
closed
|
Feat(checkbox, radio): チェック時に取り消し線を出すオプションを追加
|
📦 component 🔼 High Priority
|
**機能追加理由・詳細**

**解決策の提案(任意)**
**その他考慮事項(任意)**
|
1.0
|
Feat(checkbox, radio): チェック時に取り消し線を出すオプションを追加 - **機能追加理由・詳細**

**解決策の提案(任意)**
**その他考慮事項(任意)**
|
non_process
|
feat checkbox radio チェック時に取り消し線を出すオプションを追加 機能追加理由・詳細 解決策の提案(任意) その他考慮事項(任意)
| 0
|
11,886
| 14,007,942,255
|
IssuesEvent
|
2020-10-28 22:33:08
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Unify module structure
|
category: modules type: compatibility issue type: enhancement
|
Some of the modules we created have associated traits that until now have been placed in different directories. Some modules have them in their base directory (i.e. `hpx/modulename/`), some have them in `hpx/traits`. This should be unified. The consensus on irc was that associated traits should end up in the directory `hpx/modulename/traits/` (http://irclog.cct.lsu.edu/ste~b~~b~ar/2019-08-30#1567164898-1567165343;).
Here is a list of modules that should be adopted to this new policy:
- [x] algorithms (#4587)
- [x] datastructures (#4113)
- [x] iterator_support (#4179)
- [x] parallel_executors
- [x] compute_cuda
There are possibly more.
|
True
|
Unify module structure - Some of the modules we created have associated traits that until now have been placed in different directories. Some modules have them in their base directory (i.e. `hpx/modulename/`), some have them in `hpx/traits`. This should be unified. The consensus on irc was that associated traits should end up in the directory `hpx/modulename/traits/` (http://irclog.cct.lsu.edu/ste~b~~b~ar/2019-08-30#1567164898-1567165343;).
Here is a list of modules that should be adopted to this new policy:
- [x] algorithms (#4587)
- [x] datastructures (#4113)
- [x] iterator_support (#4179)
- [x] parallel_executors
- [x] compute_cuda
There are possibly more.
|
non_process
|
unify module structure some of the modules we created have associated traits that until now have been placed in different directories some modules have them in their base directory i e hpx modulename some have them in hpx traits this should be unified the consensus on irc was that associated traits should end up in the directory hpx modulename traits here is a list of modules that should be adopted to this new policy algorithms datastructures iterator support parallel executors compute cuda there are possibly more
| 0
|
124,272
| 12,227,914,186
|
IssuesEvent
|
2020-05-03 17:10:42
|
jilleJr/Newtonsoft.Json-for-Unity.Converters
|
https://api.github.com/repos/jilleJr/Newtonsoft.Json-for-Unity.Converters
|
closed
|
List types that has converters or tested to README
|
documentation enhancement
|
## Description
Depends on #4 #14
When all converters meant to be in the x.1 release are done, these needs to be documented. Preferably in the README.
## Motivation
The one reason of this repo is solving conversions for certain types. This list will give new visitors exactly the info they need when they are still wondering if they should invest their time in start using this package.
## Suggested solution
List with sections for the different namespaces (UnityEngine vs UnityEngine.Rendering vs UnityEngine.AI) and for each type supported have some way of seeing that they are tested or has their own converter.
Maybe a table like:
> | Type | Verified to work<sup>(1)</sup> | Custom converter<sup>(2)</sup>
> | ---- | -------- | ----------------
> | *UnityEngine*.**Keyframe** | ✔ | -
> | *UnityEngine*.**Ray** | ✔ | ✔
> | *UnityEngine*.**Ray2D** | ✔ | ✔
> | *UnityEngine*.**Rect** | ✔ | ✔
> | *UnityEngine*.**RectInt** | ✔ | ✔
>
> 1. **Verified to work** from our suite of tests currently passing in all the
> following configurations:
>
> | OS | Unity | Scripting runtime | API compatability mode |
> | --- | --- | ---- | ---- |
> | Windows | 2019.2.11f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | | | IL2CPP | .NET Standard 2.0
> | | | | .NET 4.x
> | | 2018.2.14f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | | | IL2CPP | .NET Standard 2.0
> | | | | .NET 4.x
> | Linux | 2019.2.11f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | Linux | 2018.2.14f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
>
> 2. **Custom converter** created where Newtonsoft.Json has trouble out-of-the-box.
> Types without this flag works as-is with Newtonsoft.Json.
That compatability table is of course somewhat of a dream scenario. But we can get there! Just a bit more CI/CD to do
|
1.0
|
List types that has converters or tested to README - ## Description
Depends on #4 #14
When all converters meant to be in the x.1 release are done, these needs to be documented. Preferably in the README.
## Motivation
The one reason of this repo is solving conversions for certain types. This list will give new visitors exactly the info they need when they are still wondering if they should invest their time in start using this package.
## Suggested solution
List with sections for the different namespaces (UnityEngine vs UnityEngine.Rendering vs UnityEngine.AI) and for each type supported have some way of seeing that they are tested or has their own converter.
Maybe a table like:
> | Type | Verified to work<sup>(1)</sup> | Custom converter<sup>(2)</sup>
> | ---- | -------- | ----------------
> | *UnityEngine*.**Keyframe** | ✔ | -
> | *UnityEngine*.**Ray** | ✔ | ✔
> | *UnityEngine*.**Ray2D** | ✔ | ✔
> | *UnityEngine*.**Rect** | ✔ | ✔
> | *UnityEngine*.**RectInt** | ✔ | ✔
>
> 1. **Verified to work** from our suite of tests currently passing in all the
> following configurations:
>
> | OS | Unity | Scripting runtime | API compatability mode |
> | --- | --- | ---- | ---- |
> | Windows | 2019.2.11f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | | | IL2CPP | .NET Standard 2.0
> | | | | .NET 4.x
> | | 2018.2.14f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | | | IL2CPP | .NET Standard 2.0
> | | | | .NET 4.x
> | Linux | 2019.2.11f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
> | Linux | 2018.2.14f1 | Mono | .NET Standard 2.0
> | | | | .NET 4.x
>
> 2. **Custom converter** created where Newtonsoft.Json has trouble out-of-the-box.
> Types without this flag works as-is with Newtonsoft.Json.
That compatability table is of course somewhat of a dream scenario. But we can get there! Just a bit more CI/CD to do
|
non_process
|
list types that has converters or tested to readme description depends on when all converters meant to be in the x release are done these needs to be documented preferably in the readme motivation the one reason of this repo is solving conversions for certain types this list will give new visitors exactly the info they need when they are still wondering if they should invest their time in start using this package suggested solution list with sections for the different namespaces unityengine vs unityengine rendering vs unityengine ai and for each type supported have some way of seeing that they are tested or has their own converter maybe a table like type verified to work custom converter unityengine keyframe ✔ unityengine ray ✔ ✔ unityengine ✔ ✔ unityengine rect ✔ ✔ unityengine rectint ✔ ✔ verified to work from our suite of tests currently passing in all the following configurations os unity scripting runtime api compatability mode windows mono net standard net x net standard net x mono net standard net x net standard net x linux mono net standard net x linux mono net standard net x custom converter created where newtonsoft json has trouble out of the box types without this flag works as is with newtonsoft json that compatability table is of course somewhat of a dream scenario but we can get there just a bit more ci cd to do
| 0
|
327,151
| 24,120,309,921
|
IssuesEvent
|
2022-09-20 18:05:10
|
oybek703/node-architecture
|
https://api.github.com/repos/oybek703/node-architecture
|
closed
|
basics and two cli projects
|
documentation
|
- [x] 01 Введение
- [x] 02 Настройка окружения
- [x] 03 Начало работы с Node.js
- [x] 04 Как работает Node.js_
- [x] 05 Многопоточность
- [x] 06 Движок V8
- [x] 07 Node Package Manager
- [x] 08 Приложение 1 - CLI прогноз погоды
- [x] 09 Приложение 2 - API с ExpressJS
|
1.0
|
basics and two cli projects - - [x] 01 Введение
- [x] 02 Настройка окружения
- [x] 03 Начало работы с Node.js
- [x] 04 Как работает Node.js_
- [x] 05 Многопоточность
- [x] 06 Движок V8
- [x] 07 Node Package Manager
- [x] 08 Приложение 1 - CLI прогноз погоды
- [x] 09 Приложение 2 - API с ExpressJS
|
non_process
|
basics and two cli projects введение настройка окружения начало работы с node js как работает node js многопоточность движок node package manager приложение cli прогноз погоды приложение api с expressjs
| 0
|
6,561
| 9,648,874,835
|
IssuesEvent
|
2019-05-17 17:29:47
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Department of State: Change content on "Next Steps" page
|
Apply Process Approved Requirements Ready State Dept.
|
Who: Student
What: Clarify that references are not required
Why: As a student I want to know what is required in my application
A/C
- Step #2 should read
- Review your experience (this will be bold)
- Under this will be
- Tell us about your work, military, or other experience.
Note: In the mock it says " Review your experience and references" however we need to remove the word "references" Included it for your reference.
https://opm.invisionapp.com/d/main/#/console/15360465/319289299/preview
Public Link:https://opm.invisionapp.com/share/ZEPNZR09Q54
|
1.0
|
Department of State: Change content on "Next Steps" page - Who: Student
What: Clarify that references are not required
Why: As a student I want to know what is required in my application
A/C
- Step #2 should read
- Review your experience (this will be bold)
- Under this will be
- Tell us about your work, military, or other experience.
Note: In the mock it says " Review your experience and references" however we need to remove the word "references" Included it for your reference.
https://opm.invisionapp.com/d/main/#/console/15360465/319289299/preview
Public Link:https://opm.invisionapp.com/share/ZEPNZR09Q54
|
process
|
department of state change content on next steps page who student what clarify that references are not required why as a student i want to know what is required in my application a c step should read review your experience this will be bold under this will be tell us about your work military or other experience note in the mock it says review your experience and references however we need to remove the word references included it for your reference public link
| 1
|
132,335
| 5,183,494,477
|
IssuesEvent
|
2017-01-20 01:05:04
|
koding/koding
|
https://api.github.com/repos/koding/koding
|
closed
|
'Build Your Stack' Onboarding item keeps appearing for the members after reloading the page.
|
A-Bug Component-Onboarding Priority-Mid
|
## Expected Behavior
It shouldn't reappear after reloading the page.
## Current Behavior

http://recordit.co/qZCNHK7Kuy
When you open the onboarding modal it disappears again.
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/37062028-build-your-stack-onboarding-item-keeps-appearing-for-the-members-after-reloading-the-page?utm_campaign=plugin&utm_content=tracker%2F41989991&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F41989991&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
'Build Your Stack' Onboarding item keeps appearing for the members after reloading the page. - ## Expected Behavior
It shouldn't reappear after reloading the page.
## Current Behavior

http://recordit.co/qZCNHK7Kuy
When you open the onboarding modal it disappears again.
## <bountysource-plugin>
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/37062028-build-your-stack-onboarding-item-keeps-appearing-for-the-members-after-reloading-the-page?utm_campaign=plugin&utm_content=tracker%2F41989991&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F41989991&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
non_process
|
build your stack onboarding item keeps appearing for the members after reloading the page expected behavior it shouldn t reappear after reloading the page current behavior when you open the onboarding modal it disappears again want to back this issue we accept bounties via
| 0
|
19,850
| 14,658,581,373
|
IssuesEvent
|
2020-12-28 18:16:28
|
tlaplus/tlaplus
|
https://api.github.com/repos/tlaplus/tlaplus
|
closed
|
Bogus overwrite warning when clicking a .tla file in Finder or Explorer
|
Toolbox bug usability
|
MC: "[...] open an existing TLA file with the ToolBox without ignoring a warning that my file will be overwritten (which it isn’t)"
|
True
|
Bogus overwrite warning when clicking a .tla file in Finder or Explorer - MC: "[...] open an existing TLA file with the ToolBox without ignoring a warning that my file will be overwritten (which it isn’t)"
|
non_process
|
bogus overwrite warning when clicking a tla file in finder or explorer mc open an existing tla file with the toolbox without ignoring a warning that my file will be overwritten which it isn’t
| 0
|
14,906
| 18,292,935,367
|
IssuesEvent
|
2021-10-05 17:10:58
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Quark fails to scan with specific rule set
|
bug issue-processing-state-04
|
**Describe the bug**
The flag `-s` and `-d` can specify a rule set for analysis. However, such functions seem not to work correctly.
The reason is that Quark includes the built-in rules by default. When the user specifies a rule, Quark doesn't clean up the rule resources. That causes an unexpected analysis using both specific and built-in rule sets.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a rule named `test.json` in the working directory.
2. Run quark analysis with this command.
```
quark-engine -s ./test.json -a <PATH_TO_ANY_APK>
```
**Expected behavior**
Quark generates a summary report containing the specified rule only.
**Screenshots**

|
1.0
|
Quark fails to scan with specific rule set - **Describe the bug**
The flag `-s` and `-d` can specify a rule set for analysis. However, such functions seem not to work correctly.
The reason is that Quark includes the built-in rules by default. When the user specifies a rule, Quark doesn't clean up the rule resources. That causes an unexpected analysis using both specific and built-in rule sets.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a rule named `test.json` in the working directory.
2. Run quark analysis with this command.
```
quark-engine -s ./test.json -a <PATH_TO_ANY_APK>
```
**Expected behavior**
Quark generates a summary report containing the specified rule only.
**Screenshots**

|
process
|
quark fails to scan with specific rule set describe the bug the flag s and d can specify a rule set for analysis however such functions seem not to work correctly the reason is that quark includes the built in rules by default when the user specifies a rule quark doesn t clean up the rule resources that causes an unexpected analysis using both specific and built in rule sets to reproduce steps to reproduce the behavior create a rule named test json in the working directory run quark analysis with this command quark engine s test json a expected behavior quark generates a summary report containing the specified rule only screenshots
| 1
|
3,531
| 6,570,735,297
|
IssuesEvent
|
2017-09-10 03:37:37
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
Default files for ethprice and ethname
|
apps-ethPrice status-inprocess tools-ethName type-enhancement
|
It is probably better to copy in files to ethprice and ethName than create them on the fly. (1) faster, (2) easier to change
|
1.0
|
Default files for ethprice and ethname - It is probably better to copy in files to ethprice and ethName than create them on the fly. (1) faster, (2) easier to change
|
process
|
default files for ethprice and ethname it is probably better to copy in files to ethprice and ethname than create them on the fly faster easier to change
| 1
|
20,920
| 27,757,637,188
|
IssuesEvent
|
2023-03-16 04:51:19
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
ProcessPoolExecutor shutdown hangs after future cancel was requested
|
type-bug expert-multiprocessing
|
**Bug report**
With a ProcessPoolExecutor, after submitting and quickly canceling a future, a call to `shutdown(wait=True)` would hang indefinitely.
This happens pretty much on all platforms and all recent Python versions.
Here is a minimal reproduction:
```py
import concurrent.futures
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
ppe.shutdown(wait=True)
```
The first submission gets the executor going and creates its internal `queue_management_thread`.
The second submission appears to get that thread to loop, enter a wait state, and never receive a wakeup event.
Introducing a tiny sleep between the second submit and its cancel request makes the issue disappear. From my initial observation it looks like something in the way the `queue_management_worker` internal loop is structured doesn't handle this edge case well.
Shutting down with `wait=False` would return immediately as expected, but the `queue_management_thread` would then die with an unhandled `OSError: handle is closed` exception.
**Environment**
* Discovered on macOS-12.2.1 with cpython 3.8.5.
* Reproduced in Ubuntu and Windows (x64) as well, and in cpython versions 3.7 to 3.11.0-beta.3.
* Reproduced in pypy3.8 as well, but not consistently. Seen for example in Ubuntu with Python 3.8.13 (PyPy 7.3.9).
**Additional info**
When tested with `pytest-timeout` under Ubuntu and cpython 3.8.13, these are the tracebacks at the moment of timing out:
<details>
```pytb
_____________________________________ test _____________________________________
@pytest.mark.timeout(10)
def test():
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
> ppe.shutdown(wait=True)
test_reproduce_python_bug.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py:686: in shutdown
self._queue_management_thread.join()
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1011: in join
self._wait_for_tstate_lock()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Thread(QueueManagerThread, started daemon 140003176535808)>
block = True, timeout = -1
def _wait_for_tstate_lock(self, block=True, timeout=-1):
# Issue #18808: wait for the thread state to be gone.
# At the end of the thread's life, after all knowledge of the thread
# is removed from C data structures, C code releases our _tstate_lock.
# This method passes its arguments to _tstate_lock.acquire().
# If the lock is acquired, the C code is done, and self._stop() is
# called. That sets ._is_stopped to True, and ._tstate_lock to None.
lock = self._tstate_lock
if lock is None: # already determined that the C code is done
assert self._is_stopped
> elif lock.acquire(block, timeout):
E Failed: Timeout >10.0s
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1027: Failed
----------------------------- Captured stderr call -----------------------------
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~ Stack of QueueFeederThread (140003159754496) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/queues.py", line 227, in _feed
nwait()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 302, in wait
waiter.acquire()
~~~~~~~~~~~~~~~~ Stack of QueueManagerThread (140003176535808) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py", line 362, in _queue_management_worker
ready = mp.connection.wait(readers + worker_sentinels)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
```
</details>
Tracebacks in PyPy are similar on the `concurrent.futures.process` level. Tracebacks in Windows are different in the lower-level areas, but again similar on the `concurrent.futures.process` level.
Linked PRs:
- #94468
<!-- gh-linked-prs -->
### Linked PRs
* gh-94468
* gh-102746
* gh-102747
<!-- /gh-linked-prs -->
|
1.0
|
ProcessPoolExecutor shutdown hangs after future cancel was requested - **Bug report**
With a ProcessPoolExecutor, after submitting and quickly canceling a future, a call to `shutdown(wait=True)` would hang indefinitely.
This happens pretty much on all platforms and all recent Python versions.
Here is a minimal reproduction:
```py
import concurrent.futures
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
ppe.shutdown(wait=True)
```
The first submission gets the executor going and creates its internal `queue_management_thread`.
The second submission appears to get that thread to loop, enter a wait state, and never receive a wakeup event.
Introducing a tiny sleep between the second submit and its cancel request makes the issue disappear. From my initial observation it looks like something in the way the `queue_management_worker` internal loop is structured doesn't handle this edge case well.
Shutting down with `wait=False` would return immediately as expected, but the `queue_management_thread` would then die with an unhandled `OSError: handle is closed` exception.
**Environment**
* Discovered on macOS-12.2.1 with cpython 3.8.5.
* Reproduced in Ubuntu and Windows (x64) as well, and in cpython versions 3.7 to 3.11.0-beta.3.
* Reproduced in pypy3.8 as well, but not consistently. Seen for example in Ubuntu with Python 3.8.13 (PyPy 7.3.9).
**Additional info**
When tested with `pytest-timeout` under Ubuntu and cpython 3.8.13, these are the tracebacks at the moment of timing out:
<details>
```pytb
_____________________________________ test _____________________________________
@pytest.mark.timeout(10)
def test():
ppe = concurrent.futures.ProcessPoolExecutor(1)
ppe.submit(int).result()
ppe.submit(int).cancel()
> ppe.shutdown(wait=True)
test_reproduce_python_bug.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py:686: in shutdown
self._queue_management_thread.join()
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1011: in join
self._wait_for_tstate_lock()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Thread(QueueManagerThread, started daemon 140003176535808)>
block = True, timeout = -1
def _wait_for_tstate_lock(self, block=True, timeout=-1):
# Issue #18808: wait for the thread state to be gone.
# At the end of the thread's life, after all knowledge of the thread
# is removed from C data structures, C code releases our _tstate_lock.
# This method passes its arguments to _tstate_lock.acquire().
# If the lock is acquired, the C code is done, and self._stop() is
# called. That sets ._is_stopped to True, and ._tstate_lock to None.
lock = self._tstate_lock
if lock is None: # already determined that the C code is done
assert self._is_stopped
> elif lock.acquire(block, timeout):
E Failed: Timeout >10.0s
/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py:1027: Failed
----------------------------- Captured stderr call -----------------------------
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~ Stack of QueueFeederThread (140003159754496) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/queues.py", line 227, in _feed
nwait()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 302, in wait
waiter.acquire()
~~~~~~~~~~~~~~~~ Stack of QueueManagerThread (140003176535808) ~~~~~~~~~~~~~~~~~
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/concurrent/futures/process.py", line 362, in _queue_management_worker
ready = mp.connection.wait(readers + worker_sentinels)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/opt/hostedtoolcache/Python/3.8.13/x64/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
```
</details>
Tracebacks in PyPy are similar on the `concurrent.futures.process` level. Tracebacks in Windows are different in the lower-level areas, but again similar on the `concurrent.futures.process` level.
Linked PRs:
- #94468
<!-- gh-linked-prs -->
### Linked PRs
* gh-94468
* gh-102746
* gh-102747
<!-- /gh-linked-prs -->
|
process
|
processpoolexecutor shutdown hangs after future cancel was requested bug report with a processpoolexecutor after submitting and quickly canceling a future a call to shutdown wait true would hang indefinitely this happens pretty much on all platforms and all recent python versions here is a minimal reproduction py import concurrent futures ppe concurrent futures processpoolexecutor ppe submit int result ppe submit int cancel ppe shutdown wait true the first submission gets the executor going and creates its internal queue management thread the second submission appears to get that thread to loop enter a wait state and never receive a wakeup event introducing a tiny sleep between the second submit and its cancel request makes the issue disappear from my initial observation it looks like something in the way the queue management worker internal loop is structured doesn t handle this edge case well shutting down with wait false would return immediately as expected but the queue management thread would then die with an unhandled oserror handle is closed exception environment discovered on macos with cpython reproduced in ubuntu and windows as well and in cpython versions to beta reproduced in as well but not consistently seen for example in ubuntu with python pypy additional info when tested with pytest timeout under ubuntu and cpython these are the tracebacks at the moment of timing out pytb test pytest mark timeout def test ppe concurrent futures processpoolexecutor ppe submit int result ppe submit int cancel ppe shutdown wait true test reproduce python bug py opt hostedtoolcache python lib concurrent futures process py in shutdown self queue management thread join opt hostedtoolcache python lib threading py in join self wait for tstate lock self block true timeout def wait for tstate lock self block true timeout issue wait for the thread state to be gone at the end of the thread s life after all knowledge of the thread is removed from c data structures c code releases our tstate lock this method passes its arguments to tstate lock acquire if the lock is acquired the c code is done and self stop is called that sets is stopped to true and tstate lock to none lock self tstate lock if lock is none already determined that the c code is done assert self is stopped elif lock acquire block timeout e failed timeout opt hostedtoolcache python lib threading py failed captured stderr call timeout stack of queuefeederthread file opt hostedtoolcache python lib threading py line in bootstrap self bootstrap inner file opt hostedtoolcache python lib threading py line in bootstrap inner self run file opt hostedtoolcache python lib threading py line in run self target self args self kwargs file opt hostedtoolcache python lib multiprocessing queues py line in feed nwait file opt hostedtoolcache python lib threading py line in wait waiter acquire stack of queuemanagerthread file opt hostedtoolcache python lib threading py line in bootstrap self bootstrap inner file opt hostedtoolcache python lib threading py line in bootstrap inner self run file opt hostedtoolcache python lib threading py line in run self target self args self kwargs file opt hostedtoolcache python lib concurrent futures process py line in queue management worker ready mp connection wait readers worker sentinels file opt hostedtoolcache python lib multiprocessing connection py line in wait ready selector select timeout file opt hostedtoolcache python lib selectors py line in select fd event list self selector poll timeout timeout tracebacks in pypy are similar on the concurrent futures process level tracebacks in windows are different in the lower level areas but again similar on the concurrent futures process level linked prs linked prs gh gh gh
| 1
|
21,384
| 29,202,230,275
|
IssuesEvent
|
2023-05-21 00:37:17
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Hibrido / ] Fullstack Developer (Híbrido - Belo Horizonte) na Coodesh
|
SALVADOR PJ JAVA MYSQL JAVASCRIPT FULL-STACK PRIMEFACES JSF SPRING SQL GIT HIBERNATE MAVEN REST SOAP JSON ANGULAR REQUISITOS NGINX PROCESSOS INOVAÇÃO GITHUB APACHE UMA C DOCUMENTAÇÃO WILDFLY HTTP MANUTENÇÃO HIBRIDO ALOCADO Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-hibrido-belo-horizonte-195712919?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Prime Results </strong>está buscando <strong>Fullstack Developer</strong> para compor seu time!</p>
<p>Acreditamos no poder de transformação social realizado pelas empresas Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolvimento/implementação e manutenção de aplicações </li>
<li>Participar da análise e execução dos projetos e execução dos tickets;</li>
<li>Definir as atividades necessárias para a realização de projetos, analisando os impactos em sistemas e processos através do entendimento da necessidade, conhecimento técnico e arquitetônico dos sistemas;</li>
<li>Desenvolver códigos para atendimento às áreas e empresas clientes, proporcionando o esclarecimento de dúvidas relacionados ao projeto, contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade; </li>
<li>Participar das atividades de planejamento para a liberação do produto para homologação e produção, por meio da validação de testes de aceite, assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas;</li>
<li>Participar da rotina de SQUADs. </li>
</ul>
<p></p>
## Prime Results :
<p>O Best Seller Simon Sinek, diz que a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas. </p>
<p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a>
## Habilidades:
- Java
- Hibernate
- Angular
- Javascript
- JSON
- Apache
- MySQL
- Microsoft SQL Server
- Spring
## Local:
undefined
## Requisitos:
- Experiência em Java: JSF, Spring, PrimeFaces, Hibernate, JasperReports;
- Conhecimentos em modelagem e desenvolvimento de Bancos de Dados relacionais: MySQL, SQL Server;
- Conhecimentos em Tecnologias Web: HTML5, CSS e Frameworks JavaScript, Angular;
- Conhecimento de Arquiteturas Web e Serviços (HTTP, SOAP, REST ou JSON);
- Conhecimentos nas ferramentas: GIT e Maven;
- Conhecimentos técnicos em servidores de aplicação (Wildfly - J2EE), servidores web (Apache e NGINX) e Spring Boot.
## Benefícios:
- GymPass;
- Assistência Médica após o período de experiência.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer (Híbrido - Belo Horizonte) na Prime Results ](https://coodesh.com/vagas/fullstack-developer-hibrido-belo-horizonte-195712919?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
PJ
#### Categoria
Full-Stack
|
1.0
|
[Hibrido / ] Fullstack Developer (Híbrido - Belo Horizonte) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-hibrido-belo-horizonte-195712919?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Prime Results </strong>está buscando <strong>Fullstack Developer</strong> para compor seu time!</p>
<p>Acreditamos no poder de transformação social realizado pelas empresas Acreditamos no poder transformador das pessoas, aliado à gestão e tecnologia. Compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolvimento/implementação e manutenção de aplicações </li>
<li>Participar da análise e execução dos projetos e execução dos tickets;</li>
<li>Definir as atividades necessárias para a realização de projetos, analisando os impactos em sistemas e processos através do entendimento da necessidade, conhecimento técnico e arquitetônico dos sistemas;</li>
<li>Desenvolver códigos para atendimento às áreas e empresas clientes, proporcionando o esclarecimento de dúvidas relacionados ao projeto, contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade; </li>
<li>Participar das atividades de planejamento para a liberação do produto para homologação e produção, por meio da validação de testes de aceite, assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas;</li>
<li>Participar da rotina de SQUADs. </li>
</ul>
<p></p>
## Prime Results :
<p>O Best Seller Simon Sinek, diz que a maioria das empresas sabem o que fazem, porém não sabem por que o fazem. Não é o nosso caso. A Prime Results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade. Nossos clientes hoje, fazem a diferença na vida de mais de 250.000 brasileiros, nas áreas de proteção patrimonial, saúde e assistência 24 horas. </p>
<p>Nosso objetivo central é criar um ambiente criativo, dinâmico e engajado, sempre aliados a métodos, processos inteligentes e muita inovação.</p><a href='https://coodesh.com/empresas/prime-results'>Veja mais no site</a>
## Habilidades:
- Java
- Hibernate
- Angular
- Javascript
- JSON
- Apache
- MySQL
- Microsoft SQL Server
- Spring
## Local:
undefined
## Requisitos:
- Experiência em Java: JSF, Spring, PrimeFaces, Hibernate, JasperReports;
- Conhecimentos em modelagem e desenvolvimento de Bancos de Dados relacionais: MySQL, SQL Server;
- Conhecimentos em Tecnologias Web: HTML5, CSS e Frameworks JavaScript, Angular;
- Conhecimento de Arquiteturas Web e Serviços (HTTP, SOAP, REST ou JSON);
- Conhecimentos nas ferramentas: GIT e Maven;
- Conhecimentos técnicos em servidores de aplicação (Wildfly - J2EE), servidores web (Apache e NGINX) e Spring Boot.
## Benefícios:
- GymPass;
- Assistência Médica após o período de experiência.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer (Híbrido - Belo Horizonte) na Prime Results ](https://coodesh.com/vagas/fullstack-developer-hibrido-belo-horizonte-195712919?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Alocado
#### Regime
PJ
#### Categoria
Full-Stack
|
process
|
fullstack developer híbrido belo horizonte na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a prime results está buscando fullstack developer para compor seu time acreditamos no poder de transformação social realizado pelas empresas acreditamos no poder transformador das pessoas aliado à gestão e tecnologia compartilhamos nosso conhecimento para solucionar problemas complexos e gerar valor para nossos clientes responsabilidades desenvolvimento implementação e manutenção de aplicações nbsp participar da análise e execução dos projetos e execução dos tickets definir as atividades necessárias para a realização de projetos analisando os impactos em sistemas e processos através do entendimento da necessidade conhecimento técnico e arquitetônico dos sistemas desenvolver códigos para atendimento às áreas e empresas clientes proporcionando o esclarecimento de dúvidas relacionados ao projeto contribuindo para uma melhor análise de impactos de processos e sistemas sob sua responsabilidade nbsp participar das atividades de planejamento para a liberação do produto para homologação e produção por meio da validação de testes de aceite assim como documentação de não conformidades avaliando e planejando a execução das correções reportadas participar da rotina de squads nbsp prime results o best seller simon sinek diz que a maioria das empresas sabem o que fazem porém não sabem por que o fazem não é o nosso caso a prime results é uma empresa especializada em gestão organizacional que usa seu potencial de transformação em empresas que geram impacto positivo na sociedade nossos clientes hoje fazem a diferença na vida de mais de brasileiros nas áreas de proteção patrimonial saúde e assistência horas nbsp nosso objetivo central é criar um ambiente criativo dinâmico e engajado sempre aliados a métodos processos inteligentes e muita inovação habilidades java hibernate angular javascript json apache mysql microsoft sql server spring local undefined requisitos experiência em java jsf spring primefaces hibernate jasperreports conhecimentos em modelagem e desenvolvimento de bancos de dados relacionais mysql sql server conhecimentos em tecnologias web css e frameworks javascript angular conhecimento de arquiteturas web e serviços http soap rest ou json conhecimentos nas ferramentas git e maven conhecimentos técnicos em servidores de aplicação wildfly servidores web apache e nginx e spring boot benefícios gympass assistência médica após o período de experiência como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação alocado regime pj categoria full stack
| 1
|
2,286
| 5,110,502,604
|
IssuesEvent
|
2017-01-06 00:35:23
|
wpninjas/ninja-forms
|
https://api.github.com/repos/wpninjas/ninja-forms
|
closed
|
Can't add error to form (after submit) programmatically ...
|
FRONT: Processing
|
In v2.9.7 I used following to add error to form (during ninja_forms_preprocess hook)
```
add_action('ninja_forms_pre_process', 'dpcontent_pre_process_ninja_form');
function dpcontent_pre_process_ninja_form() {
global $ninja_forms_processing;
$ninja_forms_processing->add_error(1, 'Error submitting your enquiry');
```
How do add error to a form after submission in NF v3?
I currently have:
// NF3 Form processing:
```
add_action( 'ninja_forms_after_submission', 'dpcontent_ninja_forms_after_submission' );
function dpcontent_ninja_forms_after_submission($form_data) {
// Form processing settings
$form_processing_id = $form_data['form_id'];
$form = Ninja_Forms()->form($form_processing_id);
// Error in processing, how to add error to form? (no add_error(id, msg) method on NF_Form type)
...
```
|
1.0
|
Can't add error to form (after submit) programmatically ... - In v2.9.7 I used following to add error to form (during ninja_forms_preprocess hook)
```
add_action('ninja_forms_pre_process', 'dpcontent_pre_process_ninja_form');
function dpcontent_pre_process_ninja_form() {
global $ninja_forms_processing;
$ninja_forms_processing->add_error(1, 'Error submitting your enquiry');
```
How do add error to a form after submission in NF v3?
I currently have:
// NF3 Form processing:
```
add_action( 'ninja_forms_after_submission', 'dpcontent_ninja_forms_after_submission' );
function dpcontent_ninja_forms_after_submission($form_data) {
// Form processing settings
$form_processing_id = $form_data['form_id'];
$form = Ninja_Forms()->form($form_processing_id);
// Error in processing, how to add error to form? (no add_error(id, msg) method on NF_Form type)
...
```
|
process
|
can t add error to form after submit programmatically in i used following to add error to form during ninja forms preprocess hook add action ninja forms pre process dpcontent pre process ninja form function dpcontent pre process ninja form global ninja forms processing ninja forms processing add error error submitting your enquiry how do add error to a form after submission in nf i currently have form processing add action ninja forms after submission dpcontent ninja forms after submission function dpcontent ninja forms after submission form data form processing settings form processing id form data form ninja forms form form processing id error in processing how to add error to form no add error id msg method on nf form type
| 1
|
206,856
| 7,121,780,460
|
IssuesEvent
|
2018-01-19 09:18:45
|
0-complexity/openvcloud
|
https://api.github.com/repos/0-complexity/openvcloud
|
opened
|
OVC should select a second SR node to host new VM if the primary SRnode failed
|
priority_major type_bug
|
#### Detailed description
Since OVS is sending to OVC a list of all available SR regardless if they are up or not ( or even the node is up but the volume driver is down for any reason ).
OVC doesn't select a second SR node if first option ( least occupied node ) Failed to create the VM.
We suggest that if OVC didn't get a successful msg from OVS for creation , it tries to create on another node ( before sending a user message saying that there's no resources ) .
So end user doesn't see / feel this internal issue.
(Probably , to throw error message that the creation failed on node X , so monitoring team can look at it and fix the issue )
#### Steps to reproduce
#### Relevant stacktraces
|
1.0
|
OVC should select a second SR node to host new VM if the primary SRnode failed - #### Detailed description
Since OVS is sending to OVC a list of all available SR regardless if they are up or not ( or even the node is up but the volume driver is down for any reason ).
OVC doesn't select a second SR node if first option ( least occupied node ) Failed to create the VM.
We suggest that if OVC didn't get a successful msg from OVS for creation , it tries to create on another node ( before sending a user message saying that there's no resources ) .
So end user doesn't see / feel this internal issue.
(Probably , to throw error message that the creation failed on node X , so monitoring team can look at it and fix the issue )
#### Steps to reproduce
#### Relevant stacktraces
|
non_process
|
ovc should select a second sr node to host new vm if the primary srnode failed detailed description since ovs is sending to ovc a list of all available sr regardless if they are up or not or even the node is up but the volume driver is down for any reason ovc doesn t select a second sr node if first option least occupied node failed to create the vm we suggest that if ovc didn t get a successful msg from ovs for creation it tries to create on another node before sending a user message saying that there s no resources so end user doesn t see feel this internal issue probably to throw error message that the creation failed on node x so monitoring team can look at it and fix the issue steps to reproduce relevant stacktraces
| 0
|
9,532
| 2,615,155,539
|
IssuesEvent
|
2015-03-01 06:33:41
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
terms page
|
auto-migrated Milestone-4 Priority-Medium Type-Defect
|
```
it shows an error:
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 176, in get
self.render(template_path=path)
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 117, in render
'toc' : self.get_toc(template_path),
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 45, in get_toc
template_text = webapp.template.render(path, {});
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/template.py", line 81, in render
return t.render(Context(template_dict))
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/template.py", line 121, in wrap_render
return orig_render(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 168, in render
return self.nodelist.render(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 705, in render
bits.append(self.render_node(node, context))
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 718, in render_node
return(node.render(context))
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/loader_tags.py", line 63, in render
compiled_parent = self.get_parent(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/loader_tags.py", line 58, in get_parent
raise TemplateSyntaxError, "Template %r cannot be extended, because it doesn't exist" % parent
TemplateSyntaxError: Template 'base.html' cannot be extended, because it
doesn't exist
```
Original issue reported on code.google.com by `antonino...@gmail.com` on 16 Nov 2010 at 9:44
|
1.0
|
terms page - ```
it shows an error:
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 511, in __call__
handler.get(*groups)
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 176, in get
self.render(template_path=path)
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 117, in render
'toc' : self.get_toc(template_path),
File "/base/data/home/apps/html5rocks/3.345500695094989155/main.py", line 45, in get_toc
template_text = webapp.template.render(path, {});
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/template.py", line 81, in render
return t.render(Context(template_dict))
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/template.py", line 121, in wrap_render
return orig_render(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 168, in render
return self.nodelist.render(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 705, in render
bits.append(self.render_node(node, context))
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/__init__.py", line 718, in render_node
return(node.render(context))
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/loader_tags.py", line 63, in render
compiled_parent = self.get_parent(context)
File "/base/python_runtime/python_lib/versions/third_party/django-0.96/django/template/loader_tags.py", line 58, in get_parent
raise TemplateSyntaxError, "Template %r cannot be extended, because it doesn't exist" % parent
TemplateSyntaxError: Template 'base.html' cannot be extended, because it
doesn't exist
```
Original issue reported on code.google.com by `antonino...@gmail.com` on 16 Nov 2010 at 9:44
|
non_process
|
terms page it shows an error traceback most recent call last file base python runtime python lib versions google appengine ext webapp init py line in call handler get groups file base data home apps main py line in get self render template path path file base data home apps main py line in render toc self get toc template path file base data home apps main py line in get toc template text webapp template render path file base python runtime python lib versions google appengine ext webapp template py line in render return t render context template dict file base python runtime python lib versions google appengine ext webapp template py line in wrap render return orig render context file base python runtime python lib versions third party django django template init py line in render return self nodelist render context file base python runtime python lib versions third party django django template init py line in render bits append self render node node context file base python runtime python lib versions third party django django template init py line in render node return node render context file base python runtime python lib versions third party django django template loader tags py line in render compiled parent self get parent context file base python runtime python lib versions third party django django template loader tags py line in get parent raise templatesyntaxerror template r cannot be extended because it doesn t exist parent templatesyntaxerror template base html cannot be extended because it doesn t exist original issue reported on code google com by antonino gmail com on nov at
| 0
|
22,695
| 20,009,952,378
|
IssuesEvent
|
2022-02-01 04:20:42
|
halide/Halide
|
https://api.github.com/repos/halide/Halide
|
opened
|
Make Halide not blow up for novice users who don't write a schedule
|
usability autoscheduler error_message
|
Options:
- Raise a warning
- Throw an error (require at least one explicit `.inline()` or `output.compute_root()` to override)
- Have a built-in simple autoscheduler that's `-O0` or `-O1`
- Should always be trivial to run on any pipeline in any context with no extra work
- Don't use the complex autoscheduler plugin interface
- Don't require estimates
- Don't require loading modules
- Don't require generators
- Test aggressively to be sure it is robust to any input
- Call it something different than "autoscheduler" to set clear expectations
- `Pipeline::trivial_schedule` (or `schedule_trivially`)
- `Pipeline::default_schedule`
I lean towards combining options (2) & (3): error, plus trivial way to get past it by calling `trivial_schedule()`.
|
True
|
Make Halide not blow up for novice users who don't write a schedule - Options:
- Raise a warning
- Throw an error (require at least one explicit `.inline()` or `output.compute_root()` to override)
- Have a built-in simple autoscheduler that's `-O0` or `-O1`
- Should always be trivial to run on any pipeline in any context with no extra work
- Don't use the complex autoscheduler plugin interface
- Don't require estimates
- Don't require loading modules
- Don't require generators
- Test aggressively to be sure it is robust to any input
- Call it something different than "autoscheduler" to set clear expectations
- `Pipeline::trivial_schedule` (or `schedule_trivially`)
- `Pipeline::default_schedule`
I lean towards combining options (2) & (3): error, plus trivial way to get past it by calling `trivial_schedule()`.
|
non_process
|
make halide not blow up for novice users who don t write a schedule options raise a warning throw an error require at least one explicit inline or output compute root to override have a built in simple autoscheduler that s or should always be trivial to run on any pipeline in any context with no extra work don t use the complex autoscheduler plugin interface don t require estimates don t require loading modules don t require generators test aggressively to be sure it is robust to any input call it something different than autoscheduler to set clear expectations pipeline trivial schedule or schedule trivially pipeline default schedule i lean towards combining options error plus trivial way to get past it by calling trivial schedule
| 0
|
13,742
| 16,496,273,739
|
IssuesEvent
|
2021-05-25 10:41:54
|
keep-network/keep-core
|
https://api.github.com/repos/keep-network/keep-core
|
closed
|
Support rewards from the old operator contract in KEEP token dashboard
|
:old_key: token dashboard process & client team
|
Once we deploy a new operator contract we need to make sure stakers can see and withdraw rewards from the old operator contract version.
We can do it in the most user-friendly way by querying both the old and new operator contract and combining rewards on one page.
Later on, once all stakers withdraw their rewards, we'll remove the code querying the old operator contract.
|
1.0
|
Support rewards from the old operator contract in KEEP token dashboard - Once we deploy a new operator contract we need to make sure stakers can see and withdraw rewards from the old operator contract version.
We can do it in the most user-friendly way by querying both the old and new operator contract and combining rewards on one page.
Later on, once all stakers withdraw their rewards, we'll remove the code querying the old operator contract.
|
process
|
support rewards from the old operator contract in keep token dashboard once we deploy a new operator contract we need to make sure stakers can see and withdraw rewards from the old operator contract version we can do it in the most user friendly way by querying both the old and new operator contract and combining rewards on one page later on once all stakers withdraw their rewards we ll remove the code querying the old operator contract
| 1
|
18,186
| 10,217,700,271
|
IssuesEvent
|
2019-08-15 14:16:52
|
whitesource-yossi/npm-plugin3
|
https://api.github.com/repos/whitesource-yossi/npm-plugin3
|
opened
|
CVE-2011-3045 (Medium) detected in libpng-v1.2.2
|
security vulnerability
|
## CVE-2011-3045 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary>
<p>
<p>mirror of git://git.code.sf.net/p/libpng/code (mirror of the official repository)</p>
<p>Library home page: <a href=https://api.github.com/repos/miningathome/libpng>https://api.github.com/repos/miningathome/libpng</a></p>
<p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/npm-plugin3/commit/17c4f5082db21ae062ab0ca04afea5c034c60c6b">17c4f5082db21ae062ab0ca04afea5c034c60c6b</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (1)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /npm-plugin3/pngrutil.c
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Integer signedness error in the png_inflate function in pngrutil.c in libpng before 1.4.10beta01, as used in Google Chrome before 17.0.963.83 and other products, allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted PNG file, a different vulnerability than CVE-2011-3026.
<p>Publish Date: 2012-03-22
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3045>CVE-2011-3045</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-3045">https://nvd.nist.gov/vuln/detail/CVE-2011-3045</a></p>
<p>Release Date: 2012-03-22</p>
<p>Fix Resolution: libpng - 1.4.10beta01,Google Chrome - 17.0.963.83</p>
</p>
</details>
<p></p>
|
True
|
CVE-2011-3045 (Medium) detected in libpng-v1.2.2 - ## CVE-2011-3045 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary>
<p>
<p>mirror of git://git.code.sf.net/p/libpng/code (mirror of the official repository)</p>
<p>Library home page: <a href=https://api.github.com/repos/miningathome/libpng>https://api.github.com/repos/miningathome/libpng</a></p>
<p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/npm-plugin3/commit/17c4f5082db21ae062ab0ca04afea5c034c60c6b">17c4f5082db21ae062ab0ca04afea5c034c60c6b</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (1)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /npm-plugin3/pngrutil.c
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Integer signedness error in the png_inflate function in pngrutil.c in libpng before 1.4.10beta01, as used in Google Chrome before 17.0.963.83 and other products, allows remote attackers to cause a denial of service (application crash) or possibly execute arbitrary code via a crafted PNG file, a different vulnerability than CVE-2011-3026.
<p>Publish Date: 2012-03-22
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-3045>CVE-2011-3045</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2011-3045">https://nvd.nist.gov/vuln/detail/CVE-2011-3045</a></p>
<p>Release Date: 2012-03-22</p>
<p>Fix Resolution: libpng - 1.4.10beta01,Google Chrome - 17.0.963.83</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in libpng cve medium severity vulnerability vulnerable library mirror of git git code sf net p libpng code mirror of the official repository library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries npm pngrutil c vulnerability details integer signedness error in the png inflate function in pngrutil c in libpng before as used in google chrome before and other products allows remote attackers to cause a denial of service application crash or possibly execute arbitrary code via a crafted png file a different vulnerability than cve publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution libpng google chrome
| 0
|
154,520
| 24,309,333,495
|
IssuesEvent
|
2022-09-29 20:32:03
|
pwa-builder/PWABuilder
|
https://api.github.com/repos/pwa-builder/PWABuilder
|
opened
|
Add ability to add list of languages for windows package generation
|
feature request :mailbox_with_mail: design :art: needs triage :mag:
|
### Tell us about your feature idea
Currently our windows packaging UI only supports adding a single language supported by the app. But there are use cases for multiple languages, and the backend service already supports this. We need to add the ability to submitted comma separated languages to our service.
### Do you have an implementation or a solution in mind?
Something like a multi select dropdown will work.
### Have you considered any alternatives?
_No response_
|
1.0
|
Add ability to add list of languages for windows package generation - ### Tell us about your feature idea
Currently our windows packaging UI only supports adding a single language supported by the app. But there are use cases for multiple languages, and the backend service already supports this. We need to add the ability to submitted comma separated languages to our service.
### Do you have an implementation or a solution in mind?
Something like a multi select dropdown will work.
### Have you considered any alternatives?
_No response_
|
non_process
|
add ability to add list of languages for windows package generation tell us about your feature idea currently our windows packaging ui only supports adding a single language supported by the app but there are use cases for multiple languages and the backend service already supports this we need to add the ability to submitted comma separated languages to our service do you have an implementation or a solution in mind something like a multi select dropdown will work have you considered any alternatives no response
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.