Schema of the dataset preview (15 columns):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | 19 chars |
| repo | string | 7 to 112 chars |
| repo_url | string | 36 to 141 chars |
| action | string | 3 classes |
| title | string | 1 to 744 chars |
| labels | string | 4 to 574 chars |
| body | string | 9 to 211k chars |
| index | string | 10 classes |
| text_combine | string | 96 to 211k chars |
| label | string | 2 classes |
| text | string | 96 to 188k chars |
| binary_label | int64 | 0 to 1 |
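For orientation, a minimal sketch of loading and inspecting a dump with this schema in pandas; the file name is hypothetical, and the column names come from the schema above.

```python
import pandas as pd

# Hypothetical file name; the columns follow the schema above.
df = pd.read_csv("issues_events.csv")

print(df.dtypes)                          # matches the dtype column above
print(df["label"].value_counts())         # process vs. non_process
print(df["binary_label"].value_counts())  # 1 = process, 0 = non_process
```

Sample records follow, one field run per `|`-delimited line.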
| 53,102 | 27,970,871,636 | IssuesEvent | 2023-03-25 02:11:50 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | closed | debug: make DERP send frame (framePeerGone) as limited reply as "nobody's here" | T3 Performance/Debugging needs-fix |
Imagine node A is trying to reach node B.
Node A thinks that Node B was last homed at DERP region, say, 5.
But let's say that either Node B disappeared or the control server's broken or Node A's client is broken or... whatever. Assume that Node A can connect to DERP-5 and send frame type `frameSendPacket` to send to B, but then...
_nothing_
Currently we just do nothing (besides increment some varz in `(*Server).recordDrop`), and Node A is none the wiser that Node B isn't in that region.
It'd be nice to send a message (rate-limited) telling Node A that Node B's not there so Node A can locally log "oh, well that's messed up.", which would be useful during debugging connectivity problems.
But we already have `framePeerGone` used for DRPO (#150) that's very close to what we want. We could probably just reuse it, perhaps with a reason byte (reasons being: because they disconnected (the current use, when not listed), or because you tried to send to them, thinking they were here).
/cc @DentonGentry @crawshaw @danderson @maisem
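For illustration only, a minimal Python sketch of the reason-byte idea described above; the frame type value, frame layout, and key size are assumptions, not DERP's actual wire format.

```python
import enum
import struct

FRAME_PEER_GONE = 0x08  # hypothetical frame type value, not DERP's real one

class PeerGoneReason(enum.IntEnum):
    DISCONNECTED = 0  # the current use, when no reason is listed
    NOT_HERE = 1      # new: you sent to a peer that isn't on this server

def encode_peer_gone(peer_key: bytes, reason: PeerGoneReason) -> bytes:
    # Assumed layout: 1-byte frame type, 4-byte big-endian payload length,
    # then a 32-byte peer public key followed by a single reason byte.
    # A client that only reads the key would ignore the trailing byte,
    # which is what makes reusing the existing frame attractive.
    assert len(peer_key) == 32
    payload = peer_key + bytes([reason])
    return struct.pack("!BI", FRAME_PEER_GONE, len(payload)) + payload
```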
| True |
debug: make DERP send frame (framePeerGone) as limited reply as "nobody's here" - Imagine node A is trying to reach node B.
Node A thinks that Node B was last homed at DERP region, say, 5.
But let's say that either Node B disappeared or the control server's broken or Node A's client is broken or... whatever. Assume that Node A can connect to DERP-5 and send frame type `frameSendPacket` to send to B, but then...
_nothing_
Currently we just do nothing (besides increment some varz in `(*Server).recordDrop`), and Node A is none the wiser that Node B isn't in that region.
It'd be nice to send a message (rate-limited) telling Node A that Node B's not there so Node A can locally log "oh, well that's messed up.", which would be useful during debugging connectivity problems.
But we already have `framePeerGone` used for DRPO (#150) that's very close to what we want. We could probably just reuse it, perhaps with a reason byte (reasons being: because they disconnected (the current use, when not listed), or because you tried to send to them, thinking they were here).
/cc @DentonGentry @crawshaw @danderson @maisem
| non_process |
debug make derp send frame framepeergone as limited reply as nobody s here imagine node a is trying to reach node b node a thinks that node b was last homed at derp region say but let s say that either node b disappeared or the control server s broken or node a s client is broken or whatever assume that node a can connect to derp and send frame type framesendpacket to send to b but then nothing currently we just do nothing besides increment some varz in server recorddrop and node a is none the wiser that node b isn t in that region it d be nice to send a message rate limited telling node a that node b s not there so node a can locally log oh well that s messed up which would be useful during debugging connectivity problems but we already have framepeergone used for drpo that s very close to what we want we could probably just reuse it perhaps with a reason byte reasons being because they disconnected the current use when not listed or because you tried to send to them thinking they were here cc dentongentry crawshaw danderson maisem
| 0 |
| 809,427 | 30,192,592,380 | IssuesEvent | 2023-07-04 16:47:29 | Tau-ri-Dev/JSGMod-1.12.2 | https://api.github.com/repos/Tau-ri-Dev/JSGMod-1.12.2 | closed | Bug: Myst pages gates problem | Bug/Issue Medium priority Can not reproduce |
The gate immediately shuts down when dialing a generated gate for the first time:
1. Make a new gate with a mysterious page
2. Dial the Gate
3. Gate activates and closes immediately
4. Active sound still plays although the gate is closed
5. Dial again and everything works normally
**Expected behavior**
Normal dialing and normal opening
**Mod version**
jsg-1.12.2-4.11.1.0-pre4
**My whole JSG config (as .zip)**
normal autogenerated without any changes
| 1.0 |
Bug: Myst pages gates problem - The gate immediately shuts down when dialing a generated gate for the first time:
1. Make a new gate with a mysterious page
2. Dial the Gate
3. Gate activates and closes immediately
4. Active sound still plays although the gate is closed
5. Dial again and everything works normally
**Expected behavior**
Normal dialing and normal opening
**Mod version**
jsg-1.12.2-4.11.1.0-pre4
**My whole JSG config (as .zip)**
normal autogenerated without any changes
| non_process |
bug myst pages gates problem immediately shut down when dialing generated gate the first time make a new gate with mysterious page dial the gate gate activates and closes immediately active sound still plays although the gate is closed dial again and everything works normally expected behavior normal dialing and normal opening mod version jsg my whole jsg config as zip normal autogenerated without any changes
| 0 |
| 15,510 | 19,703,266,431 | IssuesEvent | 2022-01-12 18:52:17 | googleapis/java-contact-center-insights | https://api.github.com/repos/googleapis/java-contact-center-insights | opened | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint |
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'contact-center-insights' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
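For context, a hedged sketch of what such a local check does; the allowed `release_level` values here are an assumption for illustration, not the canonical list.

```python
import json

ALLOWED_RELEASE_LEVELS = {"stable", "preview"}  # assumed values, for illustration

def lint_repo_metadata(path: str = ".repo-metadata.json") -> list[str]:
    with open(path) as fh:
        meta = json.load(fh)
    problems = []
    if meta.get("release_level") not in ALLOWED_RELEASE_LEVELS:
        problems.append("release_level must be equal to one of the allowed values")
    if not meta.get("api_shortname"):
        # The real check validates the shortname against a service registry;
        # here we only confirm it is present and non-empty.
        problems.append("api_shortname missing or invalid")
    return problems
```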
| 1.0 |
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'contact-center-insights' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
| process |
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname contact center insights invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1 |
| 127,409 | 27,037,411,310 | IssuesEvent | 2023-02-12 23:09:43 | Unitystation-fork/Unitystation-Tutorial_Standalone | https://api.github.com/repos/Unitystation-fork/Unitystation-Tutorial_Standalone | closed | Organization of the dev branches | enhancement help wanted code |
DEV must not be touched and only serves as the base for the main team's work
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone
Same thing for release, which will only be used once to make our clean build
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/release
We must work only on the "TUTORIEL" branch
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial
So we need to check whether any modifications were made on "DEV" AND urgently move them to Tutorial
Merge the branches
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Henrique
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/73-add-room
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Guema/main
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/main-tree
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/%2340-add-room
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Kira/main/bot-teleport
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/UnityVersionUpdate
And future branches should be named /Tutorial/<xx-ticket issue>
| 1.0 |
Organization of the dev branches - DEV must not be touched and only serves as the base for the main team's work
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone
Same thing for release, which will only be used once to make our clean build
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/release
We must work only on the "TUTORIEL" branch
https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial
So we need to check whether any modifications were made on "DEV" AND urgently move them to Tutorial
Merge the branches
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Henrique
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/73-add-room
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Guema/main
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/main-tree
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/%2340-add-room
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/Kira/main/bot-teleport
[ ] https://github.com/Unitystation-fork/Unitystation-Tutorial_Standalone/tree/Tutorial-Develop/UnityVersionUpdate
And future branches should be named /Tutorial/<xx-ticket issue>
| non_process |
organization of the dev branches dev must not be touched and only serves as the base for the main team s work same thing for release which will only be used once to make our clean build we must work only on the tutoriel branch so we need to check whether any modifications were made on dev and urgently move them to tutorial merge the branches and future branches should be named tutorial
| 0 |
| 16,337 | 20,995,798,937 | IssuesEvent | 2022-03-29 13:24:52 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Use the `/q` flag with `cmd.exe` in `child_process.spawn()` | child_process windows feature request stale |
**Is your feature request related to a problem? Please describe.**
`child_process.spawn()` with `shell: true` on Windows [calls `cmd.exe /d /s /c`](https://github.com/nodejs/node/blob/master/lib/child_process.js#L486).
This makes `childProcess.stdout` [include the prompt and command](https://github.com/sindresorhus/execa/issues/116) with Batch files.
**Describe the solution you'd like**
Add the [`/q` flag](https://ss64.com/nt/cmd.html).
**Alternatives**
Adding `@echo off` to Batch files.
| 1.0 |
Use the `/q` flag with `cmd.exe` in `child_process.spawn()` - **Is your feature request related to a problem? Please describe.**
`child_process.spawn()` with `shell: true` on Windows [calls `cmd.exe /d /s /c`](https://github.com/nodejs/node/blob/master/lib/child_process.js#L486).
This makes `childProcess.stdout` [include the prompt and command](https://github.com/sindresorhus/execa/issues/116) with Batch files.
**Describe the solution you'd like**
Add the [`/q` flag](https://ss64.com/nt/cmd.html).
**Alternatives**
Adding `@echo off` to Batch files.
| process |
use the q flag with cmd exe in child process spawn is your feature request related to a problem please describe child process spawn with shell true on windows this makes childprocess stdout with batch files describe the solution you d like add the alternatives adding echo off to batch files
| 1 |
| 9,169 | 12,222,021,464 | IssuesEvent | 2020-05-02 11:08:36 | dotenv-linter/dotenv-linter | opened | Release v1.3.0 | process proposal | https://api.github.com/repos/dotenv-linter/dotenv-linter |
What should be included in the new release:
- [ ] New check: Trailing Whitespace (#150)
- [ ] New check: Newline at the end (#152)
- [ ] New check: Extra blank line (#167)
- [ ] New check: Quote character (#171)
- [ ] Ability to skip some checks (#168)
@mstruebing What do you think about it?
| 1.0 |
Release v1.3.0 - What should be included in the new release:
- [ ] New check: Trailing Whitespace (#150)
- [ ] New check: Newline at the end (#152)
- [ ] New check: Extra blank line (#167)
- [ ] New check: Quote character (#171)
- [ ] Ability to skip some checks (#168)
@mstruebing What do you think about it?
| process |
release what should be included in the new release new check trailing whitespace new check newline at the end new check extra blank line new check quote character ability to skip some checks mstruebing what do you think about it
| 1 |
| 137,858 | 30,768,050,401 | IssuesEvent | 2023-07-30 14:50:48 | neon-mmd/websurfx | https://api.github.com/repos/neon-mmd/websurfx | closed | 🔧 Cache next page on search | 💻 aspect: code 🟨 priority: medium ✨ goal: improvement 🏁 status: ready for dev 🔢 points: 5 |
## Why?
When searching, the backend only requests the current page, but it should also get the result for the next page. This would avoid extra loading time when going to the next page as results are already cached.
### How?
We can simply create an async task that runs after the request is done, which will fetch the results and cache them so that they can be served on request without delay.
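websurfx itself is written in Rust; purely as a language-neutral sketch of the idea, here it is in Python with asyncio, where `fetch_results` and the `cache` dict are hypothetical stand-ins for the project's own search and caching layers.

```python
import asyncio

async def handle_search(query: str, page: int, fetch_results, cache: dict):
    # Serve the requested page, from cache if possible.
    key = (query, page)
    if key not in cache:
        cache[key] = await fetch_results(query, page)
    results = cache[key]

    async def prefetch_next():
        nxt = (query, page + 1)
        if nxt not in cache:
            cache[nxt] = await fetch_results(query, page + 1)

    # Fire-and-forget: the response is returned without waiting for the
    # next page, which is fetched and cached in the background.
    asyncio.create_task(prefetch_next())
    return results
```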
| 1.0 |
🔧 Cache next page on search - ## Why?
When searching, the backend only requests the current page, but it should also get the result for the next page. This would avoid extra loading time when going to the next page as results are already cached.
### How?
We can simply create an async task that runs after the request is done, which will fetch the results and cache them so that they can be served on request without delay.
| non_process |
🔧 cache next page on search why when searching the backend only requests the current page but it should also get the result for the next page this would avoid extra loading time when going to the next page as results are already cached how we can simply create an async task that runs after the request is done which will fetch the results and cache them so that they can be served on request without delay
| 0 |
| 77,494 | 27,017,920,489 | IssuesEvent | 2023-02-10 21:23:46 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | closed | Threads: Long threads are slow to load with bubbles layout | T-Defect A-Message-Bubbles S-Major O-Occasional A-Threads Z-Labs Team: Delight App: Android |
### Steps to reproduce
1. Open a thread with 17+ threaded messages
### Outcome
#### What did you expect?
Can see messages instantly
#### What happened instead?
Messages are slow to load
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
1.4.34
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
No
| 1.0 |
Threads: Long threads are slow to load with bubbles layout - ### Steps to reproduce
1. Open a thread with 17+ threaded messages
### Outcome
#### What did you expect?
Can see messages instantly
#### What happened instead?
Messages are slow to load
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
1.4.34
### Homeserver
matrix.org
### Will you send logs?
No
### Are you willing to provide a PR?
No
| non_process |
threads long threads are slow to load with bubbles layout steps to reproduce open a thread with threaded messages outcome what did you expect can see messages instantly what happened instead messages are slow to load your phone model no response operating system version no response application version and app store homeserver matrix org will you send logs no are you willing to provide a pr no
| 0 |
| 4,832 | 7,725,982,734 | IssuesEvent | 2018-05-24 19:43:48 | kaching-hq/Privacy-and-Security | https://api.github.com/repos/kaching-hq/Privacy-and-Security | opened | Subprocessors: list and get DPAs | Processes |
- [ ] Investigate data flows to subprocessors and contact them
Get DPAs from:
- [ ] AWS
- [ ] MongoDB
- [ ] Logz.io
- [ ] Klarna Partnership
- [ ] Adyen
- [ ] Bambora
- [ ] Upgraded
| 1.0 |
Subprocessors: list and get DPAs - - [ ] Investigate data flows to subprocessors and contact them
Get DPAs from:
- [ ] AWS
- [ ] MongoDB
- [ ] Logz.io
- [ ] Klarna Partnership
- [ ] Adyen
- [ ] Bambora
- [ ] Upgraded
| process |
subprocessors list and get dpas investigate data flows to subprocessors and contact them get dpas from aws mongodb logz io klarna partnership adyen bambora upgraded
| 1 |
| 369,947 | 25,879,361,321 | IssuesEvent | 2022-12-14 10:11:24 | packit/packit | https://api.github.com/repos/packit/packit | opened | Provide a fast way to validate packit.yaml | user-experience documentation |
We already have this using the `validate-config` command:
```
$ packit validate-config
2022-12-14 11:09:48.242 validate_config.py INFO .packit.yaml is valid and ready to be used
```
However, folks may not be familiar with this command, or even with the Packit CLI in general. At a minimum, we should provide documentation on how to validate the config without the need to create a new PR, ideally doing it locally (pre-commit hook).
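As a sketch of the local route, a minimal pre-commit hook that shells out to the command shown above; treating a non-zero exit code as failure is an assumption about the CLI's behavior.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit: run `packit validate-config`
# before each commit and block the commit if validation fails.
import subprocess
import sys

result = subprocess.run(["packit", "validate-config"])
if result.returncode != 0:
    print("packit validate-config failed; fix .packit.yaml before committing.")
sys.exit(result.returncode)
```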
| 1.0 |
Provide a fast way to validate packit.yaml - We already have this using the `validate-config` command:
```
$ packit validate-config
2022-12-14 11:09:48.242 validate_config.py INFO .packit.yaml is valid and ready to be used
```
However, folks may not be familiar with this command, or even with the Packit CLI in general. At a minimum, we should provide documentation on how to validate the config without the need to create a new PR, ideally doing it locally (pre-commit hook).
| non_process |
provide a fast way to validate packit yaml we already have this using the validate config command packit validate config validate config py info packit yaml is valid and ready to be used although folks may not be familiar with this command or even packit cli in general at minimum we should provide documentation how to validate the config without the need to create a new pr ideally do it locally pre commit hook
| 0 |
| 18,830 | 24,734,171,791 | IssuesEvent | 2022-10-20 20:18:37 | anitsh/til | https://api.github.com/repos/anitsh/til | opened | Control Objectives for Information and Related Technology (COBIT) | process |
COBIT helps organisations meet business challenges in regulatory compliance, risk management and aligning IT strategy with organisational goals.
https://www.itgovernance.co.uk/cobit
https://www.sunrisesoftware.com/blog/cobit-vs-itil-understanding-the-different-frameworks/
https://www.comptia.org/blog/itsm-frameworks-explained-which-are-most-popular
| 1.0 |
Control Objectives for Information and Related Technology (COBIT) - COBIT helps organisations meet business challenges in regulatory compliance, risk management and aligning IT strategy with organisational goals.
https://www.itgovernance.co.uk/cobit
https://www.sunrisesoftware.com/blog/cobit-vs-itil-understanding-the-different-frameworks/
https://www.comptia.org/blog/itsm-frameworks-explained-which-are-most-popular
| process |
control objectives for information and related technology cobit cobit helps organisations meet business challenges in regulatory compliance risk management and aligning it strategy with organisational goals
| 1 |
| 16,909 | 22,218,469,260 | IssuesEvent | 2022-06-08 05:49:43 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Exporters catch up faster after restart or leader change | kind/feature scope/broker area/reliability team/distributed team/process-automation |
**Is your feature request related to a problem? Please describe.**
Each exporter reports the position of the last exported record back to the broker. The broker stores this position per exporter in the state (i.e. RocksDB). On a restart or leader change, it restores the last snapshot and loads the positions from the state. The exporters receive records that have a higher position compared to the state.
Since the positions are stored only in the state, it depends on the last snapshot of how many records the exporter receives again. If the last snapshot is up-to-date then the exporter receives no or just a few records again. If the last snapshot is a bit older or there are many records on the stream then the exporter receives many records again.
The behavior itself is correct because the exporters should be idempotent. However, it can impact the memory consumption of the broker (i.e. the log stream can't be compacted until the records are exported), and the integration of other services/applications that consumes from the exporter's sink (i.e. outdated until the exporter caught up).
**Describe the solution you'd like**
The positions of the exporters are also stored on the log stream. So, the positions are replicated to the followers like other records in a much faster way.
In order to reduce the overhead, the records with the positions can be written periodically (e.g. every minute) and should be filtered out for the exporters themselves.
**Describe alternatives you've considered**
Every exporter manages the position itself, outside of the broker. On restart or leader change, it restores the position and continues with the next record.
**Additional context**
This topic is relevant when building the state on followers. If we don't replicate the snapshots anymore (or less often) then we need a way to replicate/synchronize the exporter positions.
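A minimal sketch of the proposed solution, assuming a simple append-only log; the record shape and the periodic flush come from the description above, while the names and the log abstraction are hypothetical, not Zeebe's API.

```python
import time

EXPORTER_POSITION = "EXPORTER_POSITION"  # hypothetical record type

class PositionFlusher:
    """Periodically append exporter positions to the log stream itself,
    so followers replicate them like any other record."""

    def __init__(self, log: list, exporters, interval_s: float = 60.0):
        self.log = log
        self.exporters = exporters
        self.interval_s = interval_s
        self.last_flush = time.monotonic()

    def maybe_flush(self):
        now = time.monotonic()
        if now - self.last_flush >= self.interval_s:
            self.log.append({"type": EXPORTER_POSITION,
                             "positions": {e.id: e.position
                                           for e in self.exporters}})
            self.last_flush = now

def records_for_exporters(log):
    # Position records are filtered out before they reach the exporters.
    return (r for r in log if r.get("type") != EXPORTER_POSITION)
```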
| 1.0 |
Exporters catch up faster after restart or leader change - **Is your feature request related to a problem? Please describe.**
Each exporter reports the position of the last exported record back to the broker. The broker stores this position per exporter in the state (i.e. RocksDB). On a restart or leader change, it restores the last snapshot and loads the positions from the state. The exporters receive records that have a higher position compared to the state.
Since the positions are stored only in the state, it depends on the last snapshot of how many records the exporter receives again. If the last snapshot is up-to-date then the exporter receives no or just a few records again. If the last snapshot is a bit older or there are many records on the stream then the exporter receives many records again.
The behavior itself is correct because the exporters should be idempotent. However, it can impact the memory consumption of the broker (i.e. the log stream can't be compacted until the records are exported), and the integration of other services/applications that consumes from the exporter's sink (i.e. outdated until the exporter caught up).
**Describe the solution you'd like**
The positions of the exporters are also stored on the log stream. So, the positions are replicated to the followers like other records in a much faster way.
In order to reduce the overhead, the records with the positions can be written periodically (e.g. every minute) and should be filtered out for the exporters themselves.
**Describe alternatives you've considered**
Every exporter manages the position itself, outside of the broker. On restart or leader change, it restores the position and continues with the next record.
**Additional context**
This topic is relevant when building the state on followers. If we don't replicate the snapshots anymore (or less often) then we need a way to replicate/synchronize the exporter positions.
| process |
exporters catch up faster after restart or leader change is your feature request related to a problem please describe each exporter reports the position of the last exported record back to the broker the broker stores this position per exporter in the state i e rocksdb on a restart or leader change it restores the last snapshot and loads the positions from the state the exporters receive records that have a higher position compared to the state since the positions are stored only in the state it depends on the last snapshot of how many records the exporter receives again if the last snapshot is up to date then the exporter receives no or just a few records again if the last snapshot is a bit older or there are many records on the stream then the exporter receives many records again the behavior itself is correct because the exporters should be idempotent however it can impact the memory consumption of the broker i e the log stream can t be compacted until the records are exported and the integration of other services applications that consumes from the exporter s sink i e outdated until the exporter caught up describe the solution you d like the positions of the exporters are also stored on the log stream so the positions are replicated to the followers like other records in a much faster way in order to reduce the overhead the records with the positions can be written periodically e g every minute and should be filtered out for the exporters themselves describe alternatives you ve considered every exporter manages the position themself outside of the broker on restart or leader change it restores the position and continues with the next record additional context this topic is relevant when building the state on followers if we don t replicate the snapshots anymore or less often then we need a way to replicate synchronize the exporter positions
| 1 |
| 12,993 | 15,358,415,493 | IssuesEvent | 2021-03-01 14:47:15 | pollination/pollination-dsl | https://api.github.com/repos/pollination/pollination-dsl | closed | DAG Task generates duplicated output names | RFC/Discussion :speech_balloon: Tools & Process :wrench: |
The current iteration of this recipe generates duplicated task output names. This shouldn't be possible in theory so we should add some more validation on Queenbee's end to make sure it doesn't happen again. I noticed this as I was able to create and translate a job to send to Argo but then Argo throws an error :upside_down_face:
https://github.com/pollination/annual-radiation/blob/88a57f79ca8cb3c2829b6d3bfee229df322f26fa/pollination/annual_radiation/entry.py#L69-L87
This code generates this yaml definition (`sensor-grids-file` is duplicated):
```yaml
- type: DAGTask
annotations: {}
name: create-rad-folder
template: honeybee-radiance/create-radiance-folder
needs: []
arguments:
- type: TaskPathArgument
annotations: {}
name: input-model
from:
type: InputFileReference
annotations: {}
variable: model
sub_path: null
- type: TaskArgument
annotations: {}
name: sensor-grid
from:
type: InputReference
annotations: {}
variable: sensor-grid
loop: null
sub_folder: null
returns:
- type: TaskPathReturn
annotations: {}
name: model-folder
description: null
path: model
required: true
- type: TaskPathReturn
annotations: {}
name: sensor-grids-file
description: null
path: results/direct/grids_info.json
required: true
- type: TaskPathReturn
annotations: {}
name: sensor-grids-file
description: null
path: results/total/grids_info.json
required: true
- type: TaskReturn
annotations: {}
name: sensor-grids
description: Sensor grids information.
```
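Following the note above about adding validation on Queenbee's end, a minimal sketch of a uniqueness check over a task's returns; the dict layout mirrors the YAML above, and the function itself is hypothetical, not Queenbee's API.

```python
from collections import Counter

def duplicated_return_names(task: dict) -> list[str]:
    # For the task above this would return ['sensor-grids-file'].
    names = [ret["name"] for ret in task.get("returns", [])]
    return [name for name, count in Counter(names).items() if count > 1]
```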
| 1.0 |
DAG Task generates duplicated output names - The current iteration of this recipe generates duplicated task output names. This shouldn't be possible in theory so we should add some more validation on Queenbee's end to make sure it doesn't happen again. I noticed this as I was able to create and translate a job to send to Argo but then Argo throws an error :upside_down_face:
https://github.com/pollination/annual-radiation/blob/88a57f79ca8cb3c2829b6d3bfee229df322f26fa/pollination/annual_radiation/entry.py#L69-L87
This code generates this yaml definition (`sensor-grids-file` is duplicated):
```yaml
- type: DAGTask
annotations: {}
name: create-rad-folder
template: honeybee-radiance/create-radiance-folder
needs: []
arguments:
- type: TaskPathArgument
annotations: {}
name: input-model
from:
type: InputFileReference
annotations: {}
variable: model
sub_path: null
- type: TaskArgument
annotations: {}
name: sensor-grid
from:
type: InputReference
annotations: {}
variable: sensor-grid
loop: null
sub_folder: null
returns:
- type: TaskPathReturn
annotations: {}
name: model-folder
description: null
path: model
required: true
- type: TaskPathReturn
annotations: {}
name: sensor-grids-file
description: null
path: results/direct/grids_info.json
required: true
- type: TaskPathReturn
annotations: {}
name: sensor-grids-file
description: null
path: results/total/grids_info.json
required: true
- type: TaskReturn
annotations: {}
name: sensor-grids
description: Sensor grids information.
```
| process |
dag task generates duplicated output names the current iteration of this recipe generates duplicated task output names this shouldn t be possible in theory so we should add some more validation on queenbee s end to make sure it doesn t happen again i noticed this as i was able to create and translate a job to send to argo but then argo throws an error upside down face this code generates this yaml definition sensor grids file is duplicated yaml type dagtask annotations name create rad folder template honeybee radiance create radiance folder needs arguments type taskpathargument annotations name input model from type inputfilereference annotations variable model sub path null type taskargument annotations name sensor grid from type inputreference annotations variable sensor grid loop null sub folder null returns type taskpathreturn annotations name model folder description null path model required true type taskpathreturn annotations name sensor grids file description null path results direct grids info json required true type taskpathreturn annotations name sensor grids file description null path results total grids info json required true type taskreturn annotations name sensor grids description sensor grids information
| 1 |
| 9,152 | 12,203,358,221 | IssuesEvent | 2020-04-30 10:27:49 | MHRA/products | https://api.github.com/repos/MHRA/products | closed | AUTOMATIC BATCH PROCESS - Observability of Platform performance | EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: STORY :book: |
### User want
As a developer
I want to be able to view logs and create alerts
So I can debug and monitor the state of the system
This ticket is broken down into the following smaller ones
#523
#524
#525
**Customer acceptance criteria**
**Technical acceptance criteria**
All key parts of the system create logs in a centralised place.
A dashboard shows logs from the various components.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
L
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
| 1.0 |
AUTOMATIC BATCH PROCESS - Observability of Platform performance - ### User want
As a developer
I want to be able to view logs and create alerts
So I can debug and monitor the state of the system
This ticket is broken down into the following smaller ones
#523
#524
#525
**Customer acceptance criteria**
**Technical acceptance criteria**
All key parts of the system create logs in a centralised place.
A dashboard shows logs from the various components.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
L
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate
| process |
automatic batch process observability of platform performance user want as a developer i want to be able to view logs and create alerts so i can debug and monitor the state of the system this ticket is broken in the following small ones customer acceptance criteria technical acceptance criteria all key parts of the system create logs in a centralised place a dashboard shows logs from the various components data acceptance criteria testing acceptance criteria size l value effort exit criteria met backlog discovery duxd development quality assurance release and validate
| 1 |
| 365,473 | 25,537,463,442 | IssuesEvent | 2022-11-29 13:04:46 | abpframework/abp | https://api.github.com/repos/abpframework/abp | closed | Improve ASP.NET Boilerplate Migration Guide | abp-framework documentation priority:normal effort-sm |
Add UI part, add more samples and improve the current sections.
Related to https://github.com/abpframework/abp/issues/2120
| 1.0 |
Improve ASP.NET Boilerplate Migration Guide - Add UI part, add more samples and improve the current sections.
Related to https://github.com/abpframework/abp/issues/2120
| non_process |
improve asp net boilerplate migration guide add ui part add more samples and improve the current sections related to
| 0 |
| 9,300 | 12,310,440,701 | IssuesEvent | 2020-05-12 10:37:26 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Unable to run "Runbook launch" PowerShell | Pri2 automation/svc cxp process-automation/subsvc product-question triaged |
@piyo0320 commented 15 hours ago — with docs.microsoft.com
I ran the PowerShell found in "Runbook Run", but it fails with the following error:
Start-AzAutomationRunbook: Object reference not set to an instance of an object.
Occurrence line: 1 character: 8
$job = Start-AzAutomationRunbook @startParams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CategoryInfo: CloseError: (:) [Start-AzAutomationRunbook], NullReferenceException
FullyQualifiedErrorId: Microsoft.Azure.Commands.Automation.Cmdlet.StartAzureAutomationRunbook
When I checked $startParams and $startParams.Parameters on the console, they seemed to have the expected values.
Is there a possibility of execution depending on the environment?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ea5e93-b220-57c1-b011-865b7da3c5a6
* Version Independent ID: 179a79da-0e12-716e-725e-9db6b8f20282
* Content: [Deploy an Azure Resource Manager template in an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-deploy-template-runbook#feedback)
* Content Source: [articles/automation/automation-deploy-template-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-deploy-template-runbook.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
| 1.0 |
Unable to run "Runbook launch" PowerShell - @piyo0320 commented 15 hours ago — with docs.microsoft.com
I ran the PowerShell found in "Runbook Run", but it fails with the following error:
Start-AzAutomationRunbook: Object reference not set to an instance of an object.
Occurrence line: 1 character: 8
$job = Start-AzAutomationRunbook @startParams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CategoryInfo: CloseError: (:) [Start-AzAutomationRunbook], NullReferenceException
FullyQualifiedErrorId: Microsoft.Azure.Commands.Automation.Cmdlet.StartAzureAutomationRunbook
When I checked $startParams and $startParams.Parameters on the console, they seemed to have the expected values.
Is there a possibility of execution depending on the environment?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ea5e93-b220-57c1-b011-865b7da3c5a6
* Version Independent ID: 179a79da-0e12-716e-725e-9db6b8f20282
* Content: [Deploy an Azure Resource Manager template in an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-deploy-template-runbook#feedback)
* Content Source: [articles/automation/automation-deploy-template-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-deploy-template-runbook.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
| process |
unable to run runbook launch powershell commented hours ago — with docs microsoft com i ran the powershell found in runbook run but it fails with the following error start azautomationrunbook object reference not set on object instance occurrence line character job start azautomationrunbook startparams categoryinfo closeerror nullreferenceexception fullyqualifiederrorid microsoft azure commands automation cmdlet startazureautomationrunbook when i checked startparams and startparams parameters on the console they seemed to have the expected values is there a possibility of execution depending on the environment document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1 |
| 3,040 | 6,039,921,624 | IssuesEvent | 2017-06-10 08:56:49 | triplea-game/triplea | https://api.github.com/repos/triplea-game/triplea | closed | Feature Backlog | discussion type: process |
We are tracking a large number of feature requests and have not been able to reduce their number off of the queue very effectively. I closed a number of issues I had opened with wish-list feature requests and moved them to a one-liner bullet point here: https://github.com/triplea-game/triplea/wiki/Feature-Back-Log
I suggest we update our feature request process to be this:
1. open an issue to talk about it
2. once the idea for the feature has been refined and the developers have given some insight into whether the idea is feasible, we then move the issue to the feature backlog wiki page.
| 1.0 |
Feature Backlog - We are tracking a large number of feature requests and have not been able to reduce their number off of the queue very effectively. I closed a number of issues I had opened with wish-list feature requests and moved them to a one-liner bullet point here: https://github.com/triplea-game/triplea/wiki/Feature-Back-Log
I suggest we update our feature request process to be this:
1. open an issue to talk about it
2. once the idea for the feature has been refined and the developers have given some insight into whether the idea is feasible, we then move the issue to the feature backlog wiki page.
| process |
feature backlog we are tracking a large number of feature requests and have not been able to reduce their number off of the queue very effectively i closed a number of issues i had opened with wish list feature requests and moved them to a one liner bullet point here i suggest we update our feature request process to be this open an issue to talk about it once the idea for the feature has been refined and the developers have given some insight if the idea is feasible we then move the issue to the feature back log wiki page
| 1 |
| 8,781 | 11,902,067,067 | IssuesEvent | 2020-03-30 13:27:22 | OCFL/spec | https://api.github.com/repos/OCFL/spec | opened | Move error codes from wiki into main git repo, with versioning | OCFL Object Process/Extensions/Related |
Move content from the wiki https://github.com/OCFL/spec/wiki/OCFL-Validator-Codes into the main git repo, under the `draft` directory so that it will then be versioned as we version the spec
| 1.0 |
Move error codes from wiki into main git repo, with versioning - Move content from the wiki https://github.com/OCFL/spec/wiki/OCFL-Validator-Codes into the main git repo, under the `draft` directory so that it will then be versioned as we version the spec
| process |
move error codes from wiki into main git repo with versioning move content from the wiki into the main git repo under the draft directory so that it will then be versioned as we version the spec
| 1 |
| 47,516 | 13,056,218,005 | IssuesEvent | 2020-07-30 04:01:34 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | cmake compiler detection on Mac OS X (XCode>=4.3) (Trac #666) | Migrated from Trac defect tools/ports |
The CMake compiler detection [in CMake 2.8.5 and 2.8.7] is not consistent for C and C++ code:
Compilers will be checked in a defined order, with "c++" being the first one and "g++" the secondary option for C++ compilers.
For C code, the order is reversed: CMake first looks for "gcc", then for "cc".
This makes IceTray on Mac OS X with XCode >=4.3 (where Apple changed the default compiler to Clang) think that all code is being compiled with Clang, yet C code will be compiled with gcc (which still defaults to the llvm-gcc compiler instead of clang).
This breaks some assumptions when compiler options are set in icetray/cmake.
To fix this, I think the file $I3_PORTS/share/cmake-2.8.7/Modules/CMakeDetermineCCompiler.cmake should be patched @ line 59 to change
```text
SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}gcc ${_CMAKE_TOOLCHAIN_PREFIX}cc cl bcc xlc)
```
into
```text
SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}cc ${_CMAKE_TOOLCHAIN_PREFIX}gcc cl bcc xlc)
```
[compare to CMakeDetermineCXXCompiler.cmake, where the gcc/cc order is reversed]
Migrated from https://code.icecube.wisc.edu/ticket/666
```json
{
"status": "closed",
"changetime": "2014-12-04T20:18:27",
"description": "The CMake compiler detection [in cake 2.8.5 and 2.8.7] is not consistent for C and C++ code:\n\nCompilers will be checked in a defined order, with \"c++\" being the first one and \"g++\" the secondary option for C++ compilers.\nFor C code, the order is reversed: CMake first looks for \"gcc\", then for \"cc\".\n\nThis makes !IceTray on Mac OS X with XCode >=4.3 (where Apple changed the default compiler to Clang) think that all code is being compiled with Clang, yet C-code will be compiled with gcc (which still defaults to the llvm-gcc compiler instead of clang).\nThis breaks some assumptions when compiler options are set in icetray/cmake.\n\nTo fix this, I think the file $I3_PORTS/share/cmake-2.8.7/Modules/CMakeDetermineCCompiler.cmake should be patched @ line 59 to change\n{{{\n SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}gcc ${_CMAKE_TOOLCHAIN_PREFIX}cc cl bcc xlc)\n}}}\ninto \n{{{\n SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}cc ${_CMAKE_TOOLCHAIN_PREFIX}gcc cl bcc xlc)\n}}}\n\n\n[compare to CMakeDetermineCXXCompiler.cmake, where the gcc/cc order is reversed]\n\n",
"reporter": "claudio.kopper",
"cc": "claudio.kopper",
"resolution": "wontfix",
"_ts": "1417724307757066",
"component": "tools/ports",
"summary": "cmake compiler detection on Mac OS X (XCode>=4.3)",
"priority": "normal",
"keywords": "clang gcc cc apple mac os cake",
"time": "2012-02-17T14:31:34",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
| 1.0 |
cmake compiler detection on Mac OS X (XCode>=4.3) (Trac #666) - The CMake compiler detection [in CMake 2.8.5 and 2.8.7] is not consistent for C and C++ code:
Compilers will be checked in a defined order, with "c++" being the first one and "g++" the secondary option for C++ compilers.
For C code, the order is reversed: CMake first looks for "gcc", then for "cc".
This makes IceTray on Mac OS X with XCode >=4.3 (where Apple changed the default compiler to Clang) think that all code is being compiled with Clang, yet C code will be compiled with gcc (which still defaults to the llvm-gcc compiler instead of clang).
This breaks some assumptions when compiler options are set in icetray/cmake.
To fix this, I think the file $I3_PORTS/share/cmake-2.8.7/Modules/CMakeDetermineCCompiler.cmake should be patched @ line 59 to change
```text
SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}gcc ${_CMAKE_TOOLCHAIN_PREFIX}cc cl bcc xlc)
```
into
```text
SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}cc ${_CMAKE_TOOLCHAIN_PREFIX}gcc cl bcc xlc)
```
[compare to CMakeDetermineCXXCompiler.cmake, where the gcc/cc order is reversed]
Migrated from https://code.icecube.wisc.edu/ticket/666
```json
{
"status": "closed",
"changetime": "2014-12-04T20:18:27",
"description": "The CMake compiler detection [in cake 2.8.5 and 2.8.7] is not consistent for C and C++ code:\n\nCompilers will be checked in a defined order, with \"c++\" being the first one and \"g++\" the secondary option for C++ compilers.\nFor C code, the order is reversed: CMake first looks for \"gcc\", then for \"cc\".\n\nThis makes !IceTray on Mac OS X with XCode >=4.3 (where Apple changed the default compiler to Clang) think that all code is being compiled with Clang, yet C-code will be compiled with gcc (which still defaults to the llvm-gcc compiler instead of clang).\nThis breaks some assumptions when compiler options are set in icetray/cmake.\n\nTo fix this, I think the file $I3_PORTS/share/cmake-2.8.7/Modules/CMakeDetermineCCompiler.cmake should be patched @ line 59 to change\n{{{\n SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}gcc ${_CMAKE_TOOLCHAIN_PREFIX}cc cl bcc xlc)\n}}}\ninto \n{{{\n SET(CMAKE_C_COMPILER_LIST ${_CMAKE_TOOLCHAIN_PREFIX}cc ${_CMAKE_TOOLCHAIN_PREFIX}gcc cl bcc xlc)\n}}}\n\n\n[compare to CMakeDetermineCXXCompiler.cmake, where the gcc/cc order is reversed]\n\n",
"reporter": "claudio.kopper",
"cc": "claudio.kopper",
"resolution": "wontfix",
"_ts": "1417724307757066",
"component": "tools/ports",
"summary": "cmake compiler detection on Mac OS X (XCode>=4.3)",
"priority": "normal",
"keywords": "clang gcc cc apple mac os cake",
"time": "2012-02-17T14:31:34",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
| non_process |
cmake compiler detection on mac os x xcode trac the cmake compiler detection is not consistent for c and c code compilers will be checked in a defined order with c being the first one and g the secondary option for c compilers for c code the order is reversed cmake first looks for gcc then for cc this makes icetray on mac os x with xcode where apple changed the default compiler to clang think that all code is being compiled with clang yet c code will be compiled with gcc which still defaults to the llvm gcc compiler instead of clang this breaks some assumptions when compiler options are set in icetray cmake to fix this i think the file ports share cmake modules cmakedetermineccompiler cmake should be patched line to change text set cmake c compiler list cmake toolchain prefix gcc cmake toolchain prefix cc cl bcc xlc into text set cmake c compiler list cmake toolchain prefix cc cmake toolchain prefix gcc cl bcc xlc migrated from json status closed changetime description the cmake compiler detection is not consistent for c and c code n ncompilers will be checked in a defined order with c being the first one and g the secondary option for c compilers nfor c code the order is reversed cmake first looks for gcc then for cc n nthis makes icetray on mac os x with xcode where apple changed the default compiler to clang think that all code is being compiled with clang yet c code will be compiled with gcc which still defaults to the llvm gcc compiler instead of clang nthis breaks some assumptions when compiler options are set in icetray cmake n nto fix this i think the file ports share cmake modules cmakedetermineccompiler cmake should be patched line to change n n set cmake c compiler list cmake toolchain prefix gcc cmake toolchain prefix cc cl bcc xlc n ninto n n set cmake c compiler list cmake toolchain prefix cc cmake toolchain prefix gcc cl bcc xlc n n n n n n reporter claudio kopper cc claudio kopper resolution wontfix ts component tools ports summary cmake compiler detection on mac os x xcode priority normal keywords clang gcc cc apple mac os cake time milestone owner nega type defect
| 0 |
| 11,411 | 14,241,587,487 | IssuesEvent | 2020-11-18 23:44:41 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Hi Team, you need to update https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/DataReport-Utils since it is not working and many customers asked about it | cxp doc-bug machine-learning/svc team-data-science-process/subsvc triaged |
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 55b93c5d-758b-d3ea-0165-64869100bba6
* Version Independent ID: 7c801950-0b7f-aaef-559e-6b7e196fba7d
* Content: [Data acquisition and understanding of Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/lifecycle-data?source=docs)
* Content Source: [articles/machine-learning/team-data-science-process/lifecycle-data.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/team-data-science-process/lifecycle-data.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
| 1.0 |
Hi Team, you need to update https://github.com/Azure/Azure-TDSP-Utilities/tree/master/DataScienceUtilities/DataReport-Utils since it is not working and many customers asked about it -
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 55b93c5d-758b-d3ea-0165-64869100bba6
* Version Independent ID: 7c801950-0b7f-aaef-559e-6b7e196fba7d
* Content: [Data acquisition and understanding of Team Data Science Process](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/lifecycle-data?source=docs)
* Content Source: [articles/machine-learning/team-data-science-process/lifecycle-data.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/team-data-science-process/lifecycle-data.md)
* Service: **machine-learning**
* Sub-service: **team-data-science-process**
* GitHub Login: @marktab
* Microsoft Alias: **tdsp**
| process |
hi team you need to update since it is not working and many customers asked about it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id aaef content content source service machine learning sub service team data science process github login marktab microsoft alias tdsp
| 1 |
| 17,466 | 12,060,974,220 | IssuesEvent | 2020-04-15 22:26:34 | verilator/verilator | https://api.github.com/repos/verilator/verilator | closed | Add --build option to start C++ compilation right after verilation | area: usability status: assigned |
This is a byproduct of #2206 as suggested in https://github.com/verilator/verilator/pull/2206#pullrequestreview-387813514 .
Before writing code, I'd like to have the same understanding of this feature.
#### What it does
- verilator calls make at the end of verilation (just a shortcut for the user)
#### Options
- `--build` : takes no arguments
- `-j` : parallelism for make. similar to what GNU make interprets
- `-j` : unlimited parallelism
- `-jN` : Limit maximum process to `N` (where N >= 1)
- `-j N` : Limit maximum process to `N` (where N >= 1)
- `-MAKEFLAGS` _string_ : _string_ is directly passed to make or cmake
- `--make` {cmake | gmake | make} : which command is used to build. Both gmake and make mean GNU make. If `gmake` is specified, `system("gmake -f ... -C ...")` is called. If `make` is specified, `system("make -f ... -C ...")` is executed instead. If `cmake` is specified, `system("cmake --build ...")` is used.
#### Questions
1. In https://github.com/verilator/verilator/pull/2206#pullrequestreview-387813514, @wsnyder wrote `number_might_be_zero`. I don't think `-j 0` is valid for GNU make. What is the desired behavior for `-j 0` ?
| True |
Add --build option to start C++ compilation right after verilation - This is a byproduct of #2206 as suggested in https://github.com/verilator/verilator/pull/2206#pullrequestreview-387813514 .
Before writing code, I'd like to have the same understanding of this feature.
#### What it does
- verilator calls make at the end of verilation (just a shortcut for the user)
#### Options
- `--build` : takes no arguments
- `-j` : parallelism for make. similar to what GNU make interprets
- `-j` : unlimited parallelism
- `-jN` : Limit maximum process to `N` (where N >= 1)
- `-j N` : Limit maximum process to `N` (where N >= 1)
- `-MAKEFLAGS` _string_ : _string_ is directly passed to make or cmake
- `--make` {cmake | gmake | make} : which command is used to build. Both gmake and make mean GNU make. If `gmake` is specified, `system("gmake -f ... -C ...")` is called. If `make` is specified, `system("make -f ... -C ...")` is executed instead. If `cmake` is specified, `system("cmake --build ...")` is used.
#### Questions
1. In https://github.com/verilator/verilator/pull/2206#pullrequestreview-387813514, @wsnyder wrote `number_might_be_zero`. I don't think `-j 0` is valid for GNU make. What is the desired behavior for `-j 0` ?
| non_process |
add build option to start c compilation right after verilation this is a byproduct of as suggested in before writing code i d like to have the same understanding of this feature what it does verilator calls make at the end of verilation just a shortcut for the user options build takes no arguments j parallelism for make similar to what gnu make interprets j unlimited parallelism jn limit maximum process to n where n j n limit maximum process to n where n makeflags string string is directly passed to make or cmake make cmake gmake make which command is used to build both gmake and make mean gnu make if gmake is specified system gmake f c is called if make is specified system make f c is executed instead if cmake is specified system cmake build is used questions in wsnyder wrote number might be zero i don t think j is valid for gnu make what is the desired behavior for j
| 0 |
| 19,181 | 25,291,055,613 | IssuesEvent | 2022-11-17 00:17:08 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Split by line should be Split | Processing Bug |
### What is the bug or the crash?
The Split by line processing alg should be able to handle polygons as well. I am filing it as a bug, hoping it can be added to 3.28!
### Steps to reproduce the issue
1- open two polygon layers
2- try to split one with another using Split by line
3- you need to convert one to a line to do that
### Versions
3.26
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
| 1.0 |
Split by line should be Split - ### What is the bug or the crash?
The Split by line processing alg should be able to handle polygons as well. I am filing it as a bug, hoping it can be added to 3.28!
### Steps to reproduce the issue
1- open two polygon layers
2- try to split one with another using Split by line
3- you need to convert one to a line to do that
### Versions
3.26
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
| process |
split by line should be split what is the bug or the crash split by line processing alg should be able to handle polygons as well i am filing it as a bug hoping it can be added to steps to reproduce the issue open two polygon layers try to split one with another using split by line you need to convert one to a line to do that versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1 |
| 9,853 | 12,842,928,144 | IssuesEvent | 2020-07-08 03:18:51 | OUDcollective/Quantified-Self | https://api.github.com/repos/OUDcollective/Quantified-Self | opened | Duns & Bradstreet | Creative Strategy Git Gud Leadership and Development Machines Learning conscience hacking help wanted institutional stigmatization process implementation unconscious bias |

## On Chasing the Wind, LLC
`
Categorical Associative Algorithms – Altruism
`<br>
> Dangerous games being played by
big tech
big corporate best interest...
and what do we have here?
big DUN DUN DUN -d'uh- S
SMB credit rates– who's that?
'with?'
'with...are you with me?'
'with!'
Oh yeah... what's up... Wind was late swinging in.
Hey.. that's 'Chasing!
What's Chasing doing here?
'In the damn midst'?
I told the whole earth - even back-end;
pandemic 'forced states'
now DBA
with Wind
Y'all leave 'Chasing' alone and
out of the midst!
End of discussion.
music fades.
lights above begin to flicker as
-whoosh- big gust -blows-
screen black
and... Cut!
Great scene team.
'Sorry with'... ok.. ok... I am sorry. no more. Yes 'chasing is safe. said 'Gale' came and they were going back to Unit B
if you need them; do I even want to ask wh- ah neverming.
It's hard enough processing pandemic, now we've had things like - what 4 walking, talking, living, beings
manifest from - poof - literally nothing.
'sorry... you're right.... I'm sorry I mis-spoke. From dust.
As I am a witness to that damn literal 'dust in the wind'
don't know if i ever be wound all the way in after seeing
```
...
..
.
```<br>
---
and
```
...
..
.
```<br>
ACTION
- poof -
'shhh! chasing is crazy, let's leave it at that before with catches
-whoosh-
// ohh.. fuuc....
- poof -
fade to black
## with (the) Wind
---
`
hasn't been seen since that ominous last scene y'all.
`<br>
**POSSIBILITIES PONTIFICATIONS**
9114 Central Avenue SW *Unit B* 30014
italicized and pointed to while not yet able
to point you all the way to the whole picture
story ... maybe not.. maybe never.
just remember ... it is in italics...
Best,
x.__________
with Wind
> algorithm, keys got 'tensor' shh... did you -poof- just happen to hear with Wind whisperings? ohh....
( to be continued )
---
---
**Source URL**:
[https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html](https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>1920x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>1920x888</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
1.0
|
Duns & Bradstreet - 
## On Chasing the Wind, LLC
`
Categorical Associative Algorithms – Altruism
`<br>
> Dangerous games being played by
big tech
big corporate best interest...
and what do we have here?
big DUN DUN DUN -d'uh- S
SMB credit rates– who's that?
'with?'
'with...are you with me?'
'with!'
Oh yeah... what's up... Wind was late swinging in.
Hey.. that's 'Chasing!
What's Chasing doing here?
'In the damn midst'?
I told the whole earth - even back-end;
pandemic 'forced states'
now DBA
with Wind
Y'all leave 'Chasing' alone and
out of the midst!
End of discussion.
music fades.
lights above begin to flicker as
-whoosh- big gust -blows-
screen black
and... Cut!
Great scene team.
'Sorry with'... ok.. ok... I am sorry. no more. Yes 'chasing is safe. said 'Gale' came and they were going back to Unit B
if you need them; do I even want to ask wh- ah neverming.
It's hard enough processing pandemic, now we've had things like - what 4 walking, talking, living, beings
manifest from - poof - literally nothing.
'sorry... you're right.... I'm sorry I mis-spoke. From dust.
As I am a witness to that damn literal 'dust in the wind'
don't know if i ever be wound all the way in after seeing
```
...
..
.
```<br>
---
and
```
...
..
.
```<br>
ACTION
- poof -
'shhh! chasing is crazy, let's leave it at that before with catches
-whoosh-
// ohh.. fuuc....
- poof -
fade to black
## with (the) Wind
---
`
hasn't been seen since that ominous last scene y'all.
`<br>
**POSSIBILITIES PONTIFICATIONS**
9114 Central Avenue SW *Unit B* 30014
italicized and pointed to while not yet able
to point you all the way to the whole picture
story ... maybe not.. maybe never.
just remember ... it is in italics...
Best,
x.__________
with Wind
> algorithm, keys got 'tensor' shh... did you -poof- just happen to hear with Wind whisperings? ohh....
( to be continued )
---
---
**Source URL**:
[https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html](https://www.dnb.com/business-directory/company-profiles.chasing_the_wind_llc.aa4bbefb9e75c98e05b0a99fbf48a51b.html)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>1920x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>1920x888</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
|
process
|
duns bradstreet on chasing the wind llc categorical associative algorithms – altruism dangerous games being played by big tech big corporate best interest and what do we have here big dun dun dun d uh s smb credit rates– who s that with with are you with me with oh yeah what s up wind was late swinging in hey that s chasing what s chasing doing here in the damn midst i told the whole earth even back end pandemic forced states now dba with wind y all leave chasing alone and out of the midst end of discussion music fades lights above begin to flicker as whoosh big gust blows screen black and cut great scene team sorry with ok ok i am sorry no more yes chasing is safe said gale came and they were going back to unit b if you need them do i even want to ask wh ah neverming it s hard enough processing pandemic now we ve had things like what walking talking living beings manifest from poof literally nothing sorry you re right i m sorry i mis spoke from dust as i am a witness to that damn literal dust in the wind don t know if i ever be wound all the way in after seeing and action poof shhh chasing is crazy let s leave it at that before with catches whoosh ohh fuuc poof fade to black with the wind hasn t been seen since that ominous last scene y all possibilities pontifications central avenue sw unit b italicized and pointed to while not yet able to point you all the way to the whole picture story maybe not maybe never just remember it is in italics best x with wind algorithm keys got tensor shh did you poof just happen to hear with wind whisperings ohh to be continued source url browser chrome os windows bit screen size viewport size pixel ratio zoom level
| 1
|
6,022
| 8,823,704,225
|
IssuesEvent
|
2019-01-02 14:37:13
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
every entity, commenter permission can't add comments to the update field
|
2.0.6 Process bug
|
create a new entity
assign a different user as an assignee
try to write something in the updates field
press on add
the add button doesn't work
|
1.0
|
every entity, commenter permission can't add comments to the update field - create a new entity
assign a different user as an assignee
try to write something in the updates field
press on add
the add button doesn't work
|
process
|
every entity commenter permission cant add comments to the update field create a new entity assign a different user as an assignee try to write something in the updates field press on add the add button doesnt work
| 1
|
16,156
| 20,515,956,760
|
IssuesEvent
|
2022-03-01 11:46:12
|
apache/shardingsphere
|
https://api.github.com/repos/apache/shardingsphere
|
closed
|
Add unit test for ExecuteProcessStrategyEvaluator
|
in: test feature:show-process project: OSD2021
|
Hi community,
This issue is for #10887.
### Aim
Add unit test for `ExecuteProcessStrategyEvaluator` to test its public functions.
### Basic Qualifications
* Java
* Maven
* Junit.Test
|
1.0
|
Add unit test for ExecuteProcessStrategyEvaluator - Hi community,
This issue is for #10887.
### Aim
Add unit test for `ExecuteProcessStrategyEvaluator` to test its public functions.
### Basic Qualifications
* Java
* Maven
* Junit.Test
|
process
|
add unit test for executeprocessstrategyevaluator hi community this issue is for aim add unit test for executeprocessstrategyevaluator to test its public functions basic qualifications java maven junit test
| 1
|
41,191
| 10,677,616,143
|
IssuesEvent
|
2019-10-21 15:43:27
|
xamarin/xamarin-android
|
https://api.github.com/repos/xamarin/xamarin-android
|
closed
|
.mdb file generation error
|
Area: App+Library Build need-info
|
### Steps to Reproduce
1. build with Xamarin 10.0.99.79 (latest stable build from Azure)
2. `.mdb` file could not be found
<!--
If you have a repro project, you may drag & drop the .zip/etc. onto the issue editor to attach it.
-->
### Expected Behavior
Build successfully
### Actual Behavior
Build fails
### Version Information
Xamarin version 10.0.99.79
<!--
1. On macOS and within Visual Studio, select Visual Studio > About Visual Studio, then click the Show Details button, then click the Copy Information button.
2. Paste below this comment block.
-->
### Log File
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : Could not load file or assembly 'Xamarin.Android.Cecil, Version=0.11.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies. The system cannot find the file specified.
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : at Pdb2Mdb.Converter.Convert(String filename)
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : at Xamarin.Android.Tasks.ConvertDebuggingFiles.Execute()
[8/30/2019 5:04:10 PM] Error: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1995,2): error MSB3375: The file "C:\XaLaunchANMP\bin\Debug\XaLaunchANMP.dll.mdb" does not exist
[8/30/2019 5:04:11 PM] Error: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1995,2): error MSB3375: The file "C:\XaLaunchANMP\bin\Debug\Game.Collections.dll.mdb" does not exist.
|
1.0
|
.mdb file generation error - ### Steps to Reproduce
1. build with Xamarin 10.0.99.79 (latest stable build from Azure)
2. `.mdb` file could not be found
<!--
If you have a repro project, you may drag & drop the .zip/etc. onto the issue editor to attach it.
-->
### Expected Behavior
Build successfully
### Actual Behavior
Build fails
### Version Information
Xamarin version 10.0.99.79
<!--
1. On macOS and within Visual Studio, select Visual Studio > About Visual Studio, then click the Show Details button, then click the Copy Information button.
2. Paste below this comment block.
-->
### Log File
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : Could not load file or assembly 'Xamarin.Android.Cecil, Version=0.11.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or one of its dependencies. The system cannot find the file specified.
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : at Pdb2Mdb.Converter.Convert(String filename)
[8/30/2019 5:03:59 PM] Warning: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1994,2): warning : at Xamarin.Android.Tasks.ConvertDebuggingFiles.Execute()
[8/30/2019 5:04:10 PM] Error: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1995,2): error MSB3375: The file "C:\XaLaunchANMP\bin\Debug\XaLaunchANMP.dll.mdb" does not exist
[8/30/2019 5:04:11 PM] Error: C:\Xamarin.Android\10.0.99.79\ $MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1995,2): error MSB3375: The file "C:\XaLaunchANMP\bin\Debug\Game.Collections.dll.mdb" does not exist.
|
non_process
|
mdb file generation error steps to reproduce build with xamarin latest stable build from azure mdb file could not be found if you have a repro project you may drag drop the zip etc onto the issue editor to attach it expected behavior build succesfully actual behavior build fails version information xamarin on macos and within visual studio select visual studio about visual studio then click the show details button then click the copy information button paste below this comment block log file warning c xamarin android msbuild xamarin android xamarin android common targets warning could not load file or assembly xamarin android cecil version culture neutral publickeytoken or one of its dependencies the system cannot find the file specified warning c xamarin android msbuild xamarin android xamarin android common targets warning at converter convert string filename warning c xamarin android msbuild xamarin android xamarin android common targets warning at xamarin android tasks convertdebuggingfiles execute error c xamarin android msbuild xamarin android xamarin android common targets error the file c xalaunchanmp bin debug xalaunchanmp dll mdb does not exist error c xamarin android msbuild xamarin android xamarin android common targets error the file c xalaunchanmp bin debug game collections dll mdb does not exist
| 0
|
81,093
| 10,097,377,371
|
IssuesEvent
|
2019-07-28 05:04:51
|
Steemhunt/reviewhunt-web
|
https://api.github.com/repos/Steemhunt/reviewhunt-web
|
opened
|
Show preview page after the campaign submission
|
design
|
After a Maker submits the campaign, we may need to keep showing the summary page or the preview page of his/her campaign.

|
1.0
|
Show preview page after the campaign submission - After a Maker submits the campaign, we may need to keep showing the summary page or the preview page of his/her campaign.

|
non_process
|
show preview page after the campaign submission after a maker submit the campaign we may need to keep show the summary page or the preview page of his her campaign
| 0
|
112,842
| 14,292,125,184
|
IssuesEvent
|
2020-11-24 00:16:30
|
mexyn/statev_v2_issues
|
https://api.github.com/repos/mexyn/statev_v2_issues
|
closed
|
Relocation of the loading point
|
gamedesign solved
|
<!-- Please fill in the template below completely --> Sandy_Folk
**Character Name**
<!-- With which character was the behavior triggered/observed in-game -->Sandy_Folk
**Time (date / time)**
<!-- When exactly (date / time) was the error observed -->23.11.2020 / 01:16
**Observed behavior**
<!--- Describe the error -->Larger vehicles cannot be loaded or unloaded at this point
**Expected behavior**
<!--- Describe how it should correctly be --> Please move the point forward to the street
**Steps to reproduce the error**
<!--- Describe step by step how to reproduce the error -->
**Monitor resolution (only for incorrect display in the UI)**
<!--- Describe step by step how to reproduce the error -->
**Optional: video / images of the error**
<!--- If you have made a video or image of the error, you can insert it here. This works simply via drag & drop -->


Company hash | davis35
-- | --

|
1.0
|
Relocation of the loading point - <!-- Please fill in the template below completely --> Sandy_Folk
**Character Name**
<!-- With which character was the behavior triggered/observed in-game -->Sandy_Folk
**Time (date / time)**
<!-- When exactly (date / time) was the error observed -->23.11.2020 / 01:16
**Observed behavior**
<!--- Describe the error -->Larger vehicles cannot be loaded or unloaded at this point
**Expected behavior**
<!--- Describe how it should correctly be --> Please move the point forward to the street
**Steps to reproduce the error**
<!--- Describe step by step how to reproduce the error -->
**Monitor resolution (only for incorrect display in the UI)**
<!--- Describe step by step how to reproduce the error -->
**Optional: video / images of the error**
<!--- If you have made a video or image of the error, you can insert it here. This works simply via drag & drop -->


Company hash | davis35
-- | --

|
non_process
|
relocation of the loading point sandy folk character name sandy folk time date time observed behavior larger vehicles cannot be loaded or unloaded at this point expected behavior please move the point forward to the street steps to reproduce the error monitor resolution only for incorrect display in the ui optional video images of the error company hash
| 0
|
14,311
| 17,316,478,011
|
IssuesEvent
|
2021-07-27 06:57:14
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
closed
|
Mac Mail inline PDF destroys E-Mail-Body
|
bug mail processing prioritised by payment verified
|
<!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8 & develop
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1033880
### Expected behavior:
* When sending several images with text in between them and at least one inline PDF (with Mac Mail), Zammad will import all attachments and show the inline images and text in the correct way (PDF-Attachment will be available as attachment).
### Actual behavior:
* When sending several images with text in between them and at least one inline PDF (with Mac Mail), Zammad will import all attachments and cut off beginning with the first picture (all text in between the pictures will not be displayed).
### Steps to reproduce the behavior:
* Send a Mail with Mac Mail to your Zammad instance with Text, an image, some text, a PDF (inline), some Text and another inline image.
* check what happens to the Mail
This is an example how it might look in Mac Mail

No attachments get lost during this action, but text does. This only affects mails sent with Mac Mail, as this is the only mail program I know of that allows you to put files other than images as inline documents.
How it will look in Zammad:

Example-E-mail being affected for test-importing:
[ticket-12045-157.zip](https://github.com/zammad/zammad/files/2685219/ticket-12045-157.zip)
Yes I'm sure this is a bug and no feature request or a general question.
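For context, a minimal Python sketch (standard library only; the content bytes are placeholders) of the message shape involved: text parts interleaved with parts flagged `Content-Disposition: inline`, including a non-image PDF. An importer has to keep every text part rather than stopping at the first inline non-image part.
```
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "inline PDF between text blocks"
msg.set_content("text before the first picture")
msg.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                   filename="pic.png", disposition="inline")
msg.add_attachment(b"%PDF-1.4 ...", maintype="application", subtype="pdf",
                   filename="doc.pdf", disposition="inline")

# Walk the MIME tree: all parts, including the inline PDF, are present.
for part in msg.walk():
    print(part.get_content_type(), part.get_content_disposition())
```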
|
1.0
|
Mac Mail inline PDF destroys E-Mail-Body - <!--
Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓
Since november 15th we handle all requests, except real bugs, at our community board.
Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21
Please post:
- Feature requests
- Development questions
- Technical questions
on the board -> https://community.zammad.org !
If you think you hit a bug, please continue:
- Search existing issues and the CHANGELOG.md for your issue - there might be a solution already
- Make sure to use the latest version of Zammad if possible
- Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it!
- Please write the issue in english
- Don't remove the template - otherwise we will close the issue without further comments
- Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate
Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted).
* The upper textblock will be removed automatically when you submit your issue *
-->
### Infos:
* Used Zammad version: 2.8 & develop
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: any
* Ticket-ID: #1033880
### Expected behavior:
* When sending several images with text in between them and at least one inline PDF (with Mac Mail), Zammad will import all attachments and show the inline images and text in the correct way (PDF-Attachment will be available as attachment).
### Actual behavior:
* When sending several images with text in between them and at least one inline PDF (with Mac Mail), Zammad will import all attachments and cut off beginning with the first picture (all text in between the pictures will not be displayed).
### Steps to reproduce the behavior:
* Send a Mail with Mac Mail to your Zammad instance with Text, an image, some text, a PDF (inline), some Text and another inline image.
* check what happens to the Mail
This is an example how it might look in Mac Mail

No attachments get lost during this action, but text does. This only affects mails sent with Mac Mail, as this is the only mail program I know of that allows you to put files other than images as inline documents.
How it will look in Zammad:

Example-E-mail being affected for test-importing:
[ticket-12045-157.zip](https://github.com/zammad/zammad/files/2685219/ticket-12045-157.zip)
Yes I'm sure this is a bug and no feature request or a general question.
|
process
|
mac mail inline pdf destroys e mail body hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version develop installation method source package any operating system any database version any elasticsearch version any browser version any ticket id expected behavior when sending several images with text in between them and at least one inline pdf with mac mail zammad will import all attachments and show the inline images and text in the correct way pdf attachment will be available as attachment actual behavior when sending several images with text in between them and at least one inline pdf with mac mail zammad will import all attachments and cut off beginning with the first picture all text in between the pictures will not be displayed steps to reproduce the behavior send a mail with mac mail to your zammad instance with text an image some text a pdf inline some text and another inline image check what happens to the mail this is an example how it might look in mac mail no attachments get lost during this action text does this only affects mails being send with mac mail as this is the only mail program i know that allows you to put files other than images as inline document how it will look in zammad example e mail being affected for test importing yes i m sure this is a bug and no feature request or a general question
| 1
|
114,822
| 4,646,791,021
|
IssuesEvent
|
2016-10-01 03:52:39
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
MacOS Sierra Support: Build gcloud component and kubectl distribution with Go 1.7.x
|
component/kubectl kind/bug priority/P1 team/ux
|
**Kubernetes version** (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:13:23Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
```
**Environment**:
- **Cloud provider or hardware configuration**: Apple MacBook Pro
- **OS** (e.g. from /etc/os-release): MacOS Sierra
- **Kernel** (e.g. `uname -a`): Darwin 16.0.0
- **Install tools**: `gcloud components install kubectl`, `brew install kubernetes-cli`, or download from release
- **Others**:
Tested on both a GKE cluster and minikube to be sure. Also tested with running kubectl from Google Cloud Shell and a Docker container, and `kubectl` works as expected on both Kubernetes environments from these client environments.
**What happened**:
`kubectl` has unpredictable errors performing any operations. There's no single error. Sometimes the process hangs, sometimes it panics, sometimes there's connection issues to the cluster.
**What you expected to happen**:
`kubectl` commands to run as expected, without hangs, segfaults, or network issues.
**How to reproduce it** (as minimally and precisely as possible):
1. Upgrade to MacOS Sierra
2. Use kubectl
**Anything else do we need to know**:
Compiling kubectl for OSX with Go 1.7.x will fix this, as fixes for MacOS Sierra are in Go 1.7.
I compiled a version of kubectl from master with version:
```
Client Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.598+ff1cec99ccdecf", GitCommit:"ff1cec99ccdecfda7f4fedd1fd97ad1f44fd4010", GitTreeState:"clean", BuildDate:"2016-09-12T07:24:10Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
```
and it works as expected.
I'm happy to pull request something to fix this, but I don't want to make the assumption that everything needs to be updated to Go 1.7. Also, I'm not sure how the gcloud `kubectl` component is related or diverges from the release build version, so if I can understand the nuances I can put something together to fix this.
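Not the Kubernetes build tooling, just a hedged sketch of the underlying fix: ensure the toolchain that cross-compiles the darwin/amd64 binary is Go 1.7 or newer (the package path is illustrative).
```
import os
import subprocess

# Check which Go toolchain is first on PATH; the fix is ensuring >= 1.7.
version = subprocess.run(["go", "version"], capture_output=True,
                         text=True, check=True).stdout.strip()
print(version)  # e.g. "go version go1.7.1 darwin/amd64"

# Cross-compile for macOS regardless of the host platform.
env = dict(os.environ, GOOS="darwin", GOARCH="amd64")
subprocess.run(["go", "build", "-o", "kubectl-darwin-amd64", "./cmd/kubectl"],
               env=env, check=True)
```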
|
1.0
|
MacOS Sierra Support: Build gcloud component and kubectl distribution with Go 1.7.x - **Kubernetes version** (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:13:23Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
```
**Environment**:
- **Cloud provider or hardware configuration**: Apple MacBook Pro
- **OS** (e.g. from /etc/os-release): MacOS Sierra
- **Kernel** (e.g. `uname -a`): Darwin 16.0.0
- **Install tools**: `gcloud components install kubectl`, `brew install kubernetes-cli`, or download from release
- **Others**:
Tested on both a GKE cluster and minikube to be sure. Also tested with running kubectl from Google Cloud Shell and a Docker container, and `kubectl` works as expected on both Kubernetes environments from these client environments.
**What happened**:
`kubectl` has unpredictable errors performing any operations. There's no single error. Sometimes the process hangs, sometimes it panics, sometimes there's connection issues to the cluster.
**What you expected to happen**:
`kubectl` commands to run as expected, without hangs, segfaults, or network issues.
**How to reproduce it** (as minimally and precisely as possible):
1. Upgrade to MacOS Sierra
2. Use kubectl
**Anything else do we need to know**:
Compiling kubectl for OSX with Go 1.7.x will fix this, as fixes for MacOS Sierra are in Go 1.7.
I compiled a version of kubectl from master with version:
```
Client Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.598+ff1cec99ccdecf", GitCommit:"ff1cec99ccdecfda7f4fedd1fd97ad1f44fd4010", GitTreeState:"clean", BuildDate:"2016-09-12T07:24:10Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
```
and it works as expected.
I'm happy to pull request something to fix this, but I don't want to make the assumption that everything needs to be updated to Go 1.7. Also, I'm not sure how the gcloud `kubectl` component is related or diverges from the release build version, so if I can understand the nuances I can put something together to fix this.
|
non_process
|
macos sierra support build gcloud component and kubectl distribution with go x kubernetes version use kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin environment cloud provider or hardware configuration apple macbook pro os e g from etc os release macos sierra kernel e g uname a darwin install tools gcloud components install kubectl brew install kubernetes cli or download from release others tested on both a gke cluster and minikube to be sure also tested with running kubectl from google cloud shell and a docker container and kubectl works as expected on both kubernetes environments from these client environments what happened kubectl has unpredictable errors performing any operations there s no single error sometimes the process hangs sometimes it panics sometimes there s connection issues to the cluster what you expected to happen kubectl commands to run as expected without hangs segfaults or network issues how to reproduce it as minimally and precisely as possible upgrade to macos sierra use kubectl anything else do we need to know compiling kubectl for osx with go x will fix this as fixes for macos sierra are in go i compiled a version of kubectl from master with version client version version info major minor gitversion alpha gitcommit gittreestate clean builddate goversion compiler gc platform darwin and it works as expected i m happy to pull request something to fix this but i don t want to make the assumption that everything needs to be updated to go also i m not sure how the gcloud kubectl component is related or diverges from the release build version so if i can understand the nuances i can put something together to fix this
| 0
|
20,360
| 27,016,381,112
|
IssuesEvent
|
2023-02-10 19:49:07
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Cast Binary to Utf8 With Safe True is Unsound
|
bug development-process
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
`cast_binary_to_string` added in #3624 is unsound as it creates a `StringArray` containing invalid UTF-8 data. The data is null, but this is insufficient to meet the ArrayData contract which requires all the data be valid UTF-8.
Fortunately this has not yet been released
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
```
#[test]
fn test_cast_invalid_utf8() {
let v1: &[u8] = b"\xFF invalid";
let v2: &[u8] = b"\x00 Foo";
let s = BinaryArray::from(vec![v1, v2]);
let options = CastOptions { safe: true };
let array = cast_with_options(&s, &DataType::Utf8, &options).unwrap();
let a = as_string_array(array.as_ref());
a.data().validate_full().unwrap();
assert_eq!(a.null_count(), 1);
assert_eq!(a.len(), 2);
assert!(a.is_null(0));
assert_eq!(a.value(0), "");
assert_eq!(a.value(1), "\x00 Foo");
}
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
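A small Python model, not the arrow-rs code, of the contract a `safe: true` cast has to honor: entries that are not valid UTF-8 become nulls, and their raw bytes are not carried into the values, so every materialized value is valid UTF-8.
```
from typing import List, Optional

def safe_cast_binary_to_utf8(values: List[bytes]) -> List[Optional[str]]:
    """Null out non-UTF-8 entries instead of keeping their raw bytes,
    which is the invariant the unsound cast violated."""
    out: List[Optional[str]] = []
    for v in values:
        try:
            out.append(v.decode("utf-8"))
        except UnicodeDecodeError:
            out.append(None)  # null, with no invalid bytes retained
    return out

print(safe_cast_binary_to_utf8([b"\xff invalid", b"\x00 Foo"]))
# [None, '\x00 Foo']
```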
|
1.0
|
Cast Binary to Utf8 With Safe True is Unsound - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
`cast_binary_to_string` added in #3624 is unsound as it creates a `StringArray` containing invalid UTF-8 data. The data is null, but this is insufficient to meet the ArrayData contract which requires all the data be valid UTF-8.
Fortunately this has not yet been released
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
```
#[test]
fn test_cast_invalid_utf8() {
let v1: &[u8] = b"\xFF invalid";
let v2: &[u8] = b"\x00 Foo";
let s = BinaryArray::from(vec![v1, v2]);
let options = CastOptions { safe: true };
let array = cast_with_options(&s, &DataType::Utf8, &options).unwrap();
let a = as_string_array(array.as_ref());
a.data().validate_full().unwrap();
assert_eq!(a.null_count(), 1);
assert_eq!(a.len(), 2);
assert!(a.is_null(0));
assert_eq!(a.value(0), "");
assert_eq!(a.value(1), "\x00 Foo");
}
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
cast binary to with safe true is unsound describe the bug a clear and concise description of what the bug is cast binary to string added in is unsound as it creates a stringarray containing invalid utf data the data is null but this is insufficient to meet the arraydata contract which requires all the data be valid utf fortunately this has not yet been released to reproduce steps to reproduce the behavior fn test cast invalid let b xff invalid let b foo let s binaryarray from vec let options castoptions safe true let array cast with options s datatype options unwrap let a as string array array as ref a data validate full unwrap assert eq a null count assert eq a len assert a is null assert eq a value assert eq a value foo expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here
| 1
|
99,987
| 11,170,168,935
|
IssuesEvent
|
2019-12-28 11:39:11
|
PACK-Solutions/angular-stdlib
|
https://api.github.com/repos/PACK-Solutions/angular-stdlib
|
closed
|
Add a code of conduct
|
documentation
|
Add a code of conduct using **GitHub** template.
Contact to use : omistral@pack-solutions.com
> How to : https://help.github.com/en/github/building-a-strong-community/adding-a-code-of-conduct-to-your-project
|
1.0
|
Add a code of conduct - Add a code of conduct using **GitHub** template.
Contact to use : omistral@pack-solutions.com
> How to : https://help.github.com/en/github/building-a-strong-community/adding-a-code-of-conduct-to-your-project
|
non_process
|
add a code of conduct add a code of conduct using github template contact to use omistral pack solutions com how to
| 0
|
49,032
| 20,424,657,120
|
IssuesEvent
|
2022-02-24 01:36:15
|
microsoft/botframework-cli
|
https://api.github.com/repos/microsoft/botframework-cli
|
closed
|
Test Command Fails with Error "Provide an output directory"
|
customer-reported Bot Services customer-replied-to
|
When running a test, we have tried to provide -o=<out> and --out=<out> in different formats and we always get an error "ERROR-MESSAGE: "Please provide an output directory, CWD=c:\\Ezra\\Orchestrator, called from OrchestratorEvaluate.runAsync()"
Sample commands:
`bf orchestrator:test --in=\\generated\orchestrator.blu" --model=\\model -o=\\analysis --test=\\questions.txt`
`bf orchestrator:test --in=\\generated\orchestrator.blu" --model=\\model --out=\\analysis --test=\\questions.txt`
VERSION
@microsoft/botframework-cli/4.15.0 win32-x64 node-v12.18.4
|
1.0
|
Test Command Fails with Error "Provide an output directory" - When running a test, we have tried to provide -o=<out> and --out=<out> in different formats and we always get an error "ERROR-MESSAGE: "Please provide an output directory, CWD=c:\\Ezra\\Orchestrator, called from OrchestratorEvaluate.runAsync()"
Sample commands:
`bf orchestrator:test --in=\\generated\orchestrator.blu" --model=\\model -o=\\analysis --test=\\questions.txt`
`bf orchestrator:test --in=\\generated\orchestrator.blu" --model=\\model --out=\\analysis --test=\\questions.txt`
VERSION
@microsoft/botframework-cli/4.15.0 win32-x64 node-v12.18.4
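The oclif internals behind `bf orchestrator:test` are not shown here; purely as a hypothetical sketch, this is the kind of flag handling the error message points at: resolve (and create) the output directory before running.
```
import argparse
import pathlib

parser = argparse.ArgumentParser(prog="orchestrator-test")
parser.add_argument("--in", dest="input", required=True)
parser.add_argument("--model", required=True)
parser.add_argument("-o", "--out", required=True)
parser.add_argument("--test", required=True)
args = parser.parse_args([
    "--in", "generated/orchestrator.blu", "--model", "model",
    "--out", "analysis", "--test", "questions.txt",
])

out_dir = pathlib.Path(args.out)
out_dir.mkdir(parents=True, exist_ok=True)  # "please provide an output directory"
print("writing evaluation report to", out_dir.resolve())
```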
|
non_process
|
test command fails with error provide an output directory when running a test we have tried to provide o and out in different formats and we always get an error error message please provide an output directory cwd c ezra orchestrator called from orchestratorevaluate runasync sample commands bf orchestrator test in generated orchestrator blu model model o analysis test questions txt bf orchestrator test in generated orchestrator blu model model out analysis test questions txt version microsoft botframework cli node
| 0
|
20,301
| 26,940,477,242
|
IssuesEvent
|
2023-02-08 01:29:02
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Built-in method to escape shell arguments
|
child_process feature request stale
|
**Is your feature request related to a problem? Please describe.**
Yes, I was using `execa` which is a light wrapper for `childProcess.spawn` to execute a script (call it `./scripts/configure`) which took user input as an argument. One of my users supplied `"Users & Permissions Management"` as that input, which caused the script to hang as the resulting spawned process was:
```
./scripts/configure Users & Permissions Management
```
I realised as soon as the bug was reported that I should've escaped the string passed into my function that called `execa`, so then I looked for modules to correctly escape shell arguments, and they seem pretty complex. Which leads to the question: do I really want to depend on a third-party module to correctly escape shell arguments? Am I just trading one security risk for another?
**Describe the solution you'd like**
Have a method like `childProcess.escapeArgument(arg: string): string` which correctly escapes the given value such that it is just a string for all terminals (cross-platform).
**Clarification:** I am not arguing for childProcess.spawn to escape arguments into strings by default, as that'd be a breaking change, even though it would likely be for the best (if you wanna pass multiple arguments, use the array syntax, not a string). Instead, I'm just asking for a method built-in that's well tested to escape an argument into a string argument for a shell command.
**Describe alternatives you've considered**
[Various NPM modules](https://github.com/nodejs/node/issues/34840#issuecomment-676398171), writing it myself, etc. All just shift the security responsibility to arguably worse places. This seems like due to the security benefits it can give, it'd be a good candidate for being a built-in function, ideally backported to LTS's
|
1.0
|
Built-in method to escape shell arguments - **Is your feature request related to a problem? Please describe.**
Yes, I was using `execa` which is a light wrapper for `childProcess.spawn` to execute a script (call it `./scripts/configure`) which took user input as an argument. One of my users supplied `"Users & Permissions Management"` as that input, which caused the script to hang as the resulting spawned process was:
```
./scripts/configure Users & Permissions Management
```
I realised as soon as the bug was reported that I should've escaped the string passed into my function that called `execa`, so then I looked for modules to correctly escape shell arguments, and they seem pretty complex. Which leads to the question: do I really want to depend on a third-party module to correctly escape shell arguments? Am I just trading one security risk for another?
**Describe the solution you'd like**
Have a method like `childProcess.escapeArgument(arg: string): string` which correctly escapes the given value such that it is just a string for all terminals (cross-platform).
**Clarification:** I am not arguing for childProcess.spawn to escape arguments into strings by default, as that'd be a breaking change, even though it would likely be for the best (if you wanna pass multiple arguments, use the array syntax, not a string). Instead, I'm just asking for a method built-in that's well tested to escape an argument into a string argument for a shell command.
**Describe alternatives you've considered**
[Various NPM modules](https://github.com/nodejs/node/issues/34840#issuecomment-676398171), writing it myself, etc. All just shift the security responsibility to arguably worse places. This seems like due to the security benefits it can give, it'd be a good candidate for being a built-in function, ideally backported to LTS's
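For comparison, Python ships this as `shlex.quote`. A minimal sketch of both mitigations discussed above: quoting the user-supplied argument before building a shell string, or skipping the shell entirely with an argv list (the script path comes from the report and is assumed to exist).
```
import shlex
import subprocess

user_input = "Users & Permissions Management"

# Option 1: escape the argument before interpolating into a shell string.
cmd = f"./scripts/configure {shlex.quote(user_input)}"
print(cmd)  # ./scripts/configure 'Users & Permissions Management'

# Option 2 (preferred): pass an argv list so no shell parses the input.
subprocess.run(["./scripts/configure", user_input], check=True)
```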
|
process
|
built in method to escape shell arguments is your feature request related to a problem please describe yes i was using execa which is a light wrapper for childprocess spawn to execute a script call it scripts configure which took user input as an argument one of my users supplied users permissions management as that input which caused the script to hang as the resulting spawned process was scripts configure users permissions management i realised as soon as the bug was reported that i should ve escaped the string passed into my function that called execa so then i looked for modules to correctly escape shell arguments and they seem pretty complex which leads to the question do i really want to depend on a third party module to correctly escape shell arguments am i just trading one security risk for another describe the solution you d like have a method like childprocess escapeargument arg string string which correctly escapes the given value such that it is just a string for all terminals cross platform clarification i am not arguing for childprocess spawn to escape arguments into strings by default as that d be a breaking change even though it would likely be for the best if you wanna pass multiple arguments use the array syntax not a string instead i m just asking for a method built in that s well tested to escape an argument into a string argument for a shell command describe alternatives you ve considered writing it myself etc all just shift the security responsibility to arguably worse places this seems like due to the security benefits it can give it d be a good candidate for being a built in function ideally backported to lts s
| 1
|
15,345
| 19,502,544,818
|
IssuesEvent
|
2021-12-28 07:08:49
|
plazi/treatmentBank
|
https://api.github.com/repos/plazi/treatmentBank
|
opened
|
file not processed zootaxa.5062.1.1
|
help wanted processing
|
@gsautter can you please check why this file is not available? It does not show up in the transfer stats, SRS, etc., but when I upload it again I get the message below
thanks
d
Donat@bern MINGW64 ~
$ cd E:/diglib/0-zootaxa/0_temp_poa_transfer3/
Donat@bern MINGW64 /e/diglib/0-zootaxa/0_temp_poa_transfer3
$ for file in *.pdf; do curl -H "....." -H "Meta-Data-Mode:Go use your templates!" -F "file=@$file; filename=$file" -F "user=plazi" -F "mimeType=application/pdf" -X PUT https://tb.plazi.org/GgServer/docUpload; done; # file
{
"attributes": { "docId": "6607E753652CFF81FF90FFFFFFCB5423",
"status": "Previously imported by plazi",
"approvalRequired": "266",
"approvalRequired_for_document": "1",
"approvalRequired_for_originalDoi": "1",
"approvalRequired_for_taxonomicNames": "18",
"approvalRequired_for_textStreams": "232",
"approvalRequired_for_treatments": "14",
"checkinTime": "1636374593637",
"checkinUser": "plazi",
"docName": "zootaxa.5062.1.1.pdf",
"docStyle": "DocumentStyle:D239614CE4198176A422035174489AB1.1:Zootaxa.2009-2012.monograph",
"docStyleId": "D239614CE4198176A422035174489AB1",
"docStyleName": "Zootaxa.2009-2012.monograph",
"docStyleVersion": "1",
"updateTime": "1636374932061",
"updateUser": "GgImagineBatch",
"zenodo-license-document": "CLOSED"
},
"user": "plazi",
"mimeType": "application/pdf"
}
Donat@bern MINGW64 /e/diglib/0-zootaxa/0_temp_poa_transfer3
$
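A hedged Python equivalent of the curl loop above, useful for re-testing the upload. The endpoint and form fields are taken from the transcript, the redacted `-H "....."` header stays a placeholder, and `requests` is a third-party dependency.
```
import pathlib
import requests  # pip install requests

URL = "https://tb.plazi.org/GgServer/docUpload"
headers = {
    "<redacted-header>": "<redacted>",        # the "....." header above
    "Meta-Data-Mode": "Go use your templates!",
}

for pdf in sorted(pathlib.Path(".").glob("*.pdf")):
    with pdf.open("rb") as fh:
        resp = requests.put(
            URL,
            headers=headers,
            files={"file": (pdf.name, fh, "application/pdf")},
            data={"user": "plazi", "mimeType": "application/pdf"},
        )
    attrs = resp.json().get("attributes", {})
    print(pdf.name, resp.status_code, attrs.get("status"))
```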
|
1.0
|
file not processed zootaxa.5062.1.1 - @gsautter can you please check why this file is not available? It does not show up in the transfer stats, SRS, etc., but when I upload it again I get the message below
thanks
d
Donat@bern MINGW64 ~
$ cd E:/diglib/0-zootaxa/0_temp_poa_transfer3/
Donat@bern MINGW64 /e/diglib/0-zootaxa/0_temp_poa_transfer3
$ for file in *.pdf; do curl -H "....." -H "Meta-Data-Mode:Go use your templates!" -F "file=@$file; filename=$file" -F "user=plazi" -F "mimeType=application/pdf" -X PUT https://tb.plazi.org/GgServer/docUpload; done; # file
{
"attributes": { "docId": "6607E753652CFF81FF90FFFFFFCB5423",
"status": "Previously imported by plazi",
"approvalRequired": "266",
"approvalRequired_for_document": "1",
"approvalRequired_for_originalDoi": "1",
"approvalRequired_for_taxonomicNames": "18",
"approvalRequired_for_textStreams": "232",
"approvalRequired_for_treatments": "14",
"checkinTime": "1636374593637",
"checkinUser": "plazi",
"docName": "zootaxa.5062.1.1.pdf",
"docStyle": "DocumentStyle:D239614CE4198176A422035174489AB1.1:Zootaxa.2009-2012.monograph",
"docStyleId": "D239614CE4198176A422035174489AB1",
"docStyleName": "Zootaxa.2009-2012.monograph",
"docStyleVersion": "1",
"updateTime": "1636374932061",
"updateUser": "GgImagineBatch",
"zenodo-license-document": "CLOSED"
},
"user": "plazi",
"mimeType": "application/pdf"
}
Donat@bern MINGW64 /e/diglib/0-zootaxa/0_temp_poa_transfer3
$
|
process
|
file not processed zootaxa gsautter can you please check why this file is not available it does not show up in the transfer stats srs etc but when i upload it again then i get the message below thanks d donat bern cd e diglib zootaxa temp poa donat bern e diglib zootaxa temp poa for file in pdf do curl h h meta data mode go use your templates f file file filename file f user plazi f mimetype application pdf x put done file attributes docid status previously imported by plazi approvalrequired approvalrequired for document approvalrequired for originaldoi approvalrequired for taxonomicnames approvalrequired for textstreams approvalrequired for treatments checkintime checkinuser plazi docname zootaxa pdf docstyle documentstyle zootaxa monograph docstyleid docstylename zootaxa monograph docstyleversion updatetime updateuser ggimaginebatch zenodo license document closed user plazi mimetype application pdf donat bern e diglib zootaxa temp poa
| 1
|
25,315
| 7,680,257,143
|
IssuesEvent
|
2018-05-16 00:30:01
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Checksum issue downloading node-js
|
infra/BUILDPONY infra/Kokoro lang/node
|
```
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 502 status code downloading checksum
gyp ERR! stack at Request.<anonymous> (/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/lib/install.js:289:18)
gyp ERR! stack at emitOne (events.js:101:20)
gyp ERR! stack at Request.emit (events.js:191:7)
gyp ERR! stack at Request.onRequestResponse (/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/request/request.js:986:10)
gyp ERR! stack at emitOne (events.js:96:13)
gyp ERR! stack at ClientRequest.emit (events.js:191:7)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (_http_client.js:522:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)
gyp ERR! stack at TLSSocket.socketOnData (_http_client.js:411:20)
gyp ERR! stack at emitOne (events.js:96:13)
gyp ERR! stack at TLSSocket.emit (events.js:191:7)
gyp ERR! System Linux 4.4.0-83-generic
gyp ERR! command "/root/.nvm/versions/node/v7.10.1/bin/node" "/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "configure" "--fallback-to-build" "--library=static_library" "--module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node" "--module_name=grpc_node" "--module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc"
gyp ERR! cwd /var/local/git/grpc-node/packages/grpc-native-core
gyp ERR! node -v v7.10.1
gyp ERR! node-gyp -v v3.5.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/root/.nvm/versions/node/v7.10.1/bin/node /root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --library=static_library --module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/var/local/git/grpc-node/packages/grpc-native-core/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at emitTwo (events.js:106:13)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:194:7)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:899:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
node-pre-gyp ERR! System Linux 4.4.0-83-generic
node-pre-gyp ERR! command "/root/.nvm/versions/node/v7.10.1/bin/node" "/var/local/git/grpc-node/packages/grpc-native-core/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--library=static_library"
node-pre-gyp ERR! cwd /var/local/git/grpc-node/packages/grpc-native-core
node-pre-gyp ERR! node -v v7.10.1
node-pre-gyp ERR! node-pre-gyp -v v0.6.39
node-pre-gyp ERR! not ok
Failed to execute '/root/.nvm/versions/node/v7.10.1/bin/node /root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --library=static_library --module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc' (1)
npm ERR! Linux 4.4.0-83-generic
npm ERR! argv "/root/.nvm/versions/node/v7.10.1/bin/node" "/root/.nvm/versions/node/v7.10.1/bin/npm" "install" "--build-from-source" "--unsafe-perm"
npm ERR! node v7.10.1
npm ERR! npm v4.2.0
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! grpc@1.9.0-dev install: `node-pre-gyp install --fallback-to-build --library=static_library`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the grpc@1.9.0-dev install script 'node-pre-gyp install --fallback-to-build --library=static_library'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the grpc package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-pre-gyp install --fallback-to-build --library=static_library
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs grpc
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls grpc
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /tmp/npm-cache/_logs/2018-01-06T03_04_42_921Z-debug.log
[03:04:42] 'native.core.install' errored after 2.96 s
[03:04:42] Error: Command failed: npm install --build-from-source --unsafe-perm
at Promise.all.then.arr (/var/local/git/grpc-node/node_modules/execa/index.js:236:11)
at process._tickCallback (internal/process/next_tick.js:109:7)
[03:04:42] 'setup' errored after 2.97 s
[03:04:42] Error in plugin "run-sequence(native.core.install)"
Message:
Command failed: npm install --build-from-source --unsafe-perm
Details:
code: 1
killed: false
stdout: null
stderr: null
failed: true
signal: null
cmd: npm install --build-from-source --unsafe-perm
timedOut: false
```
https://sponge.corp.google.com/invocation?id=5cfe9ba8-2cf8-4298-bb9b-66d711c1d8f2&searchFor=
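Unrelated to the mirror's 502 itself, but for reference, a minimal sketch of the check node-gyp was attempting: download the tarball, compute its SHA-256, and compare against the published checksum (URLs illustrative, expected value a placeholder).
```
import hashlib
import urllib.request

def sha256_of(url: str) -> str:
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp:  # raises on a 502, as node-gyp did
        for chunk in iter(lambda: resp.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "<value from SHASUMS256.txt>"  # placeholder, not a real checksum
actual = sha256_of("https://nodejs.org/dist/v7.10.1/node-v7.10.1-headers.tar.gz")
assert actual == expected, "checksum mismatch"
```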
|
1.0
|
Checksum issue downloading node-js - ```
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 502 status code downloading checksum
gyp ERR! stack at Request.<anonymous> (/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/lib/install.js:289:18)
gyp ERR! stack at emitOne (events.js:101:20)
gyp ERR! stack at Request.emit (events.js:191:7)
gyp ERR! stack at Request.onRequestResponse (/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/request/request.js:986:10)
gyp ERR! stack at emitOne (events.js:96:13)
gyp ERR! stack at ClientRequest.emit (events.js:191:7)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (_http_client.js:522:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)
gyp ERR! stack at TLSSocket.socketOnData (_http_client.js:411:20)
gyp ERR! stack at emitOne (events.js:96:13)
gyp ERR! stack at TLSSocket.emit (events.js:191:7)
gyp ERR! System Linux 4.4.0-83-generic
gyp ERR! command "/root/.nvm/versions/node/v7.10.1/bin/node" "/root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "configure" "--fallback-to-build" "--library=static_library" "--module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node" "--module_name=grpc_node" "--module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc"
gyp ERR! cwd /var/local/git/grpc-node/packages/grpc-native-core
gyp ERR! node -v v7.10.1
gyp ERR! node-gyp -v v3.5.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/root/.nvm/versions/node/v7.10.1/bin/node /root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --library=static_library --module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/var/local/git/grpc-node/packages/grpc-native-core/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at emitTwo (events.js:106:13)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:194:7)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:899:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
node-pre-gyp ERR! System Linux 4.4.0-83-generic
node-pre-gyp ERR! command "/root/.nvm/versions/node/v7.10.1/bin/node" "/var/local/git/grpc-node/packages/grpc-native-core/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--library=static_library"
node-pre-gyp ERR! cwd /var/local/git/grpc-node/packages/grpc-native-core
node-pre-gyp ERR! node -v v7.10.1
node-pre-gyp ERR! node-pre-gyp -v v0.6.39
node-pre-gyp ERR! not ok
Failed to execute '/root/.nvm/versions/node/v7.10.1/bin/node /root/.nvm/versions/node/v7.10.1/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --library=static_library --module=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/var/local/git/grpc-node/packages/grpc-native-core/src/node/extension_binary/node-v51-linux-x64-glibc' (1)
npm ERR! Linux 4.4.0-83-generic
npm ERR! argv "/root/.nvm/versions/node/v7.10.1/bin/node" "/root/.nvm/versions/node/v7.10.1/bin/npm" "install" "--build-from-source" "--unsafe-perm"
npm ERR! node v7.10.1
npm ERR! npm v4.2.0
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! grpc@1.9.0-dev install: `node-pre-gyp install --fallback-to-build --library=static_library`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the grpc@1.9.0-dev install script 'node-pre-gyp install --fallback-to-build --library=static_library'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the grpc package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-pre-gyp install --fallback-to-build --library=static_library
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs grpc
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls grpc
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /tmp/npm-cache/_logs/2018-01-06T03_04_42_921Z-debug.log
[03:04:42] 'native.core.install' errored after 2.96 s
[03:04:42] Error: Command failed: npm install --build-from-source --unsafe-perm
at Promise.all.then.arr (/var/local/git/grpc-node/node_modules/execa/index.js:236:11)
at process._tickCallback (internal/process/next_tick.js:109:7)
[03:04:42] 'setup' errored after 2.97 s
[03:04:42] Error in plugin "run-sequence(native.core.install)"
Message:
Command failed: npm install --build-from-source --unsafe-perm
Details:
code: 1
killed: false
stdout: null
stderr: null
failed: true
signal: null
cmd: npm install --build-from-source --unsafe-perm
timedOut: false
```
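If the root cause is the deprecated nvm mirror checksum failure noted in this record, a hedged remediation sketch follows (Python stdlib only; `disturl` is a real npm config key historically read by node-gyp, but treating it as the fix here is an assumption):
```python
import subprocess

# Repoint node-gyp's header download at the canonical dist URL instead
# of the deprecated nvm mirror, then retry the native build with the
# same flags that appear in the log above.
subprocess.run(["npm", "config", "set", "disturl", "https://nodejs.org/dist"], check=True)
subprocess.run(["npm", "install", "--build-from-source", "--unsafe-perm"], check=True)
```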
https://sponge.corp.google.com/invocation?id=5cfe9ba8-2cf8-4298-bb9b-66d711c1d8f2&searchFor=
|
non_process
|
checksum issue downloading node js gyp warn download nvm nodejs org mirror is deprecated and will be removed in node gyp please use nodejs org mirror gyp warn download nvm nodejs org mirror is deprecated and will be removed in node gyp please use nodejs org mirror gyp warn install got an error rolling back install gyp err configure error gyp err stack error status code downloading checksum gyp err stack at request root nvm versions node lib node modules npm node modules node gyp lib install js gyp err stack at emitone events js gyp err stack at request emit events js gyp err stack at request onrequestresponse root nvm versions node lib node modules npm node modules request request js gyp err stack at emitone events js gyp err stack at clientrequest emit events js gyp err stack at httpparser parseronincomingclient http client js gyp err stack at httpparser parseronheaderscomplete http common js gyp err stack at tlssocket socketondata http client js gyp err stack at emitone events js gyp err stack at tlssocket emit events js gyp err system linux generic gyp err command root nvm versions node bin node root nvm versions node lib node modules npm node modules node gyp bin node gyp js configure fallback to build library static library module var local git grpc node packages grpc native core src node extension binary node linux glibc grpc node node module name grpc node module path var local git grpc node packages grpc native core src node extension binary node linux glibc gyp err cwd var local git grpc node packages grpc native core gyp err node v gyp err node gyp v gyp err not ok node pre gyp err build error node pre gyp err stack error failed to execute root nvm versions node bin node root nvm versions node lib node modules npm node modules node gyp bin node gyp js configure fallback to build library static library module var local git grpc node packages grpc native core src node extension binary node linux glibc grpc node node module name grpc node module path var local git grpc node packages grpc native core src node extension binary node linux glibc node pre gyp err stack at childprocess var local git grpc node packages grpc native core node modules node pre gyp lib util compile js node pre gyp err stack at emittwo events js node pre gyp err stack at childprocess emit events js node pre gyp err stack at maybeclose internal child process js node pre gyp err stack at process childprocess handle onexit internal child process js node pre gyp err system linux generic node pre gyp err command root nvm versions node bin node var local git grpc node packages grpc native core node modules bin node pre gyp install fallback to build library static library node pre gyp err cwd var local git grpc node packages grpc native core node pre gyp err node v node pre gyp err node pre gyp v node pre gyp err not ok failed to execute root nvm versions node bin node root nvm versions node lib node modules npm node modules node gyp bin node gyp js configure fallback to build library static library module var local git grpc node packages grpc native core src node extension binary node linux glibc grpc node node module name grpc node module path var local git grpc node packages grpc native core src node extension binary node linux glibc npm err linux generic npm err argv root nvm versions node bin node root nvm versions node bin npm install build from source unsafe perm npm err node npm err npm npm err code elifecycle npm err errno npm err grpc dev install node pre gyp install fallback to build library static library 
npm err exit status npm err npm err failed at the grpc dev install script node pre gyp install fallback to build library static library npm err make sure you have the latest version of node js and npm installed npm err if you do this is most likely a problem with the grpc package npm err not with npm itself npm err tell the author that this fails on your system npm err node pre gyp install fallback to build library static library npm err you can get information on how to open an issue for this project with npm err npm bugs grpc npm err or if that isn t available you can get their info via npm err npm owner ls grpc npm err there is likely additional logging output above npm err please include the following file with any support request npm err tmp npm cache logs debug log native core install errored after s error command failed npm install build from source unsafe perm at promise all then arr var local git grpc node node modules execa index js at process tickcallback internal process next tick js setup errored after s error in plugin run sequence native core install message command failed npm install build from source unsafe perm details code killed false stdout null stderr null failed true signal null cmd npm install build from source unsafe perm timedout false
| 0
|
18,283
| 24,374,621,177
|
IssuesEvent
|
2022-10-03 23:01:13
|
emily-writes-poems/emily-writes-poems-processing
|
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
|
closed
|
create GUI for mongo poem collections
|
script migration processing
|
- [x] inserting new collection - https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/4
- [ ] edit poems in existing collections - emily-writes-poems/emily-writes-poems-scripts#19
- [ ] related: edit collection - https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/5
- [ ] checkbox selector for poems in DB?
- [x] #28
|
1.0
|
create GUI for mongo poem collections - - [x] inserting new collection - https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/4
- [ ] edit poems in existing collections - emily-writes-poems/emily-writes-poems-scripts#19
- [ ] related: edit collection - https://github.com/emily-writes-poems/emily-writes-poems-processing/issues/5
- [ ] checkbox selector for poems in DB?
- [x] #28
|
process
|
create gui for mongo poem collections inserting new collection edit poems in existing collections emily writes poems emily writes poems scripts related edit collection checkbox selector for poems in db
| 1
|
18,152
| 24,189,817,218
|
IssuesEvent
|
2022-09-23 16:22:23
|
openxla/stablehlo
|
https://api.github.com/repos/openxla/stablehlo
|
reopened
|
Define code style
|
Process
|
Currently, StableHLO doesn't have a formally-defined code style, but it will be good to have one. We could start with inheriting MLIR-HLO's code style to maximize familiarity for existing CHLO/MHLO users.
|
1.0
|
Define code style - Currently, StableHLO doesn't have a formally-defined code style, but it will be good to have one. We could start with inheriting MLIR-HLO's code style to maximize familiarity for existing CHLO/MHLO users.
|
process
|
define code style currently stablehlo doesn t have a formally defined code style but it will be good to have one we could start with inheriting mlir hlo s code style to maximize familiarity for existing chlo mhlo users
| 1
|
174,519
| 13,493,302,567
|
IssuesEvent
|
2020-09-11 19:26:02
|
nicholas-maltbie/PropHunt
|
https://api.github.com/repos/nicholas-maltbie/PropHunt
|
closed
|
Separate Generated Code From Code Coverage
|
refactor test
|
Move all generated code to its own Assembly Definition. These pieces of generated or external code can be assumed to be correct as they are not going to be modified by us directly.
We will only test how this works during integration tests; it is assumed to be working, so we will only observe the secondary effects of the code. Hence generated code should be moved to its own assembly definition and avoided by the rest of the project.
This is specifically addressing all the generated code for the NetCode portions of the project but should be included for other additions to the project. We should not have to update our unit testing for pieces of code that others have written.
This can only be done after the basic testing framework has been included (which is PR #54 )
|
1.0
|
Separate Generated Code From Code Coverage - Move all generated code to its own Assembly Definition. These pieces of generated or external code can be assumed to be correct as they are not going to be modified by us directly.
We will only test how this works during integration tests; it is assumed to be working, so we will only observe the secondary effects of the code. Hence generated code should be moved to its own assembly definition and avoided by the rest of the project.
This is specifically addressing all the generated code for the NetCode portions of the project but should be included for other additions to the project. We should not have to update our unit testing for pieces of code that others have written.
This can only be done after the basic testing framework has been included (which is PR #54 )
|
non_process
|
separate generated code from code coverage move all generated code to its own assembly definition these pieces of generated or external code can be assumed to be correct as they are not going to be modified by us directly we will only be testing how this works during integration tests and it is assumed to be working so we will only be observing the secondary effects of the code hence generated code should be moved to its own assembly definition and be avoided by the rest of the project this is specifically addressing all the generated code for the netcode portions of the project but should be included for other additions to the project we should not have to update our unit testing for pieces of code that others have written this can only be done after the basic testing framework has been included which is pr
| 0
|
4,825
| 4,652,669,448
|
IssuesEvent
|
2016-10-03 14:40:57
|
stamp-web/stamp-webservices
|
https://api.github.com/repos/stamp-web/stamp-webservices
|
closed
|
Look into doing DB-level sorting of catalogue numbers
|
performance
|
Using an expression to do Database level sorting of catalogue numbers vs. in memory
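A minimal sketch of the idea, using stdlib sqlite3 and an assumed `catalogue_numbers(number TEXT)` table (hypothetical names): push the sort into the database with an ORDER BY expression rather than fetching rows and sorting in memory.
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalogue_numbers (number TEXT)")
conn.executemany(
    "INSERT INTO catalogue_numbers VALUES (?)",
    [("12a",), ("3",), ("12",), ("101",)],
)

# DB-level sort: CAST pulls out the leading numeric part, and the raw
# text breaks ties, so '3' < '12' < '12a' < '101' without loading the
# whole table into application memory.
rows = conn.execute(
    "SELECT number FROM catalogue_numbers "
    "ORDER BY CAST(number AS INTEGER), number"
).fetchall()
print([r[0] for r in rows])  # ['3', '12', '12a', '101']
```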
|
True
|
Look into doing DB-level sorting of catalogue numbers - Using an expression to do Database level sorting of catalogue numbers vs. in memory
|
non_process
|
look into doing db level sorting of catalogue numbers using an expression to do database level sorting of catalogue numbers vs in memory
| 0
|
500,797
| 14,515,177,661
|
IssuesEvent
|
2020-12-13 11:24:23
|
altmp/altv-issues
|
https://api.github.com/repos/altmp/altv-issues
|
closed
|
alt.Vehicle.setDoorState method will not really open or close doors
|
Class: bug Priority: high Scope: module-api Status: to-investigate
|
<!--- Provide a general summary of the issue in the Title above -->
Vehicle::setDoorState will not really open or close Doors. Only seems to work sometimes, and the doors will usually only open an inch.
(tested server side)
## Expected Behavior
<!--- Tell us what should happen -->
should open or close doors on the target vehicle
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
The door opens only sometimes, depending on which vehicle is being used,
and only opens about an inch, not all the way.
tested vehicles:
Dominator (about 1 out of 20 times)
Police2 (1 out of 2 times)
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Use setDoorState on any Vehicle
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world --
I am trying to write a script where you can open a vehicle's trunk.
|
1.0
|
alt.Vehicle.setDoorState method will not really open or close doors - <!--- Provide a general summary of the issue in the Title above -->
Vehicle::setDoorState will not really open or close Doors. Only seems to work sometimes, and the doors will usually only open an inch.
(tested server side)
## Expected Behavior
<!--- Tell us what should happen -->
should open or close doors on the target vehicle
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
The door opens only sometimes, depending on which vehicle is being used,
and only opens about an inch, not all the way.
tested vehicles:
Dominator (about 1 out of 20 times)
Police2 (1 out of 2 times)
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Use setDoorState on any Vehicle
## Context (Environment)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world --
I am trying to write a script where you can open a vehicle's trunk.
|
non_process
|
alt vehicle setdoorstate method will not really open or close doors vehicle setdoorstate will not really open or close doors only seems to work sometimes and the doors will usually only open an inch tested server side expected behavior should open or close doors on the target vehicle current behavior door opens only sometimes depending on which vehicle is being used and only opens for about an inch and not all the way tested vehicles dominator about out of times out of times steps to reproduce use setdoorstate on any vehicle context environment providing context helps us come up with a solution that is most useful in the real world i am trying to write a script where you can open a vehicle s trunk
| 0
|
15,794
| 19,985,831,928
|
IssuesEvent
|
2022-01-30 16:47:31
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
MySQL does not support `onDelete: setDefault`
|
bug/1-repro-available kind/bug process/candidate topic: schema validation topic: mysql team/migrations topic: referential actions topic: referentialIntegrity
|
When trying to migrate a super simple schema to MySQL with an `onDelete: setDefault`, this happens:
```
C:\Users\Jan\Documents\throwaway\setdefault>npx prisma db push
Environment variables loaded from .env
Prisma schema loaded from prisma\schema.prisma
Datasource "db": MySQL database "purple_kingfisher" at "mysql-db-provision.cm0mkpwj8arx.eu-central-1.rds.amazonaws.com:3306"
Error: Cannot add foreign key constraint
0: sql_migration_connector::sql_database_step_applier::apply_migration
at migration-engine\connectors\sql-migration-connector\src\sql_database_step_applier.rs:11
1: migration_core::api::SchemaPush
at migration-engine\core\src\api.rs:187
```
```prisma
model OnDeleteSetDefaultParent {
id Int @id @default(autoincrement())
name String @unique
mandatoryChildren OnDeleteSetDefaultMandatoryChild[]
}
model OnDeleteSetDefaultMandatoryChild {
id Int @id @default(autoincrement())
name String @unique
parent OnDeleteSetDefaultParent @relation(fields: [parentId], references: [id], onDelete: SetDefault)
parentId Int @default(1)
}
```
[Per our documentation](https://www.prisma.io/docs/concepts/components/prisma-schema/relations/referential-actions#types-of-referential-actions) this should fundamentally work (if maybe a bit different than expected):
[](https://www.prisma.io/docs/concepts/components/prisma-schema/relations/referential-actions#types-of-referential-actions)
But looking at the MySQL docs, this does not seem to be supported at all:
> SET DEFAULT: This action is recognized by the MySQL parser, but both InnoDB and NDB reject table definitions containing ON DELETE SET DEFAULT or ON UPDATE SET DEFAULT clauses.
- https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html#:~:text=SET%20DEFAULT%3A%20This%20action%20is%20recognized%20by%20the%20MySQL%20parser%2C%20but%20both%20InnoDB%20and%20NDB%20reject%20table%20definitions%20containing%20ON%20DELETE%20SET%20DEFAULT%20or%20ON%20UPDATE%20SET%20DEFAULT%20clauses.
- https://dev.mysql.com/doc/refman/5.7/en/create-table-foreign-keys.html#:~:text=SET%20DEFAULT%3A%20This%20action%20is%20recognized%20by%20the%20MySQL%20parser%2C%20but%20both%20InnoDB%20and%20NDB%20reject%20table%20definitions%20containing%20ON%20DELETE%20SET%20DEFAULT%20or%20ON%20UPDATE%20SET%20DEFAULT%20clauses.
-
Similar for MariaDB:
> The SET DEFAULT action is not supported.
- https://mariadb.com/kb/en/foreign-keys/#:~:text=The%20SET%20DEFAULT%20action%20is%20not%20supported.
We even mention similar in our engines comments:
https://github.com/prisma/prisma-engines/blob/ccf3dc944acdabb431947150e12b984b34c538cd/query-engine/connector-test-kit-rs/query-engine-tests/tests/new/ref_actions/on_delete/set_default.rs#L1
https://github.com/prisma/prisma-engines/blob/ccf3dc944acdabb431947150e12b984b34c538cd/migration-engine/migration-engine-tests/tests/migrations/relations.rs#L565-L566
We should adapt our validation to not allow this, and update our documentation afterwards as well. No need for our users to waste their time with this.
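A toy sketch of such a validation check (Python, purely illustrative; Prisma's actual validation lives in the Rust engines, and every name here is hypothetical):
```python
from typing import Optional

def validate_referential_action(provider: str, action: str) -> Optional[str]:
    """Return an error message if the referential action is unsupported."""
    # InnoDB/NDB (MySQL) and MariaDB parse SET DEFAULT but reject table
    # definitions that use it, per the docs quoted above.
    if provider in {"mysql", "mariadb"} and action == "SetDefault":
        return (
            f"onDelete/onUpdate: SetDefault is not supported on {provider}; "
            "use Cascade, Restrict, NoAction or SetNull instead."
        )
    return None

print(validate_referential_action("mysql", "SetDefault"))
```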
|
1.0
|
MySQL does not support `onDelete: setDefault` - When trying to migrate a super simple schema to MySQL with an `onDelete: setDefault`, this happens:
```
C:\Users\Jan\Documents\throwaway\setdefault>npx prisma db push
Environment variables loaded from .env
Prisma schema loaded from prisma\schema.prisma
Datasource "db": MySQL database "purple_kingfisher" at "mysql-db-provision.cm0mkpwj8arx.eu-central-1.rds.amazonaws.com:3306"
Error: Cannot add foreign key constraint
0: sql_migration_connector::sql_database_step_applier::apply_migration
at migration-engine\connectors\sql-migration-connector\src\sql_database_step_applier.rs:11
1: migration_core::api::SchemaPush
at migration-engine\core\src\api.rs:187
```
```prisma
model OnDeleteSetDefaultParent {
id Int @id @default(autoincrement())
name String @unique
mandatoryChildren OnDeleteSetDefaultMandatoryChild[]
}
model OnDeleteSetDefaultMandatoryChild {
id Int @id @default(autoincrement())
name String @unique
parent OnDeleteSetDefaultParent @relation(fields: [parentId], references: [id], onDelete: SetDefault)
parentId Int @default(1)
}
```
[Per our documentation](https://www.prisma.io/docs/concepts/components/prisma-schema/relations/referential-actions#types-of-referential-actions) this should fundamentally work (if maybe a bit different than expected):
[](https://www.prisma.io/docs/concepts/components/prisma-schema/relations/referential-actions#types-of-referential-actions)
But looking at the MySQL docs, this does not seem to be supported at all:
> SET DEFAULT: This action is recognized by the MySQL parser, but both InnoDB and NDB reject table definitions containing ON DELETE SET DEFAULT or ON UPDATE SET DEFAULT clauses.
- https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html#:~:text=SET%20DEFAULT%3A%20This%20action%20is%20recognized%20by%20the%20MySQL%20parser%2C%20but%20both%20InnoDB%20and%20NDB%20reject%20table%20definitions%20containing%20ON%20DELETE%20SET%20DEFAULT%20or%20ON%20UPDATE%20SET%20DEFAULT%20clauses.
- https://dev.mysql.com/doc/refman/5.7/en/create-table-foreign-keys.html#:~:text=SET%20DEFAULT%3A%20This%20action%20is%20recognized%20by%20the%20MySQL%20parser%2C%20but%20both%20InnoDB%20and%20NDB%20reject%20table%20definitions%20containing%20ON%20DELETE%20SET%20DEFAULT%20or%20ON%20UPDATE%20SET%20DEFAULT%20clauses.
-
Similar for MariaDB:
> The SET DEFAULT action is not supported.
- https://mariadb.com/kb/en/foreign-keys/#:~:text=The%20SET%20DEFAULT%20action%20is%20not%20supported.
We even mention similar in our engines comments:
https://github.com/prisma/prisma-engines/blob/ccf3dc944acdabb431947150e12b984b34c538cd/query-engine/connector-test-kit-rs/query-engine-tests/tests/new/ref_actions/on_delete/set_default.rs#L1
https://github.com/prisma/prisma-engines/blob/ccf3dc944acdabb431947150e12b984b34c538cd/migration-engine/migration-engine-tests/tests/migrations/relations.rs#L565-L566
We should adapt our validation to not allow this, and update our documentation afterwards as well. No need for our users to waste their time with this.
|
process
|
mysql does not support ondelete setdefault when trying to migrate a super simple schema to mysql with an ondelete setdefault this happens c users jan documents throwaway setdefault npx prisma db push environment variables loaded from env prisma schema loaded from prisma schema prisma datasource db mysql database purple kingfisher at mysql db provision eu central rds amazonaws com error cannot add foreign key constraint sql migration connector sql database step applier apply migration at migration engine connectors sql migration connector src sql database step applier rs migration core api schemapush at migration engine core src api rs prisma model ondeletesetdefaultparent id int id default autoincrement name string unique mandatorychildren ondeletesetdefaultmandatorychild model ondeletesetdefaultmandatorychild id int id default autoincrement name string unique parent ondeletesetdefaultparent relation fields references ondelete setdefault parentid int default this should fundamentally work if maybe a bit different than expected but looking at the mysql docs this does not seem to be supported at all set default this action is recognized by the mysql parser but both innodb and ndb reject table definitions containing on delete set default or on update set default clauses similar for mariadb the set default action is not supported we even mention similar in our engines comments we should adapt our validation to not allow this and update our documentation afterwards as well no need for our users to waste their time with this
| 1
|
18,287
| 24,381,805,448
|
IssuesEvent
|
2022-10-04 08:28:42
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
(migrate) print expected value for defaults migrations on diff
|
process/candidate kind/improvement team/schema topic: dbgenerated
|
- Show it in the migration comments
- Show it in the drift/diff summary view
|
1.0
|
(migrate) print expected value for defaults migrations on diff - - Show it in the migration comments
- Show it in the drift/diff summary view
|
process
|
migrate print expected value for defaults migrations on diff show it in the migration comments show it in the drift diff summary view
| 1
|
277,704
| 30,671,645,157
|
IssuesEvent
|
2023-07-25 23:16:50
|
Mend-developer-platform-load/3213490_28
|
https://api.github.com/repos/Mend-developer-platform-load/3213490_28
|
opened
|
grunt-angular-templates-1.2.0.tgz: 1 vulnerabilities (highest severity is: 7.5)
|
Mend: dependency security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>grunt-angular-templates-1.2.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Mend-developer-platform-load/3213490_28/commit/ae1936d6f4727aa40458275db3161f7d6d1a4d8e">ae1936d6f4727aa40458275db3161f7d6d1a4d8e</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (grunt-angular-templates version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-37620](https://www.mend.io/vulnerability-database/CVE-2022-37620) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | html-minifier-4.0.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-37620</summary>
### Vulnerable Library - <b>html-minifier-4.0.0.tgz</b></p>
<p>Highly configurable, well-tested, JavaScript-based HTML minifier.</p>
<p>Library home page: <a href="https://registry.npmjs.org/html-minifier/-/html-minifier-4.0.0.tgz">https://registry.npmjs.org/html-minifier/-/html-minifier-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/package.json</p>
<p>
Dependency Hierarchy:
- grunt-angular-templates-1.2.0.tgz (Root Library)
- :x: **html-minifier-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Mend-developer-platform-load/3213490_28/commit/ae1936d6f4727aa40458275db3161f7d6d1a4d8e">ae1936d6f4727aa40458275db3161f7d6d1a4d8e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular Expression Denial of Service (ReDoS) flaw was found in kangax html-minifier 4.0.0 via the candidate variable in htmlminifier.js.
<p>Publish Date: 2022-10-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37620>CVE-2022-37620</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
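For context on what a ReDoS like CVE-2022-37620 exploits, here is a generic catastrophic-backtracking demo in Python (the pattern is illustrative only; the CVE text above does not quote the actual html-minifier regex):
```python
import re
import time

pattern = re.compile(r"(a+)+$")   # classic nested-quantifier pattern
payload = "a" * 28 + "!"          # trailing '!' forces full backtracking

start = time.perf_counter()
pattern.search(payload)
# Matching time grows exponentially with the run of 'a's — this can take
# several seconds — which is how a crafted input can pin a CPU and deny
# service.
print(f"took {time.perf_counter() - start:.2f}s")
```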
|
True
|
grunt-angular-templates-1.2.0.tgz: 1 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>grunt-angular-templates-1.2.0.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Mend-developer-platform-load/3213490_28/commit/ae1936d6f4727aa40458275db3161f7d6d1a4d8e">ae1936d6f4727aa40458275db3161f7d6d1a4d8e</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (grunt-angular-templates version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-37620](https://www.mend.io/vulnerability-database/CVE-2022-37620) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | html-minifier-4.0.0.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the "Details" section below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-37620</summary>
### Vulnerable Library - <b>html-minifier-4.0.0.tgz</b></p>
<p>Highly configurable, well-tested, JavaScript-based HTML minifier.</p>
<p>Library home page: <a href="https://registry.npmjs.org/html-minifier/-/html-minifier-4.0.0.tgz">https://registry.npmjs.org/html-minifier/-/html-minifier-4.0.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/html-minifier/package.json</p>
<p>
Dependency Hierarchy:
- grunt-angular-templates-1.2.0.tgz (Root Library)
- :x: **html-minifier-4.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Mend-developer-platform-load/3213490_28/commit/ae1936d6f4727aa40458275db3161f7d6d1a4d8e">ae1936d6f4727aa40458275db3161f7d6d1a4d8e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Regular Expression Denial of Service (ReDoS) flaw was found in kangax html-minifier 4.0.0 via the candidate variable in htmlminifier.js.
<p>Publish Date: 2022-10-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37620>CVE-2022-37620</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
|
non_process
|
grunt angular templates tgz vulnerabilities highest severity is vulnerable library grunt angular templates tgz path to dependency file package json path to vulnerable library node modules html minifier package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in grunt angular templates version remediation available high html minifier tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the details section below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library html minifier tgz highly configurable well tested javascript based html minifier library home page a href path to dependency file package json path to vulnerable library node modules html minifier package json dependency hierarchy grunt angular templates tgz root library x html minifier tgz vulnerable library found in head commit a href found in base branch main vulnerability details a regular expression denial of service redos flaw was found in kangax html minifier via the candidate variable in htmlminifier js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href
| 0
|
4,975
| 7,807,794,290
|
IssuesEvent
|
2018-06-11 18:02:58
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Change the "Participate" button so it doesn't imply side effects
|
good first issue hacktoberfest space: processes stale-issue target: user-experience wontfix
|
# This is a Feature Proposal
#### :tophat: Description
By looking at how other users navigate the platform, we see that a "Browse" or "Take action" button on participatory processes seems to imply there are side effects. We should have a link with a different copy (suggestions welcome).
This impacts the usage of some people who don't enter participatory processes because they're afraid it's an action that maybe can't be undone.

#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***: Meta Decidim
* ***Browser & version***:
* ***Screenshot***:
* ***Error messages***:
* ***URL to reproduce the error***:
|
1.0
|
Change the "Participate" button so it doesn't imply side effects - # This is a Feature Proposal
#### :tophat: Description
By looking at how other users navigate the platform, we see that a "Browse" or "Take action" button on participatory processes seems to imply there are side effects. We should have a link with a different copy (suggestions welcome).
This impacts the usage of some people who don't enter participatory processes because they're afraid it's an action that maybe can't be undone.

#### :pushpin: Related issues
*None*
#### :clipboard: Additional Data
* ***Decidim deployment where you found the issue***: Meta Decidim
* ***Browser & version***:
* ***Screenshot***:
* ***Error messages***:
* ***URL to reproduce the error***:
|
process
|
change the participate button so it doesn t imply side effects this is a feature proposal tophat description by looking at how other users navigate on the platform having a browse or take action button on participatory processes seems to imply there s side effects we should have a link with a different copy suggestions welcome this impacts the usage of some people who don t enter participatory processes because they re afraid it s an action that maybe can t be undone pushpin related issues none clipboard additional data decidim deployment where you found the issue meta decidim browser version screenshot error messages url to reproduce the error
| 1
|
15,732
| 3,481,443,298
|
IssuesEvent
|
2015-12-29 16:11:07
|
slivne/try_git
|
https://api.github.com/repos/slivne/try_git
|
opened
|
repair : repair_fixes_deletion_of_cells_test
|
dtest repair
|
Check that repair fixes a deletion of cells:
1. Create a cluster of 2 nodes with rf=2; disable read_repair and hinted_handoff
2. Insert data
3. Shut down node 2
4. Delete some cells
5. Start node 2
6. Run repair on node 2
7. Shut down node 1 and check that all data exists on node 2
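A hedged harness sketch of these steps driving the `ccm` cluster-manager CLI via subprocess (assumes ccm is installed and on PATH; the cluster name and Cassandra version are hypothetical, and data load/verification steps are elided):
```python
import subprocess

def ccm(*args):
    # Thin wrapper around the ccm cluster-manager CLI.
    subprocess.run(["ccm", *args], check=True)

ccm("create", "repair_test", "-v", "3.11.4", "-n", "2")  # step 1: 2 nodes
ccm("updateconf", "hinted_handoff_enabled: false")       # step 1: no hints
ccm("start")
# step 1 cont.: read_repair is disabled per-table at schema creation (elided)
# step 2: insert data with RF=2 (schema and load tool elided)
ccm("node2", "stop")                                     # step 3
# step 4: delete some cells while node 2 is down (elided)
ccm("node2", "start")                                    # step 5
ccm("node2", "nodetool", "repair")                       # step 6
ccm("node1", "stop")                                     # step 7
# finally, read back and assert all data is present on node 2 (elided)
```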
|
1.0
|
repair : repair_fixes_deletion_of_cells_test - Check that repair fixes a deletion of cells 1. Create a cluster of 2 nodes with rf=2, disable read_repair, hinted_handoff 2. Insert data 3. Shutdown node 2 4. Delete some cells 5. Start node 2 6. Run repair on node 2 7. Shutdown node 1 - check that all data exists on node 2
|
non_process
|
repair repair fixes deletion of cells test check that repair fixes a deletion of cells create a cluster of nodes with rf disable read repair hinted handoff insert data shutdown node delete some cells start node run repair on node shutdown node check that all data exists on node
| 0
|
250,949
| 21,391,827,142
|
IssuesEvent
|
2022-04-21 07:55:07
|
archethic-foundation/archethic-node
|
https://api.github.com/repos/archethic-foundation/archethic-node
|
opened
|
Failure in the tests during CI
|
bug testing
|
Tests fail in CI due to a race condition where the GenServer is already started
<img width="929" alt="image" src="https://user-images.githubusercontent.com/42943690/164406582-05136596-89a4-40ae-85b1-0812b59deea4.png">
We can make the tests non-asynchronous or not define the name of the GenServer by default.
|
1.0
|
Failure in the tests during CI - Tests fail in CI due to a race condition where the GenServer is already started
<img width="929" alt="image" src="https://user-images.githubusercontent.com/42943690/164406582-05136596-89a4-40ae-85b1-0812b59deea4.png">
We can make the tests non-asynchronous or not define the name of the GenServer by default.
|
non_process
|
failure in the tests during ci test fails in the ci due to race condition of the genserver already started img width alt image src we can make the tests non asynchronous or not define the name of the genserver by default
| 0
|
8,636
| 11,787,185,969
|
IssuesEvent
|
2020-03-17 13:38:15
|
prisma/prisma-client-js
|
https://api.github.com/repos/prisma/prisma-client-js
|
closed
|
Best heuristic to enable "pretty" errors?
|
kind/discussion process/candidate topic: dx topic: env
|
Current spec https://github.com/prisma/specs/blob/c230b8832410e5f669215826f440e7b9222c48a7/prisma-client-js/README.md#error-formatting
errorFormat is set to `pretty` by default
Except if we find these environment variables:
- NO_COLOR: If this env var is provided, colors are stripped from the error message. Therefore we end up with a colorless error. The NO_COLOR environment variable is a standard described here. We have a tracking issue here.
- NODE_ENV=production: If the env var NODE_ENV is set to production, only the minimal error will be printed. This allows for easier digestion of logs in production environments.
We want to change the default from `pretty` to `colorless`; see https://github.com/prisma/prisma-client-js/issues/579
We should find a new heuristic to enable `pretty` for local development.
@Weakky suggested something that is used by Nexus for typegen only in dev:
```
if (!process.env.NODE_ENV || process.env.NODE_ENV === "development") {
/* enable pretty mode */
}
```
|
1.0
|
Best heuristic to enable "pretty" errors? - Current spec https://github.com/prisma/specs/blob/c230b8832410e5f669215826f440e7b9222c48a7/prisma-client-js/README.md#error-formatting
errorFormat is set to `pretty` by default
Except if we find these environment variables:
- NO_COLOR: If this env var is provided, colors are stripped from the error message. Therefore we end up with a colorless error. The NO_COLOR environment variable is a standard described here. We have a tracking issue here.
- NODE_ENV=production: If the env var NODE_ENV is set to production, only the minimal error will be printed. This allows for easier digestion of logs in production environments.
We want to change the default from `pretty` to `colorless`; see https://github.com/prisma/prisma-client-js/issues/579
We should find a new heuristic to enable `pretty` for local development.
@Weakky suggested something that is used by Nexus for typegen only in dev:
```
if (!process.env.NODE_ENV || process.env.NODE_ENV === "development") {
/* enable pretty mode */
}
```
|
process
|
best heuristic to enable pretty errors current spec errorformat is set to pretty by default except if we find these environment variables no color if this env var is provided colors are stripped from the error message therefore we end up with a colorless error the no color environment variable is a standard described here we have a tracking issue here node env production if the env var node env is set to production only the minimal error will be printed this allows for easier digestion of logs in production environments we want to change the default from pretty to colorless see we should find a new heuristic to enable pretty for local development weakky suggested something that is used by nexus for typegen only in dev if process env node env process env node env development enable pretty mode
| 1
|
56,109
| 3,078,228,553
|
IssuesEvent
|
2015-08-21 08:49:07
|
pavel-pimenov/flylinkdc-r5xx
|
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
|
closed
|
Viewing small files
|
bug imported Priority-Medium
|
_From [msm78...@gmail.com](https://code.google.com/u/110862643213520006198/) on March 17, 2011 17:11:19_
I suggest adding the ability to view not only txt files but also any other file types from a list defined in the settings.
It would be convenient to pick an "open with the system" item in the context menu, after which the file would be downloaded and opened with the application the OS assigns to it.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=403_
|
1.0
|
Viewing small files - _From [msm78...@gmail.com](https://code.google.com/u/110862643213520006198/) on March 17, 2011 17:11:19_
I suggest adding the ability to view not only txt files but also any other file types from a list defined in the settings.
It would be convenient to pick an "open with the system" item in the context menu, after which the file would be downloaded and opened with the application the OS assigns to it.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=403_
|
non_process
|
viewing small files from on march i suggest adding the ability to view not only txt files but also any other file types from a list defined in the settings it would be convenient to pick an open with the system item in the context menu after which the file would be downloaded and opened with the application the os assigns to it original issue
| 0
|
3,645
| 6,677,495,475
|
IssuesEvent
|
2017-10-05 10:42:10
|
our-city-app/oca-backend
|
https://api.github.com/repos/our-city-app/oca-backend
|
closed
|
Failed to log client error to server
|
process_duplicate type_bug
|
13/Mar/2017:20:01:22 +0100 "POST /mobi/rest/system/log_error HTTP/1.1" 500 225 https://rogerth.at/flex/ "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
Incorrect type received for parameter 'errorMessage'. Expected <type 'unicode'> and got <type 'dict'> ({}).
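The 500 suggests a dict arriving where the endpoint expects a unicode string. A minimal defensive coercion sketch (a hypothetical helper, not the actual server code; the original runs on Python 2, this is written in Python 3 for brevity):
```python
import json

def coerce_error_message(value):
    # Accept whatever the client sent: pass strings through, serialise
    # dicts/lists so the log endpoint no longer rejects the payload.
    if isinstance(value, str):
        return value
    return json.dumps(value, default=str)
```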
|
1.0
|
Failed to log client error to server - 13/Mar/2017:20:01:22 +0100 "POST /mobi/rest/system/log_error HTTP/1.1" 500 225 https://rogerth.at/flex/ "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
Incorrect type received for parameter 'errorMessage'. Expected <type 'unicode'> and got <type 'dict'> ({}).
|
process
|
failed to log client error to server mar post mobi rest system log error http mozilla windows nt trident rv like gecko incorrect type received for parameter errormessage expected and got
| 1
|
16,623
| 4,074,172,070
|
IssuesEvent
|
2016-05-28 08:10:19
|
milessabin/macro-compat
|
https://api.github.com/repos/milessabin/macro-compat
|
closed
|
Compile error: bad symbolic reference. A signature in TypecheckerContextExtensions.class refers to term tools
|
Documentation
|
You also need to add scala-compiler as a dep. Worth mentioning in readme?
```
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "provided",
```
|
1.0
|
Compile error: bad symbolic reference. A signature in TypecheckerContextExtensions.class refers to term tools - You also need to add scala-compiler as a dep. Worth mentioning in readme?
```
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "provided",
```
|
non_process
|
compile error bad symbolic reference a signature in typecheckercontextextensions class refers to term tools you also need to add scala compiler as a dep worth mentioning in readme org scala lang scala compiler scalaversion value provided
| 0
|
16,291
| 20,920,214,626
|
IssuesEvent
|
2022-03-24 16:42:04
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
PowerTransformer 'divide by zero encountered in log' + proposed fix
|
Bug Moderate module:preprocessing
|
#### Description
PowerTransformer sometimes issues 'divide by zero encountered in log' warning and returns the wrong answer.
#### Steps/Code to Reproduce
```py
import numpy as np
import sklearn
sklearn.show_versions()
from sklearn.preprocessing import PowerTransformer
a = np.array([
3251637.22,620695.44,11642969.00,2223468.22,85307500.00,16494389.89,
917215.88,11642969.00,2145773.87,4962000.00,620695.44,651234.50,
1907876.71,4053297.88,3251637.22,3259103.08,9547969.00,20631286.23,
12807072.08,2383819.84,90114500.00,17209575.46,12852969.00,2414609.99,
2170368.23])
PowerTransformer().fit_transform(a.reshape(-1,1))
```
#### Expected Results
The result after my proposed fix detailed below.
```py
array([[-0.23030198],
[-1.6982624 ],
[ 0.70573405],
[-0.5401762 ],
[ 1.892682 ],
[ 0.93580745],
[-1.32349791],
[ 0.70573405],
[-0.56995447],
[ 0.0969763 ],
[-1.6982624 ],
[-1.6511494 ],
[-0.66930788],
[-0.05744311],
[-0.23030198],
[-0.22847775],
[ 0.57003163],
[ 1.07829013],
[ 0.7697045 ],
[-0.48226763],
[ 1.9212548 ],
[ 0.96314737],
[ 0.7720907 ],
[-0.47165169],
[-0.56039821]])
```
#### Actual Results
```pytb
/Users/bcbrock/github/scikit-learn/sklearn/preprocessing/data.py:2930: RuntimeWarning: divide by zero encountered in log
loglike = -n_samples / 2 * np.log(x_trans.var())
array([[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.]])
```
#### Versions
I installed the GitHub master branch as of this morning.
```
System:
python: 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
executable: /anaconda3/bin/python
machine: Darwin-17.7.0-x86_64-i386-64bit
Python deps:
pip: 19.2.3
setuptools: 38.4.0
sklearn: 0.22.dev0
numpy: 1.17.0
scipy: 1.1.0
Cython: 0.29.13
pandas: 0.24.2
matplotlib: 2.1.2
joblib: 0.13.2
```
#### My Analysis and Proposal
I don't claim to be an expert on the math, but this is what appears to be happening: Some of the Lambda values tried during the log-likelihood optimization yield 0-variance arrays, despite the large variance in the input data. Taking the log() of the zero variance causes the warning. However the real problem seems to be that log(0) yields -INF, which I believe is being taken as the 'most optimal' result, hence the return of the array of zeros. My proposed solution is to check for zero-variance arrays, and return +INF in these cases. This removes the warning and yields what appears to me to be a reasonable answer.
```diff
diff --git a/sklearn/preprocessing/data.py b/sklearn/preprocessing/data.py
index 4a2c5a4..9ff91e8 100644
--- a/sklearn/preprocessing/data.py
+++ b/sklearn/preprocessing/data.py
@@ -2924,12 +2924,15 @@ class PowerTransformer(TransformerMixin, BaseEstimator):
def _neg_log_likelihood(lmbda):
"""Return the negative log likelihood of the observed data x as a
function of lambda."""
x_trans = self._yeo_johnson_transform(x, lmbda)
n_samples = x.shape[0]
+ variance = x_trans.var()
+ if variance == 0:
+ return np.inf
- loglike = -n_samples / 2 * np.log(x_trans.var())
+ loglike = -n_samples / 2 * np.log(variance)
loglike += (lmbda - 1) * (np.sign(x) * np.log1p(np.abs(x))).sum()
return -loglike
# the computation of lambda is influenced by NaNs so we need to
```
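To make the guard easy to try outside the class, here is the same objective as a standalone function (a minimal sketch; `x` is the 1-D input column and `x_trans` its Yeo-Johnson transform, exactly as in the diff above):
```python
import numpy as np

def neg_log_likelihood(x, x_trans, lmbda):
    # A zero-variance transform made log() return -inf, so -loglike was
    # -inf and that lambda "won" the optimization; returning +inf instead
    # marks it as the worst candidate.
    n_samples = x.shape[0]
    variance = x_trans.var()
    if variance == 0:
        return np.inf
    loglike = -n_samples / 2 * np.log(variance)
    loglike += (lmbda - 1) * (np.sign(x) * np.log1p(np.abs(x))).sum()
    return -loglike
```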
Here are plots of the input and output (after the above fix).


There are three remaining issues:
1. The real problem may be in the way the optimization is being done, i.e., why do certain values of Lambda yield zero-variance arrays? I haven't looked into that.
2. I'm not sure what the expected/correct response is when the input array has a single element. With the fix the return value seems to be a constant 0.
3. A similar issue appears with the Box-Cox transform, but that is an issue in **scipy**, not in **scikit-learn**.
Please feel free to use any of this as you wish. Unfortunately I am not authorized by my employer to contribute to open-source projects.
Thank you,
Bishop Brock
<!-- Thanks for contributing! -->
|
1.0
|
PowerTransformer 'divide by zero encountered in log' + proposed fix - #### Description
PowerTransformer sometimes issues 'divide by zero encountered in log' warning and returns the wrong answer.
#### Steps/Code to Reproduce
```py
import numpy as np
import sklearn
sklearn.show_versions()
from sklearn.preprocessing import PowerTransformer
a = np.array([
3251637.22,620695.44,11642969.00,2223468.22,85307500.00,16494389.89,
917215.88,11642969.00,2145773.87,4962000.00,620695.44,651234.50,
1907876.71,4053297.88,3251637.22,3259103.08,9547969.00,20631286.23,
12807072.08,2383819.84,90114500.00,17209575.46,12852969.00,2414609.99,
2170368.23])
PowerTransformer().fit_transform(a.reshape(-1,1))
```
#### Expected Results
The result after my proposed fix detailed below.
```py
array([[-0.23030198],
[-1.6982624 ],
[ 0.70573405],
[-0.5401762 ],
[ 1.892682 ],
[ 0.93580745],
[-1.32349791],
[ 0.70573405],
[-0.56995447],
[ 0.0969763 ],
[-1.6982624 ],
[-1.6511494 ],
[-0.66930788],
[-0.05744311],
[-0.23030198],
[-0.22847775],
[ 0.57003163],
[ 1.07829013],
[ 0.7697045 ],
[-0.48226763],
[ 1.9212548 ],
[ 0.96314737],
[ 0.7720907 ],
[-0.47165169],
[-0.56039821]])
```
#### Actual Results
```pytb
/Users/bcbrock/github/scikit-learn/sklearn/preprocessing/data.py:2930: RuntimeWarning: divide by zero encountered in log
loglike = -n_samples / 2 * np.log(x_trans.var())
array([[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.]])
```
#### Versions
I installed the GitHub master branch as of this morning.
```
System:
python: 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
executable: /anaconda3/bin/python
machine: Darwin-17.7.0-x86_64-i386-64bit
Python deps:
pip: 19.2.3
setuptools: 38.4.0
sklearn: 0.22.dev0
numpy: 1.17.0
scipy: 1.1.0
Cython: 0.29.13
pandas: 0.24.2
matplotlib: 2.1.2
joblib: 0.13.2
```
#### My Analysis and Proposal
I don't claim to be an expert on the math, but this is what appears to be happening: Some of the Lambda values tried during the log-likelihood optimization yield 0-variance arrays, despite the large variance in the input data. Taking the log() of the zero variance causes the warning. However the real problem seems to be that log(0) yields -INF, which I believe is being taken as the 'most optimal' result, hence the return of the array of zeros. My proposed solution is to check for zero-variance arrays, and return +INF in these cases. This removes the warning and yields what appears to me to be a reasonable answer.
```diff
diff --git a/sklearn/preprocessing/data.py b/sklearn/preprocessing/data.py
index 4a2c5a4..9ff91e8 100644
--- a/sklearn/preprocessing/data.py
+++ b/sklearn/preprocessing/data.py
@@ -2924,12 +2924,15 @@ class PowerTransformer(TransformerMixin, BaseEstimator):
def _neg_log_likelihood(lmbda):
"""Return the negative log likelihood of the observed data x as a
function of lambda."""
x_trans = self._yeo_johnson_transform(x, lmbda)
n_samples = x.shape[0]
+ variance = x_trans.var()
+ if variance == 0:
+ return np.inf
- loglike = -n_samples / 2 * np.log(x_trans.var())
+ loglike = -n_samples / 2 * np.log(variance)
loglike += (lmbda - 1) * (np.sign(x) * np.log1p(np.abs(x))).sum()
return -loglike
# the computation of lambda is influenced by NaNs so we need to
```
Here are plots of the input and output (after the above fix).


There are three remaining issues:
1. The real problem may be in the way the optimization is being done, i.e., why do certain values of Lambda yield zero-variance arrays? I haven't looked into that.
2. I'm not sure what the expected/correct response is when the input array has a single element. With the fix the return value seems to be a constant 0.
3. A similar issue appears with the Box-Cox transform, but that is an issue in **scipy**, not in **scikit-learn**.
Please feel free to use any of this as you wish. Unfortunately I am not authorized by my employer to contribute to open-source projects.
Thank you,
Bishop Brock
<!-- Thanks for contributing! -->
|
process
|
powertransformer divide by zero encountered in log proposed fix description powertransformer sometimes issues divide by zero encountered in log warning and returns the wrong answer steps code to reproduce py import numpy as np import sklearn sklearn show versions from sklearn preprocessing import powertransformer a np array powertransformer fit transform a reshape expected results the result after my proposed fix detailed below py array actual results pytb users bcbrock github scikit learn sklearn preprocessing data py runtimewarning divide by zero encountered in log loglike n samples np log x trans var array versions i installed the github master branch as of this morning system python anaconda inc default jan executable bin python machine darwin python deps pip setuptools sklearn numpy scipy cython pandas matplotlib joblib my analysis and proposal i don t claim to be an expert on the math but this is what appears to be happening some of the lambda values tried during the log likelihood optimization yield variance arrays despite the large variance in the input data taking the log of the zero variance causes the warning however the real problem seems to be that log yields inf which i believe is being taken as the most optimal result hence the return of the array of zeros my proposed solution is to check for zero variance arrays and return inf in these cases this removes the warning and yields what appears to me to be a reasonable answer diff diff git a sklearn preprocessing data py b sklearn preprocessing data py index a sklearn preprocessing data py b sklearn preprocessing data py class powertransformer transformermixin baseestimator def neg log likelihood lmbda return the negative log likelihood of the observed data x as a function of lambda x trans self yeo johnson transform x lmbda n samples x shape variance x trans var if variance return np inf loglike n samples np log x trans var loglike n samples np log variance loglike lmbda np sign x np np abs x sum return loglike the computation of lambda is influenced by nans so we need to here are plots of the input and output after the above fix there are three remaining issues the real problem may be in the way the optimization is being done i e why do certain values of lambda yield zero variance arrays i haven t looked into that i m not sure what the expected correct response is when the input array has a single element with the fix the return value seems to be a constant a similar issue appears with the box cox transform but that is an issue in scipy not in scikit learn please feel free to use any of this as you wish unfortunately i am not authorized by my employer to contribute to open source projects thank you bishop brock
| 1
|
16,325
| 20,980,507,011
|
IssuesEvent
|
2022-03-28 19:25:09
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Terminal does not load when multiple folders open in the workspace
|
bug WSL remote terminal-process
|
Issue Type: <b>Bug</b>
I'm using vscode with wsl2. When I have a workspace with more than one folder, I am unable to open a terminal: no errors, just an empty panel. After closing the additional folders until only one remains, the terminal loads. Attempting to spawn an additional terminal with "+" does not work either; same behavior
VS Code version: Code 1.65.2 (c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1, 2022-03-10T14:33:55.248Z)
OS version: Windows_NT x64 10.0.19044
Restricted Mode: No
Remote OS version: Linux x64 5.10.60.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (12 x 2712)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|31.73GB (20.64GB free)|
|Process Argv|--crash-reporter-id 2c833a45-a7cd-4fb4-a5f0-d73c3424a0c5|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu-20.04|
|OS|Linux x64 5.10.60.1-microsoft-standard-WSL2|
|CPUs|Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (6 x 2712)|
|Memory (System)|19.55GB (18.28GB free)|
|VM|0%|
</details><details><summary>Extensions (31)</summary>
Extension|Author (truncated)|Version
---|---|---
dotenv|mik|1.0.1
jupyter-keymap|ms-|1.0.0
remote-containers|ms-|0.224.2
remote-ssh|ms-|0.76.1
remote-ssh-edit|ms-|0.76.1
remote-wsl|ms-|0.64.2
rewrap|stk|1.16.3
vscode-icons|vsc|11.10.0
gitlens|eam|12.0.4
gc-excelviewer|Gra|4.2.53
vscode-kubernetes-tools|ms-|1.3.7
python|ms-|2022.2.1924087327
vscode-pylance|ms-|2022.3.1
jupyter|ms-|2022.2.1030672458
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.6
ansible|red|0.8.1
fabric8-analytics|red|0.3.5
java|red|1.4.0
vscode-commons|red|0.0.6
vscode-xml|red|0.19.1
vscode-yaml|red|1.5.1
code-spell-checker|str|2.1.7
shellcheck|tim|0.18.9
vscodeintellicode|Vis|1.2.17
vscode-java-debug|vsc|0.38.0
vscode-java-dependency|vsc|0.19.0
vscode-java-pack|vsc|0.22.0
vscode-java-test|vsc|0.34.2
vscode-maven|vsc|0.35.1
markdown-all-in-one|yzh|3.4.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383:30185418
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyl392:30443607
pythontb:30283811
pythonvspyt551cf:30345471
pythonptprofiler:30281270
vshan820:30294714
vstes263:30335439
vscoreces:30445986
pythondataviewer:30285071
vscod805cf:30301675
pythonvspyt200:30340761
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
vsaa593cf:30376535
vsc1dst:30438360
pythonvs932:30410667
wslgetstarted:30449410
vsclayoutctrt:30451275
dsvsc009:30452663
pythonvspyt640:30450904
vscscmwlcmt:30438805
cppdebug:30451566
pynewfile477:30450038
```
</details>
<!-- generated by issue reporter -->
|
1.0
|
Terminal does not load when multiple folders open in the workspace -
Issue Type: <b>Bug</b>
I'm using vscode with wsl2. When I have a workspace with more than one folder, I am unable to open a terminal: no errors, just an empty panel. After closing the additional folders until only one remains, the terminal loads. Attempting to spawn an additional terminal with "+" does not work either; same behavior
VS Code version: Code 1.65.2 (c722ca6c7eed3d7987c0d5c3df5c45f6b15e77d1, 2022-03-10T14:33:55.248Z)
OS version: Windows_NT x64 10.0.19044
Restricted Mode: No
Remote OS version: Linux x64 5.10.60.1-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (12 x 2712)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|31.73GB (20.64GB free)|
|Process Argv|--crash-reporter-id 2c833a45-a7cd-4fb4-a5f0-d73c3424a0c5|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu-20.04|
|OS|Linux x64 5.10.60.1-microsoft-standard-WSL2|
|CPUs|Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (6 x 2712)|
|Memory (System)|19.55GB (18.28GB free)|
|VM|0%|
</details><details><summary>Extensions (31)</summary>
Extension|Author (truncated)|Version
---|---|---
dotenv|mik|1.0.1
jupyter-keymap|ms-|1.0.0
remote-containers|ms-|0.224.2
remote-ssh|ms-|0.76.1
remote-ssh-edit|ms-|0.76.1
remote-wsl|ms-|0.64.2
rewrap|stk|1.16.3
vscode-icons|vsc|11.10.0
gitlens|eam|12.0.4
gc-excelviewer|Gra|4.2.53
vscode-kubernetes-tools|ms-|1.3.7
python|ms-|2022.2.1924087327
vscode-pylance|ms-|2022.3.1
jupyter|ms-|2022.2.1030672458
jupyter-keymap|ms-|1.0.0
jupyter-renderers|ms-|1.0.6
ansible|red|0.8.1
fabric8-analytics|red|0.3.5
java|red|1.4.0
vscode-commons|red|0.0.6
vscode-xml|red|0.19.1
vscode-yaml|red|1.5.1
code-spell-checker|str|2.1.7
shellcheck|tim|0.18.9
vscodeintellicode|Vis|1.2.17
vscode-java-debug|vsc|0.38.0
vscode-java-dependency|vsc|0.19.0
vscode-java-pack|vsc|0.22.0
vscode-java-test|vsc|0.34.2
vscode-maven|vsc|0.35.1
markdown-all-in-one|yzh|3.4.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vsreu685:30147344
python383:30185418
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyl392:30443607
pythontb:30283811
pythonvspyt551cf:30345471
pythonptprofiler:30281270
vshan820:30294714
vstes263:30335439
vscoreces:30445986
pythondataviewer:30285071
vscod805cf:30301675
pythonvspyt200:30340761
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
vsaa593cf:30376535
vsc1dst:30438360
pythonvs932:30410667
wslgetstarted:30449410
vsclayoutctrt:30451275
dsvsc009:30452663
pythonvspyt640:30450904
vscscmwlcmt:30438805
cppdebug:30451566
pynewfile477:30450038
```
</details>
<!-- generated by issue reporter -->
|
process
|
terminal does not load when multiple folders open in the workspace issue type bug i m using vscode with when i have a workspace that has more than one folder i am unable to open a terminal no errors just an empty panel closing out the additional folders until there is one then the terminal loads attempting to spawn an additional terminal with does not work same behavior vs code version code os version windows nt restricted mode no remote os version linux microsoft standard system info item value cpus intel r core tm cpu x gpu status canvas enabled gpu compositing enabled multiple raster threads enabled on oop rasterization enabled opengl enabled on rasterization enabled skia renderer enabled on video decode enabled vulkan disabled off webgl enabled enabled load avg undefined memory system free process argv crash reporter id screen reader no vm item value remote wsl ubuntu os linux microsoft standard cpus intel r core tm cpu x memory system free vm extensions extension author truncated version dotenv mik jupyter keymap ms remote containers ms remote ssh ms remote ssh edit ms remote wsl ms rewrap stk vscode icons vsc gitlens eam gc excelviewer gra vscode kubernetes tools ms python ms vscode pylance ms jupyter ms jupyter keymap ms jupyter renderers ms ansible red analytics red java red vscode commons red vscode xml red vscode yaml red code spell checker str shellcheck tim vscodeintellicode vis vscode java debug vsc vscode java dependency vsc vscode java pack vsc vscode java test vsc vscode maven vsc markdown all in one yzh a b experiments pythontb pythonptprofiler vscoreces pythondataviewer wslgetstarted vsclayoutctrt vscscmwlcmt cppdebug
| 1
|
156,238
| 24,586,576,273
|
IssuesEvent
|
2022-10-13 20:18:38
|
kubeshop/tracetest
|
https://api.github.com/repos/kubeshop/tracetest
|
opened
|
Add suggestions for the test spec selector
|
design frontend
|
**Context:** as part of our effort to teach users how to interact with our [Selector Language](https://docs.tracetest.io/advanced-selectors/), we want to provide a list of `suggestions` based on the currently selected span that helps users understand how queries are built and illustrates common query operations.
**AC1:**
As a user interacting with the Create Test Spec form,
I want to see a list of `suggestions` for the selector query based on the selected span,
so I can easily apply and understand how to build queries.
**AC2:**
As a user interacting with the list of suggestions,
I want to click on one suggestion,
so I can apply the suggested query to the test spec.
**Proposed Rules** (sketched as illustrative queries below):
- All spans (`empty selector`)
- All spans by `type`
- All spans by `service`
- All spans by `name`
- First span by `type`
- Last span by `type`
- All `type` spans that are children of parent span (only if span has a parent, it's not the root span)
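As a sketch of how that rule set could be computed from the selected span (Python pseudocode; the attribute names and the exact Selector Language syntax below are assumptions to be checked against the linked docs):
```python
def build_suggestions(span, is_root_span):
    """Derive selector-query suggestions from the currently selected span.

    `span` is assumed to expose `type`, `service`, `name`, and `parent`;
    the query strings only illustrate the proposed rules, not the exact
    Selector Language grammar.
    """
    type_query = f'span[tracetest.span.type="{span.type}"]'
    suggestions = [
        ("All spans", ""),  # empty selector
        ("All spans by type", type_query),
        ("All spans by service", f'span[service.name="{span.service}"]'),
        ("All spans by name", f'span[name="{span.name}"]'),
        ("First span by type", type_query + ":first"),
        ("Last span by type", type_query + ":last"),
    ]
    if not is_root_span:  # only when the span has a parent
        parent_query = f'span[tracetest.span.type="{span.parent.type}"]'
        suggestions.append(("Children of parent by type", f"{parent_query} {type_query}"))
    return suggestions
```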
|
1.0
|
Add suggestions for the test spec selector - **Context:** as part of our effort to teach users how to interact with our [Selector Language](https://docs.tracetest.io/advanced-selectors/), we want to provide a list of `suggestions` based on the currently selected span that helps users understand how queries are built and illustrates common query operations.
**AC1:**
As a user interacting with the Create Test Spec form,
I want to see a list of `suggestions` for the selector query based on the selected span,
so I can easily apply and understand how to build queries.
**AC2:**
As a user interacting with the list of suggestions,
I want to click on one suggestion,
so I can apply the suggested query to the test spec.
**Proposed Rules:**
- All spans (`empty selector`)
- All spans by `type`
- All spans by `service`
- All spans by `name`
- First span by `type`
- Last span by `type`
- All `type` spans that are children of parent span (only if span has a parent, it's not the root span)
|
non_process
|
add suggestions for the test spec selector context as part of the efforts of teaching users how to interact with our we want to provide a list of suggestions based on the currently selected span that will help users understand how queries are built and illustrate common query operations as a user interacting with the create test spec form i want to see a list of suggestions for the selector query based on the selected span so i can easily apply and understand how to build queries as a user interacting with the list of suggestions i want to click on one suggestion so i can apply the suggested query to the test spec proposed rules all spans empty selector all spans by type all spans by service all spans by name first span by type last span by type all type spans that are children of parent span only if span has a parent it s not the root span
| 0
|
11,111
| 13,957,680,851
|
IssuesEvent
|
2020-10-24 08:07:14
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
DE: request for a new harvesting
|
DE - Germany Geoportal Harvesting process
|
Dear Geoportal Helpdesk,
As mentioned in Robert's mail from 2020/03/02, we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish the records for us in the Geoportal harvesting "sandbox", please.
Thanks in advance and best regards,
Sara Biesel (on behalf of SDI Germany)
|
1.0
|
DE: request for a new harvesting - Dear Geoportal Helpdesk,
As mentioned in Robert's mail from 2020/03/02, we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish the records for us in the Geoportal harvesting "sandbox", please.
Thanks in advance and best regards,
Sara Biesel (on behalf of SDI Germany)
|
process
|
de request for a new harvesting dear geoportal helpdesk as mentioned in roberts mail from we would like to initiate a new push of our metadata records to the eu geoportal for this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the geoportal harvesting quot sandbox quot please thanks in advance and best regards sara biesel on behalf of sdi germany
| 1
|
213,666
| 16,531,711,161
|
IssuesEvent
|
2021-05-27 07:02:22
|
chunglt1911/TestNotion
|
https://api.github.com/repos/chunglt1911/TestNotion
|
opened
|
Preview screen
|
documentation
|
* **Screen name**: Preview
* **Author**: Chung
* **Updated**: 2021.05.26
* **Category**: Customer
* **Figma**: [Link](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=2%3A0)
1. Feature details
| Design | Description | API |
--- | --- | --- |
| Permission approval popup <br/>  | Configure microphone and camera permissions <br/>The default camera video is hidden<br/><br/><br/>Show the initial permission confirmation dialog<br/> - Clicking ```OK``` shows the video <br/>- Clicking ```Don't allow``` closes the dialog <br/><br/>From the second time onward, the camera permission dialog is not shown<br/>(camera settings can be changed from the browser settings)<br/>| |
|  | Camera video hidden <br/> ③ Click to move to the access settings screen| --- |
2. Figma for each display pattern
* [Initial display](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=546%3A268)
* [Camera not allowed](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=2%3A0)
3. Field definitions
| No | item_JP | item_EN | Type | Required | Validation | Note |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Video | --- | video frame | --- | --- | --- |
| 2 | Description | --- | label | --- | --- | --- |
| 3 | Access permission settings | --- | button | --- | --- | --- |
|
1.0
|
Preview screen - * **Screen name**: Preview
* **Author**: Chung
* **Updated**: 2021.05.26
* **Category**: Customer
* **Figma**: [Link](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=2%3A0)
1. Feature details
| Design | Description | API |
--- | --- | --- |
| Permission approval popup <br/>  | Configure microphone and camera permissions <br/>The default camera video is hidden<br/><br/><br/>Show the initial permission confirmation dialog<br/> - Clicking ```OK``` shows the video <br/>- Clicking ```Don't allow``` closes the dialog <br/><br/>From the second time onward, the camera permission dialog is not shown<br/>(camera settings can be changed from the browser settings)<br/>| |
|  | Camera video hidden <br/> ③ Click to move to the access settings screen| --- |
2. Figma for each display pattern
* [Initial display](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=546%3A268)
* [Camera not allowed](https://www.figma.com/file/krfSoWUOT6Uh7FABt5Zm3q/prrrr?node-id=2%3A0)
3. Field definitions
| No | item_JP | item_EN | Type | Required | Validation | Note |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Video | --- | video frame | --- | --- | --- |
| 2 | Description | --- | label | --- | --- | --- |
| 3 | Access permission settings | --- | button | --- | --- | --- |
|
non_process
|
preview screen screen name : preview author : chung updated : category : customer figma feature details design description api permission approval popup configure microphone and camera permissions the default camera video is hidden show the initial permission confirmation dialog clicking ok shows the video clicking don t allow closes the dialog from the second time onward the camera permission dialog is not shown camera settings can be changed from the browser settings camera video hidden ③ click to move to the access settings screen figma for each display pattern field definitions no item jp item en type required validation note video video frame description label access permission settings button
| 0
|
191,584
| 6,835,348,098
|
IssuesEvent
|
2017-11-10 00:39:07
|
sys-bio/tellurium
|
https://api.github.com/repos/sys-bio/tellurium
|
closed
|
Ensure Antimony SBO converter accepts just numbers
|
priority
|
e.g. both should be valid:
```
cell.sboTerm = SBO:0000290;
cell.sboTerm = 290;
```
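A minimal sketch of accepting both spellings, assuming SBO identifiers are zero-padded to seven digits as in the example above (illustrative only, not Antimony's actual converter code):
```python
def normalize_sbo_term(value):
    """Accept either the full form 'SBO:0000290' or a bare number
    like 290 and return the canonical zero-padded identifier."""
    if isinstance(value, int):
        return f"SBO:{value:07d}"
    if isinstance(value, str) and value.isdigit():
        return f"SBO:{int(value):07d}"
    return value  # assume it is already in 'SBO:NNNNNNN' form

assert normalize_sbo_term(290) == "SBO:0000290"
assert normalize_sbo_term("SBO:0000290") == "SBO:0000290"
```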
|
1.0
|
Ensure Antimony SBO converter accepts just numbers - e.g. both should be valid:
```
cell.sboTerm = SBO:0000290;
cell.sboTerm = 290;
```
|
non_process
|
ensure antimony sbo converter accepts just numbers e g both should be valid cell sboterm sbo cell sboterm
| 0
|
16,714
| 21,872,698,108
|
IssuesEvent
|
2022-05-19 07:20:58
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
opened
|
Explicitly handle duplicated task headers
|
kind/toil area/reliability team/process-automation
|
**Description**
When deploying a model where a service task has more than one custom header with the same key, an error is raised and the deploy fails.
The error message returned on deploy is pretty clear:

But the error is not handled explicitly:
https://github.com/camunda/zeebe/blob/2c793910c4ed5a996c40f0b1b52b346880edb551/engine/src/main/java/io/camunda/zeebe/engine/processing/deployment/model/transformer/zeebe/TaskHeadersTransformer.java#L38-L41
We should handle this a bit more gracefully so that the error does not show up in our error reporting tool: https://console.cloud.google.com/errors/detail/CPr11a6I_KbY6gE;service=zeebe;time=P7D?project=camunda-cloud-240911
Related discussion on Slack: https://camunda.slack.com/archives/CSQ2E3BT4/p1652943416460409
BPMN file to reproduce: [duplicatetaskheaders.zip](https://github.com/camunda/zeebe/files/8726153/duplicatetaskheaders.zip)
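For illustration, a minimal sketch of the graceful-handling idea (written in Python for brevity; the function and names are illustrative, not the engine's actual transformer API): collect the headers and reject duplicate keys with a deliberate, user-facing rejection instead of an unchecked exception.
```python
def collect_task_headers(headers):
    """Build a dict from (key, value) custom headers, rejecting
    duplicate keys with an explicit validation error."""
    collected = {}
    for key, value in headers:
        if key in collected:
            # Surface this as an expected deployment rejection rather
            # than an unhandled exception in the error reporting tool.
            raise ValueError(f"duplicate custom header key '{key}' on service task")
        collected[key] = value
    return collected
```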
|
1.0
|
Explicitly handle duplicated task headers - **Description**
When deploying a model where a service task has more than one custom header with the same key, an error is raised and the deploy fails.
The error message returned on deploy is pretty clear:

But the error is not handled explicitly:
https://github.com/camunda/zeebe/blob/2c793910c4ed5a996c40f0b1b52b346880edb551/engine/src/main/java/io/camunda/zeebe/engine/processing/deployment/model/transformer/zeebe/TaskHeadersTransformer.java#L38-L41
We should handle this a bit more gracefully so that the error does not show up in our error reporting tool: https://console.cloud.google.com/errors/detail/CPr11a6I_KbY6gE;service=zeebe;time=P7D?project=camunda-cloud-240911
Related discussion on Slack: https://camunda.slack.com/archives/CSQ2E3BT4/p1652943416460409
BPMN file to reproduce: [duplicatetaskheaders.zip](https://github.com/camunda/zeebe/files/8726153/duplicatetaskheaders.zip)
|
process
|
explicitly handle duplicated task headers description when deploying a model where a service task has more than one custom header with the same key an error is raised and the deploy fails the error message returned on deploy is pretty clear but the error is not handled explicitly we should handle this a bit more gracefully so that the error does not show up in our error reporting tool related discussion on slack bpmn file to reproduce
| 1
|
3,833
| 6,802,431,592
|
IssuesEvent
|
2017-11-02 20:11:06
|
gratipay/inside.gratipay.com
|
https://api.github.com/repos/gratipay/inside.gratipay.com
|
closed
|
appraise our CMMI maturity level
|
Core Governance & Process
|
I noticed a quick [appraisal](https://www.loomio.org/d/j5yu8acP/does-loomio-provide-any-reliable-support) of Loomio/Enspiral as having CMMI Level 1 (cf. #421), and it got me thinking: what's _our_ CMMI maturity level?
> CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization.
https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration
> An organization cannot be certified in CMMI; instead, an organization is _appraised_. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1-5) or a capability level achievement profile.
https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration#Appraisal

|
1.0
|
appraise our CMMI maturity level - I noticed a quick [appraisal](https://www.loomio.org/d/j5yu8acP/does-loomio-provide-any-reliable-support) of Loomio/Enspiral as having CMMI Level 1 (cf. #421), and it got me thinking: what's _our_ CMMI maturity level?
> CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization.
https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration
> An organization cannot be certified in CMMI; instead, an organization is _appraised_. Depending on the type of appraisal, the organization can be awarded a maturity level rating (1-5) or a capability level achievement profile.
https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration#Appraisal

|
process
|
appraise our cmmi maturity level i noticed a quick of loomio enspiral as having cmmi level cf and it got me thinking what s our cmmi maturity level cmmi models provide guidance for developing or improving processes that meet the business goals of an organization a cmmi model may also be used as a framework for appraising the process maturity of the organization an organization cannot be certified in cmmi instead an organization is appraised depending on the type of appraisal the organization can be awarded a maturity level rating or a capability level achievement profile
| 1
|
65,475
| 3,228,460,981
|
IssuesEvent
|
2015-10-12 02:28:29
|
cs2103aug2015-t11-4j/main
|
https://api.github.com/repos/cs2103aug2015-t11-4j/main
|
closed
|
dataDisplay Class for display all items
|
:logic priority.high
|
dataDisplay.displayAll() to show everything on the UI screen
|
1.0
|
dataDisplay Class for display all items - dataDisplay.displayAll() to show everything on the UI screen
|
non_process
|
datadisplay class for display all items datadisplay displayall to show everything on the ui screen
| 0
|
10,618
| 13,439,078,492
|
IssuesEvent
|
2020-09-07 19:57:55
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
opened
|
New `replace` remap function
|
domain: mapping domain: processing type: feature
|
The `replace` remap function replaces all occurrences of a string.
## Examples
Given the following event:
```js
{
  "message": "I like apples and bananas"
}
```
### String literal
And the following remap instruction set:
```
.message = replace(.message, "a", "o")
```
Would produce:
```js
{
  "message": "I like opples ond bononos"
}
```
### Regex
And the following remap instruction set:
```
.message = replace(.message, /a/, "o")
```
Would produce:
```js
{
  "message": "I like opples ond bononos"
}
```
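For reference, a quick Python sketch of the same semantics as a sanity check (both forms replace every occurrence, so the `a` in `and` is replaced as well):
```python
import re

message = "I like apples and bananas"

# String-literal form: replace every occurrence of the substring.
print(message.replace("a", "o"))   # I like opples ond bononos

# Regex form: re.sub substitutes every match of the pattern.
print(re.sub(r"a", "o", message))  # I like opples ond bononos
```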
|
1.0
|
New `replace` remap function - The `replace` remap function replaces all occurrences of a string.
## Examples
Given the following event:
```js
{
  "message": "I like apples and bananas"
}
```
### String literal
And the following remap instruction set:
```
.message = replace(.message, "a", "o")
```
Would produce:
```js
{
  "message": "I like opples ond bononos"
}
```
### Regex
And the following remap instruction set:
```
.message = replace(.message, /a/, "o")
```
Would produce:
```js
{
  "message": "I like opples ond bononos"
}
```
|
process
|
new replace remap function the replace remap function replaces all occurrences of a string examples given the following event js message i like apples and bananas string literal and the following remap instruction set message replace message a o would produce js message i like opples ond bononos regex and the following remap instruction set message replace message a o would produce js message i like opples ond bononos
| 1
|
9,402
| 12,400,940,550
|
IssuesEvent
|
2020-05-21 08:52:52
|
dotnetcore/Home
|
https://api.github.com/repos/dotnetcore/Home
|
closed
|
NCaller: application to join NCC
|
Ap: Process-Termination
|
## NCaller, a high-performance dynamic invocation library
<br/>
- Project background:<br/>
This project builds on the NCC project Natasha to provide high-performance dynamic invocation. As the Core framework has evolved, both `dynamic` and Emit have come close to native performance, but `dynamic` remains incomplete when invoking static classes, dynamically generated static classes, and dynamically generated dynamic classes. Against this background, the author builds on Natasha and, while keeping the rich set of operations, heavily optimizes the dynamic structures so that their cost falls between native calls and `dynamic`; by informal measurement, NCaller's overhead is only 2-3x that of a native call.
- Related issue: https://github.com/dotnet/corefx/issues/39565
<br/>
<br/>
- Project overview: <br/>
This project is a derivative of [Natasha](https://github.com/dotnetcore/Natasha). By automatically building high-performance operation proxy classes at runtime, it provides solid, complete, high-performance operations for ordinary classes, static classes, dynamic classes, dynamic classes nested inside dynamic classes, and dynamically generated static classes. If neither reflection nor `dynamic` can satisfy demanding requirements, this library is a good alternative. Project address: https://github.com/night-moon-studio/NCaller
<br/>
<br/>
- Project information: <br/>
- [x] MIT license
- [x] Test coverage checks
- [x] Continuous builds
- [ ] English documentation
- [ ] Detailed wiki
- [x] OnlineChart
<br/>
<br/>
- Roadmap: <br/>
Because this library focuses on operations and performance, user feedback will be tracked continuously, and finer-grained benchmarks of NCaller will be run once the new official benchmark guidelines are released.
- Newly published official benchmark guidelines: [link](https://github.com/dotnet/performance/blob/master/docs/microbenchmark-design-guidelines.md)
|
1.0
|
NCaller: application to join NCC - ## NCaller, a high-performance dynamic invocation library
<br/>
- Project background:<br/>
This project builds on the NCC project Natasha to provide high-performance dynamic invocation. As the Core framework has evolved, both `dynamic` and Emit have come close to native performance, but `dynamic` remains incomplete when invoking static classes, dynamically generated static classes, and dynamically generated dynamic classes. Against this background, the author builds on Natasha and, while keeping the rich set of operations, heavily optimizes the dynamic structures so that their cost falls between native calls and `dynamic`; by informal measurement, NCaller's overhead is only 2-3x that of a native call.
- Related issue: https://github.com/dotnet/corefx/issues/39565
<br/>
<br/>
- Project overview: <br/>
This project is a derivative of [Natasha](https://github.com/dotnetcore/Natasha). By automatically building high-performance operation proxy classes at runtime, it provides solid, complete, high-performance operations for ordinary classes, static classes, dynamic classes, dynamic classes nested inside dynamic classes, and dynamically generated static classes. If neither reflection nor `dynamic` can satisfy demanding requirements, this library is a good alternative. Project address: https://github.com/night-moon-studio/NCaller
<br/>
<br/>
- Project information: <br/>
- [x] MIT license
- [x] Test coverage checks
- [x] Continuous builds
- [ ] English documentation
- [ ] Detailed wiki
- [x] OnlineChart
<br/>
<br/>
- Roadmap: <br/>
Because this library focuses on operations and performance, user feedback will be tracked continuously, and finer-grained benchmarks of NCaller will be run once the new official benchmark guidelines are released.
- Newly published official benchmark guidelines: [link](https://github.com/dotnet/performance/blob/master/docs/microbenchmark-design-guidelines.md)
|
process
|
ncaller application to join ncc ncaller a high performance dynamic invocation library project background this project builds on the ncc project natasha to provide high performance dynamic invocation as the core framework has evolved both dynamic and emit have come close to native performance but dynamic remains incomplete when invoking static classes dynamically generated static classes and dynamically generated dynamic classes against this background the author builds on natasha and while keeping the rich set of operations heavily optimizes the dynamic structures so that their cost falls between native calls and dynamic by informal measurement ncaller s overhead is only a few times that of a native call related issue project overview this project is a derivative of project address project information mit license test coverage checks continuous builds english documentation detailed wiki onlinechart roadmap because this library focuses on operations and performance user feedback will be tracked continuously and finer grained benchmarks of ncaller will be run once the new official benchmark guidelines are released newly published official benchmark guidelines
| 1
|
201,667
| 15,806,782,699
|
IssuesEvent
|
2021-04-04 07:13:29
|
AY2021S2-CS2103-W16-1/tp
|
https://api.github.com/repos/AY2021S2-CS2103-W16-1/tp
|
closed
|
[PE-D] Docs: Use of technical code samples
|
documentation
|
In the `summary` command:

The inclusion of examples using code samples may be difficult for the target audience to understand if they have no knowledge of the code, or are not technical.
`i.e., completionStatus == INCOMPLETE && deadline < current date`
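For example, the same condition could be phrased for a non-technical reader as: tasks that are not yet completed and whose deadline has already passed.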
<!--session: 1617430717131-26c66e0a-78d6-41a2-abd3-095196388af5-->
-------------
Labels: `severity.VeryLow` `type.DocumentationBug`
original: yungweezy/ped#5
|
1.0
|
[PE-D] Docs: Use of technical code samples - In the `summary` command:

The inclusion of examples using code samples may be difficult for the target audience to understand if they have no knowledge of the code, or are not technical.
`i.e., completionStatus == INCOMPLETE && deadline < current date`
<!--session: 1617430717131-26c66e0a-78d6-41a2-abd3-095196388af5-->
-------------
Labels: `severity.VeryLow` `type.DocumentationBug`
original: yungweezy/ped#5
|
non_process
|
docs use of technical code samples in the summary command the inclusion of examples using code samples may be difficult for the target audience to understand if they have no knowledge of the code or are not technical i e completionstatus incomplete deadline current date labels severity verylow type documentationbug original yungweezy ped
| 0
|
14,426
| 17,480,661,332
|
IssuesEvent
|
2021-08-09 01:13:40
|
googleapis/python-spanner
|
https://api.github.com/repos/googleapis/python-spanner
|
closed
|
samples.samples.autocommit_test: test_enable_autocommit_mode failed
|
api: spanner type: process samples flakybot: issue flakybot: flaky
|
Note: #282 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 2487800e31842a44dcc37937c325e130c8c926b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e2951c84-6fe7-446c-87e7-3b97256bee0b), [Sponge](http://sponge2/e2951c84-6fe7-446c-87e7-3b97256bee0b)
status: failed
<details><summary>Test output</summary><br><pre>target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f8555698d08>
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
deadline = 120, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
> return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _done_or_raise(self, retry=DEFAULT_RETRY):
"""Check if the future is done and raise if it's not."""
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
if not self.done(**kwargs):
> raise _OperationNotComplete()
E google.api_core.future.polling._OperationNotComplete
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:87: _OperationNotComplete
The above exception was the direct cause of the following exception:
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
timeout = 120, retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
> retry_(self._done_or_raise)(**kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:108:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (), kwargs = {}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
@general_helpers.wraps(func)
def retry_wrapped_func(*args, **kwargs):
"""A wrapper that calls target function with retry."""
target = functools.partial(func, *args, **kwargs)
sleep_generator = exponential_sleep_generator(
self._initial, self._maximum, multiplier=self._multiplier
)
return retry_target(
target,
self._predicate,
sleep_generator,
self._deadline,
> on_error=on_error,
)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:291:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f8555698d08>
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
deadline = 120, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None:
if deadline_datetime <= now:
six.raise_from(
exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
),
> last_exc,
)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:211:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None, from_value = _OperationNotComplete()
> ???
E google.api_core.exceptions.RetryError: Deadline of 120.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>), last exception:
<string>:3: RetryError
During handling of the above exception, another exception occurred:
@pytest.fixture(scope="module")
def spanner_instance():
spanner_client = spanner.Client()
config_name = f"{spanner_client.project_name}/instanceConfigs/regional-us-central1"
instance = spanner_client.instance(INSTANCE_ID, config_name)
op = instance.create()
> op.result(120) # block until completion
autocommit_test.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:130: in result
self._blocking_poll(timeout=timeout, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
timeout = 120, retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
retry_(self._done_or_raise)(**kwargs)
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
> "Operation did not complete within the designated " "timeout."
)
E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:111: TimeoutError</pre></details>
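One mitigation sketch, assuming the failure is simply slow instance creation rather than a hard error: give `op.result()` more headroom than 120 seconds and tear the instance down afterwards (illustrative only; `INSTANCE_ID` is defined elsewhere in the sample, and 600s is an arbitrary larger deadline):
```python
import pytest
from google.cloud import spanner

@pytest.fixture(scope="module")
def spanner_instance():
    spanner_client = spanner.Client()
    config_name = f"{spanner_client.project_name}/instanceConfigs/regional-us-central1"
    instance = spanner_client.instance(INSTANCE_ID, config_name)
    op = instance.create()
    op.result(600)  # block until creation completes, with extra headroom
    yield instance
    instance.delete()  # clean up so reruns do not collide
```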
|
1.0
|
samples.samples.autocommit_test: test_enable_autocommit_mode failed - Note: #282 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: 2487800e31842a44dcc37937c325e130c8c926b0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/e2951c84-6fe7-446c-87e7-3b97256bee0b), [Sponge](http://sponge2/e2951c84-6fe7-446c-87e7-3b97256bee0b)
status: failed
<details><summary>Test output</summary><br><pre>target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f8555698d08>
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
deadline = 120, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
> return target()
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _done_or_raise(self, retry=DEFAULT_RETRY):
"""Check if the future is done and raise if it's not."""
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
if not self.done(**kwargs):
> raise _OperationNotComplete()
E google.api_core.future.polling._OperationNotComplete
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:87: _OperationNotComplete
The above exception was the direct cause of the following exception:
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
timeout = 120, retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
> retry_(self._done_or_raise)(**kwargs)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:108:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (), kwargs = {}
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
@general_helpers.wraps(func)
def retry_wrapped_func(*args, **kwargs):
"""A wrapper that calls target function with retry."""
target = functools.partial(func, *args, **kwargs)
sleep_generator = exponential_sleep_generator(
self._initial, self._maximum, multiplier=self._multiplier
)
return retry_target(
target,
self._predicate,
sleep_generator,
self._deadline,
> on_error=on_error,
)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:291:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>)
predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f8555698d08>
sleep_generator = <generator object exponential_sleep_generator at 0x7f8555216a98>
deadline = 120, on_error = None
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target. The last sleep
period is shortened as necessary, so that the last retry runs at
``deadline`` (and not considerably beyond it).
on_error (Callable[Exception]): A function to call while processing a
retryable exception. Any error raised by this function will *not*
be caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises a method that isn't retryable.
"""
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None:
if deadline_datetime <= now:
six.raise_from(
exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
),
> last_exc,
)
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:211:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None, from_value = _OperationNotComplete()
> ???
E google.api_core.exceptions.RetryError: Deadline of 120.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f85552f0780>>), last exception:
<string>:3: RetryError
During handling of the above exception, another exception occurred:
@pytest.fixture(scope="module")
def spanner_instance():
spanner_client = spanner.Client()
config_name = f"{spanner_client.project_name}/instanceConfigs/regional-us-central1"
instance = spanner_client.instance(INSTANCE_ID, config_name)
op = instance.create()
> op.result(120) # block until completion
autocommit_test.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:130: in result
self._blocking_poll(timeout=timeout, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.api_core.operation.Operation object at 0x7f85552f0780>
timeout = 120, retry = <google.api_core.retry.Retry object at 0x7f85556ae7b8>
def _blocking_poll(self, timeout=None, retry=DEFAULT_RETRY):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
"""
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
kwargs = {} if retry is DEFAULT_RETRY else {"retry": retry}
retry_(self._done_or_raise)(**kwargs)
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
> "Operation did not complete within the designated " "timeout."
)
E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout.
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/future/polling.py:111: TimeoutError</pre></details>
|
process
|
samples samples autocommit test test enable autocommit mode failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target the last sleep period is shortened as necessary so that the last retry runs at deadline and not considerably beyond it on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target nox py lib site packages google api core retry py self retry def done or raise self retry default retry check if the future is done and raise if it s not kwargs if retry is default retry else retry retry if not self done kwargs raise operationnotcomplete e google api core future polling operationnotcomplete nox py lib site packages google api core future polling py operationnotcomplete the above exception was the direct cause of the following exception self timeout retry def blocking poll self timeout none retry default retry poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try kwargs if retry is default retry else retry retry retry self done or raise kwargs nox py lib site packages google api core future polling py args kwargs target functools partial sleep generator general helpers wraps func def retry wrapped func args kwargs a wrapper that calls target function with retry target functools partial func args kwargs sleep generator exponential sleep generator self initial self maximum multiplier self multiplier return retry target target self predicate sleep generator self deadline on error on error nox py lib site packages google api core retry py target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry 
or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target the last sleep period is shortened as necessary so that the last retry runs at deadline and not considerably beyond it on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target pylint disable broad except this function explicitly must deal with broad exceptions except exception as exc if not predicate exc raise last exc exc if on error is not none on error exc now datetime helpers utcnow if deadline datetime is not none if deadline datetime now six raise from exceptions retryerror deadline of s exceeded while calling format deadline target last exc last exc nox py lib site packages google api core retry py value none from value operationnotcomplete e google api core exceptions retryerror deadline of exceeded while calling functools partial last exception retryerror during handling of the above exception another exception occurred pytest fixture scope module def spanner instance spanner client spanner client config name f spanner client project name instanceconfigs regional us instance spanner client instance instance id config name op instance create op result block until completion autocommit test py nox py lib site packages google api core future polling py in result self blocking poll timeout timeout kwargs self timeout retry def blocking poll self timeout none retry default retry poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try kwargs if retry is default retry else retry retry retry self done or raise kwargs except exceptions retryerror raise concurrent futures timeouterror operation did not complete within the designated timeout e concurrent futures base timeouterror operation did not complete within the designated timeout nox py lib site packages google api core future polling py timeouterror
| 1
|
70,544
| 8,558,022,769
|
IssuesEvent
|
2018-11-08 17:04:34
|
wordpress-mobile/WordPress-Android
|
https://api.github.com/repos/wordpress-mobile/WordPress-Android
|
closed
|
Login rework: re-instate the secondary help screen with shortcuts to Helpshift FAQ
|
Login [Status] Needs Design Review [Status] Stale [Type] Enhancement
|
Before the new login flows design, we had an error screen coming up after an error happened a few times, covering the whole screen, showing a message and offering buttons to Helpshift's FAQ section.
Here's a screenshot of the old screen:

The "Tell me more" button opens directly to a specific FAQ section in Helpshift.
Maybe we want to re-instate some form of that.
|
1.0
|
Login rework: re-instate the secondary help screen with shortcuts to Helpshift FAQ - Before the new login flows design, we had an error screen coming up after an error happened a few times, covering the whole screen, showing a message and offering buttons to Helpshift's FAQ section.
Here's a screenshot of the old screen:

The "Tell me more" button opens directly to a specific FAQ section in Helpshift.
Maybe we want to re-instate some form of that.
|
non_process
|
login rework re instate the secondary help screen with shortcuts to helpshift faq before the new login flows design we had an error screen coming up after an error happened a few times covering the whole screen showing a message and offering buttons to helpshift s faq section here s a screenshot of the old screen the tell me more button opens directly to a specific faq section in helpshift maybe we want to re instate some form of that
| 0
|
16,247
| 20,798,539,356
|
IssuesEvent
|
2022-03-17 11:42:56
|
ltechkorea/inference_results_v1.0
|
https://api.github.com/repos/ltechkorea/inference_results_v1.0
|
closed
|
[ BUG ] BERT: `generate_engines` failed
|
bug natural language processing
|
<!--
Please add the matching category label.
-->
## **Describe the bug**
> TensorRT engine build fails
- `make generate_engines` failed.
### **Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
```
time make launch_docker DOCKER_COMMAND='make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=Offline --config_ver=default --test_mode=PerformanceOnly --verbose --fast"'
Launching Docker session
docker run --gpus "device=7" --rm -t -w /work \
-v /home/jay/work/inference-v1.0/closed/LTechKorea:/work -v /home/jay:/mnt//home/jay \
--cap-add SYS_ADMIN --cap-add SYS_TIME \
-e NVIDIA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
--shm-size=32gb \
-v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro \
--security-opt apparmor=unconfined --security-opt seccomp=unconfined \
--name mlperf-inference-jay -h mlperf-inference-jay --add-host mlperf-inference-jay:127.0.0.1 \
--user 1001:1001 --net host --device /dev/fuse \
-v /opt/data/scratch.mlperf_inference:/opt/data/scratch.mlperf_inference -v /opt/data/Dataset:/opt/data/Dataset \
-e MLPERF_SCRATCH_PATH=/opt/data/scratch.mlperf_inference \
-e HOST_HOSTNAME=ltech-gpu10 \
-e LD_LIBRARY_PATH=:/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:/home/jay/work/inference-v1.0/closed/LTechKorea/build/inference/loadgen/build:/usr/local/cuda-11.1/targets/x86_64-linux/lib/ \
\
mlperf-inference:jay make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=Offline --config_ver=default --test_mode=PerformanceOnly --verbose --fast"
[2021-07-22 15:03:49,274 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[2021-07-22 15:03:50,470 main.py:701 INFO] Detected System ID: V100S-PCIE-32GBx1
[2021-07-22 15:03:50,478 main.py:529 INFO] Using config files: configs/bert/Offline/config.json
[2021-07-22 15:03:50,478 __init__.py:341 INFO] Parsing config file configs/bert/Offline/config.json ...
[2021-07-22 15:03:50,479 main.py:542 INFO] Processing config "V100S-PCIE-32GBx1_bert_Offline"
[2021-07-22 15:03:50,547 main.py:82 INFO] Building engines for bert benchmark in Offline scenario...
[2021-07-22 15:03:50,548 main.py:102 INFO] Building GPU engine for V100S-PCIE-32GBx1_bert_Offline
[2021-07-22 15:03:58,797 bert_var_seqlen.py:63 INFO] Using workspace size: 7,516,192,768
[2021-07-22 15:03:58,797 builder.py:55 INFO] ========= BenchmarkBuilder Arguments =========
[2021-07-22 15:03:58,797 builder.py:57 INFO] coalesced_tensor=True
[2021-07-22 15:03:58,797 builder.py:57 INFO] enable_interleaved=False
[2021-07-22 15:03:58,797 builder.py:57 INFO] input_dtype=int32
[2021-07-22 15:03:58,797 builder.py:57 INFO] input_format=linear
[2021-07-22 15:03:58,797 builder.py:57 INFO] precision=int8
[2021-07-22 15:03:58,797 builder.py:57 INFO] tensor_path=${PREPROCESSED_DATA_DIR}/squad_tokenized/input_ids.npy,${PREPROCESSED_DATA_DIR}/squad_tokenized/segment_ids.npy,${PREPROCESSED_DATA_DIR}/squad_tokenized/input_mask.npy
[2021-07-22 15:03:58,797 builder.py:57 INFO] use_graphs=False
[2021-07-22 15:03:58,797 builder.py:57 INFO] use_small_tile_gemm_plugin=True
[2021-07-22 15:03:58,797 builder.py:57 INFO] config_ver=default
[2021-07-22 15:03:58,797 builder.py:57 INFO] gemm_plugin_fairshare_cache_size=120
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_batch_size=1024
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_copy_streams=2
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_inference_streams=2
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_offline_expected_qps=3400
[2021-07-22 15:03:58,797 builder.py:57 INFO] workspace_size=7516192768
[2021-07-22 15:03:58,797 builder.py:57 INFO] system_id=V100S-PCIE-32GBx1
[2021-07-22 15:03:58,797 builder.py:57 INFO] scenario=Offline
[2021-07-22 15:03:58,797 builder.py:57 INFO] benchmark=bert
[2021-07-22 15:03:58,797 builder.py:57 INFO] config_name=V100S-PCIE-32GBx1_bert_Offline
[2021-07-22 15:03:58,797 builder.py:57 INFO] accuracy_level=99%
[2021-07-22 15:03:58,797 builder.py:57 INFO] optimization_level=plugin-enabled
[2021-07-22 15:03:58,797 builder.py:57 INFO] inference_server=lwis
[2021-07-22 15:03:58,797 builder.py:57 INFO] system_name=None
[2021-07-22 15:03:58,797 builder.py:57 INFO] verbose=True
[2021-07-22 15:03:58,798 builder.py:57 INFO] batch_size=1024
[2021-07-22 15:03:58,798 builder.py:57 INFO] dla_core=None
[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchor_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::NMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Reorg_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Region_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Clip_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::LReLU_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PriorBox_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Normalize_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::RPROI_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::FlattenConcat_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::CropAndResize version 1
[TensorRT] VERBOSE: Registered plugin creator - ::DetectionLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Proposal version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ProposalLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ResizeNearest_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Split version 1
[TensorRT] VERBOSE: Registered plugin creator - ::SpecialSlice_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::InstanceNormalization_TRT version 1
[2021-07-22 15:04:12,771 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[TensorRT] VERBOSE: EmbLayerNormVarSeqlen createPlugin
Building bert_embeddings_layernorm_beta...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_layernorm_gamma...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_word_embeddings...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_token_type_embeddings...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_position_embeddings...
PluginFieldType is Float32
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
[TensorRT] VERBOSE: Setting dynamic range for (Unnamed Layer* 0) [PluginV2DynamicExt]_output_1 to [-1,1]
[TensorRT] VERBOSE: Setting dynamic range for (Unnamed Layer* 0) [PluginV2DynamicExt]_output_0 to [-2.49108,2.49108]
Replacing l0_fc_qkv with small-tile GEMM plugin, with fairshare cache size 120.
#assertionsrc/smallTileGEMMPlugin.cu,588
Traceback (most recent call last):
File "code/main.py", line 703, in <module>
main(main_args, system)
File "code/main.py", line 634, in main
launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
File "code/main.py", line 62, in launch_handle_generate_engine
raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
Makefile:613: recipe for target 'generate_engines' failed
make: *** [generate_engines] Error 1
make: *** [Makefile:357: launch_docker] Error 2
make launch_docker 0.18s user 15.18s system 20% cpu 1:14.02 total
```
## **Expected behavior**
> A clear and concise description of what you expected to happen.
- TensorRT engine builds successfully
## **Possible Solution**
1. 1st solution
2. 2nd solution
## **Additional context**
> Add any other context about the problem here.
- Additional info
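A hedged reading of the log above: the assertion (`#assertionsrc/smallTileGEMMPlugin.cu,588`) fires immediately after `l0_fc_qkv` is replaced by the Small-Tile GEMM plugin, so a plausible first experiment is to set `use_small_tile_gemm_plugin` to `false` for this system in `configs/bert/Offline/config.json` and rebuild. The field name is taken from the builder arguments in the log; this only narrows the search and is not a confirmed fix.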
|
1.0
|
[ BUG ] BERT: `generate_engines` failed - <!--
Please add the matching category label.
-->
## **Describe the bug**
> TensorRT engine build fails
- `make generate_engines` failed.
### **Screenshots or Logs**
If applicable, add screenshots to help explain your problem.
```
time make launch_docker DOCKER_COMMAND='make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=Offline --config_ver=default --test_mode=PerformanceOnly --verbose --fast"'
Launching Docker session
docker run --gpus "device=7" --rm -t -w /work \
-v /home/jay/work/inference-v1.0/closed/LTechKorea:/work -v /home/jay:/mnt//home/jay \
--cap-add SYS_ADMIN --cap-add SYS_TIME \
-e NVIDIA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
--shm-size=32gb \
-v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro \
--security-opt apparmor=unconfined --security-opt seccomp=unconfined \
--name mlperf-inference-jay -h mlperf-inference-jay --add-host mlperf-inference-jay:127.0.0.1 \
--user 1001:1001 --net host --device /dev/fuse \
-v /opt/data/scratch.mlperf_inference:/opt/data/scratch.mlperf_inference -v /opt/data/Dataset:/opt/data/Dataset \
-e MLPERF_SCRATCH_PATH=/opt/data/scratch.mlperf_inference \
-e HOST_HOSTNAME=ltech-gpu10 \
-e LD_LIBRARY_PATH=:/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:/home/jay/work/inference-v1.0/closed/LTechKorea/build/inference/loadgen/build:/usr/local/cuda-11.1/targets/x86_64-linux/lib/ \
\
mlperf-inference:jay make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=Offline --config_ver=default --test_mode=PerformanceOnly --verbose --fast"
[2021-07-22 15:03:49,274 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[2021-07-22 15:03:50,470 main.py:701 INFO] Detected System ID: V100S-PCIE-32GBx1
[2021-07-22 15:03:50,478 main.py:529 INFO] Using config files: configs/bert/Offline/config.json
[2021-07-22 15:03:50,478 __init__.py:341 INFO] Parsing config file configs/bert/Offline/config.json ...
[2021-07-22 15:03:50,479 main.py:542 INFO] Processing config "V100S-PCIE-32GBx1_bert_Offline"
[2021-07-22 15:03:50,547 main.py:82 INFO] Building engines for bert benchmark in Offline scenario...
[2021-07-22 15:03:50,548 main.py:102 INFO] Building GPU engine for V100S-PCIE-32GBx1_bert_Offline
[2021-07-22 15:03:58,797 bert_var_seqlen.py:63 INFO] Using workspace size: 7,516,192,768
[2021-07-22 15:03:58,797 builder.py:55 INFO] ========= BenchmarkBuilder Arguments =========
[2021-07-22 15:03:58,797 builder.py:57 INFO] coalesced_tensor=True
[2021-07-22 15:03:58,797 builder.py:57 INFO] enable_interleaved=False
[2021-07-22 15:03:58,797 builder.py:57 INFO] input_dtype=int32
[2021-07-22 15:03:58,797 builder.py:57 INFO] input_format=linear
[2021-07-22 15:03:58,797 builder.py:57 INFO] precision=int8
[2021-07-22 15:03:58,797 builder.py:57 INFO] tensor_path=${PREPROCESSED_DATA_DIR}/squad_tokenized/input_ids.npy,${PREPROCESSED_DATA_DIR}/squad_tokenized/segment_ids.npy,${PREPROCESSED_DATA_DIR}/squad_tokenized/input_mask.npy
[2021-07-22 15:03:58,797 builder.py:57 INFO] use_graphs=False
[2021-07-22 15:03:58,797 builder.py:57 INFO] use_small_tile_gemm_plugin=True
[2021-07-22 15:03:58,797 builder.py:57 INFO] config_ver=default
[2021-07-22 15:03:58,797 builder.py:57 INFO] gemm_plugin_fairshare_cache_size=120
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_batch_size=1024
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_copy_streams=2
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_inference_streams=2
[2021-07-22 15:03:58,797 builder.py:57 INFO] gpu_offline_expected_qps=3400
[2021-07-22 15:03:58,797 builder.py:57 INFO] workspace_size=7516192768
[2021-07-22 15:03:58,797 builder.py:57 INFO] system_id=V100S-PCIE-32GBx1
[2021-07-22 15:03:58,797 builder.py:57 INFO] scenario=Offline
[2021-07-22 15:03:58,797 builder.py:57 INFO] benchmark=bert
[2021-07-22 15:03:58,797 builder.py:57 INFO] config_name=V100S-PCIE-32GBx1_bert_Offline
[2021-07-22 15:03:58,797 builder.py:57 INFO] accuracy_level=99%
[2021-07-22 15:03:58,797 builder.py:57 INFO] optimization_level=plugin-enabled
[2021-07-22 15:03:58,797 builder.py:57 INFO] inference_server=lwis
[2021-07-22 15:03:58,797 builder.py:57 INFO] system_name=None
[2021-07-22 15:03:58,797 builder.py:57 INFO] verbose=True
[2021-07-22 15:03:58,798 builder.py:57 INFO] batch_size=1024
[2021-07-22 15:03:58,798 builder.py:57 INFO] dla_core=None
[TensorRT] VERBOSE: Registered plugin creator - ::GridAnchor_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::NMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Reorg_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Region_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Clip_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::LReLU_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PriorBox_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Normalize_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::RPROI_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMS_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::FlattenConcat_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::CropAndResize version 1
[TensorRT] VERBOSE: Registered plugin creator - ::DetectionLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Proposal version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ProposalLayer_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::ResizeNearest_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::Split version 1
[TensorRT] VERBOSE: Registered plugin creator - ::SpecialSlice_TRT version 1
[TensorRT] VERBOSE: Registered plugin creator - ::InstanceNormalization_TRT version 1
[2021-07-22 15:04:12,771 __init__.py:255 INFO] Running command: CUDA_VISIBILE_ORDER=PCI_BUS_ID nvidia-smi --query-gpu=gpu_name,pci.device_id,uuid --format=csv
[TensorRT] VERBOSE: EmbLayerNormVarSeqlen createPlugin
Building bert_embeddings_layernorm_beta...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_layernorm_gamma...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_word_embeddings...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_token_type_embeddings...
PluginFieldType is Float32
[TensorRT] VERBOSE: Building bert_embeddings_position_embeddings...
PluginFieldType is Float32
[TensorRT] WARNING: Tensor DataType is determined at build time for tensors not marked as input or output.
[TensorRT] VERBOSE: Setting dynamic range for (Unnamed Layer* 0) [PluginV2DynamicExt]_output_1 to [-1,1]
[TensorRT] VERBOSE: Setting dynamic range for (Unnamed Layer* 0) [PluginV2DynamicExt]_output_0 to [-2.49108,2.49108]
Replacing l0_fc_qkv with small-tile GEMM plugin, with fairshare cache size 120.
#assertionsrc/smallTileGEMMPlugin.cu,588
Traceback (most recent call last):
File "code/main.py", line 703, in <module>
main(main_args, system)
File "code/main.py", line 634, in main
launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
File "code/main.py", line 62, in launch_handle_generate_engine
raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
Makefile:613: recipe for target 'generate_engines' failed
make: *** [generate_engines] Error 1
make: *** [Makefile:357: launch_docker] Error 2
make launch_docker 0.18s user 15.18s system 20% cpu 1:14.02 total
```
## **Expected behavior**
> A clear and concise description of what you expected to happen.
- TensorRT engine builds successfully
## **Possible Solution**
1. 1st solution
2. 2nd solution
## **Additional context**
> Add any other context about the problem here.
- Additional information
- Additional information
|
process
|
bert generate engines failed label에 해당 카테고리 추가해 주세요 describe the bug tensorrt engine 빌드 실패 make generate engines failed screenshots or logs if applicable add screenshots to help explain your problem time make launch docker docker command make generate engines run args benchmarks bert scenarios offline config ver default test mode performanceonly verbose fast launching docker session docker run gpus device rm t w work v home jay work inference closed ltechkorea work v home jay mnt home jay cap add sys admin cap add sys time e nvidia visible devices shm size v etc timezone etc timezone ro v etc localtime etc localtime ro security opt apparmor unconfined security opt seccomp unconfined name mlperf inference jay h mlperf inference jay add host mlperf inference jay user net host device dev fuse v opt data scratch mlperf inference opt data scratch mlperf inference v opt data dataset opt data dataset e mlperf scratch path opt data scratch mlperf inference e host hostname ltech e ld library path usr local cuda usr lib linux gnu home jay work inference closed ltechkorea build inference loadgen build usr local cuda targets linux lib mlperf inference jay make generate engines run args benchmarks bert scenarios offline config ver default test mode performanceonly verbose fast running command cuda visibile order pci bus id nvidia smi query gpu gpu name pci device id uuid format csv detected system id pcie using config files configs bert offline config json parsing config file configs bert offline config json processing config pcie bert offline building engines for bert benchmark in offline scenario building gpu engine for pcie bert offline using workspace size benchmarkbuilder arguments coalesced tensor true enable interleaved false input dtype input format linear precision tensor path preprocessed data dir squad tokenized input ids npy preprocessed data dir squad tokenized segment ids npy preprocessed data dir squad tokenized input mask npy use graphs false use small tile gemm plugin true config ver default gemm plugin fairshare cache size gpu batch size gpu copy streams gpu inference streams gpu offline expected qps workspace size system id pcie scenario offline benchmark bert config name pcie bert offline accuracy level optimization level plugin enabled inference server lwis system name none verbose true batch size dla core none verbose registered plugin creator gridanchor trt version verbose registered plugin creator nms trt version verbose registered plugin creator reorg trt version verbose registered plugin creator region trt version verbose registered plugin creator clip trt version verbose registered plugin creator lrelu trt version verbose registered plugin creator priorbox trt version verbose registered plugin creator normalize trt version verbose registered plugin creator rproi trt version verbose registered plugin creator batchednms trt version verbose registered plugin creator batchednmsdynamic trt version verbose registered plugin creator flattenconcat trt version verbose registered plugin creator cropandresize version verbose registered plugin creator detectionlayer trt version verbose registered plugin creator proposal version verbose registered plugin creator proposallayer trt version verbose registered plugin creator pyramidroialign trt version verbose registered plugin creator resizenearest trt version verbose registered plugin creator split version verbose registered plugin creator specialslice trt version verbose registered plugin creator instancenormalization trt version running 
command cuda visibile order pci bus id nvidia smi query gpu gpu name pci device id uuid format csv verbose emblayernormvarseqlen createplugin building bert embeddings layernorm beta pluginfieldtype is verbose building bert embeddings layernorm gamma pluginfieldtype is verbose building bert embeddings word embeddings pluginfieldtype is verbose building bert embeddings token type embeddings pluginfieldtype is verbose building bert embeddings position embeddings pluginfieldtype is warning tensor datatype is determined at build time for tensors not marked as input or output verbose setting dynamic range for unnamed layer output to verbose setting dynamic range for unnamed layer output to replacing fc qkv with small tile gemm plugin with fairshare cache size assertionsrc smalltilegemmplugin cu traceback most recent call last file code main py line in main main args system file code main py line in main launch handle generate engine gen args gen kwargs file code main py line in launch handle generate engine raise runtimeerror building engines failed runtimeerror building engines failed makefile recipe for target generate engines failed make error make error make launch docker user system cpu total expected behavior a clear and concise description of what you expected to happen tensorrt engine 정상 빌드 possible solution solution solution additional context add any other context about the problem here 추가 정보 추가 정보
| 1
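The record above ends with an assertion inside TensorRT's small-tile GEMM plugin (`#assertionsrc/smallTileGEMMPlugin.cu,588`), right after `l0_fc_qkv` is replaced by that plugin. A plausible first experiment, not a confirmed fix, is to disable the plugin for this system in the config file the log shows being parsed; the JSON layout assumed below is a guess.

```python
# Hypothetical workaround sketch: disable the small-tile GEMM plugin flag that
# the build log prints as "use_small_tile_gemm_plugin=True". The config path
# comes from the log; the dict layout keyed by system id is an assumption.
import json
from pathlib import Path

config_path = Path("configs/bert/Offline/config.json")
config = json.loads(config_path.read_text())

entry = config["V100S-PCIE-32GBx1"]  # assumed: configs are keyed by system id
entry["use_small_tile_gemm_plugin"] = False
entry.pop("gemm_plugin_fairshare_cache_size", None)  # plugin-only knob

config_path.write_text(json.dumps(config, indent=4))
print(f"Patched {config_path}")
```

If the build then succeeds, the failure is isolated to the plugin on this GPU rather than to the rest of the engine-building flow.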
|
107,638
| 4,312,555,061
|
IssuesEvent
|
2016-07-22 06:25:00
|
SVADemoAPP/Server
|
https://api.github.com/repos/SVADemoAPP/Server
|
closed
|
Scatter plot optimization
|
Middle priority story
|
* Three-tier display for regular / VIP / special users, with a different dot color per tier
* Special roles show their names in bubbles
* Obtaining a user's role requires the user's consent
* Clicking a scatter point pops up detailed information
* An input box allows sending a message to that user
|
1.0
|
Scatter plot optimization - * Three-tier display for regular / VIP / special users, with a different dot color per tier
* Special roles show their names in bubbles
* Obtaining a user's role requires the user's consent
* Clicking a scatter point pops up detailed information
* An input box allows sending a message to that user
|
non_process
|
scatter plot optimization three tier display for regular vip special users with a different dot color per tier special roles show their names in bubbles obtaining a user s role requires the user s consent clicking a scatter point pops up detailed information an input box allows sending a message to that user
| 0
|
19,988
| 26,462,583,259
|
IssuesEvent
|
2023-01-16 19:14:14
|
kubernetes-sigs/windows-operational-readiness
|
https://api.github.com/repos/kubernetes-sigs/windows-operational-readiness
|
closed
|
Ability for pods to bind to host network interfaces on Windows
|
kind/feature lifecycle/rotten category/ext.hostprocess
|
Ability for pods to bind to host network interfaces on windows (requires hostProcess pods for scheduling the pod itself).
|
1.0
|
Ability for pods to bind to host network interfaces on Windows - Ability for pods to bind to host network interfaces on windows (requires hostProcess pods for scheduling the pod itself).
|
process
|
ability for pods to bind to host network interfaces on windows ability for pods to bind to host network interfaces on windows requires hostprocess pods for scheduling the pod itself
| 1
|
122,430
| 4,835,352,840
|
IssuesEvent
|
2016-11-08 16:35:17
|
bounswe/bounswe2016group4
|
https://api.github.com/repos/bounswe/bounswe2016group4
|
closed
|
<django.db.models.base.ModelState object at 0x03F092F0> is not JSON serializable
|
backend bug priority-high
|
when i try to call /get_a_food/1 which is
```
def get_food(req, food_id):
# no error handling
food_dict = db_retrieve_food(food_id).__dict__
print(food_dict)
food_json = json.dumps(food_dict)
print(food_json)
return render(req, 'kwue/food.html', food_json)
```
json.dumps gives error
|
1.0
|
<django.db.models.base.ModelState object at 0x03F092F0> is not JSON serializable - when i try to call /get_a_food/1 which is
```
def get_food(req, food_id):
# no error handling
food_dict = db_retrieve_food(food_id).__dict__
print(food_dict)
food_json = json.dumps(food_dict)
print(food_json)
return render(req, 'kwue/food.html', food_json)
```
json.dumps gives error
|
non_process
|
is not json serializable when i try to call get a food which is def get food req food id no error handling food dict db retrieve food food id dict print food dict food json json dumps food dict print food json return render req kwue food html food json json dumps gives error
| 0
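The failure in the record above comes from serializing `food.__dict__`, which carries Django's internal `_state` (a `ModelState`) alongside the field values. A minimal sketch of the usual fix, filtering private attributes before `json.dumps` (or using `django.forms.models.model_to_dict`, which returns only concrete field values):

```python
# Minimal sketch: strip Django-internal attributes (notably _state) before
# JSON-encoding a model instance. `food` is any saved model instance.
import json

def food_to_json(food) -> str:
    public = {k: v for k, v in food.__dict__.items() if not k.startswith("_")}
    return json.dumps(public, default=str)  # default=str handles dates/Decimals
```

Inside a view, `django.http.JsonResponse(public)` achieves the same result without calling `json.dumps` by hand.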
|
11,890
| 14,686,033,148
|
IssuesEvent
|
2021-01-01 12:47:39
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
reopened
|
$connect doesn't throw error on mysql connector if database is not reachable
|
bug/2-confirmed kind/bug process/candidate team/client topic: connections
|
## Problem
I am trying to generate millions of rows via database seeders.
I would like to have a way to test the connection before creating data, to avoid an unnecessary error message.
## Suggested solution
Perhaps something like this:
```typescript
try {
await prisma.$connect()
} catch (error) {
throw error
}
```
I looked in the official docs for a way to test the connection, but found none.
"prisma.$connect()" doesn't trigger an error if the DB is offline.
## Version
"@prisma/client": "^2.11.0"
Database: mysql:8
|
1.0
|
$connect doesn't throw error on mysql connector if database is not reachable - ## Problem
I am trying to generate millions of rows via database seeders.
I would like to have a way to test the connection before creating data, to avoid an unnecessary error message.
## Suggested solution
Perhaps something like this:
```typescript
try {
await prisma.$connect()
} catch (error) {
throw error
}
```
I looked in the official docs for a way to test the connection, but found none.
"prisma.$connect()" doesn't trigger an error if the DB is offline.
## Version
"@prisma/client": "^2.11.0"
Database: mysql:8
|
process
|
connect doesn t throw error on mysql connector if database is not reachable problem i am trying to generate million of rows of database seeders i would like to have a way to test connection before creating data to avoid unnecessary error message suggested solution perhaps something like this typescript try await prisma connect catch error throw error i tried to look in official docs for away to test connection but found none prisma connect doesn t trigger error if the db is offline version prisma client database mysql
| 1
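Since the record above reports that `prisma.$connect()` resolves even when MySQL is down, an out-of-band reachability probe before a long seeding run is one way to fail fast. The report's stack is TypeScript; this is a language-neutral sketch in Python using a plain TCP connect, and the host/port are assumptions:

```python
# Generic reachability probe (not Prisma's API): attempt a TCP connection to
# the database before starting a large seed run. Host/port are assumptions.
import socket

def db_reachable(host: str = "127.0.0.1", port: int = 3306, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusals, timeouts, and unreachable hosts
        return False

if not db_reachable():
    raise SystemExit("MySQL is not reachable; aborting seed run")
```

An in-band alternative is issuing a trivial query (e.g. `SELECT 1`) and treating any rejection as "not reachable".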
|
87,687
| 8,109,937,033
|
IssuesEvent
|
2018-08-14 09:16:42
|
legion-platform/legion
|
https://api.github.com/repos/legion-platform/legion
|
opened
|
Add tests for Blue Ocean dashboard with models metrics in small Jenkins
|
CI/CD/tests
|
Example test flow:
1. Get learning statistics for the test model.
2. Run model job in the small Jenkins.
3. Open Blue Ocean dashboard by selenium.
4. Check statistics for the test model and other model info.
|
1.0
|
Add tests for Blue Ocean dashboard with models metrics in small Jenkins - Example test flow:
1. Get learning statistics for the test model.
2. Run model job in the small Jenkins.
3. Open Blue Ocean dashboard by selenium.
4. Check statistics for the test model and other model info.
|
non_process
|
add tests for blue ocean dashboard with models metrics in small jenkins example test flow get learning statistics for the test model run model job in the small jenkins open blue ocean dashboard by selenium check statistics for the test model and other model info
| 0
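Step 3 of the flow above ("Open Blue Ocean dashboard by selenium") could start from a sketch like the following; the URL, CSS locator, and title text are assumptions to adapt to the small-Jenkins deployment under test:

```python
# Rough Selenium sketch for opening Blue Ocean and waiting for the pipeline
# list. BLUE_OCEAN_URL and the locator are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

BLUE_OCEAN_URL = "http://jenkins.local:8080/blue"  # hypothetical endpoint

driver = webdriver.Chrome()
try:
    driver.get(BLUE_OCEAN_URL)
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".pipelines-table"))
    )
    # Title text is an assumption; adjust to whatever the deployment serves.
    assert "Jenkins" in driver.title
finally:
    driver.quit()
```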
|
190,228
| 6,812,804,161
|
IssuesEvent
|
2017-11-06 05:56:05
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
ouo.io - see bug description
|
browser-firefox priority-important
|
<!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: http://ouo.io/
**Browser / Version**: Firefox 57.0
**Operating System**: Mac OS X 10.12
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Web not secure, Not using https
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2017/11/0fcb418f-94e2-4bcc-859f-ba66041aa96e.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
ouo.io - see bug description - <!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: addon-reporter-firefox -->
**URL**: http://ouo.io/
**Browser / Version**: Firefox 57.0
**Operating System**: Mac OS X 10.12
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Web not secure, Not using https
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2017/11/0fcb418f-94e2-4bcc-859f-ba66041aa96e.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
ouo io see bug description url browser version firefox operating system mac os x tested another browser no problem type something else description web not secure not using https steps to reproduce from with ❤️
| 0
|
14,322
| 17,351,223,032
|
IssuesEvent
|
2021-07-29 08:59:04
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
tests/system/test_pandas.py::test_insert_rows_from_dataframe is flaky
|
api: bigquery testing type: process
|
In CI, this test has failed at least twice in the last day.
```
> assert len(row_tuples) == len(expected)
E AssertionError: assert 0 == 6
E + where 0 = len([])
E + and 6 = len([(1.11, True, 'my string', 10), (2.22, False, 'another string', 20), (3.33, False, 'another string', 30), (4.44, True, 'another string', 40), (5.55, False, 'another string', 50), (6.66, True, None, 60)])
tests/system/test_pandas.py:696: AssertionError
```
Perhaps this is due to eventual consistency, and we need to poll until we get 6 results.
|
1.0
|
tests/system/test_pandas.py::test_insert_rows_from_dataframe is flaky - In CI, this test has failed at least twice in the last day.
```
> assert len(row_tuples) == len(expected)
E AssertionError: assert 0 == 6
E + where 0 = len([])
E + and 6 = len([(1.11, True, 'my string', 10), (2.22, False, 'another string', 20), (3.33, False, 'another string', 30), (4.44, True, 'another string', 40), (5.55, False, 'another string', 50), (6.66, True, None, 60)])
tests/system/test_pandas.py:696: AssertionError
```
Perhaps this is due to eventual consistency, and we need to poll until we get 6 results.
|
process
|
tests system test pandas py test insert rows from dataframe is flaky in ci this test has failed at least twice in the last day assert len row tuples len expected e assertionerror assert e where len e and len tests system test pandas py assertionerror perhaps this due to eventual consistency and we need to poll until we get results
| 1
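The closing guess in the record above, "poll until we get 6 results", maps to a small retry loop; `fetch_rows` is a hypothetical stand-in for the test's query helper:

```python
# Sketch of the suggested fix: retry the read until the expected row count
# appears or a deadline passes, instead of asserting on the first fetch.
import time

def poll_for_rows(fetch_rows, expected: int, timeout: float = 60.0, interval: float = 2.0):
    deadline = time.monotonic() + timeout
    while True:
        rows = fetch_rows()
        if len(rows) >= expected:
            return rows
        if time.monotonic() >= deadline:
            raise AssertionError(f"expected {expected} rows, got {len(rows)} after {timeout}s")
        time.sleep(interval)
```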
|
20,453
| 27,117,926,550
|
IssuesEvent
|
2023-02-15 20:09:04
|
hashgraph/hedera-mirror-node
|
https://api.github.com/repos/hashgraph/hedera-mirror-node
|
closed
|
Gradle build job in github workflow ignores test failure
|
bug process
|
### Description
The gradle build job in our github workflow ignores test failure. For example, in this workflow [run](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292), the job is marked as successful although there are test failures in the step `Execute Gradle`:
```
7m 59s
Run gradle/gradle-build-action@v2
Restore Gradle state from cache
/home/runner/work/hedera-mirror-node/hedera-mirror-node/gradlew :importer:build --scan ***
Starting a Gradle Daemon (subsequent builds will be faster)
Configuration on demand is an incubating feature.
> Task :buildSrc:generateExternalPluginSpecBuilders FROM-CACHE
> Task :buildSrc:extractPrecompiledScriptPluginPlugins FROM-CACHE
> Task :buildSrc:compilePluginsBlocks FROM-CACHE
> Task :buildSrc:generatePrecompiledScriptPluginAccessors
> Task :buildSrc:generateScriptPluginAdapters FROM-CACHE
> Task :buildSrc:compileKotlin FROM-CACHE
> Task :buildSrc:compileJava NO-SOURCE
> Task :buildSrc:compileGroovy NO-SOURCE
> Task :buildSrc:pluginDescriptors
> Task :buildSrc:processResources
> Task :buildSrc:classes
> Task :buildSrc:inspectClassesForKotlinIC
> Task :buildSrc:jar
> Task :buildSrc:assemble
> Task :buildSrc:compileTestKotlin NO-SOURCE
> Task :buildSrc:compileTestJava NO-SOURCE
> Task :buildSrc:compileTestGroovy NO-SOURCE
> Task :buildSrc:pluginUnderTestMetadata
> Task :buildSrc:processTestResources NO-SOURCE
> Task :buildSrc:testClasses UP-TO-DATE
> Task :buildSrc:test NO-SOURCE
> Task :buildSrc:validatePlugins FROM-CACHE
> Task :buildSrc:check UP-TO-DATE
> Task :buildSrc:gitHook
> Task :buildSrc:build
> Task :importer:bootBuildInfo
> Task :common:generateEffectiveLombokConfig
> Task :importer:generateEffectiveLombokConfig
> Task :importer:generateGitProperties
> Task :importer:processResources
> Task :importer:generateTestEffectiveLombokConfig
> Task :common:compileJava FROM-CACHE
> Task :common:processResources NO-SOURCE
> Task :common:classes UP-TO-DATE
> Task :common:jar
> Task :common:generateTestEffectiveLombokConfig
> Task :importer:processTestResources
> Task :common:compileTestJava FROM-CACHE
> Task :common:processTestResources NO-SOURCE
> Task :common:testClasses UP-TO-DATE
> Task :importer:compileJava FROM-CACHE
> Task :importer:classes
> Task :importer:bootJarMainClassName
> Task :importer:bootJar
> Task :importer:jar
> Task :importer:package
> Task :importer:assemble
> Task :importer:compileTestJava FROM-CACHE
> Task :importer:testClasses
> Task :importer:test
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
EntityRecordItemListenerFileTest > fileUpdateAddressBookComplete() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:604
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:6[1](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:1)3
EntityRecordItemListenerFileTest > fileAppendToAddressBookInSingleRecordFile() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:320
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:321
EntityRecordItemListenerFileTest > fileUpdateAddressBookPartial() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:5[78](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:80)
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:5[83](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:85)
EntityRecordItemListenerFileTest > fileAppendToAddressBook() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:2[95](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:97)
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:302
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
2095 tests completed, 4 failed
There were failing tests. See the report at: file:///home/runner/work/hedera-mirror-node/hedera-mirror-node/hedera-mirror-importer/build/reports/tests/test/index.html
> Task :importer:jacocoTestReport
> Task :importer:check
> Task :importer:build
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
See https://docs.gradle.org/7.6/userguide/command_line_interface.html#sec:command_line_warnings
BUILD SUCCESSFUL in 7m 45s
32 actionable tasks: 22 executed, 10 from cache
```
### Steps to reproduce
see the description
### Additional context
_No response_
### Hedera network
other
### Version
v0.75.0-SNAPSHOT
### Operating system
None
|
1.0
|
Gradle build job in github workflow ignores test failure - ### Description
The gradle build job in our github workflow ignores test failure. For example, in this workflow [run](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292), the job is marked as successful although there are test failures in the step `Execute Gradle`:
```
7m 59s
Run gradle/gradle-build-action@v2
Restore Gradle state from cache
/home/runner/work/hedera-mirror-node/hedera-mirror-node/gradlew :importer:build --scan ***
Starting a Gradle Daemon (subsequent builds will be faster)
Configuration on demand is an incubating feature.
> Task :buildSrc:generateExternalPluginSpecBuilders FROM-CACHE
> Task :buildSrc:extractPrecompiledScriptPluginPlugins FROM-CACHE
> Task :buildSrc:compilePluginsBlocks FROM-CACHE
> Task :buildSrc:generatePrecompiledScriptPluginAccessors
> Task :buildSrc:generateScriptPluginAdapters FROM-CACHE
> Task :buildSrc:compileKotlin FROM-CACHE
> Task :buildSrc:compileJava NO-SOURCE
> Task :buildSrc:compileGroovy NO-SOURCE
> Task :buildSrc:pluginDescriptors
> Task :buildSrc:processResources
> Task :buildSrc:classes
> Task :buildSrc:inspectClassesForKotlinIC
> Task :buildSrc:jar
> Task :buildSrc:assemble
> Task :buildSrc:compileTestKotlin NO-SOURCE
> Task :buildSrc:compileTestJava NO-SOURCE
> Task :buildSrc:compileTestGroovy NO-SOURCE
> Task :buildSrc:pluginUnderTestMetadata
> Task :buildSrc:processTestResources NO-SOURCE
> Task :buildSrc:testClasses UP-TO-DATE
> Task :buildSrc:test NO-SOURCE
> Task :buildSrc:validatePlugins FROM-CACHE
> Task :buildSrc:check UP-TO-DATE
> Task :buildSrc:gitHook
> Task :buildSrc:build
> Task :importer:bootBuildInfo
> Task :common:generateEffectiveLombokConfig
> Task :importer:generateEffectiveLombokConfig
> Task :importer:generateGitProperties
> Task :importer:processResources
> Task :importer:generateTestEffectiveLombokConfig
> Task :common:compileJava FROM-CACHE
> Task :common:processResources NO-SOURCE
> Task :common:classes UP-TO-DATE
> Task :common:jar
> Task :common:generateTestEffectiveLombokConfig
> Task :importer:processTestResources
> Task :common:compileTestJava FROM-CACHE
> Task :common:processTestResources NO-SOURCE
> Task :common:testClasses UP-TO-DATE
> Task :importer:compileJava FROM-CACHE
> Task :importer:classes
> Task :importer:bootJarMainClassName
> Task :importer:bootJar
> Task :importer:jar
> Task :importer:package
> Task :importer:assemble
> Task :importer:compileTestJava FROM-CACHE
> Task :importer:testClasses
> Task :importer:test
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
EntityRecordItemListenerFileTest > fileUpdateAddressBookComplete() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:604
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:6[1](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:1)3
EntityRecordItemListenerFileTest > fileAppendToAddressBookInSingleRecordFile() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:320
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:321
EntityRecordItemListenerFileTest > fileUpdateAddressBookPartial() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:5[78](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:80)
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:5[83](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:85)
EntityRecordItemListenerFileTest > fileAppendToAddressBook() FAILED
org.opentest4j.MultipleFailuresError at EntityRecordItemListenerFileTest.java:2[95](https://github.com/hashgraph/hedera-mirror-node/actions/runs/4159674158/jobs/7195950292#step:4:97)
Caused by: org.opentest4j.AssertionFailedError at EntityRecordItemListenerFileTest.java:302
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
2095 tests completed, 4 failed
There were failing tests. See the report at: file:///home/runner/work/hedera-mirror-node/hedera-mirror-node/hedera-mirror-importer/build/reports/tests/test/index.html
> Task :importer:jacocoTestReport
> Task :importer:check
> Task :importer:build
Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
See https://docs.gradle.org/7.6/userguide/command_line_interface.html#sec:command_line_warnings
BUILD SUCCESSFUL in 7m 45s
32 actionable tasks: 22 executed, 10 from cache
```
### Steps to reproduce
see the description
### Additional context
_No response_
### Hedera network
other
### Version
v0.75.0-SNAPSHOT
### Operating system
None
|
process
|
gradle build job in github workflow ignores test failure description the gradle build job in our github workflow ignores test failure for example in this workflow the job is marked as successful although there are test failures in the step execute gradle run gradle gradle build action restore gradle state from cache home runner work hedera mirror node hedera mirror node gradlew importer build scan starting a gradle daemon subsequent builds will be faster configuration on demand is an incubating feature task buildsrc generateexternalpluginspecbuilders from cache task buildsrc extractprecompiledscriptpluginplugins from cache task buildsrc compilepluginsblocks from cache task buildsrc generateprecompiledscriptpluginaccessors task buildsrc generatescriptpluginadapters from cache task buildsrc compilekotlin from cache task buildsrc compilejava no source task buildsrc compilegroovy no source task buildsrc plugindescriptors task buildsrc processresources task buildsrc classes task buildsrc inspectclassesforkotlinic task buildsrc jar task buildsrc assemble task buildsrc compiletestkotlin no source task buildsrc compiletestjava no source task buildsrc compiletestgroovy no source task buildsrc pluginundertestmetadata task buildsrc processtestresources no source task buildsrc testclasses up to date task buildsrc test no source task buildsrc validateplugins from cache task buildsrc check up to date task buildsrc githook task buildsrc build task importer bootbuildinfo task common generateeffectivelombokconfig task importer generateeffectivelombokconfig task importer generategitproperties task importer processresources task importer generatetesteffectivelombokconfig task common compilejava from cache task common processresources no source task common classes up to date task common jar task common generatetesteffectivelombokconfig task importer processtestresources task common compiletestjava from cache task common processtestresources no source task common testclasses up to date task importer compilejava from cache task importer classes task importer bootjarmainclassname task importer bootjar task importer jar task importer package task importer assemble task importer compiletestjava from cache task importer testclasses task importer test openjdk bit server vm warning sharing is only supported for boot loader classes because bootstrap classpath has been appended entityrecorditemlistenerfiletest fileupdateaddressbookcomplete failed org multiplefailureserror at entityrecorditemlistenerfiletest java caused by org assertionfailederror at entityrecorditemlistenerfiletest java entityrecorditemlistenerfiletest fileappendtoaddressbookinsinglerecordfile failed org multiplefailureserror at entityrecorditemlistenerfiletest java caused by org assertionfailederror at entityrecorditemlistenerfiletest java entityrecorditemlistenerfiletest fileupdateaddressbookpartial failed org multiplefailureserror at entityrecorditemlistenerfiletest java caused by org assertionfailederror at entityrecorditemlistenerfiletest java entityrecorditemlistenerfiletest fileappendtoaddressbook failed org multiplefailureserror at entityrecorditemlistenerfiletest java caused by org assertionfailederror at entityrecorditemlistenerfiletest java openjdk bit server vm warning sharing is only supported for boot loader classes because bootstrap classpath has been appended tests completed failed there were failing tests see the report at file home runner work hedera mirror node hedera mirror node hedera mirror importer build reports tests test index 
html task importer jacocotestreport task importer check task importer build deprecated gradle features were used in this build making it incompatible with gradle you can use warning mode all to show the individual deprecation warnings and determine if they come from your own scripts or plugins see build successful in actionable tasks executed from cache steps to reproduce see the description additional context no response hedera network other version snapshot operating system none
| 1
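Gradle normally fails the build on test failures, so the successful exit in the record above suggests something like `ignoreFailures = true` on the test task (that cause is an assumption). Until the build script is fixed, a CI-side guard can scan the captured output for the failure summary Gradle still prints:

```python
# Stopgap CI guard (an assumed workaround, not the repo's actual fix): exit
# non-zero when Gradle's output admits failures despite a zero exit code.
import re
import sys

FAILURE_RE = re.compile(r"(\d+) tests completed, (\d+) failed")

def gradle_output_clean(log_text: str) -> bool:
    match = FAILURE_RE.search(log_text)
    return match is None or match.group(2) == "0"

if __name__ == "__main__":
    if not gradle_output_clean(sys.stdin.read()):
        sys.exit("test failures detected in Gradle output")
```

Usage would be along the lines of piping the build log through the guard, e.g. `./gradlew :importer:build | tee build.log` followed by `python guard.py < build.log`.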
|
201,093
| 15,173,643,056
|
IssuesEvent
|
2021-02-13 15:03:12
|
sle118/squeezelite-esp32
|
https://api.github.com/repos/sle118/squeezelite-esp32
|
closed
|
Preset-Buttons don't work... solution found
|
ready for testing
|
Hi
the new preset buttons from PR#64 have a little typo bug... you forgot a "_" character (underscore).
File components/squeezelite/controls.c
Line 145 – 150
old: LMS_CALLBACK(pre1, PRESET_1, preset1.single)
new : LMS_CALLBACK(pre1, PRESET_1, preset_1.single)
...
old: LMS_CALLBACK(pre6, PRESET_6, preset6.single)
new: LMS_CALLBACK(pre6, PRESET_6, preset_6.single)
I compiled the new lines and it worked :-)
Can you integrate it into your next version?
Many greetings and thank you
M.P.
|
1.0
|
Preset-Buttons don't work... solution found - Hi
the new preset buttons from PR#64 have a little typo bug... you forgot a "_" character (underscore).
File components/squeezelite/controls.c
Line 145 – 150
old: LMS_CALLBACK(pre1, PRESET_1, preset1.single)
new : LMS_CALLBACK(pre1, PRESET_1, preset_1.single)
...
old: LMS_CALLBACK(pre6, PRESET_6, preset6.single)
new: LMS_CALLBACK(pre6, PRESET_6, preset_6.single)
I compiled the new lines and it worked :-)
Can you integrate it into your next version?
Many greetings and thank you
M.P.
|
non_process
|
preset buttons don t work solution found hi the new preset buttons from pr have a litle typo bug you forgot a char underscore file components squeezelite controls c line – old lms callback preset single new lms callback preset preset single old lms callback preset single new lms callback preset preset single i compiled the new lines and it worked can you integrate it to your next version many greetings and thank you m p
| 0
|
268,571
| 8,408,478,276
|
IssuesEvent
|
2018-10-12 01:52:56
|
datasnakes/rut
|
https://api.github.com/repos/datasnakes/rut
|
opened
|
Standalone rut CLI (Primary Objectives)
|
Priority: Critical Status: Available Status: Review Needed Type: Discussion Type: Feature
|
Complete the primary objectives under [Hackseq 2018 - rut](https://github.com/datasnakes/rut/projects) project.
- [ ] Install R packages from CRAN in user .libPath or in global R .libPath
- [ ] Create CRAN snapshots anywhere with checkpoint.
- [ ] Create packrat projects using jetpack's CLI
- [ ] Create a default .Rprofile/.Renviron
|
1.0
|
Standalone rut CLI (Primary Objectives) - Complete the primary objectives under [Hackseq 2018 - rut](https://github.com/datasnakes/rut/projects) project.
- [ ] Install R packages from CRAN in user .libPath or in global R .libPath
- [ ] Create CRAN snapshots anywhere with checkpoint.
- [ ] Create packrat projects using jetpack's CLI
- [ ] Create a default .Rprofile/.Renviron
|
non_process
|
standalone rut cli primary objectives complete the primary objectives under project install r packages from cran in user libpath or in global r libpath create cran snapshots anywhere with checkpoint create packrat projects using jetpack s cli create a default rprofile renviron
| 0
|
8,295
| 11,460,622,510
|
IssuesEvent
|
2020-02-07 10:07:25
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
obsolete: GO:0052161 modulation by symbiont of defense-related host cell wall thickening
|
multi-species process obsoletion quick fix
|
GO:0052161 modulation by symbiont of defense-related host cell wall thickening
Definition (GO:0052161 GONUTS page)
Any process in which an organism modulates the frequency, rate or extent of host processes resulting in the thickening of its cell walls, occurring as part of the defense response of the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
This term, and its descendants
GO:0052105 induction by symbiont of defense-related host cell wall thickening is_a
GO:0052189 modulation by symbiont of defense-related host cell wall callose deposition
should be obsoleted.
They don't make sense because this is a plant PTI (pathogen triggered immunity) response. The plant increases callose deposition as a defense response an infected cell. I think this is to contain the pathogens to prevent them spreading to surrounding cells.
There are appropriate plant terms for this process.
GO:0052542 defense response by callose deposition
But, this (GO:0052161) is not an evolved process for the pathogen. It happens in the plant as a result of the plant detecting the pathogen.
Can we search for any other terms under "**response to host defenses**" which have
Any process in blah ......**occurring as part of the defense response of the host organism** ?as it is unlikely that any of these are valid, the 2 statements are logically inconsistent.
again, no annotations, no reference....
|
1.0
|
obsolete: GO:0052161 modulation by symbiont of defense-related host cell wall thickening - GO:0052161 modulation by symbiont of defense-related host cell wall thickening
Definition (GO:0052161 GONUTS page)
Any process in which an organism modulates the frequency, rate or extent of host processes resulting in the thickening of its cell walls, occurring as part of the defense response of the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
This term, and its descendants
GO:0052105 induction by symbiont of defense-related host cell wall thickening is_a
GO:0052189 modulation by symbiont of defense-related host cell wall callose deposition
should be obsoleted.
They don't make sense because this is a plant PTI (pathogen triggered immunity) response. The plant increases callose deposition as a defense response an infected cell. I think this is to contain the pathogens to prevent them spreading to surrounding cells.
There are appropriate plant terms for this process.
GO:0052542 defense response by callose deposition
But, this (GO:0052161) is not an evolved process for the pathogen. It happens in the plant as a result of the plant detecting the pathogen.
Can we search for any other terms under "**response to host defenses**" which have
Any process in blah ......**occurring as part of the defense response of the host organism** ?as it is unlikely that any of these are valid, the 2 statements are logically inconsistent.
again, no annotations, no reference....
|
process
|
obsolete go modulation by symbiont of defense related host cell wall thickening go modulation by symbiont of defense related host cell wall thickening definition go gonuts page any process in which an organism modulates the frequency rate or extent of host processes resulting in the thickening of its cell walls occurring as part of the defense response of the host organism the host is defined as the larger of the organisms involved in a symbiotic interaction this term and its descendants go induction by symbiont of defense related host cell wall thickening is a go modulation by symbiont of defense related host cell wall callose deposition should be obsoleted they don t make sense because this is a plant pti pathogen triggered immunity response the plant increases callose deposition as a defense response an infected cell i think this is to contain the pathogens to prevent them spreading to surrounding cells there are appropriate plant terms for this process go defense response by callose deposition but this go is not an evolved process for the pathogen it happens in the plant as a result of the plant detecting the pathogen can we search for any other terms under response to host defenses which have any process in blah occurring as part of the defense response of the host organism as it is unlikely that any of these are valid the statements are logically inconsistent again no annotations no reference
| 1
|
23,406
| 4,934,211,395
|
IssuesEvent
|
2016-11-28 18:26:09
|
biocore/scikit-bio
|
https://api.github.com/repos/biocore/scikit-bio
|
closed
|
updates to release.md
|
documentation maintenance
|
- [x] base this on `conda` environments instead of `virtualenv`
- [x] note that version strings may need to be changed in new `@experimental/@deprecated/@stable` code under _Prep the release_
|
1.0
|
updates to release.md - - [x] base this on `conda` environments instead of `virtualenv`
- [x] note that version strings may need to be changed in new `@experimental/@deprecated/@stable` code under _Prep the release_
|
non_process
|
updates to release md base this on conda environments instead of virtualenv note that version strings may need to be changed in new experimental deprecated stable code under prep the release
| 0
|
191,437
| 14,594,246,427
|
IssuesEvent
|
2020-12-20 04:21:19
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
chef/automate: api/config/dex/config_request_test.go; 8 LoC
|
fresh test tiny
|
Found a possible issue in [chef/automate](https://www.github.com/chef/automate) at [api/config/dex/config_request_test.go](https://github.com/chef/automate/blob/590d86f451627dc954083ff10936e03575b918f8/api/config/dex/config_request_test.go#L483-L490)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to connector is reassigned at line 485
[Click here to see the code in its original context.](https://github.com/chef/automate/blob/590d86f451627dc954083ff10936e03575b918f8/api/config/dex/config_request_test.go#L483-L490)
<details>
<summary>Click here to show the 8 line(s) of Go which triggered the analyzer.</summary>
```go
for i, connector := range combinations {
t.Run(fmt.Sprintf("combination %d", i), func(t *testing.T) {
cfg.V1.Sys.Connectors = &connector
err := cfg.Validate()
require.NoError(t, err)
})
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 590d86f451627dc954083ff10936e03575b918f8
|
1.0
|
chef/automate: api/config/dex/config_request_test.go; 8 LoC -
Found a possible issue in [chef/automate](https://www.github.com/chef/automate) at [api/config/dex/config_request_test.go](https://github.com/chef/automate/blob/590d86f451627dc954083ff10936e03575b918f8/api/config/dex/config_request_test.go#L483-L490)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to connector is reassigned at line 485
[Click here to see the code in its original context.](https://github.com/chef/automate/blob/590d86f451627dc954083ff10936e03575b918f8/api/config/dex/config_request_test.go#L483-L490)
<details>
<summary>Click here to show the 8 line(s) of Go which triggered the analyzer.</summary>
```go
for i, connector := range combinations {
t.Run(fmt.Sprintf("combination %d", i), func(t *testing.T) {
cfg.V1.Sys.Connectors = &connector
err := cfg.Validate()
require.NoError(t, err)
})
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 590d86f451627dc954083ff10936e03575b918f8
|
non_process
|
chef automate api config dex config request test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to connector is reassigned at line click here to show the line s of go which triggered the analyzer go for i connector range combinations t run fmt sprintf combination d i func t testing t cfg sys connectors connector err cfg validate require noerror t err leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
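The flagged Go snippet stores `&connector`, the address of a range variable that is reassigned every iteration, so each subtest can end up seeing the last connector under the loop-variable semantics of Go versions before 1.22; the usual remedy at the time was per-iteration shadowing (`connector := connector`). A Python analogue of the same late-binding pitfall, sketched for illustration:

```python
# Analogue sketch: every closure reads the same loop variable, so all three
# print the final value, mirroring the aliasing the Go analyzer flagged.
callbacks = [lambda: print(i) for i in range(3)]
for cb in callbacks:
    cb()  # prints 2, 2, 2

# Fix: bind the current value per iteration (mirrors `connector := connector`).
callbacks = [lambda i=i: print(i) for i in range(3)]
for cb in callbacks:
    cb()  # prints 0, 1, 2
```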
|
20,605
| 27,268,639,731
|
IssuesEvent
|
2023-02-22 20:14:13
|
PHACDataHub/project-intake
|
https://api.github.com/repos/PHACDataHub/project-intake
|
opened
|
[EPIC]: Self-serve intake process
|
Area: Change Management Program Area: Cloud Operating Model Area: Process
|
### 👱 Suggester Name
Keith Young
### 🎯 Milestone
M1 - Establish a governance Secretariat
### ❓ Problem Statement
No process exists to fast-track simple (templated) onboarding
### 🎉Desired Outcome Summary
A process to allow fast technology delivery with minimal or no human interaction
### 📋 Detailed Description
We don't have a process to allow users to self-serve simple technology requests.
A simple use-case would be, "I am a data scientist and I need to process data over the next 72 hours".
We should have an intake form that allows a user to select from pre-defined templates with known boundaries (compute limits, time limits). They fill out the form, a few automatic checks run, and on the successful path the resources are spun up and the user receives an email indicating they are ready for use.
### ✔️ Feature List
- [ ] Come up with process flow that represents this process
- [ ] Come up with template ideas for commodity simple use-cases
- [ ] Decide what the minimum viable form requirements are
> If a new GCP identity is needed how is the automation flow impacted and resolved?
### ⛔ Dependencies
_No response_
|
1.0
|
[EPIC]: Self-serve intake process - ### 👱 Suggester Name
Keith Young
### 🎯 Milestone
M1 - Establish a governance Secretariat
### ❓ Problem Statement
No process exists to fast-track simple (templated) onboarding
### 🎉Desired Outcome Summary
A process to allow fast technology delivery with minimal or no human interaction
### 📋 Detailed Description
We don't have a process to allow users to self-serve simple technology requests.
A simple use-case would be, "I am a data scientist and I need to process data over the next 72 hours".
We should have an intake form that allows a user to select from pre-defined templates with known boundaries (compute limits, time limits). They fill out the form, a few automatic checks run, and on the successful path the resources are spun up and the user receives an email indicating they are ready for use.
### ✔️ Feature List
- [ ] Come up with process flow that represents this process
- [ ] Come up with template ideas for commodity simple use-cases
- [ ] Decide what the minimum viable form requirements are
> If a new GCP identity is needed how is the automation flow impacted and resolved?
### ⛔ Dependencies
_No response_
|
process
|
self serve intake process 👱 suggester name keith young 🎯 milestone establish a governance secretariat ❓ problem statement no process exists to fast track simple templated onboarding 🎉desired outcome summary a process to allow fast technology delivery with minimal or no human interaction 📋 detailed description we don t have a process to allow users to self serve simple technology requests a simple use case would be i am a data scientist and i need to process data over the next hours we should have an intake form that allows a user to select from pre defined templates with known boundaries compute limits time limits they fill out the form and a few automatic checks are done and the successful path in the use case means the resources are spun up and the user receives an email indicating their resources are ready for use ✔️ feature list come up with process flow that represents this process come up with template ideas for commodity simple use cases decide what the minimum viable form requirements are if a new gcp identity is needed how is the automation flow impacted and resolved ⛔ dependencies no response
| 1
|
152,120
| 5,833,348,410
|
IssuesEvent
|
2017-05-09 01:11:16
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
[openbmc]rspconfig support option: ip/netmask/gateway/vlan
|
priority:high sprint2 type:feature
|
Acceptance: support option ip/netmask/gateway/vlan for rspconfig against openbmc
|
1.0
|
[openbmc]rspconfig support option: ip/netmask/gateway/vlan - Acceptance: support option ip/netmask/gateway/vlan for rspconfig against openbmc
|
non_process
|
rspconfig support option ip netmask gateway vlan acceptance support option ip netmask gateway vlan for rspconfig against openbmc
| 0
|
10,873
| 13,642,529,507
|
IssuesEvent
|
2020-09-25 15:39:51
|
nlpie/mtap
|
https://api.github.com/repos/nlpie/mtap
|
closed
|
Support pre-forking for Python services
|
area/framework/processing kind/enhancement lang/python
|
Python doesn't support processor concurrency across threads (the GIL), and gRPC servers don't support forking, so ProcessPoolExecutors are not an option.
We can support concurrency by running multiple processes, each with a gRPC server hosting the service.
|
1.0
|
Support pre-forking for Python services - Python doesn't support processor concurrency across threads (the GIL), and gRPC servers don't support forking, so ProcessPoolExecutors are not an option.
We can support concurrency by running multiple processes, each with a gRPC server hosting the service.
|
process
|
support pre forking for python services python doesn t support processor concurrency for threads and grpc servers don t support forking so processpoolexecutors are not an option we can support concurrency by running multiple processes each with with a grpc server hosting the service
| 1
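A minimal pre-fork sketch consistent with the record above: because a gRPC server cannot be forked after it starts, spawn the worker processes first and let each build its own server, sharing one port via `SO_REUSEPORT` (available on Linux; the port and worker count are illustrative):

```python
# Pre-forking sketch: N processes, each hosting its own gRPC server on a
# shared port. Servicer registration is omitted because it is service-specific.
from concurrent import futures
import multiprocessing

import grpc

PORT = 50051  # illustrative

def serve() -> None:
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=1),
        options=[("grpc.so_reuseport", 1)],  # lets sibling processes share PORT
    )
    server.add_insecure_port(f"[::]:{PORT}")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=serve) for _ in range(4)]
    for worker in workers:
        worker.start()
    for worker in workers:
        worker.join()
```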
|
1,800
| 4,540,075,721
|
IssuesEvent
|
2016-09-09 13:33:42
|
ongroup/mvmason
|
https://api.github.com/repos/ongroup/mvmason
|
closed
|
Get github issues ready for the pilot
|
4 - Done Priority: MEDIUM process
|
<!---
@huboard:{"order":9.00180009,"milestone_order":0.9996000999800034,"custom_state":""}
-->
|
1.0
|
Get github issues ready for the pilot -
<!---
@huboard:{"order":9.00180009,"milestone_order":0.9996000999800034,"custom_state":""}
-->
|
process
|
get github issues ready for the pilot huboard order milestone order custom state
| 1
|
4,447
| 7,314,170,876
|
IssuesEvent
|
2018-03-01 05:38:44
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Unexpected status character: I in ProcessTests.TestGetProcesses
|
area-System.Diagnostics.Process os-linux test-run-core
|
https://mc.dot.net/#/user/danmosemsft/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/01337d9cc1d615c0367d1625028ac2cc95cdb34f/workItem/System.Diagnostics.Process.Tests/wilogs
```
2018-02-28 21:54:38,644: INFO: proc(54): run_and_log_output: Output: None of the following programs were installed on this machine: xdg-open,gnome-open,kfmclient.
2018-02-28 21:54:39,611: INFO: proc(54): run_and_log_output: Output: Assertion Failed
2018-02-28 21:54:39,611: INFO: proc(54): run_and_log_output: Output: Unexpected status character: I
2018-02-28 21:54:39,612: INFO: proc(54): run_and_log_output: Output:
2018-02-28 21:54:39,612: INFO: proc(54): run_and_log_output: Output: at System.Diagnostics.Tests.ProcessTests.TestGetProcesses() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Debug+AGroup_x64+TestOuter_false_prtest/src/System.Diagnostics.Process/tests/ProcessTests.cs:line 899
2018-02-28 21:54:39,613: INFO: proc(54): run_and_log_output: Output: at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
```
ProcFsStateToThreadState needs to be updated to include "I".
It seems this state was added in
https://github.com/torvalds/linux/commit/06eb61844d841d0032a9950ce7f8e783ee49c0d0
|
1.0
|
Unexpected status character: I in ProcessTests.TestGetProcesses - https://mc.dot.net/#/user/danmosemsft/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/01337d9cc1d615c0367d1625028ac2cc95cdb34f/workItem/System.Diagnostics.Process.Tests/wilogs
```
2018-02-28 21:54:38,644: INFO: proc(54): run_and_log_output: Output: None of the following programs were installed on this machine: xdg-open,gnome-open,kfmclient.
2018-02-28 21:54:39,611: INFO: proc(54): run_and_log_output: Output: Assertion Failed
2018-02-28 21:54:39,611: INFO: proc(54): run_and_log_output: Output: Unexpected status character: I
2018-02-28 21:54:39,612: INFO: proc(54): run_and_log_output: Output:
2018-02-28 21:54:39,612: INFO: proc(54): run_and_log_output: Output: at System.Diagnostics.Tests.ProcessTests.TestGetProcesses() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Debug+AGroup_x64+TestOuter_false_prtest/src/System.Diagnostics.Process/tests/ProcessTests.cs:line 899
2018-02-28 21:54:39,613: INFO: proc(54): run_and_log_output: Output: at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
```
ProcFsStateToThreadState needs to be updated to include "I".
It seems this state was added in
https://github.com/torvalds/linux/commit/06eb61844d841d0032a9950ce7f8e783ee49c0d0
|
process
|
unexpected status character i in processtests testgetprocesses info proc run and log output output none of the following programs were installed on this machine xdg open gnome open kfmclient info proc run and log output output assertion failed info proc run and log output output unexpected status character i info proc run and log output output info proc run and log output output at system diagnostics tests processtests testgetprocesses in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup debug agroup testouter false prtest src system diagnostics process tests processtests cs line info proc run and log output output at system runtimemethodhandle invokemethod object target object arguments signature sig boolean constructor boolean wrapexceptions procfsstatetothreadstate needs to be updated to include i it seems this was added in
| 1
|
83,715
| 7,880,372,574
|
IssuesEvent
|
2018-06-26 15:45:13
|
apache/incubator-mxnet
|
https://api.github.com/repos/apache/incubator-mxnet
|
closed
|
test_cifar10 fails in CI master build
|
Flaky Test
|
## Description
test_cifar10 fails in CI master build.
Failed build: https://builds.apache.org/blue/organizations/jenkins/incubator-mxnet/detail/master/532/pipeline/
## Environment info (Required)
CI build
Python 2 CPU
MXNet commit hash:
1c1c788916d672ee3cafdc4c91d7002a94a59d13
## Error Message:
```
======================================================================
FAIL: test_dtype.test_cifar10
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/workspace/tests/python/train/test_dtype.py", line 192, in test_cifar10
run_cifar10(train, val, use_module=True)
File "/workspace/tests/python/train/test_dtype.py", line 136, in run_cifar10
assert (ret[0][1] > 0.08)
AssertionError:
-------------------- >> begin captured logging << --------------------
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 7880.07 samples/sec accuracy=0.109531
root: INFO: Epoch[0] Batch [100] Speed: 10121.26 samples/sec accuracy=0.092969
root: INFO: Epoch[0] Batch [150] Speed: 9358.31 samples/sec accuracy=0.094375
root: INFO: Epoch[0] Batch [200] Speed: 9408.26 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [250] Speed: 10316.01 samples/sec accuracy=0.099062
root: INFO: Epoch[0] Batch [300] Speed: 9913.46 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 10337.40 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=5.339
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 7869.73 samples/sec accuracy=0.123775
root: INFO: Epoch[0] Batch [100] Speed: 9542.55 samples/sec accuracy=0.099375
root: INFO: Epoch[0] Batch [150] Speed: 9407.05 samples/sec accuracy=0.094219
root: INFO: Epoch[0] Batch [200] Speed: 8906.41 samples/sec accuracy=0.100625
root: INFO: Epoch[0] Batch [250] Speed: 9453.06 samples/sec accuracy=0.098125
root: INFO: Epoch[0] Batch [300] Speed: 8874.91 samples/sec accuracy=0.093281
root: INFO: Epoch[0] Batch [350] Speed: 6026.26 samples/sec accuracy=0.104063
root: INFO: Epoch[0] Train-accuracy=0.097656
root: INFO: Epoch[0] Time cost=6.069
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099960
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 4614.01 samples/sec accuracy=0.120938
root: INFO: Epoch[0] Batch [100] Speed: 6189.65 samples/sec accuracy=0.096562
root: INFO: Epoch[0] Batch [150] Speed: 6062.91 samples/sec accuracy=0.094844
root: INFO: Epoch[0] Batch [200] Speed: 6710.55 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [250] Speed: 5982.40 samples/sec accuracy=0.099062
root: INFO: Epoch[0] Batch [300] Speed: 7129.64 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 6555.40 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=8.313
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 4543.26 samples/sec accuracy=0.119638
root: INFO: Epoch[0] Batch [100] Speed: 5423.21 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Batch [150] Speed: 5033.73 samples/sec accuracy=0.092813
root: INFO: Epoch[0] Batch [200] Speed: 9233.24 samples/sec accuracy=0.100469
root: INFO: Epoch[0] Batch [250] Speed: 9242.30 samples/sec accuracy=0.099375
root: INFO: Epoch[0] Batch [300] Speed: 10217.41 samples/sec accuracy=0.093125
root: INFO: Epoch[0] Batch [350] Speed: 8666.85 samples/sec accuracy=0.104063
root: INFO: Epoch[0] Train-accuracy=0.092188
root: INFO: Epoch[0] Time cost=7.249
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099960
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 8083.13 samples/sec accuracy=0.102656
root: INFO: Epoch[0] Batch [100] Speed: 9373.58 samples/sec accuracy=0.101250
root: INFO: Epoch[0] Batch [150] Speed: 8239.63 samples/sec accuracy=0.092969
root: INFO: Epoch[0] Batch [200] Speed: 8116.80 samples/sec accuracy=0.101094
root: INFO: Epoch[0] Batch [250] Speed: 8595.15 samples/sec accuracy=0.099531
root: INFO: Epoch[0] Batch [300] Speed: 9090.94 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 9014.57 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=5.879
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 9343.27 samples/sec accuracy=0.102635
root: INFO: Epoch[0] Batch [100] Speed: 8417.44 samples/sec accuracy=0.095312
root: INFO: Epoch[0] Batch [150] Speed: 9007.86 samples/sec accuracy=0.101094
root: INFO: Epoch[0] Batch [200] Speed: 9894.47 samples/sec accuracy=0.103125
root: INFO: Epoch[0] Batch [250] Speed: 9899.44 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [300] Speed: 9939.62 samples/sec accuracy=0.095781
root: INFO: Epoch[0] Batch [350] Speed: 10021.53 samples/sec accuracy=0.101250
root: INFO: Epoch[0] Train-accuracy=0.097656
root: INFO: Epoch[0] Time cost=5.316
root: INFO: Epoch[0] Validation-accuracy=0.078521
root: INFO: final accuracy = 0.078425
--------------------- >> end captured logging << ---------------------
[success] 29.22% test_autograd.test_autograd: 68.1240s
[fail] 27.32% test_dtype.test_cifar10: 63.6942s
[success] 21.17% test_bucketing.test_bucket_module: 49.3386s
[success] 14.59% test_mlp.test_mlp: 33.9994s
[success] 7.70% test_conv.test_mnist: 17.9464s
----------------------------------------------------------------------
Ran 5 tests in 237.792s
FAILED (failures=1)
```
## Steps to reproduce
Build and run unit test
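The threshold in the failing assertion sits just below the 10-class random baseline of 0.1, so one unlucky initialization is enough to fail the run. A minimal, hypothetical sketch of a common mitigation, pinning the RNG seeds before training (not necessarily how the MXNet tests were ultimately fixed):
```python
import random

import mxnet as mx
import numpy as np

def seed_everything(seed=42):
    """Fix all RNG seeds so the smoke test trains deterministically."""
    random.seed(seed)
    np.random.seed(seed)
    mx.random.seed(seed)

# Hypothetical usage inside test_cifar10, before each training run:
# seed_everything(42)
# run_cifar10(train, val, use_module=True)
```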
|
1.0
|
test_cifar10 fails in CI master build -
## Description
test_cifar10 fails in CI master build.
Failed build: https://builds.apache.org/blue/organizations/jenkins/incubator-mxnet/detail/master/532/pipeline/
## Environment info (Required)
CI build
Python 2 CPU
MXNet commit hash:
1c1c788916d672ee3cafdc4c91d7002a94a59d13
## Error Message:
```
======================================================================
FAIL: test_dtype.test_cifar10
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/workspace/tests/python/train/test_dtype.py", line 192, in test_cifar10
run_cifar10(train, val, use_module=True)
File "/workspace/tests/python/train/test_dtype.py", line 136, in run_cifar10
assert (ret[0][1] > 0.08)
AssertionError:
-------------------- >> begin captured logging << --------------------
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 7880.07 samples/sec accuracy=0.109531
root: INFO: Epoch[0] Batch [100] Speed: 10121.26 samples/sec accuracy=0.092969
root: INFO: Epoch[0] Batch [150] Speed: 9358.31 samples/sec accuracy=0.094375
root: INFO: Epoch[0] Batch [200] Speed: 9408.26 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [250] Speed: 10316.01 samples/sec accuracy=0.099062
root: INFO: Epoch[0] Batch [300] Speed: 9913.46 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 10337.40 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=5.339
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 7869.73 samples/sec accuracy=0.123775
root: INFO: Epoch[0] Batch [100] Speed: 9542.55 samples/sec accuracy=0.099375
root: INFO: Epoch[0] Batch [150] Speed: 9407.05 samples/sec accuracy=0.094219
root: INFO: Epoch[0] Batch [200] Speed: 8906.41 samples/sec accuracy=0.100625
root: INFO: Epoch[0] Batch [250] Speed: 9453.06 samples/sec accuracy=0.098125
root: INFO: Epoch[0] Batch [300] Speed: 8874.91 samples/sec accuracy=0.093281
root: INFO: Epoch[0] Batch [350] Speed: 6026.26 samples/sec accuracy=0.104063
root: INFO: Epoch[0] Train-accuracy=0.097656
root: INFO: Epoch[0] Time cost=6.069
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099960
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 4614.01 samples/sec accuracy=0.120938
root: INFO: Epoch[0] Batch [100] Speed: 6189.65 samples/sec accuracy=0.096562
root: INFO: Epoch[0] Batch [150] Speed: 6062.91 samples/sec accuracy=0.094844
root: INFO: Epoch[0] Batch [200] Speed: 6710.55 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [250] Speed: 5982.40 samples/sec accuracy=0.099062
root: INFO: Epoch[0] Batch [300] Speed: 7129.64 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 6555.40 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=8.313
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 4543.26 samples/sec accuracy=0.119638
root: INFO: Epoch[0] Batch [100] Speed: 5423.21 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Batch [150] Speed: 5033.73 samples/sec accuracy=0.092813
root: INFO: Epoch[0] Batch [200] Speed: 9233.24 samples/sec accuracy=0.100469
root: INFO: Epoch[0] Batch [250] Speed: 9242.30 samples/sec accuracy=0.099375
root: INFO: Epoch[0] Batch [300] Speed: 10217.41 samples/sec accuracy=0.093125
root: INFO: Epoch[0] Batch [350] Speed: 8666.85 samples/sec accuracy=0.104063
root: INFO: Epoch[0] Train-accuracy=0.092188
root: INFO: Epoch[0] Time cost=7.249
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099960
root: INFO: Start training with [cpu(0)]
root: INFO: Epoch[0] Batch [50] Speed: 8083.13 samples/sec accuracy=0.102656
root: INFO: Epoch[0] Batch [100] Speed: 9373.58 samples/sec accuracy=0.101250
root: INFO: Epoch[0] Batch [150] Speed: 8239.63 samples/sec accuracy=0.092969
root: INFO: Epoch[0] Batch [200] Speed: 8116.80 samples/sec accuracy=0.101094
root: INFO: Epoch[0] Batch [250] Speed: 8595.15 samples/sec accuracy=0.099531
root: INFO: Epoch[0] Batch [300] Speed: 9090.94 samples/sec accuracy=0.093906
root: INFO: Epoch[0] Batch [350] Speed: 9014.57 samples/sec accuracy=0.103750
root: INFO: Epoch[0] Resetting Data Iterator
root: INFO: Epoch[0] Time cost=5.879
root: INFO: Epoch[0] Validation-accuracy=0.099881
root: INFO: final accuracy = 0.099881
root: INFO: Epoch[0] Batch [50] Speed: 9343.27 samples/sec accuracy=0.102635
root: INFO: Epoch[0] Batch [100] Speed: 8417.44 samples/sec accuracy=0.095312
root: INFO: Epoch[0] Batch [150] Speed: 9007.86 samples/sec accuracy=0.101094
root: INFO: Epoch[0] Batch [200] Speed: 9894.47 samples/sec accuracy=0.103125
root: INFO: Epoch[0] Batch [250] Speed: 9899.44 samples/sec accuracy=0.100312
root: INFO: Epoch[0] Batch [300] Speed: 9939.62 samples/sec accuracy=0.095781
root: INFO: Epoch[0] Batch [350] Speed: 10021.53 samples/sec accuracy=0.101250
root: INFO: Epoch[0] Train-accuracy=0.097656
root: INFO: Epoch[0] Time cost=5.316
root: INFO: Epoch[0] Validation-accuracy=0.078521
root: INFO: final accuracy = 0.078425
--------------------- >> end captured logging << ---------------------
[success] 29.22% test_autograd.test_autograd: 68.1240s
[fail] 27.32% test_dtype.test_cifar10: 63.6942s
[success] 21.17% test_bucketing.test_bucket_module: 49.3386s
[success] 14.59% test_mlp.test_mlp: 33.9994s
[success] 7.70% test_conv.test_mnist: 17.9464s
----------------------------------------------------------------------
Ran 5 tests in 237.792s
FAILED (failures=1)
```
## Steps to reproduce
Build and run unit test
|
non_process
|
test fails in ci master build description test fails in ci master build failed build environment info required ci build python cpu mxnet commit hash error message fail test dtype test traceback most recent call last file usr local lib dist packages nose case py line in runtest self test self arg file workspace tests python train test dtype py line in test run train val use module true file workspace tests python train test dtype py line in run assert ret assertionerror begin captured logging root info start training with root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch resetting data iterator root info epoch time cost root info epoch validation accuracy root info final accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch train accuracy root info epoch time cost root info epoch validation accuracy root info final accuracy root info start training with root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch resetting data iterator root info epoch time cost root info epoch validation accuracy root info final accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch train accuracy root info epoch time cost root info epoch validation accuracy root info final accuracy root info start training with root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch resetting data iterator root info epoch time cost root info epoch validation accuracy root info final accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch batch speed samples sec accuracy root info epoch train accuracy root info epoch time cost root info epoch validation accuracy root info final accuracy end captured logging test autograd test autograd test dtype test test bucketing test bucket module test mlp test mlp test conv test mnist ran tests in failed failures steps to reproduce build and run unit test
| 0
|
24,143
| 3,917,074,299
|
IssuesEvent
|
2016-04-21 06:25:10
|
irnawansuprapti/openbiz-cubi
|
https://api.github.com/repos/irnawansuprapti/openbiz-cubi
|
closed
|
health 5600003131hjhj
|
auto-migrated Priority-Medium spam Type-Defect
|
```
A great tip for maintaining good skin is to use a moisturizer every day. These
products infuse your skin with moisture, making it appear supple and radiant.
During winter months, a moisturizer is a must as the cold makes your skin prone
to drying and flaking. Moisturizers can help you look younger.
To keep your skin looking its finest, exfoliate with a bristle brush when you
are in the bath or shower. This treatment will remove dead skin cells to
present newer, smoother skin. Additionally, brushing increases circulation
which helps reduce skin problems, such as acne. Exfoliation helps get the
toxins from your skin as well. http://nitroshredadvice.com/bio-diamond/
```
Original issue reported on code.google.com by `GaylBuc...@gmail.com` on 16 Apr 2015 at 5:56
|
1.0
|
health 5600003131hjhj - ```
A great tip for maintaining good skin is to use a moisturizer every day. These
products infuse your skin with moisture, making it appear supple and radiant.
During winter months, a moisturizer is a must as the cold makes your skin prone
to drying and flaking. Moisturizers can help you look younger.
To keep your skin looking its finest, exfoliate with a bristle brush when you
are in the bath or shower. This treatment will remove dead skin cells to
present newer, smoother skin. Additionally, brushing increases circulation
which helps reduce skin problems, such as acne. Exfoliation helps get the
toxins from your skin as well. http://nitroshredadvice.com/bio-diamond/
```
Original issue reported on code.google.com by `GaylBuc...@gmail.com` on 16 Apr 2015 at 5:56
|
non_process
|
health a great tip for maintaining good skin is to use a moisturizer every day these products infuse your skin with moisture making it appear supple and radiant during winter months a moisturizer is a must as the cold makes your skin prone to drying and flaking moisturizers can help you look younger to keep your skin looking its finest exfoliate with a bristle brush when you are in the bath or shower this treatment will remove dead skin cells to present newer smoother skin additionally brushing increases circulation which helps reduce skin problems such as acne exfoliation helps get the toxins from your skin as well original issue reported on code google com by gaylbuc gmail com on apr at
| 0
|
17,889
| 23,864,042,017
|
IssuesEvent
|
2022-09-07 09:29:24
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[SQL Connector] Upsert Pulsar support code
|
compute/data-processing type/feature
|
In Q3, we need to support the upsert Pulsar mode and test it out with real-world use cases.
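As a sketch of what the SQL surface could look like, here is a hypothetical PyFlink snippet modeled on the upsert-kafka connector; the `upsert-pulsar` connector name and its options are assumptions, not a confirmed API:
```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Declare a changelog table keyed by user_id; writes with the same key
# upsert the row, following upsert-kafka semantics.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
t_env.execute_sql("""
    CREATE TABLE user_balances (
        user_id STRING,
        balance DECIMAL(18, 2),
        PRIMARY KEY (user_id) NOT ENFORCED
    ) WITH (
        'connector' = 'upsert-pulsar',
        'topic' = 'persistent://public/default/user-balances',
        'service-url' = 'pulsar://localhost:6650',
        'admin-url' = 'http://localhost:8080',
        'key.format' = 'json',
        'value.format' = 'json'
    )
""")
```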
|
1.0
|
[SQL Connector] Upsert Pulsar support code - In Q3, we need to support the upsert Pulsar mode and test it out with real-world use cases.
|
process
|
upsert pulsar support code in we need to support the upsert pulsar mode and test it out with real world use cases
| 1
|
130,150
| 18,041,974,715
|
IssuesEvent
|
2021-09-18 07:29:39
|
SasanLabs/VulnerableApp
|
https://api.github.com/repos/SasanLabs/VulnerableApp
|
closed
|
Enhance VulnerableApp for adding Flag based levels like other VulnerableApplication have.
|
enhancement design-document Framework-changes Analysis
|
1. Introduce a new cookie and, based on it, run the entire application in Secure/Insecure mode, etc.
2. Since endpoints are resolved by the application, provide both cookie-based and URL-based resolution.
3. Moving only to cookie-based resolution might not be good for someone who is testing URLs manually, trying to learn. (This is valid as long as we are not converting the app into a game.)
E.g., this is similar to DVWA, where we choose levels and then get the challenge, for gamification of the vulnerable app; see the sketch below.
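A minimal Python/Flask sketch of the cookie-plus-URL resolution order discussed above; the real application is Java, and the route, cookie name, and default level here are illustrative assumptions:
```python
from flask import Flask, request

app = Flask(__name__)
DEFAULT_LEVEL = "SECURE"

@app.route("/challenge")
def challenge():
    # Prefer an explicit ?level= query parameter (manual URL testing),
    # then fall back to the session cookie (gamified flow).
    level = request.args.get("level") or request.cookies.get(
        "vulnerableapp-level", DEFAULT_LEVEL
    )
    return f"Serving challenge at level {level}"
```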
|
1.0
|
Enhance VulnerableApp for adding Flag based levels like other VulnerableApplication have. - 1. Introduce a new cookie and, based on it, run the entire application in Secure/Insecure mode, etc.
2. Since endpoints are resolved by the application, provide both cookie-based and URL-based resolution.
3. Moving only to cookie-based resolution might not be good for someone who is testing URLs manually, trying to learn. (This is valid as long as we are not converting the app into a game.)
E.g., this is similar to DVWA, where we choose levels and then get the challenge, for gamification of the vulnerable app.
|
non_process
|
enhance vulnerableapp for adding flag based levels like other vulnerableapplication have introduce a new cookie and as per cookie run the entire application as secure insecure etc as endpoints are resolved by application so giving a way to go with cookie based resolution and url based resolution only moving to cookie based might not be good for someone who is testing url manually trying to learn this is valid untill we are not converting app into game eg is similar to dvwa where we choose levels and then we get the challenge for gamification of vulnerable app
| 0
|
17,913
| 23,905,713,161
|
IssuesEvent
|
2022-09-09 00:23:36
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
opened
|
Memory subsystem improvements
|
processor air
|
I want to summarize the improvements to the memory subsystem (memory operations + memory chiplet) which I'm hoping to get into the v0.3 release (though maybe not all of them will make it in). These are listed in no particular order.
1. Memory access trace refactoring. Currently, for each memory access we record both the old state of the memory and the new state. While this works fine (and doesn't add any extra columns), it probably makes the constraint system more complicated than it needs to be. An alternative way is to track only a single set of values (e.g., values after the operation) and have a separate column which would track read/write flag - e.g., `0` means it was a read operation, `1` means it was a write operation.
2. Move range checks for `delta` to the stack as described in #229. This should let us get rid of one auxiliary column, but would also require changing opcodes for memory operations. As a part of this work, we'll also need to update how memory lookup rows are computed, and it might be a good opportunity to address #335 as well.
3. Currently, when generating the memory trace we do two things sub-optimally: (1) we compute `delta`'s twice, first in `append_range_checks()` and then in `fill_trace()`; (2) we compute inverses of `delta` one by one. For programs which perform a lot of memory accesses this could be very costly, as a single inversion is equivalent to something like 60 multiplications. A better way would be to compute `delta`'s only once (e.g., in `append_range_checks()`), save them into a vector, and then pass this vector to `fill_trace()`. There, we'd be able to use batch inversion to speed things up considerably (see the sketch after this list).
4. It would be really cool to add something like a `memcopy` operation. This operation would copy memory from one region to another in a single VM cycle. In the memory chiplet, the trace would probably require 3n rows to copy n words, but compared to the alternatives, this would be much more efficient.
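For item 3, here is a batch-inversion (Montgomery's trick) sketch, written in Python for clarity even though the Miden VM itself is Rust. It inverts n nonzero field elements with a single inversion plus roughly 3n multiplications instead of n full inversions:
```python
P = 2**64 - 2**32 + 1  # the Goldilocks prime used by Miden

def batch_invert(values):
    """Invert all (nonzero) values mod P with a single pow() call."""
    # prefix[i] holds values[0] * ... * values[i-1] mod P.
    prefix = [1]
    for v in values:
        prefix.append(prefix[-1] * v % P)
    # One inversion of the running product (Fermat's little theorem).
    inv = pow(prefix[-1], P - 2, P)
    # Walk backwards, peeling off one inverse per element.
    out = [0] * len(values)
    for i in reversed(range(len(values))):
        out[i] = inv * prefix[i] % P
        inv = inv * values[i] % P
    return out

assert batch_invert([2, 3, 5]) == [pow(x, P - 2, P) for x in (2, 3, 5)]
```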
|
1.0
|
Memory subsystem improvements - I want to summarize the improvements to the memory subsystem (memory operations + memory chiplet) which I'm hoping to get into the v0.3 release (though maybe not all of them will make it in). These are listed in no particular order.
1. Memory access trace refactoring. Currently, for each memory access we record both the old state of the memory and the new state. While this works fine (and doesn't add any extra columns), it probably makes the constraint system more complicated than it needs to be. An alternative way is to track only a single set of values (e.g., values after the operation) and have a separate column which would track read/write flag - e.g., `0` means it was a read operation, `1` means it was a write operation.
2. Move range checks for `delta` to the stack as described in #229. This should let us get rid of one auxiliary column, but would also require changing opcodes for memory operations. As a part of this work, we'll also need to update how memory lookup rows are computed, and it might be a good opportunity to address #335 as well.
3. Currently, when generating the memory trace we do two things sub-optimally: (1) we compute `delta`'s twice, first in `append_range_checks()` and then in `fill_trace()`; (2) we compute inverses of `delta` one by one. For programs which perform a lot of memory accesses this could be very costly, as a single inversion is equivalent to something like 60 multiplications. A better way would be to compute `delta`'s only once (e.g., in `append_range_checks()`), save them into a vector, and then pass this vector to `fill_trace()`. There, we'd be able to use batch inversion to speed things up considerably.
4. It would be really cool to add something like a `memcopy` operation. This operation would copy memory from one region to another in a single VM cycle. In the memory chiplet, the trace would probably require 3n rows to copy n words, but compared to the alternatives, this would be much more efficient.
|
process
|
memory subsystem improvements i want to summarize the improvements to memory subsystem memory operations memory chiplet which i m hoping to get into release though maybe not all of them will make it in these are listed in no particular order memory access trace refactoring currently for each memory access we record both the old state of the memory and the new state while this works fine and doesn t add any extra columns it probably makes the constraint system more complicated than it needs to be an alternative way is to track only a single set of values e g values after the operation and have a separate column which would track read write flag e g means it was a read operation means it was a write operation move range checks for delta to the stack as described in this should let us get rid of one auxiliary column but would also require changing opcodes for memory operations as a part of this work we ll also need to update how memory lookup rows are computed and it might be a good opportunity to address as well currently when generating memory trace we do two things sub optimally we compute delta s twice first in append range checks and then in fill trace methods we compute inverses of delta one by one for programs which perform a lot of memory accesses this could be very costly as a single inversion is equivalent to something like multiplications a better way to do it would be to compute delta s only once e g in append range checks save them into a vector and then pass this vector to fill trace there we d be able to use batch inversion to speed things up considerably it would be really cool to add something like a memcopy operation this operation would copy memory from one region to another in a single vm cycle in the memory chiplet the trace would probably require rows to copy n words but compared to the alternatives this would be much more efficient
| 1
|
74,526
| 9,083,619,651
|
IssuesEvent
|
2019-02-17 21:54:49
|
Dhanciles/inTouch-FE
|
https://api.github.com/repos/Dhanciles/inTouch-FE
|
closed
|
Design AllContacts Component
|
design in progress
|
- use figma to develop draft for `AllContacts` component
- build waffle card to implement component
|
1.0
|
Design AllContacts Component - - use figma to develop draft for `AllContacts` component
- build waffle card to implement component
|
non_process
|
design allcontacts component use figma to develop draft for allcontacts component build waffle card to implement component
| 0
|
3,599
| 3,966,931,274
|
IssuesEvent
|
2016-05-03 14:40:31
|
broadinstitute/gatk
|
https://api.github.com/repos/broadinstitute/gatk
|
closed
|
Beat GATK3 performance of HaplotypeCaller (GVCF mode)
|
performance
|
The GATK3 v3.5-0-g36282e4 command lines and numbers:
running on gsa5 (has AVX)
GVCF mode on 32GB of ram
```
time java -jar /humgen/gsa-hpprojects/GATK/bin/current/GenomeAnalysisTK.jar -T HaplotypeCaller -I src/test/resources/large/CEUTrio.HiSeq.WGS.b37.NA12878.20.21.bam -R src/test/resources/large/human_g1k_v37.20.21.fasta -ERC GVCF --out a.gatk3.g.vcf
...
real 3m48.076s
user 8m45.049s
```
GVCF mode on 10GB of ram
```
time java -Xmx10g -Xms10g -jar /humgen/gsa-hpprojects/GATK/bin/current/GenomeAnalysisTK.jar -T HaplotypeCaller -I src/test/resources/large/CEUTrio.HiSeq.WGS.b37.NA12878.20.21.bam -R src/test/resources/large/human_g1k_v37.20.21.fasta -ERC GVCF --out a.gatk3.g.vcf
...
real 3m39.496s
user 10m16.387s
```
|
True
|
Beat GATK3 performance of HaplotypeCaller (GVCF mode) - The GATK3 v3.5-0-g36282e4 command lines and numbers:
running on gsa5 (has AVX)
GVCF mode on 32GB of ram
```
time java -jar /humgen/gsa-hpprojects/GATK/bin/current/GenomeAnalysisTK.jar -T HaplotypeCaller -I src/test/resources/large/CEUTrio.HiSeq.WGS.b37.NA12878.20.21.bam -R src/test/resources/large/human_g1k_v37.20.21.fasta -ERC GVCF --out a.gatk3.g.vcf
...
real 3m48.076s
user 8m45.049s
```
GVCF mode on 10GB of ram
```
time java -Xmx10g -Xms10g -jar /humgen/gsa-hpprojects/GATK/bin/current/GenomeAnalysisTK.jar -T HaplotypeCaller -I src/test/resources/large/CEUTrio.HiSeq.WGS.b37.NA12878.20.21.bam -R src/test/resources/large/human_g1k_v37.20.21.fasta -ERC GVCF --out a.gatk3.g.vcf
...
real 3m39.496s
user 10m16.387s
```
|
non_process
|
beat performance of haplotypecaller gvcf mode the commandlines and numbers running on has avx gvcf mode on of ram time java jar humgen gsa hpprojects gatk bin current genomeanalysistk jar t haplotypecaller i src test resources large ceutrio hiseq wgs bam r src test resources large human fasta erc gvcf out a g vcf real user gvcf mode on of ram time java jar humgen gsa hpprojects gatk bin current genomeanalysistk jar t haplotypecaller i src test resources large ceutrio hiseq wgs bam r src test resources large human fasta erc gvcf out a g vcf real user
| 0
|
450,789
| 13,019,284,497
|
IssuesEvent
|
2020-07-26 21:41:59
|
joehot200/AntiAura
|
https://api.github.com/repos/joehot200/AntiAura
|
opened
|
Improve TPS accounting for Flight and Step
|
bug mid-priority
|
Server-side lag *spikes* have the potential to cause Flight and Step false positives.
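A hypothetical sketch of one way to account for this; the plugin itself is Java, and every name here is illustrative only:
```python
FULL_TPS = 20.0

def adjusted_threshold(base_threshold, recent_tps):
    """Relax a movement-check threshold in proportion to how far the
    server has fallen below 20 TPS, so lag spikes don't flag players."""
    lag_factor = max(recent_tps, 1.0) / FULL_TPS
    return base_threshold / lag_factor

# At 20 TPS the threshold is unchanged; at 10 TPS it doubles.
assert adjusted_threshold(0.5, 20.0) == 0.5
assert adjusted_threshold(0.5, 10.0) == 1.0
```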
|
1.0
|
Improve TPS accounting for Flight and Step - Server-side lag *spikes* have the potential to cause Flight and Step false positives.
|
non_process
|
improve tps accounting for flight and step server side lag spikes have the potential for flight and step false positives
| 0
|
10,821
| 13,609,291,766
|
IssuesEvent
|
2020-09-23 04:50:47
|
googleapis/java-spanner
|
https://api.github.com/repos/googleapis/java-spanner
|
closed
|
Dependency Dashboard
|
api: spanner type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.commons-commons-lang3-3.x -->deps: update dependency org.apache.commons:commons-lang3 to v3.11
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.commons-commons-lang3-3.x -->deps: update dependency org.apache.commons:commons-lang3 to v3.11
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any deps update dependency org apache commons commons to check this box to trigger a request for renovate to run again on this repository
| 1
|
1,132
| 5,146,365,135
|
IssuesEvent
|
2017-01-13 00:52:55
|
GoogleChrome/lighthouse
|
https://api.github.com/repos/GoogleChrome/lighthouse
|
opened
|
Aggregation refactor
|
architecture question
|
I'm currently working on a change so you can easily select what sort of audits you want to run.
In reality this means we need to build a full config that includes all of these things:
> passes, gatherers per pass, audit list, aggregations (both parent and child aggregations), and the audit list within each of those aggregations
There's a lot of relationships here to juggle and generally it works, but one particular sore spot is "aggregations." What are aggregations anyways? :)
-------
I have a proposal for adjusting our `config/default.json`, which will have implications for the pipeline from auditResults => report.
Basically, I'm interested in replacing our [big aggregations blob](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120) with something like the following:
* **`reportCategories` array**. These are basically our parent aggregations like "PWA", "Fancier Stuff" etc. It's very presentational, and how a visual report will be generated with some semblance of order.
* **`auditGroups` array**. The meat of our aggregations, but it's a flat list with no nesting, so `"App can load on offline/flaky connections"` is a sibling to `"Using modern protocols"` and `"Page load performance is fast"`.
* **`auditGroupTags` object**. Describes the few tags that are applied to each `auditGroup`. These will be used for users configuring what they want to evaluate on a given run.
Each `auditGroup` (née aggregation) gains an `id` property, a `reportCategory` property, and a `groupTags` one. They don't have any more `items` property which has 1 or more children, which a lot of code has special handling for. yay. :)
Here's an excerpt of a revised json around the [aggregations part](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120):
```js
], // end of audits array
"reportCategories": [
{
"name": "Progressive Web App",
"description": "These audits validate the aspects of a Progressive Web App.",
"id": "pwa_category",
"scored": true
}, {
"name": "Fancier stuff",
"description": "A list of newer features that you could be using in your app. These audits do not affect your score and are just suggestions.",
"id": "fancy_bp_category",
"scored": false
}, {
"name": "Performance Metrics",
"description": "These encapsulate your app's performance.",
"id": "perf_diagnostics_category",
"scored": false
} // and "Best Practices"..
],
"auditGroupTags": {
"pwa": "Progressive Web App audits",
"perf": "Performance metrics & diagnostics",
"best_practices": "Developer best practices"
},
"auditGroups": [
{
"name": "New JavaScript features",
"id": "fancy_best_practices",
"reportCategory": "fancy_bp_category",
"groupTags": ["best_practices"],
"audits": {
"no-datenow": {
"expectedValue": false
},
"no-console-time": {
"expectedValue": false
}
}
}, {
"name": "App can load on offline/flaky connections",
"description": "Ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience. This is achieved through use of a [Service Worker](https://developers.google.com/web/fundamentals/primers/service-worker/).",
"id": "offline",
"reportCategory": "pwa_category",
"groupTags": ["pwa"],
"audits": {
"service-worker": {
"expectedValue": true,
"weight": 1
},
"works-offline": {
"expectedValue": true,
"weight": 1
}
}
}, {
"name": "Page load performance is fast",
"description": "Users notice if sites and apps don't perform well. These top-level metrics capture the most important perceived performance concerns.",
"id": "perf_metrics",
"reportCategory": "pwa_category",
"groupTags": ["pwa", "perf"],
"audits": {
"first-meaningful-paint": {
"expectedValue": 100,
"weight": 1
},
"speed-index-metric": {
"expectedValue": 100,
"weight": 1
},
"estimated-input-latency": {
"expectedValue": 100,
"weight": 1
},
"time-to-interactive": {
"expectedValue": 100,
"weight": 1
},
"scrolling-60fps": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Content scrolls at 60fps",
"category": "UX"
},
"touch-150ms": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Touch input gets a response in < 150ms",
"category": "UX"
},
"fmp-no-jank": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "App is interactive without jank after the first meaningful paint",
"category": "UX"
}
}
}, {
// .. the rest of the auditGroups ....
```
(Side note: we can now nuke categorizable (today) as it was only there for the "toggle to view report by technology vs user feature".)
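To make the relationships concrete, here is a small sketch (in Python for brevity; Lighthouse itself is JavaScript) of how a report generator might consume the proposed shape, filtering `auditGroups` by the user's requested tags and bucketing them under their parent `reportCategory`. The key names mirror the JSON above:
```python
def build_report(config, requested_tags=None):
    categories = {c["id"]: {**c, "groups": []} for c in config["reportCategories"]}
    for group in config["auditGroups"]:
        if requested_tags and not set(group["groupTags"]) & set(requested_tags):
            continue  # user didn't ask for any tag this group carries
        categories[group["reportCategory"]]["groups"].append(group)
    # Only categories that ended up with at least one group are rendered.
    return [c for c in categories.values() if c["groups"]]

# e.g. build_report(config, {"pwa"}) keeps the "offline" and
# "perf_metrics" groups but drops "fancy_best_practices".
```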
---------
I think this approach would really help everyone understand the code quite a bit better. But curious if others think it improves clarity.
WDYT?
|
1.0
|
Aggregation refactor - I'm currently working on a change so you can easily select what sort of audits you want to run.
In reality this means we need to build a full config that includes all of these things:
> passes, gatherers per pass, audit list, aggregations (both parent and child aggregations), and the audit list within each of those aggregations
There's a lot of relationships here to juggle and generally it works, but one particular sore spot is "aggregations." What are aggregations anyways? :)
-------
I have a proposal for adjusting our `config/default.json`, which will have implications for the pipeline from auditResults => report.
Basically, I'm interested in replacing our [big aggregations blob](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120) with something like the following:
* **`reportCategories` array**. These are basically our parent aggregations like "PWA", "Fancier Stuff" etc. It's very presentational, and how a visual report will be generated with some semblance of order.
* **`auditGroups` array**. The meat of our aggregations, but it's a flat list with no nesting, so `"App can load on offline/flaky connections"` is a sibling to `"Using modern protocols"` and `"Page load performance is fast"`.
* **`auditGroupTags` object**. Describes the few tags that are applied to each `auditGroup`. These will be used for users configuring what they want to evaluate on a given run.
Each `auditGroup` (née aggregation) gains an `id` property, a `reportCategory` property, and a `groupTags` one. They don't have any more `items` property which has 1 or more children, which a lot of code has special handling for. yay. :)
Here's an excerpt of a revised json around the [aggregations part](https://github.com/GoogleChrome/lighthouse/blob/ce7927307d1d4a15d81eea6b4deda57cb38d6c25/lighthouse-core/config/default.json#L98-L120):
```js
], // end of audits array
"reportCategories": [
{
"name": "Progressive Web App",
"description": "These audits validate the aspects of a Progressive Web App.",
"id": "pwa_category",
"scored": true
}, {
"name": "Fancier stuff",
"description": "A list of newer features that you could be using in your app. These audits do not affect your score and are just suggestions.",
"id": "fancy_bp_category",
"scored": false
}, {
"name": "Performance Metrics",
"description": "These encapsulate your app's performance.",
"id": "perf_diagnostics_category",
"scored": false
} // and "Best Practices"..
],
"auditGroupTags": {
"pwa": "Progressive Web App audits",
"perf": "Performance metrics & diagnostics",
"best_practices": "Developer best practices"
},
"auditGroups": [
{
"name": "New JavaScript features",
"id": "fancy_best_practices",
"reportCategory": "fancy_bp_category",
"groupTags": ["best_practices"],
"audits": {
"no-datenow": {
"expectedValue": false
},
"no-console-time": {
"expectedValue": false
}
}
}, {
"name": "App can load on offline/flaky connections",
"description": "Ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience. This is achieved through use of a [Service Worker](https://developers.google.com/web/fundamentals/primers/service-worker/).",
"id": "offline",
"reportCategory": "pwa_category",
"groupTags": ["pwa"],
"audits": {
"service-worker": {
"expectedValue": true,
"weight": 1
},
"works-offline": {
"expectedValue": true,
"weight": 1
}
}
}, {
"name": "Page load performance is fast",
"description": "Users notice if sites and apps don't perform well. These top-level metrics capture the most important perceived performance concerns.",
"id": "perf_metrics",
"reportCategory": "pwa_category",
"groupTags": ["pwa", "perf"],
"audits": {
"first-meaningful-paint": {
"expectedValue": 100,
"weight": 1
},
"speed-index-metric": {
"expectedValue": 100,
"weight": 1
},
"estimated-input-latency": {
"expectedValue": 100,
"weight": 1
},
"time-to-interactive": {
"expectedValue": 100,
"weight": 1
},
"scrolling-60fps": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Content scrolls at 60fps",
"category": "UX"
},
"touch-150ms": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "Touch input gets a response in < 150ms",
"category": "UX"
},
"fmp-no-jank": {
"expectedValue": true,
"weight": 0,
"comingSoon": true,
"description": "App is interactive without jank after the first meaningful paint",
"category": "UX"
}
}
}, {
// .. the rest of the auditGroups ....
```
(Side note: we can now nuke categorizable (today) as it was only there for the "toggle to view report by technology vs user feature".)
---------
I think this approach would really help everyone understand the code quite a bit better. But curious if others think it improves clarity.
WDYT?
|
non_process
|
aggregation refactor i m currently working on a change so you can easily select what sort of audits you want to run in reality this means need to build a full config that includes all these things passes gatherers per pass audit list aggregations both parent and child aggregations and the audit list within each of those aggregations there s a lot of relationships here to juggle and generally it works but one particular sore spot is aggregations what are aggregations anyways i have a proposal for adjusting our config default json which will have implications for the pipeline from auditresults report basically i m interested in replacing our with something like the following reportcategories array these are basically our parent aggregations like pwa fancier stuff etc it s very presentational and how a visual report will be generated with some semblance of order auditgroups array the meat of our aggregations but it s a flat list with no nesting so app can load on offline flaky connections is a sibling to using modern protocols and page load performance is fast auditgrouptags object describes the few tags that are applied to each auditgroup these will be used for users configuring what they want to evaluate on a given run each auditgroup née aggregation gains an id property a reportcategory property and a grouptags one they don t have any more items property which has or more children which a lot of code has special handling for yay here s an excerpt of a revised json around the js end of audits array reportcategories name progressive web app description these audits validate the aspects of a progressive web app id pwa category scored true name fancier stuff description a list of newer features that you could be using in your app these audits do not affect your score and are just suggestions id fancy bp category scored false name performance metrics description these encapsulate your app s performance id perf diagnostics category scored false and best practices auditgrouptags pwa progressive web app audits perf performance metrics diagnostics best practices developer best practices auditgroups name new javascript features id fancy best practices reportcategory fancy bp category grouptags audits no datenow expectedvalue false no console time expectedvalue false name app can load on offline flaky connections description ensuring your web app can respond when the network connection is unavailable or flaky is critical to providing your users a good experience this is achieved through use of a id offline reportcategory pwa category grouptags audits service worker expectedvalue true weight works offline expectedvalue true weight name page load performance is fast description users notice if sites and apps don t perform well these top level metrics capture the most important perceived performance concerns id perf metrics reportcategory pwa category grouptags audits first meaningful paint expectedvalue weight speed index metric expectedvalue weight estimated input latency expectedvalue weight time to interactive expectedvalue weight scrolling expectedvalue true weight comingsoon true description content scrolls at category ux touch expectedvalue true weight comingsoon true description touch input gets a response in category ux fmp no jank expectedvalue true weight comingsoon true description app is interactive without jank after the first meaningful paint category ux the rest of the auditgroups side note we can now nuke categorizable today as it only was there for the toggle to view report by technology vs user feature i think this approach would really help everyone understand the code quite a bit better but curious if others think it improves clarity wdyt
| 0
|
19,716
| 26,073,135,691
|
IssuesEvent
|
2022-12-24 04:26:34
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
Implement `BasePreprocessingLayerTest`
|
contribution-welcome preprocessing low-priority
|
This can handle things like `tf.function` compilation, output shape preservation, etc.
FYI @bhack
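A minimal sketch of what such a base class could look like; `tf.function` compilation and output shape preservation come from the issue text, everything else is an assumption:
```python
import tensorflow as tf

class BasePreprocessingLayerTest(tf.test.TestCase):
    def assert_layer_behaves(self, layer, inputs):
        # Output shape preservation in eager mode.
        eager_out = layer(inputs)
        self.assertEqual(inputs.shape, eager_out.shape)
        # The layer must also survive tf.function compilation.
        graph_out = tf.function(layer)(inputs)
        self.assertEqual(inputs.shape, graph_out.shape)

# Hypothetical usage in a concrete layer's test:
# class RandomShearTest(BasePreprocessingLayerTest):
#     def test_basics(self):
#         images = tf.random.uniform((4, 64, 64, 3))
#         self.assert_layer_behaves(keras_cv.layers.RandomShear(0.3), images)
```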
|
1.0
|
Implement `BasePreprocessingLayerTest` - This can handle things like `tf.function` compilation, output shape preservation, etc.
FYI @bhack
|
process
|
implement basepreprocessinglayertest this can handle things like tf function compilation output shape preservation etc fyi bhack
| 1
|
52,395
| 13,751,537,099
|
IssuesEvent
|
2020-10-06 13:29:04
|
yael-lindman/jenkins
|
https://api.github.com/repos/yael-lindman/jenkins
|
opened
|
CVE-2020-2224 (Medium) detected in matrix-project-1.14.jar
|
security vulnerability
|
## CVE-2020-2224 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>matrix-project-1.14.jar</b></p></summary>
<p>Multi-configuration (matrix) project type.</p>
<p>Library home page: <a href="https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin">https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin</a></p>
<p>Path to dependency file: jenkins/test/pom.xml</p>
<p>Path to vulnerable library: jenkins-ci/plugins/matrix-project/1.14/matrix-project-1.14.jar</p>
<p>
Dependency Hierarchy:
- :x: **matrix-project-1.14.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yael-lindman/jenkins/commit/c3e721338bf5bf4c0292ed374cab9fd4b7c77948">c3e721338bf5bf4c0292ed374cab9fd4b7c77948</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jenkins Matrix Project Plugin 1.16 and earlier does not escape the node names shown in tooltips on the overview page of builds with a single axis, resulting in a stored cross-site scripting vulnerability.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2224>CVE-2020-2224</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2020-07-15/">https://www.jenkins.io/security/advisory/2020-07-15/</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: org.jenkins-ci.plugins:matrix-project:1.17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-2224 (Medium) detected in matrix-project-1.14.jar - ## CVE-2020-2224 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>matrix-project-1.14.jar</b></p></summary>
<p>Multi-configuration (matrix) project type.</p>
<p>Library home page: <a href="https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin">https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin</a></p>
<p>Path to dependency file: jenkins/test/pom.xml</p>
<p>Path to vulnerable library: jenkins-ci/plugins/matrix-project/1.14/matrix-project-1.14.jar</p>
<p>
Dependency Hierarchy:
- :x: **matrix-project-1.14.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yael-lindman/jenkins/commit/c3e721338bf5bf4c0292ed374cab9fd4b7c77948">c3e721338bf5bf4c0292ed374cab9fd4b7c77948</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jenkins Matrix Project Plugin 1.16 and earlier does not escape the node names shown in tooltips on the overview page of builds with a single axis, resulting in a stored cross-site scripting vulnerability.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2224>CVE-2020-2224</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2020-07-15/">https://www.jenkins.io/security/advisory/2020-07-15/</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: org.jenkins-ci.plugins:matrix-project:1.17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in matrix project jar cve medium severity vulnerability vulnerable library matrix project jar multi configuration matrix project type library home page a href path to dependency file jenkins test pom xml path to vulnerable library jenkins ci plugins matrix project matrix project jar dependency hierarchy x matrix project jar vulnerable library found in head commit a href found in base branch master vulnerability details jenkins matrix project plugin and earlier does not escape the node names shown in tooltips on the overview page of builds with a single axis resulting in a stored cross site scripting vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci plugins matrix project step up your open source security game with whitesource
| 0
|
404,502
| 11,858,477,453
|
IssuesEvent
|
2020-03-25 11:31:26
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.pornhub.com - see bug description
|
browser-firefox-reality engine-gecko nsfw priority-critical
|
<!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/50690 -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5ce2f95f5e88e
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: vr video won't come out
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.pornhub.com - see bug description - <!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: browser-fxr -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/50690 -->
<!-- @extra_labels: browser-firefox-reality -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5ce2f95f5e88e
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: vr video won't come out
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description vr video wont come ou steps to reproduce browser configuration none from with ❤️
| 0
|
4,211
| 7,176,692,126
|
IssuesEvent
|
2018-01-31 10:53:43
|
tinyMediaManager/tinyMediaManager
|
https://api.github.com/repos/tinyMediaManager/tinyMediaManager
|
closed
|
TMM couldn't download artwork
|
bug processing
|
Version: 2.9.7
Build: 2017-12-28 20:10
OS: Linux 4.14.14-1-default
JDK: 9.0.4 amd64 Oracle Corporation
__What is the actual behaviour?__
Shows' artwork isn't downloaded
__What is the expected behaviour?__
The artwork is downloaded successfully
__Steps to reproduce:__
Try to download episodes' metainfo (& thumbs).
__Additional__
Have you attached the logfile from the day it happened? [tmm_logs.zip](https://github.com/tinyMediaManager/tinyMediaManager/files/1679044/tmm_logs.zip)
|
1.0
|
TMM couldn't download artwork -
Version: 2.9.7
Build: 2017-12-28 20:10
OS: Linux 4.14.14-1-default
JDK: 9.0.4 amd64 Oracle Corporation
__What is the actual behaviour?__
Shows' artwork isn't downloaded
__What is the expected behaviour?__
The artwork is downloaded successfully
__Steps to reproduce:__
Try to download episodes' metainfo (& thumbs).
__Additional__
Have you attached the logfile from the day it happened? [tmm_logs.zip](https://github.com/tinyMediaManager/tinyMediaManager/files/1679044/tmm_logs.zip)
|
process
|
tmm couldn t download artwork version build os linux default jdk oracle corporation what is the actual behaviour shows artwork isn t downloaded what is the expected behaviour the artwork is downloaded successully steps to reproduce try to download episodes metainfo thumbs additional have you attached the logfile from the day it happened
| 1
|
124,045
| 4,891,151,338
|
IssuesEvent
|
2016-11-18 15:55:34
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
[k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}
|
kind/flake priority/P2
|
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-enormous-cluster/159/
Failed: [k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:79
Expected
<int>: 1
not to be >
<int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:78
```
Previous issues for this test: #26544 #26938 #27595 #30146 #30469 #31374 #31427 #31433 #31589 #31981 #32257 #33711 #33839
|
1.0
|
[k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-enormous-cluster/159/
Failed: [k8s.io] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:79
Expected
<int>: 1
not to be >
<int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/load.go:78
```
Previous issues for this test: #26544 #26938 #27595 #30146 #30469 #31374 #31427 #31433 #31589 #31981 #32257 #33711 #33839
|
non_process
|
load capacity should be able to handle pods per node kubernetes suite failed load capacity should be able to handle pods per node kubernetes suite go src io kubernetes output dockerized go src io kubernetes test load go expected not to be go src io kubernetes output dockerized go src io kubernetes test load go previous issues for this test
| 0
|
24,877
| 2,674,259,745
|
IssuesEvent
|
2015-03-25 00:38:28
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
closed
|
Import newest go-etcd
|
area/introspection dependency/etcd priority/P1 team/master
|
The etcd team fixed the lib and it now reports better errors instead of everything being "501 attempted to connect to each peer twice". We should import that ASAP.
|
1.0
|
Import newest go-etcd - The etcd team fixed the lib and it now reports better errors instead of everything being "501 attempted to connect to each peer twice". We should import that ASAP.
|
non_process
|
import newest go etcd the etcd team fixed the lib and it now reports better errors instead of everything being attempted to connect to each peer twice we should import that asap
| 0
|