Dataset columns:

| column | dtype | lengths / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
---
row: 12,035 · id: 14,738,642,160 · type: IssuesEvent · created_at: 2021-01-07 05:20:29
repo: kdjstudios/SABillingGitlab (https://api.github.com/repos/kdjstudios/SABillingGitlab)
action: closed
title: Check Payments made on the Portal not reflected in SAB
labels: anc-ops anc-process anp-1 ant-bug ant-support has attachment
body:
In GitLab by @kdjstudios on Jul 2, 2018, 08:50
**Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-07-02-97189/conversation
**Server:** Internal
**Client/Site:** Billerica
**Account:** Multi
**Issue:**
I reviewed my e-check report and see that payments were made for the following accounts:

These payments are not reflected in SAB.
Please let me know once this has been reviewed and corrected.
index: 1.0
text_combine:
Check Payments made on the Portal not reflected in SAB - In GitLab by @kdjstudios on Jul 2, 2018, 08:50
**Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-07-02-97189/conversation
**Server:** Internal
**Client/Site:** Billerica
**Account:** Multi
**Issue:**
I reviewed my e-check report and see that payments were made for the following accounts:

These payments are not reflected in SAB.
Please let me know once this has been reviewed and corrected.
label: process
text:
check payments made on the portal not reflected in sab in gitlab by kdjstudios on jul submitted by kimberly gagner helpdesk server internal client site billerica account multi issue i reviewed by e check report and see that payments were made for the following accounts uploads image png these payments are not reflected in sab please let me know once this has been reviewed and corrected
binary_label: 1
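Across the rows in this dump, `text_combine` is a verbatim concatenation of `title` and `body`, `text` looks like a lowercased copy with URLs, digits, and punctuation stripped, and `binary_label` maps `label` to 1 for `process` and 0 for `non_process`. A minimal Python sketch of that derivation, assuming plain regex stripping (the dataset's actual preprocessing pipeline is not shown here, so the exact rules are a guess):

```python
import re

def normalize(title: str, body: str) -> str:
    """Approximate the `text` column: lowercase, drop URLs, digits,
    and punctuation, then collapse runs of whitespace."""
    text = f"{title} {body}".lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop bare URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # drop digits and punctuation
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

def binary_label(label: str) -> int:
    """Approximate the `binary_label` column from the `label` column."""
    return 1 if label == "process" else 0
```

On the rows here this reproduces the general shape of the stored `text` values, though edge cases (markdown, non-ASCII symbols such as `µ`) evidently pass through the real pipeline differently.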
---
row: 443 · id: 2,873,822,775 · type: IssuesEvent · created_at: 2015-06-08 19:07:46
repo: K0zka/kerub (https://api.github.com/repos/K0zka/kerub)
action: opened
title: ensure unique host properties
labels: component:data processing priority: normal
body:
When joining a host, ensure that some unique properties of the host are indeed unique in the installation:
- host address
- system uuid (from dmidecode, if any info is provided)
index: 1.0
text_combine:
ensure unique host properties - When joining a host, ensure that some unique properties of the host are indeed unique in the installation:
- host address
- system uuid (from dmidecode, if any info is provided)
label: process
text:
ensure unique host properties when joining a host ensure that some unique properties of the host are indeed unique in the installation host address system uuid from dmidecode if any info is provided
binary_label: 1

---
row: 161 · id: 2,583,358,884 · type: IssuesEvent · created_at: 2015-02-16 04:31:02
repo: dominikwilkowski/bronzies (https://api.github.com/repos/dominikwilkowski/bronzies)
action: closed
title: REST API highscore needs to mark last entry with `"justadded": true`
labels: bug In process
body:
Currently it saves it into the database... that's silly
index: 1.0
text_combine:
REST API highscore needs to mark last entry with `"justadded": true` - Currently it saves it into the database... that's silly
label: process
text:
rest api highscore needs to mark last entry with justadded true currently it saves it into the database that s silly
binary_label: 1

---
row: 10,612 · id: 13,437,423,364 · type: IssuesEvent · created_at: 2020-09-07 15:54:28
repo: timberio/vector (https://api.github.com/repos/timberio/vector)
action: opened
title: Add support for parsing and normalizing time units in the remap syntax
labels: domain: processing type: feature
body:
With log data, especially after parsing, it's common for numbers to contain units in string format.
## Example
For example, this common Elixir log contains time units (`5.4ms`):
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 5.4ms
```
But the unit can change depending on the duration (`167µs`):
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 167µs
```
In addition to the changing unit, you'll notice we went from floats to integers.
## Proposal
In the same vein that we have `int` and `float` functions, I propose that we add a `duration` function that handles parsing and normalizing. For example, when parsing this log:
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 5.4ms
```
You'll likely end up with a `duration` (or whatever name they want) field:
```js
{
// ...
"duration": "5.4ms",
// ...
}
```
And in order to parse this into a normalized unit, I would recommend the user use the remap syntax to do the following:
```
.duration_seconds = duration(.duration, "s")
del(duration)
```
This would result in:
```js
{
// ...
"duration_seconds": 0.0054,
// ...
}
```
The `duration` function handles all of the intricacies to normalize any value passed to it to seconds. This means:
* Handling unit suffixes.
* Converting units between each other.
## Requirements
- [ ] Recognize [valid time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)#Less_than_one_second) for our use case: `µs` (microsecond), `ns` (nanosecond), `ms` (millisecond), `s` (second), `m` (minute), `h` (hour), `d` (day).
- [ ] Be forgiving with parsing: `10ms` or `10 ms`.
- [ ] Convert and normalize units to the specified unit.
index: 1.0
text_combine:
Add support for parsing and normalizing time units in the remap syntax - With log data, especially after parsing, it's common for numbers to contain units in string format.
## Example
For example, this common Elixir log contains time units (`5.4ms`):
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 5.4ms
```
But the unit can change depending on the duration (`167µs`):
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 167µs
```
In addition to the changing unit, you'll notice we went from floats to integers.
## Proposal
In the same vein that we have `int` and `float` functions, I propose that we add a `duration` function that handles parsing and normalizing. For example, when parsing this log:
```
17:42:51.469 [info] GET /17:42:51.469 [info] Sent 200 in 5.4ms
```
You'll likely end up with a `duration` (or whatever name they want) field:
```js
{
// ...
"duration": "5.4ms",
// ...
}
```
And in order to parse this into a normalized unit, I would recommend the user use the remap syntax to do the following:
```
.duration_seconds = duration(.duration, "s")
del(duration)
```
This would result in:
```js
{
// ...
"duration_seconds": 0.0054,
// ...
}
```
The `duration` function handles all of the intricacies to normalize any value passed to it to seconds. This means:
* Handling unit suffixes.
* Converting units between each other.
## Requirements
- [ ] Recognize [valid time units](https://en.wikipedia.org/wiki/Orders_of_magnitude_(time)#Less_than_one_second) for our use case: `µs` (microsecond), `ns` (nanosecond), `ms` (millisecond), `s` (second), `m` (minute), `h` (hour), `d` (day).
- [ ] Be forgiving with parsing: `10ms` or `10 ms`.
- [ ] Convert and normalize units to the specified unit.
label: process
text:
add support for parsing and normalizing time units in the remap sytnax with log data especially after parsing it s common for numbers to contain units in string format example for example this common elixir log contains time units get sent in but the unit can change depending on the duration get sent in in addition to the changing unit you ll notice we went from floats to integers proposal in the same vein that we have int and float functions i propose that we add a duration function that handles parsing and normalizing for example when parsing this log get sent in you ll likely end up with a duration or whatever name they want field js duration and in order parse this into a normalized unit i would recommend the user use the remap syntax to do the following duration seconds duration duration s del duration this would result in js duration seconds the duration function handles all of the intricacies to normalize any value passed to it to seconds this means handling unit suffixes converting units between each other requirements recognize for our use case µs microsecond ns nanosecond ms millisecond s second m minute h hour d day be forgiving with parsing or ms convert and normalized units to the specified unit
binary_label: 1
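The `duration` proposal in the record above is concrete enough to sketch outside of the remap language. Here is an illustrative Python version; the `UNITS` table and the `duration(value, unit)` signature mirror the issue's description but are assumptions, not Vector's actual implementation:

```python
import re

# Seconds per unit, covering the suffixes listed in the requirements.
UNITS = {"ns": 1e-9, "µs": 1e-6, "us": 1e-6, "ms": 1e-3,
         "s": 1.0, "m": 60.0, "h": 3600.0, "d": 86400.0}

def duration(value: str, unit: str = "s") -> float:
    """Parse strings like '5.4ms' or '10 ms' (forgiving about spaces)
    and convert the result to the requested unit."""
    m = re.fullmatch(r"\s*([0-9.]+)\s*([a-zµ]+)\s*", value)
    if not m or m.group(2) not in UNITS or unit not in UNITS:
        raise ValueError(f"unparseable duration: {value!r}")
    return float(m.group(1)) * UNITS[m.group(2)] / UNITS[unit]
```

So `duration("5.4ms")` normalizes to 0.0054 seconds, matching the worked example in the proposal.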
---
row: 381,255 · id: 11,275,611,320 · type: IssuesEvent · created_at: 2020-01-14 21:10:07
repo: redhat-developer/vscode-openshift-tools (https://api.github.com/repos/redhat-developer/vscode-openshift-tools)
action: closed
title: Clicking 'Open' after cloning component repo cancels the whole workflow
labels: kind/bug priority/major
body:
Creating a component from git repo:
- create a project and an application
- choose a repo and a revision
- clone the repo
- select 'Open' when asked if you want to open the cloned folder
- Aaaand it's gone!
No component is getting created because the whole editor had to reload.
index: 1.0
text_combine:
Clicking 'Open' after cloning component repo cancels the whole workflow - Creating a component from git repo:
- create a project and an application
- choose a repo and a revision
- clone the repo
- select 'Open' when asked if you want to open the cloned folder
- Aaaand it's gone!
No component is getting created because the whole editor had to reload.
label: non_process
text:
clicking open after cloning component repo cancels the whole workflow creating a component from git repo create a project and an application choose a repo and a revision clone the repo select open when asked if you want to open the cloned folder aaaand it s gone no component is getting created because the whole editor had to reload
binary_label: 0

---
row: 20,472 · id: 27,131,439,617 · type: IssuesEvent · created_at: 2023-02-16 10:04:06
repo: bazelbuild/bazel (https://api.github.com/repos/bazelbuild/bazel)
action: closed
title: migrate namespace blaze -> namespace bazel
labels: P3 type: process team-OSS stale
body:
This is a tracking bug for the migration of the namespace in the C++ code.
index: 1.0
text_combine:
migrate namespace blaze -> namespace bazel - This is a tracking bug for the migration of the namespace in the C++ code.
label: process
text:
migrate namespace blaze namespace bazel this is a tracking bug for the migration of the namespace in the c code
binary_label: 1

---
row: 422 · id: 2,854,701,944 · type: IssuesEvent · created_at: 2015-06-02 03:07:45
repo: broadinstitute/hellbender (https://api.github.com/repos/broadinstitute/hellbender)
action: closed
title: Remove use of GenomicsSecretsFile and instead use SecretsFile
labels: Dataflow DataflowPreprocessingPipeline
body:
The GenomicsSecretsFile is no longer needed at all. Fix its use in DataflowCommandLineProgram.
index: 1.0
text_combine:
Remove use of GenomicsSecretsFile and instead use SecretsFile - The GenomicsSecretsFile is no longer needed at all. Fix its use in DataflowCommandLineProgram.
label: process
text:
remove use of genomicssecretsfile and instead use secretsfile the genomicssecretsfile is no longer needed at all fix its use in dataflowcommandlineprogram
binary_label: 1

---
row: 6,332 · id: 9,370,234,214 · type: IssuesEvent · created_at: 2019-04-03 13:03:08
repo: googleapis/google-cloud-cpp (https://api.github.com/repos/googleapis/google-cloud-cpp)
action: closed
title: Reduce the build times to fit in Travis limit.
labels: type: process
body:
Builds starting from a cold cache are failing to complete in the 50 minutes allowed by Travis CI. We need to figure out a way to reduce the build time, or increase the timeout, or both.
index: 1.0
text_combine:
Reduce the build times to fit in Travis limit. - Builds starting from a cold cache are failing to complete in the 50 minutes allowed by Travis CI. We need to figure out a way to reduce the build time, or increase the timeout, or both.
label: process
text:
reduce the build times to fit in travis limit builds starting from a cold cache are failing to complete in the minutes allowed by travis ci we need to figure out a way to reduce the build time or increase the timeout or both
binary_label: 1

---
row: 11,860 · id: 14,665,134,559 · type: IssuesEvent · created_at: 2020-12-29 13:36:37
repo: GoogleCloudPlatform/fda-mystudies (https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies)
action: closed
title: [PM] Studies statuses 'Active'/ 'Paused'/ 'Deactivated' should display in PM for their respective sites/studies.
labels: Feature request P1 Participant manager Process: Fixed Process: Release 2 Process: Reopened Process: Tested QA UX
body:
Currently, if a study is Paused/Deactivated from SB, PM admin/user is unable to identify these study's status in PM.
Recommended approach: Studies statuses 'Active'/ 'Paused'/ 'Deactivated' should display in PM for their respective sites/studies.
index: 4.0
text_combine:
[PM] Studies statuses 'Active'/ 'Paused'/ 'Deactivated' should display in PM for their respective sites/studies. - Currently, if a study is Paused/Deactivated from SB, PM admin/user is unable to identify these study's status in PM.
Recommended approach: Studies statuses 'Active'/ 'Paused'/ 'Deactivated' should display in PM for their respective sites/studies.
label: process
text:
studies statuses active paused deactivated should display in pm for their respective sites studies currently if a study is paused deactivated from sb pm admin user is unable to identify these study s status in pm recommended approach studies statuses active paused deactivated should display in pm for their respective sites studies
binary_label: 1

---
row: 13,121 · id: 15,505,101,790 · type: IssuesEvent · created_at: 2021-03-11 14:59:58
repo: MicrosoftDocs/azure-docs (https://api.github.com/repos/MicrosoftDocs/azure-docs)
action: closed
title: Outdated code in example
labels: Pri2 assigned-to-author automation/svc doc-enhancement process-automation/subsvc triaged
body:
Export-RunAsCertificateToHybridWorker should be updated to use the modern AZ cmdlets.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run runbooks on Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#feedback)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
index: 1.0
text_combine:
Outdated code in example - Export-RunAsCertificateToHybridWorker should be updated to use the modern AZ cmdlets.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run runbooks on Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#feedback)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
label: process
text:
outdated code in example export runascertificatetohybridworker should be updated to use the modern az cmdlets document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
binary_label: 1

---
row: 7,137 · id: 10,280,404,011 · type: IssuesEvent · created_at: 2019-08-26 05:09:22
repo: thewca/wca-regulations (https://api.github.com/repos/thewca/wca-regulations)
action: opened
title: Create a process for reviewing Delegate reports
labels: process
body:
- Check competition venue photos for how well they maintain scramble secrecy.
- Check incidents to see if they were handled well.
index: 1.0
text_combine:
Create a process for reviewing Delegate reports - - Check competition venue photos for how well they maintain scramble secrecy.
- Check incidents to see if they were handled well.
label: process
text:
create a process for reviewing delegate reports check competition venue photos for how well they maintain scramble secrecy check incidents to see if they were handled well
binary_label: 1

---
row: 375,299 · id: 26,156,599,924 · type: IssuesEvent · created_at: 2022-12-30 23:22:19
repo: theseus-os/Theseus (https://api.github.com/repos/theseus-os/Theseus)
action: closed
title: `make view-book` creates book when it already exists
labels: minor documentation
body:
This is probably not a priority. More like an aesthetics/convenience issue. But I think it's good to add it to the issues list so that we can solve it in the future in case we run out of ideas on how to improve the OS further.
Here's what the issue looks like:
<details>
<summary> $ make view-book </summary>
```
2022-12-27 12:24:48 [INFO] (mdbook::book): Book building has started
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the html backend
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the linkcheck backend
2022-12-27 12:24:48 [INFO] (mdbook::renderer): Invoking the "linkcheck" renderer
2022-12-27 12:24:48 [WARN] (mdbook::renderer): The command `mdbook-linkcheck` for backend `linkcheck` was not found, but was marked as optional.
The Theseus Book is now available at "/home/amab/src/Theseus/build/book/html/index.html".
Opening the Theseus book in your browser...
2022-12-27 12:24:48 [INFO] (mdbook::book): Book building has started
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the html backend
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the linkcheck backend
2022-12-27 12:24:48 [INFO] (mdbook::renderer): Invoking the "linkcheck" renderer
2022-12-27 12:24:48 [WARN] (mdbook::renderer): The command `mdbook-linkcheck` for backend `linkcheck` was not found, but was marked as optional.
2022-12-27 12:24:48 [INFO] (mdbook): Opening web browser
```
</details>
index: 1.0
text_combine:
`make view-book` creates book when it already exists - This is probably not a priority. More like an aesthetics/convenience issue. But I think it's good to add it to the issues list so that we can solve it in the future in case we run out of ideas on how to improve the OS further.
Here's what the issue looks like:
<details>
<summary> $ make view-book </summary>
```
2022-12-27 12:24:48 [INFO] (mdbook::book): Book building has started
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the html backend
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the linkcheck backend
2022-12-27 12:24:48 [INFO] (mdbook::renderer): Invoking the "linkcheck" renderer
2022-12-27 12:24:48 [WARN] (mdbook::renderer): The command `mdbook-linkcheck` for backend `linkcheck` was not found, but was marked as optional.
The Theseus Book is now available at "/home/amab/src/Theseus/build/book/html/index.html".
Opening the Theseus book in your browser...
2022-12-27 12:24:48 [INFO] (mdbook::book): Book building has started
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the html backend
2022-12-27 12:24:48 [INFO] (mdbook::book): Running the linkcheck backend
2022-12-27 12:24:48 [INFO] (mdbook::renderer): Invoking the "linkcheck" renderer
2022-12-27 12:24:48 [WARN] (mdbook::renderer): The command `mdbook-linkcheck` for backend `linkcheck` was not found, but was marked as optional.
2022-12-27 12:24:48 [INFO] (mdbook): Opening web browser
```
</details>
label: non_process
text:
make view book creates book when it already exists this is probably not a priority more like an aesthetics convenience issue but i think it s good to add it to the issues list so that we can solve it in the future in case we run out of ideas on how to improve the os further here s how the issue looks like make view book mdbook book book building has started mdbook book running the html backend mdbook book running the linkcheck backend mdbook renderer invoking the linkcheck renderer mdbook renderer the command mdbook linkcheck for backend linkcheck was not found but was marked as optional the theseus book is now available at home amab src theseus build book html index html opening the theseus book in your browser mdbook book book building has started mdbook book running the html backend mdbook book running the linkcheck backend mdbook renderer invoking the linkcheck renderer mdbook renderer the command mdbook linkcheck for backend linkcheck was not found but was marked as optional mdbook opening web browser
binary_label: 0

---
row: 10,688 · id: 13,466,805,734 · type: IssuesEvent · created_at: 2020-09-09 23:52:20
repo: GoogleCloudPlatform/cloud-ops-sandbox (https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox)
action: closed
title: Create GitHub Action to automatically push website to App Engine
labels: lang: shell priority: p2 type: process
body:
In the interest of keeping our project actions in one place, a GitHub action should be added to push the website to App Engine instead of a GCP Cloud Build Trigger.
This can be done by adding a script that runs `gcloud app deploy`.
Once the website URL is redirected to stackdriver-sandbox.dev in GCP and the project is removed from GitHub Pages, then this issue should be resolved (Contingent on Issue #310 being resolved).
index: 1.0
text_combine:
Create GitHub Action to automatically push website to App Engine - In the interest of keeping our project actions in one place, a GitHub action should be added to push the website to App Engine instead of a GCP Cloud Build Trigger.
This can be done by adding a script that runs `gcloud app deploy`.
Once the website URL is redirected to stackdriver-sandbox.dev in GCP and the project is removed from GitHub Pages, then this issue should be resolved (Contingent on Issue #310 being resolved).
label: process
text:
create github action to automatically push website to app engine in the interest of keeping our project actions in one place a github action should be added to push the website to app engine instead of a gcp cloud build trigger this can be done by adding a script that runs gcloud app deploy once the website url is redirected to stackdriver sandbox dev in gcp and the project is removed from github pages then this issue should be resolved contingent on issue being resolved
binary_label: 1

---
row: 13,538 · id: 16,068,685,451 · type: IssuesEvent · created_at: 2021-04-24 01:38:22
repo: rjsears/chia_plot_manager (https://api.github.com/repos/rjsears/chia_plot_manager)
action: closed
title: Fix coin_monitor.py for new log file format
labels: In Process bug
body:
1.1.1 changed the way the logfiles are formatted. Need to update coin_monitor.py for this new logfile.
index: 1.0
text_combine:
Fix coin_monitor.py for new log file format - 1.1.1 changed the way the logfiles are formatted. Need to update coin_monitor.py for this new logfile.
label: process
text:
fix coin monitor py for new log file format changed the way the logfiles are formatted need to update coin monitor py for this new logfile
binary_label: 1

---
row: 2,519 · id: 5,287,528,607 · type: IssuesEvent · created_at: 2017-02-08 12:39:09
repo: opentrials/opentrials (https://api.github.com/repos/opentrials/opentrials)
action: closed
title: Setup Sentry for our processors and collectors
labels: 4. Ready for Review Collectors Data Processors
body:
It'll allow us to keep track of exceptions in our data pipeline processes
index: 1.0
text_combine:
Setup Sentry for our processors and collectors - It'll allow us to keep track of exceptions in our data pipeline processes
label: process
text:
setup sentry for our processors and collectors it ll allow us to keep track of exceptions in our data pipeline processes
binary_label: 1

---
row: 3,044 · id: 6,040,681,627 · type: IssuesEvent · created_at: 2017-06-10 16:38:58
repo: kangarko/ChatControl-Pro (https://api.github.com/repos/kangarko/ChatControl-Pro)
action: closed
title: Packet Permissions (pt. 2)
labels: [queued / Your issue will be processed shortly]
body:
Using "ignore perm (perm)" does not work in packets.txt, as when reloading the plugin I get the error of: http://prntscr.com/f928q0
Config section:
match ^DeluxeMenus*
ignore perm chc.dm
then deny
Again, this is in my packets.txt, not rules.txt file.
ALSO, another question:
Is it possible to replace multiple lines into one line? Example: Executing command /dm displays two multiple lines of text in chat "DeluxeMenus version\nCreated by extended_clip", and I would like to replace both of them into one line of text to be displayed in chat with a custom message such as "No permission".
index: 1.0
text_combine:
Packet Permissions (pt. 2) - Using "ignore perm (perm)" does not work in packets.txt, as when reloading the plugin I get the error of: http://prntscr.com/f928q0
Config section:
match ^DeluxeMenus*
ignore perm chc.dm
then deny
Again, this is in my packets.txt, not rules.txt file.
ALSO, another question:
Is it possible to replace multiple lines into one line? Example: Executing command /dm displays two multiple lines of text in chat "DeluxeMenus version\nCreated by extended_clip", and I would like to replace both of them into one line of text to be displayed in chat with a custom message such as "No permission".
label: process
text:
packet permissions pt using ignore perm perm does not work in packets txt as when reloading the plugin i get the error of config section match deluxemenus ignore perm chc dm then deny again this is in my packets txt not rules txt file also another question is it possible to replace multiple lines into one line example executing command dm displays two multiple lines of text in chat deluxemenus version ncreated by extended clip and i would like to replace both of them into one line of text to be displayed in chat with a custom message such as no permission
binary_label: 1

---
row: 19,426 · id: 25,583,399,330 · type: IssuesEvent · created_at: 2022-12-01 07:13:39
repo: prisma/prisma (https://api.github.com/repos/prisma/prisma)
action: opened
title: multiSchema + migrate: decide whether the _prisma_migrations table living in the search path's first schema is correct
labels: process/candidate tech/engines/migration engine team/schema topic: multiSchema
body:
Pros:
- Mostly works
- Avoids potential footguns with more configuration knobs (accidental changes)
Cons:
- The Prisma schema defines the schemas the application owns and manages. The migrations table is potentially in none of these.
index: 1.0
text_combine:
multiSchema + migrate: decide whether the _prisma_migrations table living in the search path's first schema is correct - Pros:
- Mostly works
- Avoids potential footguns with more configuration knobs (accidental changes)
Cons:
- The Prisma schema defines the schemas the application owns and manages. The migrations table is potentially in none of these.
label: process
text:
multischema migrate decide whether the prisma migrations table living in the search path s first schema is correct pros mostly works avoids potential footguns with more configuration knobs accidental changes cons the prisma schema defines the schemas the application owns and manages the migrations table is potentially in none of these
binary_label: 1

---
row: 4,279 · id: 7,190,585,825 · type: IssuesEvent · created_at: 2018-02-02 17:46:40
repo: Great-Hill-Corporation/quickBlocks (https://api.github.com/repos/Great-Hill-Corporation/quickBlocks)
action: closed
title: Genesis blocks
labels: libs-etherlib status-inprocess type-question
body:
Merge of issue https://github.com/Great-Hill-Corporation/ethslurp/issues/94
Block zero shows 8893 transactions on etherscan.io: https://etherscan.io/block/0.
The file that contains these 8893 "transactions" (they must have been internal transactions) is here: https://github.com/blockapps/blockapps-data.big/blob/master/livenetGenesis.json (or at least it was). The file is copied here for safe keeping: http://quickblocks.io/do_not_remove_genesis_data/genesis.json
From https://github.com/Great-Hill-Corporation/ethslurp/issues/100
index: 1.0
text_combine:
Genesis blocks - Merge of issue https://github.com/Great-Hill-Corporation/ethslurp/issues/94
Block zero shows 8893 transactions on etherscan.io: https://etherscan.io/block/0.
The file that contains these 8893 "transactions" (they must have been internal transactions) is here: https://github.com/blockapps/blockapps-data.big/blob/master/livenetGenesis.json (or at least it was). The file is copied here for safe keeping: http://quickblocks.io/do_not_remove_genesis_data/genesis.json
From https://github.com/Great-Hill-Corporation/ethslurp/issues/100
label: process
text:
genesis blocks merge of issue block zero shows transactions on etherscan io the file that contains these transactions they must have been internal transactions is here or at least it was the file is copied here for safe keeping from
binary_label: 1

---
row: 674,583 · id: 23,058,508,097 · type: IssuesEvent · created_at: 2022-07-25 07:46:52
repo: fredo-ai/Fredo-Public (https://api.github.com/repos/fredo-ai/Fredo-Public)
action: closed
title: New command /learn
labels: priority-1
body:
will present a link to the Fredo knowledge base: https://bit.ly/3yfBIYS
here is a flow: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764528670249153&cot=14
Help menu: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764519491476452&cot=14
**It should be in every menu in all platforms.**
index: 1.0
text_combine:
New command /learn - will present a link to the Fredo knowledge base: https://bit.ly/3yfBIYS
here is a flow: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764528670249153&cot=14
Help menu: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764519491476452&cot=14
**It should be in every menu in all platforms.**
label: non_process
text:
new command learn will present a link to the fredo knowledge base here is a flow help menu it should be in every menu in all platforms
binary_label: 0

---
row: 188,403 · id: 22,046,372,868 · type: IssuesEvent · created_at: 2022-05-30 02:30:34
repo: Check-den-Fakt/WebScraper (https://api.github.com/repos/Check-den-Fakt/WebScraper)
action: closed
title: WS-2017-0330 (Medium) detected in mime-1.2.6.tgz - autoclosed
labels: security vulnerability
body:
## WS-2017-0330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mime-1.2.6.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.6.tgz">https://registry.npmjs.org/mime/-/mime-1.2.6.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/WebScraper/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/WebScraper/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- fetch-node-0.0.1.tgz (Root Library)
- hooks-node-0.0.1.tgz
- express-3.0.3.tgz
- send-0.1.0.tgz
- :x: **mime-1.2.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Check-den-Fakt/WebScraper/commit/86338e55dd24dcf9dfff1dcc910171aad8422123">86338e55dd24dcf9dfff1dcc910171aad8422123</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of mime (1.0.0 through 1.4.0 and 2.0.0 through 2.0.2) are vulnerable to regular expression denial of service.
<p>Publish Date: 2017-09-27
<p>URL: <a href=https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0>WS-2017-0330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0">https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0</a></p>
<p>Release Date: 2019-04-03</p>
<p>Fix Resolution: 1.4.1,2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
WS-2017-0330 (Medium) detected in mime-1.2.6.tgz - autoclosed - ## WS-2017-0330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mime-1.2.6.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.6.tgz">https://registry.npmjs.org/mime/-/mime-1.2.6.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/WebScraper/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/WebScraper/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- fetch-node-0.0.1.tgz (Root Library)
- hooks-node-0.0.1.tgz
- express-3.0.3.tgz
- send-0.1.0.tgz
- :x: **mime-1.2.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Check-den-Fakt/WebScraper/commit/86338e55dd24dcf9dfff1dcc910171aad8422123">86338e55dd24dcf9dfff1dcc910171aad8422123</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of mime (1.0.0 throw 1.4.0 and 2.0.0 throw 2.0.2), are vulnerable to regular expression denial of service.
<p>Publish Date: 2017-09-27
<p>URL: <a href=https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0>WS-2017-0330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0">https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0</a></p>
<p>Release Date: 2019-04-03</p>
<p>Fix Resolution: 1.4.1,2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
ws medium detected in mime tgz autoclosed ws medium severity vulnerability vulnerable library mime tgz a comprehensive library for mime type mapping library home page a href path to dependency file tmp ws scm webscraper package json path to vulnerable library tmp ws scm webscraper node modules mime package json dependency hierarchy fetch node tgz root library hooks node tgz express tgz send tgz x mime tgz vulnerable library found in head commit a href vulnerability details affected version of mime throw and throw are vulnerable to regular expression denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
7,280
| 10,431,983,021
|
IssuesEvent
|
2019-09-17 10:13:55
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
GoAccess submitting aggregate data to MySQL
|
log-processing question
|
Is it possible to make GoAccess count views from Apache log (single server) in real-time and submit the aggregate data to MySQL according some threshold? (ex: each 5 min or 1KB)
|
1.0
|
GoAccess submitting aggregate data to MySQL - Is it possible to make GoAccess count views from Apache log (single server) in real-time and submit the aggregate data to MySQL according some threshold? (ex: each 5 min or 1KB)
|
process
|
goaccess submitting aggregate data to mysql is it possible to make goaccess count views from apache log single server in real time and submit the aggregate data to mysql according some threshold ex each min or
| 1
|
18,348
| 24,472,671,362
|
IssuesEvent
|
2022-10-07 22:04:20
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
closed
|
Add Scottish Forestry as source
|
good first issue data processing back end
|
**Is your feature request related to a problem? Please describe.**
Scottish Forestry have an open data portal and it looks like it's on arcgis so we may be able to use our existing arcgis extractor.
https://open-data-scottishforestry.hub.arcgis.com/
|
1.0
|
Add Scottish Forestry as source - **Is your feature request related to a problem? Please describe.**
Scottish Forestry have an open data portal and it looks like it's on arcgis so we may be able to use our existing arcgis extractor.
https://open-data-scottishforestry.hub.arcgis.com/
|
process
|
add scottish forestry as source is your feature request related to a problem please describe scottish forestry have an open data portal and it looks like it s on arcgis so we may be able to use our existing arcgis extractor
| 1
|
63,955
| 15,766,573,089
|
IssuesEvent
|
2021-03-31 15:12:01
|
LuaJIT/LuaJIT
|
https://api.github.com/repos/LuaJIT/LuaJIT
|
closed
|
LuaJIT 2.1 no longer compiles under Alpine Linux with default grep
|
2.1 bug build system
|
I do not know if this is a real issue, but this information can be useful to someone.
Since the commit b9d523965b3f55b19345a1ed1ebc92e431747ce1, LuaJIT 2.1 no longer compiles under Alpine Linux with the default grep.
This seems to come from the introduction of the grep option `U` which is not recognized with the default alpine `grep` command.
This can be tested with the following Dockerfile:
```
FROM alpine:3.13
RUN mkdir -p /usr/src/LuaJIT && \
apk add --no-cache build-base
COPY . /usr/src/LuaJIT/
WORKDIR /usr/src/LuaJIT
RUN make && \
make install && \
ln -sf luajit-2.1.0-beta3 /usr/local/bin/luajit
```
This is solved by adding the `grep` package that install `GNU grep`, replacing the `BusyBox grep`.
```
FROM alpine:3.13
RUN mkdir -p /usr/src/LuaJIT && \
apk add --no-cache build-base grep
COPY . /usr/src/LuaJIT/
WORKDIR /usr/src/LuaJIT
RUN make && \
make install && \
ln -sf luajit-2.1.0-beta3 /usr/local/bin/luajit
```
|
1.0
|
LuaJIT 2.1 no longer compiles under Alpine Linux with default grep - I do not know if this is a real issue, but this information can be useful to someone.
Since the commit b9d523965b3f55b19345a1ed1ebc92e431747ce1, LuaJIT 2.1 no longer compiles under Alpine Linux with the default grep.
This seems to come from the introduction of the grep option `U` which is not recognized with the default alpine `grep` command.
This can be tested with the following Dockerfile:
```
FROM alpine:3.13
RUN mkdir -p /usr/src/LuaJIT && \
apk add --no-cache build-base
COPY . /usr/src/LuaJIT/
WORKDIR /usr/src/LuaJIT
RUN make && \
make install && \
ln -sf luajit-2.1.0-beta3 /usr/local/bin/luajit
```
This is solved by adding the `grep` package that install `GNU grep`, replacing the `BusyBox grep`.
```
FROM alpine:3.13
RUN mkdir -p /usr/src/LuaJIT && \
apk add --no-cache build-base grep
COPY . /usr/src/LuaJIT/
WORKDIR /usr/src/LuaJIT
RUN make && \
make install && \
ln -sf luajit-2.1.0-beta3 /usr/local/bin/luajit
```
|
non_process
|
luajit no longer compiles under alpine linux with default grep i do not know if this is a real issue but this information can be useful to someone since the commit luajit no longer compiles under alpine linux with the default grep this seems to come from the introduction of the grep option u which is not recognized with the default alpine grep command this can be tested with the following dockerfile from alpine run mkdir p usr src luajit apk add no cache build base copy usr src luajit workdir usr src luajit run make make install ln sf luajit usr local bin luajit this is solved by adding the grep package that install gnu grep replacing the busybox grep from alpine run mkdir p usr src luajit apk add no cache build base grep copy usr src luajit workdir usr src luajit run make make install ln sf luajit usr local bin luajit
| 0
|
435,807
| 12,541,611,686
|
IssuesEvent
|
2020-06-05 12:40:37
|
gnosygnu/xowa
|
https://api.github.com/repos/gnosygnu/xowa
|
closed
|
strange chars �������
|
[effort 3 - less than a week] [priority 2 - major] [resolved - commit] [risk 2 - major] [schedule 1 - within days] [type - bug] core - parser xtn - lua
|
the page `en.wiktionary.org/wiki/齧り付く`
looks like

On investigation `Module:ja` seems to be the culprit
(its looking like another regex issue)
This module contains the function
function export.rm_spaces_hyphens(f)
local text = type(f) == 'table' and f.args[1] or f
text = str_gsub(text, '.', { [' '] = '', ['-'] = '', ['.'] = '', ['\''] = '' })
text = str_gsub(text, ' ', '')
return text
end
It is the first `str_gsub` that is causing the problem
Looking at StringLib.java (in luaj sources), it seems that UTF-8 chars are not processed properly
|
1.0
|
strange chars ������� - the page `en.wiktionary.org/wiki/齧り付く`
looks like

On investigation `Module:ja` seems to be the culprit
(its looking like another regex issue)
This module contains the function
function export.rm_spaces_hyphens(f)
local text = type(f) == 'table' and f.args[1] or f
text = str_gsub(text, '.', { [' '] = '', ['-'] = '', ['.'] = '', ['\''] = '' })
text = str_gsub(text, ' ', '')
return text
end
It is the first `str_gsub` that is causing the problem
Looking at StringLib.java (in luaj sources), it seems that UTF-8 chars are not processed properly
|
non_process
|
strange chars ������� the page en wiktionary org wiki 齧り付く looks like on investigation module ja seems to be the culprit its looking like another regex issue this module contains the function function export rm spaces hyphens f local text type f table and f args or f text str gsub text text str gsub text nbsp return text end it is the first str gsub that is causing the problem looking at stringlib java in luaj sources it seems that utf chars are not processed properly
| 0
|
12,145
| 9,602,845,909
|
IssuesEvent
|
2019-05-10 15:30:05
|
fluencelabs/fluence
|
https://api.github.com/repos/fluencelabs/fluence
|
closed
|
IPFS failover
|
~infrastructure
|
- Run 2+ separate IPFS instances (on different datacenters)
- Send all requests to a single node, and failover to the next one
- Enable healthchecks for them (in nagios and in caddy for failover)
As availability of IPFS is crucial for Fluence, it should be tolerant to a single IPFS instance failures.
related #667
|
1.0
|
IPFS failover - - Run 2+ separate IPFS instances (on different datacenters)
- Send all requests to a single node, and failover to the next one
- Enable healthchecks for them (in nagios and in caddy for failover)
As availability of IPFS is crucial for Fluence, it should be tolerant to a single IPFS instance failures.
related #667
|
non_process
|
ipfs failover run separate ipfs instances on different datacenters send all requests to a single node and failover to the next one enable healthchecks for them in nagios and in caddy for failover as availability of ipfs is crucial for fluence it should be tolerant to a single ipfs instance failures related
| 0
|
8,119
| 11,303,179,038
|
IssuesEvent
|
2020-01-17 19:28:05
|
bcgov/entity
|
https://api.github.com/repos/bcgov/entity
|
closed
|
OCM Team Sizing
|
OCM_Int Processes
|
Nov 7 - decided to use regular T-shirt sizes
OLD:
VERY MAJOR (VM):
(1) Approvals required by GCPE, IDIM and Executive and/or
(2) >14 hours of effort to create the first draft
MAJOR (J):
(1) Approvals required by GCPE & IDIM and/or
(2) >7 and <14 hours of effort to create the first draft
MEDIUM (D):
(1) Review/approval by two of business, POs, and OCM team and/or
(2) >3 and <7 hours to create
MINOR (M):L
(1) no approval outside OCM team and/or
(2) <3 hours to create
|
1.0
|
OCM Team Sizing - Nov 7 - decided to use regular T-shirt sizes
OLD:
VERY MAJOR (VM):
(1) Approvals required by GCPE, IDIM and Executive and/or
(2) >14 hours of effort to create the first draft
MAJOR (J):
(1) Approvals required by GCPE & IDIM and/or
(2) >7 and <14 hours of effort to create the first draft
MEDIUM (D):
(1) Review/approval by two of business, POs, and OCM team and/or
(2) >3 and <7 hours to create
MINOR (M):L
(1) no approval outside OCM team and/or
(2) <3 hours to create
|
process
|
ocm team sizing nov decided to use regular t shirt sizes old very major vm approvals required by gcpe idim and executive and or hours of effort to create the first draft major j approvals required by gcpe idim and or and hours of effort to create the first draft medium d review approval by two of business pos and ocm team and or and hours to create minor m l no approval outside ocm team and or hours to create
| 1
|
7,881
| 11,047,305,796
|
IssuesEvent
|
2019-12-09 18:41:24
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
ntr:rhamnolipid biosynthesis
|
New term request multi-species process
|
rhamnolipid biosynthesis
positive regulation of rhamnolipid biosynthesis
The chemical reactions and pathways resulting in the formation/production of a glycolipid surfactant that acts as a bacterial biofilm dispersal
Found in Pseudomonas aeruginosa and other xxx bacteria
(I see Mycolicibacterium and Streptomyces? )
synonyms rhamnolipid production
parents
GO:0009247 glycolipid biosynthetic process
PMID:28715477
|
1.0
|
ntr:rhamnolipid biosynthesis - rhamnolipid biosynthesis
positive regulation of rhamnolipid biosynthesis
The chemical reactions and pathways resulting in the formation/production of a glycolipid surfactant that acts as a bacterial biofilm dispersal
Found in Pseudomonas aeruginosa and other xxx bacteria
(I see Mycolicibacterium and Streptomyces? )
synonyms rhamnolipid production
parents
GO:0009247 glycolipid biosynthetic process
PMID:28715477
|
process
|
ntr rhamnolipid biosynthesis rhamnolipid biosynthesis positive regulation of rhamnolipid biosynthesis the chemical reactions and pathways resulting in the formation production of a glycolipid surfactant that acts as a bacterial biofilm dispersal found in pseudomonas aeruginosa and other xxx bacteria i see mycolicibacterium and streptomyces synonyms rhamnolipid production parents go glycolipid biosynthetic process pmid
| 1
|
1,000
| 3,463,227,049
|
IssuesEvent
|
2015-12-21 08:39:35
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Видача копій, витягів з розпоряджень міського голови, рішень, прийнятих міською радою та виконавчим комітетом (Рефакторинг)
|
In process of testing
|
Рефакторинг БП для:
Днепропетровск
Нетишин (Хмельницкая обл.)
Житомир
Кузнецовск (Ровенская)
Ужгород
Бердянск (Запорожская)
Вознесенск (Николаевская)
Строчка в Сервисе 4
Сервис-дата:
609
109
107
298
101
98
|
1.0
|
Видача копій, витягів з розпоряджень міського голови, рішень, прийнятих міською радою та виконавчим комітетом (Рефакторинг) - Рефакторинг БП для:
Днепропетровск
Нетишин (Хмельницкая обл.)
Житомир
Кузнецовск (Ровенская)
Ужгород
Бердянск (Запорожская)
Вознесенск (Николаевская)
Строчка в Сервисе 4
Сервис-дата:
609
109
107
298
101
98
|
process
|
видача копій витягів з розпоряджень міського голови рішень прийнятих міською радою та виконавчим комітетом рефакторинг рефакторинг бп для днепропетровск нетишин хмельницкая обл житомир кузнецовск ровенская ужгород бердянск запорожская вознесенск николаевская строчка в сервисе сервис дата
| 1
|
758
| 3,075,824,827
|
IssuesEvent
|
2015-08-20 15:24:46
|
jghibiki/mopey
|
https://api.github.com/repos/jghibiki/mopey
|
opened
|
Current Song + Spam + Upvote + Downvote tracking
|
API Database Service
|
One part of this service that i know we talked about but never made it into the diagram was that there should be some way of marking which song is playing - we will need to track which users have made upvotes, downvotes, or spam notices for this song as well as be able to have a client api endpoint that gets the currently playing song. Because of that, to complete the Service we need to add a model called CurrentSong which will only ever have 1 row at a time. The model should have a 1 - 1 relationship with a song. We will also need to add models for upvote, downvote, and SpamReport. Each of these tables simply takes a user (and of course it's own id). When a song is playing, if a user upvotes/downvotes/reports spam we'd check the database to see if the user exists in the respective table (if they do we do nothing) if they do not, then we add them to the table and modify the song requestor's karma value. Then at the end of the song, the service should have an api endpoint to clear all three of the tables, and remove the song currently in the CurrentSong table.
|
1.0
|
Current Song + Spam + Upvote + Downvote tracking - One part of this service that i know we talked about but never made it into the diagram was that there should be some way of marking which song is playing - we will need to track which users have made upvotes, downvotes, or spam notices for this song as well as be able to have a client api endpoint that gets the currently playing song. Because of that, to complete the Service we need to add a model called CurrentSong which will only ever have 1 row at a time. The model should have a 1 - 1 relationship with a song. We will also need to add models for upvote, downvote, and SpamReport. Each of these tables simply takes a user (and of course it's own id). When a song is playing, if a user upvotes/downvotes/reports spam we'd check the database to see if the user exists in the respective table (if they do we do nothing) if they do not, then we add them to the table and modify the song requestor's karma value. Then at the end of the song, the service should have an api endpoint to clear all three of the tables, and remove the song currently in the CurrentSong table.
|
non_process
|
current song spam upvote downvote tracking one part of this service that i know we talked about but never made it into the diagram was that there should be some way of marking which song is playing we will need to track which users have made upvotes downvotes or spam notices for this song as well as be able to have a client api endpoint that gets the currently playing song because of that to complete the service we need to add a model called currentsong which will only ever have row at a time the model should have a relationship with a song we will also need to add models for upvote downvote and spamreport each of these tables simply takes a user and of course it s own id when a song is playing if a user upvotes downvotes reports spam we d check the database to see if the user exists in the respective table if they do we do nothing if they do not then we add them to the table and modify the song requestor s karma value then at the end of the song the service should have an api endpoint to clear all three of the tables and remove the song currently in the currentsong table
| 0
|
755,813
| 26,441,025,764
|
IssuesEvent
|
2023-01-16 00:00:18
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
[Python+gevent] Python client gets stuck while reading from grpc stream under load
|
kind/bug lang/Python priority/P2 disposition/requires reporter action
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version `1.35.0`. Client is in python and server is in golang.
### What operating system (Linux, Windows,...) and version?
Running in docker with Ubuntu 21.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
Client is using python with gevent:
```
Python 3.7.3
grpcio==1.35.0
protobuf==3.12.2
gevent==21.1.2
```
Server is in Golang
`google.golang.org/grpc v1.35.0`
### What did you do?
We have a golang server that implements a server-side streaming endpoint to stream files.
Proto definition of the streaming response
```
message ReadFileResp {
optional bytes content = 1;
}
```
Golang server code that sends bytes on the stream
```
buffer := make([]byte, ReadFileStreamBufferSizeKB*1024)
for {
n, readErr := reader.Read(buffer)
bytesRead += n
streamErr := stream.Send(ReadFileResp{Content: buffer[:n]})
if streamErr != nil {
msg := fmt.Sprintf("Failed to send contents to stream: %v", streamErr)
return status.NewError(status.Internal, streamErr, msg).GRPCErr()
}
if readErr == io.EOF {
return nil
} else if readErr != nil {
msg := fmt.Sprintf("Failed to read content from file. Path: %v. Err: %v", path, readErr)
return status.NewError(status.Internal, readErr, msg).GRPCErr()
}
}
```
Python client code reads from the stream and collects the data into a bytearray
```
content = bytearray()
try:
for read_resp in in_stream:
if len(read_resp.content) > 0:
content.extend(read_resp.content)
except Exception as e: # catch all exceptions to align with thrift file client
msg = 'Readfile RPC stream error: {}'.format(e)
logging.error(msg)`
```
We are calling this server-side streaming api from the python client under load, making approximately 40-80 concurrent requests at a time and running for 10s of minutes.
### What did you expect to see?
Expect to see the streaming read requests complete.
If the load is too high, we expect the latency to increase or the requests to fail.
### What did you see instead?
After about 10-15 minutes of running under load, all calls to this streaming api get stuck in the python client.
- Once a particular client node enters this state, it remains stuck indefinitely. Reducing/stopping the load does not help it recover. Restarting the server does not help, but restarting that particular client does.
- While a client node is in this state, other grpc clients nodes are able to continue successfully communicating with the grpc server.
- This happens with both streaming and unary rpc calls.
- If we call the streaming api after the client gets stuck, we can see the request successfully reaching the server and the server sending the contents back on the stream. However, on the client side we never receive the contents of the stream.
- Once the client gets stuck, subsequent calls to the streaming api seem to increase the CPU usage, which never goes back down (maybe we are entering an infinite loop?). Dumping the cpu profile of such a stuck client shows the following information. Even with no load on the system, ncalls for the top functions keeps going up.
```
`{"status": "OK", "stats": ["
Clock type: CPU
Ordered by: totaltime, desc
name ncall tsub ttot tavg
..vent/thread.py:99 LockType.acquire 171.. 0.762530 2.090877 0.000012
..ges/grpc/_common.py:105 _wait_once 570.. 0.040233 1.147533 0.000020
...7/threading.py:264 Condition.wait 570.. 1.240049 1.145259 0.000020
..timeout.py:264 _start_new_or_dummy 57024 0.143530 1.139789 0.000020
..eout.py:239 Timeout._on_expiration 57024 0.738490 1.001224 0.000018
..es/gevent/timeout.py:243 start_new 57024 0.325641 0.996259 0.000017
../threading.py:198 _RLock._is_owned 57022 0.139721 0.445448 0.000008
..event/timeout.py:219 Timeout.start 57024 0.258610 0.381557 0.000007
..ing.py:184 _RLock._acquire_restore 57022 0.179657 0.324623 0.000006
..ages/gevent/thread.py:72 get_ident 57022 0.189061 0.305727 0.000005
..nt/timeout.py:349 Timeout.__exit__ 57024 0.149478 0.262571 0.000005
..nt/timeout.py:199 Timeout.__init__ 57024 0.226992 0.226992 0.000004
..t/timeout.py:341 Timeout.__enter__ 57024 0.125360 0.188608 0.000003
..pc/_channel.py:788 _response_ready 57022 0.165941 0.165941 0.000003
..ent/timeout.py:285 Timeout.pending 114.. 0.130459 0.130459 0.000001
..event/timeout.py:302 Timeout.close 57024 0.113093 0.113093 0.000002
..eading.py:188 _RLock._release_save 57022 0.111618 0.111618 0.00000
```
### Anything else we should know about your project / environment?
This only seems to happen while running the python client code in docker.
If I run the server and client code locally on my Mac, I am unable to reproduce the issue. However, if I run the client code inside docker on my Mac laptop, it reproduces consistently.
|
1.0
|
[Python+gevent] Python client gets stuck while reading from grpc stream under load - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
grpc version `1.35.0`. Client is in python and server is in golang.
### What operating system (Linux, Windows,...) and version?
Running in docker with Ubuntu 21.04
### What runtime / compiler are you using (e.g. python version or version of gcc)
Client is using python with gevent:
```
Python 3.7.3
grpcio==1.35.0
protobuf==3.12.2
gevent==21.1.2
```
Server is in Golang
`google.golang.org/grpc v1.35.0`
### What did you do?
We have a golang server that implements a server-side streaming endpoint to stream files.
Proto definition of the streaming response
```
message ReadFileResp {
optional bytes content = 1;
}
```
Golang server code that sends bytes on the stream
```
buffer := make([]byte, ReadFileStreamBufferSizeKB*1024)
for {
n, readErr := reader.Read(buffer)
bytesRead += n
streamErr := stream.Send(ReadFileResp{Content: buffer[:n]})
if streamErr != nil {
msg := fmt.Sprintf("Failed to send contents to stream: %v", streamErr)
return status.NewError(status.Internal, streamErr, msg).GRPCErr()
}
if readErr == io.EOF {
return nil
} else if readErr != nil {
msg := fmt.Sprintf("Failed to read content from file. Path: %v. Err: %v", path, readErr)
return status.NewError(status.Internal, readErr, msg).GRPCErr()
}
}
```
Python client code reads from the stream and collects the data into a bytearray
```
content = bytearray()
try:
for read_resp in in_stream:
if len(read_resp.content) > 0:
content.extend(read_resp.content)
except Exception as e: # catch all exceptions to align with thrift file client
msg = 'Readfile RPC stream error: {}'.format(e)
logging.error(msg)`
```
We are calling this server-side streaming api from the python client under load, making approximately 40-80 concurrent requests at a time and running for 10s of minutes.
### What did you expect to see?
Expect to see the streaming read requests complete.
If the load is too high, we expect the latency to increase or the requests to fail.
### What did you see instead?
After about 10-15 minutes of running under load, all calls to this streaming api get stuck in the python client.
- Once a particular client node enters this state, it remains stuck indefinitely. Reducing/stopping the load does not help it recover. Restarting the server does not help, but restarting that particular client does.
- While a client node is in this state, other grpc clients nodes are able to continue successfully communicating with the grpc server.
- This happens with both streaming and unary rpc calls.
- If we call the streaming api after the client gets stuck, we can see the request successfully reaching the server and the server sending the contents back on the stream. However, on the client side we never receive the contents of the stream.
- Once the client gets stuck, subsequent calls to the streaming api seem to increase the CPU usage, which never goes back down (maybe we are entering an infinite loop?). Dumping the cpu profile of such a stuck client shows the following information. Even with no load on the system, ncalls for the top functions keeps going up.
```
`{"status": "OK", "stats": ["
Clock type: CPU
Ordered by: totaltime, desc
name ncall tsub ttot tavg
..vent/thread.py:99 LockType.acquire 171.. 0.762530 2.090877 0.000012
..ges/grpc/_common.py:105 _wait_once 570.. 0.040233 1.147533 0.000020
...7/threading.py:264 Condition.wait 570.. 1.240049 1.145259 0.000020
..timeout.py:264 _start_new_or_dummy 57024 0.143530 1.139789 0.000020
..eout.py:239 Timeout._on_expiration 57024 0.738490 1.001224 0.000018
..es/gevent/timeout.py:243 start_new 57024 0.325641 0.996259 0.000017
../threading.py:198 _RLock._is_owned 57022 0.139721 0.445448 0.000008
..event/timeout.py:219 Timeout.start 57024 0.258610 0.381557 0.000007
..ing.py:184 _RLock._acquire_restore 57022 0.179657 0.324623 0.000006
..ages/gevent/thread.py:72 get_ident 57022 0.189061 0.305727 0.000005
..nt/timeout.py:349 Timeout.__exit__ 57024 0.149478 0.262571 0.000005
..nt/timeout.py:199 Timeout.__init__ 57024 0.226992 0.226992 0.000004
..t/timeout.py:341 Timeout.__enter__ 57024 0.125360 0.188608 0.000003
..pc/_channel.py:788 _response_ready 57022 0.165941 0.165941 0.000003
..ent/timeout.py:285 Timeout.pending 114.. 0.130459 0.130459 0.000001
..event/timeout.py:302 Timeout.close 57024 0.113093 0.113093 0.000002
..eading.py:188 _RLock._release_save 57022 0.111618 0.111618 0.00000
```
### Anything else we should know about your project / environment?
This only seems to happen while running the python client code in docker.
If I run the server and client code locally on my Mac, I am unable to reproduce the issue. However, if I run the client code inside docker on my Mac laptop, it reproduces consistently.
|
non_process
|
python client gets stuck while reading from grpc stream under load please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpc version client is in python and server is in golang what operating system linux windows and version running in docker with ubuntu what runtime compiler are you using e g python version or version of gcc client is using python with gevent python grpcio protobuf gevent server is in golang google golang org grpc what did you do we have a golang server that implements a server side streaming endpoint to stream files proto definition of the streaming response message readfileresp optional bytes content golang server code that sends bytes on the stream buffer make byte readfilestreambuffersizekb for n readerr reader read buffer bytesread n streamerr stream send readfileresp content buffer if streamerr nil msg fmt sprintf failed to send contents to stream v streamerr return status newerror status internal streamerr msg grpcerr if readerr io eof return nil else if readerr nil msg fmt sprintf failed to read content from file path v err v path readerr return status newerror status internal readerr msg grpcerr python client code reads from the stream and collects the data into a bytearray content bytearray try for read resp in in stream if len read resp content content extend read resp content except exception as e catch all exceptions to align with thrift file client msg readfile rpc stream error format e logging error msg we are calling this server side streaming api from the python client under load making approximately 
concurrent requests at a time and running for of minutes what did you expect to see expect to see the streaming read requests complete if the load is too high we expect the latency to increase or the requests to fail what did you see instead after about minutes of running under load all calls to this streaming api get stuck in the python client once a particular client node enters this state it remains stuck indefinitely reducing stopping the load does not help it recover restarting the server does not help but restarting that particular client does while a client node is in this state other grpc clients nodes are able to continue successfully communicating with the grpc server this happens with both streaming and unary rpc calls if we call the streaming api after the client gets stuck we can see the request successfully reaching the server and the server sending the contents back on the stream however on the client side we never receive the contents of the stream once the client gets stuck subsequent calls to the streaming api seem to increase the cpu usage which never goes back down maybe we are entering an infinite loop dumping the cpu profile of such a stuck client shows the following information even with no load on the system ncalls for the top functions keeps going up status ok stats clock type cpu ordered by totaltime desc name ncall tsub ttot tavg vent thread py locktype acquire ges grpc common py wait once threading py condition wait timeout py start new or dummy eout py timeout on expiration es gevent timeout py start new threading py rlock is owned event timeout py timeout start ing py rlock acquire restore ages gevent thread py get ident nt timeout py timeout exit nt timeout py timeout init t timeout py timeout enter pc channel py response ready ent timeout py timeout pending event timeout py timeout close eading py rlock release save anything else we should know about your project environment this only seems to happen while running the python client
code in docker if i run the server and client code locally on my mac i am unable to reproduce the issue however if i run the client code inside docker on my mac laptop it reproduces consistently
| 0
|
342,695
| 30,634,774,339
|
IssuesEvent
|
2023-07-24 16:54:04
|
microsoft/vscode-python
|
https://api.github.com/repos/microsoft/vscode-python
|
closed
|
New pythonTestAdapter does not work with unittest.SkipTest
|
bug area-testing needs PR
|
Type: <b>Bug</b>
<!-- Please fill in all XXX markers -->
# Behaviour
## Expected vs. Actual
The new `pythonTestAdapter` errors out instead of skipping the test.
## Steps to reproduce:
Have a file like this in your test folder:
``` python
from unittest import SkipTest
raise SkipTest("This is unittest.SkipTest calling")
def test_example():
assert 1 == 1
```
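Until this is fixed, the behaviour we'd expect from discovery can be sketched roughly like this (the `discover` helper below is purely illustrative, not the extension's actual adapter code): a collector should catch a module-level `unittest.SkipTest` and report the module as skipped instead of erroring out.

```python
# Illustrative only -- not the extension's real adapter code. Shows how a
# discovery step could treat a module-level unittest.SkipTest as a skip
# rather than a discovery error.
import unittest

def discover(module_source):
    try:
        # Executing the module triggers the module-level `raise SkipTest(...)`.
        exec(compile(module_source, "<test module>", "exec"), {})
    except unittest.SkipTest as exc:
        return ("skipped", str(exc))
    return ("collected", None)

status, reason = discover(
    'from unittest import SkipTest\nraise SkipTest("This is unittest.SkipTest calling")\n'
)
```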
<!--
**After** creating the issue on GitHub, you can add screenshots and GIFs of what is happening. Consider tools like https://www.cockos.com/licecap/, https://github.com/phw/peek or https://www.screentogif.com/ for GIF creation.
-->
<!-- **NOTE**: Everything below except Python output panel is auto-generated; no editing required. Please do provide Python output panel. -->
# Diagnostic data
- Python version (& distribution if applicable, e.g. Anaconda): 3.10.12
- Type of virtual environment used (e.g. conda, venv, virtualenv, etc.): Conda
- Value of the `python.languageServer` setting: Pylance
<details>
<summary>Output for <code>Python</code> in the <code>Output</code> panel (<code>View</code>→<code>Output</code>, change the drop-down the upper-right of the <code>Output</code> panel to <code>Python</code>)
</summary>
<p>
```
2023-07-18 20:47:13.010 [error] pytest test discovery error
unittest.case.SkipTest: This is unittest.SkipTest calling
Check Python Test Logs for more details.
```
</p>
</details>
<details>
<summary>User Settings</summary>
<p>
```
languageServer: "Pylance"
linting
• mypyArgs: "<placeholder>"
• mypyEnabled: true
formatting
• provider: "black"
testing
• pytestArgs: "<placeholder>"
• pytestEnabled: true
experiments
• optInto: ["pythonTestAdapter"]
```
</p>
</details>
Extension version: 2023.12.0
VS Code version: Code 1.80.1 (74f6148eb9ea00507ec113ec51c489d6ffb4b771, 2023-07-12T17:22:25.257Z)
OS version: Linux x64 6.2.6-76060206-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 4112)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|125.71GB (99.49GB free)|
|Process Argv|/home/shh/Development/holoviz/holoviz.code-workspace|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|pop|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|pop|
|XDG_SESSION_TYPE|x11|
</details>
<!-- generated by issue reporter -->
|
1.0
|
New pythonTestAdapter does not work with unittest.SkipTest - Type: <b>Bug</b>
<!-- Please fill in all XXX markers -->
# Behaviour
## Expected vs. Actual
The new `pythonTestAdapter` errors out instead of skipping the test.
## Steps to reproduce:
Have a file like this in your test folder:
``` python
from unittest import SkipTest
raise SkipTest("This is unittest.SkipTest calling")
def test_example():
assert 1 == 1
```
<!--
**After** creating the issue on GitHub, you can add screenshots and GIFs of what is happening. Consider tools like https://www.cockos.com/licecap/, https://github.com/phw/peek or https://www.screentogif.com/ for GIF creation.
-->
<!-- **NOTE**: Everything below except Python output panel is auto-generated; no editing required. Please do provide Python output panel. -->
# Diagnostic data
- Python version (& distribution if applicable, e.g. Anaconda): 3.10.12
- Type of virtual environment used (e.g. conda, venv, virtualenv, etc.): Conda
- Value of the `python.languageServer` setting: Pylance
<details>
<summary>Output for <code>Python</code> in the <code>Output</code> panel (<code>View</code>→<code>Output</code>, change the drop-down the upper-right of the <code>Output</code> panel to <code>Python</code>)
</summary>
<p>
```
2023-07-18 20:47:13.010 [error] pytest test discovery error
unittest.case.SkipTest: This is unittest.SkipTest calling
Check Python Test Logs for more details.
```
</p>
</details>
<details>
<summary>User Settings</summary>
<p>
```
languageServer: "Pylance"
linting
• mypyArgs: "<placeholder>"
• mypyEnabled: true
formatting
• provider: "black"
testing
• pytestArgs: "<placeholder>"
• pytestEnabled: true
experiments
• optInto: ["pythonTestAdapter"]
```
</p>
</details>
Extension version: 2023.12.0
VS Code version: Code 1.80.1 (74f6148eb9ea00507ec113ec51c489d6ffb4b771, 2023-07-12T17:22:25.257Z)
OS version: Linux x64 6.2.6-76060206-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 4112)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|125.71GB (99.49GB free)|
|Process Argv|/home/shh/Development/holoviz/holoviz.code-workspace|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|pop|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|pop|
|XDG_SESSION_TYPE|x11|
</details>
<!-- generated by issue reporter -->
|
non_process
|
new pythontestadapter does not work with unittest skiptest type bug behaviour expected vs actual the new pythontestadapter errors out instead of skipping the test steps to reproduce have a file like this in your test folder python from unittest import skiptest raise skiptest this is unittest skiptest calling def test example assert after creating the issue on github you can add screenshots and gifs of what is happening consider tools like or for gif creation diagnostic data python version distribution if applicable e g anaconda type of virtual environment used e g conda venv virtualenv etc conda value of the python languageserver setting pylance output for python in the output panel view → output change the drop down the upper right of the output panel to python pytest test discovery error unittest case skiptest this is unittest skiptest calling check python test logs for more details user settings languageserver pylance linting • mypyargs • mypyenabled true formatting • provider black testing • pytestargs • pytestenabled true experiments • optinto extension version vs code version code os version linux generic modes system info item value cpus amd ryzen core processor x gpu status canvas enabled canvas oop rasterization disabled off direct rendering display compositor disabled off ok gpu compositing enabled multiple raster threads enabled on opengl enabled on rasterization enabled raw draw disabled off ok video decode enabled video encode disabled software vulkan disabled off webgl enabled enabled webgpu disabled off load avg memory system free process argv home shh development holoviz holoviz code workspace screen reader no vm desktop session pop xdg current desktop unity xdg session desktop pop xdg session type
| 0
|
131,253
| 27,861,356,273
|
IssuesEvent
|
2023-03-21 06:40:59
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Task] : Split dataTree for eval
|
High Task FE Coders Pod Evaluated Value
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
As we know, we have a dataTree which holds all properties and evaluated values. As part of the performance improvement we can split this dataTree into `dataTree` and `entityConfig` -
1. `DataTree` - holds evaluated values and the paths that actually need evaluation
2. `entityConfig` - holds all config properties needed for things like the dependency map, i.e. DynamicBindingPathList, DynamicPaths, TriggerPaths, ValidationPath, etc.
Benefits -
1. During update dataTree, we can take the difference between oldDataTree and dataTree, avoiding all the paths which don't need evaluation
2. The size of dataTree will be reduced, which will help during evaluations
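The split described above could look roughly like this (the entity and property names here are illustrative, not Appsmith's actual schema):

```python
# Illustrative sketch: splitting one combined entity tree into evaluated
# values (data_tree) vs. static config (entity_config), so that diffing
# only has to walk value paths.
combined = {
    "Table1": {
        "tableData": [1, 2, 3],                   # evaluated value
        "dynamicBindingPathList": ["tableData"],  # config-only
        "validationPaths": {},                    # config-only
    }
}

CONFIG_KEYS = {"dynamicBindingPathList", "validationPaths", "triggerPaths"}

def split(tree):
    data_tree, entity_config = {}, {}
    for entity, props in tree.items():
        data_tree[entity] = {k: v for k, v in props.items() if k not in CONFIG_KEYS}
        entity_config[entity] = {k: v for k, v in props.items() if k in CONFIG_KEYS}
    return data_tree, entity_config

data_tree, entity_config = split(combined)
```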
## Issues this task closes
- https://github.com/appsmithorg/appsmith/issues/13982
|
1.0
|
[Task] : Split dataTree for eval - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
As we know, we have a dataTree which holds all properties and evaluated values. As part of the performance improvement we can split this dataTree into `dataTree` and `entityConfig` -
1. `DataTree` - holds evaluated values and the paths that actually need evaluation
2. `entityConfig` - holds all config properties needed for things like the dependency map, i.e. DynamicBindingPathList, DynamicPaths, TriggerPaths, ValidationPath, etc.
Benefits -
1. During update dataTree, we can take the difference between oldDataTree and dataTree, avoiding all the paths which don't need evaluation
2. The size of dataTree will be reduced, which will help during evaluations
## Issues this task closes
- https://github.com/appsmithorg/appsmith/issues/13982
|
non_process
|
split datatree for eval is there an existing issue for this i have searched the existing issues subtasks as we know we have a datatree which holds all properties and evaluated values as part of the performance improvement we can make split this datatree into datatree and entityconfig datatree which has evaluated values and paths which actually needs evaluations entityconfig this will have all config properties which are needed for like dependency map ie dynamicbindingpathlist dynamicpaths triggerpaths validationpath etc benefits during update datatree will take difference between olddatatree and datatree avoiding all the path which doesn t neeed evalutions size of datatree will be reduced which will help during evaluations issues this task closes
| 0
|
10,094
| 13,044,162,071
|
IssuesEvent
|
2020-07-29 03:47:29
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateAndString` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateAndString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `AddDateAndString` from TiDB -
## Description
Port the scalar function `AddDateAndString` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @andylokandy
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function adddateandstring from tidb description port the scalar function adddateandstring from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
10,398
| 13,201,340,148
|
IssuesEvent
|
2020-08-14 09:57:19
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Reverse Dutch Auction for the Donation address BTC
|
an:idea re:processes was:stalled
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
<!-- Please do not remove the text above. -->
I think quite a bit of efficiency could be gained for the BISQ DAO participants by changing the way the Donation address buys BSQ tokens from contributors.
Currently, the donation address gets all the BTC fees and the Donation address manager buys BSQ on the open market from BSQ sellers. This system does work pretty well so far and as a market maker I have no problem with it. But I think we could serve the contributors or other BSQ stakeholders by having a more fair and transparent way to buy and burn the BSQ that comes from BTC fees.
I propose a reverse dutch auction each time the Donation address would like to clear out the accumulated Bitcoin (once per month, or an X BTC threshold, for example, but the important thing would be for the sellers **to know when it was happening**)
Google's IPO used a Dutch auction to determine the price and it's well described here: (https://www.quora.com/How-was-Googles-IPO-unique)
In the case of BSQ, sellers would commit the number of BSQ tokens and the lowest price they would accept for those tokens. All the offers would be counted up, and the donation address would calculate the price it should pay to use up all its bitcoin in buying BSQ tokens. In this style of auction the final price paid is equal for all sellers, and it equals the price of the highest-priced seller that was included in the successful sellers group.
Simple examples: Al offers 1000 BSQ at 7000 sats, Brad offers 1000 BSQ at 10000 sats, and Charlie offers 1000 BSQ at 12000 sats. The Donation address needs to burn .15 BTC. So the result would be Al selling 750 BSQ at 10000 sats and Brad selling 750 BSQ at 10000 sats. This would burn .15 BTC. If Al had offered 3000 BSQ at 7000 sats, the result would be Al selling 2143 BSQ for 7000 sats.
It would save a lot of time and effort for the contributor or other BSQ seller to be posting offers and jockeying for order book position in a very illiquid, not frequently traded market.
This post references: (https://github.com/bisq-network/roles/issues/80)
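The clearing computation described above can be sketched in a few lines (function and variable names are mine, purely illustrative, not an implementation proposal):

```python
# Uniform-price ("reverse Dutch") clearing sketch.
# Offers are (amount_bsq, min_price_sats); all winning sellers receive the
# same price: the lowest candidate price at which supply covers the budget.
def clear(offers, btc_budget_sats):
    offers = sorted(offers, key=lambda o: o[1])  # cheapest sellers first
    for i in range(len(offers)):
        price = offers[i][1]  # candidate uniform price
        supply = sum(a for a, _ in offers[: i + 1])
        if supply * price >= btc_budget_sats:
            return price, btc_budget_sats // price
    # budget exceeds total supply even at the highest asked price
    return offers[-1][1], sum(a for a, _ in offers)

# Al/Brad/Charlie example from the text: budget 0.15 BTC = 15,000,000 sats
price, qty = clear([(1000, 7000), (1000, 10000), (1000, 12000)], 15_000_000)
```

This reproduces the first example: the uniform price is 10000 sats and 1500 BSQ are bought in total (750 each from Al and Brad).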
|
1.0
|
Reverse Dutch Auction for the Donation address BTC - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
<!-- Please do not remove the text above. -->
I think quite a bit of efficiency could be gained for the BISQ DAO participants by changing the way the Donation address buys BSQ tokens from contributors.
Currently, the donation address gets all the BTC fees and the Donation address manager buys BSQ on the open market from BSQ sellers. This system does work pretty well so far and as a market maker I have no problem with it. But I think we could serve the contributors or other BSQ stakeholders by having a more fair and transparent way to buy and burn the BSQ that comes from BTC fees.
I propose a reverse dutch auction each time the Donation address would like to clear out the accumulated Bitcoin (once per month, or an X BTC threshold, for example, but the important thing would be for the sellers **to know when it was happening**)
Google's IPO used a Dutch auction to determine the price and it's well described here: (https://www.quora.com/How-was-Googles-IPO-unique)
In the case of BSQ, sellers would commit the number of BSQ tokens and the lowest price they would accept for those tokens. All the offers would be counted up, and the donation address would calculate the price it should pay to use up all its bitcoin in buying BSQ tokens. In this style of auction the final price paid is equal for all sellers, and it equals the price of the highest-priced seller that was included in the successful sellers group.
Simple examples: Al offers 1000 BSQ at 7000 sats, Brad offers 1000 BSQ at 10000 sats, and Charlie offers 1000 BSQ at 12000 sats. The Donation address needs to burn .15 BTC. So the result would be Al selling 750 BSQ at 10000 sats and Brad selling 750 BSQ at 10000 sats. This would burn .15 BTC. If Al had offered 3000 BSQ at 7000 sats, the result would be Al selling 2143 BSQ for 7000 sats.
It would save a lot of time and effort for the contributor or other BSQ seller to be posting offers and jockeying for order book position in a very illiquid, not frequently traded market.
This post references: (https://github.com/bisq-network/roles/issues/80)
|
process
|
reverse dutch auction for the donation address btc this is a bisq network proposal please familiarize yourself with the i think quite a bit of efficiency could be gained for the bisq dao participants by changing the way the donation address buys bsq tokens from contributors currently the donation address gets all the btc fees and the donation address manager buys bsq on the open market from bsq sellers this system does work pretty well so far and as a market maker i have no problem with it but i think we could serve the contributors or other bsq stakeholders by having a more fair and transparent way to buy and burn the bsq that comes from btc fees i propose a reverse dutch auction each time the donation address would like to clear out the accumulated bitcoin once per month or an x btc threshold for example but the important thing would be for the sellers to know when it was happening google s ipo used a dutch auction to determine the price and it s well described here in the case of bsq sellers would commit the number of bsq tokens and the lowest price they would accept for those tokens all the offers would be counted up and the donation address would calculate the price it should pay to use up all its bitcoin in buying bsq tokens in this style of auction the final price paid is equal for all sellers and it equals the price of highest priced seller that was included in the successful sellers group simple examples al offers bsq at sats brad offers bsq at sats and charlie offers bsq at sats the donation address needs to burn btc so the result would be al selling bsq at sats and brad selling bsq at sats this would burn btc if al had offered bsq at sats the result would be al selling bsq for sats it would save a lot of time and effort for the contributor or other bsq seller to be posting offers and jockeying for order book position in a very illiquid not frequently traded market this post references
| 1
|
8,085
| 11,257,663,499
|
IssuesEvent
|
2020-01-13 00:19:35
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
SAGA SagaAlgorithm().processAlgorithm() builds multiple saga_cmd commands for composite algorithms
|
Bug Processing
|
Using PyQGIS to automate QGIS version 3.8.0 from OSGEO4W64, running SAGA tools that have _n_ multiple output files (e.g., Morphometric Features, Basic Terrain Analysis) creates SAGA batch files with _n_ repeats of `saga_cmd`. An example is probably the easiest way to explain: see below `saga_batch_job.bat` created when running `saga:basicterrainanalysis`:
```
set SAGA=C:/OSGEO4~1/apps\saga-ltr
set SAGA_MLB=C:/OSGEO4~1/apps\saga-ltr\modules
PATH=%PATH%;%SAGA%;%SAGA_MLB%
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat" -VALL_DEPTH "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8502276da63b46ddac4ac1f723d1bb3b/VALL_DEPTH.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat" -VALL_DEPTH "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8502276da63b46ddac4ac1f723d1bb3b/VALL_DEPTH.sdat" -RSP "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/254df5398ebe4aa7b8d5e5d9fa8c3f8b/RSP.sdat"
exit
```
|
1.0
|
SAGA SagaAlgorithm().processAlgorithm() builds multiple saga_cmd commands for composite algorithms - Using PyQGIS to automate QGIS version 3.8.0 from OSGEO4W64, running SAGA tools that have _n_ multiple output files (e.g., Morphometric Features, Basic Terrain Analysis) creates SAGA batch files with _n_ repeats of `saga_cmd`. An example is probably the easiest way to explain: see the `saga_batch_job.bat` below, created when running `saga:basicterrainanalysis`:
```
set SAGA=C:/OSGEO4~1/apps\saga-ltr
set SAGA_MLB=C:/OSGEO4~1/apps\saga-ltr\modules
PATH=%PATH%;%SAGA%;%SAGA_MLB%
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat" -VALL_DEPTH "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8502276da63b46ddac4ac1f723d1bb3b/VALL_DEPTH.sdat"
call saga_cmd ta_compound "Basic Terrain Analysis" -ELEVATION "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/638a0004f347433a803a8a07425809fc/IslandOfKahoolawe.sgrd" -THRESHOLD 4 -SHADE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/17f9298e398044408d81595a164211e1/SHADE.sdat" -SLOPE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c277da6136ad46518452ce42eff18178/SLOPE.sdat" -ASPECT "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/d4c08f147f4847cb9ad72677802c8355/ASPECT.sdat" -HCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a97ead5f046e4f33ab6f7fe56f4e802c/HCURV.sdat" -VCURV "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/70a203d7f8624f4292ee6a5e4c91735a/VCURV.sdat" -CONVERGENCE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/c3f7d3314b9d43498fb2855dfef40618/CONVERGENCE.sdat" -SINKS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5fa10ec8743049db9a0eeb7e91c27e5f/SINKS.sdat" -FLOW "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/24595abe554e44afbf4c127ac358b62d/FLOW.sdat" -WETNESS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/bb8411a3a424453cac566e82ca1754a8/WETNESS.sdat" -LSFACTOR "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/a24577e332e0465d8854d74551719c3b/LSFACTOR.sdat" -CHANNELS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8a88bd91bf0145819f0aefee3ef52e1a/CHANNELS.shp" -BASINS "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/5c6bb7c989fe4d1c9d5c807278a53efc/BASINS.shp" -CHNL_BASE "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/ff54a9ff60eb47a8800734c4b00547b3/CHNL_BASE.sdat" -CHNL_DIST "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/6094ae4dc472455d8f885f4d9d17fd25/CHNL_DIST.sdat" -VALL_DEPTH "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/8502276da63b46ddac4ac1f723d1bb3b/VALL_DEPTH.sdat" -RSP "C:/Temp/processing_ccfd40c73a294f9c9d0d0cfbd6966771/254df5398ebe4aa7b8d5e5d9fa8c3f8b/RSP.sdat"
exit
```
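The pattern above suggests that each generated command is the previous one extended by a single extra output flag, so only the final, fully-specified command actually needs to run. A minimal sketch (hypothetical helper, not part of QGIS or the SAGA provider code) of collapsing such an incremental command list:

```python
def collapse_incremental_commands(commands):
    """Keep only commands that are not a prefix of a later command.

    In the batch file shown above, every command is the preceding one
    plus one more output flag, so this reduces the list to just the
    final, complete saga_cmd invocation.
    """
    kept = []
    for i, cmd in enumerate(commands):
        # Drop cmd if some later command already starts with it verbatim.
        if not any(later.startswith(cmd) for later in commands[i + 1:]):
            kept.append(cmd)
    return kept
```

Under this assumption, applying the helper to the sixteen `call saga_cmd` lines above would leave only the last one, which names every output grid.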
|
process
|
saga sagaalgorithm processalgorithm builds multiple saga cmd commands for composite algorithms using pyqgis to automate qgis version from running saga tools that have n multiple output files e g morphometric features basic terrain analysis creates saga batch files with n repeats of saga cmd an example is probably easiest way to explain see below saga batch job bat created when running saga basicterrainanalysis set saga c apps saga ltr set saga mlb c apps saga ltr modules path path saga saga mlb call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat call saga cmd ta compound basic terrain analysis elevation c temp processing 
islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp 
processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp basins c temp processing basins shp call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp basins c temp processing basins shp chnl base c temp processing chnl base sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp basins c temp processing basins shp chnl base c temp processing chnl base sdat chnl dist c temp processing chnl dist sdat call saga cmd ta compound basic terrain analysis 
elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp basins c temp processing basins shp chnl base c temp processing chnl base sdat chnl dist c temp processing chnl dist sdat vall depth c temp processing vall depth sdat call saga cmd ta compound basic terrain analysis elevation c temp processing islandofkahoolawe sgrd threshold shade c temp processing shade sdat slope c temp processing slope sdat aspect c temp processing aspect sdat hcurv c temp processing hcurv sdat vcurv c temp processing vcurv sdat convergence c temp processing convergence sdat sinks c temp processing sinks sdat flow c temp processing flow sdat wetness c temp processing wetness sdat lsfactor c temp processing lsfactor sdat channels c temp processing channels shp basins c temp processing basins shp chnl base c temp processing chnl base sdat chnl dist c temp processing chnl dist sdat vall depth c temp processing vall depth sdat rsp c temp processing rsp sdat exit
| 1
|
155,248
| 24,430,926,066
|
IssuesEvent
|
2022-10-06 08:04:26
|
OpenRefine/OpenRefine
|
https://api.github.com/repos/OpenRefine/OpenRefine
|
opened
|
Add new data quality or data profiling UI enhancements
|
enhancement overlay model design discussions UX
|
Facets are great for quickly getting a sense of a column's values and showing the count of each unique value.
However, other tools and the OpenRefine ecosystem of extensions and add-ons offer general data quality and data profiling views that could be looked to as inspiration for incorporating basic and advanced data profiling/data quality visualizations into OpenRefine's core.
### Proposed solution
Enable new **interactive** overlay modeling or view(s) to provide data profiling and data quality checking.
Example screenshot from extension, OpenRefineQualityMetrics

#### Data Profiling
Data profiling would be statistics-based, showcasing error distribution heatmaps and other visualizations later. Example existing OpenRefine extension: https://github.com/christianbors/OpenRefineQualityMetrics
- this might take the form of existing usage of our Facet area in new ways.
- or, perhaps better visually, allow Facets or Analytic Views to lock into place above their respective columns. The data grid view would essentially be split horizontally into an upper and a lower half.
- dockable or movable Facets/Views might be needed for some analytical visualizations for profiling different data types.
#### Data Quality
The data quality checking (rules) should support some initial common formats and (sub)structures that many users expect and use.
- the formats or data sub structures can be maintained as predefined in the core of OpenRefine and extensible via a secondary UI to write rules with a `name` and `expression`.
- users can pick the rules they want to do data quality checking against each column.
- allowing rules to be added or removed as well as enabled/disabled will be important to users.
- have a centralized browsing experience to find new rules added by the community, with filtering by category/domain, so a user can add them into OpenRefine's data quality checking.
- Checks or rules are sometimes long-running operations and should be supported as such, especially if the data quality checks involve eager expressions or regex lookarounds, in particular negative lookbehind.
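The rule model described above (a `name` plus an `expression`, enabled per column) can be sketched as follows. This is an illustrative mock-up only: the rule names, the regex-based expression format, and the `check_column` helper are assumptions, not an existing OpenRefine API.

```python
import re

# Hypothetical predefined rules: each expression is modelled here as a
# regular expression that a cell value must match to pass the check.
RULES = {
    "iso_date": r"^\d{4}-\d{2}-\d{2}$",
    "non_empty": r"\S",
}

def check_column(values, enabled_rule_names):
    """Return, per enabled rule, how many values fail that rule."""
    failures = {}
    for name in enabled_rule_names:
        pattern = re.compile(RULES[name])
        failures[name] = sum(1 for v in values if not pattern.search(v))
    return failures
```

A user picking `iso_date` and `non_empty` for a column would then see failure counts per rule, which maps naturally onto the facet-style summaries OpenRefine already shows.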
### Alternatives considered
Instead of core, make this a robust extension. The extension might warrant support from diverse groups backing OpenRefine grant(s) for it. Or directly help and support @christianbors with his extension and expand its features, which might take the form of gifted support from OpenRefine's own grants in the future.
### Additional context
@magdmartin @thadguidry @tfmorris have some small previous history with this proposal and discussion archived in our Google Groups mailing list (somewhere!!) If I or someone else finds it, link it here please.
|
1.0
|
Add new data quality or data profiling UI enhancements - Facets are great for quickly getting a sense of a column's values and showing the count of each unique value.
However, there are broad strokes of general data quality or data profiling views in other tools and the OpenRefine ecosystem of extensions and addons that could be looked to as inspiration to incorporate some basic and advanced data profiling/data quality visualizations in OpenRefine's core.
### Proposed solution
Enable new **interactive** overlay modeling or view(s) to provide data profiling and data quality checking.
Example screenshot from extension, OpenRefineQualityMetrics

#### Data Profiling
With data profiling it would be statistical based to showcase error distribution heatmaps, and other visualizations later. Example existing OpenRefine extension: https://github.com/christianbors/OpenRefineQualityMetrics
- this might take the form of existing usage of our Facet area in new ways.
- or perhaps better visually allow Facets or Analytic Views to lock into place above their respective columns. The data grid view would essentially be split horizontally with an upper and lower.
- dockable or movable Facets/Views might be needed for some analytical visualizations for profiling different data types.
#### Data Quality
The data quality checking (rules) should support some initial common formats and (sub)structures that many users expect and use.
- the formats or data sub structures can be maintained as predefined in the core of OpenRefine and extensible via a secondary UI to write rules with a `name` and `expression`.
- users can pick the rules they want to do data quality checking against each column.
- allowing rules to be added or removed as well as enabled/disabled will be important to users.
- have a centralized browsing experience to find new rules added by the community and filter by categories/domain to allow a user to add them into OpenRefine's Data Quality checking.
- Checks or Rules are likely long running operations sometimes and should be supported especially if the data quality checks involve eager expressions or regex lookarounds, especially negative lookbehind.
### Alternatives considered
Instead of core, make this a robust extension. The extension might warrant support from diverse groups that support an OpenRefine grant(s) for the extension. Or directly help and support @christianbors with his extension and expand its features, which might take the form of gifted support from OpenRefine's own grants in the future.
### Additional context
@magdmartin @thadguidry @tfmorris have some small previous history with this proposal and discussion archived in our Google Groups mailing list (somewhere!!) If I or someone else finds it, link it here please.
|
non_process
|
add new data quality or data profiling ui enhancements facets are great for quickly getting a sense of a column s values and showing the count of each unique value however there are broad strokes of general data quality or data profiling views in other tools and the openrefine ecosystem of extensions and addons that could be looked to as inspiration to incorporate some basic and advanced data profiling data quality visualizations in openrefine s core proposed solution enable new interactive overlay modeling or view s to provide data profiling and data quality checking example screenshot from extension openrefinequalitymetrics data profiling with data profiling it would be statistical based to showcase error distribution heatmaps and other visualizations later example existing openrefine extension this might take the form of existing usage of our facet area in new ways or perhaps better visually allow facets or analytic views to lock into place above their respective columns the data grid view would essentially be split horizontally with an upper and lower dockable or movable facets views might be needed for some analytical visualizations for profiling different data types data quality the data quality checking rules should support some initial common formats and sub structures that many users expect and use the formats or data sub structures can be maintained as predefined in the core of openrefine and extensible via a secondary ui to write rules with a name and expression users can pick the rules they want to do data quality checking against each column allowing rules to be added or removed as well as enabled disabled will be important to users have a centralized browsing experience to find new rules added by the community and filter by categories domain to allow a user to add them into openrefine s data quality checking checks or rules are likely long running operations sometimes and should be supported especially if the data quality checks involve eager 
expressions or regex lookarounds especially negative lookbehind alternatives considered instead of core make this a robust extension the extension might warrant support from diverse groups that support an openrefine grant s for the extension or directly help and support christianbors with his extension and expand its features which might take the form of gifted support from openrefine s own grants in the future additional context magdmartin thadguidry tfmorris have some small previous history with this proposal and discussion archived in our google groups mailing list somewhere if i or someone else finds it link it here please
| 0
|
22,526
| 31,625,574,846
|
IssuesEvent
|
2023-09-06 04:58:14
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[Mirror] BLAKE3-team/BLAKE3 1.3.3
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
* https://github.com/BLAKE3-team/BLAKE3/archive/refs/tags/1.3.3.zip
This dependency was added to distdir_deps.bzl in 03ba2bac60a. A release artifact ("/releases/download/...") containing source code does not appear to be available.
|
1.0
|
[Mirror] BLAKE3-team/BLAKE3 1.3.3 - ### Please list the URLs of the archives you'd like to mirror:
* https://github.com/BLAKE3-team/BLAKE3/archive/refs/tags/1.3.3.zip
This dependency was added to distdir_deps.bzl in 03ba2bac60a. A release artifact ("/releases/download/...") containing source code does not appear to be available.
|
process
|
team please list the urls of the archives you d like to mirror this dependency was added to distdir deps bzl in a release artifact releases download containing source code does not appear to be available
| 1
|
2,806
| 5,738,516,473
|
IssuesEvent
|
2017-04-23 05:05:21
|
SIMEXP/niak
|
https://api.github.com/repos/SIMEXP/niak
|
closed
|
Bunch of enhancement remarks from Aman and Christian for better QC workflow
|
enhancement preprocessing quality control
|
- Put the comment window before the fail/maybe/ok input
- Allow register to stay open while writing down the comment
- Add in the csv file for each subject the complete register command ready to be pasted in a terminal.
|
1.0
|
Bunch of enhancement remarks from Aman and Christian for better QC workflow - - Put the comment window before the fail/maybe/ok input
- Allow register to stay open while writing down the comment
- Add in the csv file for each subject the complete register command ready to be pasted in a terminal.
|
process
|
bunch of enhancement remarks from aman and christian for better qc workflow put the comment window before the fail maybe ok input allow register to stay open while writing down the comment add in the csv file for each subject the complete register command ready to be pasted in a terminal
| 1
|
17,391
| 23,208,240,240
|
IssuesEvent
|
2022-08-02 07:51:41
|
vacp2p/rfc
|
https://api.github.com/repos/vacp2p/rfc
|
closed
|
RFC template
|
track:rfc-process
|
This issue tracks the writing of an RFC template.
The purpose of this template is giving a basic common structure to all RFCs which helps both developers and researchers filtering and digesting the information quicker.
## Background
* [forum discussion](https://forum.vac.dev/t/rfc-categories-structure/125)
## Progress / Tasks
* [x] #488
* [x] adjust [COSS](https://rfc.vac.dev/spec/1/) to reflect the new process #518
- add `category`
- `tags` are optional
## Future task
* update #368 (or new issue) tracking adding `category` to existing RFCs
|
1.0
|
RFC template - This issue tracks the writing of an RFC template.
The purpose of this template is giving a basic common structure to all RFCs which helps both developers and researchers filtering and digesting the information quicker.
## Background
* [forum discussion](https://forum.vac.dev/t/rfc-categories-structure/125)
## Progress / Tasks
* [x] #488
* [x] adjust [COSS](https://rfc.vac.dev/spec/1/) to reflect the new process #518
- add `category`
- `tags` are optional
## Future task
* update #368 (or new issue) tracking adding `category` to existing RFCs
|
process
|
rfc template this issue tracks the writing of an rfc template the purpose of this template is giving a basic common structure to all rfcs which helps both developers and researchers filtering and digesting the information quicker background progress tasks adjust to reflect the new process add category tags are optional future task update or new issue tracking adding category to existing rfcs
| 1
|
22,681
| 31,931,618,778
|
IssuesEvent
|
2023-09-19 07:49:57
|
prusa3d/Prusa-Firmware
|
https://api.github.com/repos/prusa3d/Prusa-Firmware
|
closed
|
Please restore old unload temperature requirement behavior
|
enhancement processing
|
The new filament unload preheat requirement is obnoxious and wasteful. Why must I wait for a preheat when the nozzle is already at 195C? Why does that preheat trigger a bed preheat? Please restore the old behavior of simply allowing the unload procedure as long as the nozzle is still above a certain temperature, or at least don't preheat the bed
|
1.0
|
Please restore old unload temperature requirement behavior - The new filament unload preheat requirement is obnoxious and wasteful. Why must I wait for a preheat when the nozzle is already at 195C? Why does that preheat trigger a bed preheat? Please restore the old behavior of simply allowing the unload procedure as long as the nozzle is still above a certain temperature, or at least don't preheat the bed
|
process
|
please restore old unload temperature requirement behavior the new filament unload preheat requirement is obnoxious and wasteful why must i wait for a preheat when the nozzle is already why does that preheat trigger a bed preheat please restore the old behavior of simply allowing the unload procedure as long as the nozzle is still above a certain temperature or at least don t preheat the bed
| 1
|
116,423
| 17,369,256,042
|
IssuesEvent
|
2021-07-30 11:46:02
|
lukebroganws/Java-Demo
|
https://api.github.com/repos/lukebroganws/Java-Demo
|
opened
|
CVE-2016-2510 (High) detected in bsh-core-2.0b4.jar
|
security vulnerability
|
## CVE-2016-2510 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bsh-core-2.0b4.jar</b></p></summary>
<p>BeanShell core</p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar,Java-Demo/.extract/webapps/ROOT/WEB-INF/lib/bsh-core-2.0b4.jar,Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/bsh-core-2.0b4.jar</p>
<p>
Dependency Hierarchy:
- :x: **bsh-core-2.0b4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/9c1bc5d1780a325ef5a39962950ec2956214bf22">9c1bc5d1780a325ef5a39962950ec2956214bf22</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.
<p>Publish Date: 2016-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510>CVE-2016-2510</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2510">https://nvd.nist.gov/vuln/detail/CVE-2016-2510</a></p>
<p>Release Date: 2016-04-07</p>
<p>Fix Resolution: 2.0b6</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.beanshell","packageName":"bsh-core","packageVersion":"2.0b4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.beanshell:bsh-core:2.0b4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0b6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2510","vulnerabilityDetails":"BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-2510 (High) detected in bsh-core-2.0b4.jar - ## CVE-2016-2510 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bsh-core-2.0b4.jar</b></p></summary>
<p>BeanShell core</p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/beanshell/bsh-core/2.0b4/bsh-core-2.0b4.jar,Java-Demo/.extract/webapps/ROOT/WEB-INF/lib/bsh-core-2.0b4.jar,Java-Demo/target/easybuggy-1-SNAPSHOT/WEB-INF/lib/bsh-core-2.0b4.jar</p>
<p>
Dependency Hierarchy:
- :x: **bsh-core-2.0b4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/lukebroganws/Java-Demo/commit/9c1bc5d1780a325ef5a39962950ec2956214bf22">9c1bc5d1780a325ef5a39962950ec2956214bf22</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.
<p>Publish Date: 2016-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510>CVE-2016-2510</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2510">https://nvd.nist.gov/vuln/detail/CVE-2016-2510</a></p>
<p>Release Date: 2016-04-07</p>
<p>Fix Resolution: 2.0b6</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.beanshell","packageName":"bsh-core","packageVersion":"2.0b4","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.beanshell:bsh-core:2.0b4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0b6"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2510","vulnerabilityDetails":"BeanShell (bsh) before 2.0b6, when included on the classpath by an application that uses Java serialization or XStream, allows remote attackers to execute arbitrary code via crafted serialized data, related to XThis.Handler.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2510","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in bsh core jar cve high severity vulnerability vulnerable library bsh core jar beanshell core path to dependency file java demo pom xml path to vulnerable library home wss scanner repository org beanshell bsh core bsh core jar java demo extract webapps root web inf lib bsh core jar java demo target easybuggy snapshot web inf lib bsh core jar dependency hierarchy x bsh core jar vulnerable library found in head commit a href found in base branch main vulnerability details beanshell bsh before when included on the classpath by an application that uses java serialization or xstream allows remote attackers to execute arbitrary code via crafted serialized data related to xthis handler publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org beanshell bsh core isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails beanshell bsh before when included on the classpath by an application that uses java serialization or xstream allows remote attackers to execute arbitrary code via crafted serialized data related to xthis handler vulnerabilityurl
| 0
|
14,674
| 17,791,194,728
|
IssuesEvent
|
2021-08-31 16:22:37
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
mitigation, final fixes
|
ready multi-species process
|
from
https://github.com/geneontology/go-ontology/issues/18357
we have 3 "mitigation terms"
Would like to if possible reclassify as "evasion" or "suppression"
All related to viral host interactions
<img width="448" alt="Screen Shot 2020-04-07 at 14 53 41" src="https://user-images.githubusercontent.com/7359272/78677367-90c39b80-78df-11ea-9bd6-a6299321de0a.png">
but there seems to be some redundancy/strangeness in the parentages?
1. The term
GO:0030683 mitigation of host immune response by virus
has the parent
GO:0050690 regulation of defense response to virus by virus
!!!!
|
1.0
|
mitigation, final fixes -
from
https://github.com/geneontology/go-ontology/issues/18357
we have 3 "mitigation terms"
Would like to if possible reclassify as "evasion" or "suppression"
All related to viral host interactions
<img width="448" alt="Screen Shot 2020-04-07 at 14 53 41" src="https://user-images.githubusercontent.com/7359272/78677367-90c39b80-78df-11ea-9bd6-a6299321de0a.png">
but there seems to be some redundancy/strangeness in the parentages?
1. The term
GO:0030683 mitigation of host immune response by virus
has the parent
GO:0050690 regulation of defense response to virus by virus
!!!!
|
process
|
mitigation final fixes from we have mitigation terms would like to if possible reclassify as evasion or suppression all related to viral host interactions img width alt screen shot at src but there seems to be some redundancy strangeness in the parentages the term go mitigation of host immune response by virus has the parent go regulation of defense response to virus by virus
| 1
|
705,275
| 24,229,101,892
|
IssuesEvent
|
2022-09-26 16:37:28
|
joomlahenk/fabrik
|
https://api.github.com/repos/joomlahenk/fabrik
|
closed
|
Plugins upgrade, install, update, uninstall test
|
help wanted Medium Priority
|
Created new sql files for plugins that create a new table. The plugins can now be installed and uninstalled separately. The table will now be created on install, not on use (as in J!3). Defaults for J!4 are set. All have now an update sql file, so new defaults will be set on upgrade, just like for com_fabrik.
I tested this with the element sequence plugin. It takes too much time to test all, so please test if you can.
|
1.0
|
Plugins upgrade, install, update, uninstall test - Created new sql files for plugins that create a new table. The plugins can now be installed and uninstalled separately. The table will now be created on install, not on use (as in J!3). Defaults for J!4 are set. All have now an update sql file, so new defaults will be set on upgrade, just like for com_fabrik.
I tested this with the element sequence plugin. It takes too much time to test all, so please test if you can.
|
non_process
|
plugins upgrade install update uninstall test created new sql files for plugins that create a new table the plugins can now be installed and uninstalled separately the table will now be created on install not on use as in j defaults for j are set all have now an update sql file so new defaults will be set on upgrade just like for com fabrik i tested this with the element sequence plugin it takes too much time to test all so please test if you can
| 0
|
8,420
| 11,584,729,950
|
IssuesEvent
|
2020-02-22 19:02:50
|
cetic/tsorage
|
https://api.github.com/repos/cetic/tsorage
|
opened
|
Add a state machine engine
|
processing
|
Add a system that enables the representation of state machines.
These machines could then be used for modeling the behaviour of a physical system. Every time a data point is ingested, it can trigger the transition from the current state to the next one.
State changes can trigger some predefined actions, either because these actions are directly specified in the state machine, or because the evolution of the current state of a machine, being communicated to the processor as a particular time series, can be further processed by this component.
|
1.0
|
Add a state machine engine - Add a system that enables the representation of state machines.
These machines could then be used for modeling the behaviour of a physical system. Every time a data point is ingested, it can trigger the transition from the current state to the next one.
State changes can trigger some predefined actions, either because these actions are directly specified in the state machine, or because the evolution of the current state of a machine, being communicated to the processor as a particular time series, can be further processed by this component.
|
process
|
add a state machine engine add a system that enables the representation of state machines these machines could then be used for modeling the behaviour of a physical system every time a data point is ingested it can trigger the transition from the current state to the next one state changes can trigger some predefined actions either because these actions are directly specified in the state machine or because the evolution of the current state of a machine being communicated to the processor as a particular time series can be further processed by this component
| 1
|
9,587
| 12,537,814,016
|
IssuesEvent
|
2020-06-05 04:43:54
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
opened
|
[libbeat] translate_sid should skip empty target fields
|
:Processors :Windows bug libbeat
|
The `translate_sid` processor only requires one of the three target fields to be configured. It should work properly when some of the targets are not set, but it doesn't check if they are empty strings. So it ends up adding target fields that are empty strings to the event (e.g. `"": "Group"`).
|
1.0
|
[libbeat] translate_sid should skip empty target fields - The `translate_sid` processor only requires one of the three target fields to be configured. It should work properly when some of the targets are not set, but it doesn't check if they are empty strings. So it ends up adding target fields that are empty strings to the event (e.g. `"": "Group"`).
|
process
|
translate sid should skip empty target fields the translate sid processor only requires one of the three target fields to be configured it should work properly when some of the targets are not set but it doesn t check if they are empty strings so it ends up adding target fields that are empty strings to the event e g group
| 1
|
318,385
| 23,718,526,887
|
IssuesEvent
|
2022-08-30 13:41:06
|
tursics/rock-paper-scissors
|
https://api.github.com/repos/tursics/rock-paper-scissors
|
closed
|
Improve documentation
|
documentation enhancement
|
- [x] add build instructions
- [x] document the extensibility
- [x] like more complex computer
- [x] like device dependent input (AR + swipe + ...)
- [x] human vs human via sockets or backend
- [x] add more than 2 players
- [x] use event system for additions like sound effects
- [x] more UIs (via class names in Hand class)
- [x] more hands (via Hand base class)
- [x] different games (via validate function)
- [x] i18n
|
1.0
|
Improve documentation - - [x] add build instructions
- [x] document the extensibility
- [x] like more complex computer
- [x] like device dependent input (AR + swipe + ...)
- [x] human vs human via sockets or backend
- [x] add more than 2 players
- [x] use event system for additions like sound effects
- [x] more UIs (via class names in Hand class)
- [x] more hands (via Hand base class)
- [x] different games (via validate function)
- [x] i18n
|
non_process
|
improve documentation add build instructions document the extensibility like more complex computer like device dependent input ar swipe human vs human via sockets or backend add more than players use event system for additions like sound effects more uis via class names in hand class more hands via hand base class different games via validate function
| 0
|
6,074
| 8,919,438,630
|
IssuesEvent
|
2019-01-21 00:11:37
|
SynBioDex/SEPs
|
https://api.github.com/repos/SynBioDex/SEPs
|
closed
|
SEP 002 -- SEP Template Document
|
Accepted Active Type: Process
|
# SEP 002 -- SEP Template Document <replace title with your own>
< Note 1: remove/replace all instructions given within "< >" >
< :exclamation: Note 2: you really need to look at this in raw / editing mode :smiley: >
< Note 3: download the raw template from: https://raw.githubusercontent.com/SynBioDex/SEPs/master/sep_002_template.md >
| SEP | <leave empty> |
| --- | --- |
| **Title** | SEP Template Document <replace with your short descriptive title (44 character max)> |
| **Authors** | <Author1 name (author1mail at xxmail com), Author2 name (author mail at yyy com)> |
| **Editor** | <leave empty> |
| **Type** | Process <choose from: Process OR Data Model> |
| **SBOL Version** | <SBOL version this proposal should apply to; remove line if Type==Process> |
| **Replaces** | <list SEP(s) this proposal would replace, otherwise remove line. E.g. SEP #1> |
| **Status** | Draft |
| **Created** | 08-Oct-2015 <current date> |
| **Last modified** | 08-Oct-2015 <leave empty if this is the first submission> |
## Abstract
< insert short summary >
## Table of Contents <remove TOC if SEP is rather short>
- [1. Rationale](#rationale) <or, if you prefer, 'Motivation'>
- [2. Specification](#specification)
- 2.1 optional sub-point
- 2.2 optional sub-point
- [3. Example or Use Case](#example)
- [4. Backwards Compatibility](#compatibility)
- [5. Discussion](#discussion)
- 5.1 discussion point
- 5.2 discussion point
- [References](#references)
- [Copyright](#copyright)
## 1. Rationale <a name="rationale"></a>
< insert motivation / rational, keep it brief >
## 2. Specification <a name="specification"></a>
< give detailed specification, can be split up into more than one section if useful >
< refer to other SEPs like this: This SEP is much better than SEP #1>
### 2.1 optional sub-point
< sub-divide specification section if useful >
### 2.2 optional sub-point
< sub-divide specification section if useful >
## 3. Example or Use Case <a name='example'></a>
< Describe brief examples or use cases >
## 4. Backwards Compatibility <a name='compatibility'></a>
< discuss compatibility issues and how to solve them; remove this section if this doesn't apply >
< e.g. in case of procedure SEP >
## 5. Discussion <a name='discussion'></a>
### 5.1 discussion point
< Summarize discussion, also represent dissenting opinions >
### 5.2 discussion point
## References <a name='references'></a>
< list references as follows >
< refer to these references in the text as, e.g. [SBOL](http://sbolstandard.org) or [1](https://www.python.org/dev/peps/pep-0001) >
## Copyright <a name='copyright'></a>
< SEPs should generally be public domain; replace names as needed. If necessary, change to a different license (typically another CC license) >
<p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<a rel="dct:publisher"
href="sbolstandard.org">
<span property="dct:title">SBOL developers</span></a>
has waived all copyright and related or neighboring rights to
<span property="dct:title">SEP 002</span>.
This work is published from:
<span property="vcard:Country" datatype="dct:ISO3166"
content="US" about="sbolstandard.org">
United States</span>.
</p>
|
1.0
|
SEP 002 -- SEP Template Document - # SEP 002 -- SEP Template Document <replace title with your own>
< Note 1: remove/replace all instructions given within "< >" >
< :exclamation: Note 2: you really need to look at this in raw / editing mode :smiley: >
< Note 3: download the raw template from: https://raw.githubusercontent.com/SynBioDex/SEPs/master/sep_002_template.md >
| SEP | <leave empty> |
| --- | --- |
| **Title** | SEP Template Document <replace with your short descriptive title (44 character max)> |
| **Authors** | <Author1 name (author1mail at xxmail com), Author2 name (author mail at yyy com)> |
| **Editor** | <leave empty> |
| **Type** | Process <choose from: Process OR Data Model> |
| **SBOL Version** | <SBOL version this proposal should apply to; remove line if Type==Process> |
| **Replaces** | <list SEP(s) this proposal would replace, otherwise remove line. E.g. SEP #1> |
| **Status** | Draft |
| **Created** | 08-Oct-2015 <current date> |
| **Last modified** | 08-Oct-2015 <leave empty if this is the first submission> |
## Abstract
< insert short summary >
## Table of Contents <remove TOC if SEP is rather short>
- [1. Rationale](#rationale) <or, if you prefer, 'Motivation'>
- [2. Specification](#specification)
- 2.1 optional sub-point
- 2.2 optional sub-point
- [3. Example or Use Case](#example)
- [4. Backwards Compatibility](#compatibility)
- [5. Discussion](#discussion)
- 5.1 discussion point
- 5.2 discussion point
- [References](#references)
- [Copyright](#copyright)
## 1. Rationale <a name="rationale"></a>
< insert motivation / rational, keep it brief >
## 2. Specification <a name="specification"></a>
< give detailed specification, can be split up into more than one section if useful >
< refer to other SEPs like this: This SEP is much better than SEP #1>
### 2.1 optional sub-point
< sub-divide specification section if useful >
### 2.2 optional sub-point
< sub-divide specification section if useful >
## 3. Example or Use Case <a name='example'></a>
< Describe brief examples or use cases >
## 4. Backwards Compatibility <a name='compatibility'></a>
< discuss compatibility issues and how to solve them; remove this section if this doesn't apply >
< e.g. in case of procedure SEP >
## 5. Discussion <a name='discussion'></a>
### 5.1 discussion point
< Summarize discussion, also represent dissenting opinions >
### 5.2 discussion point
## References <a name='references'></a>
< list references as follows >
< refer to these references in the text as, e.g. [SBOL](http://sbolstandard.org) or [1](https://www.python.org/dev/peps/pep-0001) >
## Copyright <a name='copyright'></a>
< SEPs should generally be public domain; replace names as needed. If necessary, change to a different license (typically another CC license) >
<p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
<a rel="license"
href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
</a>
<br />
To the extent possible under law,
<a rel="dct:publisher"
href="sbolstandard.org">
<span property="dct:title">SBOL developers</span></a>
has waived all copyright and related or neighboring rights to
<span property="dct:title">SEP 002</span>.
This work is published from:
<span property="vcard:Country" datatype="dct:ISO3166"
content="US" about="sbolstandard.org">
United States</span>.
</p>
|
process
|
sep sep template document sep sep template document sep title sep template document authors editor type process sbol version replaces status draft created oct last modified oct abstract table of contents rationale specification optional sub point optional sub point example compatibility discussion discussion point discussion point references copyright rationale specification optional sub point optional sub point example or use case backwards compatibility discussion discussion point discussion point references copyright p xmlns dct xmlns vcard a rel license href to the extent possible under law a rel dct publisher href sbolstandard org sbol developers has waived all copyright and related or neighboring rights to sep this work is published from span property vcard country datatype dct content us about sbolstandard org united states
| 1
|
776,627
| 27,264,499,469
|
IssuesEvent
|
2023-02-22 17:00:54
|
ascheid/itsg33-pbmm-issue-gen
|
https://api.github.com/repos/ascheid/itsg33-pbmm-issue-gen
|
opened
|
AU-9(2): Protection Of Audit Information | Audit Backup On Separate Physical Systems / Components
|
Priority: P2 Class: Technical ITSG-33 Suggested Assignment: IT Projects Control: AU-9
|
# Control Definition
PROTECTION OF AUDIT INFORMATION | AUDIT BACKUP ON SEPARATE PHYSICAL SYSTEMS / COMPONENTS
The information system backs up audit records [Assignment: organization-defined frequency] onto a physically different system or system component than the system or component being audited.
# Class
Technical
# Supplemental Guidance
This control enhancement helps to ensure that a compromise of the information system being audited does not also result in a compromise of the audit records. Related controls: AU-4, AU-5, AU-11.
# Suggested Assignment
IT Projects
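The backup requirement in the supplemental guidance above can be sketched in a few lines. This is an illustrative sketch only: the source path, backup mount, and frequency are assumptions, and in a real deployment the destination would be a physically distinct host or storage system as the control requires.

```python
# Sketch of AU-9(2): copy audit records onto a separate system at an
# organization-defined frequency. All paths below are hypothetical.
import shutil
import time
from pathlib import Path

AUDIT_LOG = Path("/var/log/audit/audit.log")   # assumed audit source
BACKUP_DIR = Path("/mnt/audit-backup")         # assumed mount of a separate system
FREQUENCY_SECONDS = 3600                       # organization-defined frequency

def backup_audit_records(src: Path, dest_dir: Path) -> Path:
    """Copy one audit log to the backup destination with a timestamped name."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamped = dest_dir / f"{src.name}.{int(time.time())}"
    shutil.copy2(src, stamped)  # copy2 preserves file metadata/timestamps
    return stamped
```

A scheduler (cron, systemd timer) would invoke `backup_audit_records` every `FREQUENCY_SECONDS`; keeping the copies on separate hardware is what prevents a compromise of the audited system from also destroying its audit trail.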
|
1.0
|
AU-9(2): Protection Of Audit Information | Audit Backup On Separate Physical Systems / Components - # Control Definition
PROTECTION OF AUDIT INFORMATION | AUDIT BACKUP ON SEPARATE PHYSICAL SYSTEMS / COMPONENTS
The information system backs up audit records [Assignment: organization-defined frequency] onto a physically different system or system component than the system or component being audited.
# Class
Technical
# Supplemental Guidance
This control enhancement helps to ensure that a compromise of the information system being audited does not also result in a compromise of the audit records. Related controls: AU-4, AU-5, AU-11.
# Suggested Assignment
IT Projects
|
non_process
|
au protection of audit information audit backup on separate physical systems components control definition protection of audit information audit backup on separate physical systems components the information system backs up audit records onto a physically different system or system component than the system or component being audited class technical supplemental guidance this control enhancement helps to ensure that a compromise of the information system being audited does not also result in a compromise of the audit records related controls au au au suggested assignment it projects
| 0
|
50,102
| 3,006,185,490
|
IssuesEvent
|
2015-07-27 08:44:02
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
output of GPU detectMultiScale returns multiple of detector size
|
auto-transferred category: gpu (cuda) feature priority: low
|
Transferred from http://code.opencv.org/issues/1525
```
|| brendan ruff on 2011-12-23 13:12
|| Priority: Low
|| Affected: None
|| Category: gpu (cuda)
|| Tracker: Feature
|| Difficulty: None
|| PR:
|| Platform: None / None
```
output of GPU detectMultiScale returns multiple of detector size
-----------
```
Hi
I've started using GPU detectMultiScale on 2.3.1 but the rectangles returned are only multiples of detector size. eg for the face _alt version which is 20x20 then my reported faces are one of 20x20, 40x40, 60x60, etc .
The actual detection seems to be in the right place and reflects the scaling passed in. I checked with scalings from 1.1 through to 2.0. 1.1 gives good detection at all face sizes but running up to 2.0 gives holes in the detection as expected, and also the processing time reduces with increased scaling as expected.
So my question is why does GPU detectMultiScale only return multiple integers of the detector size when clearly it is using the scaling as given ? Is this a bug or a feature of the implementation.
My workaround is to apply the programmatic CPU detectMultiScale in a small ROI around the detected location of each face over the entire scale range bracketing the reported GPU size (i.e. if 40x40 then detect from 20x20 to 60x60 on a 60x60 pixel ROI). This is relatively low on CPU overhead as the ROI is smallish. Still it all adds up as I'm looking at crowd scenes.
The Graphics card is an older integrated G96M GPU. Querying for the number of multiprocessors (using the device info) returns 1. I think this implies it is one processor cluster as compared to 1 actual core processor unit. Maybe it is only 1 core ? I get 3 frames per second on 800x600 so not too bad for a 3 year old laptop GPU.
```
History
-------
##### brendan ruff on 2011-12-23 13:30
```
PS I have characterised the bug a bit better. The quantisation of the scale only applies between 20x20 to 40x40 for the 20x20 frontal face model. After 40x40 the face can be any size (eg 41x41), but between 20 and 40 the detector reports only either 20 or 40.
The CPU version reports any size. The workaround stands to redetect faces 40 pixels or below.
```
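The ROI-bracketing workaround described in the report can be sketched as below. The helper name, margin, and the ±20-pixel size bracket are illustrative assumptions, not code from the original report:

```python
# Sketch: expand a quantised GPU detection (size x size at x, y) into a
# small search ROI plus min/max size bounds for a CPU re-detection pass.
def roi_bracket(x, y, size, margin=0.5):
    """Return (roi, min_size, max_size) bracketing a quantised detection."""
    pad = int(size * margin)
    roi = (max(0, x - pad), max(0, y - pad), size + 2 * pad, size + 2 * pad)
    # bracket the reported size, never going below the 20x20 base window
    min_size = (max(20, size - 20), max(20, size - 20))
    max_size = (size + 20, size + 20)
    return roi, min_size, max_size

print(roi_bracket(100, 100, 40))
# → ((80, 80, 80, 80), (20, 20), (60, 60))
```

The returned bounds would then be passed to a CPU `detectMultiScale` call restricted to the ROI, recovering the true (non-quantised) face size at modest extra cost.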
##### Anatoly Baksheev on 2011-12-23 14:28
```
Anton, what do you think about this?
- Status deleted (Open)
```
##### Anton Obukhov on 2011-12-23 14:38
```
This is on purpose. The integral image is built only once for the input image, and instead of performing downsampling and recalculation of statistics, the decimation approach is used. Thus what happens is that the float scale parameter is rounded to the low integer and that scale is processed.
The code was written at times of Tesla compute architecture (SM 1.0-1.3 capability), but with the modern GPUs one could easily modify the code so that it is more scale-friendly:
- downsample image from previous level
- calculate statistics for the level
- use it in the classifier.
I had a plan to introduce such patch, but this is not a priority for me now, so no estimate provided.
```
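The decimation behaviour described above — the float scale factor rounded down to an integer before the window size is derived — can be modelled in a few lines. This is an illustrative model of the quantisation effect, not the actual CUDA implementation:

```python
# Why a decimation-based cascade reports only integer multiples of the
# base detector window: the float scale is truncated to an integer.
def reported_window(base_size, scale):
    effective_scale = max(1, int(scale))  # round down to nearest integer
    return base_size * effective_scale

# Sweep scales 1.0 .. 3.0 for a 20x20 base window:
sizes = sorted({reported_window(20, s / 10) for s in range(10, 31)})
print(sizes)  # → [20, 40, 60] — only multiples of the base window appear
```

This matches the reported symptom: every detection comes back as 20x20, 40x40, 60x60, etc., even though intermediate scales were evaluated.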
##### Alexander Shishkov on 2012-03-22 14:35
```
- Status set to Open
```
##### Marina Kolpakova on 2012-06-28 12:43
```
- Target version set to 3.0
- Assignee changed from Anton Obukhov to Marina Kolpakova
```
##### Marina Kolpakova on 2013-04-05 23:29
```
- Assignee deleted (Marina Kolpakova)
```
|
1.0
|
output of GPU detectMultiScale returns multiple of detector size - Transferred from http://code.opencv.org/issues/1525
```
|| brendan ruff on 2011-12-23 13:12
|| Priority: Low
|| Affected: None
|| Category: gpu (cuda)
|| Tracker: Feature
|| Difficulty: None
|| PR:
|| Platform: None / None
```
output of GPU detectMultiScale returns multiple of detector size
-----------
```
Hi
I've started using GPU detectMultiScale on 2.3.1 but the rectangles returned are only multiples of detector size. eg for the face _alt version which is 20x20 then my reported faces are one of 20x20, 40x40, 60x60, etc .
The actual detection seems to be in the right place and reflects the scaling passed in. I checked with scalings from 1.1 through to 2.0. 1.1 gives good detection at all face sizes but running up to 2.0 gives holes in the detection as expected, and also the processing time reduces with increased scaling as expected.
So my question is why does GPU detectMultiScale only return multiple integers of the detector size when clearly it is using the scaling as given ? Is this a bug or a feature of the implementation.
My workaround is to apply the programmatic CPU detectMultiScale in a small ROI around the detected location of each face over the entire scale range bracketing the reported GPU size (i.e. if 40x40 then detect from 20x20 to 60x60 on a 60x60 pixel ROI). This is relatively low on CPU overhead as the ROI is smallish. Still it all adds up as I'm looking at crowd scenes.
The Graphics card is an older integrated G96M GPU. Querying for the number of multiprocessors (using the device info) returns 1. I think this implies it is one processor cluster as compared to 1 actual core processor unit. Maybe it is only 1 core ? I get 3 frames per second on 800x600 so not too bad for a 3 year old laptop GPU.
```
History
-------
##### brendan ruff on 2011-12-23 13:30
```
PS I have characterised the bug a bit better. The quantisation of the scale only applies between 20x20 to 40x40 for the 20x20 frontal face model. After 40x40 the face can be any size (eg 41x41), but between 20 and 40 the detector reports only either 20 or 40.
The CPU version reports any size. The workaround stands to redetect faces 40 pixels or below.
```
##### Anatoly Baksheev on 2011-12-23 14:28
```
Anton, what do you think about this?
- Status deleted (Open)
```
##### Anton Obukhov on 2011-12-23 14:38
```
This is on purpose. The integral image is built only once for the input image, and instead of performing downsampling and recalculation of statistics, the decimation approach is used. Thus what happens is that the float scale parameter is rounded to the low integer and that scale is processed.
The code was written at times of Tesla compute architecture (SM 1.0-1.3 capability), but with the modern GPUs one could easily modify the code so that it is more scale-friendly:
- downsample image from previous level
- calculate statistics for the level
- use it in the classifier.
I had a plan to introduce such patch, but this is not a priority for me now, so no estimate provided.
```
##### Alexander Shishkov on 2012-03-22 14:35
```
- Status set to Open
```
##### Marina Kolpakova on 2012-06-28 12:43
```
- Target version set to 3.0
- Assignee changed from Anton Obukhov to Marina Kolpakova
```
##### Marina Kolpakova on 2013-04-05 23:29
```
- Assignee deleted (Marina Kolpakova)
```
|
non_process
|
output of gpu detectmultiscale returns multiple of detector size transferred from brendan ruff on priority low affected none category gpu cuda tracker feature difficulty none pr platform none none output of gpu detectmultiscale returns multiple of detector size hi i ve started using gpu detectmultiscale on but the rectangles returned are only multiples of detector size eg for the face alt version which is then my reported faces are one of etc the actual detection seems to be in the right place and reflects the scaling passed in i checked with scalings from through to gives good detection at all faces sizes but running up to gives holes in the detection as expected and also the processing time reduces with inreased scaling as expected so my question is why does gpu detectmultiscale only return multiple integers of the detector size when clearly it is using the scaling as given is this a bug or a feature of the implementation my workaround is to apply the programmatic cpu detectmultiscale in a small roi around the detected location of each face over the entire scale range bracketing the reported gpu size ie if then detect from to on a pixel roi this is relatively low on cpu overhead as the roi is smallish still it all adds up as i m looking at crowd scenes the graphics card is an older integrated gpu querying for the number of multiprocessors using the device info returns i think this implies it is one processor cluster as compared to actual core processor unit maybe it is only core i get frames per second on so not too bad for a year old laptop gpu history brendan ruff on ps i have characterised the bug a bit better the quantisation of the scale only applies between to for the frontal face model after the face can be any size eg but between and the detector reports only either or the cpu version reports any size the workaround stands to redetect faces pixels or below anatoly baksheev on anton what do you think about this status deleted open anton obukhov on this is 
on purpose the integral image is built only once for the input image and instead of performing downsampling and recalculation of statistics the decimation approach is used thus what happens is that the float scale parameter is rounded to the low integer and that scale is processed the code was written at times of tesla compute architecture sm capability but with the modern gpus one could easily modify the code so that it is more scale friendly downsample image from previous level calculate statistics for the level use it in the classifier i had a plan to introduce such patch but this is not a priority for me now so no estimate provided alexander shishkov on status set to open marina kolpakova on target version set to assignee changed from anton obukhov to marina kolpakova marina kolpakova on assignee deleted marina kolpakova
| 0
|
559,551
| 16,565,566,239
|
IssuesEvent
|
2021-05-29 10:29:58
|
olive-editor/olive
|
https://api.github.com/repos/olive-editor/olive
|
closed
|
[NODES] Allow to connect nodes without holding Ctrl/Cmd button down
|
Medium Priority Nodes/Compositing
|
<!-- ⚠ Do not delete this issue template! ⚠ -->
**Commit Hash** <!-- 8 character string of letters/numbers in title bar or Help > About dialog (e.g. 3ea173c9) -->
59952fdd
**Platform** <!-- e.g. Windows 10, Ubuntu 20.04 or macOS 10.15 -->
macOS 10.14.6
**Summary**
Related to issue #1594, it would be nice to have not only a usable/easy way to disconnect a node from another but also to connect it. It's kind of a "hidden" feature that you need to drag with the left mouse button while pressing CTRL/COMMAND in order to connect a node to another one.
It would be nice if you could press a button in the node header and drag the mouse to "see the arrow". Some other software does it this way and it's so useful:
<img width="494" alt="button" src="https://user-images.githubusercontent.com/5942369/116401215-459e6280-a82b-11eb-935f-39f623cccb62.png">
Thanks
**Additional Information / Output**
|
1.0
|
[NODES] Allow to connect nodes without holding Ctrl/Cmd button down - <!-- ⚠ Do not delete this issue template! ⚠ -->
**Commit Hash** <!-- 8 character string of letters/numbers in title bar or Help > About dialog (e.g. 3ea173c9) -->
59952fdd
**Platform** <!-- e.g. Windows 10, Ubuntu 20.04 or macOS 10.15 -->
macOS 10.14.6
**Summary**
Related to issue #1594, it would be nice to have not only a usable/easy way to disconnect a node from another but also to connect it. It's kind of a "hidden" feature that you need to drag with the left mouse button while pressing CTRL/COMMAND in order to connect a node to another one.
It would be nice if you could press a button in the node header and drag the mouse to "see the arrow". Some other software does it this way and it's so useful:
<img width="494" alt="button" src="https://user-images.githubusercontent.com/5942369/116401215-459e6280-a82b-11eb-935f-39f623cccb62.png">
Thanks
**Additional Information / Output**
|
non_process
|
allow to connect nodes without holding ctrl cmd button down commit hash about dialog e g platform macos summary related with issue it would be nice to have no only a usable easy way to disconnect a node with other but also to connect it it s kind a hidden feature the need of drag the left mouse button while pressing ctrl command in order to be able to connect a node with another one it would be nice that you can press a button in the node header and drag the mouse to see the arrow some other softwares do this way and it s so useful img width alt button src thanks additional information output
| 0
|
8,992
| 12,102,391,774
|
IssuesEvent
|
2020-04-20 16:37:18
|
nanoframework/Home
|
https://api.github.com/repos/nanoframework/Home
|
closed
|
CustomAttributes on classes get lost during metadata processing
|
Area: Metadata Processor Status: FIXED Type: Bug
|
## Details about Problem
**nanoFramework area:** Visual Studio extension (metadata-processor)
**VS extension version:** 2019.1.8.11
## Description
CustomAttributes, like TestClass, get lost during the "minimalize" phase of metadata processing.
## Detailed repro steps so we can see the same problem
1. Create class having one CustomAttribute on it.
2. Add some code which calls GetCustomAttributes on that class
3. Deploy to your MCU
4. During runtime you will find 0 attributes as a result of the call at step 2.
## Expected behaviour
CustomAttributes must be returned.
|
1.0
|
CustomAttributes on classes get lost during metadata processing - ## Details about Problem
**nanoFramework area:** Visual Studio extension (metadata-processor)
**VS extension version:** 2019.1.8.11
## Description
CustomAttributes, like TestClass, get lost during the "minimalize" phase of metadata processing.
## Detailed repro steps so we can see the same problem
1. Create class having one CustomAttribute on it.
2. Add some code which calls GetCustomAttributes on that class
3. Deploy to your MCU
4. During runtime you will find 0 attributes as a result of the call at step 2.
## Expected behaviour
CustomAttributes must be returned.
|
process
|
customattributes on classes get lost during metadata processing details about problem nanoframework area visual studio extension metadata processor vs extension version description customattributes like testclass get lost during minimalize phase of metadata processing detailed repro steps so we can see the same problem create class having one customattribute on it add some code which calls getcustomattributes on that class deploy to your mcu during runtime you will find attributes as a result of the call at step expected behaviour customattributes must have returned
| 1
|
253,424
| 19,100,764,014
|
IssuesEvent
|
2021-11-29 22:12:51
|
CMPUT301F21T44/HelloHabits
|
https://api.github.com/repos/CMPUT301F21T44/HelloHabits
|
closed
|
Create Javadoc on all classes
|
documentation
|
Ideally whoever creates new classes and methods should do them, but...
|
1.0
|
Create Javadoc on all classes - Ideally whoever creates new classes and methods should do them, but...
|
non_process
|
create javadoc on all classes ideally whoever creates new classes and methods should do them but
| 0
|
7,420
| 10,542,791,183
|
IssuesEvent
|
2019-10-02 13:51:22
|
prisma/studio
|
https://api.github.com/repos/prisma/studio
|
closed
|
Studio doesn't show any data
|
bug/0-needs-info kind/bug process/candidate
|
I'm trying to run this dataset using Prisma Studio: https://github.com/infoverload/datasets/tree/master/datasets/sqlite/climate
When running `prisma2 dev`, I successfully get an endpoint for Studio, but it doesn't show any data:

The SQLite browser proves that there is data in the DB file though:

- `City` has 105 records
- `MonthlyAvg` has 1260 records
|
1.0
|
Studio doesn't show any data - I'm trying to run this dataset using Prisma Studio: https://github.com/infoverload/datasets/tree/master/datasets/sqlite/climate
When running `prisma2 dev`, I successfully get an endpoint for Studio, but it doesn't show any data:

The SQLite browser proves that there is data in the DB file though:

- `City` has 105 records
- `MonthlyAvg` has 1260 records
|
process
|
studio doesn t show any data i m trying to run this dataset using prisma studio when running dev i successfully get an endpoint for studio but it doesn t show any data the sqlite browser proves that there is data in the db file though city has records monthlyavg has records
| 1
|
21,213
| 28,289,335,238
|
IssuesEvent
|
2023-04-09 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 7 Apr 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Recovering Continuous Scene Dynamics from A Single Blurry Image with Events
- **Authors:** Zhangyi Cheng, Xiang Zhang, Lei Yu, Jianzhuang Liu, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02695
- **Pdf link:** https://arxiv.org/pdf/2304.02695
- **Abstract**
This paper aims at demystifying a single motion-blurred image with events and revealing temporally continuous scene dynamics encrypted behind motion blurs. To achieve this end, an Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events, enabling the latent sharp image restoration of arbitrary timestamps in the range of imaging exposures. Specifically, a dual attention transformer is proposed to efficiently leverage merits from both modalities, i.e., the high temporal resolution of event features and the smoothness of image features, alleviating temporal ambiguities while suppressing the event noise. The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps. Motion- and texture-guided supervisions are employed simultaneously to enhance restorations of the non-referenced timestamps and improve the overall sharpness. Experiments on synthetic, semi-synthetic, and real-world datasets demonstrate that our proposed method outperforms state-of-the-art methods by a large margin in terms of both objective PSNR and SSIM measurements and subjective evaluations.
### Boundary-Denoising for Video Activity Localization
- **Authors:** Mengmeng Xu, Mattia Soldan, Jialin Gao, Shuming Liu, Juan-Manuel Pérez-Rúa, Bernard Ghanem
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02934
- **Pdf link:** https://arxiv.org/pdf/2304.02934
- **Abstract**
Video activity localization aims at understanding the semantic content in long untrimmed videos and retrieving actions of interest. The retrieved action with its start and end locations can be used for highlight generation, temporal action detection, etc. Unfortunately, learning the exact boundary location of activities is highly challenging because temporal activities are continuous in time, and there are often no clear-cut transitions between actions. Moreover, the definition of the start and end of events is subjective, which may confuse the model. To alleviate the boundary ambiguity, we propose to study the video activity localization problem from a denoising perspective. Specifically, we propose an encoder-decoder model named DenoiseLoc. During training, a set of action spans is randomly generated from the ground truth with a controlled noise scale. Then we attempt to reverse this process by boundary denoising, allowing the localizer to predict activities with precise boundaries and resulting in faster convergence speed. Experiments show that DenoiseLoc advances in several video activity understanding tasks. For example, we observe a gain of +12.36% average mAP on QV-Highlights dataset and +1.64% mAP@0.5 on THUMOS'14 dataset over the baseline. Moreover, DenoiseLoc achieves state-of-the-art performance on TACoS and MAD datasets, but with much fewer predictions compared to other current methods.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Hierarchical B-frame Video Coding Using Two-Layer CANF without Motion Coding
- **Authors:** David Alexandre, Hsueh-Ming Hang, Wen-Hsiao Peng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2304.02690
- **Pdf link:** https://arxiv.org/pdf/2304.02690
- **Abstract**
Typical video compression systems consist of two main modules: motion coding and residual coding. This general architecture is adopted by classical coding schemes (such as international standards H.265 and H.266) and deep learning-based coding schemes. We propose a novel B-frame coding architecture based on two-layer Conditional Augmented Normalization Flows (CANF). It has the striking feature of not transmitting any motion information. Our proposed idea of video compression without motion coding offers a new direction for learned video coding. Our base layer is a low-resolution image compressor that replaces the full-resolution motion compressor. The low-resolution coded image is merged with the warped high-resolution images to generate a high-quality image as a conditioning signal for the enhancement-layer image coding in full resolution. One advantage of this architecture is significantly reduced computational complexity due to eliminating the motion information compressor. In addition, we adopt a skip-mode coding technique to reduce the transmitted latent samples. The rate-distortion performance of our scheme is slightly lower than that of the state-of-the-art learned B-frame coding scheme, B-CANF, but outperforms other learned B-frame coding schemes. However, compared to B-CANF, our scheme saves 45% of multiply-accumulate operations (MACs) for encoding and 27% of MACs for decoding. The code is available at https://nycu-clab.github.io.
## Keyword: RAW
### Learning Knowledge-Rich Sequential Model for Planar Homography Estimation in Aerial Video
- **Authors:** Pu Li, Xiaobai Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02715
- **Pdf link:** https://arxiv.org/pdf/2304.02715
- **Abstract**
This paper presents an unsupervised approach that leverages raw aerial videos to learn to estimate planar homographic transformation between consecutive video frames. Previous learning-based estimators work on pairs of images to estimate their planar homographic transformations but suffer from severe over-fitting issues, especially when applying over aerial videos. To address this concern, we develop a sequential estimator that directly processes a sequence of video frames and estimates their pairwise planar homographic transformations in batches. We also incorporate a set of spatial-temporal knowledge to regularize the learning of such a sequence-to-sequence model. We collect a set of challenging aerial videos and compare the proposed method to the alternative algorithms. Empirical studies suggest that our sequential model achieves significant improvement over alternative image-based methods and the knowledge-rich regularization further boosts our system performance. Our codes and dataset could be found at https://github.com/Paul-LiPu/DeepVideoHomography
### Uncurated Image-Text Datasets: Shedding Light on Demographic Bias
- **Authors:** Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computers and Society (cs.CY)
- **Arxiv link:** https://arxiv.org/abs/2304.02828
- **Pdf link:** https://arxiv.org/pdf/2304.02828
- **Abstract**
The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations. It is known that even small but manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevailing vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them.
### SketchFFusion: Sketch-guided image editing with diffusion model
- **Authors:** Weihang Mao, Bo Han, Zihao Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.03174
- **Pdf link:** https://arxiv.org/pdf/2304.03174
- **Abstract**
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user, while maintaining the original status of the unedited areas. Due to the high cost of acquiring human sketches, previous works mostly relied on edge maps as a substitute for sketches, but sketches possess more rich structural information. In this paper, we propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user. Simultaneously, current image editing methods often face challenges such as image distortion, training cost, and loss of fine details in the sketch. To address these limitations, We propose a conditional diffusion model (SketchFFusion) based on the sketch structure vector. We evaluate the generative performance of our model and demonstrate that it outperforms existing methods.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 7 Apr 23 - ## Keyword: events
### Recovering Continuous Scene Dynamics from A Single Blurry Image with Events
- **Authors:** Zhangyi Cheng, Xiang Zhang, Lei Yu, Jianzhuang Liu, Wen Yang, Gui-Song Xia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02695
- **Pdf link:** https://arxiv.org/pdf/2304.02695
- **Abstract**
This paper aims at demystifying a single motion-blurred image with events and revealing temporally continuous scene dynamics encrypted behind motion blurs. To achieve this end, an Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events, enabling the latent sharp image restoration of arbitrary timestamps in the range of imaging exposures. Specifically, a dual attention transformer is proposed to efficiently leverage merits from both modalities, i.e., the high temporal resolution of event features and the smoothness of image features, alleviating temporal ambiguities while suppressing the event noise. The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps. Motion- and texture-guided supervisions are employed simultaneously to enhance restorations of the non-referenced timestamps and improve the overall sharpness. Experiments on synthetic, semi-synthetic, and real-world datasets demonstrate that our proposed method outperforms state-of-the-art methods by a large margin in terms of both objective PSNR and SSIM measurements and subjective evaluations.
### Boundary-Denoising for Video Activity Localization
- **Authors:** Mengmeng Xu, Mattia Soldan, Jialin Gao, Shuming Liu, Juan-Manuel Pérez-Rúa, Bernard Ghanem
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02934
- **Pdf link:** https://arxiv.org/pdf/2304.02934
- **Abstract**
Video activity localization aims at understanding the semantic content in long untrimmed videos and retrieving actions of interest. The retrieved action with its start and end locations can be used for highlight generation, temporal action detection, etc. Unfortunately, learning the exact boundary location of activities is highly challenging because temporal activities are continuous in time, and there are often no clear-cut transitions between actions. Moreover, the definition of the start and end of events is subjective, which may confuse the model. To alleviate the boundary ambiguity, we propose to study the video activity localization problem from a denoising perspective. Specifically, we propose an encoder-decoder model named DenoiseLoc. During training, a set of action spans is randomly generated from the ground truth with a controlled noise scale. Then we attempt to reverse this process by boundary denoising, allowing the localizer to predict activities with precise boundaries and resulting in faster convergence speed. Experiments show that DenoiseLoc advances in several video activity understanding tasks. For example, we observe a gain of +12.36% average mAP on QV-Highlights dataset and +1.64% mAP@0.5 on THUMOS'14 dataset over the baseline. Moreover, DenoiseLoc achieves state-of-the-art performance on TACoS and MAD datasets, but with far fewer predictions compared to other current methods.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Hierarchical B-frame Video Coding Using Two-Layer CANF without Motion Coding
- **Authors:** David Alexandre, Hsueh-Ming Hang, Wen-Hsiao Peng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2304.02690
- **Pdf link:** https://arxiv.org/pdf/2304.02690
- **Abstract**
Typical video compression systems consist of two main modules: motion coding and residual coding. This general architecture is adopted by classical coding schemes (such as international standards H.265 and H.266) and deep learning-based coding schemes. We propose a novel B-frame coding architecture based on two-layer Conditional Augmented Normalization Flows (CANF). It has the striking feature of not transmitting any motion information. Our proposed idea of video compression without motion coding offers a new direction for learned video coding. Our base layer is a low-resolution image compressor that replaces the full-resolution motion compressor. The low-resolution coded image is merged with the warped high-resolution images to generate a high-quality image as a conditioning signal for the enhancement-layer image coding in full resolution. One advantage of this architecture is significantly reduced computational complexity due to eliminating the motion information compressor. In addition, we adopt a skip-mode coding technique to reduce the transmitted latent samples. The rate-distortion performance of our scheme is slightly lower than that of the state-of-the-art learned B-frame coding scheme, B-CANF, but outperforms other learned B-frame coding schemes. However, compared to B-CANF, our scheme saves 45% of multiply-accumulate operations (MACs) for encoding and 27% of MACs for decoding. The code is available at https://nycu-clab.github.io.
## Keyword: RAW
### Learning Knowledge-Rich Sequential Model for Planar Homography Estimation in Aerial Video
- **Authors:** Pu Li, Xiaobai Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.02715
- **Pdf link:** https://arxiv.org/pdf/2304.02715
- **Abstract**
This paper presents an unsupervised approach that leverages raw aerial videos to learn to estimate planar homographic transformation between consecutive video frames. Previous learning-based estimators work on pairs of images to estimate their planar homographic transformations but suffer from severe over-fitting issues, especially when applying over aerial videos. To address this concern, we develop a sequential estimator that directly processes a sequence of video frames and estimates their pairwise planar homographic transformations in batches. We also incorporate a set of spatial-temporal knowledge to regularize the learning of such a sequence-to-sequence model. We collect a set of challenging aerial videos and compare the proposed method to the alternative algorithms. Empirical studies suggest that our sequential model achieves significant improvement over alternative image-based methods and the knowledge-rich regularization further boosts our system performance. Our codes and dataset could be found at https://github.com/Paul-LiPu/DeepVideoHomography
### Uncurated Image-Text Datasets: Shedding Light on Demographic Bias
- **Authors:** Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computers and Society (cs.CY)
- **Arxiv link:** https://arxiv.org/abs/2304.02828
- **Pdf link:** https://arxiv.org/pdf/2304.02828
- **Abstract**
The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations. It is known that even small but manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevailing vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them.
### SketchFFusion: Sketch-guided image editing with diffusion model
- **Authors:** Weihang Mao, Bo Han, Zihao Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.03174
- **Pdf link:** https://arxiv.org/pdf/2304.03174
- **Abstract**
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user, while maintaining the original status of the unedited areas. Due to the high cost of acquiring human sketches, previous works mostly relied on edge maps as a substitute for sketches, but sketches possess richer structural information. In this paper, we propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user. Simultaneously, current image editing methods often face challenges such as image distortion, training cost, and loss of fine details in the sketch. To address these limitations, we propose a conditional diffusion model (SketchFFusion) based on the sketch structure vector. We evaluate the generative performance of our model and demonstrate that it outperforms existing methods.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri apr keyword events recovering continuous scene dynamics from a single blurry image with events authors zhangyi cheng xiang zhang lei yu jianzhuang liu wen yang gui song xia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper aims at demystifying a single motion blurred image with events and revealing temporally continuous scene dynamics encrypted behind motion blurs to achieve this end an implicit video function ivf is learned to represent a single motion blurred image with concurrent events enabling the latent sharp image restoration of arbitrary timestamps in the range of imaging exposures specifically a dual attention transformer is proposed to efficiently leverage merits from both modalities i e the high temporal resolution of event features and the smoothness of image features alleviating temporal ambiguities while suppressing the event noise the proposed network is trained only with the supervision of ground truth images of limited referenced timestamps motion and texture guided supervisions are employed simultaneously to enhance restorations of the non referenced timestamps and improve the overall sharpness experiments on synthetic semi synthetic and real world datasets demonstrate that our proposed method outperforms state of the art methods by a large margin in terms of both objective psnr and ssim measurements and subjective evaluations boundary denoising for video activity localization authors mengmeng xu mattia soldan jialin gao shuming liu juan manuel pérez rúa bernard ghanem subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract video activity localization aims at understanding the semantic content in long untrimmed videos and retrieving actions of interest the retrieved action with its start and end locations can be used for highlight generation temporal action detection etc unfortunately learning the exact boundary location of activities is highly 
challenging because temporal activities are continuous in time and there are often no clear cut transitions between actions moreover the definition of the start and end of events is subjective which may confuse the model to alleviate the boundary ambiguity we propose to study the video activity localization problem from a denoising perspective specifically we propose an encoder decoder model named denoiseloc during training a set of action spans is randomly generated from the ground truth with a controlled noise scale then we attempt to reverse this process by boundary denoising allowing the localizer to predict activities with precise boundaries and resulting in faster convergence speed experiments show that denoiseloc advances in several video activity understanding tasks for example we observe a gain of average map on qv highlights dataset and map on thumos dataset over the baseline moreover denoiseloc achieves state of the art performance on tacos and mad datasets but with much fewer predictions compared to other current methods keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression hierarchical b frame video coding using two layer canf without motion coding authors david alexandre hsueh ming hang wen hsiao peng subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract typical video compression systems consist of two main modules motion coding and residual coding this general architecture is adopted by classical coding schemes such as international standards h and h and deep learning based coding schemes we propose a novel b frame coding architecture based on two layer conditional augmented normalization flows canf 
it has the striking feature of not transmitting any motion information our proposed idea of video compression without motion coding offers a new direction for learned video coding our base layer is a low resolution image compressor that replaces the full resolution motion compressor the low resolution coded image is merged with the warped high resolution images to generate a high quality image as a conditioning signal for the enhancement layer image coding in full resolution one advantage of this architecture is significantly reduced computational complexity due to eliminating the motion information compressor in addition we adopt a skip mode coding technique to reduce the transmitted latent samples the rate distortion performance of our scheme is slightly lower than that of the state of the art learned b frame coding scheme b canf but outperforms other learned b frame coding schemes however compared to b canf our scheme saves of multiply accumulate operations macs for encoding and of macs for decoding the code is available at keyword raw learning knowledge rich sequential model for planar homography estimation in aerial video authors pu li xiaobai liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper presents an unsupervised approach that leverages raw aerial videos to learn to estimate planar homographic transformation between consecutive video frames previous learning based estimators work on pairs of images to estimate their planar homographic transformations but suffer from severe over fitting issues especially when applying over aerial videos to address this concern we develop a sequential estimator that directly processes a sequence of video frames and estimates their pairwise planar homographic transformations in batches we also incorporate a set of spatial temporal knowledge to regularize the learning of such a sequence to sequence model we collect a set of challenging aerial videos and compare the proposed 
method to the alternative algorithms empirical studies suggest that our sequential model achieves significant improvement over alternative image based methods and the knowledge rich regularization further boosts our system performance our codes and dataset could be found at uncurated image text datasets shedding light on demographic bias authors noa garcia yusuke hirota yankun wu yuta nakashima subjects computer vision and pattern recognition cs cv computers and society cs cy arxiv link pdf link abstract the increasing tendency to collect large and uncurated datasets to train vision and language models has raised concerns about fair representations it is known that even small but manually annotated datasets such as mscoco are affected by societal bias this problem far from being solved may be getting worse with data crawled from the internet without much control in addition the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging our first contribution is to annotate part of the google conceptual captions dataset widely used for training vision and language models with four demographic and two contextual attributes our second contribution is to conduct a comprehensive analysis of the annotations focusing on how different demographic groups are represented our last contribution lies in evaluating three prevailing vision and language tasks image captioning text image clip embeddings and text to image generation showing that societal bias is a persistent problem in all of them sketchffusion sketch guided image editing with diffusion model authors weihang mao bo han zihao wang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract sketch guided image editing aims to achieve local fine tuning of the image based on the sketch information provided by the user while maintaining the original status of the unedited areas due to the high cost of acquiring human sketches previous 
works mostly relied on edge maps as a substitute for sketches but sketches possess more rich structural information in this paper we propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user simultaneously current image editing methods often face challenges such as image distortion training cost and loss of fine details in the sketch to address these limitations we propose a conditional diffusion model sketchffusion based on the sketch structure vector we evaluate the generative performance of our model and demonstrate that it outperforms existing methods keyword raw image there is no result
| 1
|
15,230
| 2,850,337,105
|
IssuesEvent
|
2015-05-31 13:56:34
|
damonkohler/android-scripting
|
https://api.github.com/repos/damonkohler/android-scripting
|
closed
|
Multiple Cleans Should Not Be Required or Recommended
|
auto-migrated Priority-Medium Type-Defect
|
```
What device(s) are you experiencing the problem on?
This is a documentation error, so not applicable.
What firmware version are you running on the device?
This is a documentation error, so not applicable.
What steps will reproduce the problem?
This is a documentation error, so not applicable.
What is the expected output? What do you see instead?
This is a documentation error, so not applicable.
What version of the product are you using? On what operating system?
Latest HG clone. Debian Linux (Stable).
Please provide any additional information below.
The build instructions ('android/ScriptingLayerForAndroid/README') say
"Project-->Clean-->All (this may need to be run several times)", which is silly.
Either it's a waste of time and effort, and therefore, it shouldn't be
recommended. Or, it is actually necessary, and the build system is broken. In
that case, a bug report should be issued and referenced from the (otherwise
silly) instructions. (I searched all open and closed issues for "clean", and no
indication of one.)
```
Original issue reported on code.google.com by `Karatorian` on 15 Sep 2012 at 9:28
|
1.0
|
Multiple Cleans Should Not Be Required or Recommended - ```
What device(s) are you experiencing the problem on?
This is a documentation error, so not applicable.
What firmware version are you running on the device?
This is a documentation error, so not applicable.
What steps will reproduce the problem?
This is a documentation error, so not applicable.
What is the expected output? What do you see instead?
This is a documentation error, so not applicable.
What version of the product are you using? On what operating system?
Latest HG clone. Debian Linux (Stable).
Please provide any additional information below.
The build instructions ('android/ScriptingLayerForAndroid/README') say
"Project-->Clean-->All (this may need to be run several times)", which is silly.
Either it's a waste of time and effort, and therefore, it shouldn't be
recommended. Or, it is actually necessary, and the build system is broken. In
that case, a bug report should be issued and referenced from the (otherwise
silly) instructions. (I searched all open and closed issues for "clean", and no
indication of one.)
```
Original issue reported on code.google.com by `Karatorian` on 15 Sep 2012 at 9:28
|
non_process
|
multiple cleans should not be required or recommended what device s are you experiencing the problem on this is a documentation error so not applicable what firmware version are you running on the device this is a documentation error so not applicable what steps will reproduce the problem this is a documentation error so not applicable what is the expected output what do you see instead this is a documentation error so not applicable what version of the product are you using on what operating system latest hg clone debian linux stable please provide any additional information below the build instructions android scriptinglayerforandroid readme say project clean all this may need to be run several times which is silly either it s a waste of time and effort and therefore it shouldn t be recommended or it is actually necessary and the build system is broken in that case a bug report should be issued and referenced from the otherwise silly instructions i searched all open and closed issues for clean and no indication of one original issue reported on code google com by karatorian on sep at
| 0
|
14,134
| 8,484,775,728
|
IssuesEvent
|
2018-10-26 04:38:01
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
v9.x seems to be slowing down ?
|
V8 Engine performance v10.x
|
v9.x seems to be slowing down?
I have a computational function that performs `benchmark` tests in different versions of `node`.
```js
// test function
function compute(w, v, s, k, f, vr1 = 1, vr2 = 0) {
return s + (Math.max(v * vr2 || v / vr1, w) - f) * k;
}
```
Results are as follows:
```json
(nodejs v7.9) 63,482,006 ops/sec ±0.93% (90 runs sampled)
(nodejs v8.7) 539,999,198 ops/sec ±0.48% (89 runs sampled)
(nodejs v8.9) 543,188,459 ops/sec ±0.53% (91 runs sampled)
(nodejs v9.1) 487,698,977 ops/sec ±0.64% (89 runs sampled)
```
|
True
|
v9.x seems to be slowing down ? - v9.x seems to be slowing down?
I have a computational function that performs `benchmark` tests in different versions of `node`.
```js
// test function
function compute(w, v, s, k, f, vr1 = 1, vr2 = 0) {
return s + (Math.max(v * vr2 || v / vr1, w) - f) * k;
}
```
Results are as follows:
```json
(nodejs v7.9) 63,482,006 ops/sec ±0.93% (90 runs sampled)
(nodejs v8.7) 539,999,198 ops/sec ±0.48% (89 runs sampled)
(nodejs v8.9) 543,188,459 ops/sec ±0.53% (91 runs sampled)
(nodejs v9.1) 487,698,977 ops/sec ±0.64% (89 runs sampled)
```
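The hot function from the record above can be ported and sanity-checked outside Node. A small Python sketch of the same micro-benchmark (a port for illustration only — the original runs on V8, and the absolute ops/sec figures are not reproducible this way):

```python
import timeit

def compute(w, v, s, k, f, vr1=1, vr2=0):
    # Port of the JS expression: `v * vr2 || v / vr1` falls back to
    # v / vr1 when vr2 is 0; Python's `or` reproduces this for numeric zero.
    return s + (max(v * vr2 or v / vr1, w) - f) * k

# Timing one call pattern; numbers are machine- and VM-dependent, so this
# only sanity-checks the port, it does not reproduce the benchmark results.
elapsed = timeit.timeit(lambda: compute(2, 3, 1, 2, 0), number=100_000)
```

Regressions like the one reported are usually confirmed by running the same harness (here, a `benchmark`-style loop) across interpreter versions on identical hardware.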
|
non_process
|
x seems to be slowing down x seems to be slowing down i have a computational function that performs benchmark tests in different versions of node js test function function compute w v s k f return s math max v v w f k results are as follows json nodejs ops sec ± runs sampled nodejs ops sec ± runs sampled nodejs ops sec ± runs sampled nodejs ops sec ± runs sampled
| 0
|
14,821
| 18,160,164,116
|
IssuesEvent
|
2021-09-27 08:42:55
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
coprocessor: redact log
|
type/enhancement sig/coprocessor difficulty/easy component/security
|
## Development Task
In #7758 we are adding the `security.redact-info-log` option to TiKV, to hide user data from being printed into the info log. The option provides a way for users to avoid leaking sensitive data into the info log when they want to transfer the log out of the storage node for use in other systems. However, the PR only handles MVCC and storage engine KVs. It does not handle coprocessor logs. We need to redact logs for the coprocessor module when the option is on.
The task is part of https://github.com/pingcap/tidb/issues/18566 and we aim to include it in the 5.0 release.
|
1.0
|
coprocessor: redact log - ## Development Task
In #7758 we are adding the `security.redact-info-log` option to TiKV, to hide user data from being printed into the info log. The option provides a way for users to avoid leaking sensitive data into the info log when they want to transfer the log out of the storage node for use in other systems. However, the PR only handles MVCC and storage engine KVs. It does not handle coprocessor logs. We need to redact logs for the coprocessor module when the option is on.
The task is part of https://github.com/pingcap/tidb/issues/18566 and we aim to include it in the 5.0 release.
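The requested behaviour can be illustrated with a generic sketch (not TiKV's actual implementation, which is in Rust): when the redact flag is on, user-supplied data is masked before it reaches the log line, while the structural parts of the message stay intact. The function names and the placeholder character below are assumptions.

```python
REDACT_PLACEHOLDER = "?"

def redact(value, enabled):
    """Return a placeholder instead of user-supplied data when redaction is on."""
    return REDACT_PLACEHOLDER if enabled else value

def format_scan_log(start_key, end_key, redact_enabled):
    # Structural text survives; only the user data (the keys) is masked,
    # so the log line stays useful for debugging without leaking values.
    return (f"coprocessor scan range=[{redact(start_key, redact_enabled)}, "
            f"{redact(end_key, redact_enabled)})")
```

The key design point is that redaction happens at formatting time, so every log site that prints user data must go through the masking helper rather than interpolating values directly.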
|
process
|
coprocessor redact log development task in we are adding security redact info log option to tikv to hide user data from printing into info log the option provide a way for users to avoid leaking sensitive data into info log when they want to transfer the info log out of the storage node to use in other systems however the pr only handles mvcc and storage engine kvs it does not handle coprocessor logs we need to redact log for the coprocessor module when the option is on the task is part of and we target to include it in release
| 1
|
1,211
| 3,715,446,590
|
IssuesEvent
|
2016-03-03 01:46:54
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
opened
|
Validate mysql-monitor_writer_is_also_reader
|
ADMIN MONITOR PROTOCOL QUERY PROCESSOR
|
Create a set of automated test case to validate that variable mysql-monitor_writer_is_also_reader always works as expected
|
1.0
|
Validate mysql-monitor_writer_is_also_reader - Create a set of automated test case to validate that variable mysql-monitor_writer_is_also_reader always works as expected
|
process
|
validate mysql monitor writer is also reader create a set of automated test case to validate that variable mysql monitor writer is also reader always works as expected
| 1
|
286,649
| 8,791,235,978
|
IssuesEvent
|
2018-12-21 11:54:10
|
chmjs/chameleon-vuetify
|
https://api.github.com/repos/chmjs/chameleon-vuetify
|
closed
|
Add page color option
|
Priority:Medium Type:Enhancement
|
@Sneki, as we discussed, a page color option is needed.
It should be the same as on other elements: color picker + the option to clear the input.
|
1.0
|
Add page color option - @Sneki , as we spoken, page color option is needed .
It shoud be same as on other elements : color picker + input clear possibility.
|
non_process
|
add page color option sneki as we spoken page color option is needed it shoud be same as on other elements color picker input clear possibility
| 0
|
95,107
| 8,531,092,561
|
IssuesEvent
|
2018-11-04 08:02:05
|
ihhub/penguinV
|
https://api.github.com/repos/ihhub/penguinV
|
closed
|
Include BinaryDilate and BinaryErode functions into unit tests
|
Hacktoberfest unit tests
|
These functions are defined in `src/image_function.h` file. We have unit tests for image functions located at `test/unit_tests/unit_test_image_function.cpp`. The definitions of function pointers already exist within the file:
```cpp
typedef void (*BinaryDilateForm1)( Image & image, uint32_t dilationX, uint32_t dilationY );
typedef void (*BinaryDilateForm2)( Image & image, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t dilationX, uint32_t dilationY );
typedef void (*BinaryErodeForm1)( Image & image, uint32_t erosionX, uint32_t erosionY );
typedef void (*BinaryErodeForm2)( Image & image, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t erosionX, uint32_t erosionY );
```
We need to add test functions. In these functions for `BinaryDilate` we create a white (255) image but put few black (0) pixels on random places, for `BinaryErode` we create a black (0) image and put few white (255) pixels. The result should be fully white or fully black image.
After declaration of these test functions add needed 2 lines code in `namespace image_function`
|
1.0
|
Include BinaryDilate and BinaryErode functions into unit tests - These functions are defined in `src/image_function.h` file. We have unit tests for image functions located at `test/unit_tests/unit_test_image_function.cpp`. The definitions of function pointers already exist within the file:
```cpp
typedef void (*BinaryDilateForm1)( Image & image, uint32_t dilationX, uint32_t dilationY );
typedef void (*BinaryDilateForm2)( Image & image, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t dilationX, uint32_t dilationY );
typedef void (*BinaryErodeForm1)( Image & image, uint32_t erosionX, uint32_t erosionY );
typedef void (*BinaryErodeForm2)( Image & image, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t erosionX, uint32_t erosionY );
```
We need to add test functions. In these functions for `BinaryDilate` we create a white (255) image but put few black (0) pixels on random places, for `BinaryErode` we create a black (0) image and put few white (255) pixels. The result should be fully white or fully black image.
After declaration of these test functions add needed 2 lines code in `namespace image_function`
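The test logic the issue describes can be modeled outside C++. A Python sketch of the check (the real tests would call penguinV's `BinaryDilate`/`BinaryErode`; the naive 3x3 neighborhood operator here is an illustrative stand-in):

```python
def neighborhood_op(image, op):
    """Apply op (max = dilation, min = erosion) over each pixel's 3x3 neighborhood."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = op(
                image[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            )
    return out

def is_uniform(image, value):
    """True when every pixel equals value (the expected post-op state)."""
    return all(v == value for row in image for v in row)
```

A `BinaryDilate` test builds a white (255) image, sets a few isolated pixels to 0, dilates, and asserts the result is uniformly white; the `BinaryErode` test mirrors this with a black (0) image and a few white specks, expecting a uniformly black result.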
|
non_process
|
include binarydilate and binaryerode functions into unit tests these functions are defined in src image function h file we have unit tests for image functions located at test unit tests unit test image function cpp the definitions of function pointers already exist within the file cpp typedef void image image t dilationx t dilationy typedef void image image t x t y t width t height t dilationx t dilationy typedef void image image t erosionx t erosiony typedef void image image t x t y t width t height t erosionx t erosiony we need to add test functions in these functions for binarydilate we create a white image but put few black pixels on random places for binaryerode we create a black image and put few white pixels the result should be fully white or fully black image after declaration of these test functions add needed lines code in namespace image function
| 0
|
10,925
| 13,725,768,801
|
IssuesEvent
|
2020-10-03 20:00:52
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
#If...Then...#Else directive not recognized
|
antlr bug has-workaround parse-tree-preprocessing
|
**Rubberduck version information**
Version 2.4.1.4627
OS: Microsoft Windows NT 6.2.9200.0, x64
Host Product: Visual Basic x86
Host Version: 6.00.9782
Host Executable: VB6.EXE
**Description**
The #If...Then...#Else directive is not recognized.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a module with a method using inside it a #If...Then...#Else directive,
**Expected behavior**
#If...Then...#Else directive should be recognized.
**Screenshots**
N/A
**Logfile**
ERROR-2.4.1.4627;Rubberduck.Parsing.VBA.Parsing.ModuleParser;Syntax error; offending token 'If' at line 852, column 6 in the CodePaneCode version of module modMyModule.;
**Additional context**
```vb
Private Sub Main()
Dim a as Long: a= 10
Dim b as Long: b=5
Dim c as Long : c=a+b
'- CCG_VERSION1 = -1 and CCG_VERSION2 = -1 are Conditional Compilation Constants in the project propertis, under the Make tab.
#If CCG_VERSION1 Or _
CCG_VERSION2 Then
c=c+c
#else
c=c*c
#end if
Print c
End Sub
```
|
1.0
|
#If...Then...#Else directive not recognized - **Rubberduck version information**
Version 2.4.1.4627
OS: Microsoft Windows NT 6.2.9200.0, x64
Host Product: Visual Basic x86
Host Version: 6.00.9782
Host Executable: VB6.EXE
**Description**
The #If...Then...#Else directive is not recognized.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a module with a method using inside it a #If...Then...#Else directive,
**Expected behavior**
#If...Then...#Else directive should be recognized.
**Screenshots**
N/A
**Logfile**
ERROR-2.4.1.4627;Rubberduck.Parsing.VBA.Parsing.ModuleParser;Syntax error; offending token 'If' at line 852, column 6 in the CodePaneCode version of module modMyModule.;
**Additional context**
```vb
Private Sub Main()
Dim a as Long: a= 10
Dim b as Long: b=5
Dim c as Long : c=a+b
'- CCG_VERSION1 = -1 and CCG_VERSION2 = -1 are Conditional Compilation Constants in the project properties, under the Make tab.
#If CCG_VERSION1 Or _
CCG_VERSION2 Then
c=c+c
#else
c=c*c
#end if
Print c
End Sub
```
|
process
|
if then else directive not recognized rubberduck version information version os microsoft windows nt host product visual basic host version host executable exe description the if then else directive is not recognized to reproduce steps to reproduce the behavior have a module with a method using inside it a if then else directive expected behavior if then else directive should be recognized screenshots n a logfile error rubberduck parsing vba parsing moduleparser syntax error offending token if at line column in the codepanecode version of module modmymodule additional context vb private sub main dim a as long a dim b as long b dim c as long c a b ccg and ccg are conditional compilation constants in the project propertis under the make tab if ccg or ccg then c c c else c c c end if print c end sub
| 1
|
13,803
| 16,557,062,129
|
IssuesEvent
|
2021-05-28 15:02:49
|
EBIvariation/eva-opentargets
|
https://api.github.com/repos/EBIvariation/eva-opentargets
|
closed
|
Evidence string generation for the 21.06 release
|
Processing
|
The submission window for 21.06 is not finalised, but is expected to open approximately at the end of May.
As we discussed, it's best to submit sooner rather than later, in case there are any problems with generation or validation.
|
1.0
|
Evidence string generation for the 21.06 release - The submission window for 21.06 is not finalised, but is expected to open approximately at the end of May.
As we discussed, it's best to submit sooner rather than later, in case there are any problems with generation or validation.
|
process
|
evidence string generation for the release the submission window for is not finalised but is expected to open approximately at the end of may as we discussed it s best to submit sooner rather than later in case there are any problems with generation or validation
| 1
|
50,069
| 21,005,700,444
|
IssuesEvent
|
2022-03-29 22:23:59
|
hashicorp/terraform-provider-aws
|
https://api.github.com/repos/hashicorp/terraform-provider-aws
|
closed
|
resource/aws_db_instance: Should support enabling cross-region automated backups
|
enhancement service/rds
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
RDS supports enabling cross-region automated backups via the API as described here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReplicateBackups.html
### New or Affected Resource(s)
* aws_db_instance
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_db_instance" "this_db_instance" {
identifier = local.rds_name
allocated_storage = var.allocated_storage
max_allocated_storage = var.max_allocated_storage
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
allow_major_version_upgrade = var.allow_major_version_upgrade
availability_zone = data.aws_subnet.rds_subnet1.availability_zone
multi_az = var.multi_az
backup_retention_period = var.backup_retention_period
backup_window = var.backup_window
###### NEW CONFIGURATION
backup_replication_region = "us-east-2"
backup_replication_retention_period = 28
######
maintenance_window = var.maintenance_window
db_subnet_group_name = aws_db_subnet_group.this_db_subnet_group.name
deletion_protection = var.deletion_protection
engine = var.engine
engine_version = var.engine_version
kms_key_id = aws_kms_key.this_key.arn
storage_encrypted = "true"
parameter_group_name = aws_db_parameter_group.this_parameter_group.name
username = var.username
password = var.password
publicly_accessible = false
skip_final_snapshot = false
final_snapshot_identifier = "finalsnapshot-${local.rds_name}"
instance_class = var.instance_class
performance_insights_enabled = true
performance_insights_kms_key_id = aws_kms_key.this_key.arn
performance_insights_retention_period = var.performance_insights_retention_period
tags = {
Name = local.rds_name
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
|
1.0
|
resource/aws_db_instance: Should support enabling cross-region automated backups - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
RDS supports enabling cross-region automated backups via the API as described here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReplicateBackups.html
### New or Affected Resource(s)
* aws_db_instance
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_db_instance" "this_db_instance" {
identifier = local.rds_name
allocated_storage = var.allocated_storage
max_allocated_storage = var.max_allocated_storage
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
allow_major_version_upgrade = var.allow_major_version_upgrade
availability_zone = data.aws_subnet.rds_subnet1.availability_zone
multi_az = var.multi_az
backup_retention_period = var.backup_retention_period
backup_window = var.backup_window
###### NEW CONFIGURATION
backup_replication_region = "us-east-2"
backup_replication_retention_period = 28
######
maintenance_window = var.maintenance_window
db_subnet_group_name = aws_db_subnet_group.this_db_subnet_group.name
deletion_protection = var.deletion_protection
engine = var.engine
engine_version = var.engine_version
kms_key_id = aws_kms_key.this_key.arn
storage_encrypted = "true"
parameter_group_name = aws_db_parameter_group.this_parameter_group.name
username = var.username
password = var.password
publicly_accessible = false
skip_final_snapshot = false
final_snapshot_identifier = "finalsnapshot-${local.rds_name}"
instance_class = var.instance_class
performance_insights_enabled = true
performance_insights_kms_key_id = aws_kms_key.this_key.arn
performance_insights_retention_period = var.performance_insights_retention_period
tags = {
Name = local.rds_name
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://aws.amazon.com/about-aws/whats-new/2018/04/introducing-amazon-ec2-fleet/
--->
|
non_process
|
resource aws db instance should support enabling cross region automated backups community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description rds supports enabling cross region automated backups via the api as described here new or affected resource s aws db instance potential terraform configuration hcl resource aws db instance this db instance identifier local rds name allocated storage var allocated storage max allocated storage var max allocated storage apply immediately var apply immediately auto minor version upgrade var auto minor version upgrade allow major version upgrade var allow major version upgrade availability zone data aws subnet rds availability zone multi az var multi az backup retention period var backup retention period backup window var backup window new configuration backup replication region us east backup replication retention period maintenance window var maintenance window db subnet group name aws db subnet group this db subnet group name deletion protection var deletion protection engine var engine engine version var engine version kms key id aws kms key this key arn storage encrypted true parameter group name aws db parameter group this parameter group name username var username password var password publicly accessible false skip final snapshot false final snapshot identifier finalsnapshot local rds name instance class var instance class performance insights enabled true performance insights kms key id aws kms key this key arn performance insights retention period var performance insights retention period tags name local rds name references information about referencing 
github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example
| 0
|
14,580
| 17,703,483,042
|
IssuesEvent
|
2021-08-25 03:07:18
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - associatedOrganisms
|
Term - change Class - Organism Class - ResourceRelationship non-normative Process - complete
|
## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedOrganisms
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedOrganisms
* Organized in Class (e.g. Location, Taxon): Organism
* Definition of the term: **A list (concatenated and separated) of identifiers of other Organisms and the associations of this Organism to each of them.**
* Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other Organisms. Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).**
* Examples: **`"sibling of":"http://arctos.database.museum/guid/DMNS:Mamm:14171"`, `"parent of":"http://arctos.database.museum/guid/MSB:Mamm:196208" | "parent of":"http://arctos.database.museum/guid/MSB:Mamm:196523" | "sibling of":"http://arctos.database.museum/guid/MSB:Mamm:142638"`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedOrganisms-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussions around changes to relationshipOfResource (#194), around a new term relationshipOfResourceID (#186, #283), and changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedOrganisms definition and usage notes. Specifically, the directionality of the relationship should be made clear.
|
1.0
|
Change term - associatedOrganisms - ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): Consistency and clarity
* Proponents (who needs this change): Everyone
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedOrganisms
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedOrganisms
* Organized in Class (e.g. Location, Taxon): Organism
* Definition of the term: **A list (concatenated and separated) of identifiers of other Organisms and the associations of this Organism to each of them.**
* Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other Organisms. Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).**
* Examples: **`"sibling of":"http://arctos.database.museum/guid/DMNS:Mamm:14171"`, `"parent of":"http://arctos.database.museum/guid/MSB:Mamm:196208" | "parent of":"http://arctos.database.museum/guid/MSB:Mamm:196523" | "sibling of":"http://arctos.database.museum/guid/MSB:Mamm:142638"`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedOrganisms-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussions around changes to relationshipOfResource (#194), around a new term relationshipOfResourceID (#186, #283), and changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedOrganisms definition and usage notes. Specifically, the directionality of the relationship should be made clear.
|
process
|
change term associatedorganisms change term submitter john wieczorek justification why is this change necessary consistency and clarity proponents who needs this change everyone current term definition proposed new attributes of the term term name in lowercamelcase associatedorganisms organized in class e g location taxon organism definition of the term a list concatenated and separated of identifiers of other organisms and the associations of this organism to each of them usage comments recommendations regarding content etc this term can be used to provide a list of associations to other organisms note that the resourcerelationship class is an alternative means of representing associations and with more detail recommended best practice is to separate the values in a list with space vertical bar space examples sibling of parent of parent of sibling of refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable not in abcd discussions around changes to relationshipofresource around a new term relationshipofresourceid and changes to associatedoccurrences issue suggest that a clarification should also be made in the associatedorganisms definition and usage notes specifically the directionality of the relationship should be made clear
| 1
|
2,610
| 5,367,756,514
|
IssuesEvent
|
2017-02-22 05:58:19
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [FR] PAS VU À LA TÉLÉ #8 : LA PAUVRETÉ - CHRISTOPHE ROBERT - FONDATION ABBÉ PIERRE
|
Language: French Process: [6] Approved
|
# Video title
PAS VU À LA TÉLÉ #8 : LA PAUVRETÉ - CHRISTOPHE ROBERT - FONDATION ABBÉ PIERRE
# URL
https://www.youtube.com/watch?v=bdcMbUVfWRA&t=2s
# Youtube subtitles language
Français
# Duration
1:03:35
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=watch&action_mde_edit_form=1&bl=vmp&ui=hd&tab=captions&v=bdcMbUVfWRA&lang=fr
|
1.0
|
[subtitles] [FR] PAS VU À LA TÉLÉ #8 : LA PAUVRETÉ - CHRISTOPHE ROBERT - FONDATION ABBÉ PIERRE - # Video title
PAS VU À LA TÉLÉ #8 : LA PAUVRETÉ - CHRISTOPHE ROBERT - FONDATION ABBÉ PIERRE
# URL
https://www.youtube.com/watch?v=bdcMbUVfWRA&t=2s
# Youtube subtitles language
Français
# Duration
1:03:35
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=watch&action_mde_edit_form=1&bl=vmp&ui=hd&tab=captions&v=bdcMbUVfWRA&lang=fr
|
process
|
pas vu à la télé la pauvreté christophe robert fondation abbé pierre video title pas vu à la télé la pauvreté christophe robert fondation abbé pierre url youtube subtitles language français duration url subtitles
| 1
|
21,061
| 28,010,455,874
|
IssuesEvent
|
2023-03-27 18:13:08
|
esmero/strawberryfield
|
https://api.github.com/repos/esmero/strawberryfield
|
opened
|
Allow ap:tasks entry to define what JSON key can alternatively (other than label) drive the NODE title
|
enhancement JSON Preprocessors Events and Subscriber JMESPath
|
# What?
So far we have fixed that `label` key will drive always an ADO/NODE title. There are times (specially on multiligual Repositories) when we want to make one exception. `ap:tasks` is the key for machinable instruction giving to Archipelago.
This ISSUE will support code that will provide
```JSON
"ap:tasks": {
"ap:entitytitle": "JMESPATHEXPRESSION"
}
````
I'm keeping a lowercase no `_` nomenclature for these keys but happy to revision this @alliomeria @aksm if you think we should use CamelCase or `_` for this key.
The code will modify a few lines here
https://github.com/esmero/strawberryfield/blob/78f4bdb5ca57e522e85c139c3db99ca6cebdd728/src/EventSubscriber/StrawberryfieldEventPresaveSubscriberSetTitlefromMetadata.php
Thanks
|
1.0
|
Allow ap:tasks entry to define what JSON key can alternatively (other than label) drive the NODE title - # What?
So far we have fixed that `label` key will drive always an ADO/NODE title. There are times (specially on multiligual Repositories) when we want to make one exception. `ap:tasks` is the key for machinable instruction giving to Archipelago.
This ISSUE will support code that will provide
```JSON
"ap:tasks": {
"ap:entitytitle": "JMESPATHEXPRESSION"
}
````
I'm keeping a lowercase no `_` nomenclature for these keys but happy to revision this @alliomeria @aksm if you think we should use CamelCase or `_` for this key.
The code will modify a few lines here
https://github.com/esmero/strawberryfield/blob/78f4bdb5ca57e522e85c139c3db99ca6cebdd728/src/EventSubscriber/StrawberryfieldEventPresaveSubscriberSetTitlefromMetadata.php
Thanks
|
process
|
allow ap tasks entry to define what json key can alternatively other than label drive the node title what so far we have fixed that label key will drive always an ado node title there are times specially on multiligual repositories when we want to make one exception ap tasks is the key for machinable instruction giving to archipelago this issue will support code that will provide json ap tasks ap entitytitle jmespathexpression i m keeping a lowercase no nomenclature for these keys but happy to revision this alliomeria aksm if you think we should use camelcase or for this key the code will modify a few lines here thanks
| 1
|
62,225
| 6,782,630,106
|
IssuesEvent
|
2017-10-30 08:57:52
|
owncloud/client
|
https://api.github.com/repos/owncloud/client
|
closed
|
[Regression] In 2.4 "Credentials Wrong" state does not retrigger the password-asking form
|
bug ReadyToTest sev2-high
|
### Expected behavior
After receiving the 401: Unauthorized from the server (`<s:message>No public access to this resource., Username or password was incorrect, Username or password was incorrect</s:message>`) client 2.3.3 displays again the password dialog.
### Actual behavior
The status on the account tab stays `Connecting to <server> as <username>...` until it times out and switches to `Signed out...`:
```
[ warning sync.connectionvalidator ]: ******** Password is wrong! QNetworkReply::NetworkError(OperationCanceledError) "Operation canceled"
[ info gui.account.state ]: AccountState connection status change: "Credentials not ready" -> "Credentials Wrong"
```
### Steps to reproduce
1. Log out from any client's account and try to log in with the wrong password
|
1.0
|
[Regression] In 2.4 "Credentials Wrong" state does not retrigger the password-asking form - ### Expected behavior
After receiving the 401: Unauthorized from the server (`<s:message>No public access to this resource., Username or password was incorrect, Username or password was incorrect</s:message>`) client 2.3.3 displays again the password dialog.
### Actual behavior
The status on the account tab stays `Connecting to <server> as <username>...` until it times out and switches to `Signed out...`:
```
[ warning sync.connectionvalidator ]: ******** Password is wrong! QNetworkReply::NetworkError(OperationCanceledError) "Operation canceled"
[ info gui.account.state ]: AccountState connection status change: "Credentials not ready" -> "Credentials Wrong"
```
### Steps to reproduce
1. Log out from any client's account and try to log in with the wrong password
|
non_process
|
in credentials wrong state does not retrigger the password asking form expected behavior after receiving the unauthorized from the server no public access to this resource username or password was incorrect username or password was incorrect client displays again the password dialog actual behavior the status on the account tab stays connecting to as until it times out and switches to signed out password is wrong qnetworkreply networkerror operationcancelederror operation canceled accountstate connection status change credentials not ready credentials wrong steps to reproduce log out from any client s account and try to log in with the wrong password
| 0
|
16,095
| 5,207,652,452
|
IssuesEvent
|
2017-01-25 00:22:42
|
missionpinball/mpf
|
https://api.github.com/repos/missionpinball/mpf
|
opened
|
Unify leds and lights
|
code refactor
|
In platforms unify configure_matrix_lights and configure_leds -> configure_light (only single channel)
Unify LEDs and (Matrix-)Lights -> lights device with single or multichannel lights. will call configure_light on the platform once per channel
|
1.0
|
Unify leds and lights - In platforms unify configure_matrix_lights and configure_leds -> configure_light (only single channel)
Unify LEDs and (Matrix-)Lights -> lights device with single or multichannel lights. will call configure_light on the platform once per channel
|
non_process
|
unify leds and lights in platforms unify configure matrix lights and configure leds configure light only single channel unify leds and matrix lights lights device with single or multichannel lights will call configure light on the platform once per channel
| 0
|
77,023
| 15,496,255,224
|
IssuesEvent
|
2021-03-11 02:20:28
|
n-devs/BuySellCar-Online
|
https://api.github.com/repos/n-devs/BuySellCar-Online
|
opened
|
CVE-2020-15366 (Medium) detected in ajv-6.10.0.tgz
|
security vulnerability
|
## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.10.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p>
<p>Path to dependency file: /BuySellCar-Online/package.json</p>
<p>Path to vulnerable library: BuySellCar-Online/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- eslint-5.12.0.tgz
- :x: **ajv-6.10.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-15366 (Medium) detected in ajv-6.10.0.tgz - ## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.10.0.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz">https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz</a></p>
<p>Path to dependency file: /BuySellCar-Online/package.json</p>
<p>Path to vulnerable library: BuySellCar-Online/node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.8.tgz (Root Library)
- eslint-5.12.0.tgz
- :x: **ajv-6.10.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution: ajv - 6.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in ajv tgz cve medium severity vulnerability vulnerable library ajv tgz another json schema validator library home page a href path to dependency file buysellcar online package json path to vulnerable library buysellcar online node modules ajv package json dependency hierarchy react scripts tgz root library eslint tgz x ajv tgz vulnerable library vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv step up your open source security game with whitesource
| 0
|
146,383
| 13,180,239,636
|
IssuesEvent
|
2020-08-12 12:27:43
|
bbc/simorgh
|
https://api.github.com/repos/bbc/simorgh
|
closed
|
Create MAINTAINERS.md file
|
Documentation Refinement Needed
|
**Is your feature request related to a problem? Please describe.**
Create a `MAINTAINERS.md` file which would list the maintainers of Simorgh, the members of both World Service engineering teams.
**Describe the solution you'd like**
Create a `MAINTAINERS.md` file in the root of the Simorgh repository.
**Describe alternatives you've considered**
n/a
**Testing notes**
n/a
**Additional context**
This will be referred to in our Code of Conduct, as that document refers to project maintainers.
The file will need some form of periodic review, as the maintainers list may need to be updated over time.
|
1.0
|
Create MAINTAINERS.md file - **Is your feature request related to a problem? Please describe.**
Create a `MAINTAINERS.md` file which would list the maintainers of Simorgh, the members of both World Service engineering teams.
**Describe the solution you'd like**
Create a `MAINTAINERS.md` file in the root of the Simorgh repository.
**Describe alternatives you've considered**
n/a
**Testing notes**
n/a
**Additional context**
This will be referred to in our Code of Conduct, as that document refers to project maintainers.
The file will need some form of periodic review, as the maintainers list may need to be updated over time.
|
non_process
|
create maintainers md file is your feature request related to a problem please describe create a maintainers md file which would list the maintainers of simorgh the members of both world service engineering teams describe the solution you d like create a maintainers md file in the root of the simorgh repository describe alternatives you ve considered n a testing notes n a additional context this will be referred to in our code of conduct as that document refers to project maintainers the file will need some form of periodic review as the maintainers list may need to be updated over time
| 0
|
184,036
| 6,700,445,176
|
IssuesEvent
|
2017-10-11 04:51:08
|
jwwolfe/innsystems
|
https://api.github.com/repos/jwwolfe/innsystems
|
closed
|
broadcastS, and client class's multicasting capabilities are non existant
|
auto-migrated Component-Logic Priority-Low Type-Enhancement Usability
|
```
What steps will reproduce the problem?
1. Run the test class that uses broadcastS
What is the expected output? What do you see instead?
I expected to see the IP of the client machine trying to connect, but after
5 min the client still had not found the server
```
Original issue reported on code.google.com by `darkbird44@gmail.com` on 7 Mar 2007 at 2:27
|
1.0
|
broadcastS, and client class's multicasting capabilities are non existant - ```
What steps will reproduce the problem?
1. Run the test class that uses broadcastS
What is the expected output? What do you see instead?
I expected to see the IP of the client machine trying to connect, but after
5 min the client still had not found the server
```
Original issue reported on code.google.com by `darkbird44@gmail.com` on 7 Mar 2007 at 2:27
|
non_process
|
broadcasts and client class s multicasting capabilities are non existant what steps will reproduce the problem run the test class that uses broadcasts what is the expected output what do you see instead i expected to see the ip of the client machine trying to connect but after min the client still had not found the server original issue reported on code google com by gmail com on mar at
| 0
|
36,059
| 12,396,310,165
|
IssuesEvent
|
2020-05-20 20:17:42
|
jjcv2304/CheatCodes
|
https://api.github.com/repos/jjcv2304/CheatCodes
|
closed
|
CVE-2018-11694 (High) detected in node-sass-4.13.1.tgz, opennms-opennms-source-24.1.2-1
|
security vulnerability
|
## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/CheatCodes/CheatCodes.UI/package.json</p>
<p>Path to vulnerable library: /CheatCodes/CheatCodes.UI/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jjcv2304/CheatCodes/commit/f8a78ff76053ee4abe1f6cdc400a632e5ba871e8">f8a78ff76053ee4abe1f6cdc400a632e5ba871e8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11694 (High) detected in node-sass-4.13.1.tgz, opennms-opennms-source-24.1.2-1 - ## CVE-2018-11694 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.13.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.13.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/CheatCodes/CheatCodes.UI/package.json</p>
<p>Path to vulnerable library: /CheatCodes/CheatCodes.UI/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jjcv2304/CheatCodes/commit/f8a78ff76053ee4abe1f6cdc400a632e5ba871e8">f8a78ff76053ee4abe1f6cdc400a632e5ba871e8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694</a></p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in node sass tgz opennms opennms source cve high severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm cheatcodes cheatcodes ui package json path to vulnerable library cheatcodes cheatcodes ui node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
20,047
| 26,535,399,146
|
IssuesEvent
|
2023-01-19 15:18:59
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
opened
|
Reduce the number of checked in generated files
|
C-test-infra C-process
|
We have quite a few generated files checked in, with CI failing if they are out of date. This happens to me quite a lot, and it's a waste of my time and CI resources (a new commit has to run *everything* again).
Some files very cheap to generate on the fly (`gn-gen` and `diplomat-gen`), while some like testdata are not. Can we remove the cheap files?
|
1.0
|
Reduce the number of checked in generated files - We have quite a few generated files checked in, with CI failing if they are out of date. This happens to me quite a lot, and it's a waste of my time and CI resources (a new commit has to run *everything* again).
Some files very cheap to generate on the fly (`gn-gen` and `diplomat-gen`), while some like testdata are not. Can we remove the cheap files?
|
process
|
reduce the number of checked in generated files we have quite a few generated files checked in with ci failing if they are out of date this happens to me quite a lot and it s a waste of my time and ci resources a new commit has to run everything again some files very cheap to generate on the fly gn gen and diplomat gen while some like testdata are not can we remove the cheap files
| 1
|
13,865
| 16,622,470,590
|
IssuesEvent
|
2021-06-03 04:31:40
|
lsmacedo/spotifyt
|
https://api.github.com/repos/lsmacedo/spotifyt
|
closed
|
Rodar scripts de pre-commit e pre-push
|
pending review process
|
Pre-commit: rodar lint e testes unitários
Pre-push: rodar testes E2E
|
1.0
|
Rodar scripts de pre-commit e pre-push - Pre-commit: rodar lint e testes unitários
Pre-push: rodar testes E2E
|
process
|
rodar scripts de pre commit e pre push pre commit rodar lint e testes unitários pre push rodar testes
| 1
|
251,852
| 18,976,987,423
|
IssuesEvent
|
2021-11-20 06:15:03
|
sayhisam1/Stitch
|
https://api.github.com/repos/sayhisam1/Stitch
|
opened
|
Add ECS Patterns to Wiki
|
documentation help wanted good first issue
|
It's often useful to have a quick reference of patterns one might want to use when building things with an ECS.
I've added a wiki (https://github.com/sayhisam1/Stitch/wiki/ECS-Pattern-Data-Dump) that contains some ECS patterns with some Stitch examples.
If anyone has any patterns they want to add, feel free to add it!
|
1.0
|
Add ECS Patterns to Wiki - It's often useful to have a quick reference of patterns one might want to use when building things with an ECS.
I've added a wiki (https://github.com/sayhisam1/Stitch/wiki/ECS-Pattern-Data-Dump) that contains some ECS patterns with some Stitch examples.
If anyone has any patterns they want to add, feel free to add it!
|
non_process
|
add ecs patterns to wiki it s often useful to have a quick reference of patterns one might want to use when building things with an ecs i ve added a wiki that contains some ecs patterns with some stitch examples if anyone has any patterns they want to add feel free to add it
| 0
|
50,343
| 6,360,326,103
|
IssuesEvent
|
2017-07-31 09:47:32
|
IBM-Bluemix/finance-trade
|
https://api.github.com/repos/IBM-Bluemix/finance-trade
|
closed
|
Portfolio & Holdings sample JSON files creation
|
Design
|
Created a JSON with 3 portfolios (Technology,Pharma,Agriculture)
```
{
"portfolios": [
{
"closed": "false",
"data": {},
"name": "technology",
"timestamp": "currentdate"
}, {
"closed": "false",
"data": {},
"name": "pharmaceutical",
"timestamp": "currentdate"
}, {
"closed": "false",
"data": {},
"name": "agriculture",
"timestamp": "currentdate"
}
]
}
```
Holdings JSON was created for each Portfolio
[Archive.zip](https://github.com/IBM-Bluemix/finance-trade/files/1172802/Archive.zip)
|
1.0
|
Portfolio & Holdings sample JSON files creation - Created a JSON with 3 portfolios (Technology,Pharma,Agriculture)
```
{
"portfolios": [
{
"closed": "false",
"data": {},
"name": "technology",
"timestamp": "currentdate"
}, {
"closed": "false",
"data": {},
"name": "pharmaceutical",
"timestamp": "currentdate"
}, {
"closed": "false",
"data": {},
"name": "agriculture",
"timestamp": "currentdate"
}
]
}
```
Holdings JSON was created for each Portfolio
[Archive.zip](https://github.com/IBM-Bluemix/finance-trade/files/1172802/Archive.zip)
|
non_process
|
portfolio holdings sample json files creation created a json with portfolios technology pharma agriculture portfolios closed false data name technology timestamp currentdate closed false data name pharmaceutical timestamp currentdate closed false data name agriculture timestamp currentdate holdings json was created for each portfolio
| 0
|
21,903
| 30,352,327,811
|
IssuesEvent
|
2023-07-11 20:01:16
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Use splitting classes to break data into more than train/test data chunks
|
New Feature module:preprocessing
|
I came across a situation where I needed to split my data into more than 2 groups, and realised that this can't be done that simply with sklearn's splitter classes as it stands. I note that stack overflow has several similar questions from other users.
One can repeatedly apply splits, but this seems quite clunky given that the code seems to be readily extensible to handle `n_groups > 2`. Was the choice to limit to a train/test split consciously made for any specific reasons?
|
1.0
|
Use splitting classes to break data into more than train/test data chunks - I came across a situation where I needed to split my data into more than 2 groups, and realised that this can't be done that simply with sklearn's splitter classes as it stands. I note that stack overflow has several similar questions from other users.
One can repeatedly apply splits, but this seems quite clunky given that the code seems to be readily extensible to handle `n_groups > 2`. Was the choice to limit to a train/test split consciously made for any specific reasons?
|
process
|
use splitting classes to break data into more than train test data chunks i came across a situation where i needed to split my data into more than groups and realised that this can t be done that simply with sklearn s splitter classes as it stands i note that stack overflow has several similar questions from other users one can repeatedly apply splits but this seems quite clunky given that the code seems to be readily extensible to handle n groups was the choice to limit to a train test split consciously made for any specific reasons
| 1
|
1,504
| 4,087,194,783
|
IssuesEvent
|
2016-06-01 09:09:46
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
unexpected 'self' Process Component
|
Process
|
I just update silex framework where I'm using the Process component and was working ok but now I've got an error. I used to use v3.0 of this component, but know I'm getting the next error message: syntax error, unexpected 'self' (T_STRING) in ../vendor/symfony/process/Process.php on line 538. This is in the version v3.1.0.
|
1.0
|
unexpected 'self' Process Component - I just update silex framework where I'm using the Process component and was working ok but now I've got an error. I used to use v3.0 of this component, but know I'm getting the next error message: syntax error, unexpected 'self' (T_STRING) in ../vendor/symfony/process/Process.php on line 538. This is in the version v3.1.0.
|
process
|
unexpected self process component i just update silex framework where i m using the process component and was working ok but now i ve got an error i used to use of this component but know i m getting the next error message syntax error unexpected self t string in vendor symfony process process php on line this is in the version
| 1
|
12,684
| 15,048,615,486
|
IssuesEvent
|
2021-02-03 10:24:43
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Generate TEXT fields from String
|
kind/feature process/candidate team/migrations topic: native database types topic: types
|
Hi,
I’m using `prisma2 lift` to generate my database, but it’s generating all my String fields as `VARCHAR` with size 191. Unfortunately, when I go to populate these fields, some of my data has more than 191 characters. Is there a way I can get `prisma2 lift` to generate `TEXT` fields or specify my desired length of `VARCHAR`?
Thanks!
|
1.0
|
Generate TEXT fields from String - Hi,
I’m using `prisma2 lift` to generate my database, but it’s generating all my String fields as `VARCHAR` with size 191. Unfortunately, when I go to populate these fields, some of my data has more than 191 characters. Is there a way I can get `prisma2 lift` to generate `TEXT` fields or specify my desired length of `VARCHAR`?
Thanks!
|
process
|
generate text fields from string hi i’m using lift to generate my database but it’s generating all my string fields as varchar with size unfortunately when i go to populate these fields some of my data has more than characters is there a way i can get lift to generate text fields or specify my desired length of varchar thanks
| 1
|
18,924
| 24,880,187,969
|
IssuesEvent
|
2022-10-27 23:46:44
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Dictionary Uniqueness and Sortedness
|
enhancement development-process
|
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
A schema Field currently has a `dict_is_ordered` field. Much like `dict_id` this is non-trivial to use in practice (#1206) as it is part of the schema which may need to remain constant across multiple RecordBatch that might otherwise wish to use different dictionaries. It is also not visible to compute kernels.
There doesn't appear to be a notion of if a dictionary contains unique values.
**Describe the solution you'd like**
We would ideally like to associated dictionary properties with the dictionary arrays themselves. This would allow compute kernels to exploit their properties, in addition to avoiding schema incompatibility issues.
There is an assumption in many places that the arrays themselves are sugar on top of `ArrayData`, with it possible to construct a DictionaryArray from a `ArrayData`. As such I would like to propose:
* Remove dict_is_ordered from Field
* Add dict_is_ordered bool to ArrayData (default false)
* Add dict_is_unique bool to ArrayData (default false)
The various compute kernels can then progressively be updated to exploit these properties.
|
1.0
|
Dictionary Uniqueness and Sortedness - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
A schema Field currently has a `dict_is_ordered` field. Much like `dict_id` this is non-trivial to use in practice (#1206) as it is part of the schema which may need to remain constant across multiple RecordBatch that might otherwise wish to use different dictionaries. It is also not visible to compute kernels.
There doesn't appear to be a notion of if a dictionary contains unique values.
**Describe the solution you'd like**
We would ideally like to associated dictionary properties with the dictionary arrays themselves. This would allow compute kernels to exploit their properties, in addition to avoiding schema incompatibility issues.
There is an assumption in many places that the arrays themselves are sugar on top of `ArrayData`, with it possible to construct a DictionaryArray from a `ArrayData`. As such I would like to propose:
* Remove dict_is_ordered from Field
* Add dict_is_ordered bool to ArrayData (default false)
* Add dict_is_unique bool to ArrayData (default false)
The various compute kernels can then progressively be updated to exploit these properties.
|
process
|
dictionary uniqueness and sortedness is your feature request related to a problem or challenge please describe what you are trying to do a schema field currently has a dict is ordered field much like dict id this is non trivial to use in practice as it is part of the schema which may need to remain constant across multiple recordbatch that might otherwise wish to use different dictionaries it is also not visible to compute kernels there doesn t appear to be a notion of if a dictionary contains unique values describe the solution you d like we would ideally like to associated dictionary properties with the dictionary arrays themselves this would allow compute kernels to exploit their properties in addition to avoiding schema incompatibility issues there is an assumption in many places that the arrays themselves are sugar on top of arraydata with it possible to construct a dictionaryarray from a arraydata as such i would like to propose remove dict is ordered from field add dict is ordered bool to arraydata default false add dict is unique bool to arraydata default false the various compute kernels can then progressively be updated to exploit these properties
| 1
|
20,446
| 27,102,443,630
|
IssuesEvent
|
2023-02-15 09:40:09
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DISABLED test_fs_sharing (__main__.TestMultiprocessing)
|
module: windows module: multiprocessing triaged skipped
|
Flaky failures in the last week: https://fburl.com/scuba/opensource_ci_jobs/inmj698k. They only appear to be on windows
Platforms: rocm
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin
|
1.0
|
DISABLED test_fs_sharing (__main__.TestMultiprocessing) - Flaky failures in the last week: https://fburl.com/scuba/opensource_ci_jobs/inmj698k. They only appear to be on windows
Platforms: rocm
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin
|
process
|
disabled test fs sharing main testmultiprocessing flaky failures in the last week they only appear to be on windows platforms rocm cc mszhanyi nbcsm vitalyfedyunin
| 1
|
4,149
| 7,098,416,950
|
IssuesEvent
|
2018-01-15 05:00:36
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[New Tests] Failed: System.ServiceProcess.Tests.ServiceBaseTests / LogWritten & LogWritten_AutoLog_False
|
area-System.ServiceProcess test bug
|
Affected tests (introduced in PR #26260 - @Anipik):
* LogWritten
* LogWritten_AutoLog_False
Failure of `LogWritten` test:
```
System.InvalidOperationException : Cannot open log Application on computer '.'. This function is not supported on this system
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName) in E:\A\_work\1554\s\corefx\src\System.Diagnostics.EventLog\src\System\Diagnostics\EventLogInternal.cs:line 1112
at System.Diagnostics.EventLogInternal.get_EntryCount() in E:\A\_work\1554\s\corefx\src\System.Diagnostics.EventLog\src\System\Diagnostics\EventLogInternal.cs:line 155
at System.ServiceProcess.Tests.ServiceBaseTests.LogWritten() in E:\A\_work\1554\s\corefx\src\System.ServiceProcess.ServiceController\tests\ServiceBaseTests.cs:line 180
```
## History of failures
Day | Build | OS | Details
--- | --- | --- | ---
1/13 | 20180113.03 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
1/14 | 20180114.01 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
1/14 | 20180114.03 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
|
1.0
|
[New Tests] Failed: System.ServiceProcess.Tests.ServiceBaseTests / LogWritten & LogWritten_AutoLog_False - Affected tests (introduced in PR #26260 - @Anipik):
* LogWritten
* LogWritten_AutoLog_False
Failure of `LogWritten` test:
```
System.InvalidOperationException : Cannot open log Application on computer '.'. This function is not supported on this system
at System.Diagnostics.EventLogInternal.OpenForRead(String currentMachineName) in E:\A\_work\1554\s\corefx\src\System.Diagnostics.EventLog\src\System\Diagnostics\EventLogInternal.cs:line 1112
at System.Diagnostics.EventLogInternal.get_EntryCount() in E:\A\_work\1554\s\corefx\src\System.Diagnostics.EventLog\src\System\Diagnostics\EventLogInternal.cs:line 155
at System.ServiceProcess.Tests.ServiceBaseTests.LogWritten() in E:\A\_work\1554\s\corefx\src\System.ServiceProcess.ServiceController\tests\ServiceBaseTests.cs:line 180
```
## History of failures
Day | Build | OS | Details
--- | --- | --- | ---
1/13 | 20180113.03 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
1/14 | 20180114.01 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
1/14 | 20180114.03 | Win10.Nano | LogWritten & LogWritten_AutoLog_False
|
process
|
failed system serviceprocess tests servicebasetests logwritten logwritten autolog false affected tests introduced in pr anipik logwritten logwritten autolog false failure of logwritten test system invalidoperationexception cannot open log application on computer this function is not supported on this system at system diagnostics eventloginternal openforread string currentmachinename in e a work s corefx src system diagnostics eventlog src system diagnostics eventloginternal cs line at system diagnostics eventloginternal get entrycount in e a work s corefx src system diagnostics eventlog src system diagnostics eventloginternal cs line at system serviceprocess tests servicebasetests logwritten in e a work s corefx src system serviceprocess servicecontroller tests servicebasetests cs line history of failures day build os details nano logwritten logwritten autolog false nano logwritten logwritten autolog false nano logwritten logwritten autolog false
| 1
|
349,068
| 24,932,617,184
|
IssuesEvent
|
2022-10-31 12:54:52
|
clarin-eric/ParlaMint
|
https://api.github.com/repos/clarin-eric/ParlaMint
|
opened
|
Obsolete projectDesc
|
bug 🕮 Documentation
|
We forgot about the `<projectDesc>` element in the corpus root file (.ana included), and people now submitting their corpora still use the one from ParlaMint I, which goes like this:
```
<projectDesc>
<p xml:lang="en"><ref target="https://www.clarin.eu/content/parlamint">ParlaMint</ref> is a project that
aims to (1) create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded
according to the <ref target="https://github.com/clarin-eric/parla-clarin">Parla-CLARIN recommendations</ref>
and covering the COVID-19 pandemic from November 2019 as well as the earlier period from 2015 to serve as
a reference corpus; (2) process the corpora linguistically to add Universal Dependencies syntactic structures and
Named Entity annotation; (3) make the corpora available through concordancers and Parlameter; and (4) build
use cases in Political Sciences and Digital Humanities based on the corpus data.</p>
</projectDesc>
```
Somewhat fortuitously, all this still holds for ParlaMint II, except "and Parlameter". Note also that the [Guidelines](https://clarin-eric.github.io/ParlaMint/#sec-projectDesc) do in fact ommit this although leaving the text otherwise as it is. Nevertheless, more things could be written now.
So, the question is @matyaskopp, shall we ask the partners to change the projectDesc? I would vote "yes", but maybe for 3.1.
The minimal change would be to delete "and Parlameter" (the change being supported by the current Guidelines), a more comprehensive one could be the suggested text as below:
```
<projectDesc>
<p xml:lang="en"><ref target="https://www.clarin.eu/content/parlamint">ParlaMint</ref> is a project that
aims to (1) create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded
according to the <ref target="https://clarin-eric.github.io/ParlaMint/">ParlaMint encoding guidelines</ref>
and covering the period from 2015 to mid-2022; (2) add linguistic annotatations to the corpora and
machine-translate them to English; (3) make the corpora available through concordancers; and (4) build
use cases in Political Sciences and Digital Humanities based on the corpus data.</p>
</projectDesc>
```
|
1.0
|
Obsolete projectDesc - We forgot about the `<projectDesc>` element in the corpus root file (.ana included), and people now submitting their corpora still use the one from ParlaMint I, which goes like this:
```
<projectDesc>
<p xml:lang="en"><ref target="https://www.clarin.eu/content/parlamint">ParlaMint</ref> is a project that
aims to (1) create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded
according to the <ref target="https://github.com/clarin-eric/parla-clarin">Parla-CLARIN recommendations</ref>
and covering the COVID-19 pandemic from November 2019 as well as the earlier period from 2015 to serve as
a reference corpus; (2) process the corpora linguistically to add Universal Dependencies syntactic structures and
Named Entity annotation; (3) make the corpora available through concordancers and Parlameter; and (4) build
use cases in Political Sciences and Digital Humanities based on the corpus data.</p>
</projectDesc>
```
Somewhat fortuitously, all this still holds for ParlaMint II, except "and Parlameter". Note also that the [Guidelines](https://clarin-eric.github.io/ParlaMint/#sec-projectDesc) do in fact ommit this although leaving the text otherwise as it is. Nevertheless, more things could be written now.
So, the question is @matyaskopp, shall we ask the partners to change the projectDesc? I would vote "yes", but maybe for 3.1.
The minimal change would be to delete "and Parlameter" (the change being supported by the current Guidelines), a more comprehensive one could be the suggested text as below:
```
<projectDesc>
<p xml:lang="en"><ref target="https://www.clarin.eu/content/parlamint">ParlaMint</ref> is a project that
aims to (1) create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded
according to the <ref target="https://clarin-eric.github.io/ParlaMint/">ParlaMint encoding guidelines</ref>
and covering the period from 2015 to mid-2022; (2) add linguistic annotatations to the corpora and
machine-translate them to English; (3) make the corpora available through concordancers; and (4) build
use cases in Political Sciences and Digital Humanities based on the corpus data.</p>
</projectDesc>
```
|
non_process
|
obsolete projectdesc we forgot about the element in the corpus root file ana included and people now submitting their corpora still use the one from parlamint i which goes like this ref target is a project that aims to create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded according to the and covering the covid pandemic from november as well as the earlier period from to serve as a reference corpus process the corpora linguistically to add universal dependencies syntactic structures and named entity annotation make the corpora available through concordancers and parlameter and build use cases in political sciences and digital humanities based on the corpus data somewhat fortuitously all this still holds for parlamint ii except and parlameter note also that the do in fact ommit this although leaving the text otherwise as it is nevertheless more things could be written now so the question is matyaskopp shall we ask the partners to change the projectdesc i would vote yes but maybe for the minimal change would be to delete and parlameter the change being supported by the current guidelines a more comprehensive one could be the suggested text as below ref target is a project that aims to create a multilingual set of comparable corpora of parliamentary proceedings uniformly encoded according to the and covering the period from to mid add linguistic annotatations to the corpora and machine translate them to english make the corpora available through concordancers and build use cases in political sciences and digital humanities based on the corpus data
| 0
|
22,609
| 31,834,994,352
|
IssuesEvent
|
2023-09-14 13:01:36
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
closed
|
Remove trash can from Batch Status tab
|
Batch Processor
|
Remove the trash can from the Batch Status tab. Users should not be allowed to delete their batches (at least for now).
|
1.0
|
Remove trash can from Batch Status tab - Remove the trash can from the Batch Status tab. Users should not be allowed to delete their batches (at least for now).
|
process
|
remove trash can from batch status tab remove the trash can from the batch status tab users should not be allowed to delete their batches at least for now
| 1
|
21,687
| 30,181,914,364
|
IssuesEvent
|
2023-07-04 09:26:48
|
UnitTestBot/UTBotJava
|
https://api.github.com/repos/UnitTestBot/UTBotJava
|
opened
|
Join the functionality of ValueConstructor and MockValueConstructor
|
ctg-refactoring comp-instrumented-process
|
**Description**
There are two classes in UnitTestBot with the same responsibility `ValueConstructor` and `MockValueConstructor`.
Both of them are responsible for converting UtModel to UtConcreteValue and located in instrumented process. Mostly `MockValueConstructor` is used because it is a wrapper of `ValueConstructor` with additional logic for mocks. However, `ValueConstructor` is also used. This is very unclear and requires supporting similar copypasted codebase.
The refactoring joining `ValueConstructor` and `MockValueConstructor` is required.
|
1.0
|
Join the functionality of ValueConstructor and MockValueConstructor - **Description**
There are two classes in UnitTestBot with the same responsibility `ValueConstructor` and `MockValueConstructor`.
Both of them are responsible for converting UtModel to UtConcreteValue and located in instrumented process. Mostly `MockValueConstructor` is used because it is a wrapper of `ValueConstructor` with additional logic for mocks. However, `ValueConstructor` is also used. This is very unclear and requires supporting similar copypasted codebase.
The refactoring joining `ValueConstructor` and `MockValueConstructor` is required.
|
process
|
join the functionality of valueconstructor and mockvalueconstructor description there are two classes in unittestbot with the same responsibility valueconstructor and mockvalueconstructor both of them are responsible for converting utmodel to utconcretevalue and located in instrumented process mostly mockvalueconstructor is used because it is a wrapper of valueconstructor with additional logic for mocks however valueconstructor is also used this is very unclear and requires supporting similar copypasted codebase the refactoring joining valueconstructor and mockvalueconstructor is required
| 1
|
445,006
| 31,160,621,649
|
IssuesEvent
|
2023-08-16 15:44:53
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
[WASI] wasiconsole template: Instructions in generated README.md incorrect
|
documentation arch-wasm area-VM-meta-mono in-pr os-wasi
|
### Description
The instructions in README.md are not correct when it comes to using `wasmtime` to run the wasiconsole application
- Path to wasm file is incorrect
- Path itself is wrong (folder `browser-wasm` is not part of the path and there exists no wasm file with that name)
- Commandline for wasmtime is incorrect
### Reproduction Steps
- Install .NET 8 Preview 7 (or any of the previous previews)
- Open a new prompt and make sure to be using the preview just installed (dotnet --info)
- Install wasi-experimental workload
`dotnet workload install wasi-experimental`
- Create a new wasi console application
- `mkdir SampleWasiConsole; cd SampleWasiConsole`
- `dotnet new wasiconsole MyWasiConsole`
- `dotnet build`
- `dotnet run`
- `wasmtime .\bin\Debug\net8.0\browser-wasm\AppBundle\MyWasiConsole.wasm`
### Expected behavior
The instructions in the README.md should enable me to run the console application using the standalone `wasmtime` runtime
### Actual behavior
`dotnet build` and `dotnet run` works
But the instructions for wasmtime fails because (i) the path is wrong and (ii) the wasmfile does not exist. Suggest changing to something like known workaround
### Regression?
No. This is new for .NET 8
### Known Workarounds
- `cd .\bin\Debug\net8.0\wasi-wasm\AppBundle\`
- `wasmtime .\dotnet.wasm --dir . MyWasiConsole`
### Configuration
*Which version of .NET is the code running on?*
```
❯ dotnet --info
.NET SDK:
Version: 8.0.100-preview.7.23376.3
Commit: daebeea8ea
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22621
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\8.0.100-preview.7.23376.3\
.NET workloads installed:
[wasi-experimental]
Installation Source: SDK 8.0.100-preview.7
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.current\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
[wasm-tools-net6]
Installation Source: VS 17.7.34003.232, VS 17.7.34003.232
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.net6\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
[wasm-tools]
Installation Source: VS 17.7.34003.232, VS 17.7.34003.232
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.current\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
Host:
Version: 8.0.0-preview.7.23375.6
Architecture: x64
Commit: 65b696cf5e
RID: win-x64
.NET SDKs installed:
3.1.426 [C:\Program Files\dotnet\sdk]
6.0.413 [C:\Program Files\dotnet\sdk]
7.0.400 [C:\Program Files\dotnet\sdk]
8.0.100-preview.7.23376.3 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 8.0.0-preview.7.23375.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.0-preview.7.23375.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.0-preview.7.23376.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
What OS and version, and what distro if applicable?
```
Edition Windows 11 Pro
Version 22H2
Installed on 2023-02-27
OS build 22621.2134
Serial number R90ZD2LH
Experience Windows Feature Experience Pack 1000.22659.1000.0
```
*What is the architecture (x64, x86, ARM, ARM64)?*
`x64`
*Do you know whether it is specific to that configuration?*
No
*If you're using Blazor, which web browser(s) do you see this issue in?*
N/A
### Other information
_No response_
|
1.0
|
[WASI] wasiconsole template: Instructions in generated README.md incorrect - ### Description
The instructions in README.md are not correct when it comes to using `wasmtime` to run the wasiconsole application
- Path to wasm file is incorrect
- Path itself is wrong (folder `browser-wasm` is not part of the path and there exists no wasm file with that name)
- Commandline for wasmtime is incorrect
### Reproduction Steps
- Install .NET 8 Preview 7 (or any of the previous previews)
- Open a new prompt and make sure to be using the preview just installed (dotnet --info)
- Install wasi-experimental workload
`dotnet workload install wasi-experimental`
- Create a new wasi console application
- `mkdir SampleWasiConsole; cd SampleWasiConsole`
- `dotnet new wasiconsole MyWasiConsole`
- `dotnet build`
- `dotnet run`
- `wasmtime .\bin\Debug\net8.0\browser-wasm\AppBundle\MyWasiConsole.wasm`
### Expected behavior
The instructions in the README.md should enable me to run the console application using the standalone `wasmtime` runtime
### Actual behavior
`dotnet build` and `dotnet run` works
But the instructions for wasmtime fails because (i) the path is wrong and (ii) the wasmfile does not exist. Suggest changing to something like known workaround
### Regression?
No. This is new for .NET 8
### Known Workarounds
- `cd .\bin\Debug\net8.0\wasi-wasm\AppBundle\`
- `wasmtime .\dotnet.wasm --dir . MyWasiConsole`
### Configuration
*Which version of .NET is the code running on?*
```
❯ dotnet --info
.NET SDK:
Version: 8.0.100-preview.7.23376.3
Commit: daebeea8ea
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22621
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\8.0.100-preview.7.23376.3\
.NET workloads installed:
[wasi-experimental]
Installation Source: SDK 8.0.100-preview.7
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.current\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
[wasm-tools-net6]
Installation Source: VS 17.7.34003.232, VS 17.7.34003.232
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.net6\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
[wasm-tools]
Installation Source: VS 17.7.34003.232, VS 17.7.34003.232
Manifest Version: 8.0.0-preview.7.23375.6/8.0.100-preview.7
Manifest Path: C:\Program Files\dotnet\sdk-manifests\8.0.100-preview.7\microsoft.net.workload.mono.toolchain.current\8.0.0-preview.7.23375.6\WorkloadManifest.json
Install Type: Msi
Host:
Version: 8.0.0-preview.7.23375.6
Architecture: x64
Commit: 65b696cf5e
RID: win-x64
.NET SDKs installed:
3.1.426 [C:\Program Files\dotnet\sdk]
6.0.413 [C:\Program Files\dotnet\sdk]
7.0.400 [C:\Program Files\dotnet\sdk]
8.0.100-preview.7.23376.3 [C:\Program Files\dotnet\sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 8.0.0-preview.7.23375.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 8.0.0-preview.7.23375.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.32 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 6.0.21 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 7.0.10 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 8.0.0-preview.7.23376.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
What OS and version, and what distro if applicable?
```
Edition Windows 11 Pro
Version 22H2
Installed on 2023-02-27
OS build 22621.2134
Serial number R90ZD2LH
Experience Windows Feature Experience Pack 1000.22659.1000.0
```
*What is the architecture (x64, x86, ARM, ARM64)?*
`x64`
*Do you know whether it is specific to that configuration?*
No
*If you're using Blazor, which web browser(s) do you see this issue in?*
N/A
### Other information
_No response_
|
non_process
|
wasiconsole template instructions in generated readme md incorrect description the instructions in readme md are not correct when it comes to using wasmtime to run the wasiconsole application path to wasm file is incorrect path itself is wrong folder browser wasm is not part of the path and there exists no wasm file with that name commandline for wasmtime is incorrect reproduction steps install net preview or any of the previous previews open a new prompt and make sure to be using the preview just installed dotnet info install wasi experimental workload dotnet workload install wasi experimental create a new wasi console application mkdir samplewasiconsole cd samplewasiconsole dotnet new wasiconsole mywasiconsole dotnet build dotnet run wasmtime bin debug browser wasm appbundle mywasiconsole wasm expected behavior the instructions in the readme md should enable me to run the console application using the standalone wasmtime runtime actual behavior dotnet build and dotnet run works but the instructions for wasmtime fails because i the path is wrong and ii the wasmfile does not exist suggest changing to something like known workaround regression no this is new for net known workarounds cd bin debug wasi wasm appbundle wasmtime dotnet wasm dir mywasiconsole configuration which version of net is the code running on ❯ dotnet info net sdk version preview commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk preview net workloads installed installation source sdk preview manifest version preview preview manifest path c program files dotnet sdk manifests preview microsoft net workload mono toolchain current preview workloadmanifest json install type msi installation source vs vs manifest version preview preview manifest path c program files dotnet sdk manifests preview microsoft net workload mono toolchain preview workloadmanifest json install type msi installation source vs vs manifest version preview preview manifest path c program files dotnet sdk manifests preview microsoft net workload mono toolchain current preview workloadmanifest json install type msi host version preview architecture commit rid win net sdks installed preview net runtimes installed microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app preview microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app preview microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app preview what os and version and what distro if applicable edition windows pro version installed on os build serial number experience windows feature experience pack what is the architecture arm do you know whether it is specific to that configuration no if you re using blazor which web browser s do you see this issue in n a other information no response
| 0
|
330,750
| 24,275,912,766
|
IssuesEvent
|
2022-09-28 13:51:05
|
gobuffalo/buffalo
|
https://api.github.com/repos/gobuffalo/buffalo
|
closed
|
update function description for `render.Download()` to clarify the usage of the function
|
documentation internal cleanup
|
To address #2285
|
1.0
|
update function description for `render.Download()` to clarify the usage of the function - To address #2285
|
non_process
|
update function description for render download to clarify the usage of the function to address
| 0
|
12,658
| 15,030,632,297
|
IssuesEvent
|
2021-02-02 07:46:03
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
v0.19.0 program hangs
|
#Bug *In process
|
### Test environment
- System: ubuntu 20.04
- Memory: 16G
- CPU: 4.01 GHz quad-core Intel Core i7
- Network: 127.0.0.1 local loopback
- Test client: srs-bench
- Test build: lal_v0.19.0_linux downloaded from the release page. Config changes: "gop_num": 0 and hls disabled
- Test method: publish 1 stream, play 100 RTMP streams via srs-bench
srs-bench publish command: `srs-bench/objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 1 -r rtmp://127.0.0.1:1935/live/livestream`
srs-bench load-test command: `srs-bench/objs/sb_rtmp_load -c 100 -r rtmp://127.0.0.1:1935/live/livestream`
### Observed problem
- Playing a single stream directly with ffplay works fine.
- A few seconds after srs-bench finishes loading 100 streams, both the publisher and the players get stuck. ffplay can no longer play the stream; it feels like a deadlock, and the process does not crash.
- Sometimes this also happens when load-testing only 10 streams with srs-bench.
- lal_v0.18.0_linux does not show this behavior.
|
1.0
|
v0.19.0 program hangs - ### Test environment
- System: ubuntu 20.04
- Memory: 16G
- CPU: 4.01 GHz quad-core Intel Core i7
- Network: 127.0.0.1 local loopback
- Test client: srs-bench
- Test build: lal_v0.19.0_linux downloaded from the release page. Config changes: "gop_num": 0 and hls disabled
- Test method: publish 1 stream, play 100 RTMP streams via srs-bench
srs-bench publish command: `srs-bench/objs/sb_rtmp_publish -i doc/source.200kbps.768x320.flv -c 1 -r rtmp://127.0.0.1:1935/live/livestream`
srs-bench load-test command: `srs-bench/objs/sb_rtmp_load -c 100 -r rtmp://127.0.0.1:1935/live/livestream`
### Observed problem
- Playing a single stream directly with ffplay works fine.
- A few seconds after srs-bench finishes loading 100 streams, both the publisher and the players get stuck. ffplay can no longer play the stream; it feels like a deadlock, and the process does not crash.
- Sometimes this also happens when load-testing only 10 streams with srs-bench.
- lal_v0.18.0_linux does not show this behavior.
|
process
|
program hangs test environment system ubuntu memory cpu ghz quad core intel core network test client srs bench test build lal linux downloaded from release config changes gop num and hls disabled test method publish stream play rtmp streams via srs bench srs bench publish command srs bench objs sb rtmp publish i doc source flv c r rtmp live livestream srs bench load test command srs bench objs sb rtmp load c r rtmp live livestream observed problem ffplay plays a single stream directly without problems a few seconds after srs bench finishes loading streams the publisher and players get stuck ffplay can no longer play feels like a deadlock the process does not crash srs bench sometimes shows this when load testing streams lal linux does not show this behavior
| 1
|
42,784
| 5,535,884,827
|
IssuesEvent
|
2017-03-21 18:19:22
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
With clause in query expressions (support for zip)
|
0 - Backlog Area-Language Design Feature Request
|
I propose to add a `with` clause to the query expression syntax. This will allow us to zip sequences together using either LINQ syntax.
A query expression with a `with` clause followed by a `select` clause
``` c#
from x1 in e1
with x2 in e2
select v
```
is translated into
``` c#
( e1 ) . Zip( e2, ( x1 , x2 ) => v )
```
A query expression with a `with` clause followed by something other than a `select` clause
``` c#
from x1 in e1
with x2 in e2
…
```
is translated into
``` c#
from * in ( e1 ) . Zip(
e2 , ( x1 , x2 ) => new { x1 , x2 })
…
```
This transformation would happen in section 7.16.2.4 of the spec:
``` c#
from x1 in e1
from x2 in e2
with x3 in e3
…
```
is first translated into
``` c#
from * in ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => new { x1 , x2 } )
with x3 in e3
…
```
and then translated into (if I understand how transparent identifiers work)
``` c#
from ** in ( ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => new { x1 , x2 } ) ) . Zip(
e2 , ( * , x3 ) => new { * , x3 })
…
```
And
``` c#
from x1 in e1
with x2 in e2
from x3 in e3
…
```
is first translated into
``` c#
from * in ( e1 ) . Zip( e2 , ( x1 , x2 ) => new { x1 , x2 })
from x3 in e3
…
```
and then translated into
``` c#
from ** in ( ( e1 ) . Zip( e2 , ( x1 , x2 ) => new { x1 , x2 }) ).SelectMany(
* => e3 , ( * , x3 ) => new { * , x3 } )
…
```
|
1.0
|
With clause in query expressions (support for zip) - I propose to add a `with` clause to the query expression syntax. This will allow us to zip sequences together using either LINQ syntax.
A query expression with a `with` clause followed by a `select` clause
``` c#
from x1 in e1
with x2 in e2
select v
```
is translated into
``` c#
( e1 ) . Zip( e2, ( x1 , x2 ) => v )
```
A query expression with a `with` clause followed by something other than a `select` clause
``` c#
from x1 in e1
with x2 in e2
…
```
is translated into
``` c#
from * in ( e1 ) . Zip(
e2 , ( x1 , x2 ) => new { x1 , x2 })
…
```
This transformation would happen in section 7.16.2.4 of the spec:
``` c#
from x1 in e1
from x2 in e2
with x3 in e3
…
```
is first translated into
``` c#
from * in ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => new { x1 , x2 } )
with x3 in e3
…
```
and then translated into (if I understand how transparent identifiers work)
``` c#
from ** in ( ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => new { x1 , x2 } ) ) . Zip(
e2 , ( * , x3 ) => new { * , x3 })
…
```
And
``` c#
from x1 in e1
with x2 in e2
from x3 in e3
…
```
is first translated into
``` c#
from * in ( e1 ) . Zip( e2 , ( x1 , x2 ) => new { x1 , x2 })
from x3 in e3
…
```
and then translated into
``` c#
from ** in ( ( e1 ) . Zip( e2 , ( x1 , x2 ) => new { x1 , x2 }) ).SelectMany(
* => e3 , ( * , x3 ) => new { * , x3 } )
…
```
|
non_process
|
with clause in query expressions support for zip i propose to add a with clause to the query expression syntax this will allow us to zip sequences together using either linq syntax a query expression with a with clause followed by a select clause c from in with in select v is translated into c zip v a query expression with a with clause followed by something other than a select clause c from in with in … is translated into c from in zip new … this transformation would happen in section of the spec c from in from in with in … is first translated into c from in selectmany new with in … and then translated into if i understand how transparent identifiers work c from in selectmany new zip new … and c from in with in from in … is first translated into c from in zip new from in … and then translated into c from in zip new selectmany new …
| 0
|
21,302
| 28,498,948,700
|
IssuesEvent
|
2023-04-18 15:54:56
|
ICEI-PUC-Minas-PMV-ADS/pmv-ads-2023-1-e1-proj-web-t15-e1-proj-web-t15-time4-projlivroapp
|
https://api.github.com/repos/ICEI-PUC-Minas-PMV-ADS/pmv-ads-2023-1-e1-proj-web-t15-e1-proj-web-t15-time4-projlivroapp
|
closed
|
Review the Methodology introduction
|
processo sprint 1
|
Updated the Methodology doc to describe conventions about PRs and other matters
|
1.0
|
Review the Methodology introduction - Updated the Methodology doc to describe conventions about PRs and other matters
|
process
|
review the methodology introduction updated the methodology doc to describe conventions about prs and other matters
| 1
|
16,950
| 22,303,226,994
|
IssuesEvent
|
2022-06-13 10:38:04
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
opened
|
Remove deprecated `MessageCorrelationInfoEnricher` in pump namespace
|
good first issue area:message-processing breaking-change
|
**Is your feature request related to a problem? Please describe.**
Previously, we had the `MessageCorrelationInfoEnricher` in a different namespace but because it will be used in all concrete messaging implementations, we added it to the `Arcus.Messaging.Abstractions.Telemetry` namespace.
**Describe the solution you'd like**
Remove the deprecated `MessageCorrelationInfoEnricher` in the `Arcus.Messaging.Abstractions` project which is still in the `Arcus.Messaging.Pumps.Abstractions.Telemetry` namespace.
|
1.0
|
Remove deprecated `MessageCorrelationInfoEnricher` in pump namespace - **Is your feature request related to a problem? Please describe.**
Previously, we had the `MessageCorrelationInfoEnricher` in a different namespace but because it will be used in all concrete messaging implementations, we added it to the `Arcus.Messaging.Abstractions.Telemetry` namespace.
**Describe the solution you'd like**
Remove the deprecated `MessageCorrelationInfoEnricher` in the `Arcus.Messaging.Abstractions` project which is still in the `Arcus.Messaging.Pumps.Abstractions.Telemetry` namespace.
|
process
|
remove deprecated messagecorrelationinfoenricher in pump namespace is your feature request related to a problem please describe previously we had the messagecorrelationinfoenricher in a different namespace but because it will be used in all concrete messaging implementations we added it to the arcus messaging abstractions telemetry namespace describe the solution you d like remove the deprecated messagecorrelationinfoenricher in the arcus messaging abstractions project which is still in the arcus messaging pumps abstractions telemetry namespace
| 1
|
13,046
| 15,387,700,743
|
IssuesEvent
|
2021-03-03 09:51:44
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Panic: Did not find a relation for model
|
bug/1-repro-available kind/bug process/candidate team/migrations topic: schema validation
|
I'm using prisma `2.13.1`.
```prisma
model Movie {
id String @id @default(cuid())
title String
slug String @unique
synopsis String
year Int
runtime Int
imdb String @unique
rating Float
poster String
genres String[]
cast ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Actor {
id String @id @default(cuid())
imdb String @unique
name String
movies ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updtedAt DateTime @updatedAt
}
model ActorsOnMovies {
actorId String
actor Actor @relation("ActorsOnMovies", fields: [actorId], references: [id])
movieId String
movie Movie @relation("ActorsOnMovies", fields: [movieId], references: [id])
character String
@@id([actorId, movieId])
}
```
Then I run `prisma db push --force --preview-feature` and I get:
```
Running generate... (Use --skip-generate to skip the generators)
Error: Schema parsing
thread 'main' panicked at 'Did not find a relation for model Actor and field movies', libs/prisma-models/src/datamodel_converter.rs:80:29
stack backtrace:
0: _rust_begin_unwind
1: std::panicking::begin_panic_fmt
2: prisma_models::datamodel_converter::DatamodelConverter::convert_fields::{{closure}}::{{closure}}
3: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
4: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
5: prisma_models::datamodel_converter::DatamodelConverter::convert
6: query_engine::main::main::{{closure}}::main::{{closure}}
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
8: std::thread::local::LocalKey<T>::with
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: async_io::driver::block_on
11: tokio::runtime::context::enter
12: tokio::runtime::handle::Handle::enter
13: std::thread::local::LocalKey<T>::with
14: std::thread::local::LocalKey<T>::with
15: async_std::task::builder::Builder::blocking
16: query_engine::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
|
1.0
|
Panic: Did not find a relation for model - I'm using prisma `2.13.1`.
```prisma
model Movie {
id String @id @default(cuid())
title String
slug String @unique
synopsis String
year Int
runtime Int
imdb String @unique
rating Float
poster String
genres String[]
cast ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
model Actor {
id String @id @default(cuid())
imdb String @unique
name String
movies ActorsOnMovies[] @relation("ActorsOnMovies")
createdAt DateTime @default(now())
updtedAt DateTime @updatedAt
}
model ActorsOnMovies {
actorId String
actor Actor @relation("ActorsOnMovies", fields: [actorId], references: [id])
movieId String
movie Movie @relation("ActorsOnMovies", fields: [movieId], references: [id])
character String
@@id([actorId, movieId])
}
```
Then I run `prisma db push --force --preview-feature` and I get:
```
Running generate... (Use --skip-generate to skip the generators)
Error: Schema parsing
thread 'main' panicked at 'Did not find a relation for model Actor and field movies', libs/prisma-models/src/datamodel_converter.rs:80:29
stack backtrace:
0: _rust_begin_unwind
1: std::panicking::begin_panic_fmt
2: prisma_models::datamodel_converter::DatamodelConverter::convert_fields::{{closure}}::{{closure}}
3: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
4: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
5: prisma_models::datamodel_converter::DatamodelConverter::convert
6: query_engine::main::main::{{closure}}::main::{{closure}}
7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
8: std::thread::local::LocalKey<T>::with
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: async_io::driver::block_on
11: tokio::runtime::context::enter
12: tokio::runtime::handle::Handle::enter
13: std::thread::local::LocalKey<T>::with
14: std::thread::local::LocalKey<T>::with
15: async_std::task::builder::Builder::blocking
16: query_engine::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
|
process
|
panic did not find a relation for model i m using prisma prisma model movie id string id default cuid title string slug string unique synopsis string year int runtime int imdb string unique rating float poster string genres string cast actorsonmovies relation actorsonmovies createdat datetime default now updatedat datetime updatedat model actor id string id default cuid imdb string unique name string movies actorsonmovies relation actorsonmovies createdat datetime default now updtedat datetime updatedat model actorsonmovies actorid string actor actor relation actorsonmovies fields references movieid string movie movie relation actorsonmovies fields references character string id then i run prisma db push force preview feature and i get running generate use skip generate to skip the generators error schema parsing thread main panicked at did not find a relation for model actor and field movies libs prisma models src datamodel converter rs stack backtrace rust begin unwind std panicking begin panic fmt prisma models datamodel converter datamodelconverter convert fields closure closure as core iter traits iterator iterator fold as core iter traits iterator iterator fold prisma models datamodel converter datamodelconverter convert query engine main main closure main closure as core future future future poll std thread local localkey with as core future future future poll async io driver block on tokio runtime context enter tokio runtime handle handle enter std thread local localkey with std thread local localkey with async std task builder builder blocking query engine main note some details are omitted run with rust backtrace full for a verbose backtrace
| 1
|
18,954
| 13,175,776,367
|
IssuesEvent
|
2020-08-12 02:43:24
|
arbitrarydot/copilot
|
https://api.github.com/repos/arbitrarydot/copilot
|
opened
|
Automated Tests
|
infrastructure
|
I know nobody likes writing tests, but they will save us time in the long run when issues arise.
|
1.0
|
Automated Tests - I know nobody likes writing tests, but they will save us time in the long run when issues arise.
|
non_process
|
automated tests i know nobody likes writing tests but they will save us time in the long run when issues arise
| 0
|
9,437
| 12,425,055,817
|
IssuesEvent
|
2020-05-24 14:38:47
|
jyn514/rcc
|
https://api.github.com/repos/jyn514/rcc
|
closed
|
Cannot use function macros with no arguments
|
bug preprocessor
|
### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#define f() 1
f()
<stdin>:2:1 error: invalid macro: wrong number of arguments: expected 1, got 0
```
Found while investigating #349 and #437
|
1.0
|
Cannot use function macros with no arguments - ### Code
<!-- The code that was not interpreted correctly goes here.
This should also include the error message you got. -->
```c
#define f() 1
f()
<stdin>:2:1 error: invalid macro: wrong number of arguments: expected 1, got 0
```
Found while investigating #349 and #437
|
process
|
cannot use function macros with no arguments code the code that was not interpreted correctly goes here this should also include the error message you got c define f f error invalid macro wrong number of arguments expected got found while investigating and
| 1
|
10,954
| 16,010,997,242
|
IssuesEvent
|
2021-04-20 10:32:09
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
Refactor tests for Travis manager
|
priority-5-triage status:requirements type:feature
|
**What would you like Renovate to be able to do?**
In `lib/manager/travis/package.spec.ts`:
```ts
let config: any;
```
Convert `any` to `PackageUpdateConfig`.
**Did you already have any implementation ideas?**
<!-- In case you've already dug into existing options or source code and have ideas, mention them here. Try to keep implementation ideas separate from *requirements* above -->
<!-- Please also mention here in case this is a feature you'd be interested in writing yourself, so you can be assigned it. -->
|
1.0
|
Refactor tests for Travis manager - **What would you like Renovate to be able to do?**
In `lib/manager/travis/package.spec.ts`:
```ts
let config: any;
```
Convert `any` to `PackageUpdateConfig`.
**Did you already have any implementation ideas?**
<!-- In case you've already dug into existing options or source code and have ideas, mention them here. Try to keep implementation ideas separate from *requirements* above -->
<!-- Please also mention here in case this is a feature you'd be interested in writing yourself, so you can be assigned it. -->
|
non_process
|
refactor tests for travis manager what would you like renovate to be able to do in lib manager travis package spec ts ts let config any convert any to packageupdateconfig did you already have any implementation ideas
| 0
|
11,584
| 14,444,993,140
|
IssuesEvent
|
2020-12-07 22:10:02
|
AKROGIS/PDS-Data-Management
|
https://api.github.com/repos/AKROGIS/PDS-Data-Management
|
closed
|
Non-ASCII data in the robo copy logs throws an error
|
LogProcessor bug
|
For example:
`07-26 07:05:05 main ERROR Unexpected exception processing log file: E:\XDrive\Logs\2019-07-25_18-00-02-YUGA-update-x-drive.log, exception: 'ascii' codec can't decode byte 0x82 in position 82: ordinal not in range(128)`
This happens when there is a non-ASCII character in a file system path
|
1.0
|
Non-ASCII data in the robo copy logs throws an error - For example:
`07-26 07:05:05 main ERROR Unexpected exception processing log file: E:\XDrive\Logs\2019-07-25_18-00-02-YUGA-update-x-drive.log, exception: 'ascii' codec can't decode byte 0x82 in position 82: ordinal not in range(128)`
This happens when there is a non-ASCII character in a file system path
|
process
|
non ascii data in the robo copy logs throws an error for example main error unexpected exception processing log file e xdrive logs yuga update x drive log exception ascii codec can t decode byte in position ordinal not in range this happens when there is a non ascii character in a file system path
| 1
|
15,290
| 19,295,556,221
|
IssuesEvent
|
2021-12-12 14:29:58
|
zsuperxtreme/gd
|
https://api.github.com/repos/zsuperxtreme/gd
|
closed
|
Kiosk App - to show the ticket before print out the temporary receipt
|
Processing
|
can use back the existing Cashier paid screen, change the button PAID to PRINT
|
1.0
|
Kiosk App - to show the ticket before print out the temporary receipt - can use back the existing Cashier paid screen, change the button PAID to PRINT
|
process
|
kiosk app to show the ticket before print out the temporary receipt can use back the existing cashier paid screen change the button paid to print
| 1
|
11,691
| 14,543,549,643
|
IssuesEvent
|
2020-12-15 17:00:40
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Link is wrong in task page in pipeline documentation
|
Pri1 devops-cicd-process/tech devops/prod doc-bug
|
[Enter feedback here]
In this sentence, quoted as "By default, all tasks run in the same context, whether that's on the host or in a job container", the link of "host" actually leads to the page for job rather than host.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Link is wrong in task page in pipeline documentation -
[Enter feedback here]
In this sentence, quoted as "By default, all tasks run in the same context, whether that's on the host or in a job container", the link of "host" actually leads to the page for job rather than host.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
link is wrong in task page in pipeline documentation in this sentence quoted as by default all tasks run in the same context whether that s on the host or in a job container the link of host actually leads to the page for job rather than host document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ebdf version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
651,232
| 21,470,393,130
|
IssuesEvent
|
2022-04-26 08:59:22
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
store.google.com - site is not usable
|
priority-critical browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 100.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:100.0) Gecko/100.0 Firefox/100.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://store.google.com/?hl=en-US
**Browser / Version**: Firefox Mobile 100.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Someone keeps backing me out of pages I can't control anything not settings I can't download my contacts tried going it through here got it short cutted to my home page then went to the assistant and back to homepage and it was gone. I think my device is hacked
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220417185951</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/4/b2a79db4-d21a-42ef-8ca6-5d2e1df9eaa1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
store.google.com - site is not usable - <!-- @browser: Firefox Mobile 100.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:100.0) Gecko/100.0 Firefox/100.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://store.google.com/?hl=en-US
**Browser / Version**: Firefox Mobile 100.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
Someone keeps backing me out of pages I can't control anything not settings I can't download my contacts tried going it through here got it short cutted to my home page then went to the assistant and back to homepage and it was gone. I think my device is hacked
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220417185951</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/4/b2a79db4-d21a-42ef-8ca6-5d2e1df9eaa1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
store google com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce someone keeps backing me out of pages i can t control anything not settings i can t download my contacts tried going it through here got it short cutted to my home page then went to the assistant and back to homepage and it was gone i think my device is hacked browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
2,596
| 5,355,041,414
|
IssuesEvent
|
2017-02-20 11:43:00
|
CGAL/cgal
|
https://api.github.com/repos/CGAL/cgal
|
closed
|
Polyhedron demo: Reloading a point set
|
bug CGAL 3D demo Pkg::Point_set_processing_3
|
When reloading a point set the point size should not be reset.
|
1.0
|
Polyhedron demo: Reloading a point set - When reloading a point set the point size should not be reset.
|
process
|
polyhedron demo reloading a point set when reloading a point set the point size should not be reset
| 1
|