Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49,524 | 13,187,226,356 | IssuesEvent | 2020-08-13 02:44:57 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | cmake - env-shell not picking up weird boost location (Trac #1597) | Incomplete Migration Migrated from Trac cmake defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1597">https://code.icecube.wisc.edu/ticket/1597</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-22T14:11:01",
"description": "https://icecube-spno.slack.com/archives/software/p1458575103000900",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1461334261171427",
"component": "cmake",
"summary": "cmake - env-shell not picking up weird boost location",
"priority": "normal",
"keywords": "",
"time": "2016-03-21T15:47:37",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | cmake - env-shell not picking up weird boost location (Trac #1597) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1597">https://code.icecube.wisc.edu/ticket/1597</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-04-22T14:11:01",
"description": "https://icecube-spno.slack.com/archives/software/p1458575103000900",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1461334261171427",
"component": "cmake",
"summary": "cmake - env-shell not picking up weird boost location",
"priority": "normal",
"keywords": "",
"time": "2016-03-21T15:47:37",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | cmake env shell not picking up weird boost location trac migrated from json status closed changetime description reporter nega cc resolution fixed ts component cmake summary cmake env shell not picking up weird boost location priority normal keywords time milestone owner nega type defect | 1 |
444,163 | 31,024,627,555 | IssuesEvent | 2023-08-10 08:18:09 | adevinta/spark | https://api.github.com/repos/adevinta/spark | closed | [Components] Update RadioGroup documentation | documentation | ### Documentation update
Update the component documentation following the latest changes of our [guide](https://sparkui.vercel.app/?path=/docs/contributing-writing-packages-documentation--docs) | 1.0 | [Components] Update RadioGroup documentation - ### Documentation update
Update the component documentation following the latest changes of our [guide](https://sparkui.vercel.app/?path=/docs/contributing-writing-packages-documentation--docs) | non_defect | update radiogroup documentation documentation update update the component documentation following the latest changes of our | 0 |
79,538 | 10,131,342,470 | IssuesEvent | 2019-08-01 19:17:06 | cloudyr/googleCloudStorageR | https://api.github.com/repos/cloudyr/googleCloudStorageR | closed | Enable versioning on buckets | documentation | Buckets can give a history of versions for objects when they are overwritten, activated by this API
https://cloud.google.com/storage/docs/using-object-versioning
May be causing this issue ( https://github.com/cloudyr/googleCloudStorageR/issues/95 ) at the moment for buckets that already have it activated. | 1.0 | Enable versioning on buckets - Buckets can give a history of versions for objects when they are overwritten, activated by this API
https://cloud.google.com/storage/docs/using-object-versioning
May be causing this issue ( https://github.com/cloudyr/googleCloudStorageR/issues/95 ) at the moment for buckets that already have it activated. | non_defect | enable versioning on buckets buckets can give a history of versions for objects when they are overwritten activated by this api may be causing this issue at the moment for buckets that already have it activated | 0 |
356,206 | 25,176,137,761 | IssuesEvent | 2022-11-11 09:25:34 | janelleljt/pe | https://api.github.com/repos/janelleljt/pe | opened | Header for diagrams placed below diagrams | type.DocumentationBug severity.Low | The header for the sequence diagram is placed below the actual sequence diagram. There is a similar issue for the header of the activity diagram.
First instance:

Second instance:

Would be cosmetic but I feel like this labelling is unclear for the reader.
<!--session: 1668154289128-0fdcdc51-dd11-4be9-b85a-b61f9b4ec6de-->
<!--Version: Web v3.4.4--> | 1.0 | Header for diagrams placed below diagrams - The header for the sequence diagram is placed below the actual sequence diagram. There is a similar issue for the header of the activity diagram.
First instance:

Second instance:

Would be cosmetic but I feel like this labelling is unclear for the reader.
<!--session: 1668154289128-0fdcdc51-dd11-4be9-b85a-b61f9b4ec6de-->
<!--Version: Web v3.4.4--> | non_defect | header for diagrams placed below diagrams the header for the sequence diagram is placed below the actual sequence diagram there is a similar issue for the header of the activity diagram first instance second instance would be cosmetic but i feel like this labelling is unclear for the reader | 0 |
34,843 | 4,561,567,357 | IssuesEvent | 2016-09-14 12:14:45 | TeamOfOne/psc16-team33 | https://api.github.com/repos/TeamOfOne/psc16-team33 | closed | Make the note title integrate seamlessly with the note diagram | design / UI | Centered, no divider line, and closer to the chord | 1.0 | Make the note title integrate seamlessly with the note diagram - Centered, no divider line, and closer to the chord | non_defect | make the note title integrate seamlessly with the note diagram centered no divider line and closer to the chord | 0 |
16,035 | 10,426,175,195 | IssuesEvent | 2019-09-16 16:58:14 | microsoft/botbuilder-tools | https://api.github.com/repos/microsoft/botbuilder-tools | closed | Botdispatch: Handling of none intents | Bot Services customer-replied-to customer-reported dispatch-cli | ## Tool
Name: [Dispatch]
Version: 1.5.3
OS: Windows 10
## Describe the bug
This bug report is nearly the same as [this one](https://github.com/Microsoft/botbuilder-tools/issues/190) from last year. We have several luis apps and created a dispatch model. Now, some of the luis apps contain utterances in their none intents which are also utterances in an intent of another luis app. Say luis app A contains utterance X in its none intent but X is an utterance of an intent "SomeIntent" of luis app B. The dispatch model contains the utterance X in its none intent and because of that the utterance X is predicted as None instead of "SomeIntent". I would have expected utterance X to not appear in the dispatcher's none intent, like suggested by tsuwandy in the bug report linked above.
## Expected behavior
If utterance X occurs in any intent that is not the none intent, then X must not occur in the dispatcher's none intent.
Maybe I'm doing something wrong, but I tracked this behaviour back to version 1.3.0.
Just one thing: Is there ANY documentation about how exactly the dispatcher tool is working? | 1.0 | Botdispatch: Handling of none intents - ## Tool
Name: [Dispatch]
Version: 1.5.3
OS: Windows 10
## Describe the bug
This bug report is nearly the same as [this one](https://github.com/Microsoft/botbuilder-tools/issues/190) from last year. We have several luis apps and created a dispatch model. Now, some of the luis apps contain utterances in their none intents which are also utterances in an intent of another luis app. Say luis app A contains utterance X in its none intent but X is an utterance of an intent "SomeIntent" of luis app B. The dispatch model contains the utterance X in its none intent and because of that the utterance X is predicted as None instead of "SomeIntent". I would have expected utterance X to not appear in the dispatcher's none intent, like suggested by tsuwandy in the bug report linked above.
## Expected behavior
If utterance X occurs in any intent that is not the none intent, then X must not occur in the dispatcher's none intent.
Maybe I'm doing something wrong, but I tracked this behaviour back to version 1.3.0.
Just one thing: Is there ANY documentation about how exactly the dispatcher tool is working? | non_defect | botdispatch handling of none intents tool name version os windows describe the bug this bug report is nearly the same as from last year we have several luis apps and created a dispatch model now some of the luis apps contain utterances in their none intents which are also utterances in an intent of another luis app say luis app a contains utterance x in its none intent but x is an utterance of an intent someintent of luis app b the dispatch model contains the utterance x in its none intent and because of that the utterance x is predicted as none instead of someintent i would have expected utterance x to not appear in the dispatchers none intent like suggested by tsuwandy in the bug report linked above expected behavior if utterance x occurs in any intent that is not the none intent then x must not occur in the dispatchers none intent maybe i m doing something wrong but i tracked this behaviour back to version just one thing is there any documentation about how exactly the dispatcher tool is working | 0 |
43,234 | 11,575,647,979 | IssuesEvent | 2020-02-21 10:11:59 | contao/contao | https://api.github.com/repos/contao/contao | closed | set up two-factor-authentication: description "verification code" | defect | **Affected version(s)**
4.9
**Description**
There is a small issue when setting up the two-factor-authentication (user -> security) regarding the description of the field "Verification code".
Above and below the input field for the verification code the description says "Verification code" (case1):

When entering a wrong verification code, the color of both descriptions changes to red (which is fine) (case2):

The description below the input field should either be removed (as it is exactly the same as the description above) or even better, should say "Enter verification code here" in case1 and "Please enter a valid verification code" in case2 (wrong verification code entered). | 1.0 | set up two-factor-authentication: description "verification code" - **Affected version(s)**
4.9
**Description**
There is a small issue when setting up the two-factor-authentication (user -> security) regarding the description of the field "Verification code".
Above and below the input field for the verification code the description says "Verification code" (case1):

When entering a wrong verification code, the color of both descriptions changes to red (which is fine) (case2):

The description below the input field should either be removed (as it is exactly the same as the description above) or even better, should say "Enter verification code here" in case1 and "Please enter a valid verification code" in case2 (wrong verification code entered). | defect | set up two factor authentication description verification code affected version s description there is a small issue when setting up the two factor authentication user security regarding the description of the field verification code above and below the input field for the verification code the description says verification code when entering a wrong verification code the color of both descriptions changes to red which is fine the description below the input field should either be removed as it is exactly the same as the description above or even better should say enter verification code here in and please enter a valid verification code in wrong verification code entered | 1 |
245,458 | 18,784,263,916 | IssuesEvent | 2021-11-08 10:25:26 | ExpertSDR3/ExpertSDR3-BUG-TRACKER | https://api.github.com/repos/ExpertSDR3/ExpertSDR3-BUG-TRACKER | opened | No warning not to mix sound drivers | documentation |
In ESDR2 there was a warning not to mix the sound drivers when setting up VAC. In ESDR3 there is no warning when setting up. If you do choose mixed drivers, it just doesn't work.
 | 1.0 | No warning not to mix sound drivers -
In ESDR2 there was a warning not to mix the sound drivers when setting up VAC. In ESDR3 there is no warning when setting up. If you do choose mixed drivers, it just doesn't work.
 | non_defect | no warning not to mix sound drivers in there was a warning not to mix the sound drivers when setting up vac in there is no warning when setting up if you do choose mixed drivers it just doesn t work | 0 |
237,850 | 19,679,445,781 | IssuesEvent | 2022-01-11 15:27:40 | ubtue/DatenProbleme | https://api.github.com/repos/ubtue/DatenProbleme | closed | ISSN 2474-1809 | Antisemitism Studies (jstor) | Translator error | ready for testing Zotero_SEMI-AUTO | **URL**
https://www.jstor.org/stable/10.2979/antistud.3.1.03
**Import-Translator**
ubtue_JSTOR.js
(single import)
**Problem description**
On single import, the translator reports an error and, according to it, uses the JSTOR.js translator instead of the ubtue one.
The error does not occur with bulk import.

| 1.0 | ISSN 2474-1809 | Antisemitism Studies (jstor) | Translator error - **URL**
https://www.jstor.org/stable/10.2979/antistud.3.1.03
**Import-Translator**
ubtue_JSTOR.js
(single import)
**Problem description**
On single import, the translator reports an error and, according to it, uses the JSTOR.js translator instead of the ubtue one.
The error does not occur with bulk import.

| non_defect | issn antisemitism studies jstor translator error url import translator ubtue jstor js single import problem description on single import the translator reports an error and according to it uses the jstor js translator instead of the ubtue one the error does not occur with bulk import | 0 |
47,572 | 5,902,888,426 | IssuesEvent | 2017-05-19 03:45:04 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | [GKE] Ingress should conform to Ingress spec | priority/failing-test sig/network | The [1.6 GKE Ingress Tests
](https://k8s-testgrid.appspot.com/release-1.6-all#gci-gke-ingress-1.6) started failing yesterday. This looks like an infra issue.
[Error: ](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-ingress-release-1.6/700)
```
Mar 21 01:24:23.063: Ingress failed to acquire an IP address within 15m0s
```
@kubernetes/test-infra-maintainers | 1.0 | [GKE] Ingress should conform to Ingress spec - The [1.6 GKE Ingress Tests
](https://k8s-testgrid.appspot.com/release-1.6-all#gci-gke-ingress-1.6) started failing yesterday. This looks like an infra issue.
[Error: ](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-ingress-release-1.6/700)
```
Mar 21 01:24:23.063: Ingress failed to acquire an IP address within 15m0s
```
@kubernetes/test-infra-maintainers | non_defect | ingress should conform to ingress spec the gke ingress tests started failing yesterday this looks like an infra issue mar ingress failed to acquire an ip address within kubernetes test infra maintainers | 0 |
287,978 | 24,879,591,095 | IssuesEvent | 2022-10-27 22:51:39 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome Elastic Synthetics Integration UI Functional Tests.x-pack/test/functional_synthetics/apps/uptime/synthetics_integration·ts - Uptime app with generated data When on the Synthetics Integration Policy Create Page create new policy allows saving browser monitor | failed-test needs-team | A test failed on a tracked branch
```
Error: expected { id: 'synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d',
revision: 1,
name: 'Sample Synthetics integration',
type: 'synthetics/browser',
data_stream: { namespace: 'default' },
use_output: 'default',
package_policy_id: 'fe48ea47-0c8e-4845-99d0-44c6c612078d',
streams:
[ { id: 'synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
__ui: [Object],
type: 'browser',
name: 'Sample Synthetics integration',
enabled: true,
'service.name': 'Sample APM Service',
schedule: '@every 10m',
timeout: null,
throttling: '5d/3u/20l',
tags: [Object],
'source.zip_url.username': 'username',
'source.zip_url.password': 'password',
'source.zip_url.url': 'http://test.zip',
'source.zip_url.folder': 'folder',
params: [Object],
screenshots: 'on',
processors: [Object] },
{ id: 'synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
processors: [Object] },
{ id: 'synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
processors: [Object] } ],
meta: { package: { name: 'synthetics', version: '0.11.1' } } } to sort of equal { data_stream: { namespace: 'default' },
id: 'synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d',
meta: { package: { name: 'synthetics', version: '0.11.1' } },
name: 'Sample Synthetics integration',
package_policy_id: 'fe48ea47-0c8e-4845-99d0-44c6c612078d',
revision: 1,
streams:
[ { data_stream: [Object],
id: 'synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d',
name: 'Sample Synthetics integration',
type: 'browser',
enabled: true,
processors: [Object],
screenshots: 'on',
schedule: '@every 10m',
timeout: null,
tags: [Object],
throttling: '5d/3u/20l',
'service.name': 'Sample APM Service',
'source.zip_url.url': 'http://test.zip',
'source.zip_url.folder': 'folder',
'source.zip_url.username': 'username',
'source.zip_url.password': 'password',
params: [Object],
__ui: [Object] },
{ data_stream: [Object],
id: 'synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d',
processors: [Object] },
{ data_stream: [Object],
id: 'synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d',
processors: [Object] } ],
type: 'synthetics/browser',
use_output: 'default' }
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (x-pack/test/functional_synthetics/apps/uptime/synthetics_integration.ts:460:71)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16) {
actual: '{\n' +
' "data_stream": {\n' +
' "namespace": "default"\n' +
' }\n' +
' "id": "synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "meta": {\n' +
' "package": {\n' +
' "name": "synthetics"\n' +
' "version": "0.11.1"\n' +
' }\n' +
' }\n' +
' "name": "Sample Synthetics integration"\n' +
' "package_policy_id": "fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "revision": 1\n' +
' "streams": [\n' +
' {\n' +
' "__ui": {\n' +
' "is_tls_enabled": false\n' +
' "is_zip_url_tls_enabled": false\n' +
' "script_source": {\n' +
' "file_name": ""\n' +
' "is_generated_script": false\n' +
' }\n' +
' }\n' +
' "data_stream": {\n' +
' "dataset": "browser"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "enabled": true\n' +
' "id": "synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "name": "Sample Synthetics integration"\n' +
' "params": {\n' +
' "url": "http://localhost:8080"\n' +
' }\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' "schedule": "@every 10m"\n' +
' "screenshots": "on"\n' +
' "service.name": "Sample APM Service"\n' +
' "source.zip_url.folder": "folder"\n' +
' "source.zip_url.password": "password"\n' +
' "source.zip_url.url": "http://test.zip"\n' +
' "source.zip_url.username": "username"\n' +
' "tags": [\n' +
' "sample tag"\n' +
' ]\n' +
' "throttling": "5d/3u/20l"\n' +
' "timeout": [null]\n' +
' "type": "browser"\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.network"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.screenshot"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' ]\n' +
' "type": "synthetics/browser"\n' +
' "use_output": "default"\n' +
'}',
expected: '{\n' +
' "data_stream": {\n' +
' "namespace": "default"\n' +
' }\n' +
' "id": "synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "meta": {\n' +
' "package": {\n' +
' "name": "synthetics"\n' +
' "version": "0.11.1"\n' +
' }\n' +
' }\n' +
' "name": "Sample Synthetics integration"\n' +
' "package_policy_id": "fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "revision": 1\n' +
' "streams": [\n' +
' {\n' +
' "__ui": {\n' +
' "is_tls_enabled": false\n' +
' "is_zip_url_tls_enabled": false\n' +
' "script_source": {\n' +
' "file_name": ""\n' +
' "is_generated_script": false\n' +
' }\n' +
' }\n' +
' "data_stream": {\n' +
' "dataset": "browser"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "enabled": true\n' +
' "id": "synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "name": "Sample Synthetics integration"\n' +
' "params": {\n' +
' "url": "http://localhost:8080"\n' +
' }\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' "schedule": "@every 10m"\n' +
' "screenshots": "on"\n' +
' "service.name": "Sample APM Service"\n' +
' "source.zip_url.folder": "folder"\n' +
' "source.zip_url.password": "password"\n' +
' "source.zip_url.url": "http://test.zip"\n' +
' "source.zip_url.username": "username"\n' +
' "tags": [\n' +
' "sample tag"\n' +
' ]\n' +
' "throttling": "5d/3u/20l"\n' +
' "timeout": [null]\n' +
' "type": "browser"\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.network"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.screenshot"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' ]\n' +
' "type": "synthetics/browser"\n' +
' "use_output": "default"\n' +
'}',
showDiff: true
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22895#01841b3c-0cd7-4a77-b46c-cbb9240553e6)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome Elastic Synthetics Integration UI Functional Tests.x-pack/test/functional_synthetics/apps/uptime/synthetics_integration·ts","test.name":"Uptime app with generated data When on the Synthetics Integration Policy Create Page create new policy allows saving browser monitor","test.failCount":2}} --> | 1.0 | Failing test: Chrome Elastic Synthetics Integration UI Functional Tests.x-pack/test/functional_synthetics/apps/uptime/synthetics_integration·ts - Uptime app with generated data When on the Synthetics Integration Policy Create Page create new policy allows saving browser monitor - A test failed on a tracked branch
```
Error: expected { id: 'synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d',
revision: 1,
name: 'Sample Synthetics integration',
type: 'synthetics/browser',
data_stream: { namespace: 'default' },
use_output: 'default',
package_policy_id: 'fe48ea47-0c8e-4845-99d0-44c6c612078d',
streams:
[ { id: 'synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
__ui: [Object],
type: 'browser',
name: 'Sample Synthetics integration',
enabled: true,
'service.name': 'Sample APM Service',
schedule: '@every 10m',
timeout: null,
throttling: '5d/3u/20l',
tags: [Object],
'source.zip_url.username': 'username',
'source.zip_url.password': 'password',
'source.zip_url.url': 'http://test.zip',
'source.zip_url.folder': 'folder',
params: [Object],
screenshots: 'on',
processors: [Object] },
{ id: 'synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
processors: [Object] },
{ id: 'synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d',
data_stream: [Object],
processors: [Object] } ],
meta: { package: { name: 'synthetics', version: '0.11.1' } } } to sort of equal { data_stream: { namespace: 'default' },
id: 'synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d',
meta: { package: { name: 'synthetics', version: '0.11.1' } },
name: 'Sample Synthetics integration',
package_policy_id: 'fe48ea47-0c8e-4845-99d0-44c6c612078d',
revision: 1,
streams:
[ { data_stream: [Object],
id: 'synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d',
name: 'Sample Synthetics integration',
type: 'browser',
enabled: true,
processors: [Object],
screenshots: 'on',
schedule: '@every 10m',
timeout: null,
tags: [Object],
throttling: '5d/3u/20l',
'service.name': 'Sample APM Service',
'source.zip_url.url': 'http://test.zip',
'source.zip_url.folder': 'folder',
'source.zip_url.username': 'username',
'source.zip_url.password': 'password',
params: [Object],
__ui: [Object] },
{ data_stream: [Object],
id: 'synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d',
processors: [Object] },
{ data_stream: [Object],
id: 'synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d',
processors: [Object] } ],
type: 'synthetics/browser',
use_output: 'default' }
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at Context.<anonymous> (x-pack/test/functional_synthetics/apps/uptime/synthetics_integration.ts:460:71)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16) {
actual: '{\n' +
' "data_stream": {\n' +
' "namespace": "default"\n' +
' }\n' +
' "id": "synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "meta": {\n' +
' "package": {\n' +
' "name": "synthetics"\n' +
' "version": "0.11.1"\n' +
' }\n' +
' }\n' +
' "name": "Sample Synthetics integration"\n' +
' "package_policy_id": "fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "revision": 1\n' +
' "streams": [\n' +
' {\n' +
' "__ui": {\n' +
' "is_tls_enabled": false\n' +
' "is_zip_url_tls_enabled": false\n' +
' "script_source": {\n' +
' "file_name": ""\n' +
' "is_generated_script": false\n' +
' }\n' +
' }\n' +
' "data_stream": {\n' +
' "dataset": "browser"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "enabled": true\n' +
' "id": "synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "name": "Sample Synthetics integration"\n' +
' "params": {\n' +
' "url": "http://localhost:8080"\n' +
' }\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' "schedule": "@every 10m"\n' +
' "screenshots": "on"\n' +
' "service.name": "Sample APM Service"\n' +
' "source.zip_url.folder": "folder"\n' +
' "source.zip_url.password": "password"\n' +
' "source.zip_url.url": "http://test.zip"\n' +
' "source.zip_url.username": "username"\n' +
' "tags": [\n' +
' "sample tag"\n' +
' ]\n' +
' "throttling": "5d/3u/20l"\n' +
' "timeout": [null]\n' +
' "type": "browser"\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.network"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.screenshot"\n' +
' "elasticsearch": {\n' +
' "privileges": {\n' +
' "indices": [\n' +
' "auto_configure"\n' +
' "create_doc"\n' +
' "read"\n' +
' ]\n' +
' }\n' +
' }\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' ]\n' +
' "type": "synthetics/browser"\n' +
' "use_output": "default"\n' +
'}',
expected: '{\n' +
' "data_stream": {\n' +
' "namespace": "default"\n' +
' }\n' +
' "id": "synthetics/browser-synthetics-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "meta": {\n' +
' "package": {\n' +
' "name": "synthetics"\n' +
' "version": "0.11.1"\n' +
' }\n' +
' }\n' +
' "name": "Sample Synthetics integration"\n' +
' "package_policy_id": "fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "revision": 1\n' +
' "streams": [\n' +
' {\n' +
' "__ui": {\n' +
' "is_tls_enabled": false\n' +
' "is_zip_url_tls_enabled": false\n' +
' "script_source": {\n' +
' "file_name": ""\n' +
' "is_generated_script": false\n' +
' }\n' +
' }\n' +
' "data_stream": {\n' +
' "dataset": "browser"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "enabled": true\n' +
' "id": "synthetics/browser-browser-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "name": "Sample Synthetics integration"\n' +
' "params": {\n' +
' "url": "http://localhost:8080"\n' +
' }\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' "schedule": "@every 10m"\n' +
' "screenshots": "on"\n' +
' "service.name": "Sample APM Service"\n' +
' "source.zip_url.folder": "folder"\n' +
' "source.zip_url.password": "password"\n' +
' "source.zip_url.url": "http://test.zip"\n' +
' "source.zip_url.username": "username"\n' +
' "tags": [\n' +
' "sample tag"\n' +
' ]\n' +
' "throttling": "5d/3u/20l"\n' +
' "timeout": [null]\n' +
' "type": "browser"\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.network"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.network-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' {\n' +
' "data_stream": {\n' +
' "dataset": "browser.screenshot"\n' +
' "type": "synthetics"\n' +
' }\n' +
' "id": "synthetics/browser-browser.screenshot-fe48ea47-0c8e-4845-99d0-44c6c612078d"\n' +
' "processors": [\n' +
' {\n' +
' "add_observer_metadata": {\n' +
' "geo": {\n' +
' "name": "Fleet managed"\n' +
' }\n' +
' }\n' +
' }\n' +
' {\n' +
' "add_fields": {\n' +
' "fields": {\n' +
' "monitor.fleet_managed": true\n' +
' }\n' +
' "target": ""\n' +
' }\n' +
' }\n' +
' ]\n' +
' }\n' +
' ]\n' +
' "type": "synthetics/browser"\n' +
' "use_output": "default"\n' +
'}',
showDiff: true
}
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22895#01841b3c-0cd7-4a77-b46c-cbb9240553e6)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome Elastic Synthetics Integration UI Functional Tests.x-pack/test/functional_synthetics/apps/uptime/synthetics_integration·ts","test.name":"Uptime app with generated data When on the Synthetics Integration Policy Create Page create new policy allows saving browser monitor","test.failCount":2}} --> | non_defect | failing test chrome elastic synthetics integration ui functional tests x pack test functional synthetics apps uptime synthetics integration·ts uptime app with generated data when on the synthetics integration policy create page create new policy allows saving browser monitor a test failed on a tracked branch error expected id synthetics browser synthetics revision name sample synthetics integration type synthetics browser data stream namespace default use output default package policy id streams id synthetics browser browser data stream ui type browser name sample synthetics integration enabled true service name sample apm service schedule every timeout null throttling tags source zip url username username source zip url password password source zip url url source zip url folder folder params screenshots on processors id synthetics browser browser network data stream processors id synthetics browser browser screenshot data stream processors meta package name synthetics version to sort of equal data stream namespace default id synthetics browser synthetics meta package name synthetics version name sample synthetics integration package policy id revision streams id synthetics browser browser name sample synthetics integration type browser enabled true processors screenshots on schedule every timeout null tags throttling service name sample apm service source zip url url source zip url folder folder source zip url username username source zip url password password params ui data stream id synthetics browser browser network processors data stream id synthetics browser browser screenshot processors type 
synthetics browser use output default at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at context x pack test functional synthetics apps uptime synthetics integration ts at runmicrotasks at processticksandrejections node internal process task queues at object apply node modules kbn test target node src functional test runner lib mocha wrap function js actual n data stream n namespace default n n id synthetics browser synthetics n meta n package n name synthetics n version n n n name sample synthetics integration n package policy id n revision n streams n n ui n is tls enabled false n is zip url tls enabled false n script source n file name n is generated script false n n n data stream n dataset browser n elasticsearch n privileges n indices n auto configure n create doc n read n n n n type synthetics n n enabled true n id synthetics browser browser n name sample synthetics integration n params n url n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed true n n target n n n n schedule every n screenshots on n service name sample apm service n source zip url folder folder n source zip url password password n source zip url url source zip url username username n tags n sample tag n n throttling n timeout n type browser n n n data stream n dataset browser network n elasticsearch n privileges n indices n auto configure n create doc n read n n n n type synthetics n n id synthetics browser browser network n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed true n n target n n n n n n data stream n dataset browser screenshot n elasticsearch n privileges n indices n auto configure n create doc n read n n n n type synthetics n n id synthetics browser browser screenshot n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed 
true n n target n n n n n n type synthetics browser n use output default n expected n data stream n namespace default n n id synthetics browser synthetics n meta n package n name synthetics n version n n n name sample synthetics integration n package policy id n revision n streams n n ui n is tls enabled false n is zip url tls enabled false n script source n file name n is generated script false n n n data stream n dataset browser n type synthetics n n enabled true n id synthetics browser browser n name sample synthetics integration n params n url n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed true n n target n n n n schedule every n screenshots on n service name sample apm service n source zip url folder folder n source zip url password password n source zip url url source zip url username username n tags n sample tag n n throttling n timeout n type browser n n n data stream n dataset browser network n type synthetics n n id synthetics browser browser network n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed true n n target n n n n n n data stream n dataset browser screenshot n type synthetics n n id synthetics browser browser screenshot n processors n n add observer metadata n geo n name fleet managed n n n n n add fields n fields n monitor fleet managed true n n target n n n n n n type synthetics browser n use output default n showdiff true first failure | 0 |
130,479 | 18,074,320,614 | IssuesEvent | 2021-09-21 08:09:31 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | Tracking missing shortcuts | enhancement design app:files status/STALE | 1. The key to a successful interface is to provide several ways to perform specific actions in order to cater for the needs of a broad spectrum of users
2. Advanced users need shortcuts
A sample selection:
- [ ] `ctrl+a` Select all
- [ ] `ctrl+c` Copy selection
- [ ] `ctrl+v` Paste selection
- [ ] `ctrl+x` Cut selection
- [ ] `del` Delete selection
- [ ] `ctrl+f` Puts cursor in search field #18897
- [ ] `ctrl+z` Undo (a deletion per example)
---
@PVince81 @schiesbn @jancborchardt @henni
| 1.0 | Tracking missing shortcuts - 1. The key to a successful interface is to provide several ways to perform specific actions in order to cater for the needs of a broad spectrum of users
2. Advanced users need shortcuts
A sample selection:
- [ ] `ctrl+a` Select all
- [ ] `ctrl+c` Copy selection
- [ ] `ctrl+v` Paste selection
- [ ] `ctrl+x` Cut selection
- [ ] `del` Delete selection
- [ ] `ctrl+f` Puts cursor in search field #18897
- [ ] `ctrl+z` Undo (a deletion per example)
---
@PVince81 @schiesbn @jancborchardt @henni
| non_defect | tracking missing shortcuts the key to a successful interface is to provide several ways to perform specific actions in order to cater for the needs of a broad spectrum of users advanced users need shortcuts a sample selection ctrl a select all ctrl c copy selection ctrl v paste selection ctrl x cut selection del delete selection ctrl f puts cursor in search field ctrl z undo a deletion per example schiesbn jancborchardt henni | 0 |
339,777 | 10,262,006,601 | IssuesEvent | 2019-08-22 11:15:47 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :203504]Uninitialized variables in /subsys/net/lib/sockets/sockets_net_mgmt.c | Coverity area: Networking bug priority: medium | Static code scan issues seen in File: /subsys/net/lib/sockets/sockets_net_mgmt.c
Category: Uninitialized variables
Function: znet_mgmt_recvfrom
Component: Networking
CID: 203504
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | 1.0 | [Coverity CID :203504]Uninitialized variables in /subsys/net/lib/sockets/sockets_net_mgmt.c - Static code scan issues seen in File: /subsys/net/lib/sockets/sockets_net_mgmt.c
Category: Uninitialized variables
Function: znet_mgmt_recvfrom
Component: Networking
CID: 203504
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | non_defect | uninitialized variables in subsys net lib sockets sockets net mgmt c static code scan issues seen in file subsys net lib sockets sockets net mgmt c category uninitialized variables function znet mgmt recvfrom component networking cid please fix or provide comments to square it off in coverity in the link | 0 |
9,397 | 2,615,147,568 | IssuesEvent | 2015-03-01 06:23:53 | chrsmith/html5rocks | https://api.github.com/repos/chrsmith/html5rocks | closed | Add case studies as new resources on front page | auto-migrated Milestone-3 Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `v...@google.com` on 30 Sep 2010 at 8:33 | 1.0 | Add case studies as new resources on front page - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `v...@google.com` on 30 Sep 2010 at 8:33 | defect | add case studies as new resources on front page what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information original issue reported on code google com by v google com on sep at | 1 |
29,963 | 5,964,774,353 | IssuesEvent | 2017-05-30 09:45:16 | buildo/rc-datepicker | https://api.github.com/repos/buildo/rc-datepicker | closed | Ci is failing with Error: Missing description for prop 'startDate' in 'DatePickerInput'. | defect in review | ## description
add this description: `specify an initial "visible" date with no need to select a defaultValue`
## how to reproduce
- {optional: describe steps to reproduce defect}
## specs
{optional: describe a possible fix for this defect, if not obvious}
## misc
{optional: other useful info}
| 1.0 | Ci is failing with Error: Missing description for prop 'startDate' in 'DatePickerInput'. - ## description
add this description: `specify an initial "visible" date with no need to select a defaultValue`
## how to reproduce
- {optional: describe steps to reproduce defect}
## specs
{optional: describe a possible fix for this defect, if not obvious}
## misc
{optional: other useful info}
| defect | ci is failing with error missing description for prop startdate in datepickerinput description add this description specify an initial visible date with no need to select a defaultvalue how to reproduce optional describe steps to reproduce defect specs optional describe a possible fix for this defect if not obvious misc optional other useful info | 1 |
16,287 | 2,889,333,139 | IssuesEvent | 2015-06-13 09:54:56 | kuribot/boilerpipe | https://api.github.com/repos/kuribot/boilerpipe | closed | ContentFusion can change the order of document text | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. When processing a document with the ContentFusion class the text of the
document can get out of order if changes are made in multiple iteration of the
dowhile.
2. When changes are made and two TextBlocks are merged the outer loop is
executed again (reprocessing the entire document for more changes), however,
the prevBlock variable is not reset to the first block of the document (It
still contains the last block of the document). This can cause block(s) at the
beginning of the document to be merged at the end of the document.
What is the expected output? What do you see instead?
Blocks at the beginning of the document are merged to the end of blocks at the
end of the document. These blocks should not be merged at all or should be
merged to the beginning of later blocks.
What version of the product are you using? On what operating system?
most recent from repository
Please provide any additional information below.
My recommendation would be moving the prevBlock instantiation inside the
dowhile loop.
From:
TextBlock prevBlock = textBlocks.get(0);
boolean changes = false;
do {
changes = false;
for (ListIterator<TextBlock> it = textBlocks.listIterator(1); it.hasNext();) {
To:
boolean changes = false;
do {
changes = false;
TextBlock prevBlock = textBlocks.get(0);
for (ListIterator<TextBlock> it = textBlocks.listIterator(1); it.hasNext();) {
```
Original issue reported on code.google.com by `aricbosc...@gmail.com` on 6 Mar 2013 at 2:49 | 1.0 | ContentFusion can change the order of document text - ```
What steps will reproduce the problem?
1. When processing a document with the ContentFusion class the text of the
document can get out of order if changes are made in multiple iteration of the
dowhile.
2. When changes are made and two TextBlocks are merged the outer loop is
executed again (reprocessing the entire document for more changes), however,
the prevBlock variable is not reset to the first block of the document (It
still contains the last block of the document). This can cause block(s) at the
beginning of the document to be merged at the end of the document.
What is the expected output? What do you see instead?
Blocks at the beginning of the document are merged to the end of blocks at the
end of the document. These blocks should not be merged at all or should be
merged to the beginning of later blocks.
What version of the product are you using? On what operating system?
most recent from repository
Please provide any additional information below.
My recommendation would be moving the prevBlock instantiation inside the
dowhile loop.
From:
TextBlock prevBlock = textBlocks.get(0);
boolean changes = false;
do {
changes = false;
for (ListIterator<TextBlock> it = textBlocks.listIterator(1); it.hasNext();) {
To:
boolean changes = false;
do {
changes = false;
TextBlock prevBlock = textBlocks.get(0);
for (ListIterator<TextBlock> it = textBlocks.listIterator(1); it.hasNext();) {
```
Original issue reported on code.google.com by `aricbosc...@gmail.com` on 6 Mar 2013 at 2:49 | defect | contentfusion can change the order of document text what steps will reproduce the problem when processing a document with the contentfusion class the text of the document can get out of order if changes are made in multiple iteration of the dowhile when changes are made and two textblocks are merged the outer loop is executed again reprocessing the entire document for more changes however the prevblock variable is not reset to the first block of the document it still contains the last block of the document this can cause block s at the beginning of the document to be merged at the end of the document what is the expected output what do you see instead blocks at the beginning of the document are merged to the end of blocks at the end of the document these blocks should not be merged at all or should be merged to the beginning of later blocks what version of the product are you using on what operating system most recent from repository please provide any additional information below my recommendation would be moving the prevblock instantiation inside the dowhile loop from textblock prevblock textblocks get boolean changes false do changes false for listiterator it textblocks listiterator it hasnext to boolean changes false do changes false textblock prevblock textblocks get for listiterator it textblocks listiterator it hasnext original issue reported on code google com by aricbosc gmail com on mar at | 1 |
396,370 | 27,115,806,005 | IssuesEvent | 2023-02-15 18:28:20 | josgarber6/acme-l3-D01 | https://api.github.com/repos/josgarber6/acme-l3-D01 | closed | Task 027: Produce a report on how you have set up your development configuration | documentation | They are not asking us to reproduce the guidelines to set it up, but to make it clear that we have followed them, and we have our development configuration ready to work. | 1.0 | Task 027: Produce a report on how you have set up your development configuration - They are not asking us to reproduce the guidelines to set it up, but to make it clear that we have followed them, and we have our development configuration ready to work. | non_defect | task produce a report on how you have set up your development configuration they are not asking us to reproduce the guidelines to set it up but to make it clear that we have followed them and we have our development configuration ready to work | 0 |
61,075 | 17,023,595,371 | IssuesEvent | 2021-07-03 02:50:04 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | projection "Latitude/Longitude" eats memory | Component: merkaartor Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 2.29pm, Saturday, 22nd May 2010]**
in merkaartor 0.16 prerelease (debian package 0.16~dev1-1), downloading data in "Latitude/Longitude (EPSG:4326)" projection or switching to that projection after data was downloaded in another projection makes merkaartor use up huge quantities of memory (roughly 2gb before it got oom-killed) and not repsond to anything but killing the process.
other projections are not affected. | 1.0 | projection "Latitude/Longitude" eats memory - **[Submitted to the original trac issue database at 2.29pm, Saturday, 22nd May 2010]**
in merkaartor 0.16 prerelease (debian package 0.16~dev1-1), downloading data in "Latitude/Longitude (EPSG:4326)" projection or switching to that projection after data was downloaded in another projection makes merkaartor use up huge quantities of memory (roughly 2gb before it got oom-killed) and not repsond to anything but killing the process.
other projections are not affected. | defect | projection latitude longitude eats memory in merkaartor prerelease debian package downloading data in latitude longitude epsg projection or switching to that projection after data was downloaded in another projection makes merkaartor use up huge quantities of memory roughly before it got oom killed and not repsond to anything but killing the process other projections are not affected | 1 |
83,925 | 16,396,730,659 | IssuesEvent | 2021-05-18 01:29:29 | StanfordBioinformatics/pulsar_lims | https://api.github.com/repos/StanfordBioinformatics/pulsar_lims | closed | ENCODE Data submission: IP for input control of cs-95 | Encode IP submission | This input library control experiment https://www.encodeproject.org/experiments/ENCSR099ETQ/
does not have IP on the portal. Its IP was attached to its parent.
But the biosample "Replicate not used in Chip experiment."
How to solve this? | 1.0 | ENCODE Data submission: IP for input control of cs-95 - This input library control experiment https://www.encodeproject.org/experiments/ENCSR099ETQ/
does not have IP on the portal. Its IP was attached to its parent.
But the biosample "Replicate not used in Chip experiment."
How to solve this? | non_defect | encode data submission ip for input control of cs this input library control experiment does not have ip on the portal its ip was attached to its parent but the biosample replicate not used in chip experiment how to solve this | 0 |
387,624 | 11,463,671,084 | IssuesEvent | 2020-02-07 16:26:44 | aragon/aragon-cli | https://api.github.com/repos/aragon/aragon-cli | closed | Warn user if their mainnet ENS registry configuration points to the old registry | :skull: security 🦅 flock/nest high priority | See [ENS migration](https://medium.com/the-ethereum-name-service/ens-registry-migration-bug-fix-new-features-64379193a5a).
Many old Aragon app repos will undoubtedly have the old ENS registry hardcoded in their `arapp.json` configuration.
We should detect this for mainnet configurations and:
1. Switch to the new registry address
2. Warn the user that they should update their configuration to use the new registry address. | 1.0 | Warn user if their mainnet ENS registry configuration points to the old registry - See [ENS migration](https://medium.com/the-ethereum-name-service/ens-registry-migration-bug-fix-new-features-64379193a5a).
Many old Aragon app repos will undoubtedly have the old ENS registry hardcoded in their `arapp.json` configuration.
We should detect this for mainnet configurations and:
1. Switch to the new registry address
2. Warn the user that they should update their configuration to use the new registry address. | non_defect | warn user if their mainnet ens registry configuration points to the old registry see many old aragon app repos will undoubtedly have the old ens registry hardcoded in their arapp json configuration we should detect this for mainnet configurations and switch to the new registry address warn the user that they should update their configuration to use the new registry address | 0 |
244,082 | 20,607,799,561 | IssuesEvent | 2022-03-07 03:53:45 | bokeh/bokeh | https://api.github.com/repos/bokeh/bokeh | closed | Allow to invalidate sampledata cache | type: feature tag: component: tests reso: noaction | ```
$ make html
sphinx-build -b html -d build/doctrees source -W build/html
Running Sphinx v2.3.1
making output directory... done
creating gallery file entries... [100%] logaxis.rst
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 249 source files that are out of date
updating environment: [new config] 249 added, 0 changed, 0 removed
copying bokeh-plot files... [100%] bokeh-plot-ffe21e7ce7b148779c29dec3d149f1c6-external-docs-user_guide-plotting.js
Exception occurred:
File "/home/mateusz/repo/bokeh3/bokeh/sphinxext/bokeh_plot.py", line 156, in run
raise RuntimeError("Sphinx bokeh-plot exception: \n\n%s\n\n Failed on:\n\n %s" % (e,source))
RuntimeError: Sphinx bokeh-plot exception:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: '2010-10-06'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1057, in get_loc
return Index.get_loc(self, key, method, tolerance)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: '2010-10-06'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: 1286323200000000000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1070, in get_loc
return Index.get_loc(self, stamp, method, tolerance)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: 1286323200000000000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/repo/bokeh3/bokeh/application/handlers/code_runner.py", line 174, in run
exec(self._code, module.__dict__)
File "/home/mateusz/repo/bokeh3/sphinx/source/docs/user_guide/examples/styling_glyph_hover.py", line 7, in <module>
subset = data.loc['2010-10-06']
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 1424, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 1850, in _getitem_axis
return self._get_label(key, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 160, in _get_label
return self.obj._xs(label, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/generic.py", line 3737, in xs
loc = self.index.get_loc(key)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1072, in get_loc
raise KeyError(key)
KeyError: '2010-10-06'
Failed on:
from bokeh.models import HoverTool
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.glucose import data
output_file("styling_hover.html")
subset = data.loc['2010-10-06']
x, y = subset.index.to_series(), subset['glucose']
# Basic plot setup
plot = figure(plot_width=600, plot_height=300, x_axis_type="datetime", tools="",
toolbar_location=None, title='Hover over points')
plot.line(x, y, line_dash="4 4", line_width=1, color='gray')
cr = plot.circle(x, y, size=20,
fill_color="grey", hover_fill_color="firebrick",
fill_alpha=0.05, hover_alpha=0.3,
line_color=None, hover_line_color="white")
plot.add_tools(HoverTool(tooltips=None, renderers=[cr], mode='hline'))
show(plot)
``` | 1.0 | Allow to invalidate sampledata cache - ```
$ make html
sphinx-build -b html -d build/doctrees source -W build/html
Running Sphinx v2.3.1
making output directory... done
creating gallery file entries... [100%] logaxis.rst
loading intersphinx inventory from https://docs.python.org/3/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
loading intersphinx inventory from https://docs.scipy.org/doc/numpy/objects.inv...
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 249 source files that are out of date
updating environment: [new config] 249 added, 0 changed, 0 removed
copying bokeh-plot files... [100%] bokeh-plot-ffe21e7ce7b148779c29dec3d149f1c6-external-docs-user_guide-plotting.js
Exception occurred:
File "/home/mateusz/repo/bokeh3/bokeh/sphinxext/bokeh_plot.py", line 156, in run
raise RuntimeError("Sphinx bokeh-plot exception: \n\n%s\n\n Failed on:\n\n %s" % (e,source))
RuntimeError: Sphinx bokeh-plot exception:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: '2010-10-06'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1057, in get_loc
return Index.get_loc(self, key, method, tolerance)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: '2010-10-06'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: 1286323200000000000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1070, in get_loc
return Index.get_loc(self, stamp, method, tolerance)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 410, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 435, in pandas._libs.index.DatetimeEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine._get_loc_duplicates
KeyError: 1286323200000000000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mateusz/repo/bokeh3/bokeh/application/handlers/code_runner.py", line 174, in run
exec(self._code, module.__dict__)
File "/home/mateusz/repo/bokeh3/sphinx/source/docs/user_guide/examples/styling_glyph_hover.py", line 7, in <module>
subset = data.loc['2010-10-06']
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 1424, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 1850, in _getitem_axis
return self._get_label(key, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexing.py", line 160, in _get_label
return self.obj._xs(label, axis=axis)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/generic.py", line 3737, in xs
loc = self.index.get_loc(key)
File "/home/mateusz/conda/envs/bk/lib/python3.8/site-packages/pandas/core/indexes/datetimes.py", line 1072, in get_loc
raise KeyError(key)
KeyError: '2010-10-06'
Failed on:
from bokeh.models import HoverTool
from bokeh.plotting import figure, output_file, show
from bokeh.sampledata.glucose import data
output_file("styling_hover.html")
subset = data.loc['2010-10-06']
x, y = subset.index.to_series(), subset['glucose']
# Basic plot setup
plot = figure(plot_width=600, plot_height=300, x_axis_type="datetime", tools="",
toolbar_location=None, title='Hover over points')
plot.line(x, y, line_dash="4 4", line_width=1, color='gray')
cr = plot.circle(x, y, size=20,
fill_color="grey", hover_fill_color="firebrick",
fill_alpha=0.05, hover_alpha=0.3,
line_color=None, hover_line_color="white")
plot.add_tools(HoverTool(tooltips=None, renderers=[cr], mode='hline'))
show(plot)
``` | non_defect | allow to invalidate sampledata cache make html sphinx build b html d build doctrees source w build html running sphinx making output directory done creating gallery file entries logaxis rst loading intersphinx inventory from loading intersphinx inventory from loading intersphinx inventory from building targets for po files that are out of date building targets for source files that are out of date updating environment added changed removed copying bokeh plot files bokeh plot external docs user guide plotting js exception occurred file home mateusz repo bokeh sphinxext bokeh plot py line in run raise runtimeerror sphinx bokeh plot exception n n s n n failed on n n s e source runtimeerror sphinx bokeh plot exception traceback most recent call last file home mateusz conda envs bk lib site packages pandas core indexes base py line in get loc return self engine get loc key file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index indexengine get loc duplicates keyerror during handling of the above exception another exception occurred traceback most recent call last file home mateusz conda envs bk lib site packages pandas core indexes datetimes py line in get loc return index get loc self key method tolerance file home mateusz conda envs bk lib site packages pandas core indexes base py line in get loc return self engine get loc self maybe cast indexer key file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index indexengine get loc duplicates keyerror during handling of the above exception another exception occurred traceback most recent call last file home mateusz conda envs bk lib site packages pandas core indexes base py line in get loc return self engine get 
loc key file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index indexengine get loc duplicates keyerror during handling of the above exception another exception occurred traceback most recent call last file home mateusz conda envs bk lib site packages pandas core indexes datetimes py line in get loc return index get loc self stamp method tolerance file home mateusz conda envs bk lib site packages pandas core indexes base py line in get loc return self engine get loc self maybe cast indexer key file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index datetimeengine get loc file pandas libs index pyx line in pandas libs index indexengine get loc duplicates keyerror during handling of the above exception another exception occurred traceback most recent call last file home mateusz repo bokeh application handlers code runner py line in run exec self code module dict file home mateusz repo sphinx source docs user guide examples styling glyph hover py line in subset data loc file home mateusz conda envs bk lib site packages pandas core indexing py line in getitem return self getitem axis maybe callable axis axis file home mateusz conda envs bk lib site packages pandas core indexing py line in getitem axis return self get label key axis axis file home mateusz conda envs bk lib site packages pandas core indexing py line in get label return self obj xs label axis axis file home mateusz conda envs bk lib site packages pandas core generic py line in xs loc self index get loc key file home mateusz conda envs bk lib site packages pandas core indexes datetimes py line in get loc raise keyerror key keyerror failed on from bokeh models import hovertool from bokeh plotting import figure output file show from bokeh sampledata glucose import data output file styling 
hover html subset data loc x y subset index to series subset basic plot setup plot figure plot width plot height x axis type datetime tools toolbar location none title hover over points plot line x y line dash line width color gray cr plot circle x y size fill color grey hover fill color firebrick fill alpha hover alpha line color none hover line color white plot add tools hovertool tooltips none renderers mode hline show plot | 0 |
16,354 | 5,233,695,481 | IssuesEvent | 2017-01-30 13:43:39 | SemsTestOrg/combinearchive-web | https://api.github.com/repos/SemsTestOrg/combinearchive-web | closed | Add own VCard information when creating a new Archive | code fixed major migrated task | ## Trac Ticket #53
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-08-21 13:14:08
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-09-08 08:40:54
**author:** martinP
fixed long time ago. dunno why this is still open.
## comment 2
**time:** 2014-09-08 08:40:54
**author:** martinP
Updated **resolution** to **fixed**
## comment 3
**time:** 2014-09-08 08:40:54
**author:** martinP
Updated **status** to **closed**
| 1.0 | Add own VCard information when creating a new Archive - ## Trac Ticket #53
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-08-21 13:14:08
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-09-08 08:40:54
**author:** martinP
fixed long time ago. dunno why this is still open.
## comment 2
**time:** 2014-09-08 08:40:54
**author:** martinP
Updated **resolution** to **fixed**
## comment 3
**time:** 2014-09-08 08:40:54
**author:** martinP
Updated **status** to **closed**
| non_defect | add own vcard information when creating a new archive trac ticket component code owner somebody reporter martinp created milestone type task version keywords comment time author martinp fixed long time ago dunno why this is still open comment time author martinp updated resolution to fixed comment time author martinp updated status to closed | 0 |
31,814 | 6,628,603,293 | IssuesEvent | 2017-09-23 20:07:24 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | opened | Conector CFDI3.3 - Pagos - No envía la descripción de la forma de pago al PDF | bug defect | Solo envía la clave al PDF
Para el XML correspondería a este atributo:
FormaDePagoP="03"
| 1.0 | Conector CFDI3.3 - Pagos - No envía la descripción de la forma de pago al PDF - Solo envía la clave al PDF
Para el XML correspondería a este atributo:
FormaDePagoP="03"
| defect | conector pagos no envía la descripción de la forma de pago al pdf solo envía la clave al pdf para el xml correspondería a este atributo formadepagop | 1 |
1,981 | 2,603,974,476 | IssuesEvent | 2015-02-24 19:01:09 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳生殖疱疹容易治疗吗 | auto-migrated Priority-Medium Type-Defect | ```
沈阳生殖疱疹容易治疗吗〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。
位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠久、設備精良、技術權威、專家云集,是預防、保健、醫療、科研康復為一體的綜合性醫院。
是國家首批公立甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南大學等知名高等院校的教學醫院。
曾被中國人民解放軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:11 | 1.0 | 沈阳生殖疱疹容易治疗吗 - ```
沈阳生殖疱疹容易治疗吗〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。
位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷史悠久、設備精良、技術權威、專家云集,是預防、保健、醫療、科研康復為一體的綜合性醫院。
是國家首批公立甲等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南大學等知名高等院校的教學醫院。
曾被中國人民解放軍空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:11 | defect | 沈阳生殖疱疹容易治疗吗 沈阳生殖疱疹容易治疗吗〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
77,274 | 26,892,118,295 | IssuesEvent | 2023-02-06 09:42:29 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | CSP: splitButton - oncomplete, does not work properly with blockUI, and file download. | :lady_beetle: defect :bangbang: needs-triage | ### Describe the bug
CSP: splitButton - oncomplete, does not work properly with blockUI, and file download.
### Reproducer
**Steps:**
1. run the code below
2. open the page
3. use the basic button

4. use the save button

5. without CSP: -> block panel working fine, the file is downloaded
6. with CSP: -> block panel not working, the file is not downloaded
**Expected result**
App with CSP should work in the same way as without.
<context-param>
<param-name>primefaces.CSP</param-name>
<param-value>true</param-value>
</context-param>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:p="http://primefaces.org/ui">
<h:head>
<title>PrimeFaces Test</title>
<h:outputScript name="test.js"/>
<h:outputStylesheet name="test.css"/>
</h:head>
<h:body>
<script>
function startDl() {
PF('buiDatatable').show();
}
function stopDl() {
PF('buiDatatable').hide();
}
</script>
<div class="card">
<p:growl id="messages"/>
<h:form id="sample123">
<p:blockUI block="dlg1" widgetVar="buiDatatable">
<i class="pi pi-spin pi-spinner" style="font-size: 3rem"></i>
</p:blockUI>
</h:form>
<p:commandButton value="Basic" type="button" onclick="PF('dlg1').show();"/>
<p:dialog id="dlg1" header="Basic Dialog" widgetVar="dlg1" minHeight="40">
<h:form id="sample">
<p:commandButton id="remoteDownload" ajax="false">
<p:fileDownload value="#{fileDownloadView.file}"/>
</p:commandButton>
<h5 class="mt-0">Basic</h5>
<p:splitButton value="Save" action="#{buttonView.save}" update="messages"
onclick="startDl();"
oncomplete="stopDl(); document.getElementById('sample:remoteDownload').click();PF('dlg1').hide();"
icon="pi pi-save">
<p:menuitem value="Update" action="#{buttonView.update}" update="messages" icon="pi pi-refresh"/>
<p:menuitem value="Delete" action="#{buttonView.delete}" ajax="false" icon="pi pi-times"/>
</p:splitButton>
</h:form>
</p:dialog>
</div>
</h:body>
</html>
package org.primefaces.test;
import org.primefaces.model.menu.DefaultMenuItem;
import org.primefaces.model.menu.DefaultMenuModel;
import org.primefaces.model.menu.DefaultSubMenu;
import org.primefaces.model.menu.MenuModel;
import javax.annotation.PostConstruct;
import javax.enterprise.context.RequestScoped;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import java.util.concurrent.TimeUnit;
@Named
@RequestScoped
public class ButtonView {
private MenuModel model;
@PostConstruct
public void init() {
model = new DefaultMenuModel();
//First submenu
DefaultMenuItem item = DefaultMenuItem.builder()
.value("External")
.url("http://www.primefaces.org")
.icon("pi pi-home")
.build();
DefaultSubMenu firstSubmenu = DefaultSubMenu.builder()
.label("Dynamic Submenu")
.addElement(item)
.build();
model.getElements().add(firstSubmenu);
//Second submenu
item = DefaultMenuItem.builder()
.id("mniSave")
.value("Save")
.icon("pi pi-save")
.function((i) -> save())
.update("messages")
.build();
DefaultSubMenu secondSubmenu = DefaultSubMenu.builder()
.label("Dynamic Actions")
.addElement(item)
.build();
item = DefaultMenuItem.builder()
.value("Delete")
.icon("pi pi-times")
.command("#{buttonView.delete}")
.ajax(false)
.build();
secondSubmenu.getElements().add(item);
model.getElements().add(secondSubmenu);
}
public MenuModel getModel() {
return model;
}
public String save() {
addMessage("Data saved");
return null;
}
public void update() {
addMessage("Data updated");
}
public void delete() {
addMessage("Data deleted");
}
public String sleepAndSave() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
return save();
}
public void sleepAndUpdate() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
update();
}
public void sleepAndDelete() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
delete();
}
public void buttonAction() {
addMessage("Welcome to PrimeFaces!!");
}
public void addMessage(String summary) {
FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_INFO, summary, null);
FacesContext.getCurrentInstance().addMessage(null, message);
}
}
package org.primefaces.test;
import org.primefaces.model.DefaultStreamedContent;
import org.primefaces.model.StreamedContent;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
@Named
@RequestScoped
public class FileDownloadView {
public StreamedContent getFile() {
ByteArrayInputStream reportResult = new ByteArrayInputStream("initialString".getBytes());
return DefaultStreamedContent.builder()
.stream(() -> reportResult)
.contentType("plain/text")
.name("sample.txt")
.contentEncoding(StandardCharsets.UTF_8.name())
.build();
}
}
package org.primefaces.test;
import java.io.Serializable;
import java.time.LocalDateTime;
import java.math.BigDecimal;
import javax.annotation.PostConstruct;
import javax.faces.view.ViewScoped;
import javax.inject.Named;
import lombok.Data;
@Data
@Named
@ViewScoped
public class TestView implements Serializable {
private String string;
private Integer integer;
private BigDecimal decimal;
private LocalDateTime localDateTime;
@PostConstruct
public void init() {
string = "Welcome to PrimeFaces!!!";
}
}
### Expected behavior
App with CSP should work in the same way as without.
### PrimeFaces edition
None
### PrimeFaces version
12.0.0
### Theme
all
### JSF implementation
All
### JSF version
all
### Java version
all
### Browser(s)
Chrome | 1.0 | CSP: splitButton - oncomplete, does not work properly with blockUI, and file download. - ### Describe the bug
CSP: splitButton - oncomplete, does not work properly with blockUI, and file download.
### Reproducer
**Steps:**
1. run the code below
2. open the page
3. use the basic button

4. use the save button

5. without CSP: -> block panel working fine, the file is downloaded
6. with CSP:- > block panel not working, the file is not downloaded
**Expected result**
App with CSP should work in the same way as without.
<context-param>
<param-name>primefaces.CSP</param-name>
<param-value>true</param-value>
</context-param>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:p="http://primefaces.org/ui">
<h:head>
<title>PrimeFaces Test</title>
<h:outputScript name="test.js"/>
<h:outputStylesheet name="test.css"/>
</h:head>
<h:body>
<script>
function startDl() {
PF('buiDatatable').show();
}
function stopDl() {
PF('buiDatatable').hide();
}
</script>
<div class="card">
<p:growl id="messages"/>
<h:form id="sample123">
<p:blockUI block="dlg1" widgetVar="buiDatatable">
<i class="pi pi-spin pi-spinner" style="font-size: 3rem"></i>
</p:blockUI>
</h:form>
<p:commandButton value="Basic" type="button" onclick="PF('dlg1').show();"/>
<p:dialog id="dlg1" header="Basic Dialog" widgetVar="dlg1" minHeight="40">
<h:form id="sample">
<p:commandButton id="remoteDownload" ajax="false">
<p:fileDownload value="#{fileDownloadView.file}"/>
</p:commandButton>
<h5 class="mt-0">Basic</h5>
<p:splitButton value="Save" action="#{buttonView.save}" update="messages"
onclick="startDl();"
oncomplete="stopDl(); document.getElementById('sample:remoteDownload').click();PF('dlg1').hide();"
icon="pi pi-save">
<p:menuitem value="Update" action="#{buttonView.update}" update="messages" icon="pi pi-refresh"/>
<p:menuitem value="Delete" action="#{buttonView.delete}" ajax="false" icon="pi pi-times"/>
</p:splitButton>
</h:form>
</p:dialog>
</div>
</h:body>
</html>
package org.primefaces.test;
import org.primefaces.model.menu.DefaultMenuItem;
import org.primefaces.model.menu.DefaultMenuModel;
import org.primefaces.model.menu.DefaultSubMenu;
import org.primefaces.model.menu.MenuModel;
import javax.annotation.PostConstruct;
import javax.enterprise.context.RequestScoped;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import javax.inject.Named;
import java.util.concurrent.TimeUnit;
@Named
@RequestScoped
public class ButtonView {
private MenuModel model;
@PostConstruct
public void init() {
model = new DefaultMenuModel();
//First submenu
DefaultMenuItem item = DefaultMenuItem.builder()
.value("External")
.url("http://www.primefaces.org")
.icon("pi pi-home")
.build();
DefaultSubMenu firstSubmenu = DefaultSubMenu.builder()
.label("Dynamic Submenu")
.addElement(item)
.build();
model.getElements().add(firstSubmenu);
//Second submenu
item = DefaultMenuItem.builder()
.id("mniSave")
.value("Save")
.icon("pi pi-save")
.function((i) -> save())
.update("messages")
.build();
DefaultSubMenu secondSubmenu = DefaultSubMenu.builder()
.label("Dynamic Actions")
.addElement(item)
.build();
item = DefaultMenuItem.builder()
.value("Delete")
.icon("pi pi-times")
.command("#{buttonView.delete}")
.ajax(false)
.build();
secondSubmenu.getElements().add(item);
model.getElements().add(secondSubmenu);
}
public MenuModel getModel() {
return model;
}
public String save() {
addMessage("Data saved");
return null;
}
public void update() {
addMessage("Data updated");
}
public void delete() {
addMessage("Data deleted");
}
public String sleepAndSave() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
return save();
}
public void sleepAndUpdate() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
update();
}
public void sleepAndDelete() throws InterruptedException {
TimeUnit.SECONDS.sleep(1);
delete();
}
public void buttonAction() {
addMessage("Welcome to PrimeFaces!!");
}
public void addMessage(String summary) {
FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_INFO, summary, null);
FacesContext.getCurrentInstance().addMessage(null, message);
}
}
package org.primefaces.test;
import org.primefaces.model.DefaultStreamedContent;
import org.primefaces.model.StreamedContent;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
@Named
@RequestScoped
public class FileDownloadView {
public StreamedContent getFile() {
ByteArrayInputStream reportResult = new ByteArrayInputStream("initialString".getBytes());
return DefaultStreamedContent.builder()
.stream(() -> reportResult)
.contentType("plain/text")
.name("sample.txt")
.contentEncoding(StandardCharsets.UTF_8.name())
.build();
}
}
package org.primefaces.test;
import java.io.Serializable;
import java.time.LocalDateTime;
import java.math.BigDecimal;
import javax.annotation.PostConstruct;
import javax.faces.view.ViewScoped;
import javax.inject.Named;
import lombok.Data;
@Data
@Named
@ViewScoped
public class TestView implements Serializable {
private String string;
private Integer integer;
private BigDecimal decimal;
private LocalDateTime localDateTime;
@PostConstruct
public void init() {
string = "Welcome to PrimeFaces!!!";
}
}
### Expected behavior
App with CSP should work in the same way as without.
### PrimeFaces edition
None
### PrimeFaces version
12.0.0
### Theme
all
### JSF implementation
All
### JSF version
all
### Java version
all
### Browser(s)
Chrome | defect | csp splitbutton oncomplete does not work properly with blockui and file download describe the bug csp splitbutton oncomplete does not work properly with blockui and file download reproducer steps run the code below open the page use the basic button use the save button without csp block panel working fine the file is downloaded with csp block panel not working the file is not downloaded expected result app with csp should work in the same way as without primefaces csp true html xmlns xmlns h xmlns p primefaces test function startdl pf buidatatable show function stopdl pf buidatatable hide basic p splitbutton value save action buttonview save update messages onclick startdl oncomplete stopdl document getelementbyid sample remotedownload click pf hide icon pi pi save package org primefaces test import org primefaces model menu defaultmenuitem import org primefaces model menu defaultmenumodel import org primefaces model menu defaultsubmenu import org primefaces model menu menumodel import javax annotation postconstruct import javax enterprise context requestscoped import javax faces application facesmessage import javax faces context facescontext import javax inject named import java util concurrent timeunit named requestscoped public class buttonview private menumodel model postconstruct public void init model new defaultmenumodel first submenu defaultmenuitem item defaultmenuitem builder value external url icon pi pi home build defaultsubmenu firstsubmenu defaultsubmenu builder label dynamic submenu addelement item build model getelements add firstsubmenu second submenu item defaultmenuitem builder id mnisave value save icon pi pi save function i save update messages build defaultsubmenu secondsubmenu defaultsubmenu builder label dynamic actions addelement item build item defaultmenuitem builder value delete icon pi pi times command buttonview delete ajax false build secondsubmenu getelements add item model getelements add secondsubmenu public 
menumodel getmodel return model public string save addmessage data saved return null public void update addmessage data updated public void delete addmessage data deleted public string sleepandsave throws interruptedexception timeunit seconds sleep return save public void sleepandupdate throws interruptedexception timeunit seconds sleep update public void sleepanddelete throws interruptedexception timeunit seconds sleep delete public void buttonaction addmessage welcome to primefaces public void addmessage string summary facesmessage message new facesmessage facesmessage severity info summary null facescontext getcurrentinstance addmessage null message package org primefaces test import org primefaces model defaultstreamedcontent import org primefaces model streamedcontent import javax enterprise context requestscoped import javax inject named import java io bytearrayinputstream import java nio charset standardcharsets named requestscoped public class filedownloadview public streamedcontent getfile bytearrayinputstream reportresult new bytearrayinputstream initialstring getbytes return defaultstreamedcontent builder stream reportresult contenttype plain text name sample txt contentencoding standardcharsets utf name build package org primefaces test import java io serializable import java time localdatetime import java math bigdecimal import javax annotation postconstruct import javax faces view viewscoped import javax inject named import lombok data data named viewscoped public class testview implements serializable private string string private integer integer private bigdecimal decimal private localdatetime localdatetime postconstruct public void init string welcome to primefaces expected behavior app with csp should work in the same way as without primefaces edition none primefaces version theme all jsf implementation all jsf version all java version all browser s chrome | 1 |
689,202 | 23,611,499,801 | IssuesEvent | 2022-08-24 12:47:44 | wso2/product-is | https://api.github.com/repos/wso2/product-is | closed | Delete fragment applications when the business application is deleted. | Priority/Highest Severity/Critical bug Organization Management | **Describe the issue:**
When a business application is deleted, if the application is shared with child organizations, fragment applications created in the child organizations have to be deleted.
When a business application is deleted, if the application is shared with child organizations, fragment applications created in the child organizations has to be deleted. | non_defect | delete fragment applications when the business application is deleted describe the issue when a business application is deleted if the application is shared with child organizations fragment applications created in the child organizations has to be deleted | 0 |
69,570 | 22,504,980,199 | IssuesEvent | 2022-06-23 14:48:09 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | closed | root-level setting tab_size won't work | defect discussed | When working with JavaScript the following `settings.json`…
```
{
"tab_size": 4
}
```
… ☝️ this will not work (effective `tab_size` applied will be `2` regardless), whilst…
```
{
"tab_size": 4,
"language_overrides": {
"JavaScript": {
"tab_size": 4
}
}
}
```
… works. 🤷 | 1.0 | root-level setting tab_size won't work - When working with JavaScript the following `settings.json`…
```
{
"tab_size": 4
}
```
… ☝️ this will not work (effective `tab_size` applied will be `2` regardless), whilst…
```
{
"tab_size": 4,
"language_overrides": {
"JavaScript": {
"tab_size": 4
}
}
}
```
… works. 🤷 | defect | root level setting tab size won t work when working with javascript the following settings json … tab size … ☝️ this will not work effective tab size applied will be regardless whilst… tab size language overrides javascript tab size … works 🤷 | 1 |
480,673 | 13,864,997,287 | IssuesEvent | 2020-10-16 03:03:21 | x13pixels/remedybg-issues | https://api.github.com/repos/x13pixels/remedybg-issues | closed | Unremovable empty line in watch window | Component: Watch Window Priority: 3 (Low-Med) Type: Bug | Hello, this just happened in the latest version 0.3.0.7
I was editing something in the watch, and this extra empty line appeared at the end that I'm unable to remove.

I can't repro it unfortunately. I think I was holding 'del' to remove a bunch of fields, and then added a new field. I did it very fast and haven't noticed what exactly happened.
Here's the .rdbg file with the saved watch state. The extra line is there and bugged behavior can be observed when I load it.
https://stas.ams3.digitaloceanspaces.com/graphene.rdbg
| 1.0 | Unremovable empty line in watch window - Hello, this just happened in the latest version 0.3.0.7
I was editing something in the watch, and this extra empty line appeared at the end that I'm unable to remove.

I can't repro it unfortunately. I think I was holding 'del' to remove a bunch of fields, and then added a new field. I did it very fast and haven't noticed what exactly happened.
Here's the .rdbg file with the saved watch state. The extra line is there and bugged behavior can be observed when I load it.
https://stas.ams3.digitaloceanspaces.com/graphene.rdbg
| non_defect | unremovable empty line in watch window hello this just happened in the latest version i was editing something in the watch and this extra empty line appeared at the end that i m unable to remove i can t repro it unfortunately i think i was holding del to remove a bunch of fields and then added a new field i did it very fast and haven t noticed what exactly happened here s the rdbg file with the saved watch state the extra line is there and bugged behavior can be observed when i load it | 0 |
32,021 | 6,681,143,901 | IssuesEvent | 2017-10-06 01:23:59 | extnet/Ext.NET | https://api.github.com/repos/extnet/Ext.NET | opened | DrowDownField with TreePanel with loader: expanding deep nodes collapses the field | 4.x defect sencha | Found: 4.4.1
Ext.NET forum thread: []()
Expanding nodes in a TreePanel within a DropDownField can occasionally dismiss the drop down panel. This happens because, while expanding the nodes, the load mask is added + focused and, when it loses focus, the focus is sent to "null/undefined", thus the outer DOM element on the page gets the focus.
Preventing Ext.ComponentManager.onGlobalFocus() from triggering events when the target focus element is not defined mitigates this specific issue but may incur into undesired side effects. Best would be to ensure, once the tree panel's view mask is dismissed, the tree view gets the focus, or whatever was focused before the mask was applied -- instead of a null DOM element.
This issue basically triggers in an Ext.NET exclusive component but the source is very likely to be in ExtJS.
This issue may be similar to #901 but the fix that works there does not apply here. | 1.0 | DrowDownField with TreePanel with loader: expanding deep nodes collapses the field - Found: 4.4.1
Ext.NET forum thread: []()
Expanding nodes in a TreePanel within a DropDownField can occasionally dismiss the drop down panel. This happens because, while expanding the nodes, the load mask is added + focused and, when it loses focus, the focus is sent to "null/undefined", thus the outer DOM element on the page gets the focus.
Preventing Ext.ComponentManager.onGlobalFocus() from triggering events when the target focus element is not defined mitigates this specific issue but may incur into undesired side effects. Best would be to ensure, once the tree panel's view mask is dismissed, the tree view gets the focus, or whatever was focused before the mask was applied -- instead of a null DOM element.
This issue basically triggers in an Ext.NET exclusive component but the source is very likely to be in ExtJS.
This issue may be similar to #901 but the fix that works there does not apply here. | defect | drowdownfield with treepanel with loader expanding deep nodes collapses the field found ext net forum thread expanding nodes in a treepanel within a dropdownfield can occasionally dismiss the drop down panel this happens because while expanding the nodes the load mask is added focused and when it loses focus the focus is sent to null undefined thus the outer dom element on the page gets the focus preventing ext componentmanager onglobalfocus from triggering events when the target focus element is not defined mitigates this specific issue but may incur into undesired side effects best would be to ensure once the tree panel s view mask is dismissed the tree view gets the focus or whatever was focused before the mask was applied instead of a null dom element this issue basically triggers in an ext net exclusive component but the source is very likely to be in extjs this issue may be similar to but the fix that works there does not apply here | 1 |
1,554 | 2,603,967,516 | IssuesEvent | 2015-02-24 18:59:24 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳单纯性疱疹传染 | auto-migrated Priority-Medium Type-Defect |
```
Shenyang herpes simplex transmission 〓 STD clinic, Shenyang Military Region Political Department Hospital 〓 TEL: 024-3102 3308 〓
Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases.
Located at No. 32 Erwei Road, Shenhe District, Shenyang. A hospital with a long history, established
together with New China, featuring fine equipment, authoritative techniques, and a gathering of experts;
a comprehensive hospital integrating prevention, health care, medical treatment, scientific research,
and rehabilitation. Among the country's first batch of public Grade-A military hospitals and the first
batch of designated units for standardized medical care nationwide; a teaching hospital for well-known
universities such as the Fourth Military Medical University and Northeastern University. Rated an advanced
unit for health work by the Health Department of the PLA Air Force Logistics Department, and twice awarded collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:10 | 1.0 | defect | 1
82,601 | 15,650,829,818 | IssuesEvent | 2021-03-23 09:28:34 | elastic/integrations | https://api.github.com/repos/elastic/integrations | closed | Convert Cisco's edge processing to Ingest Node pipeline | Integration:Cisco Team:Security-External Integrations release-pending |
# Convert edge processing to Ingest Node pipeline
This package uses Beats processors for some of its data processing. We want to
move that processing into the ingest node pipeline that is part of the package
to make reuse easier (e.g. data from Kafka could be routed through the pipeline).
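
For illustration, moving a Beats processor into the package's ingest pipeline can look like the fragment below. The processor choices and field names are made up for this sketch, not the actual Cisco IOS pipeline:

```json
{
  "description": "Illustrative fragment: parsing moved from Beats processors into the ingest pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{SYSLOGTIMESTAMP:cisco.ios.timestamp} %{GREEDYDATA:cisco.ios.payload}"]
      }
    },
    {
      "rename": {
        "field": "cisco.ios.payload",
        "target_field": "event.original",
        "ignore_missing": true
      }
    }
  ]
}
```

Because the processing lives in the pipeline rather than in Beats, data arriving via other transports (such as Kafka) can be routed through the same pipeline.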
### Data Streams
- ios (inputs: log, syslog)
- [x] All beats processors are removed
- [x] [Pipeline tests](https://github.com/elastic/elastic-package/blob/master/docs/howto/pipeline_testing.md) added
| True | non_defect | 0
367,295 | 10,851,721,409 | IssuesEvent | 2019-11-13 11:20:11 | pombase/canto | https://api.github.com/repos/pombase/canto | closed | Fix term specific evidence codes | FlyBase bug high priority | (Continued from #1991)
The implementation is buggy.
From @vmt25:
Had a better look at the interactions, and it seems the default setting does not allow any interaction type unless specified, so only the terms for which 'evidence codes' are specified in canto_deploy will allow adding interaction types and completing the annotation.
| 1.0 | non_defect | 0
21,146 | 6,987,196,805 | IssuesEvent | 2017-12-14 08:14:12 | minishift/minishift-ci-jobs | https://api.github.com/repos/minishift/minishift-ci-jobs | closed | Increase the timeout of minishift pr and master job | component/build kind/task priority/major | Need to increase the timeout since minishift job is taking ~56 mins.
See https://github.com/minishift/minishift/pull/1767#issuecomment-349374001 | 1.0 | non_defect | 0
312,205 | 23,419,690,543 | IssuesEvent | 2022-08-13 13:54:48 | chunky-dev/docs | https://api.github.com/repos/chunky-dev/docs | closed | Rework Reference - Introduction | documentation help wanted | The `Introduction` article requires further refinement.
- [ ] #68
- [x] Samples Per Pixel (SPP) - Update gif with a smoother animation with more frames
- [x] Merge NEE/ESS stuff together
- [x] Emitter Sampling Strategy (ESS) - Section is confusing. Probably needs diagrams to explain the complex nature of ESS.
- [x] Materials - Need to add a section on various material properties and how they function (in context). This is intended to replace some of the content within `Reference - Render Controls - Materials` as that article should just be for the UI and short descriptions.
- [x] Image formats and color - This section does not really fit here. Maybe we should move some of the content to `Reference - Scene Format` and other parts to `Reference - Render Controls - Advanced`.
---
Given the scope of the article, and the changes that may be required, I would not be against the sub-division of the article. Please use this issue to further expand on any issues with the article and your suggestions. | 1.0 | Rework Reference - Introduction - The `Introduction` article requires further refinement.
- [ ] #68
- [x] Samples Per Pixel (SPP) - Update gif with a smoother animation with more frames
- [x] Merge NEE/ESS stuff together
- [x] Emitter Sampling Strategy (ESS) - Section is confusing. Probably needs diagrams to explain the complex nature of ESS.
- [x] Materials - Need to add a section on various material properties and how they function (in context). This is intended to replace some of the content within `Reference - Render Controls - Materials` as that article should just be for the UI and short descriptions.
- [x] Image formats and color - This section does not really fit here. Maybe we should move some of the content to `Reference - Scene Format` and other parts to `Reference - Render Controls - Advanced`.
---
Given the scope of the article, and the changes that may be required, I would not be against the sub-division of the article. Please use this issue to further expand on any issues with the article and your suggestions. | non_defect | rework reference introduction the introduction article requires further refinement samples per pixel spp update gif with a smoother animation with more frames merge nee ess stuff together emitter sampling strategy ess section is confusing probably needs diagrams to explain the complex nature of ess materials need to add a section on various material properties and how they function in context this is intended to replace some of the content within reference render controls materials as that article should just be for the ui and short descriptions image formats and color this section does not really fit here maybe we should move some of the content to reference scene format and other parts to reference render controls advanced given the scope of the article and the changes that may be required i would not be against the sub division of the article please use this issue to further expand on any issues with the article and your suggestions | 0 |
244,093 | 18,738,920,134 | IssuesEvent | 2021-11-04 11:13:20 | do-mpc/do-mpc | https://api.github.com/repos/do-mpc/do-mpc | closed | MPC Implementing Real System Measurements | Documentation | Hi,
first things first: Great Work!
I implemented the ODE from here: https://apmonitor.com/do/index.php/Main/DynamicControl and adapted the variables; no problem, everything works fine.
But now I am trying to implement my real system, where I want to feed in my actual measurements. Where and how do I do that?
I tried commenting out the simulator in the control loop and putting the measurements into the x0 array, but I guess that's not all I have to do. I also don't know how to calculate the differential v', because it doesn't seem to be the slope of the last two points in time.
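
One note on the derivative question: a model-predictive controller's internal ODE handles state derivatives itself, so v' is usually not something you difference by hand; the measured states are typically passed in place of the simulator output. If you do need a derivative estimate from sampled measurements, a two-point backward difference is noisy, and a short-window least-squares slope is a common alternative. A small self-contained sketch (plain Python, independent of do-mpc; all names are made up):

```python
def backward_difference(t, v):
    """Derivative estimate from the last two samples only."""
    return (v[-1] - v[-2]) / (t[-1] - t[-2])

def window_slope(t, v, n=5):
    """Least-squares slope over the last n samples (more noise-robust)."""
    ts, vs = t[-n:], v[-n:]
    tm = sum(ts) / len(ts)
    vm = sum(vs) / len(vs)
    num = sum((ti - tm) * (vi - vm) for ti, vi in zip(ts, vs))
    den = sum((ti - tm) ** 2 for ti in ts)
    return num / den

# For clean samples of v(t) = 2*t both estimators recover v' = 2.
t = [0.0, 0.1, 0.2, 0.3, 0.4]
v = [2.0 * ti for ti in t]
print(backward_difference(t, v))  # ~2.0 (up to float rounding)
print(window_slope(t, v))         # ~2.0
```

For clean data both agree; on noisy data the windowed slope is much steadier, which is one reason a raw two-point slope rarely matches a model-integrated derivative.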
Maybe I just missed where to put the measurements and how to calculate all that is not directly measurable. If you could help me that would be great :) | 1.0 | non_defect | 0
63,634 | 17,795,798,385 | IssuesEvent | 2021-08-31 22:01:26 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | opened | Coil:Cooling:WaterToAirHeatPump:EquationFit coil reported total cooling energy not consistent with curve calculated total cooling energy | Defect | Issue overview
--------------
In WaterToAirHeatPumpSimple.cc, there's `state.dataWaterToAirHeatPumpSimple->QLoadTotalReport` created for reporting in order to keep the consistency between `Cooling Coil Total Cooling Energy` and `Air System Coil Cooling Energy` in the [bugfix PR8980](https://github.com/NREL/EnergyPlus/pull/8980). The coil total cooling energy used to be calculated based on total cooling capacity curve output and run time fraction (where `state.dataWaterToAirHeatPumpSimple->QLoadTotal` is calculated), however, it is not consistent with air system coil cooling energy variable which is calculated based on inlet and outlet enthalpy. See the conversation in [PR8980](https://github.com/NREL/EnergyPlus/pull/8980#issuecomment-906444438) for more details. Need further investigation to understand why the discrepancy exists.
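
Schematically, the two accountings being compared are (generic symbols, not the actual EnergyPlus variable names):

```latex
% curve-based total cooling, as accumulated from the equation-fit output:
Q_{\mathrm{curve}} = \dot{Q}_{\mathrm{rated}} \, f_{\mathrm{TotCap}}(T_w, T_{wb}, \dot{m}) \cdot \mathrm{RTF}
% enthalpy-based coil load, as seen by the air-system report:
Q_{\mathrm{enthalpy}} = \dot{m}_{\mathrm{air}} \, (h_{\mathrm{in}} - h_{\mathrm{out}})
```

If the curve output and the predicted outlet air state were mutually consistent, the two would agree over each time step; a persistent gap suggests the outlet-state calculation or the run-time-fraction handling as places to look.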
FYI, @rraustad @shorowit .
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | defect | 1
49,099 | 13,185,231,445 | IssuesEvent | 2020-08-12 20:59:15 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | gulliver-modules::fancyfit test fails on SL5 32bit (Trac #746) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/746, reported by nega and owned by boersma</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T17:23:17",
"description": "Python traceback:\n\n{{{\nERROR (I3FortyTwo): PLEASE INVESTIGATE (fortytwo.py:185 in Finish)\nFATAL (I3FortyTwo): SOME CHECKS FAILED (fortytwo.py:186 in Finish)\nERROR (I3Module): <class 'fortytwo.I3FortyTwo'>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fancyfit.py\", line 142, in ?\n tray.Finish()\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fortytwo.py\", line 186, in Finish\n icetray.logging.log_fatal(\"SOME CHECKS FAILED\",unit=u42)\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: SOME CHECKS FAILED (in Finish)\n}}}\n\nMore output at: http://builds.icecube.wisc.edu/builders/quick_icerec_SL5/builds/1103/steps/test/logs/stdio\n",
"reporter": "nega",
"cc": "dataclass@icecube.wisc.edu",
"resolution": "wontfix",
"_ts": "1423675397463977",
"component": "combo reconstruction",
"summary": "gulliver-modules::fancyfit test fails on SL5 32bit",
"priority": "normal",
"keywords": "gulliver-modules tests",
"time": "2014-09-05T21:19:03",
"milestone": "",
"owner": "boersma",
"type": "defect"
}
```
</p>
</details>
| 1.0 | defect | 1
80,470 | 30,299,818,302 | IssuesEvent | 2023-07-10 04:28:34 | gperftools/gperftools | https://api.github.com/repos/gperftools/gperftools | closed | Heap Checker considers data belonging to dead threads to be live (Linux) | Type-Defect Priority-Medium Status-New | Originally reported on Google Code with ID 537
```
Heap Checker treats all memory ranges mapped by libpthread as live, with the exception
of those ranges which contain the stack pointer of a running thread. On Linux, libpthread
does not immediately unmap the stack on thread death. As a result, stacks of dead threads
may still exist in memory and will be treated as live by Heap Checker.
Reproducer:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
void *thread_func(void *arg) {
  (void)arg; // unused
  // Print the stack range for reference.
  pthread_attr_t attr;
  pthread_getattr_np(pthread_self(), &attr);
  void *stackaddr;
  size_t stacksize;
  pthread_attr_getstack(&attr, &stackaddr, &stacksize);
  printf("Stack at %p-%p\n", stackaddr, (char *)stackaddr + stacksize);
  void *leaked = malloc(31337);
  printf("%p\n", leaked); // break optimization
  return 0; // retval is ignored by pthread_join below
}

int main() {
  pthread_t pid;
  pthread_create(&pid, 0, thread_func, 0);
  pthread_join(pid, 0);
  return 0;
}
Build with -pthread and -ltcmalloc, run with HEAPCHECK=strict or HEAPCHECK=normal.
Heap checker reports no leaks, when in fact we have a leaked block of 31337 bytes.
Now run with HEAPCHECK=strict PERFTOOLS_VERBOSE=20.
On stdout:
Stack at 0x7f8792bf8000-0x7f87933f9000
[...]
On stderr:
[...]
Checking for whole-program memory leaks
Disabling allocations from /lib/x86_64-linux-gnu/libpthread-2.15.so at depth 1:
Global memory regions made by /lib/x86_64-linux-gnu/libpthread-2.15.so will be live
data
Disabling allocations from /lib/x86_64-linux-gnu/ld-2.15.so at depth 2:
Global memory regions made by /lib/x86_64-linux-gnu/ld-2.15.so will be live data
Found 0 threads (from pid 32364)
Looking into ./a.out: 0x601000..0x602000
Looking into [heap]: 0x21c3000..0x23dc000
Looking into UNNAMED: 0x7f8792bf9000..0x7f87933f9000
[...]
Looking for heap pointers in 0x7f8792bf9000 of 8388608 bytes
Got pointer into 0x22e0000 at +0 offset
Found pointer to 0x22e0000 of 31337 bytes at 0x7f87933f7780 inside 0x7f8792bf9000 of
size 8388608
[...]
```
Reported by `earthdok@google.com` on 2013-06-05 10:36:56
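
The failure mode described above (a conservative scan that treats any pointer-sized value inside a "live" region as a reference) can be modeled with a toy sketch; this is pure illustration in Python, not gperftools code:

```python
def find_reachable(live_regions, allocations):
    """Toy conservative mark phase: any word found in a region that is
    treated as live and equals an allocation's address keeps that
    allocation 'reachable'."""
    reachable = set()
    for region in live_regions:
        for word in region:
            if word in allocations:
                reachable.add(word)
    return reachable

# Two allocations; 0x2000 is only referenced from a dead thread's stack.
# If that stack is still mapped and counted as live (as on Linux, where
# libpthread caches thread stacks), the leak is masked.
allocations = {0x1000, 0x2000}
main_stack = [0xDEAD, 0x1000]
dead_thread_stack = [0x2000, 0xBEEF]

without_stale = find_reachable([main_stack], allocations)
with_stale = find_reachable([main_stack, dead_thread_stack], allocations)
print(sorted(allocations - without_stale))  # [8192]: 0x2000 reported leaked
print(sorted(allocations - with_stale))     # []: the leak is masked
```

Here the allocation at 0x2000 is referenced only from the dead thread's stack; as long as that stack mapping is counted as live data, the leak goes unreported, matching the reproducer's observed behavior.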
| 1.0 | defect | 1
56,636 | 6,994,385,787 | IssuesEvent | 2017-12-15 15:11:52 | yldio/joyent-portal | https://api.github.com/repos/yldio/joyent-portal | opened | Add link to edit firewall rules | design | From London sprint.
Potentially using an external link icon ("flivver"). | 1.0 | non_defect | 0
74,224 | 25,013,753,604 | IssuesEvent | 2022-11-03 17:04:39 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | opened | Panic upon "Organize imports" in a TypeScript barrel-file | defect triage | ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
Running the "Organize imports" action in a TypeScript barrel file causes Zed to panic.
Steps to reproduce:
1. Open a TypeScript barrel file
e.g.
```ts
export * from "./somefile"
export * from "./anotherfile"
```
2. Press `cmd + .` and select "Organize imports".
### Expected behavior
Nothing happens (not sure if `export * from "..."` should be reordered though)
### Environment
Zed 0.62.5 – /Applications/Zed.app
macOS 12.6
architecture arm64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
16:57:26 [ERROR] thread 'background-executor-6' panicked at 'point PointUtf16 { row: 0, column: 21 } is beyond the end of a line with length 20': crates/rope/src/rope.rs:728
0: backtrace::capture::Backtrace::new
1: Zed::init_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
3: std::panicking::begin_panic_handler::{{closure}}
4: std::sys_common::backtrace::__rust_end_short_backtrace
5: _rust_begin_unwind
6: core::panicking::panic_fmt
7: rope::Rope::point_utf16_to_offset
8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
10: async_task::raw::RawTask<F,T,S>::run
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
12: async_io::driver::block_on
13: std::sys_common::backtrace::__rust_begin_short_backtrace
14: core::ops::function::FnOnce::call_once{{vtable.shim}}
15: std::sys::unix::thread::Thread::new::thread_start
16: __pthread_deallocate
16:57:26 [ERROR] thread 'main' panicked at 'task has failed': /Users/administrator/.cargo/git/checkouts/async-task-939ec7beeb877e57/341b57d/src/task.rs:368
0: backtrace::capture::Backtrace::new
1: Zed::init_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
3: std::panicking::begin_panic_handler::{{closure}}
4: std::sys_common::backtrace::__rust_end_short_backtrace
5: _rust_begin_unwind
6: core::panicking::panic_fmt
7: core::panicking::panic_display
8: core::panicking::panic_str
9: core::option::expect_failed
10: futures_lite::future::FutureExt::poll
11: <gpui::executor::Task<T> as core::future::future::Future>::poll
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <async_task::runnable::spawn_local::Checked<F> as core::future::future::Future>::poll
15: async_task::raw::RawTask<F,T,S>::run
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: <unknown>
28: <gpui::platform::mac::platform::MacForegroundPlatform as gpui::platform::ForegroundPlatform>::run
29: gpui::app::App::run
30: Zed::main
31: std::sys_common::backtrace::__rust_begin_short_backtrace
32: std::rt::lang_start::{{closure}}
33: std::rt::lang_start_internal
34: _main | 1.0 | defect
run gpui app app run zed main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal main | 1 |
41,938 | 10,720,818,446 | IssuesEvent | 2019-10-26 20:30:29 | tulir/mautrix-telegram | https://api.github.com/repos/tulir/mautrix-telegram | closed | Incorrect peer type is sometimes saved in portal database | bug: defect | In some cases, the bridge will create a portal object with an incorrect peer type for some reason (`chat` instead of `channel`). Such bridges won't work at all, since many requests (like send message) are peer type specific. The type is then saved into the database and can't be fixed without manually editing the database. This might only be a problem for puppeted accounts. | 1.0 | Incorrect peer type is sometimes saved in portal database - In some cases, the bridge will create a portal object with an incorrect peer type for some reason (`chat` instead of `channel`). Such bridges won't work at all, since many requests (like send message) are peer type specific. The type is then saved into the database and can't be fixed without manually editing the database. This might only be a problem for puppeted accounts. | defect | incorrect peer type is sometimes saved in portal database in some cases the bridge will create a portal object with an incorrect peer type for some reason chat instead of channel such bridges won t work at all since many requests like send message are peer type specific the type is then saved into the database and can t be fixed without manually editing the database this might only be a problem for puppeted accounts | 1 |
66,577 | 20,362,365,301 | IssuesEvent | 2022-02-20 21:29:54 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | Kernel Panic and DoS on massive amounts of snapshot mount/umount | Type: Defect |
### System information
Distribution Name | Ubuntu
Distribution Version | 16.04
Kernel Version | 5.10.83
Architecture | x86_64/amd64
OpenZFS Version | 2.1.2
Get a Kernel panic from the ZFS snapshot umount/mount code. Kernel eventually ends up wedged as ZFS is locked in a system call. Basically, a DoS
Mount .zfs/snapshot directory of a dataset via Samba smb share to Windows, Have LOTS of snapshots
Backtrace from dmesg:
'
[1679741.045537] INFO: task spl_delay_taskq:2096 blocked for more than 122 seconds.
[1679741.053018] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.060784] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.068873] task:spl_delay_taskq state:D stack: 0 pid: 2096 ppid: 2 flags:0x00004000
[1679741.068880] Call Trace:
[1679741.068894] __schedule+0x22e/0x760
[1679741.068900] schedule+0x3c/0xa0
[1679741.068904] schedule_timeout+0x1c0/0x220
[1679741.068909] wait_for_completion+0x97/0x100
[1679741.068918] call_usermodehelper_exec+0x12e/0x160
[1679741.068976] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.069038] snapentry_expire+0x37/0xc0 [zfs]
[1679741.069046] taskq_thread+0x2d5/0x490 [spl]
[1679741.069053] ? wake_up_q+0xa0/0xa0
[1679741.069062] ? task_done+0x90/0x90 [spl]
[1679741.069067] kthread+0x117/0x130
[1679741.069074] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.069079] ret_from_fork+0x22/0x30
[1679741.069225] INFO: task spl_delay_taskq:12654 blocked for more than 122 seconds.
[1679741.076806] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.084551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.092647] task:spl_delay_taskq state:D stack: 0 pid:12654 ppid: 2 flags:0x00004000
[1679741.092654] Call Trace:
[1679741.092662] __schedule+0x22e/0x760
[1679741.092674] schedule+0x3c/0xa0
[1679741.092678] rwsem_down_read_slowpath+0x2f6/0x4a0
[1679741.092722] snapentry_expire+0x4b/0xc0 [zfs]
[1679741.092730] taskq_thread+0x2d5/0x490 [spl]
[1679741.092736] ? wake_up_q+0xa0/0xa0
[1679741.092744] ? task_done+0x90/0x90 [spl]
[1679741.092748] kthread+0x117/0x130
[1679741.092753] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.092757] ret_from_fork+0x22/0x30
[1679741.092761] INFO: task spl_delay_taskq:12657 blocked for more than 122 seconds.
[1679741.100340] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.108093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.116187] task:spl_delay_taskq state:D stack: 0 pid:12657 ppid: 2 flags:0x00004000
[1679741.116195] Call Trace:
[1679741.116204] __schedule+0x22e/0x760
[1679741.116211] schedule+0x3c/0xa0
[1679741.116215] schedule_timeout+0x1c0/0x220
[1679741.116220] wait_for_completion+0x97/0x100
[1679741.116231] call_usermodehelper_exec+0x12e/0x160
[1679741.116271] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.116316] snapentry_expire+0x37/0xc0 [zfs]
[1679741.116323] taskq_thread+0x2d5/0x490 [spl]
[1679741.116328] ? wake_up_q+0xa0/0xa0
[1679741.116337] ? task_done+0x90/0x90 [spl]
[1679741.116343] kthread+0x117/0x130
[1679741.116347] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.116350] ret_from_fork+0x22/0x30
[1679741.116356] INFO: task spl_delay_taskq:12665 blocked for more than 122 seconds.
[1679741.123930] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.131697] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.139828] task:spl_delay_taskq state:D stack: 0 pid:12665 ppid: 2 flags:0x00004000
[1679741.139833] Call Trace:
[1679741.139841] __schedule+0x22e/0x760
[1679741.139851] schedule+0x3c/0xa0
[1679741.139855] rwsem_down_read_slowpath+0x2f6/0x4a0
[1679741.139896] snapentry_expire+0x4b/0xc0 [zfs]
[1679741.139926] taskq_thread+0x2d5/0x490 [spl]
[1679741.139932] ? wake_up_q+0xa0/0xa0
[1679741.139940] ? task_done+0x90/0x90 [spl]
[1679741.139952] kthread+0x117/0x130
[1679741.139959] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.139962] ret_from_fork+0x22/0x30
[1679741.139969] INFO: task spl_delay_taskq:12671 blocked for more than 122 seconds.
[1679741.147577] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.155359] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.163491] task:spl_delay_taskq state:D stack: 0 pid:12671 ppid: 2 flags:0x00004000
[1679741.163511] Call Trace:
[1679741.163517] __schedule+0x22e/0x760
[1679741.163521] schedule+0x3c/0xa0
[1679741.163524] schedule_timeout+0x1c0/0x220
[1679741.163529] wait_for_completion+0x97/0x100
[1679741.163534] call_usermodehelper_exec+0x12e/0x160
[1679741.163574] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.163613] snapentry_expire+0x37/0xc0 [zfs]
[1679741.163622] taskq_thread+0x2d5/0x490 [spl]
[1679741.163627] ? wake_up_q+0xa0/0xa0
[1679741.163635] ? task_done+0x90/0x90 [spl]
[1679741.163639] kthread+0x117/0x130
[1679741.163643] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.163647] ret_from_fork+0x22/0x30
[1679741.163667] INFO: task spl_delay_taskq:12675 blocked for more than 122 seconds.
[1679741.171265] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.179049] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.187173] task:spl_delay_taskq state:D stack: 0 pid:12675 ppid: 2 flags:0x00004000
[1679741.187178] Call Trace:
[1679741.187187] __schedule+0x22e/0x760
[1679741.187192] schedule+0x3c/0xa0
[1679741.187196] schedule_timeout+0x1c0/0x220
[1679741.187201] wait_for_completion+0x97/0x100
[1679741.187208] call_usermodehelper_exec+0x12e/0x160
[1679741.187257] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.187299] snapentry_expire+0x37/0xc0 [zfs]
[1679741.187307] taskq_thread+0x2d5/0x490 [spl]
[1679741.187313] ? wake_up_q+0xa0/0xa0
[1679741.187321] ? task_done+0x90/0x90 [spl]
[1679741.187326] kthread+0x117/0x130
[1679741.187330] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.187335] ret_from_fork+0x22/0x30
[1679741.187341] INFO: task spl_delay_taskq:12680 blocked for more than 123 seconds.
[1679741.194957] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.202737] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.210861] task:spl_delay_taskq state:D stack: 0 pid:12680 ppid: 2 flags:0x00004000
[1679741.210867] Call Trace:
[1679741.210872] __schedule+0x22e/0x760
[1679741.210879] schedule+0x3c/0xa0
[1679741.210883] schedule_timeout+0x1c0/0x220
[1679741.210889] wait_for_completion+0x97/0x100
[1679741.210898] call_usermodehelper_exec+0x12e/0x160
[1679741.210938] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.210981] snapentry_expire+0x37/0xc0 [zfs]
[1679741.210988] taskq_thread+0x2d5/0x490 [spl]
[1679741.210994] ? wake_up_q+0xa0/0xa0
[1679741.211003] ? task_done+0x90/0x90 [spl]
[1679741.211007] kthread+0x117/0x130
[1679741.211012] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.211015] ret_from_fork+0x22/0x30
[1679741.211019] INFO: task spl_delay_taskq:12687 blocked for more than 123 seconds.
[1679741.218634] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.226416] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.234548] task:spl_delay_taskq state:D stack: 0 pid:12687 ppid: 2 flags:0x00004000
[1679741.234552] Call Trace:
[1679741.234556] __schedule+0x22e/0x760
[1679741.234562] schedule+0x3c/0xa0
[1679741.234571] schedule_timeout+0x1c0/0x220
[1679741.234579] wait_for_completion+0x97/0x100
[1679741.234584] call_usermodehelper_exec+0x12e/0x160
[1679741.234623] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.234666] snapentry_expire+0x37/0xc0 [zfs]
[1679741.234674] taskq_thread+0x2d5/0x490 [spl]
[1679741.234678] ? wake_up_q+0xa0/0xa0
[1679741.234686] ? task_done+0x90/0x90 [spl]
[1679741.234691] kthread+0x117/0x130
[1679741.234695] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.234698] ret_from_fork+0x22/0x30
[1679741.234703] INFO: task spl_delay_taskq:12693 blocked for more than 123 seconds.
[1679741.242324] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.250100] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.258224] task:spl_delay_taskq state:D stack: 0 pid:12693 ppid: 2 flags:0x00004000
[1679741.258227] Call Trace:
[1679741.258234] __schedule+0x22e/0x760
[1679741.258238] schedule+0x3c/0xa0
[1679741.258246] schedule_timeout+0x1c0/0x220
[1679741.258252] wait_for_completion+0x97/0x100
[1679741.258257] call_usermodehelper_exec+0x12e/0x160
[1679741.258297] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.258339] snapentry_expire+0x37/0xc0 [zfs]
[1679741.258346] taskq_thread+0x2d5/0x490 [spl]
[1679741.258351] ? wake_up_q+0xa0/0xa0
[1679741.258359] ? task_done+0x90/0x90 [spl]
[1679741.258364] kthread+0x117/0x130
[1679741.258368] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.258371] ret_from_fork+0x22/0x30
[1679741.258376] INFO: task spl_delay_taskq:12698 blocked for more than 123 seconds.
[1679741.265985] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.273770] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.281904] task:spl_delay_taskq state:D stack: 0 pid:12698 ppid: 2 flags:0x00004000
[1679741.281908] Call Trace:
[1679741.281913] __schedule+0x22e/0x760
[1679741.281920] schedule+0x3c/0xa0
[1679741.281929] schedule_timeout+0x1c0/0x220
[1679741.281933] wait_for_completion+0x97/0x100
[1679741.281938] call_usermodehelper_exec+0x12e/0x160
[1679741.281982] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.282025] snapentry_expire+0x37/0xc0 [zfs]
[1679741.282033] taskq_thread+0x2d5/0x490 [spl]
[1679741.282039] ? wake_up_q+0xa0/0xa0
[1679741.282047] ? task_done+0x90/0x90 [spl]
[1679741.282052] kthread+0x117/0x130
[1679741.282056] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.282059] ret_from_fork+0x22/0x30
'
| 1.0 | Kernel Panic and DoS on massive amounts of snapshot mount/umount -
### System information
Distribution Name | Ubuntu
Distribution Version | 16.04
Kernel Version | 5.10.83
Architecture | x86_64/amd64
OpenZFS Version | 2.1.2
Get a Kernel panic from the ZFS snapshot umount/mount code. Kernel eventually ends up wedged as ZFS is locked in a system call. Basically, a DoS
Mount .zfs/snapshot directory of a dataset via Samba smb share to Windows, Have LOTS of snapshots
Backtrace from dmesg:
'
[1679741.045537] INFO: task spl_delay_taskq:2096 blocked for more than 122 seconds.
[1679741.053018] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.060784] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.068873] task:spl_delay_taskq state:D stack: 0 pid: 2096 ppid: 2 flags:0x00004000
[1679741.068880] Call Trace:
[1679741.068894] __schedule+0x22e/0x760
[1679741.068900] schedule+0x3c/0xa0
[1679741.068904] schedule_timeout+0x1c0/0x220
[1679741.068909] wait_for_completion+0x97/0x100
[1679741.068918] call_usermodehelper_exec+0x12e/0x160
[1679741.068976] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.069038] snapentry_expire+0x37/0xc0 [zfs]
[1679741.069046] taskq_thread+0x2d5/0x490 [spl]
[1679741.069053] ? wake_up_q+0xa0/0xa0
[1679741.069062] ? task_done+0x90/0x90 [spl]
[1679741.069067] kthread+0x117/0x130
[1679741.069074] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.069079] ret_from_fork+0x22/0x30
[1679741.069225] INFO: task spl_delay_taskq:12654 blocked for more than 122 seconds.
[1679741.076806] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.084551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.092647] task:spl_delay_taskq state:D stack: 0 pid:12654 ppid: 2 flags:0x00004000
[1679741.092654] Call Trace:
[1679741.092662] __schedule+0x22e/0x760
[1679741.092674] schedule+0x3c/0xa0
[1679741.092678] rwsem_down_read_slowpath+0x2f6/0x4a0
[1679741.092722] snapentry_expire+0x4b/0xc0 [zfs]
[1679741.092730] taskq_thread+0x2d5/0x490 [spl]
[1679741.092736] ? wake_up_q+0xa0/0xa0
[1679741.092744] ? task_done+0x90/0x90 [spl]
[1679741.092748] kthread+0x117/0x130
[1679741.092753] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.092757] ret_from_fork+0x22/0x30
[1679741.092761] INFO: task spl_delay_taskq:12657 blocked for more than 122 seconds.
[1679741.100340] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.108093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.116187] task:spl_delay_taskq state:D stack: 0 pid:12657 ppid: 2 flags:0x00004000
[1679741.116195] Call Trace:
[1679741.116204] __schedule+0x22e/0x760
[1679741.116211] schedule+0x3c/0xa0
[1679741.116215] schedule_timeout+0x1c0/0x220
[1679741.116220] wait_for_completion+0x97/0x100
[1679741.116231] call_usermodehelper_exec+0x12e/0x160
[1679741.116271] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.116316] snapentry_expire+0x37/0xc0 [zfs]
[1679741.116323] taskq_thread+0x2d5/0x490 [spl]
[1679741.116328] ? wake_up_q+0xa0/0xa0
[1679741.116337] ? task_done+0x90/0x90 [spl]
[1679741.116343] kthread+0x117/0x130
[1679741.116347] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.116350] ret_from_fork+0x22/0x30
[1679741.116356] INFO: task spl_delay_taskq:12665 blocked for more than 122 seconds.
[1679741.123930] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.131697] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.139828] task:spl_delay_taskq state:D stack: 0 pid:12665 ppid: 2 flags:0x00004000
[1679741.139833] Call Trace:
[1679741.139841] __schedule+0x22e/0x760
[1679741.139851] schedule+0x3c/0xa0
[1679741.139855] rwsem_down_read_slowpath+0x2f6/0x4a0
[1679741.139896] snapentry_expire+0x4b/0xc0 [zfs]
[1679741.139926] taskq_thread+0x2d5/0x490 [spl]
[1679741.139932] ? wake_up_q+0xa0/0xa0
[1679741.139940] ? task_done+0x90/0x90 [spl]
[1679741.139952] kthread+0x117/0x130
[1679741.139959] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.139962] ret_from_fork+0x22/0x30
[1679741.139969] INFO: task spl_delay_taskq:12671 blocked for more than 122 seconds.
[1679741.147577] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.155359] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.163491] task:spl_delay_taskq state:D stack: 0 pid:12671 ppid: 2 flags:0x00004000
[1679741.163511] Call Trace:
[1679741.163517] __schedule+0x22e/0x760
[1679741.163521] schedule+0x3c/0xa0
[1679741.163524] schedule_timeout+0x1c0/0x220
[1679741.163529] wait_for_completion+0x97/0x100
[1679741.163534] call_usermodehelper_exec+0x12e/0x160
[1679741.163574] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.163613] snapentry_expire+0x37/0xc0 [zfs]
[1679741.163622] taskq_thread+0x2d5/0x490 [spl]
[1679741.163627] ? wake_up_q+0xa0/0xa0
[1679741.163635] ? task_done+0x90/0x90 [spl]
[1679741.163639] kthread+0x117/0x130
[1679741.163643] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.163647] ret_from_fork+0x22/0x30
[1679741.163667] INFO: task spl_delay_taskq:12675 blocked for more than 122 seconds.
[1679741.171265] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.179049] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.187173] task:spl_delay_taskq state:D stack: 0 pid:12675 ppid: 2 flags:0x00004000
[1679741.187178] Call Trace:
[1679741.187187] __schedule+0x22e/0x760
[1679741.187192] schedule+0x3c/0xa0
[1679741.187196] schedule_timeout+0x1c0/0x220
[1679741.187201] wait_for_completion+0x97/0x100
[1679741.187208] call_usermodehelper_exec+0x12e/0x160
[1679741.187257] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.187299] snapentry_expire+0x37/0xc0 [zfs]
[1679741.187307] taskq_thread+0x2d5/0x490 [spl]
[1679741.187313] ? wake_up_q+0xa0/0xa0
[1679741.187321] ? task_done+0x90/0x90 [spl]
[1679741.187326] kthread+0x117/0x130
[1679741.187330] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.187335] ret_from_fork+0x22/0x30
[1679741.187341] INFO: task spl_delay_taskq:12680 blocked for more than 123 seconds.
[1679741.194957] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.202737] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.210861] task:spl_delay_taskq state:D stack: 0 pid:12680 ppid: 2 flags:0x00004000
[1679741.210867] Call Trace:
[1679741.210872] __schedule+0x22e/0x760
[1679741.210879] schedule+0x3c/0xa0
[1679741.210883] schedule_timeout+0x1c0/0x220
[1679741.210889] wait_for_completion+0x97/0x100
[1679741.210898] call_usermodehelper_exec+0x12e/0x160
[1679741.210938] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.210981] snapentry_expire+0x37/0xc0 [zfs]
[1679741.210988] taskq_thread+0x2d5/0x490 [spl]
[1679741.210994] ? wake_up_q+0xa0/0xa0
[1679741.211003] ? task_done+0x90/0x90 [spl]
[1679741.211007] kthread+0x117/0x130
[1679741.211012] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.211015] ret_from_fork+0x22/0x30
[1679741.211019] INFO: task spl_delay_taskq:12687 blocked for more than 123 seconds.
[1679741.218634] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.226416] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.234548] task:spl_delay_taskq state:D stack: 0 pid:12687 ppid: 2 flags:0x00004000
[1679741.234552] Call Trace:
[1679741.234556] __schedule+0x22e/0x760
[1679741.234562] schedule+0x3c/0xa0
[1679741.234571] schedule_timeout+0x1c0/0x220
[1679741.234579] wait_for_completion+0x97/0x100
[1679741.234584] call_usermodehelper_exec+0x12e/0x160
[1679741.234623] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.234666] snapentry_expire+0x37/0xc0 [zfs]
[1679741.234674] taskq_thread+0x2d5/0x490 [spl]
[1679741.234678] ? wake_up_q+0xa0/0xa0
[1679741.234686] ? task_done+0x90/0x90 [spl]
[1679741.234691] kthread+0x117/0x130
[1679741.234695] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.234698] ret_from_fork+0x22/0x30
[1679741.234703] INFO: task spl_delay_taskq:12693 blocked for more than 123 seconds.
[1679741.242324] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.250100] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.258224] task:spl_delay_taskq state:D stack: 0 pid:12693 ppid: 2 flags:0x00004000
[1679741.258227] Call Trace:
[1679741.258234] __schedule+0x22e/0x760
[1679741.258238] schedule+0x3c/0xa0
[1679741.258246] schedule_timeout+0x1c0/0x220
[1679741.258252] wait_for_completion+0x97/0x100
[1679741.258257] call_usermodehelper_exec+0x12e/0x160
[1679741.258297] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.258339] snapentry_expire+0x37/0xc0 [zfs]
[1679741.258346] taskq_thread+0x2d5/0x490 [spl]
[1679741.258351] ? wake_up_q+0xa0/0xa0
[1679741.258359] ? task_done+0x90/0x90 [spl]
[1679741.258364] kthread+0x117/0x130
[1679741.258368] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.258371] ret_from_fork+0x22/0x30
[1679741.258376] INFO: task spl_delay_taskq:12698 blocked for more than 123 seconds.
[1679741.265985] Tainted: P O 5.10.83-amd64-mag-lts #202112051034
[1679741.273770] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[1679741.281904] task:spl_delay_taskq state:D stack: 0 pid:12698 ppid: 2 flags:0x00004000
[1679741.281908] Call Trace:
[1679741.281913] __schedule+0x22e/0x760
[1679741.281920] schedule+0x3c/0xa0
[1679741.281929] schedule_timeout+0x1c0/0x220
[1679741.281933] wait_for_completion+0x97/0x100
[1679741.281938] call_usermodehelper_exec+0x12e/0x160
[1679741.281982] zfsctl_snapshot_unmount+0x109/0x1f0 [zfs]
[1679741.282025] snapentry_expire+0x37/0xc0 [zfs]
[1679741.282033] taskq_thread+0x2d5/0x490 [spl]
[1679741.282039] ? wake_up_q+0xa0/0xa0
[1679741.282047] ? task_done+0x90/0x90 [spl]
[1679741.282052] kthread+0x117/0x130
[1679741.282056] ? kthread_associate_blkcg+0xa0/0xa0
[1679741.282059] ret_from_fork+0x22/0x30
'
| defect | kernel panic and dos on massive amounts of snapshot mount umount system information distribution name ubuntu distribution version kernel version architecture openzfs version get a kernel panic from the zfs snapshot umount mount code kernel eventually ends up wedged as zfs is locked in a system call basically a dos mount zfs snapshot directory of a dataset via samba smb share to windows have lots of snapshots backtrace from dmesg info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule rwsem down read slowpath snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule rwsem down read slowpath snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p 
o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork info task spl delay taskq blocked for more than 
seconds tainted p o mag lts echo proc sys kernel hung task timeout secs disables this message task spl delay taskq state d stack pid ppid flags call trace schedule schedule schedule timeout wait for completion call usermodehelper exec zfsctl snapshot unmount snapentry expire taskq thread wake up q task done kthread kthread associate blkcg ret from fork | 1 |
46,352 | 9,932,595,934 | IssuesEvent | 2019-07-02 10:11:02 | eclipse/vorto | https://api.github.com/repos/eclipse/vorto | closed | Wrongly generated Ditto Protocol json in arduino generator | Code Generators bug | Contains an additional '}' which makes the json malformed. | 1.0 | Wrongly generated Ditto Protocol json in arduino generator - Contains an additional '}' which makes the json malformed. | non_defect | wrongly generated ditto protocol json in arduino generator contains an additional which makes the json malformed | 0 |
159,502 | 25,003,936,509 | IssuesEvent | 2022-11-03 10:17:20 | Senf-koeln/senf-monorepo | https://api.github.com/repos/Senf-koeln/senf-monorepo | closed | Chat | Design not ready | As a member I want to be able to chat with collaborators in order to communicate about and progress with the action.
Important is, that this application is divided into Workspaces. It should not be one big workspace, but projectroom-related (at some point maybe also idea-related)) | 1.0 | Chat - As a member I want to be able to chat with collaborators in order to communicate about and progress with the action.
Important is, that this application is divided into Workspaces. It should not be one big workspace, but projectroom-related (at some point maybe also idea-related)) | non_defect | chat as a member i want to be able to chat with collaborators in order to communicate about and progress with the action important is that this application is divided into workspaces it should not be one big workspace but projectroom related at some point maybe also idea related | 0 |
327,844 | 9,982,026,947 | IssuesEvent | 2019-07-10 08:55:37 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | vikings.fandom.com - see bug description | browser-fenix engine-gecko priority-important | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://vikings.fandom.com/wiki/Rollo
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: even with autoplay turned OFF the video autoplays
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | vikings.fandom.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://vikings.fandom.com/wiki/Rollo
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: even with autoplay turned OFF the video autoplays
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | vikings fandom com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description even with autoplay turned off the video autoplays steps to reproduce browser configuration none from with ❤️ | 0 |
30,579 | 4,634,545,959 | IssuesEvent | 2016-09-29 01:45:54 | servo/servo | https://api.github.com/repos/servo/servo | opened | Navigation items at the top of nbc.com overlap one another horizontally | A-layout/block A-layout/floats C-needs-test I-wrong | The text is all smushed together—layout considers the floats to have the wrong widths. Each item seems to have `float: left` and the elements inside form stacking contexts in various ways. | 1.0 | Navigation items at the top of nbc.com overlap one another horizontally - The text is all smushed together—layout considers the floats to have the wrong widths. Each item seems to have `float: left` and the elements inside form stacking contexts in various ways. | non_defect | navigation items at the top of nbc com overlap one another horizontally the text is all smushed together—layout considers the floats to have the wrong widths each item seems to have float left and the elements inside form stacking contexts in various ways | 0 |
59,700 | 17,023,209,896 | IssuesEvent | 2021-07-03 00:52:17 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Undeleting long ways | Component: potlatch (flash editor) Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 6.15pm, Wednesday, 20th February 2008]**
If a way is >200 points, Potlatch refuses to unlock it.
This is a problem for an undeleted way, because you can't split it, either.
http://www.openstreetmap.org/edit.html?lat=39.9516181&lon=29.6951805&zoom=15
| 1.0 | Undeleting long ways - **[Submitted to the original trac issue database at 6.15pm, Wednesday, 20th February 2008]**
If a way is >200 points, Potlatch refuses to unlock it.
This is a problem for an undeleted way, because you can't split it, either.
http://www.openstreetmap.org/edit.html?lat=39.9516181&lon=29.6951805&zoom=15
| defect | undeleting long ways if a way is points potlatch refuses to unlock it this is a problem for an undeleted way because you can t split it either | 1 |
56,097 | 14,928,831,051 | IssuesEvent | 2021-01-24 20:46:57 | martinrotter/rssguard | https://api.github.com/repos/martinrotter/rssguard | opened | [BUG]: feed stop working on v3.8.5 | Type-Defect | Previously used v3.8.3, the below feed can be parsed.
https://www.hiveandhoneyapiary.com/honey-bee.xml
Now it shows this error:

Another feed that couldn't be parsed on v3.8.3 is working fine with v3.8.5.
#### How to reproduce the bug?
1. add the feed to RSSGuard
#### Other information
* OS: win7x64
* RSS Guard version: v3.8.5 | 1.0 | [BUG]: feed stop working on v3.8.5 - Previously used v3.8.3, the below feed can be parsed.
https://www.hiveandhoneyapiary.com/honey-bee.xml
Now it shows this error:

Another feed that couldn't be parsed on v3.8.3 is working fine with v3.8.5.
#### How to reproduce the bug?
1. add the feed to RSSGuard
#### Other information
* OS: win7x64
* RSS Guard version: v3.8.5 | defect | feed stop working on previously used the below feed can be parsed now showing this error another the feed couldnt be parsed on working fine with how to reproduce the bug add the feed to rssguard other information os rss guard version | 1 |
17,565 | 3,012,747,280 | IssuesEvent | 2015-07-29 02:09:20 | yawlfoundation/yawl | https://api.github.com/repos/yawlfoundation/yawl | closed | [CLOSED] Unselecting a resource variable is not possible | auto-migrated Priority-Medium Type-Defect | **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Monday Jul 27, 2015 at 03:21 GMT_
_Originally opened as https://github.com/adamsmj/yawl/issues/57_
----
```
In step 2 of the resource wizard it does not seem to be possible to
unselect a resource variable once selected (see attached screenshot).
```
Original issue reported on code.google.com by `arthurte...@gmail.com` on 28 Jul 2008 at 6:34
Attachments:
* [TryingtoUnselectaResVar.doc](https://storage.googleapis.com/google-code-attachments/yawl/issue-57/comment-0/TryingtoUnselectaResVar.doc)
| 1.0 | [CLOSED] Unselecting a resource variable is not possible - **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)**
_Monday Jul 27, 2015 at 03:21 GMT_
_Originally opened as https://github.com/adamsmj/yawl/issues/57_
----
```
In step 2 of the resource wizard it does not seem to be possible to
unselect a resource variable once selected (see attached screenshot).
```
Original issue reported on code.google.com by `arthurte...@gmail.com` on 28 Jul 2008 at 6:34
Attachments:
* [TryingtoUnselectaResVar.doc](https://storage.googleapis.com/google-code-attachments/yawl/issue-57/comment-0/TryingtoUnselectaResVar.doc)
| defect | unselecting a resource variable is not possible issue by monday jul at gmt originally opened as in step of the resource wizard it does not seem to be possible to unselect a resource variable once selected see attached screenshot original issue reported on code google com by arthurte gmail com on jul at attachments | 1 |
118,727 | 11,987,105,759 | IssuesEvent | 2020-04-07 20:34:14 | microsoft/ApplicationInsights-dotnet | https://api.github.com/repos/microsoft/ApplicationInsights-dotnet | closed | Integration with .NET Activity is not documented | P2 documentation | AppInsights heavily relies upon .NET System.Diagnostics.Activity for in-proc context propagation (correlation) and dependency tracking.
Integration and limitations (when to use Activity vs AppInsights APIs) need to be documented. | 1.0 | Integration with .NET Activity is not documented - AppInsights heavily relies upon .NET System.Diagnostics.Activity for in-proc context propagation (correlation) and dependency tracking.
Integration and limitations (when to use Activity vs AppInsights APIs) need to be documented. | non_defect | integration with net activity is not documented appinsights heavily relies upon net system diagnostics activity for in proc context propagation correlation and dependency tracking integration and limitations when to use activity vs appinsights apis need to be documented | 0
2,123 | 2,603,976,676 | IssuesEvent | 2015-02-24 19:01:41 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳龟头长硬疙瘩 | auto-migrated Priority-Medium Type-Defect | ```
Shenyang: hard lump on the glans 〓 STD clinic of the Shenyang Military Region Political Department Hospital 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. Established alongside New China, it is a long-standing comprehensive hospital with fine equipment, authoritative techniques and assembled experts, integrating prevention, health care, medical treatment, scientific research and rehabilitation. It is among the nation's first batch of public grade-A military hospitals and first batch of designated units for standardized medical care, and serves as a teaching hospital for well-known institutions such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department and twice won collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:22 | 1.0 | 沈阳龟头长硬疙瘩 - ```
Shenyang: hard lump on the glans 〓 STD clinic of the Shenyang Military Region Political Department Hospital 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases. Located at No. 32 Erwei Road, Shenhe District, Shenyang. Established alongside New China, it is a long-standing comprehensive hospital with fine equipment, authoritative techniques and assembled experts, integrating prevention, health care, medical treatment, scientific research and rehabilitation. It is among the nation's first batch of public grade-A military hospitals and first batch of designated units for standardized medical care, and serves as a teaching hospital for well-known institutions such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department and twice won collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:22 | defect | 沈阳龟头长硬疙瘩 沈阳龟头长硬疙瘩〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位于� �� 。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 original issue reported on code google com by gmail com on jun at | 1 |
4,007 | 4,154,214,845 | IssuesEvent | 2016-06-16 10:42:46 | Tyler-Yates/Graphinator | https://api.github.com/repos/Tyler-Yates/Graphinator | opened | Allow Users to Select Greedy Coloring Algorithm | performance | Greedy coloring may not produce the minimum coloring of the graph but it is much faster than the always correct coloring algorithm that is currently implemented.
Allow users to select the greedy coloring algorithm and notify them in the Panel that the coloring is just a guess. | True | Allow Users to Select Greedy Coloring Algorithm - Greedy coloring may not produce the minimum coloring of the graph but it is much faster than the always correct coloring algorithm that is currently implemented.
Allow users to select the greedy coloring algorithm and notify them in the Panel that the coloring is just a guess. | non_defect | allow users to select greedy coloring algorithm greedy coloring may not produce the minimum coloring of the graph but it is much faster than the always correct coloring algorithm that is currently implemented allow users to select the greedy coloring algorithm and notify them in the panel that the coloring is just a guess | 0 |
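For context on the trade-off this Graphinator issue describes, here is a minimal greedy coloring sketch (not the project's actual code): each vertex takes the smallest color not already used by a colored neighbor. It runs in near-linear time but, depending on vertex order, may use more colors than the chromatic number, which is why the issue asks to label the result as a guess:

```python
def greedy_coloring(adj):
    """adj: dict mapping vertex -> iterable of neighbors. Returns vertex -> color (0-based)."""
    colors = {}
    for v in adj:  # vertex order matters: a bad order can waste colors
        used = {colors[n] for n in adj[v] if n in colors}
        c = 0
        while c in used:  # smallest color not used by an already-colored neighbor
            c += 1
        colors[v] = c
    return colors

# A triangle needs 3 colors; greedy finds them here.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(greedy_coloring(triangle))  # {0: 0, 1: 1, 2: 2}
```

The "always correct" algorithm the issue contrasts this with would search for a minimum coloring, which is NP-hard in general; greedy trades optimality for speed.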
80,002 | 29,831,251,170 | IssuesEvent | 2023-06-18 09:56:10 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | Scan progress variables are not updated after scrub completes | Type: Defect Status: Stale | <!-- Please fill out the following template, which will help other contributors address your issue. -->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | Bullseye
Kernel Version | 5.10.0-12-amd64
Architecture | x86_64
OpenZFS Version | 2.1.2-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
3x 2tb drives in a mirror configuration
### Describe the problem you're observing
While using libzfs to get scrub completion percentage, I noticed that it never reaches 100% but rather remains at 99.99XXX% indefinitely. More specifically, both `pss_issued` and `pss_examined` are always less than `pss_to_examine`.
Program output several hours after `zpool status` reported that the scrub had finished (same after reboot and import/export):
```
> x.getPoolStatus("tank")
pss_issued: 19040284160
pss_examined: 19040394752
pss_to_examine: 19040546304
----
pss_to_examine - pss_issued: 262144
pss_to_examine - pss_examined: 151552
----
(pss_issued / pss_to_examine) * 100: 99.99862323278012
(pss_examined / pss_to_examine) * 100: 99.999204056451
```
These variables are not updated one final time when `pss_state` and `pss_function` are changed to indicate completion.
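Using the numbers from the program output above, the percentage a consumer computes stops 262144 bytes (issued) and 151552 bytes (examined) short of `pss_to_examine`, so it can never reach 100. A client can work around the report's behavior by clamping on the completed state; `scan_percent` below is an illustrative helper, not a libzfs function:

```python
# Counters copied from the issue's program output.
PSS_TO_EXAMINE = 19040546304
PSS_ISSUED = 19040284160
PSS_EXAMINED = 19040394752

def scan_percent(done, total, finished=False):
    """Progress percentage; clamp to 100 once the scan reports completion."""
    if finished:
        return 100.0
    return 100.0 * done / total

raw = scan_percent(PSS_ISSUED, PSS_TO_EXAMINE)
assert 99.998 < raw < 100.0  # stuck just below 100, as in the report
assert scan_percent(PSS_ISSUED, PSS_TO_EXAMINE, finished=True) == 100.0
```

The underlying fix the issue requests is on the producer side: update the counters one final time when the scan state flips to done.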
### Describe how to reproduce the problem
1. Initiate a zpool scrub
2. Wait for it to finish
3. Check `pss_issued` and `pss_examined` to see they are less than `pss_to_examine`
### Include any warning/errors/backtraces from the system logs
n/a
| 1.0 | Scan progress variables are not updated after scrub completes - <!-- Please fill out the following template, which will help other contributors address your issue. -->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | Bullseye
Kernel Version | 5.10.0-12-amd64
Architecture | x86_64
OpenZFS Version | 2.1.2-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
3x 2tb drives in a mirror configuration
### Describe the problem you're observing
While using libzfs to get scrub completion percentage, I noticed that it never reaches 100% but rather remains at 99.99XXX% indefinitely. More specifically, both `pss_issued` and `pss_examined` are always less than `pss_to_examine`.
Program output several hours after `zpool status` reported that the scrub had finished (same after reboot and import/export):
```
> x.getPoolStatus("tank")
pss_issued: 19040284160
pss_examined: 19040394752
pss_to_examine: 19040546304
----
pss_to_examine - pss_issued: 262144
pss_to_examine - pss_examined: 151552
----
(pss_issued / pss_to_examine) * 100: 99.99862323278012
(pss_examined / pss_to_examine) * 100: 99.999204056451
```
These variables are not updated one final time when `pss_state` and `pss_function` are changed to indicate completion.
### Describe how to reproduce the problem
1. Initiate a zpool scrub
2. Wait for it to finish
3. Check `pss_issued` and `pss_examined` to see they are less than `pss_to_examine`
### Include any warning/errors/backtraces from the system logs
n/a
| defect | scan progress variables are not updated after scrub completes thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name debian distribution version bullseye kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd drives in a mirror configuration describe the problem you re observing while using libzfs to get scrub completion percentage i noticed that it never reaches but rather remains at indefinitely more specifically both pss issued and pss examined are always less than pss to examine program output several hours after zpool status reported that the scrub had finished same after reboot and import export x getpoolstatus tank pss issued pss examined pss to examine pss to examine pss issued pss to examine pss examined pss issued pss to examine pss examined pss to examine these variables are not updated for the final time as pss state and pss function are changed to indicate completion describe how to reproduce the problem initiate a zpool scrub wait for it to finish check pss issued and pss examined to see they are less than pss to examine include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with n a | 1 |
4,080 | 6,902,526,304 | IssuesEvent | 2017-11-25 21:46:59 | progwml6/Natura | https://api.github.com/repos/progwml6/Natura | closed | [1.12-4.3.0.16] Barley Seeds Placed Automatically Turn Into Cotton Seeds | 1.12 bug mod compatibility | This happens with the following:
MC 1.12
Forge: forge-14.21.1.2443
Actually Additions 1.12.1-r119
Natura 1.12-4.3.0.16
[](https://gyazo.com/43ac7daddfe7119fffe79d2ac7078783)
As seen above I place barley seeds in a placer from actually additions, and this is what comes out...
[](https://gyazo.com/055effc2049abcfab13b83b1f25525e8)
Cotton seeds, this happens with Industrial foregoing, and any other machine seed placing block. It just won't plant barley. | True | [1.12-4.3.0.16] Barley Seeds Placed Automatically Turn Into Cotton Seeds - This happens with the following:
MC 1.12
Forge: forge-14.21.1.2443
Actually Additions 1.12.1-r119
Natura 1.12-4.3.0.16
[](https://gyazo.com/43ac7daddfe7119fffe79d2ac7078783)
As seen above I place barley seeds in a placer from actually additions, and this is what comes out...
[](https://gyazo.com/055effc2049abcfab13b83b1f25525e8)
Cotton seeds, this happens with Industrial foregoing, and any other machine seed placing block. It just won't plant barley. | non_defect | barley seeds placed automatically turn into cotton seeds this happens with the following mc forge forge actually additions natura as seen above i place barley seeds in a placer from actually additions and this is what comes out cotton seeds this happens with industrial foregoing and any other machine seed placing block it just won t plant barley | 0 |
60,856 | 17,023,540,979 | IssuesEvent | 2021-07-03 02:33:10 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Segfault on upload button from editing name | Component: merkaartor Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 10.13am, Thursday, 21st January 2010]**
I've created a lot of new features, and I want to use the name from one of them for my changeset comment. So I pick one, and select the text in the Name field of the Properties dock. Having done that, I then use the 'Upload' toolbar button and get this crash:
```
#0 0x00007ffff4288f55 in *__GI_raise (sig=<value optimized out>)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007ffff428bd90 in *__GI_abort () at abort.c:88
#2 0x00000000006216e7 in myMessageOutput (msgType=QtFatalMsg,
buf=0x2587178 "ASSERT failure in QList<T>::operator[]: \"index out of range\", file /usr/include/qt4/QtCore/qlist.h, line 403") at Main.cpp:64
#3 0x00007ffff4fd36d3 in qt_message_output(QtMsgType, char const*) () from /usr/lib/libQtCore.so.4
#4 0x00007ffff4fd387b in qFatal(char const*, ...) () from /usr/lib/libQtCore.so.4
#5 0x000000000043c074 in QList<MapFeature*>::operator[] (this=0xe380c8, i=0)
at /usr/include/qt4/QtCore/qlist.h:403
#6 0x0000000000435c6d in PropertiesDock::on_tag_changed (this=0xe38080, k=..., v=...)
at Docks/PropertiesDock.cpp:639
#7 0x000000000072d346 in PropertiesDock::qt_metacall (this=0xe38080,
_c=QMetaObject::InvokeMetaMethod, _id=13, _a=0x7fffffff9ed0) at moc_PropertiesDock.cpp:109
#8 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#9 0x00000000007356f9 in TagTemplates::tagChanged (this=0x10e53d0, _t1=..., _t2=...)
at moc_TagTemplate.cpp:497
#10 0x000000000070c2e6 in TagTemplates::on_tag_changed (this=0x10e53d0, k=..., v=...)
at TagTemplate/TagTemplate.cpp:1090
#11 0x0000000000735b0e in TagTemplates::qt_metacall (this=0x10e53d0,
_c=QMetaObject::InvokeMetaMethod, _id=5, _a=0x7fffffffa110) at moc_TagTemplate.cpp:478
#12 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#13 0x0000000000735791 in TagTemplate::tagChanged (this=0x1126990, _t1=..., _t2=...)
at moc_TagTemplate.cpp:413
#14 0x000000000070d50c in TagTemplate::on_tag_changed (this=0x1126990, k=..., v=...)
at TagTemplate/TagTemplate.cpp:839
#15 0x00000000007362f0 in TagTemplate::qt_metacall (this=0x1126990,
_c=QMetaObject::InvokeMetaMethod, _id=3, _a=0x7fffffffa350) at moc_TagTemplate.cpp:394
#16 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#17 0x0000000000735829 in TagTemplateWidget::tagChanged (this=0x10fb600, _t1=..., _t2=...)
at moc_TagTemplate.cpp:88
#18 0x000000000070f3fb in TagTemplateWidgetEdit::on_editingFinished (this=0x10fb600)
at TagTemplate/TagTemplate.cpp:655
#19 0x00000000007363e8 in TagTemplateWidgetEdit::qt_metacall (this=0x10fb600,
_c=QMetaObject::InvokeMetaMethod, _id=0, _a=0x7fffffffa4c0) at moc_TagTemplate.cpp:330
#20 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#21 0x00007ffff5c7b50b in QLineEdit::focusOutEvent(QFocusEvent*) () from /usr/lib/libQtGui.so.4
#22 0x00007ffff59000b9 in QWidget::event(QEvent*) () from /usr/lib/libQtGui.so.4
#23 0x00007ffff5c79d44 in QLineEdit::event(QEvent*) () from /usr/lib/libQtGui.so.4
#24 0x00007ffff58b001d in QApplicationPrivate::notify_helper(QObject*, QEvent*) ()
from /usr/lib/libQtGui.so.4
#25 0x00007ffff58b807a in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#26 0x00007ffff50c0c9c in QCoreApplication::notifyInternal(QObject*, QEvent*) ()
from /usr/lib/libQtCore.so.4
#27 0x00007ffff58b66ef in QApplicationPrivate::setFocusWidget(QWidget*, Qt::FocusReason) ()
from /usr/lib/libQtGui.so.4
#28 0x00007ffff58fa525 in QWidget::setFocus(Qt::FocusReason) () from /usr/lib/libQtGui.so.4
#29 0x00007ffff58fa807 in QWidget::focusNextPrevChild(bool) () from /usr/lib/libQtGui.so.4
#30 0x00007ffff58fe97d in QWidgetPrivate::hide_helper() () from /usr/lib/libQtGui.so.4
#31 0x00007ffff5905860 in QWidget::setVisible(bool) () from /usr/lib/libQtGui.so.4
#32 0x00007ffff5c51e3d in QDockWidgetLayout::setWidgetForRole(QDockWidgetLayout::Role, QWidget*) ()
from /usr/lib/libQtGui.so.4
#33 0x000000000043c849 in MDockAncestor::setWidget (this=0xe38080, widget=0x1076bc0)
at Docks/MDockAncestor.h:13
---Type <return> to continue, or q <return> to quit---
#34 0x0000000000437186 in PropertiesDock::switchToNoUi (this=0xe38080)
at Docks/PropertiesDock.cpp:470
#35 0x0000000000437b06 in PropertiesDock::switchUi (this=0xe38080) at Docks/PropertiesDock.cpp:378
#36 0x000000000043882a in PropertiesDock::setSelection (this=0xe38080, aFeature=0x0)
at Docks/PropertiesDock.cpp:225
#37 0x00000000006329eb in MainWindow::on_editPropertiesAction_triggered (this=0x7fffffffd8c0)
at MainWindow.cpp:698
#38 0x0000000000632519 in MainWindow::on_fileUploadAction_triggered (this=0x7fffffffd8c0)
at MainWindow.cpp:1082
#39 0x000000000072f277 in MainWindow::qt_metacall (this=0x7fffffffd8c0,
_c=QMetaObject::InvokeMetaMethod, _id=26, _a=0x7fffffffb160) at moc_MainWindow.cpp:333
#40 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#41 0x00007ffff58aa147 in QAction::triggered(bool) () from /usr/lib/libQtGui.so.4
#42 0x00007ffff58ab5c0 in QAction::activate(QAction::ActionEvent) () from /usr/lib/libQtGui.so.4
#43 0x00007ffff5c22eda in ?? () from /usr/lib/libQtGui.so.4
#44 0x00007ffff5c23175 in QAbstractButton::mouseReleaseEvent(QMouseEvent*) ()
from /usr/lib/libQtGui.so.4
#45 0x00007ffff5cf1c2a in QToolButton::mouseReleaseEvent(QMouseEvent*) () from /usr/lib/libQtGui.so.4
#46 0x00007ffff590037f in QWidget::event(QEvent*) () from /usr/lib/libQtGui.so.4
#47 0x00007ffff58b001d in QApplicationPrivate::notify_helper(QObject*, QEvent*) ()
from /usr/lib/libQtGui.so.4
#48 0x00007ffff58b87ca in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#49 0x00007ffff50c0c9c in QCoreApplication::notifyInternal(QObject*, QEvent*) ()
from /usr/lib/libQtCore.so.4
#50 0x00007ffff58b7a78 in QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&) () from /usr/lib/libQtGui.so.4
#51 0x00007ffff5920659 in ?? () from /usr/lib/libQtGui.so.4
#52 0x00007ffff591f40f in QApplication::x11ProcessEvent(_XEvent*) () from /usr/lib/libQtGui.so.4
#53 0x00007ffff594776c in ?? () from /usr/lib/libQtGui.so.4
#54 0x00007ffff298213a in g_main_context_dispatch () from /lib/libglib-2.0.so.0
#55 0x00007ffff2985998 in ?? () from /lib/libglib-2.0.so.0
#56 0x00007ffff2985b4c in g_main_context_iteration () from /lib/libglib-2.0.so.0
#57 0x00007ffff50e939c in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>)
() from /usr/lib/libQtCore.so.4
#58 0x00007ffff5946f1f in ?? () from /usr/lib/libQtGui.so.4
#59 0x00007ffff50bf562 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
from /usr/lib/libQtCore.so.4
#60 0x00007ffff50bf934 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) ()
from /usr/lib/libQtCore.so.4
#61 0x00007ffff50c1ba4 in QCoreApplication::exec() () from /usr/lib/libQtCore.so.4
#62 0x0000000000620ee5 in main (argc=1, argv=0x7fffffffe548) at Main.cpp:208
```
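The assertion in frame #5 (`QList<MapFeature*>::operator[]` with `i=0`, "index out of range") means `on_tag_changed` indexed an empty selection list: hiding the properties UI cleared the selection, and the line edit's `editingFinished`-style signal still fired afterwards. The failure mode and the usual guard can be sketched with a Python stand-in (not Merkaartor's C++):

```python
class PropertiesDock:
    """Minimal stand-in: a selection list plus a tag-edit callback."""
    def __init__(self):
        self.selection = []  # selected map features

    def switch_to_no_ui(self):
        # Hiding the UI clears the selection, but a pending
        # editingFinished-style callback may still arrive afterwards.
        self.selection.clear()

    def on_tag_changed(self, key, value):
        if not self.selection:  # the missing guard: bail out when nothing is selected
            return False
        self.selection[0][key] = value  # safe: the list is non-empty here
        return True

dock = PropertiesDock()
dock.selection = [{"name": "old"}]
dock.switch_to_no_ui()  # selection is now empty, as after switchToNoUi in the backtrace
assert dock.on_tag_changed("name", "new") is False  # guarded instead of crashing
```

The equivalent C++ fix is an emptiness check before the `operator[]` call, since Qt's debug builds abort on out-of-range access rather than returning a sentinel.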
| 1.0 | Segfault on upload button from editing name - **[Submitted to the original trac issue database at 10.13am, Thursday, 21st January 2010]**
I've created a lot of new features, and I want to use the name from one of them for my changeset comment. So I pick one, and select the text in the Name field of the Properties dock. Having done that, I then use the 'Upload' toolbar button and get this crash:
```
#0 0x00007ffff4288f55 in *__GI_raise (sig=<value optimized out>)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007ffff428bd90 in *__GI_abort () at abort.c:88
#2 0x00000000006216e7 in myMessageOutput (msgType=QtFatalMsg,
buf=0x2587178 "ASSERT failure in QList<T>::operator[]: \"index out of range\", file /usr/include/qt4/QtCore/qlist.h, line 403") at Main.cpp:64
#3 0x00007ffff4fd36d3 in qt_message_output(QtMsgType, char const*) () from /usr/lib/libQtCore.so.4
#4 0x00007ffff4fd387b in qFatal(char const*, ...) () from /usr/lib/libQtCore.so.4
#5 0x000000000043c074 in QList<MapFeature*>::operator[] (this=0xe380c8, i=0)
at /usr/include/qt4/QtCore/qlist.h:403
#6 0x0000000000435c6d in PropertiesDock::on_tag_changed (this=0xe38080, k=..., v=...)
at Docks/PropertiesDock.cpp:639
#7 0x000000000072d346 in PropertiesDock::qt_metacall (this=0xe38080,
_c=QMetaObject::InvokeMetaMethod, _id=13, _a=0x7fffffff9ed0) at moc_PropertiesDock.cpp:109
#8 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#9 0x00000000007356f9 in TagTemplates::tagChanged (this=0x10e53d0, _t1=..., _t2=...)
at moc_TagTemplate.cpp:497
#10 0x000000000070c2e6 in TagTemplates::on_tag_changed (this=0x10e53d0, k=..., v=...)
at TagTemplate/TagTemplate.cpp:1090
#11 0x0000000000735b0e in TagTemplates::qt_metacall (this=0x10e53d0,
_c=QMetaObject::InvokeMetaMethod, _id=5, _a=0x7fffffffa110) at moc_TagTemplate.cpp:478
#12 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#13 0x0000000000735791 in TagTemplate::tagChanged (this=0x1126990, _t1=..., _t2=...)
at moc_TagTemplate.cpp:413
#14 0x000000000070d50c in TagTemplate::on_tag_changed (this=0x1126990, k=..., v=...)
at TagTemplate/TagTemplate.cpp:839
#15 0x00000000007362f0 in TagTemplate::qt_metacall (this=0x1126990,
_c=QMetaObject::InvokeMetaMethod, _id=3, _a=0x7fffffffa350) at moc_TagTemplate.cpp:394
#16 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#17 0x0000000000735829 in TagTemplateWidget::tagChanged (this=0x10fb600, _t1=..., _t2=...)
at moc_TagTemplate.cpp:88
#18 0x000000000070f3fb in TagTemplateWidgetEdit::on_editingFinished (this=0x10fb600)
at TagTemplate/TagTemplate.cpp:655
#19 0x00000000007363e8 in TagTemplateWidgetEdit::qt_metacall (this=0x10fb600,
_c=QMetaObject::InvokeMetaMethod, _id=0, _a=0x7fffffffa4c0) at moc_TagTemplate.cpp:330
#20 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#21 0x00007ffff5c7b50b in QLineEdit::focusOutEvent(QFocusEvent*) () from /usr/lib/libQtGui.so.4
#22 0x00007ffff59000b9 in QWidget::event(QEvent*) () from /usr/lib/libQtGui.so.4
#23 0x00007ffff5c79d44 in QLineEdit::event(QEvent*) () from /usr/lib/libQtGui.so.4
#24 0x00007ffff58b001d in QApplicationPrivate::notify_helper(QObject*, QEvent*) ()
from /usr/lib/libQtGui.so.4
#25 0x00007ffff58b807a in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#26 0x00007ffff50c0c9c in QCoreApplication::notifyInternal(QObject*, QEvent*) ()
from /usr/lib/libQtCore.so.4
#27 0x00007ffff58b66ef in QApplicationPrivate::setFocusWidget(QWidget*, Qt::FocusReason) ()
from /usr/lib/libQtGui.so.4
#28 0x00007ffff58fa525 in QWidget::setFocus(Qt::FocusReason) () from /usr/lib/libQtGui.so.4
#29 0x00007ffff58fa807 in QWidget::focusNextPrevChild(bool) () from /usr/lib/libQtGui.so.4
#30 0x00007ffff58fe97d in QWidgetPrivate::hide_helper() () from /usr/lib/libQtGui.so.4
#31 0x00007ffff5905860 in QWidget::setVisible(bool) () from /usr/lib/libQtGui.so.4
#32 0x00007ffff5c51e3d in QDockWidgetLayout::setWidgetForRole(QDockWidgetLayout::Role, QWidget*) ()
from /usr/lib/libQtGui.so.4
#33 0x000000000043c849 in MDockAncestor::setWidget (this=0xe38080, widget=0x1076bc0)
at Docks/MDockAncestor.h:13
---Type <return> to continue, or q <return> to quit---
#34 0x0000000000437186 in PropertiesDock::switchToNoUi (this=0xe38080)
at Docks/PropertiesDock.cpp:470
#35 0x0000000000437b06 in PropertiesDock::switchUi (this=0xe38080) at Docks/PropertiesDock.cpp:378
#36 0x000000000043882a in PropertiesDock::setSelection (this=0xe38080, aFeature=0x0)
at Docks/PropertiesDock.cpp:225
#37 0x00000000006329eb in MainWindow::on_editPropertiesAction_triggered (this=0x7fffffffd8c0)
at MainWindow.cpp:698
#38 0x0000000000632519 in MainWindow::on_fileUploadAction_triggered (this=0x7fffffffd8c0)
at MainWindow.cpp:1082
#39 0x000000000072f277 in MainWindow::qt_metacall (this=0x7fffffffd8c0,
_c=QMetaObject::InvokeMetaMethod, _id=26, _a=0x7fffffffb160) at moc_MainWindow.cpp:333
#40 0x00007ffff50d5df2 in QMetaObject::activate(QObject*, int, int, void**) ()
from /usr/lib/libQtCore.so.4
#41 0x00007ffff58aa147 in QAction::triggered(bool) () from /usr/lib/libQtGui.so.4
#42 0x00007ffff58ab5c0 in QAction::activate(QAction::ActionEvent) () from /usr/lib/libQtGui.so.4
#43 0x00007ffff5c22eda in ?? () from /usr/lib/libQtGui.so.4
#44 0x00007ffff5c23175 in QAbstractButton::mouseReleaseEvent(QMouseEvent*) ()
from /usr/lib/libQtGui.so.4
#45 0x00007ffff5cf1c2a in QToolButton::mouseReleaseEvent(QMouseEvent*) () from /usr/lib/libQtGui.so.4
#46 0x00007ffff590037f in QWidget::event(QEvent*) () from /usr/lib/libQtGui.so.4
#47 0x00007ffff58b001d in QApplicationPrivate::notify_helper(QObject*, QEvent*) ()
from /usr/lib/libQtGui.so.4
#48 0x00007ffff58b87ca in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#49 0x00007ffff50c0c9c in QCoreApplication::notifyInternal(QObject*, QEvent*) ()
from /usr/lib/libQtCore.so.4
#50 0x00007ffff58b7a78 in QApplicationPrivate::sendMouseEvent(QWidget*, QMouseEvent*, QWidget*, QWidget*, QWidget**, QPointer<QWidget>&) () from /usr/lib/libQtGui.so.4
#51 0x00007ffff5920659 in ?? () from /usr/lib/libQtGui.so.4
#52 0x00007ffff591f40f in QApplication::x11ProcessEvent(_XEvent*) () from /usr/lib/libQtGui.so.4
#53 0x00007ffff594776c in ?? () from /usr/lib/libQtGui.so.4
#54 0x00007ffff298213a in g_main_context_dispatch () from /lib/libglib-2.0.so.0
#55 0x00007ffff2985998 in ?? () from /lib/libglib-2.0.so.0
#56 0x00007ffff2985b4c in g_main_context_iteration () from /lib/libglib-2.0.so.0
#57 0x00007ffff50e939c in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>)
() from /usr/lib/libQtCore.so.4
#58 0x00007ffff5946f1f in ?? () from /usr/lib/libQtGui.so.4
#59 0x00007ffff50bf562 in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) ()
from /usr/lib/libQtCore.so.4
#60 0x00007ffff50bf934 in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) ()
from /usr/lib/libQtCore.so.4
#61 0x00007ffff50c1ba4 in QCoreApplication::exec() () from /usr/lib/libQtCore.so.4
#62 0x0000000000620ee5 in main (argc=1, argv=0x7fffffffe548) at Main.cpp:208
```
| defect | segfault on upload button from editing name i ve created a lot of new features and i want to use the name from one of them for my changeset comment so i pick one and select the text in the name field of the properties dock having done that i then use the upload toolbar button and get this crash in gi raise sig at nptl sysdeps unix sysv linux raise c in gi abort at abort c in mymessageoutput msgtype qtfatalmsg buf assert failure in qlist operator index out of range file usr include qtcore qlist h line at main cpp in qt message output qtmsgtype char const from usr lib libqtcore so in qfatal char const from usr lib libqtcore so in qlist operator this i at usr include qtcore qlist h in propertiesdock on tag changed this k v at docks propertiesdock cpp in propertiesdock qt metacall this c qmetaobject invokemetamethod id a at moc propertiesdock cpp in qmetaobject activate qobject int int void from usr lib libqtcore so in tagtemplates tagchanged this at moc tagtemplate cpp in tagtemplates on tag changed this k v at tagtemplate tagtemplate cpp in tagtemplates qt metacall this c qmetaobject invokemetamethod id a at moc tagtemplate cpp in qmetaobject activate qobject int int void from usr lib libqtcore so in tagtemplate tagchanged this at moc tagtemplate cpp in tagtemplate on tag changed this k v at tagtemplate tagtemplate cpp in tagtemplate qt metacall this c qmetaobject invokemetamethod id a at moc tagtemplate cpp in qmetaobject activate qobject int int void from usr lib libqtcore so in tagtemplatewidget tagchanged this at moc tagtemplate cpp in tagtemplatewidgetedit on editingfinished this at tagtemplate tagtemplate cpp in tagtemplatewidgetedit qt metacall this c qmetaobject invokemetamethod id a at moc tagtemplate cpp in qmetaobject activate qobject int int void from usr lib libqtcore so in qlineedit focusoutevent qfocusevent from usr lib libqtgui so in qwidget event qevent from usr lib libqtgui so in qlineedit event qevent from usr lib libqtgui so in 
qapplicationprivate notify helper qobject qevent from usr lib libqtgui so in qapplication notify qobject qevent from usr lib libqtgui so in qcoreapplication notifyinternal qobject qevent from usr lib libqtcore so in qapplicationprivate setfocuswidget qwidget qt focusreason from usr lib libqtgui so in qwidget setfocus qt focusreason from usr lib libqtgui so in qwidget focusnextprevchild bool from usr lib libqtgui so in qwidgetprivate hide helper from usr lib libqtgui so in qwidget setvisible bool from usr lib libqtgui so in qdockwidgetlayout setwidgetforrole qdockwidgetlayout role qwidget from usr lib libqtgui so in mdockancestor setwidget this widget at docks mdockancestor h type to continue or q to quit in propertiesdock switchtonoui this at docks propertiesdock cpp in propertiesdock switchui this at docks propertiesdock cpp in propertiesdock setselection this afeature at docks propertiesdock cpp in mainwindow on editpropertiesaction triggered this at mainwindow cpp in mainwindow on fileuploadaction triggered this at mainwindow cpp in mainwindow qt metacall this c qmetaobject invokemetamethod id a at moc mainwindow cpp in qmetaobject activate qobject int int void from usr lib libqtcore so in qaction triggered bool from usr lib libqtgui so in qaction activate qaction actionevent from usr lib libqtgui so in from usr lib libqtgui so in qabstractbutton mousereleaseevent qmouseevent from usr lib libqtgui so in qtoolbutton mousereleaseevent qmouseevent from usr lib libqtgui so in qwidget event qevent from usr lib libqtgui so in qapplicationprivate notify helper qobject qevent from usr lib libqtgui so in qapplication notify qobject qevent from usr lib libqtgui so in qcoreapplication notifyinternal qobject qevent from usr lib libqtcore so in qapplicationprivate sendmouseevent qwidget qmouseevent qwidget qwidget qwidget qpointer from usr lib libqtgui so in from usr lib libqtgui so in qapplication xevent from usr lib libqtgui so in from usr lib libqtgui so in g main context 
dispatch from lib libglib so in from lib libglib so in g main context iteration from lib libglib so in qeventdispatcherglib processevents qflags from usr lib libqtcore so in from usr lib libqtgui so in qeventloop processevents qflags from usr lib libqtcore so in qeventloop exec qflags from usr lib libqtcore so in qcoreapplication exec from usr lib libqtcore so in main argc argv at main cpp | 1 |
759,917 | 26,618,496,385 | IssuesEvent | 2023-01-24 09:24:28 | ballerina-platform/ballerina-standard-library | https://api.github.com/repos/ballerina-platform/ballerina-standard-library | closed | GraalVM Check is failing intermittently in UDP module | Points/2 Priority/High Type/Task module/udp graalvm | **Description:**
> $Subject
> Part of https://github.com/ballerina-platform/ballerina-standard-library/issues/3755
See the workflow run [here](https://github.com/ballerina-platform/module-ballerina-udp/actions/runs/3951869726)
| 1.0 | GraalVM Check is failing intermittently in UDP module - **Description:**
> $Subject
> Part of https://github.com/ballerina-platform/ballerina-standard-library/issues/3755
See the workflow run [here](https://github.com/ballerina-platform/module-ballerina-udp/actions/runs/3951869726)
| non_defect | graalvm check is failing intermittently in udp module description subject part of see the workflow run | 0 |
186,402 | 21,931,194,979 | IssuesEvent | 2022-05-23 09:52:57 | elikkatzgit/enableLicenseViolations | https://api.github.com/repos/elikkatzgit/enableLicenseViolations | closed | CVE-2020-9493 (High) detected in log4j-1.2.17.jar | security vulnerability | ## CVE-2020-9493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.17.jar</b></summary>
<p>Apache Log4j 1.2</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /g4j/log4j/1.2.17/log4j-1.2.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.17.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/enableLicenseViolations/commit/8ad51ecf1de66bbfe7ab228d1e26964af16f68d1">8ad51ecf1de66bbfe7ab228d1e26964af16f68d1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p>
<p>Release Date: 2021-06-16</p>
<p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.17","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"log4j:log4j:1.2.17","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.reload4j:reload4j:1.2.18.1","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-9493","vulnerabilityDetails":"A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-9493 (High) detected in log4j-1.2.17.jar - ## CVE-2020-9493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.17.jar</b></summary>
<p>Apache Log4j 1.2</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /g4j/log4j/1.2.17/log4j-1.2.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.17.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/enableLicenseViolations/commit/8ad51ecf1de66bbfe7ab228d1e26964af16f68d1">8ad51ecf1de66bbfe7ab228d1e26964af16f68d1</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.
<p>Publish Date: 2021-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493>CVE-2020-9493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openwall.com/lists/oss-security/2021/06/16/1">https://www.openwall.com/lists/oss-security/2021/06/16/1</a></p>
<p>Release Date: 2021-06-16</p>
<p>Fix Resolution: ch.qos.reload4j:reload4j:1.2.18.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"log4j","packageName":"log4j","packageVersion":"1.2.17","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"log4j:log4j:1.2.17","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ch.qos.reload4j:reload4j:1.2.18.1","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-9493","vulnerabilityDetails":"A deserialization flaw was found in Apache Chainsaw versions prior to 2.1.0 which could lead to malicious code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9493","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in jar cve high severity vulnerability vulnerable library jar apache path to dependency file pom xml path to vulnerable library jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree isminimumfixversionavailable true minimumfixversion ch qos isbinary false basebranches 
vulnerabilityidentifier cve vulnerabilitydetails a deserialization flaw was found in apache chainsaw versions prior to which could lead to malicious code execution vulnerabilityurl | 0 |
117,055 | 9,907,996,317 | IssuesEvent | 2019-06-27 17:08:59 | GTNewHorizons/NewHorizons | https://api.github.com/repos/GTNewHorizons/NewHorizons | reopened | Witching Gadgets Cloak research order | FixedInDev need to be tested | #### Which modpack version are you using?
2.0.7.5
#
#### If in multiplayer; On which server does this happen?
Delta
#
#### What did you try to do, and what did you expect to happen?
I found cloaks in the thaumonomicon, but one of the ingredients, `cloth of spacious folds`, was not researched, and the item and all sub-research that use the cloak as an ingredient could therefore not be crafted
#
#### What happened instead? (Attach screenshots if needed)


#
#### What do you suggest instead/what changes do you propose?
Make the cloak research dependent on whatever provides the cloth of spacious folds. Alternatively the recipe can be altered to use e.g. an extra bewitched fleece instead of the cloth, or the cloth recipe can be included in the research itself.
| 1.0 | Witching Gadgets Cloak research order - #### Which modpack version are you using?
2.0.7.5
#
#### If in multiplayer; On which server does this happen?
Delta
#
#### What did you try to do, and what did you expect to happen?
I found cloaks in the thaumonomicon, but one of the ingredients, `cloth of spacious folds`, was not researched, and the item and all sub-research that use the cloak as an ingredient could therefore not be crafted
#
#### What happened instead? (Attach screenshots if needed)


#
#### What do you suggest instead/what changes do you propose?
Make the cloak research dependent on whatever provides the cloth of spacious folds. Alternatively the recipe can be altered to use e.g. an extra bewitched fleece instead of the cloth, or the cloth recipe can be included in the research itself.
| non_defect | witching gadgets cloak research order which modpack version are you using if in multiplayer on which server does this happen delta what did you try to do and what did you expect to happen i found cloaks in the thaumonomicon but one of the ingredients cloth of spacious folds was not researched and the item and all sub research which use the cloak as an ingredient could therefore not be crafted what happened instead attach screenshots if needed what do you suggest instead what changes do you propose make the cloak research dependent on whatever provides the cloth of spacious folds alternatively the recipe can be altered to use e g an extra bewitched fleece instead of the cloth or the cloth recipe can be included in the research itself | 0 |
74,623 | 14,273,774,165 | IssuesEvent | 2020-11-21 23:32:05 | nhcarrigan/Becca-Lyria-documentation | https://api.github.com/repos/nhcarrigan/Becca-Lyria-documentation | closed | [UPDATE] - 7.2.3 | ✨ goal: improvement 💻 aspect: code 🔒 staff only 🚧 status: blocked 🟨 priority: medium | ## Description
<!--What information needs to be updated on the page?-->
Document the change that the `role` command now takes a role string name instead of a mention.
Document the addition of the `listall` parameter for the `role` command.
## Related Pull Request
<!--Please include a full link to the related Pull Request on the bot repository-->
https://github.com/nhcarrigan/Becca-Lyria/pull/352
https://github.com/nhcarrigan/Becca-Lyria/pull/353 | 1.0 | [UPDATE] - 7.2.3 - ## Description
<!--What information needs to be updated on the page?-->
Document the change that the `role` command now takes a role string name instead of a mention.
Document the addition of the `listall` parameter for the `role` command.
## Related Pull Request
<!--Please include a full link to the related Pull Request on the bot repository-->
https://github.com/nhcarrigan/Becca-Lyria/pull/352
https://github.com/nhcarrigan/Becca-Lyria/pull/353 | non_defect | description document the change that the role command now takes a role string name instead of a mention document the addition of the listall parameter for the role command related pull request | 0 |
443,746 | 30,927,924,153 | IssuesEvent | 2023-08-06 18:13:31 | extratone/gti | https://api.github.com/repos/extratone/gti | opened | Auto Drafts Template | documentation | # Auto Template <|>
Updated `[[date|%m%d%Y-%H%M%S]]`
- [GitHub Issue](https://github.com/extratone/gti/issues/7)
- [WTF](https://davidblue.wtf/drafts/[[uuid]].html)
- [Local](shareddocuments:///private/var/mobile/Library/Mobile%20Documents/com~apple~CloudDocs/Written/[[uuid]].md)
- [Working Copy](working-copy://open?repo=gti&mode=content)
- [Draft](drafts://open?uuid=[[uuid]])
---
[[clipboard]] | 1.0 | Auto Drafts Template - # Auto Template <|>
Updated `[[date|%m%d%Y-%H%M%S]]`
- [GitHub Issue](https://github.com/extratone/gti/issues/7)
- [WTF](https://davidblue.wtf/drafts/[[uuid]].html)
- [Local](shareddocuments:///private/var/mobile/Library/Mobile%20Documents/com~apple~CloudDocs/Written/[[uuid]].md)
- [Working Copy](working-copy://open?repo=gti&mode=content)
- [Draft](drafts://open?uuid=[[uuid]])
---
[[clipboard]] | non_defect | auto drafts template auto template updated html shareddocuments private var mobile library mobile com apple clouddocs written md working copy open repo gti mode content drafts open uuid | 0 |
766,250 | 26,875,131,662 | IssuesEvent | 2023-02-04 23:40:37 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Dunedain MAA | oddity priority: low scripting | I noticed there is no tradition for the dunedain to receive their MAA.
So there is no way for them to use any MAA initially.
However, a suggestion for the future would be to move Elegost north slightly and perhaps have an eastern Tharbad holding. Obviously it would be a ruin, but I feel like that would be a fitting beginning for him, with the opportunity to rebuild the bridge and so on, rather than a holding with no history/flavour.
| 1.0 | Dunedain MAA - I noticed there is no tradition for the dunedain to receive their MAA.
So there is no way for them to use any MAA initially.
However, a suggestion for the future would be to move Elegost north slightly and perhaps have an eastern Tharbad holding. Obviously it would be a ruin, but I feel like that would be a fitting beginning for him, with the opportunity to rebuild the bridge and so on, rather than a holding with no history/flavour.
| non_defect | dunedain maa i noticed there is no tradition for the dunedain to receive their maa so there is no way for them to use any maa initially however a suggestion for the future would be to move elegost north slightly and perhaps have a eastern tharbad holding obviously it would be a ruin but i feel like that would be a fitting beginning for him with the opportunity to rebuild the bridge and so on rather than a holding with no history flavour | 0 |
133,092 | 18,816,717,662 | IssuesEvent | 2021-11-10 00:40:33 | DrWaleedAYousef/Teaching | https://api.github.com/repos/DrWaleedAYousef/Teaching | closed | Digital design and computer architecture study group | Digital Design | if anyone interested to join a study group for studying digital design and computer architecture, please email me : sirsaif99@gmail.com | 1.0 | Digital design and computer architecture study group - if anyone interested to join a study group for studying digital design and computer architecture, please email me : sirsaif99@gmail.com | non_defect | digital design and computer architecture study group if anyone interested to join a study group for studying digital design and computer architecture please email me gmail com | 0 |
50,293 | 13,187,427,626 | IssuesEvent | 2020-08-13 03:22:51 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | icerec/trunk dox on software.icecube.wisc.edu are "403 Forbidden" (Trac #474) | Migrated from Trac combo reconstruction defect | On http://software.icecube.wisc.edu/ there is a link to IceRec "nightly builds": http://software.icecube.wisc.edu/icerec_trunk/ which results in "403 - Forbidden".
I am assigning this now to dladieu, but I could equally well imagine that this is actually the job of the icerec metaproject coordinator, which would be Meike de With these days.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/474
, reported by boersma and owned by nega_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-10-03T18:27:09",
"description": "On http://software.icecube.wisc.edu/ there is a link to IceRec \"nightly builds\": http://software.icecube.wisc.edu/icerec_trunk/ which results in \"403 - Forbidden\".\n\nI am assigning this now to dladieu, but I could equally well imagine that this is actually the job of the icerec metaproject coordinator, which would be Meike de With these days.",
"reporter": "boersma",
"cc": "meike.dewith",
"resolution": "fixed",
"_ts": "1412360829211490",
"component": "combo reconstruction",
"summary": "icerec/trunk dox on software.icecube.wisc.edu are \"403 Forbidden\"",
"priority": "minor",
"keywords": "icerec documentation doxygen",
"time": "2013-11-26T10:42:01",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | icerec/trunk dox on software.icecube.wisc.edu are "403 Forbidden" (Trac #474) - On http://software.icecube.wisc.edu/ there is a link to IceRec "nightly builds": http://software.icecube.wisc.edu/icerec_trunk/ which results in "403 - Forbidden".
I am assigning this now to dladieu, but I could equally well imagine that this is actually the job of the icerec metaproject coordinator, which would be Meike de With these days.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/474
, reported by boersma and owned by nega_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-10-03T18:27:09",
"description": "On http://software.icecube.wisc.edu/ there is a link to IceRec \"nightly builds\": http://software.icecube.wisc.edu/icerec_trunk/ which results in \"403 - Forbidden\".\n\nI am assigning this now to dladieu, but I could equally well imagine that this is actually the job of the icerec metaproject coordinator, which would be Meike de With these days.",
"reporter": "boersma",
"cc": "meike.dewith",
"resolution": "fixed",
"_ts": "1412360829211490",
"component": "combo reconstruction",
"summary": "icerec/trunk dox on software.icecube.wisc.edu are \"403 Forbidden\"",
"priority": "minor",
"keywords": "icerec documentation doxygen",
"time": "2013-11-26T10:42:01",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | icerec trunk dox on software icecube wisc edu are forbidden trac on there is a link to icerec nightly builds which results in forbidden i am assigning this now to dladieu but i could equally well imagine that this is actually the job of the icerec metaproject coordinator which would be meike de with these days migrated from reported by boersma and owned by nega json status closed changetime description on there is a link to icerec nightly builds which results in forbidden n ni am assigning this now to dladieu but i could equally well imagine that this is actually the job of the icerec metaproject coordinator which would be meike de with these days reporter boersma cc meike dewith resolution fixed ts component combo reconstruction summary icerec trunk dox on software icecube wisc edu are forbidden priority minor keywords icerec documentation doxygen time milestone owner nega type defect | 1 |
7,462 | 2,610,387,203 | IssuesEvent | 2015-02-26 20:05:19 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | opened | Swear filter | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Chat
What is the expected output? What do you see instead?
Did you thought about implementing some swear filter in chat?
It could be something like this:
chat: swear you
convert to
chat: *** you
```
-----
Original issue reported on code.google.com by `blackmet...@o2.pl` on 7 Dec 2011 at 6:42 | 1.0 | Swear filter - ```
What steps will reproduce the problem?
1. Chat
What is the expected output? What do you see instead?
Did you think about implementing some swear filter in chat?
It could be something like this:
chat: swear you
convert to
chat: *** you
```
-----
Original issue reported on code.google.com by `blackmet...@o2.pl` on 7 Dec 2011 at 6:42 | defect | swear filter what steps will reproduce the problem chat what is the expected output what do you see instead did you thought about implementing some swear filter in chat it could be something like this chat swear you convert to chat you original issue reported on code google com by blackmet pl on dec at | 1 |
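The masking behavior this report asks for ("swear you" becoming a run of asterisks) can be sketched with a minimal, hypothetical filter. The function name `mask_swears` and the word list below are illustrative assumptions, not part of the Hedgewars code:

```python
import re

def mask_swears(message, swears):
    """Replace each listed word with same-length asterisks, case-insensitively."""
    for word in swears:
        # \b word boundaries stop the filter from masking words that merely
        # contain a listed word (e.g. "swearing" is left alone below)
        pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
        message = pattern.sub("*" * len(word), message)
    return message

print(mask_swears("swear you", ["swear"]))  # -> ***** you
print(mask_swears("swearing", ["swear"]))   # -> swearing
```

A real chat filter would load the word list from configuration and handle punctuation or leetspeak evasion, which this sketch ignores.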
5,685 | 2,610,193,446 | IssuesEvent | 2015-02-26 19:01:07 | chrsmith/quchuseban | https://api.github.com/repos/chrsmith/quchuseban | opened | 解析色斑去除小偏方 | auto-migrated Priority-Medium Type-Defect | ```
《摘要》
今晨的阳光明媚,一天的情绪随风而动,一抹嫣红,在秋色��
�满开。秋露晨霜花开淡然。于时光的转角回望,一路蝶舞相�
��,恣意盎然。于季节的边缘眺望,秋风萧瑟时,花谢花飞处
,香韵依然。借今夜,看着你深遂的眼睛,柳絮的纤丝拽着��
�,走进萧萧的树林,枯叶在脚下沙沙作响,此间,拾起胶片�
��不远处,听见蝉蛙共鸣,声音越来越远。像孤身的奔驰,飞
奔于眼的深处,片片落叶纷飞,一片轻醉。每当看见他们自��
�的样子的时候我就很羡慕,矫健的身姿,我想如果不是色斑�
��话,也许我也会活的很自在!色斑去除小偏方,
《客户案例》
女人最怕的就是自己变老了,不漂亮了。要是老公再嫌��
�自己,更是感觉这日子彻底没希望了。我今年28岁,26岁结的
婚,老公比我大五岁,算是现在常说的金龟婿吧,有自己的��
�司,结婚后我就在家当全职太太了,我的很多姐妹都很羡慕�
��,说我嫁了个好老公,我也很知足,老公疼我,我们的二人
世界也很幸福,过了一年,我们的小宝宝就出生了,我们的��
�活就更美满了。可好景不长,我感觉生完孩子后,我的皮肤�
��始变的干了,脸色还发黄,更要命的是出现了很多斑,还有
眼角还有一些小细纹,这让爱美的我再也接受不了了,那个��
�子里的女人真的是我吗?<br>
我的闺蜜是个很时尚的女人,对自己的外表要求很高,��
�总是提醒我,要我注意一下自己的形象,现在外面小姑娘青�
��貌美的多的是,不能保证男人不偷腥。可我觉得老公不是那
样的人,而且他公司事情那么多,哪有时间想别的。可我心��
�也有些打鼓,老公长得一表人才的,又是事业小成的男人,�
��应酬那么多,万一,我真不敢想下去。<br>
我开始疯狂采购大牌护肤品,使用过一段时间后,皮肤��
�实改善了,可脸上的斑倒一点没见少。这可怎么办呢,后来�
��去论坛上看姐妹们都是用什么祛斑的,有个帖子引起了我的
注意,说的也是一个姐妹生完孩子长斑了,最后用的「黛芙��
�尔精华液」彻底祛除了,我当时觉得这帖子是真的还是假的�
��就试着用QQ联系了那个发帖人,没想到真是她的亲身经历,�
��们通过视频看了她以前的照片和现在的样子,真是让我很惊
啊,居然能有这么好的效果,我立刻毫不犹豫的在「黛芙薇��
�精华液」商城上订购了,现在我的斑已经彻底的没有了,又�
��了漂亮可人的大美女了,而且老公说我比以前更有女人味了
,至于那些小姑娘,本小姐才不怕呢。
阅读了色斑去除小偏方,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
色斑去除小偏方,同时为您分享祛斑小方法
茶水去斑美白
方法一:洗脸后,将茶水涂到脸上,并用手轻轻拍脸。
方法二:将蘸了茶水的脱脂棉附在脸上2-3分钟,然后清水洗��
�,有除色斑、美白的效果。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:42 | 1.0 | 解析色斑去除小偏方 - ```
《摘要》
今晨的阳光明媚,一天的情绪随风而动,一抹嫣红,在秋色��
�满开。秋露晨霜花开淡然。于时光的转角回望,一路蝶舞相�
��,恣意盎然。于季节的边缘眺望,秋风萧瑟时,花谢花飞处
,香韵依然。借今夜,看着你深遂的眼睛,柳絮的纤丝拽着��
�,走进萧萧的树林,枯叶在脚下沙沙作响,此间,拾起胶片�
��不远处,听见蝉蛙共鸣,声音越来越远。像孤身的奔驰,飞
奔于眼的深处,片片落叶纷飞,一片轻醉。每当看见他们自��
�的样子的时候我就很羡慕,矫健的身姿,我想如果不是色斑�
��话,也许我也会活的很自在!色斑去除小偏方,
《客户案例》
女人最怕的就是自己变老了,不漂亮了。要是老公再嫌��
�自己,更是感觉这日子彻底没希望了。我今年28岁,26岁结的
婚,老公比我大五岁,算是现在常说的金龟婿吧,有自己的��
�司,结婚后我就在家当全职太太了,我的很多姐妹都很羡慕�
��,说我嫁了个好老公,我也很知足,老公疼我,我们的二人
世界也很幸福,过了一年,我们的小宝宝就出生了,我们的��
�活就更美满了。可好景不长,我感觉生完孩子后,我的皮肤�
��始变的干了,脸色还发黄,更要命的是出现了很多斑,还有
眼角还有一些小细纹,这让爱美的我再也接受不了了,那个��
�子里的女人真的是我吗?<br>
我的闺蜜是个很时尚的女人,对自己的外表要求很高,��
�总是提醒我,要我注意一下自己的形象,现在外面小姑娘青�
��貌美的多的是,不能保证男人不偷腥。可我觉得老公不是那
样的人,而且他公司事情那么多,哪有时间想别的。可我心��
�也有些打鼓,老公长得一表人才的,又是事业小成的男人,�
��应酬那么多,万一,我真不敢想下去。<br>
我开始疯狂采购大牌护肤品,使用过一段时间后,皮肤��
�实改善了,可脸上的斑倒一点没见少。这可怎么办呢,后来�
��去论坛上看姐妹们都是用什么祛斑的,有个帖子引起了我的
注意,说的也是一个姐妹生完孩子长斑了,最后用的「黛芙��
�尔精华液」彻底祛除了,我当时觉得这帖子是真的还是假的�
��就试着用QQ联系了那个发帖人,没想到真是她的亲身经历,�
��们通过视频看了她以前的照片和现在的样子,真是让我很惊
啊,居然能有这么好的效果,我立刻毫不犹豫的在「黛芙薇��
�精华液」商城上订购了,现在我的斑已经彻底的没有了,又�
��了漂亮可人的大美女了,而且老公说我比以前更有女人味了
,至于那些小姑娘,本小姐才不怕呢。
阅读了色斑去除小偏方,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
色斑去除小偏方,同时为您分享祛斑小方法
茶水去斑美白
方法一:洗脸后,将茶水涂到脸上,并用手轻轻拍脸。
方法二:将蘸了茶水的脱脂棉附在脸上2-3分钟,然后清水洗��
�,有除色斑、美白的效果。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 5:42 | defect | 解析色斑去除小偏方 《摘要》 今晨的阳光明媚,一天的情绪随风而动,一抹嫣红,在秋色�� �满开。秋露晨霜花开淡然。于时光的转角回望,一路蝶舞相� ��,恣意盎然。于季节的边缘眺望,秋风萧瑟时,花谢花飞处 ,香韵依然。借今夜,看着你深遂的眼睛,柳絮的纤丝拽着�� �,走进萧萧的树林,枯叶在脚下沙沙作响,此间,拾起胶片� ��不远处,听见蝉蛙共鸣,声音越来越远。像孤身的奔驰,飞 奔于眼的深处,片片落叶纷飞,一片轻醉。每当看见他们自�� �的样子的时候我就很羡慕,矫健的身姿,我想如果不是色斑� ��话,也许我也会活的很自在!色斑去除小偏方, 《客户案例》 女人最怕的就是自己变老了,不漂亮了。要是老公再嫌�� �自己,更是感觉这日子彻底没希望了。 , 婚,老公比我大五岁,算是现在常说的金龟婿吧,有自己的�� �司,结婚后我就在家当全职太太了,我的很多姐妹都很羡慕� ��,说我嫁了个好老公,我也很知足,老公疼我,我们的二人 世界也很幸福,过了一年,我们的小宝宝就出生了,我们的�� �活就更美满了。可好景不长,我感觉生完孩子后,我的皮肤� ��始变的干了,脸色还发黄,更要命的是出现了很多斑,还有 眼角还有一些小细纹,这让爱美的我再也接受不了了,那个�� �子里的女人真的是我吗 我的闺蜜是个很时尚的女人,对自己的外表要求很高,�� �总是提醒我,要我注意一下自己的形象,现在外面小姑娘青� ��貌美的多的是,不能保证男人不偷腥。可我觉得老公不是那 样的人,而且他公司事情那么多,哪有时间想别的。可我心�� �也有些打鼓,老公长得一表人才的,又是事业小成的男人,� ��应酬那么多,万一,我真不敢想下去。 我开始疯狂采购大牌护肤品,使用过一段时间后,皮肤�� �实改善了,可脸上的斑倒一点没见少。这可怎么办呢,后来� ��去论坛上看姐妹们都是用什么祛斑的,有个帖子引起了我的 注意,说的也是一个姐妹生完孩子长斑了,最后用的「黛芙�� �尔精华液」彻底祛除了,我当时觉得这帖子是真的还是假的� ��就试着用qq联系了那个发帖人,没想到真是她的亲身经历,� ��们通过视频看了她以前的照片和现在的样子,真是让我很惊 啊,居然能有这么好的效果,我立刻毫不犹豫的在「黛芙薇�� �精华液」商城上订购了,现在我的斑已经彻底的没有了,又� ��了漂亮可人的大美女了,而且老公说我比以前更有女人味了 ,至于那些小姑娘,本小姐才不怕呢。 阅读了色斑去除小偏方,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� 
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 色斑去除小偏方,同时为您分享祛斑小方法 茶水去斑美白 方法一:洗脸后,将茶水涂到脸上,并用手轻轻拍脸。 方法二: ,然后清水洗�� �,有除色斑、美白的效果。 original issue reported on code google com by additive gmail com on jul at | 1 |
298,466 | 9,200,342,993 | IssuesEvent | 2019-03-07 16:50:47 | ClubRobotInsat/libkicad-robot | https://api.github.com/repos/ClubRobotInsat/libkicad-robot | closed | Logic signal level converter | hardware high_priority | The `stm32f103` speaks in 3.3V, but the servomotors speak in 5V. Even if the current situation kind of works, we should convert the logic level between these two systems.
After a little bit of research I found a transistor `BSS138` that should do the job.

*[Source](https://learn.sparkfun.com/tutorials/bi-directional-logic-level-converter-hookup-guide/all)* | 1.0 | Logic signal level converter - The `stm32f103` speaks in 3.3V, but the servomotors speak in 5V. Even if the current situation kind of works, we should convert the logic level between these two systems.
After a little bit of research I found a transistor `BSS138` that should do the job.

*[Source](https://learn.sparkfun.com/tutorials/bi-directional-logic-level-converter-hookup-guide/all)* | non_defect | logic signal level converter the speaks in but the servomotors speaks in even if the current situation kind of works we should convert the logic level between these two systems after a little bit of research i found a transistor that should do the job | 0 |
50,212 | 13,187,382,351 | IssuesEvent | 2020-08-13 03:14:16 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | CompareFloatingPointAs... fails on PPC64 (Trac #316) | Migrated from Trac dataclasses defect | Email from Nathan:
"...CompareFloatingPoint tests fail for release builds. This appears to be because all the integer pointer casts inside of it violate GCC's strict aliasing assumptions, and may return gibberish. On RISC systems, this seems to break basically all the time, causing the values tested to be meaningless and the tests to fail. Register-starved CISC architectures like x86 seem to run into this less often, but we can get silent corruption there too. I guess we've just been lucky so far.
This can be fixed either by passing -fno-strict-aliasing to GCC, or by retooling the CompareFloatingPoint code. The first option is a little suboptimal since some of those functions are inlined into other places from a header, so you'd have to pessimize all of IceTray instead of just a few files. It might also be worth while to add -Werror at least to the offline software buildbots, since GCC actually warns about this and similar bugs might be caught in other places. "
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/316
, reported by olivas and owned by olivas_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T17:42:33",
"description": "Email from Nathan:\n\n\"...CompareFloatingPoint tests fail for release builds. This appears to be because all the integer pointer casts inside of it violate GCC's strict aliasing assumptions, and may return gibberish. On RISC systems, this seems to break basically all the time, causing the values tested to be meaningless and the tests to fail. Register-starved CISC architectures like x86 seem to run into this less often, but we can get silent corruption there too. I guess we've just been lucky so far.\n\nThis can be fixed either by passing -fno-strict-aliasing to GCC, or by retooling the CompareFloatingPoint code. The first option is a little suboptimal since some of those functions are inlined into other places from a header, so you'd have to pessimize all of IceTray instead of just a few files. It might also be worth while to add -Werror at least to the offline software buildbots, since GCC actually warns about this and similar bugs might be caught in other places. \"\n",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1351705353000000",
"component": "dataclasses",
"summary": "CompareFloatingPointAs... fails on PPC64",
"priority": "normal",
"keywords": "",
"time": "2011-10-29T23:05:35",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | CompareFloatingPointAs... fails on PPC64 (Trac #316) - Email from Nathan:
"...CompareFloatingPoint tests fail for release builds. This appears to be because all the integer pointer casts inside of it violate GCC's strict aliasing assumptions, and may return gibberish. On RISC systems, this seems to break basically all the time, causing the values tested to be meaningless and the tests to fail. Register-starved CISC architectures like x86 seem to run into this less often, but we can get silent corruption there too. I guess we've just been lucky so far.
This can be fixed either by passing -fno-strict-aliasing to GCC, or by retooling the CompareFloatingPoint code. The first option is a little suboptimal since some of those functions are inlined into other places from a header, so you'd have to pessimize all of IceTray instead of just a few files. It might also be worth while to add -Werror at least to the offline software buildbots, since GCC actually warns about this and similar bugs might be caught in other places. "
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/316
, reported by olivas and owned by olivas_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T17:42:33",
"description": "Email from Nathan:\n\n\"...CompareFloatingPoint tests fail for release builds. This appears to be because all the integer pointer casts inside of it violate GCC's strict aliasing assumptions, and may return gibberish. On RISC systems, this seems to break basically all the time, causing the values tested to be meaningless and the tests to fail. Register-starved CISC architectures like x86 seem to run into this less often, but we can get silent corruption there too. I guess we've just been lucky so far.\n\nThis can be fixed either by passing -fno-strict-aliasing to GCC, or by retooling the CompareFloatingPoint code. The first option is a little suboptimal since some of those functions are inlined into other places from a header, so you'd have to pessimize all of IceTray instead of just a few files. It might also be worth while to add -Werror at least to the offline software buildbots, since GCC actually warns about this and similar bugs might be caught in other places. \"\n",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1351705353000000",
"component": "dataclasses",
"summary": "CompareFloatingPointAs... fails on PPC64",
"priority": "normal",
"keywords": "",
"time": "2011-10-29T23:05:35",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| defect | comparefloatingpointas fails on trac email from nathan comparefloatingpoint tests fail for release builds this appears to be because all the integer pointer casts inside of it violate gcc s strict aliasing assumptions and may return gibberish on risc systems this seems to break basically all the time causing the values tested to be meaningless and the tests to fail register starved cisc architectures like seem to run into this less often but we can get silent corruption there too i guess we ve just been lucky so far this can be fixed either by passing fno strict aliasing to gcc or by retooling the comparefloatingpoint code the first option is a little suboptimal since some of those functions are inlined into other places from a header so you d have to pessimize all of icetray instead of just a few files it might also be worth while to add werror at least to the offline software buildbots since gcc actually warns about this and similar bugs might be caught in other places migrated from reported by olivas and owned by olivas json status closed changetime description email from nathan n n comparefloatingpoint tests fail for release builds this appears to be because all the integer pointer casts inside of it violate gcc s strict aliasing assumptions and may return gibberish on risc systems this seems to break basically all the time causing the values tested to be meaningless and the tests to fail register starved cisc architectures like seem to run into this less often but we can get silent corruption there too i guess we ve just been lucky so far n nthis can be fixed either by passing fno strict aliasing to gcc or by retooling the comparefloatingpoint code the first option is a little suboptimal since some of those functions are inlined into other places from a header so you d have to pessimize all of icetray instead of just a few files it might also be worth while to add werror at least to the offline software buildbots since gcc actually warns about this 
and similar bugs might be caught in other places n reporter olivas cc resolution fixed ts component dataclasses summary comparefloatingpointas fails on priority normal keywords time milestone owner olivas type defect | 1 |
39,645 | 20,117,856,090 | IssuesEvent | 2022-02-07 21:37:08 | anguspiv/www.angusp.com | https://api.github.com/repos/anguspiv/www.angusp.com | closed | Lighthouse complains about link target size | enhancement performance | The link tap target sizes are too small on mobile
| True | Lighthouse complains about link target size - The link tap target sizes are too small on mobile
| non_defect | lighthouse complains about link target size the link tap target sizes are too small on mobile | 0 |
57,561 | 15,862,370,927 | IssuesEvent | 2021-04-08 11:31:11 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Hazelcast 4.1.1: IMap#keySet triggers value deserialization | Type: Defect | Hi,
We are running the following code on an IMap instance. BTW we are running Hazelcast 4.1.1 as a cluster consisting of 3 pods in K8s.
for (String key : map.keySet()) {
    if (sb.charAt(sb.length() - 1) != '[') {
        sb.append(", ");
    }
    sb.append("\"" + key + "\"");
    sb.append("\"," + map.getEntryView(key).getExpirationTime() + "\"");
}
But getting a serialization exception:
Exception thrown by application class 'com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read:87'
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.foobar.cache.CacheWrapper
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:87)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.map.impl.record.Records.tryStoreIntoCache(Records.java:164)
at com.hazelcast.map.impl.record.Records.getValueOrCachedValue(Records.java:131)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:99)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:234)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:218)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEachAfterLoad(DefaultRecordStore.java:247)
at com.hazelcast.map.impl.query.PartitionScanRunner.run(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.query.CallerRunsPartitionScanExecutor.execute(CallerRunsPartitionScanExecutor.java:43)
at com.hazelcast.map.impl.query.QueryRunner.runPartitionIndexOrPartitionScanQueryOnGivenOwnedPartition(QueryRunner.java:219)
at com.hazelcast.map.impl.query.QueryPartitionOperation.runInternal(QueryPartitionOperation.java:46)
at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:160)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:138)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
at ------ submitted from ------.()
at com.hazelcast.internal.util.ExceptionUtil.cloneExceptionWithFixedAsyncStackTrace(ExceptionUtil.java:265)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.returnOrThrowWithGetConventions(InvocationFuture.java:112)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrowIfException(InvocationFuture.java:100)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:606)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.collectResultsFromMissingPartitions(AbstractMapQueryMessageTask.java:247)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.invokeOnMissingPartitions(AbstractMapQueryMessageTask.java:136)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.call(AbstractMapQueryMessageTask.java:100)
at com.hazelcast.client.impl.protocol.task.AbstractCallableMessageTask.processMessage(AbstractCallableMessageTask.java:35)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:153)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:116)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:834)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
at ------ submitted from ------.()
at com.hazelcast.internal.util.ExceptionUtil.cloneExceptionWithFixedAsyncStackTrace(ExceptionUtil.java:265)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.returnOrThrowWithGetConventions(InvocationFuture.java:112)
at com.hazelcast.client.impl.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:95)
at com.hazelcast.client.impl.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:40)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:614)
at com.hazelcast.client.impl.spi.ClientProxy.invoke(ClientProxy.java:215)
at com.hazelcast.client.impl.proxy.ClientMapProxy.keySet(ClientMapProxy.java:1105)
at com.foobar.restws.resources.operations.CacheOperationsResource.getCacheObjectContent(CacheOperationsResource.java:81)
at com.foobar.restws.resources.operations.CacheOperationsResource$Proxy$_$$_WeldClientProxy.getCacheObjectContent(Unknown Source)
at sun.reflect.GeneratedMethodAccessor1417.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.ibm.ws.jaxrs20.server.LibertyJaxRsServerFactoryBean.performInvocation(LibertyJaxRsServerFactoryBean.java:652)
at [internal classes]
at com.foobar.operations.filter.OperationsAuthorizationFilter.doFilter(OperationsAuthorizationFilter.java:76)
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201)
at [internal classes]
Caused by: java.lang.ClassNotFoundException: com.foobar.cache.CacheWrapper
at jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at com.hazelcast.internal.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:289)
at com.hazelcast.internal.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:249)
at com.hazelcast.internal.nio.IOUtil$ClassLoaderAwareObjectInputStream.resolveClass(IOUtil.java:783)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1995)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:83)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.map.impl.record.Records.tryStoreIntoCache(Records.java:164)
at com.hazelcast.map.impl.record.Records.getValueOrCachedValue(Records.java:131)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:99)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:234)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:218)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEachAfterLoad(DefaultRecordStore.java:247)
at com.hazelcast.map.impl.query.PartitionScanRunner.run(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.query.CallerRunsPartitionScanExecutor.execute(CallerRunsPartitionScanExecutor.java:43)
at com.hazelcast.map.impl.query.QueryRunner.runPartitionIndexOrPartitionScanQueryOnGivenOwnedPartition(QueryRunner.java:219)
at com.hazelcast.map.impl.query.QueryPartitionOperation.runInternal(QueryPartitionOperation.java:46)
at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:160)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:138)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) | 1.0 | Hazelcast 4.1.1: IMap#keySet triggers value deserialization - Hi,
We are running the following code on an IMap instance. BTW we are running Hazelcast 4.1.1 as a cluster consisting of 3 pods in K8s.
for (String key : map.keySet()) {
    if (sb.charAt(sb.length() - 1) != '[') {
        sb.append(", ");
    }
    sb.append("\"" + key + "\"");
    sb.append("\"," + map.getEntryView(key).getExpirationTime() + "\"");
}
But getting a serialization exception:
Exception thrown by application class 'com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read:87'
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.foobar.cache.CacheWrapper
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:87)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.map.impl.record.Records.tryStoreIntoCache(Records.java:164)
at com.hazelcast.map.impl.record.Records.getValueOrCachedValue(Records.java:131)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:99)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:234)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:218)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEachAfterLoad(DefaultRecordStore.java:247)
at com.hazelcast.map.impl.query.PartitionScanRunner.run(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.query.CallerRunsPartitionScanExecutor.execute(CallerRunsPartitionScanExecutor.java:43)
at com.hazelcast.map.impl.query.QueryRunner.runPartitionIndexOrPartitionScanQueryOnGivenOwnedPartition(QueryRunner.java:219)
at com.hazelcast.map.impl.query.QueryPartitionOperation.runInternal(QueryPartitionOperation.java:46)
at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:160)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:138)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
at ------ submitted from ------.()
at com.hazelcast.internal.util.ExceptionUtil.cloneExceptionWithFixedAsyncStackTrace(ExceptionUtil.java:265)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.returnOrThrowWithGetConventions(InvocationFuture.java:112)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrowIfException(InvocationFuture.java:100)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:606)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.collectResultsFromMissingPartitions(AbstractMapQueryMessageTask.java:247)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.invokeOnMissingPartitions(AbstractMapQueryMessageTask.java:136)
at com.hazelcast.client.impl.protocol.task.map.AbstractMapQueryMessageTask.call(AbstractMapQueryMessageTask.java:100)
at com.hazelcast.client.impl.protocol.task.AbstractCallableMessageTask.processMessage(AbstractCallableMessageTask.java:35)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:153)
at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:116)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:834)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
at ------ submitted from ------.()
at com.hazelcast.internal.util.ExceptionUtil.cloneExceptionWithFixedAsyncStackTrace(ExceptionUtil.java:265)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.returnOrThrowWithGetConventions(InvocationFuture.java:112)
at com.hazelcast.client.impl.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:95)
at com.hazelcast.client.impl.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:40)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:614)
at com.hazelcast.client.impl.spi.ClientProxy.invoke(ClientProxy.java:215)
at com.hazelcast.client.impl.proxy.ClientMapProxy.keySet(ClientMapProxy.java:1105)
at com.foobar.restws.resources.operations.CacheOperationsResource.getCacheObjectContent(CacheOperationsResource.java:81)
at com.foobar.restws.resources.operations.CacheOperationsResource$Proxy$_$$_WeldClientProxy.getCacheObjectContent(Unknown Source)
at sun.reflect.GeneratedMethodAccessor1417.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.ibm.ws.jaxrs20.server.LibertyJaxRsServerFactoryBean.performInvocation(LibertyJaxRsServerFactoryBean.java:652)
at [internal classes]
at com.foobar.operations.filter.OperationsAuthorizationFilter.doFilter(OperationsAuthorizationFilter.java:76)
at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201)
at [internal classes]
Caused by: java.lang.ClassNotFoundException: com.foobar.cache.CacheWrapper
at jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at com.hazelcast.internal.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:289)
at com.hazelcast.internal.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:249)
at com.hazelcast.internal.nio.IOUtil$ClassLoaderAwareObjectInputStream.resolveClass(IOUtil.java:783)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1995)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:83)
at com.hazelcast.internal.serialization.impl.defaultserializers.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:76)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:44)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:205)
at com.hazelcast.map.impl.record.Records.tryStoreIntoCache(Records.java:164)
at com.hazelcast.map.impl.record.Records.getValueOrCachedValue(Records.java:131)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:99)
at com.hazelcast.map.impl.query.PartitionScanRunner$1.accept(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:234)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEach(DefaultRecordStore.java:218)
at com.hazelcast.map.impl.recordstore.DefaultRecordStore.forEachAfterLoad(DefaultRecordStore.java:247)
at com.hazelcast.map.impl.query.PartitionScanRunner.run(PartitionScanRunner.java:94)
at com.hazelcast.map.impl.query.CallerRunsPartitionScanExecutor.execute(CallerRunsPartitionScanExecutor.java:43)
at com.hazelcast.map.impl.query.QueryRunner.runPartitionIndexOrPartitionScanQueryOnGivenOwnedPartition(QueryRunner.java:219)
at com.hazelcast.map.impl.query.QueryPartitionOperation.runInternal(QueryPartitionOperation.java:46)
at com.hazelcast.map.impl.operation.MapOperation.run(MapOperation.java:112)
at com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:184)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:256)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:237)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:160)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:138)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102) | defect | hazelcast imap keyset triggers value deserialization hi we are running the following code on an imap instance btw we are running hazelcast as a cluster consisting of pods in for string key map keyset if sb charat sb length sb append sb append key sb append map getentryview key getexpirationtime but getting a serialization exception exception thrown by application class com hazelcast internal serialization impl defaultserializers javadefaultserializers javaserializer read com hazelcast nio serialization hazelcastserializationexception java lang classnotfoundexception com foobar cache cachewrapper at com hazelcast internal serialization impl defaultserializers javadefaultserializers javaserializer read javadefaultserializers java at com hazelcast internal serialization impl defaultserializers javadefaultserializers javaserializer read javadefaultserializers java at com hazelcast internal serialization impl streamserializeradapter read streamserializeradapter java at com hazelcast internal serialization impl abstractserializationservice toobject abstractserializationservice java at com hazelcast map impl record records trystoreintocache records java at com hazelcast map impl record records getvalueorcachedvalue records java at com hazelcast map impl query partitionscanrunner accept partitionscanrunner java at com hazelcast map impl query partitionscanrunner accept partitionscanrunner java at com hazelcast map impl recordstore defaultrecordstore foreach defaultrecordstore java at com hazelcast map impl recordstore defaultrecordstore foreach defaultrecordstore java at com hazelcast map impl recordstore defaultrecordstore foreachafterload defaultrecordstore java at com hazelcast map impl query partitionscanrunner run partitionscanrunner java at com hazelcast map impl query callerrunspartitionscanexecutor execute callerrunspartitionscanexecutor java at com 
hazelcast map impl query queryrunner runpartitionindexorpartitionscanqueryongivenownedpartition queryrunner java at com hazelcast map impl query querypartitionoperation runinternal querypartitionoperation java at com hazelcast map impl operation mapoperation run mapoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java at submitted from at com hazelcast internal util exceptionutil cloneexceptionwithfixedasyncstacktrace exceptionutil java at com hazelcast spi impl operationservice impl invocationfuture returnorthrowwithgetconventions invocationfuture java at com hazelcast spi impl operationservice impl invocationfuture resolveandthrowifexception invocationfuture java at com hazelcast spi impl abstractinvocationfuture get abstractinvocationfuture java at com hazelcast client impl protocol task map abstractmapquerymessagetask collectresultsfrommissingpartitions abstractmapquerymessagetask java at com hazelcast client impl protocol task map abstractmapquerymessagetask invokeonmissingpartitions abstractmapquerymessagetask java at com hazelcast client impl protocol task map abstractmapquerymessagetask call abstractmapquerymessagetask java at com hazelcast client impl protocol task abstractcallablemessagetask processmessage abstractcallablemessagetask java at com hazelcast client 
impl protocol task abstractmessagetask initializeandprocessmessage abstractmessagetask java at com hazelcast client impl protocol task abstractmessagetask run abstractmessagetask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast internal util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java at submitted from at com hazelcast internal util exceptionutil cloneexceptionwithfixedasyncstacktrace exceptionutil java at com hazelcast spi impl operationservice impl invocationfuture returnorthrowwithgetconventions invocationfuture java at com hazelcast client impl spi impl clientinvocationfuture resolveandthrowifexception clientinvocationfuture java at com hazelcast client impl spi impl clientinvocationfuture resolveandthrowifexception clientinvocationfuture java at com hazelcast spi impl abstractinvocationfuture get abstractinvocationfuture java at com hazelcast client impl spi clientproxy invoke clientproxy java at com hazelcast client impl proxy clientmapproxy keyset clientmapproxy java at com foobar restws resources operations cacheoperationsresource getcacheobjectcontent cacheoperationsresource java at com foobar restws resources operations cacheoperationsresource proxy weldclientproxy getcacheobjectcontent unknown source at sun reflect invoke unknown source at java lang reflect method invoke method java at com ibm ws server libertyjaxrsserverfactorybean performinvocation libertyjaxrsserverfactorybean java at at com foobar operations filter operationsauthorizationfilter dofilter operationsauthorizationfilter java at com ibm ws webcontainer filter filterinstancewrapper dofilter filterinstancewrapper java at caused by java lang classnotfoundexception com foobar cache cachewrapper at jdk internal 
loader builtinclassloader loadclass builtinclassloader java at jdk internal loader classloaders appclassloader loadclass classloaders java at java lang classloader loadclass classloader java at com hazelcast internal nio classloaderutil tryloadclass classloaderutil java at com hazelcast internal nio classloaderutil loadclass classloaderutil java at com hazelcast internal nio ioutil classloaderawareobjectinputstream resolveclass ioutil java at java io objectinputstream readnonproxydesc objectinputstream java at java io objectinputstream readclassdesc objectinputstream java at java io objectinputstream readordinaryobject objectinputstream java at java io objectinputstream objectinputstream java at java io objectinputstream readobject objectinputstream java at java io objectinputstream readobject objectinputstream java at com hazelcast internal serialization impl defaultserializers javadefaultserializers javaserializer read javadefaultserializers java at com hazelcast internal serialization impl defaultserializers javadefaultserializers javaserializer read javadefaultserializers java at com hazelcast internal serialization impl streamserializeradapter read streamserializeradapter java at com hazelcast internal serialization impl abstractserializationservice toobject abstractserializationservice java at com hazelcast map impl record records trystoreintocache records java at com hazelcast map impl record records getvalueorcachedvalue records java at com hazelcast map impl query partitionscanrunner accept partitionscanrunner java at com hazelcast map impl query partitionscanrunner accept partitionscanrunner java at com hazelcast map impl recordstore defaultrecordstore foreach defaultrecordstore java at com hazelcast map impl recordstore defaultrecordstore foreach defaultrecordstore java at com hazelcast map impl recordstore defaultrecordstore foreachafterload defaultrecordstore java at com hazelcast map impl query partitionscanrunner run partitionscanrunner java at com 
hazelcast map impl query callerrunspartitionscanexecutor execute callerrunspartitionscanexecutor java at com hazelcast map impl query queryrunner runpartitionindexorpartitionscanqueryongivenownedpartition queryrunner java at com hazelcast map impl query querypartitionoperation runinternal querypartitionoperation java at com hazelcast map impl operation mapoperation run mapoperation java at com hazelcast spi impl operationservice operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread executerun operationthread java at com hazelcast internal util executor hazelcastmanagedthread run hazelcastmanagedthread java | 1 |
73,430 | 24,621,878,579 | IssuesEvent | 2022-10-16 02:20:28 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | "Click to read topic" tooltip gets stuck | T-Defect S-Tolerable A-Room-View O-Occasional | ### Steps to reproduce
1. Open a room that has a topic
2. Click on the topic
3. Close the topic (by pressing esc, clicking the x, or clicking outside of the dialog)
### Outcome
#### What did you expect?
"Click to read topic" tooltip disappears
#### What happened instead?
"Click to read topic" tooltip is still visible and does not disappear until I click
https://user-images.githubusercontent.com/5855073/196014766-8a3aa12f-30dd-4bda-a4f3-4d602b30cd57.mp4
### Operating system
macOS
### Browser information
Firefox 105.0.3 and Chrome 106.0.5249.103
### URL for webapp
_No response_
### Application version
1.11.10
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | "Click to read topic" tooltip gets stuck - ### Steps to reproduce
1. Open a room that has a topic
2. Click on the topic
3. Close the topic (by pressing esc, clicking the x, or clicking outside of the dialog)
### Outcome
#### What did you expect?
"Click to read topic" tooltip disappears
#### What happened instead?
"Click to read topic" tooltip is still visible and does not disappear until I click
https://user-images.githubusercontent.com/5855073/196014766-8a3aa12f-30dd-4bda-a4f3-4d602b30cd57.mp4
### Operating system
macOS
### Browser information
Firefox 105.0.3 and Chrome 106.0.5249.103
### URL for webapp
_No response_
### Application version
1.11.10
### Homeserver
_No response_
### Will you send logs?
No | defect | click to read topic tooltip gets stuck steps to reproduce open a room that has a topic click on the topic close the topic by pressing esc clicking the x or clicking outside of the dialog outcome what did you expect click to read topic tooltip disappears what happened instead click to read topic tooltip is still visible and does not disappear until i click operating system macos browser information firefox and chrome url for webapp no response application version homeserver no response will you send logs no | 1 |
7,584 | 2,610,406,447 | IssuesEvent | 2015-02-26 20:12:06 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | opened | Clone Stim | auto-migrated Priority-Medium Type-Defect | ```
The Clone trooper 'Stim' ability does not have a timeout, apparently.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 4 Dec 2011 at 1:46 | 1.0 | Clone Stim - ```
The Clone trooper 'Stim' ability does not have a timeout, apparently.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 4 Dec 2011 at 1:46 | defect | clone stim the clone trooper stim ability does not have a timeout apparently original issue reported on code google com by killerhurdz netscape net on dec at | 1 |
181,418 | 30,685,183,897 | IssuesEvent | 2023-07-26 11:52:46 | calcom/cal.com | https://api.github.com/repos/calcom/cal.com | closed | Upload Avtar should have some padding at the bottom | 🎨 needs design 🧹 Improvements 👩🔬 needs investigation | ### Issue Summary
While signing up, I noticed that the **Close** and **Save** buttons are too near to the bottom. We should add some spacing to make the design consistent.
### Steps to Reproduce
1. Go to Signup
2. Verify your email
3. Fill in personal information and attach calendars.
4. Go to upload a profile picture
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
### Actual Results
- Buttons too near to the bottom of the modal
### Expected Results
- Buttons should have some padding at the bottom
### Technical details
N/A
### Evidence
<img width="726" alt="image" src="https://github.com/calcom/cal.com/assets/33599674/3010321f-e8c3-42ec-a8e5-2a4f9994a0d4">
| 1.0 | Upload Avtar should have some padding at the bottom - ### Issue Summary
While signing up, I noticed that the **Close** and **Save** buttons are too near to the bottom. We should add some spacing to make the design consistent.
### Steps to Reproduce
1. Go to Signup
2. Verify your email
3. Fill in personal information and attach calendars.
4. Go to upload a profile picture
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
### Actual Results
- Buttons too near to the bottom of the modal
### Expected Results
- Buttons should have some padding at the bottom
### Technical details
N/A
### Evidence
<img width="726" alt="image" src="https://github.com/calcom/cal.com/assets/33599674/3010321f-e8c3-42ec-a8e5-2a4f9994a0d4">
| non_defect | upload avtar should have some padding at the bottom issue summary while signing up i noticed that the close and save buttons are too near to the bottom we should add some spacing to make the design consistent steps to reproduce go to signup verify your email fill in personal information and attach calendars go to upload a profile picture any other relevant information for example why do you consider this a bug and what did you expect to happen instead actual results buttons too near to the bottom of the modal expected results buttons should have some padding at the bottom technical details n a evidence img width alt image src | 0 |
750,098 | 26,188,899,620 | IssuesEvent | 2023-01-03 06:28:20 | googleapis/google-cloud-ruby | https://api.github.com/repos/googleapis/google-cloud-ruby | closed | [Nightly CI Failures] Failures detected for google-cloud-vm_migration | type: bug priority: p1 nightly failure | At 2023-01-02 08:54:22 UTC, detected failures in google-cloud-vm_migration for: bundle
report_key_4bb015fe48e7516fbffbc9f6469f5b30 | 1.0 | [Nightly CI Failures] Failures detected for google-cloud-vm_migration - At 2023-01-02 08:54:22 UTC, detected failures in google-cloud-vm_migration for: bundle
report_key_4bb015fe48e7516fbffbc9f6469f5b30 | non_defect | failures detected for google cloud vm migration at utc detected failures in google cloud vm migration for bundle report key | 0 |
25,422 | 4,317,351,984 | IssuesEvent | 2016-07-23 08:01:19 | stko/oobd | https://api.github.com/repos/stko/oobd | closed | Sometimes the sended command is found in the answer string, which causes error msg | auto-migrated Priority-Medium Type-Defect | ```
as just seen, sometimes the sent command comes back as first part of the
answer, which is then interpreted as error, because unexpected
```
Original issue reported on code.google.com by `steffen....@gmail.com` on 5 May 2013 at 11:28 | 1.0 | Sometimes the sended command is found in the answer string, which causes error msg - ```
as just seen, sometimes the sent command comes back as first part of the
answer, which is then interpreted as error, because unexpected
```
Original issue reported on code.google.com by `steffen....@gmail.com` on 5 May 2013 at 11:28 | defect | sometimes the sended command is found in the answer string which causes error msg as just seen sometimes the sent command comes back as first part of the answer which is then interpreted as error because unexpected original issue reported on code google com by steffen gmail com on may at | 1 |
50,688 | 7,623,283,187 | IssuesEvent | 2018-05-03 14:38:17 | openfisca/openfisca-doc | https://api.github.com/repos/openfisca/openfisca-doc | closed | Add notebooks as examples | flow:team:doing kind:documentation kind:solution | As a new user,
I can find interactive usage examples,
So that I can test OpenFisca and understand what it does | 1.0 | Add notebooks as examples - As a new user,
I can find interactive usage examples,
So that I can test OpenFisca and understand what it does | non_defect | add notebooks as examples as a new user i can find interactive usage examples so that i can test openfisca and understand what it does | 0 |
36,650 | 8,135,192,635 | IssuesEvent | 2018-08-20 00:59:15 | rust-lang-nursery/rust-bindgen | https://api.github.com/repos/rust-lang-nursery/rust-bindgen | closed | variadic function must have C or cdecl calling convention | A-spe E-less-easy I-ABI-bug I-bogus-codegen bug | I am using this script https://gist.github.com/fitzgen/187381e358f60efa8194d0b276b4d11a.
The hashtag for my bindgen version is 4dd4ac7 .
$ ./b.sh bindgen abc.h
```
clang-4.0: warning: treating 'c-header' input as 'c++-header' when in C++ mode, this behavior is deprecated [-Wdeprecated]
error[E0045]: variadic function must have C or cdecl calling convention
--> /tmp/bindings-CLx7tG.rs:5:2
|
5 | pub fn a ( arg1 : :: std :: os :: raw :: c_char , ... ) -> :: std :: os :: raw :: c_char ;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ variadics require C or cdecl calling convention
error: aborting due to previous error
Interesting: bindgen emitted Rust code that won't compile!
```
$ cat abc.h
```
char __attribute__((ms_abi)) a(char, ...);
``` | 1.0 | variadic function must have C or cdecl calling convention - I am using this script https://gist.github.com/fitzgen/187381e358f60efa8194d0b276b4d11a.
The hashtag for my bindgen version is 4dd4ac7 .
$ ./b.sh bindgen abc.h
```
clang-4.0: warning: treating 'c-header' input as 'c++-header' when in C++ mode, this behavior is deprecated [-Wdeprecated]
error[E0045]: variadic function must have C or cdecl calling convention
--> /tmp/bindings-CLx7tG.rs:5:2
|
5 | pub fn a ( arg1 : :: std :: os :: raw :: c_char , ... ) -> :: std :: os :: raw :: c_char ;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ variadics require C or cdecl calling convention
error: aborting due to previous error
Interesting: bindgen emitted Rust code that won't compile!
```
$ cat abc.h
```
char __attribute__((ms_abi)) a(char, ...);
``` | non_defect | variadic function must have c or cdecl calling convention i am using this script the hashtag for my bindgen version is b sh bindgen abc h clang warning treating c header input as c header when in c mode this behavior is deprecated error variadic function must have c or cdecl calling convention tmp bindings rs pub fn a std os raw c char std os raw c char variadics require c or cdecl calling convention error aborting due to previous error interesting bindgen emitted rust code that won t compile cat abc h char attribute ms abi a char | 0 |
87,161 | 17,154,126,164 | IssuesEvent | 2021-07-14 03:06:24 | stlink-org/stlink | https://api.github.com/repos/stlink-org/stlink | opened | [feature] Add multi-core support for devices like STM32H745/755 | code/feature-request | Some of the new STM32H7 devices are multi core devices. ST's official tools support it and it would be nice to have that here
| 1.0 | [feature] Add multi-core support for devices like STM32H745/755 - Some of the new STM32H7 devices are multi core devices. ST's official tools support it and it would be nice to have that here
| non_defect | add multi core support for devices like some of the new devices are multi core devices st s official tools support it and it would be nice to have that here | 0 |
426,971 | 12,391,083,407 | IssuesEvent | 2020-05-20 11:52:31 | RonAsis/Wsep202 | https://api.github.com/repos/RonAsis/Wsep202 | opened | test discount scenario from meeting with shahaf | High priority bug | by with purchase policy of:
composed of two with AND between them:
minimum of 3 items of product1.
maximum=5 of product1.
scenario:
buy 1 of - see failed.
buy 6 of - see failed.
buy 3 - see purchased successfully.
for each step see proper meaningful notification. | 1.0 | test discount scenario from meeting with shahaf - by with purchase policy of:
composed of two with AND between them:
minimum of 3 items of product1.
maximum=5 of product1.
scenario:
buy 1 of - see failed.
buy 6 of - see failed.
buy 3 - see purchased successfully.
for each step see proper meaningful notification. | non_defect | test discount scenario from meeting with shahaf by with purchase policy of composed of two with and between them minimum of items of maximum of scenario buy of see failed buy of see failed buy see purchased successfully for each step see proper meaningful notification | 0 |
146,316 | 23,043,358,030 | IssuesEvent | 2022-07-23 13:59:30 | Opentrons/opentrons | https://api.github.com/repos/Opentrons/opentrons | closed | 6.0 Feedback: [Design QA] Rename robot slideout | design 6.0-feedback | ### Overview
The original feedback is in #10618
### Current Behavior
_No response_
### Expected Behavior
- [x] (1) Update icon to 24px
- [x] (2) Did we change this copy? if so, let me know and I can update design.
- [x] (3) Add a margin of 4px between "35 character max" and error message
- [x] (4) Remove our box-shadow from the button
<img width="297" alt="171940550-1f32cd70-d856-43f7-9ef8-3a316ef4659d" src="https://user-images.githubusercontent.com/474225/177647512-54b1dcc1-b8e4-4fb6-89eb-280bab9497ca.png">
### Steps To Reproduce
_No response_
### Operating system
_No response_
### Robot setup or anything else?
_No response_ | 1.0 | 6.0 Feedback: [Design QA] Rename robot slideout - ### Overview
The original feedback is in #10618
### Current Behavior
_No response_
### Expected Behavior
- [x] (1) Update icon to 24px
- [x] (2) Did we change this copy? if so, let me know and I can update design.
- [x] (3) Add a margin of 4px between "35 character max" and error message
- [x] (4) Remove our box-shadow from the button
<img width="297" alt="171940550-1f32cd70-d856-43f7-9ef8-3a316ef4659d" src="https://user-images.githubusercontent.com/474225/177647512-54b1dcc1-b8e4-4fb6-89eb-280bab9497ca.png">
### Steps To Reproduce
_No response_
### Operating system
_No response_
### Robot setup or anything else?
_No response_ | non_defect | feedback rename robot slideout overview the original feedback is in current behavior no response expected behavior update icon to did we change this copy if so let me know and i can update design add a margin of between character max and error message remove our box shadow from the button img width alt src steps to reproduce no response operating system no response robot setup or anything else no response | 0 |
53,718 | 13,262,143,629 | IssuesEvent | 2020-08-20 21:11:13 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Documentation builder has no Geant4 (Trac #1934) | Migrated from Trac defect infrastructure | I improved g4-tankresponse's doxygen (to comply with ticket #1303). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1934">https://code.icecube.wisc.edu/projects/icecube/ticket/1934</a>, reported by jgonzalezand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"_ts": "1550067318169976",
"description": "I improved g4-tankresponse's doxygen (to comply with ticket #1309). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.",
"reporter": "jgonzalez",
"cc": "",
"resolution": "worksforme",
"time": "2017-01-19T15:13:16",
"component": "infrastructure",
"summary": "Documentation builder has no Geant4",
"priority": "minor",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Documentation builder has no Geant4 (Trac #1934) - I improved g4-tankresponse's doxygen (to comply with ticket #1303). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1934">https://code.icecube.wisc.edu/projects/icecube/ticket/1934</a>, reported by jgonzalezand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:15:18",
"_ts": "1550067318169976",
"description": "I improved g4-tankresponse's doxygen (to comply with ticket #1309). This has never made it to the documentation page (http://software.icecube.wisc.edu/documentation/doxygen) because Geant4 does not seem to be installed in the builder.",
"reporter": "jgonzalez",
"cc": "",
"resolution": "worksforme",
"time": "2017-01-19T15:13:16",
"component": "infrastructure",
"summary": "Documentation builder has no Geant4",
"priority": "minor",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | documentation builder has no trac i improved tankresponse s doxygen to comply with ticket this has never made it to the documentation page because does not seem to be installed in the builder migrated from json status closed changetime ts description i improved tankresponse s doxygen to comply with ticket this has never made it to the documentation page because does not seem to be installed in the builder reporter jgonzalez cc resolution worksforme time component infrastructure summary documentation builder has no priority minor keywords milestone owner nega type defect | 1 |
350,089 | 10,478,421,432 | IssuesEvent | 2019-09-23 23:58:50 | BCcampus/edehr | https://api.github.com/repos/BCcampus/edehr | closed | Input fields with unit of measurement to the right | Effort - Low Epic - Form Priority - Medium ~Feature | Some input fields have the unit of measurement to the right of it. This means that the input can't take up 100% of the column width.
We'll need a way to indicate this kind of input in the spreadsheet.
An example of this is on the respiratory assessment screen.
<img width="756" alt="screen shot 2019-02-13 at 1 07 15 pm" src="https://user-images.githubusercontent.com/20425828/52744031-55099c00-2f90-11e9-9646-096ec55e249e.png">
| 1.0 | Input fields with unit of measurement to the right - Some input fields have the unit of measurement to the right of it. This means that the input can't take up 100% of the column width.
We'll need a way to indicate this kind of input in the spreadsheet.
An example of this is on the respiratory assessment screen.
<img width="756" alt="screen shot 2019-02-13 at 1 07 15 pm" src="https://user-images.githubusercontent.com/20425828/52744031-55099c00-2f90-11e9-9646-096ec55e249e.png">
| non_defect | input fields with unit of measurement to the right some input fields have the unit of measurement to the right of it this means that the input can t take up of the column width we ll need a way to indicate this kind of input in the spreadsheet an example of this is on the respiratory assessment screen img width alt screen shot at pm src | 0 |
45,454 | 9,764,215,583 | IssuesEvent | 2019-06-05 15:19:50 | GQCG/gqcp | https://api.github.com/repos/GQCG/gqcp | opened | Victini cluster test error: ProductFockSpace_test | bug code review | On the victini cluster, there is currently one failure in ProductFockSpace_test:
```
/tmp/vsc40558/gqcp/0.1.0/intel-2018a/gqcp/tests/FockSpace/ProductFockSpace_test.cpp(98): error: in "FockSpace_EvaluateOperator_Dense_diagonal_false": check two_electron_evaluation1.isApprox(two_electron_evaluation2) has failed
```
Please use the EasyBuild procedure to reproduce this error.
| 1.0 | Victini cluster test error: ProductFockSpace_test - On the victini cluster, there is currently one failure in ProductFockSpace_test:
```
/tmp/vsc40558/gqcp/0.1.0/intel-2018a/gqcp/tests/FockSpace/ProductFockSpace_test.cpp(98): error: in "FockSpace_EvaluateOperator_Dense_diagonal_false": check two_electron_evaluation1.isApprox(two_electron_evaluation2) has failed
```
Please use the EasyBuild procedure to reproduce this error.
| non_defect | victini cluster test error productfockspace test on the victini cluster there is currently one failure in productfockspace test tmp gqcp intel gqcp tests fockspace productfockspace test cpp error in fockspace evaluateoperator dense diagonal false check two electron isapprox two electron has failed please use the easybuild procedure to reproduce this error | 0 |
71,842 | 23,824,934,781 | IssuesEvent | 2022-09-05 14:09:52 | line/armeria | https://api.github.com/repos/line/armeria | closed | Support protobuf Content-Type not limited to "application/protobuf" in UnframedGrpcService | defect | As a user of UnframedGrpcService I want the service support Content-Type [not limited to "application/protobuf"](https://github.com/line/armeria/blob/3c61331a7a14bc9b1339c91f5781cdce8bac4d2f/grpc/src/main/java/com/linecorp/armeria/server/grpc/UnframedGrpcService.java#L122-L129) because media type for protobuf is not standardized by an RFC and so many community have different standards. For example [OpenTelemetry Protocol Specification](https://opentelemetry.io/docs/reference/specification/protocol/otlp/) expects the client MUST set “Content-Type: application/x-protobuf” request header when sending binary-encoded Protobuf.
The UnframedGrpcService should support taking a protobuf media type override value that can be used and not throw HTTP 415 Unsupported Media Type response code.
| 1.0 | Support protobuf Content-Type not limited to "application/protobuf" in UnframedGrpcService - As a user of UnframedGrpcService I want the service support Content-Type [not limited to "application/protobuf"](https://github.com/line/armeria/blob/3c61331a7a14bc9b1339c91f5781cdce8bac4d2f/grpc/src/main/java/com/linecorp/armeria/server/grpc/UnframedGrpcService.java#L122-L129) because media type for protobuf is not standardized by an RFC and so many community have different standards. For example [OpenTelemetry Protocol Specification](https://opentelemetry.io/docs/reference/specification/protocol/otlp/) expects the client MUST set “Content-Type: application/x-protobuf” request header when sending binary-encoded Protobuf.
The UnframedGrpcService should support taking a protobuf media type override value that can be used and not throw HTTP 415 Unsupported Media Type response code.
| defect | support protobuf content type not limited to application protobuf in unframedgrpcservice as a user of unframedgrpcservice i want the service support content type because media type for protobuf is not standardized by an rfc and so many community have different standards for example expects the client must set “content type application x protobuf” request header when sending binary encoded protobuf the unframedgrpcservice should support taking a protobuf media type override value that can be used and not throw http unsupported media type response code | 1 |
2,444 | 3,864,553,307 | IssuesEvent | 2016-04-08 14:18:56 | saproto/saproto | https://api.github.com/repos/saproto/saproto | opened | Make authentication work with LDAP instead of RADIUS. | feature-request security | We're gonna try to use the LDAP directory of the UT instead of RADIUS, because it is more open and less difficult if we're ever gonna switch IP's. Also, we can easily access it over VPN. | True | Make authentication work with LDAP instead of RADIUS. - We're gonna try to use the LDAP directory of the UT instead of RADIUS, because it is more open and less difficult if we're ever gonna switch IP's. Also, we can easily access it over VPN. | non_defect | make authentication work with ldap instead of radius we re gonna try to use the ldap directory of the ut instead of radius because it is more open and less difficult if we re ever gonna switch ip s also we can easily access it over vpn | 0 |
14,596 | 2,829,610,097 | IssuesEvent | 2015-05-23 02:06:28 | awesomebing1/fuzzdb | https://api.github.com/repos/awesomebing1/fuzzdb | closed | http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-football-2014-online-tv-pc | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `sabujhos...@gmail.com` on 15 Nov 2014 at 3:42 | 1.0 | http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-football-2014-online-tv-pc - ```
What steps will reproduce the problem?
1.
2.
3.
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
http://www.nureyev-medical.org/forum/virginia-tech-vs-duke-live-streaming-ncaaf-
football-2014-online-tv-pc
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `sabujhos...@gmail.com` on 15 Nov 2014 at 3:42 | defect | what steps will reproduce the problem football online tv pc football online tv pc football online tv pc what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by sabujhos gmail com on nov at | 1 |
52,033 | 13,211,371,040 | IssuesEvent | 2020-08-15 22:39:18 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [genie-icetray] tests taking 1000+ minutes to run (Trac #1536) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1536">https://code.icecube.wisc.edu/projects/icecube/ticket/1536</a>, reported by negaand owned by melanie.day</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "The test process automatically quits after 1200 seconds w/o output. Tests should only take a few 10s of seconds at most.\n\n{{{\n 9088 ? R 983:28 python /build/buildslave/morax_cvmfs/Scientific_Linux_6__cvmfs_/source/genie-icetray/resources/test/GENIETest.py\n15646 ? R 914:11 python /build/buildslave/morax_cvmfs/Scientific_Linux_6__cvmfs_/source/genie-icetray/resources/test/GENIETest.py\n27741 ? R 299:10 python /build/buildslave/morax_cvmfs/Scientific_Linux_6__cvmfs_/source/genie-icetray/resources/test/GENIETest.py\n30551 ? R 1056:17 python /build/buildslave/morax_cvmfs/Scientific_Linux_6__cvmfs_/source/genie-icetray/resources/test/GENIETest.py\n}}}",
"reporter": "nega",
"cc": "kclark",
"resolution": "duplicate",
"time": "2016-01-29T21:16:50",
"component": "combo simulation",
"summary": "[genie-icetray] tests taking 1000+ minutes to run",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "melanie.day",
"type": "defect"
}
```
</p>
</details>
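The ticket above describes the buildbot behavior: a test process that produces no output for 1200 seconds is killed automatically. A minimal sketch of such an idle-output watchdog, assuming a POSIX system; all names and the harness shape are illustrative, not the actual buildslave code:

```python
# Sketch of the watchdog behavior described in the ticket: kill a test
# process that produces no output for `idle_timeout` seconds. Illustrative
# only; the real buildslave harness is not part of the ticket.
import os
import selectors
import subprocess
import sys
import time

def run_with_output_timeout(cmd, idle_timeout=1200.0, poll_interval=0.5):
    """Run cmd; return (returncode, output, timed_out).

    POSIX-only sketch: polls the child's stdout pipe with a selector.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    sel = selectors.DefaultSelector()
    sel.register(proc.stdout, selectors.EVENT_READ)
    chunks = []
    last_output = time.monotonic()
    try:
        while True:
            if proc.poll() is not None:
                # Process exited normally: drain whatever is left.
                chunks.append(proc.stdout.read() or b"")
                return proc.returncode, b"".join(chunks), False
            if sel.select(timeout=poll_interval):
                data = os.read(proc.stdout.fileno(), 4096)
                if data:
                    chunks.append(data)
                    last_output = time.monotonic()  # output seen: reset clock
            if time.monotonic() - last_output > idle_timeout:
                # Silent for too long: kill the runaway test, as the
                # 1200-second watchdog in the ticket does.
                proc.kill()
                proc.wait()
                return None, b"".join(chunks), True
    finally:
        sel.close()
```

A GENIETest.py-style run that hangs silently past the idle limit would come back with `timed_out` set instead of blocking the builder for 1000+ minutes.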
4,133 | 2,610,087,978 | IssuesEvent | 2015-02-26 18:26:40 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | How to best eliminate acne in Shenzhen | auto-migrated Priority-Medium Type-Defect | ```
How to best eliminate acne in Shenzhen [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a professional acne-removal chain built around a Korean secret formula: Hanfang Keyan, a state-licensed treatment-grade cosmetic authority and premier acne remedy. The chain pairs the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe photon" device, pioneering contract-guaranteed treatment of pimples and acne in China and successfully clearing the pimples from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:24
53,504 | 13,261,781,264 | IssuesEvent | 2020-08-20 20:31:19 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Error running steamshovel, dataio-shovel in El Capitan (Trac #1556) | Migrated from Trac combo core defect | [jbraun@dyn-8-20:~/svn/offline-software/build 20110]$ cmake ../src
-- The C compiler identification is Clang 7.0.2
-- The CXX compiler identification is Clang 7.0.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
--
-- IceCube Configuration starting
--
-- OSTYPE = Darwin
-- OSVERSION = 15.2.0
-- ARCH = i386
-- BUILDNAME = Darwin-15.2.0/i386/LLVM-7.0.2
-- TOOLSET = LLVM-7.0.2/i386/
-- HOSTNAME = dyn-8-20.icecube.wisc.edu
-- CMake path = /usr/local/Cellar/cmake/2.8.12.2/bin/cmake
-- CMake version = 2.8.12.2
-- SVN_REVISION = 142096
-- SVN_URL = http://code.icecube.wisc.edu/svn/meta-projects/offline-software/trunk
-- META_PROJECT = offline-software.trunk
--
-- Setting compiler, compile drivers, and linker
--
-- distcc not found.
-- ccache not found.
-- Using gfilt stl decryptor
-- Performing Test CXX_HAS_Wno_deprecated
-- Performing Test CXX_HAS_Wno_deprecated - Success
-- Performing Test CXX_HAS_Wno_unused_variable
-- Performing Test CXX_HAS_Wno_unused_variable - Success
-- Performing Test CXX_HAS_Wno_unused_local_typedef
-- Performing Test CXX_HAS_Wno_unused_local_typedef - Success
-- Performing Test CXX_HAS_Wno_unused_local_typedefs
-- Performing Test CXX_HAS_Wno_unused_local_typedefs - Success
-- Setting default compiler flags and build type.
--
-- Configuring tools...
--
-- Using system packages when I3_PORTS not available
-- Using default site cmake dir of /usr/share/fizzicks/cmake
--
-- root
-- + TObject.h found at /usr/local/Cellar/root/5.34.18/include/root
-- + /usr/local/Cellar/root/5.34.18/lib/root/libCore.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libCint.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libRIO.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libNet.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libHist.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libGraf.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libGraf3d.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libGpad.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libTree.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libRint.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libPostscript.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libMatrix.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libPhysics.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libMathCore.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libThread.so
-- + /usr/local/Cellar/root/5.34.18/lib/root/libMinuit.so
--
-- Boost
-- Boost version: 1.56.0
-- Found the following Boost libraries:
-- python
-- system
-- signals
-- thread
-- date_time
-- serialization
-- filesystem
-- program_options
-- regex
-- iostreams
--
-- boostnumpy
-- - boost/numpy.hpp not found in include
-- - boost_numpy
--
-- python
-- + version: Python 2.7.10
-- + binary: /usr/bin/python
-- + includes: /System/Library/Frameworks/Python.framework/Headers
-- + libs: /usr/lib/libpython2.7.dylib
-- + numpy: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include
-- + scipy: FOUND
--
-- blas
-- Looking for dgemm_
-- Looking for dgemm_ - found
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - found
-- Found Threads: TRUE
-- A library with BLAS API found.
--
-- lapack
-- A library with BLAS API found.
-- Looking for cheev_
-- Looking for cheev_ - found
-- A library with LAPACK API found.
--
-- gsl
-- + gsl/gsl_rng.h found at /usr/local/include
-- + /usr/local/lib/libgsl.dylib
--
-- sprng
-- + sprng/sprng.h found at /usr/local/include
-- + /usr/local/lib/libsprng.a
--
-- pal
-- - star/pal.h not found in include
-- - pal
--
-- pal
-- - star/pal.h not found in include
-- - starlink_pal
--
-- sla
-- + slalib/slalib.h found at /usr/local/include
-- + /usr/local/lib/libsla.a
--
-- libarchive
-- + archive.h found at /usr/local/opt/libarchive/include
-- + /usr/local/opt/libarchive/lib/libarchive.dylib
--
-- mysql
-- + mysql/mysql.h found at /usr/local/include
-- + /usr/local/lib/libmysqlclient.dylib
--
-- bdb
-- + db.h found at /usr/include
-- - db
--
-- MPI
-- Could NOT find MPI_C (missing: MPI_C_LIBRARIES MPI_C_INCLUDE_PATH)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIBRARIES MPI_CXX_INCLUDE_PATH)
--
-- suitesparse
-- + cholmod.h found at /usr/local/include
-- + /usr/local/lib/libcamd.a
-- + /usr/local/lib/libccolamd.a
-- + /usr/local/lib/libspqr.a
-- + /usr/local/lib/libcholmod.a
-- + /usr/local/lib/libamd.a
-- + /usr/local/lib/libcolamd.a
-- - tbb
-- + /usr/local/lib/libsuitesparseconfig.a
--
-- suitesparse
-- + suitesparse/cholmod.h found at /usr/local/include
-- + /usr/local/lib/libcamd.a
-- + /usr/local/lib/libccolamd.a
-- + /usr/local/lib/libspqr.a
-- + /usr/local/lib/libcholmod.a
-- + /usr/local/lib/libamd.a
-- + /usr/local/lib/libcolamd.a
-- + /usr/local/lib/libsuitesparseconfig.a
--
-- ZThread
-- - ZThread.h not found in ZTHREAD_INCLUDE_DIR-NOTFOUND
-- - ZThread
--
-- omniORB
-- - omniconfig.h not found in include/omniorb-4.0.7
-- - omnithread
-- - omniORB4
-- - COS4
-- - COSDynamic4
-- - omniCodeSets4
-- - omniDynamic4
--
-- ncurses
-- Looking for wsyncup in /usr/lib/libcurses.dylib
-- Looking for wsyncup in /usr/lib/libcurses.dylib - found
-- Found Curses: /usr/lib/libcurses.dylib
-- + ncurses.h found at /usr/include
-- + libncurses found at /usr/lib/libncurses.dylib
--
-- cdk
-- - cdk/cdk.h not found in include
-- + /usr/local/lib/libcdk.a
--
-- cdk
-- + cdk.h found at /usr/local/include
-- + /usr/local/lib/libcdk.a
--
-- healpix-cxx
-- + healpix_cxx/healpix_map.h found at /usr/local/include
-- + /usr/local/lib/libhealpix_cxx.dylib
--
-- qt4
-- Found OpenGL: /System/Library/Frameworks/OpenGL.framework
-- Found GLUT: -framework GLUT
-- Looking for Q_WS_X11
-- Looking for Q_WS_X11 - not found
-- Looking for Q_WS_WIN
-- Looking for Q_WS_WIN - not found
-- Looking for Q_WS_QWS
-- Looking for Q_WS_QWS - not found
-- Looking for Q_WS_MAC
-- Looking for Q_WS_MAC - found
-- Looking for QT_MAC_USE_COCOA
-- Looking for QT_MAC_USE_COCOA - found
-- Found Qt4: /usr/local/bin/qmake (found suitable version "4.8.6", minimum required is "4.8")
--
-- cfitsio
-- + fitsio.h found at /usr/local/include
-- + /usr/local/lib/libcfitsio.dylib
--
-- hdf5
-- + hdf5.h found at /usr/local/include
-- + /usr/local/lib/libhdf5.dylib
-- + /usr/local/lib/libhdf5_hl.dylib
--
-- minuit2
-- - Minuit2/MnConfig.h not found in /usr/local/Cellar/root/5.34.18/include
-- + /usr/local/lib/libMinuit2.dylib
--
-- minuit2
-- - Minuit2/MnConfig.h not found in include/Minuit2
-- + /usr/local/lib/libMinuit2.dylib
--
-- minuit2
-- - Minuit2/MnConfig.h not found in include/Minuit2-5.24.00
-- + /usr/local/lib/libMinuit2.dylib
--
-- clhep
-- + CLHEP/ClhepVersion.h found at /usr/local/include
-- + /usr/local/lib/libCLHEP.dylib
-- Looking for Geant4 geant4-config program
-- Looking for Geant4 geant4-config program -- not found
-- Looking for Geant4 liblist program
-- Looking for Geant4 liblist program -- not found
--
-- zlib
-- + zlib.h found at /usr/include
-- + /usr/lib/libz.dylib
--
-- OpenCL
-- + Using the OpenCL Framework because we're on Apple
-- + cl.h found at /System/Library/Frameworks/OpenCL.framework/Headers
-- + OpenCL framework found at -framework OpenCL
-- Looking for CL_VERSION_2_0
-- Looking for CL_VERSION_2_0 - not found
-- Looking for CL_VERSION_1_2
-- Looking for CL_VERSION_1_2 - found
--
-- gmp
-- + gmp.h found at /usr/local
-- + /usr/local/lib/libgmp.dylib
--
-- log4cpp
-- + log4cpp/Category.hh found at /usr/local/include
-- + /usr/local/lib/liblog4cpp.dylib
--
-- xml2
-- + libxml/parser.h found at /usr/include/libxml2
-- + /usr/lib/libxml2.dylib
--
-- genie
-- Looking for Genie version
-- Genie not installed.
--
-- zmq
-- - zmq.hpp not found in ZMQ_INCLUDE_DIR-NOTFOUND
-- + /usr/local/lib/libzmq.dylib
--
-- zmq
-- - zmq.hpp not found in include/zmq-
-- + /usr/local/lib/libzmq.dylib
--
-- doxygen
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
--
-- multinest
-- - multinest.h not found in include
-- - multinest
--
-- Configuring projects:
--
-- + WaveCalibrator
-- +-- python [symlinks]
-- + astro
-- +-- python [symlinks]
-- + Using SLALIB
-- +-- astro-pybindings
-- + cmake
-- +-- sphinx-build found, building sphinx documentation
-- + daq-decode
-- +-- python [symlinks]
-- + dataclasses
-- +-- python [symlinks]
-- +-- dataclasses-pybindings
-- + dataio
-- +-- python [symlinks]
-- +-- dataio-pyshovel *not* included (missing urwid python package)
-- +-- dataio-pybindings
-- +-- test_unregistered-pybindings
-- + filter-tools
-- +-- python [symlinks]
-- + hdfwriter
-- +-- python [symlinks]
-- +-- hdfwriter-pybindings
-- + icepick
-- +-- python [symlinks]
-- + icetray
-- +-- libdcap *not* found, omitting optional dcap support
-- +-- python [symlinks]
-- +-- icetray-pybindings
-- +-- icetray_test-pybindings
-- + interfaces
-- +-- interfaces-pybindings
-- + payload-parsing
-- +-- python [symlinks]
-- +-- payload_parsing-pybindings
-- + phys-services
-- +-- python [symlinks]
-- +-- sprng found, adding SPRNGRandomService
-- +-- phys_services-pybindings
-- + rootwriter
-- +-- python [symlinks]
-- +-- rootwriter-pybindings
-- + steamshovel
-- +-- python [symlinks]
-- +-- shovelart-pybindings
-- +-- shovelio-pybindings
-- + tableio
-- +-- python [symlinks]
-- +-- tableio-pybindings
-- Generating env-shell.sh
-- Generating icetray-config
-- Generating tarball_hook.sh
-- Configuring 'gfilt' STL decryptor
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/jbraun/svn/offline-software/build
[jbraun@dyn-8-20:~/svn/offline-software/build 20111]$ make -j 4
Scanning dependencies of target I3Tray.py
Scanning dependencies of target env-check
[ 0%] [ 0%] Generating ../lib/I3Tray.py
Checking build against environment
[ 0%] Built target env-check
[ 0%] Built target I3Tray.py
…
[100%] Built target steamshovel
[jbraun@dyn-8-20:~/svn/offline-software/build 20112]$ ./env-shell.sh
************************************************************************
* *
* W E L C O M E to I C E T R A Y *
* *
* Version offline-software.trunk r142096 *
* *
* You are welcome to visit our Web site *
* http://icecube.umd.edu *
* *
************************************************************************
Icetray environment has:
I3_SRC = /Users/jbraun/svn/offline-software/src
I3_BUILD = /Users/jbraun/svn/offline-software/build
I3_TESTDATA = /Users/jbraun/data/i3-test-data
Python = Python 2.7.10
[jbraun@dyn-8-20:~/svn/offline-software/build 20099]$ steamshovel
Assertion failed: (!registered), function set_key, file /Users/jbraun/svn/offline-software/src/icetray/public/icetray/i3_extended_type_info.h, line 42.
Abort trap: 6
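The `Assertion failed: (!registered)` abort in `set_key` indicates that the same extended-type-info key is being registered a second time, a symptom commonly caused by one shared library being loaded twice (for example under two different paths). A minimal Python stand-in for that register-once pattern; the real logic is C++ in `i3_extended_type_info.h` and is not shown in the ticket, so every name below is illustrative:

```python
# Sketch of the failure mode behind "Assertion failed: (!registered)":
# the same type key registered twice, as happens when one shared library
# is loaded under two different paths. Python stand-in only.
class TypeRegistry:
    def __init__(self):
        self._keys = {}

    def set_key(self, type_name, key):
        # Mirrors assert(!registered): a second registration is fatal.
        if type_name in self._keys:
            raise AssertionError("type %r already registered" % type_name)
        self._keys[type_name] = key

registry = TypeRegistry()
registry.set_key("I3Particle", 1)      # first library load: fine
try:
    registry.set_key("I3Particle", 1)  # second load: aborts, as in the ticket
except AssertionError as exc:
    print("would abort:", exc)
```

This is why the usual remedy for this abort is to clean out stale builds or duplicate library paths rather than to patch steamshovel itself.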
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1556">https://code.icecube.wisc.edu/projects/icecube/ticket/1556</a>, reported by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-18T19:42:45",
"_ts": "1455824565964210",
"description": " [jbraun@dyn-8-20:~/svn/offline-software/build 20110]$ cmake ../src\n-- The C compiler identification is Clang 7.0.2\n-- The CXX compiler identification is Clang 7.0.2\n-- Check for working C compiler: /usr/bin/cc\n-- Check for working C compiler: /usr/bin/cc -- works\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working CXX compiler: /usr/bin/c++\n-- Check for working CXX compiler: /usr/bin/c++ -- works\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- \n-- IceCube Configuration starting \n-- \n-- OSTYPE = Darwin \n-- OSVERSION = 15.2.0 \n-- ARCH = i386 \n-- BUILDNAME = Darwin-15.2.0/i386/LLVM-7.0.2 \n-- TOOLSET = LLVM-7.0.2/i386/ \n-- HOSTNAME = dyn-8-20.icecube.wisc.edu \n-- CMake path = /usr/local/Cellar/cmake/2.8.12.2/bin/cmake\n-- CMake version = 2.8.12.2\n-- SVN_REVISION = 142096 \n-- SVN_URL = http://code.icecube.wisc.edu/svn/meta-projects/offline-software/trunk \n-- META_PROJECT = offline-software.trunk \n-- \n-- Setting compiler, compile drivers, and linker \n-- \n-- distcc not found.\n-- ccache not found.\n-- Using gfilt stl decryptor\n-- Performing Test CXX_HAS_Wno_deprecated\n-- Performing Test CXX_HAS_Wno_deprecated - Success\n-- Performing Test CXX_HAS_Wno_unused_variable\n-- Performing Test CXX_HAS_Wno_unused_variable - Success\n-- Performing Test CXX_HAS_Wno_unused_local_typedef\n-- Performing Test CXX_HAS_Wno_unused_local_typedef - Success\n-- Performing Test CXX_HAS_Wno_unused_local_typedefs\n-- Performing Test CXX_HAS_Wno_unused_local_typedefs - Success\n-- Setting default compiler flags and build type.\n-- \n-- Configuring tools... 
\n-- \n-- Using system packages when I3_PORTS not available\n-- Using default site cmake dir of /usr/share/fizzicks/cmake\n-- \n-- root \n-- + TObject.h found at /usr/local/Cellar/root/5.34.18/include/root\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libCore.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libCint.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libRIO.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libNet.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libHist.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libGraf.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libGraf3d.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libGpad.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libTree.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libRint.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libPostscript.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libMatrix.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libPhysics.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libMathCore.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libThread.so\n-- + /usr/local/Cellar/root/5.34.18/lib/root/libMinuit.so\n-- \n-- Boost \n-- Boost version: 1.56.0\n-- Found the following Boost libraries:\n-- python\n-- system\n-- signals\n-- thread\n-- date_time\n-- serialization\n-- filesystem\n-- program_options\n-- regex\n-- iostreams\n-- \n-- boostnumpy \n-- - boost/numpy.hpp not found in include\n-- - boost_numpy\n-- \n-- python \n-- + version: Python 2.7.10\n-- + binary: /usr/bin/python\n-- + includes: /System/Library/Frameworks/Python.framework/Headers\n-- + libs: /usr/lib/libpython2.7.dylib\n-- + numpy: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include\n-- + scipy: FOUND\n-- \n-- blas \n-- Looking for dgemm_\n-- Looking for dgemm_ - found\n-- Looking for include file pthread.h\n-- Looking for include file pthread.h - found\n-- Looking for pthread_create\n-- Looking for pthread_create - found\n-- Found Threads: TRUE 
\n-- A library with BLAS API found.\n-- \n-- lapack \n-- A library with BLAS API found.\n-- Looking for cheev_\n-- Looking for cheev_ - found\n-- A library with LAPACK API found.\n-- \n-- gsl \n-- + gsl/gsl_rng.h found at /usr/local/include\n-- + /usr/local/lib/libgsl.dylib\n-- \n-- sprng \n-- + sprng/sprng.h found at /usr/local/include\n-- + /usr/local/lib/libsprng.a\n-- \n-- pal \n-- - star/pal.h not found in include\n-- - pal\n-- \n-- pal \n-- - star/pal.h not found in include\n-- - starlink_pal\n-- \n-- sla \n-- + slalib/slalib.h found at /usr/local/include\n-- + /usr/local/lib/libsla.a\n-- \n-- libarchive \n-- + archive.h found at /usr/local/opt/libarchive/include\n-- + /usr/local/opt/libarchive/lib/libarchive.dylib\n-- \n-- mysql \n-- + mysql/mysql.h found at /usr/local/include\n-- + /usr/local/lib/libmysqlclient.dylib\n-- \n-- bdb \n-- + db.h found at /usr/include\n-- - db\n-- \n-- MPI \n-- Could NOT find MPI_C (missing: MPI_C_LIBRARIES MPI_C_INCLUDE_PATH) \n-- Could NOT find MPI_CXX (missing: MPI_CXX_LIBRARIES MPI_CXX_INCLUDE_PATH) \n-- \n-- suitesparse \n-- + cholmod.h found at /usr/local/include\n-- + /usr/local/lib/libcamd.a\n-- + /usr/local/lib/libccolamd.a\n-- + /usr/local/lib/libspqr.a\n-- + /usr/local/lib/libcholmod.a\n-- + /usr/local/lib/libamd.a\n-- + /usr/local/lib/libcolamd.a\n-- - tbb\n-- + /usr/local/lib/libsuitesparseconfig.a\n-- \n-- suitesparse \n-- + suitesparse/cholmod.h found at /usr/local/include\n-- + /usr/local/lib/libcamd.a\n-- + /usr/local/lib/libccolamd.a\n-- + /usr/local/lib/libspqr.a\n-- + /usr/local/lib/libcholmod.a\n-- + /usr/local/lib/libamd.a\n-- + /usr/local/lib/libcolamd.a\n-- + /usr/local/lib/libsuitesparseconfig.a\n-- \n-- ZThread \n-- - ZThread.h not found in ZTHREAD_INCLUDE_DIR-NOTFOUND\n-- - ZThread\n-- \n-- omniORB \n-- - omniconfig.h not found in include/omniorb-4.0.7\n-- - omnithread\n-- - omniORB4\n-- - COS4\n-- - COSDynamic4\n-- - omniCodeSets4\n-- - omniDynamic4\n-- \n-- ncurses \n-- Looking for wsyncup in 
/usr/lib/libcurses.dylib\n-- Looking for wsyncup in /usr/lib/libcurses.dylib - found\n-- Found Curses: /usr/lib/libcurses.dylib \n-- + ncurses.h found at /usr/include\n-- + libncurses found at /usr/lib/libncurses.dylib\n-- \n-- cdk \n-- - cdk/cdk.h not found in include\n-- + /usr/local/lib/libcdk.a\n-- \n-- cdk \n-- + cdk.h found at /usr/local/include\n-- + /usr/local/lib/libcdk.a\n-- \n-- healpix-cxx \n-- + healpix_cxx/healpix_map.h found at /usr/local/include\n-- + /usr/local/lib/libhealpix_cxx.dylib\n-- \n-- qt4 \n-- Found OpenGL: /System/Library/Frameworks/OpenGL.framework \n-- Found GLUT: -framework GLUT \n-- Looking for Q_WS_X11\n-- Looking for Q_WS_X11 - not found\n-- Looking for Q_WS_WIN\n-- Looking for Q_WS_WIN - not found\n-- Looking for Q_WS_QWS\n-- Looking for Q_WS_QWS - not found\n-- Looking for Q_WS_MAC\n-- Looking for Q_WS_MAC - found\n-- Looking for QT_MAC_USE_COCOA\n-- Looking for QT_MAC_USE_COCOA - found\n-- Found Qt4: /usr/local/bin/qmake (found suitable version \"4.8.6\", minimum required is \"4.8\") \n-- \n-- cfitsio \n-- + fitsio.h found at /usr/local/include\n-- + /usr/local/lib/libcfitsio.dylib\n-- \n-- hdf5 \n-- + hdf5.h found at /usr/local/include\n-- + /usr/local/lib/libhdf5.dylib\n-- + /usr/local/lib/libhdf5_hl.dylib\n-- \n-- minuit2 \n-- - Minuit2/MnConfig.h not found in /usr/local/Cellar/root/5.34.18/include\n-- + /usr/local/lib/libMinuit2.dylib\n-- \n-- minuit2 \n-- - Minuit2/MnConfig.h not found in include/Minuit2\n-- + /usr/local/lib/libMinuit2.dylib\n-- \n-- minuit2 \n-- - Minuit2/MnConfig.h not found in include/Minuit2-5.24.00\n-- + /usr/local/lib/libMinuit2.dylib\n-- \n-- clhep \n-- + CLHEP/ClhepVersion.h found at /usr/local/include\n-- + /usr/local/lib/libCLHEP.dylib\n-- Looking for Geant4 geant4-config program\n-- Looking for Geant4 geant4-config program -- not found\n-- Looking for Geant4 liblist program\n-- Looking for Geant4 liblist program -- not found\n-- \n-- zlib \n-- + zlib.h found at /usr/include\n-- + 
/usr/lib/libz.dylib\n-- \n-- OpenCL \n-- + Using the OpenCL Framework because we're on Apple\n-- + cl.h found at /System/Library/Frameworks/OpenCL.framework/Headers\n-- + OpenCL framework found at -framework OpenCL\n-- Looking for CL_VERSION_2_0\n-- Looking for CL_VERSION_2_0 - not found\n-- Looking for CL_VERSION_1_2\n-- Looking for CL_VERSION_1_2 - found\n-- \n-- gmp \n-- + gmp.h found at /usr/local\n-- + /usr/local/lib/libgmp.dylib\n-- \n-- log4cpp \n-- + log4cpp/Category.hh found at /usr/local/include\n-- + /usr/local/lib/liblog4cpp.dylib\n-- \n-- xml2 \n-- + libxml/parser.h found at /usr/include/libxml2\n-- + /usr/lib/libxml2.dylib\n-- \n-- genie \n-- Looking for Genie version\n-- Genie not installed.\n-- \n-- zmq \n-- - zmq.hpp not found in ZMQ_INCLUDE_DIR-NOTFOUND\n-- + /usr/local/lib/libzmq.dylib\n-- \n-- zmq \n-- - zmq.hpp not found in include/zmq-\n-- + /usr/local/lib/libzmq.dylib\n-- \n-- doxygen \n-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE) \n-- \n-- multinest \n-- - multinest.h not found in include\n-- - multinest\n-- \n-- Configuring projects: \n-- \n-- + WaveCalibrator\n-- +-- python [symlinks] \n-- + astro\n-- +-- python [symlinks] \n-- + Using SLALIB \n-- +-- astro-pybindings \n-- + cmake\n-- +-- sphinx-build found, building sphinx documentation \n-- + daq-decode\n-- +-- python [symlinks] \n-- + dataclasses\n-- +-- python [symlinks] \n-- +-- dataclasses-pybindings \n-- + dataio\n-- +-- python [symlinks] \n-- +-- dataio-pyshovel *not* included (missing urwid python package) \n-- +-- dataio-pybindings \n-- +-- test_unregistered-pybindings \n-- + filter-tools\n-- +-- python [symlinks] \n-- + hdfwriter\n-- +-- python [symlinks] \n-- +-- hdfwriter-pybindings \n-- + icepick\n-- +-- python [symlinks] \n-- + icetray\n-- +-- libdcap *not* found, omitting optional dcap support \n-- +-- python [symlinks] \n-- +-- icetray-pybindings \n-- +-- icetray_test-pybindings \n-- + interfaces\n-- +-- interfaces-pybindings \n-- + payload-parsing\n-- +-- 
python [symlinks] \n-- +-- payload_parsing-pybindings \n-- + phys-services\n-- +-- python [symlinks] \n-- +-- sprng found, adding SPRNGRandomService \n-- +-- phys_services-pybindings \n-- + rootwriter\n-- +-- python [symlinks] \n-- +-- rootwriter-pybindings \n-- + steamshovel\n-- +-- python [symlinks] \n-- +-- shovelart-pybindings \n-- +-- shovelio-pybindings \n-- + tableio\n-- +-- python [symlinks] \n-- +-- tableio-pybindings \n-- Generating env-shell.sh\n-- Generating icetray-config\n-- Generating tarball_hook.sh\n-- Configuring 'gfilt' STL decryptor\n-- Configuring done\n-- Generating done\n-- Build files have been written to: /Users/jbraun/svn/offline-software/build\n [jbraun@dyn-8-20:~/svn/offline-software/build 20111]$ make -j 4\nScanning dependencies of target I3Tray.py\nScanning dependencies of target env-check\n[ 0%] [ 0%] Generating ../lib/I3Tray.py\nChecking build against environment\n[ 0%] Built target env-check\n[ 0%] Built target I3Tray.py\n\u2026\n[100%] Built target steamshovel\n [jbraun@dyn-8-20:~/svn/offline-software/build 20112]$ ./env-shell.sh \n************************************************************************\n* *\n* W E L C O M E to I C E T R A Y *\n* *\n* Version offline-software.trunk r142096 *\n* *\n* You are welcome to visit our Web site *\n* http://icecube.umd.edu *\n* *\n************************************************************************\nIcetray environment has:\n I3_SRC = /Users/jbraun/svn/offline-software/src\n I3_BUILD = /Users/jbraun/svn/offline-software/build\n I3_TESTDATA = /Users/jbraun/data/i3-test-data\n Python = Python 2.7.10\n [jbraun@dyn-8-20:~/svn/offline-software/build 20099]$ steamshovel\nAssertion failed: (!registered), function set_key, file /Users/jbraun/svn/offline-software/src/icetray/public/icetray/i3_extended_type_info.h, line 42.\nAbort trap: 6",
"reporter": "jbraun",
"cc": "",
"resolution": "invalid",
"time": "2016-02-18T18:17:37",
"component": "combo core",
"summary": "Error running steamshovel, dataio-shovel in El Capitan",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
-- Looking for Geant4 geant4-config program -- not found
-- Looking for Geant4 liblist program
-- Looking for Geant4 liblist program -- not found
--
-- zlib
-- + zlib.h found at /usr/include
-- + /usr/lib/libz.dylib
--
-- OpenCL
-- + Using the OpenCL Framework because we're on Apple
-- + cl.h found at /System/Library/Frameworks/OpenCL.framework/Headers
-- + OpenCL framework found at -framework OpenCL
-- Looking for CL_VERSION_2_0
-- Looking for CL_VERSION_2_0 - not found
-- Looking for CL_VERSION_1_2
-- Looking for CL_VERSION_1_2 - found
--
-- gmp
-- + gmp.h found at /usr/local
-- + /usr/local/lib/libgmp.dylib
--
-- log4cpp
-- + log4cpp/Category.hh found at /usr/local/include
-- + /usr/local/lib/liblog4cpp.dylib
--
-- xml2
-- + libxml/parser.h found at /usr/include/libxml2
-- + /usr/lib/libxml2.dylib
--
-- genie
-- Looking for Genie version
-- Genie not installed.
--
-- zmq
-- - zmq.hpp not found in ZMQ_INCLUDE_DIR-NOTFOUND
-- + /usr/local/lib/libzmq.dylib
--
-- zmq
-- - zmq.hpp not found in include/zmq-
-- + /usr/local/lib/libzmq.dylib
--
-- doxygen
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
--
-- multinest
-- - multinest.h not found in include
-- - multinest
--
-- Configuring projects:
--
-- + WaveCalibrator
-- +-- python [symlinks]
-- + astro
-- +-- python [symlinks]
-- + Using SLALIB
-- +-- astro-pybindings
-- + cmake
-- +-- sphinx-build found, building sphinx documentation
-- + daq-decode
-- +-- python [symlinks]
-- + dataclasses
-- +-- python [symlinks]
-- +-- dataclasses-pybindings
-- + dataio
-- +-- python [symlinks]
-- +-- dataio-pyshovel *not* included (missing urwid python package)
-- +-- dataio-pybindings
-- +-- test_unregistered-pybindings
-- + filter-tools
-- +-- python [symlinks]
-- + hdfwriter
-- +-- python [symlinks]
-- +-- hdfwriter-pybindings
-- + icepick
-- +-- python [symlinks]
-- + icetray
-- +-- libdcap *not* found, omitting optional dcap support
-- +-- python [symlinks]
-- +-- icetray-pybindings
-- +-- icetray_test-pybindings
-- + interfaces
-- +-- interfaces-pybindings
-- + payload-parsing
-- +-- python [symlinks]
-- +-- payload_parsing-pybindings
-- + phys-services
-- +-- python [symlinks]
-- +-- sprng found, adding SPRNGRandomService
-- +-- phys_services-pybindings
-- + rootwriter
-- +-- python [symlinks]
-- +-- rootwriter-pybindings
-- + steamshovel
-- +-- python [symlinks]
-- +-- shovelart-pybindings
-- +-- shovelio-pybindings
-- + tableio
-- +-- python [symlinks]
-- +-- tableio-pybindings
-- Generating env-shell.sh
-- Generating icetray-config
-- Generating tarball_hook.sh
-- Configuring 'gfilt' STL decryptor
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/jbraun/svn/offline-software/build
[jbraun@dyn-8-20:~/svn/offline-software/build 20111]$ make -j 4
Scanning dependencies of target I3Tray.py
Scanning dependencies of target env-check
[ 0%] [ 0%] Generating ../lib/I3Tray.py
Checking build against environment
[ 0%] Built target env-check
[ 0%] Built target I3Tray.py
…
[100%] Built target steamshovel
[jbraun@dyn-8-20:~/svn/offline-software/build 20112]$ ./env-shell.sh
************************************************************************
* *
* W E L C O M E to I C E T R A Y *
* *
* Version offline-software.trunk r142096 *
* *
* You are welcome to visit our Web site *
* http://icecube.umd.edu *
* *
************************************************************************
Icetray environment has:
I3_SRC = /Users/jbraun/svn/offline-software/src
I3_BUILD = /Users/jbraun/svn/offline-software/build
I3_TESTDATA = /Users/jbraun/data/i3-test-data
Python = Python 2.7.10
[jbraun@dyn-8-20:~/svn/offline-software/build 20099]$ steamshovel
Assertion failed: (!registered), function set_key, file /Users/jbraun/svn/offline-software/src/icetray/public/icetray/i3_extended_type_info.h, line 42.
Abort trap: 6
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1556">https://code.icecube.wisc.edu/projects/icecube/ticket/1556</a>, reported by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-18T19:42:45",
"_ts": "1455824565964210",
"reporter": "jbraun",
"cc": "",
"resolution": "invalid",
"time": "2016-02-18T18:17:37",
"component": "combo core",
"summary": "Error running steamshovel, dataio-shovel in El Capitan",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
125,788 | 26,727,845,356 | IssuesEvent | 2023-01-29 22:56:37 | Plant-Coach/plant_coach_be | https://api.github.com/repos/Plant-Coach/plant_coach_be | closed | Refactor: Create Garden Plant process | enhancement code refactor | - [x] Use callbacks instead of the manual create method currently used.
- [x] Debug this issue with the `seeds.rb` file. | 1.0 | Refactor: Create Garden Plant process - - [x] Use callbacks instead of the manual create method currently used.
- [x] Debug this issue with the `seeds.rb` file. | non_defect | refactor create garden plant process use callbacks instead of the manual create method currently used debug this issue with the seeds rb file | 0 |
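The refactor in the record above — replacing a manual create method with model callbacks — is a Rails pattern (`before_save` and friends). A rough, hypothetical Python analogue of the same idea, where setup logic moves out of the caller and into a hook that the model runs on every save (names are illustrative, not the actual plant_coach_be code):

```python
class GardenPlant:
    """Hypothetical model: derived state is set in a save hook, not by callers."""

    def __init__(self, name):
        self.name = name
        self.schedule = None

    def _before_save(self):
        # Callback: derive dependent state automatically, analogous to a
        # Rails before_save replacing a hand-written create method.
        self.schedule = f"water {self.name} weekly"

    def save(self):
        self._before_save()  # the hook fires on every save path
        return self          # persistence itself is out of scope here


plant = GardenPlant("basil").save()
print(plant.schedule)  # -> water basil weekly
```

The payoff is that no caller (including a `seeds.rb`-style loader) can forget the setup step, because it is attached to the save itself.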
49,992 | 13,187,304,154 | IssuesEvent | 2020-08-13 02:59:23 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | Polyplopia time of arrival between neutrino and CR is incorrect (Trac #2415) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2415">https://code.icecube.wisc.edu/ticket/2415</a>, reported by icecube and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T16:26:51",
"description": "Polyplopia creates a positive bias between the time of arrival from the injected neutrino secondaries and the injected CR secondaries. This means that there is a preference to trigger on down-going muons as opposed to the coincident CR+Neutrino event.\n\nhttps://code.icecube.wisc.edu/projects/icecube/browser/IceCube/meta-projects/combo/trunk/polyplopia/private/polyplopia/PolyplopiaUtils.cxx#L23",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1593016011985589",
"component": "combo core",
"summary": "Polyplopia time of arrival between neutrino and CR is incorrect",
"priority": "critical",
"keywords": "",
"time": "2020-03-05T20:41:45",
"milestone": "Autumnal Equinox 2020",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Polyplopia time of arrival between neutrino and CR is incorrect (Trac #2415) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2415">https://code.icecube.wisc.edu/ticket/2415</a>, reported by icecube and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T16:26:51",
"description": "Polyplopia creates a positive bias between the time of arrival from the injected neutrino secondaries and the injected CR secondaries. This means that there is a preference to trigger on down-going muons as opposed to the coincident CR+Neutrino event.\n\nhttps://code.icecube.wisc.edu/projects/icecube/browser/IceCube/meta-projects/combo/trunk/polyplopia/private/polyplopia/PolyplopiaUtils.cxx#L23",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"_ts": "1593016011985589",
"component": "combo core",
"summary": "Polyplopia time of arrival between neutrino and CR is incorrect",
"priority": "critical",
"keywords": "",
"time": "2020-03-05T20:41:45",
"milestone": "Autumnal Equinox 2020",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| defect | polyplopia time of arrival between neutrino and cr is incorrect trac migrated from json status closed changetime description polyplopia creates a positive bias between the time of arrival from the injected neutrino secondaries and the injected cr secondaries this means that there is a preference to trigger on down going muons as opposed to the coincident cr neutrino event n n reporter icecube cc resolution fixed ts component combo core summary polyplopia time of arrival between neutrino and cr is incorrect priority critical keywords time milestone autumnal equinox owner juancarlos type defect | 1 |
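The description in the record above — a positive bias in arrival time between the injected neutrino and the injected CR secondaries — suggests the background event was always offset forward in time relative to the signal. A hypothetical Python sketch of the difference between a biased and an unbiased offset draw (the window size and function names are assumptions for illustration, not Polyplopia's actual API):

```python
import random

WINDOW = 40_000.0  # ns; assumed coincidence window, not Polyplopia's real value

def biased_offset(rng):
    # Bug pattern: the background always arrives AFTER the signal time.
    return rng.uniform(0.0, WINDOW)

def unbiased_offset(rng):
    # Fix pattern: the background is placed uniformly around the signal time,
    # so it is equally likely to lead or trail the neutrino.
    return rng.uniform(-WINDOW / 2.0, WINDOW / 2.0)

rng = random.Random(42)
biased = [biased_offset(rng) for _ in range(10_000)]
rng = random.Random(42)
fair = [unbiased_offset(rng) for _ in range(10_000)]

print(min(biased) >= 0.0)          # True: every CR trails the neutrino
print(any(t < 0.0 for t in fair))  # True: some CRs now lead the neutrino
```

Under the biased draw the detector preferentially triggers on the down-going muon first, which is exactly the preference the ticket reports.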
77,193 | 26,833,530,153 | IssuesEvent | 2023-02-02 17:38:41 | SasView/sasview | https://api.github.com/repos/SasView/sasview | closed | Tool menu re-organisation | Defect | A couple of the things on the tool menu need to be in different places, also it should be called 'Tools' | 1.0 | Tool menu re-organisation - A couple of the things on the tool menu need to be in different places, also it should be called 'Tools' | defect | tool menu re organisation a couple of the things on the tool menu need to be in different places also it should be called tools | 1 |
320,030 | 9,763,797,350 | IssuesEvent | 2019-06-05 14:31:44 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Add ability to require certain content to be deployed to staging (alone) before it can be deployed to live | CI enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
Today, staging is a waypoint to live. If you deploy to live, content will automatically go to staging. You can also deploy to staging alone and then later deploy to live.
Today, there is nothing that forces an author to stop at staging first. For some content, this is important. The ability to go straight to live without first stopping in staging should not be allowed.
**Describe the solution you'd like**
When a user tries to deploy a commit check to see if that commit is in staging. If it's not, do not allow them to select the live option.
Support configuration that allows an admin to determine which paths this rule should be enforced on. Not all content requires the same process rigor.
The same error should be applied at the API. If you attempt a publish to live via the API and any of the paths are a match, the publish should fail if the commits do not already exist in staging.
| 1.0 | [studio] Add ability to require certain content to be deployed to staging (alone) before it can be deployed to live - **Is your feature request related to a problem? Please describe.**
Today, staging is a waypoint to live. If you deploy to live, content will automatically go to staging. You can also deploy to staging alone and then later deploy to live.
Today, there is nothing that forces an author to stop at staging first. For some content, this is important. The ability to go straight to live without first stopping in staging should not be allowed.
**Describe the solution you'd like**
When a user tries to deploy a commit check to see if that commit is in staging. If it's not, do not allow them to select the live option.
Support configuration that allows an admin to determine which paths this rule should be enforced on. Not all content requires the same process rigor.
The same error should be applied at the API. If you attempt a publish to live via the API and any of the paths are a match, the publish should fail if the commits do not already exist in staging.
| non_defect | add ability to require certain content to be deployed to staging alone before it can be deployed to live is your feature request related to a problem please describe today staging is a way point to live if you deploy to live content will automatically go to staging you can also deploy to staging alone and then later deploy to live today there is nothing that forces a author to stop at staging first for some content this is important the ability to go straight to live without first stopping in staging should not be allowed describe the solution you d like when a user tries to deploy a commit check to see if that commit is in staging if it s not do not allow them to select the live option support configuration that allows an admin to determine which paths this rule should be enforced on not all content requires the same process rigor the same error should be applied at the api if you attempt a publish to live via the api and any of the paths are a match the publish should fail if the commits do not already exist in staging | 0 |
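The gate described in the record above — refuse a live publish when any affected path falls under an enforced rule and its commit has not yet been to staging — can be sketched as a small validation function. This is a hypothetical illustration of the check, not Crafter Studio's actual API:

```python
def can_publish_live(paths, commit, staged_commits, enforced_prefixes):
    """Return False when an enforced path's commit would skip staging."""
    for path in paths:
        if any(path.startswith(prefix) for prefix in enforced_prefixes):
            if commit not in staged_commits:
                return False  # this content must stop at staging first
    return True

# Hypothetical state: one commit already in staging, one enforced path prefix.
staged = {"abc123"}
rules = ["/site/website/press/"]

print(can_publish_live(["/site/website/press/q3.xml"], "def456", staged, rules))   # False
print(can_publish_live(["/site/website/press/q3.xml"], "abc123", staged, rules))   # True
print(can_publish_live(["/site/website/blog/post.xml"], "def456", staged, rules))  # True
```

The same check applied at the API layer gives the failure mode the ticket asks for: a live publish fails only when a matched path's commit is absent from staging, while unenforced paths are unaffected.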
68,039 | 21,420,077,616 | IssuesEvent | 2022-04-22 14:47:56 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [Other] No VA.gov Experience Standard for the issue found. (00.00.1) | 508/Accessibility 508-defect-2 collab-cycle-feedback afs-education Staging CCIssue00.00 CC-Dashboard my-education-benefits | ### General Information
#### VFS team name
DGIB - My Education Benefits
#### VFS product name
My Education Benefits
#### VFS feature name
My Education Benefits App
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
No VA.gov Experience Standard for the issue found.
### Issue Details
In Step 4 of 7: Benefit selection, users who are eligible for another benefit are prompted to choose a benefit that they must give up with an expandable additional info component about each benefit. As currently coded, this is presented roughly like this:
```
<input type="radio" id="some-id" ... />
<label for="some-id">
Name of benefit
<div class="form-expanding-group">... Learn more ...</div>
</label>
```
Since the "Learn more" additional info component is inside the `<label>`, expanding it also triggers a selection of the radio button associated with the label. It's possible that a user might accidentally make a selection when they only intended to learn more about that option --- and for blind users who don't have the visual cue of seeing the radio button change, they likely won't realize what happened. Since there is an opportunity to review at the end of the form this doesn't technically violate [WCAG 3.3.4](https://www.w3.org/TR/WCAG21/#error-prevention-legal-financial-data) but since this is a decision that can't be changed after submission we should be extra careful about preventing mistakes.
### Link, screenshot or steps to recreate
1. Navigate to the My Education Benefits app on staging and log in as test user 38.
2. Complete the form up to step 4 of 7.
3. When prompted to give up a benefit, select the first benefit radio button.
4. Click the "Learn more" that follows the second benefit radio button.
5. Note that the second radio button is now selected.
### VA.gov Experience Standard
[Category Number 00, Issue Number 00](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
---
### Platform Recommendation
The "Learn more" additional info component should be moved outside of the `<label>` element and be associated with the `<label>` using aria-describedby. This may also require some extra styling and keyboard testing to make sure the tab order still makes sense, so please reach out in this issue or in Slack to discuss as needed!
Another alternative to consider: Combine all of the "Learn more" information across all of the available options into a single additional info component that appears between the question text and the radio options. That would resolve this issue and have a fairly simple keyboard pattern that's already in use elsewhere.
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket. | 1.0 | [Other] No VA.gov Experience Standard for the issue found. (00.00.1) - ### General Information
#### VFS team name
DGIB - My Education Benefits
#### VFS product name
My Education Benefits
#### VFS feature name
My Education Benefits App
#### Point of Contact/Reviewers
Brian DeConinck (@briandeconinck) - Accessibility
*For more information on how to interpret this ticket, please refer to the [Anatomy of a Staging Review issue ticket](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/Anatomy-of-a-Staging-Review-Issue-ticket.2060320997.html) guidance on Platform Website.
---
### Platform Issue
No VA.gov Experience Standard for the issue found.
### Issue Details
In Step 4 of 7: Benefit selection, users who are eligible for another benefit are prompted to choose a benefit that they must give up with an expandable additional info component about each benefit. As currently coded, this is presented roughly like this:
```
<input type="radio" id="some-id" ... />
<label for="some-id">
Name of benefit
<div class="form-expanding-group">... Learn more ...</div>
</label>
```
Since the "Learn more" additional info component is inside the `<label>`, expanding it also triggers a selection of the radio button associated with the label. It's possible that a user might accidentally make a selection when they only intended to learn more about that option --- and for blind users who don't have the visual cue of seeing the radio button change, they likely won't realize what happened. Since there is an opportunity to review at the end of the form this doesn't technically violate [WCAG 3.3.4](https://www.w3.org/TR/WCAG21/#error-prevention-legal-financial-data) but since this is a decision that can't be changed after submission we should be extra careful about preventing mistakes.
### Link, screenshot or steps to recreate
1. Navigate to the My Education Benefits app on staging and log in as test user 38.
2. Complete the form up to step 4 of 7.
3. When prompted to give up a benefit, select the first benefit radio button.
4. Click the "Learn more" that follows the second benefit radio button.
5. Note that the second radio button is now selected.
### VA.gov Experience Standard
[Category Number 00, Issue Number 00](https://depo-platform-documentation.scrollhelp.site/collaboration-cycle/VA.gov-experience-standards.1683980311.html)
### Other References
---
### Platform Recommendation
The "Learn more" additional info component should be moved outside of the `<label>` element and be associated with the `<label>` using aria-describedby. This may also require some extra styling and keyboard testing to make sure the tab order still makes sense, so please reach out in this issue or in Slack to discuss as needed!
Another alternative to consider: Combine all of the "Learn more" information across all of the available options into a single additional info component that appears between the question text and the radio options. That would resolve this issue and have a fairly simple keyboard pattern that's already in use elsewhere.
### VFS Team Tasks to Complete
- [ ] Comment on the ticket if there are questions or concerns
- [ ] Close the ticket when the issue has been resolved or validated by your Product Owner. If a team has additional questions or needs Platform help validating the issue, please comment in the ticket. | defect | no va gov experience standard for the issue found general information vfs team name dgib my education benefits vfs product name my education benefits vfs feature name my education benefits app point of contact reviewers brian deconinck briandeconinck accessibility for more information on how to interpret this ticket please refer to the guidance on platform website platform issue no va gov experience standard for the issue found issue details in step of benefit selection users who are eligible for another benefit are prompted to choose a benefit that they must give up with an expandable additional info component about each benefit as currently coded this is presented roughly like this name of benefit learn more since the learn more additional info component is inside the expanding it also triggers a selection of the radio button associated with the label it s possible that a user might accidentally make a selection when they only intended to learn more about that option and for blind users who don t have the visual cue of seeing the radio button change they likely won t realize what happened since there is an opportunity to review at the end of the form this doesn t technically violate but since this is a decision that can t be changed after submission we should be extra careful about preventing mistakes link screenshot or steps to recreate navigate to the my education benefits app on staging and log in as test user complete the form up to step of when prompted to give up a benefit select the first benefit radio button click the learn more that follows the second benefit radio button note that the second radio button is now selected va gov experience standard other references platform recommendation the learn more additional info component should be moved 
outside of the element and be associated with the using aria describedby this may also require some extra styling and keyboard testing to make sure the tab order still makes sense so please reach out in this issue or in slack to discuss as needed another alternative to consider combine all of the learn more information across all of the available options into a single additional info component that appears between the question text and the radio options that would resolve this issue and have a fairly simple keyboard pattern that s already in use elsewhere vfs team tasks to complete comment on the ticket if there are questions or concerns close the ticket when the issue has been resolved or validated by your product owner if a team has additional questions or needs platform help validating the issue please comment in the ticket | 1 |
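The restructure recommended in the record above — move the "Learn more" component outside the `<label>` and tie it back with `aria-describedby` — can be checked mechanically. A hypothetical sketch, with the markup simplified to well-formed XHTML so Python's stdlib XML parser can inspect it (ids and class names are assumptions):

```python
import xml.etree.ElementTree as ET

# Recommended shape: the expandable info is a SIBLING of the label,
# associated via aria-describedby instead of being nested inside it.
markup = """
<fieldset>
  <input type="radio" id="benefit-a" name="benefit"/>
  <label for="benefit-a" aria-describedby="benefit-a-info">Name of benefit</label>
  <div id="benefit-a-info" class="form-expanding-group">Learn more text</div>
</fieldset>
"""

root = ET.fromstring(markup)
label = root.find("label")
info = root.find("div")

# The info panel is no longer a descendant of the label, so expanding it
# can no longer trigger the radio selection...
print(label.find("div") is None)                        # True
# ...and the label still points at it for screen readers.
print(label.get("aria-describedby") == info.get("id"))  # True
```

A structural assertion like this could even live in a frontend unit test, guarding against the "Learn more" click re-selecting the radio button.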
25,960 | 4,538,448,679 | IssuesEvent | 2016-09-09 06:54:20 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Type.BaseType - Array, Comparer, EqualityComparer | defect portarelle | ### Steps To Reproduce
```csharp
public class App
{
public static void Main()
{
Assert.AreEqual(typeof(Array).BaseType, typeof(Object), "BaseType of Array should be object");
Assert.AreStrictEqual(typeof(Comparer<object>).BaseType, typeof(object), "BaseType should be correct");
Assert.AreStrictEqual(typeof(EqualityComparer<object>).BaseType, typeof(object), "BaseType should be correct");
}
}
``` | 1.0 | Type.BaseType - Array, Comparer, EqualityComparer - ### Steps To Reproduce
```csharp
public class App
{
public static void Main()
{
Assert.AreEqual(typeof(Array).BaseType, typeof(Object), "BaseType of Array should be object");
Assert.AreStrictEqual(typeof(Comparer<object>).BaseType, typeof(object), "BaseType should be correct");
Assert.AreStrictEqual(typeof(EqualityComparer<object>).BaseType, typeof(object), "BaseType should be correct");
}
}
``` | defect | type basetype array comparer equalitycomparer steps to reproduce csharp public class app public static void main assert areequal typeof array basetype typeof object basetype of array should be object assert arestrictequal typeof comparer basetype typeof object basetype should be correct assert arestrictequal typeof equalitycomparer basetype typeof object basetype should be correct | 1 |
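The C# assertions in the record above test base-type reflection: `Array`, `Comparer<T>`, and `EqualityComparer<T>` should all report `object` as their `BaseType`. Purely to illustrate the concept being tested (this is not Bridge.NET code), Python's reflection analogue walks `__base__` the same way:

```python
class Comparer:  # hypothetical stand-in for System.Collections.Generic.Comparer<T>
    pass

# A built-in "array-like" type: its immediate base is object, like Array in .NET.
print(list.__base__ is object)      # True

# A class with no explicit base likewise reports object as its base type.
print(Comparer.__base__ is object)  # True

# Reflection can walk the whole chain, mirroring repeated BaseType lookups.
print([c.__name__ for c in Comparer.__mro__])  # ['Comparer', 'object']
```

The Bridge defect was that its JavaScript emulation of this metadata returned the wrong base for those three types; the fix makes the transpiled reflection agree with .NET's answers.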
32,049 | 6,691,693,002 | IssuesEvent | 2017-10-09 14:00:15 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | NullPointerException PartitionWideEntryWithPredicateOperation CachedQueryEntry.java:105 | Team: Core To Triage Type: Critical Type: Defect | version
```
INFO: [10.0.0.205]:5701 [HZ] [3.9-SNAPSHOT] Hazelcast Enterprise 3.9-SNAPSHOT (20170925 - 1678248, c2eac8a) starting at [10.0.0.205]:5701
```
/disk1/jenkins/workspace/freeze-all/3.9-SNAPSHOT/2017_09_25-20_32_09/processor/off-load/execute-on-predicate
HzMember1HZ/out.txt
```
Sep 26, 2017 10:02:03 AM com.hazelcast.map.impl.operation.PartitionWideEntryWithPredicateOperation
SEVERE: [10.0.0.205]:5701 [HZ] [3.9-SNAPSHOT] null
java.lang.NullPointerException
at com.hazelcast.query.impl.CachedQueryEntry.getTargetObject(CachedQueryEntry.java:105)
at com.hazelcast.query.impl.QueryableEntry.extractAttributeValue(QueryableEntry.java:81)
at com.hazelcast.query.impl.QueryableEntry.getAttributeValue(QueryableEntry.java:48)
at com.hazelcast.query.impl.predicates.AbstractPredicate.readAttributeValue(AbstractPredicate.java:132)
at com.hazelcast.query.impl.predicates.AbstractPredicate.apply(AbstractPredicate.java:57)
at com.hazelcast.query.impl.predicates.AndPredicate.apply(AndPredicate.java:129)
at com.hazelcast.query.SqlPredicate.apply(SqlPredicate.java:74)
at com.hazelcast.map.impl.operation.EntryOperator.outOfPredicateScope(EntryOperator.java:269)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:183)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:172)
at com.hazelcast.map.impl.operation.PartitionWideEntryOperation.run(PartitionWideEntryOperation.java:84)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:194)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:120)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100)
``` | 1.0 | NullPointerException PartitionWideEntryWithPredicateOperation CachedQueryEntry.java:105 - version
```
INFO: [10.0.0.205]:5701 [HZ] [3.9-SNAPSHOT] Hazelcast Enterprise 3.9-SNAPSHOT (20170925 - 1678248, c2eac8a) starting at [10.0.0.205]:5701
```
/disk1/jenkins/workspace/freeze-all/3.9-SNAPSHOT/2017_09_25-20_32_09/processor/off-load/execute-on-predicate
HzMember1HZ/out.txt
```
Sep 26, 2017 10:02:03 AM com.hazelcast.map.impl.operation.PartitionWideEntryWithPredicateOperation
SEVERE: [10.0.0.205]:5701 [HZ] [3.9-SNAPSHOT] null
java.lang.NullPointerException
at com.hazelcast.query.impl.CachedQueryEntry.getTargetObject(CachedQueryEntry.java:105)
at com.hazelcast.query.impl.QueryableEntry.extractAttributeValue(QueryableEntry.java:81)
at com.hazelcast.query.impl.QueryableEntry.getAttributeValue(QueryableEntry.java:48)
at com.hazelcast.query.impl.predicates.AbstractPredicate.readAttributeValue(AbstractPredicate.java:132)
at com.hazelcast.query.impl.predicates.AbstractPredicate.apply(AbstractPredicate.java:57)
at com.hazelcast.query.impl.predicates.AndPredicate.apply(AndPredicate.java:129)
at com.hazelcast.query.SqlPredicate.apply(SqlPredicate.java:74)
at com.hazelcast.map.impl.operation.EntryOperator.outOfPredicateScope(EntryOperator.java:269)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKeyValueInternal(EntryOperator.java:183)
at com.hazelcast.map.impl.operation.EntryOperator.operateOnKey(EntryOperator.java:172)
at com.hazelcast.map.impl.operation.PartitionWideEntryOperation.run(PartitionWideEntryOperation.java:84)
at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:194)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:120)
at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:100)
``` | defect | nullpointerexception partitionwideentrywithpredicateoperation cachedqueryentry java version info hazelcast enterprise snapshot starting at jenkins workspace freeze all snapshot processor off load execute on predicate out txt sep am com hazelcast map impl operation partitionwideentrywithpredicateoperation severe null java lang nullpointerexception at com hazelcast query impl cachedqueryentry gettargetobject cachedqueryentry java at com hazelcast query impl queryableentry extractattributevalue queryableentry java at com hazelcast query impl queryableentry getattributevalue queryableentry java at com hazelcast query impl predicates abstractpredicate readattributevalue abstractpredicate java at com hazelcast query impl predicates abstractpredicate apply abstractpredicate java at com hazelcast query impl predicates andpredicate apply andpredicate java at com hazelcast query sqlpredicate apply sqlpredicate java at com hazelcast map impl operation entryoperator outofpredicatescope entryoperator java at com hazelcast map impl operation entryoperator operateonkeyvalueinternal entryoperator java at com hazelcast map impl operation entryoperator operateonkey entryoperator java at com hazelcast map impl operation partitionwideentryoperation run partitionwideentryoperation java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread run operationthread java | 1 |
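The stack trace in the record above shows the predicate being applied to an entry whose target object is null inside `getTargetObject`. One defensive pattern — short-circuit entries with no value before attribute extraction ever runs — can be sketched in Python (purely illustrative; this is not Hazelcast's actual fix):

```python
def attribute(entry, name):
    # Mirrors extractAttributeValue: raises if the value object is missing.
    return getattr(entry["value"], name)

class Item:  # hypothetical value type; any attribute-bearing object works
    def __init__(self, kind):
        self.kind = kind

entries = [
    {"key": 1, "value": Item("fern")},
    {"key": 2, "value": None},          # the case that raised the NPE
    {"key": 3, "value": Item("moss")},
]

# Null-guarded predicate evaluation: None values never reach the extractor,
# so the equivalent of the NullPointerException cannot occur.
matches = [e["key"] for e in entries
           if e["value"] is not None and attribute(e, "kind") == "moss"]
print(matches)  # [3]
```

Without the `is not None` guard, evaluating entry 2 would raise — the Python analogue of the `CachedQueryEntry.getTargetObject` failure in the trace.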
16,256 | 2,883,717,764 | IssuesEvent | 2015-06-11 13:46:59 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | Boolean conditions not properly generated in contains | Defect ORM | I have three tables: `books`, `books_languages` and `languages` using a BelongsToMany association.
The `books` and `books_languages` both have a deleted column `TINYINT(1)` used for a soft delete.
When searching the `books` table this works as expected:
```
$this->Books->find()->where(['Books.deleted' => false]);
```
which generates the following sql:
```
SELECT Books.id AS `Books__id`, Books.title AS `Books__title`
FROM books Books
WHERE Books.deleted = 0
```
However doing the same thing in a contain does not correctly convert `false` into a `0` in the sql:
```
$book = $this->Books->find()
->contain(['Languages' => function($query){
return $query->where(['BooksLanguages.deleted' => false]);
}])
```
SQL:
```
SELECT BooksLanguages.language_id AS `BooksLanguages__language_id`, BooksLanguages.id AS `BooksLanguages__id`, BooksLanguages.book_id AS `BooksLanguages__book_id`, BooksLanguages.deleted AS `BooksLanguages__deleted`, Languages.id AS `Languages__id`
FROM languages Languages INNER JOIN books_languages BooksLanguages ON Languages.id = (BooksLanguages.language_id)
WHERE (BooksLanguages.book_id in ('53') AND BooksLanguages.deleted = '')
``` | 1.0 | Boolean conditions not properly generated in contains - I have three tables: `books`, `books_languages` and `languages` using a BelongsToMany association.
The `books` and `books_languages` both have a deleted column `TINYINT(1)` used for a soft delete.
When searching the `books` table this works as expected:
```
$this->Books->find()->where(['Books.deleted' => false]);
```
which generates the following sql:
```
SELECT Books.id AS `Books__id`, Books.title AS `Books__title`
FROM books Books
WHERE Books.deleted = 0
```
However doing the same thing in a contain does not correctly convert `false` into a `0` in the sql:
```
$book = $this->Books->find()
->contain(['Languages' => function($query){
return $query->where(['BooksLanguages.deleted' => false]);
}])
```
SQL:
```
SELECT BooksLanguages.language_id AS `BooksLanguages__language_id`, BooksLanguages.id AS `BooksLanguages__id`, BooksLanguages.book_id AS `BooksLanguages__book_id`, BooksLanguages.deleted AS `BooksLanguages__deleted`, Languages.id AS `Languages__id`
FROM languages Languages INNER JOIN books_languages BooksLanguages ON Languages.id = (BooksLanguages.language_id)
WHERE (BooksLanguages.book_id in ('53') AND BooksLanguages.deleted = '')
``` | defect | boolean conditions not properly generated in contains i have three tables books books languages and languages using a belongstomany association the books and books languages both have a deleted column tinyint used for a soft delete when searching the books table this works as expected this books find where which generates the following sql select books id as books id books title as books title from books books where books deleted however doing the same thing in a contain does not correctly convert false into a in the sql book this books find contain languages function query return query where sql select bookslanguages language id as bookslanguages language id bookslanguages id as bookslanguages id bookslanguages book id as bookslanguages book id bookslanguages deleted as bookslanguages deleted languages id as languages id from languages languages inner join books languages bookslanguages on languages id bookslanguages language id where bookslanguages book id in and bookslanguages deleted | 1 |
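The generated SQL in the record above compares the tinyint column to an empty string (`BooksLanguages.deleted = ''`) instead of `0`, so rows where `deleted = 0` never match. The effect is easy to reproduce with Python's sqlite3, standing in for MySQL here — comparison semantics differ by engine, so treat this only as an illustration of why the bound value's type matters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books_languages (id INTEGER, deleted INTEGER)")
conn.executemany("INSERT INTO books_languages VALUES (?, ?)",
                 [(1, 0), (2, 0), (3, 1)])

# Binding the boolean correctly as an integer finds the two live rows...
ok = conn.execute(
    "SELECT COUNT(*) FROM books_languages WHERE deleted = ?", (0,)).fetchone()[0]
# ...while binding an empty string (the buggy generated SQL) matches nothing.
bad = conn.execute(
    "SELECT COUNT(*) FROM books_languages WHERE deleted = ?", ("",)).fetchone()[0]

print(ok, bad)  # 2 0
```

In the CakePHP case, the top-level `where()` converted `false` to `0`, but the nested `contain()` query skipped that type conversion and bound `''` — producing exactly this silent zero-row result.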
326,633 | 24,095,212,080 | IssuesEvent | 2022-09-19 18:06:26 | CUP-ECS/miniapp-benchmarking | https://api.github.com/repos/CUP-ECS/miniapp-benchmarking | closed | Documentation needed for parameter data file | documentation enhancement | Currently, the `benchmark-runner.py` file reads in a file that contains data on the parameters extracted from CLAMR. The data has to be in a certain format, but this is not documented anywhere. Add documents to the Github wiki and `ReadME.md` file. | 1.0 | Documentation needed for parameter data file - Currently, the `benchmark-runner.py` file reads in a file that contains data on the parameters extracted from CLAMR. The data has to be in a certain format, but this is not documented anywhere. Add documents to the Github wiki and `ReadME.md` file. | non_defect | documentation needed for parameter data file currently the benchmark runner py file reads in a file that contains data on the parameters extracted from clamr the data has to be in a certain format but this is not documented anywhere add documents to the github wiki and readme md file | 0 |
21,878 | 3,574,771,336 | IssuesEvent | 2016-01-27 13:28:21 | h1aji/chmsee | https://api.github.com/repos/h1aji/chmsee | closed | You may need to rebuild the chmsee XPCOM component | auto-migrated Priority-Medium Type-Defect | ```
I meet an problem when i open an chm file.
You may need to rebuild the chmsee XPCOM component
i have exec make in src dir
my environment is centos 6.4
```
Original issue reported on code.google.com by `niujiami...@gmail.com` on 8 Aug 2013 at 1:28 | 1.0 | You may need to rebuild the chmsee XPCOM component - ```
I meet an problem when i open an chm file.
You may need to rebuild the chmsee XPCOM component
i have exec make in src dir
my environment is centos 6.4
```
Original issue reported on code.google.com by `niujiami...@gmail.com` on 8 Aug 2013 at 1:28 | defect | you may need to rebuild the chmsee xpcom component i meet an problem when i open an chm file you may need to rebuild the chmsee xpcom component i have exec make in src dir my environment is centos original issue reported on code google com by niujiami gmail com on aug at | 1 |
15,427 | 5,115,363,515 | IssuesEvent | 2017-01-06 21:34:10 | Microsoft/msphpsql | https://api.github.com/repos/Microsoft/msphpsql | closed | Cannot use transactions if there are any open recordsets | CodePlex | It seems that SQL Server does not allow more than one recordset to be opened
at the same time. So any code doing the something like this will fail:
- open recordset
- open recordset
To solve that limitation the "Multiple Active Result Sets" mode (MARS) was
created and the combination above started to work.
But the MARS mode has some limitations about what can and what cannot be done.
One of these is that we cannot use transactions if there are any open
recordsets. That means that any code doing the following will fail:
- open recordset
- start transaction
Is there any way to get around this?
## Work Item Details
**Original CodePlex Issue:** [Issue 22420](http://sqlsrvphp.codeplex.com/workitem/22420)
**Status:** Proposed
**Reason Closed:** Unassigned
**Assigned to:** Unassigned
**Reported on:** Jun 7, 2013 at 11:09 AM
**Reported by:** luisdev
**Updated on:** Jun 7, 2013 at 11:09 AM
**Updated by:** luisdev
| 1.0 | Cannot use transactions if there are any open recordsets - It seems that SQL Server does not allow more than one recordset to be opened
at the same time. So any code doing the something like this will fail:
- open recordset
- open recordset
To solve that limitation the "Multiple Active Result Sets" mode (MARS) was
created and the combination above started to work.
But the MARS mode has some limitations about what can and what cannot be done.
One of these is that we cannot use transactions if there are any open
recordsets. That means that any code doing the following will fail:
- open recordset
- start transaction
Is there any way to get around this?
## Work Item Details
**Original CodePlex Issue:** [Issue 22420](http://sqlsrvphp.codeplex.com/workitem/22420)
**Status:** Proposed
**Reason Closed:** Unassigned
**Assigned to:** Unassigned
**Reported on:** Jun 7, 2013 at 11:09 AM
**Reported by:** luisdev
**Updated on:** Jun 7, 2013 at 11:09 AM
**Updated by:** luisdev
| non_defect | cannot use transactions if there are any open recordsets it seems that sql server does not allow more than one recordset to be opened at the same time so any code doing the something like this will fail open recordset open recordset to solve that limitation the multiple active result sets mode mars was created and the combination above started to work but the mars mode has some limitations about what can and what cannot be done one of these is that we cannot use transactions if there are any open recordsets that means that any code doing the following will fail open recordset start transaction is there any way to get around this work item details original codeplex issue status proposed reason closed unassigned assigned to unassigned reported on jun at am reported by luisdev updated on jun at am updated by luisdev | 0 |
36,642 | 8,047,817,344 | IssuesEvent | 2018-08-01 02:59:31 | RennurApps/AwareIM-resources | https://api.github.com/repos/RennurApps/AwareIM-resources | opened | REST services Encoding of Attribute Values setting ignores the Parent reference attribute | COM: REST Services I: Defect v8.1 | Build 2467.
When you assign the 'Do not encode' setting to a Parent attribute, the encoding option is ignored, the attribute is still encoded, and it is generated in the JSON parameter string.
The issue with this is that these Aware IM relationship attributes you may not want exposed or are not required for JSON, but are required for your app configuration. | 1.0 | REST services Encoding of Attribute Values setting ignores the Parent reference attribute - Build 2467.
When you assign the 'Do not encode' setting to a Parent attribute, the encoding option is ignored, the attribute is still encoded, and it is generated in the JSON parameter string.
The issue with this is that these Aware IM relationship attributes you may not want exposed or are not required for JSON, but are required for your app configuration. | defect | rest services encoding of attribute values setting ignores the parent reference attribute build when you assign the do not encode setting to a parent attribute the encoding option is ignored the attribute is still encoded and it is generated in the json parameter string the issue with this is that these aware im relationship attributes you may not want exposed or are not required for json but are required for your app configuration | 1 |