Column summary (type, then value range / string-length range / distinct-value count):

| Column | Type | Range / values |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 4 to 112 |
| repo_url | stringlengths | 33 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 970 |
| labels | stringlengths | 4 to 625 |
| body | stringlengths | 3 to 247k |
| index | stringclasses | 9 values |
| text_combine | stringlengths | 96 to 247k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 218k |
| binary_label | int64 | 0 to 1 |
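The sample records below list the same fields one value per line, in the order: Unnamed: 0, id, type, created_at, repo, repo_url, action, title, labels, body, label, text_combine, index, text, binary_label (the label/index positions differ slightly from the summary order above). The `Unnamed: 0` column suggests the dump round-tripped through `pandas.DataFrame.to_csv`. A minimal inspection sketch under that assumption; the file name is hypothetical:

```python
import pandas as pd

# Hypothetical path -- the dump gives no file name.
df = pd.read_csv("github_issues_perf.csv")

# Reproduce the column summary above: dtype plus value/length ranges.
for col in df.columns:
    s = df[col]
    if s.dtype == object:
        lengths = s.astype(str).str.len()
        print(f"{col}: string, lengths {lengths.min()}..{lengths.max()}, "
              f"{s.nunique()} distinct values")
    else:
        print(f"{col}: {s.dtype}, min {s.min()}, max {s.max()}")
```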
764,884
26,821,818,182
IssuesEvent
2023-02-02 10:01:12
Dessia-tech/dessia_common
https://api.github.com/repos/Dessia-tech/dessia_common
closed
deserialization Error
Priority: Critical Status: Done
**Note: for support questions, please use https://nextcloud.dessia.tech/call/hr9z9bif * **I'm submitting a ...** - [X] bug report * **What is the current behavior?** * I have an object A `Component` which has as parameter frame_origin, on this object I have no problem, but on an object B `Assembly_component` which has as parameter object `A` I have a problem with the parameter fram_origin also, in the workflow_run display we have this Assembly_component object and we have no problem there, but when we go to the library and open it I have this error. * **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** Avoid reference to other packages * **What is the expected behavior?** * **What is the motivation / use case for changing the behavior?** * **Possible fixes** * **Please tell us about your environment:** - branch: master - commit: - python version: 3.8 `ERROR :objects : [21/Dec/2022 17:12:17] : deserialization of renault_custom.layout.layout_3d.AssemblyComponent 63a33d24d48e1ac6a45ea28f: Cannot get segment frame_origin from path #/frame_origin in element {'object_class': 'renault_custom.layout.layout_3d.AssemblyComponent', 'package_version': '0.0.3.dev412', 'name': 'Wf10_test_change_20_12', 'log': None, 'i_ground': 0, 'update_position': True, 'lx': 0.9379082058497836, 'ly': 1.3408953347954868, 'lz': 1.0235780578773777, 'components': [{'object_class': 'renault_custom.layout.layout_3d.Component', 'package_version': '0.0.3.dev412', 'name': 'env_box', 'log': None, 'alpha': 0.4, 'volume_model_input': {'object_class': 'volmdlr.core.VolumeModel', 'pack 566569 Traceback: 566570 Traceback (most recent call last): 566571 File "/usr/local/lib/python3.9/dist-packages/dessia_common/breakdown.py", line 95, in get_in_object_from_path 566572 element = extract_segment_from_object(element, segment) 566573 File "/usr/local/lib/python3.9/dist-packages/dessia_common/breakdown.py", line 74, in extract_segment_from_object 566574 raise ExtractionError(message_error) 566575 dessia_common.breakdown.ExtractionError: Cannot extract segment frame_origin from object {'object_class': 'renault_custom.layout.layout_3d.AssemblyComponent', 'package_version': '0.0.3.dev412', 'name': 'Wf10_test_change_20_12', 'log': None, 'i_ground': 0, 'update_position': True, 'lx': 0.9379082058497836, 'ly': 1.3408953347954868, 'lz': 1.0235780578773777, 'components': [{'object_class': 'renault_custom.layout.layout_3d.Component', 'package_version': '0.0.3.dev412', 'name': 'env_box', 'log': None, 'alpha': 0.4, 'volume_model_input': {'object_class': 'volmdlr.core.VolumeModel', 'pack`
1.0
deserialization Error - **Note: for support questions, please use https://nextcloud.dessia.tech/call/hr9z9bif * **I'm submitting a ...** - [X] bug report * **What is the current behavior?** * I have an object A `Component` which has as parameter frame_origin, on this object I have no problem, but on an object B `Assembly_component` which has as parameter object `A` I have a problem with the parameter fram_origin also, in the workflow_run display we have this Assembly_component object and we have no problem there, but when we go to the library and open it I have this error. * **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** Avoid reference to other packages * **What is the expected behavior?** * **What is the motivation / use case for changing the behavior?** * **Possible fixes** * **Please tell us about your environment:** - branch: master - commit: - python version: 3.8 `ERROR :objects : [21/Dec/2022 17:12:17] : deserialization of renault_custom.layout.layout_3d.AssemblyComponent 63a33d24d48e1ac6a45ea28f: Cannot get segment frame_origin from path #/frame_origin in element {'object_class': 'renault_custom.layout.layout_3d.AssemblyComponent', 'package_version': '0.0.3.dev412', 'name': 'Wf10_test_change_20_12', 'log': None, 'i_ground': 0, 'update_position': True, 'lx': 0.9379082058497836, 'ly': 1.3408953347954868, 'lz': 1.0235780578773777, 'components': [{'object_class': 'renault_custom.layout.layout_3d.Component', 'package_version': '0.0.3.dev412', 'name': 'env_box', 'log': None, 'alpha': 0.4, 'volume_model_input': {'object_class': 'volmdlr.core.VolumeModel', 'pack 566569 Traceback: 566570 Traceback (most recent call last): 566571 File "/usr/local/lib/python3.9/dist-packages/dessia_common/breakdown.py", line 95, in get_in_object_from_path 566572 element = extract_segment_from_object(element, segment) 566573 File "/usr/local/lib/python3.9/dist-packages/dessia_common/breakdown.py", line 74, in extract_segment_from_object 566574 raise ExtractionError(message_error) 566575 dessia_common.breakdown.ExtractionError: Cannot extract segment frame_origin from object {'object_class': 'renault_custom.layout.layout_3d.AssemblyComponent', 'package_version': '0.0.3.dev412', 'name': 'Wf10_test_change_20_12', 'log': None, 'i_ground': 0, 'update_position': True, 'lx': 0.9379082058497836, 'ly': 1.3408953347954868, 'lz': 1.0235780578773777, 'components': [{'object_class': 'renault_custom.layout.layout_3d.Component', 'package_version': '0.0.3.dev412', 'name': 'env_box', 'log': None, 'alpha': 0.4, 'volume_model_input': {'object_class': 'volmdlr.core.VolumeModel', 'pack`
non_perf
deserialization error note for support questions please use i m submitting a bug report what is the current behavior i have an object a component which has as parameter frame origin on this object i have no problem but on an object b assembly component which has as parameter object a i have a problem with the parameter fram origin also in the workflow run display we have this assembly component object and we have no problem there but when we go to the library and open it i have this error if the current behavior is a bug please provide the steps to reproduce and if possible a minimal demo of the problem avoid reference to other packages what is the expected behavior what is the motivation use case for changing the behavior possible fixes please tell us about your environment branch master commit python version error objects deserialization of renault custom layout layout assemblycomponent cannot get segment frame origin from path frame origin in element object class renault custom layout layout assemblycomponent package version name test change log none i ground update position true lx ly lz components object class renault custom layout layout component package version name env box log none alpha volume model input object class volmdlr core volumemodel pack traceback traceback most recent call last file usr local lib dist packages dessia common breakdown py line in get in object from path element extract segment from object element segment file usr local lib dist packages dessia common breakdown py line in extract segment from object raise extractionerror message error dessia common breakdown extractionerror cannot extract segment frame origin from object object class renault custom layout layout assemblycomponent package version name test change log none i ground update position true lx ly lz components object class renault custom layout layout component package version name env box log none alpha volume model input object class volmdlr core volumemodel pack
0
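Two derived columns are visible in the record above: `text_combine` is simply `title + " - " + body`, and `text` looks like a normalized version of `text_combine` (lowercased, URLs and digit-bearing tokens dropped, punctuation and underscores collapsed to spaces); `binary_label` also mirrors `index` in every record shown (perf -> 1, non_perf -> 0). A rough reconstruction of that normalization; this is a guess from the visible pairs, not the dataset's actual pipeline:

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the `text` column from `text_combine`.

    Reconstructed by eyeballing the records in this dump; the real
    preprocessing code is not shown here, so treat this as a guess.
    """
    t = text_combine.lower()
    t = re.sub(r"https?://\S+", " ", t)     # URLs vanish entirely
    tokens = re.split(r"[^a-z0-9]+", t)     # split on punctuation, markdown, underscores
    # Tokens containing digits are dropped outright ("3.8" -> gone, "layout_3d" -> "layout").
    return " ".join(w for w in tokens if w and not any(c.isdigit() for c in w))

sample = ("deserialization Error - **Note: for support questions, "
          "please use https://nextcloud.dessia.tech/call/hr9z9bif")
print(normalize(sample))
# deserialization error note for support questions please use
```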
39,412
19,911,705,556
IssuesEvent
2022-01-25 17:49:09
joepio/atomic-data-rust
https://api.github.com/repos/joepio/atomic-data-rust
closed
Collection caching
performance lib
Collections are currently dynamic resources, which means that they are fully calculated when a user sends a request. That works fine, but it comes at a performance cost, since the DB must be queried. How to cache this? How does this interact with the `get_extended_resource` function? How to invalidate the cache? Let's discuss some considerations. Collections can be sorted and filtered by adding query params. These of course change the dynamic properties such as `members` and `total_count`. These should be cached seperately. Since all changes _should be done_ using Commits, we can perform cache invalidations while handling Commits. How does the Commit handler know which resources should be invalidated? For example, let's say I remove the `firstName` property with the `john` value from some `person` Resource. The person first appeared in the collections of people named `john`, but this collection should now be invalidated. ## Invalidation approaches ### Invalidate when any attribute of a resource chages When a Collection iterates over its members, it adds the subject of the collection (including query params) to a K/V `incomingLinks` store where each K is a subject, and each V stands for an array of subjects that link to it. When a commit is applied to resource X, it takes subject X and opens the `incomingLinks` instance of that X. It then proceeds to invalidate all the V items. This will invalidate many collections that could very well result in _exactly the same members_, when re-run. ### Use TPF index / cache #14 If we build an index for all values, most of the expensive part is solve. It just leaves sorting - which still is expensive.
True
Collection caching - Collections are currently dynamic resources, which means that they are fully calculated when a user sends a request. That works fine, but it comes at a performance cost, since the DB must be queried. How to cache this? How does this interact with the `get_extended_resource` function? How to invalidate the cache? Let's discuss some considerations. Collections can be sorted and filtered by adding query params. These of course change the dynamic properties such as `members` and `total_count`. These should be cached seperately. Since all changes _should be done_ using Commits, we can perform cache invalidations while handling Commits. How does the Commit handler know which resources should be invalidated? For example, let's say I remove the `firstName` property with the `john` value from some `person` Resource. The person first appeared in the collections of people named `john`, but this collection should now be invalidated. ## Invalidation approaches ### Invalidate when any attribute of a resource chages When a Collection iterates over its members, it adds the subject of the collection (including query params) to a K/V `incomingLinks` store where each K is a subject, and each V stands for an array of subjects that link to it. When a commit is applied to resource X, it takes subject X and opens the `incomingLinks` instance of that X. It then proceeds to invalidate all the V items. This will invalidate many collections that could very well result in _exactly the same members_, when re-run. ### Use TPF index / cache #14 If we build an index for all values, most of the expensive part is solve. It just leaves sorting - which still is expensive.
perf
collection caching collections are currently dynamic resources which means that they are fully calculated when a user sends a request that works fine but it comes at a performance cost since the db must be queried how to cache this how does this interact with the get extended resource function how to invalidate the cache let s discuss some considerations collections can be sorted and filtered by adding query params these of course change the dynamic properties such as members and total count these should be cached seperately since all changes should be done using commits we can perform cache invalidations while handling commits how does the commit handler know which resources should be invalidated for example let s say i remove the firstname property with the john value from some person resource the person first appeared in the collections of people named john but this collection should now be invalidated invalidation approaches invalidate when any attribute of a resource chages when a collection iterates over its members it adds the subject of the collection including query params to a k v incominglinks store where each k is a subject and each v stands for an array of subjects that link to it when a commit is applied to resource x it takes subject x and opens the incominglinks instance of that x it then proceeds to invalidate all the v items this will invalidate many collections that could very well result in exactly the same members when re run use tpf index cache if we build an index for all values most of the expensive part is solve it just leaves sorting which still is expensive
1
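The first invalidation approach in the record above (the `incomingLinks` K/V store) is a reverse index from each member subject to the collections that iterated over it. A minimal Python sketch of that bookkeeping; the names are mine, not atomic-data-rust's (which is Rust):

```python
from collections import defaultdict

class CollectionCache:
    def __init__(self):
        self.cached = {}                        # collection subject (incl. query params) -> members
        self.incoming_links = defaultdict(set)  # member subject -> collections that listed it

    def store(self, collection_subject, members):
        self.cached[collection_subject] = members
        for member in members:
            self.incoming_links[member].add(collection_subject)

    def on_commit(self, subject):
        # Invalidate every cached collection that iterated over `subject`,
        # even if re-running it would yield exactly the same members --
        # the over-invalidation the issue calls out.
        for collection in self.incoming_links.pop(subject, set()):
            self.cached.pop(collection, None)

cache = CollectionCache()
cache.store("Collection/people?name=john", ["person/1", "person/2"])
cache.on_commit("person/1")      # a Commit touched person/1 ...
print(cache.cached)              # {} -- the collection was invalidated
```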
29,565
14,174,935,893
IssuesEvent
2020-11-12 20:44:01
dequelabs/axe-core
https://api.github.com/repos/dequelabs/axe-core
closed
Version of axe-core without polyfills
Task: Release Notes Item feat performance
To help reduce our bundled size, we should release a version of axe-core without our polyfills. With 3.4.0, we added 60KB minified to our codebase, mostly from the addition of [core-js Unit32Array polyfills](https://github.com/GoogleChrome/lighthouse/pull/10056#issuecomment-562360821) needed for the ligature icon detection. This can get us under size more quickly while we determine a better long term solution.
True
Version of axe-core without polyfills - To help reduce our bundled size, we should release a version of axe-core without our polyfills. With 3.4.0, we added 60KB minified to our codebase, mostly from the addition of [core-js Unit32Array polyfills](https://github.com/GoogleChrome/lighthouse/pull/10056#issuecomment-562360821) needed for the ligature icon detection. This can get us under size more quickly while we determine a better long term solution.
perf
version of axe core without polyfills to help reduce our bundled size we should release a version of axe core without our polyfills with we added minified to our codebase mostly from the addition of needed for the ligature icon detection this can get us under size more quickly while we determine a better long term solution
1
9,798
6,996,119,446
IssuesEvent
2017-12-15 22:29:11
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
AnalyzerDependencyCheckingService.CheckForConflictsAsync results in 15 seconds of blocked UI opening Roslyn
Area-Analyzers Tenet-Performance Urgency-Soon
**Version Used**: Visual Studio 2017 version 15.5 **Steps to Reproduce**: 1. Open a large solution where there are analyzer diagnostics in a "leaf" project. **Expected Behavior**: Solution should open in a reasonable time **Actual Behavior**: It's very slow, because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions, and this happens synchronously on the UI thread as part of solution load. `microsoft.visualstudio.languageservices <<microsoft.visualstudio.languageservices!Microsoft.VisualStudio.LanguageServices.Implementation.AnalyzerDependencyCheckingService+<CheckForConflictsAsync>d__14.MoveNext()>> | 11.2 | 9,363.414 | 4,087`
True
AnalyzerDependencyCheckingService.CheckForConflictsAsync results in 15 seconds of blocked UI opening Roslyn - **Version Used**: Visual Studio 2017 version 15.5 **Steps to Reproduce**: 1. Open a large solution where there are analyzer diagnostics in a "leaf" project. **Expected Behavior**: Solution should open in a reasonable time **Actual Behavior**: It's very slow, because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions, and this happens synchronously on the UI thread as part of solution load. `microsoft.visualstudio.languageservices <<microsoft.visualstudio.languageservices!Microsoft.VisualStudio.LanguageServices.Implementation.AnalyzerDependencyCheckingService+<CheckForConflictsAsync>d__14.MoveNext()>> | 11.2 | 9,363.414 | 4,087`
perf
analyzerdependencycheckingservice checkforconflictsasync results in seconds of blocked ui opening roslyn version used visual studio version steps to reproduce open a large solution where there are analyzer diagnostics in a leaf project expected behavior solution should open in a reasonable time actual behavior it s very slow because creating the diagnostic requires a full compilation to be built and then examined for source level suppressions and this happens synchronously on the ui thread as part of solution load microsoft visualstudio languageservices d movenext
1
16,182
9,303,516,813
IssuesEvent
2019-03-24 18:03:46
JuliaReach/Reachability.jl
https://api.github.com/repos/JuliaReach/Reachability.jl
opened
Fixpoint check with time variable
feature performance
If there is a time variable that is never reset, this dimension grows monotonically and hence we can never find a fixpoint. Options: * Handle time separately (#263). * Let the used define a list of dimensions that should be ignored for the fixpoint check. Projecting away those dimensions is easy with boxes but more complicated in general. related: #262
True
Fixpoint check with time variable - If there is a time variable that is never reset, this dimension grows monotonically and hence we can never find a fixpoint. Options: * Handle time separately (#263). * Let the used define a list of dimensions that should be ignored for the fixpoint check. Projecting away those dimensions is easy with boxes but more complicated in general. related: #262
perf
fixpoint check with time variable if there is a time variable that is never reset this dimension grows monotonically and hence we can never find a fixpoint options handle time separately let the used define a list of dimensions that should be ignored for the fixpoint check projecting away those dimensions is easy with boxes but more complicated in general related
1
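The option floated in the record above amounts to projecting away the ignored dimensions before the containment test, which is cheap for box representations. A rough sketch with boxes as lists of (lo, hi) intervals; hypothetical helper names, not Reachability.jl code (which is Julia):

```python
def project_away(box, ignored):
    # A box is a list of (lo, hi) intervals, one per dimension.
    return [iv for d, iv in enumerate(box) if d not in ignored]

def contains(outer, inner):
    return all(lo_o <= lo_i and hi_i <= hi_o
               for (lo_o, hi_o), (lo_i, hi_i) in zip(outer, inner))

def found_fixpoint(previous_boxes, new_box, ignored_dims):
    new_p = project_away(new_box, ignored_dims)
    return any(contains(project_away(b, ignored_dims), new_p)
               for b in previous_boxes)

# Dimension 0 is time: it grows monotonically, but the projected check still succeeds.
prev = [[(0.0, 1.0), (0.0, 2.0)]]                                        # [time, x]
print(found_fixpoint(prev, [(5.0, 6.0), (0.5, 1.5)], ignored_dims={0}))  # True
```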
22,536
7,187,947,329
IssuesEvent
2018-02-02 08:13:23
FreeRDP/FreeRDP
https://api.github.com/repos/FreeRDP/FreeRDP
closed
Nigthly build fails, seems related to PR 4328
build
Today I tried to build latest nightly build on rpi 3 with debian Stretch but it fails: ` ... [100%] Generating xfreerdp.1 cd "/home/pi/Downloads/freerdp-nightly-2.0.0+0~20180124024836.476~1.gbp32cc6e/obj-arm-linux-gnueabihf/client/X11" && "/home/pi/Downloads/freerdp-nightly-2.0.0+0~20180124024836.476~1.gbp32cc6e/obj-arm-linux-gnueabihf/client/X11/generate_argument_docbook" ==6324==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD. client/X11/CMakeFiles/xfreerdp.manpage.dir/build.make:67: set di istruzioni per l'obiettivo "client/X11/xfreerdp.1" non riuscito make[3]: *** [client/X11/xfreerdp.1] Errore 1 ` I suppose is related to the commit 32cc6e16ef6e7148dac90502cb074366075f820d that probably need further additions. From a fast search I not found other solution that manually add libasan path to LD_PRELOAD on debian/rules that FWIK is not a good idea. Thanks for any reply and sorry for my bad english.
1.0
Nigthly build fails, seems related to PR 4328 - Today I tried to build latest nightly build on rpi 3 with debian Stretch but it fails: ` ... [100%] Generating xfreerdp.1 cd "/home/pi/Downloads/freerdp-nightly-2.0.0+0~20180124024836.476~1.gbp32cc6e/obj-arm-linux-gnueabihf/client/X11" && "/home/pi/Downloads/freerdp-nightly-2.0.0+0~20180124024836.476~1.gbp32cc6e/obj-arm-linux-gnueabihf/client/X11/generate_argument_docbook" ==6324==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD. client/X11/CMakeFiles/xfreerdp.manpage.dir/build.make:67: set di istruzioni per l'obiettivo "client/X11/xfreerdp.1" non riuscito make[3]: *** [client/X11/xfreerdp.1] Errore 1 ` I suppose is related to the commit 32cc6e16ef6e7148dac90502cb074366075f820d that probably need further additions. From a fast search I not found other solution that manually add libasan path to LD_PRELOAD on debian/rules that FWIK is not a good idea. Thanks for any reply and sorry for my bad english.
non_perf
nigthly build fails seems related to pr today i tried to build latest nightly build on rpi with debian stretch but it fails generating xfreerdp cd home pi downloads freerdp nightly obj arm linux gnueabihf client home pi downloads freerdp nightly obj arm linux gnueabihf client generate argument docbook asan runtime does not come first in initial library list you should either link runtime to your application or manually preload it with ld preload client cmakefiles xfreerdp manpage dir build make set di istruzioni per l obiettivo client xfreerdp non riuscito make errore i suppose is related to the commit that probably need further additions from a fast search i not found other solution that manually add libasan path to ld preload on debian rules that fwik is not a good idea thanks for any reply and sorry for my bad english
0
53,682
3,043,897,700
IssuesEvent
2015-08-10 03:49:36
karimmtarek/timeliner
https://api.github.com/repos/karimmtarek/timeliner
closed
improve form fields
priority
- Add placeholders - Add '*' for required fields, style it - change color of label to darker color and increase font size
1.0
improve form fields - - Add placeholders - Add '*' for required fields, style it - change color of label to darker color and increase font size
non_perf
improve form fields add placeholders add for required fields style it change color of label to darker color and increase font size
0
39,821
20,198,163,436
IssuesEvent
2022-02-11 12:41:25
trexfeathers/iris
https://api.github.com/repos/trexfeathers/iris
opened
Performance Shift(s): `3b4003a6`
Bot Type: Performance
Benchmark comparison has identified performance shifts at commit3b4003a6 (#4571). Please review the report below andtake corrective/congratulatory action as appropriate:slightly_smiling_face: <details> <summary>Performance shift report</summary> ``` before after ratio [06165e99] [3b4003a6] <overnight_benchmarks~16> <overnight_benchmarks~15> - 10.0±0s 38.7±0μs 0.00 aux_factory.HybridHeightFactory.time_create - 2.30±0μs 1.10±0μs 0.48 aux_factory.HybridHeightFactory.time_return ``` </details> Generated by GHA run [`1829309972`](https://github.com/trexfeathers/iris/actions/runs/1829309972)
True
Performance Shift(s): `3b4003a6` - Benchmark comparison has identified performance shifts at commit3b4003a6 (#4571). Please review the report below andtake corrective/congratulatory action as appropriate:slightly_smiling_face: <details> <summary>Performance shift report</summary> ``` before after ratio [06165e99] [3b4003a6] <overnight_benchmarks~16> <overnight_benchmarks~15> - 10.0±0s 38.7±0μs 0.00 aux_factory.HybridHeightFactory.time_create - 2.30±0μs 1.10±0μs 0.48 aux_factory.HybridHeightFactory.time_return ``` </details> Generated by GHA run [`1829309972`](https://github.com/trexfeathers/iris/actions/runs/1829309972)
perf
performance shift s benchmark comparison has identified performance shifts at please review the report below andtake corrective congratulatory action as appropriate slightly smiling face performance shift report before after ratio ± ± aux factory hybridheightfactory time create ± ± aux factory hybridheightfactory time return generated by gha run
1
38,156
18,982,356,989
IssuesEvent
2021-11-21 05:01:15
artichoke/rand_mt
https://api.github.com/repos/artichoke/rand_mt
closed
Replace hand-rolled chunks loop in `fill_bytes` with `chunks_exact_mut`
C-quality A-core E-easy A-performance
Both `Mt::fill_bytes` and `Mt64::fill_bytes` use a handrolled implementation of [`[u8]::chunks_exact_mut`][chunks-exact-mut]. [chunks-exact-mut]: https://doc.rust-lang.org/std/primitive.slice.html#method.chunks_exact_mut Replace these while loops with use of the [`ChunksExactMut`] iterator. [`ChunksExactMut`]: https://doc.rust-lang.org/std/slice/struct.ChunksExactMut.html https://github.com/artichoke/rand_mt/blob/712353675a248ed0499caab4d8e5e0f315c80c9c/src/mt.rs#L306-L317 https://github.com/artichoke/rand_mt/blob/712353675a248ed0499caab4d8e5e0f315c80c9c/src/mt64.rs#L289-L300
True
Replace hand-rolled chunks loop in `fill_bytes` with `chunks_exact_mut` - Both `Mt::fill_bytes` and `Mt64::fill_bytes` use a handrolled implementation of [`[u8]::chunks_exact_mut`][chunks-exact-mut]. [chunks-exact-mut]: https://doc.rust-lang.org/std/primitive.slice.html#method.chunks_exact_mut Replace these while loops with use of the [`ChunksExactMut`] iterator. [`ChunksExactMut`]: https://doc.rust-lang.org/std/slice/struct.ChunksExactMut.html https://github.com/artichoke/rand_mt/blob/712353675a248ed0499caab4d8e5e0f315c80c9c/src/mt.rs#L306-L317 https://github.com/artichoke/rand_mt/blob/712353675a248ed0499caab4d8e5e0f315c80c9c/src/mt64.rs#L289-L300
perf
replace hand rolled chunks loop in fill bytes with chunks exact mut both mt fill bytes and fill bytes use a handrolled implementation of chunks exact mut replace these while loops with use of the iterator
1
277,023
24,041,647,112
IssuesEvent
2022-09-16 02:49:26
irods/irods_client_rest_cpp
https://api.github.com/repos/irods/irods_client_rest_cpp
closed
test_stream_put_and_get hangs
bug testing
Seems like it's getting stuck in the loop inside of the actual `irods_rest.put` function.
1.0
test_stream_put_and_get hangs - Seems like it's getting stuck in the loop inside of the actual `irods_rest.put` function.
non_perf
test stream put and get hangs seems like it s getting stuck in the loop inside of the actual irods rest put function
0
33,103
15,789,260,610
IssuesEvent
2021-04-01 22:21:43
cortexlabs/cortex
https://api.github.com/repos/cortexlabs/cortex
closed
Build TensorFlow Serving from source
performance
### Description There may be an opportunity to improve prediction latency on CPUs and/or GPUs by building TensorFlow Serving from source. For example, on `p2.xlarge`, TF Serving shows this log message: ``` Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA ``` And on `m5.large`, TF Serving shows this log message: ``` Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA ``` Here's an [example](https://mux.com/blog/tuning-performance-of-tensorflow-serving-pipeline/#buildingcpuoptimizedservingbinary) of someone building from source. **Question:** Different instance type support different instruction sets. What happens if we compile against instruction sets (e.g. `AVX 512` on `m5.large`) that are not available on your instance (e.g. `t3.large` or even `t3a.large`)?
True
Build TensorFlow Serving from source - ### Description There may be an opportunity to improve prediction latency on CPUs and/or GPUs by building TensorFlow Serving from source. For example, on `p2.xlarge`, TF Serving shows this log message: ``` Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA ``` And on `m5.large`, TF Serving shows this log message: ``` Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA ``` Here's an [example](https://mux.com/blog/tuning-performance-of-tensorflow-serving-pipeline/#buildingcpuoptimizedservingbinary) of someone building from source. **Question:** Different instance type support different instruction sets. What happens if we compile against instruction sets (e.g. `AVX 512` on `m5.large`) that are not available on your instance (e.g. `t3.large` or even `t3a.large`)?
perf
build tensorflow serving from source description there may be an opportunity to improve prediction latency on cpus and or gpus by building tensorflow serving from source for example on xlarge tf serving shows this log message your cpu supports instructions that this tensorflow binary was not compiled to use fma and on large tf serving shows this log message your cpu supports instructions that this tensorflow binary was not compiled to use fma here s an of someone building from source question different instance type support different instruction sets what happens if we compile against instruction sets e g avx on large that are not available on your instance e g large or even large
1
53,796
13,883,631,538
IssuesEvent
2020-10-18 12:52:32
ronakjain2012/node-boilerplate
https://api.github.com/repos/ronakjain2012/node-boilerplate
closed
WS-2019-0019 (Medium) detected in braces-1.8.5.tgz
security vulnerability
## WS-2019-0019 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>braces-1.8.5.tgz</b></p></summary> <p>Fastest brace expansion for node.js, with the most complete support for the Bash 4.3 braces specification.</p> <p>Library home page: <a href="https://registry.npmjs.org/braces/-/braces-1.8.5.tgz">https://registry.npmjs.org/braces/-/braces-1.8.5.tgz</a></p> <p>Path to dependency file: node-boilerplate/package.json</p> <p>Path to vulnerable library: node-boilerplate/node_modules/micromatch/node_modules/braces/package.json</p> <p> Dependency Hierarchy: - babel-cli-6.26.0.tgz (Root Library) - chokidar-1.7.0.tgz - anymatch-1.3.2.tgz - micromatch-2.3.11.tgz - :x: **braces-1.8.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ronakjain2012/node-boilerplate/commit/3d234aef965340ed4c7d5b95ed6fa0f1a6bf6345">3d234aef965340ed4c7d5b95ed6fa0f1a6bf6345</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Version of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS). Untrusted input may cause catastrophic backtracking while matching regular expressions. This can cause the application to be unresponsive leading to Denial of Service. <p>Publish Date: 2018-02-16 <p>URL: <a href=https://github.com/micromatch/braces/commit/abdafb0cae1e0c00f184abbadc692f4eaa98f451>WS-2019-0019</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/786">https://www.npmjs.com/advisories/786</a></p> <p>Release Date: 2019-02-21</p> <p>Fix Resolution: 2.3.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0019 (Medium) detected in braces-1.8.5.tgz - ## WS-2019-0019 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>braces-1.8.5.tgz</b></p></summary> <p>Fastest brace expansion for node.js, with the most complete support for the Bash 4.3 braces specification.</p> <p>Library home page: <a href="https://registry.npmjs.org/braces/-/braces-1.8.5.tgz">https://registry.npmjs.org/braces/-/braces-1.8.5.tgz</a></p> <p>Path to dependency file: node-boilerplate/package.json</p> <p>Path to vulnerable library: node-boilerplate/node_modules/micromatch/node_modules/braces/package.json</p> <p> Dependency Hierarchy: - babel-cli-6.26.0.tgz (Root Library) - chokidar-1.7.0.tgz - anymatch-1.3.2.tgz - micromatch-2.3.11.tgz - :x: **braces-1.8.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ronakjain2012/node-boilerplate/commit/3d234aef965340ed4c7d5b95ed6fa0f1a6bf6345">3d234aef965340ed4c7d5b95ed6fa0f1a6bf6345</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Version of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS). Untrusted input may cause catastrophic backtracking while matching regular expressions. This can cause the application to be unresponsive leading to Denial of Service. <p>Publish Date: 2018-02-16 <p>URL: <a href=https://github.com/micromatch/braces/commit/abdafb0cae1e0c00f184abbadc692f4eaa98f451>WS-2019-0019</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/786">https://www.npmjs.com/advisories/786</a></p> <p>Release Date: 2019-02-21</p> <p>Fix Resolution: 2.3.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
ws medium detected in braces tgz ws medium severity vulnerability vulnerable library braces tgz fastest brace expansion for node js with the most complete support for the bash braces specification library home page a href path to dependency file node boilerplate package json path to vulnerable library node boilerplate node modules micromatch node modules braces package json dependency hierarchy babel cli tgz root library chokidar tgz anymatch tgz micromatch tgz x braces tgz vulnerable library found in head commit a href found in base branch master vulnerability details version of braces prior to are vulnerable to regular expression denial of service redos untrusted input may cause catastrophic backtracking while matching regular expressions this can cause the application to be unresponsive leading to denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
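The record above concerns ReDoS via catastrophic backtracking. A generic demonstration with the textbook pattern `(a+)+$`, not the actual regex from `braces`:

```python
import re
import time

pattern = re.compile(r"(a+)+$")    # nested quantifiers: classic catastrophic backtracking
for n in (18, 20, 22, 24):
    s = "a" * n + "b"              # the trailing "b" forces the engine to try every split
    t0 = time.perf_counter()
    pattern.search(s)
    print(n, f"{time.perf_counter() - t0:.3f}s")   # time roughly doubles per step
```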
41,361
21,655,348,077
IssuesEvent
2022-05-06 13:37:03
TypeCobolTeam/TypeCobol
https://api.github.com/repos/TypeCobolTeam/TypeCobol
closed
New LSP notification refresh one COPY
Enhancement Cobol Language Server Protocol Performance
**Is your feature request related to a problem? Please describe.** Provide a new LSP notification to refresh only one COPY. This new notification is also here to optimize the refresh mechanism from LSP (`TypeCobol.LanguageServer.Workspace#ScheduleRefresh`) Instead of calling `Workspace#ScheduleRefresh`, a new method must be created. This new method will - [ ] Remove only the COPY to refresh from the cache. - [ ] Refresh only document(s) that use this copy - [ ] Optimize re-parsing mechanism #2159 - Not sure if this issue will provide a significative gain as the slower step of the parser is CodeElement. - #2159 can also apply to `TypeCobol.LanguageServer.Workspace#ScheduleRefresh` **Technical** Notif parameters: - COPY to refresh - projectKey How to do the refresh of one copy: - Remove only the COPY (+ its variants) to refresh from the cache. - Create a new class for COPY cache. - Move method `ClearImportedCompilationDocumentsCache` into this class. - Create a new method to remove only one COPY (+its variants) from the cache. - Think about variant. See `cacheKey` in `CompilationProject#Import` - Caller of `ClearImportedCompilationDocumentsCache` will now access the copy cache from `CompilationProject`. - Refresh only document(s) that use this copy - In `Workspace` (or `WorkspaceProjectStore` I didn't check #2158), iterate over all documents and use `CompilationUnit.CopyTextNamesVariations` to detect these cases. **How to test ?** LanguageServerRobot is not able to create a change in a COPY while the test is running as the content of the COPY is not managed by LSP messages. So what to do: 1. Keep LSR running between 2 tests. - Create a first test to parse a source - Update the COPY file locally - Play another test which send `refresh single copy` notif. 2. Create a manual dedicated test without LanguageServerRobot ? 3. Extend LanguageServerRobot to be able to insert local script between LSP messages? - This local script will change the content of the COPY before the new refresh notification will be sent The test can use 2 copys: - Parse the document with these 2 copys. - Change the 2 copy locally. - Send refresh single copy notification for only 1 of the 2 copy. - Reparse the document. - Check that the document use the new version of the refresed copy but not the other one.
True
New LSP notification refresh one COPY - **Is your feature request related to a problem? Please describe.** Provide a new LSP notification to refresh only one COPY. This new notification is also here to optimize the refresh mechanism from LSP (`TypeCobol.LanguageServer.Workspace#ScheduleRefresh`) Instead of calling `Workspace#ScheduleRefresh`, a new method must be created. This new method will - [ ] Remove only the COPY to refresh from the cache. - [ ] Refresh only document(s) that use this copy - [ ] Optimize re-parsing mechanism #2159 - Not sure if this issue will provide a significative gain as the slower step of the parser is CodeElement. - #2159 can also apply to `TypeCobol.LanguageServer.Workspace#ScheduleRefresh` **Technical** Notif parameters: - COPY to refresh - projectKey How to do the refresh of one copy: - Remove only the COPY (+ its variants) to refresh from the cache. - Create a new class for COPY cache. - Move method `ClearImportedCompilationDocumentsCache` into this class. - Create a new method to remove only one COPY (+its variants) from the cache. - Think about variant. See `cacheKey` in `CompilationProject#Import` - Caller of `ClearImportedCompilationDocumentsCache` will now access the copy cache from `CompilationProject`. - Refresh only document(s) that use this copy - In `Workspace` (or `WorkspaceProjectStore` I didn't check #2158), iterate over all documents and use `CompilationUnit.CopyTextNamesVariations` to detect these cases. **How to test ?** LanguageServerRobot is not able to create a change in a COPY while the test is running as the content of the COPY is not managed by LSP messages. So what to do: 1. Keep LSR running between 2 tests. - Create a first test to parse a source - Update the COPY file locally - Play another test which send `refresh single copy` notif. 2. Create a manual dedicated test without LanguageServerRobot ? 3. Extend LanguageServerRobot to be able to insert local script between LSP messages? - This local script will change the content of the COPY before the new refresh notification will be sent The test can use 2 copys: - Parse the document with these 2 copys. - Change the 2 copy locally. - Send refresh single copy notification for only 1 of the 2 copy. - Reparse the document. - Check that the document use the new version of the refresed copy but not the other one.
perf
new lsp notification refresh one copy is your feature request related to a problem please describe provide a new lsp notification to refresh only one copy this new notification is also here to optimize the refresh mechanism from lsp typecobol languageserver workspace schedulerefresh instead of calling workspace schedulerefresh a new method must be created this new method will remove only the copy to refresh from the cache refresh only document s that use this copy optimize re parsing mechanism not sure if this issue will provide a significative gain as the slower step of the parser is codeelement can also apply to typecobol languageserver workspace schedulerefresh technical notif parameters copy to refresh projectkey how to do the refresh of one copy remove only the copy its variants to refresh from the cache create a new class for copy cache move method clearimportedcompilationdocumentscache into this class create a new method to remove only one copy its variants from the cache think about variant see cachekey in compilationproject import caller of clearimportedcompilationdocumentscache will now access the copy cache from compilationproject refresh only document s that use this copy in workspace or workspaceprojectstore i didn t check iterate over all documents and use compilationunit copytextnamesvariations to detect these cases how to test languageserverrobot is not able to create a change in a copy while the test is running as the content of the copy is not managed by lsp messages so what to do keep lsr running between tests create a first test to parse a source update the copy file locally play another test which send refresh single copy notif create a manual dedicated test without languageserverrobot extend languageserverrobot to be able to insert local script between lsp messages this local script will change the content of the copy before the new refresh notification will be sent the test can use copys parse the document with these copys change the copy locally send refresh single copy notification for only of the copy reparse the document check that the document use the new version of the refresed copy but not the other one
1
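A minimal sketch of the dedicated copy-cache class the plan above calls for: keyed by copy name plus variant, so a single COPY and its variants can be evicted without clearing the whole cache. Names and keying are my guesses; the real project is C#:

```python
class CopyCache:
    """Cache of imported compilation documents, keyed by (copy name, variant key)."""

    def __init__(self):
        self._docs = {}

    def put(self, copy_name, variant_key, document):
        self._docs[(copy_name, variant_key)] = document

    def clear(self):
        # Old behaviour: ClearImportedCompilationDocumentsCache drops everything.
        self._docs.clear()

    def remove_copy(self, copy_name):
        # New behaviour: evict one COPY plus all of its cached variants.
        for key in [k for k in self._docs if k[0] == copy_name]:
            del self._docs[key]

cache = CopyCache()
cache.put("YCOPY1", "variant-a", "<parsed copy>")
cache.put("YCOPY1", "variant-b", "<parsed copy>")
cache.put("YCOPY2", None, "<parsed copy>")
cache.remove_copy("YCOPY1")      # YCOPY2 stays cached
```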
828,258
31,818,467,278
IssuesEvent
2023-09-13 22:53:33
meshery/meshery
https://api.github.com/repos/meshery/meshery
opened
[UI] WASM Filter Import Modal changing focus
kind/bug language/javascript component/ui priority/high framework/react component/filters
### Current Behavior <!-- A brief description of what the problem is. (e.g. I need to be able to...) --> ### Desired Behavior <!-- A brief description of the enhancement. --> ### Screenshots/Logs <!-- Add screenshots, if applicable, to help explain your problem. --> ### Environment - Browser: Chrome Safari Firefox - Host OS: Mac Linux Windows - Meshery Server Version: stable-v - Meshery Client Version: stable-v - Platform: Docker Kubernetes --- ### Contributor [Guides](https://docs.meshery.io/project/contributing) and [Handbook](https://layer5.io/community/handbook) - 🎨 Wireframes and [designs for Meshery UI](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI) in Figma [(open invite)](https://www.figma.com/team_invite/redeem/qJy1c95qirjgWQODApilR9) - 🖥 [Contributing to Meshery UI](https://docs.meshery.io/project/contributing/contributing-ui) - 🙋🏾🙋🏼 Questions: [Discussion Forum](http://discuss.meshery.io) and [Community Slack](https://slack.meshery.io)
1.0
[UI] WASM Filter Import Modal changing focus - ### Current Behavior <!-- A brief description of what the problem is. (e.g. I need to be able to...) --> ### Desired Behavior <!-- A brief description of the enhancement. --> ### Screenshots/Logs <!-- Add screenshots, if applicable, to help explain your problem. --> ### Environment - Browser: Chrome Safari Firefox - Host OS: Mac Linux Windows - Meshery Server Version: stable-v - Meshery Client Version: stable-v - Platform: Docker Kubernetes --- ### Contributor [Guides](https://docs.meshery.io/project/contributing) and [Handbook](https://layer5.io/community/handbook) - 🎨 Wireframes and [designs for Meshery UI](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI) in Figma [(open invite)](https://www.figma.com/team_invite/redeem/qJy1c95qirjgWQODApilR9) - 🖥 [Contributing to Meshery UI](https://docs.meshery.io/project/contributing/contributing-ui) - 🙋🏾🙋🏼 Questions: [Discussion Forum](http://discuss.meshery.io) and [Community Slack](https://slack.meshery.io)
non_perf
wasm filter import modal changing focus current behavior desired behavior screenshots logs environment browser chrome safari firefox host os mac linux windows meshery server version stable v meshery client version stable v platform docker kubernetes contributor and 🎨 wireframes and in figma 🖥 🙋🏾🙋🏼 questions and
0
7,649
6,142,540,426
IssuesEvent
2017-06-27 01:06:54
astropy/astropy
https://api.github.com/repos/astropy/astropy
closed
Memory leak in Table related to ref cycle in info attribute
Bug Critical Performance table
The following demonstrates a memory leak via a reference cycle in the `info` attribute: ``` c = Column(np.arange(1e8), name='a') for i in range(5): t = Table([c]) ``` Well, this is not good. I have not dug into the implications, but here is where it showed up. I had stated (incorrectly) that introducing a reference cycle was not a problem. ``` neptune$ git bisect good f112b3501c05c058ac8e83fdf6d4f0f646b56169 is the first bad commit commit f112b3501c05c058ac8e83fdf6d4f0f646b56169 Author: Tom Aldcroft <taldcroft@gmail.com> Date: Sun Jun 28 19:28:41 2015 -0400 Change weakref _parent_ref to direct ref _parent. This introduces a reference cycle, but should be OK. Unlike e.g. Table and Column, there is very little risk of wanting an info object without the parent data object. :040000 040000 32f9dd1f5886ec37701ace3c3dc49696743804a2 4a7f1cd9fde8e43c741e9c3d6823440117650821 M ```
True
Memory leak in Table related to ref cycle in info attribute - The following demonstrates a memory leak via a reference cycle in the `info` attribute: ``` c = Column(np.arange(1e8), name='a') for i in range(5): t = Table([c]) ``` Well, this is not good. I have not dug into the implications, but here is where it showed up. I had stated (incorrectly) that introducing a reference cycle was not a problem. ``` neptune$ git bisect good f112b3501c05c058ac8e83fdf6d4f0f646b56169 is the first bad commit commit f112b3501c05c058ac8e83fdf6d4f0f646b56169 Author: Tom Aldcroft <taldcroft@gmail.com> Date: Sun Jun 28 19:28:41 2015 -0400 Change weakref _parent_ref to direct ref _parent. This introduces a reference cycle, but should be OK. Unlike e.g. Table and Column, there is very little risk of wanting an info object without the parent data object. :040000 040000 32f9dd1f5886ec37701ace3c3dc49696743804a2 4a7f1cd9fde8e43c741e9c3d6823440117650821 M ```
perf
memory leak in table related to ref cycle in info attribute the following demonstrates a memory leak via a reference cycle in the info attribute c column np arange name a for i in range t table well this is not good i have not dug into the implications but here is where it showed up i had stated incorrectly that introducing a reference cycle was not a problem neptune git bisect good is the first bad commit commit author tom aldcroft date sun jun change weakref parent ref to direct ref parent this introduces a reference cycle but should be ok unlike e g table and column there is very little risk of wanting an info object without the parent data object m
1
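The record above attributes the leak to swapping a weakref for a direct parent reference, closing a `parent -> info -> parent` cycle; cyclic garbage is reclaimed only by the generational collector, so a large buffer can outlive its last external reference. A generic sketch of the difference (not astropy's actual classes):

```python
import gc
import weakref

class Info:
    def __init__(self, parent, weak=True):
        # A weak link breaks the parent -> info -> parent strong cycle.
        self._parent = weakref.ref(parent) if weak else parent

    @property
    def parent(self):
        p = self._parent
        return p() if isinstance(p, weakref.ref) else p

class Column:
    def __init__(self, nbytes, weak=True):
        self.data = bytearray(nbytes)      # stand-in for a big numpy buffer
        self.info = Info(self, weak=weak)

col = Column(10**6, weak=False)
finalized = []
weakref.finalize(col, finalized.append, True)
del col           # strong cycle: the buffer survives the last external reference ...
print(finalized)  # [] -- nothing freed yet
gc.collect()
print(finalized)  # [True] -- only the cycle collector reclaims it
```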
33,216
7,680,604,430
IssuesEvent
2018-05-16 02:38:04
Pugabyte/BearNation
https://api.github.com/repos/Pugabyte/BearNation
closed
Couple refactor tasks
code
- [ ] Add Herochat message recipients method - Unfortunately, Herochat does not provide a simple way to get all the recipients of a message. Different channels have different rules for which channel members get the message. For example, since Creative chat is not cross world, I have to make sure the sender and recipient are in the same world - Already defined in AlertsListener#onChat, but should be refactored so Wakka's Item display command can use it. - `List<Player> recipients = getRecipients(sender, channel);` ? Maybe pass the ChannelChatEvent instead - [ ] Refactor sending messages to bridge to allow BNCore to join the party. - `Functions.sendBridgeMessage(channel, player, message)` ? Something like that - [ ] Rename `Functions` to `SkriptFunctions` ? Something to make it less vague
1.0
Couple refactor tasks - - [ ] Add Herochat message recipients method - Unfortunately, Herochat does not provide a simple way to get all the recipients of a message. Different channels have different rules for which channel members get the message. For example, since Creative chat is not cross world, I have to make sure the sender and recipient are in the same world - Already defined in AlertsListener#onChat, but should be refactored so Wakka's Item display command can use it. - `List<Player> recipients = getRecipients(sender, channel);` ? Maybe pass the ChannelChatEvent instead - [ ] Refactor sending messages to bridge to allow BNCore to join the party. - `Functions.sendBridgeMessage(channel, player, message)` ? Something like that - [ ] Rename `Functions` to `SkriptFunctions` ? Something to make it less vague
non_perf
couple refactor tasks add herochat message recipients method unfortunately herochat does not provide a simple way to get all the recipients of a message different channels have different rules for which channel members get the message for example since creative chat is not cross world i have to make sure the sender and recipient are in the same world already defined in alertslistener onchat but should be refactored so wakka s item display command can use it list recipients getrecipients sender channel maybe pass the channelchatevent instead refactor sending messages to bridge to allow bncore to join the party functions sendbridgemessage channel player message something like that rename functions to skriptfunctions something to make it less vague
0
762,901
26,736,156,996
IssuesEvent
2023-01-30 09:36:53
kubernetes-sigs/cluster-api-provider-aws
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
closed
GC_WORKLOAD variable is missing in e2e conf to support gc test cases
kind/bug priority/important-soon triage/accepted kind/backport
/kind bug **What steps did you take and what happened:** gc testcases will apply some workloads for the testing. It will read the yaml from GC_WORKLOAD variable. We should provide the variables in e2e conf, otherwise the testcases would fail with exception. **What did you expect to happen:** **Anything else you would like to add:** [Miscellaneous information that will assist in solving the issue.] **Environment:** - Cluster-api-provider-aws version: - Kubernetes version: (use `kubectl version`): - OS (e.g. from `/etc/os-release`):
1.0
GC_WORKLOAD variable is missing in e2e conf to support gc test cases - /kind bug **What steps did you take and what happened:** gc testcases will apply some workloads for the testing. It will read the yaml from GC_WORKLOAD variable. We should provide the variables in e2e conf, otherwise the testcases would fail with exception. **What did you expect to happen:** **Anything else you would like to add:** [Miscellaneous information that will assist in solving the issue.] **Environment:** - Cluster-api-provider-aws version: - Kubernetes version: (use `kubectl version`): - OS (e.g. from `/etc/os-release`):
non_perf
gc workload variable is missing in conf to support gc test cases kind bug what steps did you take and what happened gc testcases will apply some workloads for the testing it will read the yaml from gc workload variable we should provide the variables in conf otherwise the testcases would fail with exception what did you expect to happen anything else you would like to add environment cluster api provider aws version kubernetes version use kubectl version os e g from etc os release
0
54,441
30,184,630,807
IssuesEvent
2023-07-04 11:12:37
NoraH1to/yue
https://api.github.com/repos/NoraH1to/yue
closed
[Feature] Comic support
enhancement performance
**Describe the feature** Support comporessed comic like `zip, rar, 7z, cbr, cbz`. **Problems** IndexedDB will rewrite everything when only update even less one prop. This cause huge I/O cost when update read process if the book was big. Comic usually very large. ![image](https://github.com/NoraH1to/yue/assets/35523326/70d6972b-ddec-4577-99d4-dfc42518b117) **Plan** We need to split out the file content for less I/O.
True
[Feature] Comic support - **Describe the feature** Support comporessed comic like `zip, rar, 7z, cbr, cbz`. **Problems** IndexedDB will rewrite everything when only update even less one prop. This cause huge I/O cost when update read process if the book was big. Comic usually very large. ![image](https://github.com/NoraH1to/yue/assets/35523326/70d6972b-ddec-4577-99d4-dfc42518b117) **Plan** We need to split out the file content for less I/O.
perf
comic support describe the feature support comporessed comic like zip rar cbr cbz problems indexeddb will rewrite everything when only update even less one prop this cause huge i o cost when update read process if the book was big comic usually very large plan we need to split out the file content for less i o
1
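The plan in the record above ("split out the file content for less I/O") is a generic key-value layout fix: keep the bulky blob under its own key so progress updates rewrite only a tiny metadata record. A sketch with plain dicts standing in for IndexedDB object stores; not yue's code:

```python
# Two "object stores": small mutable metadata, large immutable content.
book_meta = {}    # book_id -> {"title": ..., "progress": ...}
book_blobs = {}   # book_id -> raw archive bytes

def add_book(book_id, title, blob):
    book_blobs[book_id] = blob    # written once
    book_meta[book_id] = {"title": title, "progress": 0.0}

def update_progress(book_id, progress):
    # Touches only the few-byte metadata record; the multi-hundred-MB
    # comic archive is never rewritten, avoiding the I/O spike reported above.
    book_meta[book_id]["progress"] = progress
```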
13,232
8,150,265,971
IssuesEvent
2018-08-22 12:27:50
src-d/go-git
https://api.github.com/repos/src-d/go-git
closed
Improve packfile parser
enhancement performance
We need a way to read packfiles in a sequential order, to be able to create other git files like packfile Indexes, packfile bitmaps, or history graph. We can find a working, almost finished parser [implementation here](https://github.com/jfontan/go-git/blob/1cd660353540b0de05bee5aadfa2d3ff2e98e28c/plumbing/format/packfile/pack_parser.go#L67). This Parser is intended to be used on clone or fetch operations when git client receives a new packfile. Both JGit and Git are doing the same to read a packfile in a sequential way: - One first pass to get the object count, object positions, and non-delta hash calculation. - One second pass to resolve delta types using a cache for it. Delta resolution will be done from bases. The cache is to avoid too many objects on memory to resolve several deltas. If we know that a base will not be used on future delta resolutions, we must remove it from the cache. This parser should be done having in mind a way to be able to plug several encoders (Index encoder, Bitmap encoder, history graph encoder and so on). To do that, we can use the Observer pattern, allowing a modular way to add new files generation implementing the Observer interface. Methods for the parser and Observers will be: ```go type Observer interface { OnHeader(count int64) error OnInflatedObjectHeader(t plumbing.ObjectType, objSize int64, pos int64) error OnInflatedObjectContent(h plumbing.Hash, pos int64) error OnFooter(h plumbing.Hash) error } type Parser struct { os []Observer } func (*Parser) Parse(m ProgressMonitor) (plumbing.Hash, error) {} ``` Observer methods will be called when possible while packfile parsing. `OnInflatedObjectHeader` and `OnInflatedObjectContent` will be called only for non delta objects and undeltified deltas returning the correct type. Observer interface can change in the future to be able to create other kinds of indexes. Right now it has only the needed methods to be able to create an idx file. Idx file can be generated implementing Observer interface. As in JGit or Git, we need to fill a list of Entries using `OnInflated`... methods, and then, at the end, write them all down into a idx file on OnFooter method.
True
Improve packfile parser - We need a way to read packfiles in a sequential order, to be able to create other git files like packfile Indexes, packfile bitmaps, or history graph. We can find a working, almost finished parser [implementation here](https://github.com/jfontan/go-git/blob/1cd660353540b0de05bee5aadfa2d3ff2e98e28c/plumbing/format/packfile/pack_parser.go#L67). This Parser is intended to be used on clone or fetch operations when git client receives a new packfile. Both JGit and Git are doing the same to read a packfile in a sequential way: - One first pass to get the object count, object positions, and non-delta hash calculation. - One second pass to resolve delta types using a cache for it. Delta resolution will be done from bases. The cache is to avoid too many objects on memory to resolve several deltas. If we know that a base will not be used on future delta resolutions, we must remove it from the cache. This parser should be done having in mind a way to be able to plug several encoders (Index encoder, Bitmap encoder, history graph encoder and so on). To do that, we can use the Observer pattern, allowing a modular way to add new files generation implementing the Observer interface. Methods for the parser and Observers will be: ```go type Observer interface { OnHeader(count int64) error OnInflatedObjectHeader(t plumbing.ObjectType, objSize int64, pos int64) error OnInflatedObjectContent(h plumbing.Hash, pos int64) error OnFooter(h plumbing.Hash) error } type Parser struct { os []Observer } func (*Parser) Parse(m ProgressMonitor) (plumbing.Hash, error) {} ``` Observer methods will be called when possible while packfile parsing. `OnInflatedObjectHeader` and `OnInflatedObjectContent` will be called only for non delta objects and undeltified deltas returning the correct type. Observer interface can change in the future to be able to create other kinds of indexes. Right now it has only the needed methods to be able to create an idx file. Idx file can be generated implementing Observer interface. As in JGit or Git, we need to fill a list of Entries using `OnInflated`... methods, and then, at the end, write them all down into a idx file on OnFooter method.
perf
improve packfile parser we need a way to read packfiles in a sequential order to be able to create other git files like packfile indexes packfile bitmaps or history graph we can find a working almost finished parser this parser is intended to be used on clone or fetch operations when git client receives a new packfile both jgit and git are doing the same to read a packfile in a sequential way one first pass to get the object count object positions and non delta hash calculation one second pass to resolve delta types using a cache for it delta resolution will be done from bases the cache is to avoid too many objects on memory to resolve several deltas if we know that a base will not be used on future delta resolutions we must remove it from the cache this parser should be done having in mind a way to be able to plug several encoders index encoder bitmap encoder history graph encoder and so on to do that we can use the observer pattern allowing a modular way to add new files generation implementing the observer interface methods for the parser and observers will be go type observer interface onheader count error oninflatedobjectheader t plumbing objecttype objsize pos error oninflatedobjectcontent h plumbing hash pos error onfooter h plumbing hash error type parser struct os observer func parser parse m progressmonitor plumbing hash error observer methods will be called when possible while packfile parsing oninflatedobjectheader and oninflatedobjectcontent will be called only for non delta objects and undeltified deltas returning the correct type observer interface can change in the future to be able to create other kinds of indexes right now it has only the needed methods to be able to create an idx file idx file can be generated implementing observer interface as in jgit or git we need to fill a list of entries using oninflated methods and then at the end write them all down into a idx file on onfooter method
1
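To make the Observer-based design in the record above concrete, here is a minimal, hedged Go sketch of an observer that accumulates idx entries. The `Hash` and `ObjectType` stand-ins and the `IdxObserver` name are hypothetical, invented so the sketch is self-contained (go-git's real types live in its `plumbing` package), and a real encoder would write the binary idx format rather than print a summary.

```go
package idx

import "fmt"

// Stand-ins for go-git's plumbing types, so the sketch is self-contained.
type Hash [20]byte
type ObjectType int8

// Observer mirrors the interface proposed in the issue above.
type Observer interface {
	OnHeader(count int64) error
	OnInflatedObjectHeader(t ObjectType, objSize int64, pos int64) error
	OnInflatedObjectContent(h Hash, pos int64) error
	OnFooter(h Hash) error
}

// entry pairs an object hash with its offset inside the packfile.
type entry struct {
	hash   Hash
	offset int64
}

// IdxObserver collects one entry per parsed object; OnFooter is where a
// real implementation would sort the entries and encode the idx file.
type IdxObserver struct {
	entries []entry
}

func (o *IdxObserver) OnHeader(count int64) error {
	o.entries = make([]entry, 0, count) // pre-size using the object count
	return nil
}

func (o *IdxObserver) OnInflatedObjectHeader(t ObjectType, objSize, pos int64) error {
	return nil // entries are recorded once the hash is known, below
}

func (o *IdxObserver) OnInflatedObjectContent(h Hash, pos int64) error {
	o.entries = append(o.entries, entry{hash: h, offset: pos})
	return nil
}

func (o *IdxObserver) OnFooter(packHash Hash) error {
	// Placeholder for the idx encoding step.
	fmt.Printf("pack %x: %d objects indexed\n", packHash[:4], len(o.entries))
	return nil
}
```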
56,050
31,593,797,974
IssuesEvent
2023-09-05 02:37:16
PurpleTurtleCreative/completionist
https://api.github.com/repos/PurpleTurtleCreative/completionist
opened
Define wp-env configuration for performant local development and testing
cleanup performance
I installed and tried out `@wordpress/env` and it amazed me how fast it performed and how easy it was to use! It uses Docker, yet it was much faster than my simple Docker Compose v3 configuration. My current configuration makes product demos really rough because requests are so slow (a common issue noted in the Docker community), triggers random cURL 28 timeout errors because of this, and causes WordPress itself to frequently log random network-related issues. Anyways... The config documentation is here: https://developer.wordpress.org/block-editor/reference-guides/packages/packages-env/#wp-env-json Including local environment configuration steps also makes the open source project way more portable and friendly to potential contributors or future teammates.
True
Define wp-env configuration for performant local development and testing - I installed and tried out `@wordpress/env` and it amazed me how fast it performed and how easy it was to use! It uses Docker, yet it was much faster than my simple Docker Compose v3 configuration. My current configuration makes product demos really rough because requests are so slow (a common issue noted in the Docker community), triggers random cURL 28 timeout errors because of this, and causes WordPress itself to frequently log random network-related issues. Anyways... The config documentation is here: https://developer.wordpress.org/block-editor/reference-guides/packages/packages-env/#wp-env-json Including local environment configuration steps also makes the open source project way more portable and friendly to potential contributors or future teammates.
perf
define wp env configuration for performant local development and testing i installed and tried out using wordpress env and it amazed me how fast it performed and how easy it was to use it uses docker yet it was much faster than my simple docker compose configuration my current configuration makes product demos really rough with how slow it is to make requests a common issue noted in the docker community experiences random curl timeout errors because of this and causes wordpress itself to often randomly log network related issues anyways the config documentation is here including local environment configuration steps also makes the open source project way more portable and friendly to potential contributors or future teammates
1
64,937
26,922,254,709
IssuesEvent
2023-02-07 11:17:51
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Chrome UI Functional Tests.test/functional/apps/management/_scripted_fields·ts - management scripted fields creating and using Painless string scripted fields should visualize scripted field in vertical bar chart
loe:hours failed-test Team:AppServicesSv impact:medium Team:DataDiscovery :DataDiscovery/fix-it-week
A test failed on a tracked branch ``` Error: timed out waiting for lens visualization at onFailure (test/common/services/retry/retry_for_truthy.ts:39:13) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at retryForTruthy (test/common/services/retry/retry_for_truthy.ts:27:3) at RetryService.waitFor (test/common/services/retry/retry.ts:59:5) at Context.<anonymous> (test/functional/apps/management/_scripted_fields.ts:317:9) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - 8.5](https://buildkite.com/elastic/kibana-on-merge/builds/23708#018485c2-4ea9-43af-8ae5-0f73974e7a4c) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/management/_scripted_fields·ts","test.name":"management scripted fields creating and using Painless string scripted fields should visualize scripted field in vertical bar chart","test.failCount":2}} -->
1.0
Failing test: Chrome UI Functional Tests.test/functional/apps/management/_scripted_fields·ts - management scripted fields creating and using Painless string scripted fields should visualize scripted field in vertical bar chart - A test failed on a tracked branch ``` Error: timed out waiting for lens visualization at onFailure (test/common/services/retry/retry_for_truthy.ts:39:13) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at retryForTruthy (test/common/services/retry/retry_for_truthy.ts:27:3) at RetryService.waitFor (test/common/services/retry/retry.ts:59:5) at Context.<anonymous> (test/functional/apps/management/_scripted_fields.ts:317:9) at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - 8.5](https://buildkite.com/elastic/kibana-on-merge/builds/23708#018485c2-4ea9-43af-8ae5-0f73974e7a4c) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome UI Functional Tests.test/functional/apps/management/_scripted_fields·ts","test.name":"management scripted fields creating and using Painless string scripted fields should visualize scripted field in vertical bar chart","test.failCount":2}} -->
non_perf
failing test chrome ui functional tests test functional apps management scripted fields·ts management scripted fields creating and using painless string scripted fields should visualize scripted field in vertical bar chart a test failed on a tracked branch error timed out waiting for lens visualization at onfailure test common services retry retry for truthy ts at retryforsuccess test common services retry retry for success ts at retryfortruthy test common services retry retry for truthy ts at retryservice waitfor test common services retry retry ts at context test functional apps management scripted fields ts at object apply node modules kbn test target node src functional test runner lib mocha wrap function js first failure
0
39,472
20,010,898,967
IssuesEvent
2022-02-01 06:13:12
tailscale/tailscale
https://api.github.com/repos/tailscale/tailscale
closed
First request to subnet router routed IP fails after install
L3 Some users P2 Aggravating T3 Performance/Debugging bug
### What is the issue? After installing Tailscale (by following the process at https://login.tailscale.com/start), three of my coworkers (Mac, Monterey, Tailscale 1.18.2) have been unable to make a TCP connection to a host (`10.131.224.x`) that sits behind a subnet router (`10.131.0.0/16`, Linux, Tailscale 1.18.2) immediately after install. This only happens on initial install—if they disconnect and reconnect (2 of 3), or log out and back in (1 of 3), the connection works just fine. Trying the connection again doesn't work; only one of the actions in the previous sentence seems to work. I haven't had any reports of issues going forward after this initial install. Possibly confounding factor: the DNS name they are using to connect uses Tailscale's split DNS feature, although I can tell from the error messages they are seeing that the `10.131.224.x` IP is resolved correctly. ### Steps to reproduce Per above: - Create new account on login.tailscale.com/start - Download and install tailscale - Attempt to connect to a host behind a subnet router ### Are there any recent changes that introduced the issue? _No response_ ### OS macOS ### OS version _No response_ ### Tailscale version _No response_ ### Bug report _No response_
True
First request to subnet router routed IP fails after install - ### What is the issue? After installing Tailscale (by following the process at https://login.tailscale.com/start), three of my coworkers (Mac, Monterey, Tailscale 1.18.2) have been unable to make a TCP connection to a host (`10.131.224.x`) that sits behind a subnet router (`10.131.0.0/16`, Linux, Tailscale 1.18.2) immediately after install. This only happens on initial install—if they disconnect and reconnect (2 of 3), or log out and back in (1 of 3), the connection works just fine. Trying the connection again doesn't work; only one of the actions in the previous sentence seems to work. I haven't had any reports of issues going forward after this initial install. Possibly confounding factor: the DNS name they are using to connect uses Tailscale's split DNS feature, although I can tell from the error messages they are seeing that the `10.131.224.x` IP is resolved correctly. ### Steps to reproduce Per above: - Create new account on login.tailscale.com/start - Download and install tailscale - Attempt to connect to a host behind a subnet router ### Are there any recent changes that introduced the issue? _No response_ ### OS macOS ### OS version _No response_ ### Tailscale version _No response_ ### Bug report _No response_
perf
first request to subnet router routed ip fails after install what is the issue after installing tailscale by following the process at three of my coworkers mac monterey tailscale have been unable to make a tcp connection to a host x that sits behind a subnet router linux tailscale immediately after install this only happens on initial install—if they disconnect and reconnect of or log out and back in of the connection works just fine trying the connection again doesn t work only one of the actions in the previous sentence seem to work i haven t had any reports of issues going forward after this initial install possibly confounding factor the dns name they are using to connect uses tailscale s split dns feature although i can tell from the error messages that they are seeing that they are correctly resolving the x ip steps to reproduce per above create new account on login tailscale com start download and install tailscale attempt to connect to a host behind a subnet router are there any recent changes that introduced the issue no response os macos os version no response tailscale version no response bug report no response
1
445,820
12,836,484,735
IssuesEvent
2020-07-07 14:25:23
RichardFav/AnalysisGUI
https://api.github.com/repos/RichardFav/AnalysisGUI
opened
Reducing Function List to Final Release Version
HIGH priority
Removal of extraneous functions that won't be included in the final release of the Analysis GUI program.
1.0
Reducing Function List to Final Release Version - Removal of extraneous functions that won't be included in the final release of the Analysis GUI program.
non_perf
reducing function list to final release version removal of extraneous functions that won t be included in the final release of the analysis gui program
0
21,032
11,052,134,415
IssuesEvent
2019-12-10 08:46:36
neovim/neovim
https://api.github.com/repos/neovim/neovim
closed
netrw significantly slower than vim8 (w/default configs)
bug clipboard performance
- `nvim --version`: v0.5.0-dev (full output below) - `vim -u DEFAULTS` (version: ) behaves differently? N/A (`netrw` not loaded) - Operating system/version: macOS Mojave 10.14.6 (18G95) - Terminal name/version: Tried in alacritty 0.3.3 (71a818c), iTerm 3.3.5beta2, and Terminal 2.9.5 (421.2) - `$TERM`: alacritty, xterm-256color, and ansi respectively - `echo &clipboard`: <EMPTY> ### Steps to reproduce using `nvim -u NORC` ``` nvim -u NORC :Explore ``` ### Actual behaviour Neovim takes around 1s to load `netrw` and 1s or more to move between directories. ### Expected behaviour `netrw` should move directories almost instantly. This issue does not happen in vim8. Also doesn't happen when I add `let g:loaded_clipboard_provider = 1` to the top of my `init.vim`. Also tried updating `reattach-to-user-namespace` and added `set-option -g default-command "reattach-to-user-namespace -l zsh"` in my `tmux.conf` and nothing changed. From the output logs below it looks like something weird is going on with the clipboard, although since I don't get it with vim8 I think it's nvim specific. Possibly related issues: #1882, #6695, #3726, Possibly related PRs: #2809, #6892 ### Extra **`nvim --version`** ``` NVIM v0.5.0-dev Build type: Release LuaJIT 2.0.5 Compilation: /usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -I/tmp/neovim-20190924-99189-1y0mwkx/build/config -I/tmp/neovim-20190924-99189-1y0mwkx/src -I/usr/local/include -I/tmp/neovim-20190924-99189-1y0mwkx/deps-build/include -I/usr/local/opt/gettext/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include -I/tmp/neovim-20190924-99189-1y0mwkx/build/src/nvim/auto -I/tmp/neovim-20190924-99189-1y0mwkx/build/include Compiled by georgewitteman@George.W-MBPro Features: +acl +iconv +tui See ":help feature-compile" system vimrc file: "$VIM/sysinit.vim" fall-back for $VIM: "/usr/local/Cellar/neovim/HEAD-0ab7da8/share/nvim" Run :checkhealth for more info ``` ``` FUNCTIONS SORTED ON TOTAL TIME count total (s) self (s) function 42 1.138288 0.001465 provider#clipboard#Call() 42 1.135443 0.005160 <SNR>70_try_cmd() 40 1.080853 0.001290 4() 4 0.707367 0.000363 netrw#LocalBrowseCheck() 4 0.706806 0.001969 <SNR>68_NetrwBrowse() 10 0.550346 <SNR>68_NetrwOptionsSave() 14 0.483422 0.003829 <SNR>68_NetrwOptionsRestore() 333 0.478200 <SNR>68_NetrwRestoreSetting() 3 0.357461 0.001490 <SNR>68_NetrwBrowseChgDir() 4 0.276122 0.001994 <SNR>68_PerformListing() 3 0.269118 0.001007 <SNR>68_NetrwEnew() 1 0.256364 0.123082 netrw#Explore() 1 0.243573 0.000180 <SNR>34_opendir() 1 0.243361 0.000234 <SNR>68_NetrwBrowseUpDir() 4 0.212334 0.000909 <SNR>68_NetrwGetBuffer() 1 0.060355 0.001628 <SNR>68_NetrwBookHistSave() 2 0.055970 0.000090 5() 7 0.028496 0.001864 <SNR>68_NetrwOptionsSafe() 4 0.018156 0.009810 <SNR>68_LocalListing() 11 0.014094 <SNR>7_LoadFTPlugin() FUNCTIONS SORTED ON SELF TIME count total (s) self (s) function 10 0.550346 <SNR>68_NetrwOptionsSave() 333 0.478200 <SNR>68_NetrwRestoreSetting() 1 0.256364 0.123082 netrw#Explore() 11 0.014094 <SNR>7_LoadFTPlugin() 4 0.018156 0.009810 <SNR>68_LocalListing() 30 0.008647 StatusLineFilename() 11 0.009764 0.005432 <SNR>10_SynSet() 42 1.135443 0.005160 <SNR>70_try_cmd() 4 0.005164 0.005069 <SNR>68_NetrwMaps() 130 0.004821 <SNR>68_NetrwFile() 30 0.004132 StatusLine() 15 0.005871 0.003925 fugitive#Find() 14 0.483422 0.003829 <SNR>68_NetrwOptionsRestore() 8 0.004227 0.003624 <SNR>68_NetrwGlob() 147 0.003190 <SNR>68_NetrwSetSafeSetting() 13 0.003560 0.003165 <SNR>37_SetupAutoCommands() 9 0.003856 0.002700 FugitiveExtractGitDir() 4 0.002662 <SNR>68_NetrwSetSort() 4 0.002471 <SNR>68_NetrwListHide() 19 0.002378 <SNR>47_Highlight_Matching_Pair() ```
True
netrw significantly slower than vim8 (w/default configs) - - `nvim --version`: v0.5.0-dev (full output below) - `vim -u DEFAULTS` (version: ) behaves differently? N/A (`netrw` not loaded) - Operating system/version: macOS Mojave 10.14.6 (18G95) - Terminal name/version: Tried in alacritty 0.3.3 (71a818c), iTerm 3.3.5beta2, and Terminal 2.9.5 (421.2) - `$TERM`: alacritty, xterm-256color, and ansi respectively - `echo &clipboard`: <EMPTY> ### Steps to reproduce using `nvim -u NORC` ``` nvim -u NORC :Explore ``` ### Actual behaviour Neovim takes around 1s to load `netrw` and 1s or more to move between directories. ### Expected behaviour `netrw` should move directories almost instantly. This issue does not happen in vim8. Also doesn't happen when I add `let g:loaded_clipboard_provider = 1` to the top of my `init.vim`. Also tried updating `reattach-to-user-namespace` and added `set-option -g default-command "reattach-to-user-namespace -l zsh"` in my `tmux.conf` and nothing changed. From the output logs below it looks like something weird is going on with the clipboard, although since I don't get it with vim8 I think it's nvim specific. Possibly related issues: #1882, #6695, #3726, Possibly related PRs: #2809, #6892 ### Extra **`nvim --version`** ``` NVIM v0.5.0-dev Build type: Release LuaJIT 2.0.5 Compilation: /usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -I/tmp/neovim-20190924-99189-1y0mwkx/build/config -I/tmp/neovim-20190924-99189-1y0mwkx/src -I/usr/local/include -I/tmp/neovim-20190924-99189-1y0mwkx/deps-build/include -I/usr/local/opt/gettext/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include -I/tmp/neovim-20190924-99189-1y0mwkx/build/src/nvim/auto -I/tmp/neovim-20190924-99189-1y0mwkx/build/include Compiled by georgewitteman@George.W-MBPro Features: +acl +iconv +tui See ":help feature-compile" system vimrc file: "$VIM/sysinit.vim" fall-back for $VIM: "/usr/local/Cellar/neovim/HEAD-0ab7da8/share/nvim" Run :checkhealth for more info ``` ``` FUNCTIONS SORTED ON TOTAL TIME count total (s) self (s) function 42 1.138288 0.001465 provider#clipboard#Call() 42 1.135443 0.005160 <SNR>70_try_cmd() 40 1.080853 0.001290 4() 4 0.707367 0.000363 netrw#LocalBrowseCheck() 4 0.706806 0.001969 <SNR>68_NetrwBrowse() 10 0.550346 <SNR>68_NetrwOptionsSave() 14 0.483422 0.003829 <SNR>68_NetrwOptionsRestore() 333 0.478200 <SNR>68_NetrwRestoreSetting() 3 0.357461 0.001490 <SNR>68_NetrwBrowseChgDir() 4 0.276122 0.001994 <SNR>68_PerformListing() 3 0.269118 0.001007 <SNR>68_NetrwEnew() 1 0.256364 0.123082 netrw#Explore() 1 0.243573 0.000180 <SNR>34_opendir() 1 0.243361 0.000234 <SNR>68_NetrwBrowseUpDir() 4 0.212334 0.000909 <SNR>68_NetrwGetBuffer() 1 0.060355 0.001628 <SNR>68_NetrwBookHistSave() 2 0.055970 0.000090 5() 7 0.028496 0.001864 <SNR>68_NetrwOptionsSafe() 4 0.018156 0.009810 <SNR>68_LocalListing() 11 0.014094 <SNR>7_LoadFTPlugin() FUNCTIONS SORTED ON SELF TIME count total (s) self (s) function 10 0.550346 <SNR>68_NetrwOptionsSave() 333 0.478200 <SNR>68_NetrwRestoreSetting() 1 0.256364 0.123082 netrw#Explore() 11 0.014094 <SNR>7_LoadFTPlugin() 4 0.018156 0.009810 <SNR>68_LocalListing() 30 0.008647 StatusLineFilename() 11 0.009764 0.005432 <SNR>10_SynSet() 42 1.135443 0.005160 <SNR>70_try_cmd() 4 0.005164 0.005069 <SNR>68_NetrwMaps() 130 0.004821 <SNR>68_NetrwFile() 30 0.004132 StatusLine() 15 0.005871 0.003925 fugitive#Find() 14 0.483422 0.003829 <SNR>68_NetrwOptionsRestore() 8 0.004227 0.003624 <SNR>68_NetrwGlob() 147 0.003190 <SNR>68_NetrwSetSafeSetting() 13 0.003560 0.003165 <SNR>37_SetupAutoCommands() 9 0.003856 0.002700 FugitiveExtractGitDir() 4 0.002662 <SNR>68_NetrwSetSort() 4 0.002471 <SNR>68_NetrwListHide() 19 0.002378 <SNR>47_Highlight_Matching_Pair() ```
perf
netrw significantly slower than w default configs nvim version dev full output below vim u defaults version behaves differently n a netrw not loaded operating system version macos mojave terminal name version tried in alacritty iterm and terminal term alacritty xterm and ansi respectively echo clipboard steps to reproduce using nvim u norc nvim u norc explore actual behaviour neovim takes around to load netrw and or more to move between directories expected behaviour netrw should move directories almost instantly this issue does not happen in also doesn t happen when i add let g loaded clipboard provider to the top of my init vim also tried updating reattach to user namespace and added set option g default command reattach to user namespace l zsh in my tmux conf and nothing changed from the output logs below it looks like something weird is going on with the clipboard although since i don t get it with i think it s nvim specific possibly related issues possibly related prs extra nvim version nvim dev build type release luajit compilation usr local homebrew library homebrew shims mac super clang u fortify source d fortify source dndebug dmin log level wall wextra pedantic wno unused parameter wstrict prototypes std wshadow wconversion wmissing prototypes wimplicit fallthrough wvla fstack protector strong fdiagnostics color auto dinclude generated declarations d gnu source dnvim msgpack has dnvim unibi has var from i tmp neovim build config i tmp neovim src i usr local include i tmp neovim deps build include i usr local opt gettext include i library developer commandlinetools sdks sdk usr include i tmp neovim build src nvim auto i tmp neovim build include compiled by georgewitteman george w mbpro features acl iconv tui see help feature compile system vimrc file vim sysinit vim fall back for vim usr local cellar neovim head share nvim run checkhealth for more info functions sorted on total time count total s self s function provider clipboard call try cmd netrw localbrowsecheck netrwbrowse netrwoptionssave netrwoptionsrestore netrwrestoresetting netrwbrowsechgdir performlisting netrwenew netrw explore opendir netrwbrowseupdir netrwgetbuffer netrwbookhistsave netrwoptionssafe locallisting loadftplugin functions sorted on self time count total s self s function netrwoptionssave netrwrestoresetting netrw explore loadftplugin locallisting statuslinefilename synset try cmd netrwmaps netrwfile statusline fugitive find netrwoptionsrestore netrwglob netrwsetsafesetting setupautocommands fugitiveextractgitdir netrwsetsort netrwlisthide highlight matching pair
1
26,938
13,158,749,884
IssuesEvent
2020-08-10 14:47:30
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
distsql: FlowScheduler.ScheduleFlow is source of significant mutex contention
A-sql-execution C-performance
In an instance of TPC-E, I'm seeing that the mutex locking in `FlowScheduler.ScheduleFlow` [here](https://github.com/cockroachdb/cockroach/blob/54a923b26c0d8481e9e523c97e6fc016dfabb4b8/pkg/sql/flowinfra/flow_scheduler.go#L117) is the single largest source of mutex contention in the system, at a little over 15% of total mutex contention delay. This makes some sense, as TPC-E makes heavy use of DistSQL and this appears to be a serialization point between all DistSQL flows scheduled on a machine. I don't know this code, so I'm hoping to bring this to the attention of those that do (@asubiotto, @yuzefovich). Are there any easy wins here? Do we need to serialize the call to `Flow.Start` across all flows on a node? Does this call need to be protected by the mutex at all? If not, is the mutex only protecting `fs.mu.numRunning`? Can we manipulate this counter using atomics to avoid blocking in the happy path where `fs.canRunFlow(f) == true`?
True
distsql: FlowScheduler.ScheduleFlow is source of significant mutex contention - In an instance of TPC-E, I'm seeing that the mutex locking in `FlowScheduler.ScheduleFlow` [here](https://github.com/cockroachdb/cockroach/blob/54a923b26c0d8481e9e523c97e6fc016dfabb4b8/pkg/sql/flowinfra/flow_scheduler.go#L117) is the single largest source of mutex contention in the system, at a little over 15% of total mutex contention delay. This makes some sense, as TPC-E makes heavy use of DistSQL and this appears to be a serialization point between all DistSQL flows scheduled on a machine. I don't know this code, so I'm hoping to bring this to the attention of those that do (@asubiotto, @yuzefovich). Are there any easy wins here? Do we need to serialize the call to `Flow.Start` across all flows on a node? Does this call need to be protected by the mutex at all? If not, is the mutex only protecting `fs.mu.numRunning`? Can we manipulate this counter using atomics to avoid blocking in the happy path where `fs.canRunFlow(f) == true`?
perf
distsql flowscheduler scheduleflow is source of significant mutex contention in an instance of tpc e i m seeing that the mutex locking in flowscheduler scheduleflow is the single largest source of mutex contention in the system at a little over of total mutex contention delay this makes some sense as tpc e makes heavy use of distsql and this appears to be a serialization point between all distsql flows scheduled on a machine i don t know this code so i m hoping to bring this to the attention of those that do asubiotto yuzefovich are there any easy wins here do we need to serialize the call to flow start across all flows on a node does this call need to be protected by the mutex at all if not is the mutex only protecting fs mu numrunning can we manipulate this counter using atomics to avoid blocking in the happy path where fs canrunflow f true
1
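As a rough illustration of the atomics idea floated at the end of the issue above, here is a hedged Go sketch. All names are hypothetical and this is not CockroachDB's actual code: the happy path of `ScheduleFlow` admits a flow with a compare-and-swap on the running counter and only falls back to the mutex when the scheduler is at capacity.

```go
package flowsched

import (
	"sync"
	"sync/atomic"
)

// Hypothetical sketch: keep the running-flow counter in an atomic so the
// happy path avoids the scheduler mutex entirely.
type FlowScheduler struct {
	maxRunning int64
	numRunning int64 // accessed atomically

	mu     sync.Mutex
	queued []func() // flows waiting for capacity; guarded by mu
}

// ScheduleFlow starts f immediately when there is capacity, taking no lock
// on the happy path; otherwise it falls back to the mutex-protected queue.
func (fs *FlowScheduler) ScheduleFlow(f func()) {
	for {
		n := atomic.LoadInt64(&fs.numRunning)
		if n >= fs.maxRunning {
			break // no capacity: fall through to the slow path
		}
		if atomic.CompareAndSwapInt64(&fs.numRunning, n, n+1) {
			go func() {
				defer fs.flowDone()
				f()
			}()
			return
		}
		// CAS lost a race with another caller; retry.
	}

	fs.mu.Lock()
	fs.queued = append(fs.queued, f)
	fs.mu.Unlock()
}

func (fs *FlowScheduler) flowDone() {
	atomic.AddInt64(&fs.numRunning, -1)
	// A real scheduler would now pop and start a queued flow, which still
	// needs the mutex; the point is only that the uncontended path above
	// never touches it.
}
```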
23,068
11,835,017,033
IssuesEvent
2020-03-23 09:56:04
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
State resolution on big rooms can get very CPU-hungry
p1 performance
Sometime between 1.9 and 1.10 (I think) `abolivier.bzh` started becoming very hungry on the box's CPU. I used to host it on a VPS that had 1 vCore and it was perfectly happy with it, but lately it became so unresponsive so often (clearly due to being CPU-bound) that last week I had to move it to a new box with 2 vCores. However, even though Synapse now responds in a more timely manner and doesn't seem as CPU-bound as it used to be, it still spends around half of the time using up 80% of the CPU resources and messages stay gray in Riot for maybe 5-15s (despite Grafana saying that the event took less than a second to send). From the graphs it looks like these spikes are due to the `persist_events` background job, and the `persist_events` and `state._resolve_events` indexes. ![image](https://user-images.githubusercontent.com/5547783/74758065-bbde2700-526e-11ea-8d71-d07e39aafb78.png) ![image](https://user-images.githubusercontent.com/5547783/74758122-d0222400-526e-11ea-868a-386889db164d.png) It looks like these have matching spikes in the "DB time" graphs, though I'm not sure what could have happened since I don't remember updating PostgreSQL or its config on that box around that time. I'm happy to pair with another member of the backend team to look at what might go wrong here, if that can help.
True
State resolution on big rooms can get very CPU-hungry - Sometime between 1.9 and 1.10 (I think) `abolivier.bzh` started becoming very hungry on the box's CPU. I used to host it on a VPS that had 1 vCore and it was perfectly happy with it, but lately it became so unresponsive so often (clearly due to being CPU-bound) that last week I had to move it to a new box with 2 vCores. However, even though Synapse now responds in a more timely manner and doesn't seem as CPU-bound as it used to be, it still spends around half of the time using up 80% of the CPU resources and messages stay gray in Riot for maybe 5-15s (despite Grafana saying that the event took less than a second to send). From the graphs it looks like these spikes are due to the `persist_events` background job, and the `persist_events` and `state._resolve_events` indexes. ![image](https://user-images.githubusercontent.com/5547783/74758065-bbde2700-526e-11ea-8d71-d07e39aafb78.png) ![image](https://user-images.githubusercontent.com/5547783/74758122-d0222400-526e-11ea-868a-386889db164d.png) It looks like these have matching spikes in the "DB time" graphs, though I'm not sure what could have happened since I don't remember updating PostgreSQL or its config on that box around that time. I'm happy to pair with another member of the backend team to look at what might go wrong here, if that can help.
perf
state resolution on big rooms can get very cpu hungry sometime between and i think abolivier bzh started becoming very hungry on the boxe s cpu i used to host it on a vps that had vcore and it was perfectly happy with it but lately it became so unresponsive so often clearly due to being cpu bound that last week i had to move it to a new box with vcores however even though synapse now responds in a more timely manner and doesn t seem as cpu bound as it used to be it still spends around half of the time using up of the cpu resources and messages stay gray in riot for maybe despite grafana saying that the event took less than a second to send from the graphs it looks like these spikes are due to the persist events background job and the persist events and state resolve events indexes it looks like these have matching spikes in the db time graphs though i m not sure what could have happened since i don t remember updating postgresql or its config on that box around that time i m happy to pair with another member of the backend team to look at what might go wrong here if that can help
1
165,759
26,222,426,972
IssuesEvent
2023-01-04 15:49:49
kodadot/nft-gallery
https://api.github.com/repos/kodadot/nft-gallery
closed
Redesign congrats message after mint
$ p4 redesign design-request
> might need to rework this message design ping 🖼️ @exezbcz Yes I was thinking same! Once @exezbcz drops new design we can put this into work _Originally posted by @yangwao in https://github.com/kodadot/nft-gallery/pull/4250#issuecomment-1303317396_
2.0
Redesign congrats message after mint - > might need to rework this message design ping 🖼️ @exezbcz Yes I was thinking same! Once @exezbcz drops new design we can put this into work _Originally posted by @yangwao in https://github.com/kodadot/nft-gallery/pull/4250#issuecomment-1303317396_
non_perf
redesign congrats message after mint might need to rework this message design ping 🖼️ exezbcz yes i was thinking same once exezbcz drops new design we can put this into work originally posted by yangwao in
0
537,621
15,731,971,584
IssuesEvent
2021-03-29 17:43:37
Quansight/qhub-cloud
https://api.github.com/repos/Quansight/qhub-cloud
closed
Storing example-user initial password somehow
priority: high
Right now `qhub init ...` will generate a random password for the user. See https://github.com/Quansight/qhub-cloud/runs/2218787116?check_suite_focus=true#step:13:9 and it makes it hard for automation and for the user to later reference if they forget to copy the password down. Write down the password locally in `QHUB_DEFAULT_PASSWORD`.
1.0
Storing example-user initial password somehow - Right now `qhub init ...` will generate a random password for the user. See https://github.com/Quansight/qhub-cloud/runs/2218787116?check_suite_focus=true#step:13:9 and it makes it hard for automation and for the user to later reference if they forget to copy the password down. Write down the password locally in `QHUB_DEFAULT_PASSWORD`.
non_perf
storing example user initial password somehow right now qhub init will generate a random password for the user see and it makes it hard for automation and for the user to later reference if they forget to copy the password down write down the password locally in qhub default password
0
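A minimal sketch of the requested behaviour, under the assumption that `QHUB_DEFAULT_PASSWORD` is an environment variable and that "writing it down locally" means a file next to the config. It is written in Go purely for illustration (qhub itself is Python), and the output file name is invented:

```go
package qhubinit

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"os"
)

// defaultPassword honours a password supplied via QHUB_DEFAULT_PASSWORD;
// otherwise it generates a random one and records it in a local file so
// the user can find it again later.
func defaultPassword() (string, error) {
	if pw := os.Getenv("QHUB_DEFAULT_PASSWORD"); pw != "" {
		return pw, nil // automation supplied the password up front
	}
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", fmt.Errorf("generating password: %w", err)
	}
	pw := base64.RawURLEncoding.EncodeToString(buf)
	// 0600 keeps the recorded password readable only by the current user.
	if err := os.WriteFile("qhub-default-password.txt", []byte(pw), 0o600); err != nil {
		return "", fmt.Errorf("recording password: %w", err)
	}
	return pw, nil
}
```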
2,031
3,254,496,749
IssuesEvent
2015-10-20 00:35:14
rethinkdb/rethinkdb
https://api.github.com/repos/rethinkdb/rethinkdb
closed
Secondary index traversal takes 50% longer than primary index traversal
st:review tp:performance
On a table with 1 million documents and a simple secondary index on a numeric field, I'm getting the following performance: ```js r.table("test2").map(r.row.toJsonString()).count() -> ~2.0s ``` ```js r.table("test2").between(1, 5, {index: 'survey_id'}).map(r.row.toJsonString()).count() -> ~3.1s ``` I used the script from https://github.com/rethinkdb/rethinkdb/issues/4569#issuecomment-141510548 to generate the data. I wouldn't expect such a drastic slow-down from traversing the secondary rather than the primary index tree.
True
Secondary index traversal takes 50% longer than primary index traversal - On a table with 1 million documents and a simple secondary index on a numeric field, I'm getting the following performance: ```js r.table("test2").map(r.row.toJsonString()).count() -> ~2.0s ``` ```js r.table("test2").between(1, 5, {index: 'survey_id'}).map(r.row.toJsonString()).count() -> ~3.1s ``` I used the script from https://github.com/rethinkdb/rethinkdb/issues/4569#issuecomment-141510548 to generate the data. I wouldn't expect such a drastic slow-down from traversing the secondary rather than the primary index tree.
perf
secondary index traversal takes longer than primary index traversal on a table with million documents and a simple secondary index on a numeric field i m getting the following performance js r table map r row tojsonstring count js r table between index survey id map r row tojsonstring count i used the script from to generate the data i wouldn t expect such a drastic slow down from traversing the secondary rather than the primary index tree
1
6,678
5,580,029,353
IssuesEvent
2017-03-28 15:45:14
jruby/jruby
https://api.github.com/repos/jruby/jruby
closed
Performance regression when going from 9.1.7.0 to 9.1.8.0 with virtus gem
performance regression
### Environment Rubies: * `jruby 9.1.8.0 (2.3.1) 2017-03-06 90fc7ab Java HotSpot(TM) 64-Bit Server VM 25.121-b13 on 1.8.0_121-b13 +jit [linux-x86_64]` * `jruby 9.1.7.0 (2.3.1) 2017-01-11 68056ae Java HotSpot(TM) 64-Bit Server VM 25.121-b13 on 1.8.0_121-b13 +jit [linux-x86_64]` Kernel: `Linux maruchan 4.10.0-11-generic #13-Ubuntu SMP Wed Mar 1 21:27:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux` (`Ubuntu 16.10`). ### Expected Behavior After upgrading a from 9.1.7.0 to 9.1.8.0 I experienced a massive slowdown in processing time on a production application. I was able to reduce this to the usage of the `virtus` gem, but haven't yet been able to pin down the issue. Test case (using `benchmark-ips` and `virtus`): ```ruby require 'benchmark/ips' require 'virtus' class TestModel include Virtus.model attribute :id, String end Benchmark.ips do |benchmark| benchmark.time = 15 benchmark.warmup = 15 benchmark.report(JRUBY_VERSION) { TestModel.new } benchmark.compare! end ``` ### Actual Behavior Output on 9.1.7.0: ``` Warming up -------------------------------------- 9.1.7.0 14.921k i/100ms Calculating ------------------------------------- 9.1.7.0 182.901k (± 5.7%) i/s - 2.745M in 15.066610s ``` Output in 9.1.8.0: ``` Warming up -------------------------------------- 9.1.8.0 480.000 i/100ms Calculating ------------------------------------- 9.1.8.0 5.149k (± 7.3%) i/s - 76.800k in 15.000051s ```
True
Performance regression when going from 9.1.7.0 to 9.1.8.0 with virtus gem - ### Environment Rubies: * `jruby 9.1.8.0 (2.3.1) 2017-03-06 90fc7ab Java HotSpot(TM) 64-Bit Server VM 25.121-b13 on 1.8.0_121-b13 +jit [linux-x86_64]` * `jruby 9.1.7.0 (2.3.1) 2017-01-11 68056ae Java HotSpot(TM) 64-Bit Server VM 25.121-b13 on 1.8.0_121-b13 +jit [linux-x86_64]` Kernel: `Linux maruchan 4.10.0-11-generic #13-Ubuntu SMP Wed Mar 1 21:27:28 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux` (`Ubuntu 16.10`). ### Expected Behavior After upgrading a from 9.1.7.0 to 9.1.8.0 I experienced a massive slowdown in processing time on a production application. I was able to reduce this to the usage of the `virtus` gem, but haven't yet been able to pin down the issue. Test case (using `benchmark-ips` and `virtus`): ```ruby require 'benchmark/ips' require 'virtus' class TestModel include Virtus.model attribute :id, String end Benchmark.ips do |benchmark| benchmark.time = 15 benchmark.warmup = 15 benchmark.report(JRUBY_VERSION) { TestModel.new } benchmark.compare! end ``` ### Actual Behavior Output on 9.1.7.0: ``` Warming up -------------------------------------- 9.1.7.0 14.921k i/100ms Calculating ------------------------------------- 9.1.7.0 182.901k (± 5.7%) i/s - 2.745M in 15.066610s ``` Output in 9.1.8.0: ``` Warming up -------------------------------------- 9.1.8.0 480.000 i/100ms Calculating ------------------------------------- 9.1.8.0 5.149k (± 7.3%) i/s - 76.800k in 15.000051s ```
perf
performance regression when going from to with virtus gem environment rubies jruby java hotspot tm bit server vm on jit jruby java hotspot tm bit server vm on jit kernel linux maruchan generic ubuntu smp wed mar utc gnu linux ubuntu expected behavior after upgrading a from to i experienced a massive slowdown in processing time on a production application i was able to reduce this to the usage of the virtus gem but haven t yet been able to pin down the issue test case using benchmark ips and virtus ruby require benchmark ips require virtus class testmodel include virtus model attribute id string end benchmark ips do benchmark benchmark time benchmark warmup benchmark report jruby version testmodel new benchmark compare end actual behavior output on warming up i calculating ± i s in output in warming up i calculating ± i s in
1
133,893
18,363,445,273
IssuesEvent
2021-10-09 16:33:22
tuanducdesign/serviceapp
https://api.github.com/repos/tuanducdesign/serviceapp
opened
CVE-2018-19797 (Medium) detected in node-sass-4.14.0.tgz, opennmsopennms-source-26.0.0-1
security vulnerability
## CVE-2018-19797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.0.tgz</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary> <p> <details><summary><b>node-sass-4.14.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz</a></p> <p>Path to dependency file: serviceapp/package.json</p> <p>Path to vulnerable library: /node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.14.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tuanducdesign/serviceapp/commit/9c49706d08f6181a261d95ef013b335d21707fb3">9c49706d08f6181a261d95ef013b335d21707fb3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file. <p>Publish Date: 2018-12-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19797>CVE-2018-19797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p> <p>Release Date: 2018-12-03</p> <p>Fix Resolution: libsass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-19797 (Medium) detected in node-sass-4.14.0.tgz, opennmsopennms-source-26.0.0-1 - ## CVE-2018-19797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.0.tgz</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary> <p> <details><summary><b>node-sass-4.14.0.tgz</b></p></summary> <p>Wrapper around libsass</p> <p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.0.tgz</a></p> <p>Path to dependency file: serviceapp/package.json</p> <p>Path to vulnerable library: /node_modules/node-sass/package.json</p> <p> Dependency Hierarchy: - :x: **node-sass-4.14.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/tuanducdesign/serviceapp/commit/9c49706d08f6181a261d95ef013b335d21707fb3">9c49706d08f6181a261d95ef013b335d21707fb3</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In LibSass 3.5.5, a NULL Pointer Dereference in the function Sass::Selector_List::populate_extends in SharedPtr.hpp (used by ast.cpp and ast_selectors.cpp) may cause a Denial of Service (application crash) via a crafted sass input file. <p>Publish Date: 2018-12-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19797>CVE-2018-19797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sass/libsass/releases/tag/3.6.0">https://github.com/sass/libsass/releases/tag/3.6.0</a></p> <p>Release Date: 2018-12-03</p> <p>Fix Resolution: libsass - 3.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
cve medium detected in node sass tgz opennmsopennms source cve medium severity vulnerability vulnerable libraries node sass tgz opennmsopennms source node sass tgz wrapper around libsass library home page a href path to dependency file serviceapp package json path to vulnerable library node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details in libsass a null pointer dereference in the function sass selector list populate extends in sharedptr hpp used by ast cpp and ast selectors cpp may cause a denial of service application crash via a crafted sass input file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
0
54,499
30,213,501,834
IssuesEvent
2023-07-05 14:12:16
enso-org/enso
https://api.github.com/repos/enso-org/enso
closed
Make sure long running Table operations are interruptible
-libs --low-performance
[Our engine uses safepoints to interrupt running threads](https://github.com/enso-org/enso/blob/develop/engine/runtime/src/main/java/org/enso/interpreter/runtime/ThreadManager.java#L55-L81) when an existing job has to be cancelled. These safepoints are polled by our Truffle interpreter around method calls allowing easy interruption of Enso code. However, once we enter Polyglot Java code - these invocations were not interruptible. This is especially problematic as these interrupts are used to cancel execution when some parameters of a computation have been changed and it needs to be recomputed. If the computation is not interruptible it has to be finished first which is a waste of time and resources. 'Ironically' many of the most intensive and possibly long running computations _are written in Java_ (for performance) - that includes many Column operations, joins, current implementation of aggregates etc. This means these likely long running computations cannot be interrupted. Apparently this can be solved quite easily - [`Context::safepoint`](https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Context.html#safepoint--) can be called periodically in the polyglot Java helpers to poll for the safepoints and allow the interrupts. I think we should add a call to `Context::safepoint` every few iterations in our long running operations. This should help with making the IDE more responsive. - [x] Create a scenario to check the interrupts (e.g. a Java method that runs for a few minutes, just sleeping). i. Try running it as-is and verify that once we change the inputs, it still waits until finish - wasting time and resources. ii. Extend it with periodic calls to `safepoint`, possibly printing something to logs to make it clear it works. iii. Run it again and verify that now it is interrupted immediately. - [ ] Adapt our long running operations in Table - `map`, `zip`, `join`, `aggregate` etc. to call `safepoint`. - We can try calling it on every iteration. If we realise that it is too slow, we can instead call it every 10 or 100 iterations. - We should go through the Table library and add the safepoints in all long running computations. This will include also: file reading (whether we should interrupt writing is a separate question - there for data integrity it may be better to keep it not-interruptible); - essentially every for loop that does not have a constant / usually small bound (i.e. probably searching for columns does not necessary need to be interruptible as the column count should be relatively small - although maybe it is worth making it as well...) but definitely loops bound by row count which may be large. - [x] Check some benchmarks to see if there is any noticeable difference. This can help us tell if we should run it on every iteration or every 10 to decrease any potential impact (ideally it should be negligible).
True
Make sure long running Table operations are interruptible - [Our engine uses safepoints to interrupt running threads](https://github.com/enso-org/enso/blob/develop/engine/runtime/src/main/java/org/enso/interpreter/runtime/ThreadManager.java#L55-L81) when an existing job has to be cancelled. These safepoints are polled by our Truffle interpreter around method calls allowing easy interruption of Enso code. However, once we enter Polyglot Java code - these invocations were not interruptible. This is especially problematic as these interrupts are used to cancel execution when some parameters of a computation have been changed and it needs to be recomputed. If the computation is not interruptible it has to be finished first which is a waste of time and resources. 'Ironically' many of the most intensive and possibly long running computations _are written in Java_ (for performance) - that includes many Column operations, joins, current implementation of aggregates etc. This means these likely long running computations cannot be interrupted. Apparently this can be solved quite easily - [`Context::safepoint`](https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Context.html#safepoint--) can be called periodically in the polyglot Java helpers to poll for the safepoints and allow the interrupts. I think we should add a call to `Context::safepoint` every few iterations in our long running operations. This should help with making the IDE more responsive. - [x] Create a scenario to check the interrupts (e.g. a Java method that runs for a few minutes, just sleeping). i. Try running it as-is and verify that once we change the inputs, it still waits until finish - wasting time and resources. ii. Extend it with periodic calls to `safepoint`, possibly printing something to logs to make it clear it works. iii. Run it again and verify that now it is interrupted immediately. - [ ] Adapt our long running operations in Table - `map`, `zip`, `join`, `aggregate` etc. to call `safepoint`. - We can try calling it on every iteration. If we realise that it is too slow, we can instead call it every 10 or 100 iterations. - We should go through the Table library and add the safepoints in all long running computations. This will include also: file reading (whether we should interrupt writing is a separate question - there for data integrity it may be better to keep it not-interruptible); - essentially every for loop that does not have a constant / usually small bound (i.e. probably searching for columns does not necessary need to be interruptible as the column count should be relatively small - although maybe it is worth making it as well...) but definitely loops bound by row count which may be large. - [x] Check some benchmarks to see if there is any noticeable difference. This can help us tell if we should run it on every iteration or every 10 to decrease any potential impact (ideally it should be negligible).
perf
make sure long running table operations are interruptible when an existing job has to be cancelled these safepoints are polled by our truffle interpreter around method calls allowing easy interruption of enso code however once we enter polyglot java code these invocations were not interruptible this is especially problematic as these interrupts are used to cancel execution when some parameters of a computation have been changed and it needs to be recomputed if the computation is not interruptible it has to be finished first which is a waste of time and resources ironically many of the most intensive and possibly long running computations are written in java for performance that includes many column operations joins current implementation of aggregates etc this means these likely long running computations cannot be interrupted apparently this can be solved quite easily can be called periodically in the polyglot java helpers to poll for the safepoints and allow the interrupts i think we should add a call to context safepoint every few iterations in our long running operations this should help with making the ide more responsive create a scenario to check the interrupts e g a java method that runs for a few minutes just sleeping i try running it as is and verify that once we change the inputs it still waits until finish wasting time and resources ii extend it with periodic calls to safepoint possibly printing something to logs to make it clear it works iii run it again and verify that now it is interrupted immediately adapt our long running operations in table map zip join aggregate etc to call safepoint we can try calling it on every iteration if we realise that it is too slow we can instead call it every or iterations we should go through the table library and add the safepoints in all long running computations this will include also file reading whether we should interrupt writing is a separate question there for data integrity it may be better to keep it not interruptible essentially every for loop that does not have a constant usually small bound i e probably searching for columns does not necessary need to be interruptible as the column count should be relatively small although maybe it is worth making it as well but definitely loops bound by row count which may be large check some benchmarks to see if there is any noticeable difference this can help us tell if we should run it on every iteration or every to decrease any potential impact ideally it should be negligible
1
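The actual fix described above lives in Java against GraalVM's polyglot `Context`, but the pattern being asked for (poll a cancellation point every few iterations of a row-bound loop) is language-neutral. Here is a hedged Go sketch of that pattern, with hypothetical names and `context.Context` standing in for `Context::safepoint`:

```go
package interrupt

import "context"

// Illustrative sketch (not Enso's actual Java code): a long row-bound loop
// that polls a cancellation point every few iterations, so a recomputation
// can be abandoned promptly instead of running to completion.
const safepointInterval = 100 // poll every 100 rows; tune via benchmarks

func mapColumn(ctx context.Context, in []int64, f func(int64) int64) ([]int64, error) {
	out := make([]int64, len(in))
	for i, v := range in {
		if i%safepointInterval == 0 {
			// The analogue of Context.safepoint(): give the runtime a
			// chance to interrupt us between batches of work.
			if err := ctx.Err(); err != nil {
				return nil, err
			}
		}
		out[i] = f(v)
	}
	return out, nil
}
```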
37,392
12,477,673,363
IssuesEvent
2020-05-29 15:20:31
scriptex/webpack-mpa-next
https://api.github.com/repos/scriptex/webpack-mpa-next
closed
WS-2020-0091 (High) detected in http-proxy-1.15.2.tgz
security vulnerability
## WS-2020-0091 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.15.2.tgz</b></p></summary> <p>HTTP proxying for the masses</p> <p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/webpack-mpa-next/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/webpack-mpa-next/node_modules/http-proxy/package.json</p> <p> Dependency Hierarchy: - browser-sync-2.26.7.tgz (Root Library) - :x: **http-proxy-1.15.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/b69cf6d725c5c3a1e9e7abf52146af88e38b2701">b69cf6d725c5c3a1e9e7abf52146af88e38b2701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function. <p>Publish Date: 2020-05-14 <p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p> <p>Release Date: 2020-05-26</p> <p>Fix Resolution: http-proxy - 1.18.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2020-0091 (High) detected in http-proxy-1.15.2.tgz - ## WS-2020-0091 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-proxy-1.15.2.tgz</b></p></summary> <p>HTTP proxying for the masses</p> <p>Library home page: <a href="https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz">https://registry.npmjs.org/http-proxy/-/http-proxy-1.15.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/webpack-mpa-next/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/webpack-mpa-next/node_modules/http-proxy/package.json</p> <p> Dependency Hierarchy: - browser-sync-2.26.7.tgz (Root Library) - :x: **http-proxy-1.15.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/b69cf6d725c5c3a1e9e7abf52146af88e38b2701">b69cf6d725c5c3a1e9e7abf52146af88e38b2701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of http-proxy prior to 1.18.1 are vulnerable to Denial of Service. An HTTP request with a long body triggers an ERR_HTTP_HEADERS_SENT unhandled exception that crashes the proxy server. This is only possible when the proxy server sets headers in the proxy request using the proxyReq.setHeader function. <p>Publish Date: 2020-05-14 <p>URL: <a href=https://github.com/http-party/node-http-proxy/pull/1447>WS-2020-0091</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1486">https://www.npmjs.com/advisories/1486</a></p> <p>Release Date: 2020-05-26</p> <p>Fix Resolution: http-proxy - 1.18.1 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
ws high detected in http proxy tgz ws high severity vulnerability vulnerable library http proxy tgz http proxying for the masses library home page a href path to dependency file tmp ws scm webpack mpa next package json path to vulnerable library tmp ws scm webpack mpa next node modules http proxy package json dependency hierarchy browser sync tgz root library x http proxy tgz vulnerable library found in head commit a href vulnerability details versions of http proxy prior to are vulnerable to denial of service an http request with a long body triggers an err http headers sent unhandled exception that crashes the proxy server this is only possible when the proxy server sets headers in the proxy request using the proxyreq setheader function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http proxy step up your open source security game with whitesource
0
12,855
8,017,471,160
IssuesEvent
2018-07-25 16:00:39
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
distsqlrun: distsqlreceiver should be run from main goroutine
A-sql-execution C-cleanup C-performance
The last component in the gateway node of a DistSQL flow is currently run in its own goroutine. This is needless synchronization overhead, especially for very simple flows - the last flow should be run synchronously by the executor's goroutine.
True
distsqlrun: distsqlreceiver should be run from main goroutine - The last component in the gateway node of a DistSQL flow is currently run in its own goroutine. This is needless synchronization overhead, especially for very simple flows - the last flow should be run synchronously by the executor's goroutine.
perf
distsqlrun distsqlreceiver should be run from main goroutine the last component in the gateway node of a distsql flow is currently run in its own goroutine this is needless synchronization overhead especially for very simple flows the last flow should be run synchronously by the executor s goroutine
1
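The change the cockroachdb issue above describes — running the final component of a flow on the caller's goroutine instead of spawning a new one — can be illustrated with a minimal Go sketch. The `processor` interface and all names below are hypothetical stand-ins, not the actual DistSQL types.

```go
package main

import (
	"fmt"
	"sync"
)

// processor is a hypothetical stand-in for one DistSQL flow component.
type processor interface {
	Run()
}

type printProc struct{ name string }

func (p printProc) Run() { fmt.Println("running", p.name) }

// runFlow starts every processor except the last on its own goroutine,
// then runs the last one inline on the caller's goroutine, saving one
// goroutine spawn plus the synchronization needed to wait for it.
func runFlow(procs []processor) {
	var wg sync.WaitGroup
	for _, p := range procs[:len(procs)-1] {
		wg.Add(1)
		go func(p processor) {
			defer wg.Done()
			p.Run()
		}(p)
	}
	procs[len(procs)-1].Run() // the receiver runs synchronously
	wg.Wait()
}

func main() {
	runFlow([]processor{printProc{"scan"}, printProc{"join"}, printProc{"receiver"}})
}
```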
41,484
21,710,755,525
IssuesEvent
2022-05-10 13:42:13
dafny-lang/dafny
https://api.github.com/repos/dafny-lang/dafny
closed
Enable verifying proofs in parallel, in similar or in different modules, using a single thread and solver pool
performance verification story
Dafny currently has the options `vcsCores` and `vcsLoad` that will let it verify proofs that occur inside the same module, in parallel, although it will still verify proofs in separate modules sequentially. When the `vcsCores` or `vcsLoad` option is used, Dafny should also verify proofs in different modules in parallel, and it should not require additional thread or checker pools for this. ### Out of scope Parsing files and resolving or compiling modules in parallel is out of scope for this story. ### Implementation suggestions - Remove references to `CommandLineOptions.Clo` in Boogie (392 in total). This requires chaining an options object through the Boogie call graph. This requires a lot of simple code changes and is, I expect, the bulk of the work. - Remove references to `Console.Out` in Boogie (57 in total). This is simple once an options object is already chained through the Boogie call graph. - Change Boogie's `InferAndVerify` so it returns an `IReadOnlyList<ProofExecution>`. A `ProofExecution` represents a verification condition. It contains all the information needed to identify a verification condition and a `Task` that can be used to track the state of this verification. `ExecutionEngine.VerifyImplementation` and `ConditionGeneration.VerifyImplementation` must also be changed to return an `IReadOnlyList<ProofExecution>`. - Change Boogie's `InferAndVerify` so it takes an additional `CheckerPool` argument. Change Dafny to create a single CheckerPool which it passes in all invocations of `InferAndVerify`. - In Dafny, thread Boogie's Task objects up the call chain to `DafnyDriver.Boogie`, and there wait for all of them to complete. - A reporting class that can handle reporting verification progress for different Boogie modules concurrently must be added to Dafny. - If needed, configure a ThreadPoolScheduler in Dafny that creates threads with increased memory to prevent stack overflows.
True
Enable verifying proofs in parallel, in similar or in different modules, using a single thread and solver pool - Dafny currently has the options `vcsCores` and `vcsLoad` that will let it verify proofs that occur inside the same module, in parallel, although it will still verify proofs in separate modules sequentially. When the `vcsCores` or `vcsLoad` option is used, Dafny should also verify proofs in different modules in parallel, and it should not require additional thread or checker pools for this. ### Out of scope Parsing files and resolving or compiling modules in parallel is out of scope for this story. ### Implementation suggestions - Remove references to `CommandLineOptions.Clo` in Boogie (392 in total). This requires chaining an options object through the Boogie call graph. This requires a lot of simple code changes and is, I expect, the bulk of the work. - Remove references to `Console.Out` in Boogie (57 in total). This is simple once an options object is already chained through the Boogie call graph. - Change Boogie's `InferAndVerify` so it returns an `IReadOnlyList<ProofExecution>`. A `ProofExecution` represents a verification condition. It contains all the information needed to identify a verification condition and a `Task` that can be used to track the state of this verification. `ExecutionEngine.VerifyImplementation` and `ConditionGeneration.VerifyImplementation` must also be changed to return an `IReadOnlyList<ProofExecution>`. - Change Boogie's `InferAndVerify` so it takes an additional `CheckerPool` argument. Change Dafny to create a single CheckerPool which it passes in all invocations of `InferAndVerify`. - In Dafny, thread Boogie's Task objects up the call chain to `DafnyDriver.Boogie`, and there wait for all of them to complete. - A reporting class that can handle reporting verification progress for different Boogie modules concurrently must be added to Dafny. - If needed, configure a ThreadPoolScheduler in Dafny that creates threads with increased memory to prevent stack overflows.
perf
enable verifying proofs in parallel in similar or in different modules using a single thread and solver pool dafny currently has the options vcscores and vcsload that will let it verify proofs that occur inside the same module in parallel although it will still verify proofs in separate modules sequentially when the vcscores or vcsload option is used dafny should also verify proofs in different modules in parallel and it should not require additional thread or checker pools for this out of scope parsing files and resolving or compiling modules in parallel is out of scope for this story implementation suggestions remove references to commandlineoptions clo in boogie in total this requires chaining an options object through the boogie call graph this requires a lot of simple code changes and is i expect the bulk of the work remove references to console out in boogie in total this is simple once an options object is already chained through the boogie call graph change boogie s inferandverify so it returns an ireadonlylist a proofexecution represents a verification condition it contains all the information needed to identify a verification condition and a task that can be used to track the state of this verification executionengine verifyimplementation and conditiongeneration verifyimplementation must also be changed to return an ireadonlylist change boogie s inferandverify so it takes an additional checkerpool argument change dafny to create a single checkerpool which it passes in all invocations of inferandverify in dafny thread boogie s task objects up the call chain to dafnydriver boogie and there wait for all of them to complete a reporting class that can handle reporting verification progress for different boogie modules concurrently must be added to dafny if needed configure a threadpoolscheduler in dafny that creates threads with increased memory to prevent stack overflows
1
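The single shared pool that the Dafny issue above asks for — proofs from different modules verified concurrently, bounded by one set of checkers — might look like the following Go sketch. `Proof` and `CheckerPool` are illustrative names only; the actual work is in C# and consists of threading an options object and one pool through Boogie.

```go
package main

import (
	"fmt"
	"sync"
)

// Proof is a hypothetical verification condition from any module.
type Proof struct{ Module, Name string }

// CheckerPool bounds concurrent solver use across all modules with a
// single shared pool, instead of one thread/checker pool per module.
type CheckerPool struct{ sem chan struct{} }

func NewCheckerPool(size int) *CheckerPool {
	return &CheckerPool{sem: make(chan struct{}, size)}
}

// Verify acquires a checker slot, "verifies" the proof, releases the slot.
func (p *CheckerPool) Verify(pr Proof) string {
	p.sem <- struct{}{}
	defer func() { <-p.sem }()
	return fmt.Sprintf("%s.%s: verified", pr.Module, pr.Name)
}

func main() {
	pool := NewCheckerPool(2) // e.g. vcsCores=2
	proofs := []Proof{{"A", "lemma1"}, {"A", "lemma2"}, {"B", "Main"}}

	var wg sync.WaitGroup
	results := make([]string, len(proofs))
	for i, pr := range proofs { // proofs from different modules run concurrently
		wg.Add(1)
		go func(i int, pr Proof) {
			defer wg.Done()
			results[i] = pool.Verify(pr)
		}(i, pr)
	}
	wg.Wait()
	for _, r := range results {
		fmt.Println(r)
	}
}
```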
48,220
25,411,338,696
IssuesEvent
2022-11-22 19:15:48
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
[API Proposal]: IndexOfAnyValues<T>.Contains(T)
api-suggestion area-System.Memory tenet-performance
### Background and motivation When updating existing code to use the new `IndexOfAnyValues` API, it is very common to replace the existing needle constant with the `IndexOfAnyValues` instance. This is fine if the only operations using said needle were `{Last}IndexOfAny{Except}` calls, but it is not uncommon to also use that needle for single-value checks `Needle.Contains(c)`. In those cases, you are forced into keeping the original needle around. ```c# private const string Needle = ",[]&*+"; private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(Needle); if (Needle.Contains(c)) { // ... } ``` As `IndexOfAnyValues` is also already a representation of values optimized for searching, it can be more efficient than a full linear scan of the needle. ```c# private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(",[]&*+"); if (s_needle.Contains(c)) { // ... } ``` ### API Proposal ```c# namespace System.Buffers; public class IndexOfAnyValues<T> where T : IEquatable<T>? { public bool Contains(T value); } ``` ### API Usage Instead of ```c# private const string Needle = ",[]&*+"; private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(Needle); if (Needle.Contains(c)) { // ... } ``` you can do ```c# private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(",[]&*+"); if (s_needle.Contains(c)) { // ... } ``` ### Alternative Designs _No response_ ### Risks _No response_
True
[API Proposal]: IndexOfAnyValues<T>.Contains(T) - ### Background and motivation When updating existing code to use the new `IndexOfAnyValues` API, it is very common to replace the existing needle constant with the `IndexOfAnyValues` instance. This is fine if the only operations using said needle were `{Last}IndexOfAny{Except}` calls, but it is not uncommon to also use that needle for single-value checks `Needle.Contains(c)`. In those cases, you are forced into keeping the original needle around. ```c# private const string Needle = ",[]&*+"; private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(Needle); if (Needle.Contains(c)) { // ... } ``` As `IndexOfAnyValues` is also already a representation of values optimized for searching, it can be more efficient than a full linear scan of the needle. ```c# private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(",[]&*+"); if (s_needle.Contains(c)) { // ... } ``` ### API Proposal ```c# namespace System.Buffers; public class IndexOfAnyValues<T> where T : IEquatable<T>? { public bool Contains(T value); } ``` ### API Usage Instead of ```c# private const string Needle = ",[]&*+"; private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(Needle); if (Needle.Contains(c)) { // ... } ``` you can do ```c# private static readonly IndexOfAnyValues<char> s_needle = IndexOfAnyValues.Create(",[]&*+"); if (s_needle.Contains(c)) { // ... } ``` ### Alternative Designs _No response_ ### Risks _No response_
perf
indexofanyvalues contains t background and motivation when updating existing code to use the new indexofanyvalues api it is very common to replace the existing needle constant with the indexofanyvalues instance this is fine if the only operations using said needle were last indexofany except calls but it is not uncommon to also use that needle for single value checks needle contains c in those cases you are forced into keeping the original needle around c private const string needle private static readonly indexofanyvalues s needle indexofanyvalues create needle if needle contains c as indexofanyvalues is also already a representation of values optimized for searching it can be more efficient than a full linear scan of the needle c private static readonly indexofanyvalues s needle indexofanyvalues create if s needle contains c api proposal c namespace system buffers public class indexofanyvalues where t iequatable public bool contains t value api usage instead of c private const string needle private static readonly indexofanyvalues s needle indexofanyvalues create needle if needle contains c you can do c private static readonly indexofanyvalues s needle indexofanyvalues create if s needle contains c alternative designs no response risks no response
1
10,777
7,307,763,291
IssuesEvent
2018-02-28 04:45:18
dotnet/coreclr
https://api.github.com/repos/dotnet/coreclr
closed
Consider using a buffer cache with P/Invoke
area-Interop enhancement optimization tenet-performance
When allocating native buffers for P/Invoke we should consider pulling from a cache. In the scenario of calling I/O related methods many thousands of calls can be made in quick succession which currently can cause a ton of native alloc/free calls.
True
Consider using a buffer cache with P/Invoke - When allocating native buffers for P/Invoke we should consider pulling from a cache. In the scenario of calling I/O related methods many thousands of calls can be made in quick succession which currently can cause a ton of native alloc/free calls.
perf
consider using a buffer cache with p invoke when allocating native buffers for p invoke we should consider pulling from a cache in the scenario of calling i o related methods many thousands of calls can be made in quick succession which currently can cause a ton of native alloc free calls
1
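The buffer cache the dotnet/coreclr issue above proposes can be sketched with Go's `sync.Pool`: buffers for a burst of I/O calls come from a cache instead of a fresh alloc/free pair per call. The interop layer itself would use its own native free-list in C#, so this is only an illustration of the pattern.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool reuses byte slices so a burst of calls does not hit the
// allocator (and, in the interop case, native alloc/free) once per call.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 4096) },
}

func doIOCall() int {
	buf := bufPool.Get().([]byte) // cached buffer; allocated only on a miss
	defer bufPool.Put(buf)        // returned to the cache instead of freed
	// ... the underlying call would fill buf here ...
	return len(buf)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println("buffer size:", doIOCall())
	}
}
```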
35,822
17,269,826,657
IssuesEvent
2021-07-22 18:12:49
fkk-cz/noire_vehicles
https://api.github.com/repos/fkk-cz/noire_vehicles
closed
2019 Bugatti Divo
performance issue
Car doesn't reach over 200 mph, whereas it can go over 230 irl. The car has really bad handling and brakes, which makes it almost impossible to stop or turn. https://festivalautomobile.com/en/portfolio-item/bugatti-divo-2/#:~:text=Its%20maximum%20speed%20is%20limited,seconds%20faster%20than%20the%20Chiron.
True
2019 Bugatti Divo - Car doesn't reach over 200 mph, whereas it can go over 230 irl. The car has really bad handling and brakes, which makes it almost impossible to stop or turn. https://festivalautomobile.com/en/portfolio-item/bugatti-divo-2/#:~:text=Its%20maximum%20speed%20is%20limited,seconds%20faster%20than%20the%20Chiron.
perf
bugatti divo car doesn t reach over mph whereas it can go over irl the car has really bad handling and brakes which makes it almost impossible to stop or turn
1
5,074
4,779,071,357
IssuesEvent
2016-10-27 21:15:01
MicrosoftEdge/Microsoft.Qwiq
https://api.github.com/repos/MicrosoftEdge/Microsoft.Qwiq
closed
Improve efficiency of mapping operation by requiring call to PartialOpen
Needs Discussion performance task
From [Performance Tuning the Work Item Tracking Object Model](https://msdn.microsoft.com/en-us/library/bb130338(v=vs.90).aspx#Anchor_1) >Because of the paging and lazy evaluation scheme discussed earlier, viewing or editing unpaged fields on a WorkItem in a WorkItemCollection will cause an additional round-trip. This round-trip retrieves all the additional work item data that is not paged in as part of the query. This operation can be expensive. >Consumers can call PartialOpen on a WorkItem to minimize this overhead. You can then view and edit most of the fields on the WorkItem, but the object model optimizes to send a minimal set of data over the network.
True
Improve efficiency of mapping operation by requiring call to PartialOpen - From [Performance Tuning the Work Item Tracking Object Model](https://msdn.microsoft.com/en-us/library/bb130338(v=vs.90).aspx#Anchor_1) >Because of the paging and lazy evaluation scheme discussed earlier, viewing or editing unpaged fields on a WorkItem in a WorkItemCollection will cause an additional round-trip. This round-trip retrieves all the additional work item data that is not paged in as part of the query. This operation can be expensive. >Consumers can call PartialOpen on a WorkItem to minimize this overhead. You can then view and edit most of the fields on the WorkItem, but the object model optimizes to send a minimal set of data over the network.
perf
improve efficiency of mapping operation by requiring call to partialopen from because of the paging and lazy evaluation scheme discussed earlier viewing or editing unpaged fields on a workitem in a workitemcollection will cause an additional round trip this round trip retrieves all the additional work item data that is not paged in as part of the query this operation can be expensive consumers can call partialopen on a workitem to minimize this overhead you can then view and edit most of the fields on the workitem but the object model optimizes to send a minimal set of data over the network
1
151,344
19,648,813,026
IssuesEvent
2022-01-10 02:36:31
chiq2045/my-diary
https://api.github.com/repos/chiq2045/my-diary
opened
WS-2019-0605 (Medium) detected in CSS::Sassv3.6.0
security vulnerability
## WS-2019-0605 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sassv3.6.0</b></p></summary> <p> <p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/my-diary/node_modules/node-sass/src/libsass/src/lexer.cpp</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In sass versions between 3.2.0 to 3.6.3 may read 1 byte outside an allocated buffer while parsing a specially crafted css rule. <p>Publish Date: 2019-07-16 <p>URL: <a href=https://github.com/sass/libsass/commit/7a21c79e321927363a153dc5d7e9c492365faf9b>WS-2019-0605</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/OSV-2020-734">https://osv.dev/vulnerability/OSV-2020-734</a></p> <p>Release Date: 2019-07-16</p> <p>Fix Resolution: 3.6.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0605 (Medium) detected in CSS::Sassv3.6.0 - ## WS-2019-0605 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sassv3.6.0</b></p></summary> <p> <p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/my-diary/node_modules/node-sass/src/libsass/src/lexer.cpp</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In sass versions between 3.2.0 to 3.6.3 may read 1 byte outside an allocated buffer while parsing a specially crafted css rule. <p>Publish Date: 2019-07-16 <p>URL: <a href=https://github.com/sass/libsass/commit/7a21c79e321927363a153dc5d7e9c492365faf9b>WS-2019-0605</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/OSV-2020-734">https://osv.dev/vulnerability/OSV-2020-734</a></p> <p>Release Date: 2019-07-16</p> <p>Fix Resolution: 3.6.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
ws medium detected in css ws medium severity vulnerability vulnerable library css library home page a href vulnerable source files my diary node modules node sass src libsass src lexer cpp vulnerability details in sass versions between to may read byte outside an allocated buffer while parsing a specially crafted css rule publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
37,824
18,784,287,795
IssuesEvent
2021-11-08 10:26:50
hestiaAI/hestialabs-experiences
https://api.github.com/repos/hestiaAI/hestialabs-experiences
closed
Replace Twitter sparql pipelines by custom pipelines
performance
It has been discussed that the Twitter experiences are too slow. One short-term solution is to use custom pipelines instead, which will be faster but will still have a lot of repeated computations. In the long term, we want to use either a SQL pipeline (#147) or a faster SPARQL pipeline (#129).
True
Replace Twitter sparql pipelines by custom pipelines - It has been discussed that the Twitter experiences are too slow. One short-term solution is to use custom pipelines instead, which will be faster but will still have a lot of repeated computations. In the long term, we want to use either a SQL pipeline (#147) or a faster SPARQL pipeline (#129).
perf
replace twitter sparql pipelines by custom pipelines it has been discussed that the twitter experiences are too slow one short term solution is to use custom pipelines instead which will be faster but will still have a lot of repeated computations in the long term we want to use either a sql pipeline or a faster sparql pipeline
1
326,367
9,955,546,049
IssuesEvent
2019-07-05 11:23:17
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.google.com - desktop site instead of mobile site
browser-firefox-mobile engine-gecko priority-critical
<!-- @browser: Firefox Mobile 68.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: --> **URL**: https://www.google.com **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Desktop site instead of mobile site **Description**: I was put in desktop instead of mobile **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.google.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 68.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: --> **URL**: https://www.google.com **Browser / Version**: Firefox Mobile 68.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Desktop site instead of mobile site **Description**: I was put in desktop instead of mobile **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_perf
desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser no problem type desktop site instead of mobile site description i was put in desktop instead of mobile steps to reproduce browser configuration none from with ❤️
0
83,648
24,113,535,175
IssuesEvent
2022-09-20 13:13:31
GetBookmarkd/bookmarked-web
https://api.github.com/repos/GetBookmarkd/bookmarked-web
closed
Feature: Move over to Nuxt
Frontend :iphone: Build Tools 🛠️
## Details - Type of work: Frontend - Time Estimation: 3hr ## Description <!-- Describe what the feature should accomplish --> Moving over to Nuxt will help with SEO which will be very important moving forward. Will also help page speed etc ## Supporting Documentation/Data [Nuxt](https://v3.nuxtjs.org)
1.0
Feature: Move over to Nuxt - ## Details - Type of work: Frontend - Time Estimation: 3hr ## Description <!-- Describe what the feature should accomplish --> Moving over to Nuxt will help with SEO which will be very important moving forward. Will also help page speed etc ## Supporting Documentation/Data [Nuxt](https://v3.nuxtjs.org)
non_perf
feature move over to nuxt details type of work frontend time estimation description moving over to nuxt will help with seo which will be very important moving forward will also help page speed etc supporting documentation data
0
594,832
18,055,087,516
IssuesEvent
2021-09-20 07:04:21
stackabletech/issues
https://api.github.com/repos/stackabletech/issues
closed
Operator Release 0.1
priority/high Epic
Open Questions/Tasks: - How do we want to ship/bundle our product config specifications? - Move our resources from "v1" versions to "v1alpha1" (or similar) - Tag in Github - Create RPM/deb/Dockerfiles for released versions - Document the release procedure somewhere This could be useful: https://github.com/sunng87/cargo-release
1.0
Operator Release 0.1 - Open Questions/Tasks: - How do we want to ship/bundle our product config specifications? - Move our resources from "v1" versions to "v1alpha1" (or similar) - Tag in Github - Create RPM/deb/Dockerfiles for released versions - Document the release procedure somewhere This could be useful: https://github.com/sunng87/cargo-release
non_perf
operator release open questions tasks how do we want to ship bundle our product config specifications move our resources from versions to or similar tag in github create rpm deb dockerfiles for released versions document the release procedure somewhere this could be useful
0
34,721
16,662,871,436
IssuesEvent
2021-06-06 16:48:24
freesewing/freesewing
https://api.github.com/repos/freesewing/freesewing
closed
Implement new content-visibility CSS feature on website
:nerd_face: ux :rocket: performance freesewing.org
Here's a good write-up: https://web.dev/content-visibility/ The goal is to identify these sections that are off-screen at load time and apply `content-visibility: auto;` to them.
True
Implement new content-visibility CSS feature on website - Here's a good write-up: https://web.dev/content-visibility/ The goal is to identify these sections that are off-screen at load time and apply `content-visibility: auto;` to them.
perf
implement new content visibility css feature on website here s a good write up the goal is to identify these sections that are off screen at load time and apply content visibility auto to them
1
31,938
15,147,775,807
IssuesEvent
2021-02-11 09:39:57
hzi-braunschweig/SORMAS-Project
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
closed
New contact count queries in event directory are not lazy loading
backend bug events performance
### Bug Description The queries introduced with PR #3981 and PR #3772 are not lazy loading. This means that if you open the event directory, said queries will evaluate _all_ existing events in the database. ### Steps to Reproduce 1. Open the event directory ### Expected Behavior Only calculate the additional columns for the events which were fetched earlier in the `getIndexList`-Method. ### Solution Suggestion Add an additional IN-clause to the queries. ### System Details * SORMAS version: 1.56.0 SNAPSHOT ### Additional Information The detailed queries can be found in #3981
True
New contact count queries in event directory are not lazy loading - ### Bug Description The queries introduced with PR #3981 and PR #3772 are not lazy loading. This means that if you open the event directory, said queries will evaluate _all_ existing events in the database. ### Steps to Reproduce 1. Open the event directory ### Expected Behavior Only calculate the additional columns for the events which were fetched earlier in the `getIndexList`-Method. ### Solution Suggestion Add an additional IN-clause to the queries. ### System Details * SORMAS version: 1.56.0 SNAPSHOT ### Additional Information The detailed queries can be found in #3981
perf
new contact count queries in event directory are not lazy loading bug description the queries introduced with pr and pr are not lazy loading this means that if you open the event directory said queries will evaluate all existing events in the database steps to reproduce open the event directory expected behavior only calculate the additional columns for the events which were fetched earlier in the getindexlist method solution suggestion add an additional in clause to the queries system details sormas version snapshot additional information the detailed queries can be found in
1
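The fix suggested in the SORMAS issue above — adding an IN clause so the count queries only touch the events fetched by `getIndexList` — corresponds to building a parameterized query like the Go sketch below. The table and column names are hypothetical, and the real project uses JPA rather than hand-built SQL.

```go
package main

import (
	"fmt"
	"strings"
)

// contactCountQuery builds a parameterized count query restricted to the
// event ids already fetched for the current page, instead of evaluating
// all existing events in the database.
func contactCountQuery(eventIDs []int64) (string, []any) {
	placeholders := make([]string, len(eventIDs))
	args := make([]any, len(eventIDs))
	for i, id := range eventIDs {
		placeholders[i] = "?"
		args[i] = id
	}
	q := "SELECT event_id, COUNT(*) FROM contact WHERE event_id IN (" +
		strings.Join(placeholders, ",") + ") GROUP BY event_id"
	return q, args
}

func main() {
	q, args := contactCountQuery([]int64{42, 43, 44})
	fmt.Println(q, args)
}
```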
52,872
22,490,192,072
IssuesEvent
2022-06-23 00:28:02
hashicorp/terraform-provider-azurerm
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
closed
Ability to create azurerm_frontdoor as Premium tier.
enhancement service/frontdoor
### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Description There is currently no way to define a Premium tier (SKU) Front Door deployment within the Azure terraform provider. The default deployment is a standard tier Front Door deployment. ### New or Affected Resource(s)/Data Source(s) azurerm_frontdoor ### Potential Terraform Configuration ```hcl sku { name = "Premium" tier = "Premium" } ``` ### References _No response_
1.0
Ability to create azurerm_frontdoor as Premium tier. - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Description There is currently no way to define a Premium tier (SKU) Front Door deployment within the Azure terraform provider. The default deployment is a standard tier Front Door deployment. ### New or Affected Resource(s)/Data Source(s) azurerm_frontdoor ### Potential Terraform Configuration ```hcl sku { name = "Premium" tier = "Premium" } ``` ### References _No response_
non_perf
ability to create azurerm frontdoor as premium tier is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description there is currently no way to define a premium tier sku front door deployment within the azure terraform provider the default deployment is a standard tier front door deployment new or affected resource s data source s azurerm frontdoor potential terraform configuration hcl sku name premium tier premium references no response
0
350,816
25,000,853,065
IssuesEvent
2022-11-03 07:45:01
AY2223S1-CS2103T-W13-2/tp
https://api.github.com/repos/AY2223S1-CS2103T-W13-2/tp
closed
[PE-D][Tester D] Format inconsistent for some commands
severity.Low type.DocumentationBug
![Screenshot 2022-10-28 at 4.35.21 PM.png](https://raw.githubusercontent.com/Guanzhou03/ped/main/files/b1a76276-fd62-44c3-92cb-7b2e721588e8.png) Some inconsistencies in the formatting, for example in the above example CLIENT_INDEX is used, but in the example below the format does not follow the same style. ![Screenshot 2022-10-28 at 4.35.37 PM.png](https://raw.githubusercontent.com/Guanzhou03/ped/main/files/b61f693f-864a-4165-a621-33ad460f9643.png) <!--session: 1666944163573-5c30bee4-ab8e-4b28-a596-5f2e188c5155--> <!--Version: Web v3.4.4--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: Guanzhou03/ped#5
1.0
[PE-D][Tester D] Format inconsistent for some commands - ![Screenshot 2022-10-28 at 4.35.21 PM.png](https://raw.githubusercontent.com/Guanzhou03/ped/main/files/b1a76276-fd62-44c3-92cb-7b2e721588e8.png) Some inconsistencies in the formatting, for example in the above example CLIENT_INDEX is used, but in the example below the format does not follow the same style. ![Screenshot 2022-10-28 at 4.35.37 PM.png](https://raw.githubusercontent.com/Guanzhou03/ped/main/files/b61f693f-864a-4165-a621-33ad460f9643.png) <!--session: 1666944163573-5c30bee4-ab8e-4b28-a596-5f2e188c5155--> <!--Version: Web v3.4.4--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: Guanzhou03/ped#5
non_perf
format inconsistent for some commands some inconsistencies in the formatting for example in the above example client index is used but in the example below the format does not follow the same style labels severity verylow type documentationbug original ped
0
816,798
30,612,512,718
IssuesEvent
2023-07-23 19:43:24
SkriptLang/Skript
https://api.github.com/repos/SkriptLang/Skript
opened
Add paper's force break item effect
enhancement priority: lowest good first issue
### Suggestion Add an Effect for paper's force break item effect https://jd.papermc.io/paper/1.19/org/bukkit/entity/LivingEntity.html#broadcastSlotBreak(org.bukkit.inventory.EquipmentSlot) and https://jd.papermc.io/paper/1.19/org/bukkit/entity/LivingEntity.html#broadcastSlotBreak(org.bukkit.inventory.EquipmentSlot,java.util.Collection) ### Why? User in Discord was asking for this suggestion. ### Other _No response_ ### Agreement - [X] I have read the guidelines above and affirm I am following them with this suggestion.
1.0
Add paper's force break item effect - ### Suggestion Add an Effect for paper's force break item effect https://jd.papermc.io/paper/1.19/org/bukkit/entity/LivingEntity.html#broadcastSlotBreak(org.bukkit.inventory.EquipmentSlot) and https://jd.papermc.io/paper/1.19/org/bukkit/entity/LivingEntity.html#broadcastSlotBreak(org.bukkit.inventory.EquipmentSlot,java.util.Collection) ### Why? User in Discord was asking for this suggestion. ### Other _No response_ ### Agreement - [X] I have read the guidelines above and affirm I am following them with this suggestion.
non_perf
add paper s force break item effect suggestion add an effect for paper s force break item effect and why user in discord was asking for this suggestion other no response agreement i have read the guidelines above and affirm i am following them with this suggestion
0
28,049
5,167,306,437
IssuesEvent
2017-01-17 18:26:27
eliasferreyra/googlesitemapgenerator
https://api.github.com/repos/eliasferreyra/googlesitemapgenerator
closed
Installation issue - "installdir/bin/sitemap-daemon: not found"
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Download latest i386 linux package on Ubuntu 8.04.3 2. Extract package as root 3. Execute install.sh as root What is the expected output? What do you see instead? Installation guide says it should output: Program files successfully copied. Google Sitemap Generator settings successfully updated. Google Sitemap Generator init scripts successfully installed. Apache configuration successfully updated. Old configuration is saved at /etc/google-sitemap- generator/httpd.install.conf Ready to set the password for the administration console. Enter new password (5 or more characters): I get: Program files successfully copied. sitemap-install/install.sh 728: /INSTALLDIR/bin/sitemap-daemon: not found What version of the product are you using? On what operating system? Latest i386 package, Ubuntu 8.04.3 Please provide any additional information below. If I navigate to the INSTALLDIR/bin directory, I can see the file 'sitemap- daemon', and open it in vim (though it doesn't look like anything), but any time I try to execute it it says "No such file or directory" One other thing that may be an issue is it tells me my Apache architecture is 64 bits, though I don't think it really is. ``` Original issue reported on code.google.com by `anasser....@gmail.com` on 10 Nov 2009 at 9:08
1.0
Installation issue - "installdir/bin/sitemap-daemon: not found" - ``` What steps will reproduce the problem? 1. Download latest i386 linux package on Ubuntu 8.04.3 2. Extract package as root 3. Execute install.sh as root What is the expected output? What do you see instead? Installation guide says it should output: Program files successfully copied. Google Sitemap Generator settings successfully updated. Google Sitemap Generator init scripts successfully installed. Apache configuration successfully updated. Old configuration is saved at /etc/google-sitemap- generator/httpd.install.conf Ready to set the password for the administration console. Enter new password (5 or more characters): I get: Program files successfully copied. sitemap-install/install.sh 728: /INSTALLDIR/bin/sitemap-daemon: not found What version of the product are you using? On what operating system? Latest i386 package, Ubuntu 8.04.3 Please provide any additional information below. If I navigate to the INSTALLDIR/bin directory, I can see the file 'sitemap- daemon', and open it in vim (though it doesn't look like anything), but any time I try to execute it it says "No such file or directory" One other thing that may be an issue is it tells me my Apache architecture is 64 bits, though I don't think it really is. ``` Original issue reported on code.google.com by `anasser....@gmail.com` on 10 Nov 2009 at 9:08
non_perf
installation issue installdir bin sitemap daemon not found what steps will reproduce the problem download latest linux package on ubuntu extract package as root execute install sh as root what is the expected output what do you see instead installation guide says it should output program files successfully copied google sitemap generator settings successfully updated google sitemap generator init scripts successfully installed apache configuration successfully updated old configuration is saved at etc google sitemap generator httpd install conf ready to set the password for the administration console enter new password or more characters i get program files successfully copied sitemap install install sh installdir bin sitemap daemon not found what version of the product are you using on what operating system latest package ubuntu please provide any additional information below if i navigate to the installdir bin directory i can see the file sitemap daemon and open it in vim though it doesn t look like anything but any time i try to execute it it says no such file or directory one other thing that may be an issue is it tells me my apache architecture is bits though i don t think it really is original issue reported on code google com by anasser gmail com on nov at
0
37,466
18,423,764,694
IssuesEvent
2021-10-13 19:12:37
dotnet/msbuild
https://api.github.com/repos/dotnet/msbuild
closed
RAR cracks open system files in empty incremental build
performance size:3
### Issue Description The RAR on-disk cache appears to be missed for some subset of system dependencies. Even if the solution is built incrementally with no changes since the last successful build, I'm seeing a large number of calls to `AssemblyName.GetAssemblyName` for files like: ``` C:\Users\laprosek.EUROPE\.nuget\packages\microsoft.identitymodel.protocols.openidconnect\5.6.0\lib\netstandard2.0\Microsoft.IdentityModel.Protocols.OpenIdConnect.dll C:\Users\laprosek.EUROPE\.nuget\packages\microsoft.identitymodel.tokens\5.6.0\lib\netstandard2.0\Microsoft.IdentityModel.Tokens.dll C:\Users\laprosek.EUROPE\.nuget\packages\system.identitymodel.tokens.jwt\5.6.0\lib\netstandard2.0\System.IdentityModel.Tokens.Jwt.dll C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.core\2.0.0-beta-1629\lib\netstandard2.0\CacheManager.Core.dll C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.microsoft.extensions.configuration\2.0.0-beta- C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.microsoft.extensions.logging\2.0.0-beta-1629\lib\netstandard2.0\CacheManager.Microsoft.Extensions.Logging.dll C:\Users\laprosek.EUROPE\.nuget\packages\system.configuration.configurationmanager\4.5.0\ref\netstandard2.0\System.Configuration.ConfigurationManager.dll ... ``` ### Steps to Reproduce Clone the [Ocelot repo](https://github.com/ThreeMammals/Ocelot/) and build it. Notice that this code is executed for many assemblies, suggesting that the cache does not work well: https://github.com/dotnet/msbuild/blob/ea1d6d99a376ac582216bba06b75f6bd9f3c7c64/src/Tasks/SystemState.cs#L449 ### Data ~340 cache misses when building Ocelot, costing us ~2.5% of incremental build time. ### Analysis Nothing obvious. The cache entry is updated with the assembly name and the cache is marked dirty so it should be written to the cache file and read back in subsequent builds. ### Versions & Configurations Microsoft (R) Build Engine version 17.0.0-dev-21464-01+c82d55e9b for .NET Framework ### Regression? N/A ### Attach a binlog N/A
True
RAR cracks open system files in empty incremental build - ### Issue Description The RAR on-disk cache appears to be missed for some subset of system dependencies. Even if the solution is built incrementally with no changes since the last successful build, I'm seeing a large number of calls to `AssemblyName.GetAssemblyName` for files like: ``` C:\Users\laprosek.EUROPE\.nuget\packages\microsoft.identitymodel.protocols.openidconnect\5.6.0\lib\netstandard2.0\Microsoft.IdentityModel.Protocols.OpenIdConnect.dll C:\Users\laprosek.EUROPE\.nuget\packages\microsoft.identitymodel.tokens\5.6.0\lib\netstandard2.0\Microsoft.IdentityModel.Tokens.dll C:\Users\laprosek.EUROPE\.nuget\packages\system.identitymodel.tokens.jwt\5.6.0\lib\netstandard2.0\System.IdentityModel.Tokens.Jwt.dll C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.core\2.0.0-beta-1629\lib\netstandard2.0\CacheManager.Core.dll C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.microsoft.extensions.configuration\2.0.0-beta- C:\Users\laprosek.EUROPE\.nuget\packages\cachemanager.microsoft.extensions.logging\2.0.0-beta-1629\lib\netstandard2.0\CacheManager.Microsoft.Extensions.Logging.dll C:\Users\laprosek.EUROPE\.nuget\packages\system.configuration.configurationmanager\4.5.0\ref\netstandard2.0\System.Configuration.ConfigurationManager.dll ... ``` ### Steps to Reproduce Clone the [Ocelot repo](https://github.com/ThreeMammals/Ocelot/) and build it. Notice that this code is executed for many assemblies, suggesting that the cache does not work well: https://github.com/dotnet/msbuild/blob/ea1d6d99a376ac582216bba06b75f6bd9f3c7c64/src/Tasks/SystemState.cs#L449 ### Data ~340 cache misses when building Ocelot, costing us ~2.5% of incremental build time. ### Analysis Nothing obvious. The cache entry is updated with the assembly name and the cache is marked dirty so it should be written to the cache file and read back in subsequent builds. ### Versions & Configurations Microsoft (R) Build Engine version 17.0.0-dev-21464-01+c82d55e9b for .NET Framework ### Regression? N/A ### Attach a binlog N/A
perf
rar cracks open system files in empty incremental build issue description the rar on disk cache appears to be missed for some subset of system dependencies even if the solution is built incrementally with no changes since the last successful build i m seeing a large number of calls to assemblyname getassemblyname for files like c users laprosek europe nuget packages microsoft identitymodel protocols openidconnect lib microsoft identitymodel protocols openidconnect dll c users laprosek europe nuget packages microsoft identitymodel tokens lib microsoft identitymodel tokens dll c users laprosek europe nuget packages system identitymodel tokens jwt lib system identitymodel tokens jwt dll c users laprosek europe nuget packages cachemanager core beta lib cachemanager core dll c users laprosek europe nuget packages cachemanager microsoft extensions configuration beta c users laprosek europe nuget packages cachemanager microsoft extensions logging beta lib cachemanager microsoft extensions logging dll c users laprosek europe nuget packages system configuration configurationmanager ref system configuration configurationmanager dll steps to reproduce clone the and build it notice that this code is executed for many assemblies suggesting that the cache does not work well data cache misses when building ocelot costing us of incremental build time analysis nothing obvious the cache entry is updated with the assembly name and the cache is marked dirty so it should be written to the cache file and read back in subsequent builds versions configurations microsoft r build engine version dev for net framework regression n a attach a binlog n a
1
1,461
3,013,163,200
IssuesEvent
2015-07-29 06:55:38
CartoDB/cartodb
https://api.github.com/repos/CartoDB/cartodb
closed
There should be a per-user platform limit of concurrent syncs
3 - Done importer importer-performance
@danicarrion raised an issue we have: There is a limit of concurrent imports per user, but there is no limit of concurrent syncs per user. So I can create 50 syncs almost at once, and they would eat all cloud workers for themselves, not allowing anybody to import until they finish. We should implement a platform limit (I'd say 3, separate from the imports limit) so only up to X syncs get enqueued, and when one finishes the next sweep will have available slots again (sweeps are currently done every 2-3 mins, so this is not problematic). I estimate ~4h of time to build and (mostly manually) test it, as the PlatformLimits system is already there and easy to extend. @saleiva @javisantana
True
There should be a per-user platform limit of concurrent syncs - @danicarrion raised an issue we have: There is a limit of concurrent imports per user, but there is no limit of concurrent syncs per user. So I can create 50 syncs almost at once, and they would eat all cloud workers for themselves, not allowing anybody to import until they finish. We should implement a platform limit (I'd say 3, separate from the imports limit) so only up to X syncs get enqueued, and when one finishes the next sweep will have available slots again (sweeps are currently done every 2-3 mins, so this is not problematic). I estimate ~4h of time to build and (mostly manually) test it, as the PlatformLimits system is already there and easy to extend. @saleiva @javisantana
perf
there should be a per user platform limit of concurrent syncs danicarrion raised an issue we have there is a limit of concurrent imports per user but there is no limit of concurrent syncs per user so i can create syncs almost at once and they would eat all cloud workers for themselves not allowing anybody to import until they finish we should implement a platform limit i d say separate from the imports limit so only up to x syncs get enqueued and when one finishes the next sweep will have available slots again sweeps are currently done every mins so this is not problematic i estimate of time to build and mostly manually test it as the platformlimits system is already there and easy to extend saleiva javisantana
1
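The per-user cap on concurrent syncs described in the CartoDB issue above is essentially a counting semaphore keyed by user. A minimal Go sketch follows (the real implementation would live in Ruby inside the existing PlatformLimits system):

```go
package main

import (
	"fmt"
	"sync"
)

// userLimiter caps concurrent syncs per user. TryAcquire fails fast
// instead of blocking, so syncs over the limit simply wait for a
// later sweep to pick them up.
type userLimiter struct {
	mu    sync.Mutex
	max   int
	inUse map[string]int
}

func newUserLimiter(max int) *userLimiter {
	return &userLimiter{max: max, inUse: map[string]int{}}
}

func (l *userLimiter) TryAcquire(user string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inUse[user] >= l.max {
		return false // per-user limit reached: do not enqueue this sync
	}
	l.inUse[user]++
	return true
}

func (l *userLimiter) Release(user string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.inUse[user]--
}

func main() {
	lim := newUserLimiter(3)
	for i := 1; i <= 5; i++ {
		fmt.Printf("sync %d enqueued: %v\n", i, lim.TryAcquire("alice"))
	}
}
```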
6,015
5,251,972,265
IssuesEvent
2017-02-02 01:53:56
killbill/killbill
https://api.github.com/repos/killbill/killbill
opened
Optimize Invoicing to not recompute state for cancelled subscriptions
INVOICE JUNCTION performance
As reported on the [mailing list](https://groups.google.com/forum/#!topic/killbilling-users/t_g7ZMXlopk), our junction/invoicing code is not very optimized for accounts that have a large number of subscriptions. One use case would be to filter out cancelled subscriptions (but the trick is to understand when it is safe to filter out such cancelled subscriptions and also to make sure the invoicing code understands there is nothing to do for such subscriptions). Another use case would be to reduce the window of time to consider when comparing `existing` items and `proposed` items -- similarly to what we did for [usage](https://github.com/killbill/killbill/blob/killbill-0.18.2/invoice/src/main/java/org/killbill/billing/invoice/usage/RawUsageOptimizer.java).
True
Optimize Invoicing to not recompute state for cancelled subscriptions - As reported on the [mailing list](https://groups.google.com/forum/#!topic/killbilling-users/t_g7ZMXlopk), our junction/invoicing code is not very optimized for accounts that have a large number of subscriptions. One use case would be to filter out cancelled subscriptions (but the trick is to understand when it is safe to filter out such cancelled subscriptions and also to make sure the invoicing code understands there is nothing to do for such subscriptions). Another use case would be to reduce the window of time to consider when comparing `existing` items and `proposed` items -- similarly to what we did for [usage](https://github.com/killbill/killbill/blob/killbill-0.18.2/invoice/src/main/java/org/killbill/billing/invoice/usage/RawUsageOptimizer.java).
perf
optimize invoicing to not recompute state for cancelled subscriptions as reported on the our junction invoicing code is not very optimized for accounts that have a large number of subscriptions one use case would be to filter out cancelled subscriptions but the trick is to understand when it is safe to filter out such cancelled subscriptions and also to make sure the invoicing code understands there is nothing to do for such subscriptions another use case would be to reduce the window of time to consider when comparing existing items and proposed items similarly to what we did for
1
4,925
4,713,657,448
IssuesEvent
2016-10-14 20:51:34
RoaringBitmap/CRoaring
https://api.github.com/repos/RoaringBitmap/CRoaring
opened
Optimization opportunity : reduce memset
performance
The current code uses ``calloc`` for initializing bitset containers, whether it is needed or not. (In several cases, it is not needed, a simple ``malloc`` would suffice.) For some operations, this carries a price that shows up in the profiling.
True
Optimization opportunity : reduce memset - The current code uses ``calloc`` for initializing bitset containers, whether it is needed or not. (In several cases, it is not needed, a simple ``malloc`` would suffice.) For some operations, this carries a price that shows up in the profiling.
perf
optimization opportunity reduce memset the current code uses calloc for initializing bitset containers whether it is needed or not in several cases it is not needed a simple malloc would suffice for some operations this carries a price that shows up in the profiling
1
48,019
25,310,674,142
IssuesEvent
2022-11-17 17:11:40
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
long-running `/search` query on matrix.org after client disconnected
A-Performance T-Other A-Message-Search A-Database O-Uncommon
Discovered a long-running `SearchRestServlet` (search text of all rooms) on matrix.org that was running for a user that is only in 21 rooms. But perhaps one of the rooms is large. Logs: ``` $ grep "POST-2984873" homeserver.log 2730070:2022-10-25 07:30:34,357 - synapse.handlers.search - 142 - INFO - POST-2984873 - Search batch properties: None, None, None 2730071:2022-10-25 07:30:34,358 - synapse.handlers.search - 149 - INFO - POST-2984873 - Search content: {'search_categories': {'room_events': {'search_term': 'update', 'filter': {'limit': 10}, 'order_by': 'recent', 'event_context': {'before_limit': 1, 'after_limit': 1, 'include_profile': True}}}} 2755696:2022-10-25 07:33:34,346 - synapse.http.site - 372 - INFO - POST-2984873 - Connection from client lost before response was sent 4542688:2022-10-25 11:06:10,238 - synapse.handlers.search - 327 - INFO - POST-2984873 - Found 0 events to return 4542689:2022-10-25 11:06:10,239 - synapse.http.server - 744 - WARNING - POST-2984873 - Not sending response to request <XForwardedForRequest at 0x7f979c09ae50 method='POST' uri='/_matrix/client/r0/search' clientproto='HTTP/1.1' site='8080'>, already disconnected. 4542690:2022-10-25 11:06:10,239 - synapse.access.http.8080 - 460 - INFO - POST-2984873 - 194.126.177.75 - 8080 - {@[redacted]:matrix.org} Processed request: 12935.892sec/-12755.893sec (0.004sec, 0.001sec) (0.003sec/12935.878sec/8) 0B 200! "POST /_matrix/client/r0/search HTTP/1.1" "[redacted]" [0 dbevts] ``` See that the request was processing for ~3.5hrs. There was a spike in DB traffic at the very end of the event as well. Looks like a good endpoint candidate to make cancelable.
True
long-running `/search` query on matrix.org after client disconnected - Discovered a long-running `SearchRestServlet` (search text of all rooms) on matrix.org that was running for a user that is only in 21 rooms. But perhaps one of the rooms is large. Logs: ``` $ grep "POST-2984873" homeserver.log 2730070:2022-10-25 07:30:34,357 - synapse.handlers.search - 142 - INFO - POST-2984873 - Search batch properties: None, None, None 2730071:2022-10-25 07:30:34,358 - synapse.handlers.search - 149 - INFO - POST-2984873 - Search content: {'search_categories': {'room_events': {'search_term': 'update', 'filter': {'limit': 10}, 'order_by': 'recent', 'event_context': {'before_limit': 1, 'after_limit': 1, 'include_profile': True}}}} 2755696:2022-10-25 07:33:34,346 - synapse.http.site - 372 - INFO - POST-2984873 - Connection from client lost before response was sent 4542688:2022-10-25 11:06:10,238 - synapse.handlers.search - 327 - INFO - POST-2984873 - Found 0 events to return 4542689:2022-10-25 11:06:10,239 - synapse.http.server - 744 - WARNING - POST-2984873 - Not sending response to request <XForwardedForRequest at 0x7f979c09ae50 method='POST' uri='/_matrix/client/r0/search' clientproto='HTTP/1.1' site='8080'>, already disconnected. 4542690:2022-10-25 11:06:10,239 - synapse.access.http.8080 - 460 - INFO - POST-2984873 - 194.126.177.75 - 8080 - {@[redacted]:matrix.org} Processed request: 12935.892sec/-12755.893sec (0.004sec, 0.001sec) (0.003sec/12935.878sec/8) 0B 200! "POST /_matrix/client/r0/search HTTP/1.1" "[redacted]" [0 dbevts] ``` See that the request was processing for ~3.5hrs. There was a spike in DB traffic at the very end of the event as well. Looks like a good endpoint candidate to make cancelable.
perf
long running search query on matrix org after client disconnected discovered a long running searchrestservlet search text of all rooms on matrix org that was running for a user that is only in rooms but perhaps one of the rooms is large logs grep post homeserver log synapse handlers search info post search batch properties none none none synapse handlers search info post search content search categories room events search term update filter limit order by recent event context before limit after limit include profile true synapse http site info post connection from client lost before response was sent synapse handlers search info post found events to return synapse http server warning post not sending response to request already disconnected synapse access http info post matrix org processed request post matrix client search http see that the request was processing for there was a spike in db traffic at the very end of the event as well looks like a good endpoint candidate to make cancelable
1
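The record above flags the `/search` servlet as a candidate to make cancellable. A minimal asyncio sketch of the idea, not Synapse's actual servlet API (`run_cancellable`, the query coroutine, and the disconnect event are illustrative names):

```python
import asyncio

async def run_cancellable(query_coro, disconnected: asyncio.Event):
    """Run the search query, but abandon it as soon as the client disconnects."""
    query = asyncio.create_task(query_coro)
    gone = asyncio.create_task(disconnected.wait())
    done, _pending = await asyncio.wait(
        {query, gone}, return_when=asyncio.FIRST_COMPLETED
    )
    if query in done:
        gone.cancel()   # query finished first: tidy up the watcher task
        return query.result()
    query.cancel()      # client is gone: stop burning DB time on a dead request
    raise ConnectionResetError("client disconnected before response was sent")
```

The `disconnected` event would be set from the HTTP layer's connection-lost callback, so a query like the one logged above would be torn down at the ~3-minute mark where the client vanished instead of running for ~3.5 hours.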
1,021
2,506,883,110
IssuesEvent
2015-01-12 14:41:59
WeAreAthlon/silla.io
https://api.github.com/repos/WeAreAthlon/silla.io
opened
Separate CMS Session from the Application Session
low priority proposal security
Usage of different *session ids* for the different application contexts.
1.0
Separate CMS Session from the Application Session - Usage of different *session ids* for the different application contexts.
non_perf
separate cms session from the application session usage of different session ids for the different application contexts
0
7,039
5,827,972,364
IssuesEvent
2017-05-08 10:34:53
tiehuis/faststack
https://api.github.com/repos/tiehuis/faststack
closed
Buffer replay output
enhancement performance
Every input state change currently results in an immediate write to the replay file. This is not optimal; we should instead add support for writing to an intermediate buffer and batch our writes. Initially, we should just rely on the `printf` buffer and change the buffering type to not flush on newline.
True
Buffer replay output - Every input state change currently results in an immediate write to the replay file. This is not optimal; we should instead add support for writing to an intermediate buffer and batch our writes. Initially, we should just rely on the `printf` buffer and change the buffering type to not flush on newline.
perf
buffer replay output every input state change current results in a write immediately to the replay file this is not optimal and we should instead allow support for writing to an intermediate buffer and batching our writes initially we should just rely on the printf buffer and change the buffering type to not flush on newline
1
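A sketch of the batching idea from the record in Python (the original project is C and leans on stdio's `printf` buffering; the class and threshold here are illustrative):

```python
class BufferedReplayWriter:
    """Accumulate input-state events and write them to disk in batches."""

    def __init__(self, path: str, flush_threshold: int = 4096):
        self._fh = open(path, "ab")
        self._buf = bytearray()
        self._threshold = flush_threshold

    def record(self, event: bytes) -> None:
        self._buf += event
        # Batch writes instead of hitting the file on every input change.
        if len(self._buf) >= self._threshold:
            self.flush()

    def flush(self) -> None:
        if self._buf:
            self._fh.write(self._buf)
            self._buf.clear()

    def close(self) -> None:
        self.flush()
        self._fh.close()
```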
40,520
20,958,242,234
IssuesEvent
2022-03-27 12:11:42
dadhi/DryIoc
https://api.github.com/repos/dadhi/DryIoc
closed
Avoid using lambdas with closures in hot-path methods
enhancement performance
Because the lambda closure object will be created at the top of the method even though the lambda is only used under a condition. Specifically, check the root Resolve and Register methods for this.
True
Avoid using lambdas with closures in hot-path methods - Because the lambda closure object will be created at the top of the method even though the lambda is only used under a condition. Specifically, check the root Resolve and Register methods for this.
perf
avoid to use lambdas with closure in the hot path methods because the lambda closure object will be created at the top of the method despite using the lambda only under condition specifically check the root resolve and register methods for this
1
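The refactor the record asks for, illustrated in Python (the original concern is C#-specific closure allocation; this sketch only shows the general pattern of keeping closure construction off the hot path):

```python
def resolve(key, cache, build):
    # Hot path: plain dictionary lookup, no closure constructed.
    hit = cache.get(key)
    if hit is not None:
        return hit

    # Cold path only: the closure over `key` is created here, inside the
    # conditional branch, rather than unconditionally at the top of the method.
    def factory():
        return build(key)

    value = factory()
    cache[key] = value
    return value
```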
34,932
16,766,050,856
IssuesEvent
2021-06-14 08:56:26
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
dart2native performance issue
area-vm type-performance
I used dart2native to compile a little Dart program that reads through a big word vector file (typically > 1 GB) and prints some statistics. The execution times were a bit disappointing. The compiled native used real 0m37.340s (user 0m35.359s), while the old non-compiled used only real 0m23.449s (user 0m23.894s). This was on a MacBook Air. My Dart program, vec-test.dart, is listed at the bottom. I run it like this: `dart vec-test.dart some_file.vec` A vec file (text) may be found at https://fasttext.cc/docs/en/crawl-vectors.html, e.g. https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.no.300.vec.gz * Dart SDK Version (`dart --version`) Dart VM version: 2.6.0 (Thu Oct 24 17:52:22 2019 +0200) on "macos_x64" * Whether you are using Windows, MacOSX, or Linux (if applicable) macOS 10.14.6, Mojave ``` import "dart:io"; import "dart:convert"; import "dart:async"; import "dart:math"; RegExp norwRx = new RegExp(r"^[a-zæøåé]+\-?[a-zæøåé]*$"); bool nordic(String word) { return true; } main(List<String> arguments) { final filename = arguments.first; final file = new File(filename); Stream<List<int>> inputStream = file.openRead(); final verbose = arguments.length > 1; int wordLines; int dims; int goodCount = 0; double maxLen = 0.0; String maxWord; inputStream .transform(utf8.decoder) .transform(new LineSplitter()) .listen((String line) { List<String> parts = line.split(" "); if (parts.length <= 3) { // may be a trailing space wordLines = int.parse(parts[0]); dims = int.parse(parts[1]); print("$wordLines word lines, $dims dimensions"); } else if(norwRx.hasMatch(parts[0])) { double sumOfSquares = 0.0; for (int i=1; i<=dims; i++) { double d = double.parse(parts[i]); sumOfSquares += d * d; } double vLen = sqrt(sumOfSquares); goodCount++; if (vLen > maxLen) { maxLen = vLen; maxWord = parts[0]; } if (verbose) { print("${parts[0]}: $vLen"); } } }, onDone: () { print("\naccepted words: $goodCount"); print("maxLen=$maxLen, for '$maxWord'"); }, onError: (e) { print(e.toString()); } ); } ```
True
dart2native performance issue - I used dart2native to compile a little Dart program that reads through a big word vector file (typically > 1 GB) and prints some statistics. The execution times were a bit disappointing. The compiled native used real 0m37.340s (user 0m35.359s), while the old non-compiled used only real 0m23.449s (user 0m23.894s). This was on a MacBook Air. My Dart program, vec-test.dart, is listed at the bottom. I run it like this: `dart vec-test.dart some_file.vec` A vec file (text) may be found at https://fasttext.cc/docs/en/crawl-vectors.html, e.g. https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.no.300.vec.gz * Dart SDK Version (`dart --version`) Dart VM version: 2.6.0 (Thu Oct 24 17:52:22 2019 +0200) on "macos_x64" * Whether you are using Windows, MacOSX, or Linux (if applicable) macOS 10.14.6, Mojave ``` import "dart:io"; import "dart:convert"; import "dart:async"; import "dart:math"; RegExp norwRx = new RegExp(r"^[a-zæøåé]+\-?[a-zæøåé]*$"); bool nordic(String word) { return true; } main(List<String> arguments) { final filename = arguments.first; final file = new File(filename); Stream<List<int>> inputStream = file.openRead(); final verbose = arguments.length > 1; int wordLines; int dims; int goodCount = 0; double maxLen = 0.0; String maxWord; inputStream .transform(utf8.decoder) .transform(new LineSplitter()) .listen((String line) { List<String> parts = line.split(" "); if (parts.length <= 3) { // may be a trailing space wordLines = int.parse(parts[0]); dims = int.parse(parts[1]); print("$wordLines word lines, $dims dimensions"); } else if(norwRx.hasMatch(parts[0])) { double sumOfSquares = 0.0; for (int i=1; i<=dims; i++) { double d = double.parse(parts[i]); sumOfSquares += d * d; } double vLen = sqrt(sumOfSquares); goodCount++; if (vLen > maxLen) { maxLen = vLen; maxWord = parts[0]; } if (verbose) { print("${parts[0]}: $vLen"); } } }, onDone: () { print("\naccepted words: $goodCount"); print("maxLen=$maxLen, for '$maxWord'"); }, onError: (e) { print(e.toString()); } ); } ```
perf
performance issue i used to compile a little dart program that reads through a big word vector file typically gb and prints some statistics the execution times were a bit disappointing the compiled native used real user while the old non compiled used only real user this was on a macbook air my dart program vec test dart is listed at the bottom i run it like this dart vec test dart some file vec a vec file text may be found at e g dart sdk version dart version dart vm version thu oct on macos whether you are using windows macosx or linux if applicable macos mojave import dart io import dart convert import dart async import dart math regexp norwrx new regexp r bool nordic string word return true main list arguments final filename arguments first final file new file filename stream inputstream file openread final verbose arguments length int wordlines int dims int goodcount double maxlen string maxword inputstream transform decoder transform new linesplitter listen string line list parts line split if parts length may be a trailing space wordlines int parse parts dims int parse parts print wordlines word lines dims dimensions else if norwrx hasmatch parts double sumofsquares for int i i dims i double d double parse parts sumofsquares d d double vlen sqrt sumofsquares goodcount if vlen maxlen maxlen vlen maxword parts if verbose print parts vlen ondone print naccepted words goodcount print maxlen maxlen for maxword onerror e print e tostring
1
791,391
27,862,200,434
IssuesEvent
2023-03-21 07:33:22
noisy/portfolio
https://api.github.com/repos/noisy/portfolio
closed
'lass' instead of 'class' in ProjectThumbnail.vue
Priority: Minor
On the `/projects` page, the project names in the grid have a broken class attribute. https://github.com/noisy/portfolio/blob/3c05025c8c99e0c9e9d2a44de814892a08d7c281/src/components/ProjectThumbnail.vue#L32 ![image](https://user-images.githubusercontent.com/1151664/226493504-b60b19d5-2a27-4012-891f-82f13a81a47c.png)
1.0
'lass' instead of 'class' in ProjectThumbnail.vue - On the `/projects` page, the project names in the grid have a broken class attribute. https://github.com/noisy/portfolio/blob/3c05025c8c99e0c9e9d2a44de814892a08d7c281/src/components/ProjectThumbnail.vue#L32 ![image](https://user-images.githubusercontent.com/1151664/226493504-b60b19d5-2a27-4012-891f-82f13a81a47c.png)
non_perf
lass instead of class in projectthumbnail vue on page projects names of projects in grid have broken class
0
95,498
8,560,216,286
IssuesEvent
2018-11-09 00:08:50
Microsoft/openenclave
https://api.github.com/repos/Microsoft/openenclave
closed
Core Testing: Expand tests for thread binding behavior for enclaves
functionality investigation testing
We need to flesh out the details of the enclave APIs' behaviors and validate them. This includes thread binding behavior and its interactions in multi-threaded scenarios: - ~~Thread binding correctly avoids deadlocks from thread exhaustion in the enclave~~ - [x] Thread binding now fails if we exhaust all enclave threads; we should simply verify that the host handles this failure gracefully. (Addressed by PR #1023 ) - - [X] Validate implementations of all OE synchronization primitives (Covered with #476 ) - - [X] Stress and load tests for multi-threaded enclave applications (Missing coverage provided with #612 )
1.0
Core Testing: Expand tests for thread binding behavior for enclaves - We need to flesh out the details of the enclave APIs' behaviors and validate them. This includes thread binding behavior and its interactions in multi-threaded scenarios: - ~~Thread binding correctly avoids deadlocks from thread exhaustion in the enclave~~ - [x] Thread binding now fails if we exhaust all enclave threads; we should simply verify that the host handles this failure gracefully. (Addressed by PR #1023 ) - - [X] Validate implementations of all OE synchronization primitives (Covered with #476 ) - - [X] Stress and load tests for multi-threaded enclave applications (Missing coverage provided with #612 )
non_perf
core testing expand tests for thread binding behavior for enclaves we need to flush out the details for behaviors of the enclave apis and validate them this includes thread binding behavior and its interactions in multi threaded scenarios thread binding correctly avoids deadlocks from thread exhaustion in the enclave thread binding now fails if we exhaust all enclave threads we should simply verify that host handles this failure gracefully addressed by pr validate implementations of all oe synchronization primitives covered with stress and load tests for multi threaded enclave applications missing coverage provided with
0
331,708
24,321,863,420
IssuesEvent
2022-09-30 11:31:02
latex3/latex3
https://api.github.com/repos/latex3/latex3
closed
The offset of \SetHorizontalPole
documentation l3coffins
The documentation of `xcoffins` says the offset of `\SetHorizontalPole` is from the bottom edge of the bounding box. https://github.com/latex3/latex3/blob/8b3fee5152c9b95e25710120fb66e4ccfc85af26/l3experimental/xcoffins/xcoffins.dtx#L181-L187 In this test case, I create a coffin with height of 24pt and depth of 48pt. Then I create a horizontal `test` pole with 12pt offset (`\SetHorizontalPole \tmpa {test} {12pt}`). It is supposed to be 12pt above the `b` pole according to the documentation. However the actual `test` pole is 12 pt above the `H` pole, which is contradictory to the documentation. ```latex \documentclass{article} \usepackage{xcoffins} \usepackage{expl3} \begin{document} \NewCoffin\tmpa \SetVerticalCoffin \tmpa {72pt} {% \noindent \rule [-48pt] {0pt} {72pt}% } \SetHorizontalPole \tmpa {test} {12pt} \DisplayCoffinHandles \tmpa {black} \vskip 80pt \ExplSyntaxOn \vcoffin_set:Nnn \l_tmpa_coffin { 72pt } { \noindent \rule [ -48pt ] { 0pt } { 72pt } } \coffin_set_horizontal_pole:Nnn \l_tmpa_coffin { test } { 12pt } \coffin_display_handles:Nn \l_tmpa_coffin { black } \ExplSyntaxOff \end{document} ``` <img width="471" alt="Screen Shot 2022-02-02 at 17 00 04" src="https://user-images.githubusercontent.com/12290822/152123389-198b4f81-7f4a-4837-89a3-30b8a60cf8f3.png"> ``` Package: expl3 2022-01-21 L3 programming layer (loader) Package: xcoffins 2021-11-12 L3 Experimental design level coffins ```
1.0
The offset of \SetHorizontalPole - The documentation of `xcoffins` says the offset of `\SetHorizontalPole` is from the bottom edge of the bounding box. https://github.com/latex3/latex3/blob/8b3fee5152c9b95e25710120fb66e4ccfc85af26/l3experimental/xcoffins/xcoffins.dtx#L181-L187 In this test case, I create a coffin with height of 24pt and depth of 48pt. Then I create a horizontal `test` pole with 12pt offset (`\SetHorizontalPole \tmpa {test} {12pt}`). It is supposed to be 12pt above the `b` pole according to the documentation. However the actual `test` pole is 12 pt above the `H` pole, which is contradictory to the documentation. ```latex \documentclass{article} \usepackage{xcoffins} \usepackage{expl3} \begin{document} \NewCoffin\tmpa \SetVerticalCoffin \tmpa {72pt} {% \noindent \rule [-48pt] {0pt} {72pt}% } \SetHorizontalPole \tmpa {test} {12pt} \DisplayCoffinHandles \tmpa {black} \vskip 80pt \ExplSyntaxOn \vcoffin_set:Nnn \l_tmpa_coffin { 72pt } { \noindent \rule [ -48pt ] { 0pt } { 72pt } } \coffin_set_horizontal_pole:Nnn \l_tmpa_coffin { test } { 12pt } \coffin_display_handles:Nn \l_tmpa_coffin { black } \ExplSyntaxOff \end{document} ``` <img width="471" alt="Screen Shot 2022-02-02 at 17 00 04" src="https://user-images.githubusercontent.com/12290822/152123389-198b4f81-7f4a-4837-89a3-30b8a60cf8f3.png"> ``` Package: expl3 2022-01-21 L3 programming layer (loader) Package: xcoffins 2021-11-12 L3 Experimental design level coffins ```
non_perf
the offset of sethorizontalpole the documentation of xcoffins says the offset of sethorizontalpole is from the bottom edge of the bounding box in this test case i create a coffin with height of and depth of then i create a horizontal test pole with offset sethorizontalpole tmpa test it is supposed to be above the b pole according to the documentation however the actual test pole is pt above the h pole which is contradictory to the documentation latex documentclass article usepackage xcoffins usepackage begin document newcoffin tmpa setverticalcoffin tmpa noindent rule sethorizontalpole tmpa test displaycoffinhandles tmpa black vskip explsyntaxon vcoffin set nnn l tmpa coffin noindent rule coffin set horizontal pole nnn l tmpa coffin test coffin display handles nn l tmpa coffin black explsyntaxoff end document img width alt screen shot at src package programming layer loader package xcoffins experimental design level coffins
0
18,440
10,115,841,318
IssuesEvent
2019-07-30 23:13:22
flutter/devtools
https://api.github.com/repos/flutter/devtools
opened
Warn if the CPU profile has too little data to be meaningful
enhancement performance page
If the CPU sample has fewer than (10?) samples it isn't really useful and we should display a warning at the top of the profile that the sample is too small to draw useful conclusions and you need to run with a higher sampling rate or look at a wider time interval. Currently you only see the # of samples when you select an item in the chart so it is easy to try to scroll down into the chart looking for data before realizing the flame chart is just a single sample. Fyi @kenzieschmoll
True
Warn if the CPU profile has too little data to be meaningful - If the CPU sample has fewer than (10?) samples it isn't really useful and we should display a warning at the top of the profile that the sample is too small to draw useful conclusions and you need to run with a higher sampling rate or look at a wider time interval. Currently you only see the # of samples when you select an item in the chart so it is easy to try to scroll down into the chart looking for data before realizing the flame chart is just a single sample. Fyi @kenzieschmoll
perf
warn if the cpu profile has too little data to be meaningless if the cpu sample has fewer than samples it isn t really useful and we should display a warning at the top of the profile that the sample is too small to draw useful conclusions and you need to run with a higher sampling rate or look at a wider time interval currently you only see the of samples when you select an item in the chart so it is easy to try to scroll down into the chart looking for data before realizing the flame chart is just a single sample fyi kenzieschmoll
1
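A sketch of the proposed guard; the threshold comes from the record's own "(10?)" and is therefore an assumption:

```python
from typing import Optional

MIN_SAMPLES = 10  # assumed threshold, taken from the report's "(10?)"

def profile_warning(sample_count: int) -> Optional[str]:
    """Return a warning banner when a CPU profile is too small to be meaningful."""
    if sample_count < MIN_SAMPLES:
        return (f"Only {sample_count} CPU samples collected - too few to draw "
                "useful conclusions. Run with a higher sampling rate or look "
                "at a wider time interval.")
    return None
```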
31,907
15,116,029,521
IssuesEvent
2021-02-09 05:54:40
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
TPU nightly
TF 1.14 comp:keras comp:tpus type:performance
I'm training a custom resnet on a single TPU device using tf.keras and saving the model with the ModelCheckpoint callback on the VM. The time it takes to save the model during training is excessive: the model is about 200 MB, and saving it to the VM takes ~1 hour.
True
TPU nightly - I'm training a custom resnet on a single TPU device using tf.keras and saving the model with the ModelCheckpoint callback on the VM. The time it takes to save the model during training is excessive: the model is about 200 MB, and saving it to the VM takes ~1 hour.
perf
tpu nightly i m training a custom resnet on a single tpu device using tf keras and saving the model using modelcheckpoint callback on the vm the time takes to save the model during training is very much the model is about and it takes hour to save it to the vm
1
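One common mitigation for slow checkpointing, offered here only as a hedged sketch and not as the confirmed fix for this record, is to save weights only instead of the full model:

```python
import tensorflow as tf

# Saving weights alone is usually much cheaper than serializing the whole
# model; whether it closes the ~1 hour gap reported above is an assumption.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="ckpt/weights.{epoch:02d}.h5",
    save_weights_only=True,
    save_best_only=True,
    monitor="val_loss",
)
# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=[checkpoint])
```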
29,858
14,283,836,770
IssuesEvent
2020-11-23 11:36:29
owncloud/ocis
https://api.github.com/repos/owncloud/ocis
opened
Skip querying the accounts & settings services by letting the idp provide all necessary claims
p3-medium performance
In the proxy we currently look up the account ID because we assume the idp does not provide a stable, non-reassignable user identifier. Even sub@iss will change when the idp changes. But we could skip requests to the accounts service if the idp would send 1. that kind of stable identifier, e.g. an `owncloud-uuid` claim that is based on an ldap attribute specifically designed to contain a stable identifier, and 2. the roleids for the account, so we don't have to query the settings service. If the idp then sends all the necessary claims we can build the account / user info we need without asking the accounts and settings services. Since roles and permissions are managed by us, we would need to be able to update the roleids in an ldap server, or expose them, in addition to the user's `owncloud-uuid` attribute, in glauth, maybe as `owncloud-roleid` (a multi-value attribute). We can either add that to glauth or, to save another hop of latency, implement a kopano ocis accounts backend.
True
Skip querying the accounts & settings services by letting the idp provide all necessary claims - In the proxy we currently look up the account ID because we assume the idp does not provide a stable, non-reassignable user identifier. Even sub@iss will change when the idp changes. But we could skip requests to the accounts service if the idp would send 1. that kind of stable identifier, e.g. an `owncloud-uuid` claim that is based on an ldap attribute specifically designed to contain a stable identifier, and 2. the roleids for the account, so we don't have to query the settings service. If the idp then sends all the necessary claims we can build the account / user info we need without asking the accounts and settings services. Since roles and permissions are managed by us, we would need to be able to update the roleids in an ldap server, or expose them, in addition to the user's `owncloud-uuid` attribute, in glauth, maybe as `owncloud-roleid` (a multi-value attribute). We can either add that to glauth or, to save another hop of latency, implement a kopano ocis accounts backend.
perf
skip querying the accounts settings services by letting the idp provide all necessary claims in the proxy we currently look up the account id because we assume the idp does not provide a user stable non reassignable identifier even sub iss will change when the idp changes but we could skip requests to the accounts service if the idp would send that kind of stable identifier eg an owncloud uuid claim that is based on an ldap attribude specifically designed to contain a stable identifier the roleids for the account so we don t have to query the settings service if the idp then sends all the necessary claims we can build the account user info we need without asking the accounts and settings service since roles and permissions ar managed by us we would need to be able to update the roleids in an ldap server or expose the in addition to the users owncloud uuid attribute in glauth maybe owncloud roleid multi value attribute we can either add that to glauth or to save another hop latency implement a kopano ocis accounts backend
1
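A sketch of the proposed short-circuit: trust IdP-provided claims when present and fall back to the accounts/settings round-trips only when they are missing (the claim names follow the record's proposal; the service objects are hypothetical):

```python
def account_from_claims(claims: dict, accounts_svc, settings_svc) -> dict:
    uuid = claims.get("owncloud-uuid")          # proposed stable identifier
    role_ids = claims.get("owncloud-roleid")    # proposed multi-value claim
    if uuid and role_ids:
        # The idp sent everything we need: skip both service round-trips.
        return {"id": uuid, "role_ids": list(role_ids)}
    # Fallback: today's behaviour, one hop per service.
    account = accounts_svc.lookup(claims["sub"], claims["iss"])
    account["role_ids"] = settings_svc.roles_for(account["id"])
    return account
```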
681,403
23,309,833,669
IssuesEvent
2022-08-08 07:09:22
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
iforgot.apple.com - see bug description
os-ios browser-firefox-ios priority-critical
<!-- @browser: Mobile Safari --> <!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 15_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Safari/605.1.15 --> <!-- @reported_with: mobile-reporter --> <!-- @extra_labels: browser-firefox-ios --> **URL**: https://iforgot.apple.com/password/verify/appleid **Browser / Version**: Mobile Safari **Operating System**: iOS 15.6 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Mountain View, CA 94040 **Steps to Reproduce**: Mountain View, CA 94040, USA TODAY <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
iforgot.apple.com - see bug description - <!-- @browser: Mobile Safari --> <!-- @ua_header: Mozilla/5.0 (iPhone; CPU iPhone OS 15_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Safari/605.1.15 --> <!-- @reported_with: mobile-reporter --> <!-- @extra_labels: browser-firefox-ios --> **URL**: https://iforgot.apple.com/password/verify/appleid **Browser / Version**: Mobile Safari **Operating System**: iOS 15.6 **Tested Another Browser**: Yes Other **Problem type**: Something else **Description**: Mountain View, CA 94040 **Steps to Reproduce**: Mountain View, CA 94040, USA TODAY <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_perf
iforgot apple com see bug description url browser version mobile safari operating system ios tested another browser yes other problem type something else description mountain view ca steps to reproduce mountain view ca usa today browser configuration none from with ❤️
0
576,666
17,091,697,785
IssuesEvent
2021-07-08 18:23:43
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
Assertion Failure on grpc_channel_stack_init when debugging tf-serving
kind/bug lang/c++ priority/P2
I encountered the following assertion when debugging tf-serving: E0622 08:44:11.816529912 29242 channel_stack.cc:135] assertion failed: (uintptr_t)(user_data - (char*)stack) == grpc_channel_stack_size(filters, filter_count) It asserts very early, when the grpc server receives an event. I added the debug flags "GRPC_TRACE=all, GPRC_VERBOSE=DEBUG", but didn't find useful information. The server uses the grpc++ library; the client uses the grpc-java grpc-netty component. One suspicion: on the server side the grpc version is CPP_VERSION = 1.19.1 (CORE_VERSION = 7.0.0); on the client side grpc-netty is 1.20.0. Could a mismatch of grpc versions between server and client cause this? Or any other ideas? Thanks!
1.0
Assertion Failure on grpc_channel_stack_init when debugging tf-serving - I encountered the following assertion when debugging tf-serving: E0622 08:44:11.816529912 29242 channel_stack.cc:135] assertion failed: (uintptr_t)(user_data - (char*)stack) == grpc_channel_stack_size(filters, filter_count) It asserts very early, when the grpc server receives an event. I added the debug flags "GRPC_TRACE=all, GPRC_VERBOSE=DEBUG", but didn't find useful information. The server uses the grpc++ library; the client uses the grpc-java grpc-netty component. One suspicion: on the server side the grpc version is CPP_VERSION = 1.19.1 (CORE_VERSION = 7.0.0); on the client side grpc-netty is 1.20.0. Could a mismatch of grpc versions between server and client cause this? Or any other ideas? Thanks!
non_perf
assertion failure on grpc channel stack init when debugging tf serving i encountered following assertion when debugging tf serving channel stack cc assertion failed uintptr t user data char stack grpc channel stack size filters filter count it asserts very early when grpc server received an event i add debug flags grpc trace all gprc verbose debug but didn t find useful information for server part it use grpc library for client part it use grpc java grpc netty component one of my suspect is on server side the grpc version is cpp version core version on client side grpc netty is is it caused by mismatch of grpc version between server and client or any other ideas thanks
0
18,666
10,192,566,868
IssuesEvent
2019-08-12 11:30:45
LiamOSullivan/bcd-dd-v2
https://api.github.com/repos/LiamOSullivan/bcd-dd-v2
opened
Fails Cross-browser compatibility
bug enhancement performance
Check compatibility for Chrome, Firefox, Safari, and Edge. Not checking: IE.
True
Fails Cross-browser compatibility - Check compatibility for Chrome, Firefox, Safari, and Edge. Not checking: IE.
perf
fails cross browser compatibility check compatibility for chrome firefox safari edge not checking ie
1
52,159
27,405,275,939
IssuesEvent
2023-03-01 06:03:42
medic/cht-core
https://api.github.com/repos/medic/cht-core
closed
Investigate removing view queries from API startup code
Type: Performance
**Describe the performance issue** Some of the API bootstrapping steps make view queries which means if there are a bunch of new changes then API blocks on startup waiting for view building to complete. In fact, the queries end up timing out which crashes API. While it does eventually complete the app is completely unavailable during this time which makes it difficult to work out what's going on. All requests get a 502 response because API is down. **Describe the improvement you'd like** Rewrite these bootstrap steps to use the magic [allDocs](https://pouchdb.com/api.html#batch_fetch) view if possible so that API startup doesn't block. As soon as requests come in the view rebuild will be triggered but API will be handling requests correctly. This is only possible if the docs needed all have a predictable ID prefix. **Describe alternatives you've considered** We could add view building as a separate step in the bootstrapping process so it's clear what's going on, but that would still mean all requests would get a 502 response. We could implement something like a [status page](https://github.com/medic/cht-core/issues/2967) which could report on view warming progress so at least people would know what the issue is but it's better to remove the issue than to just notify about it. **Environment** - Instance: all - Browser: N/A - Client platform: N/A - App: api - Version: 3.6, but also detected on latest. **Additional context** This was detected in a call with LG where they split their db into two instances using replication. Examples of the steps that need to be patched: - https://github.com/medic/cht-core/blob/master/api/src/services/generate-xform.js where it queries existing forms - https://github.com/medic/cht-core/blob/master/api/src/translations.js where it queries for translation docs - https://github.com/medic/cht-core/blob/master/api/src/config.js where it queries for translation docs - Anything else you can find...
True
Investigate removing view queries from API startup code - **Describe the performance issue** Some of the API bootstrapping steps make view queries which means if there are a bunch of new changes then API blocks on startup waiting for view building to complete. In fact, the queries end up timing out which crashes API. While it does eventually complete the app is completely unavailable during this time which makes it difficult to work out what's going on. All requests get a 502 response because API is down. **Describe the improvement you'd like** Rewrite these bootstrap steps to use the magic [allDocs](https://pouchdb.com/api.html#batch_fetch) view if possible so that API startup doesn't block. As soon as requests come in the view rebuild will be triggered but API will be handling requests correctly. This is only possible if the docs needed all have a predictable ID prefix. **Describe alternatives you've considered** We could add view building as a separate step in the bootstrapping process so it's clear what's going on, but that would still mean all requests would get a 502 response. We could implement something like a [status page](https://github.com/medic/cht-core/issues/2967) which could report on view warming progress so at least people would know what the issue is but it's better to remove the issue than to just notify about it. **Environment** - Instance: all - Browser: N/A - Client platform: N/A - App: api - Version: 3.6, but also detected on latest. **Additional context** This was detected in a call with LG where they split their db into two instances using replication. Examples of the steps that need to be patched: - https://github.com/medic/cht-core/blob/master/api/src/services/generate-xform.js where it queries existing forms - https://github.com/medic/cht-core/blob/master/api/src/translations.js where it queries for translation docs - https://github.com/medic/cht-core/blob/master/api/src/config.js where it queries for translation docs - Anything else you can find...
perf
investigate removing view queries from api startup code describe the performance issue some of the api bootstrapping steps make view queries which means if there are a bunch of new changes then api blocks on startup waiting for view building to complete in fact the queries end up timing out which crashes api while it does eventually complete the app is completely unavailable during this time which makes it difficult to work out what s going on all requests get a response because api is down describe the improvement you d like rewrite these bootstrap steps to use the magic view if possible so that api startup doesn t block as soon as requests come in the view rebuild will be triggered but api will be handling requests correctly this is only possible if the docs needed all have a predictable id prefix describe alternatives you ve considered we could add view building as a separate step in the bootstrapping process so it s clear what s going on but that would still mean all requests would get a response we could implement something like a which could report on view warming progress so at least people would know what the issue is but it s better to remove the issue than to just notify about it environment instance all browser n a client platform n a app api version but also detected on latest additional context this was detected in a call with lg where they split their db into two instances using replication examples of the steps that need to be patched where it queries existing forms where it queries for translation docs where it queries for translation docs anything else you can find
1
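The `_all_docs` prefix trick the record refers to, sketched against CouchDB's HTTP API (database URL and ID prefix are placeholders):

```python
import json
import requests

def docs_with_prefix(db_url: str, prefix: str) -> list:
    """Fetch docs whose _id starts with `prefix` via the primary index.

    _all_docs is always available, so this never blocks on view
    (re)building; "\ufff0" is the conventional high key sentinel.
    """
    params = {
        "startkey": json.dumps(prefix),
        "endkey": json.dumps(prefix + "\ufff0"),
        "include_docs": "true",
    }
    resp = requests.get(f"{db_url}/_all_docs", params=params)
    resp.raise_for_status()
    return [row["doc"] for row in resp.json()["rows"]]

# e.g. docs_with_prefix("http://localhost:5984/medic", "form:")
```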
30,325
14,520,368,754
IssuesEvent
2020-12-14 05:18:55
apache/shardingsphere
https://api.github.com/repos/apache/shardingsphere
closed
[question] The insert performance is bad; please help check the config.
type: performance
Hi, I ran a performance test on sharding-proxy (4.1.1), and the performance is bad. The result also looks strange. server.yaml ----------------------- ``` authentication: users: root: password: root sharding: password: sharding authorizedSchemas: master_slave_db props: max.connections.size.per.query: 1 acceptor.size: 4 executor.size: 100 proxy.frontend.flush.threshold: 128 proxy.transaction.type: LOCAL proxy.opentracing.enabled: false proxy.hint.enabled: false query.with.cipher.column: true sql.show: true allow.range.query.with.inline.sharding: false ``` config-master_slave.yaml --------------------------- ``` schemaName: master_slave_db dataSources: master_ds: url: jdbc:mysql://172.31.197.149:3306/proxy?serverTimezone=UTC&useSSL=false username: proxy password: p123456 connectionTimeoutMilliseconds: 30000 idleTimeoutMilliseconds: 60000 maxLifetimeMilliseconds: 1800000 maxPoolSize: 1000 minPoolSize: 400 slave_ds_0: url: jdbc:mysql://172.31.197.150:3306/proxy?serverTimezone=UTC&useSSL=false username: proxy password: p123456 connectionTimeoutMilliseconds: 30000 idleTimeoutMilliseconds: 60000 maxLifetimeMilliseconds: 1800000 maxPoolSize: 1000 minPoolSize: 400 masterSlaveRule: name: ms_ds masterDataSourceName: master_ds slaveDataSourceNames: - slave_ds_0 ``` Test env -------------------- ``` 4 2core4G vms. 2 for mariadb8.5 master-slave cluster 1 for sharding-proxy 1 for jmeter And the test is simple. JMeter uses 100 threads for the test. insert: insert into jmeter_test values(null, now(), now(), ..., unix_timestamp(), ...); // 25 now(), and 4 unix_time_stamp() query: select id, field1,..., field29 from jmeter_test limit 100; ``` The test result --------------------------- ![image](https://user-images.githubusercontent.com/16129632/100438835-aa6b5a80-30dd-11eb-8129-7c3b38cd65be.png) You can see that "select" works OK. But for "insert", the CPU of both jmeter and the proxy is low. So, my question is: is there any problem with my configuration, or are there other config items for performance tuning?
True
[question] The insert performance is bad; please help check the config. - Hi, I ran a performance test on sharding-proxy (4.1.1), and the performance is bad. The result also looks strange. server.yaml ----------------------- ``` authentication: users: root: password: root sharding: password: sharding authorizedSchemas: master_slave_db props: max.connections.size.per.query: 1 acceptor.size: 4 executor.size: 100 proxy.frontend.flush.threshold: 128 proxy.transaction.type: LOCAL proxy.opentracing.enabled: false proxy.hint.enabled: false query.with.cipher.column: true sql.show: true allow.range.query.with.inline.sharding: false ``` config-master_slave.yaml --------------------------- ``` schemaName: master_slave_db dataSources: master_ds: url: jdbc:mysql://172.31.197.149:3306/proxy?serverTimezone=UTC&useSSL=false username: proxy password: p123456 connectionTimeoutMilliseconds: 30000 idleTimeoutMilliseconds: 60000 maxLifetimeMilliseconds: 1800000 maxPoolSize: 1000 minPoolSize: 400 slave_ds_0: url: jdbc:mysql://172.31.197.150:3306/proxy?serverTimezone=UTC&useSSL=false username: proxy password: p123456 connectionTimeoutMilliseconds: 30000 idleTimeoutMilliseconds: 60000 maxLifetimeMilliseconds: 1800000 maxPoolSize: 1000 minPoolSize: 400 masterSlaveRule: name: ms_ds masterDataSourceName: master_ds slaveDataSourceNames: - slave_ds_0 ``` Test env -------------------- ``` 4 2core4G vms. 2 for mariadb8.5 master-slave cluster 1 for sharding-proxy 1 for jmeter And the test is simple. JMeter uses 100 threads for the test. insert: insert into jmeter_test values(null, now(), now(), ..., unix_timestamp(), ...); // 25 now(), and 4 unix_time_stamp() query: select id, field1,..., field29 from jmeter_test limit 100; ``` The test result --------------------------- ![image](https://user-images.githubusercontent.com/16129632/100438835-aa6b5a80-30dd-11eb-8129-7c3b38cd65be.png) You can see that "select" works OK. But for "insert", the CPU of both jmeter and the proxy is low. So, my question is: is there any problem with my configuration, or are there other config items for performance tuning?
perf
the insert performance is bad please help to check the config hi i made a performance test on sharding proxy the performance is a bad and the result looks strange server yaml authentication users root password root sharding password sharding authorizedschemas master slave db props max connections size per query acceptor size executor size proxy frontend flush threshold proxy transaction type local proxy opentracing enabled false proxy hint enabled false query with cipher column true sql show true allow range query with inline sharding false config master slave yaml schemaname master slave db datasources master ds url jdbc mysql proxy servertimezone utc usessl false username proxy password connectiontimeoutmilliseconds idletimeoutmilliseconds maxlifetimemilliseconds maxpoolsize minpoolsize slave ds url jdbc mysql proxy servertimezone utc usessl false username proxy password connectiontimeoutmilliseconds idletimeoutmilliseconds maxlifetimemilliseconds maxpoolsize minpoolsize masterslaverule name ms ds masterdatasourcename master ds slavedatasourcenames slave ds test env vms for master slave cluster for sharding proxy for jmeter and the test is simple jmeter uses thread to make the test insert insert into jmeter test values null now now unix timestamp now and unix time stamp query select id from jmeter test limit the test result you can see that select works ok but for insert the cpu of jmeter proxy are both low so my question is is there any problem with my configuration or is there any other config items for performance tuning
1
267,499
20,205,451,966
IssuesEvent
2022-02-11 19:46:49
microsoft/lightgbm-benchmark
https://api.github.com/repos/microsoft/lightgbm-benchmark
closed
[docs] Provide an introduction to the architecture of scripts (script classes, environment, etc)
documentation
We need a schema introducing the general architecture of the benchmark scripts and pipelines. For the scripts, it should explain: - the generic specs of each script - how helper classes (RunnableScript) cover all generic specs - where to start to implement a script
1.0
[docs] Provide an introduction to the architecture of scripts (script classes, environment, etc) - We need a schema introducing the general architecture of the benchmark scripts and pipelines. For the scripts, it should explain: - the generic specs of each script - how helper classes (RunnableScript) cover all generic specs - where to start to implement a script
non_perf
provide an introduction to the architecture of scripts script classes environment etc we need a schema introducing the general architecture of the benchmark scripts and pipelines for the scripts it should explain the generic specs of each script how helper classes runnablescript cover all generic specs where to start to implement a script
0
7,881
6,282,979,104
IssuesEvent
2017-07-19 01:22:13
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
Feature request: vectorized sort on string split results
Enhancement Performance Strings
#### Problem description I'm searching, without success, for a way to sort the results of `pd.Series.str.split`. I was expecting a str method, as we have one for example to access slices of the results, but it seems that it doesn't exist (tried `sorted`, `sort`, `sort_values` and googled). As a workaround, I'm using `apply` on the `split` result, but it is not vectorized, so I guess it could be made faster with vectorized methods. Unfortunately, I don't see how we could carry the additional arguments of Python's `sorted` over to that vectorization, especially the `key` and `reverse` arguments. It would be great to have that possibility as well. Thanks! #### Code Sample ```python import pandas as pd ########### # Desired # ########### pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').str.sorted.str.join(' ') # --------------------------------------------------------------------------- # AttributeError Traceback (most recent call last) # <ipython-input-2-6fb06dcfcab2> in <module>() # ----> 1 pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').str.sorted.str.join(' ') # # AttributeError: 'StringMethods' object has no attribute 'sorted' ############## # Workaround # ############## pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').apply(sorted).str.join(' ') # 0 a b c d # dtype: object ``` #### Used versions * Python 3.6.1 (Anaconda) * Pandas 0.20.3
True
Feature request: vectorized sort on string split results - #### Problem description I'm searching, without success, for a way to sort the results of `pd.Series.str.split`. I was expecting a str method, as we have one for example to access slices of the results, but it seems that it doesn't exist (tried `sorted`, `sort`, `sort_values` and googled). As a workaround, I'm using `apply` on the `split` result, but it is not vectorized, so I guess it could be made faster with vectorized methods. Unfortunately, I don't see how we could carry the additional arguments of Python's `sorted` over to that vectorization, especially the `key` and `reverse` arguments. It would be great to have that possibility as well. Thanks! #### Code Sample ```python import pandas as pd ########### # Desired # ########### pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').str.sorted.str.join(' ') # --------------------------------------------------------------------------- # AttributeError Traceback (most recent call last) # <ipython-input-2-6fb06dcfcab2> in <module>() # ----> 1 pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').str.sorted.str.join(' ') # # AttributeError: 'StringMethods' object has no attribute 'sorted' ############## # Workaround # ############## pd.Series(["a b:d c"]).str.replace(':', ' ').str.split(' ').apply(sorted).str.join(' ') # 0 a b c d # dtype: object ``` #### Used versions * Python 3.6.1 (Anaconda) * Pandas 0.20.3
perf
feature request vectorized sort on string split results problem description i m searching without success a way to sort the results of pd series str split i was expecting a str method as we have it for example to access slices of the results but it seems that it doesn t exists tried sorted sort sort values and googled as a workaround i m using apply on the split result but it is not vectorized so i guess that it can be made faster with vectorized methods unfortunately i don t see how we can implement the additional arguments of the python sorted to that vectorization especially the key and reverse arguments it would be great to have that possibility as well thanks code sample python import pandas as pd desired pd series str replace str split str sorted str join attributeerror traceback most recent call last in pd series str replace str split str sorted str join attributeerror stringmethods object has no attribute sorted workaround pd series str replace str split apply sorted str join a b c d dtype object used versions python anaconda pandas
1
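On the `key`/`reverse` question raised in the record: the non-vectorized `apply` workaround does accept both, since they pass straight through to `sorted`:

```python
import pandas as pd

s = pd.Series(["a b:d c"])
out = (s.str.replace(":", " ")
        .str.split(" ")
        .apply(lambda parts: sorted(parts, key=str.lower, reverse=True))
        .str.join(" "))
print(out)
# 0    d c b a
# dtype: object
```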
478,593
13,782,025,303
IssuesEvent
2020-10-08 17:01:11
lowRISC/opentitan
https://api.github.com/repos/lowRISC/opentitan
closed
[regtool] multi-reg displays enum for every generated field
Priority:P3 Type:Enhancement
In [OTP_CTRL spec](https://docs.opentitan.org/hw/ip/otp_ctrl/doc/index.html#Reg_err_code_0), the `enum` description is generated for every field in the multi-reg. For OTP register `err_code`, this enum is very long and makes it hard to read the spec. It would be nice to display the `enum` only once.
1.0
[regtool] multi-reg displays enum for every generated field - In [OTP_CTRL spec](https://docs.opentitan.org/hw/ip/otp_ctrl/doc/index.html#Reg_err_code_0), the `enum` description is generated for every field in the multi-reg. For OTP register `err_code`, this enum is very long and makes it hard to read the spec. It would be nice to display the `enum` only once.
non_perf
multi reg displays enum for every generated field in the enum description is generated for every field in the multi reg for otp register err code this enum is very long and makes it hard to read the spec it would be nice to display the enum only once
0
39,859
20,205,126,168
IssuesEvent
2022-02-11 19:23:39
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Math/MathF.Truncate isn't an intrinsic and results in inefficient codegen
area-System.Numerics tenet-performance in-pr
### Description Math/MathF.Truncate results in bad codegen that calls into native modf which is pretty slow. Instead, it should be an intrinsic for vroundsd/vroundss like Ceiling and Floor are (and clang also does do it for truncate). ### Configuration Sharplab Core CLR 5.0.721.25508 on amd64 ### Regression? No idea. ### Data [Sharplab for Math](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFmwAmEdsAA2A4VC5hsGGAAoVazY14BKGXOnU595gHZGAWX0ALAHTbd+o5YBua1kAX2DGcIZmVkYTDQEAYRgAS3VjVXjzKzsbcPtiZzcMLyTU5M4Ac0NA8LCc2UiFGLizADF1CGh00wFLcNsHOQLXD092zqhqiyD6xjqQoA=) [Sharplab for MathF](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFm14AbCNgwioXMOpgAKFWo28AlDLnTqc68wDsjALLqAFgDEAdMK2cdGfaYBuc1kAX2DGcIZmVkZDdUYAYRgAS2UDVXjTcMsbOWJ7Jww3dyTU5M4AcwMTIKs5MLrZSIUYuI1XVWh0o1izRsYc3LtHFw8OiC7A8IaQoA=) [Godbolt clang for double](https://godbolt.org/z/frY1vhhn9) [Godbolt clang for float](https://godbolt.org/z/5YE4oxnjT)
True
Math/MathF.Truncate isn't an intrinsic and results in inefficient codegen - ### Description Math/MathF.Truncate results in bad codegen that calls into native modf which is pretty slow. Instead, it should be an intrinsic for vroundsd/vroundss like Ceiling and Floor are (and clang also does do it for truncate). ### Configuration Sharplab Core CLR 5.0.721.25508 on amd64 ### Regression? No idea. ### Data [Sharplab for Math](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFmwAmEdsAA2A4VC5hsGGAAoVazY14BKGXOnU595gHZGAWX0ALAHTbd+o5YBua1kAX2DGcIZmVkYTDQEAYRgAS3VjVXjzKzsbcPtiZzcMLyTU5M4Ac0NA8LCc2UiFGLizADF1CGh00wFLcNsHOQLXD092zqhqiyD6xjqQoA=) [Sharplab for MathF](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFm14AbCNgwioXMOpgAKFWo28AlDLnTqc68wDsjALLqAFgDEAdMK2cdGfaYBuc1kAX2DGcIZmVkZDdUYAYRgAS2UDVXjTcMsbOWJ7Jww3dyTU5M4AcwMTIKs5MLrZSIUYuI1XVWh0o1izRsYc3LtHFw8OiC7A8IaQoA=) [Godbolt clang for double](https://godbolt.org/z/frY1vhhn9) [Godbolt clang for float](https://godbolt.org/z/5YE4oxnjT)
perf
math mathf truncate isn t an intrinsic and results in inefficient codegen description math mathf truncate results in bad codegen that calls into native modf which is pretty slow instead it should be an intrinsic for vroundsd vroundss like ceiling and floor are and clang also does do it for truncate configuration sharplab core clr on regression no idea data
1
3,403
3,908,261,004
IssuesEvent
2016-04-19 15:22:02
getcarina/feedback
https://api.github.com/repos/getcarina/feedback
closed
Running a heavy load test on Drupal Cluster induces "Error State"
carina-backlog performance
Cluster becomes unavailable - error state. Rebuild does not work / etc. When this occurred it required a host reboot. We'll need to get the dockerfile and perform a load test.
True
Running a heavy load test on Drupal Cluster induces "Error State" - Cluster becomes unavailable - error state. Rebuild does not work / etc. When this occurred it required a host reboot. We'll need to get the dockerfile and perform a load test.
perf
running a heavy load test on drupal cluster induces error state cluster becomes unavailable error state rebuild does not work etc when this occurred it required a host reboot we ll need to get the dockerfile and perform a load test
1
4,992
4,750,864,686
IssuesEvent
2016-10-22 15:34:09
explosion/spaCy
https://api.github.com/repos/explosion/spaCy
closed
NER for the Beatles
performance
For this sentence from Wikipedia: "Born and raised in Liverpool, Lennon became involved in the skiffle craze as a teenager; his first band, the Quarrymen, evolved into the Beatles in 1960." The named entity list has these: Liverpool, Lennon, first, Quarrymen, and 1960. But Beatles is not there. The POS for Beatles is PROPN, so it seems surprising that it's not recognized as a named entity. Is there anything in Spacy I can tweak for this?
True
NER for the Beatles - For this sentence from Wikipedia: "Born and raised in Liverpool, Lennon became involved in the skiffle craze as a teenager; his first band, the Quarrymen, evolved into the Beatles in 1960." The named entity list has these: Liverpool, Lennon, first, Quarrymen, and 1960. But Beatles is not there. The POS for Beatles is PROPN, so it seems surprising that it's not recognized as a named entity. Is there anything in Spacy I can tweak for this?
perf
ner for the beatles for this sentence from wikipedia born and raised in liverpool lennon became involved in the skiffle craze as a teenager his first band the quarrymen evolved into the beatles in the named entity list has these liverpool lennon first quarrymen and but beatles is not there the pos for beatles is propn so it seems surprising that it s not recognized as a named entity is there anything in spacy i can tweak for this
1
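One way to patch such misses in a modern spaCy (the v3 API shown here postdates this 2016 record) is a rule-based EntityRuler placed ahead of the statistical NER:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "ORG", "pattern": [{"LOWER": "the"}, {"LOWER": "beatles"}]},
])

doc = nlp("his first band, the Quarrymen, evolved into the Beatles in 1960.")
print([(ent.text, ent.label_) for ent in doc.ents])
```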
21,311
11,188,429,730
IssuesEvent
2020-01-02 04:57:44
0xProject/OpenZKP
https://api.github.com/repos/0xProject/OpenZKP
closed
Optimized square_inline, but check if it beats the compiler
performance tracker
*On 2019-12-29 @Recmo wrote in [`6276ceb`](https://github.com/0xProject/OpenZKP/commit/6276cebb01ef539692865c70d587401a56829e29) “Add SquareInline”:* Optimized square_inline, but check if it beats the compiler ```rust Self::from_limbs([r0, r1, r2, r3]), Self::from_limbs([r4, r5, r6, r7]), ) } // OPT: Optimized square_inline, but check if it beats the compiler } impl MulInline<u64> for U256 { type High = u64; ``` *From [`algebra/u256/src/multiplicative.rs:46`](https://github.com/0xProject/OpenZKP/blob/6eab2e08bc19d29d78616dd79ae4a87e76db44c5/algebra/u256/src/multiplicative.rs#L46)* <!--{"commit-hash": "6276cebb01ef539692865c70d587401a56829e29", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1577638324, "author-tz": "-0800", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1577638324, "committer-tz": "-0800", "summary": "Add SquareInline", "previous": "a0617b166d7edd3aa14b917856ef63c6b3cfcaac algebra/u256/src/multiplicative.rs", "filename": "algebra/u256/src/multiplicative.rs", "line": 45, "line_end": 46, "kind": "OPT", "issue": "Optimized square_inline, but check if it beats the compiler", "head": "Optimized square_inline, but check if it beats the compiler", "context": " Self::from_limbs([r0, r1, r2, r3]),\n Self::from_limbs([r4, r5, r6, r7]),\n )\n }\n\n // OPT: Optimized square_inline, but check if it beats the compiler\n}\n\nimpl MulInline<u64> for U256 {\n type High = u64;\n\n", "repo": "0xProject/OpenZKP", "branch-hash": "6eab2e08bc19d29d78616dd79ae4a87e76db44c5"}-->
True
Optimized square_inline, but check if it beats the compiler - *On 2019-12-29 @Recmo wrote in [`6276ceb`](https://github.com/0xProject/OpenZKP/commit/6276cebb01ef539692865c70d587401a56829e29) “Add SquareInline”:* Optimized square_inline, but check if it beats the compiler ```rust Self::from_limbs([r0, r1, r2, r3]), Self::from_limbs([r4, r5, r6, r7]), ) } // OPT: Optimized square_inline, but check if it beats the compiler } impl MulInline<u64> for U256 { type High = u64; ``` *From [`algebra/u256/src/multiplicative.rs:46`](https://github.com/0xProject/OpenZKP/blob/6eab2e08bc19d29d78616dd79ae4a87e76db44c5/algebra/u256/src/multiplicative.rs#L46)* <!--{"commit-hash": "6276cebb01ef539692865c70d587401a56829e29", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1577638324, "author-tz": "-0800", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1577638324, "committer-tz": "-0800", "summary": "Add SquareInline", "previous": "a0617b166d7edd3aa14b917856ef63c6b3cfcaac algebra/u256/src/multiplicative.rs", "filename": "algebra/u256/src/multiplicative.rs", "line": 45, "line_end": 46, "kind": "OPT", "issue": "Optimized square_inline, but check if it beats the compiler", "head": "Optimized square_inline, but check if it beats the compiler", "context": " Self::from_limbs([r0, r1, r2, r3]),\n Self::from_limbs([r4, r5, r6, r7]),\n )\n }\n\n // OPT: Optimized square_inline, but check if it beats the compiler\n}\n\nimpl MulInline<u64> for U256 {\n type High = u64;\n\n", "repo": "0xProject/OpenZKP", "branch-hash": "6eab2e08bc19d29d78616dd79ae4a87e76db44c5"}-->
perf
optimized square inline but check if it beats the compiler on recmo wrote in “add squareinline” optimized square inline but check if it beats the compiler rust self from limbs self from limbs opt optimized square inline but check if it beats the compiler impl mulinline for type high from author time author tz committer remco bloemen committer mail committer time committer tz summary add squareinline previous algebra src multiplicative rs filename algebra src multiplicative rs line line end kind opt issue optimized square inline but check if it beats the compiler head optimized square inline but check if it beats the compiler context self from limbs n self from limbs n n n n opt optimized square inline but check if it beats the compiler n n nimpl mulinline for n type high n n repo openzkp branch hash
1
29,825
14,273,073,928
IssuesEvent
2020-11-21 19:52:41
astropy/astropy
https://api.github.com/repos/astropy/astropy
opened
Further performance improvements for GCRS
Performance coordinates
#11069 sped up the calculation of GCRS obsgeoloc and obsgeovel from EarthLocation for the GCRS<->(CIRS, TETE) transforms. It may be possible to - [ ] Use the same technique to (slightly?) speed up `erfa_astrom.apco` (see https://github.com/astropy/astropy/pull/10994#issuecomment-730408512) - [ ] Speed up `EarthLocation.get_gcrs()`. - [ ] Be cleverer in the transformations, not using `FunctionTransformWithFiniteDifference` when all that is being done is introduce a rotating frame.
True
Further performance improvements for GCRS - #11069 sped up the calculation of GCRS obsgeoloc and obsgeovel from EarthLocation for the GCRS<->(CIRS, TETE) transforms. It may be possible to - [ ] Use the same technique to (slightly?) speed up `erfa_astrom.apco` (see https://github.com/astropy/astropy/pull/10994#issuecomment-730408512) - [ ] Speed up `EarthLocation.get_gcrs()`. - [ ] Be cleverer in the transformations, not using `FunctionTransformWithFiniteDifference` when all that is being done is introduce a rotating frame.
perf
further performance improvements for gcrs sped up the calculation of gcrs obsgeoloc and obsgeovel from earthlocation for the gcrs cirs tete transforms it may be possible to use the same technique to slightly speed up erfa astrom apco see speed up earthlocation get gcrs be cleverer in the transformations not using functiontransformwithfinitedifference when all that is being done is introduce a rotating frame
1
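For the `erfa_astrom.apco` point, astropy ships an interpolating helper that trades a little accuracy for speed; a hedged usage sketch (the 300 s support-point spacing is an arbitrary choice):

```python
import astropy.units as u
from astropy.coordinates import erfa_astrom, ErfaAstromInterpolator

# Interpolate the expensive ERFA astrometry context across support points
# instead of recomputing it per sample.
with erfa_astrom.set(ErfaAstromInterpolator(300 * u.s)):
    pass  # perform transforms here, e.g. skycoord.transform_to(altaz_frame)
```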
13,415
8,211,239,164
IssuesEvent
2018-09-04 13:18:37
LiskHQ/lisk
https://api.github.com/repos/LiskHQ/lisk
closed
Poor performance of GET /api/accounts with certain parameter combinations
*medium api performance
### Expected behavior Performance of API `GET /api/accounts?sort=balance:desc&offset=10&limit=100` should be satisfactory (under 1 second). ### Actual behavior We ran some benchmarks with the following spec: ``` CPU: 1vCPU (Digital Ocean) RAM: 1 GB Memory Disk: 25 GB Disk Connections: 10 Time: 30s ``` And got the following results: ``` Fastest: 2.33 Slowest: 6.66 Average: 4.37 ``` ### Steps to reproduce Query the endpoint mentioned above. ### Which version(s) does this affect? (Environment, OS, etc...) 1.0.1 (after indexes)
True
Poor performance of GET /api/accounts in case of some few parameters - ### Expected behavior Performance of API `GET /api/accounts?sort=balance:desc&offset=10&limit=100` should be satisfactory (under 1 second). ### Actual behavior We ran some benchmarks with the following spec: ``` CPU: 1vCPU (Digital Ocean) RAM: 1 GB Memory Disk: 25 GB Disk Connections: 10 Time: 30s ``` and got the following results: ``` Fastest: 2.33 Slowest: 6.66 Average: 4.37 ``` ### Steps to reproduce Query the endpoints mentioned above. ### Which version(s) does this affect? (Environment, OS, etc...) 1.0.1 (after indexes)
perf
poor performance of get api accounts in case of some few parameters expected behavior performance of api get api accounts sort balance desc offset limit should be satisfactory under second actual behavior we run some benchmarks with following spec cpu digital ocean ram gb memory disk gb disk connections time and got the following results fastest slowest average steps to reproduce query endpoints mentioned above which version s does this affect environment os etc after indexes
1
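A minimal sketch for reproducing the benchmark above, assuming a node listening locally; the host, port, and sample count are placeholders to adjust:

```python
import time
import requests

URL = "http://localhost:8000/api/accounts"  # assumption: adjust to your node
PARAMS = {"sort": "balance:desc", "offset": 10, "limit": 100}

samples = []
for _ in range(10):
    start = time.perf_counter()
    response = requests.get(URL, params=PARAMS, timeout=30)
    response.raise_for_status()
    samples.append(time.perf_counter() - start)

print(f"Fastest: {min(samples):.2f} Slowest: {max(samples):.2f} "
      f"Average: {sum(samples) / len(samples):.2f}")
```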
10,571
7,238,079,726
IssuesEvent
2018-02-13 13:28:37
LLK/scratch-vm
https://api.github.com/repos/LLK/scratch-vm
opened
Investigate performance issues in "Super Mario Bros. 1-2" project
performance
### Expected Behavior - Project should run at 30fps on the testing Chromebook ### Actual Behavior - Project runs slowly ### Steps to Reproduce https://scratch.mit.edu/projects/203354232/ http://llk.github.io/scratch-vm/#203354232
True
Investigate performance issues in "Super Mario Bros. 1-2" project - ### Expected Behavior - Project should run at 30fps on the testing Chromebook ### Actual Behavior - Project runs slowly ### Steps to Reproduce https://scratch.mit.edu/projects/203354232/ http://llk.github.io/scratch-vm/#203354232
perf
investigate performance issues in super mario bros project expected behavior project should run at on testing chromebook actual behavior project runs slowly steps to reproduce
1
37,585
18,546,561,122
IssuesEvent
2021-10-21 23:22:22
espressif/arduino-esp32
https://api.github.com/repos/espressif/arduino-esp32
closed
ESP32 WiFi scanning sketch binary is 86Kb larger, consumes 55Kb more RAM in 2.0.0 vs 1.0.6
Area: Performance
### Hardware: Board: ESP32 Dev Module Core Installation version: 1.0.6, 2.0.0 IDE name: Arduino IDE Flash Frequency: 80Mhz PSRAM enabled: no Upload Speed: 115200 Computer OS: Fedora 34 x86_64 ### Description: When testing many of my Arduino-ESP32 projects with 2.0.0 versus 1.0.6, I notice that both required binary space and RAM use are consistently larger in 2.0.0 versus 1.0.6. This introduces the risk that projects that previously worked comfortably within the allocated app partition and RAM space under 1.0.6 will now require extra partition space, risking overflows and/or crashes because of insufficient RAM. Nothing has been changed except the version of the SDK (1.0.6 versus 2.0.0). ### Sketch: (leave the backquotes for [code formatting](https://help.github.com/articles/creating-and-highlighting-code-blocks/)) ```cpp #include "WiFi.h" #if CONFIG_IDF_TARGET_ESP32 //#define WIFI_PIN GPIO_NUM_4 #define WIFI_PIN GPIO_NUM_27 #elif CONFIG_IDF_TARGET_ESP32S2 #define WIFI_PIN GPIO_NUM_14 #endif void reportMemoryUse(const char * label) { Serial.printf("MEMORY USE AT %s: %u total, %u free, %u max.alloc\r\n", label, ESP.getHeapSize(), ESP.getFreeHeap(), ESP.getMaxAllocHeap()); } void setup() { Serial.begin(115200); #ifdef WIFI_PIN pinMode(WIFI_PIN, OUTPUT); digitalWrite(WIFI_PIN, LOW); #endif reportMemoryUse("setup() before WiFi setup"); // Set WiFi to station mode and disconnect from an AP if it was previously connected WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); Serial.println("Setup done"); reportMemoryUse("setup() after WiFi setup"); } void loop() { Serial.println("scan start"); // WiFi.scanNetworks will return the number of networks found #ifdef WIFI_PIN digitalWrite(WIFI_PIN, HIGH); #endif int n = WiFi.scanNetworks(); #ifdef WIFI_PIN digitalWrite(WIFI_PIN, LOW); #endif Serial.println("scan done"); if (n == 0) { Serial.println("no networks found"); } else { Serial.print(n); Serial.println(" networks found"); for (int i = 0; i < n; ++i) { // Print SSID and RSSI for each network found Serial.print(i + 1); Serial.print(": "); Serial.print(WiFi.SSID(i)); Serial.print(" ("); Serial.print(WiFi.RSSI(i)); Serial.print(")"); Serial.println((WiFi.encryptionType(i) == WIFI_AUTH_OPEN)?" ":"*"); delay(10); } } Serial.println(""); reportMemoryUse("after WiFi scan"); // Wait a bit before scanning again delay(3000); } ``` ### Debug Messages: Under 1.0.6, this sketch compiles into a 638608-byte binary, and shows the following output: ``` ets Jun 8 2016 00:22:57 rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) configsip: 0, SPIWP:0xee clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00 mode:DIO, clock div:1 load:0x3fff0018,len:4 load:0x3fff001c,len:1216 ho 0 tail 12 room 4 load:0x40078000,len:10944 load:0x40080400,len:6388 entry 0x400806b4 MEMORY USE AT setup() before WiFi setup: 360808 total, 334340 free, 113792 max.alloc Setup done MEMORY USE AT setup() after WiFi setup: 359568 total, 283652 free, 113792 max.alloc scan start scan done 12 networks found (redacted) MEMORY USE AT after WiFi scan: 359484 total, 282476 free, 113792 max.alloc scan start scan done 17 networks found (redacted) MEMORY USE AT after WiFi scan: 359484 total, 282076 free, 113792 max.alloc ``` Under 2.0.0, the exact same sketch compiles into a 724912-byte binary (86304 bytes larger), and shows the following output: ``` ets Jun 8 2016 00:22:57 rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) configsip: 0, SPIWP:0xee clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00 mode:DIO, clock div:1 load:0x3fff0030,len:1240 load:0x40078000,len:13012 load:0x40080400,len:3648 entry 0x400805f8 MEMORY USE AT setup() before WiFi setup: 366171 total, 308455 free, 65524 max.alloc Setup done MEMORY USE AT setup() after WiFi setup: 364203 total, 228051 free, 65524 max.alloc scan start scan done 14 networks found (redacted) MEMORY USE AT after WiFi scan: 364179 total, 226875 free, 65524 max.alloc scan start scan done 13 networks found (redacted) MEMORY USE AT after WiFi scan: 364179 total, 226955 free, 65524 max.alloc ``` The sketch now sees 226955 bytes of free RAM (versus 282076 bytes under 1.0.6), an unexplained 55121-byte increase in RAM consumption. This is worrying because some of our projects using ESP32 boards (with no PSRAM) are already having trouble running OTA updates via their web interface under 2.0.0, as apparently a 4096-byte malloc() inside the Updater object now fails where it previously succeeded with 1.0.6.
True
ESP32 WiFi scanning sketch binary is 86Kb larger, consumes 55Kb more RAM in 2.0.0 vs 1.0.6 - ### Hardware: Board: ESP32 Dev Module Core Installation version: 1.0.6, 2.0.0 IDE name: Arduino IDE Flash Frequency: 80Mhz PSRAM enabled: no Upload Speed: 115200 Computer OS: Fedora 34 x86_64 ### Description: When testing many of my Arduino-ESP32 projects with 2.0.0 versus 1.0.6, I notice that both required binary space and RAM use are consistently larger in 2.0.0 versus 1.0.6. This introduces the risk that projects that previously worked comfortably within the allocated app partition and RAM space under 1.0.6 will now require extra partition space, risking overflows and/or crashes because of insufficient RAM. Nothing has been changed except the version of the SDK (1.0.6 versus 2.0.0). ### Sketch: (leave the backquotes for [code formatting](https://help.github.com/articles/creating-and-highlighting-code-blocks/)) ```cpp #include "WiFi.h" #if CONFIG_IDF_TARGET_ESP32 //#define WIFI_PIN GPIO_NUM_4 #define WIFI_PIN GPIO_NUM_27 #elif CONFIG_IDF_TARGET_ESP32S2 #define WIFI_PIN GPIO_NUM_14 #endif void reportMemoryUse(const char * label) { Serial.printf("MEMORY USE AT %s: %u total, %u free, %u max.alloc\r\n", label, ESP.getHeapSize(), ESP.getFreeHeap(), ESP.getMaxAllocHeap()); } void setup() { Serial.begin(115200); #ifdef WIFI_PIN pinMode(WIFI_PIN, OUTPUT); digitalWrite(WIFI_PIN, LOW); #endif reportMemoryUse("setup() before WiFi setup"); // Set WiFi to station mode and disconnect from an AP if it was previously connected WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); Serial.println("Setup done"); reportMemoryUse("setup() after WiFi setup"); } void loop() { Serial.println("scan start"); // WiFi.scanNetworks will return the number of networks found #ifdef WIFI_PIN digitalWrite(WIFI_PIN, HIGH); #endif int n = WiFi.scanNetworks(); #ifdef WIFI_PIN digitalWrite(WIFI_PIN, LOW); #endif Serial.println("scan done"); if (n == 0) { Serial.println("no networks found"); } else { Serial.print(n); Serial.println(" networks found"); for (int i = 0; i < n; ++i) { // Print SSID and RSSI for each network found Serial.print(i + 1); Serial.print(": "); Serial.print(WiFi.SSID(i)); Serial.print(" ("); Serial.print(WiFi.RSSI(i)); Serial.print(")"); Serial.println((WiFi.encryptionType(i) == WIFI_AUTH_OPEN)?" ":"*"); delay(10); } } Serial.println(""); reportMemoryUse("after WiFi scan"); // Wait a bit before scanning again delay(3000); } ``` ### Debug Messages: Under 1.0.6, this sketch compiles into a 638608-byte binary, and shows the following output: ``` ets Jun 8 2016 00:22:57 rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) configsip: 0, SPIWP:0xee clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00 mode:DIO, clock div:1 load:0x3fff0018,len:4 load:0x3fff001c,len:1216 ho 0 tail 12 room 4 load:0x40078000,len:10944 load:0x40080400,len:6388 entry 0x400806b4 MEMORY USE AT setup() before WiFi setup: 360808 total, 334340 free, 113792 max.alloc Setup done MEMORY USE AT setup() after WiFi setup: 359568 total, 283652 free, 113792 max.alloc scan start scan done 12 networks found (redacted) MEMORY USE AT after WiFi scan: 359484 total, 282476 free, 113792 max.alloc scan start scan done 17 networks found (redacted) MEMORY USE AT after WiFi scan: 359484 total, 282076 free, 113792 max.alloc ``` Under 2.0.0, the exact same sketch compiles into a 724912-byte binary (86304 bytes larger), and shows the following output: ``` ets Jun 8 2016 00:22:57 rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) configsip: 0, SPIWP:0xee clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00 mode:DIO, clock div:1 load:0x3fff0030,len:1240 load:0x40078000,len:13012 load:0x40080400,len:3648 entry 0x400805f8 MEMORY USE AT setup() before WiFi setup: 366171 total, 308455 free, 65524 max.alloc Setup done MEMORY USE AT setup() after WiFi setup: 364203 total, 228051 free, 65524 max.alloc scan start scan done 14 networks found (redacted) MEMORY USE AT after WiFi scan: 364179 total, 226875 free, 65524 max.alloc scan start scan done 13 networks found (redacted) MEMORY USE AT after WiFi scan: 364179 total, 226955 free, 65524 max.alloc ``` The sketch now sees 226955 bytes of free RAM (versus 282076 bytes under 1.0.6), an unexplained 55121-byte increase in RAM consumption. This is worrying because some of our projects using ESP32 boards (with no PSRAM) are already having trouble running OTA updates via their web interface under 2.0.0, as apparently a 4096-byte malloc() inside the Updater object now fails where it previously succeeded with 1.0.6.
perf
wifi scanning sketch binary is larger consumes more ram in vs hardware board dev module core installation version ide name arduino ide flash frequency psram enabled no upload speed computer os fedora description when testing many of my arduino project with versus i notice that both required binary space and ram use are consistently larger in versus this introduces the risk that projects that previously worked comfortably within allocated app partition and ram space under will now require extra partition space risking overflows and or crashes because of insufficient ram nothing has been changed except the version of sdk versus sketch leave the backquotes for cpp include wifi h if config idf target define wifi pin gpio num define wifi pin gpio num elif config idf target define wifi pin gpio num endif void reportmemoryuse const char label serial printf memory use at s u total u free u max alloc r n label esp getheapsize esp getfreeheap esp getmaxallocheap void setup serial begin ifdef wifi pin pinmode wifi pin output digitalwrite wifi pin low endif reportmemoryuse setup before wifi setup set wifi to station mode and disconnect from an ap if it was previously connected wifi mode wifi sta wifi disconnect delay serial println setup done reportmemoryuse setup after wifi setup void loop serial println scan start wifi scannetworks will return the number of networks found ifdef wifi pin digitalwrite wifi pin high endif int n wifi scannetworks ifdef wifi pin digitalwrite wifi pin low endif serial println scan done if n serial println no networks found else serial print n serial println networks found for int i i n i print ssid and rssi for each network found serial print i serial print serial print wifi ssid i serial print serial print wifi rssi i serial print serial println wifi encryptiontype i wifi auth open delay serial println reportmemoryuse after wifi scan wait a bit before scanning again delay debug messages under this sketch compiles into a byte binary and shows the following output ets jun rst poweron reset boot spi fast flash boot configsip spiwp clk drv q drv d drv drv hd drv wp drv mode dio clock div load len load len ho tail room load len load len entry memory use at setup before wifi setup total free max alloc setup done memory use at setup after wifi setup total free max alloc scan start scan done networks found redacted memory use at after wifi scan total free max alloc scan start scan done networks found redacted memory use at after wifi scan total free max alloc under the exact same sketch compiles into a byte binary bytes larger and shows the following output ets jun rst poweron reset boot spi fast flash boot configsip spiwp clk drv q drv d drv drv hd drv wp drv mode dio clock div load len load len load len entry memory use at setup before wifi setup total free max alloc setup done memory use at setup after wifi setup total free max alloc scan start scan done networks found redacted memory use at after wifi scan total free max alloc scan start scan done networks found redacted memory use at after wifi scan total free max alloc the sketch now sees bytes of free ram versus bytes under an unexplained byte increase this is worrying because some of our projects using boards with no psram are already having trouble running ota updates via their web interface under as apparently a byte malloc inside the updater object now fails where it previously succeded with
1
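The size and memory deltas quoted in the report can be re-derived directly from the logged figures; a quick arithmetic check using only the numbers above:

```python
flash_106, flash_200 = 638608, 724912  # binary sizes from the report
free_106, free_200 = 282076, 226955    # free heap after a scan, per version

print(flash_200 - flash_106)  # 86304 extra bytes of flash in 2.0.0
print(free_106 - free_200)    # 55121 fewer bytes of free heap in 2.0.0
```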
17,509
9,795,666,424
IssuesEvent
2019-06-11 04:53:06
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
docker tensorflow-tensorflow/latest-gpu slow initialisation of GPU
comp:gpu type:bug/performance
<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS Platform and Distribution: Ubuntu 16.04 Using Docker (latest-gpu image) - Mobile Device: No - TensorFlow installed from (source or binary): N/A - TensorFlow version: 1.13.0-rc1 - Python version: Python 3.5.2 - Installed using virtualenv? pip? conda?: N/A - Bazel version (if compiling from source): N/A - GCC/Compiler version (if compiling from source): N/A - CUDA/cuDNN version: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Sat_Aug_25_21:08:01_CDT_2018 Cuda compilation tools, release 10.0, V10.0.130 - GPU model and memory: Quadro M1200 ``` ==============NVSMI LOG============== Timestamp : Mon Feb 25 00:50:20 2019 Driver Version : 410.78 CUDA Version : 10.0 Attached GPUs : 1 GPU 00000000:01:00.0 Product Name : Quadro M1200 Product Brand : Quadro Display Mode : Disabled Display Active : Disabled Persistence Mode : Enabled Accounting Mode : Disabled Accounting Mode Buffer Size : 4000 Driver Model Current : N/A Pending : N/A Serial Number : N/A GPU UUID : GPU-d9093d17-7927-a053-9104-426e68b1d4ac Minor Number : 0 VBIOS Version : 82.07.BB.00.13 MultiGPU Board : No Board ID : 0x100 GPU Part Number : N/A Inforom Version Image Version : N/A OEM Object : N/A ECC Object : N/A Power Management Object : N/A GPU Operation Mode Current : N/A Pending : N/A GPU Virtualization Mode Virtualization mode : None IBMNPU Relaxed Ordering Mode : N/A PCI Bus : 0x01 Device : 0x00 Domain : 0x0000 Device Id : 0x13B610DE Bus Id : 00000000:01:00.0 Sub System Id : 0x224D17AA GPU Link Info PCIe Generation Max : 3 Current : 3 Link Width Max : 16x Current : 16x Bridge Chip Type : N/A Firmware : N/A Replays since reset : 0 Tx Throughput : 0 KB/s Rx Throughput : 0 KB/s Fan Speed : N/A Performance State : P0 Clocks Throttle Reasons Idle : Not Active Applications Clocks Setting : Active SW Power Cap : Not Active HW Slowdown : Not Active HW Thermal Slowdown : N/A HW Power Brake Slowdown : N/A Sync Boost : Not Active SW Thermal Slowdown : Not Active Display Clock Setting : Not Active FB Memory Usage Total : 4043 MiB Used : 3813 MiB Free : 230 MiB BAR1 Memory Usage Total : 256 MiB Used : 3 MiB Free : 253 MiB Compute Mode : Default Utilization Gpu : 0 % Memory : 0 % Encoder : 0 % Decoder : 0 % Encoder Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 FBC Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 Ecc Mode Current : N/A Pending : N/A ECC Errors Volatile Single Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Double Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Aggregate Single Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Double Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Retired Pages Single Bit ECC : N/A Double Bit ECC : N/A Pending : N/A Temperature GPU Current Temp : 37 C GPU Shutdown Temp : N/A GPU Slowdown Temp : 96 C GPU Max Operating Temp : 92 C Memory Current Temp : N/A Memory Max Operating Temp : N/A Power Readings Power Management : N/A Power Draw : N/A Power Limit : N/A Default Power Limit : N/A Enforced Power Limit : N/A Min Power Limit : N/A Max Power Limit : N/A Clocks Graphics : 993 MHz SM : 993 MHz Memory : 2505 MHz Video : 893 MHz Applications Clocks Graphics : N/A Memory : N/A Default Applications Clocks Graphics : N/A Memory : N/A Max Clocks Graphics : 1150 MHz SM : 1150 MHz Memory : 2505 MHz Video : 1035 MHz Max Customer Boost Clocks Graphics : N/A Clock Policy Auto Boost : N/A Auto Boost Default : N/A Processes Process ID : 1123 Type : G Name : /usr/lib/xorg/Xorg Used GPU Memory : 8 MiB Process ID : 31763 Type : C Name : python Used GPU Memory : 3791 MiB ``` **Describe the problem** When using my GPU, it takes several minutes (just over 4 minutes) to initialise to do anything. Issue does not exist when using CPU **Provide the exact sequence of commands / steps that you executed before running into the problem** `docker run -it -u $(id -u):$(id -g) --runtime=nvidia -v $(realpath ~/tensorflow):/tf/tensorflow tensorflow/tensorflow:latest-gpu bash` `python test.py` contents of test.py: ``` import tensorflow as tf mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test) ``` **Any other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. logs while running test script ``` Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step 11501568/11490434 [==============================] - 0s 0us/step WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. 2019-02-25 05:46:52.561440: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-02-25 05:46:52.628689: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-02-25 05:46:52.629997: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x50be7d0 executing computations on platform CUDA. Devices: 2019-02-25 05:46:52.630035: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Quadro M1200, Compute Capability 5.0 2019-02-25 05:46:52.664820: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz 2019-02-25 05:46:52.666234: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5128500 executing computations on platform Host. Devices: 2019-02-25 05:46:52.666318: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined> 2019-02-25 05:46:52.666979: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148 pciBusID: 0000:01:00.0 totalMemory: 3.95GiB freeMemory: 3.90GiB 2019-02-25 05:46:52.667052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-02-25 05:46:52.669065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-02-25 05:46:52.669122: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-02-25 05:46:52.669152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-02-25 05:46:52.669563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3696 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0) Epoch 1/5 2019-02-25 05:51:01.254939: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally 60000/60000 [==============================] - 5s 84us/sample - loss: 0.2207 - acc: 0.9348 Epoch 2/5 60000/60000 [==============================] - 5s 79us/sample - loss: 0.0960 - acc: 0.9714 Epoch 3/5 60000/60000 [==============================] - 5s 78us/sample - loss: 0.0697 - acc: 0.9774 Epoch 4/5 60000/60000 [==============================] - 5s 79us/sample - loss: 0.0536 - acc: 0.9826 Epoch 5/5 60000/60000 [==============================] - 5s 76us/sample - loss: 0.0430 - acc: 0.9857 10000/10000 [==============================] - 0s 29us/sample - loss: 0.0606 - acc: 0.9813 ```
True
docker tensorflow-tensorflow/latest-gpu slow initialisation of GPU - <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** - OS Platform and Distribution: Ubuntu 16.04 Using Docker (latest-gpu image) - Mobile Device: No - TensorFlow installed from (source or binary): N/A - TensorFlow version: 1.13.0-rc1 - Python version: Python 3.5.2 - Installed using virtualenv? pip? conda?: N/A - Bazel version (if compiling from source): N/A - GCC/Compiler version (if compiling from source): N/A - CUDA/cuDNN version: nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2018 NVIDIA Corporation Built on Sat_Aug_25_21:08:01_CDT_2018 Cuda compilation tools, release 10.0, V10.0.130 - GPU model and memory: Quadro M1200 ``` ==============NVSMI LOG============== Timestamp : Mon Feb 25 00:50:20 2019 Driver Version : 410.78 CUDA Version : 10.0 Attached GPUs : 1 GPU 00000000:01:00.0 Product Name : Quadro M1200 Product Brand : Quadro Display Mode : Disabled Display Active : Disabled Persistence Mode : Enabled Accounting Mode : Disabled Accounting Mode Buffer Size : 4000 Driver Model Current : N/A Pending : N/A Serial Number : N/A GPU UUID : GPU-d9093d17-7927-a053-9104-426e68b1d4ac Minor Number : 0 VBIOS Version : 82.07.BB.00.13 MultiGPU Board : No Board ID : 0x100 GPU Part Number : N/A Inforom Version Image Version : N/A OEM Object : N/A ECC Object : N/A Power Management Object : N/A GPU Operation Mode Current : N/A Pending : N/A GPU Virtualization Mode Virtualization mode : None IBMNPU Relaxed Ordering Mode : N/A PCI Bus : 0x01 Device : 0x00 Domain : 0x0000 Device Id : 0x13B610DE Bus Id : 00000000:01:00.0 Sub System Id : 0x224D17AA GPU Link Info PCIe Generation Max : 3 Current : 3 Link Width Max : 16x Current : 16x Bridge Chip Type : N/A Firmware : N/A Replays since reset : 0 Tx Throughput : 0 KB/s Rx Throughput : 0 KB/s Fan Speed : N/A Performance State : P0 Clocks Throttle Reasons Idle : Not Active Applications Clocks Setting : Active SW Power Cap : Not Active HW Slowdown : Not Active HW Thermal Slowdown : N/A HW Power Brake Slowdown : N/A Sync Boost : Not Active SW Thermal Slowdown : Not Active Display Clock Setting : Not Active FB Memory Usage Total : 4043 MiB Used : 3813 MiB Free : 230 MiB BAR1 Memory Usage Total : 256 MiB Used : 3 MiB Free : 253 MiB Compute Mode : Default Utilization Gpu : 0 % Memory : 0 % Encoder : 0 % Decoder : 0 % Encoder Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 FBC Stats Active Sessions : 0 Average FPS : 0 Average Latency : 0 Ecc Mode Current : N/A Pending : N/A ECC Errors Volatile Single Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Double Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Aggregate Single Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Double Bit Device Memory : N/A Register File : N/A L1 Cache : N/A L2 Cache : N/A Texture Memory : N/A Texture Shared : N/A CBU : N/A Total : N/A Retired Pages Single Bit ECC : N/A Double Bit ECC : N/A Pending : N/A Temperature GPU Current Temp : 37 C GPU Shutdown Temp : N/A GPU Slowdown Temp : 96 C GPU Max Operating Temp : 92 C Memory Current Temp : N/A Memory Max Operating Temp : N/A Power Readings Power Management : N/A Power Draw : N/A Power Limit : N/A Default Power Limit : N/A Enforced Power Limit : N/A Min Power Limit : N/A Max Power Limit : N/A Clocks Graphics : 993 MHz SM : 993 MHz Memory : 2505 MHz Video : 893 MHz Applications Clocks Graphics : N/A Memory : N/A Default Applications Clocks Graphics : N/A Memory : N/A Max Clocks Graphics : 1150 MHz SM : 1150 MHz Memory : 2505 MHz Video : 1035 MHz Max Customer Boost Clocks Graphics : N/A Clock Policy Auto Boost : N/A Auto Boost Default : N/A Processes Process ID : 1123 Type : G Name : /usr/lib/xorg/Xorg Used GPU Memory : 8 MiB Process ID : 31763 Type : C Name : python Used GPU Memory : 3791 MiB ``` **Describe the problem** When using my GPU, it takes several minutes (just over 4 minutes) to initialise to do anything. Issue does not exist when using CPU **Provide the exact sequence of commands / steps that you executed before running into the problem** `docker run -it -u $(id -u):$(id -g) --runtime=nvidia -v $(realpath ~/tensorflow):/tf/tensorflow tensorflow/tensorflow:latest-gpu bash` `python test.py` contents of test.py: ``` import tensorflow as tf mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test) ``` **Any other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. logs while running test script ``` Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step 11501568/11490434 [==============================] - 0s 0us/step WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. 2019-02-25 05:46:52.561440: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-02-25 05:46:52.628689: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-02-25 05:46:52.629997: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x50be7d0 executing computations on platform CUDA. Devices: 2019-02-25 05:46:52.630035: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Quadro M1200, Compute Capability 5.0 2019-02-25 05:46:52.664820: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz 2019-02-25 05:46:52.666234: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5128500 executing computations on platform Host. Devices: 2019-02-25 05:46:52.666318: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined> 2019-02-25 05:46:52.666979: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: Quadro M1200 major: 5 minor: 0 memoryClockRate(GHz): 1.148 pciBusID: 0000:01:00.0 totalMemory: 3.95GiB freeMemory: 3.90GiB 2019-02-25 05:46:52.667052: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-02-25 05:46:52.669065: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-02-25 05:46:52.669122: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-02-25 05:46:52.669152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-02-25 05:46:52.669563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3696 MB memory) -> physical GPU (device: 0, name: Quadro M1200, pci bus id: 0000:01:00.0, compute capability: 5.0) Epoch 1/5 2019-02-25 05:51:01.254939: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally 60000/60000 [==============================] - 5s 84us/sample - loss: 0.2207 - acc: 0.9348 Epoch 2/5 60000/60000 [==============================] - 5s 79us/sample - loss: 0.0960 - acc: 0.9714 Epoch 3/5 60000/60000 [==============================] - 5s 78us/sample - loss: 0.0697 - acc: 0.9774 Epoch 4/5 60000/60000 [==============================] - 5s 79us/sample - loss: 0.0536 - acc: 0.9826 Epoch 5/5 60000/60000 [==============================] - 5s 76us/sample - loss: 0.0430 - acc: 0.9857 10000/10000 [==============================] - 0s 29us/sample - loss: 0.0606 - acc: 0.9813 ```
perf
docker tensorflow tensorflow latest gpu slow initialisation of gpu please make sure that this is a build installation issue as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag build template system information os platform and distribution ubuntu using docker latest gpu image mobile device no tensorflow installed from source or binary n a tensorflow version python version python installed using virtualenv pip conda n a bazel version if compiling from source n a gcc compiler version if compiling from source n a cuda cudnn version nvcc nvidia r cuda compiler driver copyright c nvidia corporation built on sat aug cdt cuda compilation tools release gpu model and memory quadro nvsmi log timestamp mon feb driver version cuda version attached gpus gpu product name quadro product brand quadro display mode disabled display active disabled persistence mode enabled accounting mode disabled accounting mode buffer size driver model current n a pending n a serial number n a gpu uuid gpu minor number vbios version bb multigpu board no board id gpu part number n a inforom version image version n a oem object n a ecc object n a power management object n a gpu operation mode current n a pending n a gpu virtualization mode virtualization mode none ibmnpu relaxed ordering mode n a pci bus device domain device id bus id sub system id gpu link info pcie generation max current link width max current bridge chip type n a firmware n a replays since reset tx throughput kb s rx throughput kb s fan speed n a performance state clocks throttle reasons idle not active applications clocks setting active sw power cap not active hw slowdown not active hw thermal slowdown n a hw power brake slowdown n a sync boost not active sw thermal slowdown not active display clock setting not active fb memory usage total mib used mib free mib memory usage total mib used mib free mib compute mode default utilization gpu memory encoder decoder encoder stats active sessions average fps average latency fbc stats active sessions average fps average latency ecc mode current n a pending n a ecc errors volatile single bit device memory n a register file n a cache n a cache n a texture memory n a texture shared n a cbu n a total n a double bit device memory n a register file n a cache n a cache n a texture memory n a texture shared n a cbu n a total n a aggregate single bit device memory n a register file n a cache n a cache n a texture memory n a texture shared n a cbu n a total n a double bit device memory n a register file n a cache n a cache n a texture memory n a texture shared n a cbu n a total n a retired pages single bit ecc n a double bit ecc n a pending n a temperature gpu current temp c gpu shutdown temp n a gpu slowdown temp c gpu max operating temp c memory current temp n a memory max operating temp n a power readings power management n a power draw n a power limit n a default power limit n a enforced power limit n a min power limit n a max power limit n a clocks graphics mhz sm mhz memory mhz video mhz applications clocks graphics n a memory n a default applications clocks graphics n a memory n a max clocks graphics mhz sm mhz memory mhz video mhz max customer boost clocks graphics n a clock policy auto boost n a auto boost default n a processes process id type g name usr lib xorg xorg used gpu memory mib process id type c name python used gpu memory mib describe the problem when using my gpu it takes several minutes just over minutes to initialise to do anything issue does not exist when using cpu provide the exact sequence of commands steps that you executed before running into the problem docker run it u id u id g runtime nvidia v realpath tensorflow tf tensorflow tensorflow tensorflow latest gpu bash python test py contents of test py import tensorflow as tf mnist tf keras datasets mnist x train y train x test y test mnist load data x train x test x train x test model tf keras models sequential tf keras layers flatten input shape tf keras layers dense activation tf nn relu tf keras layers dropout tf keras layers dense activation tf nn softmax model compile optimizer adam loss sparse categorical crossentropy metrics model fit x train y train epochs model evaluate x test y test any other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached logs while running test script downloading data from step step warning tensorflow from usr local lib dist packages tensorflow python ops resource variable ops py colocate with from tensorflow python framework ops is deprecated and will be removed in a future version instructions for updating colocations handled automatically by placer warning tensorflow from usr local lib dist packages tensorflow python keras layers core py calling dropout from tensorflow python ops nn ops with keep prob is deprecated and will be removed in a future version instructions for updating please use rate instead of keep prob rate should be set to rate keep prob i tensorflow core platform cpu feature guard cc your cpu supports instructions that this tensorflow binary was not compiled to use fma i tensorflow stream executor cuda cuda gpu executor cc successful numa node read from sysfs had negative value but there must be at least one numa node so returning numa node zero i tensorflow compiler xla service service cc xla service executing computations on platform cuda devices i tensorflow compiler xla service service cc streamexecutor device quadro compute capability i tensorflow core platform profile utils cpu utils cc cpu frequency hz i tensorflow compiler xla service service cc xla service executing computations on platform host devices i tensorflow compiler xla service service cc streamexecutor device i tensorflow core common runtime gpu gpu device cc found device with properties name quadro major minor memoryclockrate ghz pcibusid totalmemory freememory i tensorflow core common runtime gpu gpu device cc adding visible gpu devices i tensorflow core common runtime gpu gpu device cc device interconnect streamexecutor with strength edge matrix i tensorflow core common runtime gpu gpu device cc i tensorflow core common runtime gpu gpu device cc n i tensorflow core common runtime gpu gpu device cc created tensorflow device job localhost replica task device gpu with mb memory physical gpu device name quadro pci bus id compute capability epoch i tensorflow stream executor dso loader cc successfully opened cuda library libcublas so locally sample loss acc epoch sample loss acc epoch sample loss acc epoch sample loss acc epoch sample loss acc sample loss acc
1
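One plausible, unconfirmed explanation for the gap between "Created TensorFlow device" at 05:46:52 and the first epoch at 05:51:01 in the log above is CUDA JIT-compiling PTX for the card's compute capability 5.0 on the first kernel launch. A commonly suggested check is to enlarge and persist the CUDA JIT cache so that only the first run pays this cost; a sketch, where the cache path is an assumption for this container:

```python
import os
import time

# These must be set before CUDA is initialised, i.e. before importing tensorflow.
os.environ["CUDA_CACHE_MAXSIZE"] = str(2 * 1024 ** 3)             # 2 GiB JIT cache
os.environ.setdefault("CUDA_CACHE_PATH", "/tf/.nv/ComputeCache")  # assumed writable path

import tensorflow as tf  # TF 1.13-style API, matching the report

start = time.perf_counter()
with tf.Session() as sess:
    # Force the first GPU kernel launch, which triggers any PTX JIT compilation.
    sess.run(tf.matmul(tf.ones((4, 4)), tf.ones((4, 4))))
print(f"First GPU op took {time.perf_counter() - start:.1f} s")
```

If the first run is slow and subsequent runs are fast, the JIT-cache hypothesis fits; if every run is slow, the cause lies elsewhere.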
41,802
21,956,594,171
IssuesEvent
2022-05-24 12:39:39
hzi-braunschweig/SORMAS-Project
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
opened
Improve performance of getAllAfter queries into DTOs
backend change performance general
<!-- Please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) before submitting an issue. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Problem Description <!-- Mandatory --> As shown by the following analysis, many `getAllAfter` methods show an inefficient pattern: 1. The `"Entity"Service.getAllAfter` method takes some seconds. As shown in https://github.com/hzi-braunschweig/SORMAS-Project/issues/8946#issuecomment-1129937176, this can be improved by initially fetching only the ids (reduced distinct effort) and using a dedicated index with appropriate sorting. 2. Calling `"Entity"Service.inJurisdictionOrOwned` per entity seems to be inefficient. For Cases it took ~330ms per entity, for Persons ~0.3ms per entity with an IN clause (`PersonService.getInJurisdictionIDs`). <details><summary>Analysis</summary> Dataset: - 1225343 persons - 85677 cases - 114782 contacts - 236583 tasks - 2600890 immunizations - 1991 vaccinations - 15693 samples - 11846 events - 534 eventparticipants The following measurements were taken from backend logs (EJB methods) and the Postgres logs. Observations: - the actual SQL queries are executed in a few seconds while the EJB methods take minutes to complete - a relevant contribution to runtime comes from `inJurisdictionOrOwned` methods, here mainly the number of calls - batch size is relevant. Some requests did not complete within reasonable time for a batch size of 10000; these were measured for batch size 1000. Requests: - `http://localhost:6080/sormas-rest/persons/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 3 min 5 sec server: 2 min 58 sec ![image](https://user-images.githubusercontent.com/5616564/165099348-35ae69d7-8c12-4307-9e66-d3e5a68749df.png) SQL queries: duration: 948.716 ms duration: 2297.275 ms duration: 4012.220 ms duration: 4393.538 ms duration: 696.277 ms ~ 12.5 sec **Note:** SQL query times are inaccurate, either due to an error in the analysis or changes introduced later; see the later analysis of this method. - `http://localhost:6080/sormas-rest/cases/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 8 min 21 sec server: 8 min 18 sec ![image](https://user-images.githubusercontent.com/5616564/165100621-c62aed86-b6a1-469b-b8b1-53e0ff98c857.png) SQL queries: duration: 787.011 ms duration: 449.616 ms duration: 146.224 ms duration: 178.515 ms duration: 154.552 ms duration: 136.836 ms duration: 142.597 ms duration: 143.688 ms duration: 150.584 ms duration: 152.298 ms duration: 133.814 ms duration: 147.687 ms ~ 3 sec - `http://localhost:6080/sormas-rest/contacts/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 35 sec server: 35 sec ![image](https://user-images.githubusercontent.com/5616564/165123971-871aba96-eb84-49ad-b718-f7fc1f82896c.png) SQL queries: duration: 151.413 ms duration: 143.237 ms duration: 152.661 ms duration: 144.054 ms duration: 145.374 ms duration: 155.861 ms duration: 177.212 ms duration: 153.647 ms duration: 201.516 ms duration: 187.382 ms ~ 1.6 sec - `http://localhost:6080/sormas-rest/tasks/all/1637090372005/1000/NO_LAST_SYNCED_UUID`: 1 min 27 sec server: 1 min 24 sec ![image](https://user-images.githubusercontent.com/5616564/165125159-4b3af4d6-3c35-4a1c-bde4-b55dafbce0fc.png) SQL queries: duration: 466.097 ms duration: 154.093 ms duration: 132.150 ms duration: 137.751 ms duration: 140.757 ms duration: 135.526 ms duration: 136.612 ms duration: 126.399 ms duration: 124.695 ms duration: 127.888 ms duration: 133.940 ms ~1.9 sec - `http://localhost:6080/sormas-rest/samples/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 27 sec server: 27 sec ![image](https://user-images.githubusercontent.com/5616564/165130650-0fa16a33-ccf3-47a1-b9b6-86786b451e1f.png) SQL queries: duration: 129.587 ms - `http://localhost:6080/sormas-rest/immunizations/all/1637090372005/1000/NO_LAST_SYNCED_UUID` (1 min 50 sec) server: 1 min 48 sec ![image](https://user-images.githubusercontent.com/5616564/165148906-17f95d18-3ba1-4b75-912d-87f34bc10a6e.png) SQL queries: duration: 6985.128 ms duration: 106.608 ms ~ 7 sec </details> ### Proposed Change <!-- Mandatory --> - [ ] 1. Rewrite `AdoServiceWithUserFilter.getAllAfter` to first fetch the needed ids (see the pattern in `PersonService.getAllAfter`), then fetch the entities by id with an IN clause (use `BaseAdoService.getByIds`). - [ ] 2. Add indices for sorting. - [ ] 3. Use the pattern of `PersonService.getInJurisdictionIDs` to query by ids with an IN clause also for other entities where `"Entity"Service.inJurisdictionOrOwned` is currently running one query per entity. ### Acceptance Criteria <!-- Optional --> - [ ] An analysis before/after shows the performance improvement (also mentioning the amount of existing and queried entities). ### Implementation Details <!-- Optional --> - All `getAllAfter` and `getInJurisdictionIDs` methods avoid the parameter limit exception with `IterableHelper.executeBatched` batching. - Remove or adapt `getAllAfter` implementations not aligned with the superclass in: - CampaignFormMetaService - CampaignService - ContactService - EventParticipantService - EventService - ImmunizationService - BaseTravelEntryService ### Additional Information <!-- Optional --> Sibling to #8946
True
Improve performance of getAllAfter queries into DTOs - <!-- Please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) before submitting an issue. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Problem Description <!-- Mandatory --> As shown by the following analysis, many `getAllAfter` methods show an inefficient pattern: 1. The `"Entity"Service.getAllAfter` method takes some seconds. As shown in https://github.com/hzi-braunschweig/SORMAS-Project/issues/8946#issuecomment-1129937176, this can be improved by initially fetching only the ids (reduced distinct effort) and using a dedicated index with appropriate sorting. 2. Calling `"Entity"Service.inJurisdictionOrOwned` per entity seems to be inefficient. For Cases it took ~330ms per entity, for Persons ~0.3ms per entity with an IN clause (`PersonService.getInJurisdictionIDs`). <details><summary>Analysis</summary> Dataset: - 1225343 persons - 85677 cases - 114782 contacts - 236583 tasks - 2600890 immunizations - 1991 vaccinations - 15693 samples - 11846 events - 534 eventparticipants The following measurements were taken from backend logs (EJB methods) and the Postgres logs. Observations: - the actual SQL queries are executed in a few seconds while the EJB methods take minutes to complete - a relevant contribution to runtime comes from `inJurisdictionOrOwned` methods, here mainly the number of calls - batch size is relevant. Some requests did not complete within reasonable time for a batch size of 10000; these were measured for batch size 1000. Requests: - `http://localhost:6080/sormas-rest/persons/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 3 min 5 sec server: 2 min 58 sec ![image](https://user-images.githubusercontent.com/5616564/165099348-35ae69d7-8c12-4307-9e66-d3e5a68749df.png) SQL queries: duration: 948.716 ms duration: 2297.275 ms duration: 4012.220 ms duration: 4393.538 ms duration: 696.277 ms ~ 12.5 sec **Note:** SQL query times are inaccurate, either due to an error in the analysis or changes introduced later; see the later analysis of this method. - `http://localhost:6080/sormas-rest/cases/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 8 min 21 sec server: 8 min 18 sec ![image](https://user-images.githubusercontent.com/5616564/165100621-c62aed86-b6a1-469b-b8b1-53e0ff98c857.png) SQL queries: duration: 787.011 ms duration: 449.616 ms duration: 146.224 ms duration: 178.515 ms duration: 154.552 ms duration: 136.836 ms duration: 142.597 ms duration: 143.688 ms duration: 150.584 ms duration: 152.298 ms duration: 133.814 ms duration: 147.687 ms ~ 3 sec - `http://localhost:6080/sormas-rest/contacts/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 35 sec server: 35 sec ![image](https://user-images.githubusercontent.com/5616564/165123971-871aba96-eb84-49ad-b718-f7fc1f82896c.png) SQL queries: duration: 151.413 ms duration: 143.237 ms duration: 152.661 ms duration: 144.054 ms duration: 145.374 ms duration: 155.861 ms duration: 177.212 ms duration: 153.647 ms duration: 201.516 ms duration: 187.382 ms ~ 1.6 sec - `http://localhost:6080/sormas-rest/tasks/all/1637090372005/1000/NO_LAST_SYNCED_UUID`: 1 min 27 sec server: 1 min 24 sec ![image](https://user-images.githubusercontent.com/5616564/165125159-4b3af4d6-3c35-4a1c-bde4-b55dafbce0fc.png) SQL queries: duration: 466.097 ms duration: 154.093 ms duration: 132.150 ms duration: 137.751 ms duration: 140.757 ms duration: 135.526 ms duration: 136.612 ms duration: 126.399 ms duration: 124.695 ms duration: 127.888 ms duration: 133.940 ms ~1.9 sec - `http://localhost:6080/sormas-rest/samples/all/1637090372005/10000/NO_LAST_SYNCED_UUID`: 27 sec server: 27 sec ![image](https://user-images.githubusercontent.com/5616564/165130650-0fa16a33-ccf3-47a1-b9b6-86786b451e1f.png) SQL queries: duration: 129.587 ms - `http://localhost:6080/sormas-rest/immunizations/all/1637090372005/1000/NO_LAST_SYNCED_UUID` (1 min 50 sec) server: 1 min 48 sec ![image](https://user-images.githubusercontent.com/5616564/165148906-17f95d18-3ba1-4b75-912d-87f34bc10a6e.png) SQL queries: duration: 6985.128 ms duration: 106.608 ms ~ 7 sec </details> ### Proposed Change <!-- Mandatory --> - [ ] 1. Rewrite `AdoServiceWithUserFilter.getAllAfter` to first fetch the needed ids (see the pattern in `PersonService.getAllAfter`), then fetch the entities by id with an IN clause (use `BaseAdoService.getByIds`). - [ ] 2. Add indices for sorting. - [ ] 3. Use the pattern of `PersonService.getInJurisdictionIDs` to query by ids with an IN clause also for other entities where `"Entity"Service.inJurisdictionOrOwned` is currently running one query per entity. ### Acceptance Criteria <!-- Optional --> - [ ] An analysis before/after shows the performance improvement (also mentioning the amount of existing and queried entities). ### Implementation Details <!-- Optional --> - All `getAllAfter` and `getInJurisdictionIDs` methods avoid the parameter limit exception with `IterableHelper.executeBatched` batching. - Remove or adapt `getAllAfter` implementations not aligned with the superclass in: - CampaignFormMetaService - CampaignService - ContactService - EventParticipantService - EventService - ImmunizationService - BaseTravelEntryService ### Additional Information <!-- Optional --> Sibling to #8946
perf
improve performance of getallafter queries into dtos please read the contributing guidelines before submitting an issue you don t have to remove this comment or any other comment from this issue as they will automatically be hidden problem description as shown by the following analysis many getallafter methods show an inperformant pattern the entity service getallafter method takes some seconds as shown in this can be improved by initially fetching only the ids reduced distinct effort and using a dedicated index with appropriate sorting entity service injurisdictionorowned per each entity seems to be inperformant for cases it took per entity for persons per entity with in clause personservice getinjurisdictionids analysis dataset persons cases contacts tasks immunizations vaccinations samples events eventparticipants the following measurements were taken from backend logs ejb methods and the postgres logs observations the actual sql queries are executed in few seconds while the ejb methods take minutes to complete a relevant contribution to runtime comes from injurisdictionorowned methods here mainly the number of calls batch size is relevant some requests did not complete within reasonable time for a batch size of these were measured for batch size requests min sec server min sec sql queries duration ms duration ms duration ms duration ms duration ms sec note sql query times are inaccurate either due to an error in the analysis or changes introduced later see later analysis of this method min sec server min sec sql queries duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms sec sec server sec sql queries duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms sec min sec server min sec sql queries duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms duration ms sec sec server sec sql queries duration ms sec server min sec sql queries duration ms duration ms sec proposed change rewrite adoservicewithuserfilter getallafter to first fetch the needed ids see pattern in personservice getallafter then fetch the entities by id with in clause use baseadoservice getbyids add indices for sorting use the pattern of personservice getinjurisdictionids to query by ids with in clause also for other entities where entity service injurisdictionorowned is currently running one query per entity acceptance criteria an analysis before after shows the performance improvement also mentioning the amount of existing and queried entities implementation details all getallafter and getinjurisdictionids methods avoid parameter limit exception with iterablehelper executebatched batching remove or adapt not with superclass aligned getallafter implementations in campaignformmetaservice campaignservice contactservice eventparticipantservice eventservice immunizationservice basetravelentryservice additional information sibling to
1
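The two-step pattern proposed in this record (fetch only ids first, then load full rows through batched IN clauses) can be illustrated outside of JPA. The following self-contained Python/sqlite3 sketch shows the same idea; it is purely illustrative and not the project's Java code, and the table name, columns, and batch size are made up:

```python
import sqlite3

BATCH = 500  # stay well below SQL parameter limits, as IterableHelper-style batching does

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, changedate INTEGER, payload TEXT)")
con.executemany("INSERT INTO entity VALUES (?, ?, ?)",
                [(i, i % 100, "x" * 32) for i in range(2000)])

def get_all_after(since):
    # Step 1: cheap id-only query; a narrow (changedate, id) index can serve the sort.
    ids = [row[0] for row in con.execute(
        "SELECT id FROM entity WHERE changedate > ? ORDER BY changedate, id", (since,))]
    # Step 2: fetch full rows by id, batched into IN clauses.
    rows = []
    for i in range(0, len(ids), BATCH):
        chunk = ids[i:i + BATCH]
        placeholders = ",".join("?" * len(chunk))
        rows.extend(con.execute(
            f"SELECT * FROM entity WHERE id IN ({placeholders})", chunk))
    return rows

print(len(get_all_after(50)))  # 980 rows with changedate 51..99
```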
15,440
8,896,425,187
IssuesEvent
2019-01-16 11:25:07
jveillet/jk-demainilpleut
https://api.github.com/repos/jveillet/jk-demainilpleut
opened
Use Vanilla JS instead of using BlissJS
maintenance performance
As a means of reducing third-party dependencies, remove the need for the BlissJS library, and refactor the custom JavaScript code to use vanilla JS instead.
True
Use Vanilla JS instead of using BlissJS - As a means of reducing third-party dependencies, remove the need for the BlissJS library, and refactor the custom JavaScript code to use vanilla JS instead.
perf
use vanilla js instead of using blissjs as a manner of reducing third parties dependencies remove the need to use blissjs library and refactor the custom javascript code to use vanilla js instead
1
272,705
23,696,975,351
IssuesEvent
2022-08-29 15:24:25
o3de/o3de
https://api.github.com/repos/o3de/o3de
closed
The `ScopedTemporaryDirectory` class should be replaced with `ScopedAutoTempDirectory`
kind/cleanup sig/core sig/testing triage/accepted priority/major
**Is your feature request related to a problem? Please describe.** Currently there are two test classes for creating temporary directories for UnitTest: the [ScopedAutoTempDirectory](https://github.com/o3de/o3de/blob/development/Code/Framework/AzTest/AzTest/Utils.h#L58-L70) class in the AzTest library and the [ScopedTemporaryDirectory](https://github.com/o3de/o3de/blob/development/Code/Framework/AzFramework/Tests/Utils/Utils.h#L24-L46) class in the AzFrameworkTestShared library. **Describe the solution you'd like** The `ScopedTemporaryDirectory` class in the AzFrameworkTestShared library should be removed. Afterwards, all instances of `ScopedTemporaryDirectory` should be updated to use the `ScopedAutoTempDirectory` class, as its implementation relies on UUIDs, which results in a lower possibility of a collision when determining a name to use for the temporary directory. **Describe alternatives you've considered** None **Additional context** Add any other context or screenshots about the feature request here.
1.0
The `ScopedTemporaryDirectory` class should be replaced with `ScopedAutoTempDirectory` - **Is your feature request related to a problem? Please describe.** Currently there are two test classes for creating temporary directories for UnitTest: the [ScopedAutoTempDirectory](https://github.com/o3de/o3de/blob/development/Code/Framework/AzTest/AzTest/Utils.h#L58-L70) class in the AzTest library and the [ScopedTemporaryDirectory](https://github.com/o3de/o3de/blob/development/Code/Framework/AzFramework/Tests/Utils/Utils.h#L24-L46) class in the AzFrameworkTestShared library. **Describe the solution you'd like** The `ScopedTemporaryDirectory` class in the AzFrameworkTestShared library should be removed. Afterwards, all instances of `ScopedTemporaryDirectory` should be updated to use the `ScopedAutoTempDirectory` class, as its implementation relies on UUIDs, which results in a lower possibility of a collision when determining a name to use for the temporary directory. **Describe alternatives you've considered** None **Additional context** Add any other context or screenshots about the feature request here.
non_perf
the scopedtemporarydirectory class should be replaced with scopedautotempdirectory is your feature request related to a problem please describe currently there are two test classes for creating temporary directories for unittest the class in the aztest library and the class in the azframeworktestshared library describe the solution you d like the scopedtemporarydirectory class in the azframeworktestshared library should be removed afterwards all the instances scopedtempautodirectory should be updated to use the scopedtemporarydirectory class as its implementation relies on uuids which results in lower possibility of a collisions when determining a name to use for the temporary directory describe alternatives you ve considered none additional context add any other context or screenshots about the feature request here
0
289,082
21,767,969,687
IssuesEvent
2022-05-13 05:42:18
amzn/selling-partner-api-docs
https://api.github.com/repos/amzn/selling-partner-api-docs
closed
Can not get filtered out orders as per multiple OrderStatuses via Url
bug Documentation closing soon
Hello, I've followed all the steps in the docs and now successfully get the orders and all other relevant data. The problem is I cannot filter out orders as per multiple OrderStatuses. Postman successfully filters out when I enter the params `OrderStatuses: Shipped,Canceled`, but in code it does not work: new Uri($"{OrderUrl}&CreatedAfter={DateTime.Now.AddDays(-15):yyyy-MM-ddTHH:mm:ssZ}&MarketplaceIds={xxxxxxxxx}"); "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Shipped,Canceled"; It is OK with one status: "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Shipped" or "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Canceled". I tried array passing, encoding, and many other options, but no help.
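One thing worth checking is how the HTTP client encodes the comma: some URL builders percent-encode `,` as `%2C`, and an API may treat the encoded and literal forms differently. A small TypeScript sketch showing the difference (the endpoint is the one quoted in the report above; whether SP-API accepts `%2C` is not verified here):

```ts
const base = 'https://sellingpartnerapi-eu.amazon.com/orders/v0/orders';

// URLSearchParams percent-encodes the comma:
const params = new URLSearchParams({ OrderStatuses: 'Shipped,Canceled' });
console.log(`${base}?${params}`);
// -> ...?OrderStatuses=Shipped%2CCanceled

// Building the query string by hand keeps the literal comma,
// matching what Postman sends:
const statuses = ['Shipped', 'Canceled'].join(',');
console.log(`${base}?OrderStatuses=${statuses}`);
// -> ...?OrderStatuses=Shipped,Canceled
```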
1.0
Can not get filtered out orders as per multiple OrderStatuses via Url - Hello, I've followed all the steps in docs and now successfully get the orders and all other relevant data. The problem is I can not filter out orders as per multiple OrderStatuses. Postman successfully filters out when I enter params; OrdersStatuses:Shipped,Canceled But in code it does not work;; new Uri($"{OrderUrl}&CreatedAfter={DateTime.Now.AddDays(-15):yyyy-MM-ddTHH:mm:ssZ}&MarketplaceIds={xxxxxxxxx}"); "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Shipped,Canceled"; It is OK with one status; "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Shipped or "OrderUrl": "https://sellingpartnerapi-eu.amazon.com/orders/v0/orders?OrderStatuses=Canceled I tried out array passing, encoding and many other options but no help.
non_perf
can not get filtered out orders as per multiple orderstatuses via url hello i ve followed all the steps in docs and now successfully get the orders and all other relevant data the problem is i can not filter out orders as per multiple orderstatuses postman successfully filters out when i enter params ordersstatuses shipped canceled but in code it does not work new uri orderurl createdafter datetime now adddays yyyy mm ddthh mm ssz marketplaceids xxxxxxxxx orderurl it is ok with one status orderurl or orderurl i tried out array passing encoding and many other options but no help
0
26,440
13,017,965,610
IssuesEvent
2020-07-26 15:05:17
winget-run/api
https://api.github.com/repos/winget-run/api
opened
Better method of getting package list for sitemap
performance
Pretty sure we currently just dump all the packages from the database which is horribly inefficient and will not scale well at all.
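A paginated (or cursor-based) query would keep memory bounded while generating the sitemap. A minimal TypeScript sketch, where `fetchPackagePage` is a hypothetical data-access function and the URL shape is purely illustrative:

```ts
type Pkg = { id: string };

declare function fetchPackagePage(offset: number, limit: number): Promise<Pkg[]>;

// Async generator: only one page of packages is in memory at a time.
async function* allPackages(pageSize = 500): AsyncGenerator<Pkg> {
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchPackagePage(offset, pageSize);
    if (page.length === 0) return;
    yield* page;
  }
}

async function buildSitemap(): Promise<string[]> {
  const urls: string[] = [];
  for await (const pkg of allPackages()) {
    urls.push(`https://winget.run/pkg/${pkg.id}`); // illustrative URL shape
  }
  return urls;
}
```

Keyset pagination (filtering on the last seen id instead of using an offset) would scale better still, since large offsets get slower as the table grows.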
True
Better method of getting package list for sitemap - Pretty sure we currently just dump all the packages from the database which is horribly inefficient and will not scale well at all.
perf
better method of getting package list for sitemap pretty sure we currently just dump all the packages from the database which is horribly inefficient and will not scale well at all
1
10,019
7,057,934,826
IssuesEvent
2018-01-04 18:18:30
AdamsLair/duality
https://api.github.com/repos/AdamsLair/duality
opened
Investigate By-Ref Vector and Matrix Operators in C# 7.2
Breaking Change Cleanup Core Performance Task
### Summary As mentioned in issue #598, the new `in` parameter keyword is [allowed on operators and using literal values](https://docs.microsoft.com/en-us/dotnet/csharp/reference-semantics-with-value-types), so operator implementations can supersede the previous static by-ref operator alternatives. ### Analysis - Wait until the core has adopted C# 7.2. - Make sure using the `in` keyword implicitly with non-referencable values (like literals or property return values) has no negative impact on performance. Read the docs, and if that's not 100% clear, check out generated IL and x86 code in a sample project. - Remove all static by-`ref` methods from `Vector2/3/4`, `Quaternion` and `Matrix3/4` if they have an operator equivalent. - Use the `in` keyword on parameters provided to the operators in question.
True
Investigate By-Ref Vector and Matrix Operators in C# 7.2 - ### Summary As mentioned in issue #598, the new `in` parameter keyword is [allowed on operators and using literal values](https://docs.microsoft.com/en-us/dotnet/csharp/reference-semantics-with-value-types), so operator implementations can supersede the previous static by-ref operator alternatives. ### Analysis - Wait until the core has adopted C# 7.2. - Make sure using the `in` keyword implicitly with non-referencable values (like literals or property return values) has no negative impact on performance. Read the docs, and if that's not 100% clear, check out generated IL and x86 code in a sample project. - Remove all static by-`ref` methods from `Vector2/3/4`, `Quaternion` and `Matrix3/4` if they have an operator equivalent. - Use the `in` keyword on parameters provided to the operators in question.
perf
investigate by ref vector and matrix operators in c summary as mentioned in issue the new in parameter keyword is so operator implementations can supersede the previous static by ref operator alternatives analysis wait until the core has adopted c make sure using the in keyword implicitly with non referencable values like literals or property return values has no negative impact on performance read the docs and if that s not clear check out generated il and code in a sample project remove all static by ref methods from quaternion and if they have an operator equivalent use the in keyword on parameters provided to the operators in question
1
407,486
27,617,753,580
IssuesEvent
2023-03-09 20:50:57
godotengine/godot
https://api.github.com/repos/godotengine/godot
closed
Pile of stacked RigidBodies is wobbly
confirmed documentation topic:physics
### Godot version 4.0 ### System information Linux Mint, GLES3, Nvidia Quadro 2000M ### Issue description So I was trying to test 2D physics using 'Rigidbody 2d' nodes in Godot. I made 'static body' nodes to build a kind of box to run the simulation in, used a 'marker 2d' as a position detector that spawns the rigid bodies, and a 'timer' node to give breaks between spawn times. ### Steps to reproduce https://user-images.githubusercontent.com/112822849/214648604-dbfb91b4-b8d1-4d33-b678-0d38ddb4811d.mp4 ### Minimal reproduction project [Test 2 godot 4.zip](https://github.com/godotengine/godot/files/10502377/Test.2.godot.4.zip)
1.0
Pile of stacked RigidBodies is wobbly - ### Godot version 4.0 ### System information Linux mint, GLES3, Nvidia quadro 2000m ### Issue description So i was trying to test 2d physics using 'Rigidbody 2d' nodes in godot, I made 'static body' nodes to make kind of a box to run the simulation, using 'marker 2d' as a position detector and making it spawn the rigidbodies, and a 'timer' node to give breaks between spawn times ### Steps to reproduce https://user-images.githubusercontent.com/112822849/214648604-dbfb91b4-b8d1-4d33-b678-0d38ddb4811d.mp4 ### Minimal reproduction project [Test 2 godot 4.zip](https://github.com/godotengine/godot/files/10502377/Test.2.godot.4.zip)
non_perf
pile of stacked rigidbodies is wobbly godot version system information linux mint nvidia quadro issue description so i was trying to test physics using rigidbody nodes in godot i made static body nodes to make kind of a box to run the simulation using marker as a position detector and making it spawn the rigidbodies and a timer node to give breaks between spawn times steps to reproduce minimal reproduction project
0
289,125
24,962,219,767
IssuesEvent
2022-11-01 16:23:45
Nordix/Meridio
https://api.github.com/repos/Nordix/Meridio
opened
E2E tests instability
kind/bug area/CI area/testing
**Describe the bug** Tests are failing randomly 10 to 20% of the time. **To Reproduce** / **Expected behavior** / **Logs** ``` •! [PANICKED] [190.727 seconds] Scaling /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:30 With one trench containing a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:32 when scaling targets up by 1 /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:148 [It] should receive the traffic correctly /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:152 Begin Captured GinkgoWriter Output >> STEP: Checking if all targets have receive ipv4 traffic with no traffic interruption (no lost connection) 11/01/22 16:09:02.33 STEP: Checking if all targets have receive ipv6 traffic with no traffic interruption (no lost connection) 11/01/22 16:09:03.53 << End Captured GinkgoWriter Output Test Panicked In [It] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:96 invalid character 'R' looking for beginning of value Full Stack Trace github.com/nordix/meridio/test/e2e/utils.(*MConnect).AnalyzeTraffic(0x15fa73d?, {0xc0000fb000, 0x6c9, 0x1000}) /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:96 +0x1a6 github.com/nordix/meridio/test/e2e/utils.(*TrafficGeneratorHost).SendTraffic(0xc0004ddf68?, {0x1821bd0, 0xc000133968}, {0x7fff08b74cb3?, 0x8}, {0x7fff08b74c82, 0x3}, {0xc000494450, 0xe}, {0x15f7463, ...}) /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:45 +0x277 github.com/nordix/meridio/test/e2e_test.glob..func6.1.5.2() /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:161 +0x3cd ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/71/pipeline/22 ``` • [FAILED] [29.806 seconds] Target /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:30 With one trench containing a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:32 when a target is closing a stream /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:51 [It] should receive traffic anymore /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:81 Begin Captured GinkgoWriter Output >> STEP: Checking the target has not receive ipv4 traffic 11/01/22 15:40:36.855 << End Captured GinkgoWriter Output Expected <int>: 4 to equal <int>: 3 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/68/pipeline/22 ``` • [FAILED] [0.050 seconds] MultiTrenches /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:30 With two trenches containing both a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:32 when a target disconnects from a trench and connect to another one [BeforeEach] /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:109 should receive the traffic on the other trench 
/home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:141 Unexpected error: <*fmt.wrapError | 0xc0002bd4e0>: { msg: "unable to upgrade connection: container not found (\"example-target\"); ", err: <*errors.errorString | 0xc00050efc0>{ s: "unable to upgrade connection: container not found (\"example-target\")", }, } unable to upgrade connection: container not found ("example-target"); occurred In [BeforeEach] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:111 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/64/pipeline/22 ``` • [FAILED] [29.309 seconds] MultiTrenches /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:30 With two trenches containing both a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:32 when a target disconnects from a trench and connect to another one [BeforeEach] /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:109 should receive the traffic on the other trench /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:141 Unexpected error: <*fmt.wrapError | 0xc000548160>: { msg: "command terminated with exit code 137; ", err: <exec.CodeExitError>{ Err: <*errors.errorString | 0xc0003ba310>{ s: "command terminated with exit code 137", }, Code: 137, }, } command terminated with exit code 137; occurred In [BeforeEach] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:113 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/43/pipeline/22
1.0
E2E tests instability - **Describe the bug** Tests are failing randomly 10 to 20% of the time. **To Reproduce** / **Expected behavior** / **Logs** ``` •! [PANICKED] [190.727 seconds] Scaling /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:30 With one trench containing a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:32 when scaling targets up by 1 /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:148 [It] should receive the traffic correctly /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:152 Begin Captured GinkgoWriter Output >> STEP: Checking if all targets have receive ipv4 traffic with no traffic interruption (no lost connection) 11/01/22 16:09:02.33 STEP: Checking if all targets have receive ipv6 traffic with no traffic interruption (no lost connection) 11/01/22 16:09:03.53 << End Captured GinkgoWriter Output Test Panicked In [It] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:96 invalid character 'R' looking for beginning of value Full Stack Trace github.com/nordix/meridio/test/e2e/utils.(*MConnect).AnalyzeTraffic(0x15fa73d?, {0xc0000fb000, 0x6c9, 0x1000}) /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:96 +0x1a6 github.com/nordix/meridio/test/e2e/utils.(*TrafficGeneratorHost).SendTraffic(0xc0004ddf68?, {0x1821bd0, 0xc000133968}, {0x7fff08b74cb3?, 0x8}, {0x7fff08b74c82, 0x3}, {0xc000494450, 0xe}, {0x15f7463, ...}) /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/utils/trafficgenerator.go:45 +0x277 github.com/nordix/meridio/test/e2e_test.glob..func6.1.5.2() /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/71/test/e2e/scaling_test.go:161 +0x3cd ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/71/pipeline/22 ``` • [FAILED] [29.806 seconds] Target /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:30 With one trench containing a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:32 when a target is closing a stream /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:51 [It] should receive traffic anymore /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/68/test/e2e/target_test.go:81 Begin Captured GinkgoWriter Output >> STEP: Checking the target has not receive ipv4 traffic 11/01/22 15:40:36.855 << End Captured GinkgoWriter Output Expected <int>: 4 to equal <int>: 3 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/68/pipeline/22 ``` • [FAILED] [0.050 seconds] MultiTrenches /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:30 With two trenches containing both a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:32 when a target disconnects from a trench and connect to another one [BeforeEach] /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:109 should receive the traffic on the other trench 
/home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:141 Unexpected error: <*fmt.wrapError | 0xc0002bd4e0>: { msg: "unable to upgrade connection: container not found (\"example-target\"); ", err: <*errors.errorString | 0xc00050efc0>{ s: "unable to upgrade connection: container not found (\"example-target\")", }, } unable to upgrade connection: container not found ("example-target"); occurred In [BeforeEach] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/64/test/e2e/multi_trenches_test.go:111 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/64/pipeline/22 ``` • [FAILED] [29.309 seconds] MultiTrenches /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:30 With two trenches containing both a stream with 2 VIP addresses and 4 target pods running /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:32 when a target disconnects from a trench and connect to another one [BeforeEach] /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:109 should receive the traffic on the other trench /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:141 Unexpected error: <*fmt.wrapError | 0xc000548160>: { msg: "command terminated with exit code 137; ", err: <exec.CodeExitError>{ Err: <*errors.errorString | 0xc0003ba310>{ s: "command terminated with exit code 137", }, Code: 137, }, } command terminated with exit code 137; occurred In [BeforeEach] at: /home/jenkins/nordix/slave_root/workspace/meridio-e2e-test-kind/43/test/e2e/multi_trenches_test.go:113 ``` https://jenkins.nordix.org/blue/organizations/jenkins/meridio-e2e-test-kind/detail/meridio-e2e-test-kind/43/pipeline/22
non_perf
tests instability describe the bug tests are failing randomly to of the time to reproduce expected behavior logs   scaling  home jenkins nordix slave root workspace meridio test kind test scaling test go  with one trench containing a stream with vip addresses and target pods running  home jenkins nordix slave root workspace meridio test kind test scaling test go  when scaling targets up by  home jenkins nordix slave root workspace meridio test kind test scaling test go   should receive the traffic correctly  home jenkins nordix slave root workspace meridio test kind test scaling test go   captured ginkgowriter output    checking if all targets have receive traffic with no traffic interruption no lost connection     checking if all targets have receive traffic with no traffic interruption no lost connection    end captured ginkgowriter output  panicked    at  home jenkins nordix slave root workspace meridio test kind test utils trafficgenerator go   character r looking for beginning of value  stack trace github com nordix meridio test utils mconnect analyzetraffic home jenkins nordix slave root workspace meridio test kind test utils trafficgenerator go github com nordix meridio test utils trafficgeneratorhost sendtraffic home jenkins nordix slave root workspace meridio test kind test utils trafficgenerator go github com nordix meridio test test glob home jenkins nordix slave root workspace meridio test kind test scaling test go   target  home jenkins nordix slave root workspace meridio test kind test target test go  with one trench containing a stream with vip addresses and target pods running  home jenkins nordix slave root workspace meridio test kind test target test go  when a target is closing a stream  home jenkins nordix slave root workspace meridio test kind test target test go   should receive traffic anymore  home jenkins nordix slave root workspace meridio test kind test target test go   captured ginkgowriter output    checking the target has not receive traffic    end captured ginkgowriter output  to equal    multitrenches  home jenkins nordix slave root workspace meridio test kind test multi trenches test go  with two trenches containing both a stream with vip addresses and target pods running  home jenkins nordix slave root workspace meridio test kind test multi trenches test go     home jenkins nordix slave root workspace meridio test kind test multi trenches test go  should receive the traffic on the other trench  home jenkins nordix slave root workspace meridio test kind test multi trenches test go   error msg unable to upgrade connection container not found example target err s unable to upgrade connection container not found example target unable to upgrade connection container not found example target occurred    at  home jenkins nordix slave root workspace meridio test kind test multi trenches test go    multitrenches  home jenkins nordix slave root workspace meridio test kind test multi trenches test go  with two trenches containing both a stream with vip addresses and target pods running  home jenkins nordix slave root workspace meridio test kind test multi trenches test go     home jenkins nordix slave root workspace meridio test kind test multi trenches test go  should receive the traffic on the other trench  home jenkins nordix slave root workspace meridio test kind test multi trenches test go   error msg command terminated with exit code err err s command terminated with exit code code command terminated with exit code occurred    at  home jenkins nordix slave root 
workspace meridio test kind test multi trenches test go 
0
28,577
13,754,594,198
IssuesEvent
2020-10-06 17:10:13
esowc/ecPoint-Calibrate
https://api.github.com/repos/esowc/ecPoint-Calibrate
opened
Reduce memory footprint with Dockerless app
performance
Currently, ecPoint-Calibrate runs as a set of 3 Docker containers: * Electron app * Python backend * Logging infrastructure Each of the above Docker containers runs a full-blown operating system, which doesn't leave a lot of memory for computations in resource-constrained environments. Furthermore, what's displayed on the screen is actually a raw stream of the display inside the Docker container, which is often the cause of a sluggish user experience. This issue proposes a lean runtime for ecPoint-Calibrate, without using Docker. **This is also a first step to running ecPoint-Calibrate on MacOS and Windows.** **Checklist:** - [ ] Use [socket.io](https://socket.io) to stream logs to the frontend. - [ ] Decommission the current logging infrastructure. - [ ] Investigate [PyInstaller](https://www.pyinstaller.org), to produce a zero-dependency binary for the Python backend. - [ ] Use [Electron Builder](https://github.com/electron-userland/electron-builder) to package the whole software into a single AppImage. - [ ] Implement auto-update.
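For the socket.io checklist item, a minimal TypeScript/Node sketch of the server side; the `log` event name and port 3001 are arbitrary choices for the sketch, not the project's actual protocol:

```ts
import { createServer } from 'node:http';
import { Server } from 'socket.io';

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });

// Push each backend log line to every connected frontend client.
export function emitLog(line: string): void {
  io.emit('log', { ts: Date.now(), line });
}

httpServer.listen(3001); // arbitrary port for the sketch
```

On the Electron side, a `socket.io-client` socket would subscribe with `socket.on('log', ...)` and append lines to the UI, replacing the separate logging container.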
True
Reduce memory footprint with Dockerless app - Currently, ecPoint-Calibrate runs as a set of 3 Docker containers: * Electron app * Python backend * Logging infrastructure Each of the above Docker containers run full-blown operating systems, which is doesn't leave a lot of memory for computations in resource-constrained environments. Furthermore, what's displayed on the screen is actually a raw stream of the display inside the Docker container, which is often the cause for sluggish user experience. This issue proposes a lean runtime for ecPoint-Calibrate, without using Docker. **This is also a first step to running ecPoint-Calibrate on MacOS and Windows.** **Checklist:** - [ ] Use [socket.io](https://socket.io) to stream logs to the frontend. - [ ] Decommission the current logging infrastructure. - [ ] Investigate [PyInstaller](https://www.pyinstaller.org), to produce a zero-dependency binary for the Python backend. - [ ] Use [Electron Builder](https://github.com/electron-userland/electron-builder) to package the who software into a single AppImage. - [ ] Implement auto-update.
perf
reduce memory footprint with dockerless app currently ecpoint calibrate runs as a set of docker containers electron app python backend logging infrastructure each of the above docker containers run full blown operating systems which is doesn t leave a lot of memory for computations in resource constrained environments furthermore what s displayed on the screen is actually a raw stream of the display inside the docker container which is often the cause for sluggish user experience this issue proposes a lean runtime for ecpoint calibrate without using docker this is also a first step to running ecpoint calibrate on macos and windows checklist use to stream logs to the frontend decommission the current logging infrastructure investigate to produce a zero dependency binary for the python backend use to package the who software into a single appimage implement auto update
1
19,494
10,439,566,568
IssuesEvent
2019-09-18 06:38:51
vaadin/flow
https://api.github.com/repos/vaadin/flow
closed
Bundle containing all frontend resources
investigation performance
Use all resources on the classpath rather than finding out what is actually used in development mode. We assume here that scanning through everything on the classpath is faster than a scan that takes reachability into account. In production mode, the current approach will be retained (bundle will only contain actually used resources). Advantages: - No more repeating bytecode traversing helps significantly with Spring projects - Jrebel / HotSwap will work when adding component that was not previously used Disadvantages: - Different bundles in dev and prod mode is not optimal Results: - We have a full bundle generation approach running - Benchmarking the application startup time with this approach - Can we notice that a new view with frontend dependencies has been added during a redeployment?
True
Bundle containing all frontend resources - Use all resources on the classpath rather than finding out what is actually used in development mode. We assume here that scanning through everything on the classpath is faster than a scan that takes reachability into account. In production mode, the current approach will be retained (bundle will only contain actually used resources). Advantages: - No more repeating bytecode traversing helps significantly with Spring projects - Jrebel / HotSwap will work when adding component that was not previously used Disadvantages: - Different bundles in dev and prod mode is not optimal Results: - We have a full bundle generation approach running - Benchmarking the application startup time with this approach - Can we notice that a new view with frontend dependencies has been added during a redeployment?
perf
bundle containing all frontend resources use all resources on the classpath rather than finding out what is actually used in development mode we assume here that scanning through everything on the classpath is faster than a scan that takes reachability into account in production mode the current approach will be retained bundle will only contain actually used resources advantages no more repeating bytecode traversing helps significantly with spring projects jrebel hotswap will work when adding component that was not previously used disadvantages different bundles in dev and prod mode is not optimal results we have a full bundle generation approach running benchmarking the application startup time with this approach can we notice that a new view with frontend dependencies has been added during a redeployment
1
24,804
12,405,221,587
IssuesEvent
2020-05-21 16:53:06
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
opened
Performance of Devshop Project Page
Performance
The project page on devshop takes a long time (need timings) to load. Performance should be improved. Blackfire Profile page: https://blackfire.io/profiles/790d70f3-58b3-4f5c-b59e-c2258128d7a3/graph
True
Performance of Devshop Project Page - The project page on devshop takes a long time (need timings) to load. Performance should be improved. Blackfire Profile page: https://blackfire.io/profiles/790d70f3-58b3-4f5c-b59e-c2258128d7a3/graph
perf
performance of devshop project page the project page on devshop takes a long time need timings to load performance should be improved blackfire profile page
1
89,255
3,791,543,465
IssuesEvent
2016-03-22 03:37:33
Microsoft/RTVS
https://api.github.com/repos/Microsoft/RTVS
closed
REPL null reference exception
area:Editor priority:P0 type:bug
Using today's code from master. Editing a line in REPL that looked like: ``` dim(a1) ``` ``` public static SnapshotPoint? MapCaretPositionFromView(ITextView textView) { if (!textView.Caret.InVirtualSpace) { SnapshotPoint caretPosition = textView.Caret.Position.BufferPosition; return MapPointFromView(textView, caretPosition); } return null; } ``` where `textView` is null. ``` > Microsoft.R.Editor.dll!Microsoft.R.Editor.Document.REditorDocument.MapCaretPositionFromView(Microsoft.VisualStudio.Text.Editor.ITextView textView) Line 182 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Signatures.SignatureHelp.UpdateCurrentParameter.AnonymousMethod__0(object o) Line 168 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.EditorTree.FirePostUpdateEvents(System.Collections.Generic.List<Microsoft.R.Editor.Tree.TreeChangeEventRecord> changes, bool fullParse) Line 63 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.TreeUpdateTask.ApplyBackgroundProcessingResults() Line 581 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.TreeUpdateTask.ProcessTextChanges.AnonymousMethod__38_0() Line 453 C# WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate callback, object args, int numArgs) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.TryCatchWhen(object source, System.Delegate callback, object args, int numArgs, System.Delegate catchHandler) Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.InvokeImpl() Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(object state) Unknown mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) Unknown WindowsBase.dll!MS.Internal.CulturePreservingExecutionContext.Run(MS.Internal.CulturePreservingExecutionContext executionContext, System.Threading.ContextCallback callback, object state) Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.Invoke() Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.ProcessQueue() Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.WndProcHook(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr lParam, ref bool handled) Unknown WindowsBase.dll!MS.Win32.HwndWrapper.WndProc(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr lParam, ref bool handled) Unknown WindowsBase.dll!MS.Win32.HwndSubclass.DispatcherCallbackOperation(object o) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate callback, object args, int numArgs) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.TryCatchWhen(object source, System.Delegate callback, object args, int numArgs, System.Delegate catchHandler) Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.LegacyInvokeImpl(System.Windows.Threading.DispatcherPriority priority, System.TimeSpan timeout, System.Delegate method, object args, int numArgs) Unknown WindowsBase.dll!MS.Win32.HwndSubclass.SubclassWndProc(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr lParam) Unknown ```
1.0
REPL null reference exception - Using today's code from master. Editing a line in REPL that looked like: ``` dim(a1) ``` ``` public static SnapshotPoint? MapCaretPositionFromView(ITextView textView) { if (!textView.Caret.InVirtualSpace) { SnapshotPoint caretPosition = textView.Caret.Position.BufferPosition; return MapPointFromView(textView, caretPosition); } return null; } ``` where `textView` is null. ``` > Microsoft.R.Editor.dll!Microsoft.R.Editor.Document.REditorDocument.MapCaretPositionFromView(Microsoft.VisualStudio.Text.Editor.ITextView textView) Line 182 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Signatures.SignatureHelp.UpdateCurrentParameter.AnonymousMethod__0(object o) Line 168 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.EditorTree.FirePostUpdateEvents(System.Collections.Generic.List<Microsoft.R.Editor.Tree.TreeChangeEventRecord> changes, bool fullParse) Line 63 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.TreeUpdateTask.ApplyBackgroundProcessingResults() Line 581 C# Microsoft.R.Editor.dll!Microsoft.R.Editor.Tree.TreeUpdateTask.ProcessTextChanges.AnonymousMethod__38_0() Line 453 C# WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate callback, object args, int numArgs) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.TryCatchWhen(object source, System.Delegate callback, object args, int numArgs, System.Delegate catchHandler) Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.InvokeImpl() Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(object state) Unknown mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) Unknown WindowsBase.dll!MS.Internal.CulturePreservingExecutionContext.Run(MS.Internal.CulturePreservingExecutionContext executionContext, System.Threading.ContextCallback callback, object state) Unknown WindowsBase.dll!System.Windows.Threading.DispatcherOperation.Invoke() Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.ProcessQueue() Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.WndProcHook(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr lParam, ref bool handled) Unknown WindowsBase.dll!MS.Win32.HwndWrapper.WndProc(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr lParam, ref bool handled) Unknown WindowsBase.dll!MS.Win32.HwndSubclass.DispatcherCallbackOperation(object o) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.InternalRealCall(System.Delegate callback, object args, int numArgs) Unknown WindowsBase.dll!System.Windows.Threading.ExceptionWrapper.TryCatchWhen(object source, System.Delegate callback, object args, int numArgs, System.Delegate catchHandler) Unknown WindowsBase.dll!System.Windows.Threading.Dispatcher.LegacyInvokeImpl(System.Windows.Threading.DispatcherPriority priority, System.TimeSpan timeout, System.Delegate method, object args, int numArgs) Unknown WindowsBase.dll!MS.Win32.HwndSubclass.SubclassWndProc(System.IntPtr hwnd, int msg, System.IntPtr wParam, System.IntPtr 
lParam) Unknown ```
non_perf
repl null reference exception using today s code from master editing a line in repl that looked like dim public static snapshotpoint mapcaretpositionfromview itextview textview if textview caret invirtualspace snapshotpoint caretposition textview caret position bufferposition return mappointfromview textview caretposition return null where textview is null microsoft r editor dll microsoft r editor document reditordocument mapcaretpositionfromview microsoft visualstudio text editor itextview textview line c microsoft r editor dll microsoft r editor signatures signaturehelp updatecurrentparameter anonymousmethod object o line c microsoft r editor dll microsoft r editor tree editortree firepostupdateevents system collections generic list changes bool fullparse line c microsoft r editor dll microsoft r editor tree treeupdatetask applybackgroundprocessingresults line c microsoft r editor dll microsoft r editor tree treeupdatetask processtextchanges anonymousmethod line c windowsbase dll system windows threading exceptionwrapper internalrealcall system delegate callback object args int numargs unknown windowsbase dll system windows threading exceptionwrapper trycatchwhen object source system delegate callback object args int numargs system delegate catchhandler unknown windowsbase dll system windows threading dispatcheroperation invokeimpl unknown windowsbase dll system windows threading dispatcheroperation invokeinsecuritycontext object state unknown mscorlib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state bool preservesyncctx unknown mscorlib dll system threading executioncontext run system threading executioncontext executioncontext system threading contextcallback callback object state bool preservesyncctx unknown mscorlib dll system threading executioncontext run system threading executioncontext executioncontext system threading contextcallback callback object state unknown windowsbase dll ms internal culturepreservingexecutioncontext run ms internal culturepreservingexecutioncontext executioncontext system threading contextcallback callback object state unknown windowsbase dll system windows threading dispatcheroperation invoke unknown windowsbase dll system windows threading dispatcher processqueue unknown windowsbase dll system windows threading dispatcher wndprochook system intptr hwnd int msg system intptr wparam system intptr lparam ref bool handled unknown windowsbase dll ms hwndwrapper wndproc system intptr hwnd int msg system intptr wparam system intptr lparam ref bool handled unknown windowsbase dll ms hwndsubclass dispatchercallbackoperation object o unknown windowsbase dll system windows threading exceptionwrapper internalrealcall system delegate callback object args int numargs unknown windowsbase dll system windows threading exceptionwrapper trycatchwhen object source system delegate callback object args int numargs system delegate catchhandler unknown windowsbase dll system windows threading dispatcher legacyinvokeimpl system windows threading dispatcherpriority priority system timespan timeout system delegate method object args int numargs unknown windowsbase dll ms hwndsubclass subclasswndproc system intptr hwnd int msg system intptr wparam system intptr lparam unknown
0
37,506
18,456,872,433
IssuesEvent
2021-10-15 17:42:44
gap-packages/OrbitalGraphs
https://api.github.com/repos/gap-packages/OrbitalGraphs
opened
Make this package fast and customisable enough to be used by Vole, GraphBacktracking, ferret...
performance
So far, this package has not been focused on performance. (Which is entirely reasonable). But there are several packages which have different implementations of constructing orbital graphs, and it would be nice if they could use this package instead (especially if we can beat the performance of those packages). I don't include any specifics here (although perhaps I will add details), but for now I include it as a goal. This issue can be closed once OrbitalGraphs is used by GraphBacktracking to construct its orbital graphs, or if we decide to give up on this goal. Related issues: #27, #33, #34, #35
True
Make this package fast and customisable enough to be used by Vole, GraphBacktracking, ferret... - So far, this package has not been focused on performance. (Which is entirely reasonable). But there are several packages which have different implementations of constructing orbital graphs, and it would be nice if they could use this package instead (especially if we can beat the performance of those packages). I don't include any specifics here (although perhaps I will add details), but for now I include it as a goal. This issue can be closed once OrbitalGraphs is used by GraphBacktracking to construct its orbital graphs, or if we decide to give up on this goal. Related issues: #27, #33, #34, #35
perf
make this package fast and customisable enough to be used by vole graphbacktracking ferret so far this package has not been focused on performance which is entirely reasonable but there are several packages which have different implementations of constructing orbital graphs and it would be nice if they could use this package instead especially if we can beat the performance of those packages i don t include any specifics here although perhaps i will add details but for now i include it as a goal this issue can be closed once orbitalgraphs is used by graphbacktracking to construct its orbital graphs or if we decide to give up on this goal related issues
1
228
2,525,091,638
IssuesEvent
2015-01-20 22:06:58
SFTtech/openage
https://api.github.com/repos/SFTtech/openage
opened
Switch minimum compiler version to GCC 4.9/Clang 3.5
buildsystem c++
There has been discussion on the IRC channel about ditching GCC 4.8/Clang 3.4 support in order to make use of some new language features. The obvious downside is that parts of the user/developer base would need to upgrade their toolchains, which may or may not require unreasonable effort. I have been assured that Clang 3.5 is available on OSX Mountain Lion and upwards through XCode 6. The recent compiler versions are available on Debian unstable-soon-to-be-stable as well as the most recent (non-LTS) version of Ubuntu. What about mingw cross-compilers and other Linux distributions? This issue is intended as a platform for discussion.
1.0
Switch minimum compiler version to GCC 4.9/Clang 3.5 - There has been discussion on the IRC channel about ditching GCC 4.8/Clang 3.4 support in order to make use of some new language features. The obvious downside is that parts of the user/developer base would need to upgrade their toolchains, which may or may not require unreasonable effort. I have been assured that Clang 3.5 is available on OSX Mountain Lion and upwards through XCode 6. The recent compiler versions are available on Debian unstable-soon-to-be-stable as well as the most recent (non-LTS) version of Ubuntu. What about mingw cross-compilers and other Linux distribution? This issue is intended as a platform for discussion.
non_perf
switch minimum compiler version to gcc clang there has been discussion on the irc channel about ditching gcc clang support in order to make use of some new language features the obvious downside is that parts of the user developer base would need to upgrade their toolchains which may or may not require unreasonable effort i have been assured that clang is available on osx mountain lion and upwards through xcode the recent compiler versions are available on debian unstable soon to be stable as well as the most recent non lts version of ubuntu what about mingw cross compilers and other linux distribution this issue is intended as a platform for discussion
0
47,188
24,901,765,939
IssuesEvent
2022-10-28 21:51:01
NREL/OpenStudio-HPXML
https://api.github.com/repos/NREL/OpenStudio-HPXML
closed
Use new E+ 9.6 Space object
performance openstudio energyplus refactor
We should consider using the new E+ 9.6 `Space` object for each conditioned story of the home. (This would require [OpenStudio to be able to translate OS Spaces to E+ Spaces](https://github.com/NREL/OpenStudio/issues/4409) as well as us updating the code to create multiple OpenStudio spaces.) All the conditioned spaces would still be in a single thermal zone. This would eliminate two workarounds in our code ([one for radiation](https://github.com/NREL/OpenStudio-HPXML/blob/570288a035d7f4e486ffad284a3699f96f72ffd9/HPXMLtoOpenStudio/measure.rb#L348) and [one for solar distribution](https://github.com/NREL/OpenStudio-HPXML/blob/570288a035d7f4e486ffad284a3699f96f72ffd9/HPXMLtoOpenStudio/measure.rb#L347)), as the use of E+ spaces will achieve our desired effect. FYI @yzhou601
True
Use new E+ 9.6 Space object - We should consider using the new E+ 9.6 `Space` object for each conditioned story of the home. (This would require [OpenStudio to be able to translate OS Spaces to E+ Spaces](https://github.com/NREL/OpenStudio/issues/4409) as well as us updating the code to create multiple OpenStudio spaces.) All the conditioned spaces would still be in a single thermal zone. This would eliminate two workarounds in our code ([one for radiation](https://github.com/NREL/OpenStudio-HPXML/blob/570288a035d7f4e486ffad284a3699f96f72ffd9/HPXMLtoOpenStudio/measure.rb#L348) and [one for solar distribution](https://github.com/NREL/OpenStudio-HPXML/blob/570288a035d7f4e486ffad284a3699f96f72ffd9/HPXMLtoOpenStudio/measure.rb#L347)), as the use of E+ spaces will achieve our desired effect. FYI @yzhou601
perf
use new e space object we should consider using the new e space object for each conditioned story of the home this would require as well as us updating the code to create multiple openstudio spaces all the conditioned spaces would still be in a single thermal zone this would eliminate two workarounds in our code and as the use of e spaces will achieve our desired effect fyi
1
20,216
10,679,130,818
IssuesEvent
2019-10-21 18:39:29
magento/pwa-studio
https://api.github.com/repos/magento/pwa-studio
closed
[feature]: Cache only 1 version of HTML and return that for all routes.
Progress: PR created enhancement performance
**Is your feature request related to a problem? Please describe.** Related to #1673 **Describe the solution you'd like** As of today, all routes get the same HTML file even if the request originated as `/`, `/search.html`, `/venia-tops.html` or `/valeria-two-layer-tank.html`, just under a different name. The problem arises when the user goes directly to a certain product page during offline mode whose HTML is not cached. Even though it is the same HTML, since the route is different the browser will show a Network Unreachable error. This is because we give different names to routes that bear the same content, and the service worker caches that HTML with the route as the key. We should change the HTML handler in the service worker to save only 1 version of the HTML and return it even if the request originated from a different route. By not requesting HTML every time and utilizing a cached version until something changes, we can save 855 bytes of data over the network and also reduce our cache memory footprint. ![out_temp](https://user-images.githubusercontent.com/35203638/65625404-663c7180-df91-11e9-86ab-e52c00d16e97.gif) ![image](https://user-images.githubusercontent.com/35203638/65623698-f973a800-df8d-11e9-9146-14f3af988d80.png) **Please let us know what packages this feature is in regards to:** - [x] `venia-concept` - [x] `venia-ui` - [x] `pwa-buildpack` - [ ] `peregrine` - [ ] `pwa-devdocs` - [ ] `upward-js` - [ ] `upward-spec`
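The proposed handler boils down to: for any navigation request, ignore the route and answer with the single cached app shell. A minimal service-worker sketch of that idea (the cache name and shell URL are placeholders, not Venia's actual configuration):

```ts
const SHELL_CACHE = 'app-shell-v1'; // placeholder cache name
const SHELL_URL = '/';              // the single cached HTML version

self.addEventListener('install', (event: any) => {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.add(SHELL_URL))
  );
});

self.addEventListener('fetch', (event: any) => {
  // Every route (/search.html, /venia-tops.html, ...) gets the same shell,
  // so offline navigation to a never-visited route still works.
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match(SHELL_URL).then((cached) => cached ?? fetch(event.request))
    );
  }
});
```

Because only one copy of the HTML is stored, the cache footprint shrinks and a route that was never visited online still resolves offline.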
True
[feature]: Cache only 1 version of HTML and return that for all routes. - **Is your feature request related to a problem? Please describe.** Related to #1673 **Describe the solution you'd like** As of today, all routes get the same HTML file even if the request originated as `/`, `/search.html` or `/venia-tops.html` or `/valeria-two-layer-tank.html` but with a different name. The problem arises when the user directly goes to a certain products page during the offline mode, whose HTML is not cached. Even though it is the same HTML since the route is different the browser will redirect to Network Unreachable error. This is because we give different names to routes that bear the same content and the service worker caches that HTML with the key as the route. We should change the HTML handler in service worker to save only 1 version of HTML and return it even if the request originated from a different route. By not requesting HTML every time and utilizing a cached version till something changes we can save 855 Bytes of data over the network and also reduce our cache memory footprint. ![out_temp](https://user-images.githubusercontent.com/35203638/65625404-663c7180-df91-11e9-86ab-e52c00d16e97.gif) ![image](https://user-images.githubusercontent.com/35203638/65623698-f973a800-df8d-11e9-9146-14f3af988d80.png) **Please let us know what packages this feature is in regards to:** - [x] `venia-concept` - [x] `venia-ui` - [x] `pwa-buildpack` - [ ] `peregrine` - [ ] `pwa-devdocs` - [ ] `upward-js` - [ ] `upward-spec`
perf
cache only version of html and return that for all routes is your feature request related to a problem please describe related to describe the solution you d like as of today all routes get the same html file even if the request originated as search html or venia tops html or valeria two layer tank html but with a different name the problem arises when the user directly goes to a certain products page during the offline mode whose html is not cached even though it is the same html since the route is different the browser will redirect to network unreachable error this is because we give different names to routes that bear the same content and the service worker caches that html with the key as the route we should change the html handler in service worker to save only version of html and return it even if the request originated from a different route by not requesting html every time and utilizing a cached version till something changes we can save bytes of data over the network and also reduce our cache memory footprint please let us know what packages this feature is in regards to venia concept venia ui pwa buildpack peregrine pwa devdocs upward js upward spec
1
12,594
7,924,877,770
IssuesEvent
2018-07-05 18:32:52
rancher/rancher
https://api.github.com/repos/rancher/rancher
opened
frequent loop of node updates
kind/bug kind/performance version/2.0
**Rancher versions:** rancher/rancher: 2.0.x

Running a custom build that no-ops the network policy manager, we can see that we get into a node update loop:

```
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.34 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.45 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.35 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.44 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.22 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.19 has role: etcd
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.19 has role: controlplane
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.19 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.23 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.39 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.25 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.43 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.42 has role: worker
2018/07/05 17:42:58 [DEBUG] Host: 10.240.0.21 has role: worker
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler mgmt-cluster-rbac-delete c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-agent-controller-cleanup c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-deploy c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-scoped-gc c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-provisioner-controller c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-stats c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler cluster-agent-controller c-qwbpz
2018/07/05 17:42:58 [DEBUG] ClusterController calling handler mgmt-cluster-rbac-remove c-qwbpz
2018/07/05 17:42:58 [DEBUG] NodeController calling handler user-node-remove c-qwbpz/m-bed8b7c3e8bc
2018/07/05 17:42:58 [DEBUG] NodeController calling handler machinesSyncer c-qwbpz/m-bed8b7c3e8bc
2018/07/05 17:42:58 [DEBUG] NodeController calling handler machinesLabelSyncer c-qwbpz/m-bed8b7c3e8bc
2018/07/05 17:42:58 [DEBUG] NodeController calling handler cordonFieldsSyncer c-qwbpz/m-bed8b7c3e8bc
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-5]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c6b7be813259
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-5]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-4]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-26635726c0dd
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-4]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-6]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-f733c9b222ae
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-6]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-3]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-9baf2cbb71e5
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-3]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-7]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-82fd534ab8b8
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-7]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-2]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c11b0b1ef6df
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-2]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-8]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-836064ac2eec
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-8]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-1]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-e6f5426795ca
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-1]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-10]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-a1733ca530dd
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-10]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-9]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-ff4a9befea69
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-9]
2018/07/05 17:42:58 [DEBUG] NodeController calling handler machinesLabelSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:58 [DEBUG] NodeController calling handler cordonFieldsSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:58 [DEBUG] REQUEST 377824304829622080 DATA [19]: buffered
2018/07/05 17:42:58 [DEBUG] NodeController calling handler nodesSyncer wmaxwell-test0-0
2018/07/05 17:42:58 [DEBUG] NodeController calling handler nodesEndpointsController wmaxwell-test0-0
2018/07/05 17:42:58 [DEBUG] NodeController calling handler user-node-remove c-qwbpz/_machine_all_
2018/07/05 17:42:58 [DEBUG] NodeController calling handler machinesSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-6]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-f733c9b222ae
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-6]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-3]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-9baf2cbb71e5
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-3]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-8]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-836064ac2eec
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-8]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-1]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-e6f5426795ca
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-1]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-0]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-bed8b7c3e8bc
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-0]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-2]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c11b0b1ef6df
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-2]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-10]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-a1733ca530dd
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-10]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-9]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-ff4a9befea69
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-9]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-5]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c6b7be813259
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-5]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-4]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-26635726c0dd
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-4]
2018/07/05 17:42:58 [DEBUG] Updating machine for node [wmaxwell-test3-7]
2018/07/05 17:42:58 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-82fd534ab8b8
2018/07/05 17:42:58 [DEBUG] Updated machine for node [wmaxwell-test3-7]
2018/07/05 17:42:58 [DEBUG] NodeController calling handler machinesLabelSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:58 [DEBUG] NodeController calling handler cordonFieldsSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:59 [DEBUG] REQUEST 1274340773381615485 DATA [3]: buffered
2018/07/05 17:42:59 [DEBUG] EndpointsController calling handler dnsRecordEndpointsController kube-system/kube-controller-manager
2018/07/05 17:42:59 [DEBUG] EndpointsController calling handler dnsRecordEndpointsController kube-system/kube-controller-manager
2018/07/05 17:42:59 [DEBUG] REQUEST 377824304829622081 DATA [19]: buffered
2018/07/05 17:42:59 [DEBUG] NodeController calling handler nodesSyncer wmaxwell-test3-9
2018/07/05 17:42:59 [DEBUG] NodeController calling handler nodesEndpointsController wmaxwell-test3-9
2018/07/05 17:42:59 [DEBUG] NodeController calling handler user-node-remove c-qwbpz/_machine_all_
2018/07/05 17:42:59 [DEBUG] NodeController calling handler machinesSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-2]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c11b0b1ef6df
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-2]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-9]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-9]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-7]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-82fd534ab8b8
2018/07/05 17:42:59 [DEBUG] NodeController calling handler cluster-provisioner-controller c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler cluster-stats c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler nodepool-provisioner c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler node-controller c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.34 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.45 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.35 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.44 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.22 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.19 has role: etcd
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.19 has role: controlplane
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.19 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.23 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.39 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.25 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.43 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.42 has role: worker
2018/07/05 17:42:59 [DEBUG] Host: 10.240.0.21 has role: worker
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler mgmt-cluster-rbac-delete c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-agent-controller-cleanup c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-deploy c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-scoped-gc c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-provisioner-controller c-qwbpz
2018/07/05 17:42:59 [DEBUG] NodeController calling handler user-node-remove c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler machinesSyncer c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler machinesLabelSyncer c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] NodeController calling handler cordonFieldsSyncer c-qwbpz/m-ff4a9befea69
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-7]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-8]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-836064ac2eec
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-stats c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler cluster-agent-controller c-qwbpz
2018/07/05 17:42:59 [DEBUG] ClusterController calling handler mgmt-cluster-rbac-remove c-qwbpz
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-8]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-1]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-e6f5426795ca
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-1]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-4]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-26635726c0dd
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-4]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-6]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-f733c9b222ae
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-6]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-10]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-a1733ca530dd
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-10]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-0]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-bed8b7c3e8bc
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-0]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-3]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-9baf2cbb71e5
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-3]
2018/07/05 17:42:59 [DEBUG] Updating machine for node [wmaxwell-test3-5]
2018/07/05 17:42:59 [DEBUG] REST UPDATE apis/management.cattle.io/v3/c-qwbpz/nodes/m-c6b7be813259
2018/07/05 17:42:59 [DEBUG] Updated machine for node [wmaxwell-test3-5]
2018/07/05 17:42:59 [DEBUG] NodeController calling handler machinesLabelSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:59 [DEBUG] NodeController calling handler cordonFieldsSyncer c-qwbpz/_machine_all_
2018/07/05 17:42:59 [DEBUG] EndpointsController calling handler dnsRecordEndpointsController kube-system/kube-scheduler
```

These frequent node updates might be a potential cause of #14372.
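The pattern above, where every `machinesSyncer` pass immediately issues a `REST UPDATE` for every node, second after second, is what a reconcile loop can look like when a handler writes the object back even though nothing changed: each write produces a new watch event, which re-enqueues the node and triggers the next write. The sketch below is a minimal, self-contained illustration of that failure mode and the usual guard against it. It is not Rancher's actual code; the `Node`, `Client`, `naiveSync`, and `guardedSync` names are hypothetical.

```go
package main

import (
	"fmt"
	"reflect"
)

// Node is a hypothetical, stripped-down stand-in for the node objects
// being written in the log above; it is not Rancher's real type.
type Node struct {
	Name   string
	Labels map[string]string
}

// Client is a hypothetical API client. In a Kubernetes-style API every
// successful Update bumps the resource version and emits a watch event,
// which re-enqueues the object for the controller.
type Client struct{ updates int }

func (c *Client) Update(n *Node) *Node {
	c.updates++
	fmt.Printf("REST UPDATE nodes/%s (update #%d)\n", n.Name, c.updates)
	return n
}

// naiveSync writes the object back unconditionally, so each watch event
// produces another update and another watch event: a hot loop like the
// one in the log, where the same nodes are rewritten every second.
func naiveSync(c *Client, current, desired *Node) *Node {
	return c.Update(desired)
}

// guardedSync writes only when something actually changed, so the loop
// settles: the event triggered by the last real write finds no diff.
func guardedSync(c *Client, current, desired *Node) *Node {
	if reflect.DeepEqual(current, desired) {
		return current // no-op reconcile: nothing persisted, no new event
	}
	return c.Update(desired)
}

func main() {
	c := &Client{}
	node := &Node{Name: "m-bed8b7c3e8bc", Labels: map[string]string{"role": "worker"}}

	// Simulate three watch deliveries with an unchanged desired state.
	for i := 0; i < 3; i++ {
		node = naiveSync(c, node, node)   // writes every time
		node = guardedSync(c, node, node) // writes nothing
	}
}
```

If the loop in these logs is of this kind, a deep-equality (or resource-version) check before the update, as in `guardedSync`, is the standard way to let the controllers settle.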
True
perf
1
37,963
10,115,696,253
IssuesEvent
2019-07-30 22:39:04
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Query builder should have horizontal scrolling with many metrics or dimensions
Query Builder Type:Bug
When adding many metrics or group-bys, the query builder used to have a max width and would scroll the overflow. But now, on master in Chrome, I'm seeing this: ![screen shot 2018-10-10 at 11 03 13 am](https://user-images.githubusercontent.com/2223916/46756420-4460b680-cc7c-11e8-850e-83ceb00d6b8f.png)
1.0
non_perf
0