Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1
value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3
values | title stringlengths 1 970 | labels stringlengths 4 625 | body stringlengths 3 247k | index stringclasses 9
values | text_combine stringlengths 96 247k | label stringclasses 2
values | text stringlengths 96 218k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
490,443 | 14,121,486,316 | IssuesEvent | 2020-11-09 02:10:52 | AY2021S1-CS2113T-W12-2/tp | https://api.github.com/repos/AY2021S1-CS2113T-W12-2/tp | closed | [PE-D] [W12-2] Duration for flexible tasks too long | priority.High type.Bug | 
When I add a task that lasts 254 hours long, start time isn't set as default, and it doesn't show up when I use the list command.
<!--session: 1604047703205-acf0f73b-f2fb-4dd8-aff5-c649ca198eba-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: thngyuxuan/ped#3 | 1.0 | [PE-D] [W12-2] Duration for flexible tasks too long - 
When I add a task that lasts 254 hours long, start time isn't set as default, and it doesn't show up when I use the list command.
<!--session: 1604047703205-acf0f73b-f2fb-4dd8-aff5-c649ca198eba-->
-------------
Labels: `severity.Medium` `type.FunctionalityBug`
original: thngyuxuan/ped#3 | non_perf | duration for flexible tasks too long when i add a task that lasts hours long start time isn t set as default and it doesn t show up when i use the list command labels severity medium type functionalitybug original thngyuxuan ped | 0 |
35,814 | 17,269,192,734 | IssuesEvent | 2021-07-22 17:23:03 | erayerdin/levelheadbrowser | https://api.github.com/repos/erayerdin/levelheadbrowser | closed | Do not request for null icons | bug performance | Icons on profiles, levels and tower trials page are requested even if the related field is `null`. A real request is made just to get 404 in this case, which is not performant. | True | Do not request for null icons - Icons on profiles, levels and tower trials page are requested even if the related field is `null`. A real request is made just to get 404 in this case, which is not performant. | perf | do not request for null icons icons on profiles levels and tower trials page are requested even if the related field is null a real request is made just to get in this case which is not performant | 1 |
293,321 | 22,052,447,822 | IssuesEvent | 2022-05-30 09:50:38 | felangel/bloc | https://api.github.com/repos/felangel/bloc | opened | question: Cancel event in separate event handlers | documentation | How to cancel event processing?
For example i have `start` and `stop` events in separated event handlers.
I know i can make a flag like `isWorking`, but is there any way to cancel event?
| 1.0 | question: Cancel event in separate event handlers - How to cancel event processing?
For example i have `start` and `stop` events in separated event handlers.
I know i can make a flag like `isWorking`, but is there any way to cancel event?
| non_perf | question cancel event in separate event handlers how to cancel event processing for example i have start and stop events in separated event handlers i know i can make a flag like isworking but is there any way to cancel event | 0 |
47,785 | 25,187,884,023 | IssuesEvent | 2022-11-11 20:06:15 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Godot 4.0 is very slow to start on macOS compared to 3.x | bug platform:macos topic:core topic:rendering topic:porting confirmed regression performance | ### Godot version
4.0.alpha10.official
### System information
macOS 12.4 (Monterey), Vulkan, Intel Iris Pro 5200
### Issue description
Latest release of Godot 4.0 (alpha 10) seems to be super slow on macOS 12.4 (Monterey), at least when tested on my MacBook Pro setup (meanwhile, Godot 3.4.4 works without any noticeable performance issues). Here are some benchmarks:
- time to start Godot (to open project list): **33 seconds**
- time to open new project: 35 seconds
- time to start an empty 2D scene app: 33 seconds
On the other hand, editor itself is generally responsive when interacting with its UI (creating new nodes etc.).
Among important details, here are some errors that are raised during the 2D application start (empty scene):
```
E 0:00:00:0276 _debug_messenger_callback: - Message Id Number: 0 | Message Id Name:
VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (Error code 2):
Compiler encountered an internal error.
Objects - 1
Object[0] - VK_OBJECT_TYPE_PIPELINE, Handle 140389825172992
<C++ Source> drivers/vulkan/vulkan_context.cpp:159 @ _debug_messenger_callback()
E 0:00:00:0276 render_pipeline_create: vkCreateGraphicsPipelines failed with error -3 for shader 'ClusterRenderShaderRD:0'.
<C++ Error> Condition "err" is true. Returning: RID()
<C++ Source> drivers/vulkan/rendering_device_vulkan.cpp:6614 @ render_pipeline_create()
E 0:00:30:0395 _debug_messenger_callback: - Message Id Number: 0 | Message Id Name:
VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (Error code 2):
Compiler encountered an internal error.
Objects - 1
Object[0] - VK_OBJECT_TYPE_PIPELINE, Handle 140389832876544
<C++ Source> drivers/vulkan/vulkan_context.cpp:159 @ _debug_messenger_callback()
E 0:00:30:0395 render_pipeline_create: vkCreateGraphicsPipelines failed with error -3 for shader 'ClusterRenderShaderRD:0'.
<C++ Error> Condition "err" is true. Returning: RID()
<C++ Source> drivers/vulkan/rendering_device_vulkan.cpp:6614 @ render_pipeline_create()
```
Also, here's the editor output after Godot start, for the record:
```
--- Debugging process started ---
Godot Engine v4.0.alpha10.official.4bbe7f0b9 - https://godotengine.org
Vulkan API 1.1.198 - Using Vulkan Device #0: Intel - Intel Iris Pro Graphics
Registered camera FaceTime HD Camera with id 1 position 0 at index 0
--- Debugging process stopped ---
```
It seems that Vulkan support is not an issue on my MacBook Pro. I've just installed Vulkan SDK 1.3.216.0 for macOS from [LunarG website](https://vulkan.lunarg.com/sdk/home#mac) and their `vkcube.app` demo works without any issues (thanks to [MoltenVK](https://github.com/KhronosGroup/MoltenVK) underneath, of course).
I can provide any additional information or testing if that would help.
### System details
- laptop: MacBook Pro (Retina, 15-inch, Mid 2015)
- CPU: Intel Core i7-4870HQ 2.5 GHz (Haswell)
- OS: macOS Monterey 12.4 (build [21F79](https://en.wikipedia.org/wiki/MacOS_Monterey#Release_history))
### GPU details (from `system_profiler`)
```
Chipset Model: Intel Iris Pro
Type: GPU
Bus: Built-In
VRAM (Dynamic, Max): 1536 MB
Vendor: Intel
Device ID: 0x0d26
Revision ID: 0x0008
Metal Family: Supported, Metal GPUFamily macOS 1
```
And by the way, thank you all for Godot! 🎉 It's absolutely amazing that a game engine like this is available out there.
### Steps to reproduce
No extra steps are needed to reproduce the issue. It seems that any MacBook with a similar spec would be prone to this problem.
### Minimal reproduction project
_No response_ | True | Godot 4.0 is very slow to start on macOS compared to 3.x - ### Godot version
4.0.alpha10.official
### System information
macOS 12.4 (Monterey), Vulkan, Intel Iris Pro 5200
### Issue description
Latest release of Godot 4.0 (alpha 10) seems to be super slow on macOS 12.4 (Monterey), at least when tested on my MacBook Pro setup (meanwhile, Godot 3.4.4 works without any noticeable performance issues). Here are some benchmarks:
- time to start Godot (to open project list): **33 seconds**
- time to open new project: 35 seconds
- time to start an empty 2D scene app: 33 seconds
On the other hand, editor itself is generally responsive when interacting with its UI (creating new nodes etc.).
Among important details, here are some errors that are raised during the 2D application start (empty scene):
```
E 0:00:00:0276 _debug_messenger_callback: - Message Id Number: 0 | Message Id Name:
VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (Error code 2):
Compiler encountered an internal error.
Objects - 1
Object[0] - VK_OBJECT_TYPE_PIPELINE, Handle 140389825172992
<C++ Source> drivers/vulkan/vulkan_context.cpp:159 @ _debug_messenger_callback()
E 0:00:00:0276 render_pipeline_create: vkCreateGraphicsPipelines failed with error -3 for shader 'ClusterRenderShaderRD:0'.
<C++ Error> Condition "err" is true. Returning: RID()
<C++ Source> drivers/vulkan/rendering_device_vulkan.cpp:6614 @ render_pipeline_create()
E 0:00:30:0395 _debug_messenger_callback: - Message Id Number: 0 | Message Id Name:
VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (Error code 2):
Compiler encountered an internal error.
Objects - 1
Object[0] - VK_OBJECT_TYPE_PIPELINE, Handle 140389832876544
<C++ Source> drivers/vulkan/vulkan_context.cpp:159 @ _debug_messenger_callback()
E 0:00:30:0395 render_pipeline_create: vkCreateGraphicsPipelines failed with error -3 for shader 'ClusterRenderShaderRD:0'.
<C++ Error> Condition "err" is true. Returning: RID()
<C++ Source> drivers/vulkan/rendering_device_vulkan.cpp:6614 @ render_pipeline_create()
```
Also, here's the editor output after Godot start, for the record:
```
--- Debugging process started ---
Godot Engine v4.0.alpha10.official.4bbe7f0b9 - https://godotengine.org
Vulkan API 1.1.198 - Using Vulkan Device #0: Intel - Intel Iris Pro Graphics
Registered camera FaceTime HD Camera with id 1 position 0 at index 0
--- Debugging process stopped ---
```
It seems that Vulkan support is not an issue on my MacBook Pro. I've just installed Vulkan SDK 1.3.216.0 for macOS from [LunarG website](https://vulkan.lunarg.com/sdk/home#mac) and their `vkcube.app` demo works without any issues (thanks to [MoltenVK](https://github.com/KhronosGroup/MoltenVK) underneath, of course).
I can provide any additional information or testing if that would help.
### System details
- laptop: MacBook Pro (Retina, 15-inch, Mid 2015)
- CPU: Intel Core i7-4870HQ 2.5 GHz (Haswell)
- OS: macOS Monterey 12.4 (build [21F79](https://en.wikipedia.org/wiki/MacOS_Monterey#Release_history))
### GPU details (from `system_profiler`)
```
Chipset Model: Intel Iris Pro
Type: GPU
Bus: Built-In
VRAM (Dynamic, Max): 1536 MB
Vendor: Intel
Device ID: 0x0d26
Revision ID: 0x0008
Metal Family: Supported, Metal GPUFamily macOS 1
```
And by the way, thank you all for Godot! 🎉 It's absolutely amazing that a game engine like this is available out there.
### Steps to reproduce
No extra steps are needed to reproduce the issue. It seems that any MacBook with a similar spec would be prone to this problem.
### Minimal reproduction project
_No response_ | perf | godot is very slow to start on macos compared to x godot version official system information macos monterey vulkan intel iris pro issue description latest release of godot alpha seems to be super slow on macos monterey at least when tested on my macbook pro setup meanwhile godot works without any noticeable performance issues here are some benchmarks time to start godot to open project list seconds time to open new project seconds time to start an empty scene app seconds on the other hand editor itself is generally responsive when interacting with its ui creating new nodes etc among important details here are some errors that are raised during the application start empty scene e debug messenger callback message id number message id name vk error initialization failed render pipeline compile failed error code compiler encountered an internal error objects object vk object type pipeline handle drivers vulkan vulkan context cpp debug messenger callback e render pipeline create vkcreategraphicspipelines failed with error for shader clusterrendershaderrd condition err is true returning rid drivers vulkan rendering device vulkan cpp render pipeline create e debug messenger callback message id number message id name vk error initialization failed render pipeline compile failed error code compiler encountered an internal error objects object vk object type pipeline handle drivers vulkan vulkan context cpp debug messenger callback e render pipeline create vkcreategraphicspipelines failed with error for shader clusterrendershaderrd condition err is true returning rid drivers vulkan rendering device vulkan cpp render pipeline create also here s the editor output after godot start for the record debugging process started godot engine official vulkan api using vulkan device intel intel iris pro graphics registered camera facetime hd camera with id position at index debugging process stopped it seems that vulkan support is not an issue on my macbook pro i ve just installed vulkan sdk for macos from and their vkcube app demo works without any issues thanks to underneath of course i can provide any additional information or testing if that would help system details laptop macbook pro retina inch mid cpu intel core ghz haswell os macos monterey build gpu details from system profiler chipset model intel iris pro type gpu bus built in vram dynamic max mb vendor intel device id revision id metal family supported metal gpufamily macos and by the way thank you all for godot 🎉 it s absolutely amazing that a game engine like this is available out there steps to reproduce no extra steps are needed to reproduce the issue it seems that any macbook with a similar spec would be prone to this problem minimal reproduction project no response | 1 |
52,031 | 27,340,801,136 | IssuesEvent | 2023-02-26 19:22:02 | starlite-api/starlite | https://api.github.com/repos/starlite-api/starlite | closed | Refactor: Reduce reliance on `pydantic.BaseModel` | enhancement help wanted refactor performance | Internally we use pydantic models for the following types:
- `config.allowed_hosts.AllowedHostsConfig`
- `config.app.AppConfig`
- `config.cache.CacheConfig`
- `config.compression.CompressionConfig`
- `config.cors.CORSConfig`
- `config.csrf.CSRFConfig`
- `config.openapi.OpenAPIConfig`
- `config.static_files.StaticFilesConfig`
- `contrib.jwt.jwt_auth.OAuth2Login`
- `contrib.jwt.jwt_auth.Token`
- `contrib.opentelemetry.config.OpenTelemetryConfig`
- `middleware.logging.LoggingMiddlewareConfig`
- `middleware.rate_limit.RateLimitConfig`
- `middleware.session.base.BaseBackendConfig`
- `openapi.datastructures.ResponseSpec`
- `plugins.sql_alchemy.config.SQLAlchemySessionConfig`
- `plugins.sql_alchemy.config.SQLAlchemyEngineConfig`
- `plugins.sql_alchemy.config.SQLAlchemyConfig`
- `utils.exception.ExceptionResponseContent`
Any of these that do not explicitly rely on pydantic functionality for some reason should probably be migrated to dataclasses. | True | Refactor: Reduce reliance on `pydantic.BaseModel` - Internally we use pydantic models for the following types:
- `config.allowed_hosts.AllowedHostsConfig`
- `config.app.AppConfig`
- `config.cache.CacheConfig`
- `config.compression.CompressionConfig`
- `config.cors.CORSConfig`
- `config.csrf.CSRFConfig`
- `config.openapi.OpenAPIConfig`
- `config.static_files.StaticFilesConfig`
- `contrib.jwt.jwt_auth.OAuth2Login`
- `contrib.jwt.jwt_auth.Token`
- `contrib.opentelemetry.config.OpenTelemetryConfig`
- `middleware.logging.LoggingMiddlewareConfig`
- `middleware.rate_limit.RateLimitConfig`
- `middleware.session.base.BaseBackendConfig`
- `openapi.datastructures.ResponseSpec`
- `plugins.sql_alchemy.config.SQLAlchemySessionConfig`
- `plugins.sql_alchemy.config.SQLAlchemyEngineConfig`
- `plugins.sql_alchemy.config.SQLAlchemyConfig`
- `utils.exception.ExceptionResponseContent`
Any of these that do not explicitly rely on pydantic functionality for some reason should probably be migrated to dataclasses. | perf | refactor reduce reliance on pydantic basemodel internally we use pydantic models for the following types config allowed hosts allowedhostsconfig config app appconfig config cache cacheconfig config compression compressionconfig config cors corsconfig config csrf csrfconfig config openapi openapiconfig config static files staticfilesconfig contrib jwt jwt auth contrib jwt jwt auth token contrib opentelemetry config opentelemetryconfig middleware logging loggingmiddlewareconfig middleware rate limit ratelimitconfig middleware session base basebackendconfig openapi datastructures responsespec plugins sql alchemy config sqlalchemysessionconfig plugins sql alchemy config sqlalchemyengineconfig plugins sql alchemy config sqlalchemyconfig utils exception exceptionresponsecontent any of these that do not explicitly rely on pydantic functionality for some reason should probably be migrated to dataclasses | 1 |
24,447 | 12,299,584,010 | IssuesEvent | 2020-05-11 12:38:54 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | Mention changed file with `--verbose_explanations` | P3 team-Performance type: feature request | Please provide the following information. The more we know about your system and use case, the more easily and likely we can help.
### Description of the problem / feature request / question:
I have a mid-sized C++ codebase. I'd like to understand which header files are commonly being changed that require rebuilds of the ~entire codebase. I added `build --explain=bazel.log --verbose_explanations` to my `.bazelrc`, in the hopes of consulting `bazel.log` after rebuilds to determine which header change(s) necessitated the full rebuild.
However, the log file is just full of information like
```
Executing action 'Compiling common/counters.cc': One of the files has changed.
```
which doesn't tell me the information I actually want -- *which* dependency file changed?
Would it be possible (probably only under `--verbose_explanations`) for bazel to log *which* file has changed, or maybe a subset of them, if an exact list is too verbose or expensive somehow?
### If possible, provide a minimal example to reproduce the problem:
### Environment info
* Operating System:
Linux Ubuntu 16.04
* Bazel version (output of `bazel info release`):
```
release 0.8.0
```
* If `bazel info release` returns "development version" or "(@non-git)", please tell us what source tree you compiled Bazel from; git commit hash is appreciated (`git rev-parse HEAD`):
### Have you found anything relevant by searching the web?
(e.g. [StackOverflow answers](http://stackoverflow.com/questions/tagged/bazel),
[GitHub issues](https://github.com/bazelbuild/bazel/issues),
email threads on the [`bazel-discuss`](https://groups.google.com/forum/#!forum/bazel-discuss) Google group)
### Anything else, information or logs or outputs that would be helpful?
(If they are large, please upload as attachment or provide link).
| True | Mention changed file with `--verbose_explanations` - Please provide the following information. The more we know about your system and use case, the more easily and likely we can help.
### Description of the problem / feature request / question:
I have a mid-sized C++ codebase. I'd like to understand which header files are commonly being changed that require rebuilds of the ~entire codebase. I added `build --explain=bazel.log --verbose_explanations` to my `.bazelrc`, in the hopes of consulting `bazel.log` after rebuilds to determine which header change(s) necessitated the full rebuild.
However, the log file is just full of information like
```
Executing action 'Compiling common/counters.cc': One of the files has changed.
```
which doesn't tell me the information I actually want -- *which* dependency file changed?
Would it be possible (probably only under `--verbose_explanations`) for bazel to log *which* file has changed, or maybe a subset of them, if an exact list is too verbose or expensive somehow?
### If possible, provide a minimal example to reproduce the problem:
### Environment info
* Operating System:
Linux Ubuntu 16.04
* Bazel version (output of `bazel info release`):
```
release 0.8.0
```
* If `bazel info release` returns "development version" or "(@non-git)", please tell us what source tree you compiled Bazel from; git commit hash is appreciated (`git rev-parse HEAD`):
### Have you found anything relevant by searching the web?
(e.g. [StackOverflow answers](http://stackoverflow.com/questions/tagged/bazel),
[GitHub issues](https://github.com/bazelbuild/bazel/issues),
email threads on the [`bazel-discuss`](https://groups.google.com/forum/#!forum/bazel-discuss) Google group)
### Anything else, information or logs or outputs that would be helpful?
(If they are large, please upload as attachment or provide link).
| perf | mention changed file with verbose explanations please provide the following information the more we know about your system and use case the more easily and likely we can help description of the problem feature request question i have a mid sized c codebase i d like to understand which header files are commonly being changed that require rebuilds of the entire codebase i added build explain bazel log verbose explanations to my bazelrc in the hopes of consulting bazel log after rebuilds to determine which header change s necessitated the full rebuild however the log file is just full of information like executing action compiling common counters cc one of the files has changed which doesn t tell me the information i actually want which dependency file changed would it be possible probably only under verbose explanations for bazel to log which file has changed or maybe a subset of them if an exact list is too verbose or expensive somehow if possible provide a minimal example to reproduce the problem environment info operating system linux ubuntu bazel version output of bazel info release release if bazel info release returns development version or non git please tell us what source tree you compiled bazel from git commit hash is appreciated git rev parse head have you found anything relevant by searching the web e g email threads on the google group anything else information or logs or outputs that would be helpful if they are large please upload as attachment or provide link | 1 |
8,665 | 6,618,295,026 | IssuesEvent | 2017-09-21 07:31:47 | TorXakis/TorXakis | https://api.github.com/repos/TorXakis/TorXakis | opened | Optimal translation of STAUTDEF? | performace-improvement | When experimenting with MovingArms I made the following observation:
The following manual generated LPE
```
PROCDEF singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( pc :: Int ) ::=
[[ pc == 0 ]] =>> UpX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 1 )
## [[ pc == 0 ]] =>> DownX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 2 )
## [[ pc == 0 ]] =>> UpY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 3 )
## [[ pc == 0 ]] =>> DownY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 4 )
## [[ pc == 0 ]] =>> UpZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 5 )
## [[ pc == 0 ]] =>> DownZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 6 )
## [[ (pc == 1) \/ (pc == 2) ]] =>> StopX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 1) ]] =>> MaxX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 2) ]] =>> MinX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 3) \/ (pc == 4) ]] =>> StopY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 3) ]] =>> MaxY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 4) ]] =>> MinY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 5) \/ (pc == 6) ]] =>> StopZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 5) ]] =>> MaxZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 6) ]] =>> MinZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
ENDDEF
MODELDEF TestPurposeLPE ::=
CHAN IN UpX, DownX, StopX,
UpY, DownY, StopY,
UpZ, DownZ, StopZ
CHAN OUT MinX, MaxX,
MinY, MaxY,
MinZ, MaxZ
BEHAVIOUR
allowedBehaviour [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
|[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ]|
singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
ENDDEF
```
executes 100 steps in 17.7086647s
Yet, the comparable manual made STAUTDEF
```
STAUTDEF singleAxisMovementSTAUTDEF[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] () ::=
STATE idle, moveX, moveY, moveZ
VAR up :: Bool
INIT idle
TRANS idle -> UpX { up := True } -> moveX
idle -> DownX { up := False } -> moveX
idle -> UpY { up := True } -> moveY
idle -> DownY { up := False } -> moveY
idle -> UpZ { up := True } -> moveZ
idle -> DownZ { up := False } -> moveZ
moveX -> StopX -> idle
moveX -> MaxX [[ up ]] -> idle
moveX -> MinX [[ not( up ) ]] -> idle
moveY -> StopY -> idle
moveY -> MaxY [[ up ]] -> idle
moveY -> MinY [[ not( up ) ]] -> idle
moveZ -> StopZ -> idle
moveZ -> MaxZ [[ up ]] -> idle
moveZ -> MinZ [[ not( up ) ]] -> idle
ENDDEF
MODELDEF TestPurposeSTAUTDEF ::=
CHAN IN UpX, DownX, StopX,
UpY, DownY, StopY,
UpZ, DownZ, StopZ
CHAN OUT MinX, MaxX,
MinY, MaxY,
MinZ, MaxZ
BEHAVIOUR
allowedBehaviour [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
|[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ]|
singleAxisMovementSTAUTDEF [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
ENDDEF
```
takes longer 21.4246334s (with the same seed)
This observation raises the question: is the translation of STAUTDEF optimal?
For convenience the MovingArms.txs file and the commando files are attached below
[MovingArms.zip](https://github.com/TorXakis/TorXakis/files/1320302/MovingArms.zip)
| True | Optimal translation of STAUTDEF? - When experimenting with MovingArms I made the following observation:
The following manual generated LPE
```
PROCDEF singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( pc :: Int ) ::=
[[ pc == 0 ]] =>> UpX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 1 )
## [[ pc == 0 ]] =>> DownX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 2 )
## [[ pc == 0 ]] =>> UpY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 3 )
## [[ pc == 0 ]] =>> DownY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 4 )
## [[ pc == 0 ]] =>> UpZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 5 )
## [[ pc == 0 ]] =>> DownZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 6 )
## [[ (pc == 1) \/ (pc == 2) ]] =>> StopX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 1) ]] =>> MaxX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 2) ]] =>> MinX >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 3) \/ (pc == 4) ]] =>> StopY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 3) ]] =>> MaxY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 4) ]] =>> MinY >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 5) \/ (pc == 6) ]] =>> StopZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 5) ]] =>> MaxZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
## [[ (pc == 6) ]] =>> MinZ >-> singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX, UpY, DownY, StopY, MinY, MaxY, UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
ENDDEF
MODELDEF TestPurposeLPE ::=
CHAN IN UpX, DownX, StopX,
UpY, DownY, StopY,
UpZ, DownZ, StopZ
CHAN OUT MinX, MaxX,
MinY, MaxY,
MinZ, MaxZ
BEHAVIOUR
allowedBehaviour [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
|[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ]|
singleAxisMovementLPE [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( 0 )
ENDDEF
```
executes 100 steps in 17.7086647s
Yet, the comparable manual made STAUTDEF
```
STAUTDEF singleAxisMovementSTAUTDEF[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] () ::=
STATE idle, moveX, moveY, moveZ
VAR up :: Bool
INIT idle
TRANS idle -> UpX { up := True } -> moveX
idle -> DownX { up := False } -> moveX
idle -> UpY { up := True } -> moveY
idle -> DownY { up := False } -> moveY
idle -> UpZ { up := True } -> moveZ
idle -> DownZ { up := False } -> moveZ
moveX -> StopX -> idle
moveX -> MaxX [[ up ]] -> idle
moveX -> MinX [[ not( up ) ]] -> idle
moveY -> StopY -> idle
moveY -> MaxY [[ up ]] -> idle
moveY -> MinY [[ not( up ) ]] -> idle
moveZ -> StopZ -> idle
moveZ -> MaxZ [[ up ]] -> idle
moveZ -> MinZ [[ not( up ) ]] -> idle
ENDDEF
MODELDEF TestPurposeSTAUTDEF ::=
CHAN IN UpX, DownX, StopX,
UpY, DownY, StopY,
UpZ, DownZ, StopZ
CHAN OUT MinX, MaxX,
MinY, MaxY,
MinZ, MaxZ
BEHAVIOUR
allowedBehaviour [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
|[ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ]|
singleAxisMovementSTAUTDEF [ UpX, DownX, StopX, MinX, MaxX,
UpY, DownY, StopY, MinY, MaxY,
UpZ, DownZ, StopZ, MinZ, MaxZ] ( )
ENDDEF
```
takes longer 21.4246334s (with the same seed)
This observation raises the question: is the translation of STAUTDEF optimal?
For convenience the MovingArms.txs file and the commando files are attached below
[MovingArms.zip](https://github.com/TorXakis/TorXakis/files/1320302/MovingArms.zip)
| perf | optimal translation of stautdef when experimenting with movingarms i made the following observation the following manual generated lpe procdef singleaxismovementlpe upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz pc int upx singleaxismovementlpe downx singleaxismovementlpe upy singleaxismovementlpe downy singleaxismovementlpe upz singleaxismovementlpe downz singleaxismovementlpe stopx singleaxismovementlpe maxx singleaxismovementlpe minx singleaxismovementlpe stopy singleaxismovementlpe maxy singleaxismovementlpe miny singleaxismovementlpe stopz singleaxismovementlpe maxz singleaxismovementlpe minz singleaxismovementlpe enddef modeldef testpurposelpe chan in upx downx stopx upy downy stopy upz downz stopz chan out minx maxx miny maxy minz maxz behaviour allowedbehaviour upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz singleaxismovementlpe upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz enddef executes steps in yet the comparable manual made stautdef stautdef singleaxismovementstautdef upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz state idle movex movey movez var up bool init idle trans idle upx up true movex idle downx up false movex idle upy up true movey idle downy up false movey idle upz up true movez idle downz up false movez movex stopx idle movex maxx idle movex minx idle movey stopy idle movey maxy idle movey miny idle movez stopz idle movez maxz idle movez minz idle enddef modeldef testpurposestautdef chan in upx downx stopx upy downy stopy upz downz stopz chan out minx maxx miny maxy minz maxz behaviour allowedbehaviour upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz singleaxismovementstautdef upx downx stopx minx maxx upy downy stopy miny maxy upz downz stopz minz maxz enddef takes longer with the same seed this observation raises the question is the translation of stautdef optimal for convenience the movingarms txs file and the commando files are attached below | 1 |
53,625 | 28,316,304,433 | IssuesEvent | 2023-04-10 19:56:30 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | Service startup is slow to load Hdfs files concurrently | performance | (you don't have to strictly follow this form)
**Describe the situation**
Using Hdfs Disk, when there are many tables and parts, the service startup will lead to high system cpu, and the startup time is very long.
**How to reproduce**
1. Import a certain number of parts and write to hdfs.

2. Change the merge thread pool to 0 to prevent the part from being merged after restarting.
3. Set the max_part_loading_threads parameter to 300.
4. Restarting the System cpu will be full.

* Which ClickHouse server version to use
v21.8.15.1-lts
| True | Service startup is slow to load Hdfs files concurrently - (you don't have to strictly follow this form)
**Describe the situation**
Using Hdfs Disk, when there are many tables and parts, the service startup will lead to high system cpu, and the startup time is very long.
**How to reproduce**
1. Import a certain number of parts and write to hdfs.

2. Change the merge thread pool to 0 to prevent the part from being merged after restarting.
3. Set the max_part_loading_threads parameter to 300.
4. Restarting the System cpu will be full.

* Which ClickHouse server version to use
v21.8.15.1-lts
| perf | service startup is slow to load hdfs files concurrently you don t have to strictly follow this form describe the situation using hdfs disk when there are many tables and parts the service startup will lead to high system cpu and the startup time is very long how to reproduce import a certain number of parts and write to hdfs change the merge thread pool to to prevent the part from being merged after restarting set the max part loading threads parameter to restarting the system cpu will be full which clickhouse server version to use lts | 1 |
28,071 | 13,523,403,737 | IssuesEvent | 2020-09-15 09:54:47 | prisma/docs | https://api.github.com/repos/prisma/docs | closed | Clicking on some links is sometimes really slow | topic: performance website/improvement | It happens to me sometimes that I click on in the sidebar menu on a link and it takes “a while” to update the page, see example recording:

- I click on some links it works fast enough for me until…
- I click on “REST API”, only once (timestamp 8sec)
- And wait for rendering… 7 seconds later :slow-parrot:
This happens kind of randomly but only once per link, looking at the Dev Tools I see it loads:
`/docs/page-data/app-data.json`
`/docs/page-data/guides/upgrade-guides/upgrade-from-prisma-1/upgrading-a-rest-api/page-data.json`
I forgot to open Dev Tools to record what was happening, it could be Netlify slow response or client side rendering taking a long time somehow?
Tested on Firefox Developer Edition 79.0b3 (64-bit) on macOS Catalina | True | Clicking on some links is sometimes really slow - It happens to me sometimes that I click on in the sidebar menu on a link and it takes “a while” to update the page, see example recording:

- I click on some links it works fast enough for me until…
- I click on “REST API”, only once (timestamp 8sec)
- And wait for rendering… 7 seconds later :slow-parrot:
This happens kind of randomly but only once per link, looking at the Dev Tools I see it loads:
`/docs/page-data/app-data.json`
`/docs/page-data/guides/upgrade-guides/upgrade-from-prisma-1/upgrading-a-rest-api/page-data.json`
I forgot to open Dev Tools to record what was happening, it could be Netlify slow response or client side rendering taking a long time somehow?
Tested on Firefox Developer Edition 79.0b3 (64-bit) on macOS Catalina | perf | clicking on some links is sometimes really slow it happens to me sometimes that i click on in the sidebar menu on a link and it takes “a while” to update the page see example recording i click on some links it works fast enough for me until… i click on “rest api” only once timestamp and wait for rendering… seconds later slow parrot this happens kind of randomly but only once per link looking at the dev tools i see it loads docs page data app data json docs page data guides upgrade guides upgrade from prisma upgrading a rest api page data json i forgot to open dev tools to record what was happening it could be netlify slow response or client side rendering taking a long time somehow tested on firefox developer edition bit on macos catalina | 1 |
27,350 | 13,229,568,630 | IssuesEvent | 2020-08-18 08:23:31 | PLhery/unfollowNinja | https://api.github.com/repos/PLhery/unfollowNinja | closed | Store followers usernames in postresql instead of redis | performance | All twittos usernames are currently stored in redis, to retrieve them if some of them leave Twitter.
I estimated that these were using 25% of redis space.
There is not that much read/write operations so there is no reason to store these in an in-memory db.
(We should copy/keep/update our users's usernames in redis though, as these are currently accessed ~10 000 times/minute) | True | Store followers usernames in postresql instead of redis - All twittos usernames are currently stored in redis, to retrieve them if some of them leave Twitter.
I estimated that these were using 25% of redis space.
There is not that much read/write operations so there is no reason to store these in an in-memory db.
(We should copy/keep/update our users's usernames in redis though, as these are currently accessed ~10 000 times/minute) | perf | store followers usernames in postresql instead of redis all twittos usernames are currently stored in redis to retrieve them if some of them leave twitter i estimated that these were using of redis space there is not that much read write operations so there is no reason to store these in an in memory db we should copy keep update our users s usernames in redis though as these are currently accessed times minute | 1 |
33,945 | 16,325,812,941 | IssuesEvent | 2021-05-12 00:55:27 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | torch.one_hot causes multiple DeviceToHost transfers when input tensor is a cuda tensor | module: cuda module: performance triaged | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
x = torch.tensor([1, 2, 3]).long().cuda()
F.one_hot(x, 4)
```

One example:
https://github.com/pytorch/pytorch/blob/release/1.8/aten/src/ATen/native/Onehot.cpp#L21
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
No DeviceToHost transfers. There are multiple .item() calls in the ATen one_hot implementation. These are mostly unnecessary except for when `num_classes=-1`
.
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
cc @ngimel @VitalyFedyunin | True | torch.one_hot causes multiple DeviceToHost transfers when input tensor is a cuda tensor - ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
x = torch.tensor([1, 2, 3]).long().cuda()
F.one_hot(x, 4)
```

One example:
https://github.com/pytorch/pytorch/blob/release/1.8/aten/src/ATen/native/Onehot.cpp#L21
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
No DeviceToHost transfers. There are multiple .item() calls in the ATen one_hot implementation. These are mostly unnecessary except for when `num_classes=-1`
.
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
cc @ngimel @VitalyFedyunin | perf | torch one hot causes multiple devicetohost transfers when input tensor is a cuda tensor 🐛 bug to reproduce steps to reproduce the behavior x torch tensor long cuda f one hot x one example expected behavior no devicetohost transfers there are multiple item calls in the aten one hot implementation these are mostly unnecessary except for when num classes environment please copy and paste the output from our or fill out the checklist below manually you can get the script and run it with wget for security purposes please check the contents of collect env py before running it python collect env py pytorch version e g os e g linux how you installed pytorch conda pip source build command you used if compiling from source python version cuda cudnn version gpu models and configuration any other relevant information additional context cc ngimel vitalyfedyunin | 1 |
517,791 | 15,020,012,515 | IssuesEvent | 2021-02-01 14:14:10 | mozilla-lockwise/lockwise-ios | https://api.github.com/repos/mozilla-lockwise/lockwise-ios | reopened | Include explicit message when forcing users to re-authenticate due to FxA error | archived feature needs-content priority-P2 | The UX for this ticket will be displaying a dialog on the welcome screen when forcing users to sign in despite a saved account instance.
This could look like "Your local account has been corrupted, please login again" or something. We've previously had behavior like this when we forced all users to migrate to OAuth. | 1.0 | Include explicit message when forcing users to re-authenticate due to FxA error - The UX for this ticket will be displaying a dialog on the welcome screen when forcing users to sign in despite a saved account instance.
This could look like "Your local account has been corrupted, please login again" or something. We've previously had behavior like this when we forced all users to migrate to OAuth. | non_perf | include explicit message when forcing users to re authenticate due to fxa error the ux for this ticket will be displaying a dialog on the welcome screen when forcing users to sign in despite a saved account instance this could look like your local account has been corrupted please login again or something we ve previously had behavior like this when we forced all users to migrate to oauth | 0 |
6,619 | 5,544,358,870 | IssuesEvent | 2017-03-22 18:56:55 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | closed | implement parallel copy from NFS (or IFS) to HDFS | performance Spark | A lot of data we have lives on NFS (or underlying IFS - Isilon FS). Copying files in and out is a bottleneck and a pain. This ticket for an implementation of a parallel copy of a BAM/CRAM file to HDFS (sharded or unsharded)
| True | implement parallel copy from NFS (or IFS) to HDFS - A lot of data we have lives on NFS (or underlying IFS - Isilon FS). Copying files in and out is a bottleneck and a pain. This ticket for an implementation of a parallel copy of a BAM/CRAM file to HDFS (sharded or unsharded)
| perf | implement parallel copy from nfs or ifs to hdfs a lot of data we have lives on nfs or underlying ifs isilon fs copying files in and out is a bottleneck and a pain this ticket for an implementation of a parallel copy of a bam cram file to hdfs sharded or unsharded | 1 |
131,021 | 27,812,299,414 | IssuesEvent | 2023-03-18 09:06:11 | trunisam/codeboard | https://api.github.com/repos/trunisam/codeboard | opened | Reiter für Helpersysteme | Codeboard | Der Zugriff auf die Helpersysteme (Tipps und Compiler-Meldungen) soll über entsprechende Reiter in der rechten Navigationsleiste des Codeboards gewährleistet sein. | 1.0 | Reiter für Helpersysteme - Der Zugriff auf die Helpersysteme (Tipps und Compiler-Meldungen) soll über entsprechende Reiter in der rechten Navigationsleiste des Codeboards gewährleistet sein. | non_perf | reiter für helpersysteme der zugriff auf die helpersysteme tipps und compiler meldungen soll über entsprechende reiter in der rechten navigationsleiste des codeboards gewährleistet sein | 0 |
36,702 | 17,869,033,804 | IssuesEvent | 2021-09-06 13:11:05 | kframework/kore | https://api.github.com/repos/kframework/kore | closed | Investigate long running z3 commands. | type: investigation type: performance | While working on https://github.com/kframework/evm-semantics/pull/1102, I noticed that the proof takes roughly an hour to complete. About 40 minutes are spent waiting for z3.
So far, I've recorded an smt-transcript, which shows some suspicious behavior. There are multiple very long sequences of `declare-fun`-commands followed by one `assert`-command with one giant conjunction over all the declared symbols:
~~~lisp
(declare-fun <0> () Bool )
; success
(declare-fun <1> () Bool )
; success
; ...
(declare-fun <9183> () Bool )
; success
(assert (and <2> (and <3> (and <4> .... (and <9182> <9183>) ... ))))
~~~
I am not sure if this is where z3 spends most of its 40 minutes, but it seems suspicious.
[z3.1.log](https://github.com/kframework/kore/files/6929746/z3.1.log)
| True | Investigate long running z3 commands. - While working on https://github.com/kframework/evm-semantics/pull/1102, I noticed that the proof takes roughly an hour to complete. About 40 minutes are spent waiting for z3.
So far, I've recorded an smt-transcript, which shows some suspicious behavior. There are multiple very long sequences of `declare-fun`-commands followed by one `assert`-command with one giant conjunction over all the declared symbols:
~~~lisp
(declare-fun <0> () Bool )
; success
(declare-fun <1> () Bool )
; success
; ...
(declare-fun <9183> () Bool )
; success
(assert (and <2> (and <3> (and <4> .... (and <9182> <9183>) ... ))))
~~~
I am not sure if this is where z3 spends most of its 40 minutes, but it seems suspicious.
[z3.1.log](https://github.com/kframework/kore/files/6929746/z3.1.log)
| perf | investigate long running commands while working on i noticed that the proof takes roughly an hour to complete about minutes are spent waiting for so far i ve recorded an smt transcript which shows some suspicious behavior there are multiple very long sequences of declare fun commands followed by one assert command with one giant conjunction over all the declared symbols lisp declare fun bool success declare fun bool success declare fun bool success assert and and and and i am not sure if this is where spends most of its minutes but it seems suspicious | 1 |
53,444 | 28,131,451,712 | IssuesEvent | 2023-04-01 00:00:03 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | [Performance] XmlSerializationWriter.WriteTypedPrimitive is converting primitive types to string before writing them to the XmlWriter | area-Serialization tenet-performance |
### Description
Possible optimization found.
Current implementation of writing primitive types fallbacks to string using XmlConvert.ToString(..).
https://github.com/dotnet/runtime/blob/5c8aade5384cbad2d086e7fae482ba0b692d3601/src/libraries/System.Private.Xml/src/System/Xml/Serialization/XmlSerializationWriter.cs#L247
### Regression?
No
### Analysis
How to optimize:
Write to char[] and then use XmlWriter.WriteChars(...). This should eliminate all temporary string for int/DateTime/guid.... types, which should be majority when serializing custom types. This change will help us later to use XmlSerializationWriter.WriteTypedPrimitive method in XmlSerializationWriterILGen instead of again creating strings and using XmlConvert.ToString(...)
| True | [Performance] XmlSerializationWriter.WriteTypedPrimitive is converting primitive types to string before writing them to the XmlWriter -
### Description
Possible optimization found.
Current implementation of writing primitive types fallbacks to string using XmlConvert.ToString(..).
https://github.com/dotnet/runtime/blob/5c8aade5384cbad2d086e7fae482ba0b692d3601/src/libraries/System.Private.Xml/src/System/Xml/Serialization/XmlSerializationWriter.cs#L247
### Regression?
No
### Analysis
How to optimize:
Write to char[] and then use XmlWriter.WriteChars(...). This should eliminate all temporary string for int/DateTime/guid.... types, which should be majority when serializing custom types. This change will help us later to use XmlSerializationWriter.WriteTypedPrimitive method in XmlSerializationWriterILGen instead of again creating strings and using XmlConvert.ToString(...)
| perf | xmlserializationwriter writetypedprimitive is converting primitive types to string before writing them to the xmlwriter description possible optimization found current implementation of writing primitive types fallbacks to string using xmlconvert tostring regression no analysis how to optimize write to char and then use xmlwriter writechars this should eliminate all temporary string for int datetime guid types which should be majority when serializing custom types this change will help us later to use xmlserializationwriter writetypedprimitive method in xmlserializationwriterilgen instead of again creating strings and using xmlconvert tostring | 1 |
56,614 | 32,077,940,429 | IssuesEvent | 2023-09-25 12:16:37 | getsentry/sentry-python | https://api.github.com/repos/getsentry/sentry-python | closed | Add Redis Pipeline to RedisIntegration | enhancement Help wanted Feature: Performance Integration: Redis | Hi! Pipeline in redis-py has also have method [execute_command](https://github.com/andymccurdy/redis-py/blob/e19a76c58f2a998d86e51c5a2a0f1db37563efce/redis/client.py#L1531). Can you add redis.client.Pipeline to patching in RedisIntegration? It work: all Z* command on screenshot from one pipeline.

And maybe PubSub. | True | Add Redis Pipeline to RedisIntegration - Hi! Pipeline in redis-py has also have method [execute_command](https://github.com/andymccurdy/redis-py/blob/e19a76c58f2a998d86e51c5a2a0f1db37563efce/redis/client.py#L1531). Can you add redis.client.Pipeline to patching in RedisIntegration? It work: all Z* command on screenshot from one pipeline.

And maybe PubSub. | perf | add redis pipeline to redisintegration hi pipeline in redis py has also have method can you add redis client pipeline to patching in redisintegration it work all z command on screenshot from one pipeline and maybe pubsub | 1 |
11,717 | 7,680,172,490 | IssuesEvent | 2018-05-16 00:00:09 | Microsoft/DirectXShaderCompiler | https://api.github.com/repos/Microsoft/DirectXShaderCompiler | closed | Slow PCF shader performance with dxc vs. fxc | performance | Hello,
Now that I have the shader from issue #604 compiling, I've noticed that the performance of the DXIL version is much worse than when the same shader is compiled to DXBC. When testing with the experimental branch of [this project](https://github.com/TheRealMJP/DeferredTexturing), I'm getting about 20.40 milliseconds to complete the forward path, vs about 3.20ms when running the DXBC version. This was measured on an Nvidia GTX 1070 running driver 384.94.
You can compile and run the project yourself if you'd like, or if you'd like to look at the compiler output I've attached the pre-processed code, compiled DXBC, and compiled DXIL for the main pixel shader:
[Mesh_PP.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262512/Mesh_PP.txt)
[Mesh_DXBC.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262513/Mesh_DXBC.txt)
[Mesh_DXIL.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262514/Mesh_DXIL.txt)
I had a quick look myself through the DXIL, and I didn't see anything immediately bad. I was initially concerned that perhaps the looks in the PCF kernel hadn't been unrolled, but it looks like that was handled properly.
Thanks in advance, and please let me know if I can provide any additional information (such as PIX captures, or pre-compiled binaries).
| True | Slow PCF shader performance with dxc vs. fxc - Hello,
Now that I have the shader from issue #604 compiling, I've noticed that the performance of the DXIL version is much worse than when the same shader is compiled to DXBC. When testing with the experimental branch of [this project](https://github.com/TheRealMJP/DeferredTexturing), I'm getting about 20.40 milliseconds to complete the forward path, vs about 3.20ms when running the DXBC version. This was measured on an Nvidia GTX 1070 running driver 384.94.
You can compile and run the project yourself if you'd like, or if you'd like to look at the compiler output I've attached the pre-processed code, compiled DXBC, and compiled DXIL for the main pixel shader:
[Mesh_PP.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262512/Mesh_PP.txt)
[Mesh_DXBC.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262513/Mesh_DXBC.txt)
[Mesh_DXIL.txt](https://github.com/Microsoft/DirectXShaderCompiler/files/1262514/Mesh_DXIL.txt)
I had a quick look myself through the DXIL, and I didn't see anything immediately bad. I was initially concerned that perhaps the looks in the PCF kernel hadn't been unrolled, but it looks like that was handled properly.
Thanks in advance, and please let me know if I can provide any additional information (such as PIX captures, or pre-compiled binaries).
| perf | slow pcf shader performance with dxc vs fxc hello now that i have the shader from issue compiling i ve noticed that the performance of the dxil version is much worse than when the same shader is compiled to dxbc when testing with the experimental branch of i m getting about milliseconds to complete the forward path vs about when running the dxbc version this was measured on an nvidia gtx running driver you can compile and run the project yourself if you d like or if you d like to look at the compiler output i ve attached the pre processed code compiled dxbc and compiled dxil for the main pixel shader i had a quick look myself through the dxil and i didn t see anything immediately bad i was initially concerned that perhaps the looks in the pcf kernel hadn t been unrolled but it looks like that was handled properly thanks in advance and please let me know if i can provide any additional information such as pix captures or pre compiled binaries | 1 |
50,901 | 26,834,963,306 | IssuesEvent | 2023-02-02 18:44:27 | runtimeverification/haskell-backend | https://api.github.com/repos/runtimeverification/haskell-backend | closed | Compute function call graph and memoize non-recursive functions | investigation performance | Following @virgil-serbanuta's issue https://github.com/kframework/kore/issues/1520, I did some experiment:
To run 10 steps,
without `memo` attribute added to `#computeValidJumpDest`, the runtime is 2m9s.
After adding the `memo` attribute, the runtime goes down to 59s.
There seems a regression on enabling caching for all functions of constructor-like arguments. | True | Compute function call graph and memoize non-recursive functions - Following @virgil-serbanuta's issue https://github.com/kframework/kore/issues/1520, I did some experiment:
To run 10 steps,
without `memo` attribute added to `#computeValidJumpDest`, the runtime is 2m9s.
After adding the `memo` attribute, the runtime goes down to 59s.
There seems a regression on enabling caching for all functions of constructor-like arguments. | perf | compute function call graph and memoize non recursive functions following virgil serbanuta s issue i did some experiment to run steps without memo attribute added to computevalidjumpdest the runtime is after adding the memo attribute the runtime goes down to there seems a regression on enabling caching for all functions of constructor like arguments | 1 |
19,653 | 10,478,416,534 | IssuesEvent | 2019-09-23 23:57:35 | 0xProject/starkcrypto | https://api.github.com/repos/0xProject/starkcrypto | opened | We can use macro generated specializations till then. | performance tracker | *On 2019-08-26 @Recmo wrote in [`996bf59`](https://github.com/0xProject/starkcrypto/commit/996bf5959c56749af107d7cfd2b8b6a06dc6a84a) “Add comments”:*
We can use macro generated specializations till then.
```rust
debug_assert!(divisor.len() >= 2);
debug_assert!(numerator.len() > divisor.len());
debug_assert!(*divisor.last().unwrap() > 0);
debug_assert!(*numerator.last().unwrap() == 0);
// OPT: Once const generics are in, unroll for lengths.
// OPT: We can use macro generated specializations till then.
let n = divisor.len();
let m = numerator.len() - n - 1;
// D1. Normalize.
let shift = divisor[n - 1].leading_zeros();
```
*From [`algebra/u256/src/division.rs:102`](https://github.com/0xProject/starkcrypto/blob/96ec7dde35194aefec933e560fc28a69090e8e9b/algebra/u256/src/division.rs#L102)*
<!--{"commit-hash": "996bf5959c56749af107d7cfd2b8b6a06dc6a84a", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1566843820, "author-tz": "-0700", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1566843820, "committer-tz": "-0700", "summary": "Add comments", "previous": "5573d5a660682ab30bcc879a6ab601a90f82eca6 u256/src/division.rs", "filename": "algebra/u256/src/division.rs", "line": 101, "line_end": 102, "kind": "OPT", "issue": "We can use macro generated specializations till then.", "head": "We can use macro generated specializations till then.", "context": " debug_assert!(divisor.len() >= 2);\n debug_assert!(numerator.len() > divisor.len());\n debug_assert!(*divisor.last().unwrap() > 0);\n debug_assert!(*numerator.last().unwrap() == 0);\n // OPT: Once const generics are in, unroll for lengths.\n // OPT: We can use macro generated specializations till then.\n let n = divisor.len();\n let m = numerator.len() - n - 1;\n\n // D1. Normalize.\n let shift = divisor[n - 1].leading_zeros();\n", "repo": "0xProject/starkcrypto", "branch-hash": "96ec7dde35194aefec933e560fc28a69090e8e9b"}--> | True | We can use macro generated specializations till then. - *On 2019-08-26 @Recmo wrote in [`996bf59`](https://github.com/0xProject/starkcrypto/commit/996bf5959c56749af107d7cfd2b8b6a06dc6a84a) “Add comments”:*
We can use macro generated specializations till then.
```rust
debug_assert!(divisor.len() >= 2);
debug_assert!(numerator.len() > divisor.len());
debug_assert!(*divisor.last().unwrap() > 0);
debug_assert!(*numerator.last().unwrap() == 0);
// OPT: Once const generics are in, unroll for lengths.
// OPT: We can use macro generated specializations till then.
let n = divisor.len();
let m = numerator.len() - n - 1;
// D1. Normalize.
let shift = divisor[n - 1].leading_zeros();
```
*From [`algebra/u256/src/division.rs:102`](https://github.com/0xProject/starkcrypto/blob/96ec7dde35194aefec933e560fc28a69090e8e9b/algebra/u256/src/division.rs#L102)*
<!--{"commit-hash": "996bf5959c56749af107d7cfd2b8b6a06dc6a84a", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1566843820, "author-tz": "-0700", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1566843820, "committer-tz": "-0700", "summary": "Add comments", "previous": "5573d5a660682ab30bcc879a6ab601a90f82eca6 u256/src/division.rs", "filename": "algebra/u256/src/division.rs", "line": 101, "line_end": 102, "kind": "OPT", "issue": "We can use macro generated specializations till then.", "head": "We can use macro generated specializations till then.", "context": " debug_assert!(divisor.len() >= 2);\n debug_assert!(numerator.len() > divisor.len());\n debug_assert!(*divisor.last().unwrap() > 0);\n debug_assert!(*numerator.last().unwrap() == 0);\n // OPT: Once const generics are in, unroll for lengths.\n // OPT: We can use macro generated specializations till then.\n let n = divisor.len();\n let m = numerator.len() - n - 1;\n\n // D1. Normalize.\n let shift = divisor[n - 1].leading_zeros();\n", "repo": "0xProject/starkcrypto", "branch-hash": "96ec7dde35194aefec933e560fc28a69090e8e9b"}--> | perf | we can use macro generated specializations till then on recmo wrote in “add comments” we can use macro generated specializations till then rust debug assert divisor len debug assert numerator len divisor len debug assert divisor last unwrap debug assert numerator last unwrap opt once const generics are in unroll for lengths opt we can use macro generated specializations till then let n divisor len let m numerator len n normalize let shift divisor leading zeros from author time author tz committer remco bloemen committer mail committer time committer tz summary add comments previous src division rs filename algebra src division rs line line end kind opt issue we can use macro generated specializations till then head we can use macro generated specializations till then context debug assert divisor len n debug assert numerator len divisor len n debug assert divisor last unwrap n debug assert numerator last unwrap n opt once const generics are in unroll for lengths n opt we can use macro generated specializations till then n let n divisor len n let m numerator len n n n normalize n let shift divisor leading zeros n repo starkcrypto branch hash | 1 |
68,338 | 17,257,904,397 | IssuesEvent | 2021-07-22 00:18:23 | o3de/o3de | https://api.github.com/repos/o3de/o3de | closed | PhysX Gem can't be used as build dependency in engine SDK Part 2 | kind/bug needs-triage sig/build | **Describe the bug**
This is a continuation of this bug: https://github.com/o3de/o3de/issues/1971
Adding the PhysX Gem as a build dependency to an empty projects leads to the following build errors:
```
-- Build files have been written to: D:/dev/open3d/projects/PhysXTestProject/build/windows_vs2019
Microsoft (R) Build Engine version 16.10.2+857e5a733 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Checking Build System
PhysXTestProjectSystemComponent.cpp
PhysXTestProject.Static.vcxproj -> D:\dev\open3d\projects\PhysXTestProject\build\windows_vs2019\lib\profile\PhysXTestProject.Static.lib
PhysXTestProjectModule.cpp
LINK : fatal error LNK1104: cannot open file '$<TARGET_PROPERTY:LmbrCentral,INTERFACE_LINK_LIBRARIES>.obj' [D:\dev\open3d\projects\PhysXTestProject\build\windows_vs2019\o3de\PhysXTestProject-84ec4707\Code\PhysXTestProject.vcxproj]
```
**To Reproduce**
The same as last time:
1. Build the engine as an SDK
2. Create a new project from the Project Manager
3. In the projects CMakeLists.txt, add `Gem::PhysX` to BUILD_DEPENDENCIES:
```
ly_add_target(
NAME PhysXTest.Static STATIC
NAMESPACE Gem
FILES_CMAKE
physxtest_files.cmake
${pal_dir}/physxtest_${PAL_PLATFORM_NAME_LOWERCASE}_files.cmake
INCLUDE_DIRECTORIES
PUBLIC
Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzGameFramework
Gem::Atom_AtomBridge.Static
Gem::PhysX
)
```
4. Build the project in the Project Manager or Visual Studio.
**Expected behavior**
No build errors.
**Additional context**
Commenting out all lines containing `$<TARGET_PROPERTY:LmbrCentral,INTERFACE_LINK_LIBRARIES>` in `o3de-install\Gems\PhysX\Code\CMakeLists.txt` fixes the problem (4 lines in my case). The Gem works after this change. | 1.0 | PhysX Gem can't be used as build dependency in engine SDK Part 2 - **Describe the bug**
This is a continuation of this bug: https://github.com/o3de/o3de/issues/1971
Adding the PhysX Gem as a build dependency to an empty projects leads to the following build errors:
```
-- Build files have been written to: D:/dev/open3d/projects/PhysXTestProject/build/windows_vs2019
Microsoft (R) Build Engine version 16.10.2+857e5a733 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Checking Build System
PhysXTestProjectSystemComponent.cpp
PhysXTestProject.Static.vcxproj -> D:\dev\open3d\projects\PhysXTestProject\build\windows_vs2019\lib\profile\PhysXTestProject.Static.lib
PhysXTestProjectModule.cpp
LINK : fatal error LNK1104: cannot open file '$<TARGET_PROPERTY:LmbrCentral,INTERFACE_LINK_LIBRARIES>.obj' [D:\dev\open3d\projects\PhysXTestProject\build\windows_vs2019\o3de\PhysXTestProject-84ec4707\Code\PhysXTestProject.vcxproj]
```
**To Reproduce**
The same as last time:
1. Build the engine as an SDK
2. Create a new project from the Project Manager
3. In the projects CMakeLists.txt, add `Gem::PhysX` to BUILD_DEPENDENCIES:
```
ly_add_target(
NAME PhysXTest.Static STATIC
NAMESPACE Gem
FILES_CMAKE
physxtest_files.cmake
${pal_dir}/physxtest_${PAL_PLATFORM_NAME_LOWERCASE}_files.cmake
INCLUDE_DIRECTORIES
PUBLIC
Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzGameFramework
Gem::Atom_AtomBridge.Static
Gem::PhysX
)
```
4. Build the project in the Project Manager or Visual Studio.
**Expected behavior**
No build errors.
**Additional context**
Commenting out all lines containing `$<TARGET_PROPERTY:LmbrCentral,INTERFACE_LINK_LIBRARIES>` in `o3de-install\Gems\PhysX\Code\CMakeLists.txt` fixes the problem (4 lines in my case). The Gem works after this change. | non_perf | physx gem can t be used as build dependency in engine sdk part describe the bug this is a continuation of this bug adding the physx gem as a build dependency to an empty projects leads to the following build errors build files have been written to d dev projects physxtestproject build windows microsoft r build engine version for net framework copyright c microsoft corporation all rights reserved checking build system physxtestprojectsystemcomponent cpp physxtestproject static vcxproj d dev projects physxtestproject build windows lib profile physxtestproject static lib physxtestprojectmodule cpp link fatal error cannot open file obj to reproduce the same as last time build the engine as an sdk create a new project from the project manager in the projects cmakelists txt add gem physx to build dependencies ly add target name physxtest static static namespace gem files cmake physxtest files cmake pal dir physxtest pal platform name lowercase files cmake include directories public include build dependencies private az azgameframework gem atom atombridge static gem physx build the project in the project manager or visual studio expected behavior no build errors additional context commenting out all lines containing in install gems physx code cmakelists txt fixes the problem lines in my case the gem works after this change | 0 |
8,116 | 6,414,770,988 | IssuesEvent | 2017-08-08 11:03:08 | fabd/kanji-koohii | https://api.github.com/repos/fabd/kanji-koohii | opened | Refactor the Study search dropdown to VueJS | help-wanted performance refactor vuejs | The old "autocomplete" Javascript needs to be removed and replaced with a (likely) smaller implementation in VueJS.
## Goals
1. pre-requisite to consider improvements to the search box functionality #9
2. may improve response on mobile (we have to include Vue on all pages anyway, so in theory we end up with less Javascript). The dropdown functionality can then be included as part of the Study page Javascript bundle.
**Refactoring** search box to Vue is a client-side / front-end task, which is suitable for contribution.
## Notes
Until this is solved, #9 is on hold. | True | Refactor the Study search dropdown to VueJS - The old "autocomplete" Javascript needs to be removed and replaced with a (likely) smaller implementation in VueJS.
## Goals
1. pre-requisite to consider improvements to the search box functionality #9
2. may improve response on mobile (we have to include Vue on all pages anyway, so in theory we end up with less Javascript). The dropdown functionality can then be included as part of the Study page Javascript bundle.
**Refactoring** search box to Vue is a client-side / front-end task, which is suitable for contribution.
## Notes
Until this is solved, #9 is on hold. | perf | refactor the study search dropdown to vuejs the old autocomplete javascript needs to be removed and replaced with a likely smaller implementation in vuejs goals pre requisite to consider improvements to the search box functionality may improve response on mobile we have to include vue on all pages anyway so in theory we end up with less javascript the dropdown functionality can then be included as part of the study page javascript bundle refactoring search box to vue is a client side front end task which is suitable for contribution notes until this is solved is on hold | 1 |
30,590 | 14,613,470,895 | IssuesEvent | 2020-12-22 08:17:51 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | GetHostAddressesAsync on Arm64 | tenet-performance | ### Description
```
[InProcess]
public class DnsBenchmark
{
[Benchmark]
public Task<IPAddress[]> GetHostAddressesAsync() => Dns.GetHostAddressesAsync("microsoft.com");
}
public class Program
{
public static async Task Main(string[] args)
{
var summary = BenchmarkRunner.Run<DnsBenchmark>();
}
}
```
### Data
#### Arm64
``` ini
BenchmarkDotNet=v0.12.1, OS=debian 10 (container)
Unknown processor
.NET Core SDK=5.0.101
[Host] : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), Arm64 RyuJIT
Job=InProcess Toolchain=InProcessEmitToolchain
```
| Method | Mean | Error | StdDev |
|---------------------- |---------:|--------:|--------:|
| GetHostAddressesAsync | 6.671 ms | 0.5831 ms | 1.692 ms |
#### X64
``` ini
BenchmarkDotNet=v0.12.1, OS=debian 10 (container)
Intel Xeon CPU E5-2620 0 2.00GHz, 2 CPU, 24 logical and 12 physical cores
.NET Core SDK=5.0.101
[Host] : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), X64 RyuJIT
Job=InProcess Toolchain=InProcessEmitToolchain
```
| Method | Mean | Error | StdDev |
|---------------------- |---------:|----------:|----------:|
| GetHostAddressesAsync | 1.817 ms | 0.0234 ms | 0.0207 ms |
| True | GetHostAddressesAsync on Arm64 - ### Description
```
[InProcess]
public class DnsBenchmark
{
[Benchmark]
public Task<IPAddress[]> GetHostAddressesAsync() => Dns.GetHostAddressesAsync("microsoft.com");
}
public class Program
{
public static async Task Main(string[] args)
{
var summary = BenchmarkRunner.Run<DnsBenchmark>();
}
}
```
### Data
#### Arm64
``` ini
BenchmarkDotNet=v0.12.1, OS=debian 10 (container)
Unknown processor
.NET Core SDK=5.0.101
[Host] : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), Arm64 RyuJIT
Job=InProcess Toolchain=InProcessEmitToolchain
```
| Method | Mean | Error | StdDev |
|---------------------- |---------:|--------:|--------:|
| GetHostAddressesAsync | 6.671 ms | 0.5831 ms | 1.692 ms |
#### X64
``` ini
BenchmarkDotNet=v0.12.1, OS=debian 10 (container)
Intel Xeon CPU E5-2620 0 2.00GHz, 2 CPU, 24 logical and 12 physical cores
.NET Core SDK=5.0.101
[Host] : .NET Core 5.0.1 (CoreCLR 5.0.120.57516, CoreFX 5.0.120.57516), X64 RyuJIT
Job=InProcess Toolchain=InProcessEmitToolchain
```
| Method | Mean | Error | StdDev |
|---------------------- |---------:|----------:|----------:|
| GetHostAddressesAsync | 1.817 ms | 0.0234 ms | 0.0207 ms |
| perf | gethostaddressesasync on description public class dnsbenchmark public task gethostaddressesasync dns gethostaddressesasync microsoft com public class program public static async task main string args var summary benchmarkrunner run data ini benchmarkdotnet os debian container unknown processor net core sdk net core coreclr corefx ryujit job inprocess toolchain inprocessemittoolchain method mean error stddev gethostaddressesasync ms ms ms ini benchmarkdotnet os debian container intel xeon cpu cpu logical and physical cores net core sdk net core coreclr corefx ryujit job inprocess toolchain inprocessemittoolchain method mean error stddev gethostaddressesasync ms ms ms | 1 |
328,033 | 9,985,128,837 | IssuesEvent | 2019-07-10 15:51:45 | geosolutions-it/MapStore2-C027 | https://api.github.com/repos/geosolutions-it/MapStore2-C027 | closed | MapStore Update v2019.01.xx | Priority: High Project: C027 backlog | - Aggiornamento all'ultima revision di MS2 e relativi tests
- Abilitazione dashboard
- Abilitazione widgets
- Abilitazione timeline
- Test e documentazione | 1.0 | MapStore Update v2019.01.xx - - Aggiornamento all'ultima revision di MS2 e relativi tests
- Abilitazione dashboard
- Abilitazione widgets
- Abilitazione timeline
- Test e documentazione | non_perf | mapstore update xx aggiornamento all ultima revision di e relativi tests abilitazione dashboard abilitazione widgets abilitazione timeline test e documentazione | 0 |
796,434 | 28,111,619,201 | IssuesEvent | 2023-03-31 07:40:05 | vscentrum/vsc-software-stack | https://api.github.com/repos/vscentrum/vsc-software-stack | closed | synthcity | difficulty: medium new priority: high Python site:ugent site:t1_ugent_hortense | * link to support ticket: [#2023020360000611](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=109040)
* website: https://github.com/vanderschaarlab/synthcity
* installation docs: https://github.com/vanderschaarlab/synthcity#rocket-installation
* toolchain: `foss/2022a`
* easyblock to use: `PythonBundle`
* required dependencies:
* [x] Python
* + see https://github.com/vanderschaarlab/synthcity/blob/main/prereq.txt
* notes:
* ...
* effort: *(TBD)*
| 1.0 | synthcity - * link to support ticket: [#2023020360000611](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=109040)
* website: https://github.com/vanderschaarlab/synthcity
* installation docs: https://github.com/vanderschaarlab/synthcity#rocket-installation
* toolchain: `foss/2022a`
* easyblock to use: `PythonBundle`
* required dependencies:
* [x] Python
* + see https://github.com/vanderschaarlab/synthcity/blob/main/prereq.txt
* notes:
* ...
* effort: *(TBD)*
| non_perf | synthcity link to support ticket website installation docs toolchain foss easyblock to use pythonbundle required dependencies python see notes effort tbd | 0 |
50,460 | 26,654,245,390 | IssuesEvent | 2023-01-25 15:44:23 | phetsims/calculus-grapher | https://api.github.com/repos/phetsims/calculus-grapher | closed | Create allPointsScatterPlot and cuspsScatterPlot conditionally | type:performance status:ready-for-review | In CurveNode.ts:
```typescript
this.allPointsScatterPlot = new ScatterPlot( chartTransform, allPointsScatterPlotDataSet, options.allPointsScatterPlotOptions );
this.cuspsScatterPlot = new ScatterPlot( chartTransform, cuspsScatterPlotDataSet, options.cuspsScatterPlotOptions );
...
if ( CalculusGrapherQueryParameters.allPoints ) {
this.addChild( this.allPointsScatterPlot );
}
if ( CalculusGrapherQueryParameters.cuspsPoints ) {
this.addChild( this.cuspsScatterPlot );
}
```
So there are 2 scatter plots that are always created/update, but are only added if query parameters are present. That seems like a significant amount of work that doesn't need to be done, and might impact responsiveness.
Self-assigning to create/update these scatter plots conditionally. | True | Create allPointsScatterPlot and cuspsScatterPlot conditionally - In CurveNode.ts:
```typescript
this.allPointsScatterPlot = new ScatterPlot( chartTransform, allPointsScatterPlotDataSet, options.allPointsScatterPlotOptions );
this.cuspsScatterPlot = new ScatterPlot( chartTransform, cuspsScatterPlotDataSet, options.cuspsScatterPlotOptions );
...
if ( CalculusGrapherQueryParameters.allPoints ) {
this.addChild( this.allPointsScatterPlot );
}
if ( CalculusGrapherQueryParameters.cuspsPoints ) {
this.addChild( this.cuspsScatterPlot );
}
```
So there are 2 scatter plots that are always created/update, but are only added if query parameters are present. That seems like a significant amount of work that doesn't need to be done, and might impact responsiveness.
Self-assigning to create/update these scatter plots conditionally. | perf | create allpointsscatterplot and cuspsscatterplot conditionally in curvenode ts typescript this allpointsscatterplot new scatterplot charttransform allpointsscatterplotdataset options allpointsscatterplotoptions this cuspsscatterplot new scatterplot charttransform cuspsscatterplotdataset options cuspsscatterplotoptions if calculusgrapherqueryparameters allpoints this addchild this allpointsscatterplot if calculusgrapherqueryparameters cuspspoints this addchild this cuspsscatterplot so there are scatter plots that are always created update but are only added if query parameters are present that seems like a significant amount of work that doesn t need to be done and might impact responsiveness self assigning to create update these scatter plots conditionally | 1 |
507,157 | 14,679,928,856 | IssuesEvent | 2020-12-31 08:31:06 | 1ForeverHD/HDAdmin | https://api.github.com/repos/1ForeverHD/HDAdmin | closed | Zone+ V2 | Priority: High Scope: Zone+ Type: Bug Type: Enhancement | ## Improvements
- [ ] move to new standalone repository with installer methods like topbar+ (plugin, rojo, etc)
- [ ] remove all require() dependencies
- [x] consider removing additonalHeight param as this causes confusion
- [ ] replace irrelevant : within non object methods to .
- [ ] improve tutorials to make people aware of using Zone.new() (in addition to createZone)
- [ ] add Destroy upper case alias
- [ ] consider making work with non-players
## Bugs
- [ ] have the raycast fire from the hrp centre (instead of feet) for improved accuracy on upwards slanting shapes dipping into the ground

- [ ] explore rotated wedge bug. this may be solved by the points above

- [ ] random generation appears to fail below other parts
Repo file: https://devforum.roblox.com/uploads/short-url/pn1v2zXgk3e5QxwU62m2QjnocD4.rbxl

- [ ] cluster infinite recursion https://devforum.roblox.com/t/zone-errors/864654/5
## Main
- [ ] have a Zone module then a ZoneController as a child
- [ ] for zone, support both players and parts
- [ ] have methods like :getPlayers, :getLocalPlayer, :getParts, :checkPlayerInZone, :checkLocalPlayerInZone, :checkPartInZone
- [ ] have events which automatically initiate a loop when connected to - .playerEntered, playerExited, partEntered, partExited, localPlayerEntered, localPlayerExited
- [ ] for players, dynamically change between region3 + raycasting and solely raycasting depending upon factors such as total volume across all zones, number of players in server, localPlayer loop, etc
- [ ] for parts, call region3 like normal then raycast, however if a part is anchored (or not) consider caching results based on enum.accuracy
- [ ] handle logic, decision making and even raycasting within the controller
- [ ] when raycasting, perform a centre check initially, then if this fails use angle math to get the face nearest to the closest zone centre, and perform an additional tiny raycast in that direction
- [ ] introduce enum.Accuracy with items such as 'Precise', 'High', 'Medium', 'Low' and a zone.accuracy property
- [ ] once again have :update and. updated for calculating the zones min and max bounds and region, but completely scrap the 'cluster system'
- [ ] introduce :getRandomPoint again but this time do purely a while loop with completely random casting
- [ ] for ZoneController, remove create/remove methods entirely and just have ZoneController.getZones()
- [ ] parent utility modules like Signal and Maid *under* the Zone module | 1.0 | Zone+ V2 - ## Improvements
- [ ] move to new standalone repository with installer methods like topbar+ (plugin, rojo, etc)
- [ ] remove all require() dependencies
- [x] consider removing additonalHeight param as this causes confusion
- [ ] replace irrelevant : within non object methods to .
- [ ] improve tutorials to make people aware of using Zone.new() (in addition to createZone)
- [ ] add Destroy upper case alias
- [ ] consider making work with non-players
## Bugs
- [ ] have the raycast fire from the hrp centre (instead of feet) for improved accuracy on upwards slanting shapes dipping into the ground

- [ ] explore rotated wedge bug. this may be solved by the points above

- [ ] random generation appears to fail below other parts
Repo file: https://devforum.roblox.com/uploads/short-url/pn1v2zXgk3e5QxwU62m2QjnocD4.rbxl

- [ ] cluster infinite recursion https://devforum.roblox.com/t/zone-errors/864654/5
## Main
- [ ] have a Zone module then a ZoneController as a child
- [ ] for zone, support both players and parts
- [ ] have methods like :getPlayers, :getLocalPlayer, :getParts, :checkPlayerInZone, :checkLocalPlayerInZone, :checkPartInZone
- [ ] have events which automatically initiate a loop when connected to - .playerEntered, playerExited, partEntered, partExited, localPlayerEntered, localPlayerExited
- [ ] for players, dynamically change between region3 + raycasting and solely raycasting depending upon factors such as total volume across all zones, number of players in server, localPlayer loop, etc
- [ ] for parts, call region3 like normal then raycast, however if a part is anchored (or not) consider caching results based on enum.accuracy
- [ ] handle logic, decision making and even raycasting within the controller
- [ ] when raycasting, perform a centre check initially, then if this fails use angle math to get the face nearest to the closest zone centre, and perform an additional tiny raycast in that direction
- [ ] introduce enum.Accuracy with items such as 'Precise', 'High', 'Medium', 'Low' and a zone.accuracy property
- [ ] once again have :update and. updated for calculating the zones min and max bounds and region, but completely scrap the 'cluster system'
- [ ] introduce :getRandomPoint again but this time do purely a while loop with completely random casting
- [ ] for ZoneController, remove create/remove methods entirely and just have ZoneController.getZones()
- [ ] parent utility modules like Signal and Maid *under* the Zone module | non_perf | zone improvements move to new standalone repository with installer methods like topbar plugin rojo etc remove all require dependencies consider removing additonalheight param as this causes confusion replace irrelevant within non object methods to improve tutorials to make people aware of using zone new in addition to createzone add destroy upper case alias consider making work with non players bugs have the raycast fire from the hrp centre instead of feet for improved accuracy on upwards slanting shapes dipping into the ground explore rotated wedge bug this may be solved by the points above random generation appears to fail below other parts repo file cluster infinite recursion main have a zone module then a zonecontroller as a child for zone support both players and parts have methods like getplayers getlocalplayer getparts checkplayerinzone checklocalplayerinzone checkpartinzone have events which automatically initiate a loop when connected to playerentered playerexited partentered partexited localplayerentered localplayerexited for players dynamically change between raycasting and solely raycasting depending upon factors such as total volume across all zones number of players in server localplayer loop etc for parts call like normal then raycast however if a part is anchored or not consider caching results based on enum accuracy handle logic decision making and even raycasting within the controller when raycasting perform a centre check initially then if this fails use angle math to get the face nearest to the closest zone centre and perform an additional tiny raycast in that direction introduce enum accuracy with items such as precise high medium low and a zone accuracy property once again have update and updated for calculating the zones min and max bounds and region but completely scrap the cluster system introduce getrandompoint again but this time do purely a while loop with completely random casting for zonecontroller remove create remove methods entirely and just have zonecontroller getzones parent utility modules like signal and maid under the zone module | 0 |
24,496 | 2,668,134,048 | IssuesEvent | 2015-03-23 04:41:13 | cs2103jan2015-w14-3j/main | https://api.github.com/repos/cs2103jan2015-w14-3j/main | closed | Build new "set" command to allow user to set configuration | priority.normal type.enhancement type.task | Maybe something like
set storage c:\users\edward\dropbox | 1.0 | Build new "set" command to allow user to set configuration - Maybe something like
set storage c:\users\edward\dropbox | non_perf | build new set command to allow user to set configuration maybe something like set storage c users edward dropbox | 0 |
124,653 | 12,236,148,458 | IssuesEvent | 2020-05-04 15:54:57 | mercury-telemetry/mercury | https://api.github.com/repos/mercury-telemetry/mercury | opened | Document how to Connect Sense HAT | documentation hardware | **User story**
As user, I want a step-by-step guide on how to connect my Sense HAT so that my SensePi can begin reading sensor data.
**Acceptance criteria**
Must be posted to the team wiki.
Must be reviewed and easily understood by a team member outside of the hardware team.
Should be considerate of the fact that ideally the user does not interact with the code that interfaces with the Sense HAT at all. Should be as "plug and play" as possible.
**Definition of Done**
Reviewed instructions have been posted to the team wiki. | 1.0 | Document how to Connect Sense HAT - **User story**
As user, I want a step-by-step guide on how to connect my Sense HAT so that my SensePi can begin reading sensor data.
**Acceptance criteria**
Must be posted to the team wiki.
Must be reviewed and easily understood by a team member outside of the hardware team.
Should be considerate of the fact that ideally the user does not interact with the code that interfaces with the Sense HAT at all. Should be as "plug and play" as possible.
**Definition of Done**
Reviewed instructions have been posted to the team wiki. | non_perf | document how to connect sense hat user story as user i want a step by step guide on how to connect my sense hat so that my sensepi can begin reading sensor data acceptance criteria must be posted to the team wiki must be reviewed and easily understood by a team member outside of the hardware team should be considerate of the fact that ideally the user does not interact with the code that interfaces with the sense hat at all should be as plug and play as possible definition of done reviewed instructions have been posted to the team wiki | 0 |
53,351 | 28,098,100,789 | IssuesEvent | 2023-03-30 17:15:49 | DataSQRL/sqrl | https://api.github.com/repos/DataSQRL/sqrl | opened | Pull filters through intervals joins for window time bounds | performance | Assume we have a simple click stream `ClickStream` with fields: <userid, url, timestamp>
If we want to create another stream where each click is joined with the previous (up to) 5 clicks in the last 5 minutes by the same user in order to aggregate those into a transition probability graph, we would write this as follows in SQRL:
```
IMPORT ClickStream;
ClickStream.visitedAfter := SELECT @.url as priorUrl, c.url as url, c.timestamp as timestamp, c.userid as userid, c.timestamp - @.timestamp as deltaTime, rank() as position FROM @ JOIN ClickStream c ON @.userid = c.userid AND c.timestamp > @.timestamp AND c.timestamp <= @.timestamp + INTERVAL 5 MINUTE ORDER BY c.timestamp ASC LIMIT 5;
ClickGraph := SELECT priorURL, url, avg(position) * avg(deltaTime) as relevance, count(1) as num FROM ClickStream.visitedAfter GROUP BY priorURL, url;
```
The trouble with the `ClickStream.visitedAfter` definition is that the usage of `LIMIT` and `RANK` turn this into a global window partitioned by parent which is a state and would keep the state open forever, which is very costly for a fast moving stream like ClickStream.
Instead, we want to make sure we propagate the 5 minute time bound on `c.timestamp` into the window so instead of aggregating over all prior rows we can filter on timestamp which should close the aggregation after 5 minutes. In addition, because we order by timestamp in increasing order `ASC` we don't need to put a TOPN constraint on top.
With those two modifications (pulling up the time bound and removing TOPN), the result should be a limited time window aggregation that produces a stream which we then aggregate in ClickGraph. | True | Pull filters through intervals joins for window time bounds - Assume we have a simple click stream `ClickStream` with fields: <userid, url, timestamp>
If we want to create another stream where each click is joined with the previous (up to) 5 clicks in the last 5 minutes by the same user in order to aggregate those into a transition probability graph, we would write this as follows in SQRL:
```
IMPORT ClickStream;
ClickStream.visitedAfter := SELECT @.url as priorUrl, c.url as url, c.timestamp as timestamp, c.userid as userid, c.timestamp - @.timestamp as deltaTime, rank() as position FROM @ JOIN ClickStream c ON @.userid = c.userid AND c.timestamp > @.timestamp AND c.timestamp <= @.timestamp + INTERVAL 5 MINUTE ORDER BY c.timestamp ASC LIMIT 5;
ClickGraph := SELECT priorURL, url, avg(position) * avg(deltaTime) as relevance, count(1) as num FROM ClickStream.visitedAfter GROUP BY priorURL, url;
```
The trouble with the `ClickStream.visitedAfter` definition is that the usage of `LIMIT` and `RANK` turn this into a global window partitioned by parent which is a state and would keep the state open forever, which is very costly for a fast moving stream like ClickStream.
Instead, we want to make sure we propagate the 5 minute time bound on `c.timestamp` into the window so instead of aggregating over all prior rows we can filter on timestamp which should close the aggregation after 5 minutes. In addition, because we order by timestamp in increasing order `ASC` we don't need to put a TOPN constraint on top.
With those two modifications (pulling up the time bound and removing TOPN), the result should be a limited time window aggregation that produces a stream which we then aggregate in ClickGraph. | perf | pull filters through intervals joins for window time bounds assume we have a simple click stream clickstream with fields if we want to create another stream where each click is joined with the previous up to clicks in the last minutes by the same user in order to aggregate those into a transition probability graph we would write this as follows in sqrl import clickstream clickstream visitedafter select url as priorurl c url as url c timestamp as timestamp c userid as userid c timestamp timestamp as deltatime rank as position from join clickstream c on userid c userid and c timestamp timestamp and c timestamp timestamp interval minute order by c timestamp asc limit clickgraph select priorurl url avg position avg deltatime as relevance count as num from clickstream visitedafter group by priorurl url the trouble with the clickstream visitedafter definition is that the usage of limit and rank turn this into a global window partitioned by parent which is a state and would keep the state open forever which is very costly for a fast moving stream like clickstream instead we want to make sure we propagate the minute time bound on c timestamp into the window so instead of aggregating over all prior rows we can filter on timestamp which should close the aggregation after minutes in addition because we order by timestamp in increasing order asc we don t need to put a topn constraint on top with those two modifications pulling up the time bound and removing topn the result should be a limited time window aggregation that produces a stream which we then aggregate in clickgraph | 1 |
43,192 | 23,140,737,741 | IssuesEvent | 2022-07-28 18:13:44 | trueromanus/ArdorQuery | https://api.github.com/repos/trueromanus/ArdorQuery | closed | Support Bearer Authorization as field command | ui fields request perform | ### Goal
Add new field command `bearer X`
### Implementation
It shortcut for command:
Authorization Bearer X | True | Support Bearer Authorization as field command - ### Goal
Add new field command `bearer X`
### Implementation
It shortcut for command:
Authorization Bearer X | perf | support bearer authorization as field command goal add new field command bearer x implementation it shortcut for command authorization bearer x | 1 |
13,964 | 8,416,455,211 | IssuesEvent | 2018-10-14 02:30:02 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | HttpContent.CopyToAsync ignores bufferSize sometimes | area-System.Net.Http.SocketsHttpHandler tenet-performance | if `remain > 0`, it throws `bufferSize` on the floor.
https://github.com/dotnet/corefx/blob/87a944afb3e0ec392d5838900b5498e015d5e47b/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnection.cs#L1395-L1403
4096 is not nearly enough buffer for streaming large files over high-bandwidth (10Gbps+) connections. it wastes _way_ too much CPU in the async stuff. | True | HttpContent.CopyToAsync ignores bufferSize sometimes - if `remain > 0`, it throws `bufferSize` on the floor.
https://github.com/dotnet/corefx/blob/87a944afb3e0ec392d5838900b5498e015d5e47b/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnection.cs#L1395-L1403
4096 is not nearly enough buffer for streaming large files over high-bandwidth (10Gbps+) connections. it wastes _way_ too much CPU in the async stuff. | perf | httpcontent copytoasync ignores buffersize sometimes if remain it throws buffersize on the floor is not nearly enough buffer for streaming large files over high bandwidth connections it wastes way too much cpu in the async stuff | 1 |
85,081 | 3,686,405,373 | IssuesEvent | 2016-02-25 01:06:38 | BYU-ARCLITE/Ayamel-Examples | https://api.github.com/repos/BYU-ARCLITE/Ayamel-Examples | closed | Deleting a caption file outside of CaptionAider and exiting the window should trigger a page refresh | Priority 3 UI/Look and feel | Right now, when a user deletes a caption track and exits the "Captions/Subtitles" window, the page does not refresh. The deleted caption track still lingers until the user themselves refreshes the page.
<img width="734" alt="screenshot 2016-02-19 21 14 19" src="https://cloud.githubusercontent.com/assets/5481718/13194407/3c93eb14-d74e-11e5-934d-f8614270d43a.png">
For easier User Experience, the page should refresh automatically.
BUT, be sure to only refresh the page once they've left the "Captions/Subtitles" dialogue window. That way if they are deleting multiple subtitle tracks the page won't frequently refresh on them.
[Video to experiment on](https://ayamelbeta.byu.edu/content/669). | 1.0 | Deleting a caption file outside of CaptionAider and exiting the window should trigger a page refresh - Right now, when a user deletes a caption track and exits the "Captions/Subtitles" window, the page does not refresh. The deleted caption track still lingers until the user themselves refreshes the page.
<img width="734" alt="screenshot 2016-02-19 21 14 19" src="https://cloud.githubusercontent.com/assets/5481718/13194407/3c93eb14-d74e-11e5-934d-f8614270d43a.png">
For easier User Experience, the page should refresh automatically.
BUT, be sure to only refresh the page once they've left the "Captions/Subtitles" dialogue window. That way if they are deleting multiple subtitle tracks the page won't frequently refresh on them.
[Video to experiment on](https://ayamelbeta.byu.edu/content/669). | non_perf | deleting a caption file outside of captionaider and exiting the window should trigger a page refresh right now when a user deletes a caption track and exits the captions subtitles window the page does not refresh the deleted caption track still lingers until the user themselves refreshes the page img width alt screenshot src for easier user experience the page should refresh automatically but be sure to only refresh the page once they ve left the captions subtitles dialogue window that way if they are deleting multiple subtitle tracks the page won t frequently refresh on them | 0 |
30,330 | 14,522,451,996 | IssuesEvent | 2020-12-14 08:50:18 | JuliaReach/LazySets.jl | https://api.github.com/repos/JuliaReach/LazySets.jl | opened | Concrete Minkowski sum of polytope and BallInf | performance :racehorse: | Currently we require `Polyhedra` and `CDDLib` for the following code:
```julia
julia> P = rand(Ball1)
julia> B = rand(BallInf)
julia> minkowski_sum(P, B)
```
The first time I run this in a fresh session this takes forever (as of writing it still did not terminate) for me.
I have the feeling that there is a better algorithm for this special case, but I am not 100% sure:
* moving/translating `P` according to `B`'s center (`P1 = P + B.center`)
* adding the constraints of `box_approximation(P)` (`P2 = box_approximation(P1)`)
* pushing the constraints outside by `B`'s radius (`P3 = push_outside(P2, B.radius)` with suitable definition)
* removing redundant constraints (`P4 = remove_redundant_constraints!(P3)`) | True | Concrete Minkowski sum of polytope and BallInf - Currently we require `Polyhedra` and `CDDLib` for the following code:
```julia
julia> P = rand(Ball1)
julia> B = rand(BallInf)
julia> minkowski_sum(P, B)
```
The first time I run this in a fresh session this takes forever (as of writing it still did not terminate) for me.
I have the feeling that there is a better algorithm for this special case, but I am not 100% sure:
* moving/translating `P` according to `B`'s center (`P1 = P + B.center`)
* adding the constraints of `box_approximation(P)` (`P2 = box_approximation(P1)`)
* pushing the constraints outside by `B`'s radius (`P3 = push_outside(P2, B.radius)` with suitable definition)
* removing redundant constraints (`P4 = remove_redundant_constraints!(P3)`) | perf | concrete minkowski sum of polytope and ballinf currently we require polyhedra and cddlib for the following code julia julia p rand julia b rand ballinf julia minkowski sum p b the first time i run this in a fresh session this takes forever as of writing it still did not terminate for me i have the feeling that there is a better algorithm for this special case but i am not sure moving translating p according to b s center p b center adding the constraints of box approximation p box approximation pushing the constraints outside by b s radius push outside b radius with suitable definition removing redundant constraints remove redundant constraints | 1 |
6,553 | 5,520,402,435 | IssuesEvent | 2017-03-19 04:37:04 | fossasia/open-event-webapp | https://api.github.com/repos/fossasia/open-event-webapp | reopened | Improving performance of Web app | enhancement Performance | - Please see Google page insights and improve the performance of pages.
- [ ] Approaches we can adopt for RAIL model.
- [ ] Suggest and implement the ways to improve DOM and CSSOM construction, which is currently done at once.
- [ ] Remove render-blocking CSS
- [ ] Add gulp to minify the JS, CSS code for production.
- [ ] Compress images and scale them to 264px rather than 300px.
- [x] Add lazy loading for images | True | Improving performance of Web app - - Please see Google page insights and improve the performance of pages.
- [ ] Approaches we can adopt for RAIL model.
- [ ] Suggest and implement the ways to improve DOM and CSSOM construction, which is currently done at once.
- [ ] Remove render-blocking CSS
- [ ] Add gulp to minify the JS, CSS code for production.
- [ ] Compress images and scale them to 264px rather than 300px.
- [x] Add lazy loading for images | perf | improving performance of web app please see google page insights and improve the performance of pages approaches we can adopt for rail model suggest and implement the ways to improve dom and cssom construction which is currently done at once remove render blocking css add gulp to minify the js css code for production compress images and scale them to rather than add lazy loading for images | 1 |
38,973 | 19,656,950,882 | IssuesEvent | 2022-01-10 13:30:07 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | Refactor DatetimeArray._generate_range | Refactor Timeseries Performance | Currently, DatetimeArray._generate_range calls DatetimeArray._simple_new 4 times, 3 times with i8 values and ones with M8[ns] values.
<details>
```diff
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 8b0565a36..a7f8e303a 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -213,12 +213,16 @@ class DatetimeArrayMixin(dtl.DatetimeLikeArrayMixin,
_dtype = None # type: Union[np.dtype, DatetimeTZDtype]
_freq = None
+ i = 0
+
@classmethod
def _simple_new(cls, values, freq=None, tz=None):
"""
we require the we have a dtype compat for the values
if we are passed a non-dtype compat, then coerce using the constructor
"""
+ cls.i += 1
+ print(f"DTA._simple_new: {cls.i}")
assert isinstance(values, np.ndarray), type(values)
if values.dtype == 'i8':
# for compat with datetime/timedelta/period shared methods,
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 690a3db28..ab08bbf6f 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -273,6 +273,7 @@ class DatetimeIndex(DatetimeIndexOpsMixin, Int64Index, DatetimeDelegateMixin):
dayfirst=False, yearfirst=False, dtype=None,
copy=False, name=None, verify_integrity=None):
+ print("hi")
if verify_integrity is not None:
warnings.warn("The 'verify_integrity' argument is deprecated, "
"will be removed in a future version.",
```
</details>
```python
In [2]: idx = pd.date_range('2014-01-02', '2014-04-30', freq='M', tz='UTC')
DTA._simple_new: 1
DTA._simple_new: 2
DTA._simple_new: 3
DTA._simple_new: 4
```
---
I'm not familiar with this code, but I would naively hope for a function that
1. Extracts there correct freq / tz from all the arguments (start, end, etc.)
2. Generates the correct i8 values for start, end, tz
3. Wraps those i8 values in a DatetimeArray._simple_new at the end.
I'm investigating if this can be done.
I'm not sure if this applies to timedelta as well. | True | Refactor DatetimeArray._generate_range - Currently, DatetimeArray._generate_range calls DatetimeArray._simple_new 4 times, 3 times with i8 values and ones with M8[ns] values.
<details>
```diff
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 8b0565a36..a7f8e303a 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -213,12 +213,16 @@ class DatetimeArrayMixin(dtl.DatetimeLikeArrayMixin,
_dtype = None # type: Union[np.dtype, DatetimeTZDtype]
_freq = None
+ i = 0
+
@classmethod
def _simple_new(cls, values, freq=None, tz=None):
"""
we require the we have a dtype compat for the values
if we are passed a non-dtype compat, then coerce using the constructor
"""
+ cls.i += 1
+ print(f"DTA._simple_new: {cls.i}")
assert isinstance(values, np.ndarray), type(values)
if values.dtype == 'i8':
# for compat with datetime/timedelta/period shared methods,
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 690a3db28..ab08bbf6f 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -273,6 +273,7 @@ class DatetimeIndex(DatetimeIndexOpsMixin, Int64Index, DatetimeDelegateMixin):
dayfirst=False, yearfirst=False, dtype=None,
copy=False, name=None, verify_integrity=None):
+ print("hi")
if verify_integrity is not None:
warnings.warn("The 'verify_integrity' argument is deprecated, "
"will be removed in a future version.",
```
</details>
```python
In [2]: idx = pd.date_range('2014-01-02', '2014-04-30', freq='M', tz='UTC')
DTA._simple_new: 1
DTA._simple_new: 2
DTA._simple_new: 3
DTA._simple_new: 4
```
---
I'm not familiar with this code, but I would naively hope for a function that
1. Extracts there correct freq / tz from all the arguments (start, end, etc.)
2. Generates the correct i8 values for start, end, tz
3. Wraps those i8 values in a DatetimeArray._simple_new at the end.
I'm investigating if this can be done.
I'm not sure if this applies to timedelta as well. | perf | refactor datetimearray generate range currently datetimearray generate range calls datetimearray simple new times times with values and ones with values diff diff git a pandas core arrays datetimes py b pandas core arrays datetimes py index a pandas core arrays datetimes py b pandas core arrays datetimes py class datetimearraymixin dtl datetimelikearraymixin dtype none type union freq none i classmethod def simple new cls values freq none tz none we require the we have a dtype compat for the values if we are passed a non dtype compat then coerce using the constructor cls i print f dta simple new cls i assert isinstance values np ndarray type values if values dtype for compat with datetime timedelta period shared methods diff git a pandas core indexes datetimes py b pandas core indexes datetimes py index a pandas core indexes datetimes py b pandas core indexes datetimes py class datetimeindex datetimeindexopsmixin datetimedelegatemixin dayfirst false yearfirst false dtype none copy false name none verify integrity none print hi if verify integrity is not none warnings warn the verify integrity argument is deprecated will be removed in a future version python in idx pd date range freq m tz utc dta simple new dta simple new dta simple new dta simple new i m not familiar with this code but i would naively hope for a function that extracts there correct freq tz from all the arguments start end etc generates the correct values for start end tz wraps those values in a datetimearray simple new at the end i m investigating if this can be done i m not sure if this applies to timedelta as well | 1 |
2,342 | 3,410,879,492 | IssuesEvent | 2015-12-04 22:21:10 | Elgg/Elgg | https://api.github.com/repos/Elgg/Elgg | closed | Remove MD queries in ElggFile::getFilestore() | performance | Use metadata cache to pull `filestore::` metadata. | True | Remove MD queries in ElggFile::getFilestore() - Use metadata cache to pull `filestore::` metadata. | perf | remove md queries in elggfile getfilestore use metadata cache to pull filestore metadata | 1 |
19,567 | 10,465,111,296 | IssuesEvent | 2019-09-21 08:00:15 | yshui/compton | https://api.github.com/repos/yshui/compton | closed | Max GPU Usage on Intel Graphics | performance | <!-- The template below is for reporting bugs. For feature requests and others, feel free to delete irrelevant entries. -->
### Platform
<!-- Example: Ubuntu Desktop 17.04 amd64 -->
Gentoo release 2.6 x86_64
### GPU, drivers, and screen setup
<!--
Example: NVidia GTX 670, nvidia-drivers 381.09, two monitors configured side-by-side with xrandr
Please include the version of the video drivers (xf86-video-*) and mesa.
Please also paste the output of `glxinfo -B` here.
-->
GPU: Intel HD Graphics 4000
Video drivers: xf86-video-intel & mesa
```
00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
Subsystem: Lenovo 3rd Gen Core processor Graphics Controller [17aa:21f3]
Kernel driver in use: i915
---------------------------
* x11-drivers/xf86-video-intel
Latest version available: 2.99.917_p20190301
Latest version installed: 2.99.917_p20190301
Size of files: 1,219 KiB
* media-libs/mesa
Latest version available: 19.0.8
Latest version installed: 19.0.8
Size of files: 11,688 KiB
```
```
baboomerang@machine ~ $ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) Ivybridge Mobile (0x166)
Version: 19.0.8
Accelerated: yes
Video memory: 1536MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.2
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.0
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 4.2 (Core Profile) Mesa 19.0.8
OpenGL core profile shading language version string: 4.20
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 3.0 Mesa 19.0.8
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 19.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
```
### Environment
<!-- Tell us something about the desktop environment you are using, for example: i3-gaps, Gnome Shell, etc. -->
i3-gaps and Polybar
```
i3 version 4.16.1 (2019-01-27) © 2009 Michael Stapelberg and contributors
---------------------------------------
polybar --version
polybar 3.2.1
Features: +alsa +curl +i3 +mpd +network +pulseaudio +xkeyboard
```
### Compton version
<!-- Put the output of `compton --version` here. -->
<!-- If you are running compton v4 or later, please also include the output of `compton --diagnostics` -->
<!-- Example: v1 -->
```
compton --version
vgit-b2ff4
```
### Compton configuration:
```
# =======
# Backend
# =======
backend = "glx";
# ===========
# GLX Backend
# ===========
glx-no-stencil = true;
glx-copy-from-front = false;
#vsync = "opengl-swc";
#glx-use-copysubbuffermesa = true;
#glx-no-rebind-pixmap = true;
#glx-swap-method = "undefined";
#glx-swap-method deprecated as of yshui's fork of compton.
use-damage = true;
# =======
# Shadows
# =======
shadow = true;
shadow-radius = 12;
shadow-offset-x = 0;
shadow-offset-y = 0;
shadow-opacity = 0.75;
#clear-shadow = true;
#no-dnd-shadow = true;
#Option `no-dnd-shadow` is deprecated, and will be removed. Please use the wintype option `shadow` of `dnd` instead.
#shadow-red = 1.0;
#shadow-green = 1.0;
#shadow-blue = 1.0;
shadow-ignore-shaped = true;
shadow-exclude = [
"name *= 'compton'",
"name *= 'tray'",
"name *= 'polybar'",
"name *= 'Polybar'",
"class_g = 'dmenu'",
"name *= 'gentoobar'",
"name *= 'Chromium'",
"class_g = 'tray'",
"class_g = 'Firefox' && argb"
];
# =======
# Opacity
# =======
# 079
# 095
inactive-opacity = 0.79;
active-opacity = 1.00;
frame-opacity = 0.70;
inactive-opacity-override = true;
# ===
# Dim
# ===
#inactive-dim = 0.2;
#inactive-dim-fixed = true;
# ====
# Blur - disabled for now
# ====
#blur-background = true;
#blur-background-frame = true;
#blur-background-fixed = false;
#blur-kern = "3x3box";
#blur-method = "kawase";
#blur-strength = 9;
#blur-background-exclude = [
# "window_type = 'dock'"
# "window_type = 'desktop'",
# "_GTK_FRAME_EXTENTS@:c"
#];
#blur: {
# method: "kernel";
# size: 99;
# deviation: 5;
#}
# ====
# Fade
# ====
fading = true;
fade-delta = 0.24;
fade-in-step = 0.03;
fade-out-step = 0.02;
#no-fading-openclose = true;
fade-exclude = [];
# =====
# Other
# =====
mark-wmwin-focused = true;
mark-ovredir-focused = true;
use-ewmh-active-win = true;
detect-rounded-corners = true;
detect-client-opacity = true;
refresh-rate = 0;
vsync = false;
dbe = false;
sw-opti = false;
unredir-if-possible = false;
detect-transient = true;
detect-client-leader = true;
focus-exclude = [];
# ============
# Window Types
# ============
wintypes: {
tooltip = { fade = true; shadow = false; };
dnd = { fade = true; shadow = false; };
menu = { fade = true; shadow = true; };
popup_menu = { fade = true; shadow = true; };
dropdown_menu = { fade = true; shadow = true; };
utility = { fade = true; shadow = true; };
dialog = { fade = true; shadow = true; };
notify = { fade = true; shadow = true; };
unknown = { fade = true; shadow = true; };
};
[compton-gentoo.zip](https://github.com/yshui/compton/files/3614737/compton-gentoo.zip)
# =====
# XSync
# =====
xrender-sync-fence = false;
```
### Steps of reproduction
<!--
If you can reliably reproduce this bug, please describe the quickest way to do so
This information will greatly help us diagnosing and fixing the issue.
-->
1. Use my provided config and make sure vsync is set to false, sw-opti set to false and xrender-sync-fence to false.
(The issue only happens when vsync is off. If its on, temps are stable and low. However, there are visual glitches as well with it on...)
2. Launch compton from within the i3 config using this EXACT line:
```
exec compton --config ~/.config/compton/compton.conf
```
### Expected behavior
GPU should have idle clocks ~350mhz to 600mhz at most with 0 windows and 0 terminals open.
System Temperature should idling ~59 C
### Current Behavior
GPU has clocks of 1300mhz always while compton is running with 0 windows and 0 terminals open.
System Temperature idles at around ~89 C
### Stack trace
<!--
If compton crashes, please make sure your compton is built with debug info, and provide a stack trace of compton when it crashed.
Note, when compton crashes in a debugger, your screen might look frozen. But gdb will likely still handle your input if it is focused.
Often you can use 'bt' and press enter to get the stack trace, then 'q', enter, 'y', enter to quit gdb.
-->
<!-- Or, you can enable core dump, and upload the core file and the compton executable here. -->
### Other details
<!-- If your problem is visual, you are encouraged to record a short video when the problem occurs and link to it here. -->
```
PowerTOP v2.10 Overview Idle stats Frequency stats Device stats Tunables Wake
Pkg(HW) | Core(HW) | CPU(OS) 0 CPU(OS) 1
| | C0 active 6.2% 4.2%
| | POLL 0.0% 0.0 ms 0.0% 0.1 ms
| | C1 93.8% 0.9 ms 95.8% 2.2 ms
C2 (pc2) 0.0% | |
C3 (pc3) 0.0% | C3 (cc3) 0.0% |
C6 (pc6) 0.0% | C6 (cc6) 0.0% |
C7 (pc7) 0.0% | C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 2 CPU(OS) 3
| | C0 active 5.8% 2.6%
| | POLL 0.0% 0.0 ms 0.0% 0.0 ms
| | C1 94.2% 1.2 ms 97.5% 4.1 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 4 CPU(OS) 5
| | C0 active 4.6% 5.9%
| | POLL 0.0% 0.0 ms 0.0% 0.1 ms
| | C1 95.4% 1.9 ms 94.2% 0.7 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 6 CPU(OS) 7
| | C0 active 5.6% 4.8%
| | POLL 0.0% 0.0 ms 0.0% 0.0 ms
| | C1 94.4% 1.3 ms 95.2% 1.9 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| GPU |
| |
| Powered On100.0% |
| RC6 0.0% |
| RC6p 0.0% |
| RC6pp 0.0% |
| |
| |
``` | True | Max GPU Usage on Intel Graphics - <!-- The template below is for reporting bugs. For feature requests and others, feel free to delete irrelevant entries. -->
### Platform
<!-- Example: Ubuntu Desktop 17.04 amd64 -->
Gentoo release 2.6 x86_64
### GPU, drivers, and screen setup
<!--
Example: NVidia GTX 670, nvidia-drivers 381.09, two monitors configured side-by-side with xrandr
Please include the version of the video drivers (xf86-video-*) and mesa.
Please also paste the output of `glxinfo -B` here.
-->
GPU: Intel HD Graphics 4000
Video drivers: xf86-video-intel & mesa
```
00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
Subsystem: Lenovo 3rd Gen Core processor Graphics Controller [17aa:21f3]
Kernel driver in use: i915
---------------------------
* x11-drivers/xf86-video-intel
Latest version available: 2.99.917_p20190301
Latest version installed: 2.99.917_p20190301
Size of files: 1,219 KiB
* media-libs/mesa
Latest version available: 19.0.8
Latest version installed: 19.0.8
Size of files: 11,688 KiB
```
```
baboomerang@machine ~ $ glxinfo -B
name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) Ivybridge Mobile (0x166)
Version: 19.0.8
Accelerated: yes
Video memory: 1536MB
Unified memory: yes
Preferred profile: core (0x1)
Max core profile version: 4.2
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.0
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 4.2 (Core Profile) Mesa 19.0.8
OpenGL core profile shading language version string: 4.20
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 3.0 Mesa 19.0.8
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 19.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
```
### Environment
<!-- Tell us something about the desktop environment you are using, for example: i3-gaps, Gnome Shell, etc. -->
i3-gaps and Polybar
```
i3 version 4.16.1 (2019-01-27) © 2009 Michael Stapelberg and contributors
---------------------------------------
polybar --version
polybar 3.2.1
Features: +alsa +curl +i3 +mpd +network +pulseaudio +xkeyboard
```
### Compton version
<!-- Put the output of `compton --version` here. -->
<!-- If you are running compton v4 or later, please also include the output of `compton --diagnostics` -->
<!-- Example: v1 -->
```
compton --version
vgit-b2ff4
```
### Compton configuration:
```
# =======
# Backend
# =======
backend = "glx";
# ===========
# GLX Backend
# ===========
glx-no-stencil = true;
glx-copy-from-front = false;
#vsync = "opengl-swc";
#glx-use-copysubbuffermesa = true;
#glx-no-rebind-pixmap = true;
#glx-swap-method = "undefined";
#glx-swap-method deprecated as of yshui's fork of compton.
use-damage = true;
# =======
# Shadows
# =======
shadow = true;
shadow-radius = 12;
shadow-offset-x = 0;
shadow-offset-y = 0;
shadow-opacity = 0.75;
#clear-shadow = true;
#no-dnd-shadow = true;
#Option `no-dnd-shadow` is deprecated, and will be removed. Please use the wintype option `shadow` of `dnd` instead.
#shadow-red = 1.0;
#shadow-green = 1.0;
#shadow-blue = 1.0;
shadow-ignore-shaped = true;
shadow-exclude = [
"name *= 'compton'",
"name *= 'tray'",
"name *= 'polybar'",
"name *= 'Polybar'",
"class_g = 'dmenu'",
"name *= 'gentoobar'",
"name *= 'Chromium'",
"class_g = 'tray'",
"class_g = 'Firefox' && argb"
];
# =======
# Opacity
# =======
# 079
# 095
inactive-opacity = 0.79;
active-opacity = 1.00;
frame-opacity = 0.70;
inactive-opacity-override = true;
# ===
# Dim
# ===
#inactive-dim = 0.2;
#inactive-dim-fixed = true;
# ====
# Blur - disabled for now
# ====
#blur-background = true;
#blur-background-frame = true;
#blur-background-fixed = false;
#blur-kern = "3x3box";
#blur-method = "kawase";
#blur-strength = 9;
#blur-background-exclude = [
# "window_type = 'dock'"
# "window_type = 'desktop'",
# "_GTK_FRAME_EXTENTS@:c"
#];
#blur: {
# method: "kernel";
# size: 99;
# deviation: 5;
#}
# ====
# Fade
# ====
fading = true;
fade-delta = 0.24;
fade-in-step = 0.03;
fade-out-step = 0.02;
#no-fading-openclose = true;
fade-exclude = [];
# =====
# Other
# =====
mark-wmwin-focused = true;
mark-ovredir-focused = true;
use-ewmh-active-win = true;
detect-rounded-corners = true;
detect-client-opacity = true;
refresh-rate = 0;
vsync = false;
dbe = false;
sw-opti = false;
unredir-if-possible = false;
detect-transient = true;
detect-client-leader = true;
focus-exclude = [];
# ============
# Window Types
# ============
wintypes: {
tooltip = { fade = true; shadow = false; };
dnd = { fade = true; shadow = false; };
menu = { fade = true; shadow = true; };
popup_menu = { fade = true; shadow = true; };
dropdown_menu = { fade = true; shadow = true; };
utility = { fade = true; shadow = true; };
dialog = { fade = true; shadow = true; };
notify = { fade = true; shadow = true; };
unknown = { fade = true; shadow = true; };
};
[compton-gentoo.zip](https://github.com/yshui/compton/files/3614737/compton-gentoo.zip)
# =====
# XSync
# =====
xrender-sync-fence = false;
```
### Steps of reproduction
<!--
If you can reliably reproduce this bug, please describe the quickest way to do so
This information will greatly help us diagnosing and fixing the issue.
-->
1. Use my provided config and make sure vsync is set to false, sw-opti set to false and xrender-sync-fence to false.
(The issue only happens when vsync is off. If its on, temps are stable and low. However, there are visual glitches as well with it on...)
2. Launch compton from within the i3 config using this EXACT line:
```
exec compton --config ~/.config/compton/compton.conf
```
### Expected behavior
GPU should have idle clocks ~350mhz to 600mhz at most with 0 windows and 0 terminals open.
System Temperature should idling ~59 C
### Current Behavior
GPU has clocks of 1300mhz always while compton is running with 0 windows and 0 terminals open.
System Temperature idles at around ~89 C
### Stack trace
<!--
If compton crashes, please make sure your compton is built with debug info, and provide a stack trace of compton when it crashed.
Note, when compton crashes in a debugger, your screen might look frozen. But gdb will likely still handle your input if it is focused.
Often you can use 'bt' and press enter to get the stack trace, then 'q', enter, 'y', enter to quit gdb.
-->
<!-- Or, you can enable core dump, and upload the core file and the compton executable here. -->
### Other details
<!-- If your problem is visual, you are encouraged to record a short video when the problem occurs and link to it here. -->
```
PowerTOP v2.10 Overview Idle stats Frequency stats Device stats Tunables Wake
Pkg(HW) | Core(HW) | CPU(OS) 0 CPU(OS) 1
| | C0 active 6.2% 4.2%
| | POLL 0.0% 0.0 ms 0.0% 0.1 ms
| | C1 93.8% 0.9 ms 95.8% 2.2 ms
C2 (pc2) 0.0% | |
C3 (pc3) 0.0% | C3 (cc3) 0.0% |
C6 (pc6) 0.0% | C6 (cc6) 0.0% |
C7 (pc7) 0.0% | C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 2 CPU(OS) 3
| | C0 active 5.8% 2.6%
| | POLL 0.0% 0.0 ms 0.0% 0.0 ms
| | C1 94.2% 1.2 ms 97.5% 4.1 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 4 CPU(OS) 5
| | C0 active 4.6% 5.9%
| | POLL 0.0% 0.0 ms 0.0% 0.1 ms
| | C1 95.4% 1.9 ms 94.2% 0.7 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| Core(HW) | CPU(OS) 6 CPU(OS) 7
| | C0 active 5.6% 4.8%
| | POLL 0.0% 0.0 ms 0.0% 0.0 ms
| | C1 94.4% 1.3 ms 95.2% 1.9 ms
| |
| C3 (cc3) 0.0% |
| C6 (cc6) 0.0% |
| C7 (cc7) 0.0% |
| GPU |
| |
| Powered On100.0% |
| RC6 0.0% |
| RC6p 0.0% |
| RC6pp 0.0% |
| |
| |
``` | perf | max gpu usage on intel graphics platform gentoo release gpu drivers and screen setup example nvidia gtx nvidia drivers two monitors configured side by side with xrandr please include the version of the video drivers video and mesa please also paste the output of glxinfo b here gpu intel hd graphics video drivers video intel mesa vga compatible controller intel corporation gen core processor graphics controller rev subsystem lenovo gen core processor graphics controller kernel driver in use drivers video intel latest version available latest version installed size of files kib media libs mesa latest version available latest version installed size of files kib baboomerang machine glxinfo b name of display display screen direct rendering yes extended renderer info glx mesa query renderer vendor intel open source technology center device mesa dri intel r ivybridge mobile version accelerated yes video memory unified memory yes preferred profile core max core profile version max compat profile version max profile version max gles profile version opengl vendor string intel open source technology center opengl renderer string mesa dri intel r ivybridge mobile opengl core profile version string core profile mesa opengl core profile shading language version string opengl core profile context flags none opengl core profile profile mask core profile opengl version string mesa opengl shading language version string opengl context flags none opengl es profile version string opengl es mesa opengl es profile shading language version string opengl es glsl es environment gaps and polybar version © michael stapelberg and contributors polybar version polybar features alsa curl mpd network pulseaudio xkeyboard compton version compton version vgit compton configuration backend backend glx glx backend glx no stencil true glx copy from front false vsync opengl swc glx use copysubbuffermesa true glx no rebind pixmap true glx swap method undefined glx swap method deprecated as of yshui s fork of compton use damage true shadows shadow true shadow radius shadow offset x shadow offset y shadow opacity clear shadow true no dnd shadow true option no dnd shadow is deprecated and will be removed please use the wintype option shadow of dnd instead shadow red shadow green shadow blue shadow ignore shaped true shadow exclude name compton name tray name polybar name polybar class g dmenu name gentoobar name chromium class g tray class g firefox argb opacity inactive opacity active opacity frame opacity inactive opacity override true dim inactive dim inactive dim fixed true blur disabled for now blur background true blur background frame true blur background fixed false blur kern blur method kawase blur strength blur background exclude window type dock window type desktop gtk frame extents c blur method kernel size deviation fade fading true fade delta fade in step fade out step no fading openclose true fade exclude other mark wmwin focused true mark ovredir focused true use ewmh active win true detect rounded corners true detect client opacity true refresh rate vsync false dbe false sw opti false unredir if possible false detect transient true detect client leader true focus exclude window types wintypes tooltip fade true shadow false dnd fade true shadow false menu fade true shadow true popup menu fade true shadow true dropdown menu fade true shadow true utility fade true shadow true dialog fade true shadow true notify fade true shadow true unknown fade true shadow true xsync xrender sync fence false steps of reproduction if you can reliably reproduce this bug please describe the quickest way to do so this information will greatly help us diagnosing and fixing the issue use my provided config and make sure vsync is set to false sw opti set to false and xrender sync fence to false the issue only happens when vsync is off if its on temps are stable and low however there are visual glitches as well with it on launch compton from within the config using this exact line exec compton config config compton compton conf expected behavior gpu should have idle clocks to at most with windows and terminals open system temperature should idling c current behavior gpu has clocks of always while compton is running with windows and terminals open system temperature idles at around c stack trace if compton crashes please make sure your compton is built with debug info and provide a stack trace of compton when it crashed note when compton crashes in a debugger your screen might look frozen but gdb will likely still handle your input if it is focused often you can use bt and press enter to get the stack trace then q enter y enter to quit gdb other details powertop overview idle stats frequency stats device stats tunables wake pkg hw core hw cpu os cpu os active poll ms ms ms ms core hw cpu os cpu os active poll ms ms ms ms core hw cpu os cpu os active poll ms ms ms ms core hw cpu os cpu os active poll ms ms ms ms gpu powered | 1 |
43,772 | 11,840,192,875 | IssuesEvent | 2020-03-23 18:25:25 | bancika/diy-layout-creator | https://api.github.com/repos/bancika/diy-layout-creator | closed | Components don't "snap to" metric grid consistently | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. New Project -> change grid spacing from default (0.1 in) to 2.5 mm.
- Verify that Snap to Grid is enabled
- Verify that Ruler units are cm
- Scroll to the upper left corner of the canvas and zoom in.
2. Select Semiconductors->DIP IC and attempt to place the IC (using the rulers
as a guide) with pin 1 at 2cm down, 2cm right from the upper-left corner.
- Observe slight misalignment from the grid on all IC pins.
3. Select and drag the IC to reposition.
- Observe continued misalignment from the grid.
- Observe that the component appears to be "snapping to" points on a grid, but not the displayed 2.5mm grid.
4. Edit the IC component and set pin spacing to 2.5mm. Select/drag the IC
component to reposition.
- Observe continued misalignment from the grid.
What is the expected output? What do you see instead?
I expect component points to snap to the displayed grid intersections,
according to the grid spacing specified in the project settings, and the
grid/ruler settings.
Moving the component (via mouse drag or arrow keys) will sometimes realign the
anchor point to the grid. Moving individual anchor points (via mouse drag or
arrow keys) will sometimes realign to the grid.
Whenever possible, please attach the latest log file. It will
provide valuable information when trying to reproduce and locate the
problem. Log files can be found in the log directory under the main
application directory and are marked with date and time.
What version of the product are you using? On what operating system?
DIYLC 3.28.0 on Windows 7 x64.
Please provide any additional information below.
```
Original issue reported on code.google.com by `josh.w...@gmail.com` on 30 Oct 2014 at 10:57
| 1.0 | Components don't "snap to" metric grid consistently - ```
What steps will reproduce the problem?
1. New Project -> change grid spacing from default (0.1 in) to 2.5 mm.
- Verify that Snap to Grid is enabled
- Verify that Ruler units are cm
- Scroll to the upper left corner of the canvas and zoom in.
2. Select Semiconductors->DIP IC and attempt to place the IC (using the rulers
as a guide) with pin 1 at 2cm down, 2cm right from the upper-left corner.
- Observe slight misalignment from the grid on all IC pins.
3. Select and drag the IC to reposition.
- Observe continued misalignment from the grid.
- Observe that the component appears to be "snapping to" points on a grid, but not the displayed 2.5mm grid.
4. Edit the IC component and set pin spacing to 2.5mm. Select/drag the IC
component to reposition.
- Observe continued misalignment from the grid.
What is the expected output? What do you see instead?
I expect component points to snap to the displayed grid intersections,
according to the grid spacing specified in the project settings, and the
grid/ruler settings.
Moving the component (via mouse drag or arrow keys) will sometimes realign the
anchor point to the grid. Moving individual anchor points (via mouse drag or
arrow keys) will sometimes realign to the grid.
Whenever possible, please attach the latest log file. It will
provide valuable information when trying to reproduce and locate the
problem. Log files can be found in the log directory under the main
application directory and are marked with date and time.
What version of the product are you using? On what operating system?
DIYLC 3.28.0 on Windows 7 x64.
Please provide any additional information below.
```
Original issue reported on code.google.com by `josh.w...@gmail.com` on 30 Oct 2014 at 10:57
| non_perf | components don t snap to metric grid consistently what steps will reproduce the problem new project change grid spacing from default in to mm verify that snap to grid is enabled verify that ruler units are cm scroll to the upper left corner of the canvas and zoom in select semiconductors dip ic and attempt to place the ic using the rulers as a guide with pin at down right from the upper left corner observe slight misalignment from the grid on all ic pins select and drag the ic to reposition observe continued misalignment from the grid observe that the component appears to be snapping to points on a grid but not the displayed grid edit the ic component and set pin spacing to select drag the ic component to reposition observe continued misalignment from the grid what is the expected output what do you see instead i expect component points to snap to the displayed grid intersections according to the grid spacing specified in the project settings and the grid ruler settings moving the component via mouse drag or arrow keys will sometimes realign the anchor point to the grid moving individual anchor points via mouse drag or arrow keys will sometimes realign to the grid whenever possible please attach the latest log file it will provide valuable information when trying to reproduce and locate the problem log files can be found in the log directory under the main application directory and are marked with date and time what version of the product are you using on what operating system diylc on windows please provide any additional information below original issue reported on code google com by josh w gmail com on oct at | 0 |
43,966 | 23,443,144,435 | IssuesEvent | 2022-08-15 16:48:58 | aesara-devs/aesara | https://api.github.com/repos/aesara-devs/aesara | opened | Use stride information to specialize `CAReduce` | enhancement help wanted Numba performance concern | @aseyboldt demonstrates [here](https://github.com/aesara-devs/aesara/issues/1089#issuecomment-1213283266) how we could use stride information to produce better Numba implementations. It's possible that our old C implementation could use similar improvements. | True | Use stride information to specialize `CAReduce` - @aseyboldt demonstrates [here](https://github.com/aesara-devs/aesara/issues/1089#issuecomment-1213283266) how we could use stride information to produce better Numba implementations. It's possible that our old C implementation could use similar improvements. | perf | use stride information to specialize careduce aseyboldt demonstrates how we could use stride information to produce better numba implementations it s possible that our old c implementation could use similar improvements | 1 |
80,106 | 15,355,072,047 | IssuesEvent | 2021-03-01 10:36:46 | gitpod-io/gitpod | https://api.github.com/repos/gitpod-io/gitpod | closed | Support content store backed user storage in Code | editor: code | In Code extensions can write files which are expected live beyond a single workspace (e.g. settings or caches). With the advent of the content store (#2001), we should make use of the content store for storing this data. | 1.0 | Support content store backed user storage in Code - In Code extensions can write files which are expected live beyond a single workspace (e.g. settings or caches). With the advent of the content store (#2001), we should make use of the content store for storing this data. | non_perf | support content store backed user storage in code in code extensions can write files which are expected live beyond a single workspace e g settings or caches with the advent of the content store we should make use of the content store for storing this data | 0 |
32,994 | 15,754,659,180 | IssuesEvent | 2021-03-31 00:22:59 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | `tf.distribute.MirroredStrategy(),` but getting unbalance GPU usage | TF 2.4 comp:dist-strat stalled stat:awaiting response type:performance | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):Yes, with reference.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-49-g85c8b2a817f 2.4.1
- Python version: 3.6.9
- CUDA/cuDNN version: CUDA 11 and cuDNN 8
- GPU model and memory: 3 GPU's 32GB each (same GPU model)
**Describe the current behavior**
Trying to use `tf.distribute.MirroredStrategy(),` but getting unbalance GPU usage. The 1st GPU is used most of the time and other 2 GPU's are seldom used for the same code, same training instance.
**Describe the expected behavior**
get almost uniform usage of all 3 GPU (for better performance).
**Standalone code to reproduce the issue**
https://github.com/rrklearn2020/keras-retinanet
Running directly from the repository with tf.distribute.MirroredStrategy():
`keras_retinanet/bin/train_v2.py coco /path/to/MS/COCO`
I also tried the "tf.data.Dataset.from_generator" by adding below lines (currently not added to train_v2.py due to error)
```
train_generator=tf.data.Dataset.from_generator(train_generator)
validation_generator=tf.data.Dataset.from_generator(validation_generator)
```
But getting an error as shown below.
```
raise TypeError("`generator` must be callable.")
TypeError: `generator` must be callable.
```
Please guide me to get almost uniform usage of all 3 GPU (for better performance).
| True | `tf.distribute.MirroredStrategy(),` but getting unbalance GPU usage - **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):Yes, with reference.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-49-g85c8b2a817f 2.4.1
- Python version: 3.6.9
- CUDA/cuDNN version: CUDA 11 and cuDNN 8
- GPU model and memory: 3 GPU's 32GB each (same GPU model)
**Describe the current behavior**
Trying to use `tf.distribute.MirroredStrategy(),` but getting unbalance GPU usage. The 1st GPU is used most of the time and other 2 GPU's are seldom used for the same code, same training instance.
**Describe the expected behavior**
get almost uniform usage of all 3 GPU (for better performance).
**Standalone code to reproduce the issue**
https://github.com/rrklearn2020/keras-retinanet
Running directly from the repository with tf.distribute.MirroredStrategy():
`keras_retinanet/bin/train_v2.py coco /path/to/MS/COCO`
I also tried the "tf.data.Dataset.from_generator" by adding below lines (currently not added to train_v2.py due to error)
```
train_generator=tf.data.Dataset.from_generator(train_generator)
validation_generator=tf.data.Dataset.from_generator(validation_generator)
```
But getting an error as shown below.
```
raise TypeError("`generator` must be callable.")
TypeError: `generator` must be callable.
```
Please guide me to get almost uniform usage of all 3 GPU (for better performance).
| perf | tf distribute mirroredstrategy but getting unbalance gpu usage system information have i written custom code as opposed to using a stock example script provided in tensorflow yes with reference os platform and distribution e g linux ubuntu ubuntu tensorflow installed from source or binary binary tensorflow version use command below python version cuda cudnn version cuda and cudnn gpu model and memory gpu s each same gpu model describe the current behavior trying to use tf distribute mirroredstrategy but getting unbalance gpu usage the gpu is used most of the time and other gpu s are seldom used for the same code same training instance describe the expected behavior get almost uniform usage of all gpu for better performance standalone code to reproduce the issue running directly from the repository with tf distribute mirroredstrategy keras retinanet bin train py coco path to ms coco i also tried the tf data dataset from generator by adding below lines currently not added to train py due to error train generator tf data dataset from generator train generator validation generator tf data dataset from generator validation generator but getting an error as shown below raise typeerror generator must be callable typeerror generator must be callable please guide me to get almost uniform usage of all gpu for better performance | 1 |
44,495 | 23,654,705,757 | IssuesEvent | 2022-08-26 10:03:12 | johnsoncodehk/volar | https://api.github.com/repos/johnsoncodehk/volar | closed | Read event AST instead of `transformOn` | performance | Code: https://github.com/johnsoncodehk/volar/blob/b7db77382b38f124cec01e5749110da863558ff7/packages/vue-language-core/src/generators/template.ts#L638
It seem expensive due to GC.
<img width="918" alt="螢幕截圖 2022-08-26 17 16 16" src="https://user-images.githubusercontent.com/16279759/186871500-5f36a873-6e55-4b07-a829-cb18e3531e93.png">
[CPU-20220826T171634.cpuprofile.zip](https://github.com/johnsoncodehk/volar/files/9431880/CPU-20220826T171634.cpuprofile.zip) | True | Read event AST instead of `transformOn` - Code: https://github.com/johnsoncodehk/volar/blob/b7db77382b38f124cec01e5749110da863558ff7/packages/vue-language-core/src/generators/template.ts#L638
It seem expensive due to GC.
<img width="918" alt="螢幕截圖 2022-08-26 17 16 16" src="https://user-images.githubusercontent.com/16279759/186871500-5f36a873-6e55-4b07-a829-cb18e3531e93.png">
[CPU-20220826T171634.cpuprofile.zip](https://github.com/johnsoncodehk/volar/files/9431880/CPU-20220826T171634.cpuprofile.zip) | perf | read event ast instead of transformon code it seem expensive due to gc img width alt 螢幕截圖 src | 1 |
99,739 | 16,450,147,120 | IssuesEvent | 2021-05-21 03:43:07 | ElliotChen/spring_boot_example | https://api.github.com/repos/ElliotChen/spring_boot_example | opened | CVE-2021-29425 (Medium) detected in commons-io-2.6.jar | security vulnerability | ## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.6.jar</b></p></summary>
<p>The Apache Commons IO library contains utility classes, stream implementations, file filters,
file comparators, endian transformation classes, and much more.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p>
<p>Path to dependency file: spring_boot_example/18telegrambot/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar</p>
<p>
Dependency Hierarchy:
- telegrambots-spring-boot-starter-4.9.2.jar (Root Library)
- telegrambots-4.9.2.jar
- :x: **commons-io-2.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ElliotChen/spring_boot_example/commit/71eca80f981d37ac9d44e9b9a3801f9abd937625">71eca80f981d37ac9d44e9b9a3801f9abd937625</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29425 (Medium) detected in commons-io-2.6.jar - ## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.6.jar</b></p></summary>
<p>The Apache Commons IO library contains utility classes, stream implementations, file filters,
file comparators, endian transformation classes, and much more.</p>
<p>Library home page: <a href="http://commons.apache.org/proper/commons-io/">http://commons.apache.org/proper/commons-io/</a></p>
<p>Path to dependency file: spring_boot_example/18telegrambot/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.6/commons-io-2.6.jar</p>
<p>
Dependency Hierarchy:
- telegrambots-spring-boot-starter-4.9.2.jar (Root Library)
- telegrambots-4.9.2.jar
- :x: **commons-io-2.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ElliotChen/spring_boot_example/commit/71eca80f981d37ac9d44e9b9a3801f9abd937625">71eca80f981d37ac9d44e9b9a3801f9abd937625</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_perf | cve medium detected in commons io jar cve medium severity vulnerability vulnerable library commons io jar the apache commons io library contains utility classes stream implementations file filters file comparators endian transformation classes and much more library home page a href path to dependency file spring boot example pom xml path to vulnerable library home wss scanner repository commons io commons io commons io jar dependency hierarchy telegrambots spring boot starter jar root library telegrambots jar x commons io jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache commons io before when invoking the method filenameutils normalize with an improper input string like foo or foo the result would be the same value thus possibly providing access to files in the parent directory but not further above thus limited path traversal if the calling code would use the result to construct a path value publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons io commons io step up your open source security game with whitesource | 0 |
25,534 | 4,369,529,202 | IssuesEvent | 2016-08-04 00:29:37 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | closed | envia error de cancelación 012, indicando error de cancelacion con el pac | bug defect | pero en el sat si cancela... pero no se refleja eso en el facturat...
segun yo existe un nuevo código en facturehoy | 1.0 | envia error de cancelación 012, indicando error de cancelacion con el pac - pero en el sat si cancela... pero no se refleja eso en el facturat...
segun yo existe un nuevo código en facturehoy | non_perf | envia error de cancelación indicando error de cancelacion con el pac pero en el sat si cancela pero no se refleja eso en el facturat segun yo existe un nuevo código en facturehoy | 0 |
40,370 | 20,809,810,187 | IssuesEvent | 2022-03-18 00:23:46 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | opened | Can we reuse SourceFiles across projects more aggressively? | Domain: Performance | 1. Clone https://github.com/mui-org/material-ui (I'm at f2d6337305b75df97d49e58f19288429b7f767e5)
2. `yarn --ignore-scripts`
3. `yarn docs:typescript:check`
4. Open `docs\src\components\typography\GradientText.tsx` in VS Code
5. Wait for loading to finish (~10s)
6. F12 on `styled` to jump to `packages\mui-material\src\styles\styled.d.ts`
`docs\tsconfig.json` includes many files from `packages\mui-material` but the `SourceFile` objects are not reused when the upstream project is opened because `docs` has `"resolveJsonModule": true` and `mui-material` does not. Ironically, this means you can cut the load time of `mui-material` down from ~1750ms to ~900ms by _adding_ `"resolveJsonModule": true`.
This happens because `resolveJsonModule` has `affectsModuleResolution: true` and that's what the `DocumentRegistry` uses to determine whether reuse is appropriate.
If we could get away with a looser definition of `affectsModuleResolution` in the language service, we could potentially save a lot of time. (Note that TS itself had an issue with failing to reuse files: #47687). | True | Can we reuse SourceFiles across projects more aggressively? - 1. Clone https://github.com/mui-org/material-ui (I'm at f2d6337305b75df97d49e58f19288429b7f767e5)
2. `yarn --ignore-scripts`
3. `yarn docs:typescript:check`
4. Open `docs\src\components\typography\GradientText.tsx` in VS Code
5. Wait for loading to finish (~10s)
6. F12 on `styled` to jump to `packages\mui-material\src\styles\styled.d.ts`
`docs\tsconfig.json` includes many files from `packages\mui-material` but the `SourceFile` objects are not reused when the upstream project is opened because `docs` has `"resolveJsonModule": true` and `mui-material` does not. Ironically, this means you can cut the load time of `mui-material` down from ~1750ms to ~900ms by _adding_ `"resolveJsonModule": true`.
This happens because `resolveJsonModule` has `affectsModuleResolution: true` and that's what the `DocumentRegistry` uses to determine whether reuse is appropriate.
If we could get away with a looser definition of `affectsModuleResolution` in the language service, we could potentially save a lot of time. (Note that TS itself had an issue with failing to reuse files: #47687). | perf | can we reuse sourcefiles across projects more aggressively clone i m at yarn ignore scripts yarn docs typescript check open docs src components typography gradienttext tsx in vs code wait for loading to finish on styled to jump to packages mui material src styles styled d ts docs tsconfig json includes many files from packages mui material but the sourcefile objects are not reused when the upstream project is opened because docs has resolvejsonmodule true and mui material does not ironically this means you can cut the load time of mui material down from to by adding resolvejsonmodule true this happens because resolvejsonmodule has affectsmoduleresolution true and that s what the documentregistry uses to determine whether reuse is appropriate if we could get away with a looser definition of affectsmoduleresolution in the language service we could potentially save a lot of time note that ts itself had an issue with failing to reuse files | 1 |
47,287 | 19,593,430,060 | IssuesEvent | 2022-01-05 15:18:50 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | Problem changes on launch template does not trigger changes on aws eks managed nodes | bug service/ec2 | I'm investigate why changes on launch template does not trigger changes on aws eks managed nodes
In my example I'm adding a new Security Group and when I apply the change manually using `terragrunt apply` it will provision the new Security Group, but it doesn't add to Auto Scaling Groups under Security Group IDs and it also doesn't apply the new version to Launch Template.
In this case I will have to manually rerun the `terragrunt apply` a second time for it to apply these changes.
The bad thing is that all my flow runs on top of a pipeline and I can't keep running the terragrunt twice manually, it has to be done automatically.
Do you have any solution for my problem or have you gone through a similar situation?
Am I forgetting something?
Regards,
João Lobo
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
| 1.0 | Problem changes on launch template does not trigger changes on aws eks managed nodes - I'm investigate why changes on launch template does not trigger changes on aws eks managed nodes
In my example I'm adding a new Security Group and when I apply the change manually using `terragrunt apply` it will provision the new Security Group, but it doesn't add to Auto Scaling Groups under Security Group IDs and it also doesn't apply the new version to Launch Template.
In this case I will have to manually rerun the `terragrunt apply` a second time for it to apply these changes.
The bad thing is that all my flow runs on top of a pipeline and I can't keep running the terragrunt twice manually, it has to be done automatically.
Do you have any solution for my problem or have you gone through a similar situation?
Am I forgetting something?
Regards,
João Lobo
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
| non_perf | problem changes on launch template does not trigger changes on aws eks managed nodes i m investigate why changes on launch template does not trigger changes on aws eks managed nodes in my example i m adding a new security group and when i apply the change manually using terragrunt apply it will provision the new security group but it doesn t add to auto scaling groups under security group ids and it also doesn t apply the new version to launch template in this case i will have to manually rerun the terragrunt apply a second time for it to apply these changes the bad thing is that all my flow runs on top of a pipeline and i can t keep running the terragrunt twice manually it has to be done automatically do you have any solution for my problem or have you gone through a similar situation am i forgetting something regards joão lobo community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment | 0 |
71,264 | 3,354,785,988 | IssuesEvent | 2015-11-18 13:58:58 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | oc export should strip references to generated token and dockercfg secrets | area/security component/cli priority/P2 | `oc export sa` doesn't remove the token and dockercfg secret references. In most use-cases, those references won't be valid when the SA is re-created. I think we should remove them on export and let people use `oc get` if they really want to keep them.
@smarterclayton is there some way to tag a field as "don't export by default"?
| 1.0 | oc export should strip references to generated token and dockercfg secrets - `oc export sa` doesn't remove the token and dockercfg secret references. In most use-cases, those references won't be valid when the SA is re-created. I think we should remove them on export and let people use `oc get` if they really want to keep them.
@smarterclayton is there some way to tag a field as "don't export by default"?
| non_perf | oc export should strip references to generated token and dockercfg secrets oc export sa doesn t remove the token and dockercfg secret references in most use cases those references won t be valid when the sa is re created i think we should remove them on export and let people use oc get if they really want to keep them smarterclayton is there some way to tag a field as don t export by default | 0 |
24,726 | 12,379,916,505 | IssuesEvent | 2020-05-19 13:15:53 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | opened | Colab Runtime Crashes training on musdb18 | type:performance |
**System information**
- Have I written custom code :Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Using Google Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
- TensorFlow installed from (source or binary):Source
- TensorFlow version (use command below):2.0
- Python version:3.6
I have built an audio separation model and am trying to train it on the musdb18 dataset. It contains a total of 150 audio tracks, 100 for training and 50 for test. I load the dataset like: I load a song, preprocess it, then feed it to the network. Train my model for one epoch, then load another song and train it. I don't think it should exceed RAM limit, but somehow the runtime crashes after a few epochs, with a message that the runtime crashed using all RAM. Could someone please guide me why this is so.
When I trained my previous tensordlow model on a single song it worked smoothly. | True | Colab Runtime Crashes training on musdb18 -
**System information**
- Have I written custom code :Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Using Google Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
- TensorFlow installed from (source or binary):Source
- TensorFlow version (use command below):2.0
- Python version:3.6
I have built an audio separation model and am trying to train it on the musdb18 dataset. It contains a total of 150 audio tracks, 100 for training and 50 for test. I load the dataset like: I load a song, preprocess it, then feed it to the network. Train my model for one epoch, then load another song and train it. I don't think it should exceed RAM limit, but somehow the runtime crashes after a few epochs, with a message that the runtime crashed using all RAM. Could someone please guide me why this is so.
When I trained my previous tensordlow model on a single song it worked smoothly. | perf | colab runtime crashes training on system information have i written custom code yes os platform and distribution e g linux ubuntu using google colab mobile device e g iphone pixel samsung galaxy if the issue happens on mobile device no tensorflow installed from source or binary source tensorflow version use command below python version i have built an audio separation model and am trying to train it on the dataset it contains a total of audio tracks for training and for test i load the dataset like i load a song preprocess it then feed it to the network train my model for one epoch then load another song and train it i don t think it should exceed ram limit but somehow the runtime crashes after a few epochs with a message that the runtime crashed using all ram could someone please guide me why this is so when i trained my previous tensordlow model on a single song it worked smoothly | 1 |
4,989 | 4,750,740,074 | IssuesEvent | 2016-10-22 14:15:12 | alexwnovak/GitWrite | https://api.github.com/repos/alexwnovak/GitWrite | closed | Look into startup performance | performance | ## Details
Materials get generated on launch, and there's a task delay of ~200 milliseconds. It ensures the animation is smooth such that they're not both running at the same time, which is dumb. I wonder if the material generation can be on a super-low-priority thread, so it won't mess up the UI thread's smoothness. | True | Look into startup performance - ## Details
Materials get generated on launch, and there's a task delay of ~200 milliseconds. It ensures the animation is smooth such that they're not both running at the same time, which is dumb. I wonder if the material generation can be on a super-low-priority thread, so it won't mess up the UI thread's smoothness. | perf | look into startup performance details materials get generated on launch and there s a task delay of milliseconds it ensures the animation is smooth such that they re not both running at the same time which is dumb i wonder if the material generation can be on a super low priority thread so it won t mess up the ui thread s smoothness | 1 |
98,268 | 20,627,015,133 | IssuesEvent | 2022-03-08 00:05:49 | withfig/fig | https://api.github.com/repos/withfig/fig | opened | Shell not compatible error even when using bash / zsh | bug codebase:shell_integrations | > Hi,
whatever shell I use: bash or zsh, fig still complains that the shell is not compatible:
>
> ~ fig doctor
🟡 Could not get current user shell
>
> ❌ Default shell is not compatible
You are not using a supported shell.
Only zsh, bash, or fish are integrated with Fig.
>
> ❌ Could not get iTerm version
>
> ❌ Doctor found errors. Please fix them and try again.
>
> In bash, fig is activated anyway and seems to work properly.
If you are facing this issue, could you please comment 1. your bashrc/zshrc and 2. a screenshot of the output when you run `ps`
| 1.0 | Shell not compatible error even when using bash / zsh - > Hi,
whatever shell I use: bash or zsh, fig still complains that the shell is not compatible:
>
> ~ fig doctor
🟡 Could not get current user shell
>
> ❌ Default shell is not compatible
You are not using a supported shell.
Only zsh, bash, or fish are integrated with Fig.
>
> ❌ Could not get iTerm version
>
> ❌ Doctor found errors. Please fix them and try again.
>
> In bash, fig is activated anyway and seems to work properly.
If you are facing this issue, could you please comment 1. your bashrc/zshrc and 2. a screenshot of the output when you run `ps`
| non_perf | shell not compatible error even when using bash zsh hi whatever shell i use bash or zsh fig still complains that the shell is not compatible fig doctor 🟡 could not get current user shell ❌ default shell is not compatible you are not using a supported shell only zsh bash or fish are integrated with fig ❌ could not get iterm version ❌ doctor found errors please fix them and try again in bash fig is activated anyway and seems to work properly if you are facing this issue could you please comment your bashrc zshrc and a screenshot of the output when you run ps | 0 |
750,717 | 26,213,996,640 | IssuesEvent | 2023-01-04 09:25:45 | NomicFoundation/ignition | https://api.github.com/repos/NomicFoundation/ignition | closed | Reduce number of imports in ignition's main module | priority:low | The `core/index` module exports a lot of stuff right now. This isn't super important at this moment, because in this first stage the hardhat plugin and ignition's core will be somehow coupled to let us iterate more quickly, but eventually we should work on having a smaller, deeper interface. | 1.0 | Reduce number of imports in ignition's main module - The `core/index` module exports a lot of stuff right now. This isn't super important at this moment, because in this first stage the hardhat plugin and ignition's core will be somehow coupled to let us iterate more quickly, but eventually we should work on having a smaller, deeper interface. | non_perf | reduce number of imports in ignition s main module the core index module exports a lot of stuff right now this isn t super important at this moment because in this first stage the hardhat plugin and ignition s core will be somehow coupled to let us iterate more quickly but eventually we should work on having a smaller deeper interface | 0 |
17,024 | 9,576,405,022 | IssuesEvent | 2019-05-07 09:01:16 | the-deep/deeper | https://api.github.com/repos/the-deep/deeper | closed | GeoInput: Use new searchable select input | performance prio-high | Use new select input for GeoInput
Related to https://github.com/the-deep/deeper/issues/314
Hours required: 4 | True | GeoInput: Use new searchable select input - Use new select input for GeoInput
Related to https://github.com/the-deep/deeper/issues/314
Hours required: 4 | perf | geoinput use new searchable select input use new select input for geoinput related to hours required | 1 |
1,305 | 2,933,629,698 | IssuesEvent | 2015-06-30 00:36:45 | postmanlabs/postman-app-support | https://api.github.com/repos/postmanlabs/postman-app-support | closed | UI slow on 1.0.3 with "big" history | Performance | Windows 8.1, 24GB mem,
Chrome 39.0.2171.95 m
Not sure if this is related - I haven't used Postman a while and needed it again about two/three weeks ago, but it was hanging even on freshly started computer or after killing the process and restarting the extension, so it was unusable.
Today I tried again and it seems to work, but the UI is very slow.
I have only about 20 requests in two collection, but the mouseover-highlight takes 1-3secs and if I click, it again delays for 1-3secs before actually opening the request details on the right side.
Alright, found the problem. It was the history. I only had a few hundred entries, but apparently this slows everything down, even if you don't see the list.
I switched to the history tab, which took 10+ seconds, cleared it, which took 20+seconds and now it's fast again.
Something must have changed in the code since I last used Postman, because the UI wasn't that slow with the same history.
And the hidden history shouldn't slow the whole UI down if not shown, no matter how big.
| True | UI slow on 1.0.3 with "big" history - Windows 8.1, 24GB mem,
Chrome 39.0.2171.95 m
Not sure if this is related - I haven't used Postman a while and needed it again about two/three weeks ago, but it was hanging even on freshly started computer or after killing the process and restarting the extension, so it was unusable.
Today I tried again and it seems to work, but the UI is very slow.
I have only about 20 requests in two collection, but the mouseover-highlight takes 1-3secs and if I click, it again delays for 1-3secs before actually opening the request details on the right side.
Alright, found the problem. It was the history. I only had a few hundred entries, but apparently this slows everything down, even if you don't see the list.
I switched to the history tab, which took 10+ seconds, cleared it, which took 20+seconds and now it's fast again.
Something must have changed in the code since I last used Postman, because the UI wasn't that slow with the same history.
And the hidden history shouldn't slow the whole UI down if not shown, no matter how big.
| perf | ui slow on with big history windows mem chrome m not sure if this is related i haven t used postman a while and needed it again about two three weeks ago but it was hanging even on freshly started computer or after killing the process and restarting the extension so it was unusable today i tried again and it seems to work but the ui is very slow i have only about requests in two collection but the mouseover highlight takes and if i click it again delays for before actually opening the request details on the right side alright found the problem it was the history i only had a few hundred entries but apparently this slows everything down even if you don t see the list i switched to the history tab which took seconds cleared it which took seconds and now it s fast again something must have changed in the code since i last used postman because the ui wasn t that slow with the same history and the hidden history shouldn t slow the whole ui down if not shown no matter how big | 1 |
56,227 | 31,813,211,948 | IssuesEvent | 2023-09-13 18:23:30 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [BE] 10-10EZ - Monitoring & Alerts: Add synthetic DataDog health checks | 1010-team website-performance monitoring 1010-ez needs-refinement | ## Background
To support the PACT Act enrollment deadline 9/30, we need to add synthetic datadog health checks in addition to the fwdproxy-based health checks for the downstream ES services.
Hypothesis: It is much easier to configure/tweak datadog health checks and easier to see why a health check failed.
i believe at least one of these was created already by mike chelen. - we may not have this in our code base, check with Platform
## Tasks
- [ ] Set up a monitor to track synthetic datadog health checks in addition to the fwdproxy-based health checks for the downstream ES services.
- [ ] Set up alert to notify #health-tools-1010-apm channel
- [ ] Test alerts, if applicable
## Acceptance Criteria
-
| True | [BE] 10-10EZ - Monitoring & Alerts: Add synthetic DataDog health checks - ## Background
To support the PACT Act enrollment deadline 9/30, we need to add synthetic datadog health checks in addition to the fwdproxy-based health checks for the downstream ES services.
Hypothesis: It is much easier to configure/tweak datadog health checks and easier to see why a health check failed.
i believe at least one of these was created already by mike chelen. - we may not have this in our code base, check with Platform
## Tasks
- [ ] Set up a monitor to track synthetic datadog health checks in addition to the fwdproxy-based health checks for the downstream ES services.
- [ ] Set up alert to notify #health-tools-1010-apm channel
- [ ] Test alerts, if applicable
## Acceptance Criteria
-
| perf | monitoring alerts add synthetic datadog health checks background to support the pact act enrollment deadline we need to add synthetic datadog health checks in addition to the fwdproxy based health checks for the downstream es services hypothesis it is much easier to configure tweak datadog health checks and easier to see why a health check failed i believe at least one of these was created already by mike chelen we may not have this in our code base check with platform tasks set up a monitor to track synthetic datadog health checks in addition to the fwdproxy based health checks for the downstream es services set up alert to notify health tools apm channel test alerts if applicable acceptance criteria | 1 |
23,463 | 11,980,782,466 | IssuesEvent | 2020-04-07 09:55:49 | astarte-platform/astarte | https://api.github.com/repos/astarte-platform/astarte | opened | DataUpdater periodic tasks | app:data_updater_plant performance | DataUpdater processes stay alive until DataUpdaterPlant pod is restarted, even if they have been connected just once.
A periodic timer should be used, and when there are no incoming "ping" messages for more than X seconds (that should be a configurable timeout) the device process should terminate itself.
The same timer might be used for purging caches, instead of the current mechanism that is slightly complicated and hard to maintain.
This change would also allow configuring the cache purge/reload interval. | True | DataUpdater periodic tasks - DataUpdater processes stay alive until DataUpdaterPlant pod is restarted, even if they have been connected just once.
A periodic timer should be used, and when there are no incoming "ping" messages for more than X seconds (that should be a configurable timeout) the device process should terminate itself.
The same timer might be used for purging caches, instead of the current mechanism that is slightly complicated and hard to maintain.
This change would also allow configuring the cache purge/reload interval. | perf | dataupdater periodic tasks dataupdater processes stay alive until dataupdaterplant pod is restarted even if they have been connected just once a periodic timer should be used and when there are no incoming ping messages for more than x seconds that should be a configurable timeout the device process should terminate itself the same timer might be used for purging caches instead of the current mechanism that is slightly complicated and hard to maintain this change would also allow configuring the cache purge reload interval | 1 |
35,397 | 17,062,633,648 | IssuesEvent | 2021-07-07 00:30:12 | venaturum/staircase | https://api.github.com/repos/venaturum/staircase | closed | Refactor percentile_stairs method | performance | percentile_stairs returns a Stairs object which represents a percentile function - let's call it f(x):
for 0 <= x <= 100,
at least x% of the domain has a value of f(x) or greater
The percentile function can be calculated by taking the step function over the required domain, and sorting the piecewise intervals by value, then scaling the domain to be [0, 100]. At the moment this is done utilising pandas dataframe. It could potentially be done more directly, which may yield better performance. | True | Refactor percentile_stairs method - percentile_stairs returns a Stairs object which represents a percentile function - let's call it f(x):
for 0 <= x <= 100,
at least x% of the domain has a value of f(x) or greater
The percentile function can be calculated by taking the step function over the required domain, and sorting the piecewise intervals by value, then scaling the domain to be [0, 100]. At the moment this is done utilising pandas dataframe. It could potentially be done more directly, which may yield better performance. | perf | refactor percentile stairs method percentile stairs returns a stairs object which represents a percentile function let s call it f x for x at least x of the domain has a value of f x or greater the percentile function can be calculated by taking the step function over the required domain and sorting the piecewise intervals by value then scaling the domain to be at the moment this is done utilising pandas dataframe it could potentially be done more directly which may yield better performance | 1 |
44,240 | 23,526,808,275 | IssuesEvent | 2022-08-19 11:41:17 | apache/arrow-rs | https://api.github.com/repos/apache/arrow-rs | closed | Avoid unecessary copies in Arrow IPC reader | good first issue arrow enhancement performance | **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
The Arrow IPC format is designed to avoid memory copies when moving data from one implementation to another. However, as noted by @tustvold on https://github.com/apache/arrow-rs/pull/2369#discussion_r944214515, the arrow-rs ipc reader implementation is actually copying data unnecessarily
**Describe the solution you'd like**
In the ipc code, create a Buffer initially and rewrite the ipc implementation in terms of `Buffer` rather than `&[u8]` (as the final output needs to be in a `Buffer`
**Describe alternatives you've considered**
N/A
**Additional context**
Came up in the context of https://github.com/apache/arrow-rs/pull/2369
Possibly also related to https://github.com/apache/arrow-rs/issues/189
| True | Avoid unecessary copies in Arrow IPC reader - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
The Arrow IPC format is designed to avoid memory copies when moving data from one implementation to another. However, as noted by @tustvold on https://github.com/apache/arrow-rs/pull/2369#discussion_r944214515, the arrow-rs ipc reader implementation is actually copying data unnecessarily
**Describe the solution you'd like**
In the ipc code, create a Buffer initially and rewrite the ipc implementation in terms of `Buffer` rather than `&[u8]` (as the final output needs to be in a `Buffer`
**Describe alternatives you've considered**
N/A
**Additional context**
Came up in the context of https://github.com/apache/arrow-rs/pull/2369
Possibly also related to https://github.com/apache/arrow-rs/issues/189
| perf | avoid unecessary copies in arrow ipc reader is your feature request related to a problem or challenge please describe what you are trying to do the arrow ipc format is designed to avoid memory copies when moving data from one implementation to another however as noted by tustvold on the arrow rs ipc reader implementation is actually copying data unnecessarily describe the solution you d like in the ipc code create a buffer initially and rewrite the ipc implementation in terms of buffer rather than as the final output needs to be in a buffer describe alternatives you ve considered n a additional context came up in the context of possibly also related to | 1 |
309,521 | 26,667,333,076 | IssuesEvent | 2023-01-26 06:19:59 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | ccl/backupccl: TestBackupRestoreSingleUserfile failed | C-test-failure O-robot branch-master | ccl/backupccl.TestBackupRestoreSingleUserfile [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455756?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455756?buildTab=artifacts#/) on master @ [2ad8df3df3272110705984efc32f1453631ce602](https://github.com/cockroachdb/cockroach/commits/2ad8df3df3272110705984efc32f1453631ce602):
```
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:413 +0x7ab
github.com/cockroachdb/cockroach/pkg/jobs.(*Registry).resumeJob.func1()
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:333 +0x128
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470 +0x1f6
Goroutine 16038 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:461 +0x619
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:332 +0x1cb
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.GRPCTransportFactory()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/transport_race.go:98 +0x161
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:2060 +0xd0d
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1668 +0xa44
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1240 +0x592
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*RangeIterator).Seek()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/range_iter.go:208 +0x73a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1234 +0x2b7
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).Send()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:861 +0xa59
github.com/cockroachdb/cockroach/pkg/kv.lookupRangeFwdScan()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:330 +0x832
github.com/cockroachdb/cockroach/pkg/kv.RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:205 +0x315
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:570 +0x128
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).performRangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:1032 +0x3fe
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl.func1()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:920 +0xc5
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout()
github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:104 +0x1a9
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:917 +0x1a8
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).tryLookup.func3()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:815 +0xd9
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:387 +0x51
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:305 +0x147
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:386 +0x2a4
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).DoChan.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:356 +0xd0
==================
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestBackupRestoreSingleUserfile.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | ccl/backupccl: TestBackupRestoreSingleUserfile failed - ccl/backupccl.TestBackupRestoreSingleUserfile [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455756?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/8455756?buildTab=artifacts#/) on master @ [2ad8df3df3272110705984efc32f1453631ce602](https://github.com/cockroachdb/cockroach/commits/2ad8df3df3272110705984efc32f1453631ce602):
```
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:413 +0x7ab
github.com/cockroachdb/cockroach/pkg/jobs.(*Registry).resumeJob.func1()
github.com/cockroachdb/cockroach/pkg/jobs/adopt.go:333 +0x128
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx.func2()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:470 +0x1f6
Goroutine 16038 (running) created at:
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTaskEx()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:461 +0x619
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunAsyncTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:332 +0x1cb
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.GRPCTransportFactory()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/transport_race.go:98 +0x161
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendToReplicas()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:2060 +0xd0d
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).sendPartialBatch()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1668 +0xa44
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1240 +0x592
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*RangeIterator).Seek()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/range_iter.go:208 +0x73a
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).divideAndSendBatchToRanges()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:1234 +0x2b7
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).Send()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:861 +0xa59
github.com/cockroachdb/cockroach/pkg/kv.lookupRangeFwdScan()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:330 +0x832
github.com/cockroachdb/cockroach/pkg/kv.RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/range_lookup.go:205 +0x315
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord.(*DistSender).RangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/kvcoord/dist_sender.go:570 +0x128
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).performRangeLookup()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:1032 +0x3fe
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl.func1()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:920 +0xc5
github.com/cockroachdb/cockroach/pkg/util/contextutil.RunWithTimeout()
github.com/cockroachdb/cockroach/pkg/util/contextutil/context.go:104 +0x1a9
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.tryLookupImpl()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:917 +0x1a8
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache.(*RangeCache).tryLookup.func3()
github.com/cockroachdb/cockroach/pkg/kv/kvclient/rangecache/range_cache.go:815 +0xd9
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:387 +0x51
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTask()
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:305 +0x147
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).doCall()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:386 +0x2a4
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight.(*Group).DoChan.func1()
github.com/cockroachdb/cockroach/pkg/util/syncutil/singleflight/singleflight.go:356 +0xd0
==================
```
<p>Parameters: <code>TAGS=bazel,gss,race</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestBackupRestoreSingleUserfile.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_perf | ccl backupccl testbackuprestoresingleuserfile failed ccl backupccl testbackuprestoresingleuserfile with on master github com cockroachdb cockroach pkg jobs adopt go github com cockroachdb cockroach pkg jobs registry resumejob github com cockroachdb cockroach pkg jobs adopt go github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go goroutine running created at github com cockroachdb cockroach pkg util stop stopper runasynctaskex github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg util stop stopper runasynctask github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg kv kvclient kvcoord grpctransportfactory github com cockroachdb cockroach pkg kv kvclient kvcoord transport race go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender sendtoreplicas github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender sendpartialbatch github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender divideandsendbatchtoranges github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord rangeiterator seek github com cockroachdb cockroach pkg kv kvclient kvcoord range iter go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender divideandsendbatchtoranges github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender send github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv lookuprangefwdscan github com cockroachdb cockroach pkg kv range lookup go github com cockroachdb cockroach pkg kv rangelookup github com cockroachdb cockroach pkg kv range lookup go github com cockroachdb cockroach pkg kv kvclient kvcoord distsender rangelookup github com cockroachdb cockroach pkg kv kvclient kvcoord dist sender go github com cockroachdb cockroach pkg kv kvclient rangecache rangecache performrangelookup github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg kv kvclient rangecache trylookupimpl github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg util contextutil runwithtimeout github com cockroachdb cockroach pkg util contextutil context go github com cockroachdb cockroach pkg kv kvclient rangecache trylookupimpl github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg kv kvclient rangecache rangecache trylookup github com cockroachdb cockroach pkg kv kvclient rangecache range cache go github com cockroachdb cockroach pkg util syncutil singleflight group docall github com cockroachdb cockroach pkg util syncutil singleflight singleflight go github com cockroachdb cockroach pkg util stop stopper runtask github com cockroachdb cockroach pkg util stop stopper go github com cockroachdb cockroach pkg util syncutil singleflight group docall github com cockroachdb cockroach pkg util syncutil singleflight singleflight go github com cockroachdb cockroach pkg util syncutil singleflight group dochan github com cockroachdb cockroach pkg util syncutil singleflight singleflight go parameters tags bazel gss race help see also cc cockroachdb disaster recovery | 0 |
8,887 | 6,672,690,081 | IssuesEvent | 2017-10-04 12:42:26 | potree/potree | https://api.github.com/repos/potree/potree | closed | Reduce file sizes / load things only when needed | enhancement Hacktoberfest performance |
Some files are way too large, others are loaded even though they're not used.
Output from the chrome network tracker:

* The skybox (nx, ny, nz, px, py, pz) is loaded even though the gradient background is shown. That's around 1.5MB for unused resources. Should be loaded on demand.
* cloud_icon.svg is 761kb, even though it's just a small icon. Should be replaced with something else in any case.
This would reduce the size of the page from around 5.4MB to around 3.1MB. | True | Reduce file sizes / load things only when needed -
Some files are way too large, others are loaded even though they're not used.
Output from the chrome network tracker:

* The skybox (nx, ny, nz, px, py, pz) is loaded even though the gradient background is shown. That's around 1.5MB for unused resources. Should be loaded on demand.
* cloud_icon.svg is 761kb, even though it's just a small icon. Should be replaced with something else in any case.
This would reduce the size of the page from around 5.4MB to around 3.1MB. | perf | reduce file sizes load things only when needed some files are way too large others are loaded even though they re not used output from the chrome network tracker the skybox nx ny nz px py pz is loaded even though the gradient background is shown that s around for unused resources should be loaded on demand cloud icon svg is even though it s just a small icon should be replaced with something else in any case this would reduce the size of the page from around to around | 1 |
306,088 | 9,380,566,639 | IssuesEvent | 2019-04-04 17:24:44 | googleapis/gapic-generator | https://api.github.com/repos/googleapis/gapic-generator | closed | Go generating non-compiling bits | Priority: P1 Type: Bug | See: https://code-review.googlesource.com/c/gocloud/+/39510/3/firestore/apiv1beta1/firestore_client.go
```
// Write streams batches of document updates and deletes, in order.
func (c *Client) Write(ctx context.Context, opts ...gax.CallOption) (firestorepb.Firestore_WriteClient, error) {
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "database", req.GetDatabase()))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
```
It tries to use a `req`, which doesn't exist. | 1.0 | Go generating non-compiling bits - See: https://code-review.googlesource.com/c/gocloud/+/39510/3/firestore/apiv1beta1/firestore_client.go
```
// Write streams batches of document updates and deletes, in order.
func (c *Client) Write(ctx context.Context, opts ...gax.CallOption) (firestorepb.Firestore_WriteClient, error) {
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "database", req.GetDatabase()))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
```
It tries to use a `req`, which doesn't exist. | non_perf | go generating non compiling bits see write streams batches of document updates and deletes in order func c client write ctx context context opts gax calloption firestorepb firestore writeclient error md metadata pairs x goog request params fmt sprintf s v database req getdatabase ctx insertmetadata ctx c xgoogmetadata md it tries to use a req which doesn t exist | 0 |
181,508 | 14,878,760,628 | IssuesEvent | 2021-01-20 06:24:38 | JNTUK-Instant/The-React-Port | https://api.github.com/repos/JNTUK-Instant/The-React-Port | opened | if SPA is feasible, Consider these | documentation enhancement | if #3 is feasible
- Consider using the JSON files to feed the data
- Create a new layout:
- A left-hand-side Navigation bar for browsing through the available semesters and years
- A filter bar
| 1.0 | if SPA is feasible, Consider these - if #3 is feasible
- Consider using the JSON files to feed the data
- Create a new layout:
- A left-hand-side Navigation bar for browsing through the available semesters and years
- A filter bar
| non_perf | if spa is feasible consider these if is feasible consider using the json files to feed the data create a new layout a left hand side navigation bar for browsing through the available semesters and years a filter bar | 0 |
29,109 | 13,943,857,986 | IssuesEvent | 2020-10-23 00:17:27 | microsoft/terminal | https://api.github.com/repos/microsoft/terminal | closed | Optimize the creation of the jumplist entries | Area-Performance In-PR Issue-Task Product-Terminal | Taken from some rudimentary tracing that was done in #7774:

`CShellLink::SetPath` seems to be quite performance intensive for some reason. This is probably unnecessarily slowing down our startup.
Idea: maybe we could dispatch a background thread to update the jumplist? We don't _really_ need to do that on the main thread, right? | True | Optimize the creation of the jumplist entries - Taken from some rudimentary tracing that was done in #7774:

`CShellLink::SetPath` seems to be quite performance intensive for some reason. This is probably unnecessarily slowing down our startup.
Idea: maybe we could dispatch a background thread to update the jumplist? We don't _really_ need to do that on the main thread, right? | perf | optimize the creation of the jumplist entries taken from some rudimentary tracing that was done in cshelllink setpath seems to be quite performance intensive for some reason this is probably unnecessarily slowing down our startup idea maybe we could dispatch a background thread to update the jumplist we don t really need to do that on the main thread right | 1 |
20,044 | 10,586,790,060 | IssuesEvent | 2019-10-08 20:31:35 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | [Meta] Add performance report for top metrics on each PR | eng:performance | Q4 front-end perf OKRs: this is objective 2, KR 1. Possible dupe to #5020.
### Why/User Benefit/User Problem
To help prevent regressions to our top metrics (warm start from app link, cold start from home screen), we need to regularly monitor the performance dashboards (for the "boiling frog" problem). However, if the performance of the app is not visible enough, i.e. to the app developers, it will be an uphill battle trying to improve performance to meet our objectives.
To increase visibility of performance concerns, we should add a performance report to each PR (if possible).
### Acceptance Criteria (Added by PM. For EPM to track when a Meta feature is done)
- Each PR has a performance report with the delta from master & long term trends (e.g. changes from a week, from a month). If this is not possible, consider:
- Running performance only upon request (e.g. by adding a keyword)
- Running performance tests less frequently
### What / Requirements (Added by PM and Eng Manager)
---
afaik, @colintheshots has looked into enabling this with Nimbledroid so you should consult him when working on this. | True | [Meta] Add performance report for top metrics on each PR - Q4 front-end perf OKRs: this is objective 2, KR 1. Possible dupe to #5020.
### Why/User Benefit/User Problem
To help prevent regressions to our top metrics (warm start from app link, cold start from home screen), we need to regularly monitor the performance dashboards (for the "boiling frog" problem). However, if the performance of the app is not visible enough, i.e. to the app developers, it will be an uphill battle trying to improve performance to meet our objectives.
To increase visibility of performance concerns, we should add a performance report to each PR (if possible).
### Acceptance Criteria (Added by PM. For EPM to track when a Meta feature is done)
- Each PR has a performance report with the delta from master & long term trends (e.g. changes from a week, from a month). If this is not possible, consider:
- Running performance only upon request (e.g. by adding a keyword)
- Running performance tests less frequently
### What / Requirements (Added by PM and Eng Manager)
---
afaik, @colintheshots has looked into enabling this with Nimbledroid so you should consult him when working on this. | perf | add performance report for top metrics on each pr front end perf okrs this is objective kr possible dupe to why user benefit user problem to help prevent regressions to our top metrics warm start from app link cold start from home screen we need to regularly monitor the performance dashboards for the boiling frog problem however if the performance of the app is not visible enough i e to the app developers it will be an uphill battle trying to improve performance to meet our objectives to increase visibility of performance concerns we should add a performance report to each pr if possible acceptance criteria added by pm for epm to track when a meta feature is done each pr has a performance report with the delta from master long term trends e g changes from a week from a month if this is not possible consider running performance only upon request e g by adding a keyword running performance tests less frequently what requirements added by pm and eng manager afaik colintheshots has looked into enabling this with nimbledroid so you should consult him when working on this | 1 |
437,480 | 12,598,220,081 | IssuesEvent | 2020-06-11 02:17:52 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.taringa.net - The comment images are not displayed | browser-fenix engine-gecko priority-normal severity-important | <!-- @browser: Firefox Mobile 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53907 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.taringa.net/global
**Browser / Version**: Firefox Mobile 77.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Images don't load. They load in Firefox for Android (Fennec) with adblocker disabled (uBlock Origin) and in Chrome. But they don't load in Fenix, beta and Nightly.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/6/e3bb7961-6092-466a-b625-9d9fd3fe170b.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.taringa.net - The comment images are not displayed - <!-- @browser: Firefox Mobile 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53907 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.taringa.net/global
**Browser / Version**: Firefox Mobile 77.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Images not loaded
**Steps to Reproduce**:
Images don't load. They load in Firefox for Android (Fennec) with adblocker disabled (uBlock Origin) and in Chrome. But they don't load in Fenix, beta and Nightly.
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/6/e3bb7961-6092-466a-b625-9d9fd3fe170b.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_perf | the comment images are not displayed url browser version firefox mobile operating system android tested another browser yes chrome problem type design is broken description images not loaded steps to reproduce images don t load they load in firefox for android fennec with adblocker disabled ublock origin and in chrome but they don t load in fenix beta and nightly view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
32,819 | 15,652,077,775 | IssuesEvent | 2021-03-23 10:57:10 | avoinea/stiam.ro | https://api.github.com/repos/avoinea/stiam.ro | closed | Update jquery plugins | performance | - cordova 3.3.0
- masonry
- jquery mobile 1.4
- iscollview
- lazyload
Cordova plugins:
- ShareSocial
| True | Update jquery plugins - - cordova 3.3.0
- masonry
- jquery mobile 1.4
- iscollview
- lazyload
Cordova plugins:
- ShareSocial
| perf | update jquery plugins cordova masonry jquery mobile iscollview lazyload cordova plugins sharesocial | 1 |
66,731 | 3,257,579,348 | IssuesEvent | 2015-10-20 18:28:33 | cs2103aug2015-t10-2j/main | https://api.github.com/repos/cs2103aug2015-t10-2j/main | closed | A user can indicate priority levels for tasks | priority.high type.story | ... so that more important ones can be distinguished from the rest. | 1.0 | A user can indicate priority levels for tasks - ... so that more important ones can be distinguished from the rest. | non_perf | a user can indicate priority levels for tasks so that more important ones can be distinguished from the rest | 0 |
135,929 | 5,266,945,162 | IssuesEvent | 2017-02-04 17:48:14 | Beep6581/RawTherapee | https://api.github.com/repos/Beep6581/RawTherapee | closed | Change our version habits | Compilation Priority-Critical | This issue is very important - git builds won't work well until it's fixed. I need everyone's help and input.
I'm writing this as a summary of hours of effort today to find a reliable solution to a problem I didn't know we had; to help you understand the problem, and to help myself see it clearly given the opportunity to lay it out here.
The way we form version numbers in CMake+other has become broken. I believe our system became obsolete the moment we switched to git. Up to now we would create a version of the form `4.2.1234`. The form is wrong in the gitosphere, because you could end up with that number simultaneously on branches `master`, `gtk3`, `locallab` and `pixelshift`, for example. What does that number tell us then? Nothing. That sort of version was created using regular expressions to parse several chunks of git output, and the only reason it "worked" till now is because we didn't tag any releases of the form `4.2-beta1`, `4.2-rc1`, etc. It happened to work because we used numeric versions. Well now we have annotated tags `5.0-gtk2` and `5.0-gtk3`, and everyone who compiles from git will run into a problem; even if the build succeeds, the version in AboutThisBuild and stored in the binary are wrong, and will cause trouble. Quote from an email:
> When I run the app, I am getting the error:
Error: malformed version string; "`5.0.5.0-gtk3`" must follow this format: `xx.xx.xx.xx`. Admitting it's a developer version...
This is coming from `ReleaseInfo.cmake` which states:
`set(GIT_VERSION 5.0.5.0-gtk2)`
Where the version is used:
- PP3 file header `AppVersion=4.2.1234`,
- Cache key files, `~/.cache/RawTherapee/data/amsterdam.pef.0cb(...)32f2.txt:20:Version=4.2.1234`
- AboutThisBuild, `Version: 4.2.1234`
- Window title,
- Saved file metadata, `-EXIF:Software=RawTherapee 4.2.1234`
- Console output
- Options file
- Some files in `tools/`, notably `WindowsInnoSetup.iss.in`
Git is flexible, but as such we cannot or should not stick to old models. RawTherapee can be compiled using several "starting points":
- By checking out a branch, `git checkout pixelshift`
- By checking out a commit, `git checkout 4649e130`
- By checking out a tag the detached way, `git checkout 5.0-gtk2`
- By checking out a tag the branch way, `git checkout -b master-5.0 5.0-gtk2`
Checking out in various ways means you have access to different things. It has serious consequences, it makes getting our old 4.2.1234 version form difficult or impossible or very dirty. I don't like it. Even if we did manage to script a way to get the 4.2.1234 version format, it wouldn't mean anything without more information such as a branch name, and most of the items listed above don't store one. Even if they did, we cannot always find a branch name!
There are several situations builders can find themselves in, I will give one example:
`git checkout <branch>`, then you would think you could find the "latest tag distance" using `git describe --tags --always`, and it could work, except... It will return different things depending on what your repo's history is.
```bash
/tmp/test $ git checkout master
Switched to branch 'master'
/tmp/test $ git branch
gtk3
* master
/tmp/test $ git describe --tags --always
209c672
/tmp/test $ git tag -a "6.0-rc1" -m "Tagged RawTherapee 6.0 release candidate 1"
/tmp/test $ git describe --tags --always
6.0-rc1
/tmp/test $ echo "A" >> foo
/tmp/test $ git commit -a -m "second"
[master 00554e0] second
1 file changed, 1 insertion(+)
/tmp/test $ git describe --tags --always
6.0-rc1-1-g00554e0
```
Sure there are a bunch of commands I could try, but in a script that is not reliable. And worse, this needs to be doable not only from a bash script, but also from `AboutThisBuild.cmake`.
Maybe there is a fool-proof way to get the info we want, but I haven't found it, and neither have these people [2](http://stackoverflow.com/questions/15806448/git-how-to-find-out-on-which-branch-tag-is) [3](http://stackoverflow.com/questions/2706797/finding-what-branch-a-git-commit-came-from) [4](http://stackoverflow.com/questions/4535251/show-the-original-branch-for-a-commit)
I propose we stop using our 4.2.1234 format.
We have at our disposal tags and commit hashes. A commit hash tells us all we need to know, but a tag is more friendly. We could use one, or both.
- PP3 files don't need `AppVersion` at all, it's not used for anything.
- I don't know about cache key files.
- `AboutThisBuild` is for developers, it can use a commit hash.
- Window title is for users, it can use a tag and/or commit hash.
- Saved file metadata as above.
- Console output as above.
- Options file I don't know yet whether the version is used.
- `WindowsInnoSetup.iss.in` I don't know.
Branch [`versionfix2`](https://github.com/Beep6581/RawTherapee/tree/versionfix2) sets out to fix this.
I need your help to revise this and make sure it works. | 1.0 | Change our version habits - This issue is very important - git builds won't work well until it's fixed. I need everyone's help and input.
I'm writing this as a summary of hours of effort today to find a reliable solution to a problem I didn't know we had; to help you understand the problem, and to help myself see it clearly given the opportunity to lay it out here.
The way we form version numbers in CMake+other has become broken. I believe our system became obsolete the moment we switched to git. Up to now we would create a version of the form `4.2.1234`. The form is wrong in the gitosphere, because you could end up with that number simultaneously on branches `master`, `gtk3`, `locallab` and `pixelshift`, for example. What does that number tell us then? Nothing. That sort of version was created using regular expressions to parse several chunks of git output, and the only reason it "worked" till now is because we didn't tag any releases of the form `4.2-beta1`, `4.2-rc1`, etc. It happened to work because we used numeric versions. Well now we have annotated tags `5.0-gtk2` and `5.0-gtk3`, and everyone who compiles from git will run into a problem; even if the build succeeds, the version in AboutThisBuild and stored in the binary are wrong, and will cause trouble. Quote from an email:
> When I run the app, I am getting the error:
Error: malformed version string; "`5.0.5.0-gtk3`" must follow this format: `xx.xx.xx.xx`. Admitting it's a developer version...
This is coming from `ReleaseInfo.cmake` which states:
`set(GIT_VERSION 5.0.5.0-gtk2)`
Where the version is used:
- PP3 file header `AppVersion=4.2.1234`,
- Cache key files, `~/.cache/RawTherapee/data/amsterdam.pef.0cb(...)32f2.txt:20:Version=4.2.1234`
- AboutThisBuild, `Version: 4.2.1234`
- Window title,
- Saved file metadata, `-EXIF:Software=RawTherapee 4.2.1234`
- Console output
- Options file
- Some files in `tools/`, notably `WindowsInnoSetup.iss.in`
Git is flexible, but as such we cannot or should not stick to old models. RawTherapee can be compiled using several "starting points":
- By checking out a branch, `git checkout pixelshift`
- By checking out a commit, `git checkout 4649e130`
- By checking out a tag the detached way, `git checkout 5.0-gtk2`
- By checking out a tag the branch way, `git checkout -b master-5.0 5.0-gtk2`
Checking out in various ways means you have access to different things. It has serious consequences, it makes getting our old 4.2.1234 version form difficult or impossible or very dirty. I don't like it. Even if we did manage to script a way to get the 4.2.1234 version format, it wouldn't mean anything without more information such as a branch name, and most of the items listed above don't store one. Even if they did, we cannot always find a branch name!
There are several situations builders can find themselves in, I will give one example:
`git checkout <branch>`, then you would think you could find the "latest tag distance" using `git describe --tags --always`, and it could work, except... It will return different things depending on what your repo's history is.
```bash
/tmp/test $ git checkout master
Switched to branch 'master'
/tmp/test $ git branch
gtk3
* master
/tmp/test $ git describe --tags --always
209c672
/tmp/test $ git tag -a "6.0-rc1" -m "Tagged RawTherapee 6.0 release candidate 1"
/tmp/test $ git describe --tags --always
6.0-rc1
/tmp/test $ echo "A" >> foo
/tmp/test $ git commit -a -m "second"
[master 00554e0] second
1 file changed, 1 insertion(+)
/tmp/test $ git describe --tags --always
6.0-rc1-1-g00554e0
```
Sure there are a bunch of commands I could try, but in a script that is not reliable. And worse, this needs to be doable not only from a bash script, but also from `AboutThisBuild.cmake`.
Maybe there is a fool-proof way to get the info we want, but I haven't found it, and neither have these people [2](http://stackoverflow.com/questions/15806448/git-how-to-find-out-on-which-branch-tag-is) [3](http://stackoverflow.com/questions/2706797/finding-what-branch-a-git-commit-came-from) [4](http://stackoverflow.com/questions/4535251/show-the-original-branch-for-a-commit)
I propose we stop using our 4.2.1234 format.
We have at our disposal tags and commit hashes. A commit hash tells us all we need to know, but a tag is more friendly. We could use one, or both.
- PP3 files don't need `AppVersion` at all, it's not used for anything.
- I don't know about cache key files.
- `AboutThisBuild` is for developers, it can use a commit hash.
- Window title is for users, it can use a tag and/or commit hash.
- Saved file metadata as above.
- Console output as above.
- Options file I don't know yet whether the version is used.
- `WindowsInnoSetup.iss.in` I don't know.
Branch [`versionfix2`](https://github.com/Beep6581/RawTherapee/tree/versionfix2) sets out to fix this.
I need your help to revise this and make sure it works. | non_perf | change our version habits this issue is very important git builds won t work well until it s fixed i need everyone s help and input i m writing this as a summary of hours of effort today to find a reliable solution to a problem i didn t know we had to help you understand the problem and to help myself see it clearly given the opportunity to lay it out here the way we form version numbers in cmake other has become broken i believe our system became obsolete the moment we switched to git up to now we would create a version of the form the form is wrong in the gitosphere because you could end up with that number simultaneously on branches master locallab and pixelshift for example what does that number tell us then nothing that sort of version was created using regular expressions to parse several chunks of git output and the only reason it worked till now is because we didn t tag any releases of the form etc it happened to work because we used numeric versions well now we have annotated tags and and everyone who compiles from git will run into a problem even if the build succeeds the version in aboutthisbuild and stored in the binary are wrong and will cause trouble quote from an email when i run the app i am getting the error error malformed version string must follow this format xx xx xx xx admitting it s a developer version this is coming from releaseinfo cmake which states set git version where the version is used file header appversion cache key files cache rawtherapee data amsterdam pef txt version aboutthisbuild version window title saved file metadata exif software rawtherapee console output options file some files in tools notably windowsinnosetup iss in git is flexible but as such we cannot or should not stick to old models rawtherapee can be compiled using several starting points by checking out a branch git checkout pixelshift by checking out a commit git checkout by checking out a tag the detached way git checkout by checking out a tag the branch way git checkout b master checking out in various ways means you have access to different things it has serious consequences it makes getting our old version form difficult or impossible or very dirty i don t like it even if we did manage to script a way to get the version format it wouldn t mean anything without more information such as a branch name and most of the items listed above don t store one even if they did we cannot always find a branch name there are several situations builders can find themselves in i will give one example git checkout then you would think you could find the latest tag distance using git describe tags always and it could work except it will return different things depending on what your repo s history is bash tmp test git checkout master switched to branch master tmp test git branch master tmp test git describe tags always tmp test git tag a m tagged rawtherapee release candidate tmp test git describe tags always tmp test echo a foo tmp test git commit a m second second file changed insertion tmp test git describe tags always sure there are a bunch of commands i could try but in a script that is not reliable and worse this needs to be doable not only from a bash script but also from aboutthisbuild cmake maybe there is a fool proof way to get the info we want but i haven t found it and neither have these people i propose we stop using our format we have at our disposal tags and commit hashes a commit hash tells us all we need to know but a tag is more friendly we could use one or both files don t need appversion at all it s not used for anything i don t know about cache key files aboutthisbuild is for developers it can use a commit hash window title is for users it can use a tag and or commit hash saved file metadata as above console output as above options file i don t know yet whether the version is used windowsinnosetup iss in i don t know branch sets out to fix this i need your help to revise this and make sure it works | 0 |
309,276 | 26,659,369,231 | IssuesEvent | 2023-01-25 19:37:47 | distributed-system-analysis/pbench | https://api.github.com/repos/distributed-system-analysis/pbench | closed | Consider reworking the alembic check to use `podman` remove functionality | enhancement Server Tests | Consider what it would take to rework the alembic check to use `podman remote` functionality. The implications for the changes suggested below need to be verified with rebuilt containers.
----
Per our earlier discussion, `jenkins/run` is set up so that it can be invoked inside a container which was run via `jenkins/run`. So, the invocation of `jenkins/run` inside `jenkins/run-alembic-migrations-check` should be fine, but the script is also using `podman` directly, wherein lies the (first) problem. I tried out the following change
```
diff --git a/jenkins/run-alembic-migrations-check b/jenkins/run-alembic-migrations-check
index 67b713730..a2cba893c 100755
--- a/jenkins/run-alembic-migrations-check
+++ b/jenkins/run-alembic-migrations-check
@@ -1,6 +1,7 @@
#!/bin/bash -e
-podman run --name postgresql-alembic \
+PODMAN=$([ -n "${CONTAINER_HOST}" ] && echo podman-remote || echo podman)
+${PODMAN} run --name postgresql-alembic \
--detach \
--rm \
--network host \
```
which allowed me to run the script via `jenkins/run`, and the PostgreSQL container seems to start up OK. (I guess we should find a more general way to encapsulate the conditional definition for `${PODMAN}`.)
However, this brought me to the second problem: inside the CI container (which is what I had `jenkins/run` use), there is no `nc` command. But, this wouldn't be too hard to address, either....
So, we _should_ be able to package this as a test, we just need to enhance the infrastructure a little.
_Originally posted by @webbnh in https://github.com/distributed-system-analysis/pbench/pull/3174#discussion_r1072915712_ | 1.0 | Consider reworking the alembic check to use `podman` remove functionality - Consider what it would take to rework the alembic check to use `podman remote` functionality. The implications for the changes suggested below need to be verified with rebuilt containers.
----
Per our earlier discussion, `jenkins/run` is set up so that it can be invoked inside a container which was run via `jenkins/run`. So, the invocation of `jenkins/run` inside `jenkins/run-alembic-migrations-check` should be fine, but the script is also using `podman` directly, wherein lies the (first) problem. I tried out the following change
```
diff --git a/jenkins/run-alembic-migrations-check b/jenkins/run-alembic-migrations-check
index 67b713730..a2cba893c 100755
--- a/jenkins/run-alembic-migrations-check
+++ b/jenkins/run-alembic-migrations-check
@@ -1,6 +1,7 @@
#!/bin/bash -e
-podman run --name postgresql-alembic \
+PODMAN=$([ -n "${CONTAINER_HOST}" ] && echo podman-remote || echo podman)
+${PODMAN} run --name postgresql-alembic \
--detach \
--rm \
--network host \
```
which allowed me to run the script via `jenkins/run`, and the PostgreSQL container seems to start up OK. (I guess we should find a more general way to encapsulate the conditional definition for `${PODMAN}`.)
However, this brought me to the second problem: inside the CI container (which is what I had `jenkins/run` use), there is no `nc` command. But, this wouldn't be too hard to address, either....
So, we _should_ be able to package this as a test, we just need to enhance the infrastructure a little.
_Originally posted by @webbnh in https://github.com/distributed-system-analysis/pbench/pull/3174#discussion_r1072915712_ | non_perf | consider reworking the alembic check to use podman remove functionality consider what it would take to rework the alembic check to use podman remote functionality the implications for the changes suggested below need to be verified with rebuilt containers per our earlier discussion jenkins run is set up so that it can be invoked inside a container which was run via jenkins run so the invocation of jenkins run inside jenkins run alembic migrations check should be fine but the script is also using podman directly wherein lies the first problem i tried out the following change diff git a jenkins run alembic migrations check b jenkins run alembic migrations check index a jenkins run alembic migrations check b jenkins run alembic migrations check bin bash e podman run name postgresql alembic podman echo podman remote echo podman podman run name postgresql alembic detach rm network host which allowed me to run the script via jenkins run and the postgresql container seems to start up ok i guess we should find a more general way to encapsulate the conditional definition for podman however this brought me to the second problem inside the ci container which is what i had jenkins run use there is no nc command but this wouldn t be too hard to address either so we should be able to package this as a test we just need to enhance the infrastructure a little originally posted by webbnh in | 0 |
664,874 | 22,291,016,986 | IssuesEvent | 2022-06-12 11:11:56 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | opened | Some filters in "My Filter" not working | PRIORITY: High Bug | ## The issue
Some filters are not working on 'my filters', specially the ones trying to overturn a adn-allow rule.
The problem seems to lie in the "StaticFilteringParser". Where the filter added in "my filters" can't be found by the `listsForFilter" function:
https://github.com/dhowe/AdNauseam/blob/9d4ed7df4ae9918acd396f4bce7b086686c64376/src/js/adn/core.js#L1100-L1152
Catched this while trying to solve [this issue](https://github.com/dhowe/AdNauseam/issues/2108).
## How to reproduce:
- Url: https://yts.land/movie/free-guy-2021-staycalmtorrent144/
- Check that the following rule is being 'adn-allowed': `||dozubatan.com^`
- Try to reverse it by adding in "my filters" the following: `||dozubatan.com^$important`
- You'll see that the rule is still being adn-allowed rather than blocked.
| 1.0 | Some filters in "My Filter" not working - ## The issue
Some filters are not working on 'my filters', specially the ones trying to overturn a adn-allow rule.
The problem seems to lie in the "StaticFilteringParser". Where the filter added in "my filters" can't be found by the `listsForFilter" function:
https://github.com/dhowe/AdNauseam/blob/9d4ed7df4ae9918acd396f4bce7b086686c64376/src/js/adn/core.js#L1100-L1152
Catched this while trying to solve [this issue](https://github.com/dhowe/AdNauseam/issues/2108).
## How to reproduce:
- Url: https://yts.land/movie/free-guy-2021-staycalmtorrent144/
- Check that the following rule is being 'adn-allowed': `||dozubatan.com^`
- Try to reverse it by adding in "my filters" the following: `||dozubatan.com^$important`
- You'll see that the rule is still being adn-allowed rather than blocked.
| non_perf | some filters in my filter not working the issue some filters are not working on my filters specially the ones trying to overturn a adn allow rule the problem seems to lie in the staticfilteringparser where the filter added in my filters can t be found by the listsforfilter function catched this while trying to solve how to reproduce url check that the following rule is being adn allowed dozubatan com try to reverse it by adding in my filters the following dozubatan com important you ll see that the rule is still being adn allowed rather than blocked | 0 |
4,946 | 7,558,925,142 | IssuesEvent | 2018-04-20 00:53:00 | tastybento/ASkyBlock-Bugs-N-Features | https://api.github.com/repos/tastybento/ASkyBlock-Bugs-N-Features | closed | Player break & Block pick up | Incompatibility with another plugin Invalid | Thank you for filing a bug report. Please complete these sections to help speed resolution.
**Server version? Remember to say if it is Spigot/Bukkit/Paper etc.**
**Plugin version:**
**Is this a new install or upgrade from earlier plugin version? If upgrade, what version did you use before?**
This is a current build of the plugin, we are running spigot on the most recent version as well. We have the proper "Vault" Plugin installed as well.
**(Optional) What other plugins are you using now? (do /plugins and paste text from the console log here)**
**Description of issue. What happened?**
Players are playing and are able to make and island but cannot do anything once they have made there island. They are lacking the ability to pick up items as well as, break anything.

**Steps to make this happen**
1. Player would hit a block on there island
2. Player would try and pick up a block
3.
**What do you think should happen**
| True | Player break & Block pick up - Thank you for filing a bug report. Please complete these sections to help speed resolution.
**Server version? Remember to say if it is Spigot/Bukkit/Paper etc.**
**Plugin version:**
**Is this a new install or upgrade from earlier plugin version? If upgrade, what version did you use before?**
This is a current build of the plugin, we are running spigot on the most recent version as well. We have the proper "Vault" Plugin installed as well.
**(Optional) What other plugins are you using now? (do /plugins and paste text from the console log here)**
**Description of issue. What happened?**
Players are playing and are able to make and island but cannot do anything once they have made there island. They are lacking the ability to pick up items as well as, break anything.

**Steps to make this happen**
1. Player would hit a block on there island
2. Player would try and pick up a block
3.
**What do you think should happen**
| non_perf | player break block pick up thank you for filing a bug report please complete these sections to help speed resolution server version remember to say if it is spigot bukkit paper etc plugin version is this a new install or upgrade from earlier plugin version if upgrade what version did you use before this is a current build of the plugin we are running spigot on the most recent version as well we have the proper vault plugin installed as well optional what other plugins are you using now do plugins and paste text from the console log here description of issue what happened players are playing and are able to make and island but cannot do anything once they have made there island they are lacking the ability to pick up items as well as break anything steps to make this happen player would hit a block on there island player would try and pick up a block what do you think should happen | 0 |
29,354 | 14,099,107,227 | IssuesEvent | 2020-11-06 00:31:19 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Frequent reallocation of 128byte array in `Rune.cs` | tenet-performance | ### Description
This was not noticed in a profiler or in program performance, but noticed while reading the code.
```C#
private static ReadOnlySpan<byte> AsciiCharInfo => new byte[]
```
I'm not sure if this was instead intended to be an `=` and not a `=>`? If not, would a `Lazy<Byte[]>` perhaps be better suited?
My main concern would be methods like `IsLetterOrDigit` and `IsWhiteSpace` are very likely to be called inside a loop over a large input set, for example, a source file, where you could call multiples of those methods on each byte of the file, resulting in a lot of redundant allocations.
A bit of a side note, since these are in the ASCII range, I wonder if this table shouldn't instead be generated using `char.IsWhiteSpace`, etc? | True | Frequent reallocation of 128byte array in `Rune.cs` - ### Description
This was not noticed in a profiler or in program performance, but noticed while reading the code.
```C#
private static ReadOnlySpan<byte> AsciiCharInfo => new byte[]
```
I'm not sure if this was instead intended to be an `=` and not a `=>`? If not, would a `Lazy<Byte[]>` perhaps be better suited?
My main concern would be methods like `IsLetterOrDigit` and `IsWhiteSpace` are very likely to be called inside a loop over a large input set, for example, a source file, where you could call multiples of those methods on each byte of the file, resulting in a lot of redundant allocations.
A bit of a side note, since these are in the ASCII range, I wonder if this table shouldn't instead be generated using `char.IsWhiteSpace`, etc? | perf | frequent reallocation of array in rune cs description this was not noticed in a profiler or in program performance but noticed while reading the code c private static readonlyspan asciicharinfo new byte i m not sure if this was instead intended to be an and not a if not would a lazy perhaps be better suited my main concern would be methods like isletterordigit and iswhitespace are very likely to be called inside a loop over a large input set for example a source file where you could call multiples of those methods on each byte of the file resulting in a lot of redundant allocations a bit of a side note since these are in the ascii range i wonder if this table shouldn t instead be generated using char iswhitespace etc | 1 |
806,080 | 29,799,484,476 | IssuesEvent | 2023-06-16 06:56:18 | latteart-org/latteart | https://api.github.com/repos/latteart-org/latteart | closed | テストコード生成機能でプロキシ認証を行う場合に実行できない | Type: Enhancement Type: Invalid Priority: Could | **Describe the bug**
Proxy情報を入力画面を経由(以下①→②)してテストを実行した場合、生成される「test.spec.js」で呼び出すクラス名がプロキシ画面(①)、実行メソッド名が試験対象(②)となるため実行エラーとなる。
- ①プロキシ情報の入力画面(NoTitle.page)
- ②試験対象URL画面
- 参考
- https://rdk.me/proxy-selenium/
- https://stackoverflow.com/questions/55582136/how-to-set-proxy-with-authentication-in-selenium-chromedriver-python
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
- 対処方法1
1. 手動で認証プロキシを記憶させる事が可能な「proxy switchy omega」などのプラグインをseleniumで開いたchromeにインストールし,ユーザデータを作成
1. SeleniumのChromeOptionsに作成しておいたユーザデータを設定する.
- 対処方法2(ヘッドレスでは動かない)
1. 自作のchrome拡張機能を作成(生成)しておき,SeleniumのChromeOptionsに設定する.
- LatteArt側の対処
- wdio.conf.jsにユーザデータや拡張機能を読み込ませる設定を簡単に追加できるようにしておく.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [0.5.0]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| 1.0 | テストコード生成機能でプロキシ認証を行う場合に実行できない - **Describe the bug**
Proxy情報を入力画面を経由(以下①→②)してテストを実行した場合、生成される「test.spec.js」で呼び出すクラス名がプロキシ画面(①)、実行メソッド名が試験対象(②)となるため実行エラーとなる。
- ①プロキシ情報の入力画面(NoTitle.page)
- ②試験対象URL画面
- 参考
- https://rdk.me/proxy-selenium/
- https://stackoverflow.com/questions/55582136/how-to-set-proxy-with-authentication-in-selenium-chromedriver-python
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
- 対処方法1
1. 手動で認証プロキシを記憶させる事が可能な「proxy switchy omega」などのプラグインをseleniumで開いたchromeにインストールし,ユーザデータを作成
1. SeleniumのChromeOptionsに作成しておいたユーザデータを設定する.
- 対処方法2(ヘッドレスでは動かない)
1. 自作のchrome拡張機能を作成(生成)しておき,SeleniumのChromeOptionsに設定する.
- LatteArt側の対処
- wdio.conf.jsにユーザデータや拡張機能を読み込ませる設定を簡単に追加できるようにしておく.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [0.5.0]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| non_perf | テストコード生成機能でプロキシ認証を行う場合に実行できない describe the bug proxy情報を入力画面を経由 以下①→② してテストを実行した場合、生成される「test spec js」で呼び出すクラス名がプロキシ画面 ① 、実行メソッド名が試験対象 ② となるため実行エラーとなる。 ①プロキシ情報の入力画面 notitle page ②試験対象url画面 参考 to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior 手動で認証プロキシを記憶させる事が可能な「proxy switchy omega」などのプラグインをseleniumで開いたchromeにインストールし,ユーザデータを作成 seleniumのchromeoptionsに作成しておいたユーザデータを設定する. (ヘッドレスでは動かない) 自作のchrome拡張機能を作成(生成)しておき,seleniumのchromeoptionsに設定する. latteart側の対処 wdio conf jsにユーザデータや拡張機能を読み込ませる設定を簡単に追加できるようにしておく. screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os browser version smartphone please complete the following information device os browser version additional context add any other context about the problem here | 0 |
59,663 | 7,273,555,436 | IssuesEvent | 2018-02-21 05:52:05 | ppy/osu-web | https://api.github.com/repos/ppy/osu-web | reopened | New Beatmaps Listing does not show packs a map was in | beatmap design feature | The current search/listing page will display a link to the pack(s) a map is in

this is missing on the new website.
It could possibly be made to go on the compact information panel next to the download button, as packs are an alternative to downloading maps individually.

| 1.0 | New Beatmaps Listing does not show packs a map was in - The current search/listing page will display a link to the pack(s) a map is in

this is missing on the new website.
It could possibly be made to go on the compact information panel next to the download button, as packs are an alternative to downloading maps individually.

| non_perf | new beatmaps listing does not show packs a map was in the current search listing page will display a link to the pack s a map is in this is missing on the new website it could possibly be made to go on the compact information panel next to the download button as packs are an alternative to downloading maps individually | 0 |
10,983 | 7,374,515,937 | IssuesEvent | 2018-03-13 20:36:31 | golang/go | https://api.github.com/repos/golang/go | closed | cmd/compile: functions using ellipsis arguments are not inlined | NeedsFix Performance | I have two functions foo and bar that have identical bodies except that foo uses ellipsis and bar uses arrays. bar is inlined while foo is not. This is a performance bug, not a correctness one.
### What version of Go are you using (`go version`)?
tip:
go version devel +1106512 Fri Dec 16 22:30:12 2016 +0000 linux/amd64
### What operating system and processor architecture are you using (`go env`)?
Linux / Amd64
### What did you do?
https://play.golang.org/p/EP4LSzd9X3
Generated code contains no call to bar, but a call to foo.
```
TEXT main.main(SB) /home/mosoi/src/trash/a.go
a.go:35 0x44cfb0 64488b0c25f8ffffff FS MOVQ FS:0xfffffff8, CX
a.go:35 0x44cfb9 483b6110 CMPQ 0x10(CX), SP
a.go:35 0x44cfbd 7668 JBE 0x44d027
a.go:35 0x44cfbf 4883ec58 SUBQ $0x58, SP
a.go:35 0x44cfc3 48896c2450 MOVQ BP, 0x50(SP)
a.go:35 0x44cfc8 488d6c2450 LEAQ 0x50(SP), BP
a.go:36 0x44cfcd 488b05cc2b0200 MOVQ 0x22bcc(IP), AX
a.go:36 0x44cfd4 4889442420 MOVQ AX, 0x20(SP)
a.go:36 0x44cfd9 0f1005c82b0200 MOVUPS 0x22bc8(IP), X0
a.go:36 0x44cfe0 0f11442428 MOVUPS X0, 0x28(SP)
a.go:36 0x44cfe5 488d442420 LEAQ 0x20(SP), AX
a.go:36 0x44cfea 48890424 MOVQ AX, 0(SP)
a.go:36 0x44cfee 48c744240803000000 MOVQ $0x3, 0x8(SP)
a.go:36 0x44cff7 48c744241003000000 MOVQ $0x3, 0x10(SP)
a.go:36 0x44d000 e85bffffff CALL main.foo(SB)
a.go:38 0x44d005 488b05b42b0200 MOVQ 0x22bb4(IP), AX
a.go:38 0x44d00c 4889442438 MOVQ AX, 0x38(SP)
a.go:38 0x44d011 0f1005b02b0200 MOVUPS 0x22bb0(IP), X0
a.go:38 0x44d018 0f11442440 MOVUPS X0, 0x40(SP)
a.go:39 0x44d01d 488b6c2450 MOVQ 0x50(SP), BP
a.go:39 0x44d022 4883c458 ADDQ $0x58, SP
a.go:39 0x44d026 c3 RET
a.go:35 0x44d027 e88485ffff CALL runtime.morestack_noctxt(SB)
a.go:35 0x44d02c eb82 JMP main.main(SB)
```
### What did you expect to see?
foo (using ellipsis) and bar (using arrays) inlined
### What did you see instead?
Only bar was inlined.
| True | cmd/compile: functions using ellipsis arguments are not inlined - I have two functions foo and bar that have identical bodies except that foo uses ellipsis and bar uses arrays. bar is inlined while foo is not. This is a performance bug, not a correctness one.
### What version of Go are you using (`go version`)?
tip:
go version devel +1106512 Fri Dec 16 22:30:12 2016 +0000 linux/amd64
### What operating system and processor architecture are you using (`go env`)?
Linux / Amd64
### What did you do?
https://play.golang.org/p/EP4LSzd9X3
Generated code contains no call to bar, but a call to foo.
```
TEXT main.main(SB) /home/mosoi/src/trash/a.go
a.go:35 0x44cfb0 64488b0c25f8ffffff FS MOVQ FS:0xfffffff8, CX
a.go:35 0x44cfb9 483b6110 CMPQ 0x10(CX), SP
a.go:35 0x44cfbd 7668 JBE 0x44d027
a.go:35 0x44cfbf 4883ec58 SUBQ $0x58, SP
a.go:35 0x44cfc3 48896c2450 MOVQ BP, 0x50(SP)
a.go:35 0x44cfc8 488d6c2450 LEAQ 0x50(SP), BP
a.go:36 0x44cfcd 488b05cc2b0200 MOVQ 0x22bcc(IP), AX
a.go:36 0x44cfd4 4889442420 MOVQ AX, 0x20(SP)
a.go:36 0x44cfd9 0f1005c82b0200 MOVUPS 0x22bc8(IP), X0
a.go:36 0x44cfe0 0f11442428 MOVUPS X0, 0x28(SP)
a.go:36 0x44cfe5 488d442420 LEAQ 0x20(SP), AX
a.go:36 0x44cfea 48890424 MOVQ AX, 0(SP)
a.go:36 0x44cfee 48c744240803000000 MOVQ $0x3, 0x8(SP)
a.go:36 0x44cff7 48c744241003000000 MOVQ $0x3, 0x10(SP)
a.go:36 0x44d000 e85bffffff CALL main.foo(SB)
a.go:38 0x44d005 488b05b42b0200 MOVQ 0x22bb4(IP), AX
a.go:38 0x44d00c 4889442438 MOVQ AX, 0x38(SP)
a.go:38 0x44d011 0f1005b02b0200 MOVUPS 0x22bb0(IP), X0
a.go:38 0x44d018 0f11442440 MOVUPS X0, 0x40(SP)
a.go:39 0x44d01d 488b6c2450 MOVQ 0x50(SP), BP
a.go:39 0x44d022 4883c458 ADDQ $0x58, SP
a.go:39 0x44d026 c3 RET
a.go:35 0x44d027 e88485ffff CALL runtime.morestack_noctxt(SB)
a.go:35 0x44d02c eb82 JMP main.main(SB)
```
### What did you expect to see?
foo (using ellipsis) and bar (using arrays) inlined
### What did you see instead?
Only bar was inlined.
| perf | cmd compile functions using ellipsis arguments are not inlined i have two functions foo and bar that have identical bodies except that foo uses ellipsis and bar uses arrays bar is inlined while foo is not this is a performance bug not a correctness one what version of go are you using go version tip go version devel fri dec linux what operating system and processor architecture are you using go env linux what did you do generated code contains no call to bar but a call to foo text main main sb home mosoi src trash a go a go fs movq fs cx a go cmpq cx sp a go jbe a go subq sp a go movq bp sp a go leaq sp bp a go movq ip ax a go movq ax sp a go movups ip a go movups sp a go leaq sp ax a go movq ax sp a go movq sp a go movq sp a go call main foo sb a go movq ip ax a go movq ax sp a go movups ip a go movups sp a go movq sp bp a go addq sp a go ret a go call runtime morestack noctxt sb a go jmp main main sb what did you expect to see foo using ellipsis and bar using arrays inlined what did you see instead only bar was inlined | 1 |
19,432 | 10,427,450,602 | IssuesEvent | 2019-09-16 19:58:08 | ngageoint/opensphere-asm | https://api.github.com/repos/ngageoint/opensphere-asm | closed | Optimize coordinate wrappers | performance | `postfix.js` has a number of wrappers to avoid errors converting coordinate arrays to structs. When the source array length is longer than expected, we slice the array to avoid an error. This approach creates unnecessary GC overhead and in quick testing incurred a ~10-15% performance penalty over a wrapper that passes the lon/lat values directly to the function (with a `double lon, double lat` function signature). We should investigate how these calls can be better optimized. | True | Optimize coordinate wrappers - `postfix.js` has a number of wrappers to avoid errors converting coordinate arrays to structs. When the source array length is longer than expected, we slice the array to avoid an error. This approach creates unnecessary GC overhead and in quick testing incurred a ~10-15% performance penalty over a wrapper that passes the lon/lat values directly to the function (with a `double lon, double lat` function signature). We should investigate how these calls can be better optimized. | perf | optimize coordinate wrappers postfix js has a number of wrappers to avoid errors converting coordinate arrays to structs when the source array length is longer than expected we slice the array to avoid an error this approach creates unnecessary gc overhead and in quick testing incurred a performance penalty over a wrapper that passes the lon lat values directly to the function with a double lon double lat function signature we should investigate how these calls can be better optimized | 1 |
22,833 | 11,743,972,337 | IssuesEvent | 2020-03-12 06:24:26 | sebastienros/jint | https://api.github.com/repos/sebastienros/jint | closed | Optimize ch15/15.1/15.1.3/15.1.3.2/S15.1.3.2_A2.5_T1.js | performance | Adding reminder here, seems to be really slow, should see if anything can be done. | True | Optimize ch15/15.1/15.1.3/15.1.3.2/S15.1.3.2_A2.5_T1.js - Adding reminder here, seems to be really slow, should see if anything can be done. | perf | optimize js adding reminder here seems to be really slow should see if anything can be done | 1 |
69,600 | 13,298,449,727 | IssuesEvent | 2020-08-25 08:17:03 | MeAmAnUsername/pie | https://api.github.com/repos/MeAmAnUsername/pie | opened | Generated code layout | Component: code generation Priority: low Status: specified Type: bug | Generated code currently has a weird layout where blocks are indented to the curly brace, e.g.
```
if (true) {
return 5;
} else {
return 7;
}
```
The generated code should look like handwritten code, e.g. tab size 2 or 4, indent one tab per level. | 1.0 | Generated code layout - Generated code currently has a weird layout where blocks are indented to the curly brace, e.g.
```
if (true) {
return 5;
} else {
return 7;
}
```
The generated code should look like handwritten code, e.g. tab size 2 or 4, indent one tab per level. | non_perf | generated code layout generated code currently has a weird layout where blocks are indented to the curly brace e g if true return else return the generated code should look like handwritten code e g tab size or indent one tab per level | 0 |
55,469 | 30,763,491,441 | IssuesEvent | 2023-07-30 01:49:27 | keras-team/keras | https://api.github.com/repos/keras-team/keras | closed | Is it not posible to make recurrent neural networks that work with sparse tensors in keras? | type:bug/performance stat:awaiting response from contributor stale | I'm writing a neural network to process medical text entries an classify them into diferent categories in python3. The idea would be to use a LSTM layer followed by a Dense layer to do so. I have a large dataset which I codify in a per word basis into one-hot vectors to feed it as inputs to the network, this results in a lot of memory wasted on zeroes which means that an sparse representation of the data is necesary.
This is the code for the model:
```
import tensorflow as tf
in_shape=(8, 4581, 2126)
out_size=200
data= tf.sparse.SparseTensor(indices=[[0,0,0]],values=[1],dense_shape = in_shape)
X_i = tf.keras.Input(shape= tf.shape(data)[1:], sparse=True) #shape=(None,)
X = tf.keras.layers.LSTM(in_shape[-1])(X_i)
X = tf.keras.layers.Dense(out_size)(X_i)
X = tf.keras.layers.Activation('sigmoid')(X)
model = tf.keras.Model(X_i, X)
```
4581 is the max length of the text entries i have been testing in
2126 is the number of words in the vocabulary already with far more prunnig than I'm comfortable with (since I need to classify medical texts into medical categories technical terms may be key to make a good prediction but appear very few times)
8 is the number of entries I can work with without sparse vectors and no memory problems
However when creating the model I get the following error:
TypeError Traceback (most recent call last) ~/Henv/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
548 try:
--> 549 str_values = [compat.as_bytes(x) for x in proto_values]
550 except TypeError:
~/Henv/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py in <listcomp>(.0)
548 try:
--> 549 str_values = [compat.as_bytes(x) for x in proto_values]
550 except TypeError:
The complete error message is quite massive, that was the start of the message.
`TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("inputs:0", shape=(None, 3), dtype=int64), values=Tensor("inputs_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("inputs_2:0", shape=(3,), dtype=int64)). Consider casting elements to a supported type.`
The error happens in the LSTM layer.
I tried to make it a simple NN only using a dense layer (adjusting the input size accordingly) to see if the problem was general and not due to the LSTM layer but it worked correctly. (Unfortunately I can't not use a RNN)
I tried to use other RNN layers like SimpleRNN and GRU but both gave the same error.
Small scale implementations with dense tensor worked properly (bigger ones give memory problems), it's only when going to sparse tensors that the problem arises. I used [this](https://www.tensorflow.org/guide/sparse_tensor) as a "guide" to make the change.
Since the LSTM layer not liking sparse tensors seemed to be the problem I tried to add a dense layer before (dense layers output standard tensors). Didn't work, same error (but now in the new dense layer).
| True | Is it not posible to make recurrent neural networks that work with sparse tensors in keras? - I'm writing a neural network to process medical text entries an classify them into diferent categories in python3. The idea would be to use a LSTM layer followed by a Dense layer to do so. I have a large dataset which I codify in a per word basis into one-hot vectors to feed it as inputs to the network, this results in a lot of memory wasted on zeroes which means that an sparse representation of the data is necesary.
This is the code for the model:
```
import tensorflow as tf
in_shape=(8, 4581, 2126)
out_size=200
data= tf.sparse.SparseTensor(indices=[[0,0,0]],values=[1],dense_shape = in_shape)
X_i = tf.keras.Input(shape= tf.shape(data)[1:], sparse=True) #shape=(None,)
X = tf.keras.layers.LSTM(in_shape[-1])(X_i)
X = tf.keras.layers.Dense(out_size)(X_i)
X = tf.keras.layers.Activation('sigmoid')(X)
model = tf.keras.Model(X_i, X)
```
4581 is the max length of the text entries i have been testing in
2126 is the number of words in the vocabulary already with far more prunnig than I'm comfortable with (since I need to classify medical texts into medical categories technical terms may be key to make a good prediction but appear very few times)
8 is the number of entries I can work with without sparse vectors and no memory problems
However when creating the model I get the following error:
TypeError Traceback (most recent call last) ~/Henv/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
548 try:
--> 549 str_values = [compat.as_bytes(x) for x in proto_values]
550 except TypeError:
~/Henv/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py in <listcomp>(.0)
548 try:
--> 549 str_values = [compat.as_bytes(x) for x in proto_values]
550 except TypeError:
The complete error message is quite massive, that was the start of the message.
`TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("inputs:0", shape=(None, 3), dtype=int64), values=Tensor("inputs_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("inputs_2:0", shape=(3,), dtype=int64)). Consider casting elements to a supported type.`
The error happens in the LSTM layer.
I tried to make it a simple NN only using a dense layer (adjusting the input size accordingly) to see if the problem was general and not due to the LSTM layer but it worked correctly. (Unfortunately I can't not use a RNN)
I tried to use other RNN layers like SimpleRNN and GRU but both gave the same error.
Small scale implementations with dense tensor worked properly (bigger ones give memory problems), it's only when going to sparse tensors that the problem arises. I used [this](https://www.tensorflow.org/guide/sparse_tensor) as a "guide" to make the change.
Since the LSTM layer not liking sparse tensors seemed to be the problem I tried to add a dense layer before (dense layers output standard tensors). Didn't work, same error (but now in the new dense layer).
| perf | is it not posible to make recurrent neural networks that work with sparse tensors in keras i m writing a neural network to process medical text entries an classify them into diferent categories in the idea would be to use a lstm layer followed by a dense layer to do so i have a large dataset which i codify in a per word basis into one hot vectors to feed it as inputs to the network this results in a lot of memory wasted on zeroes which means that an sparse representation of the data is necesary this is the code for the model import tensorflow as tf in shape out size data tf sparse sparsetensor indices values dense shape in shape x i tf keras input shape tf shape data sparse true shape none x tf keras layers lstm in shape x i x tf keras layers dense out size x i x tf keras layers activation sigmoid x model tf keras model x i x is the max length of the text entries i have been testing in is the number of words in the vocabulary already with far more prunnig than i m comfortable with since i need to classify medical texts into medical categories technical terms may be key to make a good prediction but appear very few times is the number of entries i can work with without sparse vectors and no memory problems however when creating the model i get the following error typeerror traceback most recent call last henv lib site packages tensorflow python framework tensor util py in make tensor proto values dtype shape verify shape allow broadcast try str values except typeerror henv lib site packages tensorflow python framework tensor util py in try str values except typeerror the complete error message is quite massive that was the start of the message typeerror failed to convert object of type to tensor contents sparsetensor indices tensor inputs shape none dtype values tensor inputs shape none dtype dense shape tensor inputs shape dtype consider casting elements to a supported type the error happens in the lstm layer i tried to make it a simple nn only using a dense layer adjusting the input size accordingly to see if the problem was general and not due to the lstm layer but it worked correctly unfortunately i can t not use a rnn i tried to use other rnn layers like simplernn and gru but both gave the same error small scale implementations with dense tensor worked properly bigger ones give memory problems it s only when going to sparse tensors that the problem arises i used as a guide to make the change since the lstm layer not liking sparse tensors seemed to be the problem i tried to add a dense layer before dense layers output standard tensors didn t work same error but now in the new dense layer | 1 |
31,697 | 15,044,781,390 | IssuesEvent | 2021-02-03 03:48:43 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Add babel transform to mangle public function names, across files | Stale Type: Feature Request WG: performance | AMP extensions/runtime talk to each other and no external world entity(mostly).
A consistent mangler across extensions could effectively reduce bundle size of all the bundles. | True | Add babel transform to mangle public function names, across files - AMP extensions/runtime talk to each other and no external world entity(mostly).
A consistent mangler across extensions could effectively reduce bundle size of all the bundles. | perf | add babel transform to mangle public function names across files amp extensions runtime talk to each other and no external world entity mostly a consistent mangler across extensions could effectively reduce bundle size of all the bundles | 1 |
453,881 | 13,091,502,890 | IssuesEvent | 2020-08-03 06:45:40 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | Grpc.Tools 2.30 does not generate correct client code. | kind/bug priority/P2 | Doing my first grpc greeter sample (template) and following the tutorial to create a C# client.
https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-3.0&tabs=visual-studio
The following code does not compile with Grpc.Tools v2.30 (does not exist)
```csharp
var client = new Greeter.GreeterClient(channel);
```
Grpc.Tools v2.29 works fine.
HIH/2c | 1.0 | Grpc.Tools 2.30 does not generate correct client code. - Doing my first grpc greeter sample (template) and following the tutorial to create a C# client.
https://docs.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-3.0&tabs=visual-studio
The following code does not compile with Grpc.Tools v2.30 (does not exist)
```csharp
var client = new Greeter.GreeterClient(channel);
```
Grpc.Tools v2.29 works fine.
HIH/2c | non_perf | grpc tools does not generate correct client code doing my first grpc greeter sample template and following the tutorial to create a c client the following code does not compile with grpc tools does not exist csharp var client new greeter greeterclient channel grpc tools works fine hih | 0 |
39,247 | 19,762,272,502 | IssuesEvent | 2022-01-16 15:56:13 | AuburnSounds/Dplug | https://api.github.com/repos/AuburnSounds/Dplug | closed | Optimize wren $ operator | Performance | A lot of things can be cached in there.
Add a wren generation count, to see if the Wren VM has restarted. | True | Optimize wren $ operator - A lot of things can be cached in there.
Add a wren generation count, to see if the Wren VM has restarted. | perf | optimize wren operator a lot of things can be cached in there add a wren generation count to see if the wren vm has restarted | 1 |
14,523 | 8,621,525,402 | IssuesEvent | 2018-11-20 17:33:49 | swoole/swoole-src | https://api.github.com/repos/swoole/swoole-src | closed | Nginx + Swoole performance too slow | performance comparison | Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a simple script for reproducing the error.
Run the `swoole_http_server`. Then test its performance using `wrk` with the following command
`wrk http://127.0.0.1:9501 -c 1000 -t 48 -d 5`
The swoole server is configured as follows
```
<?php
$http = new swoole_http_server("127.0.0.1", 9501, SWOOLE_BASE);
$http->on('request', function ($request, swoole_http_response $response) {
$response->header('Last-Modified', 'Thu, 18 Jun 2015 10:24:27 GMT');
$response->header('E-Tag', '55829c5b-17');
$response->header('Accept-Ranges', 'bytes');
$response->end("<h1>\nHello Swoole.\n</h1>");
});
$http->start();
```
2. What did you expect to see?
The performance is close to that of static nginx as what was stated in the documentation. I am expecting 1M request/second
3. What did you see instead?
I see `1 over 20` of the performance only, I am expecting that it would be close to static nginx of 1M+ requests/second. Instead i only got 40k+ request/second.
4. What version of Swoole are you using (show your `php --ri swoole`)?
swoole 4.2.7
5. What is your machine environment used (including version of kernel & php & gcc) ?
CentOS 7 1708 php 7.1 2x Xeon e5 2670 gcc 4.8.5 20150623 kernel 3.10.0-862.2.3.el7.x86_64 | True | Nginx + Swoole performance too slow - Please answer these questions before submitting your issue. Thanks!
1. What did you do? If possible, provide a simple script for reproducing the error.
Run the `swoole_http_server`. Then test its performance using `wrk` with the following command
`wrk http://127.0.0.1:9501 -c 1000 -t 48 -d 5`
The swoole server is configured as follows
```
<?php
$http = new swoole_http_server("127.0.0.1", 9501, SWOOLE_BASE);
$http->on('request', function ($request, swoole_http_response $response) {
$response->header('Last-Modified', 'Thu, 18 Jun 2015 10:24:27 GMT');
$response->header('E-Tag', '55829c5b-17');
$response->header('Accept-Ranges', 'bytes');
$response->end("<h1>\nHello Swoole.\n</h1>");
});
$http->start();
```
2. What did you expect to see?
The performance is close to that of static nginx as what was stated in the documentation. I am expecting 1M request/second
3. What did you see instead?
I see `1 over 20` of the performance only, I am expecting that it would be close to static nginx of 1M+ requests/second. Instead i only got 40k+ request/second.
4. What version of Swoole are you using (show your `php --ri swoole`)?
swoole 4.2.7
5. What is your machine environment used (including version of kernel & php & gcc) ?
CentOS 7 1708 php 7.1 2x Xeon e5 2670 gcc 4.8.5 20150623 kernel 3.10.0-862.2.3.el7.x86_64 | perf | nginx swoole performance too slow please answer these questions before submitting your issue thanks what did you do if possible provide a simple script for reproducing the error run the swoole http server then test its performance using wrk with the following command wrk c t d the swoole server is configured as follows php http new swoole http server swoole base http on request function request swoole http response response response header last modified thu jun gmt response header e tag response header accept ranges bytes response end nhello swoole n http start what did you expect to see the performance is close to that of static nginx as what was stated in the documentation i am expecting request second what did you see instead i see over of the performance only i am expecting that it would be close to static nginx of requests second instead i only got request second what version of swoole are you using show your php ri swoole swoole what is your machine environment used including version of kernel php gcc centos php xeon gcc kernel | 1 |
26,776 | 13,105,968,849 | IssuesEvent | 2020-08-04 13:06:11 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | Fenix takes a few second to start up | eng:performance | ## Steps to reproduce
- Start fenix (I had no open tabs, no other apps running in the background)
### Expected behavior
Fenix starts up quickly.
### Actual behavior
Sometimes, fenix takes some seconds to render the home screen.

Profile from the slow case: https://share.firefox.dev/2DjNtoH
Fast profile: https://share.firefox.dev/3ftPMmn
Looks like this is somehow telemetry related, and in fact after turning it off the issue is gone, even after turning it on again.
### Device information
* Android device: Samsung Galaxy A50
* Fenix version: ?
| True | Fenix takes a few second to start up - ## Steps to reproduce
- Start fenix (I had no open tabs, no other apps running in the background)
### Expected behavior
Fenix starts up quickly.
### Actual behavior
Sometimes, fenix takes some seconds to render the home screen.

Profile from the slow case: https://share.firefox.dev/2DjNtoH
Fast profile: https://share.firefox.dev/3ftPMmn
Looks like this is somehow telemetry related, and in fact after turning it off the issue is gone, even after turning it on again.
### Device information
* Android device: Samsung Galaxy A50
* Fenix version: ?
| perf | fenix takes a few second to start up steps to reproduce start fenix i had no open tabs no other apps running in the background expected behavior fenix starts up quickly actual behavior sometimes fenix takes some seconds to render the home screen profile from the slow case fast profile looks like this is somehow telemetry related and in fact after turning it off the issue is gone even after turning it on again device information android device samsung galaxy fenix version | 1 |
239,122 | 19,822,583,686 | IssuesEvent | 2022-01-20 00:21:53 | JHS-Viking-Robotics/FRC-2022 | https://api.github.com/repos/JHS-Viking-Robotics/FRC-2022 | closed | Add explicit motor controller configuration options to Constants | Type: Bug Type: Refactor Priority: In Testing | Constants.java currently holds most configuration options that should be set for the robot, but it does not contain explicit fields for things like sensor inversion, individual control over left and right motor configurations, PID controls, etc. Originally some of these fields were ommitted for simplicity, but during testing especially they are important to have.
These fields need to be added in, as well as any other configuration option that might be used in the future.
See Also: #17 | 1.0 | Add explicit motor controller configuration options to Constants - Constants.java currently holds most configuration options that should be set for the robot, but it does not contain explicit fields for things like sensor inversion, individual control over left and right motor configurations, PID controls, etc. Originally some of these fields were ommitted for simplicity, but during testing especially they are important to have.
These fields need to be added in, as well as any other configuration option that might be used in the future.
See Also: #17 | non_perf | add explicit motor controller configuration options to constants constants java currently holds most configuration options that should be set for the robot but it does not contain explicit fields for things like sensor inversion individual control over left and right motor configurations pid controls etc originally some of these fields were ommitted for simplicity but during testing especially they are important to have these fields need to be added in as well as any other configuration option that might be used in the future see also | 0 |
9,321 | 6,844,700,864 | IssuesEvent | 2017-11-13 03:24:01 | polymec/polymec-dev | https://api.github.com/repos/polymec/polymec-dev | opened | Lua and C memory management needs improvement | bug performance | Currently, the garbage collected objects in C are managed with the Boehm collector, and Lua's collector is vanilla malloc/realloc/free. This results in some truly astonishing scenarios when lots of small C-garbage-collected objects are created in Lua.
Aside from this, our use of the Boehm collector kind of precludes us from using other allocators, since Boehm's GC scheme needs to keep track of the entire heap. Strategies for not using the Boehm collector include
1. Use lua's garbage collector for C garbage collection (not all objects, just designated small ones). This would be nice because it would unify the C and Lua garbage collection scheme. It would be suboptimal because it would not be clear how to collect objects that exist purely in C without deleting their reference from the Lua registry.
2. Use reference counting for C. This makes "collection" deterministic but the inconsistency still exists between Lua and C's idea of who is alive amongst collectible objects. I think if we were using GC for large C objects this idea might make more sense. As it stands now I'm not in favor of this option.
3. Find another less all-encompassing GC than the Boehm collector. Collectors for C/C++ aren't thick on the ground, though, and this would still involve two garbage collectors in one codebase.
At the moment, option 1 offers the most simplicity with a penalty of longer lifetimes for small C-only objects. Does it matter? It may be worth a try, especially if we can figure out how to tell Lua that we're done with an object. | True | Lua and C memory management needs improvement - Currently, the garbage collected objects in C are managed with the Boehm collector, and Lua's collector is vanilla malloc/realloc/free. This results in some truly astonishing scenarios when lots of small C-garbage-collected objects are created in Lua.
Aside from this, our use of the Boehm collector kind of precludes us from using other allocators, since Boehm's GC scheme needs to keep track of the entire heap. Strategies for not using the Boehm collector include
1. Use lua's garbage collector for C garbage collection (not all objects, just designated small ones). This would be nice because it would unify the C and Lua garbage collection scheme. It would be suboptimal because it would not be clear how to collect objects that exist purely in C without deleting their reference from the Lua registry.
2. Use reference counting for C. This makes "collection" deterministic but the inconsistency still exists between Lua and C's idea of who is alive amongst collectible objects. I think if we were using GC for large C objects this idea might make more sense. As it stands now I'm not in favor of this option.
3. Find another less all-encompassing GC than the Boehm collector. Collectors for C/C++ aren't thick on the ground, though, and this would still involve two garbage collectors in one codebase.
At the moment, option 1 offers the most simplicity with a penalty of longer lifetimes for small C-only objects. Does it matter? It may be worth a try, especially if we can figure out how to tell Lua that we're done with an object. | perf | lua and c memory management needs improvement currently the garbage collected objects in c are managed with the boehm collector and lua s collector is vanilla malloc realloc free this results in some truly astonishing scenarios when lots of small c garbage collected objects are created in lua aside from this our use of the boehm collector kind of precludes us from using other allocators since boehm s gc scheme needs to keep track of the entire heap strategies for not using the boehm collector include use lua s garbage collector for c garbage collection not all objects just designated small ones this would be nice because it would unify the c and lua garbage collection scheme it would be suboptimal because it would not be clear how to collect objects that exist purely in c without deleting their reference from the lua registry use reference counting for c this makes collection deterministic but the inconsistency still exists between lua and c s idea of who is alive amongst collectible objects i think if we were using gc for large c objects this idea might make more sense as it stands now i m not in favor of this option find another less all encompassing gc than the boehm collector collectors for c c aren t thick on the ground though and this would still involve two garbage collectors in one codebase at the moment option offers the most simplicity with a penalty of longer lifetimes for small c only objects does it matter it may be worth a try especially if we can figure out how to tell lua that we re done with an object | 1 |
22,150 | 7,125,049,878 | IssuesEvent | 2018-01-19 21:18:54 | dart-lang/build | https://api.github.com/repos/dart-lang/build | opened | `TestCommand.run` should not set the exit code | Type: enhancement package:build_runner | Instead users should set the exitCode based on the return value of `run`.
This is a breaking change so we should wait for build_runner 0.8.0
| 1.0 | `TestCommand.run` should not set the exit code - Instead users should set the exitCode based on the return value of `run`.
This is a breaking change so we should wait for build_runner 0.8.0
| non_perf | testcommand run should not set the exit code instead users should set the exitcode based on the return value of run this is a breaking change so we should wait for build runner | 0 |
224,982 | 24,803,533,105 | IssuesEvent | 2022-10-25 00:58:28 | samqws-marketing/coursera_naptime | https://api.github.com/repos/samqws-marketing/coursera_naptime | opened | CVE-2016-3956 (High) detected in npm-2.11.2.jar | security vulnerability | ## CVE-2016-3956 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-2.11.2.jar</b></p></summary>
<p>WebJar for npm</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/org.webjars/npm/jars/npm-2.11.2.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- :x: **npm-2.11.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The CLI in npm before 2.15.1 and 3.x before 3.8.3, as used in Node.js 0.10 before 0.10.44, 0.12 before 0.12.13, 4 before 4.4.2, and 5 before 5.10.0, includes bearer tokens with arbitrary requests, which allows remote HTTP servers to obtain sensitive information by reading Authorization headers.
<p>Publish Date: Jul 2, 2016 2:59:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3956>CVE-2016-3956</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3956">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3956</a></p>
<p>Release Date: Jul 2, 2016 2:59:00 PM</p>
<p>Fix Resolution: CLI in npm - 2.15.1,3.8.3;Node.js - 0.10.44,0.12.13,4.4.2,5.10.0</p>
</p>
</details>
<p></p>
| True | CVE-2016-3956 (High) detected in npm-2.11.2.jar - ## CVE-2016-3956 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-2.11.2.jar</b></p></summary>
<p>WebJar for npm</p>
<p>Library home page: <a href="http://webjars.org">http://webjars.org</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/org.webjars/npm/jars/npm-2.11.2.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- :x: **npm-2.11.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The CLI in npm before 2.15.1 and 3.x before 3.8.3, as used in Node.js 0.10 before 0.10.44, 0.12 before 0.12.13, 4 before 4.4.2, and 5 before 5.10.0, includes bearer tokens with arbitrary requests, which allows remote HTTP servers to obtain sensitive information by reading Authorization headers.
<p>Publish Date: Jul 2, 2016 2:59:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-3956>CVE-2016-3956</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3956">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3956</a></p>
<p>Release Date: Jul 2, 2016 2:59:00 PM</p>
<p>Fix Resolution: CLI in npm - 2.15.1,3.8.3;Node.js - 0.10.44,0.12.13,4.4.2,5.10.0</p>
</p>
</details>
<p></p>
| non_perf | cve high detected in npm jar cve high severity vulnerability vulnerable library npm jar webjar for npm library home page a href path to vulnerable library home wss scanner cache org webjars npm jars npm jar dependency hierarchy sbt plugin jar root library sbt js engine jar npm jar x npm jar vulnerable library found in head commit a href found in base branch master vulnerability details the cli in npm before and x before as used in node js before before before and before includes bearer tokens with arbitrary requests which allows remote http servers to obtain sensitive information by reading authorization headers publish date jul pm url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date jul pm fix resolution cli in npm node js | 0 |
41,717 | 21,915,266,613 | IssuesEvent | 2022-05-21 18:15:11 | jkomoros/card-web | https://api.github.com/repos/jkomoros/card-web | closed | Modify local cards on edit instead of waiting for server | performance | A lot of performance problems happen because cards are updated by the server one at time (e.g. when the timestamp finalizes, or after updateInboundLinks fires).
One thing we could do is have it so on every edit we make local edits to the cards collection in state mirroring what we expect the server to ultimately do. Then when we receive new cards in updateCards, we compare each one to the ones we have locally, and if they're equivalent (skipping any timestamp fields) then we drop that update on the floor.
One of the reasons that updateInboundLinks is done on the server is because we can't guarantee that the editor of the card has access to all of the cards pointed to, although realistically that's very very rare (the only time it would happen is if a card already had a reference to a card the editor can't see, and then it's removed it).
Related to #538 and #517, because those problems would be obviated by this
| True | Modify local cards on edit instead of waiting for server - A lot of performance problems happen because cards are updated by the server one at time (e.g. when the timestamp finalizes, or after updateInboundLinks fires).
One thing we could do is have it so on every edit we make local edits to the cards collection in state mirroring what we expect the server to ultimately do. Then when we receive new cards in updateCards, we compare each one to the ones we have locally, and if they're equivalent (skipping any timestamp fields) then we drop that update on the floor.
One of the reasons that updateInboundLinks is done on the server is because we can't guarantee that the editor of the card has access to all of the cards pointed to, although realistically that's very very rare (the only time it would happen is if a card already had a reference to a card the editor can't see, and then it's removed it).
Related to #538 and #517, because those problems would be obviated by this
| perf | modify local cards on edit instead of waiting for server a lot of performance problems happen because cards are updated by the server one at time e g when the timestamp finalizes or after updateinboundlinks fires one thing we could do is have it so on every edit we make local edits to the cards collection in state mirroring what we expect the server to ultimately do then when we receive new cards in updatecards we compare each one to the ones we have locally and if they re equivalent skipping any timestamp fields then we drop that update on the floor one of the reasons that updateinboundlinks is done on the server is because we can t guarantee that the editor of the card has access to all of the cards pointed to although realistically that s very very rare the only time it would happen is if a card already had a reference to a card the editor can t see and then it s removed it related to and because those problems would be obviated by this | 1 |
38,473 | 19,293,795,085 | IssuesEvent | 2021-12-12 08:30:05 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Stop using IntersectionObserver polyfill in ads if native support is present | WG: monetization P3: When Possible Type: Feature Request Related to: Performance Blocked Stale | Now that stable Chrome has IntersectionObserver we should use it instead of the postMessage mechanism.
CC @lannka
| True | Stop using IntersectionObserver polyfill in ads if native support is present - Now that stable Chrome has IntersectionObserver we should use it instead of the postMessage mechanism.
CC @lannka
| perf | stop using intersectionobserver polyfill in ads if native support is present now that stable chrome has intersectionobserver we should use it instead of the postmessage mechanism cc lannka | 1 |
28,492 | 13,723,938,009 | IssuesEvent | 2020-10-03 11:56:30 | flybywiresim/a32nx | https://api.github.com/repos/flybywiresim/a32nx | closed | [BUG] Cant go to FL380 | Bug Plane Performance | <!-- ⚠⚠ Do not delete this issue template! ⚠⚠ -->
<!-- Issues that do not use the issue template are likely to be ignored and closed. -->
**Mod Version**
master 0.3.0
**Describe the bug**
ATC send me to FL380 from FL360. Start to change LVL but cant. Aircraft lose speed and lost flight level.
**To Reproduce**
1. Start with full fuel and zero payload
2. Go to FL360 then FL380
3. Possible front WIND is an issue
**Expected behavior**
FL380 like ATC said
**Actual behavior**
Cant go to FL380
**References**

**Additional context**
Was this working before/when did the issue start occurring?
Is this a problem in the vanilla unmodded game?
I dint's see this problem before
Discord username (if different from GitHub):
Mugz#6174 | True | [BUG] Cant go to FL380 - <!-- ⚠⚠ Do not delete this issue template! ⚠⚠ -->
<!-- Issues that do not use the issue template are likely to be ignored and closed. -->
**Mod Version**
master 0.3.0
**Describe the bug**
ATC send me to FL380 from FL360. Start to change LVL but cant. Aircraft lose speed and lost flight level.
**To Reproduce**
1. Start with full fuel and zero payload
2. Go to FL360 then FL380
3. Possible front WIND is an issue
**Expected behavior**
FL380 like ATC said
**Actual behavior**
Cant go to FL380
**References**

**Additional context**
Was this working before/when did the issue start occurring?
Is this a problem in the vanilla unmodded game?
I dint's see this problem before
Discord username (if different from GitHub):
Mugz#6174 | perf | cant go to mod version master describe the bug atc send me to from start to change lvl but cant aircraft lose speed and lost flight level to reproduce start with full fuel and zero payload go to then possible front wind is an issue expected behavior like atc said actual behavior cant go to references additional context was this working before when did the issue start occurring is this a problem in the vanilla unmodded game i dint s see this problem before discord username if different from github mugz | 1 |
46,474 | 24,555,790,540 | IssuesEvent | 2022-10-12 15:46:02 | pmacg/stag | https://api.github.com/repos/pmacg/stag | opened | Improve SBM running time from O(k^2) to O(k) | enhancement performance | Currently, the efficient `SBM` method has running time O(k^2) since it iterates through each pair of clusters. This could be improved to O(k) by sampling all of the joining edges for a given cluster at the same time. | True | Improve SBM running time from O(k^2) to O(k) - Currently, the efficient `SBM` method has running time O(k^2) since it iterates through each pair of clusters. This could be improved to O(k) by sampling all of the joining edges for a given cluster at the same time. | perf | improve sbm running time from o k to o k currently the efficient sbm method has running time o k since it iterates through each pair of clusters this could be improved to o k by sampling all of the joining edges for a given cluster at the same time | 1 |
74,492 | 9,078,851,432 | IssuesEvent | 2019-02-16 00:23:45 | elastic/eui | https://api.github.com/repos/elastic/eui | closed | New icons needed | assign:designer icons | - [x] notebook app icons - https://fontawesome.com/v3.2.1/icon/book/
- [x] bold - https://fontawesome.com/icons/bold?style=solid
- [x] italic - https://fontawesome.com/icons/italic?style=solid
- [x] underline - https://fontawesome.com/icons/underline?style=solid
- [x] strikethrough - https://fontawesome.com/icons/strikethrough?style=solid
- [x] list - https://fontawesome.com/icons/list-ul?style=solid
- [x] ordered list - https://fontawesome.com/icons/list-ol?style=solid
- [x] code block - https://fontawesome.com/icons/code?style=solid
- [x] quote block - https://fontawesome.com/icons/quote-right?style=solid
- [x] h1/h2/h3/h4/h5 - https://fontawesome.com/icons?d=gallery&q=header
- [x] link - https://fontawesome.com/icons/link?style=solid
- [ ] binoculars - https://fontawesome.com/icons/binoculars?style=solid
- [x] table - https://fontawesome.com/icons/table?style=solid
- [x] align center - https://fontawesome.com/icons/align-center?style=solid
- [x] align left - https://fontawesome.com/icons/align-left?style=solid
- [x] align right - https://fontawesome.com/icons/align-right?style=solid
- [x] comment - https://fontawesome.com/icons/comment-alt-lines?style=solid
- [x] undo - https://fontawesome.com/v4.7.0/icon/undo/
- [x] redo - https://fontawesome.com/v4.7.0/icon/repeat/ | 1.0 | New icons needed - - [x] notebook app icons - https://fontawesome.com/v3.2.1/icon/book/
- [x] bold - https://fontawesome.com/icons/bold?style=solid
- [x] italic - https://fontawesome.com/icons/italic?style=solid
- [x] underline - https://fontawesome.com/icons/underline?style=solid
- [x] strikethrough - https://fontawesome.com/icons/strikethrough?style=solid
- [x] list - https://fontawesome.com/icons/list-ul?style=solid
- [x] ordered list - https://fontawesome.com/icons/list-ol?style=solid
- [x] code block - https://fontawesome.com/icons/code?style=solid
- [x] quote block - https://fontawesome.com/icons/quote-right?style=solid
- [x] h1/h2/h3/h4/h5 - https://fontawesome.com/icons?d=gallery&q=header
- [x] link - https://fontawesome.com/icons/link?style=solid
- [ ] binoculars - https://fontawesome.com/icons/binoculars?style=solid
- [x] table - https://fontawesome.com/icons/table?style=solid
- [x] align center - https://fontawesome.com/icons/align-center?style=solid
- [x] align left - https://fontawesome.com/icons/align-left?style=solid
- [x] align right - https://fontawesome.com/icons/align-right?style=solid
- [x] comment - https://fontawesome.com/icons/comment-alt-lines?style=solid
- [x] undo - https://fontawesome.com/v4.7.0/icon/undo/
- [x] redo - https://fontawesome.com/v4.7.0/icon/repeat/ | non_perf | new icons needed notebook app icons bold italic underline strikethrough list ordered list code block quote block link binoculars table align center align left align right comment undo redo | 0 |
639,059 | 20,745,565,654 | IssuesEvent | 2022-03-14 22:28:27 | apcountryman/picolibrary-microchip-megaavr | https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr | closed | Add Microchip megaAVR variable configuration SPI basic controller | priority-normal status-awaiting_review type-feature | Add Microchip megaAVR variable configuration SPI basic controller (`::picolibrary::Microchip::megaAVR::SPI::Variable_Configuration_Basic_Controller`).
- [x] The `Variable_Configuration_Basic_Controller` template class should be defined in the `include/picolibrary/microchip/megaavr/spi.h`/`source/picolibrary/microchip/megaavr/spi.cc` header/source file pair
- [x] The `Variable_Configuration_Basic_Controller` template class should have the following template parameters:
- [x] `typename Peripheral`: The type of peripheral used to implement variable configuration controller functionality
- [x] The `Variable_Configuration_Basic_Controller` template class should not have a default implementation | 1.0 | Add Microchip megaAVR variable configuration SPI basic controller - Add Microchip megaAVR variable configuration SPI basic controller (`::picolibrary::Microchip::megaAVR::SPI::Variable_Configuration_Basic_Controller`).
- [x] The `Variable_Configuration_Basic_Controller` template class should be defined in the `include/picolibrary/microchip/megaavr/spi.h`/`source/picolibrary/microchip/megaavr/spi.cc` header/source file pair
- [x] The `Variable_Configuration_Basic_Controller` template class should have the following template parameters:
- [x] `typename Peripheral`: The type of peripheral used to implement variable configuration controller functionality
- [x] The `Variable_Configuration_Basic_Controller` template class should not have a default implementation | non_perf | add microchip megaavr variable configuration spi basic controller add microchip megaavr variable configuration spi basic controller picolibrary microchip megaavr spi variable configuration basic controller the variable configuration basic controller template class should be defined in the include picolibrary microchip megaavr spi h source picolibrary microchip megaavr spi cc header source file pair the variable configuration basic controller template class should have the following template parameters typename peripheral the type of peripheral used to implement variable configuration controller functionality the variable configuration basic controller template class should not have a default implementation | 0 |
50,262 | 26,554,992,793 | IssuesEvent | 2023-01-20 11:11:40 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | reopened | [Task] : Split dataTree for eval | Performance High Task FE Coders Pod Evaluated Value | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
As we know we have a dataTree which holds all properties and evaluated values. As part of the performance improvement we can make split this dataTree into `dataTree` and `entityConfig` -
1. `DataTree` - which has evaluated values and paths which actually needs evaluations
2. `entityConfig` - this will have all config properties which are needed for like dependency map ie DynamicBindingPathList, DynamicPaths, TriggerPaths, ValidationPath, etc.
Benefits -
1. During update dataTree, will take difference between oldDataTree and dataTree avoiding all the path which doesn't neeed evalutions
2. size of dataTree will be reduced which will help during evaluations
## Issues this task closes
- https://github.com/appsmithorg/appsmith/issues/13982
| True | [Task] : Split dataTree for eval - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
As we know we have a dataTree which holds all properties and evaluated values. As part of the performance improvement we can make split this dataTree into `dataTree` and `entityConfig` -
1. `DataTree` - which has evaluated values and paths which actually needs evaluations
2. `entityConfig` - this will have all config properties which are needed for like dependency map ie DynamicBindingPathList, DynamicPaths, TriggerPaths, ValidationPath, etc.
Benefits -
1. During update dataTree, will take difference between oldDataTree and dataTree avoiding all the path which doesn't neeed evalutions
2. size of dataTree will be reduced which will help during evaluations
## Issues this task closes
- https://github.com/appsmithorg/appsmith/issues/13982
| perf | split datatree for eval is there an existing issue for this i have searched the existing issues subtasks as we know we have a datatree which holds all properties and evaluated values as part of the performance improvement we can make split this datatree into datatree and entityconfig datatree which has evaluated values and paths which actually needs evaluations entityconfig this will have all config properties which are needed for like dependency map ie dynamicbindingpathlist dynamicpaths triggerpaths validationpath etc benefits during update datatree will take difference between olddatatree and datatree avoiding all the path which doesn t neeed evalutions size of datatree will be reduced which will help during evaluations issues this task closes | 1 |
304,744 | 23,081,521,866 | IssuesEvent | 2022-07-26 07:40:52 | ToDoCalendar/ToDoCalendar_backend | https://api.github.com/repos/ToDoCalendar/ToDoCalendar_backend | closed | Env example added | documentation enhancement | - Так как `.env` файл находится в `.gitignore` с секретными даными, то создать файл `.env.example`, в которых секретные данные помечены словом `secret`.
- Добавить в README команду `cp .env.example .env`. | 1.0 | Env example added - - Так как `.env` файл находится в `.gitignore` с секретными даными, то создать файл `.env.example`, в которых секретные данные помечены словом `secret`.
- Добавить в README команду `cp .env.example .env`. | non_perf | env example added так как env файл находится в gitignore с секретными даными то создать файл env example в которых секретные данные помечены словом secret добавить в readme команду cp env example env | 0 |
43,664 | 23,326,124,769 | IssuesEvent | 2022-08-08 21:25:16 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | storage: `pointSynthesizingIter` should only synthesize points above existing points | C-performance A-kv-replication A-storage T-kv-replication | `pointSynthesizingIter currently synthesizes points both above and below existing points. We could probably get away with only synthesizing them above existing points, which would also avoid additional reverse peeks in `SeekGE` that are likely very slow.
NB: This sort of assumes that synthetic points are tombstones, since we may not want to skip other synthetic points. But we can cross that bridge when we get there.
Jira issue: CRDB-17346
Epic CRDB-2624 | True | storage: `pointSynthesizingIter` should only synthesize points above existing points - `pointSynthesizingIter currently synthesizes points both above and below existing points. We could probably get away with only synthesizing them above existing points, which would also avoid additional reverse peeks in `SeekGE` that are likely very slow.
NB: This sort of assumes that synthetic points are tombstones, since we may not want to skip other synthetic points. But we can cross that bridge when we get there.
Jira issue: CRDB-17346
Epic CRDB-2624 | perf | storage pointsynthesizingiter should only synthesize points above existing points pointsynthesizingiter currently synthesizes points both above and below existing points we could probably get away with only synthesizing them above existing points which would also avoid additional reverse peeks in seekge that are likely very slow nb this sort of assumes that synthetic points are tombstones since we may not want to skip other synthetic points but we can cross that bridge when we get there jira issue crdb epic crdb | 1 |
18,709 | 10,197,323,125 | IssuesEvent | 2019-08-12 23:50:17 | yandex/ClickHouse | https://api.github.com/repos/yandex/ClickHouse | closed | High write amplification for very small tables. | performance wontfix | For system tables you can often see something like this:
```
2019.08.04 20:13:08.789767 [ 4 ] {} <Debug> system.text_log (MergerMutator): Merging 4 parts: from 201908_6_156_30 to 201908_159_159_0 into tmp_merge_201908_6_159_31
```
Merging strategy decided to merge zero-level parts with 30-level parts. Probably due to similar compressed size. This is suboptimal. | True | High write amplification for very small tables. - For system tables you can often see something like this:
```
2019.08.04 20:13:08.789767 [ 4 ] {} <Debug> system.text_log (MergerMutator): Merging 4 parts: from 201908_6_156_30 to 201908_159_159_0 into tmp_merge_201908_6_159_31
```
Merging strategy decided to merge zero-level parts with 30-level parts. Probably due to similar compressed size. This is suboptimal. | perf | high write amplification for very small tables for system tables you can often see something like this system text log mergermutator merging parts from to into tmp merge merging strategy decided to merge zero level parts with level parts probably due to similar compressed size this is suboptimal | 1 |
7,381 | 6,016,664,945 | IssuesEvent | 2017-06-07 07:40:38 | evhub/coconut | https://api.github.com/repos/evhub/coconut | opened | Optimize Coconut built-ins into Python built-ins in for loops | performance | When Coconut sees `for x in enumerate`, it should use `_coconut.enumerate`, not `_coconut_enumerate`. | True | Optimize Coconut built-ins into Python built-ins in for loops - When Coconut sees `for x in enumerate`, it should use `_coconut.enumerate`, not `_coconut_enumerate`. | perf | optimize coconut built ins into python built ins in for loops when coconut sees for x in enumerate it should use coconut enumerate not coconut enumerate | 1 |
28,280 | 13,630,402,079 | IssuesEvent | 2020-09-24 16:23:25 | timberio/vector | https://api.github.com/repos/timberio/vector | opened | Automatic conconcurrency limiting has bad behavior | domain: networking domain: performance type: bug | The automatic concurrency limiting feature exhibits bad behavior for two of the scenarios.
1. When the server responds with a HTTP 429 or 503 when the request rate is over the limit, we occasionally spike up to the maximum rate limit (1000 requests/sec), and other times detect a limit well below the actual limit (graph is truncated to show the lower limit)


2. When the server drops the connection, we detect any backpressure at all and just send at the maximum rate:
 | True | Automatic conconcurrency limiting has bad behavior - The automatic concurrency limiting feature exhibits bad behavior for two of the scenarios.
1. When the server responds with a HTTP 429 or 503 when the request rate is over the limit, we occasionally spike up to the maximum rate limit (1000 requests/sec), and other times detect a limit well below the actual limit (graph is truncated to show the lower limit)


2. When the server drops the connection, we detect any backpressure at all and just send at the maximum rate:
 | perf | automatic conconcurrency limiting has bad behavior the automatic concurrency limiting feature exhibits bad behavior for two of the scenarios when the server responds with a http or when the request rate is over the limit we occasionally spike up to the maximum rate limit requests sec and other times detect a limit well below the actual limit graph is truncated to show the lower limit when the server drops the connection we detect any backpressure at all and just send at the maximum rate | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.