Dataset schema (column, dtype, observed range or number of classes):

column        dtype          range / classes
Unnamed: 0    int64          1 … 832k
id            float64        2.49B … 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 … 19
repo          stringlengths  7 … 112
repo_url      stringlengths  36 … 141
action        stringclasses  3 values
title         stringlengths  3 … 438
labels        stringlengths  4 … 308
body          stringlengths  7 … 254k
index         stringclasses  7 values
text_combine  stringlengths  96 … 254k
label         stringclasses  2 values
text          stringlengths  96 … 246k
binary_label  int64          0 … 1
64,650
3,213,813,138
IssuesEvent
2015-10-06 21:34:22
lorentey/stagger
https://api.github.com/repos/lorentey/stagger
closed
Integrate stagger into Python's packaging system
auto-migrated Priority-Medium Type-Enhancement
``` See "Distributing Python Modules": http://docs.python.org/3.0/distutils/index.html ``` Original issue reported on code.google.com by `Karoly.Lorentey` on 13 Jun 2009 at 5:45
1.0
Integrate stagger into Python's packaging system - ``` See "Distributing Python Modules": http://docs.python.org/3.0/distutils/index.html ``` Original issue reported on code.google.com by `Karoly.Lorentey` on 13 Jun 2009 at 5:45
non_main
integrate stagger into python s packaging system see distributing python modules original issue reported on code google com by karoly lorentey on jun at
0
1,988
6,694,259,593
IssuesEvent
2017-10-10 00:42:11
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
Maps: more specific address search
Maintainer Input Requested
Hey, is it possible to search with more specific search queries, i.e. including an address? That would be much more useful than just a simple city name matching. IA Page: http://duck.co/ia/view/maps_maps [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil
True
Maps: more specific address search - Hey, is it possible to search with more specific search queries, i.e. including an address?That would be much more useful than just a simple city name matching. IA Page: http://duck.co/ia/view/maps_maps [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @nilnilnil
main
maps more specific address search hey is it possible to search with more specific search queries i e including an address that would be much more useful than just a simple city name matching ia page nilnilnil
1
827,243
31,761,502,902
IssuesEvent
2023-09-12 05:41:27
RagnarokResearchLab/RagLite
https://api.github.com/repos/RagnarokResearchLab/RagLite
opened
Add more statistics to the data mining toolkit
Complexity: Low Priority: Optional Status: Accepted Type: Improvement Scope: File Formats
Found this somewhere in my old notes: > Number of processed files (duh) Number of different values that occurred for each field (only variables/LUTs, depends on the data) Min, max, avg of those values (if there are multiple) Name of the files corresponding to min, max values List of those values (only useful if it's not too big) Keys of the entries whose values were always identical Min, max, avg loading times Name of the files corresponding to min, max loading time Probably not too relevant right now, but if I ever feel like looking into Renewal changes again then it might come in handy.
1.0
Add more statistics to the data mining toolkit - Found this somewhere in my old notes: > Number of processed files (duh) Number of different values that occured for each field (only variables/LUTs, depends on the data) Min, max, avg of those values (if there are multiple) Name of the files corresponding to min, max values List of those values (only useful if it's not too big) Keys of the entries whose values were always identical Min, max, avg loading times Name of the files corresponding to min, max loading time Probably not too relevant right now, but if I ever feel like looking into Renewal changes again then it might come in handy.
non_main
add more statistics to the data mining toolkit found this somewhere in my old notes number of processed files duh number of different values that occured for each field only variables luts depends on the data min max avg of those values if there are multiple name of the files corresponding to min max values list of those values only useful if it s not too big keys of the entries whose values were always identical min max avg loading times name of the files corresponding to min max loading time probably not too relevant right now but if i ever feel like looking into renewal changes again then it might come in handy
0
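The statistics wishlist in the RagLite issue above (distinct values per field, min/max/avg, and which file produced each extreme) can be sketched as a small Go accumulator. The field values and file names below are hypothetical sample data, not anything from the actual toolkit.

```go
package main

import "fmt"

// stat accumulates the per-field statistics the issue asks for:
// number of distinct values, min, max, average, and the files
// that produced the min and max.
type stat struct {
	distinct         map[float64]int
	min, max, sum    float64
	minFile, maxFile string
	n                int
}

func newStat() *stat { return &stat{distinct: map[float64]int{}} }

// add records one file's value for the field being tracked.
func (s *stat) add(file string, v float64) {
	if s.n == 0 || v < s.min {
		s.min, s.minFile = v, file
	}
	if s.n == 0 || v > s.max {
		s.max, s.maxFile = v, file
	}
	s.distinct[v]++
	s.sum += v
	s.n++
}

func (s *stat) avg() float64 { return s.sum / float64(s.n) }

func main() {
	// Hypothetical per-file values for a single field.
	s := newStat()
	s.add("a.gnd", 3)
	s.add("b.gnd", 7)
	s.add("c.gnd", 5)
	fmt.Printf("distinct=%d min=%v (%s) max=%v (%s) avg=%v\n",
		len(s.distinct), s.min, s.minFile, s.max, s.maxFile, s.avg())
	// → distinct=3 min=3 (a.gnd) max=7 (b.gnd) avg=5
}
```

One accumulator per field (e.g. a `map[string]*stat` keyed by field name) would cover the "depends on the data" case where the set of fields varies between files.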
281,530
21,315,412,286
IssuesEvent
2022-04-16 07:22:03
jaysmyname/pe
https://api.github.com/repos/jaysmyname/pe
opened
Delete command formats should have ... at the end
type.DocumentationBug severity.VeryLow
![image.png](https://raw.githubusercontent.com/jaysmyname/pe/main/files/62c64f76-f85c-4ef0-abb9-4c732c2f2cbf.png) ![image.png](https://raw.githubusercontent.com/jaysmyname/pe/main/files/6a44e9bb-211c-4b16-bd9a-c6ebd0c8127e.png) For all the commands regarding delete, there should be a `...` at the end, signifying you can use it more than once, as demonstrated by the example `deletes 1,2,3` <!--session: 1650087410741-ccd0034b-8cd8-4172-883c-753c24bc19a1--> <!--Version: Web v3.4.2-->
1.0
Delete command formats should have ... at the end - ![image.png](https://raw.githubusercontent.com/jaysmyname/pe/main/files/62c64f76-f85c-4ef0-abb9-4c732c2f2cbf.png) ![image.png](https://raw.githubusercontent.com/jaysmyname/pe/main/files/6a44e9bb-211c-4b16-bd9a-c6ebd0c8127e.png) For all the commands regarding delete, there should be a `...` at the end, signifying you can use it more than once, as demonstrated by the example `deletes 1,2,3` <!--session: 1650087410741-ccd0034b-8cd8-4172-883c-753c24bc19a1--> <!--Version: Web v3.4.2-->
non_main
delete command formats should have at the end for all the commands regarding delete there should be a at the end signifying you can use it more than once as demonstrated by the example deletes
0
2,163
7,529,464,409
IssuesEvent
2018-04-14 05:12:03
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
Network Modules Running Slower After Upgrading to Ansible 2.5
aci affects_2.5 avi bug f5 module needs_maintainer needs_triage networking nxos performance support:community support:core support:network
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME All network modules for all device families. E.g., ios, ios-xr, nxos, eos, etc... ##### ANSIBLE VERSION ``` ansible 2.5.0 configured module search path = [u'/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)] ``` ##### CONFIGURATION ``` # Tested without callback and strategy DEFAULT_CALLBACK_WHITELIST(ansible/ansible.cfg) = ['timer', 'mail', 'skippy', 'profile_tasks'] DEFAULT_STRATEGY(ansible/ansible.cfg) = free DEFAULT_TIMEOUT(ansible/ansible.cfg) = 20 PERSISTENT_CONNECT_TIMEOUT(ansible/ansible.cfg) = 90 ``` ##### OS / ENVIRONMENT RHEL 7.4 3.10.0-693.11.1.el7.x86_64 ##### SUMMARY There's a significant difference in speed/performance after upgrading to 2.5 from 2.4.3. Results are unaffected by scale. I've tested most of our custom roles so far, and I've verified this is consistent in all of our environments. Also, going with/without the free strategy and callbacks we normally run does not affect performance. Processor and memory utilization is far lower, consistent with the increase in run times. Simply put, 
all network roles are running significantly slower in 2.5: ##### STEPS TO REPRODUCE Facts: ```ansible-playbook facts.yml -l cisco-ios[1:500] --forks 500``` Passwords:: ```ansible-playbook password.yml -l cisco-ios[1:100] --forks 100 ``` ##### EXPECTED RESULTS **2.4.3** Facts: ``` network_facts : collect output from ios device ------------------------- 41.10s network_facts : set config_lines fact ---------------------------------- 33.16s debug ------------------------------------------------------------------- 5.31s network_facts : include cisco-ios tasks --------------------------------- 2.66s network_facts : set config fact ----------------------------------------- 0.46s network_facts : set version fact ---------------------------------------- 0.41s network_facts : set model number ---------------------------------------- 0.41s network_facts : set management interface name fact ---------------------- 0.40s Playbook run took 0 days, 0 hours, 4 minutes, 24 seconds ``` Passwords: ``` config_localpw : Update line passwords -------------------------------- 111.60s network_facts : collect output from ios device -------------------------- 6.12s config_localpw : Update line passwords ---------------------------------- 0.91s network_facts : include cisco-ios tasks --------------------------------- 0.51s config_localpw : Update line passwords ---------------------------------- 0.50s config_localpw : Update terminal server username doorbell --------------- 0.28s config_localpw : debug -------------------------------------------------- 0.22s config_localpw : Update terminal server username doorbell --------------- 0.19s config_localpw : set_fact - Modem slot 3 -------------------------------- 0.16s config_localpw : Update enable and username config lines ---------------- 0.16s config_localpw : debug -------------------------------------------------- 0.13s config_localpw : Identify if it has a modem ----------------------------- 0.13s config_localpw : Update enable and 
username config lines ---------------- 0.13s config_localpw : Update terminal server username doorbell --------------- 0.12s config_localpw : debug -------------------------------------------------- 0.12s config_localpw : Update terminal server username doorbell --------------- 0.11s config_localpw : Update terminal server username doorbell --------------- 0.08s config_localpw : Update enable and username config lines ---------------- 0.07s config_localpw : debug -------------------------------------------------- 0.07s config_localpw : Update line passwords ---------------------------------- 0.07s Playbook run took 0 days, 0 hours, 3 minutes, 12 seconds ``` ##### ACTUAL RESULTS **2.5** Facts: ``` network_facts : collect output from ios device ------------------------- 27.77s network_facts : include cisco-ios tasks --------------------------------- 2.83s debug ------------------------------------------------------------------- 0.26s network_facts : set config fact ----------------------------------------- 0.12s network_facts : set management interface name fact ---------------------- 0.12s network_facts : set model number ---------------------------------------- 0.06s network_facts : set version fact ---------------------------------------- 0.04s network_facts : set config_lines fact ----------------------------------- 0.04s Playbook run took 0 days, 0 hours, 26 minutes, 33 seconds ``` Passwords: ``` config_localpw : Update line passwords ---------------------------------- 40.38s config_localpw : Update line passwords ---------------------------------- 26.52s config_localpw : Update terminal server username doorbell --------------- 22.32s config_localpw : Update enable and username config lines ---------------- 21.04s config_localpw : Update terminal server username doorbell --------------- 16.96s config_localpw : Update line passwords ---------------------------------- 16.87s config_localpw : Update line passwords ---------------------------------- 16.86s 
config_localpw : Update line passwords ---------------------------------- 16.68s config_localpw : Update line passwords ---------------------------------- 15.75s config_localpw : Update terminal server username doorbell --------------- 15.39s config_localpw : Update line passwords ---------------------------------- 15.25s config_localpw : Update line passwords ---------------------------------- 15.12s config_localpw : Update line passwords ---------------------------------- 14.62s config_localpw : Identify if it has a modem ----------------------------- 14.52s config_localpw : Update terminal server username doorbell --------------- 13.94s config_localpw : Update line passwords ---------------------------------- 13.78s config_localpw : Update line passwords ---------------------------------- 13.47s config_localpw : Identify if and where the modem is --------------------- 13.03s config_localpw : Update enable and username config lines ---------------- 12.82s config_localpw : Update line passwords ---------------------------------- 12.07s Playbook run took 0 days, 0 hours, 17 minutes, 38 seconds ```
True
Network Modules Running Slower After Upgrading to Ansible 2.5 - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME All network modules for all device families. E.g., ios, ios-xr, nxos, eos, etc... ##### ANSIBLE VERSION ``` ansible 2.5.0 configured module search path = [u'/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)] ``` ##### CONFIGURATION ``` # Tested without callback and strategy DEFAULT_CALLBACK_WHITELIST(ansible/ansible.cfg) = ['timer', 'mail', 'skippy', 'profile_tasks'] DEFAULT_STRATEGY(ansible/ansible.cfg) = free DEFAULT_TIMEOUT(ansible/ansible.cfg) = 20 PERSISTENT_CONNECT_TIMEOUT(ansible/ansible.cfg) = 90 ``` ##### OS / ENVIRONMENT RHEL 7.4 3.10.0-693.11.1.el7.x86_64 ##### SUMMARY There's a significant difference in speed/performance after upgrading to 2.5 from 2.4.3. Result are unaffected by scale. I've tested most of our custom roles so far, and I've verified this is consistent in all of our environments. Also, going with/without the free strategy and callbacks we normally run does not affect performance. Processor and memory utilization is far lower, consistent with the increase in run times. Simply put. 
all network roles are running significantly slower in 2.5: ##### STEPS TO REPRODUCE Facts: ```ansible-playbook facts.yml -l cisco-ios[1:500] --forks 500``` Passwords:: ```ansible-playbook password.yml -l cisco-ios[1:100] --forks 100 ``` ##### EXPECTED RESULTS **2.4.3** Facts: ``` network_facts : collect output from ios device ------------------------- 41.10s network_facts : set config_lines fact ---------------------------------- 33.16s debug ------------------------------------------------------------------- 5.31s network_facts : include cisco-ios tasks --------------------------------- 2.66s network_facts : set config fact ----------------------------------------- 0.46s network_facts : set version fact ---------------------------------------- 0.41s network_facts : set model number ---------------------------------------- 0.41s network_facts : set management interface name fact ---------------------- 0.40s Playbook run took 0 days, 0 hours, 4 minutes, 24 seconds ``` Passwords: ``` config_localpw : Update line passwords -------------------------------- 111.60s network_facts : collect output from ios device -------------------------- 6.12s config_localpw : Update line passwords ---------------------------------- 0.91s network_facts : include cisco-ios tasks --------------------------------- 0.51s config_localpw : Update line passwords ---------------------------------- 0.50s config_localpw : Update terminal server username doorbell --------------- 0.28s config_localpw : debug -------------------------------------------------- 0.22s config_localpw : Update terminal server username doorbell --------------- 0.19s config_localpw : set_fact - Modem slot 3 -------------------------------- 0.16s config_localpw : Update enable and username config lines ---------------- 0.16s config_localpw : debug -------------------------------------------------- 0.13s config_localpw : Identify if it has a modem ----------------------------- 0.13s config_localpw : Update enable and 
username config lines ---------------- 0.13s config_localpw : Update terminal server username doorbell --------------- 0.12s config_localpw : debug -------------------------------------------------- 0.12s config_localpw : Update terminal server username doorbell --------------- 0.11s config_localpw : Update terminal server username doorbell --------------- 0.08s config_localpw : Update enable and username config lines ---------------- 0.07s config_localpw : debug -------------------------------------------------- 0.07s config_localpw : Update line passwords ---------------------------------- 0.07s Playbook run took 0 days, 0 hours, 3 minutes, 12 seconds ``` ##### ACTUAL RESULTS **2.5** Facts: ``` network_facts : collect output from ios device ------------------------- 27.77s network_facts : include cisco-ios tasks --------------------------------- 2.83s debug ------------------------------------------------------------------- 0.26s network_facts : set config fact ----------------------------------------- 0.12s network_facts : set management interface name fact ---------------------- 0.12s network_facts : set model number ---------------------------------------- 0.06s network_facts : set version fact ---------------------------------------- 0.04s network_facts : set config_lines fact ----------------------------------- 0.04s Playbook run took 0 days, 0 hours, 26 minutes, 33 seconds ``` Passwords: ``` config_localpw : Update line passwords ---------------------------------- 40.38s config_localpw : Update line passwords ---------------------------------- 26.52s config_localpw : Update terminal server username doorbell --------------- 22.32s config_localpw : Update enable and username config lines ---------------- 21.04s config_localpw : Update terminal server username doorbell --------------- 16.96s config_localpw : Update line passwords ---------------------------------- 16.87s config_localpw : Update line passwords ---------------------------------- 16.86s 
config_localpw : Update line passwords ---------------------------------- 16.68s config_localpw : Update line passwords ---------------------------------- 15.75s config_localpw : Update terminal server username doorbell --------------- 15.39s config_localpw : Update line passwords ---------------------------------- 15.25s config_localpw : Update line passwords ---------------------------------- 15.12s config_localpw : Update line passwords ---------------------------------- 14.62s config_localpw : Identify if it has a modem ----------------------------- 14.52s config_localpw : Update terminal server username doorbell --------------- 13.94s config_localpw : Update line passwords ---------------------------------- 13.78s config_localpw : Update line passwords ---------------------------------- 13.47s config_localpw : Identify if and where the modem is --------------------- 13.03s config_localpw : Update enable and username config lines ---------------- 12.82s config_localpw : Update line passwords ---------------------------------- 12.07s Playbook run took 0 days, 0 hours, 17 minutes, 38 seconds ```
main
network modules running slower after upgrading to ansible issue type bug report component name all network modules for all device families e g ios ios xr nxos eos etc ansible version ansible configured module search path ansible python module location usr lib site packages ansible executable location bin ansible python version default may configuration tested without callback and strategy default callback whitelist ansible ansible cfg default strategy ansible ansible cfg free default timeout ansible ansible cfg persistent connect timeout ansible ansible cfg os environment rhel summary there s a significant difference in speed performance after upgrading to from result are unaffected by scale i ve tested most of our custom roles so far and i ve verified this is consistent in all of our environments also going with without the free strategy and callbacks we normally run does not affect performance processor and memory utilization is far lower consistent with the increase in run times simply put all network roles are running significantly slower in steps to reproduce facts ansible playbook facts yml l cisco ios forks passwords ansible playbook password yml l cisco ios forks expected results facts network facts collect output from ios device network facts set config lines fact debug network facts include cisco ios tasks network facts set config fact network facts set version fact network facts set model number network facts set management interface name fact playbook run took days hours minutes seconds passwords config localpw update line passwords network facts collect output from ios device config localpw update line passwords network facts include cisco ios tasks config localpw update line passwords config localpw update terminal server username doorbell config localpw debug config localpw update terminal server username doorbell config localpw set fact modem slot config localpw update enable and username config lines config localpw debug config localpw identify if 
it has a modem config localpw update enable and username config lines config localpw update terminal server username doorbell config localpw debug config localpw update terminal server username doorbell config localpw update terminal server username doorbell config localpw update enable and username config lines config localpw debug config localpw update line passwords playbook run took days hours minutes seconds actual results facts network facts collect output from ios device network facts include cisco ios tasks debug network facts set config fact network facts set management interface name fact network facts set model number network facts set version fact network facts set config lines fact playbook run took days hours minutes seconds passwords config localpw update line passwords config localpw update line passwords config localpw update terminal server username doorbell config localpw update enable and username config lines config localpw update terminal server username doorbell config localpw update line passwords config localpw update line passwords config localpw update line passwords config localpw update line passwords config localpw update terminal server username doorbell config localpw update line passwords config localpw update line passwords config localpw update line passwords config localpw identify if it has a modem config localpw update terminal server username doorbell config localpw update line passwords config localpw update line passwords config localpw identify if and where the modem is config localpw update enable and username config lines config localpw update line passwords playbook run took days hours minutes seconds
1
4,944
25,414,742,813
IssuesEvent
2022-11-22 22:31:36
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
closed
Create a GitHub action to add 'needs grooming' label to any ticket added to the repo
Maintain
TASK: action being added to GitHub The requirement is that every time a ticket gets added to the repo the label 'needs grooming' automatically gets added. **Context**: In the 'Backlog' column currently, there's no way of defining if the ticket has or has not been groomed. This would make tracking tickets that need to be groomed easier when looking at ZenHub. By automatically adding the label to any ticket that has been created we ensure that a manual action is required to mark the ticket as 'groomed' (by removing the label). The expectation is that a user (product owner, dev, designer, dm) would remove the label if the ticket has been groomed.
True
Creat a GitHub action to add 'needs grooming' label to any ticket added to the repo - TASK: action being added to GitHub The requirement is that every time a ticket gets added to the repo the label 'needs grooming' automatically gets added. **Context**: In the 'Backlog' column currently, there's no way of defining if the ticket has or has not been groomed. This would make tracking tickets that need to be groomed easier when looking at ZenHub. By automatically adding the label to any ticket that has been created we ensure that a manual action is required to mark the ticket as 'groomed' (by removing the label). The expectation is that a user (product owner, dev, designer, dm) would remove the label if the ticket has been groomed.
main
creat a github action to add needs grooming label to any ticket added to the repo task action being added to github the requirement is that every time a ticket gets added to the repo the label needs grooming automatically gets added context in the backlog column currently there s no way of defining if the ticket has or has not been groomed this would make tracking tickets that need to be groomed easier when looking at zenhub by automatically adding the label to any ticket that has been created we ensure that a manual action is required to mark the ticket as groomed by removing the label the expectation is that a user product owner dev designer dm would remove the label if the ticket has been groomed
1
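The behaviour requested in the mozilla/foundation.mozilla.org issue above (auto-apply a label to every newly opened issue) could be sketched as a workflow file. The label text comes from the issue; the workflow name, job name, and use of `gh issue edit` are a hypothetical sketch, not the repo's actual implementation.

```yaml
name: Label new issues
on:
  issues:
    types: [opened]
jobs:
  add-needs-grooming:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Add the 'needs grooming' label
        run: gh issue edit "$NUMBER" --repo "$GITHUB_REPOSITORY" --add-label "needs grooming"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NUMBER: ${{ github.event.issue.number }}
```

Removing the label stays a manual step, which is exactly what the issue wants: the label's absence then signals that a human has groomed the ticket.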
3,496
13,646,946,517
IssuesEvent
2020-09-26 00:52:19
amyjko/faculty
https://api.github.com/repos/amyjko/faculty
closed
Link bio to publication and award counts
maintainability
It's currently static, but can point to publication data.
True
Link bio to publication and award counts - It's currently static, but can point to publication data.
main
link bio to publication and award counts it s currently static but can point to publication data
1
5,291
26,736,833,251
IssuesEvent
2023-01-30 10:04:48
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Golang source sets (go_source) are not supported: source files are unsynced
type: feature request lang: go topic: sync awaiting-maintainer
Bazel golang rule supports [source sets](https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#go-source) and their further [embedding](https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#embedding) into binaries/tests (https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#embedding). We are using this extensively and unfortunately, this is not supported by Bazel Intellij plugin: all sources defined via `go_source` & `embed` attr are "unsynced". ## Setup ### .bazelproject ``` directories: . derive_targets_from_directories: false targets: //myapp:all additional_languages: go ``` ### Bazel build file ``` go_source( name = "src", srcs = glob( include = ["*.go"], exclude = ["*_test.go"], ), deps = [], ) go_binary( name = "app", embed = [":src"], ) ``` ## Expected behavior All source files matched by `go_source#srcs` are synced. ## Actual behavior `go_binary#embed` is ignored and all sources are unsynced. ## Workaround It seems that Bazel plugin doesn't understand `go_source` so all sources must be specified explicitly in `go_binary#srcs`. This syncs up correctly: ### Bazel build file ``` go_binary( name = "app", srcs = glob( include = ["*.go"], exclude = ["*_test.go"], ), ) ```
True
Golang source sets (go_source) are not supported: source files are unsynced - Bazel golang rule supports [source sets](https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#go-source) and their further [embedding](https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#embedding) into binaries/tests (https://github.com/bazelbuild/rules_go/blob/master/go/core.rst#embedding). We are using this extensively and unfortunately, this is not supported by Bazel Intellij plugin: all sources defined via `go_source` & `embed` attr are "unsynced". ## Setup ### .bazelproject ``` directories: . derive_targets_from_directories: false targets: //myapp:all additional_languages: go ``` ### Bazel build file ``` go_source( name = "src", srcs = glob( include = ["*.go"], exclude = ["*_test.go"], ), deps = [], ) go_binary( name = "app", embed = [":src"], ) ``` ## Expected behavior All source files matched by `go_source#srcs` are synced. ## Actual behavior `go_binary#embed` is ignored and all sources are unsynced. ## Walkaround It seems that Bazel plugin doesn't understand `go_source` so all sources must be specified explicitly in `go_binary#srcs`. This syncs up correctly: ### Bazel build file ``` go_binary( name = "app", srcs = glob( include = ["*.go"], exclude = ["*_test.go"], ), ) ```
main
golang source sets go source are not supported source files are unsynced bazel golang rule supports and their further into binaries tests we are using this extensively and unfortunately this is not supported by bazel intellij plugin all sources defined via go source embed attr are unsynced setup bazelproject directories derive targets from directories false targets myapp all additional languages go bazel build file go source name src srcs glob include exclude deps go binary name app embed expected behavior all source files matched by go source srcs are synced actual behavior go binary embed is ignored and all sources are unsynced walkaround it seems that bazel plugin doesn t understand go source so all sources must be specified explicitly in go binary srcs this syncs up correctly bazel build file go binary name app srcs glob include exclude
1
2,060
6,977,802,027
IssuesEvent
2017-12-12 15:40:36
OpenLightingProject/ola
https://api.github.com/repos/OpenLightingProject/ola
closed
Download share is outdated
Difficulty-Medium Maintainability OpSys-Linux Type-Task
Hey guys, It seems like the download share is very outdated: http://dl.openlighting.org/?C=M;O=A Can we get this checked out so that it's up to date again? Thanks
True
Download share is outdated - Hey guys, It seems like the download share is very outdated: http://dl.openlighting.org/?C=M;O=A Can we get this checked out so that its up to date again? Thanks
main
download share is outdated hey guys it seems like the download share is very outdated can we get this checked out so that its up to date again thanks
1
22,882
7,241,977,090
IssuesEvent
2018-02-14 04:46:15
caffe2/caffe2
https://api.github.com/repos/caffe2/caffe2
closed
undefined reference to cudaStreamCreate
build
Hi, I encounter the following error when the making process has proceeded up to 99 percent: CMakeFiles/core_overhead_benchmark.dir/core_overhead_benchmark.cc.o: In function `BM_cudaStreamWaitEventThenStreamSynchronize(benchmark::State&)': core_overhead_benchmark.cc:(.text+0x2e2): undefined reference to `cudaStreamCreate' Here is the build summary: -- ******** Summary ******** -- General: -- Git version : v0.8.1-667-gbd5bb22-dirty -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 5.4.0 -- Protobuf compiler : /usr/bin/protoc -- CXX flags : -fopenmp -std=c++11 -O2 -fPIC -Wno-narrowing -- Build type : Release -- Compile definitions : -- -- BUILD_BINARY : ON -- BUILD_PYTHON : ON -- Python version : 2.7.12 -- Python library : /usr/lib/x86_64-linux-gnu/libpython2.7.so -- BUILD_SHARED_LIBS : ON -- BUILD_TEST : ON -- USE_ATEN : OFF -- USE_ASAN : OFF -- USE_CUDA : ON -- CUDA version : 8.0 -- CuDNN version : 6.0.21 -- USE_EIGEN_FOR_BLAS : 1 -- USE_FFMPEG : OFF -- USE_GFLAGS : ON -- USE_GLOG : ON -- USE_GLOO : OFF -- USE_LEVELDB : ON -- LevelDB version : 1.18 -- Snappy version : 1.1.3 -- USE_LITE_PROTO : OFF -- USE_LMDB : ON -- LMDB version : 0.9.17 -- USE_METAL : OFF -- USE_MKL : -- USE_MOBILE_OPENGL : OFF -- USE_MPI : ON -- USE_NCCL : ON -- USE_NERVANA_GPU : OFF -- USE_NNPACK : ON -- USE_OBSERVERS : ON -- USE_OPENCV : ON -- OpenCV version : 3.2.0 -- USE_OPENMP : ON -- USE_REDIS : OFF -- USE_ROCKSDB : OFF -- USE_THREADS : ON -- USE_ZMQ : OFF I don't think anything is wrong with my cuda installation as I'm using it both in Caffe and other codes and it works without a problem. And also I had another caffe2 that I have git cloned around a month ago, this one also builds and passes the tests without a problem, so the new one that I have git cloned today seems to have an issue! I appreciate any help greatly.
1.0
undefined reference to cudaStreamCreate - Hi, I encounter the following error when the making process has proceeded up to 99 percent: CMakeFiles/core_overhead_benchmark.dir/core_overhead_benchmark.cc.o: In function `BM_cudaStreamWaitEventThenStreamSynchronize(benchmark::State&)': core_overhead_benchmark.cc:(.text+0x2e2): undefined reference to `cudaStreamCreate' Here is the build summary: -- ******** Summary ******** -- General: -- Git version : v0.8.1-667-gbd5bb22-dirty -- System : Linux -- C++ compiler : /usr/bin/c++ -- C++ compiler version : 5.4.0 -- Protobuf compiler : /usr/bin/protoc -- CXX flags : -fopenmp -std=c++11 -O2 -fPIC -Wno-narrowing -- Build type : Release -- Compile definitions : -- -- BUILD_BINARY : ON -- BUILD_PYTHON : ON -- Python version : 2.7.12 -- Python library : /usr/lib/x86_64-linux-gnu/libpython2.7.so -- BUILD_SHARED_LIBS : ON -- BUILD_TEST : ON -- USE_ATEN : OFF -- USE_ASAN : OFF -- USE_CUDA : ON -- CUDA version : 8.0 -- CuDNN version : 6.0.21 -- USE_EIGEN_FOR_BLAS : 1 -- USE_FFMPEG : OFF -- USE_GFLAGS : ON -- USE_GLOG : ON -- USE_GLOO : OFF -- USE_LEVELDB : ON -- LevelDB version : 1.18 -- Snappy version : 1.1.3 -- USE_LITE_PROTO : OFF -- USE_LMDB : ON -- LMDB version : 0.9.17 -- USE_METAL : OFF -- USE_MKL : -- USE_MOBILE_OPENGL : OFF -- USE_MPI : ON -- USE_NCCL : ON -- USE_NERVANA_GPU : OFF -- USE_NNPACK : ON -- USE_OBSERVERS : ON -- USE_OPENCV : ON -- OpenCV version : 3.2.0 -- USE_OPENMP : ON -- USE_REDIS : OFF -- USE_ROCKSDB : OFF -- USE_THREADS : ON -- USE_ZMQ : OFF I don't think anything is wrong with my cuda installation as I'm using it both in Caffe and other codes and it works without a problem. And also I had another caffe2 that I have git cloned around a month ago, this one also builds and passes the tests without a problem, so the new one that I have git cloned today seems to have an issue! I appreciate any help greatly.
non_main
0
4,946
25,455,551,843
IssuesEvent
2022-11-24 13:55:24
pace/bricks
https://api.github.com/repos/pace/bricks
closed
Upgrade go-pg dependency
T::Maintainance
### Problem

We are currently using `github.com/go-pg/pg v6.14.5`, which might be outdated. As far as I can tell, the only impact this has on us is a performance one. When using the `Exists()` method on a query (e.g. `db.Model(&m).Where(...).Exists()`), go-pg [performs a regular select and checks whether any rows were returned](https://github.com/go-pg/pg/blob/v6.14.5/orm/query.go#L1054). This is far from efficient.

### Suggested solution

Upgrade to a newer version, like v8.0.4, where [this seems to be fixed](https://github.com/go-pg/pg/blob/v8.0.4/orm/query.go#L1130). [Changelog of v8.0.4](https://github.com/go-pg/pg/blob/v8.0.4/CHANGELOG.md).

Upgrading needs adjustments in our code:

> DB.OnQueryProcessed is replaced with DB.AddQueryHook

The format of the hook also changes. If the impact of this upgrade is too huge, we can live with or work around the problem mentioned. But we probably have to upgrade eventually.
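The performance gap described above is easy to demonstrate outside of Go. The sketch below uses Python's built-in sqlite3 purely as a stand-in database (go-pg itself is a Go library, so this only illustrates the two query shapes, not go-pg's API): fetching every matching row and counting them versus letting the database answer `SELECT EXISTS(...)` and stop at the first match.

```python
import sqlite3

# In-memory table with a few rows to query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO m (name) VALUES (?)", [("a",), ("b",), ("c",)])

# What go-pg v6 effectively does for Exists(): run the full SELECT,
# materialize all matching rows, and check how many came back.
rows = conn.execute("SELECT * FROM m WHERE name = ?", ("b",)).fetchall()
exists_slow = len(rows) > 0

# The cheaper shape: the database can stop scanning at the first match.
exists_fast = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM m WHERE name = ?)", ("b",)
).fetchone()[0] == 1

print(exists_slow, exists_fast)  # both True; the second touches at most one row
```

Both answers agree, but on a large table the first form pays for transferring and materializing every matching row before discarding them.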
True
main
1
2,664
9,107,472,600
IssuesEvent
2019-02-21 04:35:57
prkumar/uplink
https://api.github.com/repos/prkumar/uplink
closed
Add a `retry` decorator
Feature Request Needs Maintainer Input
Here's the original use case from @liiight on [Gitter](https://gitter.im/python-uplink/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge): > I'm implementing retries in my code, which is done using a package called retry. basically its a super simple decorator, just catch exception, and call the decoratored func until conditions apply, nothing fancy > when using with uplink, I'm implementing it in the layer above uplink, i.e, the usage of it. i was wondering if it's possible to do it via uplink itself > i.e, catch an error and retry the called method
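The `retry` package described in the quote works along these lines; a minimal sketch of such a decorator (names and defaults are illustrative, not uplink's or retry's actual API): catch a given exception, call the decorated function again until the attempts run out, then re-raise.

```python
import functools
import time

def retry(exceptions, tries=3, delay=0.0):
    """Call the wrapped function until it succeeds or `tries` attempts are used."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            remaining = tries
            while True:
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    remaining -= 1
                    if remaining <= 0:
                        raise  # out of attempts: surface the last error
                    time.sleep(delay)
        return wrapper
    return decorator

# A flaky function that fails twice before succeeding.
calls = {"n": 0}

@retry(ValueError, tries=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("not yet")
    return "ok"

print(flaky())  # succeeds on the third attempt
```

Doing the same inside uplink would mean applying this kind of wrapper to the consumer's request methods rather than at the call site.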
True
main
1
2,510
8,655,459,903
IssuesEvent
2018-11-27 16:00:31
codestation/qcma
https://api.github.com/repos/codestation/qcma
closed
QCMA disconnects in middle of file transfer
unmaintained
Ended event, code: 0xc105, id: 161 This is the last event response.
True
main
1
815
4,441,581,899
IssuesEvent
2016-08-19 09:52:12
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Unarchive Error, No such file or directory
bug_report waiting_on_maintainer
##### ISSUE TYPE

- Bug Report

##### COMPONENT NAME

unarchive

##### ANSIBLE VERSION

```
ansible 2.2.0 (devel 3c65c03a67) last updated 2016/08/15 16:01:24 (GMT +1000)
  lib/ansible/modules/core: (detached HEAD decb2ec9fa) last updated 2016/08/15 16:01:29 (GMT +1000)
  lib/ansible/modules/extras: (detached HEAD 61d5fe148c) last updated 2016/08/15 16:01:29 (GMT +1000)
  config file = /home/linus/Documents/ansible-playbooks/ansible.cfg
  configured module search path = Default w/o overrides
```

##### OS / ENVIRONMENT

Ubuntu management node, CentOS 7 managed node

##### SUMMARY

Gives a "No such file or directory" error when `extra_opts: "--strip-components=2"` is added.

##### STEPS TO REPRODUCE

```
- name: unpack the artifacts
  unarchive:
    src: /usr/share/stuff.tar.gz
    dest: /usr/share/
    extra_opts: "--strip-components=2"
    owner: nginx
    group: nginx
    copy: no
```

##### EXPECTED RESULTS

Changed

##### ACTUAL RESULTS

The archive file structure is `dist/production/files`; I want to strip the first two directories.

```
fatal: [52.65.150.148]: FAILED! => {"changed": true, "dest": "/usr/share/", "extract_results": {"cmd": "/bin/gtar -C \"/usr/share/\" -xz --strip-components=2 --owner=\"nginx\" --group=\"nginx\" -f \"/usr/share/stuff.tar.gz\"", "err": "", "out": "", "rc": 0}, "failed": true, "gid": 992, "group": "nginx", "handler": "TgzArchive", "mode": "02775", "msg": "Unexpected error when accessing exploded file: [Errno 2] No such file or directory: '/usr/share/stuff/dist/production/'", "owner": "bitbucket", "size": 4096, "src": "/usr/share/stuff.tar.gz", "state": "directory", "uid": 1003}
```
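The error is consistent with what GNU tar's `--strip-components=2` does to each archive member path: the first two path components vanish on extraction, so the `dist/production/` directory the module later looks for never exists on disk. A small sketch of that per-member behavior (the helper name and sample paths are illustrative, not taken from the module):

```python
def strip_components(member_path, n):
    """Mimic GNU tar's --strip-components=N on a single archive member path."""
    parts = member_path.split("/")
    if len(parts) <= n:
        return None  # member has too few components and is dropped, as tar does
    return "/".join(parts[n:])

# The archive in the report contains paths like dist/production/files/...
print(strip_components("dist/production/files/app.js", 2))  # -> files/app.js
print(strip_components("dist/production", 2))               # -> None (dropped)
```

So extraction itself succeeds (`rc: 0` in the output above), but the post-extraction check against the unstripped path fails.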
True
main
1
4,278
21,523,726,758
IssuesEvent
2022-04-28 16:18:48
mozilla/foundation.mozilla.org
https://api.github.com/repos/mozilla/foundation.mozilla.org
reopened
SEO | Create a sitemap index file
engineering Maintain
# Description

Create a sitemap index file, which should be findable at https://foundation.mozilla.org/sitemap.xml. This index would link to the sitemaps for all of the languages that the website supports.

# Acceptance criteria

- [x] /sitemap.xml should be accessible on staging, production, and review apps
- [x] sitemap should include all 10 languages included on foundation site translations

# Dev tasks

- [x] Generate Sitemap using wagtail [sitemap generator](https://docs.wagtail.org/en/stable/reference/contrib/sitemaps.html)
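For reference, the wiring the linked wagtail docs describe is roughly the following URLconf sketch — the module layout is an assumption, not taken from the foundation repo, and the per-language index would build on top of this single view:

```python
# urls.py (sketch) -- exposes /sitemap.xml via wagtail's sitemap generator.
# Assumes wagtail.contrib.sitemaps is enabled; adjust to the project's URLconf.
from django.urls import path
from wagtail.contrib.sitemaps.views import sitemap

urlpatterns = [
    path("sitemap.xml", sitemap),
    # ... the rest of the project's routes
]
```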
True
main
1
18,450
3,062,239,861
IssuesEvent
2015-08-16 11:33:41
dkpro/dkpro-jwktl
https://api.github.com/repos/dkpro/dkpro-jwktl
closed
Upgrade to Java 8
defect
Originally reported on Google Code with ID 16

```
This is essentially possible by upgrading to the newest parent POM.
```

Reported by `chmeyer.de` on 2015-04-22 13:07:53
1.0
non_main
0
131,762
12,489,821,444
IssuesEvent
2020-05-31 20:41:41
edwardtheharris/machines-wat-learn-good
https://api.github.com/repos/edwardtheharris/machines-wat-learn-good
closed
Regression - Intro and Data
documentation
Welcome to the introduction to the regression section of the Machine Learning with Python tutorial series. By this point, you should have Scikit-Learn already installed. If not, get it, along with Pandas and matplotlib! If you have a pre-compiled scientific distribution of Python like ActivePython from our sponsor, you should already have numpy, scipy, scikit-learn, matplotlib, and pandas installed. If not, do:

```bash
pip install numpy scipy scikit-learn matplotlib pandas
```

Along with those tutorial-wide imports, we're also going to be making use of Quandl here, which you may need to separately install, with:

```bash
pip install quandl
```

I will note again in the first part of the code, but the Quandl module used to be imported with an upper-case Q; it is now imported with a lower-case q. In the video and sample codes, it is upper-cased.

To begin, what is regression in terms of us using it with machine learning? The goal is to take continuous data, find the equation that best fits the data, and be able to forecast out a specific value. With simple linear regression, you are simply doing this by creating a best-fit line:

![linear regression machine learning tutorial](https://pythonprogramming.net/static/images/machine-learning/linear-regression-algorithm-tutorial-test.png)

From here, we can use the equation of that line to forecast out into the future, where the 'date' is the x-axis, what the price will be.

A popular use with regression is to predict stock prices. This is done because we are considering the fluidity of price over time, and attempting to forecast the next fluid price in the future using a continuous dataset.

Regression is a form of supervised machine learning, which is where the scientist teaches the machine by presenting features and then presenting the correct answer, over and over, to teach the machine. Once the machine is taught, the scientist will usually "test" the machine on some unseen data, where the scientist still knows the correct answer but the machine doesn't. The machine's answers are compared to the known answers, and the machine's accuracy can be measured. If the accuracy is high enough, the scientist may consider actually employing the algorithm in the real world.

Since regression is so popularly used with stock prices, we can start there with an example. To begin, we need data. Sometimes the data is easy to acquire, and sometimes you have to go out and scrape it together, like what we did in an older tutorial series using machine learning with stock fundamentals for investing. In our case, we're able to at least start with simple stock price and volume information from Quandl. To begin, we'll start with data that grabs the stock price for Alphabet (previously Google), with the ticker of GOOGL:

```python
import pandas as pd
import quandl  # Note: when filmed, the module was imported with an upper-case Q; it is now a lower-case q

df = quandl.get("WIKI/GOOGL")
print(df.head())
```

At this point, we have:

| Date       | Open   | High   | Low    | Close  | Volume   | Ex-Dividend |
|------------|--------|--------|--------|--------|----------|-------------|
| 2004-08-19 | 100.00 | 104.06 | 95.96  | 100.34 | 44659000 | 0           |
| 2004-08-20 | 101.01 | 109.08 | 100.50 | 108.31 | 22834300 | 0           |
| 2004-08-23 | 110.75 | 113.48 | 109.05 | 109.40 | 18256100 | 0           |
| 2004-08-24 | 111.24 | 111.60 | 103.57 | 104.87 | 15247300 | 0           |
| 2004-08-25 | 104.96 | 108.00 | 103.88 | 106.00 | 9188600  | 0           |

| Date       | Split Ratio | Adj. Open | Adj. High | Adj. Low | Adj. Close |
|------------|-------------|-----------|-----------|----------|------------|
| 2004-08-19 | 1           | 50.000    | 52.03     | 47.980   | 50.170     |
| 2004-08-20 | 1           | 50.505    | 54.54     | 50.250   | 54.155     |
| 2004-08-23 | 1           | 55.375    | 56.74     | 54.525   | 54.700     |
| 2004-08-24 | 1           | 55.620    | 55.80     | 51.785   | 52.435     |
| 2004-08-25 | 1           | 52.480    | 54.00     | 51.940   | 53.000     |

| Date       | Adj. Volume |
|------------|-------------|
| 2004-08-19 | 44659000    |
| 2004-08-20 | 22834300    |
| 2004-08-23 | 18256100    |
| 2004-08-24 | 15247300    |
| 2004-08-25 | 9188600     |

Awesome, off to a good start, we have the data, but maybe a bit much. To reference the intro, there exists an entire machine learning category that aims to reduce the amount of input that we process. In our case, we have quite a few columns, many are redundant, and a couple don't really change. We can most likely agree that having both the regular columns and adjusted columns is redundant. Adjusted columns are the most ideal ones. Regular columns here are prices on the day, but stocks have things called stock splits, where suddenly 1 share becomes something like 2 shares, thus the value of a share is halved, but the value of the company has not halved. Adjusted columns are adjusted for stock splits over time, which makes them more reliable for doing analysis. Thus, let's go ahead and pare down our original dataframe a bit:

```python
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close', 'Adj. Volume']]
```

Now we just have the adjusted columns, and the volume column.

A couple major points to make here. Many people talk about or hear about machine learning as if it is some sort of dark art that somehow generates value from nothing. Machine learning can highlight value if it is there, but it has to actually be there. You need meaningful data. So how do you know if you have meaningful data? My best suggestion is to simply use your brain. Think about it. Are historical prices indicative of future prices? Some people think so, but this has been continually disproven over time. What about historical patterns? This has a bit more merit when taken to the extremes (which machine learning can help with), but is overall fairly weak. What about the relationship between price changes and volume over time, along with historical patterns? Probably a bit better. So, as you can already see, it is not the case that the more data the merrier, but we instead want to use useful data.

At the same time, raw data sometimes should be transformed. Consider daily volatility, such as the high minus low percent change. How about daily percent change? Would you consider data that is simply the Open, High, Low, Close, or data that is the Close, Spread/Volatility, and daily %change to be better? I would expect the latter to be more ideal. The former is all very similar data points. The latter is created based on identical data from the former, but it brings far more valuable information to the table. Thus, not all of the data you have is useful, and sometimes you need to do further manipulation on your data to make it even more valuable before feeding it through a machine learning algorithm. Let's go ahead and transform our data next:

```python
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
```

I went ahead and recorded the video version of this, not realizing my mistake: that formula is high minus low divided by the close. I meant to do High - Low, divided by the low. Feel free to fix that if you like. This creates a new column that is the % spread based on the closing price, which is our crude measure of volatility.

Next, we'll do daily percent change:

```python
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0
```

Now we will define a new dataframe as:

```python
df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df.head())
```
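The two derived features above can be sanity-checked end-to-end on a tiny synthetic frame — the numbers below are borrowed from the sample rows printed earlier, so no Quandl API key is needed:

```python
import pandas as pd

# Small synthetic stand-in for the Quandl frame, using the adjusted
# columns from the first two sample rows above.
df = pd.DataFrame({
    "Adj. Open":   [50.000, 50.505],
    "Adj. High":   [52.03, 54.54],
    "Adj. Low":    [47.980, 50.250],
    "Adj. Close":  [50.170, 54.155],
    "Adj. Volume": [44659000, 22834300],
})

# Crude volatility: high-low spread as a percentage of the close.
df["HL_PCT"] = (df["Adj. High"] - df["Adj. Low"]) / df["Adj. Close"] * 100.0

# Daily percent change from open to close.
df["PCT_change"] = (df["Adj. Close"] - df["Adj. Open"]) / df["Adj. Open"] * 100.0

# Keep only the features we intend to feed to the learner.
df = df[["Adj. Close", "HL_PCT", "PCT_change", "Adj. Volume"]]
print(df.head())
```

The resulting frame has the same four-column shape the tutorial ends with, just on two rows of data.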
1.0
non_main
Regression - Intro and Data

Welcome to the introduction to the regression section of the Machine Learning with Python tutorial series. By this point, you should have Scikit-learn already installed. If not, get it, along with Pandas and Matplotlib!

If you have a pre-compiled scientific distribution of Python like ActivePython from our sponsor, you should already have numpy, scipy, scikit-learn, matplotlib, and pandas installed. If not, do:

```bash
pip install numpy scipy scikit-learn matplotlib pandas
```

Along with those tutorial-wide imports, we're also going to be making use of Quandl here, which you may need to separately install with:

```bash
pip install quandl
```

I will note it again in the first part of the code, but the Quandl module used to be imported with an upper-case Q, and is now imported with a lower-case q. In the video and sample codes it is upper-cased.

To begin, what is regression in terms of us using it with machine learning? The goal is to take continuous data, find the equation that best fits the data, and be able to forecast out a specific value. With simple linear regression, you are just simply doing this by creating a best fit line. From here, we can use the equation of that line to forecast out into the future, where the date is the x-axis, what the price will be.

A popular use with regression is to predict stock prices. This is done because we are considering the fluidity of price over time, and attempting to forecast the next fluid price in the future using a continuous dataset.

Regression is a form of supervised machine learning, which is where the scientist teaches the machine by presenting features and then presenting the correct answer, over and over, to teach the machine. Once the machine is taught, the scientist will usually "test" the machine on some unseen data, where the scientist still knows what the correct answer is, but the machine doesn't. The machine's answers are compared to the known answers, and the machine's accuracy can be measured. If the accuracy is high enough, the scientist may consider actually employing the algorithm in the real world.

Since regression is so popularly used with stock prices, we can start there with an example. To begin, we need data. Sometimes the data is easy to acquire, and sometimes you have to go out and scrape it together, like what we did in an older tutorial series using machine learning with stock fundamentals for investing. In our case, we're able to at least start with simple stock price and volume information from Quandl. To begin, we'll start with data that grabs the stock price for Alphabet (previously Google), with the ticker of GOOGL:

```python
import pandas as pd
import quandl

df = quandl.get("WIKI/GOOGL")
print(df.head())
```

Note: when filmed, Quandl's module was referenced with an upper-case Q; now it is a lower-case q, so `import quandl`.

At this point, we have columns (indexed by Date) for: Open, High, Low, Close, Volume, Ex-Dividend, Split Ratio, Adj. Open, Adj. High, Adj. Low, Adj. Close, and Adj. Volume.

Awesome, off to a good start: we have the data, but maybe a bit much. To reference the intro, there exists an entire machine learning category that aims to reduce the amount of input that we process. In our case, we have quite a few columns; many are redundant, and a couple don't really change. We can most likely agree that having both the regular columns and adjusted columns is redundant. Adjusted columns are the most ideal ones. Regular columns here are prices on the day, but stocks have things called stock splits, where suddenly 1 share becomes something like 2 shares, thus the value of a share is halved, but the value of the company has not halved. Adjusted columns are adjusted for stock splits over time, which makes them more reliable for doing analysis.

Thus, let's go ahead and pare down our original dataframe a bit:

```python
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close', 'Adj. Volume']]
```

Now we just have the adjusted columns and the volume column.

A couple of major points to make here. Many people talk about, or hear about, machine learning as if it is some sort of dark art that somehow generates value from nothing. Machine learning can highlight value if it is there, but it has to actually be there. You need meaningful data. So how do you know if you have meaningful data? My best suggestion is to just simply use your brain. Think about it. Are historical prices indicative of future prices? Some people think so, but this has been continually disproven over time. What about historical patterns? This has a bit more merit when taken to the extremes (which machine learning can help with), but is overall fairly weak. What about the relationship between price changes and volume over time, along with historical patterns? Probably a bit better. So, as you can already see, it is not the case that the more data the merrier; we instead want to use useful data.

At the same time, raw data sometimes should be transformed. Consider daily volatility, such as with the high minus low change. How about daily percent change? Would you consider data that is simply the Open, High, Low, Close, or data that is the Close, Spread/Volatility, and %change daily, to be better? I would expect the latter to be more ideal. The former is all very similar data points; the latter is created based on identical data from the former, but it brings far more valuable information to the table. Thus, not all of the data you have is useful, and sometimes you need to do further manipulation on your data to make it even more valuable before feeding it through a machine learning algorithm. Let's go ahead and transform our data next:

```python
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
```

(I went ahead and recorded the video version of this not realizing my mistake: it was high minus low, divided by close. I meant to do (high - low) / low. Feel free to fix that if you like.) This creates a new column that is the spread based on the closing price, which is our crude measure of volatility. Next, we'll do daily percent change:

```python
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0
```

Now we will define a new dataframe:

```python
df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df.head())
```
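The best-fit-line idea described above can be sketched with plain scikit-learn on synthetic data. This is a hedged illustration only: the fake "price" series, the variable names, and the forecast point are assumptions for the sketch, not part of the tutorial's own code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic "price over time": a noisy upward trend with known slope 3.
rng = np.random.default_rng(0)
X = np.arange(100).reshape(-1, 1)                  # days on the x-axis
y = 3.0 * X.ravel() + 50.0 + rng.normal(0, 5, 100)  # price-like values

# Fit the best-fit line y = m*x + b, then forecast a value past the data.
model = LinearRegression().fit(X, y)
future = model.predict(np.array([[110]]))
print(model.coef_[0], model.intercept_, future[0])
```

With only one feature this is exactly the "equation of a line" case the intro describes; later parts of the series swap in the engineered stock features as X.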
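The spread and percent-change transformations described above can be checked on a tiny hand-made DataFrame. The numeric values here are hypothetical stand-ins, not Quandl data; only the column arithmetic mirrors the tutorial.

```python
import pandas as pd

# Hypothetical adjusted OHLC rows standing in for the Quandl GOOGL feed.
df = pd.DataFrame({
    'Adj. Open':   [100.0, 102.0],
    'Adj. High':   [105.0, 106.0],
    'Adj. Low':    [99.0, 101.0],
    'Adj. Close':  [104.0, 103.0],
    'Adj. Volume': [1_000_000, 1_200_000],
})

# Crude volatility: high-low spread as a percent of the close
# (the tutorial's as-recorded version, dividing by close).
df['HL_PCT'] = (df['Adj. High'] - df['Adj. Low']) / df['Adj. Close'] * 100.0
# Daily percent change from open to close.
df['PCT_change'] = (df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open'] * 100.0

# Keep only the engineered feature set.
df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df)
```

On the first row this gives a spread of (105 - 99) / 104 * 100 and a daily change of 4%, which is the kind of "useful data" the text argues for over raw OHLC columns.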
0
455,439
13,126,831,010
IssuesEvent
2020-08-06 09:15:51
The-Codin-Hole/HotWired-Bot
https://api.github.com/repos/The-Codin-Hole/HotWired-Bot
closed
Cant Launch Bot From start.py, 'module' object is not callable
priority: 1 - high type: bug
[`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File "G:\New Downloads\HotWired-Bot\start.py", line 5, in <module> main() TypeError: 'module' object is not callable ```
1.0
Cant Launch Bot From start.py, 'module' object is not callable - [`start.py`](https://github.com/The-Codin-Hole/HotWired-Bot/blob/f2be6e60742bcb387e80fe40131fc8d0a5f8216a/start.py) file doesn't start the bot properly and raises an exception: ``` Traceback (most recent call last): File "G:\New Downloads\HotWired-Bot\start.py", line 5, in <module> main() TypeError: 'module' object is not callable ```
non_main
cant launch bot from start py module object is not callable file doesn t start the bot properly and raises an exception traceback most recent call last file g new downloads hotwired bot start py line in main typeerror module object is not callable
0
3,093
11,741,740,334
IssuesEvent
2020-03-11 22:32:02
alacritty/alacritty
https://api.github.com/repos/alacritty/alacritty
closed
Failed to open input method
A - deps B - bug B - crash C - waiting on maintainer DS - X11 S - winit/glutin
Which operating system does the issue occur on? Ubuntu 18.04 If on linux, are you using X11 or Wayland? X11 the error: ``` RUST_BACKTRACE=1 alacritty thread 'main' panicked at 'Failed to open input method: PotentialInputMethods { xmodifiers: None, fallbacks: [ PotentialInputMethod { name: "@im=local", successful: Some( false ) }, PotentialInputMethod { name: "@im=", successful: Some( false ) } ], _xim_servers: Err( GetPropertyError( TypeMismatch( 0 ) ) ) }', /home/skariel/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.15.1/src/platform/linux/x11/mod.rs:90:17 stack backtrace: 0: <unknown> 1: <unknown> 2: <unknown> 3: <unknown> 4: <unknown> 5: <unknown> 6: <unknown> 7: <unknown> 8: <unknown> 9: __libc_start_main 10: <unknown> ```
True
Failed to open input method - Which operating system does the issue occur on? Ubuntu 18.04 If on linux, are you using X11 or Wayland? X11 the error: ``` RUST_BACKTRACE=1 alacritty thread 'main' panicked at 'Failed to open input method: PotentialInputMethods { xmodifiers: None, fallbacks: [ PotentialInputMethod { name: "@im=local", successful: Some( false ) }, PotentialInputMethod { name: "@im=", successful: Some( false ) } ], _xim_servers: Err( GetPropertyError( TypeMismatch( 0 ) ) ) }', /home/skariel/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.15.1/src/platform/linux/x11/mod.rs:90:17 stack backtrace: 0: <unknown> 1: <unknown> 2: <unknown> 3: <unknown> 4: <unknown> 5: <unknown> 6: <unknown> 7: <unknown> 8: <unknown> 9: __libc_start_main 10: <unknown> ```
main
failed to open input method which operating system does the issue occur on ubuntu if on linux are you using or wayland the error rust backtrace alacritty thread main panicked at failed to open input method potentialinputmethods xmodifiers none fallbacks potentialinputmethod name im local successful some false potentialinputmethod name im successful some false xim servers err getpropertyerror typemismatch home skariel cargo registry src github com winit src platform linux mod rs stack backtrace libc start main
1
822,043
30,849,806,311
IssuesEvent
2023-08-02 15:55:18
DDMAL/CantusDB
https://api.github.com/repos/DDMAL/CantusDB
closed
We should remove the extra line in each row of `searchms` results
priority: low cosmetic/accessibility
Poking around OldCantus for CU-related things, I just found a feature I had never seen before but that is not implemented in NewCantus. If this is a known difference, close the issue.... The results on a `searchms` page have a details button that accordions out to include some chant information. OldCantus: https://cantus.uwaterloo.ca/searchms/123723?name=&genre=All&mode=&feast=&cid=&volpiano=All&combine_op=starts&t=dominus+defensor&field_indexing_notes_value_op=contains&field_indexing_notes_value= ![image](https://github.com/DDMAL/CantusDB/assets/11023634/cbe4d6b7-49eb-4767-96d4-a40c2e8a8130) NewCantus: http://206.12.88.113/searchms/123723?op=starts_with&keyword=dominus+defensor&office=&genre=&cantus_id=&mode=&feast=&position=&melodies= ![image](https://github.com/DDMAL/CantusDB/assets/11023634/a115ca72-5039-41df-ace8-110c734a2561)
1.0
We should remove the extra line in each row of `searchms` results - Poking around OldCantus for CU-related things, I just found a feature I had never seen before but that is not implemented in NewCantus. If this is a known difference, close the issue.... The results on a `searchms` page have a details button that accordions out to include some chant information. OldCantus: https://cantus.uwaterloo.ca/searchms/123723?name=&genre=All&mode=&feast=&cid=&volpiano=All&combine_op=starts&t=dominus+defensor&field_indexing_notes_value_op=contains&field_indexing_notes_value= ![image](https://github.com/DDMAL/CantusDB/assets/11023634/cbe4d6b7-49eb-4767-96d4-a40c2e8a8130) NewCantus: http://206.12.88.113/searchms/123723?op=starts_with&keyword=dominus+defensor&office=&genre=&cantus_id=&mode=&feast=&position=&melodies= ![image](https://github.com/DDMAL/CantusDB/assets/11023634/a115ca72-5039-41df-ace8-110c734a2561)
non_main
we should remove the extra line in each row of searchms results poking around oldcantus for cu related things i just found a feature i had never seen before but that is not implemented in newcantus if this is a known difference close the issue the results on a searchms page have a details button that accordions out to include some chant information oldcantus newcantus
0
630,273
20,103,632,714
IssuesEvent
2022-02-07 08:17:26
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.reddit.com - site is not usable
priority-critical browser-focus-geckoview engine-gecko
<!-- @browser: Firefox Mobile 96.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:96.0) Gecko/96.0 Firefox/96.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/99257 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.reddit.com/r/AskReddit/comments/slvcrb/which_famous_saying_isnt_really_true_in_your/ **Browser / Version**: Firefox Mobile 96.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Missing items **Steps to Reproduce**: Reddit comments do not load in mobile view <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/c08cc2df-0f1d-4392-97be-bff2b8b11e37.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220126154723</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2022/2/202f8564-3347-44cf-aa5a-15977e7b57eb) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.reddit.com - site is not usable - <!-- @browser: Firefox Mobile 96.0 --> <!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:96.0) Gecko/96.0 Firefox/96.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/99257 --> <!-- @extra_labels: browser-focus-geckoview --> **URL**: https://www.reddit.com/r/AskReddit/comments/slvcrb/which_famous_saying_isnt_really_true_in_your/ **Browser / Version**: Firefox Mobile 96.0 **Operating System**: Android 11 **Tested Another Browser**: Yes Chrome **Problem type**: Site is not usable **Description**: Missing items **Steps to Reproduce**: Reddit comments do not load in mobile view <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/2/c08cc2df-0f1d-4392-97be-bff2b8b11e37.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220126154723</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2022/2/202f8564-3347-44cf-aa5a-15977e7b57eb) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_main
site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description missing items steps to reproduce reddit comments do not load in mobile view view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
740
4,348,690,121
IssuesEvent
2016-07-30 03:05:43
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Using backup and with_items can loose the original file as you get the backup of the next to last item
bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ini_file module ##### ANSIBLE VERSION ansible 2.1.0 ##### CONFIGURATION ##### OS / ENVIRONMENT N/A (Linux, Debian Stable, ansible from pip) ##### SUMMARY When modifying a single file with multiple changes using ini_file and with_items, together with the `backup=yes` option, the (possibly created) backup is overwritten by subsequent loop-runs. I had a task that modified php.ini in various sections, after a run I noticed there was 1 backup file besides the original file (and a backup I created manually before-hand, just to test). The created backup file already had the 1st change, and the new php.ini had both changes. My guess is that since both tasks ran in the same second, the backup-file name was the same on both runs, and therefore the original backup was overwritten. A single backup-file would be preferred... but I'm guessing every single with_items call will make a new backup file (possibly overwriting the same file every time). ##### STEPS TO REPRODUCE ``` - name: Configure php.ini settings ini_file: dest=/etc/php.ini owner=root group=root mode=0644 backup=yes section={{item.section}} option={{item.option}} value={{item.value}} with_items: - { section: "Date", option: "date.timezone", value: "{{timezone_name}}" } - { section: "Session", option: "session.gc_maxlifetime", value: "{{php_session_gc_maxlifetime|default(1440)}}" } ``` ##### EXPECTED RESULTS A single backup file, which is unchanged from the original. ##### ACTUAL RESULTS ``` -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini.2016-07-13@11:04:08~ -rw-r--r-- 1 root root 69097 Jul 13 11:03 php.ini.my-own-backup ```
True
Using backup and with_items can loose the original file as you get the backup of the next to last item - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ini_file module ##### ANSIBLE VERSION ansible 2.1.0 ##### CONFIGURATION ##### OS / ENVIRONMENT N/A (Linux, Debian Stable, ansible from pip) ##### SUMMARY When modifying a single file with multiple changes using ini_file and with_items, together with the `backup=yes` option, the (possibly created) backup is overwritten by subsequent loop-runs. I had a task that modified php.ini in various sections, after a run I noticed there was 1 backup file besides the original file (and a backup I created manually before-hand, just to test). The created backup file already had the 1st change, and the new php.ini had both changes. My guess is that since both tasks ran in the same second, the backup-file name was the same on both runs, and therefore the original backup was overwritten. A single backup-file would be preferred... but I'm guessing every single with_items call will make a new backup file (possibly overwriting the same file every time). ##### STEPS TO REPRODUCE ``` - name: Configure php.ini settings ini_file: dest=/etc/php.ini owner=root group=root mode=0644 backup=yes section={{item.section}} option={{item.option}} value={{item.value}} with_items: - { section: "Date", option: "date.timezone", value: "{{timezone_name}}" } - { section: "Session", option: "session.gc_maxlifetime", value: "{{php_session_gc_maxlifetime|default(1440)}}" } ``` ##### EXPECTED RESULTS A single backup file, which is unchanged from the original. ##### ACTUAL RESULTS ``` -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini -rw-r--r-- 1 root root 69113 Jul 13 11:04 php.ini.2016-07-13@11:04:08~ -rw-r--r-- 1 root root 69097 Jul 13 11:03 php.ini.my-own-backup ```
main
using backup and with items can loose the original file as you get the backup of the next to last item issue type bug report component name ini file module ansible version ansible configuration os environment n a linux debian stable ansible from pip summary when modifying a single file with multiple changes using ini file and with items together with the backup yes option the possibly created backup is overwritten by subsequent loop runs i had a task that modified php ini in various sections after a run i noticed there was backup file besides the original file and a backup i created manually before hand just to test the created backup file already had the change and the new php ini had both changes my guess is that since both tasks ran in the same second the backup file name was the same on both runs and therefore the original backup was overwritten a single backup file would be preferred but i m guessing every single with items call will make a new backup file possibly overwriting the same file every time steps to reproduce name configure php ini settings ini file dest etc php ini owner root group root mode backup yes section item section option item option value item value with items section date option date timezone value timezone name section session option session gc maxlifetime value php session gc maxlifetime default expected results a single backup file which is unchanged from the original actual results rw r r root root jul php ini rw r r root root jul php ini rw r r root root jul php ini my own backup
1
116,042
11,898,900,349
IssuesEvent
2020-03-30 08:04:53
germanrcuriel/jira-cmd
https://api.github.com/repos/germanrcuriel/jira-cmd
closed
Document usability of api token in authentication instead of password
documentation
Document usability of api token in authentication instead of password
1.0
Document usability of api token in authentication instead of password - Document usability of api token in authentication instead of password
non_main
document usability of api token in authentication instead of password document usability of api token in authentication instead of password
0
2,037
7,191,106,336
IssuesEvent
2018-02-02 19:42:45
taps-api/drafts
https://api.github.com/repos/taps-api/drafts
opened
Fill out Architectural Security Considerations
Architecture
We should make more comments on the role of security in the architecture, and decide what to write for the security considerations.
1.0
Fill out Architectural Security Considerations - We should make more comments on the role of security in the architecture, and decide what to write for the security considerations.
non_main
fill out architectural security considerations we should make more comments on the role of security in the architecture and decide what to write for the security considerations
0
17,040
23,507,138,953
IssuesEvent
2022-08-18 13:33:02
Strrationalism/CPyMO
https://api.github.com/repos/Strrationalism/CPyMO
closed
[Compatibility]: FrFr
bug compatibility
* CPyMO version: 3DS CIA * 3DS devices: O3DS and N3DSLL (XL) ## Information about the incompatible application * Name of the PyMO application that fails to run: frfr1 * Application data package version: s60v5 * Issue type: graphical artifacts [save-01.zip](https://github.com/Strrationalism/CPyMO/files/9367604/save-01.zip) https://user-images.githubusercontent.com/11296891/185292863-d318e8ba-35a8-4ea6-a502-5e5a3518ba18.mp4
True
[Compatibility]: FrFr - * CPyMO version: 3DS CIA * 3DS devices: O3DS and N3DSLL (XL) ## Information about the incompatible application * Name of the PyMO application that fails to run: frfr1 * Application data package version: s60v5 * Issue type: graphical artifacts [save-01.zip](https://github.com/Strrationalism/CPyMO/files/9367604/save-01.zip) https://user-images.githubusercontent.com/11296891/185292863-d318e8ba-35a8-4ea6-a502-5e5a3518ba18.mp4
non_main
frfr cpymo version cia xl information about the incompatible application name of the pymo application that fails to run application data package version issue type graphical artifacts
0
78,112
22,145,811,044
IssuesEvent
2022-06-03 11:55:43
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Bug]: Vertical scroll in embed mode
Bug QA App Viewers Pod Low UI Building Pod community Papercut Embedding Apps
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior There's a scroll that's appearing on pages in embed mode even when there aren't a lot of widgets in the page. The default page height seems to be fixed and doesn't resize even when the page is empty. ### Steps To Reproduce 1. Add one widget 2. Try to embed it 3. See a scroll appear even when there are no widgets that warrants a scroll ### Environment Production ### Version Cloud
1.0
[Bug]: Vertical scroll in embed mode - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior There's a scroll that's appearing on pages in embed mode even when there aren't a lot of widgets in the page. The default page height seems to be fixed and doesn't resize even when the page is empty. ### Steps To Reproduce 1. Add one widget 2. Try to embed it 3. See a scroll appear even when there are no widgets that warrants a scroll ### Environment Production ### Version Cloud
non_main
vertical scroll in embed mode is there an existing issue for this i have searched the existing issues current behavior there s a scroll that s appearing on pages in embed mode even when there aren t a lot of widgets in the page the default page height seems to be fixed and doesn t resize even when the page is empty steps to reproduce add one widget try to embed it see a scroll appear even when there are no widgets that warrants a scroll environment production version cloud
0
5,886
32,072,890,384
IssuesEvent
2023-09-25 09:09:13
polarsource/polar
https://api.github.com/repos/polarsource/polar
closed
Rename badge label from `polar` to `Funding`
backer maintainer
### What Rename our `polar` badge label to `Fund`. ### Why - `polar` is non-descriptive which is bad for a label - `Fund` or equivalent has the added benefit of being clear and a CTA on the issues list standalone before the badge is shown (in issue) ### How We should still support `polar` for legacy. We should also support case insensitive usage of our selected word, e.g `Fund` and `fund`. ### Decision log **Is `Fund` the right choice?** TODO. Other options would be `Funding` or `Sponsor`. Latter is likely the most intuitive of all. However, we're using `Fund` as a term moving forward as the default so nice to keep things consistent. <!-- POLAR PLEDGE BADGE START --> ## Upvote & Fund - We're using [Polar.sh](https://polar.sh/polarsource) so you can upvote and help fund this issue. - We receive the funding once the issue is completed & confirmed by you. - Thank you in advance for helping prioritize & fund our backlog :^) <a href="https://polar.sh/polarsource/polar/issues/1102"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/polarsource/polar/issues/1102/pledge.svg?darkmode=1"> <img alt="Fund with Polar" src="https://polar.sh/api/github/polarsource/polar/issues/1102/pledge.svg"> </picture> </a> <!-- POLAR PLEDGE BADGE END -->
True
Rename badge label from `polar` to `Funding` - ### What Rename our `polar` badge label to `Fund`. ### Why - `polar` is non-descriptive which is bad for a label - `Fund` or equivalent has the added benefit of being clear and a CTA on the issues list standalone before the badge is shown (in issue) ### How We should still support `polar` for legacy. We should also support case insensitive usage of our selected word, e.g `Fund` and `fund`. ### Decision log **Is `Fund` the right choice?** TODO. Other options would be `Funding` or `Sponsor`. Latter is likely the most intuitive of all. However, we're using `Fund` as a term moving forward as the default so nice to keep things consistent. <!-- POLAR PLEDGE BADGE START --> ## Upvote & Fund - We're using [Polar.sh](https://polar.sh/polarsource) so you can upvote and help fund this issue. - We receive the funding once the issue is completed & confirmed by you. - Thank you in advance for helping prioritize & fund our backlog :^) <a href="https://polar.sh/polarsource/polar/issues/1102"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/polarsource/polar/issues/1102/pledge.svg?darkmode=1"> <img alt="Fund with Polar" src="https://polar.sh/api/github/polarsource/polar/issues/1102/pledge.svg"> </picture> </a> <!-- POLAR PLEDGE BADGE END -->
main
rename badge label from polar to funding what rename our polar badge label to fund why polar is non descriptive which is bad for a label fund or equivalent has the added benefit of being clear and a cta on the issues list standalone before the badge is shown in issue how we should still support polar for legacy we should also support case insensitive usage of our selected word e g fund and fund decision log is fund the right choice todo other options would be funding or sponsor latter is likely the most intuitive of all however we re using fund as a term moving forward as the default so nice to keep things consistent upvote fund we re using so you can upvote and help fund this issue we receive the funding once the issue is completed confirmed by you thank you in advance for helping prioritize fund our backlog a href source media prefers color scheme dark srcset img alt fund with polar src
1
2,386
8,490,400,073
IssuesEvent
2018-10-27 00:33:31
TravisSpark/spark-website
https://api.github.com/repos/TravisSpark/spark-website
opened
Legacy Navigation Formatting
maintainence
### Checklist - [ ] Searched for, and did not find, duplicate [issue](https://github.com/TravisSpark/spark-website/issues) - [ ] Indicated whether the issue is a bug or a feature - [ ] Focused on one specific bug/feature - [ ] Gave a concise and relevant name - [ ] Created clear and concise description - [ ] Outlined which components are affected - [ ] Assigned issue to project, appropriate contributors, and relevant labels <!-- Edit as Appropriate --> ### Issue Type: Bug ### Description The algorithm to iterate through sections is based off the idea that 'Header' is still listed as a section. However, Header has been removed. The algorithm is now over-complicated. It should be simplified to a single for-loop that directly references the navigation data. ### Affected Components * events.md * news.md * policies.md ``` <!-- Find length of Navigation Array, iterate through this later --> {% assign end_nav_data = nav_data | size | minus:1 %} ``` ``` <!-- Find length of Navigation Array, iterate through this later --> {% assign end_nav_data = nav_data | size | minus:1 %} ``` ``` <a name="{{ nav_data[section_count].text | slugify }}"></a> <h2>{{ nav_data[section_count].text }}</h2> <hr> ``` ``` <!-- In each section, get only the news articles assigned to it --> {% assign section_data = news_data | where:"section",nav_data[section_count].text %} ```
True
Legacy Navigation Formatting - ### Checklist - [ ] Searched for, and did not find, duplicate [issue](https://github.com/TravisSpark/spark-website/issues) - [ ] Indicated whether the issue is a bug or a feature - [ ] Focused on one specific bug/feature - [ ] Gave a concise and relevant name - [ ] Created clear and concise description - [ ] Outlined which components are affected - [ ] Assigned issue to project, appropriate contributors, and relevant labels <!-- Edit as Appropriate --> ### Issue Type: Bug ### Description The algorithm to iterate through sections is based off the idea that 'Header' is still listed as a section. However, Header has been removed. The algorithm is now over-complicated. It should be simplified to a single for-loop that directly references the navigation data. ### Affected Components * events.md * news.md * policies.md ``` <!-- Find length of Navigation Array, iterate through this later --> {% assign end_nav_data = nav_data | size | minus:1 %} ``` ``` <!-- Find length of Navigation Array, iterate through this later --> {% assign end_nav_data = nav_data | size | minus:1 %} ``` ``` <a name="{{ nav_data[section_count].text | slugify }}"></a> <h2>{{ nav_data[section_count].text }}</h2> <hr> ``` ``` <!-- In each section, get only the news articles assigned to it --> {% assign section_data = news_data | where:"section",nav_data[section_count].text %} ```
main
legacy navigation formatting checklist searched for and did not find duplicate indicated whether the issue is a bug or a feature focused on one specific bug feature gave a concise and relevant name created clear and concise description outlined which components are affected assigned issue to project appropriate contributors and relevant labels issue type bug description the algorithm to iterate through sections is based off the idea that header is still listed as a section however header has been removed the algorithm is now over complicated it should be simplified to a single for loop that directly references the navigation data affected components events md news md policies md assign end nav data nav data size minus assign end nav data nav data size minus nav data text assign section data news data where section nav data text
1
23,264
11,864,072,517
IssuesEvent
2020-03-25 20:56:15
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Unused serviceBusApiVersion parameter
Pri2 assigned-to-author doc-enhancement service-bus-messaging/svc triaged
minor issue but the sample snippet of the resource itself doesn't use the serviceBusApiVersion parameter defined and hard codes "apiVersion": "2017-04-01". just above it. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: abfc0981-e410-02b1-5319-95bb0dd7ee54 * Version Independent ID: 743da137-e3ea-838e-b67d-2a175c93728a * Content: [Create Azure Service Bus namespace and queue using Azure template](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-resource-manager-namespace-queue#feedback) * Content Source: [articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md) * Service: **service-bus-messaging** * GitHub Login: @spelluru * Microsoft Alias: **spelluru**
1.0
Unused serviceBusApiVersion parameter - minor issue but the sample snippet of the resource itself doesn't use the serviceBusApiVersion parameter defined and hard codes "apiVersion": "2017-04-01". just above it. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: abfc0981-e410-02b1-5319-95bb0dd7ee54 * Version Independent ID: 743da137-e3ea-838e-b67d-2a175c93728a * Content: [Create Azure Service Bus namespace and queue using Azure template](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-resource-manager-namespace-queue#feedback) * Content Source: [articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md) * Service: **service-bus-messaging** * GitHub Login: @spelluru * Microsoft Alias: **spelluru**
non_main
unused servicebusapiversion parameter minor issue but the sample snippet of the resource itself doesn t use the servicebusapiversion parameter defined and hard codes apiversion just above it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service bus messaging github login spelluru microsoft alias spelluru
0
122,050
12,140,217,188
IssuesEvent
2020-04-23 20:09:48
gcivil-nyu-org/spring2020-cs-gy-9223-class
https://api.github.com/repos/gcivil-nyu-org/spring2020-cs-gy-9223-class
opened
Document how to connect GPS Hat
documentation hardware
**User story** As user, I want a step-by-step guide on how to connect my GPS Hat so that my GPSPi can send coordinate data. **Acceptance criteria** Must be posted to the team wiki. Must be reviewed and easily understood by a team member outside of the hardware team. Should be considerate of the fact that ideally the user does not interact with the code that interfaces with the Sense HAT at all. Should be as "plug and play" as possible. **Definition of Done** Reviewed instructions have been posted to the team wiki.
1.0
Document how to connect GPS Hat - **User story** As user, I want a step-by-step guide on how to connect my GPS Hat so that my GPSPi can send coordinate data. **Acceptance criteria** Must be posted to the team wiki. Must be reviewed and easily understood by a team member outside of the hardware team. Should be considerate of the fact that ideally the user does not interact with the code that interfaces with the Sense HAT at all. Should be as "plug and play" as possible. **Definition of Done** Reviewed instructions have been posted to the team wiki.
non_main
document how to connect gps hat user story as user i want a step by step guide on how to connect my gps hat so that my gpspi can send coordinate data acceptance criteria must be posted to the team wiki must be reviewed and easily understood by a team member outside of the hardware team should be considerate of the fact that ideally the user does not interact with the code that interfaces with the sense hat at all should be as plug and play as possible definition of done reviewed instructions have been posted to the team wiki
0
3,900
17,359,056,291
IssuesEvent
2021-07-29 17:53:15
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
InlineLoading inactive state animation is broken
status: waiting for maintainer response 💬 type: bug 🐛
## Brief description `InlineLoading` component inactive state animation is broken: ![image](https://user-images.githubusercontent.com/5897195/125266191-dc949a00-e305-11eb-8f04-b724b83dbd29.png) ## What package(s) are you using? <!-- Add an x in one of the options below, for example: - [] package name --> - [x] `carbon-components` - [x] `carbon-components-react` ## Detailed description Looks like inactive state in `InlineLoading` component has some weird animation of circle spinning backwards (?). Versions used: carbon-components@10.29.0 carbon-components-react@7.29.0 Tested both in Firefox (89.0.2 (64-bit)) and Chrome (Version 91.0.4472.124 (Official Build) (64-bit)) ## Steps to reproduce the issue Have `InlineComponent` with `status='inactive'`. You could reproduce it in react-components storybook e.g [InlineComponent](https://react.carbondesignsystem.com/?path=/story/components-inlineloading--inline-loading&knob-Description%20(description)=Active%20loading%20indicator) ## Notes: Looks like it was introduced with` carbon-components-react 7.6.0`
True
InlineLoading inactive state animation is broken - ## Brief description `InlineLoading` component inactive state animation is broken: ![image](https://user-images.githubusercontent.com/5897195/125266191-dc949a00-e305-11eb-8f04-b724b83dbd29.png) ## What package(s) are you using? <!-- Add an x in one of the options below, for example: - [] package name --> - [x] `carbon-components` - [x] `carbon-components-react` ## Detailed description Looks like inactive state in `InlineLoading` component has some weird animation of circle spinning backwards (?). Versions used: carbon-components@10.29.0 carbon-components-react@7.29.0 Tested both in Firefox (89.0.2 (64-bit)) and Chrome (Version 91.0.4472.124 (Official Build) (64-bit)) ## Steps to reproduce the issue Have `InlineComponent` with `status='inactive'`. You could reproduce it in react-components storybook e.g [InlineComponent](https://react.carbondesignsystem.com/?path=/story/components-inlineloading--inline-loading&knob-Description%20(description)=Active%20loading%20indicator) ## Notes: Looks like it was introduced with` carbon-components-react 7.6.0`
main
inlineloading inactive state animation is broken brief description inlineloading component inactive state animation is broken what package s are you using add an x in one of the options below for example package name carbon components carbon components react detailed description looks like inactive state in inlineloading component has some weird animation of circle spinning backwards versions used carbon components carbon components react tested both in firefox bit and chrome version official build bit steps to reproduce the issue have inlinecomponent with status inactive you could reproduce it in react components storybook e g notes looks like it was introduced with carbon components react
1
649,053
21,216,887,920
IssuesEvent
2022-04-11 08:14:49
tempus-finance/tempus-app
https://api.github.com/repos/tempus-finance/tempus-app
closed
"Available to Deposit" is incorrect
bug low priority
**Description** Available to deposit on Mint page is higher then my entire balance. **To Reproduce** 1. Navigate to [test environment](https://tempus-app-stage.web.app/). 2. Manage. 3. Mint. 4. Insert Mint amount 9999 ETH. 5. Inspect "Available to Deposit" amount. **Expected behavior** It should be equal to amount in the wallet. **Actual behavior** It is higher then wallet amount. **Screenshots** ![image](https://user-images.githubusercontent.com/98878781/159552569-e845386e-c88c-4caf-95b3-5fe357255c39.png) **Environment** Operating System: Ubuntu Browser: Chrome Wallet: MetaMask Network: Fantom URL: https://tempus-app-stage.web.app/ **Additional context** After refresh, this issue was fixed.
1.0
"Available to Deposit" is incorrect - **Description** Available to deposit on Mint page is higher then my entire balance. **To Reproduce** 1. Navigate to [test environment](https://tempus-app-stage.web.app/). 2. Manage. 3. Mint. 4. Insert Mint amount 9999 ETH. 5. Inspect "Available to Deposit" amount. **Expected behavior** It should be equal to amount in the wallet. **Actual behavior** It is higher then wallet amount. **Screenshots** ![image](https://user-images.githubusercontent.com/98878781/159552569-e845386e-c88c-4caf-95b3-5fe357255c39.png) **Environment** Operating System: Ubuntu Browser: Chrome Wallet: MetaMask Network: Fantom URL: https://tempus-app-stage.web.app/ **Additional context** After refresh, this issue was fixed.
non_main
available to deposit is incorrect description available to deposit on mint page is higher then my entire balance to reproduce navigate to manage mint insert mint amount eth inspect available to deposit amount expected behavior it should be equal to amount in the wallet actual behavior it is higher then wallet amount screenshots environment operating system ubuntu browser chrome wallet metamask network fantom url additional context after refresh this issue was fixed
0
3,393
13,160,801,507
IssuesEvent
2020-08-10 18:17:45
RapidField/solid-instruments
https://api.github.com/repos/RapidField/solid-instruments
closed
Refactor bit field copy operations for performance.
Category-Maintenance Source-Maintainer Stage-4-Complete Subcategory-Performance Tag-AddReleaseNote Verdict-Released Version-1.0.26 WindowForDelivery-2021-Q1
# Maintenance Request This issue represents a request for documentation, testing, refactoring or other non-functional changes. ## Overview The performance of several bit field copy operations can be improved by utilizing the `Span<T>` and `Memory<T>` primitives in place of `Array.Copy` and `Buffer.BlockCopy`. Those operations should be refactored to use the new primitives where appropriate. ## Statement of work The following list describes the work to be done. 1. Find and refactor uses of `Array.Copy` and `Buffer.BlockCopy`, where appropriate, to use the new primitives. ## Revision control plan **Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue. - `master` is the pull request target for - `release/v1.0.26-preview1`, which is the pull request target for - `develop`, which is the pull request target for - `maintenance/00289-refactor-copies`, which is the pull request target for contributing user branches, which should be named using the pattern - `user/{username}/00289-refactor-copies`
True
Refactor bit field copy operations for performance. - # Maintenance Request This issue represents a request for documentation, testing, refactoring or other non-functional changes. ## Overview The performance of several bit field copy operations can be improved by utilizing the `Span<T>` and `Memory<T>` primitives in place of `Array.Copy` and `Buffer.BlockCopy`. Those operations should be refactored to use the new primitives where appropriate. ## Statement of work The following list describes the work to be done. 1. Find and refactor uses of `Array.Copy` and `Buffer.BlockCopy`, where appropriate, to use the new primitives. ## Revision control plan **Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue. - `master` is the pull request target for - `release/v1.0.26-preview1`, which is the pull request target for - `develop`, which is the pull request target for - `maintenance/00289-refactor-copies`, which is the pull request target for contributing user branches, which should be named using the pattern - `user/{username}/00289-refactor-copies`
main
refactor bit field copy operations for performance maintenance request this issue represents a request for documentation testing refactoring or other non functional changes overview the performance of several bit field copy operations can be improved by utilizing the span and memory primitives in place of array copy and buffer blockcopy those operations should be refactored to use the new primitives where appropriate statement of work the following list describes the work to be done find and refactor uses of array copy and buffer blockcopy where appropriate to use the new primitives revision control plan solid instruments uses the individual contributors should follow the branching plan below when working on this issue master is the pull request target for release which is the pull request target for develop which is the pull request target for maintenance refactor copies which is the pull request target for contributing user branches which should be named using the pattern user username refactor copies
1
253,589
21,690,409,733
IssuesEvent
2022-05-09 14:54:34
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Allowance acceptance tests fail
bug P1 test
### Description The acceptance tests currently fail if the allowance tests are ran. Since allowance is included in the default tag it causes the tests to fail unless explicitly excluded. ### Steps to reproduce `./mvnw integration-test --projects hedera-mirror-test/ -P=acceptance-tests -Dcucumber.filter.tags="@acceptance"` ### Additional context _No response_ ### Hedera network mainnet ### Version v0.56.1 ### Operating system _No response_
1.0
Allowance acceptance tests fail - ### Description The acceptance tests currently fail if the allowance tests are ran. Since allowance is included in the default tag it causes the tests to fail unless explicitly excluded. ### Steps to reproduce `./mvnw integration-test --projects hedera-mirror-test/ -P=acceptance-tests -Dcucumber.filter.tags="@acceptance"` ### Additional context _No response_ ### Hedera network mainnet ### Version v0.56.1 ### Operating system _No response_
non_main
allowance acceptance tests fail description the acceptance tests currently fail if the allowance tests are ran since allowance is included in the default tag it causes the tests to fail unless explicitly excluded steps to reproduce mvnw integration test projects hedera mirror test p acceptance tests dcucumber filter tags acceptance additional context no response hedera network mainnet version operating system no response
0
5,753
30,491,346,221
IssuesEvent
2023-07-18 07:55:38
jupyter-naas/awesome-notebooks
https://api.github.com/repos/jupyter-naas/awesome-notebooks
closed
Mixpanel - Get Profile Event Activity
templates maintainer
This notebook returns the activity feed for specified users. It is usefull for organizations to track user activity and get insights from it.
True
Mixpanel - Get Profile Event Activity - This notebook returns the activity feed for specified users. It is usefull for organizations to track user activity and get insights from it.
main
mixpanel get profile event activity this notebook returns the activity feed for specified users it is usefull for organizations to track user activity and get insights from it
1
4,694
24,229,587,878
IssuesEvent
2022-09-26 17:01:56
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Bug: Using SAM with Windows MINGW64 or Git Bash
area/installation maintainer/need-followup
### Description: When a user uses the Windows installer, SAM is installed through a file titled **sam.cmd**, and that file's directory is added to PATH. This works fine if a Windows user is using either Powershell or cmd as their default/preferred terminal application, as both applications will read the input `sam --version` as `sam[.cmd] --version`. However if the user is using Git Bash, or some other bash terminal in windows, which I imagine is a pretty common occurrence for command line developers, you'll hit either a "command not found" or "No such file or directory" error, depending on your preferred application. To use SAM CLI a Bash user will need to instead run `sam.cmd --version` to get the desired result. This is not always a feasible workaround - especially if users are accessing SAM CLI by running scripts designed for many team members who use Linux, Mac and Windows devices. Specifying the application to run SAM CLI, or scripts that use SAM CLI in is not always realistic, and I suppose in some cases not possible. Given this, I would imagine it makes sense to instead install **sam.cmd** as a compiled executable **sam.exe**. This way, no matter what terminal application a Windows user uses, they can use the command `sam` to access the SAM CLI. This is, for example, the approach used by AWS CLI V2 to ensure Powershell, cmd and Bash Windows users can all use the same command `aws` to access the AWS CLI. ### Steps to reproduce: - Install the AWS SAM CLI for Windows 64 bit (or, presumably, 32 Bit). - Open a terminal using git bash. - Run `sam --version`. ### Observed result: ``` bash: sam: command not found ``` ### Expected result: ``` SAM CLI, version 1.57.0 ``` (Or a newer version if you're checking this at a later date) ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 2. `sam --version`: 1.57.0 3. Use a Windows terminal application that is not either cmd or Powershell.
True
Bug: Using SAM with Windows MINGW64 or Git Bash - ### Description: When a user uses the Windows installer, SAM is installed through a file titled **sam.cmd**, and that file's directory is added to PATH. This works fine if a Windows user is using either Powershell or cmd as their default/preferred terminal application, as both applications will read the input `sam --version` as `sam[.cmd] --version`. However if the user is using Git Bash, or some other bash terminal in windows, which I imagine is a pretty common occurrence for command line developers, you'll hit either a "command not found" or "No such file or directory" error, depending on your preferred application. To use SAM CLI a Bash user will need to instead run `sam.cmd --version` to get the desired result. This is not always a feasible workaround - especially if users are accessing SAM CLI by running scripts designed for many team members who use Linux, Mac and Windows devices. Specifying the application to run SAM CLI, or scripts that use SAM CLI in is not always realistic, and I suppose in some cases not possible. Given this, I would imagine it makes sense to instead install **sam.cmd** as a compiled executable **sam.exe**. This way, no matter what terminal application a Windows user uses, they can use the command `sam` to access the SAM CLI. This is, for example, the approach used by AWS CLI V2 to ensure Powershell, cmd and Bash Windows users can all use the same command `aws` to access the AWS CLI. ### Steps to reproduce: - Install the AWS SAM CLI for Windows 64 bit (or, presumably, 32 Bit). - Open a terminal using git bash. - Run `sam --version`. ### Observed result: ``` bash: sam: command not found ``` ### Expected result: ``` SAM CLI, version 1.57.0 ``` (Or a newer version if you're checking this at a later date) ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 2. `sam --version`: 1.57.0 3. Use a Windows terminal application that is not either cmd or Powershell.
main
bug using sam with windows or git bash description when a user uses the windows installer sam is installed through a file titled sam cmd and that file s directory is added to path this works fine if a windows user is using either powershell or cmd as their default preferred terminal application as both applications will read the input sam version as sam version however if the user is using git bash or some other bash terminal in windows which i imagine is a pretty common occurrence for command line developers you ll hit either a command not found or no such file or directory error depending on your preferred application to use sam cli a bash user will need to instead run sam cmd version to get the desired result this is not always a feasible workaround especially if users are accessing sam cli by running scripts designed for many team members who use linux mac and windows devices specifying the application to run sam cli or scripts that use sam cli in is not always realistic and i suppose in some cases not possible given this i would imagine it makes sense to instead install sam cmd as a compiled executable sam exe this way no matter what terminal application a windows user uses they can use the command sam to access the sam cli this is for example the approach used by aws cli to ensure powershell cmd and bash windows users can all use the same command aws to access the aws cli steps to reproduce install the aws sam cli for windows bit or presumably bit open a terminal using git bash run sam version observed result bash sam command not found expected result sam cli version or a newer version if you re checking this at a later date additional environment details ex windows mac amazon linux etc os windows sam version use a windows terminal application that is not either cmd or powershell
1
264,026
23,096,822,137
IssuesEvent
2022-07-26 20:27:43
harvard-lil/perma-extension
https://api.github.com/repos/harvard-lil/perma-extension
closed
E2E Testing
Tests
**Goals:** Initial round of E2E testing. To solidify as feature set solidifies. --- - [x] Components testing _(isolated of storage state)_ - [x] `<app-header>` - [x] `<archive-form>` - [x] `<archive-timeline>` - [x] `<archive-timeline-item>` - [x] `<status-bar>` - [x] E2E Scenarios _(behavior + storage monitoring)_ - [x] Sign in: Invalid API key - [x] Sign in: Valid API Key - [x] Create Archive: Invalid url - [X] Create Archive: Valid url - [x] Pick folder: Valid Url - [x] Sign Out
1.0
E2E Testing - **Goals:** Initial round of E2E testing. To solidify as feature set solidifies. --- - [x] Components testing _(isolated of storage state)_ - [x] `<app-header>` - [x] `<archive-form>` - [x] `<archive-timeline>` - [x] `<archive-timeline-item>` - [x] `<status-bar>` - [x] E2E Scenarios _(behavior + storage monitoring)_ - [x] Sign in: Invalid API key - [x] Sign in: Valid API Key - [x] Create Archive: Invalid url - [X] Create Archive: Valid url - [x] Pick folder: Valid Url - [x] Sign Out
non_main
testing goals initial round of testing to solidify as feature set solidifies components testing isolated of storage state scenarios behavior storage monitoring sign in invalid api key sign in valid api key create archive invalid url create archive valid url pick folder valid url sign out
0
578,009
17,141,609,749
IssuesEvent
2021-07-13 10:13:53
hashicorp/terraform-cdk
https://api.github.com/repos/hashicorp/terraform-cdk
closed
Enable null for mapped attributes
bug priority/important-soon providers schema
<!--- Please keep this note for the community ---> ### Community Note - Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request - Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request - If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### cdktf & Language Versions Probably all ### Affected Resource(s) AWS Route Table ### Important Factoids As written by @wuntusk in https://github.com/terraform-cdk-providers/cdktf-provider-aws/pull/394 > Currently RouteTable is unuseable > > Currently you can't make a RouteTable since all routes require both a carrierGatewayId and a destinationPrefixListId. For some reason these 2 values never got an undefined check added so they never pass null but instead make a call to cdktf.stringToTerraform. I'm guessing this has never been caught as everyone is using the vpc module shown in the demos. If you try to roll your own pure Typescript VPC you'll run into this pretty quickly. <!--- Are there anything atypical about your accounts that we should know? ---> ### References Potentially related to - https://github.com/hashicorp/terraform-cdk/issues/750 - https://github.com/hashicorp/terraform-cdk/issues/234 - https://github.com/hashicorp/terraform-cdk/pull/395
1.0
Enable null for mapped attributes - <!--- Please keep this note for the community ---> ### Community Note - Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request - Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request - If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### cdktf & Language Versions Probably all ### Affected Resource(s) AWS Route Table ### Important Factoids As written by @wuntusk in https://github.com/terraform-cdk-providers/cdktf-provider-aws/pull/394 > Currently RouteTable is unuseable > > Currently you can't make a RouteTable since all routes require both a carrierGatewayId and a destinationPrefixListId. For some reason these 2 values never got an undefined check added so they never pass null but instead make a call to cdktf.stringToTerraform. I'm guessing this has never been caught as everyone is using the vpc module shown in the demos. If you try to roll your own pure Typescript VPC you'll run into this pretty quickly. <!--- Are there anything atypical about your accounts that we should know? ---> ### References Potentially related to - https://github.com/hashicorp/terraform-cdk/issues/750 - https://github.com/hashicorp/terraform-cdk/issues/234 - https://github.com/hashicorp/terraform-cdk/pull/395
non_main
enable null for mapped attributes community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment cdktf language versions probably all affected resource s aws route table important factoids as written by wuntusk in currently routetable is unuseable currently you can t make a routetable since all routes require both a carriergatewayid and a destinationprefixlistid for some reason these values never got an undefined check added so they never pass null but instead make a call to cdktf stringtoterraform i m guessing this has never been caught as everyone is using the vpc module shown in the demos if you try to roll your own pure typescript vpc you ll run into this pretty quickly references potentially related to
0
36,701
15,043,796,418
IssuesEvent
2021-02-03 01:30:56
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
closed
Moped Tables | Refine GridTable layout/design on Projects page
Need: 3-Could Have Product: Moped Service: Dev Type: Enhancement
- [x] Change page title from "Projects List" to "Projects" - [x] Make [fixed tabs](https://material-ui.com/components/tabs/#fixed-tabs) for "Search" (currently "General Search") and "Advanced Search" (currently "Filter Search") inside the search box. - [x] Move "Download" button (see #4785) into search box. <img width="1351" alt="Projects-Page-Design-Refinements" src="https://user-images.githubusercontent.com/1463708/105315484-02c25400-5b85-11eb-8e73-5b7876b99b2e.png">
1.0
Moped Tables | Refine GridTable layout/design on Projects page - - [x] Change page title from "Projects List" to "Projects" - [x] Make [fixed tabs](https://material-ui.com/components/tabs/#fixed-tabs) for "Search" (currently "General Search") and "Advanced Search" (currently "Filter Search") inside the search box. - [x] Move "Download" button (see #4785) into search box. <img width="1351" alt="Projects-Page-Design-Refinements" src="https://user-images.githubusercontent.com/1463708/105315484-02c25400-5b85-11eb-8e73-5b7876b99b2e.png">
non_main
moped tables refine gridtable layout design on projects page change page title from projects list to projects make for search currently general search and advanced search currently filter search inside the search box move download button see into search box img width alt projects page design refinements src
0
100,630
16,490,112,425
IssuesEvent
2021-05-25 01:37:31
Baneeishaque/ask-med-pharma_Wordpress
https://api.github.com/repos/Baneeishaque/ask-med-pharma_Wordpress
opened
CVE-2021-23369 (High) detected in handlebars-4.4.2.tgz
security vulnerability
## CVE-2021-23369 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p> <p>Path to dependency file: ask-med-pharma_Wordpress/wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: ask-med-pharma_Wordpress/wp-content/themes/twentytwenty/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - scripts-5.0.0.tgz (Root Library) - jest-24.9.0.tgz - jest-cli-24.9.0.tgz - core-24.9.0.tgz - reporters-24.9.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.4.2.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source. <p>Publish Date: 2021-04-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p> <p>Release Date: 2021-04-12</p> <p>Fix Resolution: handlebars - 4.7.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23369 (High) detected in handlebars-4.4.2.tgz - ## CVE-2021-23369 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.4.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.4.2.tgz</a></p> <p>Path to dependency file: ask-med-pharma_Wordpress/wp-content/themes/twentytwenty/package.json</p> <p>Path to vulnerable library: ask-med-pharma_Wordpress/wp-content/themes/twentytwenty/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - scripts-5.0.0.tgz (Root Library) - jest-24.9.0.tgz - jest-cli-24.9.0.tgz - core-24.9.0.tgz - reporters-24.9.0.tgz - istanbul-reports-2.2.6.tgz - :x: **handlebars-4.4.2.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source. <p>Publish Date: 2021-04-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23369>CVE-2021-23369</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23369</a></p> <p>Release Date: 2021-04-12</p> <p>Fix Resolution: handlebars - 4.7.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file ask med pharma wordpress wp content themes twentytwenty package json path to vulnerable library ask med pharma wordpress wp content themes twentytwenty node modules handlebars package json dependency hierarchy scripts tgz root library jest tgz jest cli tgz core tgz reporters tgz istanbul reports tgz x handlebars tgz vulnerable library found in base branch master vulnerability details the package handlebars before are vulnerable to remote code execution rce when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
0
125,567
10,346,658,120
IssuesEvent
2019-09-04 15:41:09
kcigeospatial/Fred_Co_Land-Management
https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management
closed
Resuse- Addition- grading generated
Bug Ready for Test Env. Retest
grading permit generated and status check is holding up issuance for a 180 sq. ft. deck - did not meet the requirement to trigger a grading permit ![image](https://user-images.githubusercontent.com/47611580/63963935-058a4980-ca64-11e9-8d6f-54902d92f612.png)
2.0
Resuse- Addition- grading generated - grading permit generated and status check is holding up issuance for a 180 sq. ft. deck - did not meet the requirement to trigger a grading permit ![image](https://user-images.githubusercontent.com/47611580/63963935-058a4980-ca64-11e9-8d6f-54902d92f612.png)
non_main
resuse addition grading generated grading permit generated and status check is holding up issuance for a sq ft deck did not meet the requirement to trigger a grading permit
0
2,033
6,830,351,567
IssuesEvent
2017-11-09 06:13:58
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
Heating reagent containers needs to be generalized
Bug Maintainability/Hinders improvements
Heating beakers, drinking glasses, and piles of chemicals on the floor all have the actual heating process coded their own way. [Piles](https://github.com/tgstation/tgstation/blob/e7da251e4685c8de5c5363eb97ce9ecf77aafbfc/code/game/objects/effects/decals/cleanable.dm#L45) [Drinking glasses](https://github.com/tgstation/tgstation/blob/d3dcc118ecb5f5410de684b1bb46289f8066ab23/code/modules/food_and_drinks/drinks/drinks.dm#L92) [Glass beakers](https://github.com/tgstation/tgstation/blob/b83d77711999124fb33de984724653039a7e41e2/code/modules/reagents/reagent_containers/glass.dm#L92) Notice slight differences between how these are all coded. For example, you can't heat reagents in a beaker to be hotter than the thing you're using to heat it, but you can in a drinking glass (meaning you could turn chems in a regular glass into magma with a match, but not in a beaker), which is a bug. Generalization would fix this.
True
Heating reagent containers needs to be generalized - Heating beakers, drinking glasses, and piles of chemicals on the floor all have the actual heating process coded their own way. [Piles](https://github.com/tgstation/tgstation/blob/e7da251e4685c8de5c5363eb97ce9ecf77aafbfc/code/game/objects/effects/decals/cleanable.dm#L45) [Drinking glasses](https://github.com/tgstation/tgstation/blob/d3dcc118ecb5f5410de684b1bb46289f8066ab23/code/modules/food_and_drinks/drinks/drinks.dm#L92) [Glass beakers](https://github.com/tgstation/tgstation/blob/b83d77711999124fb33de984724653039a7e41e2/code/modules/reagents/reagent_containers/glass.dm#L92) Notice slight differences between how these are all coded. For example, you can't heat reagents in a beaker to be hotter than the thing you're using to heat it, but you can in a drinking glass (meaning you could turn chems in a regular glass into magma with a match, but not in a beaker), which is a bug. Generalization would fix this.
main
heating reagent containers needs to be generalized heating beakers drinking glasses and piles of chemicals on the floor all have the actual heating process coded their own way notice slight differences between how these are all coded for example you can t heat reagents in a beaker to be hotter than the thing you re using to heat it but you can in a drinking glass meaning you could turn chems in a regular glass into magma with a match but not in a beaker which is a bug generalization would fix this
1
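The heating record above argues for one shared routine instead of three divergent copies. A minimal sketch (in Python rather than BYOND DM, with hypothetical names) of what such a generalized helper could look like — note the clamp to the heat source's temperature, which is exactly the rule the drinking-glass code path is described as missing:

```python
def expose_to_heat(current_temp: float, source_temp: float, rate: float = 0.5) -> float:
    """Move a container's reagent temperature toward the heat source.

    Every container type (beaker, drinking glass, floor pile) would call this
    one function, so the result is always clamped: reagents can never end up
    hotter than the thing heating them.
    """
    if current_temp >= source_temp:
        return current_temp  # a heat source cannot cool or overshoot reagents
    heated = current_temp + (source_temp - current_temp) * rate
    return min(heated, source_temp)
```

The clamp is the behavioral fix: with it, a match can no longer push a glass of chemicals past the match's own temperature.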
4,290
21,647,752,865
IssuesEvent
2022-05-06 05:30:00
ansible-collections/community.vmware
https://api.github.com/repos/ansible-collections/community.vmware
closed
Teaming and security on vswitch
waiting_on_contributor affects_2.10 module feature needs_maintainer
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> vmware_vswitch should allow setting of teaming and security parameters ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> vmware/vswitch ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> The vmware_vswitch module allows adding NICs to vswitches, but currently there is no way to configure how those nics are set up for teaming or resilience. Similarly there is no way to set the security functions on the whole switch. This functionality is already present in the vmware_portgroup module, but it is not always desirable to set it at the portgroup level - configuring teaming of the uplink NICs is particularly of more use at the vSwitch level. It seems silly to be using ansible to configure vSwitches and portgroups on hosts consistently, but still have to then go and use other tools to turn on or off settings on those vSwitches. <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can also paste gist.github.com links for larger files -->
True
Teaming and security on vswitch - <!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> vmware_vswitch should allow setting of teaming and security parameters ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> vmware/vswitch ##### ADDITIONAL INFORMATION <!--- Describe how the feature would be used, why it is needed and what it would solve --> The vmware_vswitch module allows adding NICs to vswitches, but currently there is no way to configure how those nics are set up for teaming or resilience. Similarly there is no way to set the security functions on the whole switch. This functionality is already present in the vmware_portgroup module, but it is not always desirable to set it at the portgroup level - configuring teaming of the uplink NICs is particularly of more use at the vSwitch level. It seems silly to be using ansible to configure vSwitches and portgroups on hosts consistently, but still have to then go and use other tools to turn on or off settings on those vSwitches. <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can also paste gist.github.com links for larger files -->
main
teaming and security on vswitch summary vmware vswitch should allow setting of teaming and security parameters issue type feature idea component name vmware vswitch additional information the vmware vswitch module allows adding nics to vswitches but currently there is no way to configure how those nics are set up for teaming or resilience similarly there is no way to set the security functions on the whole switch this functionality is already present in the vmware portgroup module but it is not always desirable to set it at the portgroup level configuring teaming of the uplink nics is particularly of more use at the vswitch level it seems silly to be using ansible to configure vswitches and portgroups on hosts consistently but still have to then go and use other tools to turn on or off settings on those vswitches yaml
1
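The feature request above asks `vmware_vswitch` to accept the same teaming and security options that `vmware_portgroup` already exposes. As a rough, hypothetical sketch of that option surface (a Python stand-in, not actual module code; only the load-balancing policy names mirror ones `vmware_portgroup` documents — the class and field names are invented for illustration):

```python
from dataclasses import dataclass, field

# Load-balancing values as accepted by community.vmware's vmware_portgroup;
# the rest of this sketch (class and field names) is hypothetical.
VALID_TEAMING_POLICIES = {
    "loadbalance_ip",
    "loadbalance_srcmac",
    "loadbalance_srcid",
    "failover_explicit",
}

@dataclass
class VSwitchPolicy:
    """Stand-in for the vSwitch-level teaming/security options requested."""
    teaming_policy: str = "loadbalance_srcid"
    active_adapters: list = field(default_factory=list)
    standby_adapters: list = field(default_factory=list)
    promiscuous_mode: bool = False
    mac_changes: bool = False
    forged_transmits: bool = False

    def __post_init__(self) -> None:
        if self.teaming_policy not in VALID_TEAMING_POLICIES:
            raise ValueError(f"unsupported teaming policy: {self.teaming_policy!r}")
```

Validating once at the vSwitch level like this is the point of the request: uplink NIC teaming is naturally a property of the switch, not of each portgroup.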
683
4,231,987,483
IssuesEvent
2016-07-04 19:14:51
Microsoft/DirectXTK
https://api.github.com/repos/Microsoft/DirectXTK
opened
Retire Windows 8.1 Store and Windows phone 8.1 projects
maintainence
At some point we should remove support for the older versions in favor of UWP apps ``DirectXTK_Windows81.vcxproj`` ``DirectXTK_WindowsPhone81.vcxproj`` ``DirectXTK_XAMLSilverlight_WindowsPhone81.vcxproj`` Please put any requests for continued support for one or more of these here.
True
Retire Windows 8.1 Store and Windows phone 8.1 projects - At some point we should remove support for the older versions in favor of UWP apps ``DirectXTK_Windows81.vcxproj`` ``DirectXTK_WindowsPhone81.vcxproj`` ``DirectXTK_XAMLSilverlight_WindowsPhone81.vcxproj`` Please put any requests for continued support for one or more of these here.
main
retire windows store and windows phone projects at some point we should remove support for the older versions in favor of uwp apps directxtk vcxproj directxtk vcxproj directxtk xamlsilverlight vcxproj please put any requests for continued support for one or more of these here
1
2,741
9,746,139,329
IssuesEvent
2019-06-03 11:27:19
trump-fmi/area-simplification-client
https://api.github.com/repos/trump-fmi/area-simplification-client
closed
Integrate typescript support
DONE maintainance
It would be cool if we could use typescript in addition to vanilla javascript.
True
Integrate typescript support - It would be cool if we could use typescript in addition to vanilla javascript.
main
integrate typescript support it would be cool if we could use typescript in addition to vanilla javascript
1
5,352
26,963,747,451
IssuesEvent
2023-02-08 20:23:03
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Please support minikube as an alternative to Docker Desktop
area/docker area/local/start-api area/local/start-lambda area/local/invoke maintainer/need-followup
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). --> ### Describe your idea/feature/enhancement In August 2021 [Docker announced](https://www.docker.com/blog/updating-product-subscriptions/) that the Docker Desktop application would no longer be free for most corporate users. At my employer most engineers don't use Docker at all _except_ when using `sam local invoke`. It would be greatly appreciated if SAM CLI could be made to work with [minikube](https://minikube.sigs.k8s.io/docs/) as an alternative to Docker Desktop. Minikube is easy to install, runs on Mac/Linux/Windows, and contains a docker daemon/engine. When minikube is "paused", it frees up resources used by the embedded kubernetes cluster, but leaves the docker engine running. Therefore if `sam local invoke` could be made to work with minikube, it would be much less resource intensive than running Docker Desktop. In theory the following should work (which I have done on my Mac). 1. Fully uninstalled the Docker Desktop app 1. `brew install hyperkit minikube docker` 2. `minikube start` 3. `eval $(minikube docker-env)` (exports DOCKER_HOST and other env vars) 4. Attempted to run `sam local invoke` on a stack containing a Lambda function The local invoke unfortunately does not succeed. The output looks like this: ``` ... Reading invoke payload from stdin (you can also pass it from file with --event) Invoking index.handler (nodejs14.x) BlahBlahLambdaLayer15607B5A is a local Layer in the template Building image........................ Skip pulling image and use local one: samcli/lambda:nodejs14.x-x86_64-c35a66a12aa51442874957628. Mounting /Users/***/ggit/lambda-template/dist/src as /var/task:ro,delegated inside runtime container No response from invoke container for BlahBlahLambda334A3D3E ``` ### Proposal - Investigate the failure when using `sam local invoke` with minikube. 
- Implement a fix - Document anything new required of the end user (e.g., a new CLI option for `sam local invoke`) Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) - I don't think this will require any changes to the SAM Spec, but may require a new CLI option for `sam local invoke` ### Additional Details None.
True
Please support minikube as an alternative to Docker Desktop - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). --> ### Describe your idea/feature/enhancement In August 2021 [Docker announced](https://www.docker.com/blog/updating-product-subscriptions/) that the Docker Desktop application would no longer be free for most corporate users. At my employer most engineers don't use Docker at all _except_ when using `sam local invoke`. It would be greatly appreciated if SAM CLI could be made to work with [minikube](https://minikube.sigs.k8s.io/docs/) as an alternative to Docker Desktop. Minikube is easy to install, runs on Mac/Linux/Windows, and contains a docker daemon/engine. When minikube is "paused", it frees up resources used by the embedded kubernetes cluster, but leaves the docker engine running. Therefore if `sam local invoke` could be made to work with minikube, it would be much less resource intensive than running Docker Desktop. In theory the following should work (which I have done on my Mac). 1. Fully uninstalled the Docker Desktop app 1. `brew install hyperkit minikube docker` 2. `minikube start` 3. `eval $(minikube docker-env)` (exports DOCKER_HOST and other env vars) 4. Attempted to run `sam local invoke` on a stack containing a Lambda function The local invoke unfortunately does not succeed. The output looks like this: ``` ... Reading invoke payload from stdin (you can also pass it from file with --event) Invoking index.handler (nodejs14.x) BlahBlahLambdaLayer15607B5A is a local Layer in the template Building image........................ Skip pulling image and use local one: samcli/lambda:nodejs14.x-x86_64-c35a66a12aa51442874957628. Mounting /Users/***/ggit/lambda-template/dist/src as /var/task:ro,delegated inside runtime container No response from invoke container for BlahBlahLambda334A3D3E ``` ### Proposal - Investigate the failure when using `sam local invoke` with minikube. 
- Implement a fix - Document anything new required of the end user (e.g., a new CLI option for `sam local invoke`) Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) - I don't think this will require any changes to the SAM Spec, but may require a new CLI option for `sam local invoke` ### Additional Details None.
main
please support minikube as an alternative to docker desktop describe your idea feature enhancement in august that the docker desktop application would no longer be free for most corporate users at my employer most engineers don t use docker at all except when using sam local invoke it would be greatly appreciated if sam cli could be made to work with as an alternative to docker desktop minikube is easy to install runs on mac linux windows and contains a docker daemon engine when minikube is paused it frees up resources used by the embedded kubernetes cluster but leaves the docker engine running therefore if sam local invoke could be made to work with minikube it would be much less resource intensive than running docker desktop in theory the following should work which i have done on my mac fully uninstalled the docker desktop app brew install hyperkit minikube docker minikube start eval minikube docker env exports docker host and other env vars attempted to run sam local invoke on a stack containing a lambda function the local invoke unfortunately does not succeed the output looks like this reading invoke payload from stdin you can also pass it from file with event invoking index handler x is a local layer in the template building image skip pulling image and use local one samcli lambda x mounting users ggit lambda template dist src as var task ro delegated inside runtime container no response from invoke container for proposal investigate the failure when using sam local invoke with minikube implement a fix document anything new required of the end user e g a new cli option for sam local invoke things to consider will this require any updates to the i don t think this will require any changes to the sam spec but may require a new cli option for sam local invoke additional details none
1
4,445
23,086,285,725
IssuesEvent
2022-07-26 11:43:28
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
[FEATURE_REQUEST] Weather Forecast Widget could use unified endpoint
🦄 Feature Request 👤 Awaiting Maintainer Response
### Is your feature request related to a problem? If so, please describe. The current weather forecast implementation requires a paid license. If we utilize a different API, it would be available for free. ### Describe the solution you'd like In digging into the APIs for openweathermap.org. I found that the 5day 3hr forecast is available for free where the 16 day daily forecast is not. I think it would be useful to be able to utilize the free one instead of or along side the current one. The 5day 3hr does require the user to know their lat/long though. [3 hour 5 day forecast](https://openweathermap.org/forecast5) [Current API Availability and Pricing](https://openweathermap.org/full-price#current) ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement Maybe Resulting JSON sample ``` {"cod":"200","message":0,"cnt":40,"list":[{"dt":1658696400,"main":{"temp":306.78,"feels_like":305.95,"temp_min":306.78,"temp_max":308.44,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":30,"temp_kf":-1.66},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":28},"wind":{"speed":1.73,"deg":267,"gust":1.54},"visibility":10000,"pop":0.09,"sys":{"pod":"d"},"dt_txt":"2022-07-24 21:00:00"},{"dt":1658707200,"main":{"temp":305.46,"feels_like":304.2,"temp_min":305.21,"temp_max":305.46,"pressure":1010,"sea_level":1010,"grnd_level":890,"humidity":29,"temp_kf":0.25},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10d"}],"clouds":{"all":33},"wind":{"speed":5.48,"deg":52,"gust":6.44},"visibility":10000,"pop":0.35,"rain":{"3h":0.26},"sys":{"pod":"d"},"dt_txt":"2022-07-25 00:00:00"},{"dt":1658718000,"main":{"temp":303.63,"feels_like":302.83,"temp_min":303.63,"temp_max":303.63,"pressure":1010,"sea_level":1010,"grnd_level":891,"humidity":35,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken 
clouds","icon":"04n"}],"clouds":{"all":82},"wind":{"speed":4.8,"deg":136,"gust":5.72},"visibility":10000,"pop":0.16,"sys":{"pod":"n"},"dt_txt":"2022-07-25 03:00:00"},{"dt":1658728800,"main":{"temp":299.75,"feels_like":299.75,"temp_min":299.75,"temp_max":299.75,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":51,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":89},"wind":{"speed":10.58,"deg":224,"gust":12.6},"visibility":10000,"pop":0.5,"rain":{"3h":0.2},"sys":{"pod":"n"},"dt_txt":"2022-07-25 06:00:00"},{"dt":1658739600,"main":{"temp":298.66,"feels_like":298.7,"temp_min":298.66,"temp_max":298.66,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":55,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":67},"wind":{"speed":6.65,"deg":262,"gust":8.52},"visibility":10000,"pop":0.56,"rain":{"3h":0.3},"sys":{"pod":"n"},"dt_txt":"2022-07-25 09:00:00"},{"dt":1658750400,"main":{"temp":298.04,"feels_like":298.13,"temp_min":298.04,"temp_max":298.04,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":59,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":62},"wind":{"speed":2.36,"deg":255,"gust":2.97},"visibility":10000,"pop":0.52,"sys":{"pod":"n"},"dt_txt":"2022-07-25 12:00:00"},{"dt":1658761200,"main":{"temp":299.73,"feels_like":299.73,"temp_min":299.73,"temp_max":299.73,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":51,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":47},"wind":{"speed":2.12,"deg":237,"gust":2.55},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-25 
15:00:00"},{"dt":1658772000,"main":{"temp":303.4,"feels_like":302.79,"temp_min":303.4,"temp_max":303.4,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":37,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":30},"wind":{"speed":1.76,"deg":252,"gust":1.68},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-25 18:00:00"},{"dt":1658782800,"main":{"temp":306.83,"feels_like":305.7,"temp_min":306.83,"temp_max":306.83,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":10},"wind":{"speed":3.93,"deg":233,"gust":3.09},"visibility":10000,"pop":0.07,"sys":{"pod":"d"},"dt_txt":"2022-07-25 21:00:00"},{"dt":1658793600,"main":{"temp":307.3,"feels_like":305.99,"temp_min":307.3,"temp_max":307.3,"pressure":1007,"sea_level":1007,"grnd_level":890,"humidity":26,"temp_kf":0},"weather":[{"id":801,"main":"Clouds","description":"few clouds","icon":"02d"}],"clouds":{"all":14},"wind":{"speed":2.35,"deg":173,"gust":2.79},"visibility":10000,"pop":0.04,"sys":{"pod":"d"},"dt_txt":"2022-07-26 00:00:00"},{"dt":1658804400,"main":{"temp":304.44,"feels_like":303.51,"temp_min":304.44,"temp_max":304.44,"pressure":1008,"sea_level":1008,"grnd_level":890,"humidity":33,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":53},"wind":{"speed":3.55,"deg":231,"gust":4.19},"visibility":10000,"pop":0.21,"rain":{"3h":0.59},"sys":{"pod":"n"},"dt_txt":"2022-07-26 03:00:00"},{"dt":1658815200,"main":{"temp":302.05,"feels_like":301.84,"temp_min":302.05,"temp_max":302.05,"pressure":1011,"sea_level":1011,"grnd_level":892,"humidity":42,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":40},"wind":{"speed":8.97,"deg":223,"gust":9.95},"visibility":10000,"pop":0.67,"rain":{"3h":0.78},"sys":{"pod":"n"},"dt_txt":"2022-07-26 06:00:00"},{"dt":1658826000,"main":{"temp":296.92,"feels_like":296.89,"temp_min":296.92,"temp_max":296.92,"pressure":1013,"sea_level":1013,"grnd_level":891,"humidity":59,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":74},"wind":{"speed":8.51,"deg":255,"gust":12.39},"visibility":10000,"pop":1,"rain":{"3h":2.48},"sys":{"pod":"n"},"dt_txt":"2022-07-26 09:00:00"},{"dt":1658836800,"main":{"temp":297.79,"feels_like":297.67,"temp_min":297.79,"temp_max":297.79,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":52,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":83},"wind":{"speed":1.08,"deg":30,"gust":1.9},"visibility":10000,"pop":0.96,"rain":{"3h":0.47},"sys":{"pod":"n"},"dt_txt":"2022-07-26 12:00:00"},{"dt":1658847600,"main":{"temp":299.15,"feels_like":299.15,"temp_min":299.15,"temp_max":299.15,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":48,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":5},"wind":{"speed":1.97,"deg":147,"gust":1.86},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 15:00:00"},{"dt":1658858400,"main":{"temp":303.26,"feels_like":302.63,"temp_min":303.26,"temp_max":303.26,"pressure":1013,"sea_level":1013,"grnd_level":894,"humidity":37,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":3},"wind":{"speed":3.22,"deg":160,"gust":3.9},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 
18:00:00"},{"dt":1658869200,"main":{"temp":306.82,"feels_like":305.41,"temp_min":306.82,"temp_max":306.82,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":26,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":0},"wind":{"speed":2.28,"deg":190,"gust":4.5},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 21:00:00"},{"dt":1658880000,"main":{"temp":307.75,"feels_like":306.25,"temp_min":307.75,"temp_max":307.75,"pressure":1007,"sea_level":1007,"grnd_level":890,"humidity":24,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":9},"wind":{"speed":2.97,"deg":230,"gust":3.38},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 00:00:00"},{"dt":1658890800,"main":{"temp":300.23,"feels_like":300.64,"temp_min":300.23,"temp_max":300.23,"pressure":1011,"sea_level":1011,"grnd_level":891,"humidity":50,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03n"}],"clouds":{"all":43},"wind":{"speed":12.97,"deg":211,"gust":13.66},"visibility":10000,"pop":0.18,"sys":{"pod":"n"},"dt_txt":"2022-07-27 03:00:00"},{"dt":1658901600,"main":{"temp":299.34,"feels_like":299.34,"temp_min":299.34,"temp_max":299.34,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":44,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":48},"wind":{"speed":3.75,"deg":300,"gust":4.88},"visibility":10000,"pop":0.54,"rain":{"3h":1.07},"sys":{"pod":"n"},"dt_txt":"2022-07-27 06:00:00"},{"dt":1658912400,"main":{"temp":299.37,"feels_like":299.37,"temp_min":299.37,"temp_max":299.37,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":45,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":85},"wind":{"speed":1.98,"deg":222,"gust":2.26},"visibility":10000,"pop":0.22,"rain":{"3h":0.61},"sys":{"pod":"n"},"dt_txt":"2022-07-27 09:00:00"},{"dt":1658923200,"main":{"temp":298.49,"feels_like":298.36,"temp_min":298.49,"temp_max":298.49,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":49,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":92},"wind":{"speed":2.76,"deg":201,"gust":3.49},"visibility":10000,"pop":0.39,"rain":{"3h":0.21},"sys":{"pod":"n"},"dt_txt":"2022-07-27 12:00:00"},{"dt":1658934000,"main":{"temp":299.89,"feels_like":299.97,"temp_min":299.89,"temp_max":299.89,"pressure":1014,"sea_level":1014,"grnd_level":894,"humidity":43,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":2.06,"deg":192,"gust":2.63},"visibility":10000,"pop":0.07,"sys":{"pod":"d"},"dt_txt":"2022-07-27 15:00:00"},{"dt":1658944800,"main":{"temp":302.96,"feels_like":302.04,"temp_min":302.96,"temp_max":302.96,"pressure":1013,"sea_level":1013,"grnd_level":894,"humidity":34,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":2.16,"deg":200,"gust":2.27},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 18:00:00"},{"dt":1658955600,"main":{"temp":305.75,"feels_like":304.42,"temp_min":305.75,"temp_max":305.75,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04d"}],"clouds":{"all":69},"wind":{"speed":1.89,"deg":210,"gust":3.07},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 
21:00:00"},{"dt":1658966400,"main":{"temp":306.99,"feels_like":305.48,"temp_min":306.99,"temp_max":306.99,"pressure":1008,"sea_level":1008,"grnd_level":890,"humidity":25,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04d"}],"clouds":{"all":53},"wind":{"speed":2.79,"deg":216,"gust":3.58},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 00:00:00"},{"dt":1658977200,"main":{"temp":305.31,"feels_like":304.15,"temp_min":305.31,"temp_max":305.31,"pressure":1009,"sea_level":1009,"grnd_level":891,"humidity":30,"temp_kf":0},"weather":[{"id":801,"main":"Clouds","description":"few clouds","icon":"02n"}],"clouds":{"all":20},"wind":{"speed":3.57,"deg":263,"gust":3.33},"visibility":10000,"pop":0,"sys":{"pod":"n"},"dt_txt":"2022-07-28 03:00:00"},{"dt":1658988000,"main":{"temp":303.91,"feels_like":302.82,"temp_min":303.91,"temp_max":303.91,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":32,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03n"}],"clouds":{"all":40},"wind":{"speed":3.01,"deg":274,"gust":3.8},"visibility":10000,"pop":0,"sys":{"pod":"n"},"dt_txt":"2022-07-28 06:00:00"},{"dt":1658998800,"main":{"temp":300.4,"feels_like":300.61,"temp_min":300.4,"temp_max":300.4,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":47,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":68},"wind":{"speed":4.19,"deg":250,"gust":5.84},"visibility":10000,"pop":0.3,"sys":{"pod":"n"},"dt_txt":"2022-07-28 09:00:00"},{"dt":1659009600,"main":{"temp":299.72,"feels_like":299.72,"temp_min":299.72,"temp_max":299.72,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":50,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":83},"wind":{"speed":3.69,"deg":289,"gust":4.76},"visibility":10000,"pop":0.2,"sys":{"pod":"n"},"dt_txt":"2022-07-28 
12:00:00"},{"dt":1659020400,"main":{"temp":301.23,"feels_like":301.18,"temp_min":301.23,"temp_max":301.23,"pressure":1013,"sea_level":1013,"grnd_level":893,"humidity":44,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":1.81,"deg":293,"gust":1.9},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 15:00:00"},{"dt":1659031200,"main":{"temp":304.59,"feels_like":303.8,"temp_min":304.59,"temp_max":304.59,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":34,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":1.28,"deg":281,"gust":1.14},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 18:00:00"},{"dt":1659042000,"main":{"temp":307.51,"feels_like":306.1,"temp_min":307.51,"temp_max":307.51,"pressure":1009,"sea_level":1009,"grnd_level":892,"humidity":25,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":39},"wind":{"speed":0.18,"deg":324,"gust":2.82},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 21:00:00"},{"dt":1659052800,"main":{"temp":303.03,"feels_like":302.59,"temp_min":303.03,"temp_max":303.03,"pressure":1009,"sea_level":1009,"grnd_level":891,"humidity":39,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10d"}],"clouds":{"all":61},"wind":{"speed":4.88,"deg":117,"gust":8.38},"visibility":10000,"pop":0.24,"rain":{"3h":0.43},"sys":{"pod":"d"},"dt_txt":"2022-07-29 00:00:00"},{"dt":1659063600,"main":{"temp":305.4,"feels_like":304.02,"temp_min":305.4,"temp_max":305.4,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":63},"wind":{"speed":3.9,"deg":97,"gust":4.96},"visibility":10000,"pop":0.4,"rain":{"3h":0.23},"sys":{"pod":"n"},"dt_txt":"2022-07-29 03:00:00"},{"dt":1659074400,"main":{"temp":303.33,"feels_like":302.6,"temp_min":303.33,"temp_max":303.33,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":36,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":76},"wind":{"speed":1.87,"deg":278,"gust":2.88},"visibility":10000,"pop":0.66,"rain":{"3h":0.29},"sys":{"pod":"n"},"dt_txt":"2022-07-29 06:00:00"},{"dt":1659085200,"main":{"temp":300.88,"feels_like":300.99,"temp_min":300.88,"temp_max":300.88,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":46,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04n"}],"clouds":{"all":96},"wind":{"speed":2.85,"deg":257,"gust":3.65},"visibility":10000,"pop":0.35,"sys":{"pod":"n"},"dt_txt":"2022-07-29 09:00:00"},{"dt":1659096000,"main":{"temp":298.79,"feels_like":298.9,"temp_min":298.79,"temp_max":298.79,"pressure":1015,"sea_level":1015,"grnd_level":893,"humidity":57,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04n"}],"clouds":{"all":91},"wind":{"speed":3.7,"deg":290,"gust":4.27},"visibility":10000,"pop":0.22,"sys":{"pod":"n"},"dt_txt":"2022-07-29 12:00:00"},{"dt":1659106800,"main":{"temp":300.73,"feels_like":300.99,"temp_min":300.73,"temp_max":300.73,"pressure":1015,"sea_level":1015,"grnd_level":895,"humidity":48,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":38},"wind":{"speed":1.69,"deg":29,"gust":2.24},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-29 
15:00:00"},{"dt":1659117600,"main":{"temp":304.83,"feels_like":303.96,"temp_min":304.83,"temp_max":304.83,"pressure":1014,"sea_level":1014,"grnd_level":895,"humidity":33,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":26},"wind":{"speed":3.47,"deg":98,"gust":3.54},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-29 18:00:00"}],"city":{"id":5520993,"name":"El Paso","coord":{"lat":31.7754,"lon":-106.4646},"country":"US","population":649121,"timezone":-21600,"sunrise":1658664981,"sunset":1658714884}} ```
True
[FEATURE_REQUEST] Weather Forecast Widget could use unified endpoint - ### Is your feature request related to a problem? If so, please describe. The current weather forecast implementation requires a paid license. If we utilize a different API, it would be available for free. ### Describe the solution you'd like In digging into the APIs for openweathermap.org. I found that the 5day 3hr forecast is available for free where the 16 day daily forecast is not. I think it would be useful to be able to utilize the free one instead of or along side the current one. The 5day 3hr does require the user to know their lat/long though. [3 hour 5 day forecast](https://openweathermap.org/forecast5) [Current API Availability and Pricing](https://openweathermap.org/full-price#current) ### Priority Low (Nice-to-have) ### Is this something you would be keen to implement Maybe Resulting JSON sample ``` {"cod":"200","message":0,"cnt":40,"list":[{"dt":1658696400,"main":{"temp":306.78,"feels_like":305.95,"temp_min":306.78,"temp_max":308.44,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":30,"temp_kf":-1.66},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":28},"wind":{"speed":1.73,"deg":267,"gust":1.54},"visibility":10000,"pop":0.09,"sys":{"pod":"d"},"dt_txt":"2022-07-24 21:00:00"},{"dt":1658707200,"main":{"temp":305.46,"feels_like":304.2,"temp_min":305.21,"temp_max":305.46,"pressure":1010,"sea_level":1010,"grnd_level":890,"humidity":29,"temp_kf":0.25},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10d"}],"clouds":{"all":33},"wind":{"speed":5.48,"deg":52,"gust":6.44},"visibility":10000,"pop":0.35,"rain":{"3h":0.26},"sys":{"pod":"d"},"dt_txt":"2022-07-25 00:00:00"},{"dt":1658718000,"main":{"temp":303.63,"feels_like":302.83,"temp_min":303.63,"temp_max":303.63,"pressure":1010,"sea_level":1010,"grnd_level":891,"humidity":35,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken 
clouds","icon":"04n"}],"clouds":{"all":82},"wind":{"speed":4.8,"deg":136,"gust":5.72},"visibility":10000,"pop":0.16,"sys":{"pod":"n"},"dt_txt":"2022-07-25 03:00:00"},{"dt":1658728800,"main":{"temp":299.75,"feels_like":299.75,"temp_min":299.75,"temp_max":299.75,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":51,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":89},"wind":{"speed":10.58,"deg":224,"gust":12.6},"visibility":10000,"pop":0.5,"rain":{"3h":0.2},"sys":{"pod":"n"},"dt_txt":"2022-07-25 06:00:00"},{"dt":1658739600,"main":{"temp":298.66,"feels_like":298.7,"temp_min":298.66,"temp_max":298.66,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":55,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":67},"wind":{"speed":6.65,"deg":262,"gust":8.52},"visibility":10000,"pop":0.56,"rain":{"3h":0.3},"sys":{"pod":"n"},"dt_txt":"2022-07-25 09:00:00"},{"dt":1658750400,"main":{"temp":298.04,"feels_like":298.13,"temp_min":298.04,"temp_max":298.04,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":59,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":62},"wind":{"speed":2.36,"deg":255,"gust":2.97},"visibility":10000,"pop":0.52,"sys":{"pod":"n"},"dt_txt":"2022-07-25 12:00:00"},{"dt":1658761200,"main":{"temp":299.73,"feels_like":299.73,"temp_min":299.73,"temp_max":299.73,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":51,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":47},"wind":{"speed":2.12,"deg":237,"gust":2.55},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-25 
15:00:00"},{"dt":1658772000,"main":{"temp":303.4,"feels_like":302.79,"temp_min":303.4,"temp_max":303.4,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":37,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":30},"wind":{"speed":1.76,"deg":252,"gust":1.68},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-25 18:00:00"},{"dt":1658782800,"main":{"temp":306.83,"feels_like":305.7,"temp_min":306.83,"temp_max":306.83,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":10},"wind":{"speed":3.93,"deg":233,"gust":3.09},"visibility":10000,"pop":0.07,"sys":{"pod":"d"},"dt_txt":"2022-07-25 21:00:00"},{"dt":1658793600,"main":{"temp":307.3,"feels_like":305.99,"temp_min":307.3,"temp_max":307.3,"pressure":1007,"sea_level":1007,"grnd_level":890,"humidity":26,"temp_kf":0},"weather":[{"id":801,"main":"Clouds","description":"few clouds","icon":"02d"}],"clouds":{"all":14},"wind":{"speed":2.35,"deg":173,"gust":2.79},"visibility":10000,"pop":0.04,"sys":{"pod":"d"},"dt_txt":"2022-07-26 00:00:00"},{"dt":1658804400,"main":{"temp":304.44,"feels_like":303.51,"temp_min":304.44,"temp_max":304.44,"pressure":1008,"sea_level":1008,"grnd_level":890,"humidity":33,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":53},"wind":{"speed":3.55,"deg":231,"gust":4.19},"visibility":10000,"pop":0.21,"rain":{"3h":0.59},"sys":{"pod":"n"},"dt_txt":"2022-07-26 03:00:00"},{"dt":1658815200,"main":{"temp":302.05,"feels_like":301.84,"temp_min":302.05,"temp_max":302.05,"pressure":1011,"sea_level":1011,"grnd_level":892,"humidity":42,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":40},"wind":{"speed":8.97,"deg":223,"gust":9.95},"visibility":10000,"pop":0.67,"rain":{"3h":0.78},"sys":{"pod":"n"},"dt_txt":"2022-07-26 06:00:00"},{"dt":1658826000,"main":{"temp":296.92,"feels_like":296.89,"temp_min":296.92,"temp_max":296.92,"pressure":1013,"sea_level":1013,"grnd_level":891,"humidity":59,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":74},"wind":{"speed":8.51,"deg":255,"gust":12.39},"visibility":10000,"pop":1,"rain":{"3h":2.48},"sys":{"pod":"n"},"dt_txt":"2022-07-26 09:00:00"},{"dt":1658836800,"main":{"temp":297.79,"feels_like":297.67,"temp_min":297.79,"temp_max":297.79,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":52,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":83},"wind":{"speed":1.08,"deg":30,"gust":1.9},"visibility":10000,"pop":0.96,"rain":{"3h":0.47},"sys":{"pod":"n"},"dt_txt":"2022-07-26 12:00:00"},{"dt":1658847600,"main":{"temp":299.15,"feels_like":299.15,"temp_min":299.15,"temp_max":299.15,"pressure":1014,"sea_level":1014,"grnd_level":893,"humidity":48,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":5},"wind":{"speed":1.97,"deg":147,"gust":1.86},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 15:00:00"},{"dt":1658858400,"main":{"temp":303.26,"feels_like":302.63,"temp_min":303.26,"temp_max":303.26,"pressure":1013,"sea_level":1013,"grnd_level":894,"humidity":37,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":3},"wind":{"speed":3.22,"deg":160,"gust":3.9},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 
18:00:00"},{"dt":1658869200,"main":{"temp":306.82,"feels_like":305.41,"temp_min":306.82,"temp_max":306.82,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":26,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":0},"wind":{"speed":2.28,"deg":190,"gust":4.5},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-26 21:00:00"},{"dt":1658880000,"main":{"temp":307.75,"feels_like":306.25,"temp_min":307.75,"temp_max":307.75,"pressure":1007,"sea_level":1007,"grnd_level":890,"humidity":24,"temp_kf":0},"weather":[{"id":800,"main":"Clear","description":"clear sky","icon":"01d"}],"clouds":{"all":9},"wind":{"speed":2.97,"deg":230,"gust":3.38},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 00:00:00"},{"dt":1658890800,"main":{"temp":300.23,"feels_like":300.64,"temp_min":300.23,"temp_max":300.23,"pressure":1011,"sea_level":1011,"grnd_level":891,"humidity":50,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03n"}],"clouds":{"all":43},"wind":{"speed":12.97,"deg":211,"gust":13.66},"visibility":10000,"pop":0.18,"sys":{"pod":"n"},"dt_txt":"2022-07-27 03:00:00"},{"dt":1658901600,"main":{"temp":299.34,"feels_like":299.34,"temp_min":299.34,"temp_max":299.34,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":44,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":48},"wind":{"speed":3.75,"deg":300,"gust":4.88},"visibility":10000,"pop":0.54,"rain":{"3h":1.07},"sys":{"pod":"n"},"dt_txt":"2022-07-27 06:00:00"},{"dt":1658912400,"main":{"temp":299.37,"feels_like":299.37,"temp_min":299.37,"temp_max":299.37,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":45,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":85},"wind":{"speed":1.98,"deg":222,"gust":2.26},"visibility":10000,"pop":0.22,"rain":{"3h":0.61},"sys":{"pod":"n"},"dt_txt":"2022-07-27 09:00:00"},{"dt":1658923200,"main":{"temp":298.49,"feels_like":298.36,"temp_min":298.49,"temp_max":298.49,"pressure":1013,"sea_level":1013,"grnd_level":892,"humidity":49,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":92},"wind":{"speed":2.76,"deg":201,"gust":3.49},"visibility":10000,"pop":0.39,"rain":{"3h":0.21},"sys":{"pod":"n"},"dt_txt":"2022-07-27 12:00:00"},{"dt":1658934000,"main":{"temp":299.89,"feels_like":299.97,"temp_min":299.89,"temp_max":299.89,"pressure":1014,"sea_level":1014,"grnd_level":894,"humidity":43,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":2.06,"deg":192,"gust":2.63},"visibility":10000,"pop":0.07,"sys":{"pod":"d"},"dt_txt":"2022-07-27 15:00:00"},{"dt":1658944800,"main":{"temp":302.96,"feels_like":302.04,"temp_min":302.96,"temp_max":302.96,"pressure":1013,"sea_level":1013,"grnd_level":894,"humidity":34,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":2.16,"deg":200,"gust":2.27},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 18:00:00"},{"dt":1658955600,"main":{"temp":305.75,"feels_like":304.42,"temp_min":305.75,"temp_max":305.75,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04d"}],"clouds":{"all":69},"wind":{"speed":1.89,"deg":210,"gust":3.07},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-27 
21:00:00"},{"dt":1658966400,"main":{"temp":306.99,"feels_like":305.48,"temp_min":306.99,"temp_max":306.99,"pressure":1008,"sea_level":1008,"grnd_level":890,"humidity":25,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04d"}],"clouds":{"all":53},"wind":{"speed":2.79,"deg":216,"gust":3.58},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 00:00:00"},{"dt":1658977200,"main":{"temp":305.31,"feels_like":304.15,"temp_min":305.31,"temp_max":305.31,"pressure":1009,"sea_level":1009,"grnd_level":891,"humidity":30,"temp_kf":0},"weather":[{"id":801,"main":"Clouds","description":"few clouds","icon":"02n"}],"clouds":{"all":20},"wind":{"speed":3.57,"deg":263,"gust":3.33},"visibility":10000,"pop":0,"sys":{"pod":"n"},"dt_txt":"2022-07-28 03:00:00"},{"dt":1658988000,"main":{"temp":303.91,"feels_like":302.82,"temp_min":303.91,"temp_max":303.91,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":32,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03n"}],"clouds":{"all":40},"wind":{"speed":3.01,"deg":274,"gust":3.8},"visibility":10000,"pop":0,"sys":{"pod":"n"},"dt_txt":"2022-07-28 06:00:00"},{"dt":1658998800,"main":{"temp":300.4,"feels_like":300.61,"temp_min":300.4,"temp_max":300.4,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":47,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":68},"wind":{"speed":4.19,"deg":250,"gust":5.84},"visibility":10000,"pop":0.3,"sys":{"pod":"n"},"dt_txt":"2022-07-28 09:00:00"},{"dt":1659009600,"main":{"temp":299.72,"feels_like":299.72,"temp_min":299.72,"temp_max":299.72,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":50,"temp_kf":0},"weather":[{"id":803,"main":"Clouds","description":"broken clouds","icon":"04n"}],"clouds":{"all":83},"wind":{"speed":3.69,"deg":289,"gust":4.76},"visibility":10000,"pop":0.2,"sys":{"pod":"n"},"dt_txt":"2022-07-28 
12:00:00"},{"dt":1659020400,"main":{"temp":301.23,"feels_like":301.18,"temp_min":301.23,"temp_max":301.23,"pressure":1013,"sea_level":1013,"grnd_level":893,"humidity":44,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":1.81,"deg":293,"gust":1.9},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 15:00:00"},{"dt":1659031200,"main":{"temp":304.59,"feels_like":303.8,"temp_min":304.59,"temp_max":304.59,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":34,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04d"}],"clouds":{"all":100},"wind":{"speed":1.28,"deg":281,"gust":1.14},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 18:00:00"},{"dt":1659042000,"main":{"temp":307.51,"feels_like":306.1,"temp_min":307.51,"temp_max":307.51,"pressure":1009,"sea_level":1009,"grnd_level":892,"humidity":25,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":39},"wind":{"speed":0.18,"deg":324,"gust":2.82},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-28 21:00:00"},{"dt":1659052800,"main":{"temp":303.03,"feels_like":302.59,"temp_min":303.03,"temp_max":303.03,"pressure":1009,"sea_level":1009,"grnd_level":891,"humidity":39,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10d"}],"clouds":{"all":61},"wind":{"speed":4.88,"deg":117,"gust":8.38},"visibility":10000,"pop":0.24,"rain":{"3h":0.43},"sys":{"pod":"d"},"dt_txt":"2022-07-29 00:00:00"},{"dt":1659063600,"main":{"temp":305.4,"feels_like":304.02,"temp_min":305.4,"temp_max":305.4,"pressure":1010,"sea_level":1010,"grnd_level":892,"humidity":28,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light 
rain","icon":"10n"}],"clouds":{"all":63},"wind":{"speed":3.9,"deg":97,"gust":4.96},"visibility":10000,"pop":0.4,"rain":{"3h":0.23},"sys":{"pod":"n"},"dt_txt":"2022-07-29 03:00:00"},{"dt":1659074400,"main":{"temp":303.33,"feels_like":302.6,"temp_min":303.33,"temp_max":303.33,"pressure":1012,"sea_level":1012,"grnd_level":893,"humidity":36,"temp_kf":0},"weather":[{"id":500,"main":"Rain","description":"light rain","icon":"10n"}],"clouds":{"all":76},"wind":{"speed":1.87,"deg":278,"gust":2.88},"visibility":10000,"pop":0.66,"rain":{"3h":0.29},"sys":{"pod":"n"},"dt_txt":"2022-07-29 06:00:00"},{"dt":1659085200,"main":{"temp":300.88,"feels_like":300.99,"temp_min":300.88,"temp_max":300.88,"pressure":1012,"sea_level":1012,"grnd_level":892,"humidity":46,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04n"}],"clouds":{"all":96},"wind":{"speed":2.85,"deg":257,"gust":3.65},"visibility":10000,"pop":0.35,"sys":{"pod":"n"},"dt_txt":"2022-07-29 09:00:00"},{"dt":1659096000,"main":{"temp":298.79,"feels_like":298.9,"temp_min":298.79,"temp_max":298.79,"pressure":1015,"sea_level":1015,"grnd_level":893,"humidity":57,"temp_kf":0},"weather":[{"id":804,"main":"Clouds","description":"overcast clouds","icon":"04n"}],"clouds":{"all":91},"wind":{"speed":3.7,"deg":290,"gust":4.27},"visibility":10000,"pop":0.22,"sys":{"pod":"n"},"dt_txt":"2022-07-29 12:00:00"},{"dt":1659106800,"main":{"temp":300.73,"feels_like":300.99,"temp_min":300.73,"temp_max":300.73,"pressure":1015,"sea_level":1015,"grnd_level":895,"humidity":48,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":38},"wind":{"speed":1.69,"deg":29,"gust":2.24},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-29 
15:00:00"},{"dt":1659117600,"main":{"temp":304.83,"feels_like":303.96,"temp_min":304.83,"temp_max":304.83,"pressure":1014,"sea_level":1014,"grnd_level":895,"humidity":33,"temp_kf":0},"weather":[{"id":802,"main":"Clouds","description":"scattered clouds","icon":"03d"}],"clouds":{"all":26},"wind":{"speed":3.47,"deg":98,"gust":3.54},"visibility":10000,"pop":0,"sys":{"pod":"d"},"dt_txt":"2022-07-29 18:00:00"}],"city":{"id":5520993,"name":"El Paso","coord":{"lat":31.7754,"lon":-106.4646},"country":"US","population":649121,"timezone":-21600,"sunrise":1658664981,"sunset":1658714884}} ```
main
weather forecast widget could use unified endpoint is your feature request related to a problem if so please describe the current weather forecast implementation requires a paid license if we utilize a different api it would be available for free describe the solution you d like in digging into the apis for openweathermap org i found that the forecast is available for free where the day daily forecast is not i think it would be useful to be able to utilize the free one instead of or along side the current one the does require the user to know their lat long though priority low nice to have is this something you would be keen to implement maybe resulting json sample cod message cnt list clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds 
all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n 
dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd 
level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop rain sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod n dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt dt main temp feels like temp min temp max pressure sea level grnd level humidity temp kf weather clouds all wind speed deg gust visibility pop sys pod d dt txt city id name el paso coord lat lon country us population timezone sunrise sunset
1
198,533
22,659,658,369
IssuesEvent
2022-07-02 01:13:34
loftwah/grindmodecypher.com
https://api.github.com/repos/loftwah/grindmodecypher.com
closed
WS-2019-0332 (Medium) detected in handlebars-4.0.10.min.js - autoclosed
security vulnerability
## WS-2019-0332 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.10.min.js</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.10/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.10/handlebars.min.js</a></p> <p>Path to vulnerable library: /grindmodecypher.com/wp-content/plugins/duplicator/assets/js/handlebars.min.js</p> <p> Dependency Hierarchy: - :x: **handlebars-4.0.10.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/grindmodecypher.com/commit/4a4d1cc8546fac228cc0a173dc1ff28e375df4bc">4a4d1cc8546fac228cc0a173dc1ff28e375df4bc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Arbitrary Code Execution vulnerability found in handlebars before 4.5.3. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system. It is due to an incomplete fix for a WS-2019-0331. 
<p>Publish Date: 2019-11-17 <p>URL: <a href=https://github.com/wycats/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0332</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p> <p>Release Date: 2019-12-05</p> <p>Fix Resolution: handlebars - 4.5.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0332 (Medium) detected in handlebars-4.0.10.min.js - autoclosed - ## WS-2019-0332 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.10.min.js</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.10/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.10/handlebars.min.js</a></p> <p>Path to vulnerable library: /grindmodecypher.com/wp-content/plugins/duplicator/assets/js/handlebars.min.js</p> <p> Dependency Hierarchy: - :x: **handlebars-4.0.10.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/loftwah/grindmodecypher.com/commit/4a4d1cc8546fac228cc0a173dc1ff28e375df4bc">4a4d1cc8546fac228cc0a173dc1ff28e375df4bc</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Arbitrary Code Execution vulnerability found in handlebars before 4.5.3. Lookup helper fails to validate templates. Attack may submit templates that execute arbitrary JavaScript in the system. It is due to an incomplete fix for a WS-2019-0331. 
<p>Publish Date: 2019-11-17 <p>URL: <a href=https://github.com/wycats/handlebars.js/commit/198887808780bbef9dba67a8af68ece091d5baa7>WS-2019-0332</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p> <p>Release Date: 2019-12-05</p> <p>Fix Resolution: handlebars - 4.5.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
ws medium detected in handlebars min js autoclosed ws medium severity vulnerability vulnerable library handlebars min js handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to vulnerable library grindmodecypher com wp content plugins duplicator assets js handlebars min js dependency hierarchy x handlebars min js vulnerable library found in head commit a href vulnerability details arbitrary code execution vulnerability found in handlebars before lookup helper fails to validate templates attack may submit templates that execute arbitrary javascript in the system it is due to an incomplete fix for a ws publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution handlebars step up your open source security game with whitesource
0
5,299
26,770,973,453
IssuesEvent
2023-01-31 14:06:41
grafana/k6-docs
https://api.github.com/repos/grafana/k6-docs
closed
Document how VUs are cycled in arrival-rate executors
Area: OSS Content Type: needsMaintainerHelp
Prompted by https://github.com/loadimpact/k6/pull/1623#discussion_r505206848, we should make sure to explain that the arrival-rate executors are going to cycle over all of the initialized VUs, given enough time, even if they only need a small fraction of them at any given point to run their allotted iterations per second.
True
Document how VUs are cycled in arrival-rate executors - Prompted by https://github.com/loadimpact/k6/pull/1623#discussion_r505206848, we should make sure to explain that the arrival-rate executors are going to cycle over all of the initialized VUs, given enough time, even if they only need a small fraction of them at any given point to run their allotted iterations per second.
main
document how vus are cycled in arrival rate executors prompted by we should make sure to explain that the arrival rate executors are going to cycle over all of the initialized vus given enough time even if they only need a small fraction of them at any given point to run their allotted iterations per second
1
754,292
26,380,628,933
IssuesEvent
2023-01-12 08:20:19
idom-team/idom
https://api.github.com/repos/idom-team/idom
closed
Move `idom.widgets.hotswap` to testing utils
priority: 3 (low) type: refactor
### Discussed in https://github.com/idom-team/idom/discussions/865 --- <div type='discussions-op-text'> <sup>Originally posted by **Archmonger** December 30, 2022</sup> Ever since we've formalized support for conditionally rendered components, hotswap seems to have become a rather pointless utility. This conversation is to discuss whether to deprecate/remove it.</div> --- <div type='discussions-op-text'> <sup>Originally posted by **rmorshea** December 30, 2022</sup> The primary function isn't really to facilitate conditional rendering. Rather, it's to allow you to swap components from outside the normal rendering flow. With that said, it's uses are pretty niche. At the moment it has one usage in [IDOM's testing utilities](https://github.com/idom-team/idom/blob/1e3fac1e4227ac3d5ff225970c95e0e6acfef6ff/src/idom/testing/backend.py#L44). I'd say it would make sense to turn it into a private util in idom.testing.backend.</div>
1.0
Move `idom.widgets.hotswap` to testing utils - ### Discussed in https://github.com/idom-team/idom/discussions/865 --- <div type='discussions-op-text'> <sup>Originally posted by **Archmonger** December 30, 2022</sup> Ever since we've formalized support for conditionally rendered components, hotswap seems to have become a rather pointless utility. This conversation is to discuss whether to deprecate/remove it.</div> --- <div type='discussions-op-text'> <sup>Originally posted by **rmorshea** December 30, 2022</sup> The primary function isn't really to facilitate conditional rendering. Rather, it's to allow you to swap components from outside the normal rendering flow. With that said, it's uses are pretty niche. At the moment it has one usage in [IDOM's testing utilities](https://github.com/idom-team/idom/blob/1e3fac1e4227ac3d5ff225970c95e0e6acfef6ff/src/idom/testing/backend.py#L44). I'd say it would make sense to turn it into a private util in idom.testing.backend.</div>
non_main
move idom widgets hotswap to testing utils discussed in originally posted by archmonger december ever since we ve formalized support for conditionally rendered components hotswap seems to have become a rather pointless utility this conversation is to discuss whether to deprecate remove it originally posted by rmorshea december the primary function isn t really to facilitate conditional rendering rather it s to allow you to swap components from outside the normal rendering flow with that said it s uses are pretty niche at the moment it has one usage in i d say it would make sense to turn it into a private util in idom testing backend
0
2,972
10,693,470,694
IssuesEvent
2019-10-23 08:54:36
diofant/diofant
https://api.github.com/repos/diofant/diofant
opened
Autoformat pull requests (have bot/action for this)
help wanted maintainability needs decision
Making pylint/flake8 happy may be annoying for beginners, so may be it does make sense to automate review & fixing such issues.
True
Autoformat pull requests (have bot/action for this) - Making pylint/flake8 happy may be annoying for beginners, so may be it does make sense to automate review & fixing such issues.
main
autoformat pull requests have bot action for this making pylint happy may be annoying for beginners so may be it does make sense to automate review fixing such issues
1
349
3,250,183,919
IssuesEvent
2015-10-18 19:50:05
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
Permission denied - /opt/homebrew-cask
awaiting maintainer feedback
brew cask install is not working --> Permission denied - /opt/homebrew-cask history osx 10.10 install brew brew works upgrade to osx 10.11 install brew cask --> cask nor working here the console log with the state and the error ``` imac4ado:sync ado$ brew cask doctor ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> OS X Release: 10.11 ==> OS X Release with Patchlevel: 10.11 ==> Hardware Architecture: intel-64 ==> Ruby Version: 2.0.0-p645 ==> Ruby Path: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby ==> Homebrew Version: 0.9.5 (git revision 142f61; last commit 2015-10-17) ==> Homebrew Executable Path: /usr/local/bin/brew ==> Homebrew Cellar Path: /usr/local/Cellar ==> Homebrew Repository Path: /usr/local ==> Homebrew Origin: https://github.com/Homebrew/homebrew ==> Homebrew-cask Version: 0.58.0 ==> Homebrew-cask Install Location: /usr/local/Cellar/brew-cask/0.58.0 ==> Homebrew-cask Staging Location: /opt/homebrew-cask/Caskroom (error: path does not exist) ==> Homebrew-cask Cached Downloads: /Library/Caches/Homebrew /Library/Caches/Homebrew/Casks 2 files (warning: run "brew cask cleanup") 1.12 megs (warning: run "brew cask cleanup") ==> Homebrew-cask Default Tap Path: /usr/local/Library/Taps/caskroom/homebrew-cask ==> Homebrew-cask Alternate Cask Taps: <NONE> ==> Homebrew-cask Default Tap Cask Count: 2806 ==> Contents of $LOAD_PATH: /usr/local/Cellar/brew-cask/0.58.0/rubylib /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin15 /Library/Ruby/Site/2.0.0/universal-darwin15 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin15 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin15 
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin15 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin15 ==> Contents of $RUBYLIB Environment Variable: <NONE> ==> Contents of $RUBYOPT Environment Variable: <NONE> ==> Contents of $RUBYPATH Environment Variable: <NONE> ==> Contents of $RBENV_VERSION Environment Variable: <NONE> ==> Contents of $CHRUBY_VERSION Environment Variable: <NONE> ==> Contents of $GEM_HOME Environment Variable: <NONE> ==> Contents of $GEM_PATH Environment Variable: <NONE> ==> Contents of $BUNDLE_PATH Environment Variable: <NONE> ==> Contents of $PATH Environment Variable: PATH="/~/sync:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Server.app/Contents/ServerRoot/usr/bin:/Applications/Server.app/Contents/ServerRoot/usr/sbin:/usr/local/Library/ENV/scm" ==> Contents of $SHELL Environment Variable: SHELL="/bin/bash" ==> Contents of Locale Environment Variables: LANG="de_DE.UTF-8" ==> Running As Privileged User: No imac4ado:sync ado$ brew cask instal supaview ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> Downloading http://twinside.free.fr/supaview/SupaView.1.3.2.zip Already downloaded: /Library/Caches/Homebrew/supaview-1.3.2.zip Error: Permission denied - /opt/homebrew-cask Most likely, this means you have an outdated version of homebrew-cask. 
Please run: brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup If this doesn’t fix the problem, please report this bug: https://github.com/caskroom/homebrew-cask/issues /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `fu_mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:219:in `block (2 levels) in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `reverse_each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `block in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `mkdir_p' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:100:in `extract_primary_container' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:65:in `install' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:20:in `block in install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `each' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:6:in `run' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:79:in `run_command' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:119:in `process' /usr/local/Cellar/brew-cask/0.58.0/rubylib/brew-cask-cmd.rb:19:in `<main>' imac4ado:sync ado$ brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup Updated Homebrew from 142f61a6 to 25f3f43f. 
==> Updated Formulae deis deisctl gauge redis Error: caskroom/cask/brew-cask 0.58.0 already installed imac4ado:sync ado$ brew cask instal supaview ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> Downloading http://twinside.free.fr/supaview/SupaView.1.3.2.zip Already downloaded: /Library/Caches/Homebrew/supaview-1.3.2.zip Error: Permission denied - /opt/homebrew-cask Most likely, this means you have an outdated version of homebrew-cask. Please run: brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup If this doesn’t fix the problem, please report this bug: https://github.com/caskroom/homebrew-cask/issues /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `fu_mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:219:in `block (2 levels) in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `reverse_each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `block in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `mkdir_p' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:100:in `extract_primary_container' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:65:in `install' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:20:in `block in install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `each' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:6:in `run' 
/usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:79:in `run_command' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:119:in `process' /usr/local/Cellar/brew-cask/0.58.0/rubylib/brew-cask-cmd.rb:19:in `<main>' imac4ado:sync ado$ ``` 'brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup' does not solve the problem here the current permissions: ``` imac4ado:sync ado$ sudo ls -arlt /opt/homebrew-cask total 0 drwxr-xr-x 2 ado staff 68 17 Okt 19:28 Caskroom drwx------@ 4 macports staff 136 17 Okt 19:28 .. drwxr-xr-x 3 ado staff 102 17 Okt 19:28 . ```
True
Permission denied - /opt/homebrew-cask - brew cask install is not working --> Permission denied - /opt/homebrew-cask history osx 10.10 install brew brew works upgrade to osx 10.11 install brew cask --> cask nor working here the console log with the state and the error ``` imac4ado:sync ado$ brew cask doctor ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> OS X Release: 10.11 ==> OS X Release with Patchlevel: 10.11 ==> Hardware Architecture: intel-64 ==> Ruby Version: 2.0.0-p645 ==> Ruby Path: /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby ==> Homebrew Version: 0.9.5 (git revision 142f61; last commit 2015-10-17) ==> Homebrew Executable Path: /usr/local/bin/brew ==> Homebrew Cellar Path: /usr/local/Cellar ==> Homebrew Repository Path: /usr/local ==> Homebrew Origin: https://github.com/Homebrew/homebrew ==> Homebrew-cask Version: 0.58.0 ==> Homebrew-cask Install Location: /usr/local/Cellar/brew-cask/0.58.0 ==> Homebrew-cask Staging Location: /opt/homebrew-cask/Caskroom (error: path does not exist) ==> Homebrew-cask Cached Downloads: /Library/Caches/Homebrew /Library/Caches/Homebrew/Casks 2 files (warning: run "brew cask cleanup") 1.12 megs (warning: run "brew cask cleanup") ==> Homebrew-cask Default Tap Path: /usr/local/Library/Taps/caskroom/homebrew-cask ==> Homebrew-cask Alternate Cask Taps: <NONE> ==> Homebrew-cask Default Tap Cask Count: 2806 ==> Contents of $LOAD_PATH: /usr/local/Cellar/brew-cask/0.58.0/rubylib /Library/Ruby/Site/2.0.0 /Library/Ruby/Site/2.0.0/x86_64-darwin15 /Library/Ruby/Site/2.0.0/universal-darwin15 /Library/Ruby/Site /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/x86_64-darwin15 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby/2.0.0/universal-darwin15 
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/vendor_ruby /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/x86_64-darwin15 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin15 ==> Contents of $RUBYLIB Environment Variable: <NONE> ==> Contents of $RUBYOPT Environment Variable: <NONE> ==> Contents of $RUBYPATH Environment Variable: <NONE> ==> Contents of $RBENV_VERSION Environment Variable: <NONE> ==> Contents of $CHRUBY_VERSION Environment Variable: <NONE> ==> Contents of $GEM_HOME Environment Variable: <NONE> ==> Contents of $GEM_PATH Environment Variable: <NONE> ==> Contents of $BUNDLE_PATH Environment Variable: <NONE> ==> Contents of $PATH Environment Variable: PATH="/~/sync:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/Server.app/Contents/ServerRoot/usr/bin:/Applications/Server.app/Contents/ServerRoot/usr/sbin:/usr/local/Library/ENV/scm" ==> Contents of $SHELL Environment Variable: SHELL="/bin/bash" ==> Contents of Locale Environment Variables: LANG="de_DE.UTF-8" ==> Running As Privileged User: No imac4ado:sync ado$ brew cask instal supaview ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> Downloading http://twinside.free.fr/supaview/SupaView.1.3.2.zip Already downloaded: /Library/Caches/Homebrew/supaview-1.3.2.zip Error: Permission denied - /opt/homebrew-cask Most likely, this means you have an outdated version of homebrew-cask. 
Please run: brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup If this doesn’t fix the problem, please report this bug: https://github.com/caskroom/homebrew-cask/issues /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `fu_mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:219:in `block (2 levels) in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `reverse_each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `block in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `mkdir_p' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:100:in `extract_primary_container' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:65:in `install' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:20:in `block in install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `each' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:6:in `run' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:79:in `run_command' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:119:in `process' /usr/local/Cellar/brew-cask/0.58.0/rubylib/brew-cask-cmd.rb:19:in `<main>' imac4ado:sync ado$ brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup Updated Homebrew from 142f61a6 to 25f3f43f. 
==> Updated Formulae deis deisctl gauge redis Error: caskroom/cask/brew-cask 0.58.0 already installed imac4ado:sync ado$ brew cask instal supaview ==> We need to make Caskroom for the first time at /opt/homebrew-cask/Caskroom ==> We'll set permissions properly so we won't need sudo in the future ==> Downloading http://twinside.free.fr/supaview/SupaView.1.3.2.zip Already downloaded: /Library/Caches/Homebrew/supaview-1.3.2.zip Error: Permission denied - /opt/homebrew-cask Most likely, this means you have an outdated version of homebrew-cask. Please run: brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup If this doesn’t fix the problem, please report this bug: https://github.com/caskroom/homebrew-cask/issues /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:245:in `fu_mkdir' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:219:in `block (2 levels) in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `reverse_each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:217:in `block in mkdir_p' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `each' /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/fileutils.rb:203:in `mkdir_p' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:100:in `extract_primary_container' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/installer.rb:65:in `install' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:20:in `block in install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `each' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:17:in `install_casks' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli/install.rb:6:in `run' 
/usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:79:in `run_command' /usr/local/Cellar/brew-cask/0.58.0/rubylib/hbc/cli.rb:119:in `process' /usr/local/Cellar/brew-cask/0.58.0/rubylib/brew-cask-cmd.rb:19:in `<main>' imac4ado:sync ado$ ``` 'brew update && brew upgrade brew-cask && brew cleanup && brew cask cleanup' does not solve the problem here the current permissions: ``` imac4ado:sync ado$ sudo ls -arlt /opt/homebrew-cask total 0 drwxr-xr-x 2 ado staff 68 17 Okt 19:28 Caskroom drwx------@ 4 macports staff 136 17 Okt 19:28 .. drwxr-xr-x 3 ado staff 102 17 Okt 19:28 . ```
main
permission denied opt homebrew cask brew cask install is not working permission denied opt homebrew cask history osx install brew brew works upgrade to osx install brew cask cask nor working here the console log with the state and the error sync ado brew cask doctor we need to make caskroom for the first time at opt homebrew cask caskroom we ll set permissions properly so we won t need sudo in the future os x release os x release with patchlevel hardware architecture intel ruby version ruby path system library frameworks ruby framework versions usr bin ruby homebrew version git revision last commit homebrew executable path usr local bin brew homebrew cellar path usr local cellar homebrew repository path usr local homebrew origin homebrew cask version homebrew cask install location usr local cellar brew cask homebrew cask staging location opt homebrew cask caskroom error path does not exist homebrew cask cached downloads library caches homebrew library caches homebrew casks files warning run brew cask cleanup megs warning run brew cask cleanup homebrew cask default tap path usr local library taps caskroom homebrew cask homebrew cask alternate cask taps homebrew cask default tap cask count contents of load path usr local cellar brew cask rubylib library ruby site library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal contents of rubylib environment variable contents of rubyopt environment variable contents of rubypath environment variable contents of rbenv 
version environment variable contents of chruby version environment variable contents of gem home environment variable contents of gem path environment variable contents of bundle path environment variable contents of path environment variable path sync opt local bin opt local sbin usr local bin usr local sbin usr local mysql bin usr local bin usr bin bin usr sbin sbin applications server app contents serverroot usr bin applications server app contents serverroot usr sbin usr local library env scm contents of shell environment variable shell bin bash contents of locale environment variables lang de de utf running as privileged user no sync ado brew cask instal supaview we need to make caskroom for the first time at opt homebrew cask caskroom we ll set permissions properly so we won t need sudo in the future downloading already downloaded library caches homebrew supaview zip error permission denied opt homebrew cask most likely this means you have an outdated version of homebrew cask please run brew update brew upgrade brew cask brew cleanup brew cask cleanup if this doesn’t fix the problem please report this bug system library frameworks ruby framework versions usr lib ruby fileutils rb in mkdir system library frameworks ruby framework versions usr lib ruby fileutils rb in fu mkdir system library frameworks ruby framework versions usr lib ruby fileutils rb in block levels in mkdir p system library frameworks ruby framework versions usr lib ruby fileutils rb in reverse each system library frameworks ruby framework versions usr lib ruby fileutils rb in block in mkdir p system library frameworks ruby framework versions usr lib ruby fileutils rb in each system library frameworks ruby framework versions usr lib ruby fileutils rb in mkdir p usr local cellar brew cask rubylib hbc installer rb in extract primary container usr local cellar brew cask rubylib hbc installer rb in install usr local cellar brew cask rubylib hbc cli install rb in block in install casks usr local 
cellar brew cask rubylib hbc cli install rb in each usr local cellar brew cask rubylib hbc cli install rb in install casks usr local cellar brew cask rubylib hbc cli install rb in run usr local cellar brew cask rubylib hbc cli rb in run command usr local cellar brew cask rubylib hbc cli rb in process usr local cellar brew cask rubylib brew cask cmd rb in sync ado brew update brew upgrade brew cask brew cleanup brew cask cleanup updated homebrew from to updated formulae deis deisctl gauge redis error caskroom cask brew cask already installed sync ado brew cask instal supaview we need to make caskroom for the first time at opt homebrew cask caskroom we ll set permissions properly so we won t need sudo in the future downloading already downloaded library caches homebrew supaview zip error permission denied opt homebrew cask most likely this means you have an outdated version of homebrew cask please run brew update brew upgrade brew cask brew cleanup brew cask cleanup if this doesn’t fix the problem please report this bug system library frameworks ruby framework versions usr lib ruby fileutils rb in mkdir system library frameworks ruby framework versions usr lib ruby fileutils rb in fu mkdir system library frameworks ruby framework versions usr lib ruby fileutils rb in block levels in mkdir p system library frameworks ruby framework versions usr lib ruby fileutils rb in reverse each system library frameworks ruby framework versions usr lib ruby fileutils rb in block in mkdir p system library frameworks ruby framework versions usr lib ruby fileutils rb in each system library frameworks ruby framework versions usr lib ruby fileutils rb in mkdir p usr local cellar brew cask rubylib hbc installer rb in extract primary container usr local cellar brew cask rubylib hbc installer rb in install usr local cellar brew cask rubylib hbc cli install rb in block in install casks usr local cellar brew cask rubylib hbc cli install rb in each usr local cellar brew cask rubylib hbc cli 
install rb in install casks usr local cellar brew cask rubylib hbc cli install rb in run usr local cellar brew cask rubylib hbc cli rb in run command usr local cellar brew cask rubylib hbc cli rb in process usr local cellar brew cask rubylib brew cask cmd rb in sync ado brew update brew upgrade brew cask brew cleanup brew cask cleanup does not solve the problem here the current permissions sync ado sudo ls arlt opt homebrew cask total drwxr xr x ado staff okt caskroom drwx macports staff okt drwxr xr x ado staff okt
1
126,092
4,972,113,681
IssuesEvent
2016-12-05 20:37:15
Polymer/polymer-cli
https://api.github.com/repos/Polymer/polymer-cli
closed
Use a different port if the current one is in use when serving
Priority: Low Status: Available Type: Enhancement
Use case: > the user is serving more than one project at the same time. It would be neat to find a port number that is open when I call `polymer serve` if I don't explicitly specifying the port number. Similar to how browser-sync does it. Finally, copy that URL to the clipboard.
1.0
Use a different port if the current one is in use when serving - Use case: > the user is serving more than one project at the same time. It would be neat to find a port number that is open when I call `polymer serve` if I don't explicitly specifying the port number. Similar to how browser-sync does it. Finally, copy that URL to the clipboard.
non_main
use a different port if the current one is in use when serving use case the user is serving more than one project at the same time it would be neat to find a port number that is open when i call polymer serve if i don t explicitly specifying the port number similar to how browser sync does it finally copy that url to the clipboard
0
251,000
18,921,684,719
IssuesEvent
2021-11-17 02:58:38
hajimehoshi/ebiten
https://api.github.com/repos/hajimehoshi/ebiten
closed
Inappropriate description in docs
documentation
Hi Doc page address: https://ebiten.org/documents/gopherjs.html It says > As GopherJS is not actively maintained now, it is recomended to use WebAssembly if possible. Actually, [GopherJS](https://github.com/gopherjs/gopherjs) is still under maintenance now. You can find it [here](https://github.com/gopherjs/gopherjs#whats-new). The statement here may cause misunderstanding :)
1.0
Inappropriate description in docs - Hi Doc page address: https://ebiten.org/documents/gopherjs.html It says > As GopherJS is not actively maintained now, it is recomended to use WebAssembly if possible. Actually, [GopherJS](https://github.com/gopherjs/gopherjs) is still under maintenance now. You can find it [here](https://github.com/gopherjs/gopherjs#whats-new). The statement here may cause misunderstanding :)
non_main
inappropriate description in docs hi doc page address it says as gopherjs is not actively maintained now it is recomended to use webassembly if possible actually is still under maintenance now you can find it the statement here may cause misunderstanding
0
1,546
6,572,237,133
IssuesEvent
2017-09-11 00:26:31
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ec2_vpc doesn't return the default route table id created
affects_1.9 aws bug_report cloud feature_idea waiting_on_maintainer
##### Issue Type: Bug ##### Ansible Version: ansible-1.9.2-1.el7.noarch ##### Ansible Configuration: ``` - name: Create VPC ec2_vpc: region: "{{ aws_region }}" state: present cidr_block: "{{ vpc_cidr }}" dns_hostnames: yes dns_support: yes instance_tenancy: default internet_gateway: yes resource_tags: "{{ vpc_tags }}" register: vpc_res - debug: var=vpc_res ``` ##### Environment: CentOS 7. Ansible installed from EPEL repo with the latest ec2_vpc.py module from development branch. ##### Summary: Upon successful creation of vpc the output (registered variable) doesn't contain the id of created route table. ##### Steps To Reproduce: ``` TASK: [vpc | debug var=vpc_res] *********************************************** ok: [127.0.0.1] => { "var": { "vpc_res": { "changed": false, "igw_id": "igw-e54c7e80", "invocation": { "module_args": "", "module_name": "ec2_vpc" }, "subnets": [], "vpc": { "cidr_block": "172.16.0.0/16", "dhcp_options_id": "dopt-d6382cb4", "id": "vpc-5fcd183b", "region": "us-east-1", "state": "available" }, "vpc_id": "vpc-5fcd183b" } } } ``` In AWS console is created the default routing table e.g. "rtb-95912af1". ##### Expected Results: The id of the created router table should be returned in the output so that it can be referenced later in playbook. ##### Actual Results: Output doesn't contain this information. Thanks, Constantin
True
ec2_vpc doesn't return the default route table id created - ##### Issue Type: Bug ##### Ansible Version: ansible-1.9.2-1.el7.noarch ##### Ansible Configuration: ``` - name: Create VPC ec2_vpc: region: "{{ aws_region }}" state: present cidr_block: "{{ vpc_cidr }}" dns_hostnames: yes dns_support: yes instance_tenancy: default internet_gateway: yes resource_tags: "{{ vpc_tags }}" register: vpc_res - debug: var=vpc_res ``` ##### Environment: CentOS 7. Ansible installed from EPEL repo with the latest ec2_vpc.py module from development branch. ##### Summary: Upon successful creation of vpc the output (registered variable) doesn't contain the id of created route table. ##### Steps To Reproduce: ``` TASK: [vpc | debug var=vpc_res] *********************************************** ok: [127.0.0.1] => { "var": { "vpc_res": { "changed": false, "igw_id": "igw-e54c7e80", "invocation": { "module_args": "", "module_name": "ec2_vpc" }, "subnets": [], "vpc": { "cidr_block": "172.16.0.0/16", "dhcp_options_id": "dopt-d6382cb4", "id": "vpc-5fcd183b", "region": "us-east-1", "state": "available" }, "vpc_id": "vpc-5fcd183b" } } } ``` In AWS console is created the default routing table e.g. "rtb-95912af1". ##### Expected Results: The id of the created router table should be returned in the output so that it can be referenced later in playbook. ##### Actual Results: Output doesn't contain this information. Thanks, Constantin
main
vpc doesn t return the default route table id created issue type bug ansible version ansible noarch ansible configuration name create vpc vpc region aws region state present cidr block vpc cidr dns hostnames yes dns support yes instance tenancy default internet gateway yes resource tags vpc tags register vpc res debug var vpc res environment centos ansible installed from epel repo with the latest vpc py module from development branch summary upon successful creation of vpc the output registered variable doesn t contain the id of created route table steps to reproduce task ok var vpc res changed false igw id igw invocation module args module name vpc subnets vpc cidr block dhcp options id dopt id vpc region us east state available vpc id vpc in aws console is created the default routing table e g rtb expected results the id of the created router table should be returned in the output so that it can be referenced later in playbook actual results output doesn t contain this information thanks constantin
1
3,031
11,211,472,286
IssuesEvent
2020-01-06 15:30:40
javascript-obfuscator/javascript-obfuscator
https://api.github.com/repos/javascript-obfuscator/javascript-obfuscator
closed
Maintainer call
maintaining
I am a thrid-year undergraduate, I wanna take this topic as my final thesis. So I wanna learn and maintain this project.
True
Maintainer call - I am a thrid-year undergraduate, I wanna take this topic as my final thesis. So I wanna learn and maintain this project.
main
maintainer call i am a thrid year undergraduate i wanna take this topic as my final thesis so i wanna learn and maintain this project
1
11,732
3,519,133,031
IssuesEvent
2016-01-12 15:48:06
linkeddata/ldnode
https://api.github.com/repos/linkeddata/ldnode
closed
Add documentation for `--key` and `--cert` parameters
documentation
Add some more docs to the `--key` and `--cert` parameters to the readme. Specifically: - What they're used for - Whether or not they're required for other parameters (like WebID+TLS) - Instructions how to generate the key/certificate.
1.0
Add documentation for `--key` and `--cert` parameters - Add some more docs to the `--key` and `--cert` parameters to the readme. Specifically: - What they're used for - Whether or not they're required for other parameters (like WebID+TLS) - Instructions how to generate the key/certificate.
non_main
add documentation for key and cert parameters add some more docs to the key and cert parameters to the readme specifically what they re used for whether or not they re required for other parameters like webid tls instructions how to generate the key certificate
0
26,760
27,166,227,505
IssuesEvent
2023-02-17 15:32:50
bevyengine/bevy
https://api.github.com/repos/bevyengine/bevy
closed
`ScheduleBuildSettings` should have a `use_shortnames` field
A-ECS C-Usability
## What problem does this solve or what need does it fill? System names can be very long and challenging to read when resolving ## What solution would you like? Add the field, set to `false` by default. Use the existing short_name code in bevy_utils to parse the system and component names if that field is set to `true`. ## Additional context Please don't do this until #7267 is merged for the sake of my sanity.
True
`ScheduleBuildSettings` should have a `use_shortnames` field - ## What problem does this solve or what need does it fill? System names can be very long and challenging to read when resolving ## What solution would you like? Add the field, set to `false` by default. Use the existing short_name code in bevy_utils to parse the system and component names if that field is set to `true`. ## Additional context Please don't do this until #7267 is merged for the sake of my sanity.
non_main
schedulebuildsettings should have a use shortnames field what problem does this solve or what need does it fill system names can be very long and challenging to read when resolving what solution would you like add the field set to false by default use the existing short name code in bevy utils to parse the system and component names if that field is set to true additional context please don t do this until is merged for the sake of my sanity
0
203,212
15,874,523,922
IssuesEvent
2021-04-09 05:14:20
AY2021S2-CS2103-W16-4/tp
https://api.github.com/repos/AY2021S2-CS2103-W16-4/tp
closed
[PE-D] Wrong person type tag for add command in summary
documentation
There is a typo for the person type tag in the command summary. The person type tag is written as `tp/ROLE` instead of `pt/ROLE`. ![image.png](https://raw.githubusercontent.com/tanboonji/ped/main/files/012b01bc-c822-4800-b162-badc952509d5.png) <!--session: 1617429872400-053517af-66af-4ebb-b918-525f72d01fbd--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: tanboonji/ped#10
1.0
[PE-D] Wrong person type tag for add command in summary - There is a typo for the person type tag in the command summary. The person type tag is written as `tp/ROLE` instead of `pt/ROLE`. ![image.png](https://raw.githubusercontent.com/tanboonji/ped/main/files/012b01bc-c822-4800-b162-badc952509d5.png) <!--session: 1617429872400-053517af-66af-4ebb-b918-525f72d01fbd--> ------------- Labels: `severity.VeryLow` `type.DocumentationBug` original: tanboonji/ped#10
non_main
wrong person type tag for add command in summary there is a typo for the person type tag in the command summary the person type tag is written as tp role instead of pt role labels severity verylow type documentationbug original tanboonji ped
0
33,583
16,038,370,003
IssuesEvent
2021-04-22 02:50:07
juicedata/juicefs
https://api.github.com/repos/juicedata/juicefs
closed
Performance 3x ~ 8x slower than s5cmd (for large files)
area/performance
While comparing basic read/write operations, it appears than `s5cmd` is 3x ~ 8x faster than `juicefs` **What happened**: `#### WRITE IO ####` ``` $ time cp 1gb_file.txt /mnt/juicefs0/ real 0m50.859s user 0m0.016s sys 0m1.365s ``` ``` $ time s5cmd cp 1gb_file.txt s3://bucket/path/ real 0m20.614s user 0m9.411s sys 0m3.232s ``` `#### READ IO ####` ``` $ time cp /mnt/juicefs0/1gb_file.txt . real 0m45.539s user 0m0.014s sys 0m1.578s ``` ``` $ time s5cmd cp s3://bucket/path/1gb_file.txt . real 0m6.074s user 0m1.186s sys 0m2.504s ``` **Environment**: - JuiceFS version or Hadoop Java SDK version: `juicefs version 0.12.1 (2021-04-15T08:18:25Z 7b4df23)` - Cloud provider or hardware configuration running JuiceFS: `Linode 1 GB VM` - OS: `Fedora 33 (Server Edition)` - Kernel: `Linux 5.11.12-200.fc33.x86_64 #1 SMP Thu Apr 8 02:34:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux` - Object storage: `Linode` - Redis info: `Redis 6.2.1` - Network connectivity (JuiceFS to Redis, JuiceFS to object storage): `redis (local), S3 Object Storage (Linode)`
True
Performance 3x ~ 8x slower than s5cmd (for large files) - While comparing basic read/write operations, it appears than `s5cmd` is 3x ~ 8x faster than `juicefs` **What happened**: `#### WRITE IO ####` ``` $ time cp 1gb_file.txt /mnt/juicefs0/ real 0m50.859s user 0m0.016s sys 0m1.365s ``` ``` $ time s5cmd cp 1gb_file.txt s3://bucket/path/ real 0m20.614s user 0m9.411s sys 0m3.232s ``` `#### READ IO ####` ``` $ time cp /mnt/juicefs0/1gb_file.txt . real 0m45.539s user 0m0.014s sys 0m1.578s ``` ``` $ time s5cmd cp s3://bucket/path/1gb_file.txt . real 0m6.074s user 0m1.186s sys 0m2.504s ``` **Environment**: - JuiceFS version or Hadoop Java SDK version: `juicefs version 0.12.1 (2021-04-15T08:18:25Z 7b4df23)` - Cloud provider or hardware configuration running JuiceFS: `Linode 1 GB VM` - OS: `Fedora 33 (Server Edition)` - Kernel: `Linux 5.11.12-200.fc33.x86_64 #1 SMP Thu Apr 8 02:34:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux` - Object storage: `Linode` - Redis info: `Redis 6.2.1` - Network connectivity (JuiceFS to Redis, JuiceFS to object storage): `redis (local), S3 Object Storage (Linode)`
non_main
performance slower than for large files while comparing basic read write operations it appears than is faster than juicefs what happened write io time cp file txt mnt real user sys time cp file txt bucket path real user sys read io time cp mnt file txt real user sys time cp bucket path file txt real user sys environment juicefs version or hadoop java sdk version juicefs version cloud provider or hardware configuration running juicefs linode gb vm os fedora server edition kernel linux smp thu apr utc gnu linux object storage linode redis info redis network connectivity juicefs to redis juicefs to object storage redis local object storage linode
0
1,631
6,572,657,006
IssuesEvent
2017-09-11 04:08:25
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
S3 Module 'Failed to connect to S3: Region does not seem to be available for aws module boto.s3.'
affects_2.1 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - s3 ##### ANSIBLE VERSION ``` 2.1.0 - devel ``` ##### OS / ENVIRONMENT - CentOS 7.2 / MacOS X 10.11.4 - Boto 2.39.0 ##### SUMMARY Since commit `0dd58e932680af5d3544a045c5ea0bd0c9eadeb0` (Use connect_to_aws where possible) this error started to happen. Testing with one commit before: `344cf5fc0e2c8637fe9513206b2c843ca60264cf` it is working fine. ##### STEPS TO REPRODUCE ``` ansible -i localhost, -c local -m s3 -a 'bucket=<somebucket> object=<some_object> dest=/tmp/file mode=get' localhost localhost | FAILED! => { "changed": false, "failed": true, "msg": "Failed to connect to S3: Region does not seem to be available for aws module boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" } ``` ##### EXPECTED RESULTS ``` localhost | SUCCESS => { "changed": true, "msg": "GET operation complete" } ```
True
S3 Module 'Failed to connect to S3: Region does not seem to be available for aws module boto.s3.' - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - s3 ##### ANSIBLE VERSION ``` 2.1.0 - devel ``` ##### OS / ENVIRONMENT - CentOS 7.2 / MacOS X 10.11.4 - Boto 2.39.0 ##### SUMMARY Since commit `0dd58e932680af5d3544a045c5ea0bd0c9eadeb0` (Use connect_to_aws where possible) this error started to happen. Testing with one commit before: `344cf5fc0e2c8637fe9513206b2c843ca60264cf` it is working fine. ##### STEPS TO REPRODUCE ``` ansible -i localhost, -c local -m s3 -a 'bucket=<somebucket> object=<some_object> dest=/tmp/file mode=get' localhost localhost | FAILED! => { "changed": false, "failed": true, "msg": "Failed to connect to S3: Region does not seem to be available for aws module boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" } ``` ##### EXPECTED RESULTS ``` localhost | SUCCESS => { "changed": true, "msg": "GET operation complete" } ```
main
module failed to connect to region does not seem to be available for aws module boto issue type bug report component name ansible version devel os environment centos macos x boto summary since commit use connect to aws where possible this error started to happen testing with one commit before it is working fine steps to reproduce ansible i localhost c local m a bucket object dest tmp file mode get localhost localhost failed changed false failed true msg failed to connect to region does not seem to be available for aws module boto if the region definitely exists you may need to upgrade boto or extend with endpoints path expected results localhost success changed true msg get operation complete
1
4,691
24,209,879,427
IssuesEvent
2022-09-25 18:47:29
Lissy93/dashy
https://api.github.com/repos/Lissy93/dashy
closed
Authentik IDP Integration
🤷‍♂️ Question 👤 Awaiting Maintainer Response
### Question I am trying to tie in my IDP for use with Dashy. The documentation was sparse on setting up other auth methods (OAuth2/OpenID/SAML) . Is there any other guidance on this? I am using Authentik and have many options for provider setups. Any help/insight would be appreciated. ### Category Authentication ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
True
Authentik IDP Integration - ### Question I am trying to tie in my IDP for use with Dashy. The documentation was sparse on setting up other auth methods (OAuth2/OpenID/SAML) . Is there any other guidance on this? I am using Authentik and have many options for provider setups. Any help/insight would be appreciated. ### Category Authentication ### Please tick the boxes - [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number) - [X] You've checked that this [question hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue) - [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide - [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct)
main
authentik idp integration question i am trying to tie in my idp for use with dashy the documentation was sparse on setting up other auth methods openid saml is there any other guidance on this i am using authentik and have many options for provider setups any help insight would be appreciated category authentication please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the
1
1,917
6,577,706,608
IssuesEvent
2017-09-12 02:45:07
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
s3 - manage objects in S3 - Problem in coping
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### Issue Type: Please pick one and delete the rest: - Bug Report ##### Plugin Name: s3 - manage objects in S3 ##### Ansible Version: ``` 2.0.0.2 ``` ##### Ansible Configuration: Please mention any settings you've changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### Environment: N/A ##### Summary: We are using S3 module in our playbook to copy files from S3. Recently we added versioning to our Bucket and after a while we decided to suspend it. (Note that after enabling versioning you can't disable it anymore but just to suspend it). Unfortunately after disabling this option there is no option to cp a file using S3 module. I also tried to add "version=null" as shown in my bucket but still, action is failed. Can you please provide of a workaround for this case. ##### Steps To Reproduce: Task in ansible playbook : ``` s3: bucket=bla.bla.com object=/jobs/systems_envphp/test.php dest={{ APP_DIR }}/env.php mode=get overwrite=different ``` - Change source bucket to versioning. - Move bucket to suspend versioning. - Try to run the task again. ##### Expected Results: File will be copied same as before (note that current file version in s3 null). File is well copied using aws cli tools but not using ansible. ##### Actual Results: Recieving an error and task is failed. ``` fatal: [X.X.X.X]: FAILED! 
=> {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 2823, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1482, in get_file\r\n query_args=None)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>0D9B24E960DEAAC4</RequestId><HostId>0YODxs2JmkQchhruCaN1zs6etW35sv91lJ9F9T/6R/fpyES6883QAwCyrHYfrbpGn+vmMIUnRKA=</HostId></Error>\r\n", "msg": "MODULE FAILURE", "parsed": false} fatal: [X.X.X.X]: FAILED! 
=> {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 2823, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1482, in get_file\r\n query_args=None)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>EF295D957B42B22F</RequestId><HostId>ajmqH4MRXArOysKrGB+Ya72krnNBxEWuyzi1JUO6ZLvYMD2E+mauFJGFwnKkYWQHMCGEB4mIgfQ=</HostId></Error>\r\n", "msg": "MODULE FAILURE", "parsed": false} ```
True
s3 - manage objects in S3 - Problem in coping - ##### Issue Type: Please pick one and delete the rest: - Bug Report ##### Plugin Name: s3 - manage objects in S3 ##### Ansible Version: ``` 2.0.0.2 ``` ##### Ansible Configuration: Please mention any settings you've changed/added/removed in ansible.cfg (or using the ANSIBLE_\* environment variables). ##### Environment: N/A ##### Summary: We are using S3 module in our playbook to copy files from S3. Recently we added versioning to our Bucket and after a while we decided to suspend it. (Note that after enabling versioning you can't disable it anymore but just to suspend it). Unfortunately after disabling this option there is no option to cp a file using S3 module. I also tried to add "version=null" as shown in my bucket but still, action is failed. Can you please provide of a workaround for this case. ##### Steps To Reproduce: Task in ansible playbook : ``` s3: bucket=bla.bla.com object=/jobs/systems_envphp/test.php dest={{ APP_DIR }}/env.php mode=get overwrite=different ``` - Change source bucket to versioning. - Move bucket to suspend versioning. - Try to run the task again. ##### Expected Results: File will be copied same as before (note that current file version in s3 null). File is well copied using aws cli tools but not using ansible. ##### Actual Results: Recieving an error and task is failed. ``` fatal: [X.X.X.X]: FAILED! 
=> {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 2823, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.78-242469105647192/s3\", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1482, in get_file\r\n query_args=None)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>0D9B24E960DEAAC4</RequestId><HostId>0YODxs2JmkQchhruCaN1zs6etW35sv91lJ9F9T/6R/fpyES6883QAwCyrHYfrbpGn+vmMIUnRKA=</HostId></Error>\r\n", "msg": "MODULE FAILURE", "parsed": false} fatal: [X.X.X.X]: FAILED! 
=> {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 2823, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 496, in main\r\n download_s3file(module, s3, bucket, obj, dest, retries, version=version)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1456059315.77-259233513755276/s3\", line 323, in download_s3file\r\n key.get_contents_to_filename(dest)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1712, in get_contents_to_filename\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1650, in get_contents_to_file\r\n response_headers=response_headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1482, in get_file\r\n query_args=None)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1514, in _get_file_internal\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 343, in open\r\n override_num_retries=override_num_retries)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 303, in open_read\r\n self.resp.reason, body)\r\nboto.exception.S3ResponseError: S3ResponseError: 403 Forbidden\r\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\r\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>EF295D957B42B22F</RequestId><HostId>ajmqH4MRXArOysKrGB+Ya72krnNBxEWuyzi1JUO6ZLvYMD2E+mauFJGFwnKkYWQHMCGEB4mIgfQ=</HostId></Error>\r\n", "msg": "MODULE FAILURE", "parsed": false} ```
main
manage objects in problem in coping issue type please pick one and delete the rest bug report plugin name manage objects in ansible version ansible configuration please mention any settings you ve changed added removed in ansible cfg or using the ansible environment variables environment n a summary we are using module in our playbook to copy files from recently we added versioning to our bucket and after a while we decided to suspend it note that after enabling versioning you can t disable it anymore but just to suspend it unfortunately after disabling this option there is no option to cp a file using module i also tried to add version null as shown in my bucket but still action is failed can you please provide of a workaround for this case steps to reproduce task in ansible playbook bucket bla bla com object jobs systems envphp test php dest app dir env php mode get overwrite different change source bucket to versioning move bucket to suspend versioning try to run the task again expected results file will be copied same as before note that current file version in null file is well copied using aws cli tools but not using ansible actual results recieving an error and task is failed fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp line in r n main r n file home ubuntu ansible tmp ansible tmp line in main r n download module bucket obj dest retries version version r n file home ubuntu ansible tmp ansible tmp line in download r n key get contents to filename dest r n file usr local lib dist packages boto key py line in get contents to filename r n response headers response headers r n file usr local lib dist packages boto key py line in get contents to file r n response headers response headers r n file usr local lib dist packages boto key py line in get file r n query args none r n file usr local lib dist packages boto key py line in get file internal r n override num 
retries override num retries r n file usr local lib dist packages boto key py line in open r n override num retries override num retries r n file usr local lib dist packages boto key py line in open read r n self resp reason body r nboto exception forbidden r n r n accessdenied access denied vmmiunrka r n msg module failure parsed false fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file home ubuntu ansible tmp ansible tmp line in r n main r n file home ubuntu ansible tmp ansible tmp line in main r n download module bucket obj dest retries version version r n file home ubuntu ansible tmp ansible tmp line in download r n key get contents to filename dest r n file usr local lib dist packages boto key py line in get contents to filename r n response headers response headers r n file usr local lib dist packages boto key py line in get contents to file r n response headers response headers r n file usr local lib dist packages boto key py line in get file r n query args none r n file usr local lib dist packages boto key py line in get file internal r n override num retries override num retries r n file usr local lib dist packages boto key py line in open r n override num retries override num retries r n file usr local lib dist packages boto key py line in open read r n self resp reason body r nboto exception forbidden r n r n accessdenied access denied r n msg module failure parsed false
1
136,428
12,710,322,950
IssuesEvent
2020-06-23 13:43:43
bbc/simorgh
https://api.github.com/repos/bbc/simorgh
opened
Update Release Info for IDX Pages
Documentation IDX Refinement Needed ws-home
Resolves n/a *Overall change:* Update [docs/Simorgh-Release-Info.md](https://github.com/bbc/simorgh/blob/latest/docs/Simorgh-Release-Info.md) with most read page details *Code changes:* - Add release dates for persian/afghanistan and ukrainian/ukraine_in_russian IDX pages and update Photogallery date - Update scripts/simorghPages.js to include persian/afghanistan and ukrainian/ukraine_in_russian IDX pages - run node scripts/simorghPages.js to update Release-info doc - [ ] I have assigned myself to this PR and the corresponding issues - [ ] I have added labels to this PR for the relevant pod(s) affected by these changes - [ ] I have assigned this PR to the Simorgh project *Testing:* - Automated (jest and/or cypress) tests added (for new features) or updated (for existing features) - If necessary, I have run the local E2E non-smoke tests relevant to my changes (CYPRESS_APP_ENV=local CYPRESS_SMOKE=false npm run test:e2e:interactive) - This PR requires manual testing
1.0
Update Release Info for IDX Pages - Resolves n/a *Overall change:* Update [docs/Simorgh-Release-Info.md](https://github.com/bbc/simorgh/blob/latest/docs/Simorgh-Release-Info.md) with most read page details *Code changes:* - Add release dates for persian/afghanistan and ukrainian/ukraine_in_russian IDX pages and update Photogallery date - Update scripts/simorghPages.js to include persian/afghanistan and ukrainian/ukraine_in_russian IDX pages - run node scripts/simorghPages.js to update Release-info doc - [ ] I have assigned myself to this PR and the corresponding issues - [ ] I have added labels to this PR for the relevant pod(s) affected by these changes - [ ] I have assigned this PR to the Simorgh project *Testing:* - Automated (jest and/or cypress) tests added (for new features) or updated (for existing features) - If necessary, I have run the local E2E non-smoke tests relevant to my changes (CYPRESS_APP_ENV=local CYPRESS_SMOKE=false npm run test:e2e:interactive) - This PR requires manual testing
non_main
update release info for idx pages resolves n a overall change update with most read page details code changes add release dates for persian afghanistan and ukrainian ukraine in russian idx pages and update photogallery date update scripts simorghpages js to include persian afghanistan and ukrainian ukraine in russian idx pages run node scripts simorghpages js to update release info doc i have assigned myself to this pr and the corresponding issues i have added labels to this pr for the relevant pod s affected by these changes i have assigned this pr to the simorgh project testing automated jest and or cypress tests added for new features or updated for existing features if necessary i have run the local non smoke tests relevant to my changes cypress app env local cypress smoke false npm run test interactive this pr requires manual testing
0
350,405
24,982,148,701
IssuesEvent
2022-11-02 12:35:25
NLnetLabs/routinator
https://api.github.com/repos/NLnetLabs/routinator
closed
Wrong HTTP port publish argument shown in Docker docs
docker documentation
Our documentation examples for running with Docker mistakenly publish HTTP port 8323 rather than 9556. While port 8323 is consistent with the rest of the examples in the documentation, since [June 2019](https://github.com/NLnetLabs/routinator/commit/1ad12887e4e1e92c745b12be4719261cd443a6c3) the Docker image listens on port 9556 presumably because that port was allocated to us in the [Prometheus port registry](https://github.com/prometheus/prometheus/wiki/Default-port-allocations) just prior to that in [March 2019](https://github.com/NLnetLabs/routinator/issues/91). Changing the default port that the Routinator Docker image listens on might break existing users but it would be good to at least fix the errant use of `-p 8323:8323` in the documentation which publishes a port to the host that the Routinator process inside container doesn't actually listen on, it should instead show `-p 9556:9556`.
1.0
Wrong HTTP port publish argument shown in Docker docs - Our documentation examples for running with Docker mistakenly publish HTTP port 8323 rather than 9556. While port 8323 is consistent with the rest of the examples in the documentation, since [June 2019](https://github.com/NLnetLabs/routinator/commit/1ad12887e4e1e92c745b12be4719261cd443a6c3) the Docker image listens on port 9556 presumably because that port was allocated to us in the [Prometheus port registry](https://github.com/prometheus/prometheus/wiki/Default-port-allocations) just prior to that in [March 2019](https://github.com/NLnetLabs/routinator/issues/91). Changing the default port that the Routinator Docker image listens on might break existing users but it would be good to at least fix the errant use of `-p 8323:8323` in the documentation which publishes a port to the host that the Routinator process inside container doesn't actually listen on, it should instead show `-p 9556:9556`.
non_main
wrong http port publish argument shown in docker docs our documentation examples for running with docker mistakenly publish http port rather than while port is consistent with the rest of the examples in the documentation since the docker image listens on port presumably because that port was allocated to us in the just prior to that in changing the default port that the routinator docker image listens on might break existing users but it would be good to at least fix the errant use of p in the documentation which publishes a port to the host that the routinator process inside the container doesn t actually listen on it should instead show p
0
61,147
7,445,160,710
IssuesEvent
2018-03-28 02:48:06
govau/dta-gov-au
https://api.github.com/repos/govau/dta-gov-au
opened
Update card size on Level 2 landing pages
design enhancement
The current card size is too high and makes the page stretch out. Cards should shrink up slightly with less padding and be three across.
1.0
Update card size on Level 2 landing pages - The current card size is too high and makes the page stretch out. Cards should shrink up slightly with less padding and be three across.
non_main
update card size on level landing pages the current card size is too high and makes the page stretch out cards should shrink up slightly with less padding and be three across
0
425,744
12,345,665,348
IssuesEvent
2020-05-15 09:21:27
Bhetghat/Bhetghat
https://api.github.com/repos/Bhetghat/Bhetghat
closed
Ability to hide and show sidebars
Priority Low
- [x] For the small screen, automatically hide right and left sidebar. - [x] Push content to the right or left when show/hide event fires.
1.0
Ability to hide and show sidebars - - [x] For the small screen, automatically hide right and left sidebar. - [x] Push content to the right or left when show/hide event fires.
non_main
ability to hide and show sidebars for the small screen automatically hide right and left sidebar push content to the right or left when show hide event fires
0
66,656
12,810,467,644
IssuesEvent
2020-07-03 18:45:54
CeuAzul/ADR
https://api.github.com/repos/CeuAzul/ADR
closed
main.py needs to be refactored
1.0 code structure
- Change its name to something more meaningful - Remove optimization related code - Be able to receive "rules" parameters (from the competition manual) or delegate this to some other function
1.0
main.py needs to be refactored - - Change its name to something more meaningful - Remove optimization related code - Be able to receive "rules" parameters (from the competition manual) or delegate this to some other function
non_main
main py needs to be refactored change its name to something more meaningful remove optimization related code be able to receive rules parameters from the competition manual or delegate this to some other function
0
4,718
24,342,402,810
IssuesEvent
2022-10-01 21:52:01
beekama/NutritionApp
https://api.github.com/repos/beekama/NutritionApp
closed
Duplicated PieChart Code
maintainability
There seems to be a lot of duplicated code relating to pie-charts in `Recommendations.java` and `MainActivity.java`. This should be investigated and potentially extracted into its own class and functions.
True
Duplicated PieChart Code - There seems to be a lot of duplicated code relating to pie-charts in `Recommendations.java` and `MainActivity.java`. This should be investigated and potentially extracted into its own class and functions.
main
duplicated piechart code there seems to be a lot of duplicated code relating to pie charts in recommendations java and mainactivity java this should be investigated and potentially extracted into its own class and functions
1
75,690
25,999,672,669
IssuesEvent
2022-12-20 14:24:31
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
opened
[🐛 Bug]: NullPointerException with jdk http client
I-defect needs-triaging
### What happened? When trying to connect a jdk http client webdriver to saucelabs with a proxy (with authentication) I get a java.lang.NullPointerException. ### How can we reproduce the issue? ```shell System.setProperty("webdriver.http.factory", "jdk-http-client"); ChromeOptions browserOptions = new ChromeOptions(); browserOptions.setPlatformName("Windows 10"); Map<String, Object> sauceOptions = new HashMap<String, Object>(); sauceOptions.put("name", "test"); sauceOptions.put("username", "..."); sauceOptions.put("accessKey", "..."); Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("...", 1234)); java.net.Authenticator auth = new java.net.Authenticator() { public PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication("username", "pwd".toCharArray()); } }; java.net.Authenticator.setDefault(auth); ClientConfig config = ClientConfig.defaultConfig() .proxy(proxy) .baseUrl(new URL("https://ondemand.eu-central-1.saucelabs.com:443/wd/hub")); WebDriver driver = RemoteWebDriver.builder() .oneOf(browserOptions) .setCapability("sauce:options", sauceOptions) .config(config).build(); ``` ### Relevant log output ```shell Exception in thread "main" java.lang.NullPointerException at java.base/java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:108) at org.openqa.selenium.remote.http.jdk.JdkHttpMessages.lambda$createResponse$4(JdkHttpMessages.java:149) at org.openqa.selenium.remote.http.Contents.bytes(Contents.java:80) at org.openqa.selenium.remote.http.Contents.string(Contents.java:97) at org.openqa.selenium.remote.http.Contents.string(Contents.java:101) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:131) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:106) at org.openqa.selenium.remote.RemoteWebDriverBuilder.getRemoteDriver(RemoteWebDriverBuilder.java:399) at org.openqa.selenium.remote.RemoteWebDriverBuilder.build(RemoteWebDriverBuilder.java:372) at 
test2.SaucelabsTest2.main(SaucelabsTest2.java:39) ``` ### Operating System Windows 10 ### Selenium version Java 4.7.2 ### What are the browser(s) and version(s) where you see this issue? all ### What are the browser driver(s) and version(s) where you see this issue? all ### Are you using Selenium Grid? _No response_
1.0
[🐛 Bug]: NullPointerException with jdk http client - ### What happened? When trying to connect a jdk http client webdriver to saucelabs with a proxy (with authentication) I get a java.lang.NullPointerException. ### How can we reproduce the issue? ```shell System.setProperty("webdriver.http.factory", "jdk-http-client"); ChromeOptions browserOptions = new ChromeOptions(); browserOptions.setPlatformName("Windows 10"); Map<String, Object> sauceOptions = new HashMap<String, Object>(); sauceOptions.put("name", "test"); sauceOptions.put("username", "..."); sauceOptions.put("accessKey", "..."); Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("...", 1234)); java.net.Authenticator auth = new java.net.Authenticator() { public PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication("username", "pwd".toCharArray()); } }; java.net.Authenticator.setDefault(auth); ClientConfig config = ClientConfig.defaultConfig() .proxy(proxy) .baseUrl(new URL("https://ondemand.eu-central-1.saucelabs.com:443/wd/hub")); WebDriver driver = RemoteWebDriver.builder() .oneOf(browserOptions) .setCapability("sauce:options", sauceOptions) .config(config).build(); ``` ### Relevant log output ```shell Exception in thread "main" java.lang.NullPointerException at java.base/java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:108) at org.openqa.selenium.remote.http.jdk.JdkHttpMessages.lambda$createResponse$4(JdkHttpMessages.java:149) at org.openqa.selenium.remote.http.Contents.bytes(Contents.java:80) at org.openqa.selenium.remote.http.Contents.string(Contents.java:97) at org.openqa.selenium.remote.http.Contents.string(Contents.java:101) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:131) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:106) at org.openqa.selenium.remote.RemoteWebDriverBuilder.getRemoteDriver(RemoteWebDriverBuilder.java:399) at 
org.openqa.selenium.remote.RemoteWebDriverBuilder.build(RemoteWebDriverBuilder.java:372) at test2.SaucelabsTest2.main(SaucelabsTest2.java:39) ``` ### Operating System Windows 10 ### Selenium version Java 4.7.2 ### What are the browser(s) and version(s) where you see this issue? all ### What are the browser driver(s) and version(s) where you see this issue? all ### Are you using Selenium Grid? _No response_
non_main
nullpointerexception with jdk http client what happened when trying to connect a jdk http client webdriver to saucelabs with a proxy with authentication i get a java lang nullpointerexception how can we reproduce the issue shell system setproperty webdriver http factory jdk http client chromeoptions browseroptions new chromeoptions browseroptions setplatformname windows map sauceoptions new hashmap sauceoptions put name test sauceoptions put username sauceoptions put accesskey proxy proxy new proxy proxy type http new inetsocketaddress java net authenticator auth new java net authenticator public passwordauthentication getpasswordauthentication return new passwordauthentication username pwd tochararray java net authenticator setdefault auth clientconfig config clientconfig defaultconfig proxy proxy baseurl new url webdriver driver remotewebdriver builder oneof browseroptions setcapability sauce options sauceoptions config config build relevant log output shell exception in thread main java lang nullpointerexception at java base java io bytearrayinputstream bytearrayinputstream java at org openqa selenium remote http jdk jdkhttpmessages lambda createresponse jdkhttpmessages java at org openqa selenium remote http contents bytes contents java at org openqa selenium remote http contents string contents java at org openqa selenium remote http contents string contents java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote remotewebdriverbuilder getremotedriver remotewebdriverbuilder java at org openqa selenium remote remotewebdriverbuilder build remotewebdriverbuilder java at main java operating system windows selenium version java what are the browser s and version s where you see this issue all what are the browser driver s and version s where you see this issue all are you using selenium grid no response
0
181,390
6,659,283,279
IssuesEvent
2017-10-01 09:13:28
Himura2la/FestEngine
https://api.github.com/repos/Himura2la/FestEngine
closed
Pass vlc options via the command line and/or a settings file
enhancement low-priority
Options should not be hardcoded when creating vlc.Instance. The user should be able to supply their own.
1.0
Pass vlc options via the command line and/or a settings file - Options should not be hardcoded when creating vlc.Instance. The user should be able to supply their own.
non_main
pass vlc options via the command line and or a settings file options should not be hardcoded when creating vlc instance the user should be able to supply their own
0
123,206
16,455,528,608
IssuesEvent
2021-05-21 12:02:29
nextcloud/server
https://api.github.com/repos/nextcloud/server
closed
Wording of sharing settings does not convey that they apply to other elements too
1. to develop design enhancement feature: settings
The sharing settings no longer apply only to sharing; the results shown in the [main contacts menu search](https://github.com/nextcloud/server/issues/207) or in the user search of Nextcloud Talk is affected by those settings too. However, the settings are shown under the _Sharing_ section of the settings, and the wording of the options [does not clearly show that they apply to other elements too](https://github.com/nextcloud/spreed/issues/522#issuecomment-350838267) (for example, "_Allow sharing with groups_" or "_Allow username autocompletion in share dialog. If this is disabled the full username or email address needs to be entered._" look like that they only apply to sharing). @nextcloud/designers
1.0
Wording of sharing settings does not convey that they apply to other elements too - The sharing settings no longer apply only to sharing; the results shown in the [main contacts menu search](https://github.com/nextcloud/server/issues/207) or in the user search of Nextcloud Talk is affected by those settings too. However, the settings are shown under the _Sharing_ section of the settings, and the wording of the options [does not clearly show that they apply to other elements too](https://github.com/nextcloud/spreed/issues/522#issuecomment-350838267) (for example, "_Allow sharing with groups_" or "_Allow username autocompletion in share dialog. If this is disabled the full username or email address needs to be entered._" look like that they only apply to sharing). @nextcloud/designers
non_main
wording of sharing settings does not convey that they apply to other elements too the sharing settings no longer apply only to sharing the results shown in the or in the user search of nextcloud talk is affected by those settings too however the settings are shown under the sharing section of the settings and the wording of the options for example allow sharing with groups or allow username autocompletion in share dialog if this is disabled the full username or email address needs to be entered look like that they only apply to sharing nextcloud designers
0
4,344
21,915,482,674
IssuesEvent
2022-05-21 18:58:14
exercism/python
https://api.github.com/repos/exercism/python
closed
[Ellen's Alien Game] Tests are Blocked from Running in Editor & Locally Due to Function Import Error
maintainer action required❕
Dear community, working on Ellen's Alien Game, the tests do not run and I get this message: > We received the following error when we ran your code: > > ImportError while importing test module '/mnt/exercism-iteration/classes_test.py'. > Hint: make sure your test modules/packages have valid Python names. > Traceback: > /usr/local/lib/python3.9/importlib/__init__.py:127: in import_module > return _bootstrap._gcd_import(name[level:], package, level) > /mnt/exercism-iteration/classes_test.py:4: in <module> > from classes import new_aliens_collection > E ImportError: cannot import name 'new_aliens_collection' from 'classes' (/mnt/exercism-iteration/classes.py) There is `from classes import new_aliens_collection` in line 4 in `classes_test.py`. Why is it there?
True
[Ellen's Alien Game] Tests are Blocked from Running in Editor & Locally Due to Function Import Error - Dear community, working on Ellen's Alien Game, the tests do not run and I get this message: > We received the following error when we ran your code: > > ImportError while importing test module '/mnt/exercism-iteration/classes_test.py'. > Hint: make sure your test modules/packages have valid Python names. > Traceback: > /usr/local/lib/python3.9/importlib/__init__.py:127: in import_module > return _bootstrap._gcd_import(name[level:], package, level) > /mnt/exercism-iteration/classes_test.py:4: in <module> > from classes import new_aliens_collection > E ImportError: cannot import name 'new_aliens_collection' from 'classes' (/mnt/exercism-iteration/classes.py) There is `from classes import new_aliens_collection` in line 4 in `classes_test.py`. Why is it there?
main
tests are blocked from running in editor locally due to function import error dear community working on ellen s alien game the tests do not run and i get this message we received the following error when we ran your code importerror while importing test module mnt exercism iteration classes test py hint make sure your test modules packages have valid python names traceback usr local lib importlib init py in import module return bootstrap gcd import name package level mnt exercism iteration classes test py in from classes import new aliens collection e importerror cannot import name new aliens collection from classes mnt exercism iteration classes py there is from classes import new aliens collection in line in classes test py why is it there
1
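The ImportError quoted in this record happens because the exercise's test file imports the very function it is about to test. A minimal sketch of what the harness expects (the function body here is a hypothetical stand-in, not the real exercise solution):

```python
# classes.py -- sketch of the solution module that the test file imports from.
# The function name must match the import in classes_test.py exactly;
# the body below is illustrative only.
def new_aliens_collection(positions):
    """Create one alien record per (x, y) starting position."""
    return [{"x": x, "y": y} for x, y in positions]

# classes_test.py begins with `from classes import new_aliens_collection`;
# if classes.py does not define that exact name, pytest fails while
# collecting the tests and reports the ImportError shown above.
print(new_aliens_collection([(1, 2), (3, 4)]))
```

So the import on line 4 is not a bug: it is how the test suite reaches the submitted code, and it fails when the solution file is missing or misnames the expected function.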
3,486
13,562,154,349
IssuesEvent
2020-09-18 06:18:13
haskell/containers
https://api.github.com/repos/haskell/containers
closed
Branch protections
maintainability
I noticed that I can't force-push to feature branches in this repo. This is a bit annoying, since I now tend to push to this repo, then remember the issue and push to my fork, and then have to delete my branch here… Wouldn't it be sufficient if only `master` was protected?
True
Branch protections - I noticed that I can't force-push to feature branches in this repo. This is a bit annoying, since I now tend to push to this repo, then remember the issue and push to my fork, and then have to delete my branch here… Wouldn't it be sufficient if only `master` was protected?
main
branch protections i noticed that i can t force push to feature branches in this repo this is a bit annoying since i now tend to push to this repo then remember the issue and push to my fork and then have to delete my branch here… wouldn t it be sufficient if only master was protected
1
5,353
26,964,031,011
IssuesEvent
2023-02-08 20:37:57
carbon-design-system/carbon
https://api.github.com/repos/carbon-design-system/carbon
closed
[Feature Request]: Combobox interaction audit—filtering
type: enhancement 💡 status: needs triage 🕵️‍♀️ status: waiting for maintainer response 💬
### The problem Related: https://github.com/carbon-design-system/carbon-components-svelte/issues/1635 In carbon-components-svelte, the Combobox has a filtering capability, while in the React library it does not. - Svelte: https://carbon-components-svelte.onrender.com/components/ComboBox#filterable - React: https://react.carbondesignsystem.com/?path=/story/components-combobox--default ### The solution Assuming Comboboxes are allowed to be filterable, is it acceptable to have the Dropdown show all items after an option has been selected? There's a video demonstration of the desired behaviour here: https://github.com/carbon-design-system/carbon-components-svelte/issues/1635#issuecomment-1399543307 **Given** there is a Combobox with filtering enabled **When** the user types a value **Then** the dropdown is properly filtered based on the matches and you can select the item. **And** subsequent interactions with the dropdown should show all items **And** when collapsing the dropdown by clicking outside or other means, the previous value is maintained. ### Examples _No response_ ### Application/PAL _No response_ ### Business priority None ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
True
[Feature Request]: Combobox interaction audit—filtering - ### The problem Related: https://github.com/carbon-design-system/carbon-components-svelte/issues/1635 In carbon-components-svelte, the Combobox has a filtering capability, while in the React library it does not. - Svelte: https://carbon-components-svelte.onrender.com/components/ComboBox#filterable - React: https://react.carbondesignsystem.com/?path=/story/components-combobox--default ### The solution Assuming Comboboxes are allowed to be filterable, is it acceptable to have the Dropdown show all items after an option has been selected? There's a video demonstration of the desired behaviour here: https://github.com/carbon-design-system/carbon-components-svelte/issues/1635#issuecomment-1399543307 **Given** there is a Combobox with filtering enabled **When** the user types a value **Then** the dropdown is properly filtered based on the matches and you can select the item. **And** subsequent interactions with the dropdown should show all items **And** when collapsing the dropdown by clicking outside or other means, the previous value is maintained. ### Examples _No response_ ### Application/PAL _No response_ ### Business priority None ### Available extra resources _No response_ ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
main
combobox interaction audit—filtering the problem related in carbon components svelte the combobox has a filtering capability while in the react library it does not svelte react the solution assuming comboboxes are allowed to be filterable is it acceptable to have the dropdown show all items after an option has been selected there s a video demonstration of the desired behaviour here given there is a combobox with filtering enabled when the user types a value then the dropdown is properly filtered based on the matches and you can select the item and subsequent interactions with the dropdown should show all items and when collapsing the dropdown by clicking outside or other means the previous value is maintained examples no response application pal no response business priority none available extra resources no response code of conduct i agree to follow this project s
1
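The Given/When/Then behaviour in this record can be sketched independently of any UI framework. This is a hypothetical model of the requested filtering, not Carbon's actual implementation:

```python
def filter_items(items, query):
    """Case-insensitive substring match, as a combobox might apply while typing."""
    q = query.lower()
    return [item for item in items if q in item.lower()]

items = ["Apple", "Apricot", "Banana"]

# When the user types a value, the dropdown is filtered to the matches...
assert filter_items(items, "ap") == ["Apple", "Apricot"]

# ...and after a selection, reopening the dropdown resets the query,
# so the empty query shows all items again.
assert filter_items(items, "") == items
```

The second assertion captures the requested follow-up behaviour: clearing the filter on reopen is what makes "subsequent interactions show all items" while the selected value is kept elsewhere in the component's state.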
73,658
19,751,286,010
IssuesEvent
2022-01-15 04:51:04
rust-lang/cargo
https://api.github.com/repos/rust-lang/cargo
closed
rustc-link-arg does not propagate transitively
C-bug A-linkage A-build-scripts T-cargo finished-final-comment-period disposition-close to-announce
**Problem** With #9523 the following use-case does not compile anymore with nightly. It works with current stable. 1. A -sys crate links against an external library using `cargo:rustc-link-lib`/`cargo:rustc-link-lib-framework` in its `build.rs`. The crate is to be consumed by a `cdylib` crate. To make consumption easy, `build.rs` of the -sys crate also prints out `cargo:rustc-cdylib-link-arg=-Wl,-rpath,/some/path`. 2. The `cdylib` crate depends on the -sys crate and the `cdylib` is linked in a way that the external library becomes a dependency for the dynamic linker and the rpath is set due to the propagation of `rustc-cdylib-link-arg` from the -sys crate to the final `cdylib` crate. With stable rust this works as described, with nightly this now produces a build error when compiling the -sys crate due to the use of `rustc-cdylib-link-arg` in the non-cdylib -sys crate. Concrete example: Building the qttypes crate from https://github.com/woboq/qmetaobject-rs/tree/master/qttypes produces the error that #9523 introduced: ```sh $ cargo +nightly build Compiling qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes) error: invalid instruction `cargo:rustc-cdylib-link-arg` from build script of `qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes)` The package qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes) does not have a cdylib target. ``` **Possible Solution(s)** Allow `rustc-cdylib-link-arg` again in non-cdylib crates to avoid breaking their build with the next stable release. 
**Notes** Output of `cargo version`: cargo 1.54.0-nightly (0cecbd673 2021-06-01) cargo 1.52.0 (69767412a 2021-04-21)
1.0
rustc-link-arg does not propagate transitively - **Problem** With #9523 the following use-case does not compile anymore with nightly. It works with current stable. 1. A -sys crate links against an external library using `cargo:rustc-link-lib`/`cargo:rustc-link-lib-framework` in its `build.rs`. The crate is to be consumed by a `cdylib` crate. To make consumption easy, `build.rs` of the -sys crate also prints out `cargo:rustc-cdylib-link-arg=-Wl,-rpath,/some/path`. 2. The `cdylib` crate depends on the -sys crate and the `cdylib` is linked in a way that the external library becomes a dependency for the dynamic linker and the rpath is set due to the propagation of `rustc-cdylib-link-arg` from the -sys crate to the final `cdylib` crate. With stable rust this works as described, with nightly this now produces a build error when compiling the -sys crate due to the use of `rustc-cdylib-link-arg` in the non-cdylib -sys crate. Concrete example: Building the qttypes crate from https://github.com/woboq/qmetaobject-rs/tree/master/qttypes produces the error that #9523 introduced: ```sh $ cargo +nightly build Compiling qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes) error: invalid instruction `cargo:rustc-cdylib-link-arg` from build script of `qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes)` The package qttypes v0.2.1 (/Users/simon/src/qmetaobject-rs/qttypes) does not have a cdylib target. ``` **Possible Solution(s)** Allow `rustc-cdylib-link-arg` again in non-cdylib crates to avoid breaking their build with the next stable release. 
**Notes** Output of `cargo version`: cargo 1.54.0-nightly (0cecbd673 2021-06-01) cargo 1.52.0 (69767412a 2021-04-21)
non_main
rustc link arg does not propagate transitively problem with the following use case does not compile anymore with nightly it works with current stable a sys crate links against an external library using cargo rustc link lib cargo rustc link lib framework in its build rs the crate is to be consumed by a cdylib crate to make consumption easy build rs of the sys crate also prints out cargo rustc cdylib link arg wl rpath some path the cdylib crate depends on the sys crate and the cdylib is linked in a way that the external library becomes a dependency for the dynamic linker and the rpath is set due to the propagation of rustc cdylib link arg from the sys crate to the final cdylib crate with stable rust this works as described with nightly this now produces a build error when compiling the sys crate due to the use of rustc cdylib link arg in the non cdylib sys crate concrete example building the qttypes crate from produces the error that introduced sh cargo nightly build compiling qttypes users simon src qmetaobject rs qttypes error invalid instruction cargo rustc cdylib link arg from build script of qttypes users simon src qmetaobject rs qttypes the package qttypes users simon src qmetaobject rs qttypes does not have a cdylib target possible solution s allow rustc cdylib link arg again in non cdylib crates to avoid breaking their build with the next stable release notes output of cargo version cargo nightly cargo
0
1,997
6,714,401,228
IssuesEvent
2017-10-13 16:46:23
CyberReboot/vent
https://api.github.com/repos/CyberReboot/vent
closed
find all instances of try/except/pass and log the error
area/quality/maintainability Hacktoberfest
there are a number of places in the code base that have something like: ``` try: // some code except Exception as e: pass ``` Instead of just passing, it should instead log the error `e` (or whatever the variable is called in the exception).
True
find all instances of try/except/pass and log the error - there are a number of places in the code base that have something like: ``` try: // some code except Exception as e: pass ``` Instead of just passing, it should instead log the error `e` (or whatever the variable is called in the exception).
main
find all instances of try except pass and log the error there are a number of places in the code base that have something like try some code except exception as e pass instead of just passing it should instead log the error e or whatever the variable is called in the exception
1
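The fix this record asks for, logging instead of silently passing, can be sketched as follows. The function, logger name, and message are illustrative, not vent's actual code:

```python
import logging

logger = logging.getLogger(__name__)

def parse_port(value):
    # Before: `except Exception as e: pass` silently swallowed the error.
    # After: the exception is logged, so failures leave a trace.
    try:
        return int(value)
    except Exception as e:  # broad except, mirroring the pattern in the issue
        logger.error("failed to parse port %r: %s", value, e)
        return None

print(parse_port("8080"))        # → 8080
print(parse_port("not-a-port"))  # → None (and the error is logged)
```

Keeping the `except` but replacing `pass` with a `logger.error(...)` call preserves the original control flow while making each swallowed exception visible in the logs.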
3,135
12,036,601,343
IssuesEvent
2020-04-13 20:07:22
expo/expo-cli
https://api.github.com/repos/expo/expo-cli
closed
[release-channels] fix publish:rollback
needs response from maintainer
`publish:rollback` currently removes the rollback'ed entry from the publish logs, preventing new users from getting the entry. However, users who currently have the bad publish will continue to see it as the most recent publish and will not know to update with the last known good publish. # more explanation Current situation: - Consider a publish log that looks like this: `oldGoodPublish-> newBadPublish`. - `publish:rollback` will remove `newBadPublish`, making the logs just contain `oldGoodPublish`. - We’ll need to run `publish:set` at the end to make the logs look like `oldGoodPublish->newGoodPublish`
True
[release-channels] fix publish:rollback - `publish:rollback` currently removes the rollback'ed entry from the publish logs, preventing new users from getting the entry. However, users who currently have the bad publish will continue to see it as the most recent publish and will not know to update with the last known good publish. # more explanation Current situation: - Consider a publish log that looks like this: `oldGoodPublish-> newBadPublish`. - `publish:rollback` will remove `newBadPublish`, making the logs just contain `oldGoodPublish`. - We’ll need to run `publish:set` at the end to make the logs look like `oldGoodPublish->newGoodPublish`
main
fix publish rollback publish rollback currently removes the rollback ed entry from the publish logs preventing new users from getting the entry however users who currently have the bad publish will continue to see it as the most recent publish and will not know to update with the last known good publish more explanation current situation consider a publish log that looks like this oldgoodpublish newbadpublish publish rollback will remove newbadpublish making the logs just contain oldgoodpublish we’ll need to run publish set at the end to make the logs look like oldgoodpublish newgoodpublish
1
777,965
27,299,314,933
IssuesEvent
2023-02-23 23:40:01
ruuvi/com.ruuvi.station.ios
https://api.github.com/repos/ruuvi/com.ruuvi.station.ios
opened
TF 2.0 app crash when trying to share sensor
bug high priority
Description: iPhone 14 Pro User on sensor settings page tried to share a sensor. When tapping on email field the app crashed. https://user-images.githubusercontent.com/50437378/221056026-827b6eaa-1e9a-433e-b340-98e670c9d6c0.mov
1.0
TF 2.0 app crash when trying to share sensor - Description: iPhone 14 Pro User on sensor settings page tried to share a sensor. When tapping on email field the app crashed. https://user-images.githubusercontent.com/50437378/221056026-827b6eaa-1e9a-433e-b340-98e670c9d6c0.mov
non_main
tf app crash when trying to share sensor description iphone pro user on sensor settings page tried to share a sensor when tapping on email field the app crashed
0
1,918
6,584,905,328
IssuesEvent
2017-09-13 12:11:37
RestComm/Restcomm-Connect
https://api.github.com/repos/RestComm/Restcomm-Connect
opened
Check issues at TADHack environment reported by some participants
Support and Maintainance
Issues are related to: - Record verb not working; - Project upload; - Missing tabs when opening an existent project.
True
Check issues at TADHack environment reported by some participants - Issues are related to: - Record verb not working; - Project upload; - Missing tabs when opening an existent project.
main
check issues at tadhack environment reported by some participants issues are related to record verb not working project upload missing tabs when opening an existent project
1
2,722
9,605,540,638
IssuesEvent
2019-05-11 01:00:58
backdrop-ops/contrib
https://api.github.com/repos/backdrop-ops/contrib
closed
Join request
Maintainer application
Hi all, I'd like to share some stuff in the Backdrop community; would it be possible to join you? The project I'd like to port/maintain is: https://www.drupal.org/project/atomium Link to the repo: https://github.com/ec-europa/atomium Thanks.
True
Join request - Hi all, I'd like to share some stuff in the Backdrop community; would it be possible to join you? The project I'd like to port/maintain is: https://www.drupal.org/project/atomium Link to the repo: https://github.com/ec-europa/atomium Thanks.
main
join request hi all i d like to share some stuff in the backdrop community would it be possible to join you the project i d like to port maintain is link to the repo thanks
1
4,273
21,466,672,553
IssuesEvent
2022-04-26 05:00:48
tgstation/tgstation
https://api.github.com/repos/tgstation/tgstation
closed
Stacks do not respect the concept of item slots
Maintainability/Hinders improvements Bug Against God and Nature
Stacks do not respect item slots. So if you have say, 10 sheets of metal in a pocket, and you pick up 5 more, you now have 15 sheets of metal in your hand. This can also lead to stupid fucking errors with things that attempt to put say 10 sheets of x in your right pocket, and then 10 sheets of x in your left. They should not do this. Each slot should be considered independently. But that's hell. I don't know what to do about this honestly, but I know something should be done.
True
Stacks do not respect the concept of item slots - Stacks do not respect item slots. So if you have say, 10 sheets of metal in a pocket, and you pick up 5 more, you now have 15 sheets of metal in your hand. This can also lead to stupid fucking errors with things that attempt to put say 10 sheets of x in your right pocket, and then 10 sheets of x in your left. They should not do this. Each slot should be considered independently. But that's hell. I don't know what to do about this honestly, but I know something should be done.
main
stacks do not respect the concept of item slots stacks do not respect item slots so if you have say sheets of metal in a pocket and you pick up more you now have sheets of metal in your hand this can also lead to stupid fucking errors with things that attempt to put say sheets of x in your right pocket and then sheets of x in your left they should not do this each slot should be considered independently but that s hell i don t know what to do about this honestly but i know something should be done
1
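The codebase in question is BYOND/DM, but the per-slot invariant this record asks for can be shown language-agnostically. A Python sketch (the `pick_up` helper and slot layout are hypothetical, not the game's real API) in which a stack only merges within the slot being targeted, never with a matching stack in another slot:

```python
def pick_up(slots, slot_name, item, count):
    """Add `count` of `item` to one named slot only.

    Each slot is considered independently: picking up into the hand must
    never silently merge with a matching stack sitting in a pocket.
    """
    current = slots.get(slot_name)
    if current is None:
        slots[slot_name] = {"item": item, "count": count}
    elif current["item"] == item:
        current["count"] += count      # merge only within this slot
    else:
        raise ValueError(f"slot {slot_name!r} holds a different item")
    return slots
```

With 10 sheets of metal in a pocket, picking up 5 more into the hand leaves the pocket at 10 and the hand at 5, instead of the 15-in-hand behaviour the record complains about.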
255,865
19,343,888,154
IssuesEvent
2021-12-15 08:48:26
AI-technologies-for-IR-system/image-search-engine
https://api.github.com/repos/AI-technologies-for-IR-system/image-search-engine
closed
Make design doc and describe architecture of system
documentation
We need to create a top-level architecture of our system in order to describe a further scope of work. What's expected in terms of this issue: - [x] distinguish modules and describe them - [x] brief diagram of data flow between modules[^1] - [x] design a GCP model of the service - [ ] design a database [^1]: Moved into modules diagram
1.0
Make design doc and describe architecture of system - We need to create a top-level architecture of our system in order to describe a further scope of work. What's expected in terms of this issue: - [x] distinguish modules and describe them - [x] brief diagram of data flow between modules[^1] - [x] design a GCP model of the service - [ ] design a database [^1]: Moved into modules diagram
non_main
make design doc and describe architecture of system we need to create a top level architecture of our system in order to describe a further scope of work what s expected in terms of this issue distinguish modules and describe them brief diagram of data flow between modules design a gcp model of the service design a database moved into modules diagram
0
543,179
15,878,792,441
IssuesEvent
2021-04-09 11:32:41
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
Same access token is returned for different grant types
Affected/5.7.0 Complexity/High Component/OAuth Priority/Normal Severity/Major bug
An application owner receives the same access token for two different token calls in two grant types. Please consider the following sample requests and responses. **Password grant type with scope: openid** curl -v -X POST -H "Authorization: Basic Xxxxxxxxx" -k -d "grant_type=password&username=admin&password=admin&scope=openid" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token {"access_token":"326977ff21402f7d7c9b2770bed0f408","refresh_token":"227dc0895de100db9359cc135390e921","scope":"openid","id_token":"...","token_type":"Bearer","expires_in":1760}% **Client credentials grant with scope openid** curl -v -X POST -H "Authorization: Basic Xxxxxxxxx" -k -d "grant_type=client_credentials&scope=openid" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token {"access_token":"326977ff21402f7d7c9b2770bed0f408","scope":"openid","id_token":"...","token_type":"Bearer","expires_in":1753}
1.0
Same access token is returned for different grant types - An application owner receives the same access token for two different token calls in two grant types. Please consider the following sample requests and responses. **Password grant type with scope: openid** curl -v -X POST -H "Authorization: Basic Xxxxxxxxx" -k -d "grant_type=password&username=admin&password=admin&scope=openid" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token {"access_token":"326977ff21402f7d7c9b2770bed0f408","refresh_token":"227dc0895de100db9359cc135390e921","scope":"openid","id_token":"...","token_type":"Bearer","expires_in":1760}% **Client credentials grant with scope openid** curl -v -X POST -H "Authorization: Basic Xxxxxxxxx" -k -d "grant_type=client_credentials&scope=openid" -H "Content-Type:application/x-www-form-urlencoded" https://localhost:9443/oauth2/token {"access_token":"326977ff21402f7d7c9b2770bed0f408","scope":"openid","id_token":"...","token_type":"Bearer","expires_in":1753}
non_main
same access token is returned for different grant types an application owner receives the same access token for two different token calls in two grant types please consider following sample requests and responses password grant type with scope openid curl v x post h authorization basic xxxxxxxxx k d grant type password username admin password admin scope openid h content type application x www form urlencoded access token refresh token scope openid id token token type bearer expires in client credentials grant with scope openid curl v x post h authorization basic xxxxxxxxx k d grant type client credentials scope openid h content type application x www form urlencoded access token scope openid id token token type bearer expires in
0
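One plausible cause of behaviour like the record above is caching active tokens by `(client_id, scope)` alone, so that two different grant types share a cache entry. A toy Python issuer (purely illustrative; this is not WSO2 Identity Server's actual implementation) showing a cache key that also includes the grant type and resource-owner identity:

```python
import secrets

def make_issuer():
    """Toy token issuer illustrating the reported bug and its fix.

    Keying the active-token cache only by (client_id, scope) would hand
    the same token to a password grant and a client_credentials grant;
    including the grant type and username keeps them distinct.
    """
    cache = {}

    def issue(client_id, scope, grant_type, username=None):
        key = (client_id, scope, grant_type, username)  # the fixed key
        if key not in cache:
            cache[key] = secrets.token_hex(16)
        return cache[key]

    return issue
```

With this key, repeating the same call still returns the same (cached) token, but the two grant types in the record would each receive their own.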
5,796
30,704,131,635
IssuesEvent
2023-07-27 03:53:01
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Set Bazel flags during sync
type: feature request product: CLion awaiting-maintainer
### Description of the feature request: I want to set Bazel flags such as `--define a=b` during `Sync Project with BUILD Files` job just like `Run/Debug` ### What underlying problem are you trying to solve with this feature? I need to set different flags on the fly. ### What operating system, Intellij IDE and programming languages are you using? Please provide specific versions. Ubuntu 22.04 / CLion 2023.1.4 ### Have you found anything relevant by searching the web? https://github.com/bazelbuild/intellij/issues/4744 ### Any other information, logs, or outputs that you want to share? _No response_
True
Set Bazel flags during sync - ### Description of the feature request: I want to set Bazel flags such as `--define a=b` during `Sync Project with BUILD Files` job just like `Run/Debug` ### What underlying problem are you trying to solve with this feature? I need to set different flags on the fly. ### What operating system, Intellij IDE and programming languages are you using? Please provide specific versions. Ubuntu 22.04 / CLion 2023.1.4 ### Have you found anything relevant by searching the web? https://github.com/bazelbuild/intellij/issues/4744 ### Any other information, logs, or outputs that you want to share? _No response_
main
set bazel flags during sync description of the feature request i want to set bazel flags such as define a b during sync project with build files job just like run debug what underlying problem are you trying to solve with this feature i need to set different flags on the fly what operating system intellij ide and programming languages are you using please provide specific versions ubuntu clion have you found anything relevant by searching the web any other information logs or outputs that you want to share no response
1
1,260
5,348,482,978
IssuesEvent
2017-02-18 05:30:02
diofant/diofant
https://api.github.com/repos/diofant/diofant
opened
Use "new" style for string formatting
maintainability
I.e. ``"{0:s}".format("spam")`` instead of ``"%s" % "spam"``
True
Use "new" style for string formatting - I.e. ``"{0:s}".format("spam")`` instead of ``"%s" % "spam"``
main
use new style for string formatting i e s format spam instead of s spam
1
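For reference, the printf-style and `str.format` spellings the record contrasts (plus the f-string form available since Python 3.6) all render identically:

```python
# Same output from all three formatting styles:
old_style = "%s has %d items" % ("cart", 3)
new_style = "{0:s} has {1:d} items".format("cart", 3)
f_string = f"cart has {3:d} items"
assert old_style == new_style == f_string == "cart has 3 items"
```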
433,595
30,338,821,438
IssuesEvent
2023-07-11 11:22:53
pulp/pulp-operator
https://api.github.com/repos/pulp/pulp-operator
opened
[DOC] Provide the steps to configure the `metadata signing`
Documentation
**Describe the solution you'd like** Document the steps explaining how to configure pulp-operator with [`signing services`](https://docs.pulpproject.org/pulpcore/workflows/signed-metadata.html). **Additional context** Pulpcore doc: https://docs.pulpproject.org/pulpcore/workflows/signed-metadata.html Test gpg key Secret: https://github.com/pulp/pulp-operator/blob/main/.ci/assets/kubernetes/galaxy_sign.secret.yaml Example of signing scripts ConfigMap: https://github.com/pulp/pulp-operator/blob/main/.ci/assets/kubernetes/signing_scripts.configmap.yaml Sample CR: https://github.com/pulp/pulp-operator/blob/main/config/samples/galaxy.yaml
1.0
[DOC] Provide the steps to configure the `metadata signing` - **Describe the solution you'd like** Document the steps explaining how to configure pulp-operator with [`signing services`](https://docs.pulpproject.org/pulpcore/workflows/signed-metadata.html). **Additional context** Pulpcore doc: https://docs.pulpproject.org/pulpcore/workflows/signed-metadata.html Test gpg key Secret: https://github.com/pulp/pulp-operator/blob/main/.ci/assets/kubernetes/galaxy_sign.secret.yaml Example of signing scripts ConfigMap: https://github.com/pulp/pulp-operator/blob/main/.ci/assets/kubernetes/signing_scripts.configmap.yaml Sample CR: https://github.com/pulp/pulp-operator/blob/main/config/samples/galaxy.yaml
non_main
provide the steps to configure the metadata signing describe the solution you d like document the steps explaining how to configure pulp operator with additional context pulpcore doc test gpg key secret example of signing scripts configmap sample cr
0
3,233
12,368,706,221
IssuesEvent
2020-05-18 14:13:29
Kashdeya/Tiny-Progressions
https://api.github.com/repos/Kashdeya/Tiny-Progressions
closed
There is a lot of z fighting on the growth crystals
Version not Maintainted
If you look at the growth crystals from a distance like this, they will z-fight like crazy, as seen below. ![1 8 0_51 2018 12 03 - 12 17 48 05-0-1-1543868303818 1](https://user-images.githubusercontent.com/32559193/49393206-7a9f3d00-f6e5-11e8-8a15-17632aa0b072.gif)
True
There is a lot of z fighting on the growth crystals - If you look at the growth crystals from a distance like this, they will z-fight like crazy, as seen below. ![1 8 0_51 2018 12 03 - 12 17 48 05-0-1-1543868303818 1](https://user-images.githubusercontent.com/32559193/49393206-7a9f3d00-f6e5-11e8-8a15-17632aa0b072.gif)
main
there is a lot of z fighting on the growth crystals if you were to look at the growth crystals from a distance like this then the growth crystals will z fight like crazy like seen below
1
4,242
21,039,897,695
IssuesEvent
2022-03-31 11:19:45
cctreasury/Treasury-system
https://api.github.com/repos/cctreasury/Treasury-system
closed
555.555555
Funding Mechanism - Toolmakers and Maintainers
### Pool CC Funding Mechanism ### Name Nebiyu Sultan ### Amount of ADA 555.555555 ### Transaction ID b8a6e53ef9459113e6f9e37f6707fed99d5a92981f11d3ac56b73f9643302ee1 ### Budget Item Funding Mechanism - Toolmakers & Maintainers ### Description/Extra Comments _No response_ ### Agree the information is correct - [X] I agree the information provided is correct.
True
555.555555 - ### Pool CC Funding Mechanism ### Name Nebiyu Sultan ### Amount of ADA 555.555555 ### Transaction ID b8a6e53ef9459113e6f9e37f6707fed99d5a92981f11d3ac56b73f9643302ee1 ### Budget Item Funding Mechanism - Toolmakers & Maintainers ### Description/Extra Comments _No response_ ### Agree the information is correct - [X] I agree the information provided is correct.
main
pool cc funding mechanism name nebiyu sultan amount of ada transaction id budget item funding mechanism toolmakers maintainers description extra comments no response agree the information is correct i agree the information provided is correct
1
3,608
14,570,555,498
IssuesEvent
2020-12-17 14:32:14
pace/bricks
https://api.github.com/repos/pace/bricks
closed
objstore health fails from time to time
S::Ready T::Maintainance
### Problem The health check reports service dependencies as unavailable even though they are available. This seems to be related to the "Read-After-Write" consistency guarantees of the objstore. ``` objstore ERR unexpected content: "2020-12-16T15:30:46Z" <-> "2020-12-16T15:30:45Z" ``` ### Cause The cause is likely that multiple requests are made to the health check at the same time. ### Proposal Check whether the content is the same, or whether the dates fall within a +/- 10 sec window.
True
objstore health fails from time to time - ### Problem The health check reports service dependencies as unavailable even though they are available. This seems to be related to the "Read-After-Write" consistency guarantees of the objstore. ``` objstore ERR unexpected content: "2020-12-16T15:30:46Z" <-> "2020-12-16T15:30:45Z" ``` ### Cause The cause is likely that multiple requests are made to the health check at the same time. ### Proposal Check whether the content is the same, or whether the dates fall within a +/- 10 sec window.
main
objstore health fails from time to time problem the health check reports service dependencies unavailable but they are seems to be related to the read after write consistency guarantees of the objstore objstore err unexpected content cause the cause is likely that multiple requests are done to the health check at the same time proposal check if the content is the same or the date in a window
1
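The proposal in the record above (accept the read-back health marker when the contents match, or when the timestamps are within a ±10 s window) can be sketched as follows. pace/bricks is written in Go; this Python version with a hypothetical `healthy` helper only illustrates the tolerance check:

```python
from datetime import datetime, timedelta, timezone

def healthy(written, read, tolerance_s=10):
    """Accept the read-back health marker if it matches what was written,
    or if its timestamp is within +/- `tolerance_s` seconds, to allow for
    the object store's read-after-write consistency lag."""
    if written == read:
        return True
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    tw = datetime.strptime(written, fmt).replace(tzinfo=timezone.utc)
    tr = datetime.strptime(read, fmt).replace(tzinfo=timezone.utc)
    return abs(tw - tr) <= timedelta(seconds=tolerance_s)
```

The one-second skew from the record's error message (`15:30:46Z` vs `15:30:45Z`) would then pass, while a genuinely stale marker outside the window would still fail.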
349,681
10,471,868,166
IssuesEvent
2019-09-23 08:54:26
kubernetes/kubeadm
https://api.github.com/repos/kubernetes/kubeadm
closed
kubeadm join control-plane node times out (etcd timeout)
area/etcd kind/bug priority/awaiting-more-evidence
<!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> ## What keywords did you search in kubeadm issues before filing this one? etcd join timeout kubeadm join timeout ## Is this a BUG REPORT or FEATURE REQUEST? BUG REPORT ## Versions **kubeadm version** (use `kubeadm version`): kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:20:51Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} **Environment**: - **Kubernetes version** (use `kubectl version`): v1.15.2 - **Cloud provider or hardware configuration**: Openstack - **OS** (e.g. from /etc/os-release): Container Linux by CoreOS 2135.5.0 (Rhyolite) - **Kernel** (e.g. `uname -a`): Linux os1pi019-kube-master01 4.19.50-coreos-r1 #1 SMP Mon Jul 1 19:07:03 -00 2019 x86_64 Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz GenuineIntel GNU/Linux - **Others**: ## What happened? `kubeadm join` was invoked and failed. The etcd container did start up 7 seconds after kubeadm timed out / did exit with failure. See the following logs (these include kubeadm logs and timestamps for pod-manifest starts): ``` 09:30:27 kubeadm service starts 09:30:27 kubeadm[2025]: [preflight] Reading configuration from the cluster...
09:30:27 kubeadm[2025]: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' 09:30:27 kubeadm[2025]: [control-plane] Using manifest folder "/etc/kubernetes/manifests" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-controller-manager" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-scheduler" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [check-etcd] Checking that the etcd cluster is healthy 09:30:27 kubeadm[2025]: [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace 09:30:27 
kubeadm[2025]: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 09:30:27 kubeadm[2025]: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 09:30:27 kubeadm[2025]: [kubelet-start] Activating the kubelet service 09:30:27 kubeadm[2025]: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 09:30:29 kubeadm[2025]: [etcd] Announced new etcd member joining to the existing etcd cluster 09:30:29 kubeadm[2025]: [etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" 09:30:29 kubeadm[2025]: [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s 09:30:38 etcd pause shim create 09:30:30 kube-scheduler pause shim create 09:30:30 kube-controller-manager pause shim create 09:30:34 kube-scheduler shim create 09:30:35 kube-scheduler first logs 09:30:36 kube-apiserver pause shim create 09:31:07 kubeadm[2025]: [kubelet-check] Initial timeout of 40s passed. 09:31:25 kube-controller-manager shim create 09:31:25 kube-controller-manager first logs 09:31:43 kube-apiserver shim create 09:31:44 kubeadm[2025]: error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available 09:31:44 systemd[1]: kubeadm.service: Main process exited, code=exited, status=1/FAILURE 09:31:44 kube-apiserver first logs 09:31:51 etcd shim create 09:31:52.081609 etcd first logs ``` The timeout we hit here is [this one](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/etcd/local.go#L146) which uses hardcoded values (8 times 5 seconds -> 40s) ## What you expected to happen? The etcd member gets joined to the existing control-plane node and kubeadm succeeds. ## How to reproduce it (as minimally and precisely as possible)? Hard to say. Try lots of `kubeadm joins` of control-plane nodes ## Anything else we need to know?
In `kubeadm init` there is a similar looking parameter called [TimeoutForControlPlane](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1beta2/types.go#L133) which defaults to 4 minutes and is used [here](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go#L87) to wait for the API server. This looks similar to me because both the problem described here and the kubeadm init phase involve waiting for a specific pod, started by the kubelet via a pod manifest. I see three options: * increase the hardcoded values * use the same parameter already used during init (`TimeoutForControlPlane`), which would result in no change to the kubeadm specs * add an additional parameter to the kubeadm spec
1.0
kubeadm join control-plane node times out (etcd timeout) - <!-- Thanks for filing an issue! Before hitting the button, please answer these questions.--> ## What keywords did you search in kubeadm issues before filing this one? etcd join timeout kubeadm join timeout ## Is this a BUG REPORT or FEATURE REQUEST? BUG REPORT ## Versions **kubeadm version** (use `kubeadm version`): kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:20:51Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} **Environment**: - **Kubernetes version** (use `kubectl version`): v1.15.2 - **Cloud provider or hardware configuration**: Openstack - **OS** (e.g. from /etc/os-release): Container Linux by CoreOS 2135.5.0 (Rhyolite) - **Kernel** (e.g. `uname -a`): Linux os1pi019-kube-master01 4.19.50-coreos-r1 #1 SMP Mon Jul 1 19:07:03 -00 2019 x86_64 Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz GenuineIntel GNU/Linux - **Others**: ## What happened? `kubeadm join` was invoked and failed. The etcd container did start up 7 seconds after kubeadm timed out / did exit with failure. See the following logs (these include kubeadm logs and timestamps for pod-manifest starts): ``` 09:30:27 kubeadm service starts 09:30:27 kubeadm[2025]: [preflight] Reading configuration from the cluster...
09:30:27 kubeadm[2025]: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' 09:30:27 kubeadm[2025]: [control-plane] Using manifest folder "/etc/kubernetes/manifests" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-controller-manager" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [control-plane] Creating static Pod manifest for "kube-scheduler" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-policy" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "policy-controller" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "audit-log" to "kube-apiserver" 09:30:27 kubeadm[2025]: [controlplane] Adding extra host path mount "scheduler-policy" to "kube-scheduler" 09:30:27 kubeadm[2025]: [check-etcd] Checking that the etcd cluster is healthy 09:30:27 kubeadm[2025]: [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace 09:30:27 
kubeadm[2025]: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 09:30:27 kubeadm[2025]: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 09:30:27 kubeadm[2025]: [kubelet-start] Activating the kubelet service 09:30:27 kubeadm[2025]: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... 09:30:29 kubeadm[2025]: [etcd] Announced new etcd member joining to the existing etcd cluster 09:30:29 kubeadm[2025]: [etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" 09:30:29 kubeadm[2025]: [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s 09:30:38 etcd pause shim create 09:30:30 kube-scheduler pause shim create 09:30:30 kube-controller-manager pause shim create 09:30:34 kube-scheduler shim create 09:30:35 kube-scheduler first logs 09:30:36 kube-apiserver pause shim create 09:31:07 kubeadm[2025]: [kubelet-check] Initial timeout of 40s passed. 09:31:25 kube-controller-manager shim create 09:31:25 kube-controller-manager first logs 09:31:43 kube-apiserver shim create 09:31:44 kubeadm[2025]: error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available 09:31:44 systemd[1]: kubeadm.service: Main process exited, code=exited, status=1/FAILURE 09:31:44 kube-apiserver first logs 09:31:51 etcd shim create 09:31:52.081609 etcd first logs ``` The timeout we hit here is [this one](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/etcd/local.go#L146) which uses hardcoded values (8 times 5 seconds -> 40s) ## What you expected to happen? The etcd member gets joined to the existing control-plane node and kubeadm succeeds. ## How to reproduce it (as minimally and precisely as possible)? Hard to say. Try lots of `kubeadm joins` of control-plane nodes ## Anything else we need to know?
In `kubeadm init` there is a similar looking parameter called [TimeoutForControlPlane](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/apis/kubeadm/v1beta2/types.go#L133) which defaults to 4 minutes and is used [here](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go#L87) to wait for the API server. This looks similar to me because both the problem described here and the kubeadm init phase involve waiting for a specific pod, started by the kubelet via a pod manifest. I see three options: * increase the hardcoded values * use the same parameter already used during init (`TimeoutForControlPlane`), which would result in no change to the kubeadm specs * add an additional parameter to the kubeadm spec
non_main
kubeadm join control plane node times out etcd timeout what keywords did you search in kubeadm issues before filing this one etcd join timeout kubeadm join timeout is this a bug report or feature request bug report versions kubeadm version use kubeadm version kubeadm version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux environment kubernetes version use kubectl version cloud provider or hardware configuration openstack os e g from etc os release container linux by coreos rhyolite kernel e g uname a linux kube coreos smp mon jul intel r xeon r cpu genuineintel gnu linux others what happened kubeadm join was invoced and failed the etcd container did start up seconds after kubeadm timed out did exit with failure see the following logs this include kubeadm logs and timestamps for pod manifest starts kubeadm service starts kubeadm reading configuration from the cluster kubeadm fyi you can look at this config file with kubectl n kube system get cm kubeadm config oyaml kubeadm using manifest folder etc kubernetes manifests kubeadm creating static pod manifest for kube apiserver kubeadm adding extra host path mount audit policy to kube apiserver kubeadm adding extra host path mount policy controller to kube apiserver kubeadm adding extra host path mount audit log to kube apiserver kubeadm adding extra host path mount scheduler policy to kube scheduler kubeadm creating static pod manifest for kube controller manager kubeadm adding extra host path mount audit policy to kube apiserver kubeadm adding extra host path mount policy controller to kube apiserver kubeadm adding extra host path mount audit log to kube apiserver kubeadm adding extra host path mount scheduler policy to kube scheduler kubeadm creating static pod manifest for kube scheduler kubeadm adding extra host path mount audit policy to kube apiserver kubeadm adding extra host path mount policy controller to kube apiserver kubeadm adding extra host 
path mount audit log to kube apiserver kubeadm adding extra host path mount scheduler policy to kube scheduler kubeadm checking that the etcd cluster is healthy kubeadm downloading configuration for the kubelet from the kubelet config configmap in the kube system namespace kubeadm writing kubelet configuration to file var lib kubelet config yaml kubeadm writing kubelet environment file with flags to file var lib kubelet kubeadm flags env kubeadm activating the kubelet service kubeadm waiting for the kubelet to perform the tls bootstrap kubeadm announced new etcd member joining to the existing etcd cluster kubeadm wrote static pod manifest for a local etcd member to etc kubernetes manifests etcd yaml kubeadm waiting for the new etcd member to join the cluster this can take up to etcd pause shim create kube scheduler pause shim create kube controller manager pause shim create kube scheduler shim create kube scheduler first logs kube apiserver pause shim create kubeadm initial timeout of passed kube controller manager shim create kube controller manager first logs kube apiserver shim create kubeadm error execution phase control plane join etcd error creating local etcd static pod manifest file timeout waiting for etcd cluster to be available systemd kubeadm service main process exited code exited status failure kube apiserver first logs etcd shim create etcd first logs the timeout we hit here is which uses hardcoded values times seconds what you expected to happen the etcd member get s joined to the existing control plane node and kubeadm succeeds how to reproduce it as minimally and precisely as possible hard to say try lots of kubeadm joins of control plane nodes anything else we need to know in kubeadm init there is a similar looking parameter called which defaults to minutes and is used to wait for the api server this is similar to me because the problem described here and the code at the kubeadm init phase waits for a specific pod started by the kubelet via a pod 
manifest i see three options increase the hardcoded values use the same parameter as already used during init timeoutforcontrolplane which would result in no change to the kubeadm specs add an additional parameter to the kubeadm spec
0
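The 8 × 5 s retry loop the kubeadm record points at, with the retry count and interval lifted into parameters (the third option the reporter lists), can be sketched like this. `wait_for_cluster` and `check` are hypothetical names for illustration, not kubeadm's actual Go code:

```python
import time

def wait_for_cluster(check, retries=8, interval_s=5):
    """Poll `check()` up to `retries` times, `interval_s` seconds apart.

    The defaults mirror the hardcoded 8 x 5 s (40 s) wait; making them
    parameters is the "configurable timeout" option from the record.
    """
    for attempt in range(retries):
        if check():
            return True
        if attempt < retries - 1:
            time.sleep(interval_s)
    return False
```

A cluster that becomes healthy 45 s after the join starts would fail the default 40 s budget (the reported symptom) but succeed with, say, `retries=12`.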
4,176
20,073,740,701
IssuesEvent
2022-02-04 10:18:29
cloverhearts/quilljs-markdown
https://api.github.com/repos/cloverhearts/quilljs-markdown
closed
Storing the markdown
Saw with Maintainer
Hi, How can I access the markdown values? I can get the **text** and the **html** but where lives the **markdown**? Kind regards.
True
Storing the markdown - Hi, How can I access the markdown values? I can get the **text** and the **html** but where lives the **markdown**? Kind regards.
main
storing the markdown hi how can i access the markdown values i can get the text and the html but where lives the markdown kind regards
1
81,004
7,763,152,145
IssuesEvent
2018-06-01 15:37:50
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: jepsen/register/parts-start-kill-2 failed on release-2.0
C-test-failure O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/41b79b1412c1c5568798091b55109dad58521bae Parameters: Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=691309&tab=buildLog ``` jepsen.go:244,jepsen.go:288: /home/agent/work/.go/bin/roachprod run teamcity-691309-jepsen:6 -- bash -e -c "\ cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ ~/lein run test \ --tarball file://${PWD}/cockroach.tgz \ --username ${USER} \ --ssh-private-key ~/.ssh/id_rsa \ --os ubuntu \ --time-limit 300 \ --concurrency 30 \ --recovery-time 25 \ --test-count 1 \ -n 10.128.0.54 -n 10.128.0.57 -n 10.128.0.51 -n 10.128.0.52 -n 10.128.0.53 \ --test register --nemesis parts --nemesis2 start-kill-2 \ > invoke.log 2>&1 \ ": exit status 1 ```
1.0
roachtest: jepsen/register/parts-start-kill-2 failed on release-2.0 - SHA: https://github.com/cockroachdb/cockroach/commits/41b79b1412c1c5568798091b55109dad58521bae Parameters: Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=691309&tab=buildLog ``` jepsen.go:244,jepsen.go:288: /home/agent/work/.go/bin/roachprod run teamcity-691309-jepsen:6 -- bash -e -c "\ cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \ ~/lein run test \ --tarball file://${PWD}/cockroach.tgz \ --username ${USER} \ --ssh-private-key ~/.ssh/id_rsa \ --os ubuntu \ --time-limit 300 \ --concurrency 30 \ --recovery-time 25 \ --test-count 1 \ -n 10.128.0.54 -n 10.128.0.57 -n 10.128.0.51 -n 10.128.0.52 -n 10.128.0.53 \ --test register --nemesis parts --nemesis2 start-kill-2 \ > invoke.log 2>&1 \ ": exit status 1 ```
non_main
roachtest jepsen register parts start kill failed on release sha parameters failed test jepsen go jepsen go home agent work go bin roachprod run teamcity jepsen bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test register nemesis parts start kill invoke log exit status
0
25,792
7,749,621,114
IssuesEvent
2018-05-30 12:06:59
eventespresso/event-espresso-core
https://api.github.com/repos/eventespresso/event-espresso-core
closed
Fix JavaScript error on WP > Plugins page: Jed localization error: Error: Domain `event_espresso` was not found.
category:assets category:i18n type:build-process 🔨
This happens in the current release, in master branch, and in the add-type-specific-asset-methods branch. Simply go to WP > Plugins then view the console to see this JS error: ``` Jed localization error: Error: Domain `event_espresso` was not found. memoized index.js:74 ./node_modules/@wordpress/i18n/build-module/index.js/dcnpgettext< index.js:82 memoized index.js:74 __ index.js:99 ./assets/src/exit-modal-survey/index.js/< index.js:38:12 ./assets/src/exit-modal-survey/index.js http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:12:29 __webpack_require__ bootstrap:76 ./assets/src/wp-plugins-page/index.js http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:132:76 __webpack_require__ bootstrap:76 [0] http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:23195:18 __webpack_require__ bootstrap:76 checkDeferredModules bootstrap:43 webpackJsonpCallback bootstrap:30 <anonymous> http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:1:2 ```
1.0
Fix JavaScript error on WP > Plugins page: Jed localization error: Error: Domain `event_espresso` was not found. - This happens in the current release, in master branch, and in the add-type-specific-asset-methods branch. Simply go to WP > Plugins then view the console to see this JS error: ``` Jed localization error: Error: Domain `event_espresso` was not found. memoized index.js:74 ./node_modules/@wordpress/i18n/build-module/index.js/dcnpgettext< index.js:82 memoized index.js:74 __ index.js:99 ./assets/src/exit-modal-survey/index.js/< index.js:38:12 ./assets/src/exit-modal-survey/index.js http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:12:29 __webpack_require__ bootstrap:76 ./assets/src/wp-plugins-page/index.js http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:132:76 __webpack_require__ bootstrap:76 [0] http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:23195:18 __webpack_require__ bootstrap:76 checkDeferredModules bootstrap:43 webpackJsonpCallback bootstrap:30 <anonymous> http://local.wordpress.test/wp-content/plugins/32-core/assets/dist/ee-wp-plugins-page.2d53dd52679d995d5e4b.dist.js:1:2 ```
non_main
fix javascript error on wp plugins page jed localization error error domain event espresso was not found this happens in the current release in master branch and in the add type specific asset methods branch simply go to wp plugins then view the console to see this js error jed localization error error domain event espresso was not found memoized index js node modules wordpress build module index js dcnpgettext index js memoized index js index js assets src exit modal survey index js index js assets src exit modal survey index js webpack require bootstrap assets src wp plugins page index js webpack require bootstrap webpack require bootstrap checkdeferredmodules bootstrap webpackjsonpcallback bootstrap
0
828
4,462,761,992
IssuesEvent
2016-08-24 11:09:03
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Force apt module to install a deb file if version is the same as installed version
bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/apt.py ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.1.0 config file = /home/vm/ops/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Specified roles folder, inventory file and a log path <!--- --> ##### OS / ENVIRONMENT N/A <!--- --> ##### SUMMARY <!--- Explain the problem briefly --> A package was installed on a remote ubuntu machine with apt module in ansible. If any change is made to the default configuration file provided by the deb package, a re-run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same,thereby preventing a restore of the configuration on the server. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> 1) Install a package on an ubuntu machine using apt module in ansible 2) Modify a default configuration file provided by the package 3) Re-run the ansible playbook which installs the deb package <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> The default configuration file should have been reverted to the state which is provided by the deb package ##### ACTUAL RESULTS <!--- What actually happened? 
If possible run with extra verbosity (-vvvv) --> The changes to the configuration file persist and are not restored on replays of playbook <!--- Paste verbatim command output between quotes below --> ##### ADDITIONAL DETAILS I dug a bit into the apt.py code and I see the following in line 492 of ansible-modules-core/packaging/os/apt.py: ``` if package_version_compare(pkg_version, installed_version) == 0: # Does not need to down-/upgrade, move on to next package continue ``` This piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package. ##### QUESTIONS 1) Although we can issue a raw command to run "dpkg -i" instead of using the apt module, is there any recommended way of achieving the above using the apt package which I might have missed? 2) If there is no existing functionality to achieve the above in apt module, would it make sense to give an option to allow users to prevent skipping of the deb installation in case the package name and versions match? Thanks
True
Force apt module to install a deb file if version is the same as installed version - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME ansible-modules-core/packaging/os/apt.py ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.1.0 config file = /home/vm/ops/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION Specified roles folder, inventory file and a log path <!--- --> ##### OS / ENVIRONMENT N/A <!--- --> ##### SUMMARY <!--- Explain the problem briefly --> A package was installed on a remote ubuntu machine with apt module in ansible. If any change is made to the default configuration file provided by the deb package, a re-run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same,thereby preventing a restore of the configuration on the server. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> 1) Install a package on an ubuntu machine using apt module in ansible 2) Modify a default configuration file provided by the package 3) Re-run the ansible playbook which installs the deb package <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> The default configuration file should have been reverted to the state which is provided by the deb package ##### ACTUAL RESULTS <!--- What actually happened? 
If possible run with extra verbosity (-vvvv) --> The changes to the configuration file persist and are not restored on replays of playbook <!--- Paste verbatim command output between quotes below --> ##### ADDITIONAL DETAILS I dug a bit into the apt.py code and I see the following in line 492 of ansible-modules-core/packaging/os/apt.py: ``` if package_version_compare(pkg_version, installed_version) == 0: # Does not need to down-/upgrade, move on to next package continue ``` This piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package. ##### QUESTIONS 1) Although we can issue a raw command to run "dpkg -i" instead of using the apt module, is there any recommended way of achieving the above using the apt package which I might have missed? 2) If there is no existing functionality to achieve the above in apt module, would it make sense to give an option to allow users to prevent skipping of the deb installation in case the package name and versions match? Thanks
main
force apt module to install a deb file if version is the same as installed version issue type bug report component name ansible modules core packaging os apt py ansible version ansible config file home vm ops ansible ansible cfg configured module search path default w o overrides configuration specified roles folder inventory file and a log path os environment n a summary a package was installed on a remote ubuntu machine with apt module in ansible if any change is made to the default configuration file provided by the deb package a re run of the ansible playbook skips the installation of the deb package in the case where package name and version are the same thereby preventing a restore of the configuration on the server steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used install a package on an ubuntu machine using apt module in ansible modify a default configuration file provided by the package re run the ansible playbook which installs the deb package expected results the default configuration file should have been reverted to the state which is provided by the deb package actual results the changes to the configuration file persist and are not restored on replays of playbook additional details i dug a bit into the apt py code and i see the following in line of ansible modules core packaging os apt py if package version compare pkg version installed version does not need to down upgrade move on to next package continue this piece of code essentially prevents ansible from restoring the configuration of package in case someone went ahead and tampered with the default configuration provided by the deb package questions although we can issue a raw command to run dpkg i instead of using the apt module is there any recommended way of achieving the above using the apt package which i might have missed if there is no existing functionality to achieve the above in apt module would it make sense to 
give an option to allow users to prevent skipping of the deb installation in case the package name and versions match thanks
1
100,011
21,101,531,223
IssuesEvent
2022-04-04 14:52:56
arduino/arduino-ide
https://api.github.com/repos/arduino/arduino-ide
closed
Incorrect "Additional Boards Manager URLs" dialog field height
topic: code type: imperfection criticality: high
## Describe the bug Arduino IDE supports adding installing arbitrary boards platforms via [**Boards Manager**](https://docs.arduino.cc/software/ide-v2/tutorials/ide-v2-board-manager) by adding the URLs of their [package index](https://arduino.github.io/arduino-cli/dev/package_index_json-specification/) file to the "**Additional Boards Manager URLs**" preference. Since users may have several URLs, a dedicated "**Additional Boards Manager URLs**" dialog is offered with a multi-line field. 🐛 The height of the field is very small, to the point where it is not clear that it is an input field at all. ## To Reproduce 1. Select **File > Preferences...** from the Arduino IDE menus. 1. Click the button on the right side of the "**Additional Boards Manager URLs**" field. 🐛 The height of the field is very small: ![image](https://user-images.githubusercontent.com/8572152/156570203-50491980-e6d4-4170-8822-ebd8c2c521f0.png) ## Expected behavior Usable input field height in the "**Additional Boards Manager URLs**" dialog, as it was in previous versions: ![image](https://user-images.githubusercontent.com/8572152/156570231-bd3694f3-4336-49e1-81f1-e6ef2572c583.png) ## Desktop - OS: Windows 10 - Version: 2.0.0-rc4-snapshot-0fc7c78 Date: 2022-03-03T08:39:37.612Z CLI Version: 0.21.0 [10107d24] ## Additional context I bisected the bug to https://github.com/arduino/arduino-ide/commit/112153fb965f63d952d126c8244cd3f84f0a1a1b (it does not occur when using the build from the previous commit https://github.com/arduino/arduino-ide/commit/69ac1f4779589d0d21ce3d37c180b3393ad6156c). --- Originally reported at https://forum.arduino.cc/t/additional-boards-manager-urls-page-cant-read/965207
1.0
Incorrect "Additional Boards Manager URLs" dialog field height - ## Describe the bug Arduino IDE supports adding installing arbitrary boards platforms via [**Boards Manager**](https://docs.arduino.cc/software/ide-v2/tutorials/ide-v2-board-manager) by adding the URLs of their [package index](https://arduino.github.io/arduino-cli/dev/package_index_json-specification/) file to the "**Additional Boards Manager URLs**" preference. Since users may have several URLs, a dedicated "**Additional Boards Manager URLs**" dialog is offered with a multi-line field. 🐛 The height of the field is very small, to the point where it is not clear that it is an input field at all. ## To Reproduce 1. Select **File > Preferences...** from the Arduino IDE menus. 1. Click the button on the right side of the "**Additional Boards Manager URLs**" field. 🐛 The height of the field is very small: ![image](https://user-images.githubusercontent.com/8572152/156570203-50491980-e6d4-4170-8822-ebd8c2c521f0.png) ## Expected behavior Usable input field height in the "**Additional Boards Manager URLs**" dialog, as it was in previous versions: ![image](https://user-images.githubusercontent.com/8572152/156570231-bd3694f3-4336-49e1-81f1-e6ef2572c583.png) ## Desktop - OS: Windows 10 - Version: 2.0.0-rc4-snapshot-0fc7c78 Date: 2022-03-03T08:39:37.612Z CLI Version: 0.21.0 [10107d24] ## Additional context I bisected the bug to https://github.com/arduino/arduino-ide/commit/112153fb965f63d952d126c8244cd3f84f0a1a1b (it does not occur when using the build from the previous commit https://github.com/arduino/arduino-ide/commit/69ac1f4779589d0d21ce3d37c180b3393ad6156c). --- Originally reported at https://forum.arduino.cc/t/additional-boards-manager-urls-page-cant-read/965207
non_main
incorrect additional boards manager urls dialog field height describe the bug arduino ide supports adding installing arbitrary boards platforms via by adding the urls of their file to the additional boards manager urls preference since users may have several urls a dedicated additional boards manager urls dialog is offered with a multi line field 🐛 the height of the field is very small to the point where it is not clear that it is an input field at all to reproduce select file preferences from the arduino ide menus click the button on the right side of the additional boards manager urls field 🐛 the height of the field is very small expected behavior usable input field height in the additional boards manager urls dialog as it was in previous versions desktop os windows version snapshot date cli version additional context i bisected the bug to it does not occur when using the build from the previous commit originally reported at
0
2,276
8,073,371,667
IssuesEvent
2018-08-06 19:01:38
AlexsLemonade/refinebio-frontend
https://api.github.com/repos/AlexsLemonade/refinebio-frontend
closed
Enable Prettier on SCSS files
maintainability review
Now we check all our javascript before every commit using Prettier. We should also be formatting our SCSS files using it.
True
Enable Prettier on SCSS files - Now we check all our javascript before every commit using Prettier. We should also be formatting our SCSS files using it.
main
enable prettier on scss files now we check all our javascript before every commit using prettier we should also be formatting our scss files using it
1
4,175
20,068,592,921
IssuesEvent
2022-02-04 01:42:54
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Invalid API Gateway Response Keys: {'base64Encoded'} revisited
stage/needs-investigation maintainer/need-followup
SAM CLI still returns Invalid API Gateway Response Keys: {'base64Encoded'} in {'statusCode': 200, 'multiValueHeaders': {'Content-Type': ['application/json']}, 'body': '{"name":"boek","isbn":"d28a6052-f216-4c8b-8a7f-726cc73f9918"}', 'base64Encoded': False} This has been reported by https://github.com/aws/aws-sam-cli/issues/1193 I think this response is correct. If you test a lambda function via the AWS Lambda console test button you also get a response containing the property named 'base64Encoded' instead of a property named 'isBase64Encoded' as SAM CLI seems to expect in function _invalid_apig_response_keys in local_apigw_service.py
True
Invalid API Gateway Response Keys: {'base64Encoded'} revisited - SAM CLI still returns Invalid API Gateway Response Keys: {'base64Encoded'} in {'statusCode': 200, 'multiValueHeaders': {'Content-Type': ['application/json']}, 'body': '{"name":"boek","isbn":"d28a6052-f216-4c8b-8a7f-726cc73f9918"}', 'base64Encoded': False} This has been reported by https://github.com/aws/aws-sam-cli/issues/1193 I think this response is correct. If you test a lambda function via the AWS Lambda console test button you also get a response containing the property named 'base64Encoded' instead of a property named 'isBase64Encoded' as SAM CLI seems to expect in function _invalid_apig_response_keys in local_apigw_service.py
main
invalid api gateway response keys revisited sam cli still returns invalid api gateway response keys in statuscode multivalueheaders content type body name boek isbn false this has been reported by i think this response is correct if you test a lambda function via the aws lambda console test button you also get a response containing the property named instead of a property named as sam cli seems to expect in function invalid apig response keys in local apigw service py
1
319,930
23,795,767,993
IssuesEvent
2022-09-02 19:28:14
hashed-io/hashed-substrate
https://api.github.com/repos/hashed-io/hashed-substrate
closed
Updated Bitcoin Vaults pallet documentation
documentation
- [x] Change pallet name on the docs (cli and polkadotjs overview) - [x] Add initial config overview (offchain worker key insertion)
1.0
Updated Bitcoin Vaults pallet documentation - - [x] Change pallet name on the docs (cli and polkadotjs overview) - [x] Add initial config overview (offchain worker key insertion)
non_main
updated bitcoin vaults pallet documentation change pallet name on the docs cli and polkadotjs overview add initial config overview offchain worker key insertion
0
701
4,273,223,247
IssuesEvent
2016-07-13 16:38:59
duckduckgo/zeroclickinfo-goodies
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
closed
JIRA: source no longer active
Maintainer Input Requested
The query for the airplane "DC10" triggers this IA, and the source is closed now. ------ IA Page: http://duck.co/ia/view/jira [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @arroway
True
JIRA: source no longer active - The query for the airplane "DC10" triggers this IA, and the source is closed now. ------ IA Page: http://duck.co/ia/view/jira [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @arroway
main
jira source no longer active the query for the airplane triggers this ia and the source is closed now ia page arroway
1