## Columns

| Column | Dtype | Range / classes (as shown in the preview) |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 5 – 112 |
| repo_url | stringlengths | 34 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 844 |
| labels | stringlengths | 4 – 721 |
| body | stringlengths | 1 – 261k |
| index | stringclasses | 12 values |
| text_combine | stringlengths | 96 – 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 248k |
| binary_label | int64 | 0 – 1 |

## Sample rows
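Taken together, the columns suggest a GitHub-issues labeling dataset. A minimal sketch of one record loaded into pandas, assuming the column semantics above (values abridged from the first sample row; deriving `text_combine` as title + " - " + body is an observation from the samples, not a documented rule):

```python
import pandas as pd

# Abridged version of the first sample row; column names come from the
# schema table above, the body is shortened for illustration.
row = {
    "id": 25076402734.0,
    "type": "IssuesEvent",
    "created_at": "2022-11-07 15:48:29",
    "repo": "OpenTabletDriver/OpenTabletDriver",
    "action": "closed",
    "title": "Aux Buttons should not have Pen Passthrough as an option",
    "body": "## Description Aux buttons with pen passthrough does not make sense.",
    "index": "priority",
    "binary_label": 1,
}
df = pd.DataFrame([row])

# In the samples, text_combine appears to be title and body joined by " - ".
df["text_combine"] = df["title"] + " - " + df["body"]
print(df.loc[0, "text_combine"][:60])
```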
---
- **Unnamed: 0:** 728,372
- **id:** 25,076,402,734
- **type:** IssuesEvent
- **created_at:** 2022-11-07 15:48:29
- **repo:** OpenTabletDriver/OpenTabletDriver
- **repo_url:** https://api.github.com/repos/OpenTabletDriver/OpenTabletDriver
- **action:** closed
- **title:** Aux Buttons should not have Pen Passthrough as an option
- **labels:** enhancement linux/gtk priority:low desktop
- **body:**
## Description Aux buttons with pen passthrough does not make sense. I tried implementing it with stuff like BTN_0 etc, but applications do not seem to know what to do with this. Until we know it makes sense, it's probably smarter to hide it entirely. ## System Information: <!-- Please fill out this information --> | Name | Value | | ---------------- | ----- | | Operating System | Arch Linux | OpenTabletDriver Version | 267a322 | Tablet | XP-Pen Star G960S Plus
- **label:** 1.0
- **text_combine:**
Aux Buttons should not have Pen Passthrough as an option - ## Description Aux buttons with pen passthrough does not make sense. I tried implementing it with stuff like BTN_0 etc, but applications do not seem to know what to do with this. Until we know it makes sense, it's probably smarter to hide it entirely. ## System Information: <!-- Please fill out this information --> | Name | Value | | ---------------- | ----- | | Operating System | Arch Linux | OpenTabletDriver Version | 267a322 | Tablet | XP-Pen Star G960S Plus
- **index:** priority
- **text:**
aux buttons should not have pen passthrough as an option description aux buttons with pen passthrough does not make sense i tried implementing it with stuff like btn etc but applications do not seem to know what to do with this until we know it makes sense it s probably smarter to hide it entirely system information name value operating system arch linux opentabletdriver version tablet xp pen star plus
- **binary_label:** 1
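The `text` column looks like a normalized form of `text_combine`: lowercased, with punctuation, digits, and underscores replaced by spaces (e.g. "BTN_0 etc" becomes "btn etc"). A sketch of such a cleaning step, assuming a simple regex approach; the dataset's actual rules are not documented in this preview:

```python
import re

def normalize(s: str) -> str:
    # Guess at the text-column derivation: lowercase, replace everything
    # outside a-z and whitespace with a space, then collapse whitespace runs.
    s = s.lower()
    s = re.sub(r"[^a-z\s]", " ", s)
    return " ".join(s.split())

print(normalize("Aux Buttons should not have Pen Passthrough as an option"))
# -> aux buttons should not have pen passthrough as an option
```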
---
- **Unnamed: 0:** 533,682
- **id:** 15,596,673,272
- **type:** IssuesEvent
- **created_at:** 2021-03-18 16:06:11
- **repo:** open-telemetry/opentelemetry-specification
- **repo_url:** https://api.github.com/repos/open-telemetry/opentelemetry-specification
- **action:** closed
- **title:** Update YAML files for semantic conventions once supported by markdown generator
- **labels:** area:semantic-conventions priority:p3 release:allowed-for-ga spec:miscellaneous
- **body:**
YAML model for attributes seems to only have number type. But we have int or double as possible attribute types. https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/common/common.md#attributes https://github.com/open-telemetry/opentelemetry-specification/blob/master/semantic_conventions/syntax.md Once the YAML syntax and MD generator are updated (https://github.com/open-telemetry/build-tools/issues/13), we need to update our YAML files here accordingly.
- **label:** 1.0
- **text_combine:**
Update YAML files for semantic conventions once supported by markdown generator - YAML model for attributes seems to only have number type. But we have int or double as possible attribute types. https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/common/common.md#attributes https://github.com/open-telemetry/opentelemetry-specification/blob/master/semantic_conventions/syntax.md Once the YAML syntax and MD generator are updated (https://github.com/open-telemetry/build-tools/issues/13), we need to update our YAML files here accordingly.
- **index:** priority
- **text:**
update yaml files for semantic conventions once supported by markdown generator yaml model for attributes seems to only have number type but we have int or double as possible attribute types once the yaml syntax and md generator are updated we need to update our yaml files here accordingly
- **binary_label:** 1
---
- **Unnamed: 0:** 755,870
- **id:** 26,444,504,099
- **type:** IssuesEvent
- **created_at:** 2023-01-16 05:30:46
- **repo:** codersforcauses/wadl
- **repo_url:** https://api.github.com/repos/codersforcauses/wadl
- **action:** closed
- **title:** Cleanup component filenames
- **labels:** difficulty::easy priority::low
- **body:**
## Basic Information all files are using index.vue, this can be confusing. Give proper filenames
- **label:** 1.0
- **text_combine:**
Cleanup component filenames - ## Basic Information all files are using index.vue, this can be confusing. Give proper filenames
- **index:** priority
- **text:**
cleanup component filenames basic information all files are using index vue this can be confusing give proper filenames
- **binary_label:** 1
---
- **Unnamed: 0:** 303,206
- **id:** 9,303,794,239
- **type:** IssuesEvent
- **created_at:** 2019-03-24 20:08:57
- **repo:** bounswe/bounswe2019group3
- **repo_url:** https://api.github.com/repos/bounswe/bounswe2019group3
- **action:** opened
- **title:** (for Egemen, Bartu, Orkan, Ekrem) Document your research about GitHub under GitHub Research wiki page
- **labels:** Priority: Low Status: Pending Type: Assignment
- **body:**
- [ ] Egemen Kaplan - [ ] Bartu Ören - [ ] Orkan Akısü - [ ] Muhammet Ekrem Gezgen Document the following research you will perform: • Explore GitHub repositories to discover repos that you like. Document them by giving their references and describing what you liked. • Study git as a version management system. There are many guides and videos you can watch. I recommend Git For Ages 4 And Up (https://youtu.be/1ffBJ4sVUb4). It is long, but very good. However, you can choose a different resource if you prefer.
- **label:** 1.0
- **text_combine:**
(for Egemen, Bartu, Orkan, Ekrem) Document your research about GitHub under GitHub Research wiki page - - [ ] Egemen Kaplan - [ ] Bartu Ören - [ ] Orkan Akısü - [ ] Muhammet Ekrem Gezgen Document the following research you will perform: • Explore GitHub repositories to discover repos that you like. Document them by giving their references and describing what you liked. • Study git as a version management system. There are many guides and videos you can watch. I recommend Git For Ages 4 And Up (https://youtu.be/1ffBJ4sVUb4). It is long, but very good. However, you can choose a different resource if you prefer.
- **index:** priority
- **text:**
for egemen bartu orkan ekrem document your research about github under github research wiki page egemen kaplan bartu ören orkan akısü muhammet ekrem gezgen document the following research you will perform • explore github repositories to discover repos that you like document them by giving their references and describing what you liked • study git as a version management system there are many guides and videos you can watch i recommend git for ages and up it is long but very good however you can choose a different resource if you prefer
- **binary_label:** 1
---
- **Unnamed: 0:** 80,992
- **id:** 3,587,083,923
- **type:** IssuesEvent
- **created_at:** 2016-01-30 02:52:29
- **repo:** onyxfish/csvkit
- **repo_url:** https://api.github.com/repos/onyxfish/csvkit
- **action:** closed
- **title:** Add ability to modify or friendly format numerical values
- **labels:** feature Low Priority
- **body:**
It would be useful to have an option that friendly formatted or applied modifiers to numbers. For example, if you had the float: ``` 5081998.9369165478 ``` It would print it out as: ``` 5,081,998.9369165478 ``` I'm currently printing out a large table of numbers using `csvlook`, and it would be much more readable if I could add commas to all the numbers, so I can get a rough idea of scale (particularly since the numbers are all left-aligned, and not right-aligned, and I can't seem to find a way in `csvkit` to change that). Or you might have modifiers to truncate the fractional part, rounded up/down etc. which you can apply before printing.
- **label:** 1.0
- **text_combine:**
Add ability to modify or friendly format numerical values - It would be useful to have an option that friendly formatted or applied modifiers to numbers. For example, if you had the float: ``` 5081998.9369165478 ``` It would print it out as: ``` 5,081,998.9369165478 ``` I'm currently printing out a large table of numbers using `csvlook`, and it would be much more readable if I could add commas to all the numbers, so I can get a rough idea of scale (particularly since the numbers are all left-aligned, and not right-aligned, and I can't seem to find a way in `csvkit` to change that). Or you might have modifiers to truncate the fractional part, rounded up/down etc. which you can apply before printing.
- **index:** priority
- **text:**
add ability to modify or friendly format numerical values it would be useful to have an option that friendly formatted or applied modifiers to numbers for example if you had the float it would print it out as i m currently printing out a large table of numbers using csvlook and it would be much more readable if i could add commas to all the numbers so i can get a rough idea of scale particularly since the numbers are all left aligned and not right aligned and i can t seem to find a way in csvkit to change that or you might have modifiers to truncate the fractional part rounded up down etc which you can apply before printing
- **binary_label:** 1
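The grouped-digit formatting this csvkit issue asks for is built into Python's format mini-language; a quick illustration with the float from the issue (this only shows the desired output, not a csvkit feature):

```python
# "," in a format spec inserts thousands separators into the integer part.
value = 5081998.9369165478
formatted = f"{value:,}"
print(formatted)  # 5,081,998.9369... (float repr may round the last digits)

# Integers group the same way.
print(f"{1234567:,}")  # -> 1,234,567
```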
---
- **Unnamed: 0:** 324,685
- **id:** 9,907,574,965
- **type:** IssuesEvent
- **created_at:** 2019-06-27 16:06:19
- **repo:** MontrealCorpusTools/PolyglotDB
- **repo_url:** https://api.github.com/repos/MontrealCorpusTools/PolyglotDB
- **action:** opened
- **title:** Query module refactoring
- **labels:** lower priority query
- **body:**
The query modules are a bit messy and could use cleaning up/refactoring, with some overlapping methods, inconsistent naming schemes from various additions of code and general lack of clarity in how pieces are structured and fit together.
- **label:** 1.0
- **text_combine:**
Query module refactoring - The query modules are a bit messy and could use cleaning up/refactoring, with some overlapping methods, inconsistent naming schemes from various additions of code and general lack of clarity in how pieces are structured and fit together.
- **index:** priority
- **text:**
query module refactoring the query modules are a bit messy and could use cleaning up refactoring with some overlapping methods inconsistent naming schemes from various additions of code and general lack of clarity in how pieces are structured and fit together
- **binary_label:** 1
---
- **Unnamed: 0:** 556,082
- **id:** 16,474,063,001
- **type:** IssuesEvent
- **created_at:** 2021-05-24 00:29:58
- **repo:** zephyrproject-rtos/zephyr
- **repo_url:** https://api.github.com/repos/zephyrproject-rtos/zephyr
- **action:** closed
- **title:** Multiple vlan interfaces on same interface not working
- **labels:** Stale area: Networking bug priority: low
- **body:**
**Describe the bug** This issue was already hinted at in issue #26235, but the author was asked to create a separate issue. I could not find such a followup issue, if I missed it please link it here and close this one. I want to run two vlan interfaces on the same ethernet interface in addition to the normal interface (board: nucleo f767zi). To achieve this, I extended the vlan sample with an additional vlan interface, similar to the one that is already present (I extended the code, the KConfig file and the prj.conf accordingly). On my machine, I used the zeth-vlan.conf from the net-tools repo. I can ping all interfaces indivually. I can also ping one vlan interface and the normal interface at the same time. But pinging both vlan interfaces at the same time stops working quickly. In the previous issue, it was already discovered that the problem shows itself in `ethernet_send` in ethernet.c. It can easily be observed by enabling the debug output and checking the output of `set_vlan_tag`, which discovers that the interface indicated in the packet and the interface given to the function are different. `set_vlan_tag` handles this by overwriting the interface that was passed as parameter with the interface returned by `net_if_ipv6_addr_lookup(&NET_IPV6_HDR(pkt)->src, &target))`. As far as I can tell, the interface returned by `net_if_ipv6_addr_lookup(&NET_IPV6_HDR(pkt)->src, &target))` is the correct one. This change is never propagated to `ethernet_send`, which then proceeds with the wrong vlan interface, resulting in an error. The interface parameter `iface` is created in `process_tx_packet` in net_if.c: ``` pkt = CONTAINER_OF(work, struct net_pkt, work); net_pkt_set_tx_stats_tick(pkt, k_cycle_get_32()); iface = net_pkt_iface(pkt); ``` Unfortunately, I am not familiar enough with the inner workings to know where the packet originates from. **To Reproduce** Steps to reproduce the behavior: 1. Extend the vlan sample with an additional vlan interface. 2. 
Compile and flash it to the board with debug output enabled. 3. Ping multiple vlan interfaces at the same time. 4. After a short amount of time, the pings stop working. You can also check the output in a serial terminal: `Iface <iface1> should be <iface2>` **Expected behavior** Pings with multiple vlans should work normally. **Logs and console output** prj.conf: ``` CONFIG_NETWORKING=y CONFIG_NET_LOG=y CONFIG_NET_IPV6=y CONFIG_NET_IPV4=n CONFIG_NET_DHCPV4=n CONFIG_NET_UDP=y CONFIG_NET_TCP=n CONFIG_NET_STATISTICS=y CONFIG_TEST_RANDOM_GENERATOR=n CONFIG_NET_PKT_RX_COUNT=32 CONFIG_NET_PKT_TX_COUNT=32 CONFIG_NET_BUF_RX_COUNT=32 CONFIG_NET_BUF_TX_COUNT=32 CONFIG_NET_IF_UNICAST_IPV6_ADDR_COUNT=10 CONFIG_NET_IF_MCAST_IPV6_ADDR_COUNT=10 #CONFIG_NET_IF_UNICAST_IPV4_ADDR_COUNT=1 CONFIG_NET_MAX_CONTEXTS=20 CONFIG_INIT_STACKS=y CONFIG_PRINTK=y CONFIG_NET_SHELL=y # Ethernet is needed for VLAN CONFIG_NET_L2_ETHERNET=y CONFIG_NET_CONFIG_NEED_IPV6=y CONFIG_NET_CONFIG_NEED_IPV4=n CONFIG_NET_CONFIG_SETTINGS=y # First ethernet interface will use these settings CONFIG_NET_CONFIG_MY_IPV6_ADDR="2001:db8::1" CONFIG_NET_CONFIG_PEER_IPV6_ADDR="2001:db8::2" # VLAN tag for the first interface CONFIG_SAMPLE_VLAN_TAG_1=100 CONFIG_SAMPLE_IPV6_ADDR_1="2001:db8:100::1" CONFIG_LOG_STRDUP_BUF_COUNT=40 # Settings for the second network interface CONFIG_SAMPLE_IPV6_ADDR_2="2001:db8:200::1" CONFIG_SAMPLE_VLAN_TAG_2=200 CONFIG_LOG=y CONFIG_NET_VLAN=y CONFIG_NET_VLAN_COUNT=3 CONFIG_NET_ROUTING=y ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: (arm-none-eabi-gcc 10.2.0) - Version: 2.4.99 (latest master) **Additional context** pr #29865 is from the same author that mentioned the problem in the issue but does not seem to address this problem.
- **label:** 1.0
- **text_combine:**
Multiple vlan interfaces on same interface not working - **Describe the bug** This issue was already hinted at in issue #26235, but the author was asked to create a separate issue. I could not find such a followup issue, if I missed it please link it here and close this one. I want to run two vlan interfaces on the same ethernet interface in addition to the normal interface (board: nucleo f767zi). To achieve this, I extended the vlan sample with an additional vlan interface, similar to the one that is already present (I extended the code, the KConfig file and the prj.conf accordingly). On my machine, I used the zeth-vlan.conf from the net-tools repo. I can ping all interfaces indivually. I can also ping one vlan interface and the normal interface at the same time. But pinging both vlan interfaces at the same time stops working quickly. In the previous issue, it was already discovered that the problem shows itself in `ethernet_send` in ethernet.c. It can easily be observed by enabling the debug output and checking the output of `set_vlan_tag`, which discovers that the interface indicated in the packet and the interface given to the function are different. `set_vlan_tag` handles this by overwriting the interface that was passed as parameter with the interface returned by `net_if_ipv6_addr_lookup(&NET_IPV6_HDR(pkt)->src, &target))`. As far as I can tell, the interface returned by `net_if_ipv6_addr_lookup(&NET_IPV6_HDR(pkt)->src, &target))` is the correct one. This change is never propagated to `ethernet_send`, which then proceeds with the wrong vlan interface, resulting in an error. The interface parameter `iface` is created in `process_tx_packet` in net_if.c: ``` pkt = CONTAINER_OF(work, struct net_pkt, work); net_pkt_set_tx_stats_tick(pkt, k_cycle_get_32()); iface = net_pkt_iface(pkt); ``` Unfortunately, I am not familiar enough with the inner workings to know where the packet originates from. **To Reproduce** Steps to reproduce the behavior: 1. 
Extend the vlan sample with an additional vlan interface. 2. Compile and flash it to the board with debug output enabled. 3. Ping multiple vlan interfaces at the same time. 4. After a short amount of time, the pings stop working. You can also check the output in a serial terminal: `Iface <iface1> should be <iface2>` **Expected behavior** Pings with multiple vlans should work normally. **Logs and console output** prj.conf: ``` CONFIG_NETWORKING=y CONFIG_NET_LOG=y CONFIG_NET_IPV6=y CONFIG_NET_IPV4=n CONFIG_NET_DHCPV4=n CONFIG_NET_UDP=y CONFIG_NET_TCP=n CONFIG_NET_STATISTICS=y CONFIG_TEST_RANDOM_GENERATOR=n CONFIG_NET_PKT_RX_COUNT=32 CONFIG_NET_PKT_TX_COUNT=32 CONFIG_NET_BUF_RX_COUNT=32 CONFIG_NET_BUF_TX_COUNT=32 CONFIG_NET_IF_UNICAST_IPV6_ADDR_COUNT=10 CONFIG_NET_IF_MCAST_IPV6_ADDR_COUNT=10 #CONFIG_NET_IF_UNICAST_IPV4_ADDR_COUNT=1 CONFIG_NET_MAX_CONTEXTS=20 CONFIG_INIT_STACKS=y CONFIG_PRINTK=y CONFIG_NET_SHELL=y # Ethernet is needed for VLAN CONFIG_NET_L2_ETHERNET=y CONFIG_NET_CONFIG_NEED_IPV6=y CONFIG_NET_CONFIG_NEED_IPV4=n CONFIG_NET_CONFIG_SETTINGS=y # First ethernet interface will use these settings CONFIG_NET_CONFIG_MY_IPV6_ADDR="2001:db8::1" CONFIG_NET_CONFIG_PEER_IPV6_ADDR="2001:db8::2" # VLAN tag for the first interface CONFIG_SAMPLE_VLAN_TAG_1=100 CONFIG_SAMPLE_IPV6_ADDR_1="2001:db8:100::1" CONFIG_LOG_STRDUP_BUF_COUNT=40 # Settings for the second network interface CONFIG_SAMPLE_IPV6_ADDR_2="2001:db8:200::1" CONFIG_SAMPLE_VLAN_TAG_2=200 CONFIG_LOG=y CONFIG_NET_VLAN=y CONFIG_NET_VLAN_COUNT=3 CONFIG_NET_ROUTING=y ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: (arm-none-eabi-gcc 10.2.0) - Version: 2.4.99 (latest master) **Additional context** pr #29865 is from the same author that mentioned the problem in the issue but does not seem to address this problem.
- **index:** priority
- **text:**
multiple vlan interfaces on same interface not working describe the bug this issue was already hinted at in issue but the author was asked to create a separate issue i could not find such a followup issue if i missed it please link it here and close this one i want to run two vlan interfaces on the same ethernet interface in addition to the normal interface board nucleo to achieve this i extended the vlan sample with an additional vlan interface similar to the one that is already present i extended the code the kconfig file and the prj conf accordingly on my machine i used the zeth vlan conf from the net tools repo i can ping all interfaces indivually i can also ping one vlan interface and the normal interface at the same time but pinging both vlan interfaces at the same time stops working quickly in the previous issue it was already discovered that the problem shows itself in ethernet send in ethernet c it can easily be observed by enabling the debug output and checking the output of set vlan tag which discovers that the interface indicated in the packet and the interface given to the function are different set vlan tag handles this by overwriting the interface that was passed as parameter with the interface returned by net if addr lookup net hdr pkt src target as far as i can tell the interface returned by net if addr lookup net hdr pkt src target is the correct one this change is never propagated to ethernet send which then proceeds with the wrong vlan interface resulting in an error the interface parameter iface is created in process tx packet in net if c pkt container of work struct net pkt work net pkt set tx stats tick pkt k cycle get iface net pkt iface pkt unfortunately i am not familiar enough with the inner workings to know where the packet originates from to reproduce steps to reproduce the behavior extend the vlan sample with an additional vlan interface compile and flash it to the board with debug output enabled ping multiple vlan interfaces at the 
same time after a short amount of time the pings stop working you can also check the output in a serial terminal iface should be expected behavior pings with multiple vlans should work normally logs and console output prj conf config networking y config net log y config net y config net n config net n config net udp y config net tcp n config net statistics y config test random generator n config net pkt rx count config net pkt tx count config net buf rx count config net buf tx count config net if unicast addr count config net if mcast addr count config net if unicast addr count config net max contexts config init stacks y config printk y config net shell y ethernet is needed for vlan config net ethernet y config net config need y config net config need n config net config settings y first ethernet interface will use these settings config net config my addr config net config peer addr vlan tag for the first interface config sample vlan tag config sample addr config log strdup buf count settings for the second network interface config sample addr config sample vlan tag config log y config net vlan y config net vlan count config net routing y environment please complete the following information os linux toolchain arm none eabi gcc version latest master additional context pr is from the same author that mentioned the problem in the issue but does not seem to address this problem
- **binary_label:** 1
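The crux of this Zephyr report is that `set_vlan_tag` overwrites only its local copy of the interface parameter, so the caller (`ethernet_send`) never sees the correction. That is a general pass-by-value pitfall, sketched here in Python with hypothetical names (not Zephyr code):

```python
def set_vlan_tag(iface, correct_iface):
    # Rebinding the parameter changes only this function's local name;
    # the caller's variable still refers to the original interface.
    iface = correct_iface
    return iface  # the correction must be returned (or stored) to propagate

caller_iface = "vlan100"
fixed = set_vlan_tag(caller_iface, "vlan200")
print(caller_iface, fixed)  # -> vlan100 vlan200
```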
---
- **Unnamed: 0:** 67,504
- **id:** 3,274,630,515
- **type:** IssuesEvent
- **created_at:** 2015-10-26 12:01:04
- **repo:** ManoSeimas/manoseimas.lt
- **repo_url:** https://api.github.com/repos/ManoSeimas/manoseimas.lt
- **action:** closed
- **title:** Lobbyist list pagination overlaps HR
- **labels:** bug priority: 3 - low
- **body:**
**Steps to reproduce**: - Go to manoseimas.lt - Click "Lobistai" - Click on the name of a Lobbyist, e.g. http://www.manoseimas.lt/lobbyists/lobbyist/uab-vento-nuovo **Expected results**: Pagination buttons show up above the horizontal rule. **Actual results**: Pagination buttons overlap the horizontal rule. See image below: ![image](https://cloud.githubusercontent.com/assets/4104268/10576491/55138818-766c-11e5-9ea0-8b6caf783958.png)
- **label:** 1.0
- **text_combine:**
Lobbyist list pagination overlaps HR - **Steps to reproduce**: - Go to manoseimas.lt - Click "Lobistai" - Click on the name of a Lobbyist, e.g. http://www.manoseimas.lt/lobbyists/lobbyist/uab-vento-nuovo **Expected results**: Pagination buttons show up above the horizontal rule. **Actual results**: Pagination buttons overlap the horizontal rule. See image below: ![image](https://cloud.githubusercontent.com/assets/4104268/10576491/55138818-766c-11e5-9ea0-8b6caf783958.png)
- **index:** priority
- **text:**
lobbyist list pagination overlaps hr steps to reproduce go to manoseimas lt click lobistai click on the name of a lobbyist e g expected results pagination buttons show up above the horizontal rule actual results pagination buttons overlap the horizontal rule see image below
- **binary_label:** 1
---
- **Unnamed: 0:** 375,810
- **id:** 11,134,875,177
- **type:** IssuesEvent
- **created_at:** 2019-12-20 13:01:00
- **repo:** nanopb/nanopb
- **repo_url:** https://api.github.com/repos/nanopb/nanopb
- **action:** closed
- **title:** Checking nanopb for Windows into Git results in missing files since .gitignore in the zip ignores .pyc etc.
- **labels:** FixedInGit Priority-Low Type-Review
- **body:**
nanopb-windows-x86\generator-bin>protoc.exe --nanopb_out=source --proto_path=myfile.proto ************************************************************* *** Could not import the Google protobuf Python libraries *** *** Try installing package 'python-protobuf' or similar. *** ************************************************************* Traceback (most recent call last): File "<string>", line 6, in <module> File "__main__.py", line 128, in <module> File "__main__protoc-gen-nanopb__.py", line 23, in <module> ImportError: No module named protobuf.text_format --nanopb_out: protoc-gen-nanopb: Plugin failed with status code 1.
- **label:** 1.0
- **text_combine:**
Checking nanopb for Windows into Git results in missing files since .gitignore in the zip ignores .pyc etc. - nanopb-windows-x86\generator-bin>protoc.exe --nanopb_out=source --proto_path=myfile.proto ************************************************************* *** Could not import the Google protobuf Python libraries *** *** Try installing package 'python-protobuf' or similar. *** ************************************************************* Traceback (most recent call last): File "<string>", line 6, in <module> File "__main__.py", line 128, in <module> File "__main__protoc-gen-nanopb__.py", line 23, in <module> ImportError: No module named protobuf.text_format --nanopb_out: protoc-gen-nanopb: Plugin failed with status code 1.
- **index:** priority
- **text:**
checking nanopb for windows into git results in missing files since gitignore in the zip ignores pyc etc nanopb windows generator bin protoc exe nanopb out source proto path myfile proto could not import the google protobuf python libraries try installing package python protobuf or similar traceback most recent call last file line in file main py line in file main protoc gen nanopb py line in importerror no module named protobuf text format nanopb out protoc gen nanopb plugin failed with status code
- **binary_label:** 1
---
- **Unnamed: 0:** 520,500
- **id:** 15,087,081,443
- **type:** IssuesEvent
- **created_at:** 2021-02-05 21:27:43
- **repo:** vmware/clarity
- **repo_url:** https://api.github.com/repos/vmware/clarity
- **action:** closed
- **title:** The action pop-up in the single row action data-grid is not positioned correctly
- **labels:** @clr/angular priority: 1 low status: needs info type: enhancement
- **body:**
## Describe the bug The position of the action pop-up in single action data-grid is not calculated correctly when the height/width of the window changes. As a result the pop-up is not aligned to the selected grid element. ## How to reproduce https://stackblitz.com/edit/clarity-light-theme-v2-owaezn In a more complex application the issue is reproduced when different DOM elements show/hide after the window height/width is already calculated. In this cases when we try to open the action pop-up it is either not shown or it's shown in a wrong position. In a simple app, the easiest way to reproduce is window resize. Steps to reproduce the behavior: 1. Go to https://stackblitz.com/edit/clarity-light-theme-v2-owaezn -> single action data-grid 2. Click on single selection menu in some of the grid rows 3. Resize the window 4. The pop-up is still visible, but it's not aligned to the selected row. ## Expected behavior The pop-up should be aligned correctly to the selected grid row. ## Versions **App** Angular: [8.3.23] Node: [e.g. 10.14.6] Clarity: [2.3.5] **Device:** Type: [Windows] OS: [Windows 10] Browser [Firefox] ## Additional notes _Add any other notes about the problem here._
- **label:** 1.0
- **text_combine:**
The action pop-up in the single row action data-grid is not positioned correctly - ## Describe the bug The position of the action pop-up in single action data-grid is not calculated correctly when the height/width of the window changes. As a result the pop-up is not aligned to the selected grid element. ## How to reproduce https://stackblitz.com/edit/clarity-light-theme-v2-owaezn In a more complex application the issue is reproduced when different DOM elements show/hide after the window height/width is already calculated. In this cases when we try to open the action pop-up it is either not shown or it's shown in a wrong position. In a simple app, the easiest way to reproduce is window resize. Steps to reproduce the behavior: 1. Go to https://stackblitz.com/edit/clarity-light-theme-v2-owaezn -> single action data-grid 2. Click on single selection menu in some of the grid rows 3. Resize the window 4. The pop-up is still visible, but it's not aligned to the selected row. ## Expected behavior The pop-up should be aligned correctly to the selected grid row. ## Versions **App** Angular: [8.3.23] Node: [e.g. 10.14.6] Clarity: [2.3.5] **Device:** Type: [Windows] OS: [Windows 10] Browser [Firefox] ## Additional notes _Add any other notes about the problem here._
- **index:** priority
- **text:**
the action pop up in the single row action data grid is not positioned correctly describe the bug the position of the action pop up in single action data grid is not calculated correctly when the height width of the window changes as a result the pop up is not aligned to the selected grid element how to reproduce in a more complex application the issue is reproduced when different dom elements show hide after the window height width is already calculated in this cases when we try to open the action pop up it is either not shown or it s shown in a wrong position in a simple app the easiest way to reproduce is window resize steps to reproduce the behavior go to single action data grid click on single selection menu in some of the grid rows resize the window the pop up is still visible but it s not aligned to the selected row expected behavior the pop up should be aligned correctly to the selected grid row versions app angular node clarity device type os browser additional notes add any other notes about the problem here
- **binary_label:** 1
---
- **Unnamed: 0:** 261,709
- **id:** 8,245,239,678
- **type:** IssuesEvent
- **created_at:** 2018-09-11 09:07:12
- **repo:** nlbdev/nordic-epub3-dtbook-migrator
- **repo_url:** https://api.github.com/repos/nlbdev/nordic-epub3-dtbook-migrator
- **action:** closed
- **title:** Reconsider whether rearnotes should have linear="no"
- **labels:** 0 - Low priority guidelines revision question
- **body:**
You may want to read the rearnotes when you're done reading its chapter. Using linear="no" would cause reading systems to skip that document unless you explicitly click a noteref that links to it or if you select it in the toc. <!--- @huboard:{"order":0.0009765625} -->
- **label:** 1.0
- **text_combine:**
Reconsider whether rearnotes should have linear="no" - You may want to read the rearnotes when you're done reading its chapter. Using linear="no" would cause reading systems to skip that document unless you explicitly click a noteref that links to it or if you select it in the toc. <!--- @huboard:{"order":0.0009765625} -->
- **index:** priority
- **text:**
reconsider whether rearnotes should have linear no you may want to read the rearnotes when you re done reading its chapter using linear no would cause reading systems to skip that document unless you explicitly click a noteref that links to it or if you select it in the toc huboard order
- **binary_label:** 1
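For context, `linear="no"` is set per `itemref` in the EPUB package document's spine; a minimal sketch of the markup under discussion (the `idref` values are hypothetical):

```xml
<spine>
  <itemref idref="chapter1"/>
  <!-- linear="no": reading systems may skip this document in the default
       reading order unless the user navigates to it, as described above -->
  <itemref idref="rearnotes" linear="no"/>
</spine>
```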
---
- **Unnamed: 0:** 35,055
- **id:** 2,789,775,138
- **type:** IssuesEvent
- **created_at:** 2015-05-08 21:24:58
- **repo:** google/google-visualization-api-issues
- **repo_url:** https://api.github.com/repos/google/google-visualization-api-issues
- **action:** opened
- **title:** Vertical label wrap for Line Charts
- **labels:** Priority-Low Type-Enhancement
- **body:**
Original [issue 91](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=91) created by orwant on 2009-10-16T14:32:44.000Z: IT would be nice to have the option to wrap vertical labels on Line Charts (and anywhere else wrapping might make sense). Often have just a few X-axis points but the client wants non-abbreviated descriptions. Even with them on an angle the labels take nearly as much vertical space as the chart body.
- **label:** 1.0
- **text_combine:**
Vertical label wrap for Line Charts - Original [issue 91](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=91) created by orwant on 2009-10-16T14:32:44.000Z: IT would be nice to have the option to wrap vertical labels on Line Charts (and anywhere else wrapping might make sense). Often have just a few X-axis points but the client wants non-abbreviated descriptions. Even with them on an angle the labels take nearly as much vertical space as the chart body.
- **index:** priority
- **text:**
vertical label wrap for line charts original created by orwant on it would be nice to have the option to wrap vertical labels on line charts and anywhere else wrapping might make sense often have just a few x axis points but the client wants non abbreviated descriptions even with them on an angle the labels take nearly as much vertical space as the chart body
- **binary_label:** 1
---
- **Unnamed: 0:** 85,900
- **id:** 3,700,080,409
- **type:** IssuesEvent
- **created_at:** 2016-02-29 05:53:42
- **repo:** olpeh/wht
- **repo_url:** https://api.github.com/repos/olpeh/wht
- **action:** opened
- **title:** More features when using the commandline arguments
- **labels:** Feature request low-priority
- **body:**
## Todo - [x] Starting - [x] Stopping - [ ] Possibility to select project - [ ] Possibility to select task - [ ] Possibility to set description - [ ] Possibility to save using last entered specs - [ ] Fix an issue that causes breakduration to become -1 - [ ] Allow only one instance of the app to run
- **label:** 1.0
- **text_combine:**
More features when using the commandline arguments - ## Todo - [x] Starting - [x] Stopping - [ ] Possibility to select project - [ ] Possibility to select task - [ ] Possibility to set description - [ ] Possibility to save using last entered specs - [ ] Fix an issue that causes breakduration to become -1 - [ ] Allow only one instance of the app to run
- **index:** priority
- **text:**
more features when using the commandline arguments todo starting stopping possibility to select project possibility to select task possibility to set description possibility to save using last entered specs fix an issue that causes breakduration to become   allow only one instance of the app to run
- **binary_label:** 1
---
- **Unnamed: 0:** 258,822
- **id:** 8,179,968,504
- **type:** IssuesEvent
- **created_at:** 2018-08-28 17:59:40
- **repo:** kjohnsen/MMAPPR2
- **repo_url:** https://api.github.com/repos/kjohnsen/MMAPPR2
- **action:** closed
- **title:** Auto-index BAM files
- **labels:** complexity-low enhancement priority-medium
- **body:**
This already works somewhat, but only at the time the param is created. If an already-created param references a BAM file without an index, for example, MMAPPR will crash.
1.0
Auto-index BAM files - This already works somewhat, but only at the time the param is created. If an already-created param references a BAM file without an index, for example, MMAPPR will crash.
priority
auto index bam files this already works somewhat but only at the time the param is created if an already created param references a bam file without an index for example mmappr will crash
1
301,508
9,221,094,452
IssuesEvent
2019-03-11 19:05:36
ME-ICA/tedana
https://api.github.com/repos/ME-ICA/tedana
closed
PCA on wavelet-transformed data is failing
bug low-priority
Performing PCA on the wavelet-transformed data (i.e., using the `wvpca` flag) is currently raising an error. It's possible that the shape checks I added for the inputs to various functions are too strict, in which case we will need to adjust those and improve the parameter documentation for affect functions. Given that wvpca is untested and appears to be almost completely unused, I'm marking this as `low-priority`. Here is the command used: ```shell dset_dir=/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data tedana -d ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-1_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-2_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-3_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-4_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-5_desc-preproc_bold.nii.gz \ -e 15.4 29.7 44.0 58.3 72.6 --wvpca --debug \ --out-dir /Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/current-wvpca ``` And the full traceback: ``` INFO:tedana.workflows.tedana:Using output directory: /Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/current-wvpca INFO:tedana.workflows.tedana:Loading input data: ['/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-1_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-2_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-3_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-4_desc-preproc_bold.nii.gz', 
'/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-5_desc-preproc_bold.nii.gz'] DEBUG:tedana.workflows.tedana:Resulting data shape: (59696, 5, 160) INFO:tedana.workflows.tedana:Computing adaptive mask DEBUG:tedana.workflows.tedana:Retaining 24786/59696 samples INFO:tedana.workflows.tedana:Computing T2* map /Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/scipy/stats/stats.py:1706: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval DEBUG:tedana.workflows.tedana:Setting cap on T2* map at 1158.56472 INFO:tedana.combine:Optimally combining data with voxel-wise T2 estimates INFO:tedana.decomposition.eigendecomp:Computing PCA of optimally combined multi-echo data DEBUG:tedana.decomposition._utils:Creating eimask for echo 0 DEBUG:tedana.decomposition._utils:Eimask threshold boundaries: 22.015 110074.029 INFO:tedana.decomposition.eigendecomp:Making initial component selection guess from PCA results Traceback (most recent call last): File "/Users/tsalo/anaconda/envs/python3/bin/tedana", line 11, in <module> load_entry_point('tedana', 'console_scripts', 'tedana')() File "/Users/tsalo/Documents/tsalo/tedana/tedana/workflows/tedana.py", line 384, in _main tedana_workflow(**vars(options)) File "/Users/tsalo/Documents/tsalo/tedana/tedana/workflows/tedana.py", line 321, in tedana_workflow ste=ste, wvpca=wvpca) File "/Users/tsalo/Documents/tsalo/tedana/tedana/decomposition/eigendecomp.py", line 208, in tedpca mmixN=vTmixN, full_sel=False) File "/Users/tsalo/Documents/tsalo/tedana/tedana/model/fit.py", line 80, in fitmodels_direct 'mmix ({1})'.format(catd.shape[2], mmix.shape[0])) ValueError: Third dimension (number of 
volumes) of catd (160) does not match first dimension of mmix (162) ```
1.0
PCA on wavelet-transformed data is failing - Performing PCA on the wavelet-transformed data (i.e., using the `wvpca` flag) is currently raising an error. It's possible that the shape checks I added for the inputs to various functions are too strict, in which case we will need to adjust those and improve the parameter documentation for affect functions. Given that wvpca is untested and appears to be almost completely unused, I'm marking this as `low-priority`. Here is the command used: ```shell dset_dir=/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data tedana -d ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-1_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-2_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-3_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-4_desc-preproc_bold.nii.gz \ ${dset_dir}/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-5_desc-preproc_bold.nii.gz \ -e 15.4 29.7 44.0 58.3 72.6 --wvpca --debug \ --out-dir /Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/current-wvpca ``` And the full traceback: ``` INFO:tedana.workflows.tedana:Using output directory: /Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/current-wvpca INFO:tedana.workflows.tedana:Loading input data: ['/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-1_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-2_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-3_desc-preproc_bold.nii.gz', '/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-4_desc-preproc_bold.nii.gz', 
'/Users/tsalo/Documents/tsalo/tedana-comparison/sandbox/e5_data/sub-01_ses-09_task-flashingcheckerboard_run-01_echo-5_desc-preproc_bold.nii.gz'] DEBUG:tedana.workflows.tedana:Resulting data shape: (59696, 5, 160) INFO:tedana.workflows.tedana:Computing adaptive mask DEBUG:tedana.workflows.tedana:Retaining 24786/59696 samples INFO:tedana.workflows.tedana:Computing T2* map /Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/scipy/stats/stats.py:1706: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval DEBUG:tedana.workflows.tedana:Setting cap on T2* map at 1158.56472 INFO:tedana.combine:Optimally combining data with voxel-wise T2 estimates INFO:tedana.decomposition.eigendecomp:Computing PCA of optimally combined multi-echo data DEBUG:tedana.decomposition._utils:Creating eimask for echo 0 DEBUG:tedana.decomposition._utils:Eimask threshold boundaries: 22.015 110074.029 INFO:tedana.decomposition.eigendecomp:Making initial component selection guess from PCA results Traceback (most recent call last): File "/Users/tsalo/anaconda/envs/python3/bin/tedana", line 11, in <module> load_entry_point('tedana', 'console_scripts', 'tedana')() File "/Users/tsalo/Documents/tsalo/tedana/tedana/workflows/tedana.py", line 384, in _main tedana_workflow(**vars(options)) File "/Users/tsalo/Documents/tsalo/tedana/tedana/workflows/tedana.py", line 321, in tedana_workflow ste=ste, wvpca=wvpca) File "/Users/tsalo/Documents/tsalo/tedana/tedana/decomposition/eigendecomp.py", line 208, in tedpca mmixN=vTmixN, full_sel=False) File "/Users/tsalo/Documents/tsalo/tedana/tedana/model/fit.py", line 80, in fitmodels_direct 'mmix ({1})'.format(catd.shape[2], mmix.shape[0])) ValueError: Third dimension (number of 
volumes) of catd (160) does not match first dimension of mmix (162) ```
priority
pca on wavelet transformed data is failing performing pca on the wavelet transformed data i e using the wvpca flag is currently raising an error it s possible that the shape checks i added for the inputs to various functions are too strict in which case we will need to adjust those and improve the parameter documentation for affect functions given that wvpca is untested and appears to be almost completely unused i m marking this as low priority here is the command used shell dset dir users tsalo documents tsalo tedana comparison sandbox data tedana d dset dir sub ses task flashingcheckerboard run echo desc preproc bold nii gz dset dir sub ses task flashingcheckerboard run echo desc preproc bold nii gz dset dir sub ses task flashingcheckerboard run echo desc preproc bold nii gz dset dir sub ses task flashingcheckerboard run echo desc preproc bold nii gz dset dir sub ses task flashingcheckerboard run echo desc preproc bold nii gz e wvpca debug out dir users tsalo documents tsalo tedana comparison sandbox current wvpca and the full traceback info tedana workflows tedana using output directory users tsalo documents tsalo tedana comparison sandbox current wvpca info tedana workflows tedana loading input data debug tedana workflows tedana resulting data shape info tedana workflows tedana computing adaptive mask debug tedana workflows tedana retaining samples info tedana workflows tedana computing map users tsalo anaconda envs lib site packages scipy stats stats py futurewarning using a non tuple sequence for multidimensional indexing is deprecated use arr instead of arr in the future this will be interpreted as an array index arr which will result either in an error or a different result return np add reduce sorted weights axis axis sumval debug tedana workflows tedana setting cap on map at info tedana combine optimally combining data with voxel wise estimates info tedana decomposition eigendecomp computing pca of optimally combined multi echo data debug tedana 
decomposition utils creating eimask for echo debug tedana decomposition utils eimask threshold boundaries info tedana decomposition eigendecomp making initial component selection guess from pca results traceback most recent call last file users tsalo anaconda envs bin tedana line in load entry point tedana console scripts tedana file users tsalo documents tsalo tedana tedana workflows tedana py line in main tedana workflow vars options file users tsalo documents tsalo tedana tedana workflows tedana py line in tedana workflow ste ste wvpca wvpca file users tsalo documents tsalo tedana tedana decomposition eigendecomp py line in tedpca mmixn vtmixn full sel false file users tsalo documents tsalo tedana tedana model fit py line in fitmodels direct mmix format catd shape mmix shape valueerror third dimension number of volumes of catd does not match first dimension of mmix
1
264,220
8,306,616,100
IssuesEvent
2018-09-22 20:46:54
electerious/Lychee
https://api.github.com/repos/electerious/Lychee
closed
Only previeus button available when 'Photo Info' is opened.
enhancement help wanted low priority
Only the button to navigate to the previous image is available when the 'Photo Info' sidebar is opened. ![screenshot](http://i.imgur.com/Rh6sZ2v.jpg)
1.0
Only previeus button available when 'Photo Info' is opened. - Only the button to navigate to the previous image is available when the 'Photo Info' sidebar is opened. ![screenshot](http://i.imgur.com/Rh6sZ2v.jpg)
priority
only previeus button available when photo info is opened only the button to navigate to the previous image is available when the photo info sidebar is opened
1
104,165
4,197,367,788
IssuesEvent
2016-06-27 01:19:06
johnthagen/stardust-rpg
https://api.github.com/repos/johnthagen/stardust-rpg
opened
Fix cyclomatic complexity issues
priority/low refactoring
Fix cyclomatic complexity issues Several functions have over 10 cyclomatic complexity. Should be cleaned up if possible.
1.0
Fix cyclomatic complexity issues - Fix cyclomatic complexity issues Several functions have over 10 cyclomatic complexity. Should be cleaned up if possible.
priority
fix cyclomatic complexity issues fix cyclomatic complexity issues several functions have over cyclomatic complexity should be cleaned up if possible
1
488,842
14,087,343,338
IssuesEvent
2020-11-05 06:12:22
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
When sync-log=false, fsync raftdb before kvdb persists data
component/rocksdb difficulty/medium priority/low severity/Minor sig/engine status/discussion type/bug
There are value in using sync-log=false when small data loss on edge case can be tolerate, and bare metal performance is desired. However, current a TiKV node may not be able to recovery from power off, if kvdb have newer data than raftdb. This can be fixed by fsync raftdb before every time kvdb persists it's data (e.g. flush memtable). This can be done in rocksdb `OnFlushBegin` callback.
1.0
When sync-log=false, fsync raftdb before kvdb persists data - There are value in using sync-log=false when small data loss on edge case can be tolerate, and bare metal performance is desired. However, current a TiKV node may not be able to recovery from power off, if kvdb have newer data than raftdb. This can be fixed by fsync raftdb before every time kvdb persists it's data (e.g. flush memtable). This can be done in rocksdb `OnFlushBegin` callback.
priority
when sync log false fsync raftdb before kvdb persists data there are value in using sync log false when small data loss on edge case can be tolerate and bare metal performance is desired however current a tikv node may not be able to recovery from power off if kvdb have newer data than raftdb this can be fixed by fsync raftdb before every time kvdb persists it s data e g flush memtable this can be done in rocksdb onflushbegin callback
1
457,811
13,162,737,589
IssuesEvent
2020-08-10 22:18:45
osrf/romi-dashboard
https://api.github.com/repos/osrf/romi-dashboard
opened
Migrate from ExpansionPanel to Accordions
low priority
## pre-requisite Upgrade material-UI #76 Accordions: https://material-ui.com/components/accordion/
1.0
Migrate from ExpansionPanel to Accordions - ## pre-requisite Upgrade material-UI #76 Accordions: https://material-ui.com/components/accordion/
priority
migrate from expansionpanel to accordions pre requisite upgrade material ui accordions
1
170,813
6,472,150,012
IssuesEvent
2017-08-17 13:24:22
arquillian/smart-testing
https://api.github.com/repos/arquillian/smart-testing
closed
Git properties
Component: Core Priority: Low Type: Chore Type: Feature
We should rename git properties to make them easier to understand. Having `git.commit` and `git.previous.commit` does not clearly explain their purpose nor it's easy to guess. I would suggest following changes: * `git.commit` -> `scm.range.head` * `git.previous.commit` -> `scm.range.tail` * `git.last.commits` -> `scm.last.changes` Thoughts?
1.0
Git properties - We should rename git properties to make them easier to understand. Having `git.commit` and `git.previous.commit` does not clearly explain their purpose nor it's easy to guess. I would suggest following changes: * `git.commit` -> `scm.range.head` * `git.previous.commit` -> `scm.range.tail` * `git.last.commits` -> `scm.last.changes` Thoughts?
priority
git properties we should rename git properties to make them easier to understand having git commit and git previous commit does not clearly explain their purpose nor it s easy to guess i would suggest following changes git commit scm range head git previous commit scm range tail git last commits scm last changes thoughts
1
188,716
6,781,478,796
IssuesEvent
2017-10-30 01:11:03
StuPro-TOSCAna/TOSCAna
https://api.github.com/repos/StuPro-TOSCAna/TOSCAna
opened
Exclude resources folders from Codacy
enhancement low priority
Codacy should not look at artifacts residing in our resources dirs.
1.0
Exclude resources folders from Codacy - Codacy should not look at artifacts residing in our resources dirs.
priority
exclude resources folders from codacy codacy should not look at artifacts residing in our resources dirs
1
490,038
14,114,923,709
IssuesEvent
2020-11-07 18:16:41
Sphereserver/Source-X
https://api.github.com/repos/Sphereserver/Source-X
closed
Item stacking ini setting glitch the dupelist
Priority: Low Status-Bug: Confirmed
When activating this option: `// EF_ItemStacking 00000004 // Enable item stacking feature when drop items on ground` ![image](https://user-images.githubusercontent.com/51728381/95279127-e0980680-081f-11eb-8d88-1451b9c4ca9c.png) Item stack when we drop them on the ground. But when this option is activate. there no way to change the dupelist on an item. For exemple i_dagger. Normaly the item switch side. I think the "bug" is HERE https://github.com/Sphereserver/Source-X/blob/892048b5e39e1df94bdca90a59a3c9d92aeff6ee/src/game/chars/CCharAct.cpp#L2033 There is a return few line before. The purpose seem to evitate to flip item. I think it should be flip if its the first item of the pile. And I see an other bug too.. `( g_Cfg.m_fFlipDroppedItems || pItem->Can(CAN_I_FLIP) ` I think it should be `( g_Cfg.m_fFlipDroppedItems && pItem->Can(CAN_I_FLIP) ` Because the setting in the ini is useless like this ``` // Flip dropped items FlipDroppedItems=1 ```
1.0
Item stacking ini setting glitch the dupelist - When activating this option: `// EF_ItemStacking 00000004 // Enable item stacking feature when drop items on ground` ![image](https://user-images.githubusercontent.com/51728381/95279127-e0980680-081f-11eb-8d88-1451b9c4ca9c.png) Item stack when we drop them on the ground. But when this option is activate. there no way to change the dupelist on an item. For exemple i_dagger. Normaly the item switch side. I think the "bug" is HERE https://github.com/Sphereserver/Source-X/blob/892048b5e39e1df94bdca90a59a3c9d92aeff6ee/src/game/chars/CCharAct.cpp#L2033 There is a return few line before. The purpose seem to evitate to flip item. I think it should be flip if its the first item of the pile. And I see an other bug too.. `( g_Cfg.m_fFlipDroppedItems || pItem->Can(CAN_I_FLIP) ` I think it should be `( g_Cfg.m_fFlipDroppedItems && pItem->Can(CAN_I_FLIP) ` Because the setting in the ini is useless like this ``` // Flip dropped items FlipDroppedItems=1 ```
priority
item stacking ini setting glitch the dupelist when activating this option ef itemstacking enable item stacking feature when drop items on ground item stack when we drop them on the ground but when this option is activate there no way to change the dupelist on an item for exemple i dagger normaly the item switch side i think the bug is here there is a return few line before the purpose seem to evitate to flip item i think it should be flip if its the first item of the pile and i see an other bug too g cfg m fflipdroppeditems pitem can can i flip i think it should be g cfg m fflipdroppeditems pitem can can i flip because the setting in the ini is useless like this flip dropped items flipdroppeditems
1
722,432
24,861,853,299
IssuesEvent
2022-10-27 08:53:40
input-output-hk/cardano-node
https://api.github.com/repos/input-output-hk/cardano-node
closed
[FR] - build cmd to check whether read-only reference input is in ledger state
enhancement priority low Vasil type: enhancement user type: internal comp: cardano-cli era: babbage
**Internal/External** *Internal* if an IOHK staff member. **Area** *Other* Any other topic (Delegation, Ranking, ...). **Describe the feature you'd like** When using `build` cmd's `--read-only-tx-in-reference` with a utxo that is not in the ledger state there is no error until `submit` when you see something like: ``` Command failed: transaction submit Error: Error while submitting tx: ShelleyTxValidationError ShelleyBasedEraBabbage (ApplyTxError [UtxowFailure (UtxoFailure (FromAlonzoUtxoFail (BadInputsUTxO (fromList [TxIn (TxId {_unTxId = SafeHash "bbfd5c6666f7ed3491d9533be796dc1bd3e165d99eb35fbcf908efee3e8edc04"}) (TxIx 666)]))))]) ``` **Describe alternatives you've considered** It would be good to provide earlier feedback, ideally with a new message. **Additional context / screenshots** Node/Cli version: 43393ac88faa99986ecaa0be409e8b6353f5e9fe
1.0
[FR] - build cmd to check whether read-only reference input is in ledger state - **Internal/External** *Internal* if an IOHK staff member. **Area** *Other* Any other topic (Delegation, Ranking, ...). **Describe the feature you'd like** When using `build` cmd's `--read-only-tx-in-reference` with a utxo that is not in the ledger state there is no error until `submit` when you see something like: ``` Command failed: transaction submit Error: Error while submitting tx: ShelleyTxValidationError ShelleyBasedEraBabbage (ApplyTxError [UtxowFailure (UtxoFailure (FromAlonzoUtxoFail (BadInputsUTxO (fromList [TxIn (TxId {_unTxId = SafeHash "bbfd5c6666f7ed3491d9533be796dc1bd3e165d99eb35fbcf908efee3e8edc04"}) (TxIx 666)]))))]) ``` **Describe alternatives you've considered** It would be good to provide earlier feedback, ideally with a new message. **Additional context / screenshots** Node/Cli version: 43393ac88faa99986ecaa0be409e8b6353f5e9fe
priority
build cmd to check whether read only reference input is in ledger state internal external internal if an iohk staff member area other any other topic delegation ranking describe the feature you d like when using build cmd s read only tx in reference with a utxo that is not in the ledger state there is no error until submit when you see something like command failed transaction submit error error while submitting tx shelleytxvalidationerror shelleybasederababbage applytxerror describe alternatives you ve considered it would be good to provide earlier feedback ideally with a new message additional context screenshots node cli version
1
458,404
13,174,672,593
IssuesEvent
2020-08-11 23:09:00
shimming-toolbox/shimming-toolbox-py
https://api.github.com/repos/shimming-toolbox/shimming-toolbox-py
opened
Improve masking capabilities
Priority: LOW enhancement
## Context The initial issue #47 which was partially fixed by PR #60 added some basic capabilities to mask some data (threshold, shape: square, cube). More masking capability could be implemented to segment the brain and spinal cord as well as other shapes. Algorithms that could help - SCT: for spinal cord + simple shapes (license MIT) - FSL BET (non-commercial): for brain - Others? ## Suggestion - Add disk and ball to `shimmingtoolbox.masking.shape - Add capability to handle phase images to `shimmingtoolbox.masking.threshold - `shimmingtoolbox.masking.bet()` --> input is file name - `shimmingtoolbox.masking.sct()` --> input could be file name or an np.array (if we use SCT's API) output: mask as nd.array (of file in the case of bet)
1.0
Improve masking capabilities - ## Context The initial issue #47 which was partially fixed by PR #60 added some basic capabilities to mask some data (threshold, shape: square, cube). More masking capability could be implemented to segment the brain and spinal cord as well as other shapes. Algorithms that could help - SCT: for spinal cord + simple shapes (license MIT) - FSL BET (non-commercial): for brain - Others? ## Suggestion - Add disk and ball to `shimmingtoolbox.masking.shape - Add capability to handle phase images to `shimmingtoolbox.masking.threshold - `shimmingtoolbox.masking.bet()` --> input is file name - `shimmingtoolbox.masking.sct()` --> input could be file name or an np.array (if we use SCT's API) output: mask as nd.array (of file in the case of bet)
priority
improve masking capabilities context the initial issue which was partially fixed by pr added some basic capabilities to mask some data threshold shape square cube more masking capability could be implemented to segment the brain and spinal cord as well as other shapes algorithms that could help sct for spinal cord simple shapes license mit fsl bet non commercial for brain others suggestion add disk and ball to shimmingtoolbox masking shape add capability to handle phase images to shimmingtoolbox masking threshold shimmingtoolbox masking bet input is file name shimmingtoolbox masking sct input could be file name or an np array if we use sct s api output mask as nd array of file in the case of bet
1
31,719
2,736,540,011
IssuesEvent
2015-04-19 14:55:47
cs2103jan2015-t16-3c/Main
https://api.github.com/repos/cs2103jan2015-t16-3c/Main
closed
A user can group the to-do list according to day, week and month
priority.low
so that the user can see his schedule easier
1.0
A user can group the to-do list according to day, week and month - so that the user can see his schedule easier
priority
a user can group the to do list according to day week and month so that the user can see his schedule easier
1
593,885
18,019,373,056
IssuesEvent
2021-09-16 17:22:08
Azure/autorest.java
https://api.github.com/repos/Azure/autorest.java
closed
LRO implementation for protocol methods
priority-0 v4 low-level-client
This task tracks the changes needed in autorest to generate LRO protocol methods. This task depends on #1044
1.0
LRO implementation for protocol methods - This task tracks the changes needed in autorest to generate LRO protocol methods. This task depends on #1044
priority
lro implementation for protocol methods this task tracks the changes needed in autorest to generate lro protocol methods this task depends on
1
727,474
25,036,566,715
IssuesEvent
2022-11-04 16:30:39
tempesta-tech/tempesta
https://api.github.com/repos/tempesta-tech/tempesta
opened
Limit the size of loaded certificates
low priority TLS good to start
# Motivation With large certificate chains we might exceed the 16KB TLS record limit and need to fragment Certificate message, what we're unable to do for now. See https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ssl-tls-record-fragmentation-support/ba-p/395743 # Scope `tfw_tls_set_cert()` should verify the file size and limit it by `16KB - overhead` to make sure that the certificate fits one record. Otherwise print a warning and reject the loading. # Testing Please develop a Python test for this: 1. load a too large certificates chain and make sure that the limit works 2. load a chain with the size on the bound and make sure that TLS handshake passes (no fragmentation is required for this size, i.e. the choosen limit is correct).
1.0
Limit the size of loaded certificates - # Motivation With large certificate chains we might exceed the 16KB TLS record limit and need to fragment Certificate message, what we're unable to do for now. See https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/ssl-tls-record-fragmentation-support/ba-p/395743 # Scope `tfw_tls_set_cert()` should verify the file size and limit it by `16KB - overhead` to make sure that the certificate fits one record. Otherwise print a warning and reject the loading. # Testing Please develop a Python test for this: 1. load a too large certificates chain and make sure that the limit works 2. load a chain with the size on the bound and make sure that TLS handshake passes (no fragmentation is required for this size, i.e. the choosen limit is correct).
priority
limit the size of loaded certificates motivation with large certificate chains we might exceed the tls record limit and need to fragment certificate message what we re unable to do for now see scope tfw tls set cert should verify the file size and limit it by overhead to make sure that the certificate fits one record otherwise print a warning and reject the loading testing please develop a python test for this load a too large certificates chain and make sure that the limit works load a chain with the size on the bound and make sure that tls handshake passes no fragmentation is required for this size i e the choosen limit is correct
1
456,186
13,146,546,867
IssuesEvent
2020-08-08 10:32:25
LibreTexts/metalc
https://api.github.com/repos/LibreTexts/metalc
closed
Chicks 11~18 have low disk storage
flock cluster low priority
![image](https://user-images.githubusercontent.com/5837628/87352725-67cf2a00-c510-11ea-984a-a655c599612d.png) Not an issue for now, but pods scheduled on those nodes might get some out of disk space issues when pulling images.
1.0
Chicks 11~18 have low disk storage - ![image](https://user-images.githubusercontent.com/5837628/87352725-67cf2a00-c510-11ea-984a-a655c599612d.png) Not an issue for now, but pods scheduled on those nodes might get some out of disk space issues when pulling images.
priority
chicks have low disk storage not an issue for now but pods scheduled on those nodes might get some out of disk space issues when pulling images
1
176,403
6,559,107,027
IssuesEvent
2017-09-07 01:33:32
copperhead/bugtracker
https://api.github.com/repos/copperhead/bugtracker
closed
give system_server different ASLR bases than the zygote
enhancement priority-low
It breaks if preloading isn't used, which is what is blocking this from being done. This is unimportant on CopperheadOS since _everything else_ is spawned from the Zygote via `exec`, but it would be nice to cover this too just in case something was missed. This subset is also something that could be upstreamed, as spawning a single process with exec is cheap, the memory cost just adds up quickly when it's used across the board.
1.0
give system_server different ASLR bases than the zygote - It breaks if preloading isn't used, which is what is blocking this from being done. This is unimportant on CopperheadOS since _everything else_ is spawned from the Zygote via `exec`, but it would be nice to cover this too just in case something was missed. This subset is also something that could be upstreamed, as spawning a single process with exec is cheap, the memory cost just adds up quickly when it's used across the board.
priority
give system server different aslr bases than the zygote it breaks if preloading isn t used which is what is blocking this from being done this is unimportant on copperheados since everything else is spawned from the zygote via exec but it would be nice to cover this too just in case something was missed this subset is also something that could be upstreamed as spawning a single process with exec is cheap the memory cost just adds up quickly when it s used across the board
1
783,283
27,525,385,143
IssuesEvent
2023-03-06 17:39:41
strategitica/strategitica
https://api.github.com/repos/strategitica/strategitica
opened
Daily time averages for balancing workload
low priority
I want to be able to see the average duration of all tasks for each day of the week so I can reschedule tasks to make things more balanced. I'd add a link in the menu called, idk, "Stats" or "Averages" or something, and it'd open a modal with the averages and maybe a table showing the total tasks duration for each day in the visible calendar. I'd probably also add a checkbox/toggle for whether to include the duration of to dos in the averages and table.
1.0
Daily time averages for balancing workload - I want to be able to see the average duration of all tasks for each day of the week so I can reschedule tasks to make things more balanced. I'd add a link in the menu called, idk, "Stats" or "Averages" or something, and it'd open a modal with the averages and maybe a table showing the total tasks duration for each day in the visible calendar. I'd probably also add a checkbox/toggle for whether to include the duration of to dos in the averages and table.
priority
daily time averages for balancing workload i want to be able to see the average duration of all tasks for each day of the week so i can reschedule tasks to make things more balanced i d add a link in the menu called idk stats or averages or something and it d open a modal with the averages and maybe a table showing the total tasks duration for each day in the visible calendar i d probably also add a checkbox toggle for whether to include the duration of to dos in the averages and table
1
577,748
17,117,969,530
IssuesEvent
2021-07-11 18:58:06
momentum-mod/game
https://api.github.com/repos/momentum-mod/game
closed
Check Triggers Touch Logic
Outcome: Resolved Priority: Low Size: Medium Type: Enhancement Where: Game
Check if triggers are properly checking if the activator is passing trigger filters. `OnStartTouch` overrides automatically guarantee this, but things like `Touch`, or even `StartTouch` need to manually check filters in the function. Consider converting `StartTouch` to be `OnStartTouch` to simplify the filter checks as well.
1.0
Check Triggers Touch Logic - Check if triggers are properly checking if the activator is passing trigger filters. `OnStartTouch` overrides automatically guarantee this, but things like `Touch`, or even `StartTouch` need to manually check filters in the function. Consider converting `StartTouch` to be `OnStartTouch` to simplify the filter checks as well.
priority
check triggers touch logic check if triggers are properly checking if the activator is passing trigger filters onstarttouch overrides automatically guarantee this but things like touch or even starttouch need to manually check filters in the function consider converting starttouch to be onstarttouch to simplify the filter checks as well
1
460,947
13,221,193,557
IssuesEvent
2020-08-17 13:41:02
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
samples/subsys/canbus/isotp/sample.subsys.canbus.isotp fails on FRDM-K64F
area: CAN bug platform: NXP priority: low
When trying to run this sample on a FRDM-K64F I get: ``` *** Booting Zephyr OS build zephyr-v2.3.0-667-g766bb84a95e2 *** Start sending data TX complete cb [-2] Error while sending data to ID 384 [-2] Receiving error [-14] Got 0 bytes in total Receiving erreor [-14] [00:00:02.007,000] <err> isotp: Reception of next FC has timed out [00:00:02.007,000] <err> isotp: Reception of next FC has timed out TX complete cb [-2] Error while sending data to ID 384 [-2] Receiving error [-14] Got 0 bytes in total Receiving erreor [-14] [00:00:04.012,000] <err> isotp: Reception of next FC has timed out [00:00:04.013,000] <err> isotp: Reception of next FC has timed out ```
1.0
samples/subsys/canbus/isotp/sample.subsys.canbus.isotp fails on FRDM-K64F - When trying to run this sample on a FRDM-K64F I get: ``` *** Booting Zephyr OS build zephyr-v2.3.0-667-g766bb84a95e2 *** Start sending data TX complete cb [-2] Error while sending data to ID 384 [-2] Receiving error [-14] Got 0 bytes in total Receiving erreor [-14] [00:00:02.007,000] <err> isotp: Reception of next FC has timed out [00:00:02.007,000] <err> isotp: Reception of next FC has timed out TX complete cb [-2] Error while sending data to ID 384 [-2] Receiving error [-14] Got 0 bytes in total Receiving erreor [-14] [00:00:04.012,000] <err> isotp: Reception of next FC has timed out [00:00:04.013,000] <err> isotp: Reception of next FC has timed out ```
priority
samples subsys canbus isotp sample subsys canbus isotp fails on frdm when trying to run this sample on a frdm i get booting zephyr os build zephyr start sending data tx complete cb error while sending data to id receiving error got bytes in total receiving erreor isotp reception of next fc has timed out isotp reception of next fc has timed out tx complete cb error while sending data to id receiving error got bytes in total receiving erreor isotp reception of next fc has timed out isotp reception of next fc has timed out
1
782,568
27,500,113,343
IssuesEvent
2023-03-05 15:51:18
concretecms/concretecms
https://api.github.com/repos/concretecms/concretecms
closed
[v9] bug: Datetime picker widget in Dashboard has no Next/Prev icons and not loading in frontend
Type:Bug Status:Proposal Status:Blocked Bug Priority:Low
1. Datetime picker widget in Dashboard has no Next/Prev icons. The mouse and tooltip are not captured by the screen capture program, but you can see there are now next/prev icons on hover: ![Screenshot_20211123_130411](https://user-images.githubusercontent.com/239979/142961281-2494fd3a-fd36-4fa3-83fe-7aaa13ebd2f9.png) 2. Datetime picker widget is not loaded in frontend for non-registered users because now it's part of the huge cms.css which doesn't load for non-registered users. Is this intentional? IS it the same story with all other jquery-ui widgets, are they all only available in Dashboard?
1.0
[v9] bug: Datetime picker widget in Dashboard has no Next/Prev icons and not loading in frontend - 1. Datetime picker widget in Dashboard has no Next/Prev icons. The mouse and tooltip are not captured by the screen capture program, but you can see there are now next/prev icons on hover: ![Screenshot_20211123_130411](https://user-images.githubusercontent.com/239979/142961281-2494fd3a-fd36-4fa3-83fe-7aaa13ebd2f9.png) 2. Datetime picker widget is not loaded in frontend for non-registered users because now it's part of the huge cms.css which doesn't load for non-registered users. Is this intentional? IS it the same story with all other jquery-ui widgets, are they all only available in Dashboard?
priority
bug datetime picker widget in dashboard has no next prev icons and not loading in frontend datetime picker widget in dashboard has no next prev icons the mouse and tooltip are not captured by the screen capture program but you can see there are now next prev icons on hover datetime picker widget is not loaded in frontend for non registered users because now it s part of the huge cms css which doesn t load for non registered users is this intentional is it the same story with all other jquery ui widgets are they all only available in dashboard
1
380,628
11,268,702,854
IssuesEvent
2020-01-14 07:01:46
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0 staging-1333] Unnecessary tagged ingredients
Fixed Priority: Low
The following tags only have one item in them. They are all used in recipes or as fuel. They should be expanded or temporarily disabled. **Tag - Contained Item** Dirt - Dirt Natural Fibers - Plant Fibers WoodScrap - Wood Pulp Circuit - Advanced Circuit Rubber - Synthetic Rubber Torch - Torch
1.0
[0.9.0 staging-1333] Unnecessary tagged ingredients - The following tags only have one item in them. They are all used in recipes or as fuel. They should be expanded or temporarily disabled. **Tag - Contained Item** Dirt - Dirt Natural Fibers - Plant Fibers WoodScrap - Wood Pulp Circuit - Advanced Circuit Rubber - Synthetic Rubber Torch - Torch
priority
unnecessary tagged ingredients the following tags only have one item in them they are all used in recipes or as fuel they should be expanded or temporarily disabled tag contained item dirt dirt natural fibers plant fibers woodscrap wood pulp circuit advanced circuit rubber synthetic rubber torch torch
1
495,133
14,272,498,640
IssuesEvent
2020-11-21 17:16:54
HHS81/c182s
https://api.github.com/repos/HHS81/c182s
closed
Integrate new default descriptive Properties
GUI enhancement low priority
Just a reminder: There are some new properties that will allow the GUI to make better decisions: http://wiki.flightgear.org/Aircraft-set.xml#Performance_and_Flight-planning
1.0
Integrate new default descriptive Properties - Just a reminder: There are some new properties that will allow the GUI to make better decisions: http://wiki.flightgear.org/Aircraft-set.xml#Performance_and_Flight-planning
priority
integrate new default descriptive properties just a reminder there are some new properties that will allow the gui to make better decisions
1
34,969
2,789,619,684
IssuesEvent
2015-05-08 20:26:48
orwant/google-visualization-issues
https://api.github.com/repos/orwant/google-visualization-issues
opened
Set the column width and legend width
Priority-Low Type-Enhancement
Original [issue 295](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=295) created by orwant on 2010-05-27T10:21:46.000Z: <b>What would you like to see us add to this API?</b> To be able to set the width of the columns in a column chart, and to set the width of the legend (which is cut off if the chart widht is not long). <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> ColumnChart, and all. <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
Set the column width and legend width - Original [issue 295](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=295) created by orwant on 2010-05-27T10:21:46.000Z: <b>What would you like to see us add to this API?</b> To be able to set the width of the columns in a column chart, and to set the width of the legend (which is cut off if the chart widht is not long). <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> ColumnChart, and all. <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
priority
set the column width and legend width original created by orwant on what would you like to see us add to this api to be able to set the width of the columns in a column chart and to set the width of the legend which is cut off if the chart widht is not long what component is this issue related to piechart linechart datatable query etc columnchart and all for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
1
431,994
12,487,434,939
IssuesEvent
2020-05-31 09:03:58
parzh/xrange
https://api.github.com/repos/parzh/xrange
opened
Remove all deprecated entities and functionality
Change: major Domain: main Priority: low Type: improvement
Remove all deprecated entities and functionality
1.0
Remove all deprecated entities and functionality - Remove all deprecated entities and functionality
priority
remove all deprecated entities and functionality remove all deprecated entities and functionality
1
334,728
10,144,492,535
IssuesEvent
2019-08-04 21:26:13
labsquare/cutevariant
https://api.github.com/repos/labsquare/cutevariant
closed
ColumnWidget as a list
low-priority question
Actually, columnWidget display columns as a Tree within 3 categories ( variants, annotations, samples). I propose to have a simple list with 2 columns ( field name, field description) , category as icon and a search bar to perform search on boith 2 columns. Something like this : ![image](https://user-images.githubusercontent.com/1911063/58292233-d7d89900-7dc0-11e9-96b5-bd4d0dbb90a0.png)
1.0
ColumnWidget as a list - Actually, columnWidget display columns as a Tree within 3 categories ( variants, annotations, samples). I propose to have a simple list with 2 columns ( field name, field description) , category as icon and a search bar to perform search on boith 2 columns. Something like this : ![image](https://user-images.githubusercontent.com/1911063/58292233-d7d89900-7dc0-11e9-96b5-bd4d0dbb90a0.png)
priority
columnwidget as a list actually columnwidget display columns as a tree within categories variants annotations samples i propose to have a simple list with columns field name field description category as icon and a search bar to perform search on boith columns something like this
1
456,066
13,144,753,104
IssuesEvent
2020-08-08 00:03:58
chingu-voyages/v21-geckos-team-03
https://api.github.com/repos/chingu-voyages/v21-geckos-team-03
closed
feature/enhancement: new list prompt on empty lists page
enhancement low-priority
Text that reads something like, "You don't have any lists yet!" with a button to add a list
1.0
feature/enhancement: new list prompt on empty lists page - Text that reads something like, "You don't have any lists yet!" with a button to add a list
priority
feature enhancement new list prompt on empty lists page text that reads something like you don t have any lists yet with a button to add a list
1
757,424
26,511,851,669
IssuesEvent
2023-01-18 17:42:36
tempesta-tech/tempesta
https://api.github.com/repos/tempesta-tech/tempesta
opened
Refactoring tfw_http_msg_resp_spec_hid/tfw_http_msg_req_spec_hid and related
enhancement low priority good to start
# Motivation We have two functions `tfw_http_msg_req_spec_hid` and `tfw_http_msg_resp_spec_hid` which using for searching ID of header by its name. `tfw_http_msg_resp_spec_hid` using for a response headers only and `tfw_http_msg_req_spec_hid` for a request only. However, lists of both functions contains same headers, it means `tfw_http_msg_resp_spec_hid` can contain `forwarded` header which must not be presented in current list. `forwarded` header is request only header. # Scope Need to rewrite lists so that the list `tfw_http_msg_req_spec_hid` contains only request headers and `tfw_http_msg_resp_spec_hid` only response headers. Headers that can be presented in both response and request must be listed in both lists. Also `__hdr_is_singular` must contain only raw headers, i.e headers that not listed in `enum tfw_http_hdr_t`
1.0
Refactoring tfw_http_msg_resp_spec_hid/tfw_http_msg_req_spec_hid and related - # Motivation We have two functions `tfw_http_msg_req_spec_hid` and `tfw_http_msg_resp_spec_hid` which using for searching ID of header by its name. `tfw_http_msg_resp_spec_hid` using for a response headers only and `tfw_http_msg_req_spec_hid` for a request only. However, lists of both functions contains same headers, it means `tfw_http_msg_resp_spec_hid` can contain `forwarded` header which must not be presented in current list. `forwarded` header is request only header. # Scope Need to rewrite lists so that the list `tfw_http_msg_req_spec_hid` contains only request headers and `tfw_http_msg_resp_spec_hid` only response headers. Headers that can be presented in both response and request must be listed in both lists. Also `__hdr_is_singular` must contain only raw headers, i.e headers that not listed in `enum tfw_http_hdr_t`
priority
refactoring tfw http msg resp spec hid tfw http msg req spec hid and related motivation we have two functions tfw http msg req spec hid and tfw http msg resp spec hid which using for searching id of header by its name tfw http msg resp spec hid using for a response headers only and tfw http msg req spec hid for a request only however lists of both functions contains same headers it means tfw http msg resp spec hid can contain forwarded header which must not be presented in current list forwarded header is request only header scope need to rewrite lists so that the list tfw http msg req spec hid contains only request headers and tfw http msg resp spec hid only response headers headers that can be presented in both response and request must be listed in both lists also hdr is singular must contain only raw headers i e headers that not listed in enum tfw http hdr t
1
681,325
23,305,573,454
IssuesEvent
2022-08-08 00:02:17
GTNewHorizons/GT-New-Horizons-Modpack
https://api.github.com/repos/GTNewHorizons/GT-New-Horizons-Modpack
closed
Recipe maps don't recognize industrial diamonds
Type: bugMinor Priority: very low Status: stale Mod: GT Comment to reopen
#### Which modpack version are you using? 2.1.0 # #### If in multiplayer; On which server does this happen? Private server, reproducible in single player # #### What did you try to do, and what did you expect to happen? Tried automating industrial diamond -> flawless -> exquisite with laser engraver. # #### What happened instead? (Attach screenshots if needed) Industrial diamonds cannot be automatically inserted into a laser engraver. I tried conveyor, item conduit, and AE interface. Manually inserting industrial diamonds runs the recipe just fine, resulting in 3 industrial -> 1 flawless. The same is true for implosion compressor input bus, but you can get around this by disabling input filter of bus. # #### What do you suggest instead/what changes do you propose? Implosion compressor and laser engraver should recognize industrial diamonds as valid inputs. Worth noting that compressors do recognize industrial diamonds as valid inputs (for blocks), so this is not an issue for every recipe map # #### What is your GTNH Discord username? Greesy
1.0
Recipe maps don't recognize industrial diamonds - #### Which modpack version are you using? 2.1.0 # #### If in multiplayer; On which server does this happen? Private server, reproducible in single player # #### What did you try to do, and what did you expect to happen? Tried automating industrial diamond -> flawless -> exquisite with laser engraver. # #### What happened instead? (Attach screenshots if needed) Industrial diamonds cannot be automatically inserted into a laser engraver. I tried conveyor, item conduit, and AE interface. Manually inserting industrial diamonds runs the recipe just fine, resulting in 3 industrial -> 1 flawless. The same is true for implosion compressor input bus, but you can get around this by disabling input filter of bus. # #### What do you suggest instead/what changes do you propose? Implosion compressor and laser engraver should recognize industrial diamonds as valid inputs. Worth noting that compressors do recognize industrial diamonds as valid inputs (for blocks), so this is not an issue for every recipe map # #### What is your GTNH Discord username? Greesy
priority
recipe maps don t recognize industrial diamonds which modpack version are you using if in multiplayer on which server does this happen private server reproducible in single player what did you try to do and what did you expect to happen tried automating industrial diamond flawless exquisite with laser engraver what happened instead attach screenshots if needed industrial diamonds cannot be automatically inserted into a laser engraver i tried conveyor item conduit and ae interface manually inserting industrial diamonds runs the recipe just fine resulting in industrial flawless the same is true for implosion compressor input bus but you can get around this by disabling input filter of bus what do you suggest instead what changes do you propose implosion compressor and laser engraver should recognize industrial diamonds as valid inputs worth noting that compressors do recognize industrial diamonds as valid inputs for blocks so this is not an issue for every recipe map what is your gtnh discord username greesy
1
282,615
8,708,656,320
IssuesEvent
2018-12-06 11:34:36
telerik/kendo-ui-core
https://api.github.com/repos/telerik/kendo-ui-core
opened
Rows are misaligned with multi-level grouping and locked columns
Bug C: Grid Kendo1 Priority 1 SEV: Low
### Reproduction of the problem 1. Run [this dojo](https://dojo.telerik.com/IDEDUCIq) 1. Collapse one or more of the subCategory groups 1. Collapse the category Group 1. Expand Category ### Current behavior The aggregate rows are missing and the locked and unlocked content is misaligned ### Expected/desired behavior Rows should be aligned ### Environment * **Kendo UI version:** 2018.3.1017
1.0
Rows are misaligned with multi-level grouping and locked columns - ### Reproduction of the problem 1. Run [this dojo](https://dojo.telerik.com/IDEDUCIq) 1. Collapse one or more of the subCategory groups 1. Collapse the category Group 1. Expand Category ### Current behavior The aggregate rows are missing and the locked and unlocked content is misaligned ### Expected/desired behavior Rows should be aligned ### Environment * **Kendo UI version:** 2018.3.1017
priority
rows are misaligned with multi level grouping and locked columns reproduction of the problem run collapse one or more of the subcategory groups collapse the category group expand category current behavior the aggregate rows are missing and the locked and unlocked content is misaligned expected desired behavior rows should be aligned environment kendo ui version
1
460,973
13,221,558,234
IssuesEvent
2020-08-17 14:14:08
tempesta-tech/tempesta
https://api.github.com/repos/tempesta-tech/tempesta
closed
KASAN: Errors on starting Tempesta
bug invalid low priority
It's hard to say, how much this issue is really apply to Tempesta, I've tried to run kernel with slightly different configs, where KASAN was enabled, but I couldn't start Tempesta, when the KASAN is enabled. In the same time other teammates has no issues with running KASAN-enabled kernels and starting Tempesta on them. Probably there is some issue in my environment, but I've spent too much time figuring it out and still have no understanding of the roots of the problem. Kernel is the current master tempesta-tech/linux-4.14.32-tfw@580e5dd973162b01a811fc6731c76dd51f263f91 , [this config](https://github.com/tempesta-tech/tempesta/files/3156911/config-with-kasan.txt) which is based on [kernel config for our kernel packages](https://github.com/tempesta-tech/linux-4.14.32-tfw/releases/tag/debian-9%2F4.14.32-tfw6). To reproduce: - build kernel with KASAN enabled (my environment is Debian 9 Stable) and run it; - build Tempesta; - try to start Tempesta with empty config file. In this configuration i have following issues: `BUILD_BUG_ON` failure: ``` tempesta/tempesta_fw/http_parser.c:460:2: note: in expansion of macro ‘BUILD_BUG_ON’ BUILD_BUG_ON(!__builtin_constant_p((limit - 10) / 10)); ``` Not a crucial issue, since it only affect performance, can be ignored in KASAN builds. And I get an oops on Tempesta start: ``` [90440.504922] tempesta_lib: loading out-of-tree module taints kernel. [90440.689935] [tdb] Start Tempesta DB [90440.748204] [tempesta fw] Initializing Tempesta FW kernel module... [90440.749853] [tempesta fw] Registering new classifier: frang [90440.759226] [tempesta fw] Registering new scheduler: hash [90440.760946] [tempesta fw] Registering new scheduler: ratio [90440.791811] [tempesta fw] Preparing for the configuration processing. 
[90440.794827] kasan: CONFIG_KASAN_INLINE enabled [90440.796387] kasan: GPF could be caused by NULL-ptr deref or user memory access [90440.798250] general protection fault: 0000 [#1] SMP KASAN PTI [90440.800374] Modules linked in: tempesta_fw(O) tempesta_db(O) sha256_ssse3 sha512_ssse3 sha512_generic ccm ctr gcm tempesta_tls(O) tempesta_lib(O) snd_hda_codec_generic kvm_intel iTCO_wdt snd_hda_intel iTCO_vendor_support qxl snd_hda_codec kvm ttm snd_hda_core irqbypass snd_hwdep crct10dif_pclmul crc32_pclmul snd_pcm drm_kms_helper snd_timer ghash_clmulni_intel virtio_balloon cryptd sg snd virtio_console soundcore pcspkr evdev serio_raw drm lpc_ich mfd_core shpchp button binfmt_misc ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 fscrypto crc32c_generic sr_mod cdrom virtio_blk virtio_net crc32c_intel ahci libahci psmouse ehci_pci uhci_hcd ehci_hcd i2c_i801 libata sym53c8xx scsi_transport_spi usbcore virtio_pci virtio_ring virtio scsi_mod [90440.800374] CPU: 0 PID: 5196 Comm: sysctl Tainted: G O 4.14.32+ #16 [90440.800374] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [90440.800374] task: ffff880057c32e80 task.stack: ffff8800247d8000 [90440.800374] RIP: 0010:tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] [90440.800374] RSP: 0018:ffff8800247df168 EFLAGS: 00010202 [90440.800374] RAX: 00000000ffff0000 RBX: ffffffffc24ccfc0 RCX: dffffc0000000000 [90440.800374] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000050 [90440.800374] RBP: 1ffff100048fbe2f R08: 1ffff100048fbde0 R09: ffff8800247defc8 [90440.800374] R10: 0000000000000001 R11: 0000000000236448 R12: ffff88006b74e870 [90440.800374] R13: ffff8800247df198 R14: ffff8800247df218 R15: 0000000000000000 [90440.800374] FS: 00007f92a3de48c0(0000) GS:ffff88008c600000(0000) knlGS:0000000000000000 [90440.800374] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [90440.800374] CR2: 00007f92a34cb260 CR3: 000000004b5f6003 CR4: 00000000003606f0 [90440.800374] Call Trace: 
[90440.800374] ? tfw_listen_sock_add+0x1f0/0x1f0 [tempesta_fw] [90440.800374] ? format_decode+0x3e4/0x9f0 [90440.800374] ? kasan_unpoison_shadow+0x30/0x40 [90440.800374] ? kasan_kmalloc+0xa0/0xd0 [90440.800374] ? __kmalloc+0x15c/0x3f0 [90440.800374] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [90440.800374] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [90440.800374] ? entry_set_name+0x8f/0x1d0 [tempesta_fw] [90440.800374] ? tfw_sched_unregister+0x390/0x390 [tempesta_fw] [90440.800374] ? parse_cfg_entry+0x967/0x2590 [tempesta_fw] [90440.800374] spec_handle_entry+0xc9/0x310 [tempesta_fw] [90440.800374] spec_finish_handling+0xf6/0x320 [tempesta_fw] [90440.800374] ? tfw_cfg_parse+0x8c/0x160 [tempesta_fw] [90440.800374] tfw_cfg_parse_mods+0x4eb/0x700 [tempesta_fw] [90440.800374] ? tfw_cfg_handle_children+0x6a0/0x6a0 [tempesta_fw] [90440.800374] ? parse_cfg_entry+0x4ef/0x2590 [tempesta_fw] [90440.800374] tfw_cfg_parse+0xa3/0x160 [tempesta_fw] [90440.800374] ? tfw_cfg_read_file+0x620/0x620 [tempesta_fw] [90440.800374] tfw_ctlfn_state_io+0x721/0xf30 [tempesta_fw] [90440.800374] ? mutex_lock+0xe/0x30 [90440.800374] ? tfw_ctlfn_state_io+0x138/0xf30 [tempesta_fw] [90440.800374] ? blk_mq_debugfs_unregister_sched_hctx+0x90/0x90 [90440.800374] ? tfw_mod_register+0x2f0/0x2f0 [tempesta_fw] [90440.800374] ? __read_once_size_nocheck.constprop.3+0x50/0x50 [90440.800374] ? bpf_prog_alloc+0x350/0x350 [90440.800374] ? unwind_next_frame+0xb2e/0x2f50 [90440.800374] ? __mod_tree_remove+0x40/0x40 [90440.800374] ? __handle_mm_fault+0x2db7/0x5820 [90440.800374] ? get_stack_info+0x3f/0x3a0 [90440.800374] ? __pmd_alloc+0x360/0x360 [90440.800374] ? __free_insn_slot+0x7b0/0x7b0 [90440.800374] ? unwind_next_frame+0x14ce/0x2f50 [90440.800374] ? rcu_barrier_callback+0x90/0x90 [90440.800374] ? unwind_get_return_address+0x5f/0xc0 [90440.800374] ? security_capable_noaudit+0x75/0xb0 [90440.800374] ? ns_capable_common+0x66/0x180 [90440.800374] ? 
net_ctl_permissions+0x79/0x180 [90440.800374] proc_sys_call_handler+0x1b5/0x2c0 [90440.800374] ? proc_sys_poll+0x590/0x590 [90440.800374] ? __alloc_fd+0xfd/0x670 [90440.800374] __vfs_write+0xf9/0xae0 [90440.800374] ? kernel_read+0x1a0/0x1a0 [90440.800374] ? cp_new_stat+0x750/0x9b0 [90440.800374] ? SYSC_fstat+0xd0/0xd0 [90440.800374] ? __fdget_pos+0x68/0x1b0 [90440.800374] ? vfs_statx_fd+0x44/0x80 [90440.800374] vfs_write+0x163/0x5a0 [90440.800374] ? SYSC_newfstatat+0xd0/0xd0 [90440.800374] SyS_write+0xd0/0x1e0 [90440.800374] ? SyS_read+0x1e0/0x1e0 [90440.800374] ? SyS_read+0x1e0/0x1e0 [90440.800374] do_syscall_64+0x252/0x6f0 [90440.800374] ? syscall_return_slowpath+0x360/0x360 [90440.800374] ? do_page_fault+0x93/0x3d0 [90440.800374] ? __do_page_fault+0xc00/0xc00 [90440.800374] ? prepare_exit_to_usermode+0x270/0x270 [90440.800374] ? perf_trace_sys_enter+0x1720/0x1720 [90440.800374] ? __put_user_4+0x1c/0x30 [90440.800374] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [90440.800374] RIP: 0033:0x7f92a3566134 [90440.800374] RSP: 002b:00007fff99731ab8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [90440.800374] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f92a3566134 [90440.800374] RDX: 0000000000000006 RSI: 0000558e4d9b3540 RDI: 0000000000000004 [90440.800374] RBP: 0000558e4d9b3540 R08: 00007f92a3de48c0 R09: 00007fff99733883 [90440.800374] R10: 0000000000000000 R11: 0000000000000246 R12: 0000558e4d9b3290 [90440.800374] R13: 0000000000000006 R14: 00007f92a382e760 R15: 0000000000000006 [90440.800374] Code: f0 00 00 00 0f 11 84 24 f4 00 00 00 66 c1 c0 08 c7 84 24 00 01 00 00 00 00 ff ff 66 89 84 24 f2 00 00 00 48 8b 84 24 00 01 00 00 <66> 0f 6f 84 24 f0 00 00 00 48 89 84 24 40 01 00 00 8b 84 24 08 [90440.800374] RIP: tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] RSP: ffff8800247df168 [90440.932972] ---[ end trace d82a72eb02da4577 ]--- ``` But some start attempts is even more weird: ``` [ 66.858803] tempesta_lib: loading out-of-tree module taints kernel. 
[ 67.042644] [tdb] Start Tempesta DB [ 67.104799] [tempesta fw] Initializing Tempesta FW kernel module... [ 67.106289] [tempesta fw] Registering new classifier: frang [ 67.115439] [tempesta fw] Registering new scheduler: hash [ 67.117462] [tempesta fw] Registering new scheduler: ratio [ 67.151609] [tempesta fw] Preparing for the configuration processing. [ 67.155901] kasan: CONFIG_KASAN_INLINE enabled [ 67.157478] kasan: GPF could be caused by NULL-ptr deref or user memory access [ 67.159942] general protection fault: 0000 [#1] SMP KASAN PTI [ 67.161343] Modules linked in: tempesta_fw(O) tempesta_db(O) sha256_ssse3 sha512_ssse3 sha512_generic ccm ctr gcm tempesta_tls(O) tempesta_lib(O) kvm_intel snd_hda_codec_generic kvm iTCO_wdt qxl iTCO_vendor_support snd_hda_intel irqbypass crct10dif_pclmul ttm crc32_pclmul snd_hda_codec ghash_clmulni_intel cryptd snd_hda_core sg drm_kms_helper snd_hwdep snd_pcm virtio_balloon snd_timer virtio_console snd soundcore drm evdev serio_raw pcspkr lpc_ich mfd_core shpchp binfmt_misc button ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 fscrypto crc32c_generic sr_mod cdrom virtio_blk virtio_net ahci libahci crc32c_intel uhci_hcd ehci_pci libata ehci_hcd sym53c8xx psmouse scsi_transport_spi usbcore i2c_i801 virtio_pci virtio_ring virtio scsi_mod [ 67.164072] CPU: 1 PID: 1621 Comm: sysctl Tainted: G O 4.14.32+ #16 [ 67.164072] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [ 67.164072] task: ffff880030b46c80 task.stack: ffff88008fd30000 [ 67.164072] RIP: 0010:tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] [ 67.164072] RSP: 0018:ffff88008fd37168 EFLAGS: 00010202 [ 67.164072] RAX: 00000000ffff0000 RBX: ffffffffc2314000 RCX: dffffc0000000000 [ 67.164072] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000050 [ 67.164072] RBP: 1ffff10011fa6e2f R08: 1ffff10011fa6de0 R09: ffff88008fd36fc8 [ 67.164072] R10: 0000000000000001 R11: 0000000000236448 R12: ffff880027b4a240 [ 67.164072] 
R13: ffff88008fd37198 R14: ffff88008fd37218 R15: 0000000000000000 [ 67.164072] FS: 00007f2cc6e9c8c0(0000) GS:ffff880086500000(0000) knlGS:0000000000000000 [ 67.164072] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 67.164072] CR2: 00007f2cc6583260 CR3: 00000000424f4001 CR4: 00000000003606e0 [ 67.164072] Call Trace: [ 67.164072] ? tfw_listen_sock_add+0x1f0/0x1f0 [tempesta_fw] [ 67.164072] ? format_decode+0x3e4/0x9f0 [ 67.164072] ? kasan_unpoison_shadow+0x30/0x40 [ 67.164072] ? kasan_kmalloc+0xa0/0xd0 [ 67.164072] ? __kmalloc+0x15c/0x3f0 [ 67.164072] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [ 67.164072] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [ 67.164072] ? entry_set_name+0x8f/0x1d0 [tempesta_fw] [ 67.164072] ? tfw_addr_ntop+0xf0/0xf0 [tempesta_fw] [ 67.164072] ? parse_cfg_entry+0x967/0x2590 [tempesta_fw] [ 67.164072] spec_handle_entry+0xc9/0x310 [tempesta_fw] [ 67.164072] spec_finish_handling+0xf6/0x320 [tempesta_fw] [ 67.164072] ? tfw_cfg_parse+0x8c/0x160 [tempesta_fw] [ 67.164072] tfw_cfg_parse_mods+0x4eb/0x700 [tempesta_fw] [ 67.164072] ? tfw_cfg_handle_children+0x6a0/0x6a0 [tempesta_fw] [ 67.164072] ? parse_cfg_entry+0x4ef/0x2590 [tempesta_fw] [ 67.164072] tfw_cfg_parse+0xa3/0x160 [tempesta_fw] [ 67.164072] ? tfw_cfg_read_file+0x620/0x620 [tempesta_fw] [ 67.164072] tfw_ctlfn_state_io+0x721/0xf30 [tempesta_fw] [ 67.164072] ? mutex_lock+0xe/0x30 [ 67.164072] ? tfw_ctlfn_state_io+0x138/0xf30 [tempesta_fw] [ 67.164072] ? blk_mq_debugfs_unregister_sched_hctx+0x90/0x90 [ 67.164072] ? tfw_mod_register+0x2f0/0x2f0 [tempesta_fw] [ 67.164072] ? __read_once_size_nocheck.constprop.3+0x50/0x50 [ 67.164072] ? bpf_prog_alloc+0x350/0x350 [ 67.164072] ? unwind_next_frame+0xb2e/0x2f50 [ 67.164072] ? __mod_tree_remove+0x40/0x40 [ 67.164072] ? __handle_mm_fault+0x2db7/0x5820 [ 67.164072] ? get_stack_info+0x3f/0x3a0 [ 67.164072] ? __pmd_alloc+0x360/0x360 [ 67.164072] ? __free_insn_slot+0x7b0/0x7b0 [ 67.164072] ? 
unwind_next_frame+0x14ce/0x2f50 [ 67.164072] ? rcu_barrier_callback+0x90/0x90 [ 67.164072] ? unwind_get_return_address+0x5f/0xc0 [ 67.164072] ? security_capable_noaudit+0x75/0xb0 [ 67.164072] ? ns_capable_common+0x66/0x180 [ 67.164072] ? net_ctl_permissions+0x79/0x180 [ 67.164072] proc_sys_call_handler+0x1b5/0x2c0 [ 67.164072] ? proc_sys_poll+0x590/0x590 [ 67.164072] ? __alloc_fd+0xfd/0x670 [ 67.164072] __vfs_write+0xf9/0xae0 [ 67.164072] ? kernel_read+0x1a0/0x1a0 [ 67.164072] ? cp_new_stat+0x750/0x9b0 [ 67.164072] ? SYSC_fstat+0xd0/0xd0 [ 67.164072] ? __fdget_pos+0x68/0x1b0 [ 67.164072] ? vfs_statx_fd+0x44/0x80 [ 67.164072] vfs_write+0x163/0x5a0 [ 67.164072] ? SYSC_newfstatat+0xd0/0xd0 [ 67.164072] SyS_write+0xd0/0x1e0 [ 67.164072] ? SyS_read+0x1e0/0x1e0 [ 67.164072] ? SyS_read+0x1e0/0x1e0 [ 67.164072] do_syscall_64+0x252/0x6f0 [ 67.164072] ? syscall_return_slowpath+0x360/0x360 [ 67.164072] ? do_page_fault+0x93/0x3d0 [ 67.164072] ? __do_page_fault+0xc00/0xc00 [ 67.164072] ? prepare_exit_to_usermode+0x270/0x270 [ 67.164072] ? perf_trace_sys_enter+0x1720/0x1720 [ 67.164072] ? 
__put_user_4+0x1c/0x30 [ 67.164072] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [ 67.164072] RIP: 0033:0x7f2cc661e134 [ 67.164072] RSP: 002b:00007ffe68f680b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [ 67.164072] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f2cc661e134 [ 67.164072] RDX: 0000000000000006 RSI: 00005602eec50540 RDI: 0000000000000004 [ 67.164072] RBP: 00005602eec50540 R08: 00007f2cc6e9c8c0 R09: 00007ffe68f69883 [ 67.164072] R10: 0000000000000000 R11: 0000000000000246 R12: 00005602eec50290 [ 67.164072] R13: 0000000000000006 R14: 00007f2cc68e6760 R15: 0000000000000006 [ 67.164072] Code: f0 00 00 00 0f 11 84 24 f4 00 00 00 66 c1 c0 08 c7 84 24 00 01 00 00 00 00 ff ff 66 89 84 24 f2 00 00 00 48 8b 84 24 00 01 00 00 <66> 0f 6f 84 24 f0 00 00 00 48 89 84 24 40 01 00 00 8b 84 24 08 [ 67.164072] RIP: tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] RSP: ffff88008fd37168 [ 67.271787] ---[ end trace 642da9fd3673eef7 ]--- [ 67.273726] ================================================================== [ 67.276522] BUG: KASAN: stack-out-of-bounds in flush_tlb_mm_range+0x36c/0x380 [ 67.277689] Write of size 8 at addr ffff88008fd376b8 by task sysctl/1621 [ 67.277689] [ 67.277689] CPU: 1 PID: 1621 Comm: sysctl Tainted: G D O 4.14.32+ #16 [ 67.277689] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [ 67.277689] Call Trace: [ 67.277689] dump_stack+0xad/0x143 [ 67.277689] ? dma_virt_map_sg+0x2bd/0x2bd [ 67.277689] ? show_regs_print_info+0x6d/0x6d [ 67.277689] print_address_description+0x7a/0x440 [ 67.277689] ? flush_tlb_mm_range+0x36c/0x380 [ 67.277689] kasan_report+0x1dc/0x450 [ 67.277689] ? flush_tlb_mm_range+0x36c/0x380 [ 67.277689] flush_tlb_mm_range+0x36c/0x380 [ 67.277689] ? native_flush_tlb_others+0x490/0x490 [ 67.277689] ? __account_cfs_rq_runtime+0x6e0/0x6e0 [ 67.277689] tlb_flush_mmu_tlbonly+0x25b/0x4c0 [ 67.277689] arch_tlb_finish_mmu+0x8a/0x170 [ 67.277689] tlb_finish_mmu+0x11e/0x200 [ 67.277689] ? 
tlb_gather_mmu+0x50/0x50 [ 67.277689] free_ldt_pgtables+0xc3/0x110 [ 67.277689] ? restart_nmi+0x40/0x40 [ 67.277689] ? exit_robust_list+0x18d/0x6e0 [ 67.277689] ? get_stack_info+0x3f/0x3a0 [ 67.277689] ? do_io_submit+0x1d40/0x1d40 [ 67.277689] ? handle_futex_death+0x450/0x450 [ 67.277689] exit_mmap+0x18b/0x470 [ 67.277689] ? SyS_munmap+0x30/0x30 [ 67.277689] ? __schedule+0x3d0/0x21c0 [ 67.277689] ? __hrtimer_get_remaining+0x240/0x240 [ 67.277689] ? uprobe_munmap+0x450/0x450 [ 67.277689] ? taskstats_exit+0x1020/0x1020 [ 67.277689] mmput+0x17b/0x600 [ 67.277689] ? mmdrop_async_fn+0x10/0x10 [ 67.277689] ? mm_release+0x164/0x540 [ 67.277689] ? mm_access+0x150/0x150 [ 67.277689] ? xacct_add_tsk+0x920/0x920 [ 67.277689] ? down_read+0x60/0x160 [ 67.277689] ? down_write_killable+0x130/0x130 [ 67.277689] do_exit+0x819/0x1860 [ 67.277689] ? kernel_read+0x1a0/0x1a0 [ 67.277689] ? mm_update_next_owner+0xf60/0xf60 [ 67.277689] ? SYSC_fstat+0xd0/0xd0 [ 67.277689] ? __fdget_pos+0x68/0x1b0 [ 67.277689] ? vfs_statx_fd+0x44/0x80 [ 67.277689] ? vfs_write+0x163/0x5a0 [ 67.277689] ? SYSC_newfstatat+0xd0/0xd0 [ 67.277689] ? SyS_write+0xd0/0x1e0 [ 67.277689] ? SyS_read+0x1e0/0x1e0 [ 67.277689] ? SyS_read+0x1e0/0x1e0 [ 67.277689] ? do_syscall_64+0x252/0x6f0 [ 67.277689] ? syscall_return_slowpath+0x360/0x360 [ 67.277689] ? do_page_fault+0x93/0x3d0 [ 67.277689] ? __do_page_fault+0xc00/0xc00 [ 67.277689] ? prepare_exit_to_usermode+0x270/0x270 [ 67.277689] ? perf_trace_sys_enter+0x1720/0x1720 [ 67.277689] ? 
__put_user_4+0x1c/0x30 [ 67.277689] rewind_stack_do_exit+0x17/0x20 [ 67.277689] [ 67.277689] The buggy address belongs to the page: [ 67.277689] page:ffffea00023f4dc0 count:0 mapcount:0 mapping: (null) index:0x1 [ 67.277689] flags: 0xffffc000000000() [ 67.277689] raw: 00ffffc000000000 0000000000000000 0000000000000001 00000000ffffffff [ 67.277689] raw: 0000000000000000 dead000000000200 0000000000000000 0000000000000000 [ 67.277689] page dumped because: kasan: bad access detected [ 67.277689] [ 67.277689] Memory state around the buggy address: [ 67.277689] ffff88008fd37580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 67.277689] ffff88008fd37600: 00 00 00 00 00 00 00 00 f2 f2 f2 00 00 00 00 00 [ 67.277689] >ffff88008fd37680: 00 f1 f1 f1 f1 00 00 f1 00 f3 f3 f3 f3 00 00 00 [ 67.277689] ^ [ 67.277689] ffff88008fd37700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 67.277689] ffff88008fd37780: 00 00 00 00 00 f1 f1 f1 f1 f1 f1 f8 f2 f2 f2 f2 [ 67.277689] ================================================================== ``` Here the bug also happened inside KASAN itself during stack unwinding. There are some issues in `flush_tlb_mm_range()`, but they apply to a more recent version of the function than the one we have.
1.0
KASAN: Errors on starting Tempesta - It's hard to say how much this issue really applies to Tempesta. I've tried to run the kernel with slightly different configs where KASAN was enabled, but I couldn't start Tempesta when KASAN is enabled. At the same time, other teammates have no issues with running KASAN-enabled kernels and starting Tempesta on them. Probably there is some issue in my environment, but I've spent too much time figuring it out and still have no understanding of the root of the problem. The kernel is the current master tempesta-tech/linux-4.14.32-tfw@580e5dd973162b01a811fc6731c76dd51f263f91 , [this config](https://github.com/tempesta-tech/tempesta/files/3156911/config-with-kasan.txt) which is based on [kernel config for our kernel packages](https://github.com/tempesta-tech/linux-4.14.32-tfw/releases/tag/debian-9%2F4.14.32-tfw6). To reproduce: - build a kernel with KASAN enabled (my environment is Debian 9 Stable) and run it; - build Tempesta; - try to start Tempesta with an empty config file. In this configuration I have the following issues: `BUILD_BUG_ON` failure: ``` tempesta/tempesta_fw/http_parser.c:460:2: note: in expansion of macro ‘BUILD_BUG_ON’ BUILD_BUG_ON(!__builtin_constant_p((limit - 10) / 10)); ``` Not a crucial issue, since it only affects performance; it can be ignored in KASAN builds. And I get an oops on Tempesta start: ``` [90440.504922] tempesta_lib: loading out-of-tree module taints kernel. [90440.689935] [tdb] Start Tempesta DB [90440.748204] [tempesta fw] Initializing Tempesta FW kernel module... [90440.749853] [tempesta fw] Registering new classifier: frang [90440.759226] [tempesta fw] Registering new scheduler: hash [90440.760946] [tempesta fw] Registering new scheduler: ratio [90440.791811] [tempesta fw] Preparing for the configuration processing. 
[90440.794827] kasan: CONFIG_KASAN_INLINE enabled [90440.796387] kasan: GPF could be caused by NULL-ptr deref or user memory access [90440.798250] general protection fault: 0000 [#1] SMP KASAN PTI [90440.800374] Modules linked in: tempesta_fw(O) tempesta_db(O) sha256_ssse3 sha512_ssse3 sha512_generic ccm ctr gcm tempesta_tls(O) tempesta_lib(O) snd_hda_codec_generic kvm_intel iTCO_wdt snd_hda_intel iTCO_vendor_support qxl snd_hda_codec kvm ttm snd_hda_core irqbypass snd_hwdep crct10dif_pclmul crc32_pclmul snd_pcm drm_kms_helper snd_timer ghash_clmulni_intel virtio_balloon cryptd sg snd virtio_console soundcore pcspkr evdev serio_raw drm lpc_ich mfd_core shpchp button binfmt_misc ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 fscrypto crc32c_generic sr_mod cdrom virtio_blk virtio_net crc32c_intel ahci libahci psmouse ehci_pci uhci_hcd ehci_hcd i2c_i801 libata sym53c8xx scsi_transport_spi usbcore virtio_pci virtio_ring virtio scsi_mod [90440.800374] CPU: 0 PID: 5196 Comm: sysctl Tainted: G O 4.14.32+ #16 [90440.800374] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [90440.800374] task: ffff880057c32e80 task.stack: ffff8800247d8000 [90440.800374] RIP: 0010:tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] [90440.800374] RSP: 0018:ffff8800247df168 EFLAGS: 00010202 [90440.800374] RAX: 00000000ffff0000 RBX: ffffffffc24ccfc0 RCX: dffffc0000000000 [90440.800374] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000050 [90440.800374] RBP: 1ffff100048fbe2f R08: 1ffff100048fbde0 R09: ffff8800247defc8 [90440.800374] R10: 0000000000000001 R11: 0000000000236448 R12: ffff88006b74e870 [90440.800374] R13: ffff8800247df198 R14: ffff8800247df218 R15: 0000000000000000 [90440.800374] FS: 00007f92a3de48c0(0000) GS:ffff88008c600000(0000) knlGS:0000000000000000 [90440.800374] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [90440.800374] CR2: 00007f92a34cb260 CR3: 000000004b5f6003 CR4: 00000000003606f0 [90440.800374] Call Trace: 
[90440.800374] ? tfw_listen_sock_add+0x1f0/0x1f0 [tempesta_fw] [90440.800374] ? format_decode+0x3e4/0x9f0 [90440.800374] ? kasan_unpoison_shadow+0x30/0x40 [90440.800374] ? kasan_kmalloc+0xa0/0xd0 [90440.800374] ? __kmalloc+0x15c/0x3f0 [90440.800374] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [90440.800374] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [90440.800374] ? entry_set_name+0x8f/0x1d0 [tempesta_fw] [90440.800374] ? tfw_sched_unregister+0x390/0x390 [tempesta_fw] [90440.800374] ? parse_cfg_entry+0x967/0x2590 [tempesta_fw] [90440.800374] spec_handle_entry+0xc9/0x310 [tempesta_fw] [90440.800374] spec_finish_handling+0xf6/0x320 [tempesta_fw] [90440.800374] ? tfw_cfg_parse+0x8c/0x160 [tempesta_fw] [90440.800374] tfw_cfg_parse_mods+0x4eb/0x700 [tempesta_fw] [90440.800374] ? tfw_cfg_handle_children+0x6a0/0x6a0 [tempesta_fw] [90440.800374] ? parse_cfg_entry+0x4ef/0x2590 [tempesta_fw] [90440.800374] tfw_cfg_parse+0xa3/0x160 [tempesta_fw] [90440.800374] ? tfw_cfg_read_file+0x620/0x620 [tempesta_fw] [90440.800374] tfw_ctlfn_state_io+0x721/0xf30 [tempesta_fw] [90440.800374] ? mutex_lock+0xe/0x30 [90440.800374] ? tfw_ctlfn_state_io+0x138/0xf30 [tempesta_fw] [90440.800374] ? blk_mq_debugfs_unregister_sched_hctx+0x90/0x90 [90440.800374] ? tfw_mod_register+0x2f0/0x2f0 [tempesta_fw] [90440.800374] ? __read_once_size_nocheck.constprop.3+0x50/0x50 [90440.800374] ? bpf_prog_alloc+0x350/0x350 [90440.800374] ? unwind_next_frame+0xb2e/0x2f50 [90440.800374] ? __mod_tree_remove+0x40/0x40 [90440.800374] ? __handle_mm_fault+0x2db7/0x5820 [90440.800374] ? get_stack_info+0x3f/0x3a0 [90440.800374] ? __pmd_alloc+0x360/0x360 [90440.800374] ? __free_insn_slot+0x7b0/0x7b0 [90440.800374] ? unwind_next_frame+0x14ce/0x2f50 [90440.800374] ? rcu_barrier_callback+0x90/0x90 [90440.800374] ? unwind_get_return_address+0x5f/0xc0 [90440.800374] ? security_capable_noaudit+0x75/0xb0 [90440.800374] ? ns_capable_common+0x66/0x180 [90440.800374] ? 
net_ctl_permissions+0x79/0x180 [90440.800374] proc_sys_call_handler+0x1b5/0x2c0 [90440.800374] ? proc_sys_poll+0x590/0x590 [90440.800374] ? __alloc_fd+0xfd/0x670 [90440.800374] __vfs_write+0xf9/0xae0 [90440.800374] ? kernel_read+0x1a0/0x1a0 [90440.800374] ? cp_new_stat+0x750/0x9b0 [90440.800374] ? SYSC_fstat+0xd0/0xd0 [90440.800374] ? __fdget_pos+0x68/0x1b0 [90440.800374] ? vfs_statx_fd+0x44/0x80 [90440.800374] vfs_write+0x163/0x5a0 [90440.800374] ? SYSC_newfstatat+0xd0/0xd0 [90440.800374] SyS_write+0xd0/0x1e0 [90440.800374] ? SyS_read+0x1e0/0x1e0 [90440.800374] ? SyS_read+0x1e0/0x1e0 [90440.800374] do_syscall_64+0x252/0x6f0 [90440.800374] ? syscall_return_slowpath+0x360/0x360 [90440.800374] ? do_page_fault+0x93/0x3d0 [90440.800374] ? __do_page_fault+0xc00/0xc00 [90440.800374] ? prepare_exit_to_usermode+0x270/0x270 [90440.800374] ? perf_trace_sys_enter+0x1720/0x1720 [90440.800374] ? __put_user_4+0x1c/0x30 [90440.800374] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [90440.800374] RIP: 0033:0x7f92a3566134 [90440.800374] RSP: 002b:00007fff99731ab8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [90440.800374] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f92a3566134 [90440.800374] RDX: 0000000000000006 RSI: 0000558e4d9b3540 RDI: 0000000000000004 [90440.800374] RBP: 0000558e4d9b3540 R08: 00007f92a3de48c0 R09: 00007fff99733883 [90440.800374] R10: 0000000000000000 R11: 0000000000000246 R12: 0000558e4d9b3290 [90440.800374] R13: 0000000000000006 R14: 00007f92a382e760 R15: 0000000000000006 [90440.800374] Code: f0 00 00 00 0f 11 84 24 f4 00 00 00 66 c1 c0 08 c7 84 24 00 01 00 00 00 00 ff ff 66 89 84 24 f2 00 00 00 48 8b 84 24 00 01 00 00 <66> 0f 6f 84 24 f0 00 00 00 48 89 84 24 40 01 00 00 8b 84 24 08 [90440.800374] RIP: tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] RSP: ffff8800247df168 [90440.932972] ---[ end trace d82a72eb02da4577 ]--- ``` But some start attempts is even more weird: ``` [ 66.858803] tempesta_lib: loading out-of-tree module taints kernel. 
[ 67.042644] [tdb] Start Tempesta DB [ 67.104799] [tempesta fw] Initializing Tempesta FW kernel module... [ 67.106289] [tempesta fw] Registering new classifier: frang [ 67.115439] [tempesta fw] Registering new scheduler: hash [ 67.117462] [tempesta fw] Registering new scheduler: ratio [ 67.151609] [tempesta fw] Preparing for the configuration processing. [ 67.155901] kasan: CONFIG_KASAN_INLINE enabled [ 67.157478] kasan: GPF could be caused by NULL-ptr deref or user memory access [ 67.159942] general protection fault: 0000 [#1] SMP KASAN PTI [ 67.161343] Modules linked in: tempesta_fw(O) tempesta_db(O) sha256_ssse3 sha512_ssse3 sha512_generic ccm ctr gcm tempesta_tls(O) tempesta_lib(O) kvm_intel snd_hda_codec_generic kvm iTCO_wdt qxl iTCO_vendor_support snd_hda_intel irqbypass crct10dif_pclmul ttm crc32_pclmul snd_hda_codec ghash_clmulni_intel cryptd snd_hda_core sg drm_kms_helper snd_hwdep snd_pcm virtio_balloon snd_timer virtio_console snd soundcore drm evdev serio_raw pcspkr lpc_ich mfd_core shpchp binfmt_misc button ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 fscrypto crc32c_generic sr_mod cdrom virtio_blk virtio_net ahci libahci crc32c_intel uhci_hcd ehci_pci libata ehci_hcd sym53c8xx psmouse scsi_transport_spi usbcore i2c_i801 virtio_pci virtio_ring virtio scsi_mod [ 67.164072] CPU: 1 PID: 1621 Comm: sysctl Tainted: G O 4.14.32+ #16 [ 67.164072] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [ 67.164072] task: ffff880030b46c80 task.stack: ffff88008fd30000 [ 67.164072] RIP: 0010:tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] [ 67.164072] RSP: 0018:ffff88008fd37168 EFLAGS: 00010202 [ 67.164072] RAX: 00000000ffff0000 RBX: ffffffffc2314000 RCX: dffffc0000000000 [ 67.164072] RDX: 000000000000000a RSI: 0000000000000000 RDI: 0000000000000050 [ 67.164072] RBP: 1ffff10011fa6e2f R08: 1ffff10011fa6de0 R09: ffff88008fd36fc8 [ 67.164072] R10: 0000000000000001 R11: 0000000000236448 R12: ffff880027b4a240 [ 67.164072] 
R13: ffff88008fd37198 R14: ffff88008fd37218 R15: 0000000000000000 [ 67.164072] FS: 00007f2cc6e9c8c0(0000) GS:ffff880086500000(0000) knlGS:0000000000000000 [ 67.164072] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 67.164072] CR2: 00007f2cc6583260 CR3: 00000000424f4001 CR4: 00000000003606e0 [ 67.164072] Call Trace: [ 67.164072] ? tfw_listen_sock_add+0x1f0/0x1f0 [tempesta_fw] [ 67.164072] ? format_decode+0x3e4/0x9f0 [ 67.164072] ? kasan_unpoison_shadow+0x30/0x40 [ 67.164072] ? kasan_kmalloc+0xa0/0xd0 [ 67.164072] ? __kmalloc+0x15c/0x3f0 [ 67.164072] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [ 67.164072] ? __alloc_and_copy_literal+0x33/0x1f0 [tempesta_fw] [ 67.164072] ? entry_set_name+0x8f/0x1d0 [tempesta_fw] [ 67.164072] ? tfw_addr_ntop+0xf0/0xf0 [tempesta_fw] [ 67.164072] ? parse_cfg_entry+0x967/0x2590 [tempesta_fw] [ 67.164072] spec_handle_entry+0xc9/0x310 [tempesta_fw] [ 67.164072] spec_finish_handling+0xf6/0x320 [tempesta_fw] [ 67.164072] ? tfw_cfg_parse+0x8c/0x160 [tempesta_fw] [ 67.164072] tfw_cfg_parse_mods+0x4eb/0x700 [tempesta_fw] [ 67.164072] ? tfw_cfg_handle_children+0x6a0/0x6a0 [tempesta_fw] [ 67.164072] ? parse_cfg_entry+0x4ef/0x2590 [tempesta_fw] [ 67.164072] tfw_cfg_parse+0xa3/0x160 [tempesta_fw] [ 67.164072] ? tfw_cfg_read_file+0x620/0x620 [tempesta_fw] [ 67.164072] tfw_ctlfn_state_io+0x721/0xf30 [tempesta_fw] [ 67.164072] ? mutex_lock+0xe/0x30 [ 67.164072] ? tfw_ctlfn_state_io+0x138/0xf30 [tempesta_fw] [ 67.164072] ? blk_mq_debugfs_unregister_sched_hctx+0x90/0x90 [ 67.164072] ? tfw_mod_register+0x2f0/0x2f0 [tempesta_fw] [ 67.164072] ? __read_once_size_nocheck.constprop.3+0x50/0x50 [ 67.164072] ? bpf_prog_alloc+0x350/0x350 [ 67.164072] ? unwind_next_frame+0xb2e/0x2f50 [ 67.164072] ? __mod_tree_remove+0x40/0x40 [ 67.164072] ? __handle_mm_fault+0x2db7/0x5820 [ 67.164072] ? get_stack_info+0x3f/0x3a0 [ 67.164072] ? __pmd_alloc+0x360/0x360 [ 67.164072] ? __free_insn_slot+0x7b0/0x7b0 [ 67.164072] ? 
unwind_next_frame+0x14ce/0x2f50 [ 67.164072] ? rcu_barrier_callback+0x90/0x90 [ 67.164072] ? unwind_get_return_address+0x5f/0xc0 [ 67.164072] ? security_capable_noaudit+0x75/0xb0 [ 67.164072] ? ns_capable_common+0x66/0x180 [ 67.164072] ? net_ctl_permissions+0x79/0x180 [ 67.164072] proc_sys_call_handler+0x1b5/0x2c0 [ 67.164072] ? proc_sys_poll+0x590/0x590 [ 67.164072] ? __alloc_fd+0xfd/0x670 [ 67.164072] __vfs_write+0xf9/0xae0 [ 67.164072] ? kernel_read+0x1a0/0x1a0 [ 67.164072] ? cp_new_stat+0x750/0x9b0 [ 67.164072] ? SYSC_fstat+0xd0/0xd0 [ 67.164072] ? __fdget_pos+0x68/0x1b0 [ 67.164072] ? vfs_statx_fd+0x44/0x80 [ 67.164072] vfs_write+0x163/0x5a0 [ 67.164072] ? SYSC_newfstatat+0xd0/0xd0 [ 67.164072] SyS_write+0xd0/0x1e0 [ 67.164072] ? SyS_read+0x1e0/0x1e0 [ 67.164072] ? SyS_read+0x1e0/0x1e0 [ 67.164072] do_syscall_64+0x252/0x6f0 [ 67.164072] ? syscall_return_slowpath+0x360/0x360 [ 67.164072] ? do_page_fault+0x93/0x3d0 [ 67.164072] ? __do_page_fault+0xc00/0xc00 [ 67.164072] ? prepare_exit_to_usermode+0x270/0x270 [ 67.164072] ? perf_trace_sys_enter+0x1720/0x1720 [ 67.164072] ? 
__put_user_4+0x1c/0x30 [ 67.164072] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [ 67.164072] RIP: 0033:0x7f2cc661e134 [ 67.164072] RSP: 002b:00007ffe68f680b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [ 67.164072] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f2cc661e134 [ 67.164072] RDX: 0000000000000006 RSI: 00005602eec50540 RDI: 0000000000000004 [ 67.164072] RBP: 00005602eec50540 R08: 00007f2cc6e9c8c0 R09: 00007ffe68f69883 [ 67.164072] R10: 0000000000000000 R11: 0000000000000246 R12: 00005602eec50290 [ 67.164072] R13: 0000000000000006 R14: 00007f2cc68e6760 R15: 0000000000000006 [ 67.164072] Code: f0 00 00 00 0f 11 84 24 f4 00 00 00 66 c1 c0 08 c7 84 24 00 01 00 00 00 00 ff ff 66 89 84 24 f2 00 00 00 48 8b 84 24 00 01 00 00 <66> 0f 6f 84 24 f0 00 00 00 48 89 84 24 40 01 00 00 8b 84 24 08 [ 67.164072] RIP: tfw_cfgop_listen+0x3eb/0x570 [tempesta_fw] RSP: ffff88008fd37168 [ 67.271787] ---[ end trace 642da9fd3673eef7 ]--- [ 67.273726] ================================================================== [ 67.276522] BUG: KASAN: stack-out-of-bounds in flush_tlb_mm_range+0x36c/0x380 [ 67.277689] Write of size 8 at addr ffff88008fd376b8 by task sysctl/1621 [ 67.277689] [ 67.277689] CPU: 1 PID: 1621 Comm: sysctl Tainted: G D O 4.14.32+ #16 [ 67.277689] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 [ 67.277689] Call Trace: [ 67.277689] dump_stack+0xad/0x143 [ 67.277689] ? dma_virt_map_sg+0x2bd/0x2bd [ 67.277689] ? show_regs_print_info+0x6d/0x6d [ 67.277689] print_address_description+0x7a/0x440 [ 67.277689] ? flush_tlb_mm_range+0x36c/0x380 [ 67.277689] kasan_report+0x1dc/0x450 [ 67.277689] ? flush_tlb_mm_range+0x36c/0x380 [ 67.277689] flush_tlb_mm_range+0x36c/0x380 [ 67.277689] ? native_flush_tlb_others+0x490/0x490 [ 67.277689] ? __account_cfs_rq_runtime+0x6e0/0x6e0 [ 67.277689] tlb_flush_mmu_tlbonly+0x25b/0x4c0 [ 67.277689] arch_tlb_finish_mmu+0x8a/0x170 [ 67.277689] tlb_finish_mmu+0x11e/0x200 [ 67.277689] ? 
tlb_gather_mmu+0x50/0x50 [ 67.277689] free_ldt_pgtables+0xc3/0x110 [ 67.277689] ? restart_nmi+0x40/0x40 [ 67.277689] ? exit_robust_list+0x18d/0x6e0 [ 67.277689] ? get_stack_info+0x3f/0x3a0 [ 67.277689] ? do_io_submit+0x1d40/0x1d40 [ 67.277689] ? handle_futex_death+0x450/0x450 [ 67.277689] exit_mmap+0x18b/0x470 [ 67.277689] ? SyS_munmap+0x30/0x30 [ 67.277689] ? __schedule+0x3d0/0x21c0 [ 67.277689] ? __hrtimer_get_remaining+0x240/0x240 [ 67.277689] ? uprobe_munmap+0x450/0x450 [ 67.277689] ? taskstats_exit+0x1020/0x1020 [ 67.277689] mmput+0x17b/0x600 [ 67.277689] ? mmdrop_async_fn+0x10/0x10 [ 67.277689] ? mm_release+0x164/0x540 [ 67.277689] ? mm_access+0x150/0x150 [ 67.277689] ? xacct_add_tsk+0x920/0x920 [ 67.277689] ? down_read+0x60/0x160 [ 67.277689] ? down_write_killable+0x130/0x130 [ 67.277689] do_exit+0x819/0x1860 [ 67.277689] ? kernel_read+0x1a0/0x1a0 [ 67.277689] ? mm_update_next_owner+0xf60/0xf60 [ 67.277689] ? SYSC_fstat+0xd0/0xd0 [ 67.277689] ? __fdget_pos+0x68/0x1b0 [ 67.277689] ? vfs_statx_fd+0x44/0x80 [ 67.277689] ? vfs_write+0x163/0x5a0 [ 67.277689] ? SYSC_newfstatat+0xd0/0xd0 [ 67.277689] ? SyS_write+0xd0/0x1e0 [ 67.277689] ? SyS_read+0x1e0/0x1e0 [ 67.277689] ? SyS_read+0x1e0/0x1e0 [ 67.277689] ? do_syscall_64+0x252/0x6f0 [ 67.277689] ? syscall_return_slowpath+0x360/0x360 [ 67.277689] ? do_page_fault+0x93/0x3d0 [ 67.277689] ? __do_page_fault+0xc00/0xc00 [ 67.277689] ? prepare_exit_to_usermode+0x270/0x270 [ 67.277689] ? perf_trace_sys_enter+0x1720/0x1720 [ 67.277689] ? 
__put_user_4+0x1c/0x30 [ 67.277689] rewind_stack_do_exit+0x17/0x20 [ 67.277689] [ 67.277689] The buggy address belongs to the page: [ 67.277689] page:ffffea00023f4dc0 count:0 mapcount:0 mapping: (null) index:0x1 [ 67.277689] flags: 0xffffc000000000() [ 67.277689] raw: 00ffffc000000000 0000000000000000 0000000000000001 00000000ffffffff [ 67.277689] raw: 0000000000000000 dead000000000200 0000000000000000 0000000000000000 [ 67.277689] page dumped because: kasan: bad access detected [ 67.277689] [ 67.277689] Memory state around the buggy address: [ 67.277689] ffff88008fd37580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 67.277689] ffff88008fd37600: 00 00 00 00 00 00 00 00 f2 f2 f2 00 00 00 00 00 [ 67.277689] >ffff88008fd37680: 00 f1 f1 f1 f1 00 00 f1 00 f3 f3 f3 f3 00 00 00 [ 67.277689] ^ [ 67.277689] ffff88008fd37700: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 [ 67.277689] ffff88008fd37780: 00 00 00 00 00 f1 f1 f1 f1 f1 f1 f8 f2 f2 f2 f2 [ 67.277689] ================================================================== ``` Here the bug also happened inside KASAN itself during stack unwinding. There are some issues in `flush_tlb_mm_range()`, but they apply to a more recent version of the function than the one we have.
priority
kasan errors on starting tempesta it s hard to say how much this issue is really apply to tempesta i ve tried to run kernel with slightly different configs where kasan was enabled but i couldn t start tempesta when the kasan is enabled in the same time other teammates has no issues with running kasan enabled kernels and starting tempesta on them probably there is some issue in my environment but i ve spent too much time figuring it out and still have no understanding of the roots of the problem kernel is the current master tempesta tech linux tfw which is based on to reproduce build kernel with kasan enabled my environment is debian stable and run it build tempesta try to start tempesta with empty config file in this configuration i have following issues build bug on failure tempesta tempesta fw http parser c note in expansion of macro ‘build bug on’ build bug on builtin constant p limit not a crucial issue since it only affect performance can be ignored in kasan builds and i get an oops on tempesta start tempesta lib loading out of tree module taints kernel start tempesta db initializing tempesta fw kernel module registering new classifier frang registering new scheduler hash registering new scheduler ratio preparing for the configuration processing kasan config kasan inline enabled kasan gpf could be caused by null ptr deref or user memory access general protection fault smp kasan pti modules linked in tempesta fw o tempesta db o generic ccm ctr gcm tempesta tls o tempesta lib o snd hda codec generic kvm intel itco wdt snd hda intel itco vendor support qxl snd hda codec kvm ttm snd hda core irqbypass snd hwdep pclmul pclmul snd pcm drm kms helper snd timer ghash clmulni intel virtio balloon cryptd sg snd virtio console soundcore pcspkr evdev serio raw drm lpc ich mfd core shpchp button binfmt misc ip tables x tables mbcache fscrypto generic sr mod cdrom virtio blk virtio net intel ahci libahci psmouse ehci pci uhci hcd ehci hcd libata scsi transport spi usbcore 
virtio pci virtio ring virtio scsi mod cpu pid comm sysctl tainted g o hardware name qemu standard pc bios anatol task task stack rip tfw cfgop listen rsp eflags rax rbx rcx rdx rsi rdi rbp fs gs knlgs cs ds es call trace tfw listen sock add format decode kasan unpoison shadow kasan kmalloc kmalloc alloc and copy literal alloc and copy literal entry set name tfw sched unregister parse cfg entry spec handle entry spec finish handling tfw cfg parse tfw cfg parse mods tfw cfg handle children parse cfg entry tfw cfg parse tfw cfg read file tfw ctlfn state io mutex lock tfw ctlfn state io blk mq debugfs unregister sched hctx tfw mod register read once size nocheck constprop bpf prog alloc unwind next frame mod tree remove handle mm fault get stack info pmd alloc free insn slot unwind next frame rcu barrier callback unwind get return address security capable noaudit ns capable common net ctl permissions proc sys call handler proc sys poll alloc fd vfs write kernel read cp new stat sysc fstat fdget pos vfs statx fd vfs write sysc newfstatat sys write sys read sys read do syscall syscall return slowpath do page fault do page fault prepare exit to usermode perf trace sys enter put user entry syscall after hwframe rip rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp code ff ff rip tfw cfgop listen rsp but some start attempts is even more weird tempesta lib loading out of tree module taints kernel start tempesta db initializing tempesta fw kernel module registering new classifier frang registering new scheduler hash registering new scheduler ratio preparing for the configuration processing kasan config kasan inline enabled kasan gpf could be caused by null ptr deref or user memory access general protection fault smp kasan pti modules linked in tempesta fw o tempesta db o generic ccm ctr gcm tempesta tls o tempesta lib o kvm intel snd hda codec generic kvm itco wdt qxl itco vendor support snd hda intel irqbypass pclmul ttm pclmul snd hda codec ghash clmulni 
intel cryptd snd hda core sg drm kms helper snd hwdep snd pcm virtio balloon snd timer virtio console snd soundcore drm evdev serio raw pcspkr lpc ich mfd core shpchp binfmt misc button ip tables x tables mbcache fscrypto generic sr mod cdrom virtio blk virtio net ahci libahci intel uhci hcd ehci pci libata ehci hcd psmouse scsi transport spi usbcore virtio pci virtio ring virtio scsi mod cpu pid comm sysctl tainted g o hardware name qemu standard pc bios anatol task task stack rip tfw cfgop listen rsp eflags rax rbx rcx rdx rsi rdi rbp fs gs knlgs cs ds es call trace tfw listen sock add format decode kasan unpoison shadow kasan kmalloc kmalloc alloc and copy literal alloc and copy literal entry set name tfw addr ntop parse cfg entry spec handle entry spec finish handling tfw cfg parse tfw cfg parse mods tfw cfg handle children parse cfg entry tfw cfg parse tfw cfg read file tfw ctlfn state io mutex lock tfw ctlfn state io blk mq debugfs unregister sched hctx tfw mod register read once size nocheck constprop bpf prog alloc unwind next frame mod tree remove handle mm fault get stack info pmd alloc free insn slot unwind next frame rcu barrier callback unwind get return address security capable noaudit ns capable common net ctl permissions proc sys call handler proc sys poll alloc fd vfs write kernel read cp new stat sysc fstat fdget pos vfs statx fd vfs write sysc newfstatat sys write sys read sys read do syscall syscall return slowpath do page fault do page fault prepare exit to usermode perf trace sys enter put user entry syscall after hwframe rip rsp eflags orig rax rax ffffffffffffffda rbx rcx rdx rsi rdi rbp code ff ff rip tfw cfgop listen rsp bug kasan stack out of bounds in flush tlb mm range write of size at addr by task sysctl cpu pid comm sysctl tainted g d o hardware name qemu standard pc bios anatol call trace dump stack dma virt map sg show regs print info print address description flush tlb mm range kasan report flush tlb mm range flush tlb mm range 
native flush tlb others account cfs rq runtime tlb flush mmu tlbonly arch tlb finish mmu tlb finish mmu tlb gather mmu free ldt pgtables restart nmi exit robust list get stack info do io submit handle futex death exit mmap sys munmap schedule hrtimer get remaining uprobe munmap taskstats exit mmput mmdrop async fn mm release mm access xacct add tsk down read down write killable do exit kernel read mm update next owner sysc fstat fdget pos vfs statx fd vfs write sysc newfstatat sys write sys read sys read do syscall syscall return slowpath do page fault do page fault prepare exit to usermode perf trace sys enter put user rewind stack do exit the buggy address belongs to the page page count mapcount mapping null index flags raw raw page dumped because kasan bad access detected memory state around the buggy address here bug is also happened inside kasan itself while stack unwinding there are some issues in flush tlb mm range but they apply to more recent version of the function that we have
1
798,699
28,292,689,071
IssuesEvent
2023-04-09 12:15:00
bounswe/bounswe2023group8
https://api.github.com/repos/bounswe/bounswe2023group8
closed
Add Project Requirements to Milestone 1 Report Page
status: to-do priority: high effort: low
Milestone 1 report page has already been created. There is a "Software Requirement Specification" section. Take the project requirements from "Requirements" page and add them to corresponding section.
1.0
Add Project Requirements to Milestone 1 Report Page - Milestone 1 report page has already been created. There is a "Software Requirement Specification" section. Take the project requirements from "Requirements" page and add them to corresponding section.
priority
add project requirements to milestone report page milestone report page has already been created there is a software requirement specification section take the project requirements from requirements page and add them to corresponding section
1
401,597
11,795,200,749
IssuesEvent
2020-03-18 08:28:26
thaliawww/concrexit
https://api.github.com/repos/thaliawww/concrexit
closed
Improve messages for exam/summary upload
education priority: low technical change
In GitLab by @se-bastiaan on Jan 14, 2020, 11:39 ### One-sentence description Improve messages for exam/summary upload ### Why? They're unclear?. It does not say that the exam will be added to the approval queue. And that causes confusion because people often upload the document again. Edit: I did not get any messages at all. (Maybe we could even make the approval queue crowd sourced so that everyone can approve documents that are not their own? Although that may not work because you can get money for summaries)
1.0
Improve messages for exam/summary upload - In GitLab by @se-bastiaan on Jan 14, 2020, 11:39 ### One-sentence description Improve messages for exam/summary upload ### Why? They're unclear?. It does not say that the exam will be added to the approval queue. And that causes confusion because people often upload the document again. Edit: I did not get any messages at all. (Maybe we could even make the approval queue crowd sourced so that everyone can approve documents that are not their own? Although that may not work because you can get money for summaries)
priority
improve messages for exam summary upload in gitlab by se bastiaan on jan one sentence description improve messages for exam summary upload why they re unclear it does not say that the exam will be added to the approval queue and that causes confusion because people often upload the document again edit i did not get any messages at all maybe we could even make the approval queue crowd sourced so that everyone can approve documents that are not their own although that may not work because you can get money for summaries
1
299,427
9,205,486,705
IssuesEvent
2019-03-08 10:42:47
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
PostGIS Add Layer does not support non-TCP/IP connections
Category: Data Provider Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **Frank Warmerdam -** (Frank Warmerdam -) Original Redmine Issue: 817, https://issues.qgis.org/issues/817 Original Assignee: nobody - --- There is no mechanism to access a local postgres server using named pipes instead of tcp/ip sockets. In PQconnectdb() all that is needed to support this case is to omit the host= and port= keywords in the connection string. Likewise it would be helpful (at least for this case) to be able to omit the userid and password. --- - [pg_namedpipes.diff](https://issues.qgis.org/attachments/download/1951/pg_namedpipes.diff) (Frank Warmerdam -)
1.0
PostGIS Add Layer does not support non-TCP/IP connections - --- Author Name: **Frank Warmerdam -** (Frank Warmerdam -) Original Redmine Issue: 817, https://issues.qgis.org/issues/817 Original Assignee: nobody - --- There is no mechanism to access a local postgres server using named pipes instead of tcp/ip sockets. In PQconnectdb() all that is needed to support this case is to omit the host= and port= keywords in the connection string. Likewise it would be helpful (at least for this case) to be able to omit the userid and password. --- - [pg_namedpipes.diff](https://issues.qgis.org/attachments/download/1951/pg_namedpipes.diff) (Frank Warmerdam -)
priority
postgis add layer does not support non tcp ip connections author name frank warmerdam frank warmerdam original redmine issue original assignee nobody there is no mechanism to access a local postgres server using named pipes instead of tcp ip sockets in pqconnectdb all that is needed to support this case is to omit the host and port keywords in the connection string likewise it would be helpful at least for this case to be able to omit the userid and password frank warmerdam
1
581,676
17,314,826,360
IssuesEvent
2021-07-27 03:39:54
ankidroid/Anki-Android
https://api.github.com/repos/ankidroid/Anki-Android
closed
desktop anki can't import ankidroid apkg backup files due to missing media file
Bug Help Wanted Priority-Low Reproduced Stale
``` Import failed. Traceback (most recent call last): File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/aqt.importing", line 327, in importFile File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/anki.importing.apkg", line 22, in run File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 869, in read File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 892, in open File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 859, in getinfo KeyError: "There is no item named 'media' in the archive" ```
1.0
desktop anki can't import ankidroid apkg backup files due to missing media file - ``` Import failed. Traceback (most recent call last): File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/aqt.importing", line 327, in importFile File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/anki.importing.apkg", line 22, in run File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 869, in read File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 892, in open File "C:\cygwin\home\dae\win\build\pyi.win32\anki\outPYZ1.pyz/zipfile", line 859, in getinfo KeyError: "There is no item named 'media' in the archive" ```
priority
desktop anki can t import ankidroid apkg backup files due to missing media file import failed traceback most recent call last file c cygwin home dae win build pyi anki pyz aqt importing line in importfile file c cygwin home dae win build pyi anki pyz anki importing apkg line in run file c cygwin home dae win build pyi anki pyz zipfile line in read file c cygwin home dae win build pyi anki pyz zipfile line in open file c cygwin home dae win build pyi anki pyz zipfile line in getinfo keyerror there is no item named media in the archive
1
646,278
21,043,037,059
IssuesEvent
2022-03-31 13:53:18
thesaurus-linguae-aegyptiae/tla-web
https://api.github.com/repos/thesaurus-linguae-aegyptiae/tla-web
closed
Belegstellenseite: case-Permuationen als solche markieren
feature request low priority
**Hintergrund** Einzelne Sätze eines Texts existieren ggf. mehrfach, in einem Set von möglichen Permuationen von Lesevarianten (dazu https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/43). Diese sind daran erkennbar, dass hinter der eigentlichen Satz-ID ein Index "-00", "-01", ... steht, z.B. -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-00 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-01 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-02 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-03 **To Do** In der Belegstellensuchergebnisseite sollen diese 'Permutationsfall'-Sätze besonders markiert werden, z.B. mit ```(mögliche Lesevariante)```. Das sentence-Objekt enthält dabei selbts schon die Information, ob es eine variante ist ("variants"): ``` { ... { "_index": "sentence", "_type": "_doc", "_id": "IBcBiFkWlf6udUn1rl8XyMKXdJY-03", "_score": 11.548093, "_source": { "_class": "tla.backend.es.model.SentenceEntity", "id": "IBcBiFkWlf6udUn1rl8XyMKXdJY-03", "context": { "textId": "LCCCO3C7YNFPFP3CZC3MCFYNCI", "textType": "Text", "line": "[86,15]", "paragraph": "Eb 712 = H 17", "position": 30, "variants": 4 }, ``` Siehe auch #219
1.0
Belegstellenseite: case-Permuationen als solche markieren - **Hintergrund** Einzelne Sätze eines Texts existieren ggf. mehrfach, in einem Set von möglichen Permuationen von Lesevarianten (dazu https://github.com/thesaurus-linguae-aegyptiae/tla-datentransformation/issues/43). Diese sind daran erkennbar, dass hinter der eigentlichen Satz-ID ein Index "-00", "-01", ... steht, z.B. -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-00 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-01 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-02 -http://localhost:9200/sentence/_search?q=id:IBcBiFkWlf6udUn1rl8XyMKXdJY-03 **To Do** In der Belegstellensuchergebnisseite sollen diese 'Permutationsfall'-Sätze besonders markiert werden, z.B. mit ```(mögliche Lesevariante)```. Das sentence-Objekt enthält dabei selbts schon die Information, ob es eine variante ist ("variants"): ``` { ... { "_index": "sentence", "_type": "_doc", "_id": "IBcBiFkWlf6udUn1rl8XyMKXdJY-03", "_score": 11.548093, "_source": { "_class": "tla.backend.es.model.SentenceEntity", "id": "IBcBiFkWlf6udUn1rl8XyMKXdJY-03", "context": { "textId": "LCCCO3C7YNFPFP3CZC3MCFYNCI", "textType": "Text", "line": "[86,15]", "paragraph": "Eb 712 = H 17", "position": 30, "variants": 4 }, ``` Siehe auch #219
priority
belegstellenseite case permuationen als solche markieren hintergrund einzelne sätze eines texts existieren ggf mehrfach in einem set von möglichen permuationen von lesevarianten dazu diese sind daran erkennbar dass hinter der eigentlichen satz id ein index steht z b to do in der belegstellensuchergebnisseite sollen diese permutationsfall sätze besonders markiert werden z b mit mögliche lesevariante das sentence objekt enthält dabei selbts schon die information ob es eine variante ist variants index sentence type doc id score source class tla backend es model sentenceentity id context textid texttype text line paragraph eb h position variants siehe auch
1
343,446
10,330,512,015
IssuesEvent
2019-09-02 14:51:26
MajorCooke/Doom4Doom
https://api.github.com/repos/MajorCooke/Doom4Doom
opened
5:4 Hud Clipping
Low Priority bug
5:4 resolutions have some hud clipping issues that will eventually need sorting out. Low priority at the moment.
1.0
5:4 Hud Clipping - 5:4 resolutions have some hud clipping issues that will eventually need sorting out. Low priority at the moment.
priority
hud clipping resolutions have some hud clipping issues that will eventually need sorting out low priority at the moment
1
149,109
5,711,677,293
IssuesEvent
2017-04-19 00:05:01
playasoft/laravel-voldb
https://api.github.com/repos/playasoft/laravel-voldb
opened
Allow username or email when logging in
enhancement priority: low
Related issue: https://github.com/playasoft/weightlifter/issues/78 Many users in the Art Grant Database project seem to forget their username after registering, so we should allow users to sign in by using either their username or their email address to prevent confusion.
1.0
Allow username or email when logging in - Related issue: https://github.com/playasoft/weightlifter/issues/78 Many users in the Art Grant Database project seem to forget their username after registering, so we should allow users to sign in by using either their username or their email address to prevent confusion.
priority
allow username or email when logging in related issue many users in the art grant database project seem to forget their username after registering so we should allow users to sign in by using either their username or their email address to prevent confusion
1
655,328
21,685,844,550
IssuesEvent
2022-05-09 11:11:41
canonical-web-and-design/charmed-osm.com
https://api.github.com/repos/canonical-web-and-design/charmed-osm.com
closed
Contact modal should be NPM module
Priority: Low
The `dynamic-contact-form.js` is a file for the contact pop up modal used in multiple repos - it should be made into an NPM module which can be installed.
1.0
Contact modal should be NPM module - The `dynamic-contact-form.js` is a file for the contact pop up modal used in multiple repos - it should be made into an NPM module which can be installed.
priority
contact modal should be npm module the dynamic contact form js is a file for the contact pop up modal used in multiple repos it should be made into an npm module which can be installed
1
745,722
25,997,681,560
IssuesEvent
2022-12-20 12:58:52
pendulum-chain/spacewalk
https://api.github.com/repos/pendulum-chain/spacewalk
opened
Add `clippy --fix` to pre-commit script
priority:low
It might be a good idea to add a `clippy --fix` command to the existing pre-commit script. It can automatically apply some best practices to the code, which the CI is likely to complain about anyways.
1.0
Add `clippy --fix` to pre-commit script - It might be a good idea to add a `clippy --fix` command to the existing pre-commit script. It can automatically apply some best practices to the code, which the CI is likely to complain about anyways.
priority
add clippy fix to pre commit script it might be a good idea to add a clippy fix command to the existing pre commit script it can automatically apply some best practices to the code which the ci is likely to complain about anyways
1
4,947
2,566,459,924
IssuesEvent
2015-02-08 15:34:17
cs2103jan2015-t13-2c/main
https://api.github.com/repos/cs2103jan2015-t13-2c/main
opened
As a user I can categorise my tasks (work, family, etc.)
priority.low
so that I can view tasks of the same type together.
1.0
As a user I can categorise my tasks (work, family, etc.) - so that I can view tasks of the same type together.
priority
as a user i can categorise my tasks work family etc so that i can view tasks of the same type together
1
123,250
4,859,278,536
IssuesEvent
2016-11-13 15:34:40
choderalab/yank
https://api.github.com/repos/choderalab/yank
opened
SMILES with uncertain stereochemistry
enhancement Priority low
Our SMILES-based setup pipeline will fail if the stereochemistry is not specified for molecules with chiral atoms or bonds. We should probably either: * Require users specify molecules with certain stereochemistry and issue a failure quickly with a clear error message about how to correct this. We can use the [OpenEye stereochemistry perception](https://docs.eyesopen.com/toolkits/python/oechemtk/stereochemistry.html) here. * Automatically expand stereochemically uncertain molecules as a sort of `!Combinatorial`, maybe if `expand_stereochemistry: yes` option is enabled for the `molecules:` description for that molecule. For now, the only major issue is that sometimes we don't get the current error message and YANK hangs. cc: https://github.com/choderalab/yank-examples/pull/31#issuecomment-260192892
1.0
SMILES with uncertain stereochemistry - Our SMILES-based setup pipeline will fail if the stereochemistry is not specified for molecules with chiral atoms or bonds. We should probably either: * Require users specify molecules with certain stereochemistry and issue a failure quickly with a clear error message about how to correct this. We can use the [OpenEye stereochemistry perception](https://docs.eyesopen.com/toolkits/python/oechemtk/stereochemistry.html) here. * Automatically expand stereochemically uncertain molecules as a sort of `!Combinatorial`, maybe if `expand_stereochemistry: yes` option is enabled for the `molecules:` description for that molecule. For now, the only major issue is that sometimes we don't get the current error message and YANK hangs. cc: https://github.com/choderalab/yank-examples/pull/31#issuecomment-260192892
priority
smiles with uncertain stereochemistry our smiles based setup pipeline will fail if the stereochemistry is not specified for molecules with chiral atoms or bonds we should probably either require users specify molecules with certain stereochemistry and issue a failure quickly with a clear error message about how to correct this we can use the here automatically expand stereochemically uncertain molecules as a sort of combinatorial maybe if expand stereochemistry yes option is enabled for the molecules description for that molecule for now the only major issue is that sometimes we don t get the current error message and yank hangs cc
1
290,248
8,883,652,465
IssuesEvent
2019-01-14 16:10:25
vuejs/rollup-plugin-vue
https://api.github.com/repos/vuejs/rollup-plugin-vue
closed
Add documentation for vue-template-compiler options
Priority: Low Status: Available Type: Maintenance
I used the compileOptions -> modules, I basically copied it from vue-loader. Do you maybe want to add it to the documentation?
1.0
Add documentation for vue-template-compiler options - I used the compileOptions -> modules, I basically copied it from vue-loader. Do you maybe want to add it to the documentation?
priority
add documentation for vue template compiler options i used the compileoptions modules i basically copied it from vue loader do you maybe want to add it to the documentation
1
413,289
12,064,499,749
IssuesEvent
2020-04-16 08:23:50
minetest/minetest
https://api.github.com/repos/minetest/minetest
closed
testStreamRead and testBufReader failures
Bug Low priority
Is this a real bug or just because the values were set to 53.53467f and tested as 53.534f? ``` Test assertion failed: readF1000(is) == 53.534f at test_serialization.cpp:305 [FAIL] testStreamRead - 0ms Test assertion failed: buf.getF1000() == 53.534f at test_serialization.cpp:472 [FAIL] testBufReader - 0ms ``` Minetest 0.4.13-dev-f9a9038-dirty gentoo linux gcc (Gentoo 4.9.3 p1.3, pie-0.6.3) 4.9.3
1.0
testStreamRead and testBufReader failures - Is this a real bug or just because the values were set to 53.53467f and tested as 53.534f? ``` Test assertion failed: readF1000(is) == 53.534f at test_serialization.cpp:305 [FAIL] testStreamRead - 0ms Test assertion failed: buf.getF1000() == 53.534f at test_serialization.cpp:472 [FAIL] testBufReader - 0ms ``` Minetest 0.4.13-dev-f9a9038-dirty gentoo linux gcc (Gentoo 4.9.3 p1.3, pie-0.6.3) 4.9.3
priority
teststreamread and testbufreader failures is this a real bug or just because the values were set to and tested as test assertion failed is at test serialization cpp teststreamread test assertion failed buf at test serialization cpp testbufreader minetest dev dirty gentoo linux gcc gentoo pie
1
243,491
7,858,503,868
IssuesEvent
2018-06-21 14:06:19
alinaciuysal/OEDA
https://api.github.com/repos/alinaciuysal/OEDA
closed
Incoming data type selection using dropdown list
low priority
A small bug exists in successful & running experiment pages. Dropdown lists in these pages do not reflect the change in the incoming data type at initialization. However, plots are generated successfully. it's related with [(ngValue)] and [selected] attributes of Angular: [see](https://stackoverflow.com/questions/41299247/use-ngvalue-and-selected-in-select-tag)
1.0
Incoming data type selection using dropdown list - A small bug exists in successful & running experiment pages. Dropdown lists in these pages do not reflect the change in the incoming data type at initialization. However, plots are generated successfully. it's related with [(ngValue)] and [selected] attributes of Angular: [see](https://stackoverflow.com/questions/41299247/use-ngvalue-and-selected-in-select-tag)
priority
incoming data type selection using dropdown list a small bug exists in successful running experiment pages dropdown lists in these pages do not reflect the change in the incoming data type at initialization however plots are generated successfully it s related with and attributes of angular
1
655,149
21,678,611,860
IssuesEvent
2022-05-09 02:20:07
darwinia-network/apps
https://api.github.com/repos/darwinia-network/apps
closed
切换网络后,Staking里当前的Stash账户地址和Controller地址,未自动更新成对应地址格式
low-priority
<img width="1463" alt="image" src="https://user-images.githubusercontent.com/102211220/165881193-d3e50794-ba0c-4747-ab1f-6a19a3acd90c.png"> <img width="1468" alt="image" src="https://user-images.githubusercontent.com/102211220/165881266-d5cc3321-efb7-4bbe-bd01-575d7cade615.png">
1.0
切换网络后,Staking里当前的Stash账户地址和Controller地址,未自动更新成对应地址格式 - <img width="1463" alt="image" src="https://user-images.githubusercontent.com/102211220/165881193-d3e50794-ba0c-4747-ab1f-6a19a3acd90c.png"> <img width="1468" alt="image" src="https://user-images.githubusercontent.com/102211220/165881266-d5cc3321-efb7-4bbe-bd01-575d7cade615.png">
priority
切换网络后,staking里当前的stash账户地址和controller地址,未自动更新成对应地址格式 img width alt image src img width alt image src
1
737,273
25,509,285,609
IssuesEvent
2022-11-28 11:54:10
nanoframework/Home
https://api.github.com/repos/nanoframework/Home
closed
Failure to flash nanoframework onto ESP32 Lilygo
Type: Feature request Area: Tools Priority: Low
### Tool nanoff ### Description Trying to update ESP32 Lilygo module with nanoframework version 1.8.0.581 using nanoff version nanoff 2.4.2+c4df2f6716. Flashing device is successful but looking at ESP32 on TeraTerm shows a continual reboot cycle with the following debug output. ``` rst:0x7 (TG0WDT_SYS_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid heáets Jul 29 2019 12:21:46 rst:0x7 (TG0WDT_SYS_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid heáets Jul 29 2019 12:21:46 ``` ### How to reproduce 1. Download firmware version 1.8.0.581 from cloudsmith and unzip into a directory. 2. Flash device ```nanoff --update --target ESP32_LILYGO --serialport COMx --fwversion 1.8.0.581``` 3. Connect to device using serial terminal such as TeraTerm @ 115200 baud. 4. Observe boot messages. ### Expected behaviour Expect device to be ready to accept .NET application. ### Screenshots _No response_ ### Aditional context _No response_
1.0
Failure to flash nanoframework onto ESP32 Lilygo - ### Tool nanoff ### Description Trying to update ESP32 Lilygo module with nanoframework version 1.8.0.581 using nanoff version nanoff 2.4.2+c4df2f6716. Flashing device is successful but looking at ESP32 on TeraTerm shows a continual reboot cycle with the following debug output. ``` rst:0x7 (TG0WDT_SYS_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid heáets Jul 29 2019 12:21:46 rst:0x7 (TG0WDT_SYS_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT) invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid header: 0xffffffff invalid heáets Jul 29 2019 12:21:46 ``` ### How to reproduce 1. Download firmware version 1.8.0.581 from cloudsmith and unzip into a directory. 2. Flash device ```nanoff --update --target ESP32_LILYGO --serialport COMx --fwversion 1.8.0.581``` 3. Connect to device using serial terminal such as TeraTerm @ 115200 baud. 4. Observe boot messages. ### Expected behaviour Expect device to be ready to accept .NET application. ### Screenshots _No response_ ### Aditional context _No response_
priority
failure to flash nanoframework onto lilygo tool nanoff description trying to update lilygo module with nanoframework version using nanoff version nanoff flashing device is successful but looking at on teraterm shows a continual reboot cycle with the following debug output rst sys reset boot spi fast flash boot invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid heáets jul rst sys reset boot spi fast flash boot invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid header invalid heáets jul how to reproduce download firmware version from cloudsmith and unzip into a directory flash device nanoff update target lilygo serialport comx fwversion connect to device using serial terminal such as teraterm baud observe boot messages expected behaviour expect device to be ready to accept net application screenshots no response aditional context no response
1
349,653
10,471,565,185
IssuesEvent
2019-09-23 08:11:03
garden-io/garden
https://api.github.com/repos/garden-io/garden
reopened
Hot reload tasks wait for last batch of tasks to complete before running.
bug enhancement priority:low
## Bug Because the `TaskGraph` currently sequences `processTasks` calls (and thus implicitly batches the tasks that were in the graph at the time of that `processTasks` call), `HotReloadTask`s that are added while a `processTasks` batch is in progress aren't run until that batch is completed. This isn't really a bug, but a known limitation of our current implementation. ### Current Behavior When I run `garden dev --hot-reload=vote` inside the `vote` project: ``` $ g dev --hot-reload=vote Good evening! Let's get your environment wired up... ✔ local-kubernetes → Configured ✔ tiller → Installing... → Done (took 6 sec) ✔ kubernetes-dashboard → Checking status... → Version a50f2fd8be already deployed ✔ default-backend → Building default-backend:a50f2fd8be... → Done (took 0.3 sec) ✔ default-backend → Deploying version a50f2fd8be... → Done (took 3.8 sec) ✔ ingress-controller → Checking status... → Version a50f2fd8be already deployed ✔ tiller → Installing... → Done (took 6 sec) ✔ jworker → Building jworker:a50f2fd8be... → Done (took 0.5 sec) ✔ api → Building api:a50f2fd8be... → Done (took 0.5 sec) ✔ redis → Deploying version a50f2fd8be... → Done (took 4.2 sec) ✔ db → Deploying version a50f2fd8be... → Done (took 22.6 sec) ✔ api → Running unit tests → Success ✔ vote → Building vote:a50f2fd8be... → Done (took 0.5 sec) ✔ vote → Running unit tests → Success ✔ result → Building result:a50f2fd8be... → Done (took 0.5 sec) ✔ api → Deploying version a50f2fd8be... → Done (took 7.7 sec) ✔ result → Running integ tests → Success ✔ vote → Deploying version a50f2fd8be... → Done (took 7.1 sec) ✔ vote → Running integ tests → Success ✔ db-init → Running → Done (took 2.7 sec) ✔ result → Deploying version a50f2fd8be... → Done (took 4.3 sec) ✔ javaworker → Deploying version a50f2fd8be... → Done (took 4.1 sec) ✔ vote → Hot reloading... → Done (took 640 ms) ✔ vote → Building vote:a50f2fd8be-1549038542... 
→ Done (took 1.1 sec) # At this point, I changed a file, but the next hot reload task didn't start # running until after the the build, unit tests and integ tests here completed, # resulting in a subjective delay of several seconds until the hot reload happend. ✔ vote → Running unit tests → Success ✔ vote → Running integ tests → Success ✔ vote → Hot reloading... → Done (took 307 ms) ✔ vote → Building vote:a50f2fd8be-1549038548... → Done (took 1 sec) ✔ vote → Running integ tests → Success ✔ vote → Running unit tests → Success ``` ### Expected behavior Ideally, the second hot reload should have happened concurrently to the build, unit tests and integ tests. ### Suggested solution(s) I believe @edvald's working on a branch which would, among other things, resolve this, by changing the control flow for adding/processing/waiting on tasks. ### Your environment `garden version` `0.8.1`  (Checked out at commit hash `a50f2fd8bef1972d555bc9765fb096b1bb93311c`) `kubectl version` ``` Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} ``` `docker version` ```Client: Docker Engine - Community Version: 18.09.1 API version: 1.39 Go version: go1.10.6 Git commit: 4c52b90 Built: Wed Jan 9 19:33:12 2019 OS/Arch: darwin/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.10.6 Git commit: 4c52b90 Built: Wed Jan 9 19:41:49 2019 OS/Arch: linux/amd64 Experimental: true```
1.0
Hot reload tasks wait for last batch of tasks to complete before running. - ## Bug Because the `TaskGraph` currently sequences `processTasks` calls (and thus implicitly batches the tasks that were in the graph at the time of that `processTasks` call), `HotReloadTask`s that are added while a `processTasks` batch is in progress aren't run until that batch is completed. This isn't really a bug, but a known limitation of our current implementation. ### Current Behavior When I run `garden dev --hot-reload=vote` inside the `vote` project: ``` $ g dev --hot-reload=vote Good evening! Let's get your environment wired up... ✔ local-kubernetes → Configured ✔ tiller → Installing... → Done (took 6 sec) ✔ kubernetes-dashboard → Checking status... → Version a50f2fd8be already deployed ✔ default-backend → Building default-backend:a50f2fd8be... → Done (took 0.3 sec) ✔ default-backend → Deploying version a50f2fd8be... → Done (took 3.8 sec) ✔ ingress-controller → Checking status... → Version a50f2fd8be already deployed ✔ tiller → Installing... → Done (took 6 sec) ✔ jworker → Building jworker:a50f2fd8be... → Done (took 0.5 sec) ✔ api → Building api:a50f2fd8be... → Done (took 0.5 sec) ✔ redis → Deploying version a50f2fd8be... → Done (took 4.2 sec) ✔ db → Deploying version a50f2fd8be... → Done (took 22.6 sec) ✔ api → Running unit tests → Success ✔ vote → Building vote:a50f2fd8be... → Done (took 0.5 sec) ✔ vote → Running unit tests → Success ✔ result → Building result:a50f2fd8be... → Done (took 0.5 sec) ✔ api → Deploying version a50f2fd8be... → Done (took 7.7 sec) ✔ result → Running integ tests → Success ✔ vote → Deploying version a50f2fd8be... → Done (took 7.1 sec) ✔ vote → Running integ tests → Success ✔ db-init → Running → Done (took 2.7 sec) ✔ result → Deploying version a50f2fd8be... → Done (took 4.3 sec) ✔ javaworker → Deploying version a50f2fd8be... → Done (took 4.1 sec) ✔ vote → Hot reloading... → Done (took 640 ms) ✔ vote → Building vote:a50f2fd8be-1549038542... 
→ Done (took 1.1 sec) # At this point, I changed a file, but the next hot reload task didn't start # running until after the the build, unit tests and integ tests here completed, # resulting in a subjective delay of several seconds until the hot reload happend. ✔ vote → Running unit tests → Success ✔ vote → Running integ tests → Success ✔ vote → Hot reloading... → Done (took 307 ms) ✔ vote → Building vote:a50f2fd8be-1549038548... → Done (took 1 sec) ✔ vote → Running integ tests → Success ✔ vote → Running unit tests → Success ``` ### Expected behavior Ideally, the second hot reload should have happened concurrently to the build, unit tests and integ tests. ### Suggested solution(s) I believe @edvald's working on a branch which would, among other things, resolve this, by changing the control flow for adding/processing/waiting on tasks. ### Your environment `garden version` `0.8.1`  (Checked out at commit hash `a50f2fd8bef1972d555bc9765fb096b1bb93311c`) `kubectl version` ``` Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} ``` `docker version` ```Client: Docker Engine - Community Version: 18.09.1 API version: 1.39 Go version: go1.10.6 Git commit: 4c52b90 Built: Wed Jan 9 19:33:12 2019 OS/Arch: darwin/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.10.6 Git commit: 4c52b90 Built: Wed Jan 9 19:41:49 2019 OS/Arch: linux/amd64 Experimental: true```
priority
hot reload tasks wait for last batch of tasks to complete before running bug because the taskgraph currently sequences processtasks calls and thus implicitly batches the tasks that were in the graph at the time of that processtasks call hotreloadtask s that are added while a processtasks batch is in progress aren t run until that batch is completed this isn t really a bug but a known limitation of our current implementation current behavior when i run garden dev hot reload vote inside the vote project g dev hot reload vote good evening let s get your environment wired up ✔ local kubernetes → configured ✔ tiller → installing → done took sec ✔ kubernetes dashboard → checking status → version already deployed ✔ default backend → building default backend → done took sec ✔ default backend → deploying version → done took sec ✔ ingress controller → checking status → version already deployed ✔ tiller → installing → done took sec ✔ jworker → building jworker → done took sec ✔ api → building api → done took sec ✔ redis → deploying version → done took sec ✔ db → deploying version → done took sec ✔ api → running unit tests → success ✔ vote → building vote → done took sec ✔ vote → running unit tests → success ✔ result → building result → done took sec ✔ api → deploying version → done took sec ✔ result → running integ tests → success ✔ vote → deploying version → done took sec ✔ vote → running integ tests → success ✔ db init → running → done took sec ✔ result → deploying version → done took sec ✔ javaworker → deploying version → done took sec ✔ vote → hot reloading → done took ms ✔ vote → building vote → done took sec at this point i changed a file but the next hot reload task didn t start running until after the the build unit tests and integ tests here completed resulting in a subjective delay of several seconds until the hot reload happend ✔ vote → running unit tests → success ✔ vote → running integ tests → success ✔ vote → hot reloading → done took ms ✔ vote → building vote → 
done took sec ✔ vote → running integ tests → success ✔ vote → running unit tests → success expected behavior ideally the second hot reload should have happened concurrently to the build unit tests and integ tests suggested solution s i believe edvald s working on a branch which would among other things resolve this by changing the control flow for adding processing waiting on tasks your environment garden version   checked out at commit hash kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux docker version client docker engine community version api version go version git commit built wed jan os arch darwin experimental false server docker engine community engine version api version minimum version go version git commit built wed jan os arch linux experimental true
1
405,776
11,882,538,968
IssuesEvent
2020-03-27 14:33:42
ntop/ntopng
https://api.github.com/repos/ntop/ntopng
closed
Unexpected Shutdown & List 'name' has 0 rules
in progress low-priority bug
Feb 28 00:00:05 ntop ntopng[1722]: 28/Feb/2020 00:00:05 [MySQLDB.cpp:824] Attempting to connect to MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:00:05 ntop ntopng[1722]: 28/Feb/2020 00:00:05 [MySQLDB.cpp:850] Successfully connected to MySQL [root@localhost:3306] for interface tcp://127.0.0.1:5556 Feb 28 00:00:26 ntop ntopng[1722]: 28/Feb/2020 00:00:26 [main.cpp:50] Shutting down... Feb 28 00:00:26 ntop systemd[1]: Stopping ntopng high-speed web-based traffic monitoring and analysis tool... Feb 28 00:00:28 ntop ntopng[1722]: 28/Feb/2020 00:00:28 [Ntop.cpp:2370] Terminating periodic activities Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Cisco Talos Intelligence' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Cisco Talos Intelligence' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Emerging Threats' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Emerging Threats' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Feodo Tracker Botnet C2 IP Blocklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Feodo Tracker Botnet C2 IP Blocklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'NoCoin Filter List' has 0 rules. 
Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'NoCoin Filter List' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL Botnet C2 IP Blacklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL Botnet C2 IP Blacklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL JA3' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL JA3' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [Ntop.cpp:2376] Executing shutdown script Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [IPv4] 37.11 TB/48102.64 M Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [IPv6] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [ARP] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [MPLS] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [Other] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [Ntop.cpp:2359] Polling shut down [interface: tcp://127.0.0.1:5556] Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [Ntop.cpp:2393] Deleted PID /var/run/ntopng.pid: [rc: -1][Permission denied] Feb 28 00:00:53 ntop ntopng[1722]: 28/Feb/2020 00:00:53 [HTTPserver.cpp:1350] HTTP server terminated Feb 28 
00:00:53 ntop ntopng[1722]: 28/Feb/2020 00:00:53 [NetworkInterface.cpp:541] Flushing host contacts for interface tcp://127.0.0.1:5556 Feb 28 00:00:54 ntop ntopng[1722]: 28/Feb/2020 00:00:54 [NetworkInterface.cpp:2372] Cleanup interface tcp://127.0.0.1:5556 Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [AddressResolution.cpp:63] Address resolution stats [0 resolved][197 failures] Feb 28 00:01:03 ntop systemd[1]: Stopped ntopng high-speed web-based traffic monitoring and analysis tool.
1.0
Unexpected Shutdown & List 'name' has 0 rules - Feb 28 00:00:05 ntop ntopng[1722]: 28/Feb/2020 00:00:05 [MySQLDB.cpp:824] Attempting to connect to MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:00:05 ntop ntopng[1722]: 28/Feb/2020 00:00:05 [MySQLDB.cpp:850] Successfully connected to MySQL [root@localhost:3306] for interface tcp://127.0.0.1:5556 Feb 28 00:00:26 ntop ntopng[1722]: 28/Feb/2020 00:00:26 [main.cpp:50] Shutting down... Feb 28 00:00:26 ntop systemd[1]: Stopping ntopng high-speed web-based traffic monitoring and analysis tool... Feb 28 00:00:28 ntop ntopng[1722]: 28/Feb/2020 00:00:28 [Ntop.cpp:2370] Terminating periodic activities Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Cisco Talos Intelligence' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Cisco Talos Intelligence' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Emerging Threats' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Emerging Threats' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Feodo Tracker Botnet C2 IP Blocklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'Feodo Tracker Botnet C2 IP Blocklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'NoCoin Filter List' has 0 rules. 
Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'NoCoin Filter List' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL Botnet C2 IP Blacklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL Botnet C2 IP Blacklist' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL JA3' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [housekeeping.lua:32] [lists_utils.lua:584] WARNING: List 'SSLBL JA3' has 0 rules. Please report this to https://github.com/ntop/ntopng Feb 28 00:00:50 ntop ntopng[1722]: 28/Feb/2020 00:00:50 [Ntop.cpp:2376] Executing shutdown script Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [IPv4] 37.11 TB/48102.64 M Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [IPv6] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [ARP] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [MPLS] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [ProtoStats.cpp:35] [Other] 0 B/0.00 Packets Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [Ntop.cpp:2359] Polling shut down [interface: tcp://127.0.0.1:5556] Feb 28 00:00:51 ntop ntopng[1722]: 28/Feb/2020 00:00:51 [Ntop.cpp:2393] Deleted PID /var/run/ntopng.pid: [rc: -1][Permission denied] Feb 28 00:00:53 ntop ntopng[1722]: 28/Feb/2020 00:00:53 [HTTPserver.cpp:1350] HTTP server terminated Feb 28 
00:00:53 ntop ntopng[1722]: 28/Feb/2020 00:00:53 [NetworkInterface.cpp:541] Flushing host contacts for interface tcp://127.0.0.1:5556 Feb 28 00:00:54 ntop ntopng[1722]: 28/Feb/2020 00:00:54 [NetworkInterface.cpp:2372] Cleanup interface tcp://127.0.0.1:5556 Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [MySQLDB.cpp:744] Disconnected from MySQL for interface tcp://127.0.0.1:5556... Feb 28 00:01:03 ntop ntopng[1722]: 28/Feb/2020 00:01:03 [AddressResolution.cpp:63] Address resolution stats [0 resolved][197 failures] Feb 28 00:01:03 ntop systemd[1]: Stopped ntopng high-speed web-based traffic monitoring and analysis tool.
priority
unexpected shutdown list name has rules feb ntop ntopng feb attempting to connect to mysql for interface tcp feb ntop ntopng feb successfully connected to mysql for interface tcp feb ntop ntopng feb shutting down feb ntop systemd stopping ntopng high speed web based traffic monitoring and analysis tool feb ntop ntopng feb terminating periodic activities feb ntop ntopng warning list cisco talos intelligence has rules please report this to feb ntop ntopng feb warning list cisco talos intelligence has rules please report this to feb ntop ntopng warning list emerging threats has rules please report this to feb ntop ntopng feb warning list emerging threats has rules please report this to feb ntop ntopng warning list feodo tracker botnet ip blocklist has rules please report this to feb ntop ntopng feb warning list feodo tracker botnet ip blocklist has rules please report this to feb ntop ntopng warning list nocoin filter list has rules please report this to feb ntop ntopng feb warning list nocoin filter list has rules please report this to feb ntop ntopng warning list sslbl botnet ip blacklist has rules please report this to feb ntop ntopng feb warning list sslbl botnet ip blacklist has rules please report this to feb ntop ntopng warning list sslbl has rules please report this to feb ntop ntopng feb warning list sslbl has rules please report this to feb ntop ntopng feb executing shutdown script feb ntop ntopng feb tb m packets feb ntop ntopng feb b packets feb ntop ntopng feb b packets feb ntop ntopng feb b packets feb ntop ntopng feb b packets feb ntop ntopng feb polling shut down feb ntop ntopng feb deleted pid var run ntopng pid feb ntop ntopng feb http server terminated feb ntop ntopng feb flushing host contacts for interface tcp feb ntop ntopng feb cleanup interface tcp feb ntop ntopng feb disconnected from mysql for interface tcp feb ntop ntopng feb disconnected from mysql for interface tcp feb ntop ntopng feb disconnected from mysql for interface tcp feb ntop 
ntopng feb address resolution stats feb ntop systemd stopped ntopng high speed web based traffic monitoring and analysis tool
1
270,802
8,470,414,346
IssuesEvent
2018-10-24 04:09:16
medic/medic-webapp
https://api.github.com/repos/medic/medic-webapp
opened
Warn if uploading configuration will overwrite someone else's changes
Configuration Priority: 3 - Low Status: 1 - Triaged Type: Improvement medic-conf
Technical users use medic-conf to make configuration changes to instances and hopefully remember to commit the changes to git to track, back up, and share their work. This can overwrite other configuration changes if... 1. a user has made changes through the admin app, 2. a user has made changes directly to the database, 3. a user made changes with medic-conf but neglected to commit their changes, or 4. a user forgets to pull updates from the repo before making their changes. In any of these cases the configuration will be overwritten. Instead medic-conf should warn when the configuration is different from when it was last executed. The user should be given the option to overwrite the changes (`force`) or update their configuration to be in sync (eg: `git stash`, export configuration, `git stash pop`).
1.0
Warn if uploading configuration will overwrite someone else's changes - Technical users use medic-conf to make configuration changes to instances and hopefully remember to commit the changes to git to track, back up, and share their work. This can overwrite other configuration changes if... 1. a user has made changes through the admin app, 2. a user has made changes directly to the database, 3. a user made changes with medic-conf but neglected to commit their changes, or 4. a user forgets to pull updates from the repo before making their changes. In any of these cases the configuration will be overwritten. Instead medic-conf should warn when the configuration is different from when it was last executed. The user should be given the option to overwrite the changes (`force`) or update their configuration to be in sync (eg: `git stash`, export configuration, `git stash pop`).
priority
warn if uploading configuration will overwrite someone elses changes technical users use medic conf to make configuration changes to instances and hopefully remember to commit the changes to git to track back up and share their work this can overwrite other configuration changes if a user has made changes through the admin app a user has made changes directly to the database a user made changes with medic conf but neglected to commit their changes or a user forgets to pull updates from the repo before making their changes in any of these cases the configuration will be overwritten instead medic conf should warn when the configuration is different from when it was last executed the user should be given the option to overwrite the changes force or update their configuration to be in sync eg git stash export configuration git stash pop
1
720,753
24,805,365,106
IssuesEvent
2022-10-25 03:39:55
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
it8xxx2_evb: The testcase tests/kernel/sleep/ failed to run.
bug priority: low platform: ITE
**Describe the bug** The testcase tests/kernel/sleep/ failed to run on the it8xxx2_evb **To Reproduce** twister -p it8xxx2_evb --device-testing --west-flash="../../itetool/loader.sh" --device-serial=/dev/ttyUSB0 -T tests/kernel/sleep/ -v **Logs and console output** ``` ***** delaying boot 1ms (per build configuration) ***** *** Booting Zephyr OS build zephyr-v3.2.0-404-ge852247de33f (delayed boot 1ms) *** Running TESTSUITE sleep =================================================================== START - test_sleep Kernel objects initialized Test thread started: id = 0x80101108 Helper thread started: id = 0x80101060 Testing normal expiration of k_sleep() Testing: test thread sleep + helper thread wakeup test Testing: test thread sleep + isr offload wakeup test Testing: test thread sleep + main wakeup test thread Testing kernel k_sleep() PASS - test_sleep in 2.035 seconds =================================================================== START - test_sleep_forever Kernel objects initialized PASS - test_sleep_forever in 0.003 seconds =================================================================== START - test_usleep elapsed_ms = 2167 Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/sleep/src/usleep.c:95: sleep_test_usleep: (elapsed_ms <= UPPER_BOUND_MS is false) overslept FAIL - test_usleep in 21.683 seconds =================================================================== TESTSUITE sleep failed. 
------ TESTSUITE SUMMARY START ------ SUITE FAIL - 66.67% [sleep]: pass = 2, fail = 1, skip = 0, total = 3 duration = 23.721 seconds - PASS - [sleep.test_sleep] duration = 2.035 seconds - PASS - [sleep.test_sleep_forever] duration = 0.003 seconds - FAIL - [sleep.test_usleep] duration = 21.683 seconds ------ TESTSUITE SUMMARY END ------ =================================================================== RunID: 9c955129b21eb68ec9b0d6d00d7d91ee PROJECT EXECUTION FAILED ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr SDK 0.15.0 - Commit: b663008d0c03c025191bc67956feaa57
1.0
it8xxx2_evb: The testcase tests/kernel/sleep/failed to run. - **Describe the bug** The testcase tests/kernel/sleep/ failed to run on the it8xxx2_evb **To Reproduce** twister -p it8xxx2_evb --device-testing --west-flash="../../itetool/loader.sh" --device-serial=/dev/ttyUSB0 -T tests/kernel/sleep/ -v **Logs and console output** ``` ***** delaying boot 1ms (per build configuration) ***** *** Booting Zephyr OS build zephyr-v3.2.0-404-ge852247de33f (delayed boot 1ms) *** Running TESTSUITE sleep =================================================================== START - test_sleep Kernel objects initialized Test thread started: id = 0x80101108 Helper thread started: id = 0x80101060 Testing normal expiration of k_sleep() Testing: test thread sleep + helper thread wakeup test Testing: test thread sleep + isr offload wakeup test Testing: test thread sleep + main wakeup test thread Testing kernel k_sleep() PASS - test_sleep in 2.035 seconds =================================================================== START - test_sleep_forever Kernel objects initialized PASS - test_sleep_forever in 0.003 seconds =================================================================== START - test_usleep elapsed_ms = 2167 Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/sleep/src/usleep.c:95: sleep_test_usleep: (elapsed_ms <= UPPER_BOUND_MS is false) overslept FAIL - test_usleep in 21.683 seconds =================================================================== TESTSUITE sleep failed. 
------ TESTSUITE SUMMARY START ------ SUITE FAIL - 66.67% [sleep]: pass = 2, fail = 1, skip = 0, total = 3 duration = 23.721 seconds - PASS - [sleep.test_sleep] duration = 2.035 seconds - PASS - [sleep.test_sleep_forever] duration = 0.003 seconds - FAIL - [sleep.test_usleep] duration = 21.683 seconds ------ TESTSUITE SUMMARY END ------ =================================================================== RunID: 9c955129b21eb68ec9b0d6d00d7d91ee PROJECT EXECUTION FAILED ``` **Environment (please complete the following information):** - OS: Linux - Toolchain: Zephyr SDK 0.15.0 - Commit: b663008d0c03c025191bc67956feaa57
priority
evb the testcase tests kernel sleep failed to run describe the bug the testcase tests kernel sleep failed to run on the evb to reproduce twister p evb device testing west flash itetool loader sh device serial dev t tests kernel sleep v logs and console output delaying boot per build configuration booting zephyr os build zephyr delayed boot running testsuite sleep start test sleep kernel objects initialized test thread started id helper thread started id testing normal expiration of k sleep testing test thread sleep helper thread wakeup test testing test thread sleep isr offload wakeup test testing test thread sleep main wakeup test thread testing kernel k sleep pass test sleep in seconds start test sleep forever kernel objects initialized pass test sleep forever in seconds start test usleep elapsed ms assertion failed at west topdir zephyr tests kernel sleep src usleep c sleep test usleep elapsed ms upper bound ms is false overslept fail test usleep in seconds testsuite sleep failed testsuite summary start suite fail pass fail skip total duration seconds pass duration seconds pass duration seconds fail duration seconds testsuite summary end runid project execution failed environment please complete the following information os linux toolchain zephyr sdk commit
1
727,300
25,030,379,962
IssuesEvent
2022-11-04 11:50:15
YangCatalog/sdo_analysis
https://api.github.com/repos/YangCatalog/sdo_analysis
closed
`check_archived_drafts.py` attempting to use nonexistent directory
bug Priority: Low
The `check_archived_drafts.py` script constructs its output path like [this](https://github.com/YangCatalog/sdo_analysis/blob/ebfcc6c24251de4ef5ad9bd2f218a6ab2fa64a09/bin/check_archived_drafts.py#L64): ```python3 all_yang_path = os.path.join(temp_dir, 'YANG-ALL') ``` Usually this will resolve to `/var/yang/tmp/YANG-ALL`. Such a directory doesn't ever seem to be created. This doesn't stop the script from functioning, but it floods the log file with messages that look like this ``` [Errno 2] No such file or directory: '/var/yang/tmp/YANG-ALL/ietf-isis-remaining-lifetime@2020-05-06.yang' ``` The directory should be created to avoid the errors and removed at the end of the script run.
1.0
`check_archived_drafts.py` attempting to use nonexistent directory - The `check_archived_drafts.py` script constructs its output path like [this](https://github.com/YangCatalog/sdo_analysis/blob/ebfcc6c24251de4ef5ad9bd2f218a6ab2fa64a09/bin/check_archived_drafts.py#L64): ```python3 all_yang_path = os.path.join(temp_dir, 'YANG-ALL') ``` Usually this will resolve to `/var/yang/tmp/YANG-ALL`. Such a directory doesn't ever seem to be created. This doesn't stop the script from functioning, but it floods the log file with messages that look like this ``` [Errno 2] No such file or directory: '/var/yang/tmp/YANG-ALL/ietf-isis-remaining-lifetime@2020-05-06.yang' ``` The directory should be created to avoid the errors and removed at the end of the script run.
priority
check archived drafts py attempting to use nonexistent directory the check archived drafts py script constructs its output path like all yang path os path join temp dir yang all usually this will resolve to var yang tmp yang all such a directory doesn t ever seem to be created this doesn t stop the script from functioning but it floods the log file with messages that look like this no such file or directory var yang tmp yang all ietf isis remaining lifetime yang the directory should be created to avoid the errors and removed at the end of the script run
1
122,281
4,833,059,243
IssuesEvent
2016-11-08 09:46:19
MiT-HEP/ChargedHiggs
https://api.github.com/repos/MiT-HEP/ChargedHiggs
opened
CombineTools
low priority
Make a standardized set of tools to write datacards, and use them in the two scripts.
1.0
CombineTools - Make a standardized set of tools to write datacards, and use them in the two scripts.
priority
combinetools make a standardized set of tools to write datacards and use them in the two scripts
1
772,109
27,106,637,451
IssuesEvent
2023-02-15 12:36:32
conan-io/conan
https://api.github.com/repos/conan-io/conan
closed
[bug] Symbolic links are not properly copied when importing on Linux
type: bug good first issue stage: queue priority: medium complex: low component: ux
### Environment Details (include every applicable attribute) * Operating System+version: Ubuntu 19.10 * Compiler+version: GCC 9.2.1 * Conan version: Conan 1.21.0 * Python version: Python 3.7.5 ### Steps to reproduce (Include if Applicable) `conanfile.txt`: ``` [requires] expat/2.2.8 [imports] bin, *.dll -> . lib, *.dylib -> . lib, *.so -> . ``` ### Actual Behaviour After using `conan install .`, I got these in my `~/.conan/data/expat/2.2.8/_/_/package/6af9cc7cb931c5ad942174fd7838eb655717c709/lib`: libexpat.so libexpat.so.1 libexpat.so.1.6.10 And the result of `stat` is: ``` File: libexpat.so -> libexpat.so.1 Size: 13 Blocks: 0 IO Block: 4096 symbolic link Device: 802h/2050d Inode: 666147 Links: 1 File: libexpat.so.1 -> libexpat.so.1.6.10 Size: 18 Blocks: 0 IO Block: 4096 symbolic link Device: 802h/2050d Inode: 666148 Links: 1 ``` When importing, those symbolic links with relative paths are directly copied into project dir, and they became invalid, failing the build process. ### Expected Behaviour Source file of symbolic links should be copied and imported instead of link itself. 
### Logs (Executed commands with output) (Include/Attach if Applicable) Command: `conan install .` ``` expat/2.2.8: Downloaded package revision 0 conanfile.txt: Generator txt created conanbuildinfo.txt conanfile.txt: Generated conaninfo.txt conanfile.txt: Generated graphinfo conanfile.txt imports(): Copied 1 '.so' file: libexpat.so Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/conans/client/command.py", line 1947, in run method(args[0][1:]) File "/usr/local/lib/python3.7/dist-packages/conans/client/command.py", line 481, in install lockfile=args.lockfile) File "/usr/local/lib/python3.7/dist-packages/conans/client/conan_api.py", line 81, in wrapper return f(api, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/conans/client/conan_api.py", line 565, in install recorder=recorder) File "/usr/local/lib/python3.7/dist-packages/conans/client/manager.py", line 100, in deps_install run_imports(conanfile, install_folder) File "/usr/local/lib/python3.7/dist-packages/conans/client/importer.py", line 86, in run_imports _report_save_manifest(copied_files, import_output, dest_folder, IMPORTS_MANIFESTS) File "/usr/local/lib/python3.7/dist-packages/conans/client/importer.py", line 61, in _report_save_manifest file_dict[f] = md5sum(abs_path) File "/usr/local/lib/python3.7/dist-packages/conans/util/files.py", line 136, in md5sum return _generic_algorithm_sum(file_path, "md5") File "/usr/local/lib/python3.7/dist-packages/conans/util/files.py", line 149, in _generic_algorithm_sum with open(file_path, 'rb') as fh: FileNotFoundError: [Errno 2] No such file or directory: '/home/charliejiang/temp/libexpat.so' ```
1.0
[bug] Symbolic links are not properly copied when importing on Linux - ### Environment Details (include every applicable attribute) * Operating System+version: Ubuntu 19.10 * Compiler+version: GCC 9.2.1 * Conan version: Conan 1.21.0 * Python version: Python 3.7.5 ### Steps to reproduce (Include if Applicable) `conanfile.txt`: ``` [requires] expat/2.2.8 [imports] bin, *.dll -> . lib, *.dylib -> . lib, *.so -> . ``` ### Actual Behaviour After using `conan install .`, I got these in my `~/.conan/data/expat/2.2.8/_/_/package/6af9cc7cb931c5ad942174fd7838eb655717c709/lib`: libexpat.so libexpat.so.1 libexpat.so.1.6.10 And the result of `stat` is: ``` File: libexpat.so -> libexpat.so.1 Size: 13 Blocks: 0 IO Block: 4096 symbolic link Device: 802h/2050d Inode: 666147 Links: 1 File: libexpat.so.1 -> libexpat.so.1.6.10 Size: 18 Blocks: 0 IO Block: 4096 symbolic link Device: 802h/2050d Inode: 666148 Links: 1 ``` When importing, those symbolic links with relative paths are directly copied into project dir, and they became invalid, failing the build process. ### Expected Behaviour Source file of symbolic links should be copied and imported instead of link itself. 
### Logs (Executed commands with output) (Include/Attach if Applicable) Command: `conan install .` ``` expat/2.2.8: Downloaded package revision 0 conanfile.txt: Generator txt created conanbuildinfo.txt conanfile.txt: Generated conaninfo.txt conanfile.txt: Generated graphinfo conanfile.txt imports(): Copied 1 '.so' file: libexpat.so Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/conans/client/command.py", line 1947, in run method(args[0][1:]) File "/usr/local/lib/python3.7/dist-packages/conans/client/command.py", line 481, in install lockfile=args.lockfile) File "/usr/local/lib/python3.7/dist-packages/conans/client/conan_api.py", line 81, in wrapper return f(api, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/conans/client/conan_api.py", line 565, in install recorder=recorder) File "/usr/local/lib/python3.7/dist-packages/conans/client/manager.py", line 100, in deps_install run_imports(conanfile, install_folder) File "/usr/local/lib/python3.7/dist-packages/conans/client/importer.py", line 86, in run_imports _report_save_manifest(copied_files, import_output, dest_folder, IMPORTS_MANIFESTS) File "/usr/local/lib/python3.7/dist-packages/conans/client/importer.py", line 61, in _report_save_manifest file_dict[f] = md5sum(abs_path) File "/usr/local/lib/python3.7/dist-packages/conans/util/files.py", line 136, in md5sum return _generic_algorithm_sum(file_path, "md5") File "/usr/local/lib/python3.7/dist-packages/conans/util/files.py", line 149, in _generic_algorithm_sum with open(file_path, 'rb') as fh: FileNotFoundError: [Errno 2] No such file or directory: '/home/charliejiang/temp/libexpat.so' ```
priority
symbolic links are not properly copied when importing on linux environment details include every applicable attribute operating system version ubuntu compiler version gcc conan version conan python version python steps to reproduce include if applicable conanfile txt expat bin dll lib dylib lib so actual behaviour after using conan install i got these in my conan data expat package lib libexpat so libexpat so libexpat so and the result of stat is file libexpat so libexpat so size blocks io block symbolic link device inode links file libexpat so libexpat so size blocks io block symbolic link device inode links when importing those symbolic links with relative paths are directly copied into project dir and they became invalid failing the build process expected behaviour source file of symbolic links should be copied and imported instead of link itself logs executed commands with output include attach if applicable command conan install expat downloaded package revision conanfile txt generator txt created conanbuildinfo txt conanfile txt generated conaninfo txt conanfile txt generated graphinfo conanfile txt imports copied so file libexpat so traceback most recent call last file usr local lib dist packages conans client command py line in run method args file usr local lib dist packages conans client command py line in install lockfile args lockfile file usr local lib dist packages conans client conan api py line in wrapper return f api args kwargs file usr local lib dist packages conans client conan api py line in install recorder recorder file usr local lib dist packages conans client manager py line in deps install run imports conanfile install folder file usr local lib dist packages conans client importer py line in run imports report save manifest copied files import output dest folder imports manifests file usr local lib dist packages conans client importer py line in report save manifest file dict abs path file usr local lib dist packages conans util files py 
line in return generic algorithm sum file path file usr local lib dist packages conans util files py line in generic algorithm sum with open file path rb as fh filenotfounderror no such file or directory home charliejiang temp libexpat so
1
689,934
23,640,840,622
IssuesEvent
2022-08-25 16:54:18
WordPress/openverse-frontend
https://api.github.com/repos/WordPress/openverse-frontend
opened
Playwright logs are too verbose
🟩 priority: low 🛠 goal: fix 🤖 aspect: dx
## Description <!-- Concisely describe the bug. Compare your experience with what you expected to happen. --> <!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." --> Currently, it is very difficult to read through the CI Playwright logs because they are too verbose. 1. There are talkback proxy logs for each tape that is found. Unfortunately, it is not possible to only log the not-found tapes, so all of the tapes are being logged. It is much easier to run the tests with `update_tapes` set to true to see which tapes were added than it is to see which tapes are missing from the logs. We should set `silent: true` in the talkback options. 2. There are lots of warnings about authentication requests failing. I'm not sure what the best way of silencing them is, but they take up a lot of vertical space in the CI logs. <details> <summary> Request exception </summary> ``` [WebServer] playwright_1 | WARN $sentry.captureException() called, but Sentry plugin is disabled. Arguments: [ playwright_1 | [AxiosError: Unable to retrieve API token. 
Request failed with status code 401] { playwright_1 | code: 'ERR_BAD_REQUEST', playwright_1 | config: { playwright_1 | transitional: [Object], playwright_1 | adapter: [Function: httpAdapter], playwright_1 | transformRequest: [Array], playwright_1 | transformResponse: [Array], playwright_1 | timeout: 30000, playwright_1 | xsrfCookieName: 'XSRF-TOKEN', playwright_1 | xsrfHeaderName: 'X-XSRF-TOKEN', playwright_1 | maxContentLength: -1, playwright_1 | maxBodyLength: -1, playwright_1 | env: [Object], playwright_1 | validateStatus: [Function: validateStatus], playwright_1 | headers: [Object], playwright_1 | baseURL: 'http://localhost:49153/v1/', playwright_1 | method: 'post', playwright_1 | url: 'auth_tokens/token/', playwright_1 | data: 'client_id=<xxx>&client_secret=<xxx>&grant_type=client_credentials' playwright_1 | }, playwright_1 | request: ClientRequest { playwright_1 | _events: [Object: null prototype], playwright_1 | _eventsCount: 7, playwright_1 | _maxListeners: undefined, playwright_1 | outputData: [], playwright_1 | outputSize: 0, playwright_1 | writable: true, playwright_1 | destroyed: false, playwright_1 | _last: true, playwright_1 | chunkedEncoding: false, playwright_1 | shouldKeepAlive: false, playwright_1 | maxRequestsOnConnectionReached: false, playwright_1 | _defaultKeepAlive: true, playwright_1 | useChunkedEncodingByDefault: true, playwright_1 | sendDate: false, playwright_1 | _removedConnection: false, playwright_1 | _removedContLen: false, playwright_1 | _removedTE: false, playwright_1 | _contentLength: null, playwright_1 | _hasBody: true, playwright_1 | _trailer: '', playwright_1 | finished: true, playwright_1 | _headerSent: true, playwright_1 | _closed: false, playwright_1 | socket: [Socket], playwright_1 | _header: 'POST /v1/auth_tokens/token/ HTTP/1.1\r\n' + playwright_1 | 'Accept: application/json, text/plain, */*\r\n' + playwright_1 | 'Content-Type: application/x-www-form-urlencoded;charset=utf-8\r\n' + playwright_1 | 'User-Agent: 
axios/0.27.2\r\n' + playwright_1 | 'Content-Length: 223\r\n' + playwright_1 | 'Host: localhost:49153\r\n' + playwright_1 | 'Connection: close\r\n' + playwright_1 | '\r\n', playwright_1 | _keepAliveTimeout: 0, playwright_1 | _onPendingData: [Function: nop], playwright_1 | agent: [Agent], playwright_1 | socketPath: undefined, playwright_1 | method: 'POST', playwright_1 | maxHeaderSize: undefined, playwright_1 | insecureHTTPParser: undefined, playwright_1 | path: '/v1/auth_tokens/token/', playwright_1 | _ended: true, playwright_1 | res: [IncomingMessage], playwright_1 | aborted: false, playwright_1 | timeoutCb: null, playwright_1 | upgradeOrConnect: false, playwright_1 | parser: null, playwright_1 | maxHeadersCount: null, playwright_1 | reusedSocket: false, playwright_1 | host: 'localhost', playwright_1 | protocol: 'http:', playwright_1 | _redirectable: [Writable], playwright_1 | [Symbol(kCapture)]: false, playwright_1 | [Symbol(kNeedDrain)]: false, playwright_1 | [Symbol(corked)]: 0, playwright_1 | [Symbol(kOutHeaders)]: [Object: null prototype] playwright_1 | }, playwright_1 | response: { playwright_1 | status: 401, playwright_1 | statusText: 'Unauthorized', playwright_1 | headers: [Object], playwright_1 | config: [Object], playwright_1 | request: [ClientRequest], playwright_1 | data: [Object] playwright_1 | } playwright_1 | } playwright_1 | ] ``` </details> ## Reproduction <!-- Provide detailed steps to reproduce the bug. --> Look at the CI Playwright logs. ## Screenshots <!-- Add screenshots to show the problem; or delete the section entirely. --> ## Resolution <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in resolving this bug.
1.0
Playwright logs are too verbose - ## Description <!-- Concisely describe the bug. Compare your experience with what you expected to happen. --> <!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." --> Currently, it is very difficult to read through the CI Playwright logs because they are too verbose. 1. There are talkback proxy logs for each tape that is found. Unfortunatly, it is not possible to only log the not-found tapes, so all of the tapes are being logged. It is much easier to run the tests with `update_tapes` set to true to see which tapes were added than it is to see which tapes are missing from the logs. We should set `silent: true` in the talkback options. 2. There are lots of warnings about authentication requests failing. I'm not sure what the best way of silencing them is, but they take up a lot of vertical space in the CI logs. <details> <summary> Request exception </summary> ``` [WebServer] playwright_1 | WARN $sentry.captureException() called, but Sentry plugin is disabled. Arguments: [ playwright_1 | [AxiosError: Unable to retrieve API token. 
Request failed with status code 401] { playwright_1 | code: 'ERR_BAD_REQUEST', playwright_1 | config: { playwright_1 | transitional: [Object], playwright_1 | adapter: [Function: httpAdapter], playwright_1 | transformRequest: [Array], playwright_1 | transformResponse: [Array], playwright_1 | timeout: 30000, playwright_1 | xsrfCookieName: 'XSRF-TOKEN', playwright_1 | xsrfHeaderName: 'X-XSRF-TOKEN', playwright_1 | maxContentLength: -1, playwright_1 | maxBodyLength: -1, playwright_1 | env: [Object], playwright_1 | validateStatus: [Function: validateStatus], playwright_1 | headers: [Object], playwright_1 | baseURL: 'http://localhost:49153/v1/', playwright_1 | method: 'post', playwright_1 | url: 'auth_tokens/token/', playwright_1 | data: 'client_id=<xxx>&client_secret=<xxx>&grant_type=client_credentials' playwright_1 | }, playwright_1 | request: ClientRequest { playwright_1 | _events: [Object: null prototype], playwright_1 | _eventsCount: 7, playwright_1 | _maxListeners: undefined, playwright_1 | outputData: [], playwright_1 | outputSize: 0, playwright_1 | writable: true, playwright_1 | destroyed: false, playwright_1 | _last: true, playwright_1 | chunkedEncoding: false, playwright_1 | shouldKeepAlive: false, playwright_1 | maxRequestsOnConnectionReached: false, playwright_1 | _defaultKeepAlive: true, playwright_1 | useChunkedEncodingByDefault: true, playwright_1 | sendDate: false, playwright_1 | _removedConnection: false, playwright_1 | _removedContLen: false, playwright_1 | _removedTE: false, playwright_1 | _contentLength: null, playwright_1 | _hasBody: true, playwright_1 | _trailer: '', playwright_1 | finished: true, playwright_1 | _headerSent: true, playwright_1 | _closed: false, playwright_1 | socket: [Socket], playwright_1 | _header: 'POST /v1/auth_tokens/token/ HTTP/1.1\r\n' + playwright_1 | 'Accept: application/json, text/plain, */*\r\n' + playwright_1 | 'Content-Type: application/x-www-form-urlencoded;charset=utf-8\r\n' + playwright_1 | 'User-Agent: 
axios/0.27.2\r\n' + playwright_1 | 'Content-Length: 223\r\n' + playwright_1 | 'Host: localhost:49153\r\n' + playwright_1 | 'Connection: close\r\n' + playwright_1 | '\r\n', playwright_1 | _keepAliveTimeout: 0, playwright_1 | _onPendingData: [Function: nop], playwright_1 | agent: [Agent], playwright_1 | socketPath: undefined, playwright_1 | method: 'POST', playwright_1 | maxHeaderSize: undefined, playwright_1 | insecureHTTPParser: undefined, playwright_1 | path: '/v1/auth_tokens/token/', playwright_1 | _ended: true, playwright_1 | res: [IncomingMessage], playwright_1 | aborted: false, playwright_1 | timeoutCb: null, playwright_1 | upgradeOrConnect: false, playwright_1 | parser: null, playwright_1 | maxHeadersCount: null, playwright_1 | reusedSocket: false, playwright_1 | host: 'localhost', playwright_1 | protocol: 'http:', playwright_1 | _redirectable: [Writable], playwright_1 | [Symbol(kCapture)]: false, playwright_1 | [Symbol(kNeedDrain)]: false, playwright_1 | [Symbol(corked)]: 0, playwright_1 | [Symbol(kOutHeaders)]: [Object: null prototype] playwright_1 | }, playwright_1 | response: { playwright_1 | status: 401, playwright_1 | statusText: 'Unauthorized', playwright_1 | headers: [Object], playwright_1 | config: [Object], playwright_1 | request: [ClientRequest], playwright_1 | data: [Object] playwright_1 | } playwright_1 | } playwright_1 | ] ``` </details> ## Reproduction <!-- Provide detailed steps to reproduce the bug. --> Look at the CI Playwright logs. ## Screenshots <!-- Add screenshots to show the problem; or delete the section entirely. --> ## Resolution <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in resolving this bug.
priority
playwright logs are too verbose description currently it is very difficult to read through the ci playwright logs because they are too verbose there are talkback proxy logs for each tape that is found unfortunatly it is not possible to only log the not found tapes so all of the tapes are being logged it is much easier to run the tests with update tapes set to true to see which tapes were added than it is to see which tapes are missing from the logs we should set silent true in the talkback options there are lots of warnings about authentication requests failing i m not sure what the best way of silencing them is but they take up a lot of vertical space in the ci logs request exception playwright warn sentry captureexception called but sentry plugin is disabled arguments playwright playwright code err bad request playwright config playwright transitional playwright adapter playwright transformrequest playwright transformresponse playwright timeout playwright xsrfcookiename xsrf token playwright xsrfheadername x xsrf token playwright maxcontentlength playwright maxbodylength playwright env playwright validatestatus playwright headers playwright baseurl playwright method post playwright url auth tokens token playwright data client id client secret grant type client credentials playwright playwright request clientrequest playwright events playwright eventscount playwright maxlisteners undefined playwright outputdata playwright outputsize playwright writable true playwright destroyed false playwright last true playwright chunkedencoding false playwright shouldkeepalive false playwright maxrequestsonconnectionreached false playwright defaultkeepalive true playwright usechunkedencodingbydefault true playwright senddate false playwright removedconnection false playwright removedcontlen false playwright removedte false playwright contentlength null playwright hasbody true playwright trailer playwright finished true playwright headersent true playwright closed false 
playwright socket playwright header post auth tokens token http r n playwright accept application json text plain r n playwright content type application x www form urlencoded charset utf r n playwright user agent axios r n playwright content length r n playwright host localhost r n playwright connection close r n playwright r n playwright keepalivetimeout playwright onpendingdata playwright agent playwright socketpath undefined playwright method post playwright maxheadersize undefined playwright insecurehttpparser undefined playwright path auth tokens token playwright ended true playwright res playwright aborted false playwright timeoutcb null playwright upgradeorconnect false playwright parser null playwright maxheaderscount null playwright reusedsocket false playwright host localhost playwright protocol http playwright redirectable playwright false playwright false playwright playwright playwright playwright response playwright status playwright statustext unauthorized playwright headers playwright config playwright request playwright data playwright playwright playwright reproduction look at the ci playwright logs screenshots resolution 🙋 i would be interested in resolving this bug
1
81,392
3,590,450,271
IssuesEvent
2016-02-01 05:55:58
ESAPI/esapi-java-legacy
https://api.github.com/repos/ESAPI/esapi-java-legacy
closed
Patch for /trunk/src/main/java/org/owasp/esapi/reference/crypto/JavaEncryptor.java
Component-Encryptor imported Maintainability Milestone-Release2.1 OpSys-All Priority-Low Type-Patch
_From [noloa...@gmail.com](https://code.google.com/u/114558122492435650190/) on October 08, 2011 17:20:41_ Removed all those damn errant UTF-8 BOMs (EF BB BF) which made it impossible to format the source **Attachment:** [JavaEncryptor.java.patch](http://code.google.com/p/owasp-esapi-java/issues/detail?id=247) _Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=247_
1.0
Patch for /trunk/src/main/java/org/owasp/esapi/reference/crypto/JavaEncryptor.java - _From [noloa...@gmail.com](https://code.google.com/u/114558122492435650190/) on October 08, 2011 17:20:41_ Removed all those damn errant UTF-8 BOMs (EF BB BF) which made it impossible to format the source **Attachment:** [JavaEncryptor.java.patch](http://code.google.com/p/owasp-esapi-java/issues/detail?id=247) _Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=247_
priority
patch for trunk src main java org owasp esapi reference crypto javaencryptor java from on october removed all those damn errant utf boms ef bb bf which made it impossible to format the source attachment original issue
1
501,008
14,518,419,746
IssuesEvent
2020-12-13 23:34:31
godaddy-wordpress/coblocks
https://api.github.com/repos/godaddy-wordpress/coblocks
opened
Cover block - style should not apply when there's no image selected
[Priority] Low [Type] Bug
### Describe the bug: ISBAT see what I'm doing. When you apply top or bottom wave to the cover block and you still didn't select an image, the block is covered by the wave. It also happens in WP.com, and it's specially problematic when you use full width, as the block is almost covered. ### To reproduce: <!-- Steps to reproduce the behavior: --> #### - Hosted WP 1. Install CoBlocks 1. Add a cover block in any page or post 1. Select a wave style in the right sidebar #### - WP.com 1. Add a cover block in any page or post 1. Select a wave style in the right sidebar ### Expected behavior: Don't apply style until you have an image selected OR Don't allow to select a style until you have selected an image ### Screenshots: ###### Original ![original](https://user-images.githubusercontent.com/254308/102027334-a0b34980-3da3-11eb-9aaf-b85351ea15a1.png) ###### Hosted WP ![top wave hosted](https://user-images.githubusercontent.com/254308/102027336-a0b34980-3da3-11eb-8bef-bf1c6b4cd01e.png) ![bottom wave hosted](https://user-images.githubusercontent.com/254308/102027333-a01ab300-3da3-11eb-8cde-1f6762ceb7ef.png) ###### WP.com ![top wave wpcom](https://user-images.githubusercontent.com/254308/102027331-9f821c80-3da3-11eb-826a-43e775ae1840.png) ![bottom wave wpcom](https://user-images.githubusercontent.com/254308/102027332-9f821c80-3da3-11eb-9c51-2af694bbaade.png) ###### Full width ![top wave full width](https://user-images.githubusercontent.com/254308/102027329-9e50ef80-3da3-11eb-9e07-6aebb7118a1b.png) ![bottom wave full width](https://user-images.githubusercontent.com/254308/102027330-9ee98600-3da3-11eb-9d49-999ae1e133ec.png) ### Isolating the problem: - [x] This bug happens with no other plugins activated - [x] This bug happens with a default WordPress theme active - [x] I can reproduce this bug consistently using the steps above - [x] It also happens in WordPress.com Related: https://github.com/WordPress/gutenberg/issues/23122 https://github.com/WordPress/gutenberg/issues/23198 
### WordPress version: 5.5, 5.5.x, 5.6, 5.6.x
1.0
Cover block - style should not apply when there's no image selected - ### Describe the bug: ISBAT see what I'm doing. When you apply top or bottom wave to the cover block and you still didn't select an image, the block is covered by the wave. It also happens in WP.com, and it's specially problematic when you use full width, as the block is almost covered. ### To reproduce: <!-- Steps to reproduce the behavior: --> #### - Hosted WP 1. Install CoBlocks 1. Add a cover block in any page or post 1. Select a wave style in the right sidebar #### - WP.com 1. Add a cover block in any page or post 1. Select a wave style in the right sidebar ### Expected behavior: Don't apply style until you have an image selected OR Don't allow to select a style until you have selected an image ### Screenshots: ###### Original ![original](https://user-images.githubusercontent.com/254308/102027334-a0b34980-3da3-11eb-9aaf-b85351ea15a1.png) ###### Hosted WP ![top wave hosted](https://user-images.githubusercontent.com/254308/102027336-a0b34980-3da3-11eb-8bef-bf1c6b4cd01e.png) ![bottom wave hosted](https://user-images.githubusercontent.com/254308/102027333-a01ab300-3da3-11eb-8cde-1f6762ceb7ef.png) ###### WP.com ![top wave wpcom](https://user-images.githubusercontent.com/254308/102027331-9f821c80-3da3-11eb-826a-43e775ae1840.png) ![bottom wave wpcom](https://user-images.githubusercontent.com/254308/102027332-9f821c80-3da3-11eb-9c51-2af694bbaade.png) ###### Full width ![top wave full width](https://user-images.githubusercontent.com/254308/102027329-9e50ef80-3da3-11eb-9e07-6aebb7118a1b.png) ![bottom wave full width](https://user-images.githubusercontent.com/254308/102027330-9ee98600-3da3-11eb-9d49-999ae1e133ec.png) ### Isolating the problem: - [x] This bug happens with no other plugins activated - [x] This bug happens with a default WordPress theme active - [x] I can reproduce this bug consistently using the steps above - [x] It also happens in WordPress.com Related: 
https://github.com/WordPress/gutenberg/issues/23122 https://github.com/WordPress/gutenberg/issues/23198 ### WordPress version: 5.5, 5.5.x, 5.6, 5.6.x
priority
cover block style should not apply when there s no image selected describe the bug isbat see what i m doing when you apply top or bottom wave to the cover block and you still didn t select an image the block is covered by the wave it also happens in wp com and it s specially problematic when you use full width as the block is almost covered to reproduce hosted wp install coblocks add a cover block in any page or post select a wave style in the right sidebar wp com add a cover block in any page or post select a wave style in the right sidebar expected behavior don t apply style until you have an image selected or don t allow to select a style until you have selected an image screenshots original hosted wp wp com full width isolating the problem this bug happens with no other plugins activated this bug happens with a default wordpress theme active i can reproduce this bug consistently using the steps above it also happens in wordpress com related wordpress version x x
1
741,992
25,831,493,601
IssuesEvent
2022-12-12 16:23:04
HEPData/hepdata
https://api.github.com/repos/HEPData/hepdata
opened
search: check for `error` in `query_result` before returning JSON
type: bug priority: medium complexity: low
If search results are requested in JSON format, the current code tries to calculate `query_result['hits']` and return the JSON results without checking if there is an `error` in the `query_result`: https://github.com/HEPData/hepdata/blob/3dcdfe62dede1fb28752f31cf863da42ff2aacde/hepdata/modules/search/views.py#L249-L250 An invalid query like `q=%27%5B0%5D` gives an exception `KeyError: 'total'`, e.g. [Sentry event](https://hepdata-sentry.web.cern.ch/sentry/hepdata-prod/issues/35402/events/201603/?environment=production). A JSON error message should be returned instead.
1.0
search: check for `error` in `query_result` before returning JSON - If search results are requested in JSON format, the current code tries to calculate `query_result['hits']` and return the JSON results without checking if there is an `error` in the `query_result`: https://github.com/HEPData/hepdata/blob/3dcdfe62dede1fb28752f31cf863da42ff2aacde/hepdata/modules/search/views.py#L249-L250 An invalid query like `q=%27%5B0%5D` gives an exception `KeyError: 'total'`, e.g. [Sentry event](https://hepdata-sentry.web.cern.ch/sentry/hepdata-prod/issues/35402/events/201603/?environment=production). A JSON error message should be returned instead.
priority
search check for error in query result before returning json if search results are requested in json format the current code tries to calculate query result and return the json results without checking if there is an error in the query result an invalid query like q gives an exception keyerror total e g a json error message should be returned instead
1
510,631
14,813,379,869
IssuesEvent
2021-01-14 01:55:03
onicagroup/runway
https://api.github.com/repos/onicagroup/runway
opened
Binary builds require glibc >= 2.29, breaking compatibility with older linux distros (e.g. Amazon Linux 2)
bug priority:low status:review_needed
It appears that GitHub Actions update bumped the glibc version used in binary builds, causing errors like: ``` 155] Error loading Python lib '/codebuild/output/src662356390/src/github.com/devmohammedothman/runway-cloudformation/node_modules/@onica/runway/src/runway/libpython3.7m.so.1.0': dlopen: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /codebuild/output/src662356390/src/github.com/devmohammedothman/runway-cloudformation/node_modules/@onica/runway/src/runway/libpython3.7m.so.1.0) ``` Assuming CodeBuild w/ AL2 remains a supported platform, we should find a way to make this work.
1.0
Binary builds require glibc >= 2.29, breaking compatibility with older linux distros (e.g. Amazon Linux 2) - It appears that GitHub Actions update bumped the glibc version used in binary builds, causing errors like: ``` 155] Error loading Python lib '/codebuild/output/src662356390/src/github.com/devmohammedothman/runway-cloudformation/node_modules/@onica/runway/src/runway/libpython3.7m.so.1.0': dlopen: /lib64/libm.so.6: version `GLIBC_2.29' not found (required by /codebuild/output/src662356390/src/github.com/devmohammedothman/runway-cloudformation/node_modules/@onica/runway/src/runway/libpython3.7m.so.1.0) ``` Assuming CodeBuild w/ AL2 remains a supported platform, we should find a way to make this work.
priority
binary builds require glibc breaking compatibility with older linux distros e g amazon linux it appears that github actions update bumped the glibc version used in binary builds causing errors like error loading python lib codebuild output src github com devmohammedothman runway cloudformation node modules onica runway src runway so dlopen libm so version glibc not found required by codebuild output src github com devmohammedothman runway cloudformation node modules onica runway src runway so assuming codebuild w remains a supported platform we should find a way to make this work
1
716,535
24,638,283,126
IssuesEvent
2022-10-17 09:37:12
ibissource/frank-flow
https://api.github.com/repos/ibissource/frank-flow
closed
You should see a clear error message when you have a backend jar without frontend
feature priority:low
**Is your feature request related to a problem? Please describe.** The Maven build of the frank-flow project produces a .jar that should be deployed on the server. There is a profile `frontend`. Only if this profile is enabled, then the frontend is added to this .jar file. We expect that developers will sometimes produce .jar files that do not include the frontend. On July 8 2021 Martijn, Jaco and Niels M. spent a lot of time debugging before they discovered that the .jar missed the frontend. A clear error message in this case will help. **Describe the solution you'd like** The URL /frank-flow should always produce some HTML page, also if the backend was built without the `frontend` profile. We can also add a Cypress test that calls the URL /frontend/api/configurations. If this URL produces a result, then we know that there is a frank-flow .jar on the classpath of the Java application on the server. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
1.0
You should see a clear error message when you have a backend jar without frontend - **Is your feature request related to a problem? Please describe.** The Maven build of the frank-flow project produces a .jar that should be deployed on the server. There is a profile `frontend`. Only if this profile is enabled, then the frontend is added to this .jar file. We expect that developers will sometimes produce .jar files that do not include the frontend. On July 8 2021 Martijn, Jaco and Niels M. spent a lot of time debugging before they discovered that the .jar missed the frontend. A clear error message in this case will help. **Describe the solution you'd like** The URL /frank-flow should always produce some HTML page, also if the backend was built without the `frontend` profile. We can also add a Cypress test that calls the URL /frontend/api/configurations. If this URL produces a result, then we know that there is a frank-flow .jar on the classpath of the Java application on the server. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
priority
you should see a clear error message when you have a backend jar without frontend is your feature request related to a problem please describe the maven build of the frank flow project produces a jar that should be deployed on the server there is a profile frontend only if this profile is enabled then the frontend is added to this jar file we expect that developers will sometimes produce jar files that do not include the frontend on july martijn jaco and niels m spent a lot of time debugging before they discovered that the jar missed the frontend a clear error message in this case will help describe the solution you d like the url frank flow should always produce some html page also if the backend was built without the frontend profile we can also add a cypress test that calls the url frontend api configurations if this url produces a result then we know that there is a frank flow jar on the classpath of the java application on the server describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
1
631,302
20,150,102,768
IssuesEvent
2022-02-09 11:30:12
ita-social-projects/horondi_client_fe
https://api.github.com/repos/ita-social-projects/horondi_client_fe
closed
(SP: 3) "Sell" registration on buy as a guest page
FrontEnd part priority: low
Create a mockup and "Selling" text and realise it on the checkout page. Show benefits on checkout, like you will see and manage your orders.
1.0
(SP: 3) "Sell" registration on buy as a guest page - Create mockup and "Selling" text and realise it on the checkout page Show benefits on checkout, like you will see and manage your orders.
priority
sp sell registration on buy as a guest page create mockup and selling text and realise it on the checkout page show benefits on checkout like you will see and manage your orders
1
484,229
13,936,566,362
IssuesEvent
2020-10-22 13:06:24
cds-snc/report-a-cybercrime
https://api.github.com/repos/cds-snc/report-a-cybercrime
closed
Self Harm Words Incorrectly indicated in Server Console.
bug low priority
## Summary When a report is submitted we scan the contents for words that indicate the user is at risk for self harm. Any matches are flagged in the analyst report and printed in the server console. When no self harm words are found we still print an empty list in the console. ![sfwords](https://user-images.githubusercontent.com/62246403/95462372-6c578300-0945-11eb-979e-3fe020ff86bf.png) This may lead to confusion in the future and this should be removed. It seems the issue is the self harm word scan returns an array of matching words. The result is evaluated and if true the results are printed. I believe that originally the return type was String so this would have worked ("" is falsy, [] is truthy). The simple fix is to update the evaluation. ## Steps to reproduce Complete a report without any self harm words. Check server console after submitting. ## Unresolved questions N/A
1.0
Self Harm Words Incorrectly indicated in Server Console. - ## Summary When a report is submitted we scan the contents for words that indicate the user is at risk for self harm. Any matches are flagged in the analyst report and printed in the server console. When no self harm words are found we still print an empty list in the console. ![sfwords](https://user-images.githubusercontent.com/62246403/95462372-6c578300-0945-11eb-979e-3fe020ff86bf.png) This may lead to confusion in the future and this should be removed. It seems the issue is the self harm word scan returns an array of matching words. The result is evaluated and if true the results are printed. I believe that originally the return type was String so this would have worked ("" is falsy, [] is truthy). The simple fix is to update the evaluation. ## Steps to reproduce Complete a report without any self harm words. Check server console after submitting. ## Unresolved questions N/A
priority
self harm words incorrectly indicated in server console summary when a report is submitted the we scan the contents for words that indicate the user is at risk for self harm any matches are flagged in the analyst report and printed in the server console when no self harm words are found we still print an empty list in the console this may lead to confusion in the future and this should be removed it seems the issue is the self harm word scan returns an array of matching words the result is evaluated and if true the results are printed i believe that originally the return type was string so this would have worked is falsy is truthy the simple fix is to update the evaluation steps to reproduce complete a report without any self harm words check server console after submitting unresolved questions n a
1
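The truthiness mix-up described in the record above can be sketched in plain JavaScript; the function name and word list here are hypothetical stand-ins, not the project's actual code:

```javascript
// Hypothetical sketch of the reported bug: the scan once returned a
// string ("" when nothing matched), but now returns an array of matches.
function scanForSelfHarmWords(text) {
  const watchlist = ["harm", "hurt"]; // illustrative list, not the real one
  return watchlist.filter((word) => text.includes(word));
}

const matches = scanForSelfHarmWords("a perfectly ordinary report");

// Buggy check: an empty array is truthy, so this would always print.
const buggyWouldLog = Boolean(matches);

// Fixed check: evaluate the length instead of the array itself.
const fixedWouldLog = matches.length > 0;

console.log(buggyWouldLog, fixedWouldLog); // true false
```

This shows why the original `if (result)` guard kept firing after the return type changed from String to Array: `""` is falsy, but `[]` is truthy.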
29,234
2,714,195,452
IssuesEvent
2015-04-10 00:36:08
hamiltont/clasp
https://api.github.com/repos/hamiltont/clasp
opened
Have the clients kill themselves if they receive a disconnect event from the master and *dont* receive a reconnect within a timeout
bug Low priority
_From @hamiltont on September 11, 2014 17:10_ _Copied from original issue: hamiltont/attack#26_
1.0
Have the clients kill themselves if they receive a disconnect event from the master and *dont* receive a reconnect within a timeout - _From @hamiltont on September 11, 2014 17:10_ _Copied from original issue: hamiltont/attack#26_
priority
have the clients kill themselves if they receive a disconnect event from the master and dont receive a reconnect within a timeout from hamiltont on september copied from original issue hamiltont attack
1
210,083
7,183,076,624
IssuesEvent
2018-02-01 12:05:09
CorsixTH/CorsixTH
https://api.github.com/repos/CorsixTH/CorsixTH
closed
Counting dates in CorsixTH
Priority-Low Type-Enhancement
Currently world.lua and possibly other files track the date in months (this stems from the original level files operating in months), meaning all over the code base there is explicit conversion of the date in months to the actual month, something like `actual_months = 1 + ((months - 1) mod 12)` i.e. - months = 12 gives actual_months = 12 and - months = 32 gives actual_months = 8 (August of year 2) Ideas for solving this would be one of: 1. Track the actual_months separately: either keep them updated or introduce functions to calculate them on the fly, e.g. `get_actual_months(months)` 2. Rework how counting is done: either count years and months separately so we always have the actual month, or just count in months and work out what year it is when required (currently knowing the year isn't a very common operation but a necessary one) The second way is my preferred one, but this is really an open discussion.
1.0
Counting dates in CorsixTH - Currently world.lua and possibly other files track the date in months (this stems from the original level files operating in months) meaning all over the code base there is explicit conversion of date in months to the actual month something like `actual_months = 1 + ((months - 1) mod 12)` i.e. - months = 12 = actual_months = 12 and - months = 32 = actual_months = 8 (August of year 2) Ideas for solving this would be one of: 1. Track the actual_months separately either keep them updated or introduce functions to calculate them on the fly `get_actual_months(months)` 2. Rework how counting is done either count years and month separately so we always have the actual month, or just count in months and work out what year it is when required (currently knowing the year isn't a very common operation but a necessary one) The second way is my preferred way but this is really an open discussion.
priority
counting dates in corsixth currently world lua and possibly other files track the date in months this stems from the original level files operating in months meaning all over the code base there is explicit conversion of date in months to the actual month something like actual months months mod i e months actual months and months actual months august of year ideas for solving this would be one of track the actual months separately either keep them updated or introduce functions to calculate them on the fly get actual months months rework how counting is done either count years and month separately so we always have the actual month or just count in months and work out what year it is when required currently knowing the year isn t a very common operation but a necessary one the second way is my preferred way but this is really an open discussion
1
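The conversion quoted in the record above is simple modular arithmetic; a sketch in JavaScript (rather than the project's Lua, and with hypothetical helper names), mapping a 1-based running month count to a calendar month plus the number of full years already elapsed:

```javascript
// Mirrors the formula quoted in the issue:
// actual_months = 1 + ((months - 1) mod 12), giving a value in 1..12.
function actualMonth(months) {
  return 1 + ((months - 1) % 12);
}

// Full years already elapsed before the current, partial one.
function fullYearsElapsed(months) {
  return Math.floor((months - 1) / 12);
}

console.log(actualMonth(12), fullYearsElapsed(12)); // 12 0
console.log(actualMonth(32), fullYearsElapsed(32)); // 8 2
```

The `- 1` / `+ 1` shuffle exists because the count is 1-based: without it, month 12 would wrap to 0 instead of December.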
407,248
11,911,133,258
IssuesEvent
2020-03-31 08:06:12
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0 Staging 1489] Large/Small Ashlar Stone fountain values wrong way around
Priority: Low Status: Fixed
1. 0.9.0 Staging 1489 2. Create a Large or Small Ashlar Stone fountain 3. Small fountain should cost less than a large fountain 4. Large fountain is 40 Ashlar, Small is 60. Resource use is also very high for the item ![image](https://user-images.githubusercontent.com/61889138/77852841-adbde780-71d8-11ea-8204-b0e02c5daeba.png)
1.0
[0.9.0 Staging 1489] Large/Small Ashlar Stone fountain values wrong way around - 1. 0.9.0 Staging 1489 2. Create a Large or Small Ashlar Stone fountain 3. Small fountain should cost less than a large fountain 4. Large fountain is 40 Ashlar, Small is 60. Resource use is also very high for the item ![image](https://user-images.githubusercontent.com/61889138/77852841-adbde780-71d8-11ea-8204-b0e02c5daeba.png)
priority
large small ashlar stone fountain values wrong way around staging create large or small ashlar stone fountin small fountain should cost less than a large fountain large fountain is ashlar small is resource use is also very high for the item
1
518,154
15,024,849,282
IssuesEvent
2021-02-01 20:13:38
Esri/data-collection-ios
https://api.github.com/repos/Esri/data-collection-ios
closed
Consider using final attribute
Effort - Small Priority - Low Status - Backlog
Marking appropriate declarations as `final` can improve runtime efficiency for debug builds. The compiler can infer `final` for `public`/`internal` classes that aren't subclassed elsewhere in the module when using Whole Module Optimization, but WMO is only turned on for release builds. Marking classes/methods/properties as `final` allows the compiler to use direct dispatch rather than dynamic dispatch since it knows that the declaration cannot be overridden. See [Increasing Performance by Reducing Dynamic Dispatch](https://developer.apple.com/swift/blog/?id=27).
1.0
Consider using final attribute - Marking appropriate declarations as `final` can improve runtime efficiency for debug builds. The compiler can infer `final` for `public`/`internal` classes that aren't subclassed elsewhere in the module when using Whole Module Optimization, but WMO is only turned on for release builds. Marking classes/methods/properties as `final` allows the compiler to use direct dispatch rather than dynamic dispatch since it knows that the declaration cannot be overridden. See [Increasing Performance by Reducing Dynamic Dispatch](https://developer.apple.com/swift/blog/?id=27).
priority
consider using final attribute marking appropriate declarations as final can improve runtime efficiency for debug builds the compiler can infer final for public internal classes that aren t subclassed elsewhere in the module when using whole module optimization but wmo is only turned on for release builds marking classes methods properties as final allows the compiler to use direct dispatch rather than dynamic dispatch since it knows that the declaration cannot be overridden see
1
541,422
15,826,743,337
IssuesEvent
2021-04-06 07:49:59
anders-biostat/covid-test-web-site
https://api.github.com/repos/anders-biostat/covid-test-web-site
opened
Probe Suchen search result template preview
Priority: Low Type: Feature
Add a button to preview which template is displayed to the user/proband in the frontend
1.0
Probe Suchen search result template preview - Add a button to preview which template is displayed to the user/proband in the frontend
priority
probe suchen search result template preview add a button to preview which template the user probant gets displayed within the frontend
1
180,984
6,655,024,188
IssuesEvent
2017-09-29 14:54:52
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[studio-ui] Make save/cancel in content type editor consistent style with the rest of the app
enhancement Priority: Lowest
# Current style <img width="940" alt="screen shot 2017-09-05 at 1 42 20 pm" src="https://user-images.githubusercontent.com/169432/30074614-2ee56f34-9240-11e7-8aa2-e1427da412ef.png"> # Make consistent with the style used by the form engine and the rest of studio <img width="1035" alt="screen shot 2017-09-05 at 1 43 43 pm" src="https://user-images.githubusercontent.com/169432/30074678-5e3baad2-9240-11e7-8793-0aafe11a0826.png">
1.0
[studio-ui] Make save/cancel in content type editor consistent style with the rest of the app - # Current style <img width="940" alt="screen shot 2017-09-05 at 1 42 20 pm" src="https://user-images.githubusercontent.com/169432/30074614-2ee56f34-9240-11e7-8aa2-e1427da412ef.png"> # Make consistent with the style used by the form engine and the rest of studio <img width="1035" alt="screen shot 2017-09-05 at 1 43 43 pm" src="https://user-images.githubusercontent.com/169432/30074678-5e3baad2-9240-11e7-8793-0aafe11a0826.png">
priority
make save cancel in content type editor consistent style with the rest of the app current style img width alt screen shot at pm src make consistent with the style used by the form engine and the rest of studio img width alt screen shot at pm src
1
292,978
8,971,416,907
IssuesEvent
2019-01-29 15:52:32
ysik82/Customer-Sphere
https://api.github.com/repos/ysik82/Customer-Sphere
closed
Customer Experience - PickUpOnTime - Develop
All Segments Enhancement Low Priority Sphere Team T-shirt size 4
***Description*** As a customer I want to see the KPI-Product Availability per day and month so that I am able to order the exact number of products for the next month. ***Tasks*** - [ ] Task 1 - [ ] Task 2 - [ ] Task 3 ***Acceptance Criteria*** The team and the product owner must approve the information, under the following scenarios: Valid Email Address | Email Validation | Message sent to email address -- | -- | -- Invalid Email Address | Email Validation | Flag online profile as incomplete, kickoff snail mail message. Valid Email Address | Marketing Messaging | Marketing message copy matches copy provided by marketing Valid Email Address | Marketing Messaging | Marketing message design matches the specs provided by marketing Valid Email Address | Marketing Messaging | Message contains email link that allows the user to navigate to online banking Valid Email Address | Email Validation | Message sent to email address
1.0
Customer Experience - PickUpOnTime - Develop - ***Description*** As a customer I want to see the KPI-Product Availability per day and month so that I am able to order the exact number of products for the next month. ***Tasks*** - [ ] Task 1 - [ ] Task 2 - [ ] Task 3 ***Acceptance Criteria*** The team and the product owner must approve the information, under the following scenarios: Valid Email Address | Email Validation | Message sent to email address -- | -- | -- Invalid Email Address | Email Validation | Flag online profile as incomplete, kickoff snail mail message. Valid Email Address | Marketing Messaging | Marketing message copy matches copy provided by marketing Valid Email Address | Marketing Messaging | Marketing message design matches the specs provided by marketing Valid Email Address | Marketing Messaging | Message contains email link that allows the user to navigate to online banking Valid Email Address | Email Validation | Message sent to email address
priority
customer experience pickupontime develop description as a customer i want to see the kpi product availability per day and month so that be able to order the next month the exact number of product tasks task task task acceptance criteria the team and the product owner must approve the information under the following scenarios valid email address email validation message sent to email address invalid email address email validation flag online profile as incomplete kickoff snail mail message valid email address marketing messaging marketing message copy matches copy provided by marketing valid email address marketing messaging marketing message design matches the specs provided by marketing valid email address marketing messaging message contains email link that allows the user to navigate to online banking valid email address email validation message sent to email address
1
298,483
9,200,382,955
IssuesEvent
2019-03-07 16:56:04
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
Map composer: width&height swapped
Category: Map Canvas Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **werchowyna-epf-pl -** (werchowyna-epf-pl -) Original Redmine Issue: 56, https://issues.qgis.org/issues/56 Original Assignee: Gary Sherman --- In the map composer the width&height in the General and Item tabs have different meanings, but it's hard to tell which one is right. Anyway, the width field in the General tab is the same as what height is in the Item tab. Maciek
1.0
Map composer: width&height swapped - --- Author Name: **werchowyna-epf-pl -** (werchowyna-epf-pl -) Original Redmine Issue: 56, https://issues.qgis.org/issues/56 Original Assignee: Gary Sherman --- In the map composer the width&height in the General and Item tabs have different meanings, but it's hard to tell which one is right. Anyway, the width field in the General tab is the same as what height is in the Item tab. Maciek
priority
map composer width height swapped author name werchowyna epf pl werchowyna epf pl original redmine issue original assignee gary sherman in the map composer the width height in general and item tabs have different meaning but it s hard to tell which one is right anyway width field in general tab is the same what height is in item tab maciek
1
793,351
27,991,956,382
IssuesEvent
2023-03-27 05:03:39
chaotic-aur/packages
https://api.github.com/repos/chaotic-aur/packages
closed
[Request] fairy-stockfish (and -git)
request:new-pkg priority:low
### Link to the package base(s) in the AUR [fairy-stockfish](https://aur.archlinux.org/packages/fairy-stockfish) [fairy-stockfish-git](https://aur.archlinux.org/packages/fairy-stockfish-git) ### Utility this package has for you Chess variants are fun. Why not be able to play against a computer? ### Do you consider the package(s) to be useful for every Chaotic-AUR user? No, but for a great amount. ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [ ] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information Regular `stockfish` doesn't support variants.
1.0
[Request] fairy-stockfish (and -git) - ### Link to the package base(s) in the AUR [fairy-stockfish](https://aur.archlinux.org/packages/fairy-stockfish) [fairy-stockfish-git](https://aur.archlinux.org/packages/fairy-stockfish-git) ### Utility this package has for you Chess variants are fun. Why not be able to play against a computer? ### Do you consider the package(s) to be useful for every Chaotic-AUR user? No, but for a great amount. ### Do you consider the package to be useful for feature testing/preview? - [ ] Yes ### Have you tested if the package builds in a clean chroot? - [ ] Yes ### Does the package's license allow redistributing it? YES! ### Have you searched the issues to ensure this request is unique? - [X] YES! ### Have you read the README to ensure this package is not banned? - [X] YES! ### More information Regular `stockfish` doesn't support variants.
priority
fairy stockfish and git link to the package base s in the aur utility this package has for you chess variants are fun why not be able to play against a computer do you consider the package s to be useful for every chaotic aur user no but for a great amount do you consider the package to be useful for feature testing preview yes have you tested if the package builds in a clean chroot yes does the package s license allow redistributing it yes have you searched the issues to ensure this request is unique yes have you read the readme to ensure this package is not banned yes more information regular stockfish doesn t support variants
1
199,804
6,994,374,368
IssuesEvent
2017-12-15 15:09:30
rathena/rathena
https://api.github.com/repos/rathena/rathena
closed
Chain Lightning vs icewall
component:skill mode:prerenewal mode:renewal priority:low status:confirmed type:bug
<!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. --> * **rAthena Hash**: [b2d904b](https://github.com/rathena/rathena/commit/b2d904b764f43137547e5ed689af2b5983b8b234) <!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue. How to get your GitHub Hash: 1. cd your/rAthena/directory/ 2. git rev-parse --short HEAD 3. Copy the resulting hash. --> * **Client Date**: 20170614 <!-- Please specify the client date you used. --> * **Server Mode**: Renewal <!-- Which mode does your server use: Pre-Renewal or Renewal? --> Skill information: [Chain Lightning](http://irowiki.org/wiki/Chain_Lightning) * **Description of Issue**: * Result: <!-- Describe the issue that you experienced in detail. -->Chain Lightning will hit the ice wall * Expected Result: <!-- Describe what you would expect to happen in detail. -->Chain Lightning will not hit the ice wall. If you choose the ice wall as the first target, chain lightning will hit the ice wall once, then jump to other targets except the ice wall * How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. --> * Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. --> This cannot be confirmed without further information, but I think I'm right. Hope someone can advise. <!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ --> * **Modifications that may affect results**: <!-- * Please provide any information that could influence the expected result. --> <!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
1.0
Chain Lightning vs icewall - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. --> * **rAthena Hash**: [b2d904b](https://github.com/rathena/rathena/commit/b2d904b764f43137547e5ed689af2b5983b8b234) <!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue. How to get your GitHub Hash: 1. cd your/rAthena/directory/ 2. git rev-parse --short HEAD 3. Copy the resulting hash. --> * **Client Date**: 20170614 <!-- Please specify the client date you used. --> * **Server Mode**: Renewal <!-- Which mode does your server use: Pre-Renewal or Renewal? --> Skill information : [Chain Lightning](http://irowiki.org/wiki/Chain_Lightning) * **Description of Issue**: * Result: <!-- Describe the issue that you experienced in detail. -->Chain Lightning will hit the ice wall * Expected Result: <!-- Describe what you would expect to happen in detail. -->Chain Lightning will not hit the ice wall. If you choose the ice wall as the first target, chain lightning will hit ice wall once then jump to other targets except the ice wall * How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. --> * Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. --> This can not be confirmed without futher information. But i thing i m right. Hope someone can advise <!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ --> * **Modifications that may affect results**: <!-- * Please provide any information that could influence the expected result. 
--> <!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
priority
chain lightning vs icewall rathena hash please specify the rathena on which you encountered this issue how to get your github hash cd your rathena directory git rev parse short head copy the resulting hash client date server mode renewal skill information description of issue result chain lightning will hit the ice wall expected result chain lightning will not hit the ice wall if you choose the ice wall as the first target chain lightning will hit ice wall once then jump to other targets except the ice wall how to reproduce official information this can not be confirmed without futher information but i thing i m right hope someone can advise modifications that may affect results
1
804,251
29,481,494,901
IssuesEvent
2023-06-02 06:12:36
wp-media/wp-rocket
https://api.github.com/repos/wp-media/wp-rocket
closed
rocket_clean_post() - The whole cache is cleared under certain conditions
type: bug module: cache priority: low needs: grooming severity: moderate
**Before submitting an issue please check that you’ve completed the following steps:** - Made sure you’re on the latest version ✅ - Used the search feature to ensure that the bug hasn’t been reported before ✅ **Describe the bug** When the cache of a post is cleared, we are using `rocket_clean_post()` which calls `rocket_get_purge_urls()` to get related posts whose cache also needs to be cleared. If one of them doesn't contain a path, then the `$entry` will be just `$dir`, and that will lead to the clearance of the whole cache folder: https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/functions/files.php#L538-L551 This issue occurred on a customer's website where, for security reasons, they were using the following to disable author archive pages: ```PHP add_filter('author_link', function() { return '#'; }, 99); ``` So, here: https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/common/purge.php#L134 that resulted in: ```PHP $purge_urls[] = '#'; ``` and finally, the whole cache was cleared every time `rocket_clean_post()` ran. **To Reproduce** Steps to reproduce the behavior: 1. Hardcode the `#` as a value [here](https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/common/purge.php#L134). 2. Run `rocket_clean_post()` to clear the cache of a post. 3. Monitor the `/cache/wp-rocket/` folder. **Expected behavior** Clear only the cache of related posts/archive pages. **Additional context** This is an edge case, but it's causing unnecessary cache clearing, and it is hard to troubleshoot. It's necessary to safeguard the process. Could be taken into consideration when #2549 is tackled.
1.0
rocket_clean_post() - The whole cache is cleared under certain conditions - **Before submitting an issue please check that you’ve completed the following steps:** - Made sure you’re on the latest version ✅ - Used the search feature to ensure that the bug hasn’t been reported before ✅ **Describe the bug** When the cache of a post is cleared, we are using `rocket_clean_post()` which calls `rocket_get_purge_urls()` to get related posts whose cache also needs to be cleared. If one of them doesn't contain a path, then the `$entry` will be just `$dir`, and that will lead to the clearance of the whole cache folder: https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/functions/files.php#L538-L551 This issue occurred on a customer's website where, for security reasons, they were using the following to disable author archive pages: ```PHP add_filter('author_link', function() { return '#'; }, 99); ``` So, here: https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/common/purge.php#L134 that resulted in: ```PHP $purge_urls[] = '#'; ``` and finally, the whole cache was cleared every time the `rocket_clean_post()` run. **To Reproduce** Steps to reproduce the behavior: 1. Hardcode the `#` as a value [here](https://github.com/wp-media/wp-rocket/blob/e49355167e75f1d3cf51ee65c429b75f3e28682e/inc/common/purge.php#L134). 2. Run `rocket_clean_post()` to clear the cache of a post. 3. Monitor the `/cache/wp-rocket/` folder. **Expected behavior** Clear only the cache of related posts/archive pages. **Additional context** This is an edge case, but it's causing unnecessary cache clearing, and it is hard to troubleshoot. It's necessary to safeguard the process. Could be taken into consideration when #2549 will be tackled. 
**Related ticket:** https://secure.helpscout.net/conversation/1305116799/201131?folderId=273766 **Backlog Grooming (for WP Media dev team use only)** - [ ] Reproduce the problem - [ ] Identify the root cause - [ ] Scope a solution - [ ] Estimate the effort
priority
rocket clean post the whole cache is cleared under certain conditions before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version ✅ used the search feature to ensure that the bug hasn’t been reported before ✅ describe the bug when the cache of a post is cleared we are using rocket clean post which calls rocket get purge urls to get related posts whose cache also needs to be cleared if one of them doesn t contain a path then the entry will be just dir and that will lead to the clearance of the whole cache folder this issue occurred on a customer s website where for security reasons they were using the following to disable author archive pages php add filter author link function return so here that resulted in php purge urls and finally the whole cache was cleared every time the rocket clean post run to reproduce steps to reproduce the behavior hardcode the as a value run rocket clean post to clear the cache of a post monitor the cache wp rocket folder expected behavior clear only the cache of related posts archive pages additional context this is an edge case but it s causing unnecessary cache clearing and it is hard to troubleshoot it s necessary to safeguard the process could be taken into consideration when will be tackled related ticket backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
1
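The failure mode in the record above, a purge URL with no usable path collapsing a wildcard delete onto the cache root, can be guarded against generically. A sketch in JavaScript with hypothetical names (this is not WP Rocket's actual API, which is PHP):

```javascript
// Hypothetical guard: drop purge entries that would resolve to the cache
// root (e.g. "#" or a bare domain), so that deleting `${cacheDir}${path}*`
// can never match the whole cache folder.
function safePurgePaths(urls, cacheDir) {
  return urls
    .map((url) => {
      try {
        // The base URL is only used to resolve relative entries like "#".
        return new URL(url, "https://example.com").pathname;
      } catch {
        return ""; // unparsable entries get filtered out below
      }
    })
    .filter((path) => path !== "" && path !== "/")
    .map((path) => cacheDir + path);
}

const paths = safePurgePaths(
  ["https://example.com/blog/post/", "#", "https://example.com/"],
  "/cache/wp-rocket"
);
console.log(paths); // [ '/cache/wp-rocket/blog/post/' ]
```

Both `"#"` and the bare domain resolve to the root path `"/"` and are filtered out, which is exactly the case the `author_link` filter triggered in the report.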
532,853
15,572,047,720
IssuesEvent
2021-03-17 06:18:33
containrrr/watchtower
https://api.github.com/repos/containrrr/watchtower
closed
docker image linux/arm/v7 platform request
Priority: Low Status: Available Type: Enhancement
**Is your feature request related to a problem? Please describe.** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> I chose the `armhf-latest` tag to run on my Raspberry Pi and got the warning ```shell docker run -d \ --name watchtower \ -v /var/run/docker.sock:/var/run/docker.sock \ containrrr/watchtower:armhf-latest WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested ``` however watchtower is running okay. Since I saw the tags from the official Docker document [here](https://github.com/docker-library/official-images#architectures-other-than-amd64) and there are `arm32v6` and `arm32v7` tags, should we move `armhf` to `arm32v6` and `arm32v7`? **Describe the solution you'd like** <!-- A clear and concise description of what you want to happen. --> or add a tag like `arm32v7`? **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** <!-- Add any other context or screenshots about the feature request here. -->
1.0
docker image linux/arm/v7 platform request - **Is your feature request related to a problem? Please describe.** <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> I choose the `armhf-latest` tag to run on my raspberry and got the warning ```shell docker run -d \ --name watchtower \ -v /var/run/docker.sock:/var/run/docker.sock \ containrrr/watchtower:armhf-latest WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested ``` however watchtower is running okay. Since i saw the tags from the docker official document [here](https://github.com/docker-library/official-images#architectures-other-than-amd64) and there are `arm32v6` and `arm32v7` tags So, should we mv `armhf` to `arm32v6` and `arm32v7` ? **Describe the solution you'd like** <!-- A clear and concise description of what you want to happen. --> or add a tag like `arm32v7`? **Describe alternatives you've considered** <!-- A clear and concise description of any alternative solutions or features you've considered. --> **Additional context** <!-- Add any other context or screenshots about the feature request here. -->
priority
docker image linux arm platform request is your feature request related to a problem please describe i choose the armhf latest tag to run on my raspberry and got the warning shell docker run d name watchtower v var run docker sock var run docker sock containrrr watchtower armhf latest warning the requested image s platform linux does not match the detected host platform linux arm and no specific platform was requested however watchtower is running okay since i saw the tags from the docker official document and there are and tags so should we mv armhf to and describe the solution you d like or add a tag like describe alternatives you ve considered additional context
1
108,350
4,337,636,454
IssuesEvent
2016-07-28 01:34:10
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
opened
Runtime Drake Diagnostics and Performance Measurements Are Needed
priority: low team: software core type: feature request
When running a simulation, it's often useful for various types of diagnostic and performance measurements to be made readily available. Such information may provide insight into the "health" of the simulation and things being simulated like perception pipelines, controllers, and dynamical systems. The information should ideally be exposed through multiple channels, e.g., LCM, ROS, SpdLog, etc. Example diagnostic and performance measurements include: * Simulation real-time factor. * Controller servo frequency. * Controller parameters. * Model states. * Executor time step. * Memory footprint. * Current simulation time. * Time spent updating the models. * Time spent updating collisions. * Time spent idling. * Current executor time step. * Number of collisions being checked and modeled. This issue tracks the identification of the relevant diagnostic and performance measurements and the design and implementation of a mechanism that enables this information to be gathered and exposed to external processes and users.
1.0
Runtime Drake Diagnostics and Performance Measurements Are Needed - When running a simulation, it's often useful for various types of diagnostic and performance measurements to be made readily available. Such information may provide insight into the "health" of the simulation and things being simulated like perception pipelines, controllers, and dynamical systems. The information should ideally be exposed through multiple channels, e.g., LCM, ROS, SpdLog, etc. Example diagnostic and performance measurements include: * Simulation real-time factor. * Controller servo frequency. * Controller parameters. * Model states. * Executor time step. * Memory footprint. * Current simulation time. * Time spent updating the models. * Time spent updating collisions. * Time spent idling. * Current executor time step. * Number of collisions being checked and modeled. This issue tracks the identification of the relevant diagnostic and performance measurements and the design and implementation of a mechanism that enables this information to be gathered and exposed to external processes and users.
priority
runtime drake diagnostics and performance measurements are needed when running a simulation it s often useful for various types of diagnostic and performance measurements to be made readily available such information may provide insight into the health of the simulation and things being simulated like perception pipelines controllers and dynamical systems the information should ideally be exposed through multiple channels e g lcm ros spdlog etc example diagnostic and performance measurements include simulation real time factor controller servo frequency controller parameters model states executor time step memory footprint current simulation time time spent updating the models time spent updating collisions time spent idling current executor time step number of collisions being checked and modeled this issue tracks the identification of the relevant diagnostic and performance measurements and the design and implementation of a mechanism that enables this information to be gathered and exposed to external processes and users
1
413431
12067204763
IssuesEvent
2020-04-16 13:02:54
osmontrouge/caresteouvert
https://api.github.com/repos/osmontrouge/caresteouvert
closed
Display fax number
enhancement priority: low
**Is your feature request related to a problem? Please describe.** Display fax number Example: https://www.caresteouvert.fr/health@46.479371,0.368976,18.62/place/n2153404526
1.0
Display fax number - **Is your feature request related to a problem? Please describe.** Display fax number Example: https://www.caresteouvert.fr/health@46.479371,0.368976,18.62/place/n2153404526
priority
display fax number is your feature request related to a problem please describe display fax number example
1
101275
4111986164
IssuesEvent
2016-06-07 08:47:42
japanesemediamanager/jmmclient
https://api.github.com/repos/japanesemediamanager/jmmclient
closed
Download Info Displayed Incorrectly When Sorting
Bug - Low Priority
Download info such as Size, Seeders and Leechers is not displayed correctly when you sort by it. It's displayed based on the first number ignoring everything after. Look at the seeders column. ![jmmdesktop_2016-02-07_15-03-54](https://cloud.githubusercontent.com/assets/9443295/12876019/0c097f78-cdac-11e5-93e1-2698afc989ad.png)
1.0
Download Info Displayed Incorrectly When Sorting - Download info such as Size, Seeders and Leechers is not displayed correctly when you sort by it. It's displayed based on the first number ignoring everything after. Look at the seeders column. ![jmmdesktop_2016-02-07_15-03-54](https://cloud.githubusercontent.com/assets/9443295/12876019/0c097f78-cdac-11e5-93e1-2698afc989ad.png)
priority
download info displayed incorrectly when sorting download info such as size seeders and leechers is not displayed correctly when you sort by it it s displayed based on the first number ignoring everything after look at the seeders column
1
609238
18858162039
IssuesEvent
2021-11-12 09:27:14
chaotic-aur/packages
https://api.github.com/repos/chaotic-aur/packages
closed
[Request] Add obs-studio-tytan652 and obs-studio-rc
request:new-pkg priority:low
## 👶 For requesting new packages - Link to the package(s) in AUR: - [obs-studio-tytan652](https://aur.archlinux.org/packages/obs-studio-tytan652/) - [obs-studio-rc](https://aur.archlinux.org/packages/obs-studio-rc/) - Utility this package has for you: - obs-studio-tytan652 is a OBS Studio package with almost all feature with some QOL PR added and sometimes backported fix. It is also build with the CEF version that OBS vendored on Linux. You can ask on their discord this is also the one who is recommended when someone with a Arch base have issue with dependencies or ask for missing features. - obs-studio-rc is a OBS Studio package for OBS Studio that follow the beta (release candidates) version branch. Allow user to try OBS Studio RCs. Will may need cef-minimal-obs-rc-bin in the future. Since I'm the maintainer of those two package and many OBS related package on AUR, you can consider me biased because these are my packages. But when I read messages about people appreciating my packages on OBS discord, I seriously think that I should ask you to add it for the ones that may use your repository. I have also many OBS plugins packages but I let other users request them. - Do you consider this package(s) to be useful for **every** chaotic user?: - [ ] YES - [x] No, but yes for a great amount for obs-studio-tytan652. - [x] No, but yes for a few for obs-studio-rc. - [ ] No, it's useful only for me. - Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?: - [x] YES for obs-studio-rc - [ ] NO - Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?: - [x] YES - Have you tested if this package builds in a clean chroot?: - [x] YES - [ ] NO - Does the package's license allows us to redistribute it?: - [x] YES - [ ] No clue. - [ ] No, but the author doesn't really care, it's just for bureaucracy. - Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?: - [x] YES - Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?: - [x] YES
1.0
[Request] Add obs-studio-tytan652 and obs-studio-rc - ## 👶 For requesting new packages - Link to the package(s) in AUR: - [obs-studio-tytan652](https://aur.archlinux.org/packages/obs-studio-tytan652/) - [obs-studio-rc](https://aur.archlinux.org/packages/obs-studio-rc/) - Utility this package has for you: - obs-studio-tytan652 is a OBS Studio package with almost all feature with some QOL PR added and sometimes backported fix. It is also build with the CEF version that OBS vendored on Linux. You can ask on their discord this is also the one who is recommended when someone with a Arch base have issue with dependencies or ask for missing features. - obs-studio-rc is a OBS Studio package for OBS Studio that follow the beta (release candidates) version branch. Allow user to try OBS Studio RCs. Will may need cef-minimal-obs-rc-bin in the future. Since I'm the maintainer of those two package and many OBS related package on AUR, you can consider me biased because these are my packages. But when I read messages about people appreciating my packages on OBS discord, I seriously think that I should ask you to add it for the ones that may use your repository. I have also many OBS plugins packages but I let other users request them. - Do you consider this package(s) to be useful for **every** chaotic user?: - [ ] YES - [x] No, but yes for a great amount for obs-studio-tytan652. - [x] No, but yes for a few for obs-studio-rc. - [ ] No, it's useful only for me. - Do you consider this package(s) to be useful for feature testing/preview (e.g: mesa-aco, wine-wayland)?: - [x] YES for obs-studio-rc - [ ] NO - Are you sure we don't have this package already (test with `pacman -Ss <pkgname>`)?: - [x] YES - Have you tested if this package builds in a clean chroot?: - [x] YES - [ ] NO - Does the package's license allows us to redistribute it?: - [x] YES - [ ] No clue. - [ ] No, but the author doesn't really care, it's just for bureaucracy. - Have you searched the [issues](https://github.com/chaotic-aur/packages/issues) to ensure this request is new (not duplicated)?: - [x] YES - Have you read the [README](https://github.com/chaotic-aur/packages#banished-and-rejected-packages) to ensure this package is not banned?: - [x] YES
priority
add obs studio and obs studio rc 👶 for requesting new packages link to the package s in aur utility this package has for you obs studio is a obs studio package with almost all feature with some qol pr added and sometimes backported fix it is also build with the cef version that obs vendored on linux you can ask on their discord this is also the one who is recommended when someone with a arch base have issue with dependencies or ask for missing features obs studio rc is a obs studio package for obs studio that follow the beta release candidates version branch allow user to try obs studio rcs will may need cef minimal obs rc bin in the future since i m the maintainer of those two package and many obs related package on aur you can consider me biased because these are my packages but when i read messages about people appreciating my packages on obs discord i seriously think that i should ask you to add it for the ones that may use your repository i have also many obs plugins packages but i let other users request them do you consider this package s to be useful for every chaotic user yes no but yes for a great amount for obs studio no but yes for a few for obs studio rc no it s useful only for me do you consider this package s to be useful for feature testing preview e g mesa aco wine wayland yes for obs studio rc no are you sure we don t have this package already test with pacman ss yes have you tested if this package builds in a clean chroot yes no does the package s license allows us to redistribute it yes no clue no but the author doesn t really care it s just for bureaucracy have you searched the to ensure this request is new not duplicated yes have you read the to ensure this package is not banned yes
1
549506
16094238814
IssuesEvent
2021-04-26 20:38:38
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
opened
Soul shards quickly lose their usefulness
:grey_exclamation: priority low :question: suggestion :question:
<!-- DO NOT REMOVE PRE-EXISTING LINES IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION ---------------------------------------------------------------------------------------------------------- --> This is a part of a much larger post I put out earlier. However at the request of a dev, I went out and split it into a set of smaller suggestions. So I just finished a nearly 40 year playthrough of the Burning Blade clan in Desolace. My goal was to establish something akin to the Old Horde, visibly consorting with demons and ruling with an iron fist, and while I did have a blast exploring the various events and options that you folk have made for this unique playthrough, there were some problems that eventually forced me to stop the run as a whole. Soul shards quickly lost their usefulness too, I had 6 in my inventory since draining a soul is a convenient way to kill a prisoner without any of the negative reputation modifiers as I was trying to build my fel power anyway. Perhaps a much more advanced function could be added like Summon Higher Demon which would require the use of 5 soul shards in a ritual to summon a far more menacing being. A Dreadlord for example who can act as an upgraded version of the Succubus, boasting a higher intrigue and being far more ruthless. An Eredar warlock who boasts an exceptionally high Learning and so perfect for the role of warlock in your council chamber. Perhaps even a Pit Lord, I mean why not, the pit lord could act as an insane battle commander who cannot be in your council, where his presence of commanding an army could bridge the gap of 500-1000 soldiers, Mannaroth in the War of the ancients killed hundreds with every swing of his glaive. I can already see the unique interactions appearing of a pit lord commanding my army. Considerable balancing would obviously need to be practiced here, making the cooldown time between each ritual 3-5 years and perhaps allowing only 1 Pit Lord to exist in your army at a time, and making a defeat always resulting in the Pit Lord death, as I can't see something as big as that making an escape.
1.0
Soul shards quickly lose their usefulness - <!-- DO NOT REMOVE PRE-EXISTING LINES IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION ---------------------------------------------------------------------------------------------------------- --> This is a part of a much larger post I put out earlier. However at the request of a dev, I went out and split it into a set of smaller suggestions. So I just finished a nearly 40 year playthrough of the Burning Blade clan in Desolace. My goal was to establish something akin to the Old Horde, visibly consorting with demons and ruling with an iron fist, and while I did have a blast exploring the various events and options that you folk have made for this unique playthrough, there were some problems that eventually forced me to stop the run as a whole. Soul shards quickly lost their usefulness too, I had 6 in my inventory since draining a soul is a convenient way to kill a prisoner without any of the negative reputation modifiers as I was trying to build my fel power anyway. Perhaps a much more advanced function could be added like Summon Higher Demon which would require the use of 5 soul shards in a ritual to summon a far more menacing being. A Dreadlord for example who can act as an upgraded version of the Succubus, boasting a higher intrigue and being far more ruthless. An Eredar warlock who boasts an exceptionally high Learning and so perfect for the role of warlock in your council chamber. Perhaps even a Pit Lord, I mean why not, the pit lord could act as an insane battle commander who cannot be in your council, where his presence of commanding an army could bridge the gap of 500-1000 soldiers, Mannaroth in the War of the ancients killed hundreds with every swing of his glaive. I can already see the unique interactions appearing of a pit lord commanding my army. Considerable balancing would obviously need to be practiced here, making the cooldown time between each ritual 3-5 years and perhaps allowing only 1 Pit Lord to exist in your army at a time, and making a defeat always resulting in the Pit Lord death, as I can't see something as big as that making an escape.
priority
soul shards quickly lose their usefulness do not remove pre existing lines if you want to suggest a few things open a new issue per every suggestion this is a part of a much larger post i put out earlier however at the request of a dev i went out and split it into a set of smaller suggestions so i just finished a nearly year playthrough of the burning blade clan in desolace my goal was to establish something akin to the old horde visibly consorting with demons and ruling with an iron fist and while i did have a blast exploring the various events and options that you folk have made for this unique playthrough there were some problems that eventually forced me to stop the run as a whole soul shards quickly lost their usefulness too i had in my inventory since draining a soul is a convenient way to kill a prisoner without any of the negative reputation modifiers as i was trying to build my fel power anyway perhaps a much more advanced function could be added like summon higher demon which would require the use of soul shards in a ritual to summon a far more menacing being a dreadlord for example who can act as an upgraded version of the succubus boasting a higher intrigue and being far more ruthless an eredar warlock who boasts an exceptionally high learning and so perfect for the role of warlock in your council chamber perhaps even a pit lord i mean why not the pit lord could act as an insane battle commander who cannot be in your council where his presence of commanding an army could bridge the gap of soldiers mannaroth in the war of the ancients killed hundreds with every swing of his glaive i can already see the unique interactions appearing of a pit lord commanding my army considerable balancing would obviously need to be practiced here making the cooldown time between each ritual years and perhaps allowing only pit lord to exist in your army at a time and making a defeat always resulting in the pit lord death as i can t see something as big as that making an escape
1
378,649
11,205,902,282
IssuesEvent
2020-01-05 17:24:56
khval/AmosKittens
https://api.github.com/repos/khval/AmosKittens
closed
Improvements needed on autoback 1 and 2.
Not important (low priority)
Draw commands don't draw on background buffer, and swap then on the front buffer, it just draws to 2 buffers, in autoback 1 and 2 at same time.
1.0
Improvements needed on autoback 1 and 2. - Draw commands don't draw on background buffer, and swap then on the front buffer, it just draws to 2 buffers, in autoback 1 and 2 at same time.
priority
improvements needed on autoback and draw commands don t draw on background buffer and swap then on the front buffer it just draws to buffers in autoback and at same time
1
325,685
9,934,661,341
IssuesEvent
2019-07-02 14:54:26
kubeapps/kubeapps
https://api.github.com/repos/kubeapps/kubeapps
closed
display update of chart and update of application
kind/feature priority/low
New feature proposal: Add a distinction between helm chart change and application change (and the degree of changement). I propose also that color of update ribbon notification give indication of changment level. According the classic rule 0.0.0: First digit is major update, Second is minor update, Third is patch update the color could relect this level An option could also propose to user to be only notified for application update
1.0
display update of chart and update of application - New feature proposal: Add a distinction between helm chart change and application change (and the degree of changement). I propose also that color of update ribbon notification give indication of changment level. According the classic rule 0.0.0: First digit is major update, Second is minor update, Third is patch update the color could relect this level An option could also propose to user to be only notified for application update
priority
display update of chart and update of application new feature proposal add a distinction between helm chart change and application change and the degree of changement i propose also that color of update ribbon notification give indication of changment level according the classic rule first digit is major update second is minor update third is patch update the color could relect this level an option could also propose to user to be only notified for application update
1
276,936
8,614,720,698
IssuesEvent
2018-11-19 18:19:59
GSA/caribou
https://api.github.com/repos/GSA/caribou
closed
When user clicks off of the main navigation, the navigation should return to default state
low-priority
**Describe the bug** When user clicks off of the main navigation, the nav should return to default state. Also, if a new page is loaded the nav should go back to its default state. **To Reproduce** Steps to reproduce the behavior: 1. Click on nav item 2. Click elsewhere on the screen 3. The nav stays visible **Expected behavior** Nav should hide
1.0
When user clicks off of the main navigation, the navigation should return to default state - **Describe the bug** When user clicks off of the main navigation, the nav should return to default state. Also, if a new page is loaded the nav should go back to its default state. **To Reproduce** Steps to reproduce the behavior: 1. Click on nav item 2. Click elsewhere on the screen 3. The nav stays visible **Expected behavior** Nav should hide
priority
when user clicks off of the main navigation the navigation should return to default state describe the bug when user clicks off of the main navigation the nav should return to default state also if a new page is loaded the nav should go back to its default state to reproduce steps to reproduce the behavior click on nav item click elsewhere on the screen the nav stays visible expected behavior nav should hide
1
369,574
10,914,971,571
IssuesEvent
2019-11-21 10:14:06
SuperblocksHQ/ethereum-studio
https://api.github.com/repos/SuperblocksHQ/ethereum-studio
closed
Download button in TopBar not working unless Preview window is opened
bug low-priority
<!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest beta version to make sure your issue has not already been fixed: https://studio.ethereum.org/ --> ## Environment/Browser Live/Firefox ## Description I guess title is self explanatory. ## Steps to reproduce 1. Open any project and keep the _Preview_ window open 2. Try clicking on _Download Project_ action from TopBar's left corner dropdown 3. It should open up modal 4. Now try to close the _Preview_ window and repeat step 2 5. _Download Project_ action is not working in this condition ## Expected result Have the download button disabled or fix it to work even without preview window ## Actual result As described. Silent failure. ## Reproducible 100%
1.0
Download button in TopBar not working unless Preview window is opened - <!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest beta version to make sure your issue has not already been fixed: https://studio.ethereum.org/ --> ## Environment/Browser Live/Firefox ## Description I guess title is self explanatory. ## Steps to reproduce 1. Open any project and keep the _Preview_ window open 2. Try clicking on _Download Project_ action from TopBar's left corner dropdown 3. It should open up modal 4. Now try to close the _Preview_ window and repeat step 2 5. _Download Project_ action is not working in this condition ## Expected result Have the download button disabled or fix it to work even without preview window ## Actual result As described. Silent failure. ## Reproducible 100%
priority
download button in topbar not working unless preview window is opened environment browser live firefox description i guess title is self explanatory steps to reproduce open any project and keep the preview window open try clicking on download project action from topbar s left corner dropdown it should open up modal now try to close the preview window and repeat step download project action is not working in this condition expected result have the download button disabled or fix it to work even without preview window actual result as described silent failure reproducible
1
547387
16042104118
IssuesEvent
2021-04-22 09:09:54
returntocorp/semgrep
https://api.github.com/repos/returntocorp/semgrep
closed
Invalid SARIF: invocation should be a child of run
alpha bug external-user feature:cli-output good first issue priority:low
**Describe the bug** According to the SARIF spec, `invocation` should be the child of a `run`: https://docs.oasis-open.org/sarif/sarif/v2.1.0/csprd01/sarif-v2.1.0-csprd01.html#_Toc10540933 Currently `build_sarif_output` is nesting it at the root of the document, which is producing SARIF which does not conform to the specification: https://github.com/returntocorp/semgrep/blob/9a73a142dccfa281999418510843886c3d260c71/semgrep/semgrep/output.py#L309 **To Reproduce** This can been seen by uploading the [test SARIF output](https://github.com/returntocorp/semgrep/blob/9a73a142dccfa281999418510843886c3d260c71/semgrep/tests/e2e/snapshots/test_check/test_sarif_output/results.sarif) to the online [SARIF validator](https://sarifweb.azurewebsites.net/Validation). ![image](https://user-images.githubusercontent.com/14410/113304853-532efe00-92fa-11eb-8c2d-dece6348a31b.png) **Expected behavior** Invocation should be moved up a couple of lines into the run section. :+1: :heart: **What is the priority of the bug to you?** P2 (regular bug that should get fixed).
1.0
Invalid SARIF: invocation should be a child of run - **Describe the bug** According to the SARIF spec, `invocation` should be the child of a `run`: https://docs.oasis-open.org/sarif/sarif/v2.1.0/csprd01/sarif-v2.1.0-csprd01.html#_Toc10540933 Currently `build_sarif_output` is nesting it at the root of the document, which is producing SARIF which does not conform to the specification: https://github.com/returntocorp/semgrep/blob/9a73a142dccfa281999418510843886c3d260c71/semgrep/semgrep/output.py#L309 **To Reproduce** This can been seen by uploading the [test SARIF output](https://github.com/returntocorp/semgrep/blob/9a73a142dccfa281999418510843886c3d260c71/semgrep/tests/e2e/snapshots/test_check/test_sarif_output/results.sarif) to the online [SARIF validator](https://sarifweb.azurewebsites.net/Validation). ![image](https://user-images.githubusercontent.com/14410/113304853-532efe00-92fa-11eb-8c2d-dece6348a31b.png) **Expected behavior** Invocation should be moved up a couple of lines into the run section. :+1: :heart: **What is the priority of the bug to you?** P2 (regular bug that should get fixed).
priority
invalid sarif invocation should be a child of run describe the bug according to the sarif spec invocation should be the child of a run currently build sarif output is nesting it at the root of the document which is producing sarif which does not conform to the specification to reproduce this can been seen by uploading the to the online expected behavior invocation should be moved up a couple of lines into the run section heart what is the priority of the bug to you regular bug that should get fixed
1
641516
20828716960
IssuesEvent
2022-03-19 04:00:42
encorelab/ck-board
https://api.github.com/repos/encorelab/ck-board
opened
Create finite canvas grid
enhancement low priority
On the canvas, convert the grid pattern from CSS to a finite fabric grid that is layered behind any background images
1.0
Create finite canvas grid - On the canvas, convert the grid pattern from CSS to a finite fabric grid that is layered behind any background images
priority
create finite canvas grid on the canvas convert the grid pattern from css to a finite fabric grid that is layered behind any background images
1
458,249
13,171,754,161
IssuesEvent
2020-08-11 17:13:12
woocommerce/woocommerce-gateway-amazon-pay
https://api.github.com/repos/woocommerce/woocommerce-gateway-amazon-pay
closed
Customers cannot checkout when Amazon's state does not match WC predefinitions
Priority: Low [Type] Bug
**Bug description** If the shipping address state set on Amazon does not match the values in the WC dropdown, customers are faced with an error regarding shipping selection and are not able to place the order. This happens because the `shipping_state` that comes from Amazon does not match the ones predefined in WooCommerce for the country selected. In the screenshot below, the address uses *Lazio*, while Woo expects some other values. **To Reproduce** Steps to reproduce the behavior: 1. Add a product to cart and Login with Amazon to checkout 2. Select an address that has a state not expected in WooCommerce (the one in the screenshot should work) 3. Try to place the order - it should fail with the message `No shipping method has been selected. Please double check your address, or contact us if you need any help`. **Screenshots** ![image](https://user-images.githubusercontent.com/7714042/77453709-597cc700-6dd6-11ea-9b17-baee22047fd0.png) **Expected behavior** Users should be able to checkout, be prompted for missing fields (#35 ) or get instructions to correct their addresses on the Amazon dashboard. **Isolating the problem (mark completed items with an [x]):** - [x] I have deactivated other plugins and confirmed this bug occurs when only the extension is active. - [x] I can reproduce this bug consistently using the steps above.
1.0
Customers cannot checkout when Amazon's state does not match WC predefinitions - **Bug description** If the shipping address state set on Amazon does not match the values in the WC dropdown, customers are faced with an error regarding shipping selection and are not able to place the order. This happens because the `shipping_state` that comes from Amazon does not match the ones predefined in WooCommerce for the country selected. In the screenshot below, the address uses *Lazio*, while Woo expects some other values. **To Reproduce** Steps to reproduce the behavior: 1. Add a product to cart and Login with Amazon to checkout 2. Select an address that has a state not expected in WooCommerce (the one in the screenshot should work) 3. Try to place the order - it should fail with the message `No shipping method has been selected. Please double check your address, or contact us if you need any help`. **Screenshots** ![image](https://user-images.githubusercontent.com/7714042/77453709-597cc700-6dd6-11ea-9b17-baee22047fd0.png) **Expected behavior** Users should be able to checkout, be prompted for missing fields (#35 ) or get instructions to correct their addresses on the Amazon dashboard. **Isolating the problem (mark completed items with an [x]):** - [x] I have deactivated other plugins and confirmed this bug occurs when only the extension is active. - [x] I can reproduce this bug consistently using the steps above.
priority
customers cannot checkout when amazon s state does not match wc predefinitions bug description if the shipping address state set on amazon does not match the values in the wc dropdown customers are faced with an error regarding shipping selection and are not able to place the order this happens because the shipping state that comes from amazon does not match the ones predefined in woocommerce for the country selected in the screenshot below the address uses lazio while woo expects some other values to reproduce steps to reproduce the behavior add a product to cart and login with amazon to checkout select an address that has a state not expected in woocommerce the one in the screenshot should work try to place the order it should fail with the message no shipping method has been selected please double check your address or contact us if you need any help screenshots expected behavior users should be able to checkout be prompted for missing fields or get instructions to correct their addresses on the amazon dashboard isolating the problem mark completed items with an i have deactivated other plugins and confirmed this bug occurs when only the extension is active i can reproduce this bug consistently using the steps above
1
417965
12190991268
IssuesEvent
2020-04-29 10:17:44
flatlify/flatlify
https://api.github.com/repos/flatlify/flatlify
closed
Prevent repeating in content-types and modified-files endpoint paths
Priority:Low bug
Currently, ContentTypes endpoint is server from `/content-types/content-types` and modified files from `/modified-files/modified-files`. Endpoints should not contain repeating paths
1.0
Prevent repeating in content-types and modified-files endpoint paths - Currently, ContentTypes endpoint is server from `/content-types/content-types` and modified files from `/modified-files/modified-files`. Endpoints should not contain repeating paths
priority
prevent repeating in content types and modified files endpoint paths currently contenttypes endpoint is server from content types content types and modified files from modified files modified files endpoints should not contain repeating paths
1
340,831
10,279,038,881
IssuesEvent
2019-08-25 19:26:40
joonaspaakko/ScriptUI-Dialog-Builder-Joonas
https://api.github.com/repos/joonaspaakko/ScriptUI-Dialog-Builder-Joonas
closed
Request: wrap dialog in its own namespace
.Enhancement :sparkles: .Status: In Queue Priority: Very Low
In order to make the exported dialogs more modular and to avoid name collisions, add an export option (or by default?) to wrap the exported code in its own function, and assign the result to a (user-defined) variable name. This is essentially a classic JavaScript module pattern and should fit into any developer's framework without needing to conform to specific bundlers, builders, etc. In order to make this truly useful, the following features should be in place: 1. The dialog's Window object is returned as the final result, without its `show` method called 2. All of the dialog's components... groups, panels, buttons, etc... need to be accessible from the namespace, eg. `namespace.panel1.group1.button1` so events and callbacks can be added. Proposed export: ```javascript var namespace = (function () { var dialog = new Window("dialog"); dialog.text = "Dialog"; dialog.orientation = "column"; dialog.alignChildren = ["center","top"]; dialog.spacing = 10; dialog.margins = 16; var button1 = dialog.add("button"); button1.text = "Button"; button1.justify = "center"; dialog.button1 = button1; //add some way to access via top-level namespace return dialog; }()); ``` This exported snippet can now be used from within the same script _or_ an external script, depending on the developer's needs and how they structure their code: ```javascript namespace.button1.onClick = function () { alert('clicked!'); namespace.close(100); }; var result = namespace.show(); ```
1.0
Request: wrap dialog in its own namespace - In order to make the exported dialogs more modular and to avoid name collisions, add an export option (or by default?) to wrap the exported code in its own function, and assigned to a (user-defined) variable name. This is essentially a classic JavaScript module pattern and should fit into any developer's framework without needing to conform to specific bundlers, builders, etc. In order to make this truly useful, the following features should be in place: 1. The dialog's Window object is returned as the final result, without its `show` method called 2. All of the dialog's components... groups, panels, buttons, etc... need to be accessible from the namespace, eg. `namespace.panel1.group1.button1` so events and callbacks can be added. Proposed export: ```javascript var namespace = (function () { var dialog = new Window("dialog"); dialog.text = "Dialog"; dialog.orientation = "column"; dialog.alignChildren = ["center","top"]; dialog.spacing = 10; dialog.margins = 16; var button1 = dialog.add("button"); button1.text = "Button"; button1.justify = "center"; dialog.button1 = button1; //add some way to access via top-level namespace return dialog; }()); ``` This exported snippet can now be used from within the same script _or_ an external script, depending on the developer's needs and how they structure their code: ```javascript namespace.button1.onClick = function () { alert('clicked!'); namespace.close(100); }; var result = namespace.show(); ```
priority
request wrap dialog in its own namespace in order to make the exported dialogs more modular and to avoid name collisions add an export option or by default to wrap the exported code in its own function and assigned to a user defined variable name this is essentially a classic javascript module pattern and should fit into any developer s framework without needing to conform to specific bundlers builders etc in order to make this truly useful the following features should be in place the dialog s window object is returned as the final result without its show method called all of the dialog s components groups panels buttons etc need to be accessible from the namespace eg namespace so events and callbacks can be added proposed export javascript var namespace function var dialog new window dialog dialog text dialog dialog orientation column dialog alignchildren dialog spacing dialog margins var dialog add button text button justify center dialog add some way to access via top level namespace return dialog this exported snippet can now be used from within the same script or an external script depending on the developer s needs and how they structure their code javascript namespace onclick function alert clicked namespace close var result namespace show
1