| repo (string) | org (string, nullable) | issue_id (int64) | issue_number (int64) | pull_request (dict) | events (list) | user_count (int64) | event_count (int64) | text_size (int64) | bot_issue (bool) | modified_by_bot (bool) | text_size_no_bots (int64) | modified_usernames (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
zulip/zulip | zulip | 302,021,721 | 8,566 | {
"number": 8566,
"repo": "zulip",
"user_login": "zulip"
} | [
{
"action": "opened",
"author": "shreyanshdwivedi",
"comment_id": null,
"datetime": 1520102328000,
"masked_author": "username_0",
"text": "Fixes #8547 \r\n\r\n**Testing Plan:** I used the developer tool and git grep to find area of the relevant code. Then, I used js to test different cases.\r\nI disabled the `Reply` button when the `stream` or `topic` or `pm` is not having any message, which restores its normal behaviour when it encounters the same with messages or when we go to `Home`,\r\n\r\n**GIFs or Screenshots:** \r\n",
"title": "compose: Reply button becomes inactive when there are no messages.",
"type": "issue"
}
] | 2 | 3 | 1,721 | false | true | 506 | false |
elasticsearch/elasticsearch-cloud-aws | elasticsearch | 31,986,353 | 76 | null | [
{
"action": "opened",
"author": "bodgit",
"comment_id": null,
"datetime": 1398182591000,
"masked_author": "username_0",
"text": "I'm trying to attach a remote (potentially non-EC2) Tribe node to an EC2 cluster. I raised this on the mailing list but the thread died after a bit of back and forth so I figured I would raise this here as I would still like to get this fixed.\r\n\r\nI've created two nodes in EC2 EU region with the following configuration which is as small as possible to illustrate the problem:\r\n\r\n```\r\nnetwork.publish_host: \"_ec2:publicDns_\"\r\ndiscovery.type: ec2\r\ndiscovery.ec2.groups: estest\r\ndiscovery.ec2.host_type: public_dns\r\ncloud.aws.region: \"eu-west-1\"\r\ncloud.aws.access_key: abc123\r\ncloud.aws.secret_key: s3cr3t\r\ncloud.node.auto_attributes: true\r\ndiscovery.zen.minimum_master_nodes: 2\r\ndiscovery.zen.ping.multicast.enabled: false\r\n```\r\n\r\nI'm using the public DNS host type because these resolve within EC2 to the private IP address but outside of EC2 return the public IP address which should theoretically mean an external node can join the cluster (the Tribe node in this case). Keeping traffic private within EC2 is meant to yield better performance avoiding the need to \"hairpin\" inter-cluster traffic as it would have to traverse the EC2 NAT layer if using the public IP addresses.\r\n\r\nBoth nodes have ES 1.1.1 and cloud-aws 2.1.1 installed. Both are members of an ```estest``` security group which has the following rules:\r\n\r\n| Type | Protocol | Port Range | Source |\r\n| ----- | ----- | ----- | ----- |\r\n| Custom TCP Rule | TCP | 9300 - 9399 | sg-feedbeef (estest) |\r\n| Custom ICMP Rule | Echo Request | N/A | sg-feedbeef (estest) |\r\n| SSH | TCP | 22 | 1.2.3.4/32 |\r\n| Custom TCP Rule | TCP | 9200 | sg-feedbeef (estest) |\r\n| Custom TCP Rule | TCP | 9200 | 1.2.3.4/32 |\r\n| Custom TCP Rule |TCP | 9300 - 9399 | 1.2.3.4/32 |\r\n| Custom ICMP Rule | Echo Request | N/A | 1.2.3.4/32 |\r\n\r\n```1.2.3.4``` is my IP address for external access. 
Both nodes have elastic IP addresses assigned to them so they are on \"well known\" external IP addresses for the purposes of the tribe node configuration.\r\n\r\nBoth nodes correctly find each other on startup and I get a basic two-node cluster. Even though I'm using ```discovery.ec2.host_type: public_dns``` because the external DNS names internally resolve to the private IP addresses within EC2 the cluster is happily using the private IP addresses to communicate.\r\n\r\nHowever, if I query one node using ```http://54.72.215.117:9200/_nodes/transport?pretty``` I get:\r\n\r\n```json\r\n{\r\n \"cluster_name\" : \"elasticsearch\",\r\n \"nodes\" : {\r\n \"0nMUUSkXSnqy35ttLEkcqA\" : {\r\n \"name\" : \"Numinus\",\r\n \"transport_address\" : \"inet[/172.31.12.62:9300]\",\r\n \"host\" : \"ip-172-31-12-62\",\r\n \"ip\" : \"172.31.12.62\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[ec2-54-72-137-131.eu-west-1.compute.amazonaws.com/172.31.12.62:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9300]\",\r\n \"publish_address\" : \"inet[/172.31.12.62:9300]\"\r\n }\r\n },\r\n \"-XlaNF-hSAi2U8tcpEFmug\" : {\r\n \"name\" : \"Howard the Duck\",\r\n \"transport_address\" : \"inet[ec2-54-72-215-117.eu-west-1.compute.amazonaws.com/172.31.12.61:9300]\",\r\n \"host\" : \"ip-172-31-12-61\",\r\n \"ip\" : \"172.31.12.61\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[ec2-54-72-215-117.eu-west-1.compute.amazonaws.com/172.31.12.61:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0:9300]\",\r\n \"publish_address\" : \"inet[ec2-54-72-215-117.eu-west-1.compute.amazonaws.com/172.31.12.61:9300]\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nNotice that only the node that is the one I directly queried (Howard 
the Duck) has the public DNS included in its transport address. If I query the other node (Numinus) via ```http://54.72.137.131:9200/_nodes/transport?pretty``` I get:\r\n\r\n```json\r\n{\r\n \"cluster_name\" : \"elasticsearch\",\r\n \"nodes\" : {\r\n \"0nMUUSkXSnqy35ttLEkcqA\" : {\r\n \"name\" : \"Numinus\",\r\n \"transport_address\" : \"inet[ec2-54-72-137-131.eu-west-1.compute.amazonaws.com/172.31.12.62:9300]\",\r\n \"host\" : \"ip-172-31-12-62\",\r\n \"ip\" : \"172.31.12.62\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[ec2-54-72-137-131.eu-west-1.compute.amazonaws.com/172.31.12.62:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0:9300]\",\r\n \"publish_address\" : \"inet[ec2-54-72-137-131.eu-west-1.compute.amazonaws.com/172.31.12.62:9300]\"\r\n }\r\n },\r\n \"-XlaNF-hSAi2U8tcpEFmug\" : {\r\n \"name\" : \"Howard the Duck\",\r\n \"transport_address\" : \"inet[/172.31.12.61:9300]\",\r\n \"host\" : \"ip-172-31-12-61\",\r\n \"ip\" : \"172.31.12.61\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[ec2-54-72-215-117.eu-west-1.compute.amazonaws.com/172.31.12.61:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9300]\",\r\n \"publish_address\" : \"inet[/172.31.12.61:9300]\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nNotice that the other node (Numinus) now has the correct transport address. 
I would expect *both* nodes to have the same style transport address when queried from *either* node.\r\n\r\nMy tribe node (also running ES 1.1.1) has the following minimal configuration:\r\n\r\n```yaml\r\ndiscovery.zen.ping.multicast.enabled: false\r\ntribe:\r\n dublin:\r\n cluster:\r\n name: elasticsearch\r\n discovery:\r\n zen:\r\n ping:\r\n unicast:\r\n hosts:\r\n - 54.72.215.117\r\n - 54.72.137.131\r\n```\r\n\r\nThe tribe node correctly connects to the external IP address and (I'm guessing) retrieves the node list and then promptly tries to connect to the private IP addresses, and fails.\r\n\r\nNow I can change the EC2 cluster configuration to use the public IP addresses like so:\r\n\r\n```\r\nnetwork.publish_host: \"_ec2:publicIp_\"\r\ndiscovery.type: ec2\r\ndiscovery.ec2.groups: estest\r\ndiscovery.ec2.host_type: public_ip\r\ncloud.aws.region: \"eu-west-1\"\r\ncloud.aws.access_key: abc123\r\ncloud.aws.secret_key: s3cr3t\r\ncloud.node.auto_attributes: true\r\ndiscovery.zen.minimum_master_nodes: 2\r\ndiscovery.zen.ping.multicast.enabled: false\r\n```\r\n\r\nHowever the cluster won't associate until I add the following additional rules to the ```estest``` security group:\r\n\r\n| Type | Protocol | Port Range | Source |\r\n| ----- | ----- | ----- | ----- |\r\n| Custom TCP Rule | TCP | 9300 - 9399 | 54.72.137.131/32 |\r\n| Custom TCP Rule | TCP | 9300 - 9399 | 54.72.215.117/32 |\r\n\r\nThis isn't good as I can't easily scale the cluster without adding the public IP address of each node to the security group and also the performance is worse as every inter-cluster connection is hairpinning through the EC2 NAT layer.\r\n\r\nWith the cluster up, I get consistent output from ```http://54.72.215.117:9200/_nodes/transport?pretty```, e.g.\r\n\r\n```json\r\n{\r\n \"cluster_name\" : \"elasticsearch\",\r\n \"nodes\" : {\r\n \"L4rjg5JNR1yuk-kU_eAthw\" : {\r\n \"name\" : \"Phil Urich\",\r\n \"transport_address\" : \"inet[/54.72.137.131:9300]\",\r\n \"host\" : 
\"ip-172-31-12-62\",\r\n \"ip\" : \"172.31.12.62\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[/54.72.137.131:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9300]\",\r\n \"publish_address\" : \"inet[/54.72.137.131:9300]\"\r\n }\r\n },\r\n \"WvFPkVyfSi2dpHJ4k3yk8w\" : {\r\n \"name\" : \"Blue Streak\",\r\n \"transport_address\" : \"inet[/54.72.215.117:9300]\",\r\n \"host\" : \"ip-172-31-12-61\",\r\n \"ip\" : \"172.31.12.61\",\r\n \"version\" : \"1.1.1\",\r\n \"build\" : \"f1585f0\",\r\n \"http_address\" : \"inet[/54.72.215.117:9200]\",\r\n \"attributes\" : {\r\n \"aws_availability_zone\" : \"eu-west-1a\"\r\n },\r\n \"transport\" : {\r\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0:9300]\",\r\n \"publish_address\" : \"inet[/54.72.215.117:9300]\"\r\n }\r\n }\r\n }\r\n}\r\n```\r\n\r\nI can then attach a tribe node successfully.\r\n\r\nThere's a slight issue in that the tribe node must be either directly attached to the internet or have a 1:1 NAT with the external address set as the value for ```network.publish_host``` so that the EC2 cluster can talk back to the tribe node, but that's not directly related to this issue.\r\n\r\nIt may be that a VPN is the only viable solution so you should just use private IP addresses everywhere and let the IPv4 routing layer deal with everything, or use IPv6, and then this issue goes away but the setting as it currently stands seems of little use.",
"title": "Using \"_ec2:publicDns_\" for network.publish_host doesn't seem to work properly",
"type": "issue"
},
{
"action": "created",
"author": "rparkhunovsky",
"comment_id": 74238540,
"datetime": 1423825871000,
"masked_author": "username_1",
"text": "The issue is pretty actual for cross-region cloud environments and appears a blocker for usage as a feature.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ssuprun",
"comment_id": 74876599,
"datetime": 1424271303000,
"masked_author": "username_2",
"text": "I'm not sure if my change will fix this issue, but any way it looks like bug.\r\nCould somebody review it - https://github.com/elasticsearch/elasticsearch-cloud-aws/pull/175",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "clintongormley",
"comment_id": 96428498,
"datetime": 1430078092000,
"masked_author": "username_3",
"text": "@dadoonet with #175 merged, should this be closed? and maybe https://github.com/elastic/elasticsearch/issues/6333 too?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rparkhunovsky",
"comment_id": 96518694,
"datetime": 1430115778000,
"masked_author": "username_1",
"text": "Agree for closing #76. But tested the changes with elastic/elasticsearch#6333 - it doesn't resolve the issue anyway.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "clintongormley",
"comment_id": 96561295,
"datetime": 1430123525000,
"masked_author": "username_3",
"text": "thanks @username_1 - closing this one",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "taraslayshchuk",
"comment_id": 137160252,
"datetime": 1441211612000,
"masked_author": "username_4",
"text": "Hi. I get in trouble with configurating 2 nodes in aws cloud in tribute cluster.\r\n\r\nConfig file of first node:\r\n```\r\ncluster.name: es_one\r\nnode.name: node-t1\r\n\r\nplugin.mandatory: \"cloud-aws\"\r\n\r\n\r\n\r\nnetwork.bound_address: \"_ec2:privateDns_\"\r\nnetwork.public_host: \"_ec2:publicDns_\"\r\n\r\n\r\ntransport.bind_host: \"_ec2:privateDns_\"\r\ntransport.publish_host: \"_ec2:publicDns_\"\r\n\r\ndiscovery.zen.minimum_master_nodes: 1\r\ndiscovery.zen.ping.timeout: 10s\r\ndiscovery.zen.ping.multicast.enabled: false\r\n\r\n\r\ncloud:\r\n aws:\r\n access_key: key\r\n secret_key: value\r\n\r\ncloud.aws.region: \"us-east-1\"\r\n\r\ndiscovery.type: ec2\r\n```\r\nLog output from first node:\r\n```log\r\n[2015-09-02 15:06:18,828][INFO ][node ] [node-t1] version[1.7.1], pid[14686], build[b88f43f/2015-07-29T09:54:16Z]\r\n[2015-09-02 15:06:18,828][INFO ][node ] [node-t1] initializing ...\r\n[2015-09-02 15:06:18,925][INFO ][plugins ] [node-t1] loaded [cloud-aws], sites []\r\n[2015-09-02 15:06:18,987][INFO ][env ] [node-t1] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [5.3gb], net total_space [7.7gb], types [ext4]\r\n[2015-09-02 15:06:22,716][INFO ][node ] [node-t1] initialized\r\n[2015-09-02 15:06:22,716][INFO ][node ] [node-t1] starting ...\r\n[2015-09-02 15:06:22,790][INFO ][transport ] [node-t1] bound_address {inet[/172.31.13.137:9301]}, publish_address {inet[ec2-52-3-85-143.compute-1.amazonaws.com/172.31.13.137:9301]}\r\n[2015-09-02 15:06:22,808][INFO ][discovery ] [node-t1] es_one/qwcnD8D2QEest37d39onow\r\n[2015-09-02 15:06:33,992][INFO ][cluster.service ] [node-t1] new_master [node-t1][qwcnD8D2QEest37d39onow][ip-172-31-13-137][inet[ec2-52-3-85-143.compute-1.amazonaws.com/172.31.13.137:9301]], reason: zen-disco-join (elected_as_master)\r\n[2015-09-02 15:06:34,021][INFO ][http ] [node-t1] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.31.13.137:9200]}\r\n[2015-09-02 15:06:34,022][INFO ][node ] 
[node-t1] started\r\n[2015-09-02 15:06:34,029][INFO ][gateway ] [node-t1] recovered [0] indices into cluster_state\r\n```\r\nConfig file of second node:\r\n```\r\ncluster.name: es_two\r\nnode.name: node-t2\r\n\r\nplugin.mandatory: \"cloud-aws\"\r\n\r\nnetwork.host: \"_ec2:publicDns_\"\r\n\r\nnetwork.bind_host: \"_ec2:publicDns_\"\r\nnetwork.public_host: \"_ec2:publicDns_\"\r\n\r\ntransport.host: \"_ec2:publicDns_\"\r\ntransport.bind_host: \"_ec2:publicDns_\"\r\ntransport.publish_host: \"_ec2:publicDns_\"\r\n\r\ndiscovery.zen.minimum_master_nodes: 1\r\ndiscovery.zen.ping.timeout: 10s\r\ndiscovery.zen.ping.multicast.enabled: false\r\n\r\n\r\ncloud:\r\n aws:\r\n access_key: key\r\n secret_key: value\r\n\r\ncloud.aws.region: \"us-west-2\"\r\n\r\ndiscovery.type: ec2\r\n```\r\nLog output from second node:\r\n```\r\n[2015-09-02 15:15:09,871][INFO ][node ] [node-t2] version[1.7.1], pid[2768], build[b88f43f/2015-07-29T09:54:16Z]\r\n[2015-09-02 15:15:09,872][INFO ][node ] [node-t2] initializing ...\r\n[2015-09-02 15:15:09,979][INFO ][plugins ] [node-t2] loaded [cloud-aws], sites []\r\n[2015-09-02 15:15:10,051][INFO ][env ] [node-t2] using [1] data paths, mounts [[/ (/dev/xvda1)]], net usable_space [5.4gb], net total_space [7.7gb], types [ext4]\r\n[2015-09-02 15:15:13,801][INFO ][node ] [node-t2] initialized\r\n[2015-09-02 15:15:13,802][INFO ][node ] [node-t2] starting ...\r\n[2015-09-02 15:15:13,884][INFO ][transport ] [node-t2] bound_address {inet[/192.168.1.155:9301]}, publish_address {inet[ec2-52-88-78-52.us-west-2.compute.amazonaws.com/192.168.1.155:9301]}\r\n[2015-09-02 15:15:13,908][INFO ][discovery ] [node-t2] es_two/UIBV-NoFT8-YNPzIiqO8Jw\r\n[2015-09-02 15:15:25,380][INFO ][cluster.service ] [node-t2] new_master [node-t2][UIBV-NoFT8-YNPzIiqO8Jw][ip-192-168-1-155][inet[ec2-52-88-78-52.us-west-2.compute.amazonaws.com/192.168.1.155:9301]], reason: zen-disco-join (elected_as_master)\r\n[2015-09-02 15:15:25,407][INFO ][http ] [node-t2] bound_address 
{inet[/192.168.1.155:9200]}, publish_address {inet[ec2-52-88-78-52.us-west-2.compute.amazonaws.com/192.168.1.155:9200]}\r\n[2015-09-02 15:15:25,407][INFO ][node ] [node-t2] started\r\n[2015-09-02 15:15:25,408][INFO ][gateway ] [node-t2] recovered [0] indices into cluster_state\r\n```\r\nLocal node config file:\r\n```\r\nnode.name: cluster\r\n\r\n\r\nplugin.mandatory: \"cloud-aws\"\r\n\r\ncloud:\r\n aws:\r\n access_key: key\r\n secret_key: value\r\n\r\nnetwork.host: my.localhost.dns\r\n\r\n\r\nnetwork.bind_host: my.localhost.dns\r\n\r\nnetwork.public_host: my.localhost.dns\r\n\r\n\r\ntransport.host: my.localhost.dns\r\n\r\ntransport.bind_host: my.localhost.dns\r\n\r\ntransport.publish_host: my.localhost.dns\r\n\r\n\r\ndiscovery.zen.minimum_master_nodes: 1\r\ndiscovery.zen.ping.timeout: 10s\r\ndiscovery.zen.ping.multicast.enabled: false\r\n\r\ntribe:\r\n es_one:\r\n cluster.name: es_one\r\n discovery.zen.ping.multicast.enabled: false\r\n discovery.type: ec2\r\n discovery.ec2.host_type: public_dns\r\n discovery.ec2.groups: sg-7b3c461c\r\n\r\n\r\n es_two:\r\n cluster.name: es_two\r\n discovery.zen.ping.multicast.enabled: false\r\n discovery.type: ec2\r\n discovery.ec2.host_type: public_dns\r\n discovery.ec2.groups: sg-8e55e0ea\r\n\r\n\r\n\r\nlogger:\r\n level: DEBUG\r\n\r\ndiscovery.type: ec2\r\n```\r\nLocal node log file output:\r\n```\r\n[2015-09-02 18:09:34,089][INFO ][node ] [cluster] version[1.7.1], pid[17251], build[b88f43f/2015-07-29T09:54:16Z]\r\n[2015-09-02 18:09:34,089][INFO ][node ] [cluster] initializing ...\r\n[2015-09-02 18:09:34,161][INFO ][plugins ] [cluster] loaded [cloud-aws], sites [bigdesk, head]\r\n[2015-09-02 18:09:35,998][INFO ][node ] [cluster/es_one] version[1.7.1], pid[17251], build[b88f43f/2015-07-29T09:54:16Z]\r\n[2015-09-02 18:09:35,998][INFO ][node ] [cluster/es_one] initializing ...\r\n[2015-09-02 18:09:35,998][INFO ][plugins ] [cluster/es_one] loaded [cloud-aws], sites []\r\n[2015-09-02 18:09:37,157][INFO ][node ] [cluster/es_one] 
initialized\r\n[2015-09-02 18:09:37,159][INFO ][node ] [cluster/es_two] version[1.7.1], pid[17251], build[b88f43f/2015-07-29T09:54:16Z]\r\n[2015-09-02 18:09:37,159][INFO ][node ] [cluster/es_two] initializing ...\r\n[2015-09-02 18:09:37,160][INFO ][plugins ] [cluster/es_two] loaded [cloud-aws], sites []\r\n[2015-09-02 18:09:37,845][INFO ][node ] [cluster/es_two] initialized\r\n[2015-09-02 18:09:37,857][INFO ][node ] [cluster] initialized\r\n[2015-09-02 18:09:37,857][INFO ][node ] [cluster] starting ...\r\n[2015-09-02 18:09:37,919][INFO ][transport ] [cluster] bound_address {inet[/10.131.72.169:9301]}, publish_address {inet[tlaispc.ddns.softservecom.com/10.131.72.169:9301]}\r\n[2015-09-02 18:09:37,925][INFO ][discovery ] [cluster] elasticsearch/xFarBkexRqKrBw6gezgPdA\r\n[2015-09-02 18:09:37,925][WARN ][discovery ] [cluster] waited for 0s and no initial state was set by the discovery\r\n[2015-09-02 18:09:37,929][INFO ][http ] [cluster] bound_address {inet[/10.131.72.169:9200]}, publish_address {inet[tlaispc.ddns.softservecom.com/10.131.72.169:9200]}\r\n[2015-09-02 18:09:37,929][INFO ][node ] [cluster/es_one] starting ...\r\n[2015-09-02 18:09:37,938][INFO ][transport ] [cluster/es_one] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.131.72.169:9302]}\r\n[2015-09-02 18:09:37,942][INFO ][discovery ] [cluster/es_one] es_one/DohOLLs9QwqCk4eKt9VDFw\r\n[2015-09-02 18:10:07,942][WARN ][discovery ] [cluster/es_one] waited for 30s and no initial state was set by the discovery\r\n[2015-09-02 18:10:07,943][INFO ][node ] [cluster/es_one] started\r\n[2015-09-02 18:10:07,943][INFO ][node ] [cluster/es_two] starting ...\r\n[2015-09-02 18:10:07,951][INFO ][transport ] [cluster/es_two] bound_address {inet[/0:0:0:0:0:0:0:0:9303]}, publish_address {inet[/10.131.72.169:9303]}\r\n[2015-09-02 18:10:07,953][INFO ][discovery ] [cluster/es_two] es_two/IJVyDR_lRuOoE3bqlSnpGQ\r\n[2015-09-02 18:10:37,953][WARN ][discovery ] [cluster/es_two] waited for 30s and no initial 
state was set by the discovery\r\n[2015-09-02 18:10:37,954][INFO ][node ] [cluster/es_two] started\r\n[2015-09-02 18:10:37,954][INFO ][node ] [cluster] started\r\n```\r\nSo what we have? Nodes see each other, but did not make cluster. I don`t know why, but cluster.service did not start before discovery. Can you help me please - what I am doing wrong?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Pryz",
"comment_id": 145813489,
"datetime": 1444126952000,
"masked_author": "username_5",
"text": "I'm also hitting this bug. Any news on this ?\r\n\r\nelasticsearch-cloud-aws : 2.7.1\r\nelasticsearch : 1.7.1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "toleksyn",
"comment_id": 146087812,
"datetime": 1444198235000,
"masked_author": "username_6",
"text": "Also encountered this one with es 1.6. Please fix.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RaffaelloBertini",
"comment_id": 156400768,
"datetime": 1447413368000,
"masked_author": "username_7",
"text": "@username_4 you have different cluster name among the nodes, that way they didn't build a cluster.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Raffaello",
"comment_id": 156400914,
"datetime": 1447413399000,
"masked_author": "username_8",
"text": "@username_4 you have different cluster name among the nodes, that way they didn't build a cluster.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "bodgit",
"comment_id": null,
"datetime": 1647863823000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 9 | 12 | 18,737 | false | false | 18,737 | true |
AprilRobotics/apriltag_ros | AprilRobotics | 320,487,305 | 10 | null | [
{
"action": "opened",
"author": "mickeyouyou",
"comment_id": null,
"datetime": 1525505193000,
"masked_author": "username_0",
"text": "`catkin_make_isolated` on Raspberry Pi result:\r\n```\r\n[ 66%] Linking CXX shared library /home/hdwl/catkin_ws/devel_isolated/apriltags2_ros/lib/libcommon.so\r\n[ 66%] Built target common\r\nScanning dependencies of target apriltags2_ros_single_image_client_node\r\nScanning dependencies of target continuous_detector\r\nScanning dependencies of target single_image_detector\r\n[ 70%] Building CXX object CMakeFiles/continuous_detector.dir/src/continuous_detector.cpp.o\r\n[ 73%] Building CXX object CMakeFiles/apriltags2_ros_single_image_client_node.dir/src/apriltags2_ros_single_image_client_node.cpp.o\r\n[ 76%] Building CXX object CMakeFiles/single_image_detector.dir/src/single_image_detector.cpp.o\r\nc++: internal compiler error: Killed (program cc1plus)\r\nPlease submit a full bug report,\r\nwith preprocessed source if appropriate.\r\nSee <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.\r\nCMakeFiles/single_image_detector.dir/build.make:62: recipe for target 'CMakeFiles/single_image_detector.dir/src/single_image_detector.cpp.o' failed\r\nmake[2]: *** [CMakeFiles/single_image_detector.dir/src/single_image_detector.cpp.o] Error 4\r\nCMakeFiles/Makefile2:1387: recipe for target 'CMakeFiles/single_image_detector.dir/all' failed\r\nmake[1]: *** [CMakeFiles/single_image_detector.dir/all] Error 2\r\nmake[1]: *** Waiting for unfinished jobs....\r\n```\r\n\r\ncould you give some advise to build it ?",
"title": "issue on Raspberry Pi ",
"type": "issue"
},
{
"action": "closed",
"author": "wxmerkt",
"comment_id": null,
"datetime": 1557739397000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 1,381 | false | false | 1,381 | false |
chrismattmann/tika-python | null | 349,393,425 | 193 | null | [
{
"action": "opened",
"author": "manmohan556",
"comment_id": null,
"datetime": 1533883018000,
"masked_author": "username_0",
"text": "Is it possible to configure google api vision with tika .If yes the please help what steps i need to follow.",
"title": "About Configuration",
"type": "issue"
},
{
"action": "created",
"author": "chrismattmann",
"comment_id": 412289921,
"datetime": 1534008413000,
"masked_author": "username_1",
"text": "Duplicate of https://github.com/username_1/tika-python/issues/192",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "chrismattmann",
"comment_id": null,
"datetime": 1534008414000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 176 | false | false | 176 | true |
landportal/drupal | landportal | 218,428,815 | 276 | null | [
{
"action": "opened",
"author": "lisettemeij",
"comment_id": null,
"datetime": 1490948003000,
"masked_author": "username_0",
"text": "Hi,\r\n\r\nIf you go to the 'Contact us' menu item in the drop down menu on the about us page, you are directed to a page where the brown menu bar disappears:\r\n\r\nhttps://landportal.info/about\r\n\r\n\r\nhttps://landportal.info/webform/2016/04/contact-form\r\n\r\n\r\n\r\nIs it possible to include the brown menu bar to the contact form page? \r\n\r\n(and perhaps change the url to something more permanent, not including the date - I thought that was done at some point already?)",
"title": "About us menu - menu bar disappears on Contact form page",
"type": "issue"
},
{
"action": "closed",
"author": "jcr",
"comment_id": null,
"datetime": 1490953612000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "jcr",
"comment_id": 290668840,
"datetime": 1490953612000,
"masked_author": "username_1",
"text": "the link was incorrect, and shouldn't have brought to this page.\r\nfixed.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 808 | false | false | 808 | false |
MylesIsCool/ViaVersion | null | 374,657,895 | 1,061 | {
"number": 1061,
"repo": "ViaVersion",
"user_login": "MylesIsCool"
} | [
{
"action": "opened",
"author": "ForceUpdate1",
"comment_id": null,
"datetime": 1540654551000,
"masked_author": "username_0",
"text": "Currently works:\r\n- [x] Chests\r\n- [x] Glass\r\n- [x] Melon & Pumpkins\r\n- [x] Fences & Walls\r\n- [x] Flowers\r\n- [x] Doors\r\n\r\nImplemented but not finished:\r\n- [ ] Stairs\r\n- [ ] Redstone\r\n\r\n**Currently BungeeCord is not supported**",
"title": "[WIP] Block connections for 1.13+",
"type": "issue"
},
{
"action": "created",
"author": "TNTUP",
"comment_id": 433749805,
"datetime": 1540766983000,
"masked_author": "username_1",
"text": "Using Gerrygames's #4 build and having memory leaks what @PinkLolicorn said in https://github.com/MylesIsCool/ViaVersion/issues/856#issuecomment-433719471 so gotta live with that till it gets fixed/merged as reverting to an official one without the connectable blocks would annoy players again xD Keep the good work!!! Love Viaversion <3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 433845330,
"datetime": 1540805956000,
"masked_author": "username_0",
"text": "@zedwick The reason for this is that we need all the blocks from the world to connect the blocks. At first we read the blocks from the chunk packets. These were then saved for each player that has led to a high ram and server utilization. Now we've changed it to get the blocks out of the world. We will probably add the packet level support to the bungee again. But it is still not recommended to use it",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 433845975,
"datetime": 1540806088000,
"masked_author": "username_0",
"text": "@username_1 I'll take a look at it later, but actually we do not save anything in the UserConnection anymore. Are you sure you have the latest version of jenkins?\r\n\r\nhttps://ci.viaversion.com/job/ViaVersion-Block-Connections-WIP/",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TNTUP",
"comment_id": 433855344,
"datetime": 1540807919000,
"masked_author": "username_1",
"text": "@username_0 I've already tried this one, blocks are no longer connected, like if I was using the official ViaVersion plugin. Doors are still messed, chests are no longer connected, glass panes and so on. Gerrygames's fork worked in the box from a 3rd party link that got deleted but I had memory leaks so forced to downgrade to the best good version which is stable",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TNTUP",
"comment_id": 433862562,
"datetime": 1540809407000,
"masked_author": "username_1",
"text": "You're calling me an idiot??? No where told me to enable that setting, and I don't run bungeecord as I don't trust that. Shame I cannot report your comment",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "toxamin",
"comment_id": 433866263,
"datetime": 1540810168000,
"masked_author": "username_2",
"text": "blind https://i.imgur.com/r98lV0X.png",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TNTUP",
"comment_id": 434155715,
"datetime": 1540868824000,
"masked_author": "username_1",
"text": "I've reproduced the Door bug again. On my 1.13.1 client, opening/closing doors are fine. If another player on 1.12.2 or 1.13.1 closes the door, on his/her side is fine, but for other players on 1.13.1 the door is bugged\r\n\r\nUsing the #4 from the CI https://i.qcfb.ca/unknowndasfdsfsdf.png I can try other possible sides for doors but I guess it can be fixed for possible door patterns^^",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "StormyIceLeopard",
"comment_id": 435690656,
"datetime": 1541353821000,
"masked_author": "username_3",
"text": "I just downloaded the latest build for 1.6.1-SNAPSHOT for a 1.12.2 Spigot Server on BungeeCord using SubServers2 and I still get block connection issues on client 1.13.2\r\n\r\nThere I hope I helped with the testing.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 436239952,
"datetime": 1541508302000,
"masked_author": "username_0",
"text": "For BungeeCord Support progess follow: https://github.com/Gerrygames/ViaVersion/pull/11",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 436588494,
"datetime": 1541588816000,
"masked_author": "username_0",
"text": "Bungeecord is now also supported from build #8\r\n\r\nhttps://ci.viaversion.com/job/ViaVersion-Block-Connections-WIP/",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 437090726,
"datetime": 1541698852000,
"masked_author": "username_0",
"text": "@KennyTV fixed the last problem with Redstone.\r\nIf something else is missing, please report.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "StormyIceLeopard",
"comment_id": 437183282,
"datetime": 1541717012000,
"masked_author": "username_3",
"text": "Still have block connection issues.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 437185697,
"datetime": 1541717599000,
"masked_author": "username_0",
"text": "For example ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "StormyIceLeopard",
"comment_id": 437223057,
"datetime": 1541728611000,
"masked_author": "username_3",
"text": "Stairs and glass panes are still broken.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "StormyIceLeopard",
"comment_id": 437223366,
"datetime": 1541728716000,
"masked_author": "username_3",
"text": "[\r\n\r\n\r\n\r\n\r\n\r\n](url)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForceUpdate1",
"comment_id": 437265079,
"datetime": 1541745442000,
"masked_author": "username_0",
"text": "Have you used this latest build https://ci.viaversion.com/job/ViaVersion-Block-Connections-WIP/ and enabled serverside-blockconnetion in the ViaVersion config",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "StormyIceLeopard",
"comment_id": 437427978,
"datetime": 1541783480000,
"masked_author": "username_3",
"text": "Was unaware of the config setting.\r\nWorking nicely now.\r\nThanks.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TechieBlort",
"comment_id": 437813698,
"datetime": 1542015138000,
"masked_author": "username_4",
"text": "I don't see a serverside-blockconnection option in the config. Am I missing something, or do I add it myself?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "krusic22",
"comment_id": 437814129,
"datetime": 1542015217000,
"masked_author": "username_5",
"text": "Make sure you are using the ViaVersion build from the link above. Then run it once and it will automatically add the config option.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TechieBlort",
"comment_id": 437814250,
"datetime": 1542015240000,
"masked_author": "username_4",
"text": "I just found the setting.",
"title": null,
"type": "comment"
}
] | 6 | 21 | 3,595 | false | false | 3,595 | true |
brunch/jshint-brunch | brunch | 388,360,938 | 30 | {
"number": 30,
"repo": "jshint-brunch",
"user_login": "brunch"
} | [
{
"action": "opened",
"author": "gcatto",
"comment_id": null,
"datetime": 1544124255000,
"masked_author": "username_0",
"text": "",
"title": "Updating dependencies in package.json to address npm vulnerability advisories",
"type": "issue"
},
{
"action": "created",
"author": "paulmillr",
"comment_id": 445420144,
"datetime": 1544233973000,
"masked_author": "username_1",
"text": "this broke tests",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gcatto",
"comment_id": 454441939,
"datetime": 1547567497000,
"masked_author": "username_0",
"text": "@username_1 this is passing, can you merge it in?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gcatto",
"comment_id": 455260532,
"datetime": 1547746544000,
"masked_author": "username_0",
"text": "@username_1 thanks! Any chance of a v2.0.1 being published to npm?",
"title": null,
"type": "comment"
}
] | 2 | 4 | 125 | false | false | 125 | true |
lektor/lektor-website | lektor | 174,651,113 | 118 | {
"number": 118,
"repo": "lektor-website",
"user_login": "lektor"
} | [
{
"action": "opened",
"author": "go-bears",
"comment_id": null,
"datetime": 1472773522000,
"masked_author": "username_0",
"text": "explicitly name which file to add target deployment server information",
"title": "update fork & improve documentation",
"type": "issue"
},
{
"action": "created",
"author": "nixjdm",
"comment_id": 358473042,
"datetime": 1516228635000,
"masked_author": "username_1",
"text": "I can't pull it with the changes to our Website.lektorproject file, and it doesn't look like I can edit it back. I'm fine with the other changes, though. If you want to revert those changes, feel free to resubmit this, but I'm closing this because of age.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 322 | false | false | 322 | false |
home-assistant/home-assistant | home-assistant | 396,207,198 | 19,807 | null | [
{
"action": "opened",
"author": "abhinavsingh",
"comment_id": null,
"datetime": 1546728539000,
"masked_author": "username_0",
"text": "<!-- READ THIS FIRST:\r\n- If you need additional help with this template please refer to https://www.home-assistant.io/help/reporting_issues/\r\n- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/home-assistant/releases\r\n- Frontend issues should be submitted to the home-assistant-polymer repository: https://github.com/home-assistant/home-assistant-polymer/issues\r\n- Do not report issues for components if you are using custom components: files in <config-dir>/custom_components\r\n- This is for bugs only. Feature and enhancement requests should go in our community forum: https://community.home-assistant.io/c/feature-requests\r\n- Provide as many details as possible. Paste logs, configuration sample and code into the backticks. Do not delete any text from this template!\r\n-->\r\n\r\n**Home Assistant release with the issue:**\r\n<!--\r\n- Frontend -> Developer tools -> Info\r\n- Or use this command: hass --version\r\n-->\r\n\r\n0.84.6\r\n\r\n**Last working Home Assistant release (if known):**\r\n\r\nNone\r\n\r\n**Operating environment (Hass.io/Docker/Windows/etc.):**\r\n<!--\r\nPlease provide details about your environment.\r\n-->\r\n\r\nmacOS Mojave 10.18.2\r\nPython 3.7.1\r\npip 10.0.1\r\n\r\nCommands used to setup hass:\r\n\r\n$ python3 -m venv .\r\n$ source bin/activate\r\n$ pip3 install homeassistant\r\n$ hass --open-ui\r\n.... 
no ui, exception on console ...\r\n\r\n**Component/platform:**\r\n<!--\r\nPlease add the link to the documentation at https://www.home-assistant.io/components/ of the component/platform in question.\r\n-->\r\n\r\nNone, UI doesn't show up.\r\n\r\n**Description of problem:**\r\n\r\n**Problem-relevant `configuration.yaml` entries and (fill out even if it seems unimportant):**\r\n\r\nNo changes done here, using default config.\r\n\r\n```yaml\r\nhomeassistant:\r\n # Name of the location where Home Assistant is running\r\n name: Home\r\n # Location required to calculate the time the sun rises and sets\r\n latitude: 0\r\n longitude: 0\r\n # Impacts weather/sunrise data (altitude above sea level in meters)\r\n elevation: 0\r\n # metric for Metric, imperial for Imperial\r\n unit_system: metric\r\n # Pick yours from here: http://en.wikipedia.org/wiki/List_of_tz_database_time_zones\r\n time_zone: UTC\r\n # Customization file\r\n customize: !include customize.yaml\r\n\r\n# Show links to resources in log and frontend\r\nintroduction:\r\n\r\n# Enables the frontend\r\nfrontend:\r\n\r\n# Enables configuration UI\r\nconfig:\r\n\r\n# Uncomment this if you are using SSL/TLS, running in Docker container, etc.\r\n# http:\r\n# base_url: example.duckdns.org:8123\r\n\r\n# Checks for available updates\r\n# Note: This component will send some information about your system to\r\n# the developers to assist with development of Home Assistant.\r\n# For more information, please see:\r\n# https://home-assistant.io/blog/2016/10/25/explaining-the-updater/\r\nupdater:\r\n # Optional, allows Home Assistant developers to focus on popular components.\r\n # include_used_components: true\r\n\r\n# Discover some devices automatically\r\ndiscovery:\r\n\r\n# Allows you to issue voice commands from the frontend in enabled browsers\r\nconversation:\r\n\r\n# Enables support for tracking state changes over time\r\nhistory:\r\n\r\n# View all events in a logbook\r\nlogbook:\r\n\r\n# Enables a map showing the 
location of tracked devices\r\nmap:\r\n\r\n# Track the sun\r\nsun:\r\n\r\n# Sensors\r\nsensor:\r\n # Weather prediction\r\n - platform: yr\r\n\r\n# Text to speech\r\ntts:\r\n - platform: google\r\n\r\n# Cloud\r\ncloud:\r\n\r\ngroup: !include groups.yaml\r\nautomation: !include automations.yaml\r\nscript: !include scripts.yaml\r\n```\r\n\r\n**Traceback (if applicable):**\r\n```\r\n2019-01-05 14:35:10 ERROR (MainThread) [homeassistant.components.media_player] Error while setting up platform songpal\r\nTraceback (most recent call last):\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/homeassistant/helpers/entity_platform.py\", line 128, in _async_setup_platform\r\n SLOW_SETUP_MAX_WAIT, loop=hass.loop)\r\n File \"/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py\", line 416, in wait_for\r\n return fut.result()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/homeassistant/components/media_player/songpal.py\", line 68, in async_setup_platform\r\n await device.initialize()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/homeassistant/components/media_player/songpal.py\", line 122, in initialize\r\n self._sysinfo = await self.dev.get_system_info()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/songpal/device.py\", line 174, in get_system_info\r\n return Sysinfo.make(**await self.services[\"system\"][\"getSystemInformation\"]())\r\nKeyError: 'system'\r\n```\r\n\r\n**Additional information:**",
"title": "No UI: KeyError: 'system'",
"type": "issue"
},
{
"action": "created",
"author": "ReneNulschDE",
"comment_id": 451727958,
"datetime": 1546767250000,
"masked_author": "username_1",
"text": "looks like the songpal discovery is a problem here. First let us try to disable the discovery of songpal devices. Change your config and add the ignore part.\r\n```\r\ndiscovery:\r\n ignore:\r\n - songpal\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "abhinavsingh",
"comment_id": 451768012,
"datetime": 1546803183000,
"masked_author": "username_0",
"text": "Thanks, @username_1 that helped in getting past the fatal exception but I still don't see a UI :(\r\n\r\nIt rolls up till here:\r\n```\r\n2019-01-06 11:25:10 INFO (MainThread) [homeassistant.core] Starting Home Assistant\r\n2019-01-06 11:25:10 INFO (MainThread) [homeassistant.core] Timer:starting\r\n```\r\n\r\nthen it does the service discovery\r\n```\r\n2019-01-06 11:25:23 INFO (MainThread) [homeassistant.components.discovery] Unknown service discovered: homekit {'host': '10.0.0.143', 'port': 8080,\r\n..\r\n..\r\n..\r\n2019-01-06 11:25:23 WARNING (MainThread) [homeassistant.components.hue] Connected to Hue at 10.0.0.143 but not registered.\r\n```\r\n\r\nbut a UI never opens up. Also tried destroying the virtualenv and re-installing everything.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "abhinavsingh",
"comment_id": 451770224,
"datetime": 1546805023000,
"masked_author": "username_0",
"text": "Might be helpful for debugging. After 10 minutes or so, it eventually failed with following stacktrace:\r\n\r\n```\r\n2019-01-06 11:59:31 ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved\r\nTraceback (most recent call last):\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/homeassistant/components/discovery.py\", line 172, in scan_devices\r\n results = await hass.async_add_job(_discover, netdisco)\r\n File \"/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/homeassistant/components/discovery.py\", line 198, in _discover\r\n netdisco.scan()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/netdisco/discovery.py\", line 63, in scan\r\n self.gdm.scan()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/netdisco/gdm.py\", line 22, in scan\r\n self.update()\r\n File \"/Users/abhinav/Dev/home-automation/lib/python3.7/site-packages/netdisco/gdm.py\", line 79, in update\r\n sock.sendto(gdm_msg, (gdm_ip, gdm_port))\r\nOSError: [Errno 50] Network is down\r\n```",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "abhinavsingh",
"comment_id": null,
"datetime": 1547445522000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "abhinavsingh",
"comment_id": 453908021,
"datetime": 1547445522000,
"masked_author": "username_0",
"text": "Even though no UI popped up, visiting http://localhost:8123 seems to work.",
"title": null,
"type": "comment"
}
] | 2 | 6 | 6,997 | false | false | 6,997 | true |
holon-platform/holon-json | holon-platform | 393,996,327 | 36 | {
"number": 36,
"repo": "holon-json",
"user_login": "holon-platform"
} | [
{
"action": "created",
"author": "rrighi",
"comment_id": 451931592,
"datetime": 1546867041000,
"masked_author": "username_0",
"text": "Already fixed.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 2,069 | false | true | 14 | false |
ArkEcosystem/rust-crypto | ArkEcosystem | 413,885,871 | 17 | null | [
{
"action": "closed",
"author": "faustbrian",
"comment_id": null,
"datetime": 1557749932000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 1,017 | false | true | 0 | false |
panghaijiao/HJDanmakuDemo | null | 158,093,604 | 5 | null | [
{
"action": "opened",
"author": "hi363138911",
"comment_id": null,
"datetime": 1464858759000,
"masked_author": "username_0",
"text": "A method used in the DanmakuBaseModel class is deprecated - -.\r\n\r\n self.size = CGSizeMake([self.text sizeWithFont:[UIFont systemFontOfSize:self.textSize]].width, paintHeight);\r\n/// can be changed to\r\n CGSize textSize = CGSizeMake([self.text sizeWithAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:self.textSize] }].width, paintHeight);\r\n\r\nThanks for open-sourcing this framework",
"title": "Thank you for the open-source framework",
"type": "issue"
},
{
"action": "created",
"author": "panghaijiao",
"comment_id": 223243722,
"datetime": 1464860358000,
"masked_author": "username_1",
"text": "Yeah, thanks for the reminder. This library supports a fairly low minimum version, so it doesn't use the new API. Of course, if your project targets a higher version, you can use the new API instead.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "hi363138911",
"comment_id": 223248429,
"datetime": 1464861647000,
"masked_author": "username_0",
"text": "Understood - -",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "panghaijiao",
"comment_id": null,
"datetime": 1464861700000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 393 | false | false | 393 | false |
annaked/ExpressEntryCalculator | null | 423,557,416 | 24 | null | [
{
"action": "opened",
"author": "DamianKedzior",
"comment_id": null,
"datetime": 1553138243000,
"masked_author": "username_0",
"text": "**Problem overview**\r\nCurrently there is a LanguagePoints class which holds the logic to calculate CLB points based on specific language test scores. CRS supports 4 different language tests, which means LanguagePoints has to hold logic for all of them. The class already has too many methods and lines of code.\r\n\r\n**Tasks**\r\n- extract an interface from the LanguagePoints class with CLB points properties and one method to calculate points\r\n- create 4 classes, one for each specific language test, implementing the newly created interface\r\n- the original LanguagePoints class should not exist anymore.",
"title": "Refactor CLB calculation points",
"type": "issue"
},
{
"action": "closed",
"author": "DamianKedzior",
"comment_id": null,
"datetime": 1585535254000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 566 | false | false | 566 | false |
PromyLOPh/pianobar | null | 281,760,926 | 647 | null | [
{
"action": "opened",
"author": "emcek",
"comment_id": null,
"datetime": 1513173949000,
"masked_author": "username_0",
"text": "I try compile pianobar on low cost chip HW arm board.\r\nI've install needed libraries (maybe not all):\r\n\r\n```\r\nsudo apt-get install libpthread-stubs0-dev libao-dev libao4 libcurl3-gnutls libcurl3 libgcrypt11-dev libgcrypt20-dev libjson-c-dev libjson-c2 libjson0 libjson0-dev ffmpeg libavfilter-dev libcurl4-gnutls-dev libavformat-dev libavcodec-extra57 libavcodec-extra pkg-config\r\n```\r\n\r\nDue to issue #614 I use 2016.06.02 release, but I got some warrnings:\r\n```\r\nchip@chip:~/pianobar-2016.06.02$ make\r\n CC src/main.c\r\n CC src/player.c\r\n CC src/settings.c\r\n CC src/terminal.c\r\n CC src/ui_act.c\r\n CC src/ui.c\r\n CC src/ui_readline.c\r\n CC src/ui_dispatch.c\r\n CC src/libpiano/crypt.c\r\n CC src/libpiano/piano.c\r\n CC src/libpiano/request.c\r\n CC src/libpiano/response.c\r\nsrc/libpiano/response.c: In function 'PianoJsonStrdup':\r\nsrc/libpiano/response.c:37:2: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n return strdup (json_object_get_string (json_object_object_get (j, key)));\r\n ^\r\nsrc/libpiano/response.c: In function 'PianoJsonParseStation':\r\nsrc/libpiano/response.c:43:2: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n s->isCreator = !json_object_get_boolean (json_object_object_get (j,\r\n ^\r\nsrc/libpiano/response.c:45:2: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n s->isQuickMix = json_object_get_boolean (json_object_object_get (j,\r\n ^\r\nsrc/libpiano/response.c: In function 'PianoResponse':\r\nsrc/libpiano/response.c:87:2: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n status = json_object_object_get (j, \"stat\");\r\n ^\r\nsrc/libpiano/response.c:95:3: warning: 
'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *code = json_object_object_get (j, \"code\");\r\n ^\r\nsrc/libpiano/response.c:117:2: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n result = json_object_object_get (j, \"result\");\r\n ^\r\nsrc/libpiano/response.c:131:8: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (result, \"syncTime\"));\r\n ^\r\nsrc/libpiano/response.c:152:8: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (result, \"partnerId\"));\r\n ^\r\nsrc/libpiano/response.c:175:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *stations = json_object_object_get (result,\r\n ^\r\nsrc/libpiano/response.c:190:6: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n mix = json_object_object_get (s, \"quickMixStationIds\");\r\n ^\r\nsrc/libpiano/response.c:222:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *items = json_object_object_get (result, \"items\");\r\n ^\r\nsrc/libpiano/response.c:233:5: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n if (json_object_object_get (s, \"artistName\") == NULL) {\r\n ^\r\nsrc/libpiano/response.c:243:5: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *map = json_object_object_get (s, 
\"audioUrlMap\");\r\n ^\r\nsrc/libpiano/response.c:247:6: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n map = json_object_object_get (map, qualityMap[reqData->quality]);\r\n ^\r\nsrc/libpiano/response.c:251:9: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (map, \"encoding\"));\r\n ^\r\nsrc/libpiano/response.c:277:7: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (s, \"trackGain\"));\r\n ^\r\nsrc/libpiano/response.c:279:7: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (s, \"trackLength\"));\r\n ^\r\nsrc/libpiano/response.c:280:5: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n switch (json_object_get_int (json_object_object_get (s,\r\n ^\r\nsrc/libpiano/response.c:343:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *artists = json_object_object_get (result, \"artists\");\r\n ^\r\nsrc/libpiano/response.c:362:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *songs = json_object_object_get (result, \"songs\");\r\n ^\r\nsrc/libpiano/response.c:417:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *categories = json_object_object_get (result, \"categories\");\r\n ^\r\nsrc/libpiano/response.c:432:6: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) 
[-Wdeprecated-declarations]\r\n json_object *stations = json_object_object_get (c,\r\n ^\r\nsrc/libpiano/response.c:483:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *explanations = json_object_object_get (result,\r\n ^\r\nsrc/libpiano/response.c:494:8: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (e, \"focusTraitName\"));\r\n ^\r\nsrc/libpiano/response.c:515:6: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (result, \"isExplicitContentFilterEnabled\"));\r\n ^\r\nsrc/libpiano/response.c:531:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *music = json_object_object_get (result, \"music\");\r\n ^\r\nsrc/libpiano/response.c:534:5: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *songs = json_object_object_get (music, \"songs\");\r\n ^\r\nsrc/libpiano/response.c:555:5: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *artists = json_object_object_get (music,\r\n ^\r\nsrc/libpiano/response.c:577:4: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object *feedback = json_object_object_get (result,\r\n ^\r\nsrc/libpiano/response.c:582:6: warning: 'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object * const val = json_object_object_get (feedback,\r\n ^\r\nsrc/libpiano/response.c:603:9: warning: 
'json_object_object_get' is deprecated (declared at /usr/include/json-c/json_object.h:271) [-Wdeprecated-declarations]\r\n json_object_object_get (s, \"isPositive\")) ?\r\n ^\r\n CC src/libpiano/list.c\r\n LINK pianobar\r\n```\r\n\r\nDo I miss any library?",
"title": "json problem during compilation",
"type": "issue"
},
{
"action": "closed",
"author": "PromyLOPh",
"comment_id": null,
"datetime": 1516647891000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 8,217 | false | false | 8,217 | false |
LBL-EESA/TECA | LBL-EESA | 388,003,708 | 145 | null | [
{
"action": "opened",
"author": "Steve-JJ",
"comment_id": null,
"datetime": 1544055806000,
"masked_author": "username_0",
"text": "Hi,\r\n\r\nI am trying to install TECA on SUSE Enterprise Linux (12.3) with GCC 7.3.1. \r\n\r\nI used 'git clone' to obtain both TECA and TECA superbuild (a few days ago).\r\n\r\nThe dependencies seem to have installed correctly, but it fails when installing TECA.\r\n\r\nThere are a few issues and I am not sure what is important (the full log is shown below and the errors are highlighted in bold):\r\n\r\n- warning message (just after the 94% progress point when it starts installing TECA): _System is unknown to cmake._ In the cmake configure step I tried to add info such as the system name, but this didn't seem to help\r\n\r\n- it appears to be using the system version of Boost, not the local version it just installed: _CMake Warning at /usr/share/cmake/Modules/FindBoost.cmake:725 (message)_\r\n\r\n- Numpy was installed (see the 29% progress point), but then it reports it can't find Numpy: _Could NOT find NUMPY (missing: NUMPY_INCLUDE_FOUND)_\r\n\r\n- compilation failure at the end of the log: _error: 'function' in namespace 'std' does not name a template type_\r\n\r\nCan someone please show me what I have done wrong?\r\n\r\nThanks\r\nSteve\r\n\r\n\r\n\r\n/usr/local/freeware/TECA_superbuild/build> cmake -DCMAKE_INSTALL_PREFIX=/usr/local/freeware/TECA -DCMAKE_CXX_COMPILER=/usr/bin/g++-7 -DCMAKE_C_COMPILER=/usr/bin/gcc-7 -DTARGET_PLATFORM=native -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_C_COMPILER_ID=GNU -DCMAKE_CXX_COMPILER_ID=GNU .. 2>&1 | tee cmake.log.v1\r\n\r\n/usr/local/freeware/TECA_superbuild/build> make 2>&1 | tee make.log.v1\r\nScanning dependencies of target udunits\r\n[ 1%] Creating directories for 'udunits'\r\n[ 1%] Performing download step (download, verify and extract) for 'udunits'\r\n-- udunits download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/udunits-prefix/src/udunits-stamp/udunits-download-*.log\r\n[ 1%] No patch step for 'udunits'\r\n[ 2%] No update step for 'udunits'\r\n[ 3%] Performing configure step for 'udunits'\r\n-- udunits configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/udunits-prefix/src/udunits-stamp/udunits-configure-*.log\r\n[ 3%] Performing build step for 'udunits'\r\n-- udunits build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/udunits-prefix/src/udunits-stamp/udunits-build-*.log\r\n[ 3%] Performing install step for 'udunits'\r\n-- udunits install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/udunits-prefix/src/udunits-stamp/udunits-install-*.log\r\n[ 4%] Completed 'udunits'\r\n[ 4%] Built target udunits\r\nScanning dependencies of target zlib\r\n[ 4%] Creating directories for 'zlib'\r\n[ 5%] Performing download step (download, verify and extract) for 'zlib'\r\n-- zlib download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/zlib-prefix/src/zlib-stamp/zlib-download-*.log\r\n[ 6%] No patch step for 'zlib'\r\n[ 6%] No update step for 'zlib'\r\n[ 6%] Performing configure step for 'zlib'\r\n-- zlib configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/zlib-prefix/src/zlib-stamp/zlib-configure-*.log\r\n[ 7%] Performing build step for 'zlib'\r\n-- zlib build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/zlib-prefix/src/zlib-stamp/zlib-build-*.log\r\n[ 8%] Performing install step for 'zlib'\r\n-- zlib install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/zlib-prefix/src/zlib-stamp/zlib-install-*.log\r\n[ 8%] Completed 'zlib'\r\n[ 8%] Built target zlib\r\nScanning dependencies of target ncurses\r\n[ 9%] Creating directories for 'ncurses'\r\n[ 9%] Performing download step (download, verify and extract) for 'ncurses'\r\n-- ncurses download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/ncurses-prefix/src/ncurses-stamp/ncurses-download-*.log\r\n[ 9%] No patch step for 'ncurses'\r\n[ 10%] No update step for 'ncurses'\r\n[ 10%] Performing configure step for 'ncurses'\r\n-- ncurses configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/ncurses-prefix/src/ncurses-stamp/ncurses-configure-*.log\r\n[ 11%] Performing build step for 'ncurses'\r\n-- ncurses build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/ncurses-prefix/src/ncurses-stamp/ncurses-build-*.log\r\n[ 11%] Performing install step for 'ncurses'\r\n-- ncurses install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/ncurses-prefix/src/ncurses-stamp/ncurses-install-*.log\r\n[ 12%] Completed 'ncurses'\r\n[ 12%] Built target ncurses\r\nScanning dependencies of target readline\r\n[ 13%] Creating directories for 'readline'\r\n[ 13%] Performing download step (download, verify and extract) for 'readline'\r\n-- readline download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/readline-prefix/src/readline-stamp/readline-download-*.log\r\n[ 14%] No patch step for 'readline'\r\n[ 14%] No update step for 'readline'\r\n[ 14%] Performing configure step for 'readline'\r\n-- readline configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/readline-prefix/src/readline-stamp/readline-configure-*.log\r\n[ 15%] Performing build step for 'readline'\r\n-- readline build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/readline-prefix/src/readline-stamp/readline-build-*.log\r\n[ 15%] Performing install step for 'readline'\r\n-- readline install command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/readline-prefix/src/readline-stamp/readline-install-*.log\r\n[ 16%] Completed 'readline'\r\n[ 16%] Built target readline\r\nScanning dependencies of target Python\r\n[ 16%] Creating directories for 'Python'\r\n[ 17%] Performing download step (download, verify and extract) for 'Python'\r\n-- Python download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Python-prefix/src/Python-stamp/Python-download-*.log\r\n[ 18%] No patch step for 'Python'\r\n[ 18%] No update step for 'Python'\r\n[ 18%] Performing configure step for 'Python'\r\n-- Python configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Python-prefix/src/Python-stamp/Python-configure-*.log\r\n[ 19%] Performing build step for 'Python'\r\n-- Python build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Python-prefix/src/Python-stamp/Python-build-*.log\r\n[ 19%] Performing install step for 'Python'\r\n-- Python install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Python-prefix/src/Python-stamp/Python-install-*.log\r\n[ 20%] Completed 'Python'\r\n[ 20%] Built target Python\r\nScanning dependencies of target setuptools\r\n[ 20%] Creating directories for 'setuptools'\r\n[ 21%] Performing download step (download, verify and extract) for 'setuptools'\r\n-- setuptools download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/setuptools-prefix/src/setuptools-stamp/setuptools-download-*.log\r\n[ 21%] No patch step for 'setuptools'\r\n[ 21%] No update step for 'setuptools'\r\n[ 22%] No configure step for 'setuptools'\r\n[ 22%] Performing build step for 'setuptools'\r\n-- setuptools build command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/setuptools-prefix/src/setuptools-stamp/setuptools-build-*.log\r\n[ 23%] No install step for 'setuptools'\r\n[ 23%] Completed 'setuptools'\r\n[ 23%] Built target setuptools\r\nScanning dependencies of target Cython\r\n[ 24%] Creating directories for 'Cython'\r\n[ 24%] Performing download step (download, verify and extract) for 'Cython'\r\n-- Cython download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Cython-prefix/src/Cython-stamp/Cython-download-*.log\r\n[ 24%] No patch step for 'Cython'\r\n[ 25%] No update step for 'Cython'\r\n[ 26%] No configure step for 'Cython'\r\n[ 26%] Performing build step for 'Cython'\r\n-- Cython build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/Cython-prefix/src/Cython-stamp/Cython-build-*.log\r\n[ 26%] No install step for 'Cython'\r\n[ 26%] Completed 'Cython'\r\n[ 26%] Built target Cython\r\nScanning dependencies of target numpy\r\n[ 27%] Creating directories for 'numpy'\r\n[ 27%] Performing download step (download, verify and extract) for 'numpy'\r\n-- numpy download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/numpy-prefix/src/numpy-stamp/numpy-download-*.log\r\n[ 27%] No patch step for 'numpy'\r\n[ 28%] No update step for 'numpy'\r\n[ 29%] No configure step for 'numpy'\r\n[ 29%] Performing build step for 'numpy'\r\n-- numpy build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/numpy-prefix/src/numpy-stamp/numpy-build-*.log\r\n[ 29%] No install step for 'numpy'\r\n[ 30%] Completed 'numpy'\r\n[ 30%] Built target numpy\r\nScanning dependencies of target mpi\r\n[ 31%] Creating directories for 'mpi'\r\n[ 31%] Performing download step (download, verify and extract) for 'mpi'\r\n-- mpi download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/mpi-prefix/src/mpi-stamp/mpi-download-*.log\r\n[ 32%] No patch step for 'mpi'\r\n[ 32%] No update step for 'mpi'\r\n[ 32%] Performing configure step for 'mpi'\r\n-- mpi configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi-prefix/src/mpi-stamp/mpi-configure-*.log\r\n[ 33%] Performing build step for 'mpi'\r\n-- mpi build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi-prefix/src/mpi-stamp/mpi-build-*.log\r\n[ 33%] Performing install step for 'mpi'\r\n-- mpi install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi-prefix/src/mpi-stamp/mpi-install-*.log\r\n[ 34%] Completed 'mpi'\r\n[ 34%] Built target mpi\r\nScanning dependencies of target mpi4py\r\n[ 34%] Creating directories for 'mpi4py'\r\n[ 35%] Performing download step (download, verify and extract) for 'mpi4py'\r\n-- mpi4py download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi4py-prefix/src/mpi4py-stamp/mpi4py-download-*.log\r\n[ 35%] No patch step for 'mpi4py'\r\n[ 35%] No update step for 'mpi4py'\r\n[ 36%] No configure step for 'mpi4py'\r\n[ 36%] Performing build step for 'mpi4py'\r\n-- mpi4py build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi4py-prefix/src/mpi4py-stamp/mpi4py-build-*.log\r\n[ 37%] Performing install step for 'mpi4py'\r\n-- mpi4py install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/mpi4py-prefix/src/mpi4py-stamp/mpi4py-install-*.log\r\n[ 37%] Completed 'mpi4py'\r\n[ 37%] Built target mpi4py\r\nScanning dependencies of target boost\r\n[ 37%] Creating directories for 'boost'\r\n[ 38%] Performing download step (download, verify and extract) for 'boost'\r\n-- boost download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/boost-prefix/src/boost-stamp/boost-download-*.log\r\n[ 38%] No patch step for 'boost'\r\n[ 38%] No update step for 'boost'\r\n[ 39%] Performing configure step for 'boost'\r\n-- boost configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/boost-prefix/src/boost-stamp/boost-configure-*.log\r\n[ 39%] Performing build step for 'boost'\r\n-- boost build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/boost-prefix/src/boost-stamp/boost-build-*.log\r\n[ 40%] Performing install step for 'boost'\r\n-- boost install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/boost-prefix/src/boost-stamp/boost-install-*.log\r\n[ 40%] Completed 'boost'\r\n[ 40%] Built target boost\r\nScanning dependencies of target libxlsxwriter\r\n[ 40%] Creating directories for 'libxlsxwriter'\r\n[ 41%] Performing download step (git clone) for 'libxlsxwriter'\r\n-- libxlsxwriter download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libxlsxwriter-prefix/src/libxlsxwriter-stamp/libxlsxwriter-download-*.log\r\n[ 42%] No patch step for 'libxlsxwriter'\r\n[ 42%] No update step for 'libxlsxwriter'\r\n[ 42%] Performing configure step for 'libxlsxwriter'\r\n-- libxlsxwriter configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libxlsxwriter-prefix/src/libxlsxwriter-stamp/libxlsxwriter-configure-*.log\r\n[ 43%] Performing build step for 'libxlsxwriter'\r\n-- libxlsxwriter build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libxlsxwriter-prefix/src/libxlsxwriter-stamp/libxlsxwriter-build-*.log\r\n[ 43%] Performing install step for 'libxlsxwriter'\r\n-- libxlsxwriter install command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/libxlsxwriter-prefix/src/libxlsxwriter-stamp/libxlsxwriter-install-*.log\r\n[ 44%] Completed 'libxlsxwriter'\r\n[ 44%] Built target libxlsxwriter\r\nScanning dependencies of target libpng\r\n[ 45%] Creating directories for 'libpng'\r\n[ 45%] Performing download step (download, verify and extract) for 'libpng'\r\n-- libpng download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libpng-prefix/src/libpng-stamp/libpng-download-*.log\r\n[ 45%] No patch step for 'libpng'\r\n[ 46%] No update step for 'libpng'\r\n[ 47%] Performing configure step for 'libpng'\r\n-- libpng configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libpng-prefix/src/libpng-stamp/libpng-configure-*.log\r\n[ 47%] Performing build step for 'libpng'\r\n-- libpng build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libpng-prefix/src/libpng-stamp/libpng-build-*.log\r\n[ 47%] Performing install step for 'libpng'\r\n-- libpng install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/libpng-prefix/src/libpng-stamp/libpng-install-*.log\r\n[ 47%] Completed 'libpng'\r\n[ 47%] Built target libpng\r\nScanning dependencies of target pyparsing\r\n[ 48%] Creating directories for 'pyparsing'\r\n[ 48%] Performing download step (download, verify and extract) for 'pyparsing'\r\n-- pyparsing download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/pyparsing-prefix/src/pyparsing-stamp/pyparsing-download-*.log\r\n[ 48%] No patch step for 'pyparsing'\r\n[ 49%] No update step for 'pyparsing'\r\n[ 50%] No configure step for 'pyparsing'\r\n[ 50%] Performing build step for 'pyparsing'\r\n-- pyparsing build command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/pyparsing-prefix/src/pyparsing-stamp/pyparsing-build-*.log\r\n[ 50%] No install step for 'pyparsing'\r\n[ 50%] Completed 'pyparsing'\r\n[ 50%] Built target pyparsing\r\nScanning dependencies of target six\r\n[ 51%] Creating directories for 'six'\r\n[ 51%] Performing download step (download, verify and extract) for 'six'\r\n-- six download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/six-prefix/src/six-stamp/six-download-*.log\r\n[ 51%] No patch step for 'six'\r\n[ 52%] No update step for 'six'\r\n[ 52%] No configure step for 'six'\r\n[ 53%] Performing build step for 'six'\r\n-- six build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/six-prefix/src/six-stamp/six-build-*.log\r\n[ 53%] No install step for 'six'\r\n[ 54%] Completed 'six'\r\n[ 54%] Built target six\r\nScanning dependencies of target python_dateutil\r\n[ 54%] Creating directories for 'python_dateutil'\r\n[ 55%] Performing download step (download, verify and extract) for 'python_dateutil'\r\n-- python_dateutil download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/python_dateutil-prefix/src/python_dateutil-stamp/python_dateutil-download-*.log\r\n[ 56%] No patch step for 'python_dateutil'\r\n[ 56%] No update step for 'python_dateutil'\r\n[ 56%] No configure step for 'python_dateutil'\r\n[ 57%] Performing build step for 'python_dateutil'\r\n-- python_dateutil build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/python_dateutil-prefix/src/python_dateutil-stamp/python_dateutil-build-*.log\r\n[ 57%] No install step for 'python_dateutil'\r\n[ 58%] Completed 'python_dateutil'\r\n[ 58%] Built target python_dateutil\r\nScanning dependencies of target pytz\r\n[ 58%] Creating directories for 'pytz'\r\n[ 58%] Performing download step (download, verify and extract) for 'pytz'\r\n-- pytz download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/pytz-prefix/src/pytz-stamp/pytz-download-*.log\r\n[ 58%] No patch step for 'pytz'\r\n[ 59%] No update step for 'pytz'\r\n[ 60%] No configure step for 'pytz'\r\n[ 60%] Performing build step for 'pytz'\r\n-- pytz build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/pytz-prefix/src/pytz-stamp/pytz-build-*.log\r\n[ 61%] No install step for 'pytz'\r\n[ 61%] Completed 'pytz'\r\n[ 61%] Built target pytz\r\nScanning dependencies of target cycler\r\n[ 62%] Creating directories for 'cycler'\r\n[ 62%] Performing download step (download, verify and extract) for 'cycler'\r\n-- cycler download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/cycler-prefix/src/cycler-stamp/cycler-download-*.log\r\n[ 62%] No patch step for 'cycler'\r\n[ 63%] No update step for 'cycler'\r\n[ 63%] No configure step for 'cycler'\r\n[ 64%] Performing build step for 'cycler'\r\n-- cycler build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/cycler-prefix/src/cycler-stamp/cycler-build-*.log\r\n[ 64%] No install step for 'cycler'\r\n[ 65%] Completed 'cycler'\r\n[ 65%] Built target cycler\r\nScanning dependencies of target functools32\r\n[ 66%] Creating directories for 'functools32'\r\n[ 66%] Performing download step (download, verify and extract) for 'functools32'\r\n-- functools32 download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/functools32-prefix/src/functools32-stamp/functools32-download-*.log\r\n[ 66%] No patch step for 'functools32'\r\n[ 67%] No update step for 'functools32'\r\n[ 68%] No configure step for 'functools32'\r\n[ 68%] Performing build step for 'functools32'\r\n-- functools32 build command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/functools32-prefix/src/functools32-stamp/functools32-build-*.log\r\n[ 68%] No install step for 'functools32'\r\n[ 69%] Completed 'functools32'\r\n[ 69%] Built target functools32\r\nScanning dependencies of target freetype\r\n[ 69%] Creating directories for 'freetype'\r\n[ 70%] Performing download step (download, verify and extract) for 'freetype'\r\n-- freetype download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/freetype-prefix/src/freetype-stamp/freetype-download-*.log\r\n[ 71%] No patch step for 'freetype'\r\n[ 71%] No update step for 'freetype'\r\n[ 71%] Performing configure step for 'freetype'\r\n-- freetype configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/freetype-prefix/src/freetype-stamp/freetype-configure-*.log\r\n[ 71%] Performing build step for 'freetype'\r\n-- freetype build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/freetype-prefix/src/freetype-stamp/freetype-build-*.log\r\n[ 72%] Performing install step for 'freetype'\r\n-- freetype install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/freetype-prefix/src/freetype-stamp/freetype-install-*.log\r\n[ 72%] Completed 'freetype'\r\n[ 72%] Built target freetype\r\nScanning dependencies of target subprocess32\r\n[ 72%] Creating directories for 'subprocess32'\r\n[ 73%] Performing download step (download, verify and extract) for 'subprocess32'\r\n-- subprocess32 download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/subprocess32-prefix/src/subprocess32-stamp/subprocess32-download-*.log\r\n[ 74%] No patch step for 'subprocess32'\r\n[ 74%] No update step for 'subprocess32'\r\n[ 74%] No configure step for 'subprocess32'\r\n[ 74%] Performing build step for 'subprocess32'\r\n-- subprocess32 build command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/subprocess32-prefix/src/subprocess32-stamp/subprocess32-build-*.log\r\n[ 75%] No install step for 'subprocess32'\r\n[ 75%] Completed 'subprocess32'\r\n[ 75%] Built target subprocess32\r\nScanning dependencies of target matplotlib\r\n[ 75%] Creating directories for 'matplotlib'\r\n[ 75%] Performing download step (download, verify and extract) for 'matplotlib'\r\n-- matplotlib download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/matplotlib-prefix/src/matplotlib-stamp/matplotlib-download-*.log\r\n[ 75%] No patch step for 'matplotlib'\r\n[ 76%] No update step for 'matplotlib'\r\n[ 77%] No configure step for 'matplotlib'\r\n[ 77%] Performing build step for 'matplotlib'\r\n-- matplotlib build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/matplotlib-prefix/src/matplotlib-stamp/matplotlib-build-*.log\r\n[ 78%] No install step for 'matplotlib'\r\n[ 78%] Completed 'matplotlib'\r\n[ 78%] Built target matplotlib\r\nScanning dependencies of target hdf5\r\n[ 78%] Creating directories for 'hdf5'\r\n[ 79%] Performing download step (download, verify and extract) for 'hdf5'\r\n-- hdf5 download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/hdf5-prefix/src/hdf5-stamp/hdf5-download-*.log\r\n[ 80%] No patch step for 'hdf5'\r\n[ 80%] No update step for 'hdf5'\r\n[ 80%] Performing configure step for 'hdf5'\r\n-- hdf5 configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/hdf5-prefix/src/hdf5-stamp/hdf5-configure-*.log\r\n[ 81%] Performing build step for 'hdf5'\r\n-- hdf5 build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/hdf5-prefix/src/hdf5-stamp/hdf5-build-*.log\r\n[ 82%] Performing install step for 'hdf5'\r\n-- hdf5 install command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/hdf5-prefix/src/hdf5-stamp/hdf5-install-*.log\r\n[ 82%] Completed 'hdf5'\r\n[ 82%] Built target hdf5\r\nScanning dependencies of target netcdf\r\n[ 82%] Creating directories for 'netcdf'\r\n[ 83%] Performing download step (download, verify and extract) for 'netcdf'\r\n-- netcdf download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/netcdf-prefix/src/netcdf-stamp/netcdf-download-*.log\r\n[ 84%] No patch step for 'netcdf'\r\n[ 84%] No update step for 'netcdf'\r\n[ 84%] Performing configure step for 'netcdf'\r\n-- netcdf configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/netcdf-prefix/src/netcdf-stamp/netcdf-configure-*.log\r\n[ 84%] Performing build step for 'netcdf'\r\n-- netcdf build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/netcdf-prefix/src/netcdf-stamp/netcdf-build-*.log\r\n[ 85%] Performing install step for 'netcdf'\r\n-- netcdf install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/netcdf-prefix/src/netcdf-stamp/netcdf-install-*.log\r\n[ 85%] Completed 'netcdf'\r\n[ 85%] Built target netcdf\r\nScanning dependencies of target SWIG\r\n[ 85%] Creating directories for 'SWIG'\r\n[ 85%] Performing download step (download, verify and extract) for 'SWIG'\r\n-- SWIG download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/SWIG-prefix/src/SWIG-stamp/SWIG-download-*.log\r\n[ 85%] No patch step for 'SWIG'\r\n[ 86%] No update step for 'SWIG'\r\n[ 87%] Performing configure step for 'SWIG'\r\n-- SWIG configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/SWIG-prefix/src/SWIG-stamp/SWIG-configure-*.log\r\n[ 87%] Performing build step for 'SWIG'\r\n-- SWIG build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/SWIG-prefix/src/SWIG-stamp/SWIG-build-*.log\r\n[ 88%] Performing install step for 'SWIG'\r\n-- SWIG install command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/SWIG-prefix/src/SWIG-stamp/SWIG-install-*.log\r\n[ 88%] Completed 'SWIG'\r\n[ 88%] Built target SWIG\r\nScanning dependencies of target openssl\r\n[ 88%] Creating directories for 'openssl'\r\n[ 89%] Performing download step (download, verify and extract) for 'openssl'\r\n-- openssl download command succeeded. See also /usr/local/freeware/TECA_superbuild/build/openssl-prefix/src/openssl-stamp/openssl-download-*.log\r\n[ 90%] No patch step for 'openssl'\r\n[ 90%] No update step for 'openssl'\r\n[ 90%] Performing configure step for 'openssl'\r\n-- openssl configure command succeeded. See also /usr/local/freeware/TECA_superbuild/build/openssl-prefix/src/openssl-stamp/openssl-configure-*.log\r\n[ 91%] Performing build step for 'openssl'\r\n-- openssl build command succeeded. See also /usr/local/freeware/TECA_superbuild/build/openssl-prefix/src/openssl-stamp/openssl-build-*.log\r\n[ 92%] Performing install step for 'openssl'\r\n-- openssl install command succeeded. See also /usr/local/freeware/TECA_superbuild/build/openssl-prefix/src/openssl-stamp/openssl-install-*.log\r\n[ 92%] Completed 'openssl'\r\n[ 92%] Built target openssl\r\nScanning dependencies of target TECA\r\n[ 93%] Creating directories for 'TECA'\r\n[ 93%] Performing download step (git clone) for 'TECA'\r\n-- TECA download command succeeded. 
See also /usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA-stamp/TECA-download-*.log\r\n[ 94%] No patch step for 'TECA'\r\n[ 94%] Performing update step for 'TECA'\r\n[ 94%] Performing configure step for 'TECA'\r\n-- The C compiler identification is GNU 7.3.1\r\n-- The CXX compiler identification is GNU 7.3.1\r\n-- The Fortran compiler identification is GNU 7.3.1\r\n**System is unknown to cmake, create:**\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Check for working C compiler: /usr/bin/gcc-7\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Check for working C compiler: /usr/bin/gcc-7 -- works\r\n-- Detecting C compiler ABI info\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting C compiler ABI info - done\r\n-- Detecting C compile features\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, 
create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting C compile features - done\r\n-- Check for working CXX compiler: /usr/bin/g++-7\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Check for working CXX compiler: /usr/bin/g++-7 -- works\r\n-- Detecting CXX compiler ABI info\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting CXX compiler ABI info - done\r\n-- Detecting CXX compile features\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting CXX compile features - done\r\n-- Check for working Fortran compiler: /usr/bin/gfortran-7\r\nSystem is unknown 
to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Check for working Fortran compiler: /usr/bin/gfortran-7 -- works\r\n-- Detecting Fortran compiler ABI info\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting Fortran compiler ABI info - done\r\n-- Checking whether /usr/bin/gfortran-7 supports Fortran 90\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Checking whether /usr/bin/gfortran-7 supports Fortran 90 -- yes\r\n-- Configuring a Release build\r\n-- Check for c++ regex support\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Check for c++ regex support -- enabled\r\n-- Found MPI_C: /usr/local/freeware/TECA/lib/libmpi.so \r\n-- Found MPI_CXX: /usr/local/freeware/TECA/lib/libmpi.so \r\n-- Found MPI_Fortran: /usr/local/freeware/TECA/lib/libmpi_usempif08.so;/usr/local/freeware/TECA/lib/libmpi_usempi_ignore_tkr.so;/usr/local/freeware/TECA/lib/libmpi_mpifh.so;/usr/local/freeware/TECA/lib/libmpi.so \r\n-- MPI features -- enabled\r\n-- Found NetCDF: /usr/local/freeware/TECA/lib64/libnetcdf.so \r\n-- NetCDF features -- enabled\r\n-- Found ZLIB: /usr/local/freeware/TECA/lib/libz.so (found 
version \"1.2.8\") \r\n-- Found LibXLSXWriter: /usr/local/freeware/TECA/lib/libxlsxwriter.a \r\n-- libxlsxwriter features -- enabled\r\n-- Found UDUnits: /usr/local/freeware/TECA/lib/libudunits2.so \r\n-- UDUnits features -- enabled\r\n-- ParaView features -- not found. set ParaView_DIR to enable.\r\n-- VTK features -- not found. set VTK_DIR to enable.\r\n**CMake Warning at /usr/share/cmake/Modules/FindBoost.cmake:725 (message):\r\n Imported targets not available for Boost version 106400\r\nCall Stack (most recent call first):\r\n /usr/share/cmake/Modules/FindBoost.cmake:763 (_Boost_COMPONENT_DEPENDENCIES)\r\n /usr/share/cmake/Modules/FindBoost.cmake:1315 (_Boost_MISSING_DEPENDENCIES)\r\n CMakeLists.txt:142 (find_package)**\r\n\r\n\r\n-- Boost features -- enabled\r\n-- OpenSSL features -- enabled\r\n-- Found PythonInterp: /usr/local/freeware/TECA/bin/python (found version \"2.7.13\") \r\n-- Found PythonLibs: /usr/local/freeware/TECA/lib/libpython2.7.so (found version \"2.7.13\") \r\n**-- Could NOT find NUMPY (missing: NUMPY_INCLUDE_FOUND)** \r\n-- Found MPI4PY: TRUE \r\n-- Python features -- disabled\r\n-- Found Git: /usr/bin/git (found version \"2.12.3\") \r\n-- TECA version 2.1.2\r\n-- Detecting Fortran/C Interface\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Detecting Fortran/C Interface - Found GLOBAL and MODULE mangling\r\n-- Verifying Fortran/C Compiler Compatibility\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: 
/usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\nSystem is unknown to cmake, create:\r\nPlatform//bin/sh: /usr/local/freeware/TECA/lib/libreadline.so.6: no version information available (required by /bin/sh)\r\nLinux to use this system, please send your config file to cmake@www.cmake.org so it can be added to cmake\r\n-- Verifying Fortran/C Compiler Compatibility - Success\r\n-- Test data -- available\r\n-- Configuring done\r\n-- Generating done\r\n-- Build files have been written to: /usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA-build\r\n[ 95%] Performing build step for 'TECA'\r\nScanning dependencies of target teca_core\r\n[ 0%] Building CXX object core/CMakeFiles/teca_core.dir/teca_algorithm.cxx.obj\r\nIn file included from /usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA/core/teca_algorithm.h:8:0,\r\n from /usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA/core/teca_algorithm.cxx:1:\r\n/usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA/core/teca_algorithm_fwd.h:145:28: **error: 'function' in namespace 'std' does not name a template type\r\n bool operator!=(const std::function<T> &lhs, const std::function<T> &rhs)\r\n ^~~~~~~~\r\n/usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA/core/teca_algorithm_fwd.h:145:36: error: expected ',' or '...' 
before '<' token\r\n bool operator!=(const std::function<T> &lhs, const std::function<T> &rhs)\r\n ^\r\n/usr/local/freeware/TECA_superbuild/build/TECA-prefix/src/TECA/core/teca_algorithm_fwd.h:145:73: error: 'bool operator!=(int)' must have an argument of class or enumerated type\r\n bool operator!=(const std::function<T> &lhs, const std::function<T> &rhs)**\r\n ^\r\ncore/CMakeFiles/teca_core.dir/build.make:62: recipe for target 'core/CMakeFiles/teca_core.dir/teca_algorithm.cxx.obj' failed\r\nmake[5]: *** [core/CMakeFiles/teca_core.dir/teca_algorithm.cxx.obj] Error 1\r\nCMakeFiles/Makefile2:981: recipe for target 'core/CMakeFiles/teca_core.dir/all' failed\r\nmake[4]: *** [core/CMakeFiles/teca_core.dir/all] Error 2\r\nMakefile:138: recipe for target 'all' failed\r\nmake[3]: *** [all] Error 2\r\nCMakeFiles/TECA.dir/build.make:121: recipe for target 'TECA-prefix/src/TECA-stamp/TECA-build' failed\r\nmake[2]: *** [TECA-prefix/src/TECA-stamp/TECA-build] Error 2\r\nCMakeFiles/Makefile2:76: recipe for target 'CMakeFiles/TECA.dir/all' failed\r\nmake[1]: *** [CMakeFiles/TECA.dir/all] Error 2\r\nMakefile:127: recipe for target 'all' failed\r\nmake: *** [all] Error 2",
"title": "TECA_superbuild failures",
"type": "issue"
},
{
"action": "created",
"author": "burlen",
"comment_id": 444987434,
"datetime": 1544122676000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "burlen",
"comment_id": 469342351,
"datetime": 1551720738000,
"masked_author": "username_1",
"text": "we've updated and tested the super build one the latest Ubuntu and Fedora releases.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "burlen",
"comment_id": 476706545,
"datetime": 1553614516000,
"masked_author": "username_1",
"text": "I think this is resolved to the point that we can do. Boost and Cmake versions have a strong dependency, due to the way boost deploys it's cmake modules with cmake rather than with boost. Since this issue was reported we have improved thread safety in our NetCDF CF2 reader, and we now support any NetCDF builds, including those shipped with common linux distros (ubuntu and fedora have been tested). It should be fairly straight forward to build teca against system packages now.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "burlen",
"comment_id": null,
"datetime": 1553614517000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "Steve-JJ",
"comment_id": 476862113,
"datetime": 1553636122000,
"masked_author": "username_0",
"text": "Hi Burlen,\nThanks for following this up.\nI will try the installation once our new HPC installation is finished.\ncheerssteve",
"title": null,
"type": "comment"
}
] | 2 | 6 | 35,877 | false | false | 35,877 | false |
kube-HPC/hkube | kube-HPC | 356,759,971 | 226 | null | [
{
"action": "opened",
"author": "eytangro",
"comment_id": null,
"datetime": 1536059088000,
"masked_author": "username_0",
"text": "",
"title": "add metrics to write current workers count from each algorithm to task executor",
"type": "issue"
},
{
"action": "closed",
"author": "NassiHarel",
"comment_id": null,
"datetime": 1591527003000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 172 | false | true | 0 | false |
cityofaustin/janis | cityofaustin | 317,872,233 | 231 | null | [
{
"action": "opened",
"author": "ifsimicoded",
"comment_id": null,
"datetime": 1524716338000,
"masked_author": "username_0",
"text": "Thoughts on this?",
"title": "Form CSRF, API throttling",
"type": "issue"
},
{
"action": "closed",
"author": "easherma",
"comment_id": null,
"datetime": 1572298250000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 17 | false | false | 17 | false |
andreakarasho/ClassicUO | null | 425,458,808 | 371 | null | [
{
"action": "opened",
"author": "natryl",
"comment_id": null,
"datetime": 1553611181000,
"masked_author": "username_0",
"text": "Current hotkey is mouse scroll up and down for zoom feature. This prevents that from being used by the assistant. mouse scroll up and down is a common hotkey for targeting.\r\n\r\nSuggestion is to change the zoom hotkey to ctrl+mouse scroll up and ctrl+mouse scroll down",
"title": "Hotkey for zoom feature",
"type": "issue"
},
{
"action": "created",
"author": "luuutz",
"comment_id": 476764077,
"datetime": 1553621713000,
"masked_author": "username_1",
"text": "+1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andreakarasho",
"comment_id": 476769120,
"datetime": 1553622410000,
"masked_author": "username_2",
"text": "Added!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "andreakarasho",
"comment_id": null,
"datetime": 1553622410000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 274 | false | false | 274 | false |
threefoldfoundation/rexplorer | threefoldfoundation | 348,827,774 | 5 | null | [
{
"action": "opened",
"author": "GlenDC",
"comment_id": null,
"datetime": 1533750546000,
"masked_author": "username_0",
"text": "When reverting blocks, we go essentially back in time and block height. Meaning that an output that was unlocked, could be locked once again. This is for now not tracked however.\r\n\r\nIn practice this is probably an issue not very noticeable as we usually revert just in small jumps backwards, making it already that not all TimeLockedConditions outputs will suffer from this. Even if they do, it would just be for one or a couple of blocks, and thus not really noticeable. As the result the reporting is not 100% correct, but will correct itself quick enough and we know for sure the consensus module does check the rules correctly, so the later auto-correction is for sure trustworthy.\r\n\r\nStill, it would be good that this auto-locking is done as well, so that the reporting becomes accurate at all times.",
"title": "TimeLockConditioned outputs aren't locked again while reversing blocks",
"type": "issue"
}
] | 1 | 1 | 805 | false | false | 805 | false |
gobuffalo/buffalo | gobuffalo | 373,088,082 | 1,402 | {
"number": 1402,
"repo": "buffalo",
"user_login": "gobuffalo"
} | [
{
"action": "opened",
"author": "markbates",
"comment_id": null,
"datetime": 1540311978000,
"masked_author": "username_0",
"text": "",
"title": "fixes strange issues with pages being downloaded instead of rendered",
"type": "issue"
}
] | 2 | 2 | 0 | false | true | 0 | false |
dask/dask | dask | 459,927,059 | 4,991 | {
"number": 4991,
"repo": "dask",
"user_login": "dask"
} | [
{
"action": "opened",
"author": "jakirkham",
"comment_id": null,
"datetime": 1561386790000,
"masked_author": "username_0",
"text": "As `da.concatenate` now drops out 0-size arrays from the final result, this behavior should extend to `da.block`, which just combines many nested calls to `da.concatenate`. This adds some tests to ensure that `da.block` keeps this behavior going forward.\r\n\r\nRelated to issue ( https://github.com/dask/dask/issues/4982 ) and PR ( https://github.com/dask/dask/pull/4990 ).\r\n\r\n- [ ] Tests added / passed\r\n- [ ] Passes `flake8 dask`",
"title": "Test `da.block` with 0-size arrays",
"type": "issue"
},
{
"action": "created",
"author": "jakirkham",
"comment_id": 505091139,
"datetime": 1561395075000,
"masked_author": "username_0",
"text": "@jrbourbeau, would you have a chance to take a quick look?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jakirkham",
"comment_id": 505112745,
"datetime": 1561398681000,
"masked_author": "username_0",
"text": "Thanks for the review! 😄",
"title": null,
"type": "comment"
}
] | 1 | 3 | 510 | false | false | 510 | false |
Tejpbit/taxi-planner | null | 320,489,463 | 1 | null | [
{
"action": "opened",
"author": "Tejpbit",
"comment_id": null,
"datetime": 1525507338000,
"masked_author": "username_0",
"text": "",
"title": "utforska google spreadsheets integration",
"type": "issue"
},
{
"action": "closed",
"author": "lindskogen",
"comment_id": null,
"datetime": 1525509868000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "lindskogen",
"comment_id": 386790429,
"datetime": 1525509868000,
"masked_author": "username_1",
"text": "Solved by 14d50e0e96263c2470cbfa9479f43c3ce7c41d60",
"title": null,
"type": "comment"
}
] | 2 | 3 | 50 | false | false | 50 | false |
awslabs/aws-c-io | awslabs | 381,806,692 | 60 | {
"number": 60,
"repo": "aws-c-io",
"user_login": "awslabs"
} | [
{
"action": "opened",
"author": "JonathanHenson",
"comment_id": null,
"datetime": 1542414759000,
"masked_author": "username_0",
"text": "… a bug that changed my buildspec unexpectedely so we didn't know.\r\n\r\nBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.",
"title": "Apparently this has been broken for a little while and code build had…",
"type": "issue"
},
{
"action": "created",
"author": "JonathanHenson",
"comment_id": 440779243,
"datetime": 1542827583000,
"masked_author": "username_0",
"text": "This was fixed in a different PR. Closing.",
"title": null,
"type": "comment"
}
] | 1 | 2 | 226 | false | false | 226 | false |
scanny/python-pptx | null | 295,207,886 | 352 | null | [
{
"action": "opened",
"author": "anekix",
"comment_id": null,
"datetime": 1518021618000,
"masked_author": "username_0",
"text": "Is there any we can change the width of overall slide ? ppt generated by python-pptx is square-ish but other slides have a rectangular layout.",
"title": "change aspect ratio/width of slide",
"type": "issue"
},
{
"action": "closed",
"author": "scanny",
"comment_id": null,
"datetime": 1518022891000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "scanny",
"comment_id": 363837132,
"datetime": 1518022891000,
"masked_author": "username_1",
"text": "Aspect ratio is set in template file used as (optional) argument to `Presentation()` call.\r\n\r\nThere are also `.slide_height` and `.slide_width` properties on the Presentation object, see documentation.\r\n\r\nPlease use stackoverflow.com with the 'python-pptx' tab in future for support requests.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 434 | false | false | 434 | false |
home-assistant/home-assistant | home-assistant | 267,753,535 | 10,084 | null | [
{
"action": "opened",
"author": "Frostman",
"comment_id": null,
"datetime": 1508780052000,
"masked_author": "username_0",
"text": "Make sure you are running the latest version of Home Assistant before reporting an issue.\r\n\r\nYou should only file an issue if you found a bug. Feature and enhancement requests should go in [the Feature Requests section](https://community.home-assistant.io/c/feature-requests) of our community forum:\r\n\r\n**Home Assistant release (`hass --version`):**\r\n\r\n0.56.0\r\n\r\n**Python release (`python3 --version`):**\r\n\r\nPython 3.6.3\r\n\r\n**Component/platform:**\r\n\r\nprometheus\r\n\r\n**Description of problem:**\r\n\r\nDue to the code in prometheus component https://github.com/home-assistant/home-assistant/blob/dev/homeassistant/components/prometheus.py#L176-L179 there is incorrect handling of battery level as humidity due to the classification based on the measurement.\r\n\r\n**Expected:**\r\n\r\nHave humidity and battery level reported separately.\r\n\r\n**Problem-relevant `configuration.yaml` entries and steps to reproduce:**\r\n\r\nTo reproduce you should have devices with battery level added (like iOS app) and prometheus enabled.",
"title": "Prometheus: reporting battery level as humidity",
"type": "issue"
},
{
"action": "created",
"author": "Frostman",
"comment_id": 338738946,
"datetime": 1508780414000,
"masked_author": "username_0",
"text": "I'd like to contribute fix for it, so, some advice will be very helpful if there were already some thoughts on doing it.\r\n\r\nThere is an easy way to fix, but not very nice looking. Just make additional separation based on the entity attributes. For example, device battery level has an attribute \"Device Name\" specific to it, so, we can add condition for attribute exist or not after selecting by unit of measurement.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "fabaff",
"comment_id": null,
"datetime": 1515427906000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 1,421 | false | false | 1,421 | false |
bazelbuild/bazel | bazelbuild | 339,621,297 | 5,557 | null | [
{
"action": "opened",
"author": "danielcompton",
"comment_id": null,
"datetime": 1531173396000,
"masked_author": "username_0",
"text": "### Description of the problem\r\n\r\nI think there are two compounding issues here:\r\n\r\n1. xcode-locator chooses an Xcode on my cloned backup drive (`Macintosh SSD`) over my primary boot drive (`MacintoshNVME`). \r\n2. Additionally, it does so without quoting the spaces in the drive name, so the clang build fails.\r\n\r\n<pre>\r\nbazel clean; and bazel build ratelimit --verbose_failures --sandbox_debug\r\n...\r\n (cd /private/var/tmp/_bazel_daniel/c1e18630777b604bf59a4d2d2e564069/execroot/smyte && \\\r\n exec env - \\\r\n APPLE_SDK_PLATFORM='' \\\r\n APPLE_SDK_VERSION_OVERRIDE='' \\\r\n <strong>DEVELOPER_DIR='/Volumes/Macintosh SSD/Applications/Xcode.app/Contents/Developer' \\</strong>\r\n PATH={elided}\r\n SDKROOT='/Volumes/Macintosh SSD/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk' \\\r\n...\r\nclang: <strong>error: no such file or directory: 'SSD/Applications/Xcode.app</strong>/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk'\r\nclang: <strong>warning: no such sysroot directory: '/Volumes/Macintosh'</strong> [-Wmissing-sysroot]\r\nTarget //ratelimit:ratelimit failed to build\r\n...\r\nFAILED: Build did NOT complete successfully\r\n</pre>\r\n\r\n### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.\r\n\r\n* Setup a clone of your boot drive onto another drive on the Mac. 
Name it `Macintosh SSD`\r\n* Clone https://github.com/smyte/smyte-db\r\n* Run `bazel clean && bazel build ratelimit`\r\n\r\n### What operating system are you running Bazel on?\r\n\r\nmacOS 10.13.5 (High Sierra)\r\n\r\n### What's the output of `bazel info release`?\r\n\r\nrelease 0.15.0-homebrew\r\n\r\n### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ?\r\n\r\n```\r\nhttps://github.com/smyte/smyte-db.git\r\n34129e0184408086cde6fe9694d9a7d7ca9c2016\r\n34129e0184408086cde6fe9694d9a7d7ca9c2016\r\n```\r\n\r\n### Have you found anything relevant by searching the web?\r\n\r\nI've looked, but couldn't see anything for this issue. It was hard to search though, as there are lots of other hits for issues with Xcode-locator which weren't relevant here.\r\n\r\n### Any other information, logs, or outputs that you want to share?\r\n\r\nIf I eject the drive and run a full `bazel clean --expunge` then I can get further in the build process. It still fails, but it doesn't look to be related to this issue.\r\n\r\nApologies for the maximal test case, I've never used bazel before, and am not very familiar with the C/C++ ecosystem to be able to make a smaller case.\r\n\r\nIf I run `Xcode-locator` with the drive attached from the temporary build root I get:\r\n\r\n```console\r\n$ _bin/xcode-locator\r\n{\r\n\t\"9.3\": \"/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9.3.0\": \"/Volumes/Macintosh SSD/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9\": \"/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9.4.1\": \"/Volumes/Macintosh SSD/Applications/Xcode.app/Contents/Developer\",\r\n\t\"9.4\": \"/Applications/Xcode.app/Contents/Developer\",\r\n}\r\n```\r\n\r\nIf I run it after detaching the drives I get \r\n\r\n```console\r\n$ _bin/xcode-locator\r\n{\r\n\t\"9.3\": \"/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9.3.0\": 
\"/Volumes/MacintoshNVME/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9\": \"/Applications/Xcode-beta.app/Contents/Developer\",\r\n\t\"9.4.1\": \"/Applications/Xcode.app/Contents/Developer\",\r\n\t\"9.4\": \"/Applications/Xcode.app/Contents/Developer\",\r\n}\r\n```\r\n\r\nFull build output:\r\n\r\n<details>\r\n<pre>\r\n$ bazel clean; and bazel build ratelimit --verbose_failures --sandbox_debug\r\nINFO: Starting clean.\r\nINFO: Analysed target //ratelimit:ratelimit (45 packages loaded).\r\nINFO: Found 1 target...\r\nERROR: /private/var/tmp/_bazel_daniel/c1e18630777b604bf59a4d2d2e564069/external/hiredis_git/BUILD.bazel:1:1: C++ compilation of rule '@hiredis_git//:hiredis_c' failed (Exit 1): sandbox-exec failed: error executing command\r\n (cd /private/var/tmp/_bazel_daniel/c1e18630777b604bf59a4d2d2e564069/execroot/smyte && \\\r\n exec env - \\\r\n APPLE_SDK_PLATFORM='' \\\r\n APPLE_SDK_VERSION_OVERRIDE='' \\\r\n DEVELOPER_DIR='/Volumes/Macintosh SSD/Applications/Xcode.app/Contents/Developer' \\\r\n PATH={redacted} \\\r\n SDKROOT='/Volumes/Macintosh SSD/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk' \\\r\n TMPDIR=/var/folders/rg/ghxj82wn4gq5cwgncr08n37w0000gr/T/ \\\r\n XCODE_VERSION_OVERRIDE=9.4.1 \\\r\n /usr/bin/sandbox-exec -f /private/var/tmp/_bazel_daniel/c1e18630777b604bf59a4d2d2e564069/sandbox/darwin-sandbox/48/sandbox.sb /private/var/tmp/_bazel_daniel/c1e18630777b604bf59a4d2d2e564069/execroot/smyte/_bin/process-wrapper '--timeout=0' '--kill_delay=15' external/local_config_cc/wrapped_clang '-D_FORTIFY_SOURCE=1' -fstack-protector -fcolor-diagnostics -Wall -Wthread-safety -Wself-assign -fno-omit-frame-pointer -O0 -DDEBUG -iquote external/hiredis_git -iquote bazel-out/darwin-fastbuild/genfiles/external/hiredis_git -iquote external/bazel_tools -iquote bazel-out/darwin-fastbuild/genfiles/external/bazel_tools -Ibazel-out/darwin-fastbuild/bin/external/hiredis_git/_virtual_includes/hiredis_c -MD -MF 
bazel-out/darwin-fastbuild/bin/external/hiredis_git/_objs/hiredis_c/external/hiredis_git/dict.d '-isysroot __BAZEL_XCODE_SDKROOT__' -Wno-unused-function -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__=\"redacted\"' '-D__TIMESTAMP__=\"redacted\"' '-D__TIME__=\"redacted\"' -c external/hiredis_git/dict.c -o bazel-out/darwin-fastbuild/bin/external/hiredis_git/_objs/hiredis_c/external/hiredis_git/dict.o)\r\nclang: error: no such file or directory: 'SSD/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk'\r\nclang: warning: no such sysroot directory: '/Volumes/Macintosh' [-Wmissing-sysroot]\r\nTarget //ratelimit:ratelimit failed to build\r\nINFO: Elapsed time: 14.623s, Critical Path: 0.39s\r\nINFO: 45 processes: 45 darwin-sandbox.\r\nFAILED: Build did NOT complete successfully\r\n</pre>\r\n</details>\r\n\r\nRunning a Swift Playground on the machine with the extra drive detached prints\r\n\r\n```\r\nimport Foundation\r\n\r\nlet arr = LSCopyApplicationURLsForBundleIdentifier(\"com.apple.dt.Xcode\" as CFString, nil)\r\nprint(arr!)\r\n```\r\n\r\n```\r\nUnmanaged<CFArray>(_value: <__NSArrayI 0x7fd94084bfc0>(\r\nfile:///Applications/Xcode.app/,\r\nfile:///Applications/Xcode-beta.app/,\r\nfile:///Volumes/MacintoshNVME/Applications/Xcode-beta.app/\r\n)\r\n)\r\n```\r\n\r\nI'll update this report shortly with the drive re-attached.",
"title": "xcode-locator chooses Xcode on cloned drive over boot drive",
"type": "issue"
},
{
"action": "created",
"author": "jin",
"comment_id": 405395225,
"datetime": 1531778451000,
"masked_author": "username_1",
"text": "cc @username_2",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jmmv",
"comment_id": 627573255,
"datetime": 1589315011000,
"masked_author": "username_2",
"text": "I don't think we can do anything here. We could add heuristics to prefer the local drive... but then surely someone would want the opposite behavior.\r\n\r\nSeparately, I'm pretty sure Bazel doesn't handle spaces in paths for many reasons. So I don't think it's worth keeping this bug open just for the Xcode case...",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jmmv",
"comment_id": null,
"datetime": 1589315012000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "troberti",
"comment_id": 840424905,
"datetime": 1620896626000,
"masked_author": "username_3",
"text": "I got a similar problem, where Bazel is trying to select a developer dir on a Time Machine backup drive. Surely the Time Machine volume should be excluded.\r\n\r\n```\r\nDEBUG: /private/var/tmp/_bazel_tijmen/3067a3892c7a5fd8ba6cbb960045e8bd/external/bazel_tools/tools/osx/xcode_configure.bzl:89:14: Invoking xcodebuild failed, developer dir: /Volumes/Durandal/Backups.backupdb/Marathon/2020-08-20-091648/Leela - Data/Applications/Xcode-beta.app/Contents/Developer ,return code 256, stderr: Timed out, stdout: \r\n```\r\n\r\nThis then times out the \"local_config_cc\" step, and bazel does not work.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "keith",
"comment_id": 840501406,
"datetime": 1620905993000,
"masked_author": "username_4",
"text": "If you're add the time machine backup to spotlight's ignored directories does that fix it? https://apple.stackexchange.com/a/10086",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "troberti",
"comment_id": 840513521,
"datetime": 1620907681000,
"masked_author": "username_3",
"text": "I tried that, but it gives a notice that you cannot exclude Time Machine backup folders. Presumably that is done automatically.\r\n\r\nI fixed the issue by unmounting the Time Machine drive, and then running bazel works as fast as expected. Otherwise it hangs and fails eventually.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "keith",
"comment_id": 840779489,
"datetime": 1620933825000,
"masked_author": "username_4",
"text": "pretty interesting, I also have backups setup similar to this (not with time machine) and those don't appear in xcode-locators output, theoretically if we could find some heuristic to ignore them it would go here https://github.com/bazelbuild/bazel/blob/f572d3ba1e6c977bace5c638da1628724f7b3e1f/tools/osx/xcode_locator.m#L125-L135",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danielcompton",
"comment_id": 840828368,
"datetime": 1620939284000,
"masked_author": "username_0",
"text": "Apple provides an API [CSBackupSetItemExcluded](https://developer.apple.com/documentation/coreservices/1445043-csbackupsetitemexcluded?language=objc) to mark directories as excluded by Time Machine, that might be readable?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "keith",
"comment_id": 840830018,
"datetime": 1620939469000,
"masked_author": "username_4",
"text": "you can try changing bazel to read this to see https://developer.apple.com/documentation/coreservices/1443602-csbackupisitemexcluded?language=objc",
"title": null,
"type": "comment"
}
] | 5 | 10 | 8,445 | false | false | 8,445 | true |
nuxt/nuxt.js | nuxt | 415,411,155 | 5,128 | null | [
{
"action": "opened",
"author": "uhyo",
"comment_id": null,
"datetime": 1551319158000,
"masked_author": "username_0",
"text": "### Version\n\n[v2.4.5](https://github.com/nuxt.js/releases/tag/v2.4.5)\n\n### Reproduction link\n\n[https://github.com/username_0/gengou-yosou/tree/flyio](https://github.com/username_0/gengou-yosou/tree/flyio)\n\n### Steps to reproduce\n\nRun `npx nuxt generate`.\n\n\n### What is expected ?\n\nNuxt should generate 4889 static pages (4 from `pages` and 4885 as instructed in `nuxt.config.js`) without exhausting memory.\n\n\n### What is actually happening?\n\n`JavaScript heap out of memory` error occurs (on my PC with 8GB of memory).\n\n```\nℹ Production build 10:45:02\n✔ Builder initialized 10:45:02\n✔ Nuxt files generated 10:45:02\n\n✔ Client\n Compiled successfully in 8.73s\n\n✔ Server\n Compiled successfully in 3.71s\n\n\nHash: 2b064fae45a5ab220fa8\nVersion: webpack 4.29.5\nTime: 8730ms\nBuilt at: 2019-02-28 10:45:13\n Asset Size Chunks Chunk Names\n../server/client.manifest.json 6.32 KiB [emitted] \n 105b44c228445b1c9246.js 39.7 KiB 1 [emitted] app\n 1b7b70fde28ccc903939.js 2.97 KiB 7 [emitted] pages/search\n 3257a673f63e2552412d.js 155 KiB 2 [emitted] commons.app\n 6065dc6a071d304c8a5c.js 2.7 KiB 5 [emitted] pages/list/_id\n 686fc8e50276ec882be9.js 3.65 KiB 3 [emitted] pages/_gengou/index\n 6a105754fe9556767f41.js 3.14 KiB 8 [emitted] pages/template/gengou\n 6d633a92122a24985760.js 51.3 KiB 0 [emitted] \n 7572a37206b342264d67.js 1.29 KiB 4 [emitted] pages/index\n 7e424e0b065df14e8d3f.js 466 bytes 6 [emitted] pages/random\n LICENSES 486 bytes [emitted] \n f10fff28c29091d8d8ee.js 2.6 KiB 9 [emitted] runtime\n + 2 hidden assets\nEntrypoint app = f10fff28c29091d8d8ee.js 3257a673f63e2552412d.js 105b44c228445b1c9246.js\n\nHash: c7341f34eb9da355820a\nVersion: webpack 4.29.5\nTime: 3712ms\nBuilt at: 2019-02-28 10:45:16\n Asset Size Chunks Chunk Names\n0e34725d5dff9dc18fcc.js 53.6 KiB 3, 7 [emitted] pages/list/_id\n1b8a280131b10484a5cf.js 53.6 KiB 5, 7 [emitted] pages/search\n1c342de4b10ae6a9765c.js 463 bytes 4 [emitted] pages/random\n426e227990f2f5736949.js 
50.9 KiB 7 [emitted] \n4b13172a835655cba71b.js 3.08 KiB 6 [emitted] pages/template/gengou\n8d222bf0036c3f3690c2.js 3.42 KiB 1 [emitted] pages/_gengou/index\nad89d28763f905c28785.js 1.43 KiB 2 [emitted] pages/index\n server.js 28.9 KiB 0 [emitted] app\n server.manifest.json 963 bytes [emitted] \n + 8 hidden assets\nEntrypoint app = server.js server.js.map\nℹ Generating pages 10:45:16\n\n<--- Last few GCs --->\n\n[6544:0x423ff70] 40055 ms: Mark-sweep 1359.9 (1446.8) -> 1348.4 (1449.3) MB, 805.1 / 0.0 ms (average mu = 0.151, current mu = 0.071) allocation failure scavenge might not succeed\n[6544:0x423ff70] 41139 ms: Mark-sweep 1362.1 (1449.3) -> 1350.7 (1451.8) MB, 1026.7 / 0.0 ms (average mu = 0.098, current mu = 0.053) allocation failure scavenge might not succeed\n\n\n<--- JS stacktrace --->\n\n==== JS stack trace =========================================\n\n 0: ExitFrame [pc: 0x37fd8cedbe1d]\nSecurity context: 0x1c3fe9a9e6e1 <JSObject>\n 1: formatRaw(aka formatRaw) [0xa0c0c18fe09] [internal/util/inspect.js:~474] [pc=0x37fd8da125d7](this=0x2487ff4826f1 <undefined>,ctx=0x2c820b2c42d1 <Object map = 0x1e1f3c7c7591>,value=0x1c3fe9a8faa9 <JSFunction String (sfi = 0x139b35691519)>,recurseTimes=0)\n 2: inspect(aka inspect) [0xa0c0c189bf9] [internal/util/inspect.js:~154] [pc=0x37fd8cf6cd39](thi...\n\nFATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory\n 1: 0x8daaa0 node::Abort() [node]\n 2: 0x8daaec [node]\n 3: 0xad73ce v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]\n 4: 0xad7604 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]\n 5: 0xec4c32 [node]\n 6: 0xec4d38 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]\n 7: 0xed0e12 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]\n 8: 0xed1744 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, 
v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]\n 9: 0xed43b1 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]\n10: 0xe9d834 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]\n11: 0x113cf9e v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]\n12: 0x37fd8cedbe1d \nzsh: abort (core dumped) npx nuxt generate\n```\n\n### Additional comments?\n\nSorry that I haven't managed to make a minimal reproduction. \n\n<!--cmty--><!--cmty_prevent_hook-->\n<div align=\"right\"><sub><em>This bug report is available on <a href=\"https://cmty.app/nuxt\">Nuxt</a> community (<a href=\"https://cmty.app/nuxt/nuxt.js/issues/c8746\">#c8746</a>)</em></sub></div>",
"title": "Nuxt runs out of memory while generating thousands of pages",
"type": "issue"
},
{
"action": "created",
"author": "pimlie",
"comment_id": 468202786,
"datetime": 1551346454000,
"masked_author": "username_1",
"text": "Try to increase node's memory limit. Looking at your error its still at the default 1.5GB. Try to increase it by running node with e.g.`node --max_old_space_size=4000` to set the limit to 4GB\r\n\r\nIf it then still runs out of memory you probably have a memory leak somewhere.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "uhyo",
"comment_id": null,
"datetime": 1551357293000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "uhyo",
"comment_id": 468256558,
"datetime": 1551357293000,
"masked_author": "username_0",
"text": "@plmlie I didn't notice the `generate.concurrency` config; sorry about that.\r\nI admit that it's not a bug as Nuxt has appropriate config for my case. \r\nThank you very much.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 5,751 | false | false | 5,751 | true |
camunda/camunda-modeler | camunda | 409,329,382 | 1,213 | null | [
{
"action": "opened",
"author": "mschoe",
"comment_id": null,
"datetime": 1549982023000,
"masked_author": "username_0",
"text": "// given\r\nsee attached screen cast\r\n\r\n// when\r\nI try to model a message flow between two pools\r\n\r\n// then\r\nI have to know that I can attach the flow to the very left part of the pool only",
"title": "Difficult to attach a message flow to a pool with lanes",
"type": "issue"
},
{
"action": "created",
"author": "mitchhanks",
"comment_id": 501361809,
"datetime": 1560358308000,
"masked_author": "username_1",
"text": "Is this issue related to it?\r\nhttps://www.screencast.com/t/B4kNQG7VBA",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pinussilvestrus",
"comment_id": 502979768,
"datetime": 1560842269000,
"masked_author": "username_2",
"text": "@username_1 I can't reproduce the behavior shown in the video. Can you maybe open [another issue](https://github.com/camunda/camunda-modeler/issues/new/choose) where you briefly describe the bug shown in your screencast?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "nikku",
"comment_id": null,
"datetime": 1560850107000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "nikku",
"comment_id": 503025136,
"datetime": 1560850107000,
"masked_author": "username_3",
"text": "Closed via https://github.com/camunda/camunda-modeler/commit/f16835b4f31d6932ca2dc921e68d661db66e8242.",
"title": null,
"type": "comment"
}
] | 4 | 5 | 578 | false | false | 578 | true |
apache/beam | apache | 377,304,101 | 6,940 | {
"number": 6940,
"repo": "beam",
"user_login": "apache"
} | [
{
"action": "opened",
"author": "robertwb",
"comment_id": null,
"datetime": 1541408286000,
"masked_author": "username_0",
"text": "Reverts \"Merge pull request #6855: Revert #6752 #6798 #6807 #6837 to fix Dataflow test breakage\"\r\n\r\nThis reverts commit a7c3078712db2d11fd0cbc86ff41b35271559093, reversing\r\nchanges made to 52721a1fabc95fe0e26fbb8b6338ae75f773ec8c.\r\n\r\n**Please** add a meaningful description for your change here\r\n\r\n------------------------\r\n\r\nFollow this checklist to help us incorporate your contribution quickly and easily:\r\n\r\n - [ ] Format the pull request title like `[BEAM-XXX] Fixes bug in ApproximateQuantiles`, where you replace `BEAM-XXX` with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.\r\n - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).\r\n\r\nIt will help us expedite review of your Pull Request if you tag someone (e.g. `@username`) to look at it.\r\n\r\nPost-Commit Tests Status (on master branch)\r\n------------------------------------------------------------------------------------------------\r\n\r\nLang | SDK | Apex | Dataflow | Flink | Gearpump | Samza | Spark\r\n--- | --- | --- | --- | --- | --- | --- | ---\r\nGo | [](https://builds.apache.org/job/beam_PostCommit_Go_GradleBuild/lastCompletedBuild/) | --- | --- | --- | --- | --- | ---\r\nJava | [](https://builds.apache.org/job/beam_PostCommit_Java_GradleBuild/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Apex_Gradle/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Dataflow_Gradle/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Flink_Gradle/lastCompletedBuild/) [](https://builds.apache.org/job/beam_PostCommit_Java_PVR_Flink/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Gearpump_Gradle/lastCompletedBuild/) | 
[](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Samza_Gradle/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Java_ValidatesRunner_Spark_Gradle/lastCompletedBuild/)\r\nPython | [](https://builds.apache.org/job/beam_PostCommit_Python_Verify/lastCompletedBuild/) | --- | [](https://builds.apache.org/job/beam_PostCommit_Py_VR_Dataflow/lastCompletedBuild/) </br> [](https://builds.apache.org/job/beam_PostCommit_Py_ValCont/lastCompletedBuild/) | [](https://builds.apache.org/job/beam_PostCommit_Python_VR_Flink/lastCompletedBuild/) | --- | --- | ---",
"title": "[BEAM-5791] Implement time-based pushback in the dataflow harness data plane.",
"type": "issue"
},
{
"action": "created",
"author": "robertwb",
"comment_id": 435880117,
"datetime": 1541425786000,
"masked_author": "username_0",
"text": "R: @username_1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "robertwb",
"comment_id": 435896864,
"datetime": 1541428541000,
"masked_author": "username_0",
"text": "Run Java PreCommit",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "robertwb",
"comment_id": 436214849,
"datetime": 1541502208000,
"masked_author": "username_0",
"text": "Run Java PostCommit",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "charlesccychen",
"comment_id": 437773493,
"datetime": 1542004988000,
"masked_author": "username_1",
"text": "Thanks! This LGTM.",
"title": null,
"type": "comment"
}
] | 2 | 5 | 4,062 | false | false | 4,062 | true |
dnovikoff/tempai-core | null | 313,091,600 | 2 | null | [
{
"action": "opened",
"author": "fumin",
"comment_id": null,
"datetime": 1523394845000,
"masked_author": "username_0",
"text": "Thanks for this great library.\r\nI wonder is there an example, possibly built on top of this library, that contains a playable game in the terminal?\r\nI imagine such a full implementation of the game in the terminal might require additional logic to handle proposing melds, riichi, and imposing furiten.",
"title": "Is there logic for the full game such as declaring riichi and calculating furiten?",
"type": "issue"
},
{
"action": "created",
"author": "dnovikoff",
"comment_id": 380383864,
"datetime": 1523437971000,
"masked_author": "username_1",
"text": "Hello! Thanks for the interest to my library. As it is mentioned in the https://github.com/username_1/tempai-core/blob/master/README.md, this package is the extraction of core code from my private repo. This library does not contain the game mechanics. And at the moment there are no plans to reveal other layers of my code.\r\nStill you can find a playable simplified example of a game for two players here https://github.com/username_1/tenhou/tree/master/cmd/pimboo-server\r\nSee the README file here https://github.com/username_1/tenhou/blob/master/README.md\r\n\r\nFeel free to ask other questions on library functionality.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fumin",
"comment_id": 380405378,
"datetime": 1523442636000,
"masked_author": "username_0",
"text": "Hi Dnovikoff\r\n\r\nThanks for this information, and for pointing to pimboo-server. \r\nI have had cursory look at pimboo-server, and if I understand correctly, it outsources the game mechanics to tenhou.net, doesn't it?\r\n\r\nI am aware that this might be too much to ask, but I wonder if it is possible for you to share some parts of your private repo regarding the game mechanics, under certain terms that you dictate?\r\nHappy to talk about this offline, too.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dnovikoff",
"comment_id": 380460948,
"datetime": 1523454984000,
"masked_author": "username_1",
"text": "Not sure I want to share other layers. Could you please describe your purposes of library usage? I think you can contact me via some messanger. Do you use Telegram? I also have Skype, but I dont use it much.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fumin",
"comment_id": 380508632,
"datetime": 1523463023000,
"masked_author": "username_0",
"text": "My intended use of the library is to use it as a simulator to generate training data. Ultimately, I am planning to use this generated data to train a bot. I do use Telegram, and my handle is @ylfawaw on Telegram. Happy to discuss more about my use case privately.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "fumin",
"comment_id": null,
"datetime": 1525342079000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "dnovikoff",
"comment_id": 546986062,
"datetime": 1572274678000,
"masked_author": "username_1",
"text": "@username_0 please check https://github.com/username_1/mahjong-api",
"title": null,
"type": "comment"
}
] | 2 | 7 | 1,899 | false | false | 1,899 | true |
saltstack/salt | saltstack | 232,489,516 | 41,516 | {
"number": 41516,
"repo": "salt",
"user_login": "saltstack"
} | [
{
"action": "opened",
"author": "kstreee",
"comment_id": null,
"datetime": 1496218861000,
"masked_author": "username_0",
"text": "### What does this PR do?\r\n\r\nBy implementing MessageClientPool for AsyncReqMessageClient (ZMQ),SaltMessageClient (TCP),\r\navoiding blocking waiting of a thread.\r\n\r\n### What issues does this PR fix or reference?\r\n\r\nThis is an additional commit of #37878, and I resolved all items in a todo list of this issue, #38093.\r\n\r\n### Previous Behavior\r\n\r\n1. TCP version of MessageClient could be hanging while writing a data to a socket. ZMQ MessageClient was already patched by a former PR #37878, but TCP version of MessageClient haven't been patched (this issue #38093 mentioned about it).\r\n2. Couldn't set `sock_pool_size` explicitly by config files, thus I had modified the pool size value in code to use ZMQ version of MessageClient pool.\r\n\r\n### New Behavior\r\n\r\n1. By wrapping SaltMessageClient, TCP version of message client can avoid blocking waiting, like a ZMQ case.\r\n2. By making `sock_pool_size` as a config parameter, the size of socket pool could be set explicitly by users.\r\n\r\n### Tests written?\r\n\r\nYes",
"title": "Implements MessageClientPool to avoid blocking waiting for zeromq and tcp communications.",
"type": "issue"
},
{
"action": "created",
"author": "jacksontj",
"comment_id": 305231746,
"datetime": 1496245999000,
"masked_author": "username_1",
"text": "The TCP transport shouldn't be blocking the thread on socket writes as it it using nonblocking sockets wrapped in tornado coroutines. ZMQ shouldn't either (should also be using nonblocking sockets and coroutines). Are we seeing blocking behavior? If so, is it easy to reproduce? If we are in fact seeing blocking behavior there is most likely a bug-- no need to work around it in code (since the libraries are supposed to do it already).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cachedout",
"comment_id": 305233814,
"datetime": 1496246402000,
"masked_author": "username_2",
"text": "@DmitryKuzmenko Could you also swing by here and let us know what you think?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 305251780,
"datetime": 1496250108000,
"masked_author": "username_0",
"text": "In my case, I experienced 'blocking waiting' (I think I miss-chose an expression, it is not blocking an entire thread, but blocking other queued jobs, detail descriptions will be following.) while processing Salt API requests with a ZMQ transport. Before saying the details of 'blocking behavior' examples with code, [the class documentary comment](https://github.com/saltstack/salt/blob/2016.11/salt/transport/zeromq.py#L848-L854) in `AsyncReqMessageClient` is referring this problem directly.\r\n\r\n```\r\nThis class wraps the underylying zeromq REQ socket and gives a future-based\r\ninterface to sending and recieving messages. This works around the primary\r\nlimitation of serialized send/recv on the underlying socket by queueing the\r\nmessage sends in this class. In the future if we decide to attempt to multiplex\r\nwe can manage a pool of REQ/REP sockets-- but for now we'll just do them in serial\r\n```\r\n\r\nLiterally, the AsyncReqMessageClient is delivering async-like behaviors using `send_queue` for send/recv messages. By dequeuing message one by one from `send_queue` in AsyncReqMessageClient, the message client write a data to a ZMQ socket. [This block](https://github.com/saltstack/salt/blob/2016.11/salt/transport/zeromq.py#L957-L981) is the dequeuing block. And the 'blocking behavior' is caused by [this line](https://github.com/saltstack/salt/blob/2016.11/salt/transport/zeromq.py#L974) of code, `ret = yield future`. The `yield` code of dequeuing block makes other queued messages waits until the processing of a current message is finished.\r\n\r\n\r\nQueued messages must wait until a former message is finished to be written. For example, a job which has plenty of host list (300+) as a target host, takes a long time to write the job data to a socket. Let's assume that Salt API faced this kind of situations. 
Then, other requests, which are queued after the problematic job, will wait until the problematic job data is finished being written by the dequeuing block. Salt API can't respond for a while to the 'other requests' that are requested after the problematic job.\r\n\r\n\r\nI thought that this problem could be resolved in several ways.\r\n1. Re-design dequeuing block to dequeue simultaneously unlike current logic.\r\n2. Wrap dequeuing block using 'pool' technique to avoid waiting while dequeuing.\r\n\r\nThe first solution is the definitive and best way to solve this problem, but I think it is really risky and hard to fix this problem by re-designing and re-implementing. Because the entire code of AsyncReqMessageClient is highly wired to ZMQ behaviors.\r\nThe second solution is, I think, a workaround for this issue, however, with this solution, we don't have to modify the dequeuing logic in ZMQ and TCP transports. I think the second solution, which is this PR, is more efficient, and safer than the first solution.\r\n(Obviously, we have a third option of implementing 'multiplexing', as mentioned in the class documentary comment.)\r\n\r\nI apologize that I described this problem as 'blocking behaviors'. The expression is, definitely, too blurry and isn't accurate for this problem.\r\nI hope that this comment will help to understand the problem that I faced in Salt API, and if there are any ambiguous points or if you have questions about this issue, it will be my pleasure to discuss through this PR.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 305252940,
"datetime": 1496250357000,
"masked_author": "username_0",
"text": "In the case of a TCP transport, [this dequeuing block](https://github.com/saltstack/salt/blob/2016.11/salt/transport/tcp.py#L931-L950) cause the same problem that I described in the former comment.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 305265297,
"datetime": 1496252996000,
"masked_author": "username_0",
"text": "By the way, while writing the description of this issue, suddenly raised some ideas and questions that is related to this issue.\r\n\r\n1. It is available to embed 'pool' to each MessageClient for ZMQ and TCP. Instead of using single 'stream' ([`self.stream`](https://github.com/saltstack/salt/blob/2016.11/salt/transport/zeromq.py#L928) for ZMQ, [`self._stream`](https://github.com/saltstack/salt/blob/2016.11/salt/transport/tcp.py#L855) for TCP), by making 'stream' member variable as a list that contains multiple streams, the 'pool' can easily be embedded to each MessageClient class. And simply use the streams by using scheduling strategy, like the round-robin. I think this design is more seamless than current PR. This design is riskier but less tricky than my first solution.\r\n\r\n2. I have assumed that using multiple connections of ZMQ (or TCP) is safe (For example, writing a data X and a data Y using connection A and connection B respectively). I'm sure the safeness of this kind of usage is responsible for ZMQ and TCP libraries but is it actually 'safe'? I am suddenly confused\u001b about the safety properties of the libraries 😂.. Practically saying, until now, there have been no problems of using that kind of ways. I set socket pool as 15 and have served plenty and high loads of requests, there weren't any problems that related to it. But, I'm gonna run some tests and find some references about it and share through additional comments. If you know any recommendable references, please share those for me.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cachedout",
"comment_id": 306859735,
"datetime": 1496854999000,
"masked_author": "username_2",
"text": "@DmitryKuzmenko Could you please comment on this proposal?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 307545487,
"datetime": 1497074878000,
"masked_author": "username_0",
"text": "@skizunov I wanted to make it sure that the pool closes sockets, but like as you said about it, the closing statements were redundant. I agreed what you commented. Thanks for your reviewing this.\r\nI removed the redundant closing statements in an additional commit.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cachedout",
"comment_id": 308501731,
"datetime": 1497461208000,
"masked_author": "username_2",
"text": "Go Go Jenkins!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cachedout",
"comment_id": 310442004,
"datetime": 1498151080000,
"masked_author": "username_2",
"text": "Go Go Jenkins!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cachedout",
"comment_id": 310728022,
"datetime": 1498239427000,
"masked_author": "username_2",
"text": "This appears to be leaking file handles. Tests are now failing with: `Too many open files (epoll.cpp:39)`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 310826214,
"datetime": 1498294543000,
"masked_author": "username_0",
"text": "@username_2 I think the last modification is the cause of leaking file handles. I will fix and commit for the leaking with test cases soon. Thanks fou your inform :).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kstreee",
"comment_id": 311071428,
"datetime": 1498486421000,
"masked_author": "username_0",
"text": "@username_2 I fixed the issue.",
"title": null,
"type": "comment"
}
] | 4 | 14 | 7,355 | false | true | 7,190 | true |
apache/nifi | apache | 413,208,186 | 3,326 | {
"number": 3326,
"repo": "nifi",
"user_login": "apache"
} | [
{
"action": "opened",
"author": "MikeThomsen",
"comment_id": null,
"datetime": 1550801026000,
"masked_author": "username_0",
"text": "…ontainer.\r\n\r\nThank you for submitting a contribution to Apache NiFi.\r\n\r\nIn order to streamline the review of the contribution we ask you\r\nto ensure the following steps have been taken:\r\n\r\n### For all changes:\r\n- [ ] Is there a JIRA ticket associated with this PR? Is it referenced \r\n in the commit message?\r\n\r\n- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen \"-\" character.\r\n\r\n- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?\r\n\r\n- [ ] Is your initial contribution a single, squashed commit?\r\n\r\n### For code changes:\r\n- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?\r\n- [ ] Have you written or updated unit tests to verify your changes?\r\n- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? \r\n- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?\r\n- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?\r\n- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?\r\n\r\n### For documentation related changes:\r\n- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?\r\n\r\n### Note:\r\nPlease ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.",
"title": "NIFI-6067 Enabled support for the JVM remote debugger in the Docker c…",
"type": "issue"
},
{
"action": "created",
"author": "MikeThomsen",
"comment_id": 467644860,
"datetime": 1551220870000,
"masked_author": "username_0",
"text": "@username_1 got time for a review?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "apiri",
"comment_id": 468091492,
"datetime": 1551315115000,
"masked_author": "username_1",
"text": "@username_0 Will scope it out",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "apiri",
"comment_id": 468096438,
"datetime": 1551316555000,
"masked_author": "username_1",
"text": "Looks good here! Merging",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,786 | false | false | 1,786 | true |
IndigoUnited/node-cross-spawn | IndigoUnited | 184,446,759 | 42 | null | [
{
"action": "opened",
"author": "johanneswuerbach",
"comment_id": null,
"datetime": 1477044004000,
"masked_author": "username_0",
"text": "console.log\\(\\\"hello\\\"\\)\r\n ^^^^\r\nSyntaxError: Unexpected token ILLEGAL\r\n at Object.exports.runInThisContext (vm.js:53:16)\r\n at Object.<anonymous> ([eval]-wrapper:6:22)\r\n at Module._compile (module.js:409:26)\r\n at node.js:579:27\r\n at nextTickCallbackWith0Args (node.js:420:9)\r\n at process._tickCallback (node.js:349:13)\r\n```",
"title": "Different escaping between spawn and cross-spawn",
"type": "issue"
},
{
"action": "created",
"author": "satazor",
"comment_id": 255578611,
"datetime": 1477215660000,
"masked_author": "username_1",
"text": "That's strange indeed. At the moment, I'm overwhelmed with work. Can you pinpoint where the issue is?\r\nThanks",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 256706126,
"datetime": 1477587610000,
"masked_author": "username_0",
"text": "@username_1 send a test + fix here: https://github.com/IndigoUnited/node-cross-spawn/issues/43",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 256746479,
"datetime": 1477596923000,
"masked_author": "username_1",
"text": "@username_0 Running your example on Win10 throws me an error:\r\n\r\n```\r\nspawn [eval]:1\r\nconsole.log\\(\"hello\"\\)\r\n```\r\n\r\nThe correct way is:\r\n\r\n```js\r\nconst cp1 = spawn('node', ['-e', 'console.log(\\\\\"hello\\\\\")'], { shell: true });\r\n\r\ncp1.stdout.on('data', (data) => console.log('spawn', data.toString()));\r\ncp1.stderr.on('data', (data) => console.log('spawn', data.toString()));\r\n```\r\n\r\nAnyway, you are right that the behavior between spawn and cross-spawn is not consistent when `options.shell` is provided. I've taken a look at your PR but I think we should do NO escaping/manipulation whatsoever when `options.shell` is specified so that we mach NodeJS [behavior](https://github.com/nodejs/node/blob/master/lib/child_process.js#L326-L343).\r\n\r\nI'm currently working on a PR based on your code.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 256766404,
"datetime": 1477601736000,
"masked_author": "username_0",
"text": "...\r\n/bin/sh: -c: line 0: syntax error near unexpected token `('\r\n/bin/sh: -c: line 0: `node -e console.log(\\\"hello\\\")'\r\n```\r\n\r\nas the `(` also needs to be escaped.\r\n\r\nWhile I could modify the command for both platforms, I thought cross-spawn should handle this somehow automagically.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 256785441,
"datetime": 1477606502000,
"masked_author": "username_1",
"text": "Thinking out of the box, I think the correct call here is not to support the `shell` option.\r\n\r\nLet me elaborate.. `cross-spawn` uses a shell on windows (cmd.exe) and does escape the command and args as necessary:\r\n\r\n- If user passes `options.shell=false`, what's the behavior that cross-spawn should follow since it needs `cmd.exe`?\r\n- If user passes `options.shell=true` could be easily supported, but since the args passed are already escaped, we need to bypass the escaping and escape command in case of shebangs, see: https://github.com/IndigoUnited/node-cross-spawn/blob/master/lib/parse.js#L107. \r\n- If user passes `options.shell=string`, then he is trying to do something fancy and not so cross platform..\r\n\r\nI think that, after weighting things, we should error out if user passed `options.shell=truish|false`.\r\nWhat's your opinion @timdp?\r\n\r\n@username_0 why are you using the `shell` option in the example above? You don't need to.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 256787294,
"datetime": 1477607092000,
"masked_author": "username_0",
"text": "I need the shell option so commands like `spawn('coffee', ['-c', '*.coffee'])` are correctly expanded in https://github.com/testem/testem/pull/998 on unix platforms.\r\n\r\nAnother option might be to unset `shell` when the platform is windows and to remove the escaping already in testem or to use plain windows node spawn when the option is set 🤔",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 256789369,
"datetime": 1477607795000,
"masked_author": "username_1",
"text": "@username_0 I see. Well we can certainly support case `2` and `3`. For case `1`, we should error out saying we don't support it because we need to use `cmd.exe`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 256789571,
"datetime": 1477607861000,
"masked_author": "username_0",
"text": "That would definitely work for me. 👍",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 256790110,
"datetime": 1477608031000,
"masked_author": "username_1",
"text": "@username_0 alright I'm going to work on this asap. I'm feeling very tired these days, working too much, so bare with me.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 256790517,
"datetime": 1477608166000,
"masked_author": "username_0",
"text": "No worries, if there is anything I can help with, just ping :-)\r\n\r\nCan also refine https://github.com/IndigoUnited/node-cross-spawn/pull/43 if you give me any hints / more tests cases.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 256792058,
"datetime": 1477608702000,
"masked_author": "username_1",
"text": "#43 is a great start, but it needs to be refined. The desired behavior looks like this:\r\n\r\nIf `options.shell === false`, then fail with a \"not supported message\".\r\nIf `options.shell` is truish, then bypass any escaping; if a shebang is detected, the arg pushed needs to be escaped.\r\nIf `options.shell` is null, then it basically is the current behavior.\r\n\r\nTests that need to be added:\r\n - The test added in #43 \r\n - Test that shebang works with `shell:true` and also test if the arg pushed is correctly escaped\r\n\r\nLet me know if you want to work on this, otherwise I will pick it up when I get some time.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 257086030,
"datetime": 1477740069000,
"masked_author": "username_1",
"text": "@username_0 are you sure the windows shell support wildcard expansion? I think that, in your example, is actually \"coffee\" that is expanding the files, not the shell. See: http://superuser.com/questions/460598/is-there-any-way-to-get-the-windows-cmd-shell-to-expand-wildcard-paths",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 257088793,
"datetime": 1477743841000,
"masked_author": "username_0",
"text": "Correct, the windows shell doesn't support this. My use-case was more to allow running the same command (like the provided node script) consistently across all OSes.\r\n\r\nSome commands testem receives are already escaped and were previously executed using exec, which has a limited output buffer size. I'm currently in the process of migrating all calls to use spawn with or without the shell option (depending whether exec was use previously).\r\n\r\nSo my use case was more to make cross-spawn more consistently work as a drop in replacement and not really about the shell features as those are shell dependent anyway. Does that make sense?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 257088909,
"datetime": 1477744006000,
"masked_author": "username_0",
"text": "Another idea might be to just forward when shell: true and isWin to the default node implementation?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 257090379,
"datetime": 1477746016000,
"masked_author": "username_1",
"text": "I like that idea. We need to make that clear in the Readme. Time for a new PR? :p",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 257104547,
"datetime": 1477762284000,
"masked_author": "username_1",
"text": "@username_0 can you take a look on https://github.com/IndigoUnited/node-cross-spawn/pull/45?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "johanneswuerbach",
"comment_id": 257107696,
"datetime": 1477765665000,
"masked_author": "username_0",
"text": "Also modified https://github.com/IndigoUnited/node-cross-spawn/pull/43, but having trouble to make the test pass.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "satazor",
"comment_id": 257156428,
"datetime": 1477839842000,
"masked_author": "username_1",
"text": "@username_0 I've worked a bit more and I came up with #46. It adds support for the `shell` option, even for node versions < 6. That PR also refactor the code to make it more clear.\r\n\r\nTell me what you think.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "satazor",
"comment_id": null,
"datetime": 1477845027000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "satazor",
"comment_id": 257161600,
"datetime": 1477845163000,
"masked_author": "username_1",
"text": "@username_0 #46 landed in `v5.0.0`.",
"title": null,
"type": "comment"
}
] | 2 | 21 | 5,611 | false | false | 5,611 | true |
sensu/sandbox | sensu | 434,287,798 | 49 | {
"number": 49,
"repo": "sandbox",
"user_login": "sensu"
} | [
{
"action": "opened",
"author": "elias-neves93",
"comment_id": null,
"datetime": 1555508189000,
"masked_author": "username_0",
"text": "",
"title": " Fix rabbitmq yum repo",
"type": "issue"
},
{
"action": "created",
"author": "jspaleta",
"comment_id": 484706112,
"datetime": 1555625339000,
"masked_author": "username_1",
"text": "Just tested locally, looks good! merging.\r\n\r\nThanks for taking the time to contribute the fix! :+1:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 100 | false | false | 100 | false |
osmlab/name-suggestion-index | osmlab | 421,789,769 | 2,470 | {
"number": 2470,
"repo": "name-suggestion-index",
"user_login": "osmlab"
} | [
{
"action": "opened",
"author": "Adamant36",
"comment_id": null,
"datetime": 1552729474000,
"masked_author": "username_0",
"text": "I created some Wikidata entries for a couple of brands and also added the operator information for Arby's. \r\n\r\nCloses #1749, closes #1677, closes #1653, closes #1632, closes #1589, closes #1531, closes #1241, closes #1204, closes #1182, closes #1106, closes #871, closes #856",
"title": "add Wikidata entries",
"type": "issue"
},
{
"action": "created",
"author": "bhousel",
"comment_id": 473528818,
"datetime": 1552741785000,
"masked_author": "username_1",
"text": "Awesome thanks @username_0 👍",
"title": null,
"type": "comment"
}
] | 2 | 2 | 302 | false | false | 302 | true |
opnsense/plugins | opnsense | 338,557,990 | 722 | null | [
{
"action": "opened",
"author": "jpawlowski",
"comment_id": null,
"datetime": 1530794503000,
"masked_author": "username_0",
"text": "I would like the frontend to be listening independent from an IP address but to all IP addresses of a specific interface. Main use case would be to listen to an interface with dynamic IP address, like with PPP dial-up connections (xDSL).\r\n\r\nFrom the HAproxy documentation, I think one should be able to enter 0.0.0.0:<port> and :::<port> as listen address first, and then add something like \"interface pppoe0\". I tried to add this to through the Advanced SSL settings under Bind options but it seems dynamic content is not excepted there, see screenshot:\r\n\r\n\r\n\r\nIf such settings would work, it might even be nicer to just select one (or more) interfaces from a dropdown menu as an alternative to the \"Bind address\" text field so it does not get mixed with the SSL Advanced Settings (even though in the generated config file it will belong together).",
"title": "net/haproxy: Bind frontend to interface",
"type": "issue"
},
{
"action": "created",
"author": "jpawlowski",
"comment_id": 402710045,
"datetime": 1530794752000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fichtner",
"comment_id": 402711661,
"datetime": 1530795139000,
"masked_author": "username_1",
"text": "FreeBSD port doesn't seem to be aware of this option, at quick glance nothing that seems to match the description here https://github.com/opnsense/ports/blob/master/net/haproxy-devel/Makefile#L26",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fraenki",
"comment_id": 403320355,
"datetime": 1531087111000,
"masked_author": "username_2",
"text": "Unless HAProxy adds support for this feature on FreeBSD, there's nothing we can do. :(",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "fraenki",
"comment_id": null,
"datetime": 1531087111000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 5 | 1,237 | false | false | 1,237 | false |
ppumkin/PrestoCoverage | null | 366,284,726 | 3 | null | [
{
"action": "opened",
"author": "ppumkin",
"comment_id": null,
"datetime": 1538564695000,
"masked_author": "username_0",
"text": "The ability to merge line visits if the same lines are found across other files. \r\n\r\nThis could be useful if you have Unit tests and Integration tests and need to see the overall coverage of your library or application. \r\n\r\nThis will require me to create an internal model to track these things and rethink the caching methods too.",
"title": "Merge visits from several files",
"type": "issue"
},
{
"action": "created",
"author": "ppumkin",
"comment_id": 428341600,
"datetime": 1539117296000,
"masked_author": "username_0",
"text": "Solved with \r\nhttps://github.com/username_0/PrestoCoverage/pull/7",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ppumkin",
"comment_id": null,
"datetime": 1539117296000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 393 | false | false | 393 | true |
ICT-BDA/EasyML | ICT-BDA | 456,505,735 | 109 | null | [
{
"action": "opened",
"author": "TangVVV",
"comment_id": null,
"datetime": 1560580055000,
"masked_author": "username_0",
"text": "",
"title": "请问--master spark://bda07:7077这条指令中bda07指的是什么啊",
"type": "issue"
},
{
"action": "created",
"author": "feiruyun",
"comment_id": 504007299,
"datetime": 1561033881000,
"masked_author": "username_1",
"text": "应该是宿主机或者hadoop-master的IP地址吧",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "TangVVV",
"comment_id": null,
"datetime": 1561346181000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "TangVVV",
"comment_id": 504841606,
"datetime": 1561346181000,
"masked_author": "username_0",
"text": "试了试确实是的,谢谢!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 38 | false | false | 38 | false |
platformio/platformio-vscode-ide | platformio | 444,902,039 | 785 | null | [
{
"action": "opened",
"author": "GeorgeFlorian",
"comment_id": null,
"datetime": 1558006062000,
"masked_author": "username_0",
"text": "Hello !\r\nI had a piece of code that yesterday worked without any errors, but now it crashes the ESP.\r\nThe only thing I did was to update `framework-arduinoespressif32` inside Platformio IDE.\r\n\r\n```\r\n//------------------------- fileReadLines()\r\nvoid fileReadLines(File file, String x[]) {\r\n int i = 0;\r\n while(file.available()){\r\n Serial.println((String)\"File size: \" + file.size());\r\n String line= file.readStringUntil('\\n');\r\n line.trim();\r\n x[i] = line; // This is where it crashes after i\r\n i++;\r\n logOutput(line);\r\n } \r\n}\r\n```\r\n\r\nThe problem is that the `file.size()` stays the same, and `while(file.available())` goes on until I reach the limit of `String x[]`.\r\n\r\nHow was it that up until the update it worked and now it doesn't ?\r\n\r\nIs `String line= file.readStringUntil('\\n');` not working like before ?",
"title": "ESP32 crashes after `framework-arduinoespressif32` update",
"type": "issue"
},
{
"action": "closed",
"author": "ivankravets",
"comment_id": null,
"datetime": 1558019131000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "ivankravets",
"comment_id": 493106442,
"datetime": 1558019131000,
"masked_author": "username_1",
"text": "Please switch to the stable version of dev/platform or report to https://github.com/espressif/arduino-esp32/issues",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "GeorgeFlorian",
"comment_id": 493119052,
"datetime": 1558020993000,
"masked_author": "username_0",
"text": "I did both of those things. I also thought it may be a Platformio issue. Thank you !",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,061 | false | false | 1,061 | false |
fo-dicom/fo-dicom | fo-dicom | 430,404,923 | 838 | null | [
{
"action": "opened",
"author": "devsko",
"comment_id": null,
"datetime": 1554723985000,
"masked_author": "username_0",
"text": "In a view weeks .NET Core 3 (and .NET Standard 2.1) gets released with support for many scenarios that were .NET Desktop only until now. Are there any plans to add a fo-DICOM platform version for this release? It should include more or less the DICOM.Desktop version in respect of files/networking/codecs/imaging. Especially supporting the native JPEG codecs would be interesting.",
"title": "Question about .NET Core 3 support",
"type": "issue"
},
{
"action": "created",
"author": "gofal",
"comment_id": 480933367,
"datetime": 1554745514000,
"masked_author": "username_1",
"text": "currently the cross platform is PCL, which is rather outdated. I have plans to raise it to .net standard 2.0, where some of the current platform spcific implementations of networking- or filesystem managers are already included.\r\nSince .net standard 2.1 will not be supported by net framework, which is still a very big thing, I personally am not so much interested in net standard 2.1. But maybe someone else is? Pull requests for a new project are wellcome.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gofal",
"comment_id": null,
"datetime": 1556093930000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 839 | false | false | 839 | false |
JosPolfliet/pandas-profiling | null | 238,831,362 | 49 | null | [
{
"action": "opened",
"author": "eyadsibai",
"comment_id": null,
"datetime": 1498565905000,
"masked_author": "username_0",
"text": "``` File \"/python3.6/site-packages/pandas_profiling/base.py\", line 59, in describe_numeric_1d\r\n stats['range'] = stats['max'] - stats['min']\r\nTypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.```\r\nI got this error",
"title": "TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.",
"type": "issue"
},
{
"action": "created",
"author": "tejaslodaya",
"comment_id": 312811595,
"datetime": 1499156020000,
"masked_author": "username_1",
"text": "Boolean values shouldn't enter `describe_numeric_1d` function. It must be treated as Categorical.\r\n\r\nCan you provide a snapshot of the data you're running it on?\r\nOr the datatype may also help.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "eyadsibai",
"comment_id": 312839672,
"datetime": 1499162903000,
"masked_author": "username_0",
"text": "I had a boolean column using latest numpy version... and pandas 0.20.2",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tejaslodaya",
"comment_id": 312842585,
"datetime": 1499163735000,
"masked_author": "username_1",
"text": "In `base.py` file, go to line number 59.\r\nChange \r\n`stats['range'] = stats['max'] - stats['min']`\r\nto \r\n`stats['range'] = np.subtract(stats['max'], stats['min'], dtype=np.float32)`\r\nand report the results.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tejaslodaya",
"comment_id": 312880278,
"datetime": 1499175394000,
"masked_author": "username_1",
"text": "@username_0 Does it work?\r\nI couldn't reproduce the results. So, bear with me for some manual editing.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "eyadsibai",
"comment_id": 320525626,
"datetime": 1502046150000,
"masked_author": "username_0",
"text": "Yes I believe this solves the problem!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "eyadsibai",
"comment_id": null,
"datetime": 1510363764000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "chinmaythosar",
"comment_id": 366278723,
"datetime": 1518797254000,
"masked_author": "username_2",
"text": "yep. had a similar issue where series.py at line ~1800 had to manually edit\r\n'good = -bad' to 'good = ~bad'\r\n\r\nsurprisingly even when I commented the first line to add the second it game me an error saying - is not supported ...",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Rubyguanzi",
"comment_id": 439340177,
"datetime": 1542361859000,
"masked_author": "username_3",
"text": "I also got this error ,and then check the numpy version numpy1.8.1 can ,but 1.5.1 not",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jvschoen",
"comment_id": 573790482,
"datetime": 1578938360000,
"masked_author": "username_4",
"text": "Still encountering this error when going through a GCP tutorial:\r\nhttps://cloud.google.com/solutions/building-a-propensity-model-for-financial-services-on-gcp\r\nTrying to run the 2nd cell in section 2) Data Exploration.",
"title": null,
"type": "comment"
},
{
"action": "reopened",
"author": "sbrugman",
"comment_id": null,
"datetime": 1578938425000,
"masked_author": "username_5",
"text": "``` File \"/python3.6/site-packages/pandas_profiling/base.py\", line 59, in describe_numeric_1d\r\n stats['range'] = stats['max'] - stats['min']\r\nTypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.\r\n```\r\nI got this error",
"title": "TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.",
"type": "issue"
},
{
"action": "created",
"author": "sbrugman",
"comment_id": 573798407,
"datetime": 1578939436000,
"masked_author": "username_5",
"text": "@username_4 thank you for reporting this issue. Are you in the position to export the data frame (for example .csv) and post it here easily? \r\n\r\nWithout using BigQuery, the code seems to work just fine:\r\n```\r\n# As features on the Google Cloud Platform website:\r\n# https://cloud.google.com/solutions/building-a-propensity-model-for-financial-services-on-gcp\r\n\r\n\r\nimport pandas as pd\r\n\r\nfrom pandas_profiling import ProfileReport\r\n\r\n\r\nif __name__ == \"__main__\":\r\n df = pd.read_csv('https://storage.googleapis.com/erwinh-public-data/bankingdata/bank-full.csv', sep=';')\r\n\r\n profile = ProfileReport(df, title=\"GCP Banking Data\")\r\n profile.to_file('gcp_banking_report.html', silent=False)\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jvschoen",
"comment_id": 573896273,
"datetime": 1578953393000,
"masked_author": "username_4",
"text": "@username_5 I worked through a force update earlier that worked, but this seems a bit cleaner approach. I'll edit it in my notebook and run accordingly in the future. Thanks for the quick response and solution!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jvschoen",
"comment_id": 585267290,
"datetime": 1581522146000,
"masked_author": "username_4",
"text": "To add to this thread. This solution will break the notebook instance and you can no longer access the jupyterlab. you'll need to ssh in to troubleshoot. However, I did figure out you need to `pip uninstall attr` then `pip install attrs` before shutting down your instance to ensure jupyter starts up properly. I know this is a bit of a tangent from the question, and more of an issue with GCP, but just wanted to make sure if someone tried this solution they may run into issues getting back into their AI notebook.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "sbrugman",
"comment_id": null,
"datetime": 1586811137000,
"masked_author": "username_5",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "joshtemple",
"comment_id": 618628905,
"datetime": 1587671659000,
"masked_author": "username_6",
"text": "What is the status of this issue? I'm getting the same error when trying to run the report in a JupyterLab notebook. It looks like solutions have been proposed (modifying the local version of the user's code), but no PRs are referenced or merged.",
"title": null,
"type": "comment"
},
{
"action": "reopened",
"author": "sbrugman",
"comment_id": null,
"datetime": 1587817206000,
"masked_author": "username_5",
"text": "``` File \"/python3.6/site-packages/pandas_profiling/base.py\", line 59, in describe_numeric_1d\r\n stats['range'] = stats['max'] - stats['min']\r\nTypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.\r\n```\r\nI got this error",
"title": "TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.",
"type": "issue"
},
{
"action": "created",
"author": "sbrugman",
"comment_id": 619371309,
"datetime": 1587817312000,
"masked_author": "username_5",
"text": "@username_6 \r\nCould you provide the minimal information to reproduce this error? [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help crafting a minimal bug report.\r\n\r\n- the minimal code you are using to generate the report\r\n\r\n- which environment you are using: \r\n - operating system (e.g. Windows, Linux, Mac)\r\n - Python version (e.g. 3.7)\r\n - jupyter notebook, console or IDE such as PyCharm\r\n - Package manager (e.g. pip, conda)\r\n - packages (`pip freeze > packages.txt`)\r\n\r\n- a sample or description of the dataset (`df.head()`, `df.info()`)",
"title": null,
"type": "comment"
}
] | 8 | 21 | 4,357 | false | true | 4,335 | true |
mozilla/nixpkgs-mozilla | mozilla | 357,735,405 | 111 | {
"number": 111,
"repo": "nixpkgs-mozilla",
"user_login": "mozilla"
} | [
{
"action": "opened",
"author": "Mic92",
"comment_id": null,
"datetime": 1536251652000,
"masked_author": "username_0",
"text": "fixes #110",
"title": "servo: fix evaluation on unstable/18.09",
"type": "issue"
},
{
"action": "created",
"author": "Mic92",
"comment_id": 419160204,
"datetime": 1536251748000,
"masked_author": "username_0",
"text": "This blocks https://github.com/nix-community/NUR/pull/77",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Mic92",
"comment_id": 419674496,
"datetime": 1536442648000,
"masked_author": "username_0",
"text": "Anything blocking this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zimbatm",
"comment_id": 419675338,
"datetime": 1536443645000,
"masked_author": "username_1",
"text": "@garbas @username_2 mind merging this? I lost access on this repo",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nbp",
"comment_id": 419964478,
"datetime": 1536594942000,
"masked_author": "username_2",
"text": "Trying with nix-instantiate, I have the same error with:\r\n\r\n```nix\r\nlet x = {}; in let inherit (x) mesa; in (mesa ? null)\r\n```\r\n\r\nThe `?` operator will return `true` or `false` whether the attribute `null` exists in `mesa`. I guess you meant something like `pkgs.mesa or null` ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Mic92",
"comment_id": 420060977,
"datetime": 1536613569000,
"masked_author": "username_0",
"text": "Weird. I thought I tested this.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Mic92",
"comment_id": 420705565,
"datetime": 1536768454000,
"masked_author": "username_0",
"text": "This pull request will be obsolete when this one is merged: https://github.com/mozilla/nixpkgs-mozilla/pull/114",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Mic92",
"comment_id": 421297056,
"datetime": 1536918268000,
"masked_author": "username_0",
"text": "obsolete now",
"title": null,
"type": "comment"
}
] | 3 | 8 | 579 | false | false | 579 | true |
NurlashKO/BugTrackerBot | null | 345,564,264 | 1 | {
"number": 1,
"repo": "BugTrackerBot",
"user_login": "NurlashKO"
} | [
{
"action": "opened",
"author": "abdrakhman",
"comment_id": null,
"datetime": 1532899776000,
"masked_author": "username_0",
"text": "",
"title": "Update README.md",
"type": "issue"
},
{
"action": "created",
"author": "NurlashKO",
"comment_id": 408741998,
"datetime": 1532924155000,
"masked_author": "username_1",
"text": "Wow, thank you for translation and styling :)\r\nGood job !",
"title": null,
"type": "comment"
}
] | 2 | 2 | 57 | false | false | 57 | false |
micky2be/superlogin-client | null | 227,592,263 | 58 | {
"number": 58,
"repo": "superlogin-client",
"user_login": "micky2be"
} | [
{
"action": "opened",
"author": "polco",
"comment_id": null,
"datetime": 1494402038000,
"masked_author": "username_0",
"text": "",
"title": "added Session and ConfigurationOptions types",
"type": "issue"
},
{
"action": "created",
"author": "polco",
"comment_id": 300404013,
"datetime": 1494402693000,
"masked_author": "username_0",
"text": "old Axios versions contain wrong types.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "micky2be",
"comment_id": 300423381,
"datetime": 1494407547000,
"masked_author": "username_1",
"text": "Thanks.\r\nReleased with version 0.6.0",
"title": null,
"type": "comment"
}
] | 2 | 3 | 75 | false | false | 75 | false |
sindresorhus/ky | null | 383,223,587 | 62 | null | [
{
"action": "opened",
"author": "jdalrymple",
"comment_id": null,
"datetime": 1542821249000,
"masked_author": "username_0",
"text": "From looking at the tags in the package.json file, I noticed it says XHR, but when looking at the actual repository's docs, it uses the browser fetch API. Does this mean this library does not support XHR?",
"title": "Question: Is XHR Supported",
"type": "issue"
},
{
"action": "closed",
"author": "sindresorhus",
"comment_id": null,
"datetime": 1542822949000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "sindresorhus",
"comment_id": 440757441,
"datetime": 1542822949000,
"masked_author": "username_1",
"text": "Yes, no XHR, only Fetch. You can use the Fetch polyfill in browsers that doesn't support Fetch.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jdalrymple",
"comment_id": 440811171,
"datetime": 1542835070000,
"masked_author": "username_0",
"text": "Ah good to know. Might want to remove xhr from the keywords to remove confusion in the future. Thanks for the quick response!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 424 | false | false | 424 | false |
screwdriver-cd/launcher | screwdriver-cd | 332,080,719 | 181 | {
"number": 181,
"repo": "launcher",
"user_login": "screwdriver-cd"
} | [
{
"action": "opened",
"author": "tkyi",
"comment_id": null,
"datetime": 1528908945000,
"masked_author": "username_0",
"text": "## Context\r\nI'd like to fix the launcher so I can test out my change (https://github.com/screwdriver-cd/launcher/pull/180) but the launcher is broken from Darren's change, screwdriver-cd/launcher#177 (> v4.0.103).\r\n\r\nSee Darren's comment https://github.com/screwdriver-cd/screwdriver/issues/979#issuecomment-394546810\r\n\r\n## Objective\r\nReverts screwdriver-cd/launcher#177",
"title": "Revert \"fix: defer emitter creation to container start-up\"",
"type": "issue"
},
{
"action": "created",
"author": "minz1027",
"comment_id": 397030237,
"datetime": 1528912815000,
"masked_author": "username_1",
"text": "shouldn't need to do this. we override the entrypoint in all executors now",
"title": null,
"type": "comment"
}
] | 2 | 2 | 444 | false | false | 444 | false |
dotnet/roslyn | dotnet | 415,421,074 | 33,750 | null | [
{
"action": "opened",
"author": "heejaechang",
"comment_id": null,
"datetime": 1551321837000,
"masked_author": "username_0",
"text": "```csharp\r\npublic void Test()\r\n {\r\n ValueTuple<string, int> a = (\"a\", 1);\r\n ValueTuple<string, int> a1;\r\n\r\n if (a is ValueTuple<string, int> c1)\r\n {\r\n a1 = c1;\r\n }\r\n\r\n Tuple<string, int> t = new Tuple<string, int>(\"a\", 1);\r\n Tuple<string, int> t1;\r\n\r\n if (t is Tuple<string, int> t2)\r\n {\r\n t1 = t2;\r\n }\r\n }\r\n```\r\n\r\nFAR on ValueTuple returns nothing, but FAR on Tuple returns right references.\r\n\r\nlooks like FAR is not updated to handle ValueTuple case.",
"title": "ValueTuple is not supported in FAR",
"type": "issue"
},
{
"action": "created",
"author": "heejaechang",
"comment_id": 468114960,
"datetime": 1551321872000,
"masked_author": "username_0",
"text": "@username_1 can there be any reason supporting ValueTuple in FAR hard?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jcouv",
"comment_id": 468117665,
"datetime": 1551322680000,
"masked_author": "username_1",
"text": "The symbol for representing tuples and `ValueTuple` instances is not a plain `NamedTypeSymbol`, it is a `TupleTypeSymbol` which is a wrapper around a `NamedTypeSymbol`.\r\nThat's why FAR behaves differently on `System.ValueTuple`/tuple than it does on `System.Tuple`.\r\n\r\nMy sense is that it shouldn't be too hard to add support in FAR. But it also doesn't seem too important.\r\n\r\nFor `System.ValueTuple`, I don't think many people would write that.\r\nFor the tuple syntax, that is what I expect people to use, but I don't think it is crucial for FAR/GoToDefinition to work on that syntax. For comparison, you can use FAR/GoToDefinition on `System.Nullable<int>`, but not on the short syntax (`int?`).\r\nAnother indication is that we didn't get reports on that.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "heejaechang",
"comment_id": 468119388,
"datetime": 1551323191000,
"masked_author": "username_0",
"text": "@username_1 thank you for the explanation. sure I dont think it is urgent either. tagging @jinujoseph",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CyrusNajmabadi",
"comment_id": 468149019,
"datetime": 1551333713000,
"masked_author": "username_2",
"text": "This was considered, but not implemented, due to too many open questions about what this should mean. For example, if a user is doing FAR on `(a: 1, b: 2)` should they only find other tuples with those names? Or could they find `(c: 1, d: 2)` as well?\r\n\r\nFor the most part, the IDE does not distinguish `ValueTuple<...>` from `(...)`. So the choice to not do anything for `(...)` extends to `ValueTuple<...>` as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "heejaechang",
"comment_id": 468508891,
"datetime": 1551404439000,
"masked_author": "username_0",
"text": "if it is a design question then let me tag @kendrahavens \r\n\r\nfor me, since we already doesn't distinguish List<string> with List<int> or Tuple<int, string> with Tuple<string, string>, I am not sure why we should do differently for ValueTuple. \r\n\r\nalso, for syntactic sugar (...) case, our FAR already doesn't show \"var\" in FAR as well even if one can initiate FAR from \"var\". and I am pretty sure both get bound to same type. so since we already does this kind of thing, I think it is fine us not including (...) in ValueTuple case as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CyrusNajmabadi",
"comment_id": 468525956,
"datetime": 1551409780000,
"masked_author": "username_2",
"text": "Right. Which is why we didn't bother. Because we didn't want to make it seem as if we supported it, only to fail to actually support it.\r\n\r\nThe situation now is: this is not supported.\r\n\r\nBeing not supported is easy to explain. It's consistent. If you partially support, then you've made a less understandable and more confusing situation for users. \r\n\r\n\r\nAs i already mentioned, this was discussed and we went through these points then. I'm not sure how/if anything has changed since the original language feature was added and we did the entire IDE pass in terms of how it should behave. Do you feel like anything has come up to overturn those decisions?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CyrusNajmabadi",
"comment_id": 468526133,
"datetime": 1551409841000,
"masked_author": "username_2",
"text": "We drove the decisions here based on what we thought would be an acceptable user experience, and then if we could provide that experience. When we found no suitable solution, we scrapped things. I don't want us to just do \"something\" just because we can. We should do it because it would be the right thing to do for customers.",
"title": null,
"type": "comment"
}
] | 3 | 8 | 3,475 | false | false | 3,475 | true |
yingruoheng/yingruoheng.github.io | null | 404,312,076 | 10 | null | [
{
"action": "opened",
"author": "yingruoheng",
"comment_id": null,
"datetime": 1548771412000,
"masked_author": "username_0",
"text": "https://username_0.com/2019/01/29/%E7%94%A8vue-cli%E5%BF%AB%E9%80%9F%E6%90%AD%E5%BB%BAvue%E8%84%9A%E6%89%8B%E6%9E%B6+%E5%AE%9E%E7%8E%B0vue%E9%A1%B5%E9%9D%A2%E8%B7%B3%E8%BD%AC%E7%9A%84%E5%B0%8Fdemo/ \n\n 等风来 不如追风去",
"title": "用vue-cli快速搭建vue脚手架+实现vue页面跳转的小demo - 若珩的博客 | YRH Blog",
"type": "issue"
}
] | 1 | 1 | 211 | false | false | 211 | true |
bamlab/react-native-image-resizer | bamlab | 401,989,916 | 162 | null | [
{
"action": "opened",
"author": "PradeepNedun",
"comment_id": null,
"datetime": 1548197494000,
"masked_author": "username_0",
"text": "I am downloading some images from api and saving inside default app directory. On reload, my api is again called, it responds the different file content with same filename. But in this case in my app if I read with RNFS.DocumentDirectoryPath of the file path, it always load the initial image content downloaded. But if I close and open my app, the latest downloaded content is displayed as per my requirement. This is observed only in android not in ios. Please suggest solution.",
"title": "RNFS.DocumentDirectoryPath of re-downloaded file is loaded incorrect. ",
"type": "issue"
}
] | 1 | 1 | 480 | false | false | 480 | false |
zulip/zulip | zulip | 270,687,513 | 7,265 | {
"number": 7265,
"repo": "zulip",
"user_login": "zulip"
} | [
{
"action": "opened",
"author": "rht",
"comment_id": null,
"datetime": 1509635069000,
"masked_author": "username_0",
"text": "So that the certbot nginx patch can be automatically applied after the nginx confs have be set up by puppet.",
"title": "prod install: Move certbot setup to be after puppet apply.",
"type": "issue"
},
{
"action": "created",
"author": "timabbott",
"comment_id": 341567698,
"datetime": 1509659124000,
"masked_author": "username_1",
"text": "@username_0 have you tested this? I would expect this change to break the certbot feature, since `zulip-puppet-apply` will fail in the event that certs don't exist yet. Is there a second half of this changeset that's missing?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rht",
"comment_id": 341614402,
"datetime": 1509679011000,
"masked_author": "username_0",
"text": "I wasn't aware of this. Should the certbot download and setup happen within the puppet apply instead? In this case, as I had stated, there is a Puppet module that can be used https://github.com/voxpupuli/puppet-letsencrypt, depending on how elaborate the setup needs to be.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rht",
"comment_id": 341626167,
"datetime": 1509687674000,
"masked_author": "username_0",
"text": "Another option is to disable the certs check within Puppet when the `--certbot` flag is activated.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timabbott",
"comment_id": 341761834,
"datetime": 1509727748000,
"masked_author": "username_1",
"text": "@username_0 please don't submit pull requests that aren't tagged WIP and you haven't tested. It wastes the time of reviewers. \r\n\r\nI think there's no need to move the certbot setup -- you don't need to use certbot in the same mode for renewal as you used to create the cert in the first place. So maybe just drop that commit?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rht",
"comment_id": 341808691,
"datetime": 1509738581000,
"masked_author": "username_0",
"text": "Sure, please drop that commit.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timabbott",
"comment_id": 341846463,
"datetime": 1509749518000,
"masked_author": "username_1",
"text": "I opened #7285 with a version of your commit. We should test that carefully and if it works, merge it.",
"title": null,
"type": "comment"
}
] | 2 | 7 | 1,152 | false | false | 1,152 | true |
fluent-project/fluent | fluent-project | 424,426,509 | 75 | {
"number": 75,
"repo": "fluent",
"user_login": "fluent-project"
} | [
{
"action": "opened",
"author": "xyclin",
"comment_id": null,
"datetime": 1553294973000,
"masked_author": "username_0",
"text": "KVS server will now request fresh cache IPs synchronously from kops server in the stats reporting interval. This happens immediately before sending out the requests for cached keys per cache IP.",
"title": "KVS server now requests cache IPs from kops server.",
"type": "issue"
},
{
"action": "created",
"author": "vsreekanti",
"comment_id": 475835613,
"datetime": 1553311994000,
"masked_author": "username_1",
"text": "@username_0, the [Travis build](https://travis-ci.com/fluent-project/fluent/builds/105515762) is failing because of formatting.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 317 | false | false | 317 | true |
OP-TEE/optee_os | OP-TEE | 403,461,107 | 2,767 | null | [
{
"action": "opened",
"author": "jedichen121",
"comment_id": null,
"datetime": 1548522690000,
"masked_author": "username_0",
"text": "<!--\r\n General guidance when creating issues:\r\n\r\n 1. Please try to remember to close the issue when you have\r\n got an answer to your question.\r\n\r\n 2. It never hurts to state which commit or release tag you are using in case\r\n the question is about build issues.\r\n\r\n 3. Try to use GitHub markdown formatting to make your issue more readable:\r\n https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code\r\n\r\n 4. Try to search for the issue before posting the question:\r\n -> Issues tab -> Filters\r\n\r\n 5. Check the FAQ before posting a question:\r\n https://www.op-tee.org/faq\r\n\r\n NOTE: This comment will not be shown in the issue, so no harm keeping it,\r\n but feel free to remove it if you like.\r\n-->\r\n\r\nI was trying to setup NFS to use with raspberry pi 3 with build 3.3 following this [guide](https://github.com/OP-TEE/build/blob/master/docs/rpi3.md#5-nfs-boot). However, I couldn't obtain an IP address when using wired connection. I found issue #1367 and used `ifconfig eth0 up`, but still the `ifconfig` results showed no ip address. 
\r\n\r\n```\r\n# ifconfig\r\neth0 Link encap:Ethernet HWaddr B8:27:EB:DA:46:B7 \r\n UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1\r\n RX packets:1 errors:0 dropped:0 overruns:0 frame:0\r\n TX packets:0 errors:0 dropped:0 overruns:0 carrier:0\r\n collisions:0 txqueuelen:1000 \r\n RX bytes:201 (201.0 B) TX bytes:0 (0.0 B)\r\n\r\nlo Link encap:Local Loopback \r\n inet addr:127.0.0.1 Mask:255.0.0.0\r\n UP LOOPBACK RUNNING MTU:65536 Metric:1\r\n RX packets:0 errors:0 dropped:0 overruns:0 frame:0\r\n TX packets:0 errors:0 dropped:0 overruns:0 carrier:0\r\n collisions:0 txqueuelen:1000 \r\n RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)\r\n```\r\n\r\nI also tried to edit the `/etc/network/interfaces` file, adding \r\n```\r\nauto eth0\r\niface eth0 inet dhcp\r\n```\r\nIt didn't help and I could see in the boot message showing\r\n```\r\nStarting network: Segmentation fault\r\nFAIL\r\n```\r\nUsing `ifup eth0` gave the same segmentation fault. \r\n\r\nAny help would be appreciated. Thank you.",
"title": "Cannot obtain IP address for eth0 on RPI 3",
"type": "issue"
},
{
"action": "created",
"author": "jbech-linaro",
"comment_id": 459373611,
"datetime": 1548946505000,
"masked_author": "username_1",
"text": "I'm closing the ticket, either because it has already been answered or that it is no longer relevant or it could be lack of response from the author. Having that said, feel free to re-open the ticket if you have more to add to the ticket.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jbech-linaro",
"comment_id": null,
"datetime": 1548946505000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 2,402 | false | false | 2,402 | false |
projectcalico/calicoctl | projectcalico | 183,927,009 | 1,222 | null | [
{
"action": "opened",
"author": "kharisj",
"comment_id": null,
"datetime": 1476873837000,
"masked_author": "username_0",
"text": "I'm new for kubernetes and I choose canal for my kubernetes network.\nI struck with this error for a week and I cannot bring kube-dns up. For more information I try to compare with weave network. in my kubernetes with weave was succesful and I hope canal will be more powerful than weave. error as following\n\n```\n[root@master ~]# kubectl get po --all-namespaces\nNAMESPACE NAME READY STATUS RESTARTS AGE\nkube-system calico-etcd-2r5k6 0/1 CrashLoopBackOff 5 3m\nkube-system calico-node-nbad7 1/2 Error 4 3m\nkube-system calico-node-sj2z5 1/2 Error 2 3m\nkube-system calico-policy-controller-snp8a 0/1 CrashLoopBackOff 3 3m\nkube-system configure-calico-jqp0m 0/1 CrashLoopBackOff 2 3m\nkube-system etcd-master.local 1/1 Running 0 6m\nkube-system kube-apiserver-master.local 1/1 Running 0 6m\nkube-system kube-controller-manager-master.local 1/1 Running 0 8m\nkube-system kube-discovery-982812725-dhbby 1/1 Running 0 8m\nkube-system kube-dns-2247936740-89ob1 0/3 ContainerCreating 0 7m\nkube-system kube-proxy-amd64-ib9ct 1/1 Running 0 7m\nkube-system kube-proxy-amd64-rn4r9 1/1 Running 0 7m\nkube-system kube-scheduler-master.local 1/1 Running 0 7m\n```\n\n[canal.txt](https://github.com/projectcalico/calico-containers/files/538674/canal.txt)",
"title": "All canal container CrashLoopBackOff with default kubeadm initialize",
"type": "issue"
},
{
"action": "created",
"author": "xiaochunyn",
"comment_id": 299613667,
"datetime": 1494043081000,
"masked_author": "username_1",
"text": "@username_0 Disabling SELinux by running \"setenforce 0\" is required in order to allow containers to access the host filesystem.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 1,935 | false | false | 1,935 | true |
broadinstitute/gatk | broadinstitute | 110,439,695 | 979 | {
"number": 979,
"repo": "gatk",
"user_login": "broadinstitute"
} | [
{
"action": "opened",
"author": "cmnbroad",
"comment_id": null,
"datetime": 1444307977000,
"masked_author": "username_0",
"text": "",
"title": "Upgrade htsjdk to 1.140.",
"type": "issue"
},
{
"action": "created",
"author": "akiezun",
"comment_id": 146545002,
"datetime": 1444310881000,
"masked_author": "username_1",
"text": ":+1:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 4 | false | false | 4 | false |
bayesimpact/docker-react | bayesimpact | 262,294,888 | 229 | {
"number": 229,
"repo": "docker-react",
"user_login": "bayesimpact"
} | [
{
"action": "created",
"author": "pcorpet",
"comment_id": 333774472,
"datetime": 1507019193000,
"masked_author": "username_0",
"text": "<img class=\"emoji\" title=\":lgtm:\" alt=\":lgtm:\" align=\"absmiddle\" src=\"https://reviewable.io/lgtm.png\" height=\"20\" width=\"61\"/>\n\n---\n\nReviewed 1 of 1 files at r1.\nReview status: all files reviewed at latest revision, all discussions resolved.\n\n---\n\n\n\n*Comments from [Reviewable](https://reviewable.io:443/reviews/bayesimpact/docker-react/229#-:-KvWKyp4TL-gh5oFftxF:bnfp4nl)*\n<!-- Sent from Reviewable.io -->",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pcorpet",
"comment_id": 333774530,
"datetime": 1507019206000,
"masked_author": "username_0",
"text": "+@username_0\n\n---\n\nReview status: all files reviewed at latest revision, all discussions resolved.\n\n---\n\n\n\n*Comments from [Reviewable](https://reviewable.io:443/reviews/bayesimpact/docker-react/229#-:-KvWL0yMt9MbQSq-7a2J:bzdemrz)*\n<!-- Sent from Reviewable.io -->",
"title": null,
"type": "comment"
}
] | 2 | 3 | 1,164 | false | true | 666 | true |
rentpath/react-ui | rentpath | 356,969,875 | 393 | {
"number": 393,
"repo": "react-ui",
"user_login": "rentpath"
} | [
{
"action": "opened",
"author": "paulpostel",
"comment_id": null,
"datetime": 1536093999000,
"masked_author": "username_0",
"text": "[Card](https://rentpath.leankit.com/card/716386936)\r\n\r\nAll `isBasic` listings should now receive the grey pin, rather than just\r\nthe `!isActive` listings. `isBasic` listings are composed of tplsource\r\n=== 'INACTIVE' + tplsource === 'LNP' (aka \"Large Never Paid\").\r\n\r\nThe LNP listings are a new set being added by the inventory team.",
"title": "Use isBasic listing attribute for icon selection",
"type": "issue"
},
{
"action": "created",
"author": "tadjohnston",
"comment_id": 418512630,
"datetime": 1536094108000,
"masked_author": "username_1",
"text": "Please use `git cz` to format your commit properly.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 384 | false | false | 384 | false |
QuantEcon/InstantiateFromURL.jl | QuantEcon | 369,924,746 | 17 | null | [
{
"action": "opened",
"author": "vchuravy",
"comment_id": null,
"datetime": 1539538458000,
"masked_author": "username_0",
"text": "https://github.com/QuantEcon/InstantiateFromURL.jl/blob/25e93412b9c7a46d22487a9ea919224f574730fc/src/activate.jl#L7\r\n\r\nSHAs can be normally be shorter than 40chars as long as they are still unique within a repository. Normally about 6 chars is enough for that.",
"title": "SHA can be abbreviated",
"type": "issue"
},
{
"action": "created",
"author": "jlperla",
"comment_id": 429699388,
"datetime": 1539574950000,
"masked_author": "username_1",
"text": "I think the sha serves two purposes here. First, it makes it so that we can download a tarball, as in https://github.com/QuantEcon/InstantiateFromURL.jl/blob/master/src/activate.jl#L21\r\n\r\nIs there a consistent way that we could take 6'ish characters and have the API call `\"https://github.com/$(reponame)/archive/$(sha).tar.gz\"` work? If so, is it a particular 6 characters (end or beginning, for example).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vchuravy",
"comment_id": 429700992,
"datetime": 1539575684000,
"masked_author": "username_0",
"text": "It must be the beginning of the SHA, as an example this seems to work https://github.com/QuantEcon/InstantiateFromURL.jl/archive/25e9341.tar.gz",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jlperla",
"comment_id": 429701278,
"datetime": 1539575826000,
"masked_author": "username_1",
"text": "@username_2 Can you give this a shot? If so, then I think we should drop the `sha` length check, and convert the tests to use the 6 characters at the beginning of SHA for the master and tag versions? The version with the sha passed in directly should require no modifications, if I recall...",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "arnavs",
"comment_id": null,
"datetime": 1539627248000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "arnavs",
"comment_id": 429958450,
"datetime": 1539627248000,
"masked_author": "username_2",
"text": "Done. Was a little trickier than anticipated, because the tarballs we get from GitHub always unpack into full-SHA directories. The only invariant we enforce on the hash we pass in is that it's at least 6 characters.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 1,316 | false | false | 1,316 | true |
Microsoft/ros_azure_iothub | Microsoft | 323,358,340 | 4 | {
"number": 4,
"repo": "ros_azure_iothub",
"user_login": "Microsoft"
} | [
{
"action": "opened",
"author": "xinyiou",
"comment_id": null,
"datetime": 1526413761000,
"masked_author": "username_0",
"text": "Add ability to set debug logging from roslaunch arg. \r\n\r\nThese changes make it possible to run:\r\n\r\nroslaunch <snip> verbose:=True",
"title": "Add roslaunch configuration for debug logging",
"type": "issue"
},
{
"action": "created",
"author": "Nardax",
"comment_id": 392149311,
"datetime": 1527274403000,
"masked_author": "username_1",
"text": "LGTM",
"title": null,
"type": "comment"
}
] | 2 | 2 | 133 | false | false | 133 | false |
solariumphp/solarium | solariumphp | 348,904,038 | 619 | {
"number": 619,
"repo": "solarium",
"user_login": "solariumphp"
} | [
{
"action": "opened",
"author": "jeroenherczeg",
"comment_id": null,
"datetime": 1533764819000,
"masked_author": "username_0",
"text": "Fix for https://github.com/solariumphp/solarium/issues/618",
"title": "Fix for #618 DateTime is not set to UTC when using Extract ",
"type": "issue"
},
{
"action": "created",
"author": "jeroenherczeg",
"comment_id": 411751492,
"datetime": 1533820154000,
"masked_author": "username_0",
"text": "Thank you!",
"title": null,
"type": "comment"
}
] | 2 | 3 | 364 | false | true | 68 | false |
KC3Kai/KC3Kai | KC3Kai | 306,125,815 | 2,504 | {
"number": 2504,
"repo": "KC3Kai",
"user_login": "KC3Kai"
} | [
{
"action": "opened",
"author": "sorewachigauyo",
"comment_id": null,
"datetime": 1521259009000,
"masked_author": "username_0",
"text": "Will probably need to update terms.json too",
"title": "Add air battle to airraid mouseover",
"type": "issue"
},
{
"action": "created",
"author": "sorewachigauyo",
"comment_id": 374904721,
"datetime": 1521631185000,
"masked_author": "username_0",
"text": "Sorry for making a very long term\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sorewachigauyo",
"comment_id": 374993810,
"datetime": 1521648030000,
"masked_author": "username_0",
"text": "\"SRoomAirRaidTip\" : \"Air Battle: {0}\\nBase Damage: {1}\\nBauxite or Fuel lost: {2}\\nSlot 1: {3}\\nSlot 2: {4}\\nSlot 3: {5}\\nSlot 4: {6}\\nTB Remaining: {7}\\nDB Remaining: {8}\\nPercent Shotdown: {9}\",",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sinsinpub",
"comment_id": 375164176,
"datetime": 1521687862000,
"masked_author": "username_1",
"text": "Refined some codes, slightly changed the contents of term. nonsensed might refine EN wording later.\r\n",
"title": null,
"type": "comment"
}
] | 2 | 4 | 593 | false | false | 593 | false |
openshift/cluster-api-provider-libvirt | openshift | 436,685,854 | 147 | {
"number": 147,
"repo": "cluster-api-provider-libvirt",
"user_login": "openshift"
} | [
{
"action": "opened",
"author": "ingvagabund",
"comment_id": null,
"datetime": 1556111150000,
"masked_author": "username_0",
"text": "```\r\nError: packet_device.libvirt: \"facilities\": required field is not set\r\n\r\nError: packet_device.libvirt: \"facility\": [REMOVED] Use the \"facilities\" array instead, i.e. change\r\n facility = \"ewr1\"\r\nto\r\n facilities = [\"ewr1\"]\r\n```",
"title": "facility field removed in favor of facilities",
"type": "issue"
},
{
"action": "created",
"author": "zeenix",
"comment_id": 486221842,
"datetime": 1556111370000,
"masked_author": "username_1",
"text": "Seems like an obvious fix to me.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ingvagabund",
"comment_id": 486223376,
"datetime": 1556111606000,
"masked_author": "username_0",
"text": "CI is rolling, hopefully it will take at most 15 minutes to run to completion.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ingvagabund",
"comment_id": 486228071,
"datetime": 1556112323000,
"masked_author": "username_0",
"text": "`/usr/bin/docker-current: error pulling image configuration: Get https://registry.svc.ci.openshift.org/v2/openshift/origin-release/blobs/sha256:43a478d6a38d27af0e4473b9e258aad3d052f8702ce553a5a0918b2a2105f18c: dial tcp 35.196.103.194:443: i/o timeout.\r\nSee '/usr/bin/docker-current run --help'.`\r\n\r\n/retest",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ingvagabund",
"comment_id": 486252384,
"datetime": 1556114801000,
"masked_author": "username_0",
"text": "/retest",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ingvagabund",
"comment_id": 486272489,
"datetime": 1556117062000,
"masked_author": "username_0",
"text": "/retest",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ingvagabund",
"comment_id": 486287209,
"datetime": 1556119147000,
"masked_author": "username_0",
"text": "`'events is forbidden: User \"system:serviceaccount:namespace-dff96ea8-66a2-11e9-978f-0cc47ab21966:default\" cannot create events in the namespace \"namespace-dff96ea8-66a2-11e9-978f-0cc47ab21966\"'`",
"title": null,
"type": "comment"
}
] | 3 | 16 | 14,131 | false | true | 857 | false |
iview/iview | iview | 348,189,369 | 4,221 | null | [
{
"action": "opened",
"author": "spencer-live",
"comment_id": null,
"datetime": 1533624608000,
"masked_author": "username_0",
"text": "### What problem does this feature solve?\r\nBetter operational for the business.\r\n\r\n### What does the proposed API look like?\r\nWhen the on-change event is triggered by select, oldValue & newValue & selectedIndex is returned.\r\nselect 触发 on-change 事件时, 返回oldValue & newValue & selectedIndex .\r\n\r\n<!-- generated by iview-issues. DO NOT REMOVE -->",
"title": "[Feature Request]When the on-change event is triggered by select, oldValue & newValue & selectedIndex is returned.",
"type": "issue"
},
{
"action": "created",
"author": "icarusion",
"comment_id": 410969743,
"datetime": 1533628857000,
"masked_author": "username_1",
"text": "No such plan",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "icarusion",
"comment_id": null,
"datetime": 1533628858000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 354 | false | false | 354 | false |
automationbs/testbugreporting | null | 430,029,733 | 1,478 | null | [
{
"action": "opened",
"author": "automationbs",
"comment_id": null,
"datetime": 1554554246000,
"masked_author": "username_0",
"text": "default description\n\n**URL tested:** http://ci.bsstag.com/welcome\n\nOpen [URL](https://live-ci.bsstag.com/dashboard#os=Windows&os_version=7&browser=Chrome&browser_version=72.0&zoom_to_fit=true&full_screen=true&resolution=responsive-mode&url=http%3A%2F%2Fci.bsstag.com%2Fwelcome&speed=1&host_ports=google.com%2C80%2C0&start_element=ClickedReproduceIssueFromgithub&start=true&furl=http://ci.bsstag.com/welcome) on Browserstack\n\n|Property | Value|\n|------------ | -------------|\n| Browser | Chrome 72.0 |\n| Operating System | Windows 7 |\n| Resolution | 1600x1050 |\n\n\n**Screenshot Attached** \n[Screenshot URL](https://live-ci.bsstag.com/issue-tracker/f830d0092ac6b9704cece6b5a59d7e848172fe4e/win7_chrome_72.0.jpg)\n \n\n**[Click here](https://live-ci.bsstag.com/dashboard#os=Windows&os_version=7&browser=Chrome&browser_version=72.0&zoom_to_fit=true&full_screen=true&resolution=responsive-mode&url=http%3A%2F%2Fci.bsstag.com%2Fwelcome&speed=1&host_ports=google.com%2C80%2C0&start_element=ClickedReproduceIssueFromgithub&start=true&furl=http://ci.bsstag.com/welcome) to reproduce the issue on Browserstack**",
"title": "Default title",
"type": "issue"
}
] | 1 | 1 | 1,218 | false | false | 1,218 | false |
espressif/esp-idf | espressif | 350,117,802 | 2,294 | null | [
{
"action": "opened",
"author": "korstiaanS",
"comment_id": null,
"datetime": 1534180535000,
"masked_author": "username_0",
"text": "Hi,\r\n\r\nCan somebody explain me why the following code\r\n\r\n```\r\nvoid sendCmdToDevice(int sd, char *ip, char *sendstring, int port) {\r\n uint8_t retry=0;\r\n int sent_data;\r\n struct sockaddr_in destaddr;\r\n char packetBuffer[MAX_BUFFER+1]; // buffer to hold incoming packet\r\n unsigned int cli_len;\r\n int recv_data;\r\n \r\n //Set destination IP\r\n destaddr.sin_family = AF_INET;\r\n destaddr.sin_addr.s_addr = inet_addr(ip);\r\n destaddr.sin_port = htons(port);\r\n cli_len = sizeof(destaddr);\r\n \r\n do {\r\n sent_data = sendto(sd, sendstring, strlen(sendstring), 0, (struct sockaddr*)&destaddr, sizeof(destaddr));\r\n ESP_LOGI(TAG, \"Send UDP packet: %s to %s:%d\", sendstring, ip, port);\r\n if (sent_data < 0) {\r\n ESP_LOGW(TAG, \"UDP send failed return code: %d\", sent_data);\r\n vTaskDelay(250 / portTICK_PERIOD_MS); // wait to process and receive the answer back\r\n }\r\n vTaskDelay(10 / portTICK_PERIOD_MS); // wait to process and receive the answer back\r\n recv_data = recvfrom(sd, packetBuffer, sizeof(packetBuffer), 0, (struct sockaddr*)&destaddr, &cli_len); // also wait on timeout!\r\n char *recip = inet_ntoa(destaddr.sin_addr);\r\n if ((strcmp(recip, ip) == 0) && (recv_data > 0)) {\r\n packetBuffer[recv_data] = '\\0';\r\n ESP_LOGI(TAG, \"Received from %s UDP packet: %s\", recip, packetBuffer);\r\n if (strstr(packetBuffer, \"OK\") != NULL) {\r\n break;\r\n }\r\n }\r\n retry++;\r\n } while (retry < 100);\r\n}\r\n\r\n```\r\ngenerates 1 or more `sendto` errors (-1) and we have to retry a lot ONLY the first time after a reboot of the ESP32?\r\nAll the following times I execute this I don't get any errors anymore and the responses also come quick. 
(almost no retries anymore)\r\n\r\nFirst time after boot (execute method for 7 ip's in a loop) gives result:\r\n\r\n```\r\nI (12284) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nI (12364) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nI (12444) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nI (12524) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nW (12524) livingHub - myUdp.c: UDP send failed return code: -1\r\nI (12524) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nW (12524) livingHub - myUdp.c: UDP send failed return code: -1\r\nI (12524) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nW (12524) livingHub - myUdp.c: UDP send failed return code: -1\r\nI (12854) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nI (12864) livingHub - myUdp.c: Received from 192.168.56.5 UDP packet: 61FCA1=LOK\r\nI (12864) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (12904) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (12944) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (12994) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (13064) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (13104) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (13184) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (13214) livingHub - myUdp.c: Received from 192.168.56.4 UDP packet: 617DA1=LOK\r\nI (13214) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13234) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13244) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13314) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13374) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13404) livingHub - myUdp.c: Send UDP packet: 
A1=L to 192.168.56.2:9750\r\nI (13454) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (13474) livingHub - myUdp.c: Received from 192.168.56.2 UDP packet: 5BD5A1=LOK\r\nI (13484) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13504) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13574) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13624) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13674) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13704) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13774) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (13794) livingHub - myUdp.c: Received from 192.168.56.8 UDP packet: 6190A1=LOK\r\nI (13804) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (13814) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (13844) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (13894) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (13944) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (14004) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (14054) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (14104) livingHub - myUdp.c: Received from 192.168.56.11 UDP packet: 6163A1=LOK\r\nI (14104) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14114) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14144) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14204) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14244) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14314) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14354) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (14394) 
livingHub - myUdp.c: Received from 192.168.56.3 UDP packet: E14EA1=LOK\r\nI (14394) livingHub - myUdp.c: Send UDP packet: D1=L to 192.168.56.11:9750\r\nI (14414) livingHub - myUdp.c: Send UDP packet: D1=L to 192.168.56.11:9750\r\nI (14424) livingHub - myUdp.c: Received from 192.168.56.11 UDP packet: 6163D1=LOK\r\n```\r\n\r\nBUT after this first time all the consecutives call to all 7 always gives:\r\n\r\n```\r\nI (150774) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.5:9750\r\nI (150784) livingHub - myUdp.c: Received from 192.168.56.5 UDP packet: 61FCA1=LOK\r\nI (150784) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.4:9750\r\nI (150794) livingHub - myUdp.c: Received from 192.168.56.4 UDP packet: 617DA1=LOK\r\nI (150794) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.2:9750\r\nI (150814) livingHub - myUdp.c: Received from 192.168.56.2 UDP packet: 5BD5A1=LOK\r\nI (150814) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.8:9750\r\nI (150824) livingHub - myUdp.c: Received from 192.168.56.8 UDP packet: 6190A1=LOK\r\nI (150834) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.11:9750\r\nI (150844) livingHub - myUdp.c: Received from 192.168.56.11 UDP packet: 6163A1=LOK\r\nI (150844) livingHub - myUdp.c: Send UDP packet: A1=L to 192.168.56.3:9750\r\nI (150874) livingHub - myUdp.c: Received from 192.168.56.3 UDP packet: E14EA1=LOK\r\nI (150874) livingHub - myUdp.c: Send UDP packet: D1=L to 192.168.56.11:9750\r\nI (150924) livingHub - myUdp.c: Received from 192.168.56.11 UDP packet: 6163D1=LOK\r\n```\r\n\r\nAt the end it works fine but still these two questions:\r\n\r\n1. Why, the first time I always get 1 or more sendto -1 errors?\r\n2. Why do I have to retry so much only the first time? (caching somewhere in the ESP?)",
"title": "why errors and a lot of retries for UDP sendto only the first time after reboot",
"type": "issue"
},
{
"action": "created",
"author": "TimXia",
"comment_id": 414311181,
"datetime": 1534770656000,
"masked_author": "username_1",
      "text": "@username_0 I guess it is caused by ARP. The buffer for UDP is limited before the ARP request is answered. When the buffers are full, sendto will return -1 until the ARP request succeeds.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Alvin1Zhang",
"comment_id": null,
"datetime": 1537433529000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 7,580 | false | false | 7,580 | true |
glumpy/glumpy | glumpy | 395,670,753 | 182 | null | [
{
"action": "opened",
"author": "nickums",
"comment_id": null,
"datetime": 1546538638000,
"masked_author": "username_0",
      "text": "A beautiful package, beautiful documentation, beautiful examples, but I don't have a freetype library, so examples fail. \r\n in my shell print(ctypes.util.find_library(\"freetype\")) gives None.\r\n\r\nTraceback (most recent call last):\r\n\r\n  File \"C:/Python/Python36/Dhruve and me/Glumpy/gloo-framebuffer.py\", line 9, in <module>\r\n    from glumpy import app, gl, gloo\r\n  File \"C:\\Python\\Python36\\lib\\site-packages\\glumpy\\__init__.py\", line 7, in <module>\r\n    from . import app\r\n  File \"C:\\Python\\Python36\\lib\\site-packages\\glumpy\\app\\__init__.py\", line 16, in <module>\r\n    from glumpy.ext.inputhook import inputhook_manager, stdin_ready\r\n  File \"C:\\Python\\Python36\\lib\\site-packages\\glumpy\\ext\\__init__.py\", line 6, in <module>\r\n    from . import freetype\r\n  File \"C:\\Python\\Python36\\lib\\site-packages\\glumpy\\ext\\freetype\\__init__.py\", line 49, in <module>\r\n    raise RuntimeError('Freetype library not found')\r\nRuntimeError: Freetype library not found",
"title": "Python36 Windows10 RuntimeError: Freetype library not found",
"type": "issue"
},
{
"action": "created",
"author": "rougier",
"comment_id": 451235564,
"datetime": 1546540532000,
"masked_author": "username_1",
"text": "You can try to install the freetype-py bindings that will install freetype on your system. Maybe I should get rid of the internal copy of freetype-py and use the \"official\" one.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "nickums",
"comment_id": null,
"datetime": 1546821160000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "nickums",
"comment_id": 451790399,
"datetime": 1546821160000,
"masked_author": "username_0",
"text": "Installing freetype-py and updating ext directory fixed that problem.\r\nI then found a similar problem with glfw, which I fixed the same way.\r\nThanks for the hint!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,288 | false | false | 1,288 | false |
pharo-ide/Calypso | pharo-ide | 396,030,613 | 412 | {
"number": 412,
"repo": "Calypso",
"user_login": "pharo-ide"
} | [
{
"action": "opened",
"author": "dionisiydk",
"comment_id": null,
"datetime": 1546628249000,
"masked_author": "username_0",
      "text": "#378:\r\n- all queries are renamed to have suffix Query\r\n- all query tests are renamed accordingly\r\n\r\nChanges are done with the following script:\r\n```Smalltalk\r\ntoRename := ClyQuery allSubclasses reject: [ :each | each name includesSubstring: 'Query' ].\r\n\r\ntoRename do: [ :each | | c |\r\n\tc := SycRenameClassCommand new targetClass: each; newName: (each name, 'Query') asSymbol.\r\n\tc execute\t ].\r\n\r\ntestsToRename := toRename collect: [ :each | (each name withoutSuffix: 'Query'), 'Tests' ] thenSelect: [ :each | (self environment at: each asSymbol ifAbsent: [ nil ]) notNil ].\r\n\r\ntestsToRename do: [ :each | | c |\r\n\tc := SycRenameClassCommand new targetClass: each asClass; newName: ((each withoutSuffix: 'Tests'), 'QueryTest') asSymbol.\r\n\tc execute\t ].\r\n```\r\n\r\n#407:\r\n- all tests are renamed to have singular suffix Test instead of Tests\r\n\r\nChanges are done with the following script:\r\n```Smalltalk\r\nallTests := TestCase allSubclasses select: [ :each | each package name beginsWith: 'Calypso' ].\r\n\r\ntoRename := allTests select: [ :each | each name endsWith: 'Tests' ].\r\n\"toRename collect: [ :each | each name allButLast ].\"\r\n\r\ntoRename do: [ :each | | c |\r\n\tc := SycRenameClassCommand new targetClass: each; newName: (each name allButLast) asSymbol.\r\n\tc execute\t ].\r\n```",
"title": "378-add-suffix-Query-for-all-queries",
"type": "issue"
},
{
"action": "created",
"author": "dionisiydk",
"comment_id": 451538445,
"datetime": 1546628920000,
"masked_author": "username_0",
"text": "I thought Iceberg issue https://pharo.manuscript.com/f/cases/22749/Iceberg-does-not-see-rename-of-extended-class-which-leads-to-broken-commit was fixed but it is not",
"title": null,
"type": "comment"
}
] | 1 | 2 | 1,424 | false | false | 1,424 | false |
Ticketmaster/aurora | Ticketmaster | 361,083,221 | 171 | {
"number": 171,
"repo": "aurora",
"user_login": "Ticketmaster"
} | [
{
"action": "opened",
"author": "ooHmartY",
"comment_id": null,
"datetime": 1537227600000,
"masked_author": "username_0",
"text": "<!--\r\nThanks for your interest in the project. Bugs filed and PRs submitted are appreciated!\r\n\r\nPlease make sure that you are familiar with and follow the Code of Conduct for\r\nthis project (found in the CODE_OF_CONDUCT.md file).\r\n\r\nAlso, please make sure you're familiar with and follow the instructions in the\r\ncontributing guidelines (found in the CONTRIBUTING.md file).\r\n\r\nPlease fill out the information below to expedite the review and (hopefully)\r\nmerge of your pull request!\r\n-->\r\n\r\n<!-- What changes are being made? (What feature/bug is being fixed here?) -->\r\n\r\n**What**:\r\n\r\n<!-- Why are these changes necessary? -->\r\n\r\n**Why**:\r\n\r\n<!-- How were these changes implemented? -->\r\n\r\n**How**:\r\n\r\n<!-- Have you done all of these things? -->\r\n\r\n**Checklist**:\r\n\r\n<!-- add \"N/A\" to the end of each line that's irrelevant to your changes -->\r\n\r\n<!-- to check an item, place an \"x\" in the box like so: \"- [x] Documentation\" -->\r\n\r\n* [ ] Documentation\r\n* [ ] Tests\r\n* [ ] Ready to be merged <!-- In your opinion, is this ready to be merged as soon as it's reviewed? -->\r\n\r\n<!-- feel free to add additional comments -->",
"title": "fix(Link): Forward click handler",
"type": "issue"
}
] | 2 | 2 | 1,419 | false | true | 1,118 | false |
kubeflow/metadata | kubeflow | 468,580,603 | 97 | null | [
{
"action": "opened",
"author": "jlewi",
"comment_id": null,
"datetime": 1563273648000,
"masked_author": "username_0",
"text": "/kind feature\r\n\r\n**Describe the solution you'd like**\r\n[A clear and concise description of what you want to happen.]\r\n\r\nWe'd like to automatically log K8s objects to the metadata store; e.g.\r\n\r\n* K8s Job\r\n* TFJob \r\n* StudyJob\r\n* Argo workflows\r\n\r\nWe'd like some sort of logger to do this.\r\n\r\nPipelines already has the persistence agent so maybe we can extend that?\r\nhttps://github.com/kubeflow/pipelines/tree/master/backend/src/agent/persistence\r\n\r\n\r\n\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]",
"title": "Create a generic logger for data to metadata using the persistence agent",
"type": "issue"
},
{
"action": "created",
"author": "hougangliu",
"comment_id": 512631130,
"datetime": 1563414150000,
"masked_author": "username_1",
"text": "StudyJob is Katib v1alpha1 object, for katib v1alpha2 or following versions, Experiments/Trials CRD will replace StudyJob totally.\r\n@richardsliu @johnugeorge @gaocegege I think supporting Experiments/Trials in metadata is enough, WDYT?\r\n\r\n@username_0 @username_2 BTW, @dreamryx in my team can help contribute metadata in coming days.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jlewi",
"comment_id": 529257322,
"datetime": 1567986964000,
"masked_author": "username_0",
"text": "I believe @username_2 has a prototype of a generic logger and is working on getting it turned into a PR.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zhenghuiwang",
"comment_id": 531424806,
"datetime": 1568418955000,
"masked_author": "username_2",
"text": "@username_0 @username_1 @dreamryx \r\n\r\nI'm adding a POC in #126, which only log the k8s objects' name/uid/whole value into the metadata store.\r\n\r\nI will keep working on it but feel free to give early feedback.",
"title": null,
"type": "comment"
}
] | 4 | 5 | 1,202 | false | true | 1,202 | true |
alipay/sofa-common-tools | alipay | 413,248,063 | 44 | {
"number": 44,
"repo": "sofa-common-tools",
"user_login": "alipay"
} | [
{
"action": "opened",
"author": "QilongZhang",
"comment_id": null,
"datetime": 1550812849000,
"masked_author": "username_0",
"text": "",
"title": "Prepare to release v1.0.18",
"type": "issue"
}
] | 2 | 2 | 594 | false | true | 0 | false |
hmcts/div-petitioner-frontend | hmcts | 377,410,061 | 342 | {
"number": 342,
"repo": "div-petitioner-frontend",
"user_login": "hmcts"
} | [
{
"action": "opened",
"author": "qzhou-hmcts",
"comment_id": null,
"datetime": 1541426059000,
"masked_author": "username_0",
"text": "# Description\r\n\r\nDeploy COS on aat env blocking deployment to prod\r\n\r\nFixes # (issue)\r\n\r\n## Type of change\r\n\r\nPlease delete options that are not relevant.\r\n\r\n- [ ] Bug fix (non-breaking change which fixes an issue)\r\n- [ ] New feature (non-breaking change which adds functionality)\r\n- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)\r\n- [ ] This change requires a documentation update\r\n\r\n# How Has This Been Tested?\r\n\r\nPlease describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration\r\n\r\n\r\n**Test Configuration**:\r\n\r\n* Hardware:\r\n* O/S and version:\r\n* JDK:\r\n\r\n# Checklist:\r\n\r\n- [ ] My code follows the style guidelines of this project\r\n- [ ] I have performed a self-review of my own code\r\n- [ ] I have commented my code, particularly in hard-to-understand areas\r\n- [ ] I have made corresponding changes to the documentation\r\n- [ ] My changes generate no new warnings\r\n- [ ] I have added tests that prove my fix is effective or that my feature works\r\n- [ ] New and existing unit tests pass locally with my changes\r\n- [ ] Any dependent changes have been merged and published in downstream modules",
"title": "Cos master",
"type": "issue"
},
{
"action": "created",
"author": "tchow8",
"comment_id": 435901236,
"datetime": 1541429287000,
"masked_author": "username_1",
"text": "Going to merge this one for testing purposes.\r\nLets have a proper discussion on https://github.com/hmcts/div-petitioner-frontend/pull/338 (merge into master)",
"title": null,
"type": "comment"
}
] | 2 | 2 | 1,407 | false | false | 1,407 | false |
EasyCorp/EasyAdminBundle | EasyCorp | 305,293,381 | 2,169 | {
"number": 2169,
"repo": "EasyAdminBundle",
"user_login": "EasyCorp"
} | [
{
"action": "opened",
"author": "tdsat",
"comment_id": null,
"datetime": 1521054806000,
"masked_author": "username_0",
"text": "Added Greek translation files. \r\nIt should be decent enough for out-of-the-box usage but the greek language has some nuances that are hard to translate.\r\n\r\nI also translated the comments. If I shouldn't have, tell me and i'll fix it.\r\n\r\nPS : This is my first PR ever so if I mess something up please be gentle :/",
"title": "Added greek translation files",
"type": "issue"
},
{
"action": "created",
"author": "javiereguiluz",
"comment_id": 373282256,
"datetime": 1521097826000,
"masked_author": "username_1",
      "text": "@username_0 thanks a lot for contributing this new translation. You did great, but we're going to merge it in 1-x branch instead of master so old versions of this bundle can use it too. Also, there's no need to translate the comments in translation files, but don't worry because I'll change them in a separate pull request. Lastly, it's an honor that your first pull request ever is made to this repository. Thank you! *Ευχαριστώ πολύ*",
"title": null,
"type": "comment"
}
] | 2 | 2 | 748 | false | false | 748 | true |
zephyrproject-rtos/zephyr | zephyrproject-rtos | 321,752,432 | 7,441 | null | [
{
"action": "opened",
"author": "andrewboie",
"comment_id": null,
"datetime": 1525906234000,
"masked_author": "username_0",
"text": "In the course of working on #6814 I discovered that our problems with this alternate C library run far deeper.\r\n\r\nIf I modify qemu_x86_defconfig to set CONFIG_NEWLIB_LIBC=y, and then run sanitycheck for qemu_x86, I get many failures:\r\n\r\n```\r\ntotal complete: 282/ 282 100% failed: 143\r\n139 of 282 tests passed with 0 warnings in 308 seconds\r\n```",
"title": "newlib support in zephyr is untested and very broken",
"type": "issue"
},
{
"action": "created",
"author": "nashif",
"comment_id": 390007590,
"datetime": 1526590350000,
"masked_author": "username_1",
"text": "with qemu_x86_nommu all tests pass, could this be related to mmu support and userspace?\r\nqemu_cortex_m3 fails with a few tests that we should be able to fix easily.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nashif",
"comment_id": 406752739,
"datetime": 1532130399000,
"masked_author": "username_1",
      "text": "seems like a stack problem",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nashif",
"comment_id": 466614765,
"datetime": 1550895259000,
"masked_author": "username_1",
      "text": "we have various issues dealing with newlib integration; this seems to be a catch-all. Let's deal with those issues individually.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "nashif",
"comment_id": null,
"datetime": 1550895259000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 658 | false | false | 658 | false |
Daemonite/discourse-material-theme | Daemonite | 398,640,508 | 13 | null | [
{
"action": "opened",
"author": "amotl",
"comment_id": null,
"datetime": 1547373191000,
"masked_author": "username_0",
"text": "# Problem\r\nWhen rendering a markdown bullet list, no margin is applied after the last item.\r\n\r\n## Expected behavior\r\n### Input\r\n```\r\nHello\r\n- one\r\n- two\r\n- three\r\n\r\nworld!\r\n```\r\n\r\n### Output: GitHub renderer\r\nHello\r\n- one\r\n- two\r\n- three\r\n\r\nworld!\r\n\r\n### Output: Discourse with Default theme\r\n\r\n\r\n## Erratic behaviour\r\n### Output: Discourse with Daemonite Material theme\r\n",
"title": "Markdown rendering of bullet lists",
"type": "issue"
},
{
"action": "closed",
"author": "sesemaya",
"comment_id": null,
"datetime": 1547602906000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "sesemaya",
"comment_id": 454622760,
"datetime": 1547604084000,
"masked_author": "username_1",
"text": "Discourse resets the styles (mostly`font-size`s and `margin`s) of some common elements in `.cooked` and `.d-editor-preview`. 3c29660 re-applies the default `font-size` and `margin` back on these elements.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amotl",
"comment_id": 454946095,
"datetime": 1547673593000,
"masked_author": "username_0",
"text": "Thanks @username_1!\r\n\r\nWe recognize the intrinsic details required to be taken care of when implementing such fixes for real with Sass, in comparison to our [humble tweaks](https://github.com/ip-tools/discourse-material-theme-ample/blob/master/common/common.scss) we made mostly on the CSS level for ad hoc fixing some things we are listing here. Sorry that we won't be of any help for you resolving the various filed issues on the expert level until possibly reading more on the commit stream and doing hands-on work with Sass ourselves. So, thanks again for taking care!\r\n\r\nIf you **do** like the reports we are making in order to eventually assist with further teeth cutting of this excellent theme by also looking at the nitty gritty details and you feel the time is right for this, please let us know.\r\n\r\nSaying that, sorry in retrospective for the bulk-filing of some of the issues. We thought it would be a good idea to have them separated from each other but actually didn't ask before what your preferred way would look like. The reason we chose GitHub was that we didn't want to clutter [the Discourse topic](https://meta.discourse.org/t/daemonite-material-theme/64521) with our ramifications.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,993 | false | false | 1,993 | true |
nock/nock | nock | 449,043,810 | 1,570 | {
"number": 1570,
"repo": "nock",
"user_login": "nock"
} | [
{
"action": "opened",
"author": "mastermatt",
"comment_id": null,
"datetime": 1559012705000,
"masked_author": "username_0",
"text": "Removes most of the eslint overrides that had the \"TODO These are included in standard and should be cleaned up and turned on.\"\r\n- no-path-concat\r\n- no-throw-literal\r\n- handle-callback-err\r\n- eqeqeq\r\n- new-cap\r\n- camelcase\r\n\r\nEach rule is isolated in it's own commit to ease reviewing. \r\n\r\nThe only setting I left in place was `no-deprecated-api`. There are two places where this rule is broken and each deserve their own PR.\r\n\r\n- `tests/test_header_matching.js` makes use of the `domain` module, which has been deprecated since Node v4.0.0, to aid in asserting errors. That whole file needs to be refactored anyway to use `got`.\r\n- `url.parse` was deprecated in Node v11.0.0 in favor of the `url.URL` constructor. This will require some careful work to change as POJOs that resemble the output of `url.parse` get passed around a lot without a whole lot of documenting or commenting. I started down this rabbit hole and ran into a few edge cases that I think would introduce backwards breaking API changes.",
"title": "Lint: Remove overrides.",
"type": "issue"
}
] | 2 | 3 | 1,686 | false | true | 1,006 | false |
folio-org/ui-circulation | folio-org | 419,447,787 | 274 | {
"number": 274,
"repo": "ui-circulation",
"user_login": "folio-org"
} | [
{
"action": "opened",
"author": "maximdidenkoepam",
"comment_id": null,
"datetime": 1552307896000,
"masked_author": "username_0",
"text": "# Purpose \r\nChange path for template subject field\r\n\r\n# Link\r\nhttps://issues.folio.org/browse/UICIRC-211",
"title": "UICIRC-211 - changed path for patron notice template subject field",
"type": "issue"
}
] | 2 | 2 | 207 | false | true | 104 | false |
jamesagnew/hapi-fhir | null | 236,549,441 | 676 | null | [
{
"action": "opened",
"author": "javajeff",
"comment_id": null,
"datetime": 1497635135000,
"masked_author": "username_0",
"text": "If I omit the subscription payload field, I get an error saying that it is required. However in the FHIR 3.0 spec it is optional. An empty payload field will return just a notification without the resource in the body.",
"title": "rest-hook subscription payload should be an optional field",
"type": "issue"
}
] | 2 | 3 | 392 | false | true | 220 | false |
vaadin/flow | vaadin | 389,186,147 | 4,841 | null | [
{
"action": "opened",
"author": "pleku",
"comment_id": null,
"datetime": 1544429705000,
"masked_author": "username_0",
"text": "The API should allow handling keyboard events, including usage of modifier keys, simply from Java API of the component. Should look into declaring a mixin interface that adds the API to the component, and a static way of adding handlers to any component.\n\n@marcushellberg has an add-on for this for V10 https://vaadin.com/directory/component/shortcut/overview which does the same.",
"title": "Spike: Prototyping an API for adding keyboard shortcuts to any component",
"type": "issue"
},
{
"action": "created",
"author": "bogdanudrescu",
"comment_id": 446522268,
"datetime": 1544607264000,
"masked_author": "username_1",
"text": "`ComponentUtil.addListener(... eventType=KeyPressEvent.class ...)` or any other `KeyboardEvent` eventTypes already works and shortcut filtering may be achieved on the server side.\r\n\r\nIn case we need it on the client side we need to use a similar approach as in https://github.com/marcushellberg/shortcut, though the API used there is at Element level, and beside the filter we need to send also `event.preventDefault();` and `event.stopPropagation();`.\r\n\r\nConsidering that we need the event at `Component` level we may need at least another `ComponentUtil` method specifically designed for `KeyboardEvent`s, preferably reusing the same listener type.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ujoni",
"comment_id": 446637358,
"datetime": 1544630037000,
"masked_author": "username_2",
"text": "I made a small demo for an approach that registers all shortcuts on the body of the page: [shortcut2](https://github.com/username_2/shortcuts2). It is build on the addon implementation and does not touch anything in the flow (for now).\r\n\r\nI listed some potential problems with this approach into the README and I am sure there are others.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pleku",
"comment_id": null,
"datetime": 1544783600000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 1,363 | false | false | 1,363 | true |
Day8/re-frame-trace | Day8 | 273,987,495 | 108 | null | [
{
"action": "opened",
"author": "danielcompton",
"comment_id": null,
"datetime": 1510704107000,
"masked_author": "username_0",
"text": "",
"title": "Move away from table based traces",
"type": "issue"
},
{
"action": "created",
"author": "daiyi",
"comment_id": 344441544,
"datetime": 1510704154000,
"masked_author": "username_1",
"text": "Were you thinking about moving more towards dashboard-based traces? What might it look like?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danielcompton",
"comment_id": 344447816,
"datetime": 1510706273000,
"masked_author": "username_0",
"text": "This was more about the literal `<table>` based implementation. The dashboard stuff is still on the cards, but I don't have a clear idea of what that would be yet.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "daiyi",
"comment_id": 344450728,
"datetime": 1510707322000,
"masked_author": "username_1",
"text": "Ah! I have abstracted too much.\r\n\r\nI have my eyes set on a flexbox rewrite of that table!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "superstructor",
"comment_id": null,
"datetime": 1621390601000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "superstructor",
"comment_id": 843692333,
"datetime": 1621390601000,
"masked_author": "username_2",
"text": "Traces panel has been rewritten to use flexbox instead of table.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 408 | false | false | 408 | false |
Alorel/polyfill.io-aot | null | 443,664,126 | 109 | {
"number": 109,
"repo": "polyfill.io-aot",
"user_login": "Alorel"
} | [
{
"action": "created",
"author": "Alorel",
"comment_id": 493890607,
"datetime": 1558341264000,
"masked_author": "username_0",
"text": ":tada: This PR is included in version 4.0.3 :tada:\n\nThe release is available on:\n- [GitHub release](https://github.com/username_0/polyfill.io-aot/releases/tag/4.0.3)\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@polyfill-io-aot/builder)\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@polyfill-io-aot/builder-cli)\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@polyfill-io-aot/common)\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@polyfill-io-aot/core)\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@polyfill-io-aot/express)\n\nYour **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:",
"title": null,
"type": "comment"
}
] | 3 | 3 | 1,864 | false | true | 721 | true |
zeit/next.js | zeit | 387,574,225 | 5,821 | {
"number": 5821,
"repo": "next.js",
"user_login": "zeit"
} | [
{
"action": "opened",
"author": "j0lv3r4",
"comment_id": null,
"datetime": 1543979379000,
"masked_author": "username_0",
"text": "This is my attempt at https://github.com/zeit/next.js/issues/153\r\n\r\nFollowing @rauchg instructions:\r\n\r\n- it uses an authentication helper across pages which returns a token if there's one\r\n- it has session synchronization across tabs\r\n- I deployed a passwordless backend on `now.sh` (https://with-cookie-api.now.sh, [src](https://github.com/username_0/next.js-with-cookies-api))\r\n\r\nAlso, from reviewing other PRs, I made sure to:\r\n\r\n- use [isomorphic-fetch](https://www.npmjs.com/package/isomorphic-fetch).\r\n- use [next-cookies](https://www.npmjs.com/package/next-cookies).\r\n\r\nHere's a little demo:\r\n\r\n",
"title": "Example with cookie auth",
"type": "issue"
},
{
"action": "created",
"author": "timneutkens",
"comment_id": 446004743,
"datetime": 1544482288000,
"masked_author": "username_1",
"text": "This PR looks pretty good @username_0! I'm going to invite you to the example repo I made but didn't finish so that you can compare notes.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timneutkens",
"comment_id": 446005013,
"datetime": 1544482346000,
"masked_author": "username_1",
"text": "https://github.com/username_1/next.js-auth-example/invitations",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timneutkens",
"comment_id": 446005146,
"datetime": 1544482373000,
"masked_author": "username_1",
"text": "Could you also send me a DM on Spectrum: https://spectrum.chat/users/username_1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "j0lv3r4",
"comment_id": 446012796,
"datetime": 1544484168000,
"masked_author": "username_0",
"text": "thanks, @username_1! DM sent.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "malixsys",
"comment_id": 447605923,
"datetime": 1544917224000,
"masked_author": "username_2",
"text": "I think see some of my stuff, glad this made it in!! 😁",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timneutkens",
"comment_id": 447606330,
"datetime": 1544917795000,
"masked_author": "username_1",
"text": "🙌",
"title": null,
"type": "comment"
}
] | 3 | 7 | 1,001 | false | false | 1,001 | true |
P1xt/speedstudy | null | 286,286,106 | 4 | {
"number": 4,
"repo": "speedstudy",
"user_login": "P1xt"
} | [
{
"action": "opened",
"author": "elloo",
"comment_id": null,
"datetime": 1515158197000,
"masked_author": "username_0",
"text": "",
"title": "Added my study repo",
"type": "issue"
},
{
"action": "created",
"author": "P1xt",
"comment_id": 355576850,
"datetime": 1515164878000,
"masked_author": "username_1",
"text": "Happy studying :smile:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 22 | false | false | 22 | false |
snowflakedb/snowflake-connector-nodejs | snowflakedb | 379,443,307 | 16 | null | [
{
"action": "opened",
"author": "mhseiden",
"comment_id": null,
"datetime": 1541871727000,
"masked_author": "username_0",
"text": "We are seeing the following error bubble up on occasion. It looks like it's coming from the `ocsp` library that y'all are using.\r\n\r\n```\r\nERROR: [9:34:45.3445 AM]: OCSP validation failed: Error: Unsupported type: object at: (shallow)\r\n```",
"title": "OCSP is randomly failing in master",
"type": "issue"
},
{
"action": "created",
"author": "smtakeda",
"comment_id": 438367039,
"datetime": 1542130937000,
"masked_author": "username_1",
"text": "hmm, sounds like garbage was returned from OCSP responder... The driver needs retry.\r\n\r\nCurrently NodeJS doesn't cache the OCSP response nor uses the OCSP cache server, OCSP check is not as stable as JDBC, ODBC, Python Connector. We'll add it in the earliest opportunity.\r\nFor workaround, use `insecureConnect` parameter to skip OCSP check.\r\n```\r\nvar connectionOptions =\r\n{\r\n account: testaccount, ... , insecureConnect: true\r\n}\r\n```\r\nNote `insecureConnect=true` will still do 1) verify the certificate signature, 2) check validity, 3) ensure the certificate is associated with the hostname.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "natesilva",
"comment_id": 439218623,
"datetime": 1542322106000,
"masked_author": "username_2",
"text": "Workaround: the `insecureConnect` option needs to be passed to `snowflake.configure(...)`, not to the `createConnection` function.\r\n\r\n```\r\nsnowflake.configure({insecureConnect: true});\r\nvar connection = snowflake.createConnection(...);\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rem7",
"comment_id": 439219067,
"datetime": 1542322213000,
"masked_author": "username_3",
"text": "``` name: 'NetworkError',\r\n code: 401001,\r\n message: 'Network error. Could not reach Snowflake.',\r\n cause: Error: Bad OCSP response status: try_later\r\n```\r\nseeing the same issue",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "moniecodes",
"comment_id": 708672757,
"datetime": 1602711269000,
"masked_author": "username_4",
"text": "Is there any update on this issue? \r\n\r\nWhat is the impact of using` insecureConnect:true`. I assume we want the OCSP checks when using node client?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "NoxHarmonium",
"comment_id": 711436497,
"datetime": 1603061802000,
"masked_author": "username_5",
"text": "FWIW I was having the `OCSP validation failed: Error: Unsupported type: object at: (shallow)` error and it turned out to be the version of `asn1.js` that was getting resolved. I think that the version range isn't restrictive enough so if another library uses `asn1.js` but an incompatible version it can break this library. I fixed it by adding:\r\n\r\n```\r\n \"resolutions\": {\r\n \"asn1.js\": \"^5.4.1\"\r\n },\r\n```\r\n\r\nTo my package.json file (I use Yarn, the syntax might vary for NPM). I'm not sure if it is the same issue you are having but it might help.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bajohn",
"comment_id": 776801846,
"datetime": 1612971765000,
"masked_author": "username_6",
"text": "Thank you! This worked for me, in Node I just set the following in package.json dependencies:\r\n```\r\n \"asn1\": \"^0.2.4\",\r\n```\r\n, delete my package-lock.json / node_modules to make sure a lower version wasn't still being used, then rerun `npm install`. Snowflake is now connecting correctly.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "markandrus",
"comment_id": 943214380,
"datetime": 1634206336000,
"masked_author": "username_7",
"text": "@username_5 @username_6 thanks for pointing me in the right direction. I dove into snowflake-sdk's ocsp and asn1.js dependencies. It looks really messy:\r\n\r\n* ocsp@1.2.0 depends on asn1.js-rfc5280@^2.0.0\r\n* ocsp@1.2.0 depends on asn1.js-rfc2560@^4.0.0\r\n* asn1.js-rfc2560@4.0.6 depends on asn1.js-rfc5280@^2.0.0\r\n * asn1.js-rfc5280@2.0.1 depends on asn1.js@^4.5.0\r\n* snowflake-sdk@1.6.4 depends on asn1.js-rc2560@^5.0.0\r\n* snowflake-sdk@1.6.4 depends on asn1.js-rfc5280@^3.0.0\r\n* asn1.js-rfc2560@5.0.1 depends on asn1.js-rfc5280@^3.0.0\r\n * asn1.js-rfc5280@3.0.0 depends on asn1.js@^5.0.0\r\n* snowflake-sdk@1.6.4 depends on ocsp@^1.2.0\r\n * ocsp@1.2.0 depends on asn1.js@^4.8.0\r\n\r\nIn a recent install, I ended up with\r\n\r\n* asn1.js@4.10.1 and asn1.js@5.4.1\r\n* asn1.js-rfc5280@2.0.1 and asn1.js-rfc5280@3.0.0\r\n* asn1.js-rfc2560@4.0.6 and asn1.js-rfc2560@5.0.1\r\n\r\nSo I can definitely see how this can cause problems. I think the issue would best be solved in ocsp and asn1.js, releasing packages that update to the latest versions (assuming they're compatible). Then, I'd expect to just need\r\n\r\n* asn1.js@5.4.1\r\n* asn1.js-rfc5280@3.0.0\r\n* asn1.js-rfc2560@5.0.1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "wpride",
"comment_id": 1069801360,
"datetime": 1647479447000,
"masked_author": "username_8",
"text": "You're a legend, this worked beautifully",
"title": null,
"type": "comment"
}
] | 9 | 9 | 3,441 | false | false | 3,441 | true |
micro-os-plus/riscv-arch-xpack | micro-os-plus | 288,720,805 | 3 | null | [
{
"action": "opened",
"author": "ilg-ul",
"comment_id": null,
"datetime": 1516051697000,
"masked_author": "username_0",
"text": "Implement the semihosting call via a workaround to compensate for the lack of multiple `ebreak` instructions:\r\n\r\n```asm\r\n.option norvc\r\nslli x0, x0, 0x1f\r\nebreak\r\nsrai x0, x0, 0x7\r\n```",
"title": "Add semihosting call_host() implementation",
"type": "issue"
},
{
"action": "created",
"author": "ilg-ul",
"comment_id": 357794235,
"datetime": 1516052073000,
"masked_author": "username_0",
"text": "Done on 2018-01-15.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ilg-ul",
"comment_id": null,
"datetime": 1516052074000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 203 | false | false | 203 | false |
gyf-dev/ImmersionBar | null | 415,904,861 | 290 | null | [
{
"action": "opened",
"author": "Mrsun1127",
"comment_id": null,
"datetime": 1551405259000,
"masked_author": "username_0",
"text": "2.3.0版本 ImmersionBar.with(getActivity())\r\n .statusBarDarkFont(true,0.2f)\r\n .init();\r\n这么初始化 会报错 怎么解",
"title": "沉浸式 初始化报错",
"type": "issue"
},
{
"action": "created",
"author": "gyf-dev",
"comment_id": 468558010,
"datetime": 1551421441000,
"masked_author": "username_1",
"text": "如果你在Fragment里使用的话,请先在宿主Activity里使用,建议使用最新版本2.3.3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Mrsun1127",
"comment_id": 468565903,
"datetime": 1551423956000,
"masked_author": "username_0",
"text": "Ok, thank you. I successfully solved the problem by upgrading to the latest version",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gyf-dev",
"comment_id": null,
"datetime": 1551436169000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 259 | false | false | 259 | false |
WayofTime/BloodMagic | null | 335,176,585 | 1,351 | null | [
{
"action": "opened",
"author": "MajorGeneralRelativity",
"comment_id": null,
"datetime": 1529844341000,
"masked_author": "username_0",
"text": "#### Issue Description:\r\nThis does occur in a modpack, but it appears to potentially be a mod interaction problem at the very least, so it's fitting. When you kill yourself using a Dagger of Sacrifice, an abnormal death process occurs, and you do not have your inventory saved by OpenBlocks.\r\n\r\n\r\n#### What happens:\r\nUpon right clicking with the Dagger of Sacrifice one too many times, you die. However, a grave does not generate, and OpenBlocks does not save a copy of your inventory to restore, which leads to your items not being saved at all, nor dumped out in the world like in the absence of any other mods (and keepInventory)\r\n\r\n\r\n#### What you expected to happen:\r\nOpenBlocks to spit out a \"if you want to restore your inventory, use command blah-blah-death-0 to restore your inventory\" and generate a grave.\r\n\r\n\r\n#### Steps to reproduce:\r\n\r\n1. Install Modpack at: http://bit.ly/2K7ueRO\r\n2. Kill yourself by holding right click with Dagger of Sacrifice\r\n3. Profit?\r\n...\r\n\r\n____\r\n#### Affected Versions (Do *not* use \"latest\"):\r\n\r\n- BloodMagic: BloodMagic-1.12.2-2.2.12-97.jar (latest 😉 )\r\n- Minecraft: 1.12.2\r\n- Forge: 14.23.4.2705\r\n\r\nPlease let me know if I need to provide any further information",
"title": "Blood Magic Dagger of Sacrifice causing abnormal death process",
"type": "issue"
},
{
"action": "created",
"author": "Iorce",
"comment_id": 399765152,
"datetime": 1529854356000,
"masked_author": "username_1",
"text": "Why are you using a link shortener (with advertisments even) on a website that most people will access from on the desktop.\r\nIt doesn't add to convenience and harbors the potential for malware on the other side.\r\n\r\nLink destination: https://drive.google.com/file/d/1yN0sqyhsOxNxbNS8HcJqcByhcsRHmC-q/view?u sp=drive_web",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MajorGeneralRelativity",
"comment_id": 399776393,
"datetime": 1529864364000,
"masked_author": "username_0",
"text": "I'm the modpack author, so I know that the link is advertisement free and safe. The link I provided is the link I give to everyone on my Discord to download the modpack. I was unaware it would be an issue, as it wasn't mentioned in the contribution guidelines, or the template.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Iorce",
"comment_id": 446048263,
"datetime": 1544494927000,
"masked_author": "username_1",
"text": "Needs to be investigated. Update. Bug label.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,845 | false | false | 1,845 | false |
JuliaEditorSupport/julia-vscode | JuliaEditorSupport | 346,405,696 | 532 | null | [
{
"action": "opened",
"author": "tlnagy",
"comment_id": null,
"datetime": 1533083497000,
"masked_author": "username_0",
"text": "Love the extension! I was trying it and I didn't see how I can change the directory that Weave is run in. AFAICT, Weave is always run from my home directory on Linux so all my relative paths fail in my Julia markdown file. \r\n\r\n- Is there any reason that the default isn't to run it in the current files directory? \r\n- Failing that, there should be a way to set this in the settings.",
"title": "Weave should run in current files directory",
"type": "issue"
},
{
"action": "created",
"author": "ZacLN",
"comment_id": 428462132,
"datetime": 1539155318000,
"masked_author": "username_1",
"text": "https://github.com/JuliaEditorSupport/julia-vscode/pull/548",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tlnagy",
"comment_id": 428651739,
"datetime": 1539190999000,
"masked_author": "username_0",
"text": "Thanks!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "davidanthoff",
"comment_id": null,
"datetime": 1539191545000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 448 | false | false | 448 | false |
jacobp100/react-twentytwenty | null | 300,041,725 | 7 | null | [
{
"action": "opened",
"author": "vjustov",
"comment_id": null,
"datetime": 1519577818000,
"masked_author": "username_0",
"text": "with that style I cannot make the divider taller than the images.",
"title": "Why does the container have overflow: hidden?",
"type": "issue"
},
{
"action": "created",
"author": "jacobp100",
"comment_id": 368325941,
"datetime": 1519578560000,
"masked_author": "username_1",
"text": "Add vertical margin or padding to the images to increase the container size, then your divider will be taller",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jacobp100",
"comment_id": 368477709,
"datetime": 1519646256000,
"masked_author": "username_1",
"text": "Did this solve your issue?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vjustov",
"comment_id": 368487421,
"datetime": 1519648814000,
"masked_author": "username_0",
"text": "No, I haven't tried it, just saw these comments. but it sounds like it would work. But I can't help but wonderwhy do you need it, in my particular use case the component behaved exactly the same without it.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jacobp100",
"comment_id": null,
"datetime": 1521846142000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 406 | false | false | 406 | false |
ValveSoftware/Proton | ValveSoftware | 352,761,080 | 11 | null | [
{
"action": "opened",
"author": "hcorion",
"comment_id": null,
"datetime": 1534899059000,
"masked_author": "username_0",
"text": "I have two steam library folders, one on my SSD, located at ~/.local/share/Steam, and one stored on my HDD, mounted at `/mnt/hd`, the HDD one is set to default. It seems that Photon installs to the default folder, `/mnt/hd/SteamLibrary-linux/steamapps/common/Proton 3.7/`.\r\nTrying to launch A Hat in Time fails with the following\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/hd/SteamLibrary-linux/steamapps/common/Proton 3.7/proton\", line 89, in <module>\r\n tar.extractall(path=basedir + \"/dist\")\r\n File \"/usr/lib/python2.7/tarfile.py\", line 2081, in extractall\r\n self.extract(tarinfo, path)\r\n File \"/usr/lib/python2.7/tarfile.py\", line 2118, in extract\r\n self._extract_member(tarinfo, os.path.join(path, tarinfo.name))\r\n File \"/usr/lib/python2.7/tarfile.py\", line 2202, in _extract_member\r\n self.makelink(tarinfo, targetpath)\r\n File \"/usr/lib/python2.7/tarfile.py\", line 2280, in makelink\r\n os.symlink(tarinfo.linkname, targetpath)\r\nOSError: [Errno 22] Invalid argument\r\n```",
"title": "Trying to launch a game on a different drive from the main install fails",
"type": "issue"
},
{
"action": "created",
"author": "Mikerr1111",
"comment_id": 414876313,
"datetime": 1534901406000,
"masked_author": "username_1",
"text": "Check to make you have Python 3 installed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sbmn",
"comment_id": 414881484,
"datetime": 1534903134000,
"masked_author": "username_2",
"text": "Exact same issue. Python 3 is installed and the drive has rwx perms",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "hcorion",
"comment_id": 414899178,
"datetime": 1534909067000,
"masked_author": "username_0",
"text": "the issue might be with ntfs, @username_2 what format is your secondary drive?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "meowmeowfuzzyface",
"comment_id": 414899565,
"datetime": 1534909209000,
"masked_author": "username_3",
"text": "Same issue on Ubuntu 18.04.1. Error log contains the exact same error. Tried with Doom (2016), Quake, and Doom 2. OS drive is ext4, games are installed on an NTFS drive mounted with the following options:\r\nntfs-3g defaults,exec,uid=1000,gid=1000,windows_names,locale=en_US.UTF-8 0 0",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "meowmeowfuzzyface",
"comment_id": 414992757,
"datetime": 1534935196000,
"masked_author": "username_3",
"text": "Removing \"windows_names\" from fstab fixed it. Left locale in there without problems. On a side note, the official Ubuntu documentation for mounting NTFS drives says to use the windows_names option, so I think this is an issue that will come up a lot in the future for dual-booters.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TheGreatMcPain",
"comment_id": 415036306,
"datetime": 1534945416000,
"masked_author": "username_4",
"text": "The only issue that would come up is when windows can't move, copy, or delete a file because it has a character in it's name that Windows doesn't like. I do believe windows can still open those files like normal though.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "meowmeowfuzzyface",
"comment_id": 415109146,
"datetime": 1534958204000,
"masked_author": "username_3",
"text": "I meant that this is an issue that I think a lot of users will run into because the official Ubuntu documentation instructs users to use the \"windows_names\" mount option in the fstab entry, and Ubuntu's automated method to add NTFS drives to fstab includes the \"windows_names\" option by default. But it's good to know that removing it shouldn't cause any major issues on the Windows installation.\r\n\r\nThank you both for your help. I hope this solution works for OP as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sbmn",
"comment_id": 415410115,
"datetime": 1535030166000,
"masked_author": "username_2",
"text": "Solution worked.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "peterson432",
"comment_id": 475522796,
"datetime": 1553240503000,
"masked_author": "username_5",
"text": "Try to run ntfs-3g command without names or locale, \r\nntfs-3g defaults,exec,uid=1000,gid=1000",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kisak-valve",
"comment_id": 500134014,
"datetime": 1560008719000,
"masked_author": "username_6",
"text": "Noted at https://github.com/ValveSoftware/Proton/issues/2775#issuecomment-499477244, upstream wine has prefix ownership check which may have been added due to past issues with the user creating the prefix as root and then having issues accessing the prefix later.\r\n\r\nRunning Steam as root is explicitly unsupported and blocked in its startup script, so that particular scenario would require a bad actor to make a folder that the current user can't manipulate, but there may be other methods leading to permission issues that need to be considered.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "madewokherd",
"comment_id": 500437438,
"datetime": 1560176944000,
"masked_author": "username_7",
"text": "We should at least be able to detect some of these situations (can't create c: symlink, can't create a prefix with the correct ownership, existing prefix has wrong ownership) and report an error. Is there a way we can report an error message back to steam, or failing that can we assume the presence of something like xmessage or zenity?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RockyTV",
"comment_id": 562852523,
"datetime": 1575726315000,
"masked_author": "username_8",
"text": "IIf it is of any help, I have some log files and crash dumps after I tried running Elder Scrolls Online on my Linux install on an NTFS drive mounted with `windows_names`.\r\n\r\n\r\n\r\n[eso_crash.zip](https://github.com/ValveSoftware/Proton/files/3935116/eso_crash.zip)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Wenzel",
"comment_id": 586932817,
"datetime": 1581936574000,
"masked_author": "username_9",
"text": "Hi, got the same issue, removed `windows_names` and it works fine now.\r\n\r\nThanks !",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SomeoneIsWorking",
"comment_id": 1006920798,
"datetime": 1641501751000,
"masked_author": "username_10",
"text": "On Ubuntu + Lutris. I had to do the opposite to solve it. Which was to add the `windows_names` that wasn't there before. \r\nCan't figure out how to do it in kubuntu though because it is mounting as fuseblk and unmounting and remounting with ntfs-3g -o windows_names didn't help",
"title": null,
"type": "comment"
}
] | 11 | 15 | 4,060 | false | false | 4,060 | true |
quantumlib/OpenFermion-Cirq | quantumlib | 470,065,936 | 353 | {
"number": 353,
"repo": "OpenFermion-Cirq",
"user_login": "quantumlib"
} | [
{
"action": "opened",
"author": "bryano",
"comment_id": null,
"datetime": 1563495730000,
"masked_author": "username_0",
"text": "Adds `FermionicSimulationGate` metaclass with common methods.\r\n\r\nAdds `trotterize` method.",
"title": "Utilities for trotterizing operators",
"type": "issue"
},
{
"action": "created",
"author": "bryano",
"comment_id": 513045598,
"datetime": 1563496842000,
"masked_author": "username_0",
"text": "Note that the only failing test is pylint complaining about wrong import order because I'm importing from GitHub. (Needs more recent OF master and two of my PRs on Cirq.)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 513045991,
"datetime": 1563496988000,
"masked_author": "username_0",
"text": "Sorry, I did not realize how big this PR got. Let me know if you want me to break it up.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 530562962,
"datetime": 1568235502000,
"masked_author": "username_0",
"text": "Why does pylint say that cirq should come before numpy? Aren't they both 3rd-party imports?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kevinsung",
"comment_id": 530578184,
"datetime": 1568238195000,
"masked_author": "username_1",
"text": "I don't know, just satisfy it haha.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 564828343,
"datetime": 1576119572000,
"masked_author": "username_0",
"text": "Fixes #253",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kevinsung",
"comment_id": 573225547,
"datetime": 1578694301000,
"masked_author": "username_1",
"text": "Hi @username_0 sorry for the delay in reviewing this. Can you please split this into two PR's, with one introducing the gates and the other with the trotterization utilities?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 575560993,
"datetime": 1579255729000,
"masked_author": "username_0",
"text": "@username_1 I could split it up, but the trotterization utilities are currently used to test the gates, so it wouldn't be trivial.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kevinsung",
"comment_id": 578250398,
"datetime": 1579891234000,
"masked_author": "username_1",
"text": "Yes, please split it up. The gates and their tests should not depend on the trotterization utilities, but the other way around is okay.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 581099246,
"datetime": 1580619919000,
"masked_author": "username_0",
"text": "Okay, I'll start splitting it up.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncrubin",
"comment_id": 668822638,
"datetime": 1596574782000,
"masked_author": "username_2",
"text": "@username_0 We've come up on 1 year open for this PR. Given it's age can I assume it is stale?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bryano",
"comment_id": 668825087,
"datetime": 1596575142000,
"masked_author": "username_0",
"text": "@username_2 Yea, I think everything already got integrated in parts (#370, #371, #388).",
"title": null,
"type": "comment"
}
] | 3 | 12 | 1,126 | false | false | 1,126 | true |
SteveGilham/altcover | null | 345,018,537 | 27 | null | [
{
"action": "opened",
"author": "RichardD012",
"comment_id": null,
"datetime": 1532641940000,
"masked_author": "username_0",
"text": "Maybe I'm misreading the command line parameters and instructions, but its unclear if I'm able to call AltCover to completely ignore a subset of directories.\r\n\r\nCalling from (based on instructions its regex so the .* should define wildcard path):\r\n`dotnet test -v n /p:AltCover=true /p:AltCoverMethodFilter=.\\*/src/Data/Generated/.\\*/.\\*.cs` \r\n\r\nall files matching our generated code in src/Data/Generated still show up in the output report. The usage instructions could use some concrete examples of how to properly use these flags as I'm expecting it to take standard wildcard paths which doesn't seem to be the case.",
"title": "Unable to Exclude Folders",
"type": "issue"
},
{
"action": "created",
"author": "RichardD012",
"comment_id": 408471544,
"datetime": 1532709127000,
"masked_author": "username_0",
"text": "Thanks this actually clears up a lot. I was thinking full path file filter.\r\n\r\nAlso, I totally copied and pasted the wrong attribute there so of course that doesn't work.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SteveGilham",
"comment_id": 408518891,
"datetime": 1532720224000,
"masked_author": "username_1",
"text": "[Pre-release build 3.5.577](https://ci.appveyor.com/project/username_1/altcover/build/3.5.577-pre/artifacts) has a new -pathFilter (/p:AltCoverPathFilter) parameter that should do what you're looking for. As always the test is \"does the regex match at all\", not \"does the regex match the entire string\".",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SteveGilham",
"comment_id": 408665227,
"datetime": 1532858249000,
"masked_author": "username_1",
"text": "And formally in release 3.5.580",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "SteveGilham",
"comment_id": null,
"datetime": 1532858250000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 1,127 | false | false | 1,127 | true |
fex-team/ueditor | fex-team | 190,210,153 | 3,150 | null | [
{
"action": "opened",
"author": "liyang348918334",
"comment_id": null,
"datetime": 1479431184000,
"masked_author": "username_0",
"text": "Hi everyone, when I copy rich-text layouts edited in the 135 editor into the WeChat Official Accounts platform, the styles look fine. Both of them use this editor of yours, so why doesn't it work when I paste the content into the demo on the official site? Please advise!!",
"title": "Urgent! Can anyone help me answer this: does it require a plugin?",
"type": "issue"
},
{
"action": "closed",
"author": "Phinome",
"comment_id": null,
"datetime": 1481256027000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 78 | false | false | 78 | false |
nipy/nipype | nipy | 314,655,117 | 2,543 | null | [
{
"action": "opened",
"author": "atsuch",
"comment_id": null,
"datetime": 1523885922000,
"masked_author": "username_0",
"text": "### Summary\r\nI'm not sure why others wouldn't have caught this one, but I think there is a spelling mistake in the command for antsRegistrationSyNQuick.sh... \r\n\r\nI think it has to be the capital N for SyN, but in nipype/interfaces/ants/registration.py the _cmd is antsRegistrationSynQuick.sh\r\n\r\n### Actual behavior\r\nCauses a crash when using a node with antsRegistrationSyNQuick with the following message\r\n\r\nIOError: No command \"antsRegistrationSynQuick.sh\" found on host c2. Please check that the corresponding package is installed.\r\n\r\n### Platform details:\r\nI am using nipype 1.0.2, ants version 2.1.",
"title": "antsRegistrationSyNQuick- spelling error?",
"type": "issue"
},
{
"action": "created",
"author": "effigies",
"comment_id": 381606050,
"datetime": 1523886610000,
"masked_author": "username_1",
"text": "Yup, you're right. Any interest in submitting a fix? The `_cmd` attribute and test commands in the doc string will need updating.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "effigies",
"comment_id": null,
"datetime": 1523995343000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "effigies",
"comment_id": 382741961,
"datetime": 1524145614000,
"masked_author": "username_1",
"text": "@username_0 This will be fixed in 1.0.3.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "atsuch",
"comment_id": 382757126,
"datetime": 1524148011000,
"masked_author": "username_0",
"text": "@username_1, Thank you! \r\n\r\nI will try contributing next time...!",
"title": null,
"type": "comment"
}
] | 2 | 5 | 831 | false | false | 831 | true |
hoodiehq/hoodie-server | hoodiehq | 124,319,046 | 445 | {
"number": 445,
"repo": "hoodie-server",
"user_login": "hoodiehq"
} | [
{
"action": "created",
"author": "gr2m",
"comment_id": 167973205,
"datetime": 1451470771000,
"masked_author": "username_0",
"text": "closing because `tap@3` is on `@next` now",
"title": null,
"type": "comment"
}
] | 2 | 2 | 2,582 | false | true | 41 | false |
schmittjoh/JMSPaymentCoreBundle | null | 40,774,869 | 157 | {
"number": 157,
"repo": "JMSPaymentCoreBundle",
"user_login": "schmittjoh"
} | [
{
"action": "opened",
"author": "petrjaros",
"comment_id": null,
"datetime": 1408600536000,
"masked_author": "username_0",
"text": "``method_type`` service tag should be ``method_form_type``",
"title": "Typo in documentation: method_form_type",
"type": "issue"
},
{
"action": "created",
"author": "regularjack",
"comment_id": 240561515,
"datetime": 1471470931000,
"masked_author": "username_1",
"text": "Closing in favour of #172",
"title": null,
"type": "comment"
}
] | 2 | 2 | 83 | false | false | 83 | false |
GoogleChrome/lighthouse | GoogleChrome | 435,402,130 | 8,471 | null | [
{
"action": "opened",
"author": "FANHATCHA",
"comment_id": null,
"datetime": 1555768653000,
"masked_author": "username_0",
"text": "**Initial URL**: http://127.0.0.1:8000/cours/developpement-de-carriere\r\n**Chrome Version**: 73.0.3683.103\r\n**Error Message**: PROTOCOL_TIMEOUT\r\n**Stack Trace**:\r\n```\r\nLHError: PROTOCOL_TIMEOUT\r\n at eval (chrome-devtools://devtools/remote/serve_file/@e82a658d8159cabbd4938c1660f9bb00b4a82a23/audits2_worker/audits2_worker_module.js:1027:210)\r\n```",
"title": "DevTools Error: PROTOCOL_TIMEOUT",
"type": "issue"
},
{
"action": "closed",
"author": "FANHATCHA",
"comment_id": null,
"datetime": 1555768683000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 348 | false | false | 348 | false |
OpenDataServices/cove | OpenDataServices | 265,757,445 | 871 | null | [
{
"action": "opened",
"author": "Bjwebb",
"comment_id": null,
"datetime": 1508158052000,
"masked_author": "username_0",
"text": "https://github.com/IATI/IATI-Rulesets/blob/version-2.02/rulesets/standard.json#L23-L28",
"title": "recipient-country|recipient-region percentage sum check missing",
"type": "issue"
},
{
"action": "created",
"author": "Bjwebb",
"comment_id": 336903813,
"datetime": 1508164013000,
"masked_author": "username_0",
"text": "This check is also missing from @username_1's code https://github.com/pwyf/data-quality-tester/tree/develop/test_definitions/iati_standard_ruleset\r\n\r\nIt's not listed on the iatistandard website http://iatistandard.org/202/rulesets/standard-ruleset/#non-machine-readable-rules (I've reported an issue at https://github.com/IATI/IATI-Rulesets/issues/46)\r\n\r\nAlso, the check as implemented in the IATI-Rulesets repo is inconsistent with what happens in the example XML (http://iatistandard.org/202/activity-standard/example-xml/) - the check is about the sum of all recipient-country and recipient-region, whereas the example XML treats different vocabularies differently.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andylolz",
"comment_id": 336937102,
"datetime": 1508170323000,
"masked_author": "username_1",
"text": "It’s true – looks like I missed these ones.\r\n\r\ntbh I’m happy to remove these tests completely, given [you have your own copy](https://github.com/OpenDataServices/cove/tree/master/cove_iati/rulesets/iati_standard_v2_ruleset)… They were only ever in the repo for demonstration purposes.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Bjwebb",
"comment_id": 337184608,
"datetime": 1508235051000,
"masked_author": "username_0",
"text": "@username_1 Yes, you can remove those tests if you want. A link over to this repo might also be useful for people.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "edugomez",
"comment_id": null,
"datetime": 1508245966000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "andylolz",
"comment_id": 424271191,
"datetime": 1537867721000,
"masked_author": "username_1",
"text": "I didn’t do this, because it wouldn’t be useful for anyone. These tests were in my code for demonstration purposes only. Pretty sure no-one relied on them, except maybe you guys.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 1,326 | false | false | 1,326 | true |
aws/aws-cdk | aws | 466,612,313 | 3,276 | {
"number": 3276,
"repo": "aws-cdk",
"user_login": "aws"
} | [
{
"action": "created",
"author": "RomainMuller",
"comment_id": 510797439,
"datetime": 1562919672000,
"masked_author": "username_0",
"text": "@dependabot merge",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RomainMuller",
"comment_id": 510798066,
"datetime": 1562919795000,
"masked_author": "username_0",
"text": "@dependabot rebase",
"title": null,
"type": "comment"
}
] | 2 | 3 | 3,550 | false | true | 35 | false |
IgniteUI/igniteui-angular | IgniteUI | 420,457,028 | 4,300 | null | [
{
"action": "opened",
"author": "sstoyanovIG",
"comment_id": null,
"datetime": 1552477643000,
"masked_author": "username_0",
"text": "## Description \r\nExcel style filtering dialog doesn't use igxGrid's display density.\r\n\r\n * igniteui-angular version: 7.2.x\r\n * browser: all\r\n\r\n## Steps to reproduce \r\n\r\n1. Open 'Grid Cell Editing' demo\r\n2. Enable Filtering and set it to be excel style\r\n3. Open Filter menu for some column\r\n\r\n## Result \r\nThe components inside of the menu doesn't have 'Compact' density\r\n\r\n## Expected result \r\nThe components inside of the menu should have 'Compact' density",
"title": "Use displayDensity of the igxGrid in ESF dialog",
"type": "issue"
},
{
"action": "closed",
"author": "gedinakova",
"comment_id": null,
"datetime": 1557159704000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 460 | false | false | 460 | false |
angular/universal | angular | 292,849,822 | 885 | null | [
{
"action": "opened",
"author": "samarmeena",
"comment_id": null,
"datetime": 1517329970000,
"masked_author": "username_0",
"text": "I want to render my dynamic title and metadata for dynamic pages. \r\n\r\nThe dynamic meta working fine with browser but in SEO result the meta is not found. \r\n\r\nI Think route switch have issue or miss configured.\r\n\r\nI need some support to solve this issue to make a web app with dynamic content with SEO friendly.",
"title": "Component Dynamic title not rendered",
"type": "issue"
},
{
"action": "created",
"author": "Gorniv",
"comment_id": 361672746,
"datetime": 1517333910000,
"masked_author": "username_1",
"text": "https://github.com/Angular-RU/angular-universal-starter use ngx-meta",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "samarmeena",
"comment_id": null,
"datetime": 1517339069000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 738 | false | true | 378 | false |
abbasfreestyle/react-native-af-video-player | null | 293,422,734 | 8 | null | [
{
"action": "opened",
"author": "mvpdream",
"comment_id": null,
"datetime": 1517468148000,
"masked_author": "username_0",
"text": "After the full screen, the video is played, and the play button is clicked again. The video can't be played. (Android)",
"title": "After the full screen, the video is played, and the play button is clicked again. The video can't be played. (Android)",
"type": "issue"
},
{
"action": "created",
"author": "abbasfreestyle",
"comment_id": 362446310,
"datetime": 1517530941000,
"masked_author": "username_1",
"text": "I can't seem to replicate this issue.\nNo code, project version data or screenshots makes it hard to diagnose your issue.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mvpdream",
"comment_id": 362462151,
"datetime": 1517536693000,
"masked_author": "username_0",
"text": "\r\n\r\nThe first step is to open the video in the linkage mode and suspend the function of the playback.\r\n\r\nThe second step, switch to full screen mode.\r\n\r\nThe third step, again switch to the linkage mode, video abrupt interruption, suspend play function failure.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mvpdream",
"comment_id": 362902102,
"datetime": 1517746244000,
"masked_author": "username_0",
"text": "@username_1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "abbasfreestyle",
"comment_id": 367163924,
"datetime": 1519171023000,
"masked_author": "username_1",
"text": "@username_0 I've pushed a fixed in version 0.1.6\r\n\r\nThanks for raising this issue. Let me know if this has fixed it :)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mvpdream",
"comment_id": null,
"datetime": 1519348346000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 6 | 873 | false | false | 873 | true |