| repo | org | issue_id | issue_number | pull_request | events | user_count | event_count | text_size | bot_issue | modified_by_bot | text_size_no_bots | modified_usernames |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
tremor-rs/tremor-runtime | tremor-rs | 724,655,396 | 508 | null | [
{
"action": "opened",
"author": "mfelsche",
"comment_id": null,
"datetime": 1603116550000,
"masked_author": "username_0",
"text": "**Describe the problem you are trying to solve**\r\nLogging uses different kinds of logging prefixes. This might be hard to evaluate with a machine or hard for searching systematically.\r\n\r\n\r\n**Describe the solution you'd like**\r\n\r\nUse consistent logging formats. Maybe use our own logging wrapper or anything to make it easier. Also use more meaningful error types to help with this.",
"title": "Streamline Logging and Error messages",
"type": "issue"
},
{
"action": "created",
"author": "Licenser",
"comment_id": 712885091,
"datetime": 1603203366000,
"masked_author": "username_1",
"text": "This might be something worth tackling alongside looking at moving away from error chain like #389 ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Licenser",
"comment_id": 726768936,
"datetime": 1605274646000,
"masked_author": "username_1",
"text": "depends on #506",
"title": null,
"type": "comment"
}
] | 2 | 3 | 496 | false | false | 496 | false |
karol-f/vue-custom-element | null | 595,203,876 | 201 | null | [
{
"action": "opened",
"author": "vertic4l",
"comment_id": null,
"datetime": 1586186304000,
"masked_author": "username_0",
"text": "",
"title": "Issue with vue-class-component",
"type": "issue"
},
{
"action": "closed",
"author": "vertic4l",
"comment_id": null,
"datetime": 1586186564000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "vertic4l",
"comment_id": null,
"datetime": 1586186573000,
"masked_author": "username_0",
"text": "",
"title": "Issue with vue-class-component",
"type": "issue"
},
{
"action": "created",
"author": "vertic4l",
"comment_id": 609862969,
"datetime": 1586186733000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "karol-f",
"comment_id": 654783286,
"datetime": 1594120583000,
"masked_author": "username_1",
"text": "Can You prepare CodeSandbox?\r\n\r\nAlso maybe use `Vue.customElement(\"test-component\", Component.prototype.constructor.options);`",
"title": null,
"type": "comment"
}
] | 2 | 5 | 126 | false | false | 126 | false |
apple/coremltools | apple | 510,144,637 | 500 | {
"number": 500,
"repo": "coremltools",
"user_login": "apple"
} | [
{
"action": "opened",
"author": "slin07",
"comment_id": null,
"datetime": 1571679049000,
"masked_author": "username_0",
"text": "This PR solves the 16-bit compression issue by rerouting the old code path for FP16 only to quantization APIs",
"title": "Fix bugs in FP16 compression utility and quantization",
"type": "issue"
},
{
"action": "created",
"author": "1duo",
"comment_id": 544624537,
"datetime": 1571679380000,
"masked_author": "username_1",
"text": "Thanks!",
"title": null,
"type": "comment"
}
] | 2 | 2 | 116 | false | false | 116 | false |
DEVSENSE/phptools-docs | DEVSENSE | 674,266,049 | 64 | null | [
{
"action": "opened",
"author": "addshore",
"comment_id": null,
"datetime": 1596716018000,
"masked_author": "username_0",
"text": "If I have a @deprecated phpdoc annotation above a method that causes a PHP6406 problem to be listed in another file\r\nWhen I remove that @deprecated phpdoc annotation\r\nI would expect that the PHP6406 problem in the other file would go away / not be listed\r\nInstead the PHP6406 problem continues to be listed\r\n\r\nIf I remove and re add the method call the PHP6406 problem will no longer appear.",
"title": "Deprecated method problems don't go away if I remove the @deprecated tag",
"type": "issue"
},
{
"action": "created",
"author": "addshore",
"comment_id": 669961828,
"datetime": 1596724031000,
"masked_author": "username_0",
"text": "It looks like intelephense for example already does this, so looks like it is already possible within vscode, but phptools doesn't implement it yet\r\n\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jakubmisek",
"comment_id": 670925912,
"datetime": 1596891247000,
"masked_author": "username_1",
"text": "Oh you are right, we suppose to do that. Will be fixed!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jakubmisek",
"comment_id": 670935484,
"datetime": 1596896941000,
"masked_author": "username_1",
"text": "fixed since 1.0.5031 (to be released)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jakubmisek",
"comment_id": null,
"datetime": 1596896941000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 743 | false | false | 743 | false |
MicrosoftDocs/azure-docs | MicrosoftDocs | 490,527,185 | 38,477 | null | [
{
"action": "opened",
"author": "cloudysingh",
"comment_id": null,
"datetime": 1567804735000,
"masked_author": "username_0",
"text": "Under the 'configure dependency' section, right below the diagram, 'DependOn' is written. It should be 'DependsOn'\n\n---\n#### Document Details\n\n⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*\n\n* ID: 255db119-3d96-f4de-a2bb-c90f782b883d\n* Version Independent ID: f769af8b-63af-42ba-0ebc-933ade25645e\n* Content: [Create linked Azure Resource Manager templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-tutorial-create-linked-templates)\n* Content Source: [articles/azure-resource-manager/resource-manager-tutorial-create-linked-templates.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-resource-manager/resource-manager-tutorial-create-linked-templates.md)\n* Service: **azure-resource-manager**\n* GitHub Login: @mumian\n* Microsoft Alias: **jgao**",
"title": "Documentation Error : 'DependOn' written instead of 'DependsOn'",
"type": "issue"
},
{
"action": "closed",
"author": "PRMerger6",
"comment_id": null,
"datetime": 1567810942000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "KrishnaG-MSFT",
"comment_id": 529045311,
"datetime": 1567812437000,
"masked_author": "username_2",
"text": "@username_0 Thanks for the feedback. As per the above details, a PR is raised to update the document as required. The change in the doc should reflect soon as the PR is currently in merged state.\r\n@tfitzmac Thanks for the PR and quick response. :+1:",
"title": null,
"type": "comment"
}
] | 3 | 3 | 1,097 | false | false | 1,097 | true |
jneilliii/OctoPrint-GoogleDriveBackup | null | 707,626,517 | 3 | null | [
{
"action": "opened",
"author": "ashleycawley",
"comment_id": null,
"datetime": 1600889615000,
"masked_author": "username_0",
"text": "Hopefully an easy one for you; please make it more explicit / obvious in your OctoPrint Plugin description and Github docs as to what exactly it is backing up for the user.\r\n\r\nIs this backing up a single print job GCODE after a successful print? Or is this backing up the whole OctoPrint configuration or something in between? I'm not sure. If its more obvious to readers then perhaps more users will install it and give it a go.\r\n\r\nThanks for your time on this one!",
"title": "Improve Description",
"type": "issue"
},
{
"action": "created",
"author": "jneilliii",
"comment_id": 697966101,
"datetime": 1600894309000,
"masked_author": "username_1",
"text": "Hey Ashley, this plugin is kind of an auxiliary tool to the default backup plugin that is bundled with OctoPrint. Basically what it does is upload any backup you create manually or through my new backup scheduler plugin to your google drive. If you choose to include gcode files or timelapses, those are included in the backup, etc.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jneilliii",
"comment_id": 697966661,
"datetime": 1600894385000,
"masked_author": "username_1",
"text": "And BTW it won't work until future version OctoPrint 1.5.0 anyway, so might not be something you'll be able to use until the next official release.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cyberfilou",
"comment_id": 697986321,
"datetime": 1600897081000,
"masked_author": "username_2",
"text": "Hello\r\nIn addition, version 1.5.0 is not yet available, so why release this plugin if it does not work yet?\r\nThank you",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jneilliii",
"comment_id": 698034127,
"datetime": 1600905781000,
"masked_author": "username_1",
"text": "There are some people running 1.5.0, mostly developers like myself, where the feature is available already.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jneilliii",
"comment_id": null,
"datetime": 1609183102000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 6 | 1,170 | false | false | 1,170 | false |
binary-com/deriv-com | binary-com | 776,816,199 | 1,327 | {
"number": 1327,
"repo": "deriv-com",
"user_login": "binary-com"
} | [
{
"action": "opened",
"author": "mustofa-binary",
"comment_id": null,
"datetime": 1609396465000,
"masked_author": "username_0",
"text": "# Description\r\n\r\nSummary\r\n\r\nChanges:\r\n\r\n- ...\r\n\r\n## Type of change\r\n\r\n- [ ] Bug fix\r\n- [ ] New feature\r\n- [ ] Update feature\r\n- [ ] Refactor code\r\n- [ ] Translation to code\r\n- [ ] Translation to crowdin\r\n- [ ] Script configuration\r\n- [ ] Improve performance\r\n- [ ] Style only\r\n- [ ] Dependency update\r\n- [ ] Documentation update\r\n- [ ] Release",
"title": "Crowdin",
"type": "issue"
}
] | 3 | 3 | 868 | false | true | 369 | false |
openziti/desktop-edge-win | openziti | 668,056,544 | 85 | null | [
{
"action": "opened",
"author": "dovholuknf",
"comment_id": null,
"datetime": 1596047871000,
"masked_author": "username_0",
"text": "Installed the latest installer into windows sandbox. No shortcut produced on desktop nor in start menu.",
"title": "0.0.19 installer does not create shortcuts",
"type": "issue"
},
{
"action": "closed",
"author": "JeremyTellier",
"comment_id": null,
"datetime": 1597091153000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 103 | false | false | 103 | false |
MaybeShewill-CV/bisenetv2-tensorflow | null | 644,955,080 | 14 | null | [
{
"action": "opened",
"author": "lev1khachatryan",
"comment_id": null,
"datetime": 1593031925000,
"masked_author": "username_0",
"text": "Hi,",
"title": "Explanation of the data augmentation strategy",
"type": "issue"
},
{
"action": "created",
"author": "MaybeShewill-CV",
"comment_id": 649239886,
"datetime": 1593065176000,
"masked_author": "username_1",
"text": "@username_0 I believe the comment has been pretty clear. Those params are used when you apply resize with different strategy (eg. rangescaling and stepscaling). Cropping operation was applied when the image was resized properly:)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "MaybeShewill-CV",
"comment_id": null,
"datetime": 1593408886000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 237 | false | false | 237 | true |
jetstack/cert-manager | jetstack | 746,987,634 | 3,471 | null | [
{
"action": "opened",
"author": "kyprifog",
"comment_id": null,
"datetime": 1605826335000,
"masked_author": "username_0",
"text": "No matter what I do I always end up with a self signed cert using the following syntax:\r\n```\r\napiVersion: cert-manager.io/v1alpha2\r\nkind: Certificate\r\nmetadata:\r\n name: service-tls-cert\r\n namespace: service \r\nspec:\r\n dnsNames:\r\n - <DOMAIN>\r\n secretName: service-tls-cert\r\n issuerRef:\r\n name: letsencrypt-prod\r\n\r\n```\r\n\r\n```\r\napiVersion: cert-manager.io/v1alpha2\r\nkind: ClusterIssuer\r\nmetadata:\r\n name: letsencrypt-prod\r\nspec:\r\n acme:\r\n # The ACME server URL\r\n server: https://acme-v02.api.letsencrypt.org/directory\r\n # Email address used for ACME registration\r\n email: <EMAIL>\r\n # Name of a secret used to store the ACME account private key\r\n privateKeySecretRef:\r\n name: letsencrypt-prod\r\n solvers:\r\n # An empty 'selector' means that this solver matches all domains\r\n - selector: {}\r\n http01:\r\n ingress:\r\n class: nginx\r\n```\r\n\r\n```\r\nhelm upgrade \\\r\n cert-manager jetstack/cert-manager \\\r\n --namespace cert-manager \\\r\n --set ingressShim.defaultIssuerName=letsencrypt-prod \\\r\n --set ingressShim.defaultIssuerKind=ClusterIssuer \\\r\n --set installCRDs=true \\\r\n```\r\n\r\nThis results in a self signed cert when I go to access <DOMAIN> instead of the CA issuer cert made by letsencrypt-prod. Nothing unusual or relevant shows up in the cert-manager pod logs.",
"title": "Issuer is ignored, results in self signed cert no matter what",
"type": "issue"
},
{
"action": "created",
"author": "kyprifog",
"comment_id": 730687653,
"datetime": 1605826529000,
"masked_author": "username_0",
"text": "I'm finding this:\r\n```\r\nkubectl describe certificaterequests.cert-manager.io --namespace service\r\n\r\nStatus:\r\n Conditions:\r\n Last Transition Time: 2020-11-19T22:45:00Z\r\n Message: Referenced \"Issuer\" not found: issuer.cert-manager.io \"letsencrypt-prod\" not found\r\n Reason: Pending\r\n Status: False\r\n Type: Ready\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kyprifog",
"comment_id": 730694312,
"datetime": 1605827303000,
"masked_author": "username_0",
"text": "I think this answered my issue: https://stackoverflow.com/questions/58553510/cant-get-certs-working-with-cert-manager",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "kyprifog",
"comment_id": null,
"datetime": 1605827303000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 4 | 1,829 | false | false | 1,829 | false |
Adyen/adyen-web | Adyen | 678,119,638 | 192 | {
"number": 192,
"repo": "adyen-web",
"user_login": "Adyen"
} | [
{
"action": "opened",
"author": "shernaz",
"comment_id": null,
"datetime": 1597289013000,
"masked_author": "username_0",
"text": "## Summary\r\n<!--\r\nDescribe the changes proposed in this pull request:\r\nInclude Affirm payment method as a component\r\n-->\r\n\r\n## Tested scenarios\r\n<!-- Test cases written -->",
"title": "Include Affirm component as payment method",
"type": "issue"
}
] | 3 | 4 | 6,452 | false | true | 172 | false |
kraiz/django-crontab | null | 662,632,958 | 114 | null | [
{
"action": "opened",
"author": "tifeng",
"comment_id": null,
"datetime": 1595311666000,
"masked_author": "username_0",
"text": "#### Environment & Versions\r\n* Operating system: CentOS 3.10\r\n* Python: 3.6.8\r\n* Django:2.1.7\r\n* django-crontab:0.7.1\r\n\r\nDoes anybody know how to run crontab job as user instead of root?\r\nWhen Crontab in linux can set specific user like this:\r\n* * * * * [username] touch /tmp/file\r\nBut There is no way to run as a user in django-crontab.\r\nThanks!",
"title": "How to run crontab job as user instead of root",
"type": "issue"
}
] | 1 | 1 | 348 | false | false | 348 | false |
Busry9126/github-slideshow | null | 778,450,984 | 3 | {
"number": 3,
"repo": "github-slideshow",
"user_login": "Busry9126"
} | [
{
"action": "opened",
"author": "Busry9126",
"comment_id": null,
"datetime": 1609803497000,
"masked_author": "username_0",
"text": "im new",
"title": "Create 0000-01-02-Busry9126.md",
"type": "issue"
}
] | 2 | 2 | 1,772 | false | true | 6 | false |
flutter/flutter | flutter | 703,929,552 | 66,066 | null | [
{
"action": "opened",
"author": "jonahwilliams",
"comment_id": null,
"datetime": 1600378952000,
"masked_author": "username_0",
"text": "This should speed up the performance of the initial load the first time an application using Flutter Web is visited. Currently registration is delayed to a load event:\r\n\r\n```javascript\r\n if ('serviceWorker' in navigator) {\r\n window.addEventListener('load', function () {\r\n navigator.serviceWorker.register('flutter_service_worker.js');\r\n });\r\n }\r\n```\r\n\r\nInstead we should have the engine dispatch an event after the first frame. Then listen for this instead:\r\n\r\n```\r\n if ('serviceWorker' in navigator) {\r\n window.addEventListener('flutter-first-frame', function () {\r\n navigator.serviceWorker.register('flutter_service_worker.js');\r\n });\r\n }\r\n```",
"title": "Use a first-frame event to signal that it is safe for the service worker to activate",
"type": "issue"
},
{
"action": "closed",
"author": "jonahwilliams",
"comment_id": null,
"datetime": 1600912687000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 692 | false | false | 692 | false |
valter-tonon/blog-mybot | null | 597,853,012 | 14 | {
"number": 14,
"repo": "blog-mybot",
"user_login": "valter-tonon"
} | [
{
"action": "opened",
"author": "valter-tonon",
"comment_id": null,
"datetime": 1586519976000,
"masked_author": "username_0",
"text": "Automatically generated by Netlify CMS",
"title": "Update Posts “2020-04-10-série-tape-reading-1”",
"type": "issue"
}
] | 2 | 2 | 230 | false | true | 38 | false |
TerryHowe/ansible-modules-hashivault | null | 617,096,571 | 247 | null | [
{
"action": "opened",
"author": "jamielennox",
"comment_id": null,
"datetime": 1589337805000,
"masked_author": "username_0",
"text": "Me again, I promise i'm doing this to help not be a pain :) \r\n\r\nSo this is kind of an inbuilt problem with ansible, but I _think_ it might be something that can be worked around. There are a lot of places where vault expects and returns an integer value. The ones that are getting me at the moment are all the TTLs in certificates. However due to the way jinja works ansible generally only templates out strings.\r\n\r\nTaking a new-ish example: \r\n\r\n```\r\n- hosts: localhost\r\n tasks:\r\n - hashivault_pki_role:\r\n name: tester\r\n config:\r\n allow_subdomains: true\r\n ttl: 3600\r\n```\r\n\r\nthis should work because the ttl is an integer and the values returned by the vault APIs are integers, if i put that into a loop though: \r\n\r\n```\r\n- hosts: localhost\r\n tasks:\r\n - hashivault_pki_role:\r\n name: \"{{ item.name }}\"\r\n config:\r\n allow_subdomains: \"{{ item.allow_subdomains | default(omit) }}\"\r\n ttl: \"{{ item.ttl | default('3600') }}\"\r\n loop: \"{{ roles }}\"\r\n```\r\n\r\nThis will always return changed and update because the ttl value is a string, being compared to an integer even though the value is the same. \r\n\r\nThere is actually a relatively new feature on the ansible side that can let a jinja value be templated and return a non-string type: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-jinja2-native but it's not really ideal to set that at the cfg level. \r\n\r\nI think the easiest way to fix this would be to cast everything to strings in the compare function? Or at least check if the return value is a native type (int, float, ?) and then cast both sides to string before doing the check? \r\n\r\nNote: i'm typing this directly into the bug, so my examples probably don't work perfectly, but hopefully show the problem.",
"title": "Modules report changed due to type mismatch",
"type": "issue"
},
{
"action": "created",
"author": "jamielennox",
"comment_id": 627721299,
"datetime": 1589340174000,
"masked_author": "username_0",
"text": "Oh, this functionality is particularly annoying in this case. \r\n\r\nThe module typecasts all the incoming parameters, so you can't pass in an integer because the type is marked as string: https://github.com/username_1/ansible-modules-hashivault/blob/2bc7447df6fdcd686fe0745d5751e18f37be0226/ansible/modules/hashivault/hashivault_pki_role.py#L324-L332\r\n\r\nThis would appear to be because you can use `ttl`/`max_ttl` with a `h` suffix, so `100h` vs seconds (rant about h, but no d, or y in another place - not your doing). However when you query vault for it's configuration it doesn't return the `100h` format it returns an integer of seconds like `6000`. So this will never line up. \r\n\r\nI think the bug stands, I've definitely hit places where the int != string is a problem, just that this may not have been the best example. \r\n\r\nIf there's a useful ansible/python utility to convert human readable times to integers it would be great to preprocess this and avoid the problem and be more useful - but different bug. \r\n\r\nFor now I don't think you can do much about comparing `100h` to `6000`, but if we input a value in seconds we shouldn't report changed.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "TerryHowe",
"comment_id": null,
"datetime": 1589381389000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "TerryHowe",
"comment_id": 628043030,
"datetime": 1589381483000,
"masked_author": "username_1",
"text": "The whole hour/minute/second thing, I think a lot of modules don't use the common state compare and more should. I've on one hand been avoiding common code here, but the comparison is actually a good case for common code.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 3,187 | false | false | 3,187 | true |
onecentlin/laravel-blade-snippets-vscode | null | 744,528,296 | 123 | null | [
{
"action": "opened",
"author": "ragulka",
"comment_id": null,
"datetime": 1605602258000,
"masked_author": "username_0",
"text": "Not sure if this is specifically caused by this extension, but it starts happening when I enable this extension. Whenever I change some lines of a blade file and save the file, indentation of changed lines is removed.\r\n\r\nHere's a GIF demonstrating the issue: https://d.pr/i/qxay9V\r\n\r\nWhen I disable this extension, this issue does not occur. Also note that when I Format the document, this issue does not happen either.",
"title": "Indentation of modified lines removed on save",
"type": "issue"
}
] | 1 | 1 | 419 | false | false | 419 | false |
erdomke/RtfPipe | null | 503,458,717 | 34 | null | [
{
"action": "opened",
"author": "uzrptav",
"comment_id": null,
"datetime": 1570455515000,
"masked_author": "username_0",
"text": "Hello! Please, Help! \r\nI'm trying to parse **RTF** file and get **StackOverflowException**.\r\n\r\n```\r\nvar rtf = System.IO.File.ReadAllText(\"test.rtf\");\r\nvar html = Rtf.ToHtml(rtf);\r\n```\r\nExample in attachment (please, rename `\"test_RTF.txt\"` ->`\"test.rtf\"` )\r\n\r\n[test_RTF.txt](https://github.com/username_2/RtfPipe/files/3697628/test_RTF.txt)\r\n\r\nThank you a lot!",
"title": "StackOverflowException.",
"type": "issue"
},
{
"action": "created",
"author": "sanjay95",
"comment_id": 542040291,
"datetime": 1571116481000,
"masked_author": "username_1",
"text": "try this \r\n StreamReader file = new StreamReader(\"test.rtf\");\r\n RtfSource source = file.ReadToEnd();\r\n var html = Rtf.ToHtml(source);",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "uzrptav",
"comment_id": 542052193,
"datetime": 1571119520000,
"masked_author": "username_0",
"text": "Hello!\nThnx a lot for reply, but it still does not work.\nI have *System.StackOverflowException* in *HtmlWriter *class in the\nmethod *PxString(UnitValue\nvalue) *on code line:\n\n*var px = value.ToPx().ToString(\"0.#\");*",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "erdomke",
"comment_id": null,
"datetime": 1571585932000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "erdomke",
"comment_id": 544265745,
"datetime": 1571587317000,
"masked_author": "username_2",
"text": "This should be fixed with version 1.0.7232.28557 which is being pushed to NuGet.",
"title": null,
"type": "comment"
}
] | 3 | 5 | 817 | false | false | 817 | true |
input-output-hk/daedalus | input-output-hk | 730,643,616 | 2,218 | {
"number": 2218,
"repo": "daedalus",
"user_login": "input-output-hk"
} | [
{
"action": "opened",
"author": "DeeJayElly",
"comment_id": null,
"datetime": 1603818357000,
"masked_author": "username_0",
"text": "This PR fixes Address validation on Shelley QA Network\r\n\r\n## Screenshots\r\n\r\n<img width=\"1148\" alt=\"Screen Shot 2020-10-27 at 6 16 28 PM\" src=\"https://user-images.githubusercontent.com/15862062/97337436-92a37b00-1880-11eb-8619-a300e773867a.png\">\r\n<img width=\"1149\" alt=\"Screen Shot 2020-10-27 at 6 16 00 PM\" src=\"https://user-images.githubusercontent.com/15862062/97337450-9800c580-1880-11eb-864f-dd9549c5ce32.png\">\r\n\r\n\r\n---\r\n\r\n## Testing Checklist\r\n\r\n- [Slack QA thread](https://input-output-rnd.slack.com/archives/GGKFXSKC6/p1603819038247400)\r\n\r\n---\r\n\r\n## Review Checklist\r\n\r\n### Basics\r\n\r\n- [ ] PR has been assigned and has appropriate labels (`feature`/`bug`/`chore`, `release-x.x.x`)\r\n- [ ] PR is updated to the most recent version of the target branch (and there are no conflicts)\r\n- [ ] PR has a good description that summarizes all changes\r\n- [ ] PR has default-sized Daedalus window screenshots or animated GIFs of important UI changes:\r\n - [ ] In English\r\n - [ ] In Japanese\r\n- [ ] CHANGELOG entry has been added to the top of the appropriate section (*Features*, *Fixes*, *Chores*) and is linked to the correct PR on GitHub\r\n- [ ] Automated tests: All acceptance and unit tests are passing (`yarn test`)\r\n- [ ] Manual tests (minimum tests should cover newly added feature/fix): App works correctly in *development* build (`yarn dev`)\r\n- [ ] Manual tests (minimum tests should cover newly added feature/fix): App works correctly in *production* build (`yarn package` / CI builds)\r\n- [ ] There are no *flow* errors or warnings (`yarn flow:test`)\r\n- [ ] There are no *lint* errors or warnings (`yarn lint`)\r\n- [ ] There are no *prettier* errors or warnings (`yarn prettier:check`)\r\n- [ ] There are no missing translations (running `yarn manage:translations` produces no changes)\r\n- [ ] Text changes are proofread and approved (Jane Wild / Amy Reeve)\r\n- [ ] Japanese text changes are proofread and approved (Junko Oda)\r\n- [ ] UI changes look good in all themes (Alexander Rukin)\r\n- [ ] Storybook works and no stories are broken (`yarn storybook`)\r\n- [ ] In case of dependency changes `yarn.lock` file is updated\r\n\r\n### Code Quality\r\n- [ ] Important parts of the code are properly commented and documented\r\n- [ ] Code is properly typed with flow\r\n- [ ] React components are split-up enough to avoid unnecessary re-renderings\r\n- [ ] Any code that only works in main process is neatly separated from components\r\n\r\n### Testing\r\n- [ ] New feature/change is covered by acceptance tests\r\n- [ ] New feature/change is manually tested and approved by QA team\r\n- [ ] All existing acceptance tests are still up-to-date\r\n- [ ] New feature/change is covered by Daedalus Testing scenario\r\n- [ ] All existing Daedalus Testing scenarios are still up-to-date\r\n\r\n### After Review\r\n- [ ] Merge the PR\r\n- [ ] Delete the source branch\r\n- [ ] Move the ticket to `done` column on the YouTrack board\r\n- [ ] Update Slack QA thread by marking it with a green checkmark",
"title": "[DDW-376] Fix address validation on shelley qa network",
"type": "issue"
},
{
"action": "created",
"author": "gnpf",
"comment_id": 718180650,
"datetime": 1603915863000,
"masked_author": "username_1",
"text": "@username_0 this is working well. I was wondering if the same issue with Staging is going to be addressed on this PR too.\r\n@username_3 [mentioned it happens there as well.](https://jira.iohk.io/browse/DDW-376?focusedCommentId=26027&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-26027)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nikolaglumac",
"comment_id": 718184036,
"datetime": 1603916269000,
"masked_author": "username_2",
"text": "@username_0 please check this 🙏",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nikolaglumac",
"comment_id": 718559189,
"datetime": 1603964816000,
"masked_author": "username_2",
"text": "@username_0 I have added one more TODO for you:\r\n\r\n\r\ncc @username_3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "DeeJayElly",
"comment_id": 718681328,
"datetime": 1603970360000,
"masked_author": "username_0",
"text": "Fix is added for network tag",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alan-mcnicholas",
"comment_id": 718709108,
"datetime": 1603973086000,
"masked_author": "username_3",
"text": "Address validation is now working on staging, great work @username_0 ! :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nikolaglumac",
"comment_id": 718709670,
"datetime": 1603973166000,
"masked_author": "username_2",
"text": "Thanks @username_3! Did you also make sure non-staging addresses are rejected? E.g. the mainnet ones?\r\n\r\ncc @username_1 @ManusMcCole @mioriohk",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alan-mcnicholas",
"comment_id": 718742545,
"datetime": 1603977081000,
"masked_author": "username_3",
"text": "@username_2 I checked that address discrimination is working between staging\\mainnet and testnet.\r\nValid addresses for both are accepted and invalid ones are rejected as expected due to incompatible networkMagic.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nikolaglumac",
"comment_id": 718743116,
"datetime": 1603977143000,
"masked_author": "username_2",
"text": "Great! Thanks for checking that @username_3 ❤️ \r\nCan you please approve the PR?",
"title": null,
"type": "comment"
}
] | 4 | 9 | 4,051 | false | false | 4,051 | true |
knative/homebrew-client | knative | 636,180,710 | 6 | {
"number": 6,
"repo": "homebrew-client",
"user_login": "knative"
} | [
{
"action": "opened",
"author": "zetaron",
"comment_id": null,
"datetime": 1591789677000,
"masked_author": "username_0",
"text": "When tapping knative/client homebrew expects the tagged Formula to be named differend and currently fails even though we try to install the current `0.15` version which is declared in `kn.rb`\r\n\r\nLogs:\r\n```\r\n==> Tapping knative/client\r\nCloning into '/usr/local/Homebrew/Library/Taps/knative/homebrew-client'...\r\nError: Invalid formula: /usr/local/Homebrew/Library/Taps/knative/homebrew-client/kn@0.14.rb\r\nError: Cannot tap knative/client: invalid syntax in tap!\r\nNo available formula with the name \"kn@0.14\"\r\nIn formula file: /usr/local/Homebrew/Library/Taps/knative/homebrew-client/kn@0.14.rb\r\nExpected to find class KnAT014, but only found: Kn.\r\nTapping knative/client has failed!\r\nWarning: 'knative/client/kn' formula is unreadable: No available formula with the name \"knative/client/kn\"\r\nPlease tap it and then try again: brew tap knative/client\r\n```",
"title": "fix: brew tap knative/client fails",
"type": "issue"
},
{
"action": "created",
"author": "zetaron",
"comment_id": 641951925,
"datetime": 1591789919000,
"masked_author": "username_0",
"text": "@googlebot I signed it!",
"title": null,
"type": "comment"
}
] | 3 | 6 | 3,873 | false | true | 876 | false |
conda-forge/r-stringi-feedstock | conda-forge | 600,862,930 | 29 | {
"number": 29,
"repo": "r-stringi-feedstock",
"user_login": "conda-forge"
} | [
{
"action": "created",
"author": "jayfurmanek",
"comment_id": 614689772,
"datetime": 1587047447000,
"masked_author": "username_0",
"text": "@conda-forge-admin, please rerender",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jayfurmanek",
"comment_id": 614699543,
"datetime": 1587048424000,
"masked_author": "username_0",
"text": "I don't see the aarch64 build on Drone..",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jayfurmanek",
"comment_id": 615341047,
"datetime": 1587140796000,
"masked_author": "username_0",
"text": "@conda-forge-admin, please rerender",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mariusvniekerk",
"comment_id": 615353266,
"datetime": 1587142289000,
"masked_author": "username_1",
"text": "@conda-forge-admin, please rerender",
"title": null,
"type": "comment"
}
] | 5 | 8 | 2,526 | false | true | 145 | false |
carbon-design-system/carbon-for-ibm-dotcom | carbon-design-system | 746,944,309 | 4,534 | {
"number": 4534,
"repo": "carbon-for-ibm-dotcom",
"user_login": "carbon-design-system"
} | [
{
"action": "opened",
"author": "andysherman2121",
"comment_id": null,
"datetime": 1605821924000,
"masked_author": "username_0",
"text": "### Related Ticket(s)\r\n\r\n#3630 \r\n### Description\r\n\r\nCreated layout web component story example\r\n\r\n<!-- React and Web Component deploy previews are enabled by default. -->\r\n<!-- To enable additional available deploy previews, apply the following -->\r\n<!-- labels for the corresponding package: -->\r\n<!-- *** \"package: services\": Services -->\r\n<!-- *** \"package: utilities\": Utilities -->\r\n<!-- *** \"package: styles\": Carbon Expressive -->\r\n<!-- *** \"RTL\": React / Web Components (RTL) -->\r\n<!-- *** \"feature flag\": React / Web Components (experimental) -->",
"title": "feat(layout): created layout story",
"type": "issue"
},
{
"action": "created",
"author": "jeffchew",
"comment_id": 733197185,
"datetime": 1606247450000,
"masked_author": "username_1",
"text": "@username_0 we probably need some documentation on how layout works. Perhaps if there's a README for the layout styles/classes, and have that in the style package, then embed that into the Docs tab here?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jeffchew",
"comment_id": 733198018,
"datetime": 1606247561000,
"masked_author": "username_1",
"text": "@username_0 specifically this README, should also see if the documentation itself needs to be beefed up at all:\r\n\r\nhttps://github.com/carbon-design-system/carbon-for-ibm-dotcom/tree/master/packages/styles/scss/components/layout",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andysherman2121",
"comment_id": 733202738,
"datetime": 1606248140000,
"masked_author": "username_0",
"text": "@username_1 Gotcha, yeah i was wondering about the documentation. I'll start working on fixing it all up",
"title": null,
"type": "comment"
}
] | 3 | 6 | 1,757 | false | true | 1,097 | true |
sumup-oss/circuit-ui | sumup-oss | 753,913,499 | 731 | {
"number": 731,
"repo": "circuit-ui",
"user_login": "sumup-oss"
} | [
{
"action": "opened",
"author": "connor-baer",
"comment_id": null,
"datetime": 1606783964000,
"masked_author": "username_0",
"text": "Fixes #726.\r\n\r\n## Purpose\r\n\r\nThe SelectorGroup is not exported publicly which is an oversight. The prop types warning about a missing change handler is incorrect because we use click handler instead for better browser compatibility. \r\n\r\n## Approach and changes\r\n\r\n- add missing export\r\n- add noop change handlers to silence React warnings\r\n\r\n## Definition of done\r\n\r\n* [x] Development completed\r\n* [x] Reviewers assigned\r\n* [x] Unit and integration tests\r\n* [x] Meets minimum browser support\r\n* [x] Meets accessibility requirements",
"title": "fix(components): export SelectorGroup",
"type": "issue"
}
] | 4 | 5 | 3,151 | false | true | 531 | false |
flutter/flutter | flutter | 694,422,158 | 65,316 | null | [
{
"action": "opened",
"author": "amanv8060",
"comment_id": null,
"datetime": 1599406172000,
"masked_author": "username_0",
"text": "I use 2 channels beta and stable . Everytime i switch from one to another there is download of approximately 200 mb, as far as i remember it is dart sdk which gets downloaded in case the flutter repo has not been changed since last switch . what i feel is that this download should be skipped if the dart sdk for both the channel is same . or it might be a issue only in my computer ..\r\n\r\nSpecs\r\nprocessor : Intel® Core™ i5-8250U CPU @ 1.60GHz × 8\r\nOs:Ubuntu 20.04.1 LTS\r\nOS type : 64 bit\r\ngnome version:3.36.3\r\ngit version : 2.25.1",
"title": "Flutter channel switching problem",
"type": "issue"
},
{
"action": "created",
"author": "hamdikahloun",
"comment_id": 687845763,
"datetime": 1599412522000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "darshankawar",
"comment_id": null,
"datetime": 1599467302000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "darshankawar",
"comment_id": 688159496,
"datetime": 1599467302000,
"masked_author": "username_2",
"text": "Hi @username_0,\r\nThere's similar open issue describing your case, [44212](https://github.com/flutter/flutter/issues/44212).\r\nPlease follow-up there for updates and any questions.\r\nClosing this as duplicate. If you disagree, write in comments and I'll reopen.\r\nThanks.",
"title": null,
"type": "comment"
}
] | 4 | 5 | 1,063 | false | true | 801 | true |
nlohmann/json | null | 615,117,453 | 2,095 | null | [
{
"action": "opened",
"author": "sametdev",
"comment_id": null,
"datetime": 1589010481000,
"masked_author": "username_0",
"text": "I want dump folder/file names in a json file.. With utf8 chars. like indian/turkish/german/finland chars etc. But i get crashs when i add some file with utf8 format.\r\n\r\nI'm using VS2019 Preview currently. Also im using the lastest version of json.hpp\r\n\r\nThe code:\r\n`\r\nvoid read_dir(string folder)\r\n{\r\n json jdata;\r\n string search_path = folder + \"/*.*\";\r\n WIN32_FIND_DATA fd;\r\n\r\n std::wstring stemp = utils::s2ws(search_path);\r\n LPCWSTR result = stemp.c_str();\r\n\r\n HANDLE hFind = ::FindFirstFile(result, &fd);\r\n if (hFind != INVALID_HANDLE_VALUE) {\r\n do {\r\n json j;\r\n string name = \"\";\r\n //Eger listedeki bir dosya ise\r\n if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {\r\n wstring ws(fd.cFileName);\r\n string str(ws.begin(), ws.end());\r\n name = str.c_str();\r\n j[\"object\"][\"type\"] = \"File\";\r\n j[\"object\"][\"name\"] = name; //put the name of file anyways.. i need an converter or something else\r\n }\r\n //Eger listedeki bir klasor ise (gizli degilse)\r\n if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_HIDDEN) && (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))\r\n {\r\n wstring ws(fd.cFileName);\r\n string str(ws.begin(), ws.end());\r\n name = str.c_str();\r\n j[\"object\"][\"type\"] = \"Folder\";\r\n j[\"object\"][\"name\"] = name; //put the name of file anyways.. i need an converter or something else\r\n }\r\n jdata.push_back(j);\r\n } while (::FindNextFile(hFind, &fd));\r\n ::FindClose(hFind);\r\n\r\n std::ofstream o(\"dump.json\");\r\n o << std::setw(4) << jdata << std::endl;\r\n\r\n }\r\n}\r\n`\r\n\r\nhttps://prnt.sc/sdlaet",
"title": "UTF8 ",
"type": "issue"
},
{
"action": "created",
"author": "nlohmann",
"comment_id": 626145670,
"datetime": 1589020547000,
"masked_author": "username_1",
"text": "The error says that the input is not valid UTF-8. From your code, it seems that you convert a `std::wstring` to a `std::string`. I assume the `std::wstring` is UTF-16 encoded, and just casting it into a `std::string` does not change the encoding to UTF-8. The library only supports UTF-8.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sametdev",
"comment_id": 626396438,
"datetime": 1589148136000,
"masked_author": "username_0",
"text": "I just make a function for delete utf8 characters :D its look better for now.. Thanks username_1.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "sametdev",
"comment_id": null,
"datetime": 1589148142000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 2,186 | false | false | 2,186 | true |
codeplayr/alphacate | null | 678,559,970 | 1 | null | [
{
"action": "opened",
"author": "stefankirchfeld",
"comment_id": null,
"datetime": 1597335272000,
"masked_author": "username_0",
"text": "Since these fields are not only result fields but are also used internally for the i+1 calculations for the next period, and they are rounded, the average gain remains the same for a lot of periods over time, although smaller gains/losses occurred, which actually do affect the RSI.\r\nHence the indicator tends to deliver values that are too low.\r\n\r\nI did not check if similar rounding errors happen for other indicators as well, maybe it's worth going over your code again and double check this.\r\n\r\nI would suggest to not round any fields that are used for in-between calculations. Only at the very end, just when the data is returned to the caller, fields can be rounded.\r\n\r\nI noticed the discrepancies when comparing your RSI values with RSIs for example on tradingview.com or results delivered from another node library (@d3fc/d3fc-technical-indicator). \r\n\r\nBest regards,\r\nStefan",
"title": "Wrong RSI calculations due to rounding errors",
"type": "issue"
},
{
"action": "created",
"author": "mdkrieg",
"comment_id": 869953587,
"datetime": 1624907752000,
"masked_author": "username_1",
"text": "Hello @username_0 \r\n\r\nI've forked this repository and fixed the issue you mentioned. I increased the rounding to 10 decimals and fixed a problem with the intial averages including the first bar which is always zero gain & loss.\r\n\r\nSee here: https://github.com/username_1/alphacate\r\n\r\n@codeplayr , big thanks and nice work on this repository!\r\n\r\nRegards,\r\n-Matt",
"title": null,
"type": "comment"
}
] | 2 | 2 | 1,244 | false | false | 1,244 | true |
MITgcm/xmitgcm | MITgcm | 542,092,256 | 188 | null | [
{
"action": "opened",
"author": "lanmanli",
"comment_id": null,
"datetime": 1577185513000,
"masked_author": "username_0",
"text": "I am trying to plot ssh at a some point,but get some error.\r\n\r\nThis is what i wrote:\r\nfrom xmitgcm import llcreader\r\nmodel = llcreader.ECCOPortalLLC4320Model()\r\nds = model.get_dataset(varnames=['Eta'])\r\na1=ds['Eta'].sel(face=4,time=slice(\"2011-09-15T05:00:00\",\"2011-09-15T05:00:00\"))\r\na1=a1.mean(dim='time')\r\na1.plot()\r\n\r\n\r\nThe error is:\r\nTypeError: request() got an unexpected keyword argument 'size_policy'",
"title": "can't plot or transform llc4320 data to netcdf",
"type": "issue"
},
{
"action": "created",
"author": "rabernat",
"comment_id": 568752765,
"datetime": 1577195706000,
"masked_author": "username_1",
"text": "HI @username_0 - thanks for your question.\r\n\r\nThis is the same issue as #175. You need to update xmitgcm and fsspec to their latest versions. Please see #175 for detailed instructions on how to resolve.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "rabernat",
"comment_id": null,
"datetime": 1577195707000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "lanmanli",
"comment_id": 568823807,
"datetime": 1577240796000,
"masked_author": "username_0",
"text": "Thank you very much for your reply. I'm very sorry, but I may have to hold you up a bit.\r\n I have update xmitgcm and fsspec with conda,now their version is 0.4.1 and 0.6.0 respectively.And I also edited to remove unnecessary NUMPY_EXPERIMENTAL_ARRAY_FUNCTION stuff. But still get the same error.\r\n\r\nHere is my code:\r\n\r\nimport os\r\nos.environ['NUMPY_EXPERIMENTAL_ARRAY_FUNCTION']='0'\r\nimport xmitgcm\r\nfrom xmitgcm import llcreader\r\nimport fsspec\r\nprint(xmitgcm.__version__, fsspec.__version__)\r\nmodel = llcreader.ECCOPortalLLC4320Model()\r\nds = model.get_dataset(varnames=['Eta'])\r\na1=ds['Eta'].sel(face=4,time=slice(\"2011-09-15T05:00:00\",\"2011-09-15T05:00:00\"))\r\na1=a1.mean(dim='time')\r\na1.plot()\r\n\r\n\r\nTypeError: request() got an unexpected keyword argument 'size_policy'",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,378 | false | false | 1,378 | true |
darkreader/darkreader | darkreader | 585,721,228 | 2,154 | {
"number": 2154,
"repo": "darkreader",
"user_login": "darkreader"
} | [
{
"action": "opened",
"author": "Myshor",
"comment_id": null,
"datetime": 1584884454000,
"masked_author": "username_0",
"text": "Before:\r\n\r\n\r\nAfter:\r\n",
"title": "facebook.com - new fix for memories inverted img",
"type": "issue"
},
{
"action": "created",
"author": "Myshor",
"comment_id": 602824669,
"datetime": 1584993612000,
"masked_author": "username_0",
"text": "Fix for likes icons for recommended pages as they sometimes are <img> but sometimes <i>.\r\nCatched that there is always .img class used.\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Myshor",
"comment_id": 605684781,
"datetime": 1585509082000,
"masked_author": "username_0",
"text": "reworked memories picture fix to be without classes thst change all the time",
"title": null,
"type": "comment"
}
] | 1 | 3 | 564 | false | false | 564 | false |
Gregory94/LaanLab-SATAY-DataAnalysis | null | 636,309,767 | 14 | null | [
{
"action": "opened",
"author": "Gregory94",
"comment_id": null,
"datetime": 1591800341000,
"masked_author": "username_0",
"text": "Using Python makes it easier to integrate the code in the rest of the workflow.\r\nAlso, this makes documentation easier using for example Jupyter Notebooks or Jupyter Books.",
"title": "Convert Matlab code provided by the Kornmann lab for SATAY analysis to Python.",
"type": "issue"
},
{
"action": "created",
"author": "Gregory94",
"comment_id": 650137474,
"datetime": 1593172150000,
"masked_author": "username_0",
"text": "The Matlab code from the [Kornmann lab](https://sites.google.com/site/satayusers/complete-protocol/bioinformatics-analysis/matlab-script) for processing the number of transposons and reads for each possible insertion site uses specific functions. These functions are not all present in Python. For example, Matlab uses the `Biomap` function to read a .bam file. However, implementing a similar function in Python does not seem to be trivial. There is a package called [`pysam`](https://pysam.readthedocs.io/en/latest/index.html) that might be able to read .bam files, but this is not working on Windows machines. I will check if this will work in Linux (which should run the code in the end anyway) and see if this package gives the desired results.\r\nI will also check if it would be feasible to escape to another programming language that might be able to do the analysis of the .bam file.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Gregory94",
"comment_id": 656763855,
"datetime": 1594398250000,
"masked_author": "username_0",
"text": "The progress on translating the [Matlab code from the Kornmann lab](https://sites.google.com/site/satayusers/complete-protocol/bioinformatics-analysis/matlab-script) to python can be found [here](https://github.com/username_0/LaanLab-SATAY-DataAnalysis/blob/dev_Gregory/Python_TransposonMapping/pysam_test.py).\r\nThe pysam function seems to work pretty well and the python code has very similar results compared to the Matlab code.\r\nThere are some differences, for example due to the fact that some python functions work slightly different compared to their Matlab equivalents. Also, the Python code uses some different files for finding (essential) genes that are not perfectly identical to the ones used by the Kornmann lab. Tests need to prove if these differences are acceptable.\r\nWhen the tests prove successful, this code will be changed to a more organized python code and some improvements in terms of efficiency and speed will be made.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Gregory94",
"comment_id": 671257329,
"datetime": 1597052290000,
"masked_author": "username_0",
"text": "The python code for transposon mapping is finished and integrated into the workflow for satay analysis.\r\nThe code can be found [here](https://github.com/username_0/LaanLab-SATAY-DataAnalysis/blob/master/Python_TransposonMapping/transposonmapping_satay.py) and is now also present in the virtual machine.\r\nI solved all the issues that were present in the matlab code by Benoit and the code creates some additional files that might be helpful for our research.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "leilaicruz",
"comment_id": null,
"datetime": 1601742477000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 2,461 | false | false | 2,461 | true |
analysis-tools-dev/static-analysis | analysis-tools-dev | 722,352,785 | 580 | {
"number": 580,
"repo": "static-analysis",
"user_login": "analysis-tools-dev"
} | [
{
"action": "opened",
"author": "AristoChen",
"comment_id": null,
"datetime": 1602769620000,
"masked_author": "username_0",
"text": "Hi @username_1,\r\n\r\nI found [this](https://github.com/collections/clean-code-linters) in github collections, so I add it to `More Collections`, and also add new entries from that list.\r\n\r\nI am wondering if it makes sense to contribute a PR to [gihub collection repo](https://github.com/github/explore/blob/master/collections/clean-code-linters/index.md) to include our static-analysis repo?",
"title": "Update \"More Collection\" and add new entries",
"type": "issue"
},
{
"action": "created",
"author": "mre",
"comment_id": 709616000,
"datetime": 1602799705000,
"masked_author": "username_1",
"text": "I *think* so. Will you take care of it?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "AristoChen",
"comment_id": 710014773,
"datetime": 1602851251000,
"masked_author": "username_0",
"text": "I read their [requirements](https://github.com/github/explore/blob/master/.github/PULL_REQUEST_TEMPLATE.md), It seems that they do not allow repo maintainer to suggest their own repo :(\r\n\r\n```markdown\r\n### Thank you for contributing! Please confirm this pull request meets the following requirements:\r\n\r\n- [ ] I followed the contributing guidelines: https://github.com/github/explore/blob/master/CONTRIBUTING.md\r\n- [ ] I have no affiliation with the project I am suggesting (as a maintainer, creator, contractor, or employee).\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mre",
"comment_id": 710100762,
"datetime": 1602860548000,
"masked_author": "username_1",
"text": "Hum... that's actually a sensible rule. Then let's skip that. Was worth a try, though.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,038 | false | false | 1,038 | true |
pandas-dev/pandas | pandas-dev | 646,285,073 | 35,014 | null | [
{
"action": "opened",
"author": "TomAugspurger",
"comment_id": null,
"datetime": 1593180131000,
"masked_author": "username_0",
"text": "#### Code Sample, a copy-pastable example\r\n\r\n```pytb\r\nIn [1]: import pandas as pd\r\nIn [2]: df = pd.DataFrame({\"A\": [0, 0, 1, None], \"B\": [1, 2, 3, None]})\r\nIn [3]: gb = df.groupby(\"A\", dropna=False)\r\nIn [6]: gb['B'].transform(len)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-6-3bae7d67a46f> in <module>\r\n----> 1 gb['B'].transform(len)\r\n\r\n~/sandbox/pandas/pandas/core/groupby/generic.py in transform(self, func, engine, engine_kwargs, *args, **kwargs)\r\n 471 if not isinstance(func, str):\r\n 472 return self._transform_general(\r\n--> 473 func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs\r\n 474 )\r\n 475\r\n\r\n~/sandbox/pandas/pandas/core/groupby/generic.py in _transform_general(self, func, engine, engine_kwargs, *args, **kwargs)\r\n 537\r\n 538 result.name = self._selected_obj.name\r\n--> 539 result.index = self._selected_obj.index\r\n 540 return result\r\n 541\r\n\r\n~/sandbox/pandas/pandas/core/generic.py in __setattr__(self, name, value)\r\n 5141 try:\r\n 5142 object.__getattribute__(self, name)\r\n-> 5143 return object.__setattr__(self, name, value)\r\n 5144 except AttributeError:\r\n 5145 pass\r\n\r\n~/sandbox/pandas/pandas/_libs/properties.pyx in pandas._libs.properties.AxisProperty.__set__()\r\n 64\r\n 65 def __set__(self, obj, value):\r\n---> 66 obj._set_axis(self.axis, value)\r\n\r\n~/sandbox/pandas/pandas/core/series.py in _set_axis(self, axis, labels, fastpath)\r\n 422 if not fastpath:\r\n 423 # The ensure_index call above ensures we have an Index object\r\n--> 424 self._mgr.set_axis(axis, labels)\r\n 425\r\n 426 # ndarray compatibility\r\n\r\n~/sandbox/pandas/pandas/core/internals/managers.py in set_axis(self, axis, new_labels)\r\n 213 if new_len != old_len:\r\n 214 raise ValueError(\r\n--> 215 f\"Length mismatch: Expected axis has {old_len} elements, new \"\r\n 216 f\"values have {new_len} elements\"\r\n 
217 )\r\n\r\nValueError: Length mismatch: Expected axis has 3 elements, new values have 4 elements\r\n```\r\n\r\n#### Problem description\r\n\r\nCompare that with the following\r\n\r\n```python\r\nIn [4]: gb.transform(len)\r\nOut[4]:\r\n B\r\n0 2\r\n1 2\r\n2 1\r\n3 1\r\n\r\nIn [5]: gb[['B']].transform(len)\r\nOut[5]:\r\n B\r\n0 2\r\n1 2\r\n2 1\r\n3 1\r\n```\r\n\r\nSo it's just when slicing down to a SeriesGroupBy object.\r\n\r\n#### Expected Output\r\n\r\nA series:\r\n\r\n```python\r\nOut[5]:\r\n0 2\r\n1 2\r\n2 1\r\n3 1\r\n```",
"title": "BUG: DataFrameGroupBy.__getitem__ fails to propagate dropna",
"type": "issue"
},
{
"action": "created",
"author": "arw2019",
"comment_id": 650634396,
"datetime": 1593294198000,
"masked_author": "username_1",
"text": "take",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "arw2019",
"comment_id": 652094314,
"datetime": 1593558727000,
"masked_author": "username_1",
"text": "@username_0 I think that the problem in `SeriesGroupBy.transform` comes down to L387-388 in `pandas/core/groupby/grouper.py`:\r\nhttps://github.com/pandas-dev/pandas/blob/b687cd4d9e520666a956a60849568a98dd00c672/pandas/core/groupby/grouper.py#L387\r\nFor `gb['B']` if we print out `values` `NaN` is getting getting dropped (and this then propagates along to where the original issue came up):\r\n```\r\nself.grouper = [ 0. 0. 1. nan]\r\n\r\nvalues = {'_dtype': CategoricalDtype(categories=[0.0, 1.0], ordered=False), '_codes': array([ 0, 0, 1, -1], dtype=int8)}\r\n```\r\nSince what we need from `indices` is a dict of values and indices where they occur a quick solution could be to do that on the fly in `Grouping.indices`. Would that work or would we want to do something else?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jreback",
"comment_id": null,
"datetime": 1596835986000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 3,454 | false | false | 3,454 | true |
bbc/sqs-producer | bbc | 626,484,593 | 51 | null | [
{
"action": "opened",
"author": "vorticalbox",
"comment_id": null,
"datetime": 1590670375000,
"masked_author": "username_0",
"text": "**Question**\r\nversion 1.6.1 types for .send were\r\n\r\n```\r\nsend(messages: string | string[] | ProducerMessage | ProducerMessage[], cb: ProducerCallback<void>): void;\r\n```\r\n\r\nthis has been changed to \r\n\r\n```\r\nsend(messages: string | string[]): Promise<SendMessageBatchResultEntryList>;\r\n```\r\ndoes this mean to old format of\r\n\r\n```\r\n{\r\n id: string;\r\n body: string;\r\n messageAttributes?: { [key: string]: ProducerMessageAttribute };\r\n delaySeconds?: number;\r\n groupId?: string;\r\n deduplicationId?: string;\r\n}\r\n```\r\n\r\nis no longer supported or is it just missing types?",
"title": "chnages to send types",
"type": "issue"
},
{
"action": "created",
"author": "jeanrauwers",
"comment_id": 639351663,
"datetime": 1591347403000,
"masked_author": "username_1",
"text": "You are welcome to send a PR with the fix!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jeanrauwers",
"comment_id": null,
"datetime": 1594037348000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 611 | false | false | 611 | false |
pandas-dev/pandas | pandas-dev | 680,158,314 | 35,766 | {
"number": 35766,
"repo": "pandas",
"user_login": "pandas-dev"
} | [
{
"action": "opened",
"author": "simonjayhawkins",
"comment_id": null,
"datetime": 1597663271000,
"masked_author": "username_0",
"text": "Reverts pandas-dev/pandas#35543",
"title": "Revert \"REGR: Fix interpolation on empty dataframe\"",
"type": "issue"
},
{
"action": "created",
"author": "jorisvandenbossche",
"comment_id": 674984640,
"datetime": 1597682148000,
"masked_author": "username_1",
"text": "I don't think there is a need to revert, let's just fix the `self` to `self.copy()` in a follow-up PR",
"title": null,
"type": "comment"
}
] | 2 | 2 | 132 | false | false | 132 | false |
sourcegraph/sourcegraph | sourcegraph | 706,016,677 | 14,021 | null | [
{
"action": "opened",
"author": "unknwon",
"comment_id": null,
"datetime": 1600738183000,
"masked_author": "username_0",
"text": "We should start to be able to generate new licenses with new versioned tags on Sourcegraph Cloud for sales team to use.\r\n\r\nSome of the work has been done in this branch: https://github.com/sourcegraph/sourcegraph/compare/jc/enforce-product-tiers (i.e. define the new tags).\r\n\r\nPart of [RFC 167](https://docs.google.com/document/d/1XozQ4JINJqirdaG-XqGtboT2-PlIXPyBn6EwV7Q3pWI/edit).",
"title": "licenses: Generate new licenses with versioned tags",
"type": "issue"
},
{
"action": "closed",
"author": "unknwon",
"comment_id": null,
"datetime": 1602490294000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 381 | false | false | 381 | false |
ladybug-tools/dragonfly-schema | ladybug-tools | 616,900,765 | 69 | {
"number": 69,
"repo": "dragonfly-schema",
"user_login": "ladybug-tools"
} | [
{
"action": "opened",
"author": "mostaphaRoudsari",
"comment_id": null,
"datetime": 1589311482000,
"masked_author": "username_0",
"text": "this commit uses the updated methods from honeybee-schema to:\n\n1. reuse the enum objects\n2. update the mapper class structure\n\nfixes: #63",
"title": "fix(openapi-spec): make enum reusable",
"type": "issue"
}
] | 2 | 2 | 426 | false | true | 137 | false |
adobe/spectrum-web-components | adobe | 722,581,933 | 959 | null | [
{
"action": "opened",
"author": "adixon-adobe",
"comment_id": null,
"datetime": 1602787728000,
"masked_author": "username_0",
"text": "### Expected Behaviour\r\nDescription isn't transformed to uppercase. The spectrum examples don't do this: https://spectrum.adobe.com/page/cards/\r\n\r\nThe web component documentation examples should also include longer descriptions.\r\n\r\n### Actual Behaviour\r\nWe're picking up `text-transform: uppercase` from spectrum-css (which is written up as https://github.com/adobe/spectrum-css/issues/1054)\r\n\r\n### Reproduce Scenario (including but not limited to)\r\n1. View the examples here: https://opensource.adobe.com/spectrum-web-components/components/card\r\n2. Inspect the css & notice the `text-transform: uppercase (I see that all the example text is all uppercase but you can also repro by changing the label text).",
"title": "sp-card subheading incorrectly uses text-transform: uppercase",
"type": "issue"
},
{
"action": "created",
"author": "adixon-adobe",
"comment_id": 709523352,
"datetime": 1602787746000,
"masked_author": "username_0",
"text": "Writing this up since we'll want to override the bad css for now.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Westbrook",
"comment_id": 709558721,
"datetime": 1602791976000,
"masked_author": "username_1",
"text": "This is touched on in https://github.com/adobe/spectrum-web-components/pull/936 where we no long receive text styling as part of the component (don't ask), so we'd be able to address this there.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Westbrook",
"comment_id": 745580206,
"datetime": 1608067884000,
"masked_author": "username_1",
"text": "fixed by #1045",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Westbrook",
"comment_id": null,
"datetime": 1608067884000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 980 | false | false | 980 | false |
cseeger-epages/mail2most | null | 476,768,811 | 3 | null | [
{
"action": "opened",
"author": "silberzwiebel",
"comment_id": null,
"datetime": 1564999164000,
"masked_author": "username_0",
"text": "**Describe the bug**\r\nNew e-mails are not posted to Mattermost, if you have previously deleted old e-mails from your e-mail account. You will need to receive as many new e-mails as you have deleted to get a new mattermost message.\r\nThis is very likely because `mail2most` seems to only count the mails it has posted but not the content to decide whether it will post a new notification (reasoning from the content of `data.json`).\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Delete an e-mail in your watched folder\r\n2. send a new test e-mail to your wached e-mail account\r\n3. no mattermost message for this new e-mail (the log says, this e-mail is already posted)\r\n\r\n**Expected behavior**\r\nNo matter how many e-mails you delete, every *new* e-mail should result in a new mattermost message.\r\n\r\n**Release version**\r\nv1.1.0, arm version\r\n\r\n\r\n**Additional context**\r\nI'm running mattermost on a raspberry pi, with [these unofficial releases](https://github.com/SmartHoneybee/ubiquitous-memory).",
"title": "new e-mails are not posted if you have deleted e-mails",
"type": "issue"
},
{
"action": "created",
"author": "cseeger-epages",
"comment_id": 518267409,
"datetime": 1565016784000,
"masked_author": "username_1",
"text": "@username_0 what email server do you use ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "silberzwiebel",
"comment_id": 518274931,
"datetime": 1565017875000,
"masked_author": "username_0",
"text": "mail.lima-city.de",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cseeger-epages",
"comment_id": 518278833,
"datetime": 1565018465000,
"masked_author": "username_1",
"text": "Looks like a classic POSTFIX roundcube combination ill try to reproduce the issue tomorrow. \r\n\r\nThis seems to be a problem of the reusage of mail id's maybe i find a better way to recognize if the mail is already send. My test mailserver never reuses mail id's but that maybe handled differently by different mailservers.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cseeger-epages",
"comment_id": 518555008,
"datetime": 1565078219000,
"masked_author": "username_1",
"text": "I fixed the uid to use the correct uid from the mail server and not the sequence number. Please delete your `data.json` on the first start you will get all new mails passing your filter since no ids are cached but then deleting shouldn't be an issue. Can you confirm its working now ?\r\n\r\n[linux-arm.tar.gz](https://github.com/username_1/mail2most/files/3470997/linux-arm.tar.gz)\r\n[linux-arm64.tar.gz](https://github.com/username_1/mail2most/files/3470998/linux-arm64.tar.gz)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "silberzwiebel",
"comment_id": null,
"datetime": 1565168364000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "silberzwiebel",
"comment_id": 519008519,
"datetime": 1565168364000,
"masked_author": "username_0",
"text": "Works now, thanks!",
"title": null,
"type": "comment"
}
] | 2 | 7 | 1,888 | false | false | 1,888 | true |
espnet/espnet | espnet | 702,724,564 | 2,485 | null | [
{
"action": "opened",
"author": "dqqcasia",
"comment_id": null,
"datetime": 1600259457000,
"masked_author": "username_0",
"text": "stage -1: Data Download\r\n/data5/dqq/codes/espnet/data/MUSTC_v1.0/MUSTC_v1.0_en-de.tar.gz exists and appears to be complete.\r\ntar: You may not specify more than one '-Acdtrux', '--delete' or '--test-label' option\r\nTry 'tar --help' or 'tar --usage' for more information.\r\nlocal/download_and_untar.sh: error un-tarring archive /data5/dqq/codes/espnet/data/MUSTC_v1.0/MUSTC_v1.0_en-de.tar.gz",
"title": "wrong parameter for tar -d",
"type": "issue"
},
{
"action": "created",
"author": "sw005320",
"comment_id": 694289593,
"datetime": 1600354385000,
"masked_author": "username_1",
"text": "Could you give us more information?\r\nWhich recipe are you using?\r\nIt would be better if you follow our bug report template to some extent.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kamo-naoyuki",
"comment_id": 694593936,
"datetime": 1600392293000,
"masked_author": "username_2",
"text": "Anyway, such a bug can be fixed easily by you, but it's hard for other people without running the script. Please debug by yourself before reporting it as a bug.\r\nI recommend you write `set -x` in run.sh.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "dqqcasia",
"comment_id": null,
"datetime": 1601615554000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 735 | false | false | 735 | false |
reflex-frp/reflex | reflex-frp | 641,906,503 | 436 | {
"number": 436,
"repo": "reflex",
"user_login": "reflex-frp"
} | [
{
"action": "opened",
"author": "maralorn",
"comment_id": null,
"datetime": 1592566519000,
"masked_author": "username_0",
"text": "This hadn't cropped up before because the version constraint in patch\nprevents cabal from ever picking this dependent-map version",
"title": "Make build work with newest dependent-map",
"type": "issue"
},
{
"action": "created",
"author": "3noch",
"comment_id": 647550465,
"datetime": 1592835678000,
"masked_author": "username_1",
"text": "Excellent. Thank you very much!",
"title": null,
"type": "comment"
}
] | 2 | 2 | 160 | false | false | 160 | false |
fdgt-apis/api | fdgt-apis | 802,964,263 | 352 | null | [
{
"action": "opened",
"author": "Duao20",
"comment_id": null,
"datetime": 1612707858000,
"masked_author": "username_0",
"text": "229",
"title": "?",
"type": "issue"
},
{
"action": "closed",
"author": "trezy",
"comment_id": null,
"datetime": 1612713317000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 3 | false | false | 3 | false |
dmlc/gluon-nlp | dmlc | 690,374,672 | 1,340 | null | [
{
"action": "opened",
"author": "szha",
"comment_id": null,
"datetime": 1598986903000,
"masked_author": "username_0",
"text": "## Description\r\nonce we finish #1339, we should expose installation options in batch docker script so that maintainers can control what dependencies to install in PRs.\r\nhttps://github.com/dmlc/gluon-nlp/blob/a842bf3c0e33bbcfc376f57ea6e5bccda4144a55/tools/batch/docker/gluon_nlp_job.sh#L22",
"title": "expose installation options as argument in batch docker",
"type": "issue"
}
] | 1 | 1 | 288 | false | false | 288 | false |
FarrokhGames/Inventory | FarrokhGames | 577,280,469 | 17 | null | [
{
"action": "opened",
"author": "Pixellore",
"comment_id": null,
"datetime": 1583558069000,
"masked_author": "username_0",
"text": "First of all, really great and neat inventory template.\r\n\r\nThis is not as much of an issue, but when you drop item outside inventory, it is desirable to have larger padding outside actual inventory grid area, because normally, you would want the dropping to happen say on outside actual UI graphics that represents the background area of the inventory. Right now, the item can be dropped just outside boundary of grid space, making it look not intuitive from the player's point of view when dropping happens on the background border UI element. When this happens, the item should just return to the inventory as original position instead of dropping. In other words we need to be able to define where the drop can happen much better than just a grid boundary. Maybe this can be solved by checking if any disallowed UI elements are at the point of drop,... somehow able to add those ui elements on the inventory controller?",
"title": "Dropping item outside inventory padding",
"type": "issue"
}
] | 1 | 1 | 923 | false | false | 923 | false |
code-dot-org/code-dot-org | code-dot-org | 769,361,482 | 38,301 | {
"number": 38301,
"repo": "code-dot-org",
"user_login": "code-dot-org"
} | [
{
"action": "opened",
"author": "bethanyaconnor",
"comment_id": null,
"datetime": 1608163270000,
"masked_author": "username_0",
"text": "This is\r\n1. much cleaner\r\n2. fixes a bug where the fake key would get sent back to the server preventing the lesson from being saved ([slack thread](https://codedotorg.slack.com/archives/CNZP84FJ5/p1608162502019700?thread_ts=1608156821.014200&cid=CNZP84FJ5))\r\n\r\n<!-- ### Background -->\r\n<!-- ### Privacy -->\r\n<!--\r\n1.\tDoes the Project involve the collection, use or sharing of new Personal Data?\r\n\r\n2.\tDoes the Project involve a new or changed use or sharing of existing Personal Data?\r\n-->\r\n<!-- ### Security -->\r\n<!--\r\nLink to Jira Task where sensitive security issues are discussed privately.\r\n-->\r\n<!-- ### Caching -->\r\n<!-- ### Deployment strategy -->\r\n<!-- ### Future work -->\r\n\r\n## Links\r\n\r\n<!--\r\n Any relevant links to external resources; ie, specification documents, jira\r\n items, related PRs, honeybadger errors, etc\r\n-->\r\n\r\n- [spec]()\r\n- [jira]()\r\n\r\n## Testing story\r\n\r\n<!--\r\n Does your change include appropriate tests?\r\n\r\n If so, please describe how the tests included in this PR are sufficient\r\n\r\n If not, please explain why this change does not need to be tested.\r\n-->\r\n\r\n# Reviewer Checklist:\r\n\r\n- [ ] Tests provide adequate coverage\r\n- [ ] Privacy and Security impacts have been assessed\r\n- [ ] Code is well-commented\r\n- [ ] New features are translatable or updates will not break translations\r\n- [ ] Relevant documentation has been added or updated\r\n- [ ] User impact is well-understood and desirable\r\n- [ ] Pull Request is labeled appropriately\r\n- [ ] Follow-up work items (including potential tech debt) are tracked and linked",
"title": "Use objective key to identify objectives",
"type": "issue"
},
{
"action": "created",
"author": "davidsbailey",
"comment_id": 747114760,
"datetime": 1608163565000,
"masked_author": "username_1",
"text": "I'm not totally sure I understand what's going on here. is the key generated by the client being used as the key on the server?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidsbailey",
"comment_id": 747116152,
"datetime": 1608163830000,
"masked_author": "username_1",
"text": "OK, I caught up on the slack thread and see what's going on now. this looks like a good fix. thanks for getting to the bottom of this!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bethanyaconnor",
"comment_id": 747119374,
"datetime": 1608164406000,
"masked_author": "username_0",
"text": "Yeah, the key wasn't being used on the server but was being passed back in `originalLessonData` as it was edited server-side.\r\n\r\nI agree that it would be good to have an end-to-end test on editing an entirely populated lesson to try to catch things like this. I'll add something to our backlog",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidsbailey",
"comment_id": 747146949,
"datetime": 1608168944000,
"masked_author": "username_1",
"text": "I think just adding some objectives (and other curriculum object types) to `TestController#create_migrated_script` would cover this",
"title": null,
"type": "comment"
}
] | 2 | 5 | 2,236 | false | false | 2,236 | false |
PaddlePaddle/Serving | PaddlePaddle | 748,235,388 | 890 | {
"number": 890,
"repo": "Serving",
"user_login": "PaddlePaddle"
} | [
{
"action": "opened",
"author": "MRXLT",
"comment_id": null,
"datetime": 1606049193000,
"masked_author": "username_0",
"text": "local predict reports an error on the last dimension when receiving a lod tensor; dimension 1 needs to be appended",
"title": "fix local_predict",
"type": "issue"
},
{
"action": "created",
"author": "TeslaZhao",
"comment_id": 739635377,
"datetime": 1607310452000,
"masked_author": "username_1",
"text": "The last dimension must not be padded with 1",
"title": null,
"type": "comment"
}
] | 3 | 3 | 123 | false | true | 52 | false |
MicrosoftDocs/azure-docs | MicrosoftDocs | 804,541,729 | 70,276 | null | [
{
"action": "opened",
"author": "SnoWolfT",
"comment_id": null,
"datetime": 1612876573000,
"masked_author": "username_0",
"text": "Need the notification of Eviction of spot VMSS on Portal. I make some research and find the similar concern as below. There is no answer or plan for this feature yet. It's appreciated if you can suggest whether we have any plan for the Eviction notification! Or do we have any feature and achieve that?\r\n\r\nhttps://docs.microsoft.com/en-us/answers/questions/165844/terminate-event-notification-for-spot-instances.html\r\n\r\nhttps://feedback.azure.com/forums/34192--general-feedback/suggestions/42055621-support-terminate-notification-for-spot-instances\r\n\r\n\r\nIn additional, there is one feature for the terminate notification for VMSS as below. But this isn’t suitable for the low-priority VMSS. \r\nhttps://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification\r\n\r\nI use the regular VMSS to have a try. After I click \"Delete\" of one instance, I can receive the scheduled event as below. \r\n====\r\ntomzhu@aks-tompool-39923754-vmss00000D:~$ curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01\r\n{\"DocumentIncarnation\":1,\"Events\":[{\"EventId\":\"C37355DB-2C23-4195-90CA-1A712A85650C\",\"EventStatus\":\"Scheduled\",\"EventType\":\"**Terminate**\",\"ResourceType\":\"VirtualMachine\",\"Resources\":[\"aks-tompool-39923754-vmss_13\"],\"NotBefore\":\"Tue, 09 Feb 2021 11:55:39 GMT\",\"Description\":\"\",\"EventSource\":\"Platform\"}]}\r\n====\r\n\r\nIf I enable the simulate eviction and query the schedule event within the instance of spot VMSS, I receive the event as below. \r\nhttps://docs.microsoft.com/en-us/rest/api/compute/virtualmachinescalesetvms/simulateeviction#code-try-0\r\n====\r\ntomzhu@test1t46400000A:~> curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01\r\n{\"DocumentIncarnation\":1,\"Events\":[{\"EventId\":\"CDDA70DC-537F-4151-8F2F-9FDB3EB961DB\",\"EventStatus\":\"Scheduled\",\"EventType\":\"**Preempt**\",\"ResourceType\":\"VirtualMachine\",\"Resources\":[\"test1_10\"],\"NotBefore\":\"Tue, 09 Feb 2021 12:09:38 GMT\",\"Description\":\"\",\"EventSource\":\"Platform\"}]}\r\n====\r\n\r\nWe can see the EventType is \"Preempt\", not \"Terminate\". May I know when the real eviction happens, is the EventType \"Preempt\"? Can we use this to EventType for the notification of Eviction?\r\n\r\n\r\n---\r\n#### Document Details\r\n\r\n⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*\r\n\r\n* ID: a1bd5076-fe52-cfca-4173-dec861167812\r\n* Version Independent ID: 0122fff1-79e0-446c-00a4-85248a21b6dc\r\n* Content: [Create a scale set that uses Azure Spot VMs - Azure Virtual Machine Scale Sets](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/use-spot)\r\n* Content Source: [articles/virtual-machine-scale-sets/use-spot.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machine-scale-sets/use-spot.md)\r\n* Service: **virtual-machine-scale-sets**\r\n* Sub-service: **spot**\r\n* GitHub Login: @username_2\r\n* Microsoft Alias: **username_2**",
"title": "Need the notification of Eviction of spot VMSS on Portal",
"type": "issue"
},
{
"action": "created",
"author": "ChaitanyaNaykodi-MSFT",
"comment_id": 776251044,
"datetime": 1612905548000,
"masked_author": "username_1",
"text": "Hello @username_0, \r\nThank you for your feedback! We will review and update as appropriate.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cynthn",
"comment_id": 776252847,
"datetime": 1612905758000,
"masked_author": "username_2",
"text": "@JagVeerappan",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "VikasPullagura-MSFT",
"comment_id": 784969131,
"datetime": 1614161891000,
"masked_author": "username_3",
"text": "@JagVeerappan \r\nCan you please check and add your comments on this request.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cynthn",
"comment_id": 843325646,
"datetime": 1621355242000,
"masked_author": "username_2",
"text": "I've added some new information about simulating evictions here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/spot-powershell#simulate-an-eviction. It includes what the response will look like and the `\"EventType\":\"Preempt\"` and not terminate.\r\n\r\nIf this content isn't helpful, or you have more questions, feel free to reopen the item or create a new one.\r\n\r\n#please-close",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "PRMerger12",
"comment_id": null,
"datetime": 1621355243000,
"masked_author": "username_4",
"text": "",
"title": null,
"type": "issue"
}
] | 5 | 6 | 3,505 | false | false | 3,505 | true |
agalwood/Motrix | null | 777,729,787 | 846 | null | [
{
"action": "opened",
"author": "XokviqTzqKbs7V",
"comment_id": null,
"datetime": 1609713457000,
"masked_author": "username_0",
"text": "<!--\r\nBefore submitting feedback, please search the existing issues and the help documentation to see whether someone has already filed a similar feature request\r\nhttps://github.com/agalwood/Motrix/issues\r\nhttp://motrix.app/support\r\n\r\nPlease fill in your feedback using the format below, thanks\r\n-->\r\n\r\n**Please describe whether your feature request is related to a known problem**\r\nAfter installing YAAW for Chrome and changing the JSON-RPC configuration, ordinary Chrome download tasks are intercepted into Motrix without problems, but the task starts downloading immediately, so the file cannot be \"renamed\".\r\n\r\n**Describe the solution you'd like**\r\nBecause I need to rename files, a download task intercepted from Chrome into Motrix should not start downloading right away; it should first open the new-task dialog, and only start downloading after the edits are confirmed and submitted.\r\n\r\n**Describe alternatives you've considered**\r\nI was a long-time user of Free Download Manager (FDM) and discovered Motrix by accident. I am very satisfied with its download speed and stability; the only shortcoming is renaming files, which is less convenient than in FDM. If Motrix can improve this, I plan to replace FDM with Motrix.\r\n\r\n**More information**\r\nI hope Motrix can take a look at FDM's new-task dialog (screenshot attached):\r\nhttps://i.imgur.com/gLvIUo4.jpg\r\n(FDM download test link https://github.com/agalwood/Motrix/releases/download/v1.5.15/Motrix-1.5.15-win.zip)\r\n\r\nThe features I hope to see added to Motrix's new-task dialog are:\r\n1. Automatically show the original file name plus extension, to make editing based on the original name easy\r\n2. Show the file size",
"title": "When intercepting ordinary Chrome download tasks, do not start downloading immediately; open the new-task dialog first and start downloading only after confirmation",
"type": "issue"
}
] | 1 | 1 | 742 | false | false | 742 | false |
miguelcobain/ember-leaflet | null | 665,195,070 | 513 | null | [
{
"action": "opened",
"author": "ludo-syt",
"comment_id": null,
"datetime": 1595599814000,
"masked_author": "username_0",
"text": "Hello there, I have an issue with the marker-layer \"iconDidChange\" observer, the \"dragging\" object is sometimes undefined.\r\nNote that we use [Ember Leaflet Marker Cluster](https://github.com/canufeel/ember-leaflet-marker-cluster) and the bug appears in cluster mode. Hidden markers don't have the \"dragging\" property as they are not rendered, however iconDidChange is triggered for those markers.\r\nOur markers are not draggable.\r\n\r\nMaybe you can check that this._layer.dragging object is valid before calling enable/disable.\r\nLeaflet doc says on \"dragging\" property : \"Only valid when the marker is on the map\" [Doc](https://leafletjs.com/reference-1.6.0.html#marker-dragging)\r\n\r\nThank you for your help\r\n\r\n\r\n",
"title": "Leaflet icon/draggability bug, dragging object can be undefined for hidden markers (related to #77)",
"type": "issue"
},
{
"action": "closed",
"author": "miguelcobain",
"comment_id": null,
"datetime": 1596407715000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "miguelcobain",
"comment_id": 667737841,
"datetime": 1596410343000,
"masked_author": "username_1",
"text": "@username_0 released a fix on version `4.1.1`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ludo-syt",
"comment_id": 667959957,
"datetime": 1596452658000,
"masked_author": "username_0",
"text": "Thank you @username_1",
"title": null,
"type": "comment"
}
] | 2 | 4 | 996 | false | false | 996 | true |
AndrewRussellNet/FNA-Template | null | 333,857,886 | 3 | null | [
{
"action": "opened",
"author": "sherjilozair",
"comment_id": null,
"datetime": 1529446388000,
"masked_author": "username_0",
"text": "Please update FNA so that https://github.com/FNA-XNA/FNA/commit/c8f0711edc00240ab4dd38f39d9560ea164f994d is added which points to the correct submodule in FNA.",
"title": "Update submodule",
"type": "issue"
},
{
"action": "closed",
"author": "AndrewRussellNet",
"comment_id": null,
"datetime": 1529650272000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "AndrewRussellNet",
"comment_id": 399340142,
"datetime": 1529650320000,
"masked_author": "username_1",
"text": "Thanks @username_0 and @danielcrenna",
"title": null,
"type": "comment"
}
] | 2 | 3 | 197 | false | false | 197 | true |
knex/knex | knex | 684,453,275 | 3,998 | null | [
{
"action": "opened",
"author": "elhigu",
"comment_id": null,
"datetime": 1598255699000,
"masked_author": "username_0",
"text": "# Environment\r\n\r\nKnex version: 0.21\r\nDatabase + version: mssql \r\nOS: Any / runkit\r\n\r\n@smorey2.\r\n\r\n# Bug\r\n\r\n1. Explain what kind of behaviour you are getting and how you think it should do\r\n\r\n//? should be written as ? in native SQL that is sent to server.\r\n\r\n2. Error message\r\n None. But ? mark is replaced with mssql binding syntax.\r\n\r\n3. Reduced test code, for example in https://npm.runkit.com/knex or if it needs real\r\n database connection to MySQL or PostgreSQL, then single file example which initializes\r\n needed data and demonstrates the problem.\r\n\r\nhttps://runkit.com/embed/xt3cgb62kfko\r\n\r\n```\r\nconst Knex = require('knex');\r\n\r\nconst knexMssql = Knex({\r\n client: 'mssql'\r\n});\r\n\r\nconst knexPg = Knex({\r\n client: 'pg'\r\n});\r\n\r\n// using knex.select() to wrap .raw to get an access to .toNative() method (knex.raw().toSQL().toNative() doesn't exists)\r\nconsole.log(\r\nknexMssql.select(knexMssql.raw('mssql insert ?')).toSQL().toNative(),\r\nknexMssql.select(knexMssql.raw('mssql insert ?', ['?'])).toSQL().toNative(),\r\nknexMssql.select(knexMssql.raw('mssql insert \\\\?')).toSQL().toNative(),\r\nknexMssql.select(knexMssql.raw('mssql insert \\\\\\\\?')).toSQL().toNative(),\r\nknexPg.select(knexPg.raw('pg insert ?')).toSQL().toNative(),\r\nknexPg.select(knexPg.raw('pg insert ?', ['?'])).toSQL().toNative(),\r\nknexPg.select(knexPg.raw('pg insert \\\\?')).toSQL().toNative(),\r\nknexPg.select(knexPg.raw('pg insert \\\\\\\\?')).toSQL().toNative()\r\n)\r\n```\r\n\r\n```\r\nOutputs:\r\n\r\nObject {bindings: [], sql: \"select mssql insert @p0\"}\r\nObject {bindings: [\"?\"], sql: \"select mssql insert @p0\"}\r\nObject {bindings: [], sql: \"select mssql insert \\\\@p0\"}\r\nObject {bindings: [], sql: \"select mssql insert \\\\\\\\@p0\"}\r\nObject {bindings: [], sql: \"select pg insert $1\"}\r\nObject {bindings: [\"?\"], sql: \"select pg insert $1\"}\r\nObject {bindings: [], sql: \"select pg insert ?\"}\r\nObject {bindings: [], sql: \"select pg insert $1\"}\r\n```",
"title": "Mssql \\\\? question mark escaping doesn't work on mssql",
"type": "issue"
},
{
"action": "created",
"author": "sigmama",
"comment_id": 709698120,
"datetime": 1602816536000,
"masked_author": "username_1",
"text": "same issue encounted with mssql",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kibertoad",
"comment_id": 719994510,
"datetime": 1604182186000,
"masked_author": "username_2",
"text": "@username_0 Is it the same as #4053?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "elhigu",
"comment_id": 721092696,
"datetime": 1604407300000,
"masked_author": "username_0",
"text": "Yep. That runkit also works with latest knex now https://runkit.com/embed/kzxpan3w8594 though \\\\\\\\? with postgres does something weird...",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "elhigu",
"comment_id": null,
"datetime": 1604407301000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 5 | 2,098 | false | false | 2,098 | true |
tikv/client-rust | tikv | 785,408,786 | 227 | null | [
{
"action": "opened",
"author": "nrc",
"comment_id": null,
"datetime": 1610568731000,
"masked_author": "username_0",
"text": "For long-running transactions, the client should send heartbeat messages to the server. This should be optional (similar to retrying) and on by default. It should still be possible to manually send heartbeats.",
"title": "Automatically send heartbeats",
"type": "issue"
},
{
"action": "created",
"author": "andylokandy",
"comment_id": 760206746,
"datetime": 1610632045000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nrc",
"comment_id": 761875908,
"datetime": 1610915802000,
"masked_author": "username_0",
"text": "My philosophy for the client is that the default API should make things as easy as possible for the user and so things like heartbeats and retries should happen automatically. However, it should be possible to customise the workflow up to the level of the gRPC API. The use cases here are for somebody re-implementing TiDB in Rust, or more likely implementing some other, equally complex, DB on top of TiKV, or using the Rust client to automatically test edge cases in TiKV where we might want to simulate missing heart beats or heart beats which are out of order or duplicated or have incorrect timestamps or whatever.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 774835163,
"datetime": 1612753551000,
"masked_author": "username_2",
"text": "Where the API should be added in the `rust-client`?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 774839844,
"datetime": 1612754636000,
"masked_author": "username_3",
"text": "@username_2 You can add a field in `TransactionOptions`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 774840953,
"datetime": 1612754902000,
"masked_author": "username_2",
"text": "Do you mean it should be a `bool` to indicate whether is will send heartbeats automatically? But where should the API be added?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 774845372,
"datetime": 1612755860000,
"masked_author": "username_3",
"text": "@username_2 I think it can be a bool.\r\nThe interface should be the `TransactionOptions` that users supply. Do you mean where to implement it? One initial idea is to make use of the `bg_worker`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 774846269,
"datetime": 1612756087000,
"masked_author": "username_2",
"text": "Yes, I mean where to implement this API. Do you mean we should implement it as a method in the `Client` in `transaction/client.rs` and make use of `bg_worker`?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 774849082,
"datetime": 1612756723000,
"masked_author": "username_3",
"text": "@username_2 \r\nHeartbeats are for transactions, so I think it shouldn't be implemented in `Client`. \r\nEach transaction has its `TransactionOptions` which is the place that users can specify the behavior.\r\nI think we can start periodically sending heartbeats since one transaction starts, and stop when the transaction is rolled back or committed. It can be implemented in `transaction.rs`, I guess.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 775745490,
"datetime": 1612857382000,
"masked_author": "username_2",
"text": "Should we also define a variable for the heartbeat interval?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 775777950,
"datetime": 1612861090000,
"masked_author": "username_3",
"text": "@username_2 \r\nYes there should be a constant variable for the default interval.\r\nPersonally I don't think it needs to be configurable (for now).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 775779598,
"datetime": 1612861264000,
"masked_author": "username_2",
"text": "I want to start periodically sending heartbeats in the `new method` of Transaction. And it seems that the `bg_worker` which is a `ThreadPool` can't be executed at a fixed rate, any example for me to refer?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 775785249,
"datetime": 1612861856000,
"masked_author": "username_3",
"text": "How about spawning a task that periodically do something in the thread pool?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TszKitLo40",
"comment_id": 777950794,
"datetime": 1613101788000,
"masked_author": "username_2",
"text": "When should we spawn the task? Should we distinguish the optimistic lock and pessimistic lock? @username_3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ekexium",
"comment_id": 781090080,
"datetime": 1613629588000,
"masked_author": "username_3",
"text": "@username_2 Similar to TiDB, after the prewrite, and after each pessimistic lock acquisition.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "nrc",
"comment_id": null,
"datetime": 1615413697000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 4 | 16 | 2,492 | false | false | 2,492 | true |
ThomasBury/arfs | null | 774,736,356 | 1 | null | [
{
"action": "opened",
"author": "Weidong725",
"comment_id": null,
"datetime": 1608904610000,
"masked_author": "username_0",
"text": "When using leshy, it works normally, but it can't print out important pictures",
"title": "ImportError: cannot import name '_check_savefig_extra_args' from 'matplotlib.backend_bases' ",
"type": "issue"
},
{
"action": "created",
"author": "ThomasBury",
"comment_id": 751292404,
"datetime": 1608931715000,
"masked_author": "username_1",
"text": "Hi,\r\nwhich version of `matplotlib` are you using? And are you using jupyter? You might need to update both.\r\n\r\nif you are using pip:\r\n```bash\r\npip uninstall matplotlib\r\npip install --upgrade matplotlib\r\n```\r\n\r\n```bash\r\npip install -U jupyter\r\n```\r\n\r\nor if you are using conda:\r\n\r\n```bash\r\nconda uninstall matplotlib\r\nconda install matplotlib\r\n```\r\n\r\n```bash\r\nconda update jupyter\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Weidong725",
"comment_id": 751312948,
"datetime": 1608952102000,
"masked_author": "username_0",
"text": "Thank you. It's really a problem with the Matplotlib version",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ThomasBury",
"comment_id": null,
"datetime": 1608975797000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 523 | false | false | 523 | false |
equinor/intersection | equinor | 558,153,347 | 54 | null | [
{
"action": "opened",
"author": "thuve",
"comment_id": null,
"datetime": 1580479710000,
"masked_author": "username_0",
"text": "The component should have a casing schematic layer displaying casing data.",
"title": "Casing schematic layer",
"type": "issue"
},
{
"action": "created",
"author": "thuve",
"comment_id": 585838763,
"datetime": 1581610473000,
"masked_author": "username_0",
"text": "Can render the casing wall thickness with an equal thickness for all casings. \r\nCasing with item type screen should be visualized differently than casing (similar to how screens are in REP)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Brynjulf",
"comment_id": null,
"datetime": 1588850453000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 263 | false | false | 263 | false |
rominf/jira-oauth | null | 706,189,129 | 6 | {
"number": 6,
"repo": "jira-oauth",
"user_login": "rominf"
} | [
{
"action": "opened",
"author": "goncalovalverde",
"comment_id": null,
"datetime": 1600764208000,
"masked_author": "username_0",
"text": "The current yarl library replaces all the original path when you use \"with_path\".\r\nSo if your base URL includes a path, all requests are going to fail. \r\nChanged this to include the original path if exists and adding it to all additional paths",
"title": "URL path fix",
"type": "issue"
},
{
"action": "created",
"author": "rominf",
"comment_id": 696753389,
"datetime": 1600784394000,
"masked_author": "username_1",
"text": "Thanks, @username_0. I've published a new version of jira-oauth on PyPI.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 320 | false | false | 320 | true |
corteva/djangorestframework-mvt | corteva | 671,658,585 | 10 | null | [
{
"action": "opened",
"author": "krishnakafle",
"comment_id": null,
"datetime": 1596390085000,
"masked_author": "username_0",
"text": "i want to get the vector tiles for the query like,\r\n\r\n`api/v1/data/example.mvt?tile=1/0/0&my_column__in=foo,foo1,foo2`\r\n\r\nIn my specific case scenario, I have a column name houseId. Now I would like to extract vector tile, where houseId can take multiple values as illustrated in following code.\r\n`/api/v1/data/house.mvt?tile={z}/{x}/{y}&houseId__in=547090906080,547090105191`\r\n\r\nSimply stating the condition, I would prefer to have an \"In\" operator which inquires with a list of values over the column in models as in Django ORM filter with code like:\r\n`House.objects.filter(houseId__in=[547090906080,547090105191])`\r\n\r\nFurther elaboration can be found in: [https://stackoverflow.com/questions/63217920/how-to-pass-mulltiple-values-for-same-column-in-get-request-vector-tile-django](url)",
"title": "How to pass mulltiple values for same column in get request?",
"type": "issue"
},
{
"action": "created",
"author": "trollefson",
"comment_id": 669303952,
"datetime": 1596645864000,
"masked_author": "username_1",
"text": "Hi @username_0,\r\n\r\nThank you for showing interest in adding a new feature to the package. The package doesn't currently support an `in` operator for filtering. I think it would be possible to add such a feature within the method here:\r\n\r\nhttps://github.com/corteva/djangorestframework-mvt/blob/master/rest_framework_mvt/managers.py#L109\r\n\r\nIf you'd like to attempt to add the feature please open a pull request. I can see how an `in` operator would be useful and will make sure it gets prioritized for the next release.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "krishnakafle",
"comment_id": 669359902,
"datetime": 1596651690000,
"masked_author": "username_0",
"text": "Hello @username_1,\r\n\r\nI solved the issue with a different approach by customizing raw SQL according to my specific requirements in managers.py file.\r\nThough I will try to contribute by creating the pull request as per the feature I have pointed out as soon as I can.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "henhuy",
"comment_id": 728885301,
"datetime": 1605614953000,
"masked_author": "username_2",
"text": "Maybe some integration of [`django-filter`](https://django-filter.readthedocs.io/en/stable/index.html) would be very cool!",
"title": null,
"type": "comment"
}
] | 3 | 4 | 1,700 | false | false | 1,700 | true |
laravel/nova-docs | laravel | 592,362,775 | 243 | {
"number": 243,
"repo": "nova-docs",
"user_login": "laravel"
} | [
{
"action": "opened",
"author": "eugenevdm",
"comment_id": null,
"datetime": 1585804358000,
"masked_author": "username_0",
"text": "",
"title": "Added information on how to change the Action title",
"type": "issue"
},
{
"action": "created",
"author": "jbrooksuk",
"comment_id": 607726296,
"datetime": 1585819210000,
"masked_author": "username_1",
"text": "Thanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jbrooksuk",
"comment_id": 607726312,
"datetime": 1585819211000,
"masked_author": "username_1",
"text": "Thanks!",
"title": null,
"type": "comment"
}
] | 2 | 3 | 14 | false | false | 14 | false |
BlockchainCommons/FullyNoded-2 | BlockchainCommons | 651,925,044 | 82 | {
"number": 82,
"repo": "FullyNoded-2",
"user_login": "BlockchainCommons"
} | [
{
"action": "opened",
"author": "henkvancann",
"comment_id": null,
"datetime": 1594086062000,
"masked_author": "username_0",
"text": "Hopefully I won't eat more of your time with this first PR of mine.",
"title": "CLA signed by HvC",
"type": "issue"
},
{
"action": "created",
"author": "henkvancann",
"comment_id": 654568084,
"datetime": 1594090480000,
"masked_author": "username_0",
"text": "I added the github global e-mail address to the GPG key and I added user.email address as being hvancann@2value.nl\n\nCould you try again the verify the new signature in the same pull request?\n\nBest, Henk\n\n>",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "henkvancann",
"comment_id": 654569272,
"datetime": 1594090765000,
"masked_author": "username_0",
"text": "I added the github global git user.email address to the GPG key using GPG Keychain for Mac and I also added user.email address hvancann@2value.nl locally to the FN2 repo. I signed the commit again with the renewed key and pushed the commit in the same PR.\r\n\r\nCould you pls try again the verify the new signature?",
"title": null,
"type": "comment"
}
] | 1 | 3 | 584 | false | false | 584 | false |
rioastamal/shfokus | null | 689,742,652 | 2 | null | [
{
"action": "opened",
"author": "rioastamal",
"comment_id": null,
"datetime": 1598925425000,
"masked_author": "username_0",
"text": "Currently shFokus only support MacOS and Linux since both OS mostly shipped with Bash by default. On top of that both of it are similar because it's a UNIX/Unix Like OS so location of the hosts file is the same.\r\n\r\n## Expected\r\n\r\nSupport for Windows OS should be added. On top of my mind it should be a separate file since we will using Windows specific command. Could be something like `shfokus-win.bat`.",
"title": "Support for Windows",
"type": "issue"
}
] | 1 | 1 | 405 | false | false | 405 | false |
infinispan/infinispan | infinispan | 579,304,374 | 8,035 | {
"number": 8035,
"repo": "infinispan",
"user_login": "infinispan"
} | [
{
"action": "opened",
"author": "chiroito",
"comment_id": null,
"datetime": 1583936791000,
"masked_author": "username_0",
"text": "https://issues.redhat.com/browse/ISPN-11456",
"title": "ISPN-11456 ConfigurationUnitTest is fail on Windows",
"type": "issue"
},
{
"action": "created",
"author": "tristantarrant",
"comment_id": 604377541,
"datetime": 1585222136000,
"masked_author": "username_1",
"text": "Replaced by https://github.com/infinispan/infinispan/pull/8108",
"title": null,
"type": "comment"
}
] | 3 | 3 | 145 | false | true | 105 | false |
src-d/enry | src-d | 433,761,032 | 226 | null | [
{
"action": "opened",
"author": "bzz",
"comment_id": null,
"datetime": 1555418869000,
"masked_author": "username_0",
"text": "Steps to reproduce:\r\n```\r\nmkdir -p /tmp/linguist-django\r\ncd /tmp/linguist-django\r\ngit clone --depth 1 https://github.com/django/django.git\r\ncd -\r\n\r\n./enry /tmp/linguist-django/django/\r\n95.87%\tPython\r\n1.85%\tJavaScript\r\n1.65%\tHTML\r\n0.63%\tCSS\r\n0.01%\tShell\r\n0.00%\tSmarty\r\n0.00%\tMakefile\r\n\r\n./enry /tmp/linguist-django/\r\n95.30%\tPython\r\n1.83%\tJavaScript\r\n1.63%\tHTML\r\n0.62%\tCSS\r\n0.53%\tRoff Manpage\r\n0.04%\tMakefile\r\n0.04%\tBatchfile\r\n0.01%\tShell\r\n0.00%\tSmarty\r\n```\r\n\r\nThe difference is in `Roff Manpage` and `Batchfile` both under `docs/` subdir in Django not being skipped on the second run.\r\n\r\nThe reason is that current [documentation path filtering condition](https://github.com/src-d/enry/blob/master/cmd/enry/main.go#L87) logic is not triggered, as the relative path in second case has prefix and [does not match the regexp](https://github.com/src-d/enry/blob/master/data/documentation.go#L9).",
"title": "CLI: inconsistent path filtering",
"type": "issue"
},
{
"action": "created",
"author": "bzz",
"comment_id": 483644962,
"datetime": 1555418928000,
"masked_author": "username_0",
"text": "Need to decide if that is expected from the user perspective.\r\n\r\nFor now, I tend to think that this is a bug.",
"title": null,
"type": "comment"
}
] | 1 | 2 | 999 | false | false | 999 | false |
apache/spark | apache | 496,611,905 | 25,879 | {
"number": 25879,
"repo": "spark",
"user_login": "apache"
} | [
{
"action": "opened",
"author": "dongjoon-hyun",
"comment_id": null,
"datetime": 1569040099000,
"masked_author": "username_0",
"text": "### What changes were proposed in this pull request?\r\n\r\nThis PR aims to add linters and license/dependency checkers to GitHub Action.\r\n\r\n### Why are the changes needed?\r\n\r\nThis will help the PR reviews.\r\n\r\n### Does this PR introduce any user-facing change?\r\n\r\nNo.\r\n\r\n### How was this patch tested?\r\n\r\nSee the GitHub Action result on this PR.",
"title": "[SPARK-29199][INFRA] Add linters and license/dependency checkers to GitHub Action",
"type": "issue"
},
{
"action": "created",
"author": "dongjoon-hyun",
"comment_id": 533767464,
"datetime": 1569041226000,
"masked_author": "username_0",
"text": "`Linters` job passed in `9 min` already.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dongjoon-hyun",
"comment_id": 533805524,
"datetime": 1569078751000,
"masked_author": "username_0",
"text": "Thank you, @srowen .\r\nMerged to master.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "HyukjinKwon",
"comment_id": 533805591,
"datetime": 1569078806000,
"masked_author": "username_1",
"text": "LGTM too",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dongjoon-hyun",
"comment_id": 533805717,
"datetime": 1569078904000,
"masked_author": "username_0",
"text": "Thanks for the advice, @username_1 .",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dongjoon-hyun",
"comment_id": 533831325,
"datetime": 1569101101000,
"masked_author": "username_0",
"text": "Since `branch-2.4` is our LTS branch and this works correctly, I'll backport this to `branch-2.4`, too.",
"title": null,
"type": "comment"
}
] | 4 | 18 | 2,602 | false | true | 568 | true |
thu-coai/ConvLab-2 | thu-coai | 690,689,216 | 94 | null | [
{
"action": "opened",
"author": "stephenroller",
"comment_id": null,
"datetime": 1599018165000,
"masked_author": "username_0",
"text": "Hi there,\r\n\r\nThanks for the excellent paper of [Takanobu et al. (2020)](https://arxiv.org/pdf/2005.07362.pdf). I thoroughly appreciate the systematic comparison across so many points, and especially the call out of mismatch between single turn performance and overall performance.\r\n\r\nOne question I have is about the source code. Right now, the top line README for this repo shows a very similar table to Table 1 of the paper, but with very different results (and indeed, quite a few different systems, with some disagreeing). Furthermore, you have similar tables for 2, 3, etc in the README, but with different metrics.\r\n\r\n- How the table in the README different from the one presented in the paper?\r\n- Why the mismatch of reported metrics? Can one use ConvLab-2 to produce the rest of the metrics reported in the paper?\r\n- The paper calls out that the code was all open source in February, while this repo became available in May. Perhaps I should be looking at a different repo?\r\n- If there are two repos, which one do you anticipate receiving more support? I assume this one, as it aims to be a standardized platform.\r\n\r\nBest wishes from the [ParlAI](https://github.com/facebookresearch/ParlAI) team & collaborators\r\n\r\nThanks,\r\nStephen",
"title": "Questions about \"Is your goal oriented model performing well\"",
"type": "issue"
},
{
"action": "created",
"author": "zqwerty",
"comment_id": 685421662,
"datetime": 1599033110000,
"masked_author": "username_1",
"text": "Thanks for your interest. The paper of Takanobu et al. (2020) use [ConvLab](https://github.com/ConvLab/ConvLab) platform. This repo updated data, algorithms, the simulator, the evaluator, etc. This repo will receive more support.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "stephenroller",
"comment_id": 685698979,
"datetime": 1599049538000,
"masked_author": "username_0",
"text": "Great, thanks much!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "stephenroller",
"comment_id": null,
"datetime": 1599049542000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 1,487 | false | false | 1,487 | false |
trailofbits/ebpfpub | trailofbits | 765,997,134 | 2,932 | null | [
{
"action": "opened",
"author": "17071040292",
"comment_id": null,
"datetime": 1607928526000,
"masked_author": "username_0",
"text": "青岛崂山区哪里有真实大保健(找特色服务【+V:781372524】 今年夏天,《乐队的夏天》这档节目的成功出圈,让乐队文化走进了无数观众的视野,《乐队的夏天》也成为了时下最热的音乐综艺大。并且《乐队的夏天》全国巡演南京、成都、郑州、武汉、上海、深圳等站,预售票开票仅几分钟便售罄。这就不得不让人思考对其的长效运营和背后的价值挖掘。 综艺收官即结束的长效运营值得关注 综艺领域从来都是热门的制造机。“超女”“跑男”“好声音”……这里每一个家喻户晓的背后,都是足以代表一个行业鼎盛时期的经典综艺。但当我们把它们拿来与《乐队的夏天》作对比时,我们也发现,他们在内容搭建的体系和延伸上有着非常大的区别。 对于这类传统季播综艺来说,节目模式就是一切。所以我们在“超女”身上记住了“”,在“跑男”身上记住了“撕名牌”,但是如果要问观众对具体哪一期节目中的哪一段节目内容印象最深,恐怕任何一个观众都很难快速给出答案。而这些爆款季播综艺总结出的成功经验,也导致了的打造和价值挖掘往往局限于每一季的节目内容本身,这就导致错过了很多在节目内容之外的机会,或者说忽略了对的长效运营所蕴含的价值。 因此,完全以节目模式为一切的传统季播综艺,往往收官即结束。而无法像《乐队的夏天》一样,在节目收官后还能够在节目之外的演唱会上收获持续的影响力和价值变现。 从夏天到冬天,长效运营的源头在哪里 传统季播综艺只看重节目模式的做法,忽略了的长效运营,错失了更长的生命周期所能带来的巨大价值。所以要想实现的长效运营,就需要在其他节目元素——或者叫内容元素,身上找到突破点。 《乐队的夏天》的节目模式是乐队竞演——模式类型上其实与选秀没有特别大的区别。但是《乐队的夏天》邀请来的各种不同风格、不同类别,以及知名和不具名的乐队,恰恰共同组成了最丰富、最戏剧性的节目元素。这也是节目能够顺利“出圈”,甚至能够脱离节目模式实现独立的重要原因。 《乐队的夏天》邀请的新裤子、反光镜、刺猬、痛痒等一批中国青年乐队,自带观众和影响力。加之不同风格、不同类型的乐队聚集到一起,首先就为节目奠定了数量可观的观众基础。这些乐队作为节目内容的核心来源,也决定了每一部分的内容都能够准确击中至少一部分观众的内心,从而在“乐队的夏天”这个下频繁产生共鸣,塑造了多元的内容印象。 此外,要想做到长效运营,“出圈”必不可少。不难想象,如果《乐队的夏天》没有“出圈”,那么最终也只会沦为一场粉丝在线自嗨的音乐节。所以《乐队的夏天》在成功塑造的同时,还借助话题乐手等内容素材,在已有的受众之外形成话题传播、扩散影响范围。如此就使得的影响力脱离了节目本身,在节目收官之后仍然能够保持影响力。《乐队的夏天》巡回演唱会的票房表现,也证明了这一点,票房即市场认可。 长效运营的价值对当下行业的特殊意义 对于节目观众们来说,爱奇艺平台的《乐队的夏天》能够入选百度沸点年度关键词、巡回演唱会门票场场出票即售罄,或许并没有什么值得意外的。但是,对于综艺领域甚至是内容领域来说,《乐队的夏天》在长效运营和价值挖掘上的实践却是非常值得研究的。 今年以来行业形势并不乐观。新增综艺节目的制作数量首次出现负增长,与此同时,广告主们也因为受经济大环境的影响,大幅削减了广告投放的预算。不论是塑造还是单纯的节目制作,没有内容变现保障一切都是空谈。所以,当前综艺制作行业所面临的关键问题是如何在内容价值和内容变现之间实现效果最大化。 这个问题如果按照以往季播综艺的成功经验,恐怕就只有通过在内容中不停地开放更多商务权益,以及制作综艺大电影来解决,但最终的结果是直接导致的价值和影响力被透支。 从广告主的实际需求来看,他们当下更乐于投放能够在当季形成话题并成为爆款的节目。此外,广告主对于综艺营销的需求也从节目内扩展到节目外,品牌的营销思维也已经从整合营销升级到全链路思维,除了在不同内容载体上实现品牌传播,更希望与真实受众进行沟通甚至转化。比如麦当劳与爱奇艺出品的《中国有嘻哈》深度合作,在内容投放之外,还改造了多家线下嘻哈主题店面。《奇葩说》中更开发了冠名客户及电商渠道带货新合作,带动了江小白海澜之家等线上售卖。爱奇艺推出的《乐队的夏天》这一类能够“出圈”的综艺节目,无疑已经具备了在节目之外拓宽品牌传播和销售转化的能力。肚沤换举慷https://github.com/trailofbits/ebpfpub/issues/1615?gkjjn <br />https://github.com/trailofbits/ebpfpub/issues/1497?y0346 <br />https://github.com/trailofbits/ebpfpub/issues/1551?xhunq <br />https://github.com/trailofbits/ebpfpub/issues/171?35583 <br />https://github.com/trailofbits/ebpfpub/issues/1651?70542 <br />https://github.com/trailofbits/ebpfpub/issues/274?34408 <br />https://github.com/trailofbits/ebpfpub/issues/155?thiuz <br />krliihkastuddadpxiwwhleqnibhkxhpitc",
"title": "青岛崂山区哪里有真实大保健(找特色服务s",
"type": "issue"
}
] | 1 | 1 | 2,257 | false | false | 2,257 | false |
PixiEditor/PixiEditor | PixiEditor | 679,622,854 | 37 | null | [
{
"action": "opened",
"author": "flabbet",
"comment_id": null,
"datetime": 1597516501000,
"masked_author": "username_0",
"text": "This is a major thing to implement in PixiEditor. \r\n\r\nAnimation pane is a system, that will allow the user to create animations from sprites. It should consist of:\r\n\r\n- Frames panel\r\n- Frames settings (speed)\r\n- Onion skinning\r\n- etc.",
"title": "Animation Pane",
"type": "issue"
},
{
"action": "created",
"author": "flabbet",
"comment_id": 752068365,
"datetime": 1609247186000,
"masked_author": "username_0",
"text": "This is too complex for one issue, it will be a separate project at least",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "flabbet",
"comment_id": null,
"datetime": 1609247187000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 307 | false | false | 307 | false |
sherrodharris/reviewing-a-pull-request | null | 561,752,843 | 4 | {
"number": 4,
"repo": "reviewing-a-pull-request",
"user_login": "sherrodharris"
} | [
{
"action": "opened",
"author": "sherrodharris",
"comment_id": null,
"datetime": 1581094062000,
"masked_author": "username_0",
"text": "",
"title": "change title on README",
"type": "issue"
},
{
"action": "created",
"author": "sherrodharris",
"comment_id": 583498854,
"datetime": 1581094589000,
"masked_author": "username_0",
"text": "This was great. Get more reps!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 1,561 | false | true | 30 | false |
dhis2/cli-style | dhis2 | 592,450,565 | 225 | {
"number": 225,
"repo": "cli-style",
"user_login": "dhis2"
} | [
{
"action": "opened",
"author": "varl",
"comment_id": null,
"datetime": 1585816072000,
"masked_author": "username_0",
"text": "Related to: https://github.com/dhis2/cli/pull/302\r\n\r\nBREAKING CHANGE: Require Node version 10 or above.",
"title": "chore: update node engine to >= 10",
"type": "issue"
}
] | 2 | 2 | 454 | false | true | 103 | false |
pelotom/runtypes | null | 680,543,806 | 162 | {
"number": 162,
"repo": "runtypes",
"user_login": "pelotom"
} | [
{
"action": "opened",
"author": "reinismu",
"comment_id": null,
"datetime": 1597700616000,
"masked_author": "username_0",
"text": "Added `.exact()` for `Record`. It allows to check that incoming object doesn't have additional fields.\r\n\r\nClose #41 \r\n\r\nFirst I tried to create `{ strict: true }`, but `validate`, `check` etc. are shared between multiple Runtypes and they don't really need that setting.",
"title": "Exact record support",
"type": "issue"
},
{
"action": "created",
"author": "jraoult",
"comment_id": 710120268,
"datetime": 1602862591000,
"masked_author": "username_1",
"text": "@username_0 I'm excited about this. Have you heard from @pelotom regarding this PR?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "reinismu",
"comment_id": 710212190,
"datetime": 1602867027000,
"masked_author": "username_0",
"text": "Nop :/ \r\n\r\nI made my own build so I could use this change + Omit\r\n```\r\n\"dependencies\": {\r\n \"runtypes\": \"github:username_0/runtypes#with-pick-omit\"\r\n },\r\n```\r\nIf you are interested",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bayareacoder",
"comment_id": 715596901,
"datetime": 1603488257000,
"masked_author": "username_2",
"text": "This doesn't work for me if you have a record with optional properties:\r\nconst Test = Record({...}).And({...}).exact();\r\nSeems a common use case to make sure that no extra keys, besides the optional ones, are present.\r\n\r\n`Property 'exact' does not exist on type 'Intersect2<Record`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jraoult",
"comment_id": 729898881,
"datetime": 1605727199000,
"masked_author": "username_1",
"text": "@username_0 cheers, I ended up moving to [zod](https://github.com/colinhacks/zod) anyway. It wasn't too hard after all. I feel this project is sadly losing momentum and zod provide strict check by default out of the box and some other nice goodies API wise.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bayareacoder",
"comment_id": 729907331,
"datetime": 1605728130000,
"masked_author": "username_2",
"text": "@username_0 You closed this issue but does it work for an object with both required and optional keys, to make sure there are either no required keys missing, or no additional keys, beyond the optional ones, specified? I don't see that case in your tests.\r\nI believe this is a very common use case to eg test an API response.\r\nWe use this library and found a way to check for this that works, but is not elegant:\r\n\r\n```\r\nconst allowedKeys = ['requiredKey1', ..., 'requiredKeyM', 'optionalKey1', ..., 'optionalKeyN'];\r\nconst RTtype = Record({\r\nrequiredKey1: ...\r\n...\r\nrequiredKeyM: ...\r\n})\r\n.And(Partial({\r\noptionalKey1: ...\r\n...\r\noptionalKeyN: ...\r\n})\r\n.And(Dictionary(Unknown, allowedKeys));\r\n```\r\nThis will check that:\r\n- all required keys are present and of correct type\r\n- optional keys, if present, are of the correct type\r\n- no additional keys are present.\r\n\r\nBasically I don't think an exact spec on any union type works, since it's not the same as the union of the exact types...",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "reinismu",
"comment_id": 729927873,
"datetime": 1605730534000,
"masked_author": "username_0",
"text": "@username_1 Thanks! It seems promising \r\n\r\n@username_2 From use of it I have noticed few edge cases where it doesn't work. One of them is when trying to use `Or`",
"title": null,
"type": "comment"
}
] | 4 | 8 | 2,501 | false | true | 2,214 | true |
queue-interop/queue-interop | queue-interop | 751,847,127 | 36 | {
"number": 36,
"repo": "queue-interop",
"user_login": "queue-interop"
} | [
{
"action": "opened",
"author": "KartaviK",
"comment_id": null,
"datetime": 1606428844000,
"masked_author": "username_0",
"text": "",
"title": "PHP8 Support",
"type": "issue"
},
{
"action": "created",
"author": "martinssipenko",
"comment_id": 748961984,
"datetime": 1608555708000,
"masked_author": "username_1",
"text": "@username_2 anything we could do to get this merged and released?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "makasim",
"comment_id": 748968500,
"datetime": 1608556638000,
"masked_author": "username_2",
"text": "https://github.com/queue-interop/queue-interop/releases/tag/0.8.1",
"title": null,
"type": "comment"
}
] | 3 | 3 | 127 | false | false | 127 | true |
rmosolgo/graphql-ruby | null | 793,726,421 | 3,299 | null | [
{
"action": "opened",
"author": "denisahearn",
"comment_id": null,
"datetime": 1611608871000,
"masked_author": "username_0",
"text": "After upgrading from graphql 1.11.6 to graphql 1.12.1, a custom `field` instrumentation class that my app defines stopped getting called during query and mutation execution.\r\n\r\nThe custom field implementation is shown below, and is used to convert the value of any `String` field that is empty (e.g. `''`) to `null` in the response. It is based on this suggestion: https://github.com/username_1/graphql-ruby/issues/807#issuecomment-310423259\r\n\r\n```\r\n# For any fields defined as types.String convert an empty string to null.\r\nclass ApplyNullIfEmptyString\r\n def instrument(_type, field)\r\n if field.type == GraphQL::STRING_TYPE\r\n # Get the resolve proc from the field and wrap it with your logic:\r\n new_resolve = EnsureString.new(field.resolve_proc)\r\n\r\n # If you use graphql-batch to load any strings, you'll need this too:\r\n new_lazy_resolve = EnsureString.new(field.lazy_resolve_proc)\r\n\r\n # Make a copy of the field which has the new behavior\r\n field.redefine(resolve: new_resolve, lazy_resolve: new_lazy_resolve)\r\n else\r\n # return the field unchanged\r\n field\r\n end\r\n end\r\nend\r\n\r\nclass EnsureString\r\n def initialize(inner_resolve)\r\n @inner_resolve = inner_resolve\r\n end\r\n\r\n def call(obj, args, ctx)\r\n result = @inner_resolve.call(obj, args, ctx)\r\n if result == ''\r\n # convert empty strings to null\r\n result = nil\r\n end\r\n result\r\n end\r\nend\r\n```\r\n\r\nAnd then in the schema.rb file:\r\n\r\n```\r\nclass Schema < GraphQL::Schema\r\n ...\r\n instrument(:field, ApplyNullIfEmptyString.new)\r\n ...\r\nend\r\n```\r\n\r\n**Versions**\r\n\r\n`graphql` version: 1.12.1\r\n`rails` (or other framework): 6.0.3.4\r\nother applicable versions (`graphql-batch`, etc)\r\n\r\n**GraphQL schema**\r\n\r\n```ruby\r\nclass User < GraphQL::Schema::Object\r\n description 'A user within the system.'\r\n\r\n field :id, ID, null: false, description: 'The ID of the user.'\r\n field :first_name, String, null: false, description: 'The first name of the user.'\r\n field :last_name, String, null: true, description: 'The last name of the user.'\r\n field :email, String, null: false, description: 'The email address for the user.'\r\nend\r\n\r\nclass Query < GraphQL::Schema::Object\r\n field :user, resolver: User\r\nend\r\n\r\nclass Schema < GraphQL::Schema\r\n query Sentera::Types::Query\r\n instrument(:field, ApplyNullIfEmptyString.new)\r\nend\r\n\r\nclass User < GraphQL::Schema::Resolver\r\n type User, null: false\r\n description 'The current user.'\r\n\r\n def resolve\r\n context[:current_user]\r\n end\r\nend\r\n```\r\n\r\n**GraphQL query**\r\n\r\n```graphql\r\nquery FetchCurrentUser {\r\n user {\r\n id\r\n first_name\r\n last_name\r\n email\r\n }\r\n}```\r\n\r\n```json\r\n{\r\n \"data\": {\r\n \"user\": {\r\n \"id\": 5,\r\n \"first_name\": \"John\",\r\n \"last_name\": \"\",\r\n \"email\": \"john.doe@example.com\"\r\n }\r\n }\r\n}```\r\n\r\n**Steps to reproduce**\r\n\r\nExecute the above query for a user record that has an empty `last_name` in the database.\r\n\r\n**Expected behavior**\r\n\r\nThe expected behavior is that the custom instrumentation should convert the empty last name to `null`, with `null` returned in the response.\r\n\r\n**Actual behavior**\r\n\r\nThe custom instrumentation class is no longer getting called during query and mutation execution/resolution. So instead of a `null` getting returned in the response for `String` fields that are empty, an empty string is being returned.\r\n\r\n**Additional context**\r\n\r\nAfter upgrading, I noticed that the following deprecation warning is now getting emitted during query execution:\r\n\r\n\"Field instrumentation (Sentera::FieldInstrumentation::ApplyNullIfEmptyString) will be removed in GraphQL-Ruby 2.0, please upgrade to field extensions: https://graphql-ruby.org/type_definitions/field_extensions.html\"\r\n\r\nThe problem with this suggestion is that I want the conversion of `'' -> null` behavior to be applied consistently across all `String` fields in my GraphQL schema. I don't want to have to add a field extension to each of our String fields to get this behavior, because remembering to add this extension could easily be overlooked the next time a `String` field is added to the schema.\r\n\r\nI found https://github.com/username_1/graphql-ruby/issues/2087, which hints towards using tracing to solve this problem, however it's not clear to me from the online documentation of how to achieve this.\r\n\r\nSince field instrumentation is eventually going away, I would prefer to find an alternative, centralized solution for this problem. Is it possible to achieve what I'm after using the tracing feature?",
"title": "Field instrumentation no longer working",
"type": "issue"
},
{
"action": "created",
"author": "rmosolgo",
"comment_id": 767250344,
"datetime": 1611628931000,
"masked_author": "username_1",
"text": "Hi! I think an extension would work here. For example: \r\n\r\n```ruby \r\nclass EnsureString < GraphQL::Schema::FieldExtension \r\n def after_resolve(value:, **_rest)\r\n value || \"\"\r\n end \r\nend \r\nclass Types::BaseField < GraphQL::Schema::Field \r\n def initialize(**kwargs, &block)\r\n super \r\n # TODO: use `type.unwrap == ...` if you want to match `String!` too \r\n if type == GraphQL::Types::String\r\n extension(EnsureString)\r\n end \r\n end \r\nend \r\n```\r\n\r\nIn that case, the field class would _always_ add an extension whenever it's configured to return `String`. \r\n\r\nThanks for sharing that example; here's an update to it, using the approach I described above: \r\n\r\n<details>\r\n<summary>Ensuring strings with field extensions</summary>\r\n\r\n<p>\r\n\r\n```ruby \r\nrequire \"bundler/inline\"\r\n\r\ngemfile do\r\n source \"https://rubygems.org\"\r\n gem \"graphql\", \"1.12.1\"\r\nend\r\n\r\nclass EnsureString < GraphQL::Schema::FieldExtension\r\n def after_resolve(value:, **_rest)\r\n value || \"\"\r\n end\r\nend\r\n\r\nclass BaseField < GraphQL::Schema::Field\r\n def initialize(*args, **kwargs, &block)\r\n super\r\n # TODO: use `type.unwrap` to also match `String, null: false`\r\n if type == GraphQL::Types::String\r\n extension(EnsureString)\r\n end\r\n end\r\nend\r\n\r\nclass BaseObject < GraphQL::Schema::Object\r\n field_class(BaseField)\r\nend\r\n\r\nclass User < BaseObject\r\n field :first_name, String, null: false, description: 'The first name of the user.'\r\n field :last_name, String, null: true, description: 'The last name of the user.'\r\nend\r\n\r\nclass Query < BaseObject\r\n field :user, User, null: true\r\n\r\n def user\r\n { first_name: \"Plato\", last_name: nil }\r\n end\r\nend\r\n\r\nclass Schema < GraphQL::Schema\r\n query Query\r\nend\r\n\r\npp Schema.execute(<<-GRAPHQL).to_h\r\nquery FetchCurrentUser {\r\n user {\r\n firstName\r\n lastName\r\n }\r\n}\r\nGRAPHQL\r\n# => {\"data\"=>{\"user\"=>{\"firstName\"=>\"Plato\", \"lastName\"=>\"\"}}}\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\nHow does that look? Give a try and let me know how it goes!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "denisahearn",
"comment_id": 767612867,
"datetime": 1611674252000,
"masked_author": "username_0",
"text": "Hi Robert. Thanks so much for the quick support! \r\n\r\nYour suggestion was spot on and easy to implement. In fact, our app already had a `BaseField` class that is plugged into `BaseType` and `BaseInterface` classes, so all I had to do was create the new field extension class, and plug it in via the existing `BaseField` class. That fixed the problem with our broken specs, and our GraphQL API is now back up and running as expected.\r\n\r\nDenis",
"title": null,
"type": "comment"
}
] | 2 | 3 | 6,936 | false | false | 6,936 | true |
AcademySoftwareFoundation/aswf-sample-project | AcademySoftwareFoundation | 599,385,338 | 8 | {
"number": 8,
"repo": "aswf-sample-project",
"user_login": "AcademySoftwareFoundation"
} | [
{
"action": "opened",
"author": "Sarcasm",
"comment_id": null,
"datetime": 1586852400000,
"masked_author": "username_0",
"text": "It is my understanding that CMAKE_CXX_STANDARD\r\nis more targeted to toolchain files than for the projects directly:\r\n\r\n- https://gitlab.kitware.com/cmake/cmake/issues/17146#note_299405\r\n\r\nWhereas `target_compile_features(hello PUBLIC cxx_std_14)`,\r\njust specify a minimum required standard,\r\nbut allow a developer or package to upgrade the standard seemlessly,\r\ne.g. a packager may want to upgrade to C++17 for ABI-compatibility reasons.\r\nThis can be done using CMAKE_CXX_STANDARD from a toolchain file,\r\nor from the command line:\r\n\r\n cmake -DCMAKE_CXX_STANDARD=17 ...\r\n\r\nCMake will also respect the -std=<standard> flags\r\nif specified specified from the environement variable CXXFLAGS:\r\n\r\n env CXXFLAGS=-std=c++17 cmake ...\r\n\r\nor from the command line `CMAKE_CXX_FLAGS`:\r\n\r\n cmake -DCMAKE_CXX_FLAGS=-std=c++17 ...\r\n\r\nOne thing to be aware with this method,\r\nis that if one wants to make sure, e.g. in CI,\r\nthat the minimum C++ standard is actually tested,\r\nit should do so explicitely, using one of the aforementioned methods.\r\nThe azure-pipelines.yml has been updated for this reason.",
"title": "CMake: update method to set C++ Standard",
"type": "issue"
},
{
"action": "created",
"author": "Sarcasm",
"comment_id": 613702622,
"datetime": 1586901591000,
"masked_author": "username_0",
"text": "Fixed the typo, took me a while to see the issue `{ STANTARD => STANDARD }`, thanks for catching it.\r\nYes, I've been reading about the Github Actions move, will keep an eye on that.",
"title": null,
"type": "comment"
}
] | 1 | 2 | 1,276 | false | false | 1,276 | false |
dbeaver/dbeaver | dbeaver | 781,435,288 | 10,899 | null | [
{
"action": "opened",
"author": "2019-05-10",
"comment_id": null,
"datetime": 1610036511000,
"masked_author": "username_0",
"text": "In 7.3.0 a connection to PostgreSQL 12 with SSL (org.postgresql.ssl.DefaultJavaSSLFactory) works.\r\nAfter upgrade to 7.3.2 the very same connection fails with\r\n```\r\nSSL error: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\r\n PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\r\n unable to find valid certification path to requested target\r\n```\r\n\r\nDowngrading to 7.3.0 and the connection works again.\r\n```\r\n!ENTRY org.jkiss.dbeaver.model 4 0 2021-01-07 17:20:50.612\r\n!MESSAGE unable to find valid certification path to requested target\r\n!SUBENTRY 1 org.jkiss.dbeaver.model 4 0 2021-01-07 17:20:50.612\r\n!MESSAGE unable to find valid certification path to requested target\r\n!STACK 0\r\nsun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target\r\nat java.base/sun.security.provider.certpath.SunCertPathBuilder.build(Unknown Source)\r\nat java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)\r\nat java.base/java.security.cert.CertPathBuilder.build(Unknown Source)\r\nat java.base/sun.security.validator.PKIXValidator.doBuild(Unknown Source)\r\nat java.base/sun.security.validator.PKIXValidator.engineValidate(Unknown Source)\r\nat java.base/sun.security.validator.Validator.validate(Unknown Source)\r\nat java.base/sun.security.ssl.X509TrustManagerImpl.validate(Unknown Source)\r\nat java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(Unknown Source)\r\nat java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(Unknown Source)\r\nat java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(Unknown Source)\r\nat java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.onCertificate(Unknown Source)\r\nat java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.consume(Unknown Source)\r\nat java.base/sun.security.ssl.SSLHandshake.consume(Unknown Source)\r\nat java.base/sun.security.ssl.HandshakeContext.dispatch(Unknown Source)\r\nat java.base/sun.security.ssl.HandshakeContext.dispatch(Unknown Source)\r\nat java.base/sun.security.ssl.TransportContext.dispatch(Unknown Source)\r\nat java.base/sun.security.ssl.SSLTransport.decode(Unknown Source)\r\nat java.base/sun.security.ssl.SSLSocketImpl.decode(Unknown Source)\r\nat java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(Unknown Source)\r\nat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)\r\nat java.base/sun.security.ssl.SSLSocketImpl.startHandshake(Unknown Source)\r\nat org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:40)\r\nat org.postgresql.core.v3.ConnectionFactoryImpl.enableSSL(ConnectionFactoryImpl.java:435)\r\nat org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:94)\r\nat org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)\r\nat org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)\r\nat org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)\r\nat org.postgresql.Driver.makeConnection(Driver.java:454)\r\nat org.postgresql.Driver.access$100(Driver.java:57)\r\nat org.postgresql.Driver$ConnectThread.run(Driver.java:364)\r\nat java.base/java.lang.Thread.run(Unknown Source)\r\n```",
"title": "SSL-Connection broken after upgrade 7.3.0 -> 7.3.2",
"type": "issue"
},
{
"action": "created",
"author": "kseniiaguzeeva",
"comment_id": 756585256,
"datetime": 1610088823000,
"masked_author": "username_1",
"text": "Cannot reproduce, Which settings do you do in SSL tab for the connection?\r\n\r\nCan you also send log files https://github.com/dbeaver/dbeaver/wiki/Log-files? (here or on kseniia@jkiss.org)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "serge-rider",
"comment_id": 756595072,
"datetime": 1610090334000,
"masked_author": "username_2",
"text": "@username_0 Do you have SSL certificates configured in default Java installation? Since version 7.3.1 all DBeaver installers contain Java inside, default Java installed in your system not used anymore.\r\n\r\nSo you need tocnfigure SSL in DBeaver itself (by specifying certificate files) but not in Java (using keytool).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "2019-05-10",
"comment_id": 756650357,
"datetime": 1610098142000,
"masked_author": "username_0",
"text": "What's the rationale behind this?\r\nIt basically means that there's a Java installation I didn't consent to and don't even necessarily know about, which takes up additional space and is not updated together with the installations I installed on my own and know about, thus being a potential security issue.\r\n\r\nFor the moment I was able to fix that by prepending\r\n```\r\n-vm\r\n/usr/bin\r\n```\r\nto\r\n```\r\ndbeaver.ini\r\n```\r\nbut with the next update that will be gone again.\r\nIs there a way to tell DBeaver to use a JRE _I_ want it to instead of a bundled one I have no control over?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "serge-rider",
"comment_id": 756657987,
"datetime": 1610099104000,
"masked_author": "username_2",
"text": "Most DBeaver users don't know what Java is and don't know how to install it. Also DBeaver needs a proper Java version and installing proper Java may be tricky. We shipped DBeaver for Windows and MacOS along with a Java bundle for years. Now we decided to include it into Linux distributions as well.\r\n\r\nAnd yes, you will need to fix the configuration in the next version. That's inconvenient.\r\n\r\nProbably we need to create a new DBeaver download \"without JDK included\". It may be more preferable for more experienced users. \r\n\r\nI've created the ticket (#10904) for this.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "uslss",
"comment_id": null,
"datetime": 1611130673000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
}
] | 4 | 6 | 5,152 | false | false | 5,152 | true |
JoviDeCroock/prefresh | null | 629,959,158 | 80 | {
"number": 80,
"repo": "prefresh",
"user_login": "JoviDeCroock"
} | [
{
"action": "opened",
"author": "JoviDeCroock",
"comment_id": null,
"datetime": 1591187748000,
"masked_author": "username_0",
"text": "Thanks for showing how this is done @lukeed will do some more PR's to update all packages but already publish the `vite` one in a bit!\r\n\r\nWill probably refactor this to use an exported `utils` equivalent as well in a later PR",
"title": "(snowpack) - strict package manager support",
"type": "issue"
}
] | 2 | 2 | 971 | false | true | 225 | false |
AnySoftKeyboard/AnySoftKeyboard | AnySoftKeyboard | 629,161,498 | 2,307 | {
"number": 2307,
"repo": "AnySoftKeyboard",
"user_login": "AnySoftKeyboard"
} | [
{
"action": "opened",
"author": "lubenard",
"comment_id": null,
"datetime": 1591100741000,
"masked_author": "username_0",
"text": "Now, for any to long string in menus, the text will be left scrolling. \r\n\r\nThis is to be able to read what can be in parenthesis",
"title": "Added horizontal scrolling layout for view item",
"type": "issue"
},
{
"action": "created",
"author": "lubenard",
"comment_id": 639063105,
"datetime": 1591298135000,
"masked_author": "username_0",
"text": "This fail is weird. \r\n\r\nThis is like it failed to start.\r\n\r\nAlready passed it 3 times at home, and it successfully passed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "menny",
"comment_id": 640152011,
"datetime": 1591501070000,
"masked_author": "username_1",
"text": "I've seen that too a few days ago. Could be a github infrastructure issue, which seems to be resolved now.\r\n\r\nCan you rebase this PR?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "menny",
"comment_id": 641689903,
"datetime": 1591757876000,
"masked_author": "username_1",
"text": "@username_0 would you like to take a shot at https://github.com/AnySoftKeyboard/AnySoftKeyboard/issues/2313 ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "lubenard",
"comment_id": 641835896,
"datetime": 1591777772000,
"masked_author": "username_0",
"text": "Yes, i'll see what i can do !",
"title": null,
"type": "comment"
}
] | 2 | 5 | 519 | false | false | 519 | true |
Sunbird-Ed/sunbird-mobile-sdk | Sunbird-Ed | 570,550,629 | 299 | {
"number": 299,
"repo": "sunbird-mobile-sdk",
"user_login": "Sunbird-Ed"
} | [
{
"action": "opened",
"author": "Ajoymaity",
"comment_id": null,
"datetime": 1582635362000,
"masked_author": "username_0",
"text": "",
"title": "Issue #SB-0000 test: Unit test for content mapper",
"type": "issue"
}
] | 2 | 2 | 3,201 | false | true | 0 | false |
lumochift/optgen | lumochift | 710,680,924 | 5 | {
"number": 5,
"repo": "optgen",
"user_login": "lumochift"
} | [
{
"action": "opened",
"author": "h4ckm03d",
"comment_id": null,
"datetime": 1601343006000,
"masked_author": "username_0",
"text": "",
"title": "feat: add goreleaser build",
"type": "issue"
}
] | 2 | 2 | 0 | false | true | 0 | false |
huggingface/transformers | huggingface | 628,015,442 | 4,698 | null | [
{
"action": "opened",
"author": "RafaelWO",
"comment_id": null,
"datetime": 1590949123000,
"masked_author": "username_0",
"text": "Labels for language modeling.\r\n Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``\r\n\r\nSo what is it about the parameter `lm_labels`? I only see `labels` defined in the `forward` method.\r\n\r\nAnd when the labels \"are shifted\" inside the model, does this mean I have to pass in `data` twice (for `input_ids` and `labels`) because `labels` shifted inside? But how does the model then know the next token to predict (in the case above: `9`) ?\r\n\r\nI also read through [this bug](https://github.com/huggingface/transformers/issues/3711) and the fix in [this pull request](https://github.com/huggingface/transformers/pull/3716) but I don't quite understand how to treat the model now (before vs. after fix). Maybe someone could explain me both versions.\r\n\r\nThanks in advance for some help!\r\n\r\n<!-- You should first ask your question on SO, and only if\r\n you didn't get an answer ask it here on GitHub. -->\r\n**A link to original question on Stack Overflow**: https://stackoverflow.com/q/62069350/9478384",
"title": "Transformer-XL: Input and labels for Language Modeling",
"type": "issue"
},
{
"action": "created",
"author": "sgugger",
"comment_id": 636982754,
"datetime": 1591030349000,
"masked_author": "username_1",
"text": "No, this is not correct, because the labels are shifted inside the model (as the documentation suggests). This happens [here](https://github.com/huggingface/transformers/blob/ec8717d5d8f6edc2c595ff6954ffaa2078dcc97d/src/transformers/modeling_transfo_xl_utilities.py#L104) so in your example, the target vector will become\r\n```\r\ntensor([[3,4,5,6,7,8,9]])\r\n```\r\nto be matched with the predictions corresponding to\r\n```\r\ntensor([[1,2,3,4,5,6,7]])\r\n```\r\nso you'll try to predict the token that is two steps ahead of the current one.\r\n\r\nI am guessing that `lm_labels` is a typo for `labels`, and that you should either:\r\n- pass `labels = input_ids` as suggested by the doc string (in this case you will not compute any loss for the last prediction, but that's probably okay)\r\n- add something at the beginning of your target tensor (anything can work since it will be removed by the shift) : `target = tensor([[42,2,3,4,5,6,7,8,9]])`\r\n\r\nI'm still learning the library, so tagging @username_2 (since he worked on the issue/PR you mentioned) to make sure I'm not saying something wrong (also, do we want to update `LMOrderedIterator` from tokenization_transfo_xl.py to return target tensors that can be used as labels?)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TevenLeScao",
"comment_id": 637148365,
"datetime": 1591049179000,
"masked_author": "username_2",
"text": "Ah yes that does sound like a typo from another model's convention! You do have to pass `data` twice, once to `input_ids` and once to `labels` (in your case, `[1, ... , 8]` for both). The model will then attempt to predict `[2, ... , 8]` from `[1, ... , 7]`). I am not sure adding something at the beginning of the target tensor would work as that would probably cause size mismatches later down the line.\r\n\r\nPassing twice is the default way to do this in `transformers`; before the aforementioned PR, `TransfoXL` did not shift labels internally and you had to shift the labels yourself. The PR changed it to be consistent with the library and the documentation, where you have to pass the same data twice. I believe #4711 fixed the typo, you should be all set ! I'll also answer on StackOverflow in case someone finds that question there.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 637352839,
"datetime": 1591083586000,
"masked_author": "username_0",
"text": "So this means that in the versions before the fix my method with shifting the labels beforehand was correct? Because I'm currently using `transformers 2.6`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TevenLeScao",
"comment_id": 637393706,
"datetime": 1591087872000,
"masked_author": "username_2",
"text": "Yes, it was changed in 2.9.0. You should probably consider updating ;)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgugger",
"comment_id": 637462173,
"datetime": 1591095738000,
"masked_author": "username_1",
"text": "Note that if you are using the state, the memory returned is computed on the whole `[1, ... , 8]`, so you should use `[9,10,... , 16]` as your next batch.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 637963994,
"datetime": 1591162371000,
"masked_author": "username_0",
"text": "Thanks guys!\r\n\r\nSorry for asking this here, but maybe one of you can help me with my workaround in issue #3554 ? That would help me a lot!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "RafaelWO",
"comment_id": null,
"datetime": 1591162372000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 643127861,
"datetime": 1591948177000,
"masked_author": "username_0",
"text": "Hello again, sorry for bothering again, but I have update my code from version 2.6 to 2.11 as @username_2 has suggested. Now I experience a drop in my model's performance, but I don't know why. I use the same code as before except passing `data` in twice as suggested.\r\n\r\nI know that this can have several other reasons but I just want to know if there where other breaking changes to `TransformerXLLMHeadModel` or to the generation process?\r\n\r\nI skipped through the changelog in the releases but could not find anything.\r\n\r\nThanks in advance!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TevenLeScao",
"comment_id": 644080942,
"datetime": 1592221781000,
"masked_author": "username_2",
"text": "Sorry, by a drop in model performance you mean the loss is worse right? I've noticed discrepancies between CMU code performance (better) and ours in the past, so maybe a bug was introduced between 2.6 and 2.11. I'm comparing the two.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 644106519,
"datetime": 1592224335000,
"masked_author": "username_0",
"text": "Well mainly I saw differences during text generation with `model.generate()` The sequences tend to be shorter and end more often with an <eos> in 2.11, where before in 2.6 they were just cut of at some point.\r\n\r\nBut I can't guarantee that there are no mistakes from my side.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 644847021,
"datetime": 1592322254000,
"masked_author": "username_0",
"text": "Could it be that this is also related to #4826 ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RafaelWO",
"comment_id": 652311005,
"datetime": 1593596327000,
"masked_author": "username_0",
"text": "FYI: The issue regarding worse model performance on the newer version of `transformers` is solved. There were some errors on my side.\r\n\r\nNevertheless, I hope that the fix in the PR linked above will improve the generated texts, since I also experience low quality output despite proper finetuning.",
"title": null,
"type": "comment"
}
] | 3 | 13 | 5,020 | false | false | 5,020 | true |
symfony/symfony | symfony | 545,680,382 | 35,222 | null | [
{
"action": "opened",
"author": "Jir4",
"comment_id": null,
"datetime": 1578310162000,
"masked_author": "username_0",
"text": "**Symfony version(s) affected**: 4.3.9\r\n\r\n**Description** \r\nUsing a quote before a placeholder in a Yaml translation file cause it not to interpreted.\r\n\r\n**How to reproduce** \r\nUse a string like this in a Yaml translation file :\r\n```\r\nnews_add: L'{type}\r\n```\r\n\r\nDisplay the string in flash\r\n\r\n```\r\n$type = 'event';\r\n$this->addFlash('success', $this->translator->trans('admin.message.news_add', [\r\n '%type%' => $type,\r\n ]);\r\n```\r\n\r\nExpected result:\r\n`L'event` in a flash\r\n\r\nActual result:\r\n`L'{type}`\r\n\r\n**Additional context** \r\nThe problem only exist when the quote is directly followeb by the placeholder, the problem be bypassed by adding a space or using a htm entity",
"title": "Using a quote before a placeholder cause it not to be interpreted",
"type": "issue"
},
{
"action": "created",
"author": "xabbuh",
"comment_id": 571125911,
"datetime": 1578314860000,
"masked_author": "username_1",
"text": "Your example uses `{type}` in the translation file, but `%type%` when calling the translator. I don't see how this could ever work. Can you double-check and create a small example application that allows to reproduce otherwise?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Jir4",
"comment_id": 571146300,
"datetime": 1578318901000,
"masked_author": "username_0",
"text": "Umh, this mistake exists in our project, seems to be `intl` related, i'll give you an example project in a few minutes, because as weird as it seems, it's working.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Jir4",
"comment_id": 571152321,
"datetime": 1578319992000,
"masked_author": "username_0",
"text": "@username_1 : here the test project : https://github.com/username_0/test-trans, you should just have to `composer install`, `bin/console server:run` and open you browser. After that you can play with space between the quote and the placeholder in `message+intl-icu.en.yml`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "stof",
"comment_id": 571153170,
"datetime": 1578320128000,
"masked_author": "username_2",
"text": "you should be using `type`, not `%type%`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Jir4",
"comment_id": 571155206,
"datetime": 1578320506000,
"masked_author": "username_0",
"text": "@username_2 : totaly agree with this, it's a mistake due to a bad migration to intl in our code. The problem with the quote and the placeholder still exist without the percent signs",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "stof",
"comment_id": null,
"datetime": 1578321620000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "stof",
"comment_id": 571162120,
"datetime": 1578321620000,
"masked_author": "username_2",
"text": "hmm, that's because the quote is the escaping char in the ICU message format syntax: http://userguide.icu-project.org/formatparse/messages\r\n\r\nSo you will need to double the quote, so that it is as escaped quote rather than a quote escaping the curly brace.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Jir4",
"comment_id": 571186075,
"datetime": 1578324995000,
"masked_author": "username_0",
"text": "Nice catch",
"title": null,
"type": "comment"
}
] | 3 | 9 | 1,826 | false | false | 1,826 | true |
micro-fan/aiozk | micro-fan | 612,904,838 | 61 | {
"number": 61,
"repo": "aiozk",
"user_login": "micro-fan"
} | [
{
"action": "opened",
"author": "rhdxmr",
"comment_id": null,
"datetime": 1588713908000,
"masked_author": "username_0",
"text": "A process which created znode of lowest sequence number calls wait_on_sibling method multiple times in the loop, but the previous code does not subtract already spent time from given timeout.\r\n\r\nRefactoring DoubleBarrier.leave based on the official zookeeper document.\r\n\r\nAlso some tests are added.",
"title": "Fix timeout accuracy of DoubleBarrier.leave",
"type": "issue"
},
{
"action": "created",
"author": "cybergrind",
"comment_id": 624577531,
"datetime": 1588762194000,
"masked_author": "username_1",
"text": "hey @username_0 \r\nInitially, there was another approach with a time-limit, which I've removed it because of incorrect implementation (thus almost infinite timeout values) here https://github.com/micro-fan/aiozk/commit/e6dc661e832543658cd475aa9760e5bc3210762d\r\n\r\nSo, there are more places like that.\r\n1. Could you please check the commit above, probably it is worth to add more places\r\n2. Looking at the implementation, probably we could move it to separate context manager that can be used like\r\n\r\n```python\r\nwith deadline_timeout(initial_timeout) as dt:\r\n wait_for(dt(), some_action1) # dt() will just return initial_timeout\r\n wait_for(dt(), some_action2) # dt() will return initial_timeout - time of some_action1\r\n```\r\n3. I'm not sure that `time.perf_counter` is really better than `time.time` for our case. Also it is way more cryptic than regular unix-timestamp. So, probably, it is better to use `time.time` instead.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rhdxmr",
"comment_id": 624701558,
"datetime": 1588777224000,
"masked_author": "username_0",
"text": "@username_1 OK. As you said, I am going to find other places for applying the same timeout mechanism.\r\n\r\nAnd your suggestion of that using context manager or generator sounds very nice.\r\nI'll come up with reasonable solution.🙂\r\nThanks",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rhdxmr",
"comment_id": 626450604,
"datetime": 1589167680000,
"masked_author": "username_0",
"text": "@username_1 I sent new commit. Please check this PR\r\n\r\nAs you said, I agree with that time.perf_counter is more cryptic than time.time at python level. So I use time.time instead of time.perf_counter.\r\n\r\nFor reference, I investigated the difference between them. In Linux, time.perf_counter calls a system call clock_gettime(CLOCK_MONOTONIC) and time.time calls clock_gettime(CLOCK_REALTIME). At this level, it is reasonable to use clock_gettime(CLOCK_MONOTONIC) because we just need the difference between 2 time. But at python level, it is not obvious.\r\n\r\nFirst I considered to make use of context manager as you mentioned. But I thought the timeout feature does not deserve more indentations for with-statement. If it is a context manager, every code that uses it should have more indentation. I think it is inconvenient. And in addition it does not have any resources to release.\r\n\r\nBut if I had to raise an exception after timeout, I would make use of context manager.\r\n\r\nSo I made a new class for this feature: Deadline class.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cybergrind",
"comment_id": 626683112,
"datetime": 1589201614000,
"masked_author": "username_1",
"text": ":+1: released as 0.25.0. Good job, thank you",
"title": null,
"type": "comment"
}
] | 2 | 5 | 2,533 | false | false | 2,533 | true |
ekim-mike/github-slideshow | null | 709,417,625 | 1 | null | [
{
"action": "created",
"author": "ekim-mike",
"comment_id": 699344110,
"datetime": 1601092738000,
"masked_author": "username_0",
"text": "Closed",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ekim-mike",
"comment_id": null,
"datetime": 1601092738000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 7 | 6,807 | false | true | 6 | false |
LedgerHQ/ledger-live-mobile | LedgerHQ | 744,204,633 | 1,456 | null | [
{
"action": "opened",
"author": "JPaulMora",
"comment_id": null,
"datetime": 1605562272000,
"masked_author": "username_0",
"text": "Live exchange rate is used in old payments, changing the FIAT value received at that moment \n#### Ledger Live Version\n\n- Ledger Live **2.15.0**\n\n#### Part of the application to improve\n\nUse historical price information for old transactions. \n\nPS. Great app, thanks a lot.",
"title": "Change transactions exchange rate",
"type": "issue"
},
{
"action": "created",
"author": "gre",
"comment_id": 733042060,
"datetime": 1606231166000,
"masked_author": "username_1",
"text": "Good point.\r\nWe will plan this as part of loading older data from our countervalues history.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gre",
"comment_id": 745233358,
"datetime": 1608032212000,
"masked_author": "username_1",
"text": "I believe this is now fixed. Could you confirm? Thanks",
"title": null,
"type": "comment"
}
] | 2 | 3 | 417 | false | false | 417 | false |
sveltejs/language-tools | sveltejs | 782,172,351 | 747 | null | [
{
"action": "opened",
"author": "dkzlv",
"comment_id": null,
"datetime": 1610117504000,
"masked_author": "username_0",
"text": "<!-- Before you submit a bug, please make sure that:\r\n- you have searched and found no existing open issue with the problem at hand\r\n- you don't have \"files.associations\": {\"*.svelte\": \"html\" } inside your VSCode settings (if you can't remember ever doing that, you don't have that)\r\n- you are using Svelte for Vscode (NOT the old \"Svelte\" by James Birtles) and have disabled all other Svelte-related extensions to reproduce the bug\r\n- if it's a preprocessor related bug like \"can't use typescript\", did you setup `svelte-preprocess` and/or `svelte.config.js`? See the docs for more info.\r\n-->\r\n\r\n**Describe the bug**\r\nIf you set SCSS as the default style language and not set it inside the component itself on `<style>` tag, it would mess up the whole highlighting of everything below.\r\n\r\n**To Reproduce**\r\nSet the defaults in `rollup.config.js`:\r\n\r\n```js\r\nimport sveltePreprocess from 'svelte-preprocess';\r\n\r\nconst preprocess = sveltePreprocess({\r\n defaults: {\r\n style: 'scss',\r\n },\r\n scss: true,\r\n});\r\n```\r\n\r\nSet the same in `svelte.config.js`:\r\n\r\n```js\r\nconst sveltePreprocess = require('svelte-preprocess');\r\n\r\nmodule.exports = {\r\n preprocess: sveltePreprocess({\r\n defaults: {\r\n style: 'scss',\r\n },\r\n scss: true,\r\n }),\r\n};\r\n```\r\n\r\nUse some basic component like this:\r\n\r\n```svelte\r\n<script>\r\n let blabla = 'wer';\r\n</script>\r\n\r\n<style>\r\n @mixin a($var) {\r\n font-style: $var;\r\n }\r\n\r\n .selector {\r\n @include a('qwd');\r\n }\r\n</style>\r\n\r\n<!-- Messes up everything -->\r\n<div class=\"selector\">Yep {blabla}</div>\r\n```\r\n\r\n**Expected behavior**\r\nShould work as if `<style lang='scss'>` is saved in the file.\r\n\r\n**Screenshots**\r\n\r\n\r\n**System (please complete the following information):**\r\n - OS: OS X\r\n - IDE: VS Code\r\n - Plugin/Package: Svelte for VS Code (103.0.0)",
"title": "SCSS highlighting breaks if defined as default and not in the component",
"type": "issue"
},
{
"action": "created",
"author": "dummdidumm",
"comment_id": 756823222,
"datetime": 1610120448000,
"masked_author": "username_1",
"text": "I don't think there's a way to solve this I'm afraid. [The docs mention this and give an explanation why](https://github.com/sveltejs/language-tools/blob/master/docs/preprocessors/in-general.md#using-language-defaults). If someone knows a way, let us know.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dummdidumm",
"comment_id": 766352205,
"datetime": 1611496603000,
"masked_author": "username_1",
"text": "Related VS Code issue: https://github.com/microsoft/vscode/issues/68647",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dkzlv",
"comment_id": 766675006,
"datetime": 1611566476000,
"masked_author": "username_0",
"text": "Offtop: this idea is so dope. `<i18n>` tag would be so cool in the context of https://github.com/sveltejs/sapper/issues/576.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 2,375 | false | false | 2,375 | false |
Barsik008/PossumBot | null | 780,077,011 | 81 | null | [
{
"action": "opened",
"author": "yeetyeet96",
"comment_id": null,
"datetime": 1609912288000,
"masked_author": "username_0",
"text": "After installing the bot and stuff when I tried to bring it to my server \nThis happened \nWhat does this mean",
"title": "Hello! ",
"type": "issue"
},
{
"action": "created",
"author": "yeetyeet96",
"comment_id": 755098180,
"datetime": 1609912530000,
"masked_author": "username_0",
"text": "I'm a little dumb so",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yeetyeet96",
"comment_id": 755110332,
"datetime": 1609914443000,
"masked_author": "username_0",
"text": "I figured it out nvm LMFAO",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "n18s",
"comment_id": 757104348,
"datetime": 1610173086000,
"masked_author": "username_1",
"text": "WTF was the solution and problem?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Barsik008",
"comment_id": null,
"datetime": 1610198129000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 5 | 317 | false | false | 317 | false |
cerner/terra-framework | cerner | 696,918,378 | 1,234 | null | [
{
"action": "opened",
"author": "andrewnottaviano",
"comment_id": null,
"datetime": 1599665707000,
"masked_author": "username_0",
"text": "# Bug Report\r\n\r\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n<!-- Providing a link to a live example / minimal demo of the problem greatly helps us debug issues. -->\r\nWe switched to using the terra-menu component in our project and I noticed that when I use `contentWidth=\"auto\"`, there is no right padding. The text is crammed against the right side of the popup (see screenshot). Note, we are not using selection in our menu; on selection, we close the menu.\r\n## Steps to Reproduce\r\n<!-- Please specify the exact steps you took for this bug to occur. -->\r\n<!-- Provide as much detail as possible so we're able to reproduce these steps. -->\r\n1. Render a menu with a few options (it is easier to see if you have one of them longer than the other menu items)\r\n2. Set the `contentWidth` prop to `auto`\r\n3. Observe the popup is not sized properly to give the content room to breathe\r\n\r\n## Additional Context / Screenshots\r\n<!-- Add any other context about the problem here. If applicable, add screenshots to help explain. -->\r\n\r\n\r\n## Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nI would expect there to be a small amount of padding or margin to the right of the text.\r\n## Possible Solution\r\n<!--- If you have suggestions to fix the bug, let us know -->\r\nAdd some scss.\r\n## Environment\r\n<!-- Include as many relevant details about the environment you experienced the bug in -->\r\n* Component Name and Version: terra-menu - 6.40.0\r\n* Browser Name and Version: Chrome - 85.0.4183.83\r\n* Node/npm Version: Node 12/npm 6.13.4\r\n* Webpack Version: \r\n* Operating System and version (desktop or mobile): mac OSX desktop\r\n\r\n## @ Mentions\r\n<!-- @ Mention anyone on the terra team that you have been working with so far. -->\r\n@dkasper-was-taken",
"title": "terra-menu Right Padding Non-existent",
"type": "issue"
},
{
"action": "closed",
"author": "neilpfeiffer",
"comment_id": null,
"datetime": 1603109704000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 1,947 | false | false | 1,947 | false |
approvals/ApprovalTests.Net | approvals | 648,771,000 | 409 | null | [
{
"action": "opened",
"author": "NikiforovAll",
"comment_id": null,
"datetime": 1593591244000,
"masked_author": "username_0",
"text": "Since the subset names for embedded fonts are generated in an indeterministic way. It is not possible to use approval tests in this case. My suggestion is to extend _PdfScrubber_, so 6 digit prefix is not involved in the comparison.\r\n\r\nRef: https://tex.stackexchange.com/questions/156429/what-is-the-scheme-of-fonts-naming-in-pdfs-generated-by-latex-and-other-software\r\n\r\nE.g. `FAUUBT` is a randomly generated subset name.\r\n`<</Type/FontDescriptor/Ascent 765/CapHeight 713/Descent -240/FontBBox[-549 -270 1204 1047]/FontName/` + **FAUUBT** + `+OpenSans/ItalicAngle 0/StemV 80/FontFile2 23 0 R/Flags 32>>`",
"title": "Add possibility to ignore font subsets during pdf comparison",
"type": "issue"
},
{
"action": "created",
"author": "isidore",
"comment_id": 663270341,
"datetime": 1595544798000,
"masked_author": "username_1",
"text": "I'd be up for this, can we setup a time to pair next week?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "NikiforovAll",
"comment_id": 664361185,
"datetime": 1595852252000,
"masked_author": "username_0",
"text": "@username_1 sounds fun, let's do that. How are we going to do that?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SimonCropp",
"comment_id": 1000796598,
"datetime": 1640344356000,
"masked_author": "username_2",
"text": "This project is not being actively maintained. Instead consider using [Verify](https://github.com/VerifyTests/Verify/). See [Migrating from ApprovalTests](https://github.com/VerifyTests/Verify/blob/main/docs/compared-to-approvaltests.md#migrating-from-approvaltests) for more information.",
"title": null,
"type": "comment"
}
] | 3 | 4 | 1,013 | false | false | 1,013 | true |
pingcap/pd | pingcap | 481,475,604 | 1,688 | {
"number": 1688,
"repo": "pd",
"user_login": "pingcap"
} | [
{
"action": "opened",
"author": "jiyingtk",
"comment_id": null,
"datetime": 1565939503000,
"masked_author": "username_0",
"text": "<!--\r\nThank you for working on PD! Please read PD's [CONTRIBUTING](https://github.com/pingcap/pd/blob/master/CONTRIBUTING.md) document **BEFORE** filing this PR.\r\n-->\r\n\r\n### What problem does this PR solve? <!--add the issue link with summary if it exists-->\r\nRegion flow statistics would be inaccurate when transfer leader occurs.\r\n\r\n### What is changed and how it works?\r\nUse time interval reported from region heartbeat to calculate flow.\r\n\r\n### Check List <!--REMOVE the items that are not applicable-->\r\n\r\nTests <!-- At least one of them must be included. -->\r\n\r\n - Manual test",
"title": "statistics: fix region flow calculation",
"type": "issue"
}
] | 3 | 4 | 773 | false | true | 582 | false |
adobe/aio-cli-plugin-cloudmanager | adobe | 722,187,196 | 142 | null | [
{
"action": "opened",
"author": "mirkogolze",
"comment_id": null,
"datetime": 1602756091000,
"masked_author": "username_0",
"text": "In the documentation for https://github.com/adobe/aio-cli-plugin-cloudmanager/blob/main/README.md#aio-cloudmanagerdownload-logs-environmentid-service-name-days and https://github.com/adobe/aio-cli-plugin-cloudmanager/blob/main/README.md#aio-cloudmanagertail-log-environmentid-service-name is written \"lists available logs for an environment in a Cloud Manager program\"\r\n\r\nFor both commands, this is not really the case. Could you please improve the description?\r\nFor people don't have access to a sandbox it would be easier to unterstand.",
"title": "Improvements for documentation desired",
"type": "issue"
},
{
"action": "created",
"author": "justinedelson",
"comment_id": 709314796,
"datetime": 1602767500000,
"masked_author": "username_1",
"text": "@username_0 can you clarify what you mean? Is the issue that this needs to be qualified to apply only to Cloud Service programs?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mirkogolze",
"comment_id": 709321149,
"datetime": 1602768162000,
"masked_author": "username_0",
"text": "This only faces to the documentation in the README.md for this git repository.\r\nFor the documentation of \"aio cloudmanager:download-logs ENVIRONMENTID SERVICE NAME [DAYS]\" I would think the following description is better than \"lists available logs for an environment in a Cloud Manager program\":\r\n\r\n\"download the log files for an environment in a Cloud Manager program\" \r\nSome explainations can be helpful. E.g. how the number of days will work.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "justinedelson",
"comment_id": 709330801,
"datetime": 1602769095000,
"masked_author": "username_1",
"text": "Ah. I get it now. Sorry, misunderstood. Indeed this is a copy/paste issue. Fixed in #143",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "justinedelson",
"comment_id": null,
"datetime": 1602769538000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 1,200 | false | false | 1,200 | true |
platformio/platformio-vscode-ide | platformio | 724,327,066 | 2,183 | null | [
{
"action": "opened",
"author": "sebby49",
"comment_id": null,
"datetime": 1603090446000,
"masked_author": "username_0",
"text": "\r\n\r\ncan somebody tell me why its not working as it should i been messing with this for a week already reading all kinds of solutions but nothing works like wtf is it so hard to just put a firmware out there that works so noobs like me don't get seriously frustrated",
"title": "PIO home does not work also platformIO fails to instal",
"type": "issue"
}
] | 2 | 3 | 583 | false | true | 380 | false |
codeforboston/clean-slate-data | codeforboston | 738,588,447 | 178 | null | [
{
"action": "opened",
"author": "laurafeeney",
"comment_id": null,
"datetime": 1604881120000,
"masked_author": "username_0",
"text": "General research: compare demographics (age distributions, population numbers) for the 3 districts we have (Northwestern, Suffolk, Middlesex) to each other and to MA as a whole. \r\n\r\n- How representative are these districts? \r\n- What share of the MA population does each represent? \r\n- How comparable are the districts to each other or to the state?\r\n\r\nThis will help us assess how to deal with lack of individual identifiers (#175 ) and lack of age data (#174 )",
"title": "General research: Compare demographics",
"type": "issue"
},
{
"action": "created",
"author": "laurafeeney",
"comment_id": 780225287,
"datetime": 1613524415000,
"masked_author": "username_0",
"text": "conversation w/Sana in Jan 2021: This would not be helpful in interpreting how many are eligible. There are too many other differences across districts.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "laurafeeney",
"comment_id": null,
"datetime": 1613524415000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 613 | false | false | 613 | false |
marcomusy/vedo | null | 721,829,364 | 228 | null | [
{
"action": "opened",
"author": "jsanchez679",
"comment_id": null,
"datetime": 1602713758000,
"masked_author": "username_0",
"text": "Dear Marco, \r\nHope this message finds you well! \r\nI am currently trying to use the booleanOperation function to add a series of spheres and create a unique mesh from which I can get the volume. I tried the example boolean.py and it works, but when I try to add a series of spheres the next error appears, and the kernel needs to be restarted. \r\n\r\nThe function I am trying to create goes like this: \r\n\r\n` def getBalloonedHeart(dictLayer):\r\n \r\n pts_cl = np.asarray(dictLayer['Points'])\r\n sphData_cl = np.asarray(dictLayer['PointData']['MaximumInscribedSphereRadius'])\r\n \r\n sph_all = []\r\n for n, pt_pos, radius in zip(count(), pts_cl, sphData_cl):\r\n sph2add = Sphere(pos=pt_pos, r=radius, c='red')\r\n sph_all.append(sph2add)\r\n \r\n sph_whole = sph_all[0]\r\n for sphere in sph_all:\r\n sph_whole = booleanOperation(sphere, \"plus\", sph_whole.clone())\r\n \r\n sph_whole = sph_whole.color('coral').legend('Filled')\r\n \r\n return sph_all, sph_whole`\r\n\r\nwhere _pts_cl_ is an array of pos (x,y,z) coordinates, and _sphData_cl_ is an array of the same length with the radii of the spheres to create. The first **for** of the funcion runs smoothly, and creates a list of all the spheres. The problem arises when trying to create the volume of added spheres. \r\n\r\nLet me know if you have any questions. \r\nbest wishes, \r\n\r\nJuliana",
"title": "Error using booleanOperation",
"type": "issue"
},
{
"action": "created",
"author": "marcomusy",
"comment_id": 708703203,
"datetime": 1602716167000,
"masked_author": "username_1",
"text": "Hi Juliana,\r\ncould it be that at some step any the two spheres are not intersecting?\r\nthe message seems to say that a tree-search object cannot be created.\r\nWhat are you trying to achieve? maybe there are better ways than the boolean union (?)\r\ncheers\r\nM.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "marcomusy",
"comment_id": null,
"datetime": 1606056498000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "Aniwax",
"comment_id": 740034464,
"datetime": 1607359191000,
"masked_author": "username_2",
"text": "It's a pity that this discussion didn't continue. I have a similar question: \r\nI want to create a grid of small mesh box/cubes, then intersect them as a whole with a huge cube beneath this grid plane.\r\nIs there a way to generate a union of mesh objects from a list of mesh objects?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "marcomusy",
"comment_id": 741979892,
"datetime": 1607540266000,
"masked_author": "username_1",
"text": "Yes. You can use `mesh.boolean()` this way:\r\n```python\r\nfrom vedo import *\r\n\r\nc1 = Cube().triangulate().clean().flat()\r\nc2 = Cube().triangulate().clean().flat()\r\nc2.pos(0.2,0.1,0.1).rotateX(10)\r\n\r\nb = c1.boolean(\"+\", c2).flat().addScalarBar()\r\n\r\nshow([(c1, c2.alpha(0.5)), b], N=2, axes=1)\r\n```\r\n\r\n\r\n\r\n\r\nNote that if the 2 object do not touch then the operation fails. In this case you can use instead `merge()` to obtain the sum of the two.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Aniwax",
"comment_id": 742550852,
"datetime": 1607610089000,
"masked_author": "username_2",
"text": "@username_1 Thanks for the reply and the nice example! \r\nBut this didn't really solve my issue. I need to calculate a boolean operations on more than 2 not-touching-objects. \r\nIs there a way to do it?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "marcomusy",
"comment_id": 742622387,
"datetime": 1607616979000,
"masked_author": "username_1",
"text": "If the boolean operation is just summing non-touching meshes you can simply use `merge(mesh1,mesh2,mesh3,..)`.",
"title": null,
"type": "comment"
}
] | 3 | 7 | 2,881 | false | false | 2,881 | true |
conda-forge/cutensor-feedstock | conda-forge | 793,800,508 | 1 | {
"number": 1,
"repo": "cutensor-feedstock",
"user_login": "conda-forge"
} | [
{
"action": "opened",
"author": "leofang",
"comment_id": null,
"datetime": 1611617251000,
"masked_author": "username_0",
"text": "<!--\r\nThank you for pull request.\r\nBelow are a few things we ask you kindly to self-check before getting a review. Remove checks that are not relevant.\r\n-->\r\nChecklist\r\n* [ ] Used a [personal fork of the feedstock to propose changes](https://conda-forge.org/docs/maintainer/updating_pkgs.html#forking-and-pull-requests)\r\n* [ ] Bumped the build number (if the version is unchanged)\r\n* [ ] Reset the build number to `0` (if the version changed)\r\n* [ ] [Re-rendered]( https://conda-forge.org/docs/maintainer/updating_pkgs.html#rerendering-feedstocks ) with the latest `conda-smithy` (Use the phrase <code>@<space/>conda-forge-admin, please rerender</code> in a comment in this PR for automated rerendering)\r\n* [ ] Ensured the license file is being packaged.\r\n\r\n<!--\r\nPlease note any issues this fixes using [closing keywords]( https://help.github.com/articles/closing-issues-using-keywords/ ):\r\n-->\r\n\r\n<!--\r\nPlease add any other relevant info below:\r\n-->",
"title": "[WIP] Fix Windows builds",
"type": "issue"
},
{
"action": "created",
"author": "leofang",
"comment_id": 767178573,
"datetime": 1611617267000,
"masked_author": "username_0",
"text": "@conda-forge-admin, please rerender",
"title": null,
"type": "comment"
}
] | 2 | 3 | 1,525 | false | true | 986 | false |
google/closure-compiler | google | 696,401,875 | 3,679 | null | [
{
"action": "opened",
"author": "juj",
"comment_id": null,
"datetime": 1599625751000,
"masked_author": "username_0",
"text": "In wiki page https://github.com/google/closure-compiler/wiki/Flags-and-Options, the docs\r\n\r\n```\r\n--language_in VAL\r\nSets the language spec to which input sources should conform. Options: ECMASCRIPT3, ECMASCRIPT5, ECMASCRIPT5_STRICT, ECMASCRIPT6_TYPED (experimental), ECMASCRIPT_2015, ECMASCRIPT_2016, ECMASCRIPT_2017, ECMASCRIPT_2018, ECMASCRIPT_2019, STABLE, ECMASCRIPT_NEXT\r\n\r\n--language_out VAL\r\nSets the language spec to which output should conform. Options: ECMASCRIPT3, ECMASCRIPT5, ECMASCRIPT5_STRICT, ECMASCRIPT_2015, ECMASCRIPT_2016, ECMASCRIPT_2017, ECMASCRIPT_2018, ECMASCRIPT_2019, STABLE\r\n```\r\ndo not say what are the default options for these settings. That would be good to add?\r\n\r\nAlso, it would be good to describe the behavior of what happens if one passes only one of `--language_in`, or `--language_out`, but not the other? (e.g. does `--language_in ECMASCRIPT_2018` without an explicitly specified `--language_out` directive imply `--language_out ECMASCRIPT_2018`? Or does e.g. `--language_out ECMASCRIPT_2018` without an explicitly specified `--language_in` directive imply `--language_in ECMASCRIPT_2018`?)",
"title": "Documentation missing default language_in and language_out modes",
"type": "issue"
},
{
"action": "created",
"author": "juj",
"comment_id": 689297957,
"datetime": 1599626597000,
"masked_author": "username_0",
"text": "Also it seems that there is an option `--language_out NO_TRANSPILE`, but that is not documented in the list?\r\n\r\nAlso, related question I have is:\r\n\r\nIf one specifies `--language_in ECMASCRIPT_2018`, can/will Closure emit ES2018 constructs of its own (during the minification process) to the build output? How about if `--language_in ECMASCRIPT_2018 --language_out NO_TRANSPILE` is passed?\r\n\r\nWhat I would like to achieve is to enable the source code that I am minifying to (possibly) contain ES 2018 constructs, but have Closure not emit any ES 2018 constructs by itself during minification process, but only emit ES5 constructs.\r\n\r\nIn other words, is there a set of `--language_in` and `--language_out` flags, such that, given an input file that is either ECMASCRIPT_2018 or ES5 (at the time of invocation it is unknown which):\r\n - if the input file is ECMASCRIPT_2018, Closure should not do any transpiling on it (can keep output as ECMASCRIPT_2018),\r\n - if the input file is ES5, the output file should also be ES5.\r\n\r\nThe motivation for this comes from providing a set of default flags for Emscripten compiler's Closure operation.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 689925089,
"datetime": 1599702970000,
"masked_author": "username_1",
"text": "I'll try to answer your questions here, then maybe use this to update the docs.\r\n\r\nThe default value for `--language_in` is `STABLE`.\r\nThe default value for `--language_out` is whatever `--language_in` is.\r\n\r\nWhat `STABLE` means depends on whether it's used for `--language_in` or `--language_out`.\r\nIn general for input it means the highest level of the released ES spec we currently support.\r\nFor output it means the lowest level of ES spec we think it's reasonable for an application to be still trying to support.\r\n\r\nAt the moment `STABLE` means:\r\n\r\n| `--language_in` | `--language_out` |\r\n| ------------------- | ---------------------- |\r\n| `ES_2019` | `ES5_STRICT` |\r\n\r\nWe promise not to output any language features higher than `--language_out`, but that doesn't mean we promise we'll never upgrade input code to use newer language features.\r\n\r\nFor historical and ease-of-maintenance reasons we rarely ever output newer language features than those that originally appeared in the input, but there are some cases where we do.\r\n\r\nWe'll likely do it more in the future, since newer language features often also allow us to output less code.\r\n\r\nSo, no, there's no combination of options that would guarantee not to add newer features without also transpiling away those features if they appear in the input.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 689983739,
"datetime": 1599714243000,
"masked_author": "username_1",
"text": "Logic for interpreting the command line options is here.\r\n\r\nhttps://github.com/google/closure-compiler/blob/aaae1b74c35846dc79ef0f40543d538b362083e9/src/com/google/javascript/jscomp/CommandLineRunner.java#L1780-L1808\r\n\r\nOh, also, `NO_TRANSPILE` is only valid for `--language_out`\r\n\r\nhttps://github.com/google/closure-compiler/blob/aaae1b74c35846dc79ef0f40543d538b362083e9/src/com/google/javascript/jscomp/CompilerOptions.java#L1906-L1909\r\n\r\nand means the same thing as not specifying `--language_out` at all.\r\n\r\nhttps://github.com/google/closure-compiler/blob/aaae1b74c35846dc79ef0f40543d538b362083e9/src/com/google/javascript/jscomp/CompilerOptions.java#L1923-L1932\r\n\r\nWhich looks really suspiciously like `NO_TRANSPILE`, means **no output features** until you look at the implementation of `getOutputFeatureSet()`\r\n\r\nhttps://github.com/google/closure-compiler/blob/aaae1b74c35846dc79ef0f40543d538b362083e9/src/com/google/javascript/jscomp/CompilerOptions.java#L1946-L1953\r\n\r\nIt seems likely that some cleanup could be done here. :)",
"title": null,
"type": "comment"
}
] | 2 | 4 | 4,615 | false | false | 4,615 | false |
MicrosoftDocs/appcenter-docs | MicrosoftDocs | 405,547,747 | 445 | null | [
{
"action": "opened",
"author": "Eyesonly88",
"comment_id": null,
"datetime": 1548995441000,
"masked_author": "username_0",
"text": "The documentation doesn't show how to change the crash reporting configuration from ALWAYS_SEND to ASK_JAVASCRIPT for projects that have already answered \"ALWAYS_SEND\".\n\nI had to go into the source code to find out what to do in order to change my existing project from the ALWAYS_SEND configuration to the ASK_JAVASCRIPT configuration.\n\nThe documentation should outline these steps.\n\nFor anyone interested in doing this before the docs are updated, read a file called `postlink.js` inside `appcenter-crashes/scripts`.\n\n---\n#### Document Details\n\n⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*\n\n* ID: 519e420f-cf2a-1d5b-d86a-d01c09bcf7a0\n* Version Independent ID: 59c746fb-cf4d-3019-2b48-56b69e09aa27\n* Content: [App Center Crashes for React Native - Visual Studio App Center](https://docs.microsoft.com/en-us/appcenter/sdk/crashes/react-native)\n* Content Source: [docs/sdk/crashes/react-native.md](https://github.com/MicrosoftDocs/appcenter-docs/blob/live/docs/sdk/crashes/react-native.md)\n* Service: **vs-appcenter**\n* GitHub Login: @elamalani\n* Microsoft Alias: **emalani**",
"title": "Documentation is missing how to change crash config to be processed by JavaScript",
"type": "issue"
},
{
"action": "created",
"author": "jwargo",
"comment_id": 460250830,
"datetime": 1549287146000,
"masked_author": "username_1",
"text": "@username_0 thanks, I'll have the team take a look.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "dhei",
"comment_id": null,
"datetime": 1550683640000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "dhei",
"comment_id": 465676262,
"datetime": 1550683640000,
"masked_author": "username_2",
"text": "Close due to lack of response and unable to repro.",
"title": null,
"type": "comment"
}
] | 3 | 4 | 1,222 | false | false | 1,222 | true |
mkevenaar/chocolatey-packages | null | 715,803,352 | 74 | null | [
{
"action": "opened",
"author": "treischl",
"comment_id": null,
"datetime": 1601998887000,
"masked_author": "username_0",
"text": "A possible solution might be to search for the _.exe_ asset instead of the _.msi_ asset:\r\n\r\n```diff\r\n@@ chocolatey-packages/automatic/powertoys/update.ps1\r\n- $re = \"PowerToysSetup(.+)?.msi\"\r\n+ $re = \"PowerToysSetup(.+)?.exe\"\r\n```\r\n\r\nIf the above change is appropriate, I can submit the pull request to help expedite the process.\r\n\r\n## Steps to Reproduce (for bugs)\r\n\r\n1. Install powertoys via chocolatey: `choco install powertoys -y`.\r\n2. Within the next 24 hours, powertoys will provide a notification that version 0.23.0 is available.\r\n3. Verify package with version 0.23.0 is not available: `choco list powertoys -e`.\r\n\r\n## Context\r\n\r\nI prefer to install and upgrade software using chocolatey when possible. This has prevented me from updating the powertoys package to the latest version.\r\n\r\n## Your Environment\r\n\r\n* Package Version used: 0.21.1\r\n* Operating System and version: Windows 10 Pro 2004\r\n* Chocolatey version: 0.10.15",
"title": "(powertoys) package is not automatically updating from 0.21.1 to 0.23.0",
"type": "issue"
},
{
"action": "created",
"author": "mkevenaar",
"comment_id": 705084040,
"datetime": 1602091682000,
"masked_author": "username_1",
"text": "@username_0 thank you for this issue. I am going to pick this up. Unfortunately the package will now require some extra dependencies and additional parameters. \r\n\r\nhttps://github.com/microsoft/PowerToys/wiki/Installer-arguments-for-exe",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mkevenaar",
"comment_id": null,
"datetime": 1602098711000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 1,174 | false | false | 1,174 | true |
zhangxiaosong18/FreeAnchor | null | 556,614,254 | 28 | null | [
{
"action": "opened",
"author": "HansolEom",
"comment_id": null,
"datetime": 1580268504000,
"masked_author": "username_0",
"text": "I want to test every image in one gpu. \r\nBut it ends in 626iter. \r\nWhat do I have to do?",
"title": "How do i test 5,000 images?",
"type": "issue"
},
{
"action": "created",
"author": "zhangxiaosong18",
"comment_id": 579576692,
"datetime": 1580268836000,
"masked_author": "username_1",
"text": "I can't give useful help if you do not provide command and error details etc.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "HansolEom",
"comment_id": null,
"datetime": 1580281402000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 165 | false | false | 165 | false |
DylanVanAssche/status-page | null | 776,676,835 | 32 | null | [
{
"action": "opened",
"author": "DylanVanAssche",
"comment_id": null,
"datetime": 1609371047000,
"masked_author": "username_0",
"text": "In [`967b5bc`](https://github.com/username_0/status-page/commit/967b5bc14c3c5afccf42c9e11eb11e22186731dd\n), Matrix (https://chat.dylanvanassche.be) was **down**:\n- HTTP code: 0\n- Response time: 0 ms",
"title": "🛑 Matrix is down",
"type": "issue"
},
{
"action": "created",
"author": "DylanVanAssche",
"comment_id": 752882193,
"datetime": 1609401892000,
"masked_author": "username_0",
"text": "**Resolved:** Matrix is back up in [`00f8e5b`](https://github.com/username_0/status-page/commit/00f8e5b80647b9154b4fed37ccd0e5873e870e66\n).",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "DylanVanAssche",
"comment_id": null,
"datetime": 1609401893000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 345 | false | false | 345 | true |
TheEightBot/Xamarin.IQKeyboardManager | TheEightBot | 673,384,741 | 16 | null | [
{
"action": "opened",
"author": "lobbo232",
"comment_id": null,
"datetime": 1596618731000,
"masked_author": "username_0",
"text": "I have found an issue with tabbed pages on Xamarin that occurs on iOS.\r\nI have enabled the plugin in AppDelegate with `IQKeyboardManager.SharedManager.Enable = true;`\r\n\r\nMy page is this:\r\n\r\n\r\nWhen I click in the textbox, the keyboard works exactly as expected and the content is not overlapped any more. Great!\r\n\r\n\r\nHowever, when the keyboard closes the tab page buttons are also gone as shown here:\r\n\r\n\r\nXamarin.IQKeyboardManage: v1.4.1\r\nDevice: iPhone XS Max iOS 13.5.1\r\nXamarin: 16.6.000.1061\r\nXamarin.Forms: 4.6.0.762\r\nXamarin.iOS: 13.18.2.1",
"title": "Tab page headings get compressed on iOS",
"type": "issue"
},
{
"action": "created",
"author": "restorepro",
"comment_id": 721395698,
"datetime": 1604440692000,
"masked_author": "username_1",
"text": "I have the same issue. My tabbed page doesn't completely go away but the tabbed page (menu icons) are cut in half so you can only see only about half of their height...",
"title": null,
"type": "comment"
}
] | 2 | 2 | 1,052 | false | false | 1,052 | false |
ignitionrobotics/ign-physics | ignitionrobotics | 761,458,567 | 176 | null | [
{
"action": "opened",
"author": "acxz",
"comment_id": null,
"datetime": 1607622406000,
"masked_author": "username_0",
"text": "```\r\n[ 4%] Building CXX object src/CMakeFiles/ignition-physics3.dir/Identity.cc.o\r\nIn file included from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:303,\r\n from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/src/Identity.cc:18:\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/detail/Entity.hh:363:5: error: no declaration matches ‘typename FeatureT::Implementation<Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface()’\r\n 363 | Entity<Policy, Features>::Interface()\r\n | ^~~~~~~~~~~~~~~~~~~~~~~~\r\nIn file included from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/src/Identity.cc:18:\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:283:58: note: candidates are: ‘template<class PolicyT, class FeaturesT> template<class F> const typename F::Implementation<ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface() const’\r\n 283 | const typename F::template Implementation<Policy> *Interface() const;\r\n | ^~~~~~~~~\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:276:59: note: ‘template<class PolicyT, class FeaturesT> template<class FeatureT> typename FeatureT::Implementation<ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface()’\r\n 276 | typename 
FeatureT::template Implementation<Policy> *Interface();\r\n | ^~~~~~~~~\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:216:11: note: ‘class ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >’ defined here\r\n 216 | class Entity\r\n | ^~~~~~\r\nIn file included from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:303,\r\n from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/src/Identity.cc:18:\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/detail/Entity.hh:373:5: error: no declaration matches ‘const typename FeatureT::Implementation<Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface() const’\r\n 373 | Entity<Policy, Features>::Interface() const\r\n | ^~~~~~~~~~~~~~~~~~~~~~~~\r\nIn file included from /home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/src/Identity.cc:18:\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:283:58: note: candidates are: ‘template<class PolicyT, class FeaturesT> template<class F> const typename F::Implementation<ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface() const’\r\n 283 | const typename F::template Implementation<Policy> *Interface() const;\r\n | ^~~~~~~~~\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:276:59: note: 
‘template<class PolicyT, class FeaturesT> template<class FeatureT> typename FeatureT::Implementation<ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Policy>* ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >::Interface()’\r\n 276 | typename FeatureT::template Implementation<Policy> *Interface();\r\n | ^~~~~~~~~\r\n/home/username_0/vcs/git/github/username_0/pkgbuilds/ignition-physics/src/ign-physics-ignition-physics3_3.1.0/include/ignition/physics/Entity.hh:216:11: note: ‘class ignition::physics::Entity< <template-parameter-1-1>, <template-parameter-1-2> >’ defined here\r\n 216 | class Entity\r\n | ^~~~~~\r\nmake[2]: *** [src/CMakeFiles/ignition-physics3.dir/build.make:147: src/CMakeFiles/ignition-physics3.dir/Identity.cc.o] Error 1\r\nmake[1]: *** [CMakeFiles/Makefile2:388: src/CMakeFiles/ignition-physics3.dir/all] Error 2\r\nmake: *** [Makefile:160: all] Error 2\r\n```\r\n\r\nOS: Arch Linux\r\ngcc: 10.2.0\r\nign-physics: 3.1.0",
"title": "Build error when compiling Identity.cc",
"type": "issue"
},
{
"action": "created",
"author": "noctrog",
"comment_id": 750912601,
"datetime": 1608824144000,
"masked_author": "username_1",
"text": "I can reproduce the same error on my machine. However, when using Clang instead of GCC, it works. I generate the build directory with:\r\n```bash\r\ncmake -S . -B ./build/Linux64 -G \"Unix Makefiles\" -DCMAKE_BUILD_TYPE=\"Debug\" -DCMAKE_C_COMPILER=/usr/bin/clang -DCMAKE_CXX_COMPILER=/usr/bin/clang++\r\n```\r\n\r\nOS: Arch Linux\r\nGCC: 10.2.0\r\nClang: 11.0.0\r\nign-physics: ign-physics3 branch",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "acxz",
"comment_id": 750925879,
"datetime": 1608828574000,
"masked_author": "username_0",
"text": "weird I wonder if it's a regression due to gcc: 10.2.0 then?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "noctrog",
"comment_id": 751124624,
"datetime": 1608848317000,
"masked_author": "username_1",
"text": "Seems to be working with GCC 9.3.0 too.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chapulina",
"comment_id": 751837666,
"datetime": 1609183774000,
"masked_author": "username_2",
"text": "Yeah I was able to reproduce the problem with gcc 10.2.0 on Ubuntu Focal.\r\n\r\nI have a fix on https://github.com/ignitionrobotics/ign-physics/pull/185, but I don't know if that's \"the correct\" way.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "acxz",
"comment_id": 755792263,
"datetime": 1609978790000,
"masked_author": "username_0",
"text": "@username_2 I believe we can close this now with #185 merged in",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "chapulina",
"comment_id": null,
"datetime": 1609979196000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 7 | 5,875 | false | false | 5,875 | true |
nic-delhi/AarogyaSetu_Android | nic-delhi | 625,377,770 | 96 | null | [
{
"action": "opened",
"author": "shraddha1112",
"comment_id": null,
"datetime": 1590555567000,
"masked_author": "username_0",
"text": "If user changes the language then the \r\n1. App name and State Names should be shown in selected language\r\n2. Covid Updates tab the state names dont get displayed as per user language selection\r\n3. Forgets the selection \"In your area, within radius of\"\r\n4. Text gets overlap in Tamil language in the section \"In your area, within radius of\" also complete view disturbed in \"Covid Updates\" section\r\n\r\n\r\nRegards,\r\nShraddha",
"title": "Language Change issues",
"type": "issue"
}
] | 1 | 1 | 420 | false | false | 420 | false |
pandas-dev/pandas | pandas-dev | 746,498,162 | 37,956 | null | [
{
"action": "opened",
"author": "nf78",
"comment_id": null,
"datetime": 1605785495000,
"masked_author": "username_0",
"text": "#### Is your feature request related to a problem?\r\n\r\nWhen a function is passed to DataFrame.apply (with any axis) unfortunately, this doesn't take advantage of the multiprocessing module, making it inefficient for large datasets, especially when more cores are available to work.\r\n\r\nOther modules like modin or dask, already implement this, but I think that pandas should implement by itself, if called for.\r\n\r\n#### Describe the solution you'd like\r\n\r\nIt should be able to work with the multiprocessing module out of the box, as an initial enhancement, and then in the future support other possible backends like joblib.\r\n\r\nAn example of application would be:\r\n\r\n`df.apply(lambda x: x['A'] * x['B'], axis=1, multiprocessing=True)`\r\n\r\n#### API breaking implications\r\n\r\nThis should not change established behavior, considering that the default value for the \"multiprocessing\" argument should be \"None\" by default.\r\n\r\nThe only concern would be to return the DataFrame without changing indices, and the result be the same as without multiprocessing.\r\n\r\n#### Describe alternatives you've considered\r\n\r\nI have also considered extra backend options for future enhancements of this implementation, like joblib, ray, dask.\r\n\r\n#### Additional context\r\n\r\nNOTE: I have already a proof-of-concept for the solution, so I can work a bit further on it.",
"title": "ENH: Add argument \"multiprocessing\" to DataFrame.apply() method (any axis)",
"type": "issue"
},
{
"action": "created",
"author": "anilkumarKanasani",
"comment_id": 737441865,
"datetime": 1606936825000,
"masked_author": "username_1",
"text": "Hello @username_0 ,\r\n\r\nI would like to work on this issue. I will provide a POC on this issue soon.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TomAugspurger",
"comment_id": 737453429,
"datetime": 1606938197000,
"masked_author": "username_2",
"text": "@username_1 there needs to be some discussion on the design first.\r\n\r\nOne problem here: \r\n\r\n```\r\ndf.apply(lambda x: x['A'] * x['B'], axis=1, multiprocessing=True)\r\n```\r\n\r\nThat doesn't allow any flexibility in how the parallelism is achieved. If we add this, it'd be better to standardize around something like `concurrent.futures`'s API. There are some upstream issues with that (you can't use `concurrent.futures.gather()` on tasks that don't subclass `concurrent.futures.Task` IIRC) but it's at least more flexible that simply multiprocessing or nothing.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nf78",
"comment_id": 739536372,
"datetime": 1607276290000,
"masked_author": "username_0",
"text": "@username_2 and @username_1 ,\r\n\r\nThanks for your replies and interest on this issue.\r\n\r\nWhen I suggested the argument \"multiprocessing = True\", I don't mean this to be a hard rule. This is meant to be done with a module like multiprocessing or joblib, just as an example, and the argument could actually be set to a **boolean True/False**, if we choose to use **only one backend** like multiprocessing (for simplicity).\r\n\r\nOn the other hand, we can actually pass to the argument **a value like \"None\", \"multiprocessing\" or \"joblib\"**, to allow the user to select the prefered backend. This should give more flexibility to the user on how the parallelism is achieved.\r\n\r\nI have already a POC based on joblib, splitting the dataframe into the number of available cores (this could also be a separate argument like \"n_jobs\"). Per example a dataset with 1 million rows would be split into 4 dataframes of 250 thousand rows, then use apply() method on each dataframe, then finally concatenating the result. This is similar to how other backends like dask and modin are already doing.\r\n\r\nLet me know your comments. Thanks.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "anilkumarKanasani",
"comment_id": 741769142,
"datetime": 1607520353000,
"masked_author": "username_1",
"text": "@username_0 ,\r\n\r\nThanks for your quick response. I understand the requirement for this issue.\r\n\r\nI will start working on this. Do we have any discussion form (or) virtual meeting rooms to discuss with community members ( for any questions or suggestions ).\r\n\r\nCan I know, the process to contribute for pandas package ? Do we need to block this issue ?\r\n\r\nThanks,\r\nAnil Kumar Kanasani",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nf78",
"comment_id": 742475155,
"datetime": 1607601302000,
"masked_author": "username_0",
"text": "Hi @username_1,\r\n\r\nThanks for your interest in contributing, but as I mentioned before on other notes and on the \"Additional context\", I already have a proof-of-concept that I can just work a bit further on it.\r\n\r\nI just need to know if it is ok to go ahead @username_2 ? Is it ok to actually pass to the argument a value like \"None\", \"multiprocessing\" or \"joblib\", to allow the user to select the prefered backend?\r\n\r\nThanks.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 3,934 | false | false | 3,934 | true |
desktop/desktop | desktop | 734,918,015 | 10,958 | null | [
{
"action": "opened",
"author": "letoileservicevip",
"comment_id": null,
"datetime": 1604361516000,
"masked_author": "username_0",
"text": "### Describe the feature or problem you’d like to solve\r\n\r\nA clear and concise description of what the feature or problem is. If this is a bug report, please use the bug report template instead.\r\n\r\n### Proposed solution\r\n\r\nHow will it benefit Desktop and its users?\r\n\r\n### Additional context\r\n\r\nAdd any other context like screenshots or mockups are helpful, if applicable.",
"title": "Microsoft.google.word",
"type": "issue"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720793355,
"datetime": 1604361560000,
"masked_author": "username_0",
"text": "Microsoft.outlook.cloud.app#moca",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720793491,
"datetime": 1604361591000,
"masked_author": "username_0",
"text": "Office@share.com#Microsoft",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720793762,
"datetime": 1604361642000,
"masked_author": "username_0",
"text": "Github@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720793994,
"datetime": 1604361684000,
"masked_author": "username_0",
"text": "Github@do",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720794128,
"datetime": 1604361708000,
"masked_author": "username_0",
"text": "Microsoft@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720794372,
"datetime": 1604361756000,
"masked_author": "username_0",
"text": "Google@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720794727,
"datetime": 1604361819000,
"masked_author": "username_0",
"text": "Jefsurprenant@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720794878,
"datetime": 1604361845000,
"masked_author": "username_0",
"text": "ICloud@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720795021,
"datetime": 1604361874000,
"masked_author": "username_0",
"text": "Onedrive@share.com",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "letoileservicevip",
"comment_id": 720795212,
"datetime": 1604361911000,
"masked_author": "username_0",
"text": "Outlook@share.com",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "steveward",
"comment_id": null,
"datetime": 1604370118000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 12 | 564 | false | false | 564 | false |
hannahqchou/intro-html | null | 784,527,969 | 2 | {
"number": 2,
"repo": "intro-html",
"user_login": "hannahqchou"
} | [
{
"action": "opened",
"author": "hannahqchou",
"comment_id": null,
"datetime": 1610479795000,
"masked_author": "username_0",
"text": "",
"title": "Add index.html",
"type": "issue"
}
] | 2 | 2 | 215 | false | true | 0 | false |