repo (string) | org (string ⌀) | issue_id (int64) | issue_number (int64) | pull_request (dict) | events (list) | user_count (int64) | event_count (int64) | text_size (int64) | bot_issue (bool) | modified_by_bot (bool) | text_size_no_bots (int64) | modified_usernames (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
mistic100/sql-parser | null | 612,335,136 | 14 | {
"number": 14,
"repo": "sql-parser",
"user_login": "mistic100"
} | [
{
"action": "created",
"author": "mistic100",
"comment_id": 624079665,
"datetime": 1588687882000,
"masked_author": "username_0",
"text": "@dependabot merge",
"title": null,
"type": "comment"
}
] | 2 | 2 | 15,302 | false | true | 17 | false |
microsoft/dtslint | microsoft | 627,360,528 | 295 | {
"number": 295,
"repo": "dtslint",
"user_login": "microsoft"
} | [
{
"action": "opened",
"author": "ExE-Boss",
"comment_id": null,
"datetime": 1590766992000,
"masked_author": "username_0",
"text": "Fixes <https://github.com/microsoft/dtslint/issues/281>\r\n\r\n## Depends on:\r\n- [ ] <https://github.com/DefinitelyTyped/DefinitelyTyped/pull/45148>",
"title": "chore(deps): Move `typescript` to `peerDependencies`",
"type": "issue"
},
{
"action": "created",
"author": "sandersn",
"comment_id": 636085929,
"datetime": 1590772377000,
"masked_author": "username_1",
"text": "I think it's actually the fault of dtslint's dependencies dts-critic and @definitelytyped/definitions-parser. dtslint needs to depend directly on typescript so that it can be run via package.json's `bin` entry.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ExE-Boss",
"comment_id": 636089324,
"datetime": 1590772833000,
"masked_author": "username_0",
"text": "That will also work with peer dependencies.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sandersn",
"comment_id": 636102806,
"datetime": 1590774630000,
"masked_author": "username_1",
"text": "https://github.com/microsoft/dtslint/pull/296 updated dtslint's dependencies. I published it and will watch DT to see if it works.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sandersn",
"comment_id": 636103037,
"datetime": 1590774664000,
"masked_author": "username_1",
"text": "(It works locally.)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Maxim-Mazurok",
"comment_id": 636291147,
"datetime": 1590823001000,
"masked_author": "username_2",
"text": "That doesn't seem to be the case. Even with latest `dtslint`, I still can reproduce the issue.\r\nHere's what happens:\r\n- I have `typescript@3.9.0-dev.20200321` as a dependency of my project.\r\n- I'm running `dtslint`\r\n- `dtslint` imports and calls `tslint`\r\n- `tslint` doesn't have `typescript` dependency, so it uses my projects version to parse ts files and convert them into linkable structure\r\n- `dtslint` rule `voidReturnRule` also imports `typescript` but since `dtslint` has `typescript` dependency, it uses local ts version which has different enum values\r\n\r\nHere's an illustration:\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Maxim-Mazurok",
"comment_id": 636293661,
"datetime": 1590824440000,
"masked_author": "username_2",
"text": "I just tried this fork (PR) on my project using `npm link` and it worked perfectly!\r\nI was running it using `bin` like so:\r\n```sh\r\n./node_modules/.bin/dtslint types/gapi.client.abusiveexperiencereport\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sandersn",
"comment_id": 637648243,
"datetime": 1591113790000,
"masked_author": "username_1",
"text": "OK, 3.6.10 is published, let me know if there are problems.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "IvanGoncharov",
"comment_id": 637674043,
"datetime": 1591116290000,
"masked_author": "username_3",
"text": "@username_1 It fixed every for `graphql-js`.\r\nThe next challenge is to understand why our typing is failing `strict-export-declare-modifiers` but I think it is a different bug.",
"title": null,
"type": "comment"
}
] | 4 | 9 | 1,685 | false | false | 1,685 | true |
snyk/snyk-cli-interface | snyk | 684,737,418 | 38 | {
"number": 38,
"repo": "snyk-cli-interface",
"user_login": "snyk"
} | [
{
"action": "opened",
"author": "lili2311",
"comment_id": null,
"datetime": 1598280862000,
"masked_author": "username_0",
"text": "- [x] Ready for review\r\n- [x] Follows [CONTRIBUTING](https://github.com/snyk/snyk/blob/master/.github/CONTRIBUTING.md) rules\r\n- [x] Reviewed by Snyk internal team\r\n\r\n#### What does this PR do?\r\nreturning targetFile in plugin can cause issue with project unique constraints, returning here when used outside of that.\r\n\r\nUsed in: https://github.com/snyk/snyk-gradle-plugin/pull/142/files",
"title": "feat: optional meta targetFile",
"type": "issue"
},
{
"action": "created",
"author": "snyksec",
"comment_id": 680783316,
"datetime": 1598435918000,
"masked_author": "username_1",
"text": ":tada: This PR is included in version 2.9.0 :tada:\n\nThe release is available on:\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/@snyk/cli-interface/v/2.9.0)\n- [GitHub release](https://github.com/snyk/snyk-cli-interface/releases/tag/v2.9.0)\n\nYour **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 747 | false | false | 747 | false |
dotnet/runtime | dotnet | 558,469,565 | 30,180 | null | [
{
"action": "opened",
"author": "Symbai",
"comment_id": null,
"datetime": 1562573240000,
"masked_author": "username_0",
"text": "Similar to https://www.newtonsoft.com/json/help/html/JsonObjectAttributeOptIn.htm an option to serialize only specific properties would be very useful. Especially if the amount of properties to include are less than the properties to exclude.\r\n\r\nFor example I have one base class `Animal` where lots of animal type classes inherits from: `Cat, Dog, Horse, Monkey, ...` (and imagine https://github.com/dotnet/corefx/issues/39031 gets addressed sometime). It's easier to just include the property by a custom attribute rather to exclude all unwanted for maintainability reasons. For the same reasons I also don't want to mess around with a sort of converter or something when there are many classes.\r\n\r\nIn Newtonsoft Json it was easy and worked very well by just adding a custom attribute on top of the class and skip >300 JsonIgnore attributes on every property.",
"title": "Request: System.Text.Json Serialization with Opt-In",
"type": "issue"
},
{
"action": "created",
"author": "AraHaan",
"comment_id": 650478155,
"datetime": 1593226713000,
"masked_author": "username_1",
"text": "@layomia is this still in the works for 5.0?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Symbai",
"comment_id": 947346758,
"datetime": 1634708627000,
"masked_author": "username_0",
"text": "@eiriktsarpalis Mine was posted on July, the duplicate was posted in December. So the others has duplicated mine actually.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 1,027 | false | false | 1,027 | false |
CCExtractor/vardbg | CCExtractor | 557,199,569 | 2 | null | [
{
"action": "opened",
"author": "dmwyatt",
"comment_id": null,
"datetime": 1580344375000,
"masked_author": "username_0",
"text": "It's jarring and confusing, particularly when the next line is many lines away from the current line.\r\n\r\nFor example, in the animation in the README, when it scrolls to the `insertion_sort` function implementation, it's not immediately clear what happened.\r\n\r\nA possibly better implementation would have some sort of animation showing the scroll to that point and the movement of the highlight to that point.",
"title": "The executing line highlighting should animate the scroll to line",
"type": "issue"
},
{
"action": "created",
"author": "kdrag0n",
"comment_id": 580100446,
"datetime": 1580365011000,
"masked_author": "username_1",
"text": "I'm not so sure it's a good idea because it might be a bit distracting and/or out-of-place. For example, most terminals don't have smooth scrolling in order to keep all the text aligned and some editors behave the same way.\r\n\r\nI'll look into it though, thanks for the suggestion.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dmwyatt",
"comment_id": 580368084,
"datetime": 1580405734000,
"masked_author": "username_0",
"text": "Yes, I agree that most terminals do not do this. However, it seems like that might not be so relevant to a tool that is at least partly designed for learning.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 846 | false | false | 846 | false |
SocialiteProviders/Providers | SocialiteProviders | 522,785,129 | 367 | null | [
{
"action": "opened",
"author": "philmarc",
"comment_id": null,
"datetime": 1573729388000,
"masked_author": "username_0",
"text": "By default, when authorizing with Etsy, all of the [Etsy's permission scopes](https://www.etsy.com/developers/documentation/getting_started/oauth#section_permission_scopes) are enabled.\r\n\r\nIn order to restrict those permissions, is there a way to override the url returned by [`urlTemporaryCredentials()`](https://github.com/SocialiteProviders/Providers/blob/master/src/Etsy/Server.php#L16) method:\r\n```\r\npublic function urlTemporaryCredentials()\r\n{\r\n return 'https://openapi.etsy.com/v2/oauth/request_token';\r\n}\r\n```\r\n\r\nto\r\n\r\n`https://openapi.etsy.com/v2/oauth/request_token?scope=email_r%20listings_r`",
"title": "Etsy permission scopes",
"type": "issue"
},
{
"action": "closed",
"author": "philmarc",
"comment_id": null,
"datetime": 1573820101000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 606 | false | false | 606 | false |
scandihealth/lpr3-docs | scandihealth | 426,921,055 | 334 | null | [
{
"action": "opened",
"author": "AnneRegionSyd",
"comment_id": null,
"datetime": 1553855659000,
"masked_author": "username_0",
"text": "We have recived some response files containing business rules but in LPR there is no business rules to see. Please explain why there is a difference. \r\n\r\nExample: 3e3e209d-d402-5ebb-bd1e-cc7c77e61084\r\nReporting 26.03.2019 contains a business rule and reporting 28.03.2019 the business rule is gone - but not in the responsfile.",
"title": "Difference between responsfiles and LPR",
"type": "issue"
},
{
"action": "created",
"author": "KirstenLHSDS",
"comment_id": 478454615,
"datetime": 1554100785000,
"masked_author": "username_1",
"text": "Please report this to SDS Servicedesk: servicedesk@sundhedsdata.dk\r\nThank you!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "AnneRegionSyd",
"comment_id": null,
"datetime": 1554105646000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 405 | false | false | 405 | false |
Azgaar/Fantasy-Map-Generator | null | 581,470,422 | 412 | {
"number": 412,
"repo": "Fantasy-Map-Generator",
"user_login": "Azgaar"
} | [
{
"action": "opened",
"author": "chung-nguyen",
"comment_id": null,
"datetime": 1584239019000,
"masked_author": "username_0",
"text": "The function to save as PNG/JPG image is using window's size which might not be useful for most situations. I suggest using the map canvas size (which users type in the Options UI) instead.\r\n\r\nThis PR resize the cloned SVG node and save the image file using the map canvas size.\r\n\r\nConsider my situation when doing a grand strategy game, I would need:\r\n+ The alpha mask of landmass to determine where is water and where is land.\r\n+ The color map of provinces.\r\n+ Data of rivers, routes, etc.\r\n\r\nIn my game prototype, the map will need to be scroll-able so I need the full size of everything rather just a portion of it. Therefore I need the ability to export the full-size map rather the screenshot-size.",
"title": "Save image PNG/JPG with map canvas size instead of screenshot size",
"type": "issue"
},
{
"action": "created",
"author": "Azgaar",
"comment_id": 599196388,
"datetime": 1584270805000,
"masked_author": "username_1",
"text": "Hi @username_0, what is this custom save function?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chung-nguyen",
"comment_id": 599198191,
"datetime": 1584271863000,
"masked_author": "username_0",
"text": "I already reverted that custom save part. Sorry for the inconvenience.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 826 | false | false | 826 | true |
bytedance/xgplayer | bytedance | 642,413,459 | 396 | null | [
{
"action": "opened",
"author": "jinsom",
"comment_id": null,
"datetime": 1592673530000,
"masked_author": "username_0",
"text": "xgplayer-playing这个class无论暂停还是播放中都会有。\r\nxgplayer-pause 只有暂停的时候有",
"title": "希望视频在播放中,父级可以加上一个唯一的class用于区分,",
"type": "issue"
},
{
"action": "created",
"author": "zhangxin92",
"comment_id": 647071397,
"datetime": 1592708393000,
"masked_author": "username_1",
"text": "具体是用于区分什么哈?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jinsom",
"comment_id": 647076638,
"datetime": 1592712977000,
"masked_author": "username_0",
"text": "想获取当前正在播放中的视频的实例。互斥播放我找不到更好的思路,每个视频id都是 “video-唯一id”",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "zhangxin92",
"comment_id": null,
"datetime": 1610275872000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "zhangxin92",
"comment_id": 757455392,
"datetime": 1610275872000,
"masked_author": "username_1",
"text": "可以在实例化后自己添加class\r\nPlayer.util.addClass(player.root, 'video-xxx')",
"title": null,
"type": "comment"
}
] | 2 | 5 | 188 | false | false | 188 | false |
dotnet/aspnetcore | dotnet | 689,769,459 | 25,481 | {
"number": 25481,
"repo": "aspnetcore",
"user_login": "dotnet"
} | [
{
"action": "opened",
"author": "captainsafia",
"comment_id": null,
"datetime": 1598929760000,
"masked_author": "username_0",
"text": "This PR increases the amount of data buffered to memory before switching to buffering on disk from 30kb to 1mb. The table below highlights (requests per second, mean latency) across different combinations of input size and threshold size for this scenario.\r\n\r\n| | 2kb input | 60kb input | 2mb input |\r\n|-----------------|------------------|------------------|------------------|\r\n| 30kb threshold | (7253.09, 37.08) | (538.29, 473.52) | (109.69, 1320.0) |\r\n| 512kb threshold | (7424.88, 36.28) | (519.51, 486.47) | (107.7, 1350.0) |\r\n| 1mb threshold | (52428.87, 7.92) | (518.21, 483.33) | (109.37, 1240.0) |\r\n| 2mb threshold | (51887.11, 7.95) | (521.88, 488.08) | (107.43, 1320.0) |",
"title": "Increase memory buffering threshold for input formatter",
"type": "issue"
},
{
"action": "created",
"author": "Tratcher",
"comment_id": 685091758,
"datetime": 1598989375000,
"masked_author": "username_1",
"text": "@davidfowl had proposed raising this to match the default request body size limit of 30mb.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tratcher",
"comment_id": 685093790,
"datetime": 1598989655000,
"masked_author": "username_1",
"text": "Any explanation why the 60kb input is the same or worse? Only the 2kb input seems to be showing improvements.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "halter73",
"comment_id": 685227561,
"datetime": 1599010297000,
"masked_author": "username_2",
"text": "If the numbers support this, it's an interesting idea. However, I'm hesitant to increase this limit all the way to 30 MB because today in this scenario we buffer up to 1030 KB raw bytes per HTTP/1.1 Connection (or HTTP/2 stream) AFAIK (up to 1 MB in Kestrel's input Pipe + up to 30 KB in FileBufferingReadStream's MemoryStream). Increasing this to 2 MB doesn't sound too scary. Increasing this to 31 MB seems like a huge jump.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "captainsafia",
"comment_id": 685928649,
"datetime": 1599072438000,
"masked_author": "username_0",
"text": "Closing this while we sort out the spookiness with the perf stats per our Teams chat.",
"title": null,
"type": "comment"
}
] | 3 | 5 | 1,436 | false | false | 1,436 | false |
cfpb/hmda-platform | cfpb | 349,585,163 | 1,714 | null | [
{
"action": "opened",
"author": "chynnakeys",
"comment_id": null,
"datetime": 1533919565000,
"masked_author": "username_0",
"text": "Edit #: V639\r\nEdit Type: Validity\r\nCategory: LAR\r\nData Fields: Race of Co-Applicant or Co-Borrower: 1; Race of Co-Applicant or Co-Borrower: 2; Race of Co-Applicant or Co-Borrower: 3; Race of Co-Applicant or Co-Borrower: 4; Race of Co-Applicant or Co-Borrower: 5; Race of Co-Applicant or Co-Borrower Collected on the Basis of Visual Observation or Surname; Race of Co-Applicant or Co-Borrower: Free Form Text Field for American Indian or Alaska Native Enrolled or Principal Tribe; Race of Co-Applicant or Co-Borrower: Free Form Text Field for Other Asian; Race of Co-Applicant or Co-Borrower: Free Form Text Field for Other Pacific Islander\r\nDescription:\r\n\r\n_Instructional Text for Users:_ An invalid Race data field was reported. Please review the information below and update your file accordingly:\r\n\r\n_Edit Logic:_\r\n\r\n1. Race of Co-Applicant or Co-Borrower Collected on the Basis of Visual Observation or Surname must equal 1, 2, 3, or 4, and cannot be left blank. \r\n2. If Race of Co-Applicant or Co-Borrower Collected on the Basis of Visual Observation or Surname equals 1, then Race of Co-Applicant or Co-Borrower: 1 must equal 1, 2, 3, 4, or 5; and Race of Co-Applicant or Co-Borrower: 2; Race of Co-Applicant or Co-Borrower: 3; Race of Co-Applicant or Co-Borrower: 4; Race of Co-Applicant or Co-Borrower: 5 must equal 1, 2, 3, 4, or 5, or be left blank. \r\n3. 
If Race of Co-Applicant or Co-Borrower Collected on the Basis of Visual Observation or Surname equals 2, then Race of Co-Applicant or Co-Borrower: 1 must equal 1, 2, 21, 22, 23, 24, 25, 26, 27, 3, 4, 41, 42, 43, 44, 5 or 6, and cannot be left blank, unless a race is provided in Race of Co-Applicant or Co-Borrower: Free Form Text Field for American Indian or Alaska Native Enrolled or Principal Tribe, Race of Co-Applicant or Co-Borrower: Free Form Text Field for Other Asian, or Race of Co-Applicant or Co-Borrower: Free Form Text Field for Other Pacific Islander; and Race of Co-Applicant or Co-Borrower: 2; Race of Co-Applicant or Co-Borrower: 3; Race of Co-Applicant or Co-Borrower: 4; Race of Co-Applicant or Co-Borrower: 5 must equal 1, 2, 21, 22, 23, 24, 25, 26, 27, 3, 4, 41, 42, 43, 44, 5, or be left blank.",
"title": "2018 Edits: V639",
"type": "issue"
},
{
"action": "closed",
"author": "jmarin",
"comment_id": null,
"datetime": 1536592045000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 2,182 | false | false | 2,182 | false |
plk/biber | null | 552,852,642 | 306 | null | [
{
"action": "opened",
"author": "mkunes",
"comment_id": null,
"datetime": 1579610461000,
"masked_author": "username_0",
"text": "When compiling the document below using `latexmk`, the compilation runs forever, because the `.bbl` file keeps changing:\r\n\r\nIn `\\refsection{0}` of the `.bbl` file, the line `\\keyalias{TestAlias}{TestSmith}` will only appear on the 1st, 3rd, 5th, etc. run of `biber`. On the 2nd, 4th, 6th, etc. run it is missing. This seems to only affect refsection 0 - for sections 1+ the command appears correctly. Something similar also happens in the `.bcf` file.\r\n\r\nI've managed to resolve the issue by adding `\\newrefsection` at the beginning of my document. But the behaviour still feels like a bug.\r\n\r\nTested with biber version 2.14 (fresh installation of TeX Live 2019). The same issue was also present in v2.10 (TeX Live 2017).\r\n\r\nMWE:\r\n\r\n```\r\n\\documentclass{article} \r\n\r\n\\usepackage{filecontents}\r\n\\begin{filecontents*}{bibliography.bib}\r\n\r\n@article{TestAuthor1,\r\n\ttitle={A Fake Paper},\r\n\tauthor={Author, A. and Advisor, B.},\r\n\tpages={7--8},\r\n\tyear={2014}\r\n}\r\n\r\n@inproceedings{TestAuthor2,\r\n\ttitle={Another Fake Paper},\r\n\tauthor={Author, A. 
and Colleague, C.},\r\n\tbooktitle={2016 Placeholder Conference on Citation Testing},\r\n\tpages={5--6},\r\n\tyear={2016}\r\n}\r\n\r\n@inproceedings{TestSmith,\r\n\ttitle={This is a Test},\r\n\tauthor={Smith, John and Jones, Jane},\r\n\tbooktitle={2016 Placeholder Conference on Citation Testing},\r\n\tpages={3--4},\r\n\tyear={2016},\r\n\tids={TestAlias},\r\n}\r\n\\end{filecontents*}\r\n\r\n\\usepackage[backend=biber]{biblatex}\r\n\r\n\\addbibresource{bibliography.bib}\r\n\r\n\\DeclareBibliographyCategory{my_papers}\r\n\\addtocategory{my_papers}{TestAuthor1,TestAuthor2} \r\n\r\n\\begin{document}\r\n% \\newrefsection % if I add this, everything works\r\n\r\n\\textcite{TestSmith}\r\n\r\n\\textcite{TestAlias}\r\n\r\n\\textcite{TestAuthor1}\r\n\r\n\\printbibliography\r\n\r\n\\newrefsection\r\n\\nocite{*}\r\n\\printbibliography[category=my_papers,title={Authored and Co-authored Publications}]\r\n\r\n\\end{document}\r\n```",
"title": "With \\newrefsection, \\keyalias commands for refsection 0 are missing from the bbl file on every 2nd run, resulting in an endless latexmk compilation",
"type": "issue"
},
{
"action": "created",
"author": "plk",
"comment_id": 576793970,
"datetime": 1579628159000,
"masked_author": "username_1",
"text": "@username_2 - What is your opinion on this? The issue is that writing of aliases to the .bcf only happens in `\\blx@endrefsection`. When there is an active >0 refsection at document close, the open refsection is closed automatically in `\\AtEndDocument`. This means that refsection 0 is never closed and it's aliases are not written to the .bcf. This is basically a long-standing bug which really only appears with aliases as that's what are processed at the end of a refsection. Anything like this will result in refsection not being cleanly closed:\r\n\r\n```\r\n\\begin{document}\r\n\r\n...\r\n\r\n\\newrefsection\r\n\r\n...\r\n\r\n\\end{document}\r\n\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "moewew",
"comment_id": 578391481,
"datetime": 1579944803000,
"masked_author": "username_2",
"text": "I can't claim to have fully understood what is going on, but if this is only about refsections not ending cleanly, https://github.com/username_2/biblatex/commit/8b9ee045d31b0f3ae592c0a458871df709ae7bad be enough.\r\n\r\nWith that fix the MWE seems to work fine again.\r\n\r\n@username_1 Let me know if you want me to merge this.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "plk",
"comment_id": 578403945,
"datetime": 1579956827000,
"masked_author": "username_1",
"text": "I think that this fix is fine - it's a long-standing issue that really only appear with section 0 aliases - let's merge it.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "moewew",
"comment_id": 578410333,
"datetime": 1579962258000,
"masked_author": "username_2",
"text": "OK. Merged https://github.com/username_1/biblatex/commit/8b9ee045d31b0f3ae592c0a458871df709ae7bad to dev.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "plk",
"comment_id": 578410983,
"datetime": 1579962772000,
"masked_author": "username_1",
"text": "Please try DEV version 3.15 (with biber 2.15 DEV) from Sourceforge.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mkunes",
"comment_id": null,
"datetime": 1580122158000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "mkunes",
"comment_id": 578692537,
"datetime": 1580122158000,
"masked_author": "username_0",
"text": "Yes, the new version works well, my project now compiles without issues. Thank you both!\r\n\r\n(Also thanks for explaining what the actual cause was. Now I know that I can just replace the `\\newrefsection` with `\\begin{refsection} ... \\end{refsection}` and it will work even with older versions.)",
"title": null,
"type": "comment"
}
] | 3 | 8 | 3,380 | false | false | 3,380 | true |
danijar/handout | null | 479,177,158 | 25 | null | [
{
"action": "opened",
"author": "hannes-brt",
"comment_id": null,
"datetime": 1565386837000,
"masked_author": "username_0",
"text": "For my workflow, the greatest problem with this package at the moment is that it throws an error when running `handout.Handout()` in an interactive shell:\r\n\r\n```\r\nhandout.Handout('/tmp')\r\nTraceback (most recent call last):\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 3325, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-4-d909ee1709cf>\", line 1, in <module>\r\n handout.Handout('/tmp')\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/site-packages/handout/handout.py\", line 25, in __init__\r\n self._source_text = inspect.getsource(module)\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/inspect.py\", line 973, in getsource\r\n lines, lnum = getsourcelines(object)\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/inspect.py\", line 955, in getsourcelines\r\n lines, lnum = findsource(object)\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/inspect.py\", line 768, in findsource\r\n file = getsourcefile(object)\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/inspect.py\", line 684, in getsourcefile\r\n filename = getfile(object)\r\n File \"/h/hannes/anaconda3/envs/microexon-code-tf2/lib/python3.6/inspect.py\", line 666, in getfile\r\n 'function, traceback, frame, or code object'.format(object))\r\nTypeError: None is not a module, class, method, function, traceback, frame, or code object\r\n```\r\nThis is a problem for my (Jupyter-less) workflow as I like to use `#%%` in PyCharm to create run-able cells and send them to a shell with `Ctrl+Enter`. This way I can create my figures and do my analysis interactively and simply run the script at the end to obtain a rendered report. 
However, the error above throws a wrench into that workflow.\r\n\r\nI think a workable simple solution would be for handout to simply do nothing when it detects an interactive shell.",
"title": "Prevent error in interactive shell",
"type": "issue"
},
{
"action": "created",
"author": "danijar",
"comment_id": 520087440,
"datetime": 1565390742000,
"masked_author": "username_1",
"text": "Thanks for reporting! I'm wondering what source file you'd expect the handout to be built from? If there is no source file, there won't be any comment cells or code cells. In principle, it would still be possible to allow to add media to a doc if this is a reasonable use case.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "epogrebnyak",
"comment_id": 520144019,
"datetime": 1565438996000,
"masked_author": "username_2",
"text": "Not for immediate implementation, but just to consider a workflow:\r\n\r\n```\r\n# a user creates an `InteractiveHandout` instance \r\nidoc = InteractiveHandout('/tmp')\r\n# alternative: \r\nidoc = Handout('/tmp', interactive=True)\r\n# user adds text, graphs, video, etc in interactive session calls \r\nidoc.add_text(\"Print this text\") \r\n# a user creates an an artefact\r\nidoc.show()\r\n```\r\nThis way there is no source to build from, but a user can try different things in console before writing a script to persist the report. Is this something close to your workflow, @username_0?\r\n\r\nI like the idea because it gives more food for thought for #9 on structuring the classes for different sources and outputs. This workflow suggests there may be no source as text file, just calls to handout instance.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "hannes-brt",
"comment_id": 520162000,
"datetime": 1565454361000,
"masked_author": "username_0",
"text": "I don't necessarily think the package needs to *work* in terms of producing a report in a shell, but at least it shouldn't raise errors. I think the closest model from existing packages is `pypublish` from Pweave (http://mpastell.com/pweave/pypublish.html). Since it only uses special comment syntax to define markdown text, the code can run in a shell, as a script (`python myscript.py`), or in pypublish (`pypublish myscript.py`). The first two don't produce a report, but still run the code which you sometimes want to do. Pypublish has some problems of its own and Pweave is no longer being developed, so I'd hope `handout` can step into that gap, but I think that it's important running the code in a shell doesn't break things.\r\n\r\n@username_2 I don't think it's good to introduce different options or objects for interactive mode, because that again requires changing the code when running in different context. I think the simplest and easiest solution is for `handout` to simply do nothing when an interactive shell is detected.\r\n\r\n@username_1 Since you're local - I'd be happy to swap some ideas if you're ever at the Vector Institute.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danijar",
"comment_id": 520164663,
"datetime": 1565456513000,
"masked_author": "username_1",
"text": "There a lot of good points here. Instead of an interactive mode, we could have a flag to not include any source code. That would work in all environments and would also cover an additional use case people have asked for. Would that be suitable here?\r\n\r\n@username_2 Yes, let's think about making the main class independent of the input and output types. Can you think of any inputs other than \"current file\" and \"nothing\"?\r\n\r\n@username_0 I'm happy to chat, could you shoot me an email so we can coordinate a time, please?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "epogrebnyak",
"comment_id": 520602938,
"datetime": 1565645364000,
"masked_author": "username_2",
"text": "Under the hood in Handout class we merge a chain of blocks derived from static analysis of a source file and a chain of blocks derived from add_x() сalls, in principle they can be separated. Answering @username_1 question - for the source file there is either current file or nothing that can be used.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danijar",
"comment_id": 521337722,
"datetime": 1565803143000,
"masked_author": "username_1",
"text": "@username_2 Do you think they we should separate them?",
"title": null,
"type": "comment"
}
] | 3 | 7 | 5,055 | false | false | 5,055 | true |
OpenNebula/one | OpenNebula | 275,206,965 | 677 | null | [
{
"action": "opened",
"author": "OpenNebulaProject",
"comment_id": null,
"datetime": 1511136944000,
"masked_author": "username_0",
"text": "---\n\n\nAuthor Name: **Javi Fontan** (Javi Fontan)\nOriginal Redmine Issue: 2242, https://dev.opennebula.org/issues/2242\nOriginal Date: 2013-07-29\n\n\n---\n\nThe first command opennebula does with the database is a @CREATE DATABASE IF NOT EXIST@. When the database is already create this permission should not be required.",
"title": "OpenNebula shouldn't need database creation permission in mysql",
"type": "issue"
},
{
"action": "closed",
"author": "juanmont",
"comment_id": null,
"datetime": 1512139363000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 315 | false | false | 315 | false |
KyoriPowered/adventure-text-minimessage | KyoriPowered | 671,584,740 | 38 | null | [
{
"action": "opened",
"author": "Draycia",
"comment_id": null,
"datetime": 1596360966000,
"masked_author": "username_0",
"text": "When attempting to reset all formatting due to variable input, rainbow (and I assume gradients as well) are not auto closed / reset. \r\n\r\nFormatting used: \r\n`<gray>{[<red>Staff<gray>]} <rainbow>%player_displayname% <reset><gray>» <white><message>`\r\n\r\nResult: \r\n\r\n\r\nExpected outcome: \r\n",
"title": "<reset> tag does not reset <rainbow> tags",
"type": "issue"
},
{
"action": "closed",
"author": "MiniDigger",
"comment_id": null,
"datetime": 1596637860000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 508 | false | false | 508 | false |
celo-org/celo-monorepo | celo-org | 655,919,004 | 4,388 | null | [
{
"action": "closed",
"author": "alecps",
"comment_id": null,
"datetime": 1594698256000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 28 | false | true | 0 | false |
folio-org/ui-eholdings | folio-org | 571,295,832 | 969 | {
"number": 969,
"repo": "ui-eholdings",
"user_login": "folio-org"
} | [
{
"action": "opened",
"author": "Godlevskyi",
"comment_id": null,
"datetime": 1582718828000,
"masked_author": "username_0",
"text": "## Purpose\r\nTags filter multi-select component renders only 25 tags. All other tags rendered as empty.\r\n\r\nThe problem is that the titles paginated and only one page was taken,",
"title": "UIEH-843: Tags Filter: Display all Tags limited to 25 tags",
"type": "issue"
}
] | 3 | 5 | 10,262 | false | true | 175 | false |
spockframework/spock | spockframework | 669,979,187 | 1,207 | {
"number": 1207,
"repo": "spock",
"user_login": "spockframework"
} | [
{
"action": "opened",
"author": "leonard84",
"comment_id": null,
"datetime": 1596212215000,
"masked_author": "username_0",
"text": "<!-- Reviewable:start -->\nThis change is [<img src=\"https://reviewable.io/review_button.svg\" height=\"34\" align=\"absmiddle\" alt=\"Reviewable\"/>](https://reviewable.io/reviews/spockframework/spock/1207)\n<!-- Reviewable:end -->",
"title": "Parallel execution",
"type": "issue"
},
{
"action": "created",
"author": "marcphilipp",
"comment_id": 716453790,
"datetime": 1603707821000,
"masked_author": "username_1",
"text": "🎉",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "szpak",
"comment_id": 716861922,
"datetime": 1603751862000,
"masked_author": "username_2",
"text": "I've check it on modern i7 with 12 vcore and the execution time of `:spock-specs:test` decreased from 34 seconds to 16. Nice :).",
"title": null,
"type": "comment"
}
] | 4 | 4 | 352 | false | true | 352 | false |
frostinassiky/gtad | null | 649,046,813 | 21 | null | [
{
"action": "opened",
"author": "yuanqzhang",
"comment_id": null,
"datetime": 1593615170000,
"masked_author": "username_0",
"text": "I have my own dataset pre-trained by I3D model on Kinetics, and output the result of avg-pool layer with 1024 dims. The length of the list is processed after frame extraction.\r\nCan I only use this RGB features as my input of the model, as the extraction of optical flow will cost too much time?\r\n Many thanks for the time.",
"title": "Can I train my own by only using RGB features?",
"type": "issue"
},
{
"action": "created",
"author": "frostinassiky",
"comment_id": 652552056,
"datetime": 1593624596000,
"masked_author": "username_1",
"text": "Hi @username_0 . Thanks for your interest in G-TAD. \r\n\r\nI agree with you that extracting the optical flow takes too much time. You can do as you described: only take the RGB feature as model input.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ZhengLeon",
"comment_id": 653745837,
"datetime": 1593856228000,
"masked_author": "username_2",
"text": "Hi @username_1 \r\nI wonder how to get the Unet_test.npy for my own dataset?\r\nIs it come from the output proposals by your model and the UntrimmedNet?\r\n\r\nThanks a lot!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "frostinassiky",
"comment_id": 653762799,
"datetime": 1593867344000,
"masked_author": "username_1",
"text": "Dear @username_2 \r\n`uNet_test.npy` contains the classification results for each video in the test set. You can use any video classification model to produce the prediction. See #10",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ZhengLeon",
"comment_id": 653830520,
"datetime": 1593912703000,
"masked_author": "username_2",
"text": "Thanks so much for your response! @username_1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yuanqzhang",
"comment_id": 654221892,
"datetime": 1594040621000,
"masked_author": "username_0",
"text": "many thanks",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "yuanqzhang",
"comment_id": null,
"datetime": 1594040625000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "Shuhan136",
"comment_id": 667834031,
"datetime": 1596436697000,
"masked_author": "username_3",
"text": "can you tell me how can i get the result of avg-pool layer with 1024 dims from RGB frame or Optical flow frame?",
"title": null,
"type": "comment"
}
] | 4 | 8 | 1,037 | false | false | 1,037 | true |
dimigoin/dimigoin-front-v2 | dimigoin | 585,744,381 | 36 | {
"number": 36,
"repo": "dimigoin-front-v2",
"user_login": "dimigoin"
} | [
{
"action": "opened",
"author": "cokia",
"comment_id": null,
"datetime": 1584891056000,
"masked_author": "username_0",
"text": "1. Set up emotion\r\n2. Refactor styled -> emotion\r\n3. Set up storybook\r\n4. Implement mainpage\r\n5. Implement IE Redirect\r\n6. Fix Eslint\r\n7. Add dimi input to dimiru\r\n8. Set up automatic deployment",
"title": "Merge develop into master ",
"type": "issue"
}
] | 2 | 2 | 448 | false | true | 145 | false |
carllerche/tower-web | null | 357,978,171 | 103 | null | [
{
"action": "opened",
"author": "sunng87",
"comment_id": null,
"datetime": 1536310433000,
"masked_author": "username_0",
"text": "I was looking into tower-web to integrate handlebars into its system.\r\n\r\nThe typical usage of handlebars in web application is to create a server-scoped registry, that holds all templates in memory. We use the registry to render templates returned from handler functions. \r\n\r\nThe ideal API for tower-web in my mind is to create a `Template` struct, which contains template `name` and template `data`. The `Template` struct will implement `Response` to render the data structure into actual http response. The problem is `Response` trait only provides a request-scoped `Context` so there seems to be no way to access server-scoped handlebars registry.\r\n\r\nSo is it possible to bind the registry into some data structure that has server-scoped lifetime? And be accessible from `Response` trait and middlewares.",
"title": "Fitting handlebars template engine into Response/Serializer or Middleware system",
"type": "issue"
},
{
"action": "created",
"author": "lnicola",
"comment_id": 419558079,
"datetime": 1536352440000,
"masked_author": "username_1",
"text": "Does https://github.com/username_2/tower-web/pull/98 help?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sunng87",
"comment_id": 419641009,
"datetime": 1536412189000,
"masked_author": "username_0",
"text": "It seems relevant. But for now the `Config` is only available for extractors. We'd better add it to `Context` of `Response` too.\r\n\r\nThe `Template` response works like `File` pretty much except it requires a server-scoped config, the registry.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "lnicola",
"comment_id": 419644443,
"datetime": 1536415638000,
"masked_author": "username_1",
"text": "Oh, I didn't notice that feature only works for extractors. A quick workaround might be to pass the registry (an `Arc` of it) to your `Template`, but adding `Config` to responses would probably be better.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sunng87",
"comment_id": 419688624,
"datetime": 1536464626000,
"masked_author": "username_0",
"text": "Checked the code I found the response is implemented via macros, which extractors were Service. It seems no easy way to access `Config` from `Resource`. Why not make `Resource` and `Response` serialization a `Middleware`. Iron was doing it that way and I find it works pretty well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 420329399,
"datetime": 1536682259000,
"masked_author": "username_2",
"text": "It should be possible to get the config structure into `response::Context` (that is what it is there for).\r\n\r\nThe context takes a `T: Serializer`, which is service scoped. That trait could have a fn added to access the config.\r\n\r\nReturning a `Template` type is a good first step, however, I would like to figure out a way to avoid having to do that. It would be nice to do something like:\r\n\r\n```rust\r\n#[derive(Response)]\r\nstruct Index {\r\n    message: String,\r\n}\r\n\r\n#[derive(Response)]\r\n#[web(view = \"my_custom_template\")]\r\nstruct CustomTemplate {\r\n    message: String,\r\n}\r\n\r\nimpl_web! {\r\n    impl MyResource {\r\n        // Uses convention to render a template at \"my_resource/index\" in the registery.\r\n        #[get(\"/\")]\r\n        fn index(&self) -> Index {\r\n            Ok(Index {\r\n                message: \"hello\".to_string(),\r\n            })\r\n        }\r\n\r\n        // Allow specifying the template\r\n        #[get(\"/foo\")]\r\n        fn with_custom_template(&self) -> CustomTemplate {\r\n            Ok(CustomTemplate {\r\n                message: \"custom\".to_string(),\r\n            })\r\n        }\r\n    }\r\n}\r\n\r\nServiceBuilder::new()\r\n    // By convention, this would look for a `views` directory or something like that and load\r\n    // templates from there.\r\n    .serializer(Handlebars::new())\r\n    .resource(MyResource::new())\r\n    .run(&addr)\r\n    .unwrap();\r\n```\r\n\r\nTo support this, first adding serializers would need to be supported. Second, we would have to figure out how to handle the `#[web(view = \"my_custom_template\")]` attribute.\r\n\r\nIdeally, tower-web would know nothing about handlebars.\r\n\r\nOne strategy would be to add a fn to `Response` for introspection. So, you could do something like:\r\n\r\n```rust\r\nmy_response.attributes()\r\n```\r\n\r\nand that would return a type that lets third parties see all the attributes on the response. To support *that* the macro would have to let unknown attributes through. There could be a runtime check at boot or something to validate that all attributes are known... but anyway.\r\n\r\nThoughts?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 420340633,
"datetime": 1536684306000,
"masked_author": "username_2",
"text": "@username_0 Is this something you want to try to take on? I can provide guidance as needed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sunng87",
"comment_id": 420578936,
"datetime": 1536744686000,
"masked_author": "username_0",
"text": "@username_2 Thanks! I'm quite interested in taking this on and adding templating support for tower-web. \r\n\r\nFor the template API, what if the handler might return one of two different templates?\r\n\r\n```\r\n\r\n // Allow specifying the template\r\n #[get(\"/foo\")]\r\n fn with_custom_template(&self) -> ?? {\r\n if some_check {\r\n Ok(CustomTemplate {\r\n message: \"custom\".to_string(),\r\n })\r\n } else {\r\n Ok(AnotherTemplate { ... })\r\n }\r\n }\r\n```\r\n\r\nOf course we can make a trait called `ModelAndView` so the handler could return `impl ModelAndView`, and `#[web(view ....)]` generates an impl of `ModelAndView`. But to use the attribute we might want to hide the internals, `impl ModelAndView` still leaks it to user. \r\n\r\nAt the moment I don't feel it's any better than `Template(name, data)`. The `Template` struct made it more flexible to choose actual view to render, while easier to unittest. And `Template` can be implementation independent too. Either handlebars or tera could work with it.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sunng87",
"comment_id": 420592378,
"datetime": 1536747309000,
"masked_author": "username_0",
"text": "One more question, by `.serializer(Handlebars::new())`, can we still be able to use `DefaultSerializer`, the serde based one for json marshalling? A web application could use both template engine and json.\r\n\r\nCurrent `Serializer` API requires data to implement `serde::Serialize`, which is road blocker to the `Template` struct solution. Another solution is to deal with it like `File`, that impls `Response` for `Template`. It requires `Template` a template engine specific API, I can put it into some crate like `tower-handlebars`. But that won't benefit other template engine like tera.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "lnicola",
"comment_id": 420605937,
"datetime": 1536749788000,
"masked_author": "username_1",
"text": "Implementing `Response` is completely fine. The current `serde::Serialize` is certainly useful, but it might be too data-centric, like when you're writing a web service and you're fine with a single serialization format.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 420706368,
"datetime": 1536768592000,
"masked_author": "username_2",
"text": "@username_0 I'm looking [here](https://github.com/username_0/handlebars-rust) and a template is rendered using `json!`, which is serde.\r\n\r\nThe Tower Web serializer is intended to support multiple formats (json, html, etc...) at a time. So, an application can use both JSON and Handlebars. The API I described above is a builder that would chain a new serializer onto the existing list, not replace it. The idea is that the JSON serializer would handle content types of \"application/json\" and the Handlebars serializer would handle \"text/html\".\r\n\r\nIt looks like you use serde to get the variables for the template, is that correct? So, it seems like the problem is you do not have access to the template name. Perhaps the solution is to add an additional argument to [`response::Serializer`](http://rust-doc.s3-website-us-east-1.amazonaws.com/tower-web/v0.2.2/tower_web/response/trait.Serializer.html) including contextual information:\r\n\r\n* Attributes set on the response type\r\n* Resource's module, type, and function\r\n* Original request\r\n* Application configuration\r\n\r\nIf this information was available in `Serializer::serialize`, it *should* be possible to implement the API I described above without hard coding handlebars into Tower Web.\r\n\r\nSketch:\r\n\r\n```rust\r\nstruct SerializeContext { ... }\r\n\r\nimpl SerializeContext {\r\n    /// Returns the module in which the `impl_web!` was\r\n    pub fn resource_impl_module(&self) -> &'static str { ... }\r\n\r\n    /// Returns the type name, for example:\r\n    ///\r\n    /// ```\r\n    /// impl_web! {\r\n    ///     impl HelloWorld { ... }\r\n    /// }\r\n    ///\r\n    /// serialize_context.resource_name() // \"HelloWorld\"\r\n    /// ```\r\n    ///\r\n    ///\r\n    fn resource_name(&self) -> &'static str { ... }\r\n\r\n    /// Returns a reference to the original request (without the body).\r\n    fn request(&self) -> &http::Request<()> { ... }\r\n\r\n    /// Returns an application configuration type\r\n    pub fn config<T: 'static>(&self) -> Option<&T> { ... }\r\n\r\n    /// Returns the `#[web(...)]` attributes set on response types defined with\r\n    /// `#[derive(Response)]`\r\n    ///\r\n    /// This allows access to the attributes at runtime (i.e. not in the macro).\r\n    pub fn response_attributes(&self) -> Attributes { ... }\r\n}\r\n\r\n// The template registery probably can be on the serializer and does not have to go through the\r\n// configuration system.\r\nstruct HandlebarsSerializer {\r\n    registery: handlebars::Handlebars,\r\n}\r\n\r\nimpl response::Serializer for HandlebarsSerializer {\r\n    type Format = ();\r\n\r\n    // This is wonky... not sure what I was thinking, but we can revisit\r\n    fn lookup(&self, name: &str) -> ContentType<Self::Format> {\r\n        let format = if name == \"application/html\" {\r\n            Some(())\r\n        } else {\r\n            None\r\n        };\r\n\r\n        ContentType::new(HeaderName::from(name).unwrap(), format)\r\n    }\r\n\r\n    fn serialize<T>(&self, value: &T, format: &Self::Format, context: &SerializeContext) -> Result<Bytes, Error>\r\n        where T: serde::Serialize\r\n    {\r\n        // Check if a template name is specified.\r\n        if let Some(name) context.response_attributes().get(\"template\") {\r\n            return self.render(name, value);\r\n        }\r\n\r\n        // Otherwise, use a conventional name. I don't know what the convention should be, but we\r\n        // can figure it out!\r\n        let name = format!(\"{}/{}.hbs\");\r\n\r\n        // This would also handle a \"template not found\" situation.\r\n        self.render(&name, value)\r\n    }\r\n}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 420706970,
"datetime": 1536768700000,
"masked_author": "username_2",
"text": "The entire goal is to be completely data centric :) I would like all HTTP / HTML concerns to be decoupled from business logic. This means that the handler should return the data for the response and the serializer should render it!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sunng87",
"comment_id": 420858291,
"datetime": 1536804321000,
"masked_author": "username_0",
"text": "@username_2 Thanks! That's much clear. So if we want to use different views or data models for a particular handler, is the code below possible?\r\n\r\n```rust\r\n // Allow specifying the template\r\n #[get(\"/foo\")]\r\n fn with_custom_template(&self) -> impl Response {\r\n if some_check {\r\n Ok(CustomData1 {\r\n message: \"custom\".to_string(),\r\n })\r\n } else {\r\n Ok(CustomData2 { ... })\r\n }\r\n }\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 422190159,
"datetime": 1537222663000,
"masked_author": "username_2",
"text": "@username_0 that wouldn't be possible because you are returning two different types. You would need to wrap that w/ `Either` *or* generate an either type.\r\n\r\n```rust\r\n#[derive(Response)]\r\n#[web(either)]\r\nenum MyResponse {\r\n Data1(CustomData1),\r\n Data2(CustomData2),\r\n}\r\n\r\n// Allow specifying the template\r\n#[get(\"/foo\")]\r\nfn with_custom_template(&self) -> impl Future<Item = MyResponse> {\r\n if some_check {\r\n Ok(MyResponse::Data1(CustomData1 {\r\n message: \"custom\".to_string(),\r\n }))\r\n } else {\r\n Ok(MyResponse::Data2(CustomData2 { ... }))\r\n }\r\n}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "shepmaster",
"comment_id": 422200441,
"datetime": 1537225706000,
"masked_author": "username_3",
"text": "To chime in, I think it would be a shame to enforce that all uses of returning \"text/html\" in an application must be served by a single type. It's pretty common for a Rails application to have both ERB and HAML templates, for example.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 422243990,
"datetime": 1537241913000,
"masked_author": "username_2",
"text": "@username_3 an option to deal w/ that would be to implement a multi-template library serializer. It should be able to use the same logic above and pick the template that is available (haml or erb).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 422974878,
"datetime": 1537394846000,
"masked_author": "username_2",
"text": "I'll probably take a stab at this to get it going.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "carllerche",
"comment_id": 423418108,
"datetime": 1537507940000,
"masked_author": "username_2",
"text": "#115 is a first stab at it this.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "carllerche",
"comment_id": null,
"datetime": 1538107698000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 4 | 19 | 10,936 | false | false | 10,936 | true |
MessageKit/MessageKit | MessageKit | 510,844,713 | 1,207 | null | [
{
"action": "opened",
"author": "denikaev",
"comment_id": null,
"datetime": 1571770559000,
"masked_author": "username_0",
"text": "How can I create space in bottom label? Need install on .zero. Pls check screenshot\r\n",
"title": "Space in bottomLabel",
"type": "issue"
},
{
"action": "created",
"author": "denikaev",
"comment_id": 546590667,
"datetime": 1572085752000,
"masked_author": "username_0",
"text": "Hi, it helped, thanks :)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "denikaev",
"comment_id": null,
"datetime": 1572085755000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 930 | false | true | 251 | false |
RedHatInsights/insights-results-smart-proxy | RedHatInsights | 668,836,092 | 148 | {
"number": 148,
"repo": "insights-results-smart-proxy",
"user_login": "RedHatInsights"
} | [
{
"action": "opened",
"author": "joselsegura",
"comment_id": null,
"datetime": 1596121382000,
"masked_author": "username_0",
"text": "# Description\r\n\r\nMinor update on Makefile to avoid modification of `go.mod` and `go.sum` when running `docgo`\r\n\r\nFixes #147 \r\n\r\n## Type of change\r\n\r\n- Refactor (refactoring code, removing useless files)\r\n- Documentation update\r\n\r\n## Testing steps\r\n\r\nRegular CI",
"title": "Improve docgo generation in Makefile",
"type": "issue"
}
] | 2 | 2 | 260 | false | true | 260 | false |
Azure/aad-pod-identity | Azure | 636,988,082 | 645 | null | [
{
"action": "opened",
"author": "ohadschn",
"comment_id": null,
"datetime": 1591879936000,
"masked_author": "username_0",
"text": "**Describe the bug**\r\nUsage 1: https://github.com/Azure/aad-pod-identity/blob/f4084a15f407b6510bf32d8d10cda8fa516e407c/deploy/infra/deployment-rbac.yaml#L160\r\n\r\nUsage 2: https://github.com/Azure/aad-pod-identity/blob/f4084a15f407b6510bf32d8d10cda8fa516e407c/deploy/infra/deployment-rbac.yaml#L268\r\n\r\nAccording to docs, should be replaced with `kubernetes.io/os`:\r\nhttps://v1-16.docs.kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-os-deprecated",
"title": "Deprecated label \"beta.kubernetes.io/os\" used in \"indeployment-rbac.yaml\"",
"type": "issue"
},
{
"action": "created",
"author": "aramase",
"comment_id": 642772375,
"datetime": 1591891277000,
"masked_author": "username_1",
"text": "@username_0 Thank you for opening the issue. Would you be interested in opening a PR to update the deployment and charts with `kubernetes.io/os`?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "aramase",
"comment_id": null,
"datetime": 1592339504000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 629 | false | false | 629 | true |
pagumakwana/repo_edm | null | 690,331,036 | 71 | null | [
{
"action": "opened",
"author": "sangeetajad",
"comment_id": null,
"datetime": 1598982443000,
"masked_author": "username_0",
"text": "As highlighted in screenshot, \r\n\r\nReplace \"REJECTED\" with \"Rejected\"\r\n\r\nPFA\r\n\r\n",
"title": "Beat module - textual issue ",
"type": "issue"
},
{
"action": "created",
"author": "smsan80",
"comment_id": 692283510,
"datetime": 1600113857000,
"masked_author": "username_1",
"text": "Done",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "smsan80",
"comment_id": null,
"datetime": 1600113857000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 218 | false | false | 218 | false |
filecoin-project/go-fil-markets | filecoin-project | 672,608,625 | 351 | null | [
{
"action": "opened",
"author": "hunjixin",
"comment_id": null,
"datetime": 1596529755000,
"masked_author": "username_0",
"text": "```\r\nDealCid DealId Provider State On Chain? Slashed? PieceCID Size Price Duration Message\r\nbafyreiauhp6wlf2w7rjivkzioj7eaknw52gjtxrl45beacny2wz5wllrfm 0 t010399 StorageDealError N N baga6ea4seaqdvsikcnwpbmzkbnag3ila7kywgus23wg2prewrq5jlwl6bg7v6mi 1.984 MiB 0.0003140655 FIL 628131 adding market funds failed: estimating gas limit: message execution failed: exit 16, reason: balance to add must be greater than zero (RetCode=16) \r\n```\r\n\r\nerror occur in ensurefunds while make a deal in client, if you addbalce to market before, will got a zero to add market , maybe change fsm in market to skip ensurefunds will work if have enough available balance in market",
"title": "balance to add must be greater than zero ",
"type": "issue"
}
] | 1 | 1 | 864 | false | false | 864 | false |
swistakm/pyimgui | null | 529,750,022 | 134 | null | [
{
"action": "opened",
"author": "IL4WI",
"comment_id": null,
"datetime": 1574927861000,
"masked_author": "username_0",
"text": "Rendering pyimgui using the render engines given renders imgui in an window. Is there a way to hide the window or render completely without it? Im new to coding gui but as far as im concerned the render engine used could render without popping up a window in wich it renders?",
"title": "Rendering windowless OpenGl",
"type": "issue"
},
{
"action": "created",
"author": "swistakm",
"comment_id": 559418286,
"datetime": 1574934015000,
"masked_author": "username_1",
"text": "You mean off-screen rendering right? I.e. rendering to the framebuffer without displaying to the use so it could be used elsewhere (e.g. written to file). The answer is yes. And we already do something like that for the documentation purposes.\r\n\r\nImGui doesn't know anything about window context and generally renderers doesn't have to know either. Some of the built-in integration helpers from `imgui.integrations` obviously will assume that there's a window because it is the most common use case.\r\n\r\nIf you want to see how to render to the framebuffer without having actual window you should definitely check out the code in [doc/source/gen_example.py](https://github.com/username_1/pyimgui/blob/master/doc/source/gen_example.py) file because that's where we use off-screen rendering for creating image samples you can see in documentation. We use `glfw` there so technically there is some window context (e.g. you will be able to see app in your dock on macOS) but invisible. How it would be actually done is dependent on the library you'll use for setting up the gl context and handling user interactions. Some of them will allow to work without windows, some won't.\r\n\r\nIf you meant rendering without window decorations and e.g transparent then this is completely dependent on the windowing library you use. GLFW for instance allows to easily render both the transparent windows and undecorated ones.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "swistakm",
"comment_id": 559418541,
"datetime": 1574934055000,
"masked_author": "username_1",
"text": "Note that code in `gen_example.py` is a total mess but should at least give some insight on how it can be done.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "IL4WI",
"comment_id": 559608210,
"datetime": 1574977663000,
"masked_author": "username_0",
"text": "If im understanding the code correctly you are rendering it invisible for the user\r\nwhat im talking about is that you \"open\" up a window and render it in the window. The window should be gone and imgui only visible. But maybe im just reading the code wrong",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "swistakm",
"comment_id": 559975676,
"datetime": 1575124523000,
"masked_author": "username_1",
"text": "Unfortunately I’m still bit confused. Generally ImGui just gives you list of draw commands that you execute in some context with renderer backend of choice. Technically you could even render it as ascii art in terminal although translating these draw commands to such form would be quite problematic.\r\n\r\nStill I don’t know what do you want to achieve. Do you want to simply render ImGui widgets without any window? Do you want to make overlay on top of other application? Or maybe just want to get rid of window decoration? Could you better describe the problem you want to solve or visualize the effect that you want to achieve?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "inequity",
"comment_id": 561828367,
"datetime": 1575491841000,
"masked_author": "username_2",
"text": "Sounds like he wants to render outside of the operating system window. Highly platform specific I think. It's definitely possible on Windows, but it takes work. As an example:\r\n\r\n1. Create fullscreen borderless window, or window the size of the rect that contains your widgets\r\n2. Screenshot the desktop with BitBlt\r\n3. Render window with that screenshot as background, IMGUI over top\r\n4. Override input handling to pass through any events that aren't handled by IMGUI widgets\r\n\r\nIt's been years since I've done this. Maybe something better suited for asking over at the official imgui repo",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "swistakm",
"comment_id": 661053126,
"datetime": 1595253121000,
"masked_author": "username_1",
"text": "Like @username_2 said, imgui isn't system windowing library but a gui toolkit designed to be used within existing window context. You could make it look like standalone window to some extent with transparent backgrounds and removing decorations but it won't work well if you want to make multiple freeform windows (think of Gimp UI).\r\n\r\nI'm closing this because of the above and also due to inactivity of discussion.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "swistakm",
"comment_id": null,
"datetime": 1595253121000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 8 | 3,680 | false | false | 3,680 | true |
elastic/logstash | elastic | 669,184,429 | 12,155 | null | [
{
"action": "opened",
"author": "yaauie",
"comment_id": null,
"datetime": 1596142038000,
"masked_author": "username_0",
"text": "",
"title": "REE: newline at end of conf file fragment corrupts next conf file",
"type": "issue"
},
{
"action": "created",
"author": "yaauie",
"comment_id": 668177815,
"datetime": 1596479730000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andsel",
"comment_id": 668410428,
"datetime": 1596523231000,
"masked_author": "username_1",
"text": "We could count the number of newlines character, something like:\r\n```java\r\n\"Hello\\nWorld\\n\\n\\n\\n\".chars().filter(x -> x == '\\n').count();\r\n```\r\nbut if we have `\\r` or `\\r\\n` we could think to use the Matcher to count:\r\n\r\n```java\r\nMatcher m = Pattern.compile(\"\\r\\n|\\r|\\n\").matcher(\"Hello\\nWorld\\n\\n\\n\");\r\nint linesCount = 0;\r\nwhile (m.find()) {\r\n linesCount++;\r\n}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yaauie",
"comment_id": 668819229,
"datetime": 1596574345000,
"masked_author": "username_0",
"text": "I created a patch that used `String#split(String, int)` with `-1` for the second argument, which caused it to include an empty entry when there was a trailing newline, but on second thought, it is _weird_ for `SourceWithMetadata#getLinesCount()` to return `2` for the input `input { stdin {} }\\n` because the newline is usually considered to be a part of the line.\r\n\r\nInstead I am trying a fix that addresses the _concatenation_ of source files for our internal representation that joins sources on newline whether or not they already had a trailing newline, by pre-normalizing to chomp trailing newlines before we perform the concatenation.",
"title": null,
"type": "comment"
}
] | 3 | 5 | 1,011 | false | true | 1,011 | false |
rust-osdev/bootloader | rust-osdev | 563,732,727 | 95 | null | [
{
"action": "opened",
"author": "vinaychandra",
"comment_id": null,
"datetime": 1581479436000,
"masked_author": "username_0",
"text": "`tdata` and `tbss` section stores thread local storage for the system. In kernel, we will need to initialize this section per-cpu or per-thread depending on usages. Rust provides a `#[thread_local]` attribute which puts the data in the above sections.\r\n\r\nPlease enable that section in bootloader and pass its location to the kernel so that it can map the TLS data as needed.",
"title": "Enable tls sections",
"type": "issue"
},
{
"action": "created",
"author": "phil-opp",
"comment_id": 585093352,
"datetime": 1581496799000,
"masked_author": "username_1",
"text": "Could you clarify what you mean with \"enable that section\"? Do you mean to just load and map it into the virtual memory or are there any additional required steps?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vinaychandra",
"comment_id": 585255099,
"datetime": 1581520673000,
"masked_author": "username_0",
"text": "Just mapping into memory and then providing location is enough.\r\n[This](https://gitlab.redox-os.org/redox-os/kernel/blob/master/src/arch/x86_64/paging/mod.rs#L202) is an example of how it could be loaded later in kernel",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "phil-opp",
"comment_id": null,
"datetime": 1582631649000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 756 | false | false | 756 | false |
OPS-E2E-PPE/E2E_DocFxV3 | OPS-E2E-PPE | 559,610,178 | 3,572 | {
"number": 3572,
"repo": "E2E_DocFxV3",
"user_login": "OPS-E2E-PPE"
} | [
{
"action": "opened",
"author": "OPSTestPPE",
"comment_id": null,
"datetime": 1580810591000,
"masked_author": "username_0",
"text": "",
"title": "PR build should use locked CRR version",
"type": "issue"
},
{
"action": "created",
"author": "e2ebd4",
"comment_id": 581832634,
"datetime": 1580810639000,
"masked_author": "username_1",
"text": "Docs Build status updates of commit _[3ee9fb3](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/commits/3ee9fb3e8ba944e5cf51d80930e3a032878d5e03)_: \n\n### :white_check_mark: Validation status: passed\r\n\r\n\r\nFile | Status | Preview URL | Details\r\n---- | ------ | ----------- | -------\r\n[E2E_DocsBranch_Dynamic/testTokenPage.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/crr-pullrequest/E2E_DocsBranch_Dynamic/testTokenPage.md) | :white_check_mark:Succeeded | [View](https://ppe.docs.microsoft.com/en-us/E2E_DocFxV3/testtokenpage?branch=pr-en-us-3572) |\r\n\r\nFor more details, please refer to the [build report](https://opbuildstoragesandbox2.blob.core.windows.net/report/2020%5C2%5C4%5C8b165afe-a866-8c48-e5c3-ead94440f27e%5CPullRequest%5C202002041003143880-3572%5Cworkflow_report.html?sv=2016-05-31&sr=b&sig=tQhPML73zMBfMVmyka3H8f4H1L0giHduLAK%2B9qXurxA%3D&st=2020-02-04T09%3A58%3A58Z&se=2020-03-06T10%3A03%3A58Z&sp=r).\r\n\r\n**Note:** If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 1,084 | false | false | 1,084 | false |
BoKnowsCoding/hbd-organizer | null | 638,960,376 | 1 | null | [
{
"action": "opened",
"author": "BoKnowsCoding",
"comment_id": null,
"datetime": 1592236341000,
"masked_author": "username_0",
"text": "The runner script (and maybe each of the organizer scripts) should check if an instance is already running before proceeding with execution to avoid a scheduled running coming before the last download/deployment was completed.",
"title": "Multiple instances of scheduled runner script may run if download too long",
"type": "issue"
},
{
"action": "closed",
"author": "BoKnowsCoding",
"comment_id": null,
"datetime": 1593203235000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 226 | false | false | 226 | false |
ni/nimi-python | ni | 607,912,515 | 1,439 | {
"number": 1439,
"repo": "nimi-python",
"user_login": "ni"
} | [
{
"action": "opened",
"author": "sbethur",
"comment_id": null,
"datetime": 1588029148000,
"masked_author": "username_0",
"text": "- [x] This contribution adheres to [CONTRIBUTING.md](https://github.com/ni/nimi-python/blob/master/CONTRIBUTING.md).\r\n\r\n~- [ ] I've updated [CHANGELOG.md](https://github.com/ni/nimi-python/blob/master/CHANGELOG.md) if applicable.~\r\n\r\n- [x] I've added tests applicable for this pull request\r\n\r\n### What does this Pull Request accomplish?\r\n\r\n- Add test for channels repeated capability\r\n- Add test for sites repeated capability\r\n- Add test for sites/pins chained repeated capability\r\n\r\n### List issues fixed by this Pull Request below, if any.\r\n\r\nNone\r\n\r\n### What testing has been done?\r\n\r\nAdded new tests. CI will run the tests.",
"title": "Add nidigital system tests for channels and sites repeated capabilities",
"type": "issue"
}
] | 2 | 2 | 627 | false | true | 627 | false |
pulibrary/approvals | pulibrary | 575,420,112 | 654 | null | [
{
"action": "opened",
"author": "carolyncole",
"comment_id": null,
"datetime": 1583330120000,
"masked_author": "username_0",
"text": "Right now the logs on the servers are just in the release directory, so when the directory gets removed the logs also get removed. It should be a directory in the shared folder. We should also rotate the log daily.",
"title": "Log Management on the servers should be better",
"type": "issue"
}
] | 1 | 1 | 216 | false | false | 216 | false |
xamarin/Xamarin.Forms | xamarin | 499,691,166 | 7,723 | {
"number": 7723,
"repo": "Xamarin.Forms",
"user_login": "xamarin"
} | [
{
"action": "opened",
"author": "PureWeen",
"comment_id": null,
"datetime": 1569627021000,
"masked_author": "username_0",
"text": "### Description of Change ###\r\nClip button and image button so if the user specifies a corner radius the image will get clipped as well.\r\n\r\nThis is how iOS and UWP already work\r\n\r\n### Issues Resolved ### \r\n- fixes #4606\r\n\r\n\r\n### Platforms Affected ### \r\n- Android\r\n\r\n\r\n### Before/After Screenshots ### \r\n```XAML\r\n<StackLayout>\r\n <StackLayout Orientation=\"Horizontal\">\r\n <ImageButton BorderWidth=\"2\" BorderColor=\"Blue\" WidthRequest=\"50\" CornerRadius=\"25\" Source=\"coffee.png\" BackgroundColor=\"Purple\" />\r\n <ImageButton BorderWidth=\"2\" BorderColor=\"Blue\" WidthRequest=\"50\" CornerRadius=\"25\" Source=\"coffee.png\" BackgroundColor=\"Purple\" />\r\n </StackLayout>\r\n <StackLayout Orientation=\"Horizontal\" Visual=\"Material\">\r\n <ImageButton BorderWidth=\"2\" BorderColor=\"Blue\" WidthRequest=\"50\" CornerRadius=\"25\" Source=\"coffee.png\" BackgroundColor=\"Purple\" />\r\n <ImageButton BorderWidth=\"2\" BorderColor=\"Blue\" WidthRequest=\"50\" CornerRadius=\"25\" Source=\"coffee.png\" BackgroundColor=\"Purple\" /> \r\n </StackLayout>\r\n </StackLayout>\r\n```\r\n\r\nBefore\r\n\r\n\r\n\r\nAfter\r\n\r\n\r\n\r\n\r\n### Testing Procedure ###\r\n- run the included UI Test (make sure to enable fast renderers)\r\n\r\n### PR Checklist ###\r\n<!-- To be completed by reviewers -->\r\n\r\n- [ ] Targets the correct branch\r\n- [ ] Tests are passing (or failures are unrelated)",
"title": "Fix corner clipping on ImageButton and Button (FR)",
"type": "issue"
},
{
"action": "created",
"author": "samhouts",
"comment_id": 549601197,
"datetime": 1572911836000,
"masked_author": "username_1",
"text": "@username_0 Please rebase :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Syed-Esqimo",
"comment_id": 550063363,
"datetime": 1572995036000,
"masked_author": "username_2",
"text": "Yay..... Thanks guys!",
"title": null,
"type": "comment"
}
] | 3 | 3 | 1,649 | false | false | 1,649 | true |
focuslabllc/craft-cheat-sheet | focuslabllc | 267,383,958 | 46 | null | [
{
"action": "opened",
"author": "dpanfili",
"comment_id": null,
"datetime": 1508595156000,
"masked_author": "username_0",
"text": "After installing the plugin and setting the url path (in my instance just /cheat ), I'm getting a 404 when trying to reach that page. Running locally through MAMP if that helps troubleshoot.",
"title": "404 After Install",
"type": "issue"
},
{
"action": "created",
"author": "erikreagan",
"comment_id": 338414538,
"datetime": 1508603234000,
"masked_author": "username_1",
"text": "Do other dynamic urls load like entries you've created in sections?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dpanfili",
"comment_id": 338438442,
"datetime": 1508627072000,
"masked_author": "username_0",
"text": "Dynamic urls only show correctly if devMode is set to true. If set to false, the they return 404s.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "erikreagan",
"comment_id": 339334623,
"datetime": 1508939110000,
"masked_author": "username_1",
"text": "Is the same true for the cheat sheet url? Does it work for you with devMode set to true?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "erikreagan",
"comment_id": 339484904,
"datetime": 1508968788000,
"masked_author": "username_1",
"text": "Clarification: the Cheat Sheet is designed to only work when devMode is turned on. So what you're experiencing, if I understand it correctly, is intended.\r\n\r\nhttps://github.com/focuslabllc/craft-cheat-sheet/blob/2eebebcac78024b2afb04f7920a231ed708fc3b6/cheatsheet/controllers/CheatSheet_RoutesController.php#L24\r\n\r\nDoes that help?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dpanfili",
"comment_id": 346810794,
"datetime": 1511523928000,
"masked_author": "username_0",
"text": "Yes, helps for sure. Sorry for the delay in response here. All good. Closing out. Thanks!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "dpanfili",
"comment_id": null,
"datetime": 1511523929000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 7 | 862 | false | false | 862 | false |
signalfx/ondiskencoding | signalfx | 497,322,886 | 19 | {
"number": 19,
"repo": "ondiskencoding",
"user_login": "signalfx"
} | [
{
"action": "opened",
"author": "charliesignalfx",
"comment_id": null,
"datetime": 1569272016000,
"masked_author": "username_0",
"text": "",
"title": "add additional dimensions to span identity",
"type": "issue"
},
{
"action": "created",
"author": "charliesignalfx",
"comment_id": 534578205,
"datetime": 1569334424000,
"masked_author": "username_0",
"text": "@mdubbyap I've fixed the errors you saw with code generation. I also had to tweak the Dims() method to satisfy a test (see the comments I put in encoding.go). And I've fixed remaining lint errors (and ignored one error we can't fix at this point).",
"title": null,
"type": "comment"
}
] | 1 | 2 | 249 | false | false | 249 | false |
carbon-design-system/ibm-cloud-paks | carbon-design-system | 656,471,342 | 18 | null | [
{
"action": "opened",
"author": "SimonFinney",
"comment_id": null,
"datetime": 1594719099000,
"masked_author": "username_0",
"text": "## Summary\r\n\r\nDiscussion around how testing could work across Cloud Pak components packages, and what best practices are available to be leveraged from Carbon and other teams - For example:\r\n\r\n- Accessibility\r\n- End-to-end\r\n- Unit",
"title": "Testing",
"type": "issue"
},
{
"action": "created",
"author": "matthew-chirgwin",
"comment_id": 658232624,
"datetime": 1594738978000,
"masked_author": "username_1",
"text": "I personally I am fan of behavioural testing. It focuses the developer’s mind on the end user of whatever they are producing, and in turn means what a user actually does is tested, rather than the individual internals pieces which make up the whole. It is a very different way of approaching testing - which at first can seem cumbersome and is typically takes longer to complete compared to more traditional methods. The benefits however outweigh the cons for me - you test what the user does, and that in turn returns the best confidence that feature X meets its requirements.\r\n\r\nI recently worked on an open source UI for the product I work on (https://github.com/ibm-messaging/kafka-java-vertx-starter). For that UI we were followed this behavioural first approach for the vast majority of the UI. As a stack we used:\r\n\r\n- Jest\r\n- React-testing-library (vs Enzyme - enforced testing as a user/no internals access)\r\n- Cucumber (via gherkin-jest - to integration test components together)\r\n- Playwright (augmented with jest-playwright-preset/gherkin-jest - to drive the whole UI in an end to end manner)\r\n\r\nWe were very opinionated about the type of components we were writing, and how they should be constructed and composed. One of the first requirements of any new component was a Readme, calling out intent, usage and user (which we then used as doc in storybook). Often the end user of these components would not be someone using the UI, but the developer using the component to construct the UI. These considerations were then used to define test cases and scenarios. Typically, the more presentational components were, the more ‘unit’ tested they became. 
For components which were more complex, the behavioural approach abstracted away particulars such as triggering data fetching or getting the component in the right state - we physically drove the component to show it achieved a purpose, and in turn, covered all the pieces which allowed it to do so.\r\n\r\nNot everything is a component however. For helpers and custom hooks, we typically wrote more unit style tests, as this was appropriate for those particular pieces of function.\r\n\r\nIn the end, we ended up with a test suite which covered the whole UI in context - which gave us the confidence that it met the stated goals, and if we were to develop it further, we would know if something were to break. We also (as we integrated Jest) got things like code coverage for free, as an extra metric to confirm we had all bases covered. This approach really worked for us, and I would recommend it here for discussion.\r\n\r\nWe didn’t do it for this UI, but for another we used the end to end tests as a chance to capture and automate a11y testing. As a user navigated through the test, we would scrape the page, and then run that through the aat tooling to capture violations. This worked well, but did have flaws - mainly timing issues causing inconsistent results.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SimonFinney",
"comment_id": 659302575,
"datetime": 1594892826000,
"masked_author": "username_0",
"text": "Splitting this out further into #20 , #21 , and #22 for us to align on independently",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "SimonFinney",
"comment_id": null,
"datetime": 1598952776000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "SimonFinney",
"comment_id": 684662020,
"datetime": 1598952776000,
"masked_author": "username_0",
"text": "Closing in favour of the above issues for deeper discussion",
"title": null,
"type": "comment"
}
] | 2 | 5 | 3,294 | false | false | 3,294 | false |
xdan/jodit | null | 674,308,221 | 462 | null | [
{
"action": "opened",
"author": "gitsnuit",
"comment_id": null,
"datetime": 1596719968000,
"masked_author": "username_0",
"text": "<!-- BUGS: Please use this template -->\r\n<!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/jodit -->\r\n\r\n**Jodit Version:** 3.2.xxxxx\r\n\r\n**Browser:** Chrome <!-- Chrome/IE/Safary/FF -->\r\n**OS:** Linux <!-- Windows/Mac/Linux -->\r\n\r\nUsed Jodit here: https://xdsoft.net/jodit/\r\n\r\n# Type something\r\n# Press the 'tab' key\r\n\r\n**Actual behavior:**\r\nFocus in the editor is switched to another button or element. \r\n\r\n**Expected behavior:**\r\nI expect a tab to be four spaces",
"title": "Tab key does not give four spaces",
"type": "issue"
},
{
"action": "closed",
"author": "xdan",
"comment_id": null,
"datetime": 1596800708000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "xdan",
"comment_id": 670476972,
"datetime": 1596800708000,
"masked_author": "username_1",
"text": "Nope, for form navigation - `tab` works correctly.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 571 | false | false | 571 | false |
blackflux/lambda-serverless-api | blackflux | 593,021,142 | 1,635 | {
"number": 1635,
"repo": "lambda-serverless-api",
"user_login": "blackflux"
} | [
{
"action": "created",
"author": "MrsFlux",
"comment_id": 609053544,
"datetime": 1586017669000,
"masked_author": "username_0",
"text": ":tada: This PR is included in version 6.11.7 :tada:\n\nThe release is available on:\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/lambda-serverless-api/v/6.11.7)\n- [GitHub release](https://github.com/blackflux/lambda-serverless-api/releases/tag/v6.11.7)\n\nYour **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 6,066 | false | true | 375 | false |
simeg/eureka | null | 396,492,703 | 39 | null | [
{
"action": "opened",
"author": "iamaamir",
"comment_id": null,
"datetime": 1546868717000,
"masked_author": "username_0",
"text": "https://github.com/username_1/eureka/blob/81b6a7e74ddf9d858cc11acc03891750ee03c15e/src/main.rs#L112\r\n\r\n\r\nthe path to nano is invalid or in other words it is not guarantee that nano will available at /usr/bin/nano\r\n\r\n",
"title": "invalid nano path",
"type": "issue"
},
{
"action": "created",
"author": "simeg",
"comment_id": 452062969,
"datetime": 1546891308000,
"masked_author": "username_1",
"text": "Hi @username_0. Thanks for reporting this!\r\n\r\nOn my Mac `nano` lives in `/usr/bin/nano`, hence choosing that path. As I said in #38 the good solution would be to assert what binaries are available at what path before offering the options.\r\n\r\nI just create an issue for this: https://github.com/username_1/eureka/issues/40\r\n\r\nI chose to go with low priority because it's easy to work around (`--clear-editor`, and the ability to input custom path for editor). PRs are very welcome!\r\n\r\nI'm closing this in favor of #40.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "simeg",
"comment_id": null,
"datetime": 1546891308000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 835 | false | false | 835 | true |
jlippold/tweakCompatible | null | 414,072,285 | 66,390 | null | [
{
"action": "opened",
"author": "yonigold1",
"comment_id": null,
"datetime": 1551096240000,
"masked_author": "username_0",
"text": "```\r\n{\r\n \"packageId\": \"com.julioverne.cydown\",\r\n \"action\": \"working\",\r\n \"userInfo\": {\r\n \"arch32\": false,\r\n \"packageId\": \"com.julioverne.cydown\",\r\n \"deviceId\": \"iPhone10,5\",\r\n \"url\": \"http://cydia.saurik.com/package/com.julioverne.cydown/\",\r\n \"iOSVersion\": \"12.1.1\",\r\n \"packageVersionIndexed\": false,\r\n \"packageName\": \"CyDown\",\r\n \"category\": \"Tweaks\",\r\n \"repository\": \"julioverne\",\r\n \"name\": \"CyDown\",\r\n \"installed\": \"6.9.7\",\r\n \"packageIndexed\": false,\r\n \"packageStatusExplaination\": \"This tweak has not been reviewed. Please submit a review if you choose to install.\",\r\n \"id\": \"com.julioverne.cydown\",\r\n \"commercial\": false,\r\n \"packageInstalled\": true,\r\n \"tweakCompatVersion\": \"0.1.0\",\r\n \"shortDescription\": \"Cydia Download Manager & Extra Features!\",\r\n \"latest\": \"6.9.7\",\r\n \"author\": \"julioverne\",\r\n \"packageStatus\": \"Unknown\"\r\n },\r\n \"base64\": \"eyJhcmNoMzIiOmZhbHNlLCJwYWNrYWdlSWQiOiJjb20uanVsaW92ZXJuZS5jeWRvd24iLCJkZXZpY2VJZCI6ImlQaG9uZTEwLDUiLCJ1cmwiOiJodHRwOlwvXC9jeWRpYS5zYXVyaWsuY29tXC9wYWNrYWdlXC9jb20uanVsaW92ZXJuZS5jeWRvd25cLyIsImlPU1ZlcnNpb24iOiIxMi4xLjEiLCJwYWNrYWdlVmVyc2lvbkluZGV4ZWQiOmZhbHNlLCJwYWNrYWdlTmFtZSI6IkN5RG93biIsImNhdGVnb3J5IjoiVHdlYWtzIiwicmVwb3NpdG9yeSI6Imp1bGlvdmVybmUiLCJuYW1lIjoiQ3lEb3duIiwiaW5zdGFsbGVkIjoiNi45LjciLCJwYWNrYWdlSW5kZXhlZCI6ZmFsc2UsInBhY2thZ2VTdGF0dXNFeHBsYWluYXRpb24iOiJUaGlzIHR3ZWFrIGhhcyBub3QgYmVlbiByZXZpZXdlZC4gUGxlYXNlIHN1Ym1pdCBhIHJldmlldyBpZiB5b3UgY2hvb3NlIHRvIGluc3RhbGwuIiwiaWQiOiJjb20uanVsaW92ZXJuZS5jeWRvd24iLCJjb21tZXJjaWFsIjpmYWxzZSwicGFja2FnZUluc3RhbGxlZCI6dHJ1ZSwidHdlYWtDb21wYXRWZXJzaW9uIjoiMC4xLjAiLCJzaG9ydERlc2NyaXB0aW9uIjoiQ3lkaWEgRG93bmxvYWQgTWFuYWdlciAmIEV4dHJhIEZlYXR1cmVzISIsImxhdGVzdCI6IjYuOS43IiwiYXV0aG9yIjoianVsaW92ZXJuZSIsInBhY2thZ2VTdGF0dXMiOiJVbmtub3duIn0=\",\r\n \"chosenStatus\": \"working\",\r\n \"notes\": \"\"\r\n}\r\n```",
"title": "`CyDown` working on iOS 12.1.1",
"type": "issue"
}
] | 2 | 3 | 1,989 | false | true | 1,857 | false |
fb55/css-select | null | 580,849,698 | 168 | {
"number": 168,
"repo": "css-select",
"user_login": "fb55"
} | [
{
"action": "created",
"author": "fb55",
"comment_id": 600245924,
"datetime": 1584471887000,
"masked_author": "username_0",
"text": "@dependabot squash and merge",
"title": null,
"type": "comment"
}
] | 3 | 3 | 5,976 | false | true | 28 | false |
rust-lang/rust-clippy | rust-lang | 508,069,224 | 4,683 | {
"number": 4683,
"repo": "rust-clippy",
"user_login": "rust-lang"
} | [
{
"action": "opened",
"author": "HMPerson1",
"comment_id": null,
"datetime": 1571256272000,
"masked_author": "username_0",
"text": "Closes #4586\r\n\r\nchangelog: Add `inefficient_to_string` lint, which checks for calling `to_string` on `&&str`, which would bypass the `str`'s specialization",
"title": "Add `inefficient_to_string` lint",
"type": "issue"
},
{
"action": "created",
"author": "Manishearth",
"comment_id": 543402206,
"datetime": 1571353947000,
"masked_author": "username_1",
"text": "(You'll also need to update the test expectations)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Manishearth",
"comment_id": 543404268,
"datetime": 1571354386000,
"masked_author": "username_1",
"text": "@username_2 r+\r\n\r\nthanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bors",
"comment_id": 543404276,
"datetime": 1571354387000,
"masked_author": "username_2",
"text": ":pushpin: Commit 2106a239669ec2ffbf3ccec98fbc88442d5c001f has been approved by `username_1`\n\n<!-- @username_2 r=username_1 2106a239669ec2ffbf3ccec98fbc88442d5c001f -->\n<!-- homu: {\"type\":\"Approved\",\"sha\":\"2106a239669ec2ffbf3ccec98fbc88442d5c001f\",\"approver\":\"username_1\"} -->",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bors",
"comment_id": 543413215,
"datetime": 1571356139000,
"masked_author": "username_2",
"text": ":hourglass: Testing commit 2106a239669ec2ffbf3ccec98fbc88442d5c001f with merge 14a0f36617cbde9a018331c54727ede5ddf33014...\n<!-- homu: {\"type\":\"BuildStarted\",\"head_sha\":\"2106a239669ec2ffbf3ccec98fbc88442d5c001f\",\"merge_sha\":\"14a0f36617cbde9a018331c54727ede5ddf33014\"} -->",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bors",
"comment_id": 543434162,
"datetime": 1571360525000,
"masked_author": "username_2",
"text": ":sunny: Test successful - [checks-travis](https://travis-ci.com/rust-lang/rust-clippy/builds/132449090), [status-appveyor](https://ci.appveyor.com/project/rust-lang-libs/rust-clippy/builds/28194788)\nApproved by: username_1\nPushing 14a0f36617cbde9a018331c54727ede5ddf33014 to master...\n<!-- homu: {\"type\":\"BuildCompleted\",\"approved_by\":\"username_1\",\"base_ref\":\"master\",\"builders\":{\"status-appveyor\":\"https://ci.appveyor.com/project/rust-lang-libs/rust-clippy/builds/28194788\",\"checks-travis\":\"https://travis-ci.com/rust-lang/rust-clippy/builds/132449090\"},\"merge_sha\":\"14a0f36617cbde9a018331c54727ede5ddf33014\"} -->",
"title": null,
"type": "comment"
}
] | 3 | 6 | 1,382 | false | false | 1,382 | true |
kyma-project/kyma | kyma-project | 665,207,171 | 9,112 | {
"number": 9112,
"repo": "kyma",
"user_login": "kyma-project"
} | [
{
"action": "opened",
"author": "a-thaler",
"comment_id": null,
"datetime": 1595600942000,
"masked_author": "username_0",
"text": "<!-- Thank you for your contribution. Before you submit the pull request:\r\n1. Follow contributing guidelines, templates, the recommended Git workflow, and any related documentation.\r\n2. Read and submit the required Contributor Licence Agreements (https://github.com/kyma-project/community/blob/master/contributing/02-contributing.md#agreements-and-licenses).\r\n3. Test your changes and attach their results to the pull request.\r\n4. Update the relevant documentation.\r\n-->\r\n\r\n**Description**\r\n\r\nChanges proposed in this pull request:\r\n\r\n- updated base image to latest debian version, see https://github.com/kyma-incubator/third-party-images/pull/26\r\n- ...\r\n- ...\r\n\r\n**Related issue(s)**\r\n<!-- If you refer to a particular issue, provide its number. For example, `Resolves #123`, `Fixes #43`, or `See also #33`. -->\r\nhttps://github.com/kyma-incubator/third-party-images/pull/26",
"title": "upgraded base image of fluentbit",
"type": "issue"
}
] | 2 | 2 | 1,795 | false | true | 876 | false |
RedisTimeSeries/RedisTimeSeries | RedisTimeSeries | 555,548,468 | 329 | null | [
{
"action": "opened",
"author": "averias",
"comment_id": null,
"datetime": 1580128889000,
"masked_author": "username_0",
"text": "As per documentation: \r\n`WITHLABELS - Include in the reply the label-value pairs that represent metadata labels of the time-series. If this argument is not set, by default, an empty Array will be replied on the labels array position.`\r\n\r\nBut TS.MRANGE and TS.MGET returns a labels array, not empty, when WITHLABELS flag is not provided (default behavior).\r\n\r\nExample:\r\n127.0.0.1:6379[14]> TS.ADD temperature:2:32 1548149180000 26 LABELS sensor_id 2 area_id 32\r\n(integer) 1548149180000\r\n127.0.0.1:6379[14]> TS.ADD temperature:2:32 1548149190000 27\r\n(integer) 1548149190000\r\n127.0.0.1:6379[14]> TS.ADD temperature:2:32 1548149200000 28\r\n(integer) 1548149200000\r\n127.0.0.1:6379[14]> TS.ADD temperature:2:32 1548149210000 29\r\n(integer) 1548149210000\r\n\r\n127.0.0.1:6379[14]> **TS.MRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 FILTER area_id=32 sensor_id!=1**\r\n```\r\n1) 1) \"temperature:2:32\"\r\n 2) 1) 1) \"sensor_id\"\r\n 2) \"2\"\r\n 2) 1) \"area_id\"\r\n 2) \"32\"\r\n 3) 1) 1) (integer) 1548149180000\r\n 2) \"26\"\r\n 2) 1) (integer) 1548149190000\r\n 2) \"27\"\r\n 3) 1) (integer) 1548149200000\r\n 2) \"28\"\r\n 4) 1) (integer) 1548149210000\r\n 2) \"29\"\r\n```\r\n\r\n\r\n127.0.0.1:6379[14]> **TS.MRANGE 1548149180000 1548149210000 AGGREGATION avg 5000 WITHLABELS FILTER area_id=32 sensor_id!=1**\r\n```\r\n1) 1) \"temperature:2:32\"\r\n 2) 1) 1) \"sensor_id\"\r\n 2) \"2\"\r\n 2) 1) \"area_id\"\r\n 2) \"32\"\r\n 3) 1) 1) (integer) 1548149180000\r\n 2) \"26\"\r\n 2) 1) (integer) 1548149190000\r\n 2) \"27\"\r\n 3) 1) (integer) 1548149200000\r\n 2) \"28\"\r\n 4) 1) (integer) 1548149210000\r\n 2) \"29\"\r\n```\r\n\r\n127.0.0.1:6379[14]> **TS.MGET FILTER area_id=32**\r\n```\r\n1) 1) \"temperature:2:32\"\r\n 2) 1) 1) \"sensor_id\"\r\n 2) \"2\"\r\n 2) 1) \"area_id\"\r\n 2) \"32\"\r\n 3) (integer) 1548149210000\r\n 4) \"29\"\r\n```\r\n\r\n127.0.0.1:6379[14]> **TS.MGET WITHLABELS FILTER area_id=32**\r\n```\r\n1) 1) \"temperature:2:32\"\r\n 
2) 1) 1) \"sensor_id\"\r\n 2) \"2\"\r\n 2) 1) \"area_id\"\r\n 2) \"32\"\r\n 3) (integer) 1548149210000\r\n 4) \"29\"\r\n```",
"title": "Is WITHLABELS flag not working in last release 1.2.1?",
"type": "issue"
},
{
"action": "created",
"author": "ashtul",
"comment_id": 578735768,
"datetime": 1580130022000,
"masked_author": "username_1",
"text": "@username_0 \r\nIt looks like you are not running the latest version. The return array for meet had changed a bit as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "averias",
"comment_id": 578739323,
"datetime": 1580130670000,
"masked_author": "username_0",
"text": "you're right @username_1, sorry my bad, I thought that latest docker image matched version 1.2.1",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "averias",
"comment_id": null,
"datetime": 1580130670000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 2,320 | false | false | 2,320 | true |
spring-projects/spring-security | spring-projects | 310,131,910 | 5,188 | null | [
{
"action": "opened",
"author": "rwinch",
"comment_id": null,
"datetime": 1522433452000,
"masked_author": "username_0",
"text": "<!--\r\nFor Security Vulnerabilities, please use https://pivotal.io/security#reporting\r\n-->\r\n\r\n### Summary\r\nAdd WebFlux WebSocket Support",
"title": "Add WebFlux WebSocket Support",
"type": "issue"
},
{
"action": "created",
"author": "bollywood-coder",
"comment_id": 377624155,
"datetime": 1522444234000,
"masked_author": "username_1",
"text": "Hi Rob,\r\nthank you so much for taking care of this. Security is a very crucial thing in any commercial application and we are hitting beta phase soon in our project when is 5.1.0 M2 being released, any estimates or dates you can make?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rwinch",
"comment_id": 377627813,
"datetime": 1522445448000,
"masked_author": "username_0",
"text": "The current estimate is June 25th. You can find updates at https://github.com/spring-projects/spring-security/milestone/106",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bollywood-coder",
"comment_id": 413677680,
"datetime": 1534452026000,
"masked_author": "username_1",
"text": "Any news when this is going to be implemented?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rwinch",
"comment_id": 413688003,
"datetime": 1534454333000,
"masked_author": "username_0",
"text": "@username_1 Unfortunately, no. There are other things taking priority at the moment.\r\n\r\nWhat might help to get things started is if someone could put together a simple sample application that can be used to validate requirements of authentication and authorization.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "maczikasz",
"comment_id": 484076044,
"datetime": 1555506343000,
"masked_author": "username_2",
"text": "Any update here? It's been quite a few months since",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "voliveira89",
"comment_id": 493856506,
"datetime": 1558334211000,
"masked_author": "username_3",
"text": "Any update on this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rwinch",
"comment_id": 494540162,
"datetime": 1558469511000,
"masked_author": "username_0",
"text": "Unfortunately there are no updates. It is likely something that would be in 5.3.x",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "BharathKumarRavichandran",
"comment_id": 649012196,
"datetime": 1593025838000,
"masked_author": "username_4",
"text": "Any updates on this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "BharathKumarRavichandran",
"comment_id": 649013753,
"datetime": 1593026025000,
"masked_author": "username_4",
"text": "Any update on this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rwinch",
"comment_id": 651192972,
"datetime": 1593444403000,
"masked_author": "username_0",
"text": "No updates",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "phnxfire",
"comment_id": 855016877,
"datetime": 1622843136000,
"masked_author": "username_5",
"text": "Any update on this?",
"title": null,
"type": "comment"
}
] | 6 | 12 | 1,027 | false | false | 1,027 | true |
qiime2/docs | qiime2 | 624,089,742 | 463 | {
"number": 463,
"repo": "docs",
"user_login": "qiime2"
} | [
{
"action": "opened",
"author": "hmaru",
"comment_id": null,
"datetime": 1590389879000,
"masked_author": "username_0",
"text": "Hi,\r\n\r\nPerforming the Parkinson's Mouse Tutorial, I found incorrect cage numbers in two question blocks regarding beta-diversity significance. The cage number should be C35 instead of C32, according to the metadata.\r\n\r\nHugo",
"title": "Update pd-mice.rst",
"type": "issue"
},
{
"action": "created",
"author": "jwdebelius",
"comment_id": 633661838,
"datetime": 1590427816000,
"masked_author": "username_1",
"text": "@thermokarst, @username_0, @username_2, it looks good to merge from my perspective.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nbokulich",
"comment_id": 633664099,
"datetime": 1590428331000,
"masked_author": "username_2",
"text": "Thank you @username_0 and @username_1 !",
"title": null,
"type": "comment"
}
] | 3 | 3 | 334 | false | false | 334 | true |
JuliaRegistries/General | JuliaRegistries | 645,821,799 | 16,973 | {
"number": 16973,
"repo": "General",
"user_login": "JuliaRegistries"
} | [
{
"action": "opened",
"author": "jlbuild",
"comment_id": null,
"datetime": 1593115866000,
"masked_author": "username_0",
"text": "Autogenerated JLL package registration\n\n* Registering JLL package Expat_jll.jl\n* Repository: https://github.com/JuliaBinaryWrappers/Expat_jll.jl\n* Version: v2.2.7+3",
"title": "New version: Expat_jll v2.2.7+3",
"type": "issue"
}
] | 2 | 2 | 564 | false | true | 164 | false |
feathersjs-ecosystem/feathers-mongodb | feathersjs-ecosystem | 621,662,282 | 182 | null | [
{
"action": "opened",
"author": "ekahannes",
"comment_id": null,
"datetime": 1589971749000,
"masked_author": "username_0",
"text": "If pagination is activated and a geoQuery is used like $near the following server error is send to the client:\r\n`$geoNear, $near, and $nearSphere are not allowed in this context`\r\n\r\nI tracked the issue down to the following lines (169-171) in the `index.js`:\r\n```\r\n if (paginate && paginate.default) {\r\n return this.Model.countDocuments(query).then(runQuery);\r\n }\r\n```\r\n\r\nMy setup is the following:\r\nModule Version: 6.1.0\r\nNodeJS: v12.13.1\r\nMacOS: 10.15.3",
"title": "$near query with pagination",
"type": "issue"
}
] | 2 | 2 | 885 | false | true | 466 | false |
VirgilSecurity/virgil-sdk-javascript | VirgilSecurity | 627,578,595 | 73 | null | [
{
"action": "opened",
"author": "camhart",
"comment_id": null,
"datetime": 1590791350000,
"masked_author": "username_0",
"text": "Attempting to use e3kit-node with electron v8.3.0 in the background process (non renderer process, so it acts like nodejs). I get this error when I attempt to initialize eThree.\r\n\r\n```\r\nTypeError: fetch is not a function\r\n at eval (webpack:///./node_modules/fetch-ponyfill/fetch-node.js?:13:12)\r\n at Connection.send (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:912:16)\r\n at Connection.post (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:906:21)\r\n at CardClient.eval (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:1003:52)\r\n at Generator.next (<anonymous>)\r\n at eval (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:587:71)\r\n at new Promise (<anonymous>)\r\n at __awaiter (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:583:12)\r\n at CardClient.searchCards (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:1002:16)\r\n at CardManager.eval (webpack:///./node_modules/virgil-sdk/dist/virgil-sdk.es.js?:1388:157)\r\n```",
"title": "Electron v8.3.0, TypeError: fetch is not a function",
"type": "issue"
},
{
"action": "created",
"author": "camhart",
"comment_id": 636222400,
"datetime": 1590791813000,
"masked_author": "username_0",
"text": "Appears to be a bug with fetch-ponyfill... see https://github.com/qubyte/fetch-ponyfill/pull/239/files",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "camhart",
"comment_id": 636457850,
"datetime": 1590924278000,
"masked_author": "username_0",
"text": "Please update package 'fetch-ponyfill' to version v6.1.1 and this will be fixed. It's due to an issue combining webpack w/ nodejs (which electron does).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "camhart",
"comment_id": 636458204,
"datetime": 1590924462000,
"masked_author": "username_0",
"text": "PR is up: https://github.com/VirgilSecurity/virgil-sdk-javascript/pull/74",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xlwknx",
"comment_id": 637054206,
"datetime": 1591039278000,
"masked_author": "username_1",
"text": "Hello @username_0 !\r\nI've updated fetch-ponyfill to v6.1.1 in virgil-sdk [v6.1.2 ](https://www.npmjs.com/package/virgil-sdk/v/6.1.2).\r\ne3kit-node should grab this version in `npm install` process if you delete node_modules and lock files.\r\nPlease report your results to make sure is everything ok!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "camhart",
"comment_id": 637413723,
"datetime": 1591089909000,
"masked_author": "username_0",
"text": "Thanks for this! I need a couple days to verify. Some other stuff came up.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "camhart",
"comment_id": 640016017,
"datetime": 1591434133000,
"masked_author": "username_0",
"text": "Fixed! Thanks.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "camhart",
"comment_id": null,
"datetime": 1591434133000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 8 | 1,733 | false | false | 1,733 | true |
kubernetes-sigs/cluster-api | kubernetes-sigs | 472,945,395 | 1,198 | null | [
{
"action": "opened",
"author": "vincepri",
"comment_id": null,
"datetime": 1564070162000,
"masked_author": "username_0",
"text": "/kind feature\r\n\r\n**Describe the solution you'd like**\r\nThis is a counter-proposal to #1187. For v1alpha2 we can remove all the pivoting logic and only support management clusters. If this issue is accepted, the CLI becomes a toolkit of alpha phases to create and delete clusters without higher level abstraction",
"title": "Remove pivoting logic from clusterctl",
"type": "issue"
},
{
"action": "created",
"author": "moshloop",
"comment_id": 515177283,
"datetime": 1564082035000,
"masked_author": "username_1",
"text": "Is a pivot not conceptually the same as a backup and restore? \r\n\r\nI am in favour of removing both pivot and bootstrapping so the existing pivot workflow would be transformed to:\r\n\r\n1) Create kind/minukube/management cluster (out of band)\r\n2) `clusterctl create cluster`\r\n3) Backup and restore CAP* objects to target cluster (out of band)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 515177892,
"datetime": 1564082142000,
"masked_author": "username_0",
"text": "/area clusterctl",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "detiber",
"comment_id": 515191767,
"datetime": 1564084660000,
"masked_author": "username_2",
"text": "@username_1 pivot is indeed basically a backup/restore, but that backup/restore needs to be done in a certain order (and ensuring that only a single set of controllers is operating for a given set of objects) to ensure a successful migration.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sethp-nr",
"comment_id": 515229255,
"datetime": 1564091654000,
"masked_author": "username_3",
"text": "We're pretty active users of the pivot functionality, and so if it were removed from `clusterctl` that would at least delay our adoption of the v1alpha2 components.\r\n\r\nWe really like the conceptual fit for our users (use `kubectl` to scale your cluster!) and the failure isolation it provides (if you scale all the machine deployments to zero, only your cluster will be affected!). We'd need to take some time to figure out what to do to preserve/recover those benefits.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncdc",
"comment_id": 517768354,
"datetime": 1564764070000,
"masked_author": "username_4",
"text": "Google doc for a gap analysis: https://docs.google.com/document/d/1YWO6nyLToSn7vuwLfbMpvXKU7IVO8FhFApRZIdFShH0/edit#",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 520598671,
"datetime": 1565644564000,
"masked_author": "username_0",
"text": "Closing in favor of #1187\r\n\r\n/close",
"title": null,
"type": "comment"
}
] | 6 | 9 | 1,882 | false | true | 1,525 | true |
intel/dffml | intel | 666,794,974 | 814 | {
"number": 814,
"repo": "dffml",
"user_login": "intel"
} | [
{
"action": "opened",
"author": "aghinsa",
"comment_id": null,
"datetime": 1595916255000,
"masked_author": "username_0",
"text": "fixes #810",
"title": "examples:chatbot: Documentation updates",
"type": "issue"
}
] | 2 | 2 | 10 | false | true | 10 | false |
curioswitch/curiostack | curioswitch | 298,517,721 | 70 | {
"number": 70,
"repo": "curiostack",
"user_login": "curioswitch"
} | [
{
"action": "opened",
"author": "chokoswitch",
"comment_id": null,
"datetime": 1519117703000,
"masked_author": "username_0",
"text": "",
"title": "Add an armeria-based cloud storage client.",
"type": "issue"
},
{
"action": "created",
"author": "curioswitch-role",
"comment_id": 366914095,
"datetime": 1519117870000,
"masked_author": "username_1",
"text": "Build succeded. If you have approval, you're ready to merge!\n\nLogs:\nhttps://console.cloud.google.com/gcr/builds/9ce0a116-e45f-4e8c-975b-2457dc59d0da?project=curioswitch-cluster",
"title": null,
"type": "comment"
}
] | 2 | 2 | 176 | false | false | 176 | false |
morrisjames/morrisjames.github.io | null | 643,181,676 | 5 | null | [
{
"action": "opened",
"author": "morrisjames",
"comment_id": null,
"datetime": 1592840862000,
"masked_author": "username_0",
"text": "Following things must be added in the page : \r\n\r\n-> A related image just below navigation bar\r\n-> Add various types furniture [Use bootstrap grid layout to arrange these items ]",
"title": "Add content in furniture product page",
"type": "issue"
},
{
"action": "created",
"author": "piyushnanwani",
"comment_id": 648541766,
"datetime": 1592965165000,
"masked_author": "username_1",
"text": "https://github.com/username_0/username_0.github.io/pull/17",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "piyushnanwani",
"comment_id": null,
"datetime": 1592965165000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "piyushnanwani",
"comment_id": null,
"datetime": 1592965174000,
"masked_author": "username_1",
"text": "Following things must be added in the page : \r\n\r\n-> A related image just below navigation bar\r\n-> Add various types furniture [Use bootstrap grid layout to arrange these items ]",
"title": "Add content in furniture product page",
"type": "issue"
},
{
"action": "closed",
"author": "piyushnanwani",
"comment_id": null,
"datetime": 1592965406000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 414 | false | false | 414 | true |
FaridSafi/react-native-gifted-chat | null | 352,224,275 | 949 | null | [
{
"action": "opened",
"author": "MingFaiYau",
"comment_id": null,
"datetime": 1534785518000,
"masked_author": "username_0",
"text": "I found that current branch is using FlatList , but 0.4.3 version still using Listview.\r\nFLatlist is better than Listivew in my case because of large messages and some handling.\r\n\r\nCan we install the current branch one? and how?\r\nOr need to update/modify the MessageContainer myself? \r\n\r\nThanks a lit",
"title": "How can i install the flatlist one?",
"type": "issue"
}
] | 1 | 1 | 300 | false | false | 300 | false |
griffithlab/civicpy | griffithlab | 537,420,315 | 64 | null | [
{
"action": "opened",
"author": "fanyucai1",
"comment_id": null,
"datetime": 1576226751000,
"masked_author": "username_0",
"text": "hi doctor, my script as follows:\r\n##################################\r\nfrom civicpy import civic, exports\r\nwith open('civic_variants.vcf', 'w', newline='') as file:\r\n w = exports.VCFWriter(file)\r\n all_variants = civic.get_all_variants()\r\n w.addrecords(all_variants)\r\n w.writerecords()\r\n###########################the erro:\r\nTraceback (most recent call last):\r\n File \"VEP.py\", line 1, in <module>\r\n from civicpy import civic, exports\r\nImportError: cannot import name 'exports'\r\n\r\nwhat should I do?",
"title": "run erro",
"type": "issue"
},
{
"action": "created",
"author": "ahwagner",
"comment_id": 565761968,
"datetime": 1576365868000,
"masked_author": "username_1",
"text": "Hi @username_0. Would you kindly provide us with the version of CIViCpy you are using that produced this error?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fanyucai1",
"comment_id": 565917707,
"datetime": 1576476038000,
"masked_author": "username_0",
"text": "hi doctor, I change my python from 3.6 to 3.7. The erro changed as follows:\r\nWARNING:root:Getting all genes. This may take a couple of minutes...\r\nTraceback (most recent call last):\r\n File \"test.py\", line 6, in <module>\r\n w.addrecords(all_variants)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 122, in addrecords\r\n self.addrecord(record)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 112, in addrecord\r\n self.addrecord(evidence)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 114, in addrecord\r\n valid = self._validate_evidence_record(civic_record)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 235, in _validate_evidence_record\r\n valid = self._validate_sequence_variant(variant)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 272, in _validate_sequence_variant\r\n valid = self._validate_coordinates(variant, types)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/exports.py\", line 332, in _validate_coordinates\r\n raise NotImplementedError(f'No logic to handle {variant_type.name} {variant}')\r\nNotImplementedError: No logic to handle transcript_fusion <CIViC variant 2663>\r\n#####################\r\nwhat should I do?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ahwagner",
"comment_id": 565923492,
"datetime": 1576477610000,
"masked_author": "username_1",
"text": "To help you troubleshoot this problem, please provide the version _of CIViCpy_ that you are using. You may find this by using `pip freeze | grep civicpy` in your shell.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fanyucai1",
"comment_id": 566444356,
"datetime": 1576572597000,
"masked_author": "username_0",
"text": "civicpy==0.0.3.post1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "susannasiebert",
"comment_id": 566567133,
"datetime": 1576593289000,
"masked_author": "username_2",
"text": "Please upgrade to version `1.0.0rc2` by running `pip install civicpy==1.0.0rc2`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fanyucai1",
"comment_id": 566856829,
"datetime": 1576641484000,
"masked_author": "username_0",
"text": "the erro as follows:\r\nTraceback (most recent call last):\r\n File \"get_civicpy.py\", line 5, in <module>\r\n all_variants = civic.get_all_variants()\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/civic.py\", line 820, in get_all_variants\r\n variants = _get_elements_by_ids('variant', allow_cached=allow_cached, get_all=True)\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/civic.py\", line 661, in _get_elements_by_ids\r\n load_cache()\r\n File \"/usr/local/lib/python3.7/site-packages/civicpy/civic.py\", line 168, in load_cache\r\n old_cache = pickle.load(pf)\r\n_pickle.UnpicklingError: pickle data was truncated",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fanyucai1",
"comment_id": 566913165,
"datetime": 1576654309000,
"masked_author": "username_0",
"text": "I find one file called: cache.pkl,where can I download this file?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ahwagner",
"comment_id": 630407049,
"datetime": 1589832295000,
"masked_author": "username_1",
"text": "Closing this issue; stale. For reference, a fresh cache.pkl file should be automatically grabbed from live CIViC in civicpy v1.0 and above.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ahwagner",
"comment_id": null,
"datetime": 1589832295000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 10 | 3,019 | false | false | 3,019 | true |
rchain/rchain | rchain | 491,577,049 | 2,720 | {
"number": 2720,
"repo": "rchain",
"user_login": "rchain"
} | [
{
"action": "opened",
"author": "ArturGajowy",
"comment_id": null,
"datetime": 1568110061000,
"masked_author": "username_0",
"text": "## Overview\r\n<sup>_What this PR does, and why it's needed_</sup>\r\n\r\n@dzajkowski asked about the `deployParametersRef: Ref[F, DeployParameters]` in Runtime, and it occurred to me we'll be able to get rid of it *entirely* once we migrate the two remaining usages of deployParameters.userId (unsafe!) with `rho:rchain:deployerId` (those two are: in PoS.rhox, and in RevVault.rho due to the former). This PR is a couple quick preparatory / stabilising steps - removing what's unused, so that it doesn't begin to be used.\r\n\r\n### JIRA ticket:\r\n<sup>_Create it if there isn't one already._</sup>\r\n\r\n\r\n\r\n### Notes\r\n<sup>_Optional. Add any notes on caveats, approaches you tried that didn't work, or anything else._</sup>\r\n\r\n\r\n\r\n### Please make sure that this PR:\r\n- [x] is at most 200 lines of code (excluding tests),\r\n- [x] meets [RChain development coding standards](https://rchain.atlassian.net/wiki/spaces/DOC/pages/28082177/Coding+Standards),\r\n- [x] includes tests for all added features,\r\n- [x] has a reviewer assigned,\r\n- [x] has [all commits signed](https://rchain.atlassian.net/wiki/spaces/DOC/pages/498630673/How+to+sign+commits+to+rchain+rchain).\r\n\r\n### [Bors](https://bors.tech/) cheat-sheet:\r\n\r\n- `bors r+` runs integration tests and merges the PR (if it's approved),\r\n- `bors try` runs integration tests for the PR,\r\n- `bors delegate+` enables non-maintainer PR authors to run the above.",
"title": "Trim down deploy parameters",
"type": "issue"
},
{
"action": "created",
"author": "ArturGajowy",
"comment_id": 529867640,
"datetime": 1568110074000,
"masked_author": "username_0",
"text": "bors try",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ArturGajowy",
"comment_id": 529996849,
"datetime": 1568130078000,
"masked_author": "username_0",
"text": "bors r+",
"title": null,
"type": "comment"
}
] | 2 | 5 | 1,626 | false | true | 1,408 | false |
benkeser/slapnap | null | 603,384,277 | 16 | null | [
{
"action": "opened",
"author": "bdwilliamson",
"comment_id": null,
"datetime": 1587401513000,
"masked_author": "username_0",
"text": "@username_1:\r\n\r\nWhen the number of elements in a class for a dichotomous outcome is <= the number of folds, we'll run into lots of numerical instability: we're running stratified CV, which means that each fold will have <= 1 of the small class. This leads to even worse problems for the CV-superlearner, in which case we may not have any of the small class. It also led to a fatal error message when computing variable importance since I split the sample in two prior to doing importance computation.\r\n\r\nMy gut reaction is to print an error message and don't run the analysis for any outcome with <= V in one class. Also, for any outcome with <= V in one class after sample-splitting, print an error message and don't run variable importance for that outcome. Thoughts?",
"title": "Errors when number of elements in class is <= number of folds (dichotomous outcome)",
"type": "issue"
},
{
"action": "created",
"author": "benkeser",
"comment_id": 616680239,
"datetime": 1587401587000,
"masked_author": "username_1",
"text": "Agreed. I would throw an error early on.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bdwilliamson",
"comment_id": 616849267,
"datetime": 1587422456000,
"masked_author": "username_0",
"text": "Fixed in #17",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "bdwilliamson",
"comment_id": null,
"datetime": 1587422457000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 819 | false | false | 819 | true |
openwsn-berkeley/openwsn-fw | openwsn-berkeley | 354,701,614 | 436 | {
"number": 436,
"repo": "openwsn-fw",
"user_login": "openwsn-berkeley"
} | [
{
"action": "opened",
"author": "chris-ho",
"comment_id": null,
"datetime": 1535458066000,
"masked_author": "username_0",
"text": "we found an issue which disabled the interrupts in the schedule and didn't reenabled them in all cases, that led to the stuck timers",
"title": "FW-737 ",
"type": "issue"
},
{
"action": "created",
"author": "changtengfei",
"comment_id": 416568181,
"datetime": 1535459939000,
"masked_author": "username_1",
"text": "@username_0 thanks for finding this bug! With this PR, am I able to close the FW-737? In another word, does it fix the issue described in FW-737?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chris-ho",
"comment_id": 416585963,
"datetime": 1535463289000,
"masked_author": "username_0",
"text": "it improves it but does not solve it completely",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "changtengfei",
"comment_id": 416587054,
"datetime": 1535463468000,
"masked_author": "username_1",
"text": "Then there will be two fixes for this issue. I will merge the current one first. You can create another when you found the bug. Thanks!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 457 | false | false | 457 | true |
Visual-Regression-Tracker/Visual-Regression-Tracker | Visual-Regression-Tracker | 654,054,272 | 67 | null | [
{
"action": "opened",
"author": "pashidlos",
"comment_id": null,
"datetime": 1594300740000,
"masked_author": "username_0",
"text": "",
"title": "Variation details. Go to test button contains not formated date",
"type": "issue"
},
{
"action": "created",
"author": "shankybnl",
"comment_id": 718132512,
"datetime": 1603910413000,
"masked_author": "username_1",
"text": "@username_0 In case if this is still a valid issue and can be assigned, I would like to try it as my first issue.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pashidlos",
"comment_id": 718208979,
"datetime": 1603919199000,
"masked_author": "username_0",
"text": "@username_1 assigned to you \r\nlet me know if you need any help with initial set up!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "shankybnl",
"comment_id": 718339112,
"datetime": 1603942722000,
"masked_author": "username_1",
"text": "Thank you @username_0. Could you please share the steps to reproduce this bug and the expected behavior?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pashidlos",
"comment_id": 718596372,
"datetime": 1603966714000,
"masked_author": "username_0",
"text": "precondition:\r\n- at least one accepted baseline exists\r\n\r\nsteps:\r\n- open homepage\r\n- open project Variations\r\n- open Variation history (direct link `/variations/details/<variation_id>`)\r\n\r\nexpected result:\r\n- date is formatted the same way as on Project list page `/projects` \r\n<img width=\"906\" alt=\"1\" src=\"https://user-images.githubusercontent.com/5182956/97555469-d44b3780-19e0-11eb-862d-c25194972206.png\">",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pashidlos",
"comment_id": 732415677,
"datetime": 1606164240000,
"masked_author": "username_0",
"text": "gonna be released in `4.6.0`\r\n\r\nthank you for contribution!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pashidlos",
"comment_id": null,
"datetime": 1606164240000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 7 | 870 | false | false | 870 | true |
LLNL/serac | LLNL | 657,650,054 | 78 | {
"number": 78,
"repo": "serac",
"user_login": "LLNL"
} | [
{
"action": "opened",
"author": "white238",
"comment_id": null,
"datetime": 1594846582000,
"masked_author": "username_0",
"text": "* Adds SLIC logging functionality\r\n* Add an exit function that cleans stuff up (serac::exit_gracefully(bool error=false)\r\n* Starts using the new serac C++ namespace\r\n\r\nI haven't gone through any of the code other than the driver yet but wanted to open this up for discussion. This can and should be iterated over. There is a lot of formatting of information in SLIC calls that can be done.\r\n\r\n**Example error** (notice the automatic and human readable stacktrace, filename and line number, so fancy):\r\n\r\n```\r\n[ERROR (/usr/WS2/username_0/serac/repo/src/drivers/serac.cpp:130)]\r\n\r\nCan not open mesh file: fake\r\n** StackTrace of 4 frames **\r\nFrame 1: bin/serac() [0x95bf93]\r\nFrame 2: bin/serac() [0x40bfd7]\r\nFrame 3: __libc_start_main\r\nFrame 4: bin/serac() [0x40b5a4]\r\n=====\r\n```\r\n\r\n**Example info message:**\r\n\r\n```\r\n[INFO]: Opening mesh file: fake\r\n```\r\n\r\nIf you run with more than one rank, a limited amount of ranks will show up at the beginning of the line like this:\r\n\r\n```\r\n[0,1,2][INFO]: Opening mesh file: fake\r\n```",
"title": "Add basic logging functionality",
"type": "issue"
},
{
"action": "created",
"author": "joshessman-llnl",
"comment_id": 659116537,
"datetime": 1594865991000,
"masked_author": "username_1",
"text": "Attempting to reopen, renaming of `master` branch to `main` triggered automatic closure of PR.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "white238",
"comment_id": 659784733,
"datetime": 1594951008000,
"masked_author": "username_0",
"text": "I noticed this might be confusing what is the actual warning message:\r\n\r\n```\r\n[WARNING (/usr/WS2/username_0/serac/repo/src/solvers/nonlinear_solid_solver.cpp:175)]\r\nAttempting to use BoomerAMG with nodal ordering.\r\nstep 1, t = 0.25\r\nNewton iteration 0 : ||r|| = 0.000125\r\nNewton iteration 1 : ||r|| = 0.0453958, ||r||/||r_0|| = 363.166\r\nNewton iteration 2 : ||r|| = 0.000147888, ||r||/||r_0|| = 1.18311\r\nNewton iteration 3 : ||r|| = 2.50446e-08, ||r||/||r_0|| = 0.000200356\r\n```\r\n\r\nThoughts on how to clarify that \"Attempting ... ordering\" is the warning? [ENDWARNING]? extra newline?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jamiebramwell",
"comment_id": 659812020,
"datetime": 1594955404000,
"masked_author": "username_2",
"text": "I think an extra newline would be sufficient.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jamiebramwell",
"comment_id": 660381501,
"datetime": 1595029028000,
"masked_author": "username_2",
"text": "LGTM",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "white238",
"comment_id": 660384176,
"datetime": 1595029803000,
"masked_author": "username_0",
"text": "LGTM",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "white238",
"comment_id": 661314210,
"datetime": 1595276671000,
"masked_author": "username_0",
"text": "LGTM",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "white238",
"comment_id": 661396574,
"datetime": 1595284858000,
"masked_author": "username_0",
"text": "As a first cut of the SLIC logging functionality this is good to go. @samuelpmishLLNL or @username_1 can one or both of you review this and voice your opinion?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "joshessman-llnl",
"comment_id": 662129789,
"datetime": 1595368989000,
"masked_author": "username_1",
"text": "LGTM",
"title": null,
"type": "comment"
}
] | 3 | 9 | 1,926 | false | false | 1,926 | true |
sdispater/poetry | null | 495,484,077 | 1,388 | null | [
{
"action": "opened",
"author": "nathan5280",
"comment_id": null,
"datetime": 1568845623000,
"masked_author": "username_0",
"text": "<!--\r\n Hi there! Thank you for wanting to make Poetry better.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [X] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [X] I have searched the [documentation](https://poetry.eustace.io/docs/) and believe that my question is not covered.\r\n\r\n## Feature Request\r\n<!-- Now feel free to write your idea for improvement. Thanks again 🙌 ❤️ -->\r\nI just got started with Poetry today and added a private repository \"[[tool.poetry.source]]\" to the pyproject.toml file. I didn't really think about it and added it right to the top of the file. When I run \"poetry version\" it reports that it is updating the version, \"Bumping version from 0.4.2-alpha.0 to 0.4.2\", but it doesn't update the file.\r\n\r\nRequest:\r\n1) If it is truly invalid to put this section at the top then an error should be reported.\r\n2) If it is valid to put this section at the top and for some reason Poetry has some other problem updating the version and writing the file it would be good to fix this.",
"title": "Error message when version doesn't update pyproject.toml",
"type": "issue"
},
{
"action": "created",
"author": "etijskens",
"comment_id": 532958945,
"datetime": 1568866822000,
"masked_author": "username_1",
"text": "I had similar problem caused by the order of entries in the pyproject.toml file which went without being reported as an error in poetry.\r\nsee issue https://github.com/sdispater/poetry/issues/1182\r\n\r\nKindest regards,\r\n\r\nDr [Engel]bert Tijskens - HPC Analyst/Consultant\r\nFlemish Supercomputer Center\r\nUniversity of Antwerp / Computational mathematics\r\n++32 3 265 3879, ++32 494 664408",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nathan5280",
"comment_id": 533122465,
"datetime": 1568898493000,
"masked_author": "username_0",
"text": "Well I feel good that I only spent hours on this. I'll close this issue and point to #1182.\r\n\r\nThis issue seems to fit a number of related issues where the order of the entries in the pyproject.toml impact whether or not the file is updated by \"poetry version\". See Issue [#1182](https://github.com/sdispater/poetry/issues/1182).",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "nathan5280",
"comment_id": null,
"datetime": 1568898493000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "etijskens",
"comment_id": 533221772,
"datetime": 1568912580000,
"masked_author": "username_1",
"text": "That makes me feel good that after several days of debugging I was able to pin down the problem and that I posted this :)\r\nThis is a very counterintuitive bug\r\n\r\nKindest regards,\r\n\r\nDr [Engel]bert Tijskens - HPC Analyst/Consultant\r\nFlemish Supercomputer Center\r\nUniversity of Antwerp / Computational mathematics\r\n++32 3 265 3879, ++32 494 664408",
"title": null,
"type": "comment"
}
] | 2 | 5 | 2,344 | false | false | 2,344 | false |
blackflux/lambda-serverless-api | blackflux | 494,894,072 | 1,120 | {
"number": 1120,
"repo": "lambda-serverless-api",
"user_login": "blackflux"
} | [
{
"action": "opened",
"author": "simlu",
"comment_id": null,
"datetime": 1568761010000,
"masked_author": "username_0",
"text": "",
"title": "fix: multiValueHeaders now lower cased",
"type": "issue"
},
{
"action": "created",
"author": "MrsFlux",
"comment_id": 532438867,
"datetime": 1568762534000,
"masked_author": "username_1",
"text": ":tada: This PR is included in version 6.2.1 :tada:\n\nThe release is available on:\n- [npm package (@latest dist-tag)](https://www.npmjs.com/package/lambda-serverless-api)\n- [GitHub release](https://github.com/blackflux/lambda-serverless-api/releases/tag/v6.2.1)\n\nYour **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:",
"title": null,
"type": "comment"
}
] | 2 | 2 | 364 | false | false | 364 | false |
swarmlet/swarmlet | swarmlet | 610,737,776 | 10 | null | [
{
"action": "opened",
"author": "zulhfreelancer",
"comment_id": null,
"datetime": 1588337820000,
"masked_author": "username_0",
"text": "As per the title, I’m curious to know if this project supports horizontal auto-scaling out of the box i.e. when all physical servers couldn’t hold new replicas demand by the master/controller.\n\nIf it’s supported, where is the doc/reference for it?",
"title": "Does swarmlet has auto-scaling feature?",
"type": "issue"
},
{
"action": "created",
"author": "woudsma",
"comment_id": 622382688,
"datetime": 1588338803000,
"masked_author": "username_1",
"text": "Thanks for asking!\r\nAs of right now Swarmlet does not have any functionality to enable auto-scaling.\r\n\r\nEverything is still WIP, to enable auto-scaling we would have to add a feature/service that communicates with the API of your cloud provider (AWS/DigitalOcean/Vultr/...), to create/destroy resources.\r\nI'm sure it's possible to do this.\r\nThe first thing that comes to mind is how to setup auto-scaling from a user perspective.\r\n\r\nHow would you like to configure auto-scaling? You will need to enter and save the cloud provider API key somewhere in the swarm. Maybe via a service with UI hosted on the swarm? I haven't really worked with auto-scaling setups before, but I assume you would want to configure the scaling behaviour as well. This might actually be possible by creating a service, `autoscaler` for example.\r\n\r\nThe built-in services that run after installation are the `deployments`, `loadbalancer`, `swarmpit` and `matomo` services. `autoscaler` could theoretically work in that template.\r\nCheck out the `deployments` service for example:\r\nhttps://github.com/swarmlet/swarmlet/tree/master/services\r\n\r\nLet me know what you think, or if you have comments/suggestions!\r\nPR's are most welcome as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zulhfreelancer",
"comment_id": 622401918,
"datetime": 1588341953000,
"masked_author": "username_0",
"text": "Thank you for the clarification, @username_1. To answer the question, since swarmlet doesn't has CLI on client side (correct me if I'm wrong), I think the options are:\r\n\r\n- Configure the cloud secrets using web console\r\n- Or, by providing a flat file (YAML?) during cluster setup\r\n\r\nWRT configuring the scaling behaviour, I'm not sure if _docker-compose.yml_ supports min and max replicas or not. If it does, I think we can take those values. Otherwise, maybe we need to require another file to be provided for service that needs to be auto-scaled.\r\n\r\nAs for the number of replicas, min and max is just an example here. I'm proposing that for those who are on tight budget and paranoid if their cloud provider will burn their wallet fast. Maybe another possible value would be `auto` where the minimum should always be one container and there is no maximum — ideal for those running production workload.\r\n\r\nIn terms of triggers, I think avg CPU and RAM usage would be nice to start. Disk ops, network, I/O & others can be in later phase.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "woudsma",
"comment_id": 622426882,
"datetime": 1588345849000,
"masked_author": "username_1",
"text": "This definitely sounds like a good idea, thanks for your input.\r\n\r\nWith configuration in web console, do you mean a web console / UI hosted on the swarm itself?\r\nIt might be worth doing some quick research if there already exists a framework (with UI) dedicated to managing and configuring autoscaling. If such a tool exists (think of something like Portainer or Sentry), it would be really easy to deploy it on the swarm.\r\n\r\nYou can give a service as much control over the swarm as you want by making the Docker socket available to the service container(s) for example. And if this imaginary autoscaling service already has a Docker image, we can just define the `autoscaler` app stack in a single `docker-compose.yml`. For example something like this: https://hub.docker.com/r/bitnami/cluster-autoscaler/\r\n\r\nI'm not sure if the user wants to mess with YAML configuration files during cluster setup. I kind of like a single command (with optional options) to install Swarmlet. I'd like to avoid adding complexity during/before the initial setup.\r\n\r\nAs for the details you mentioned such as triggers and min/max replicas, I have to do some research. No experience on that topic yet.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "woudsma",
"comment_id": null,
"datetime": 1603886649000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "woudsma",
"comment_id": 717888471,
"datetime": 1603886649000,
"masked_author": "username_1",
"text": "Closing this issue, as there are no plans to implement this feature.",
"title": null,
"type": "comment"
}
] | 2 | 6 | 3,742 | false | false | 3,742 | true |
jwplayer/jwplayer | jwplayer | 203,787,668 | 1,777 | null | [
{
"action": "opened",
"author": "trinvh",
"comment_id": null,
"datetime": 1485578789000,
"masked_author": "username_0",
"text": "I'm using 7.9.0. I can not change captions position into top of player (default is bottom). How can do that ?\r\nWhen inspect elements, I don't see captions text in DOM.\r\n\r\nAny help will be appreciated.",
"title": "How to change captions position",
"type": "issue"
},
{
"action": "created",
"author": "trinvh",
"comment_id": 275833209,
"datetime": 1485589084000,
"masked_author": "username_0",
"text": "I solved problem myself.\r\nFor HTML5 provider, set \r\n\r\n`renderNatively = false && ...`",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "johnBartos",
"comment_id": null,
"datetime": 1485815927000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 285 | false | false | 285 | false |
serlo/serlo.org | serlo | 467,215,955 | 12 | {
"number": 12,
"repo": "serlo.org",
"user_login": "serlo"
} | [
{
"action": "opened",
"author": "inyono",
"comment_id": null,
"datetime": 1562905941000,
"masked_author": "username_0",
"text": "- [ ] Update README before merging",
"title": "refactor: build docker images without local dev environment",
"type": "issue"
},
{
"action": "created",
"author": "inyono",
"comment_id": 510836023,
"datetime": 1562927082000,
"masked_author": "username_0",
"text": "With this PR, building images works without any local environment (except that we use `yarn` as a task runner). This is mostly so that the production images don't rely on our local environment (that might be different depending on OS). For development, we still need `yarn` for most tasks (e.g. running tests) and the `workflow` part of the docker-compose relies on local node_modules (If really needed/desired, we can install that in the container, too, but I'd rather avoid additional abstractions)",
"title": null,
"type": "comment"
}
] | 1 | 2 | 534 | false | false | 534 | false |
DAzVise/modmail-plugins | null | 522,748,788 | 673 | null | [
{
"action": "opened",
"author": "sdfkjosfsbf",
"comment_id": null,
"datetime": 1573725651000,
"masked_author": "username_0",
"text": "",
"title": "Hi There",
"type": "issue"
},
{
"action": "closed",
"author": "DAzVise",
"comment_id": null,
"datetime": 1573749056000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 0 | false | false | 0 | false |
lunarmodules/luasocket | lunarmodules | 551,160,856 | 294 | {
"number": 294,
"repo": "luasocket",
"user_login": "lunarmodules"
} | [
{
"action": "opened",
"author": "mbartlett21",
"comment_id": null,
"datetime": 1579227733000,
"masked_author": "username_0",
"text": "This pull request allows users to call `socket:receive()` without the parameter having to start with an asterisk. It also adds the `L` option, which includes the end of line marker.",
"title": "Socker:receive(\"l\") without asterisk, `L` option",
"type": "issue"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1075405292,
"datetime": 1647969012000,
"masked_author": "username_1",
"text": "Is this safe to use on different systems? unix vs windows, with single charcter lineends versus windows CrLf ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1075405791,
"datetime": 1647969048000,
"masked_author": "username_1",
"text": "btw the \"L\" option is useful to be in line with newer Lua versions",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mbartlett21",
"comment_id": 1076362895,
"datetime": 1648041335000,
"masked_author": "username_0",
"text": "@username_1 \r\n\r\nYes, it is. It currently just drops any CRs.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mbartlett21",
"comment_id": 1076367994,
"datetime": 1648041641000,
"masked_author": "username_0",
"text": "For the documentation, do I remove mentions of it with the asterisk, seeing as that seems to be what happened with [`file:read()`](https://www.lua.org/manual/5.4/manual.html#pdf-file:read)?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alerque",
"comment_id": 1076595566,
"datetime": 1648055838000,
"masked_author": "username_2",
"text": "I haven't reviewed this in detail, just noting my impression that it should be merged after the next (safe-harbor) release tag.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mbartlett21",
"comment_id": 1076889463,
"datetime": 1648075067000,
"masked_author": "username_0",
"text": "Lua, for `file:read()` seems to just let CRs through. Is that what you want me to do here?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1077264055,
"datetime": 1648101837000,
"masked_author": "username_1",
"text": "What did you test? I tested \"*L\" (on OSX) and it seems to cut on LF (10) (a single CR is ignored)\r\n\r\nmy test code:\r\n```lua\r\nprint(_VERSION)\r\nlocal CR = string.char(13)\r\nlocal LF = string.char(10)\r\n\r\nlocal function writefile(name, content)\r\n local f = io.open(name, \"w\")\r\n f:write(content)\r\n f:close()\r\nend\r\n\r\nlocal function readfile(name, mode)\r\n local f = io.open(name, \"r\")\r\n while f do\r\n local l = f:read(mode)\r\n if l then\r\n local le = l:gsub(CR, \"+CR\"):gsub(LF, \"+LF\")\r\n if le:sub(1,1) == \"+\" then\r\n le = le:sub(2,-1)\r\n end\r\n print(#l, le)\r\n else\r\n f:close()\r\n f = nil\r\n end\r\n end\r\nend\r\n\r\nlocal lines = { -- \"*L\" result:\r\n \"1234567890\"..LF, -- 1 line, 11 bytes\r\n \"1234567890\"..LF..CR..LF, -- 2 lines, 11 bytes + 2 bytes\r\n \"1234567890\"..CR..LF, -- 1 line, 12 bytes\r\n \"1234567890\"..CR..LF..LF, -- 2 lines, 12 bytes + 1 byte\r\n \"1234567890\" -- 1 line, 10 bytes\r\n}\r\nlines = table.concat(lines)\r\n\r\nwritefile(\"test.txt\", lines)\r\nfor _, mode in ipairs { \"*l\", \"*L\" } do\r\n print(\"\\nTesting mode: \"..mode)\r\n readfile(\"test.txt\", mode)\r\nend\r\n```\r\nOutput:\r\n```\r\nLua 5.3\r\n\r\nTesting mode: *l\r\n10\t1234567890\r\n10\t1234567890\r\n1\tCR\r\n11\t1234567890+CR\r\n11\t1234567890+CR\r\n0\t\r\n10\t1234567890\r\n\r\nTesting mode: *L\r\n11\t1234567890+LF\r\n11\t1234567890+LF\r\n2\tCR+LF\r\n12\t1234567890+CR+LF\r\n12\t1234567890+CR+LF\r\n1\tLF\r\n10\t1234567890\r\nProgram completed in 0.16 seconds (pid: 86623).\r\n```\r\n\r\nSo looks to me like \"*l\" is platform dependent, whereas \"*L\" just cust right after the \"LF\", and is platform independent. I'll try and test on Windows as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1077284426,
"datetime": 1648103644000,
"masked_author": "username_1",
"text": "interestingly, I get the exact same result on Windows;\r\n```\r\nLua 5.3\r\n\r\nTesting mode: *l\r\n10\t1234567890\r\n10\t1234567890\r\n1\tCR\r\n11\t1234567890+CR\r\n11\t1234567890+CR\r\n0\t\r\n10\t1234567890\r\n\r\nTesting mode: *L\r\n11\t1234567890+LF\r\n11\t1234567890+LF\r\n2\tCR+LF\r\n12\t1234567890+CR+LF\r\n12\t1234567890+CR+LF\r\n1\tLF\r\n10\t1234567890\r\nProgram completed in 0.23 seconds (pid: 5044).\r\n```\r\n\r\nwhere I would have expected the \"*l\" option to also drop the CR's.\r\n\r\nBut this does not matter for this implementation, since the \"*L\" behaviour is the same.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1080005538,
"datetime": 1648410098000,
"masked_author": "username_1",
"text": "Updated the test code and test results above, after a bug in the test code.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tieske",
"comment_id": 1080014807,
"datetime": 1648413594000,
"masked_author": "username_1",
"text": "The culprit seems to be the difference between \"text\" and \"binary\" mode on Windows systems, because if the file is opened for reading \"binary\" mode, then the Windows output is identical to the Mac output.\r\n\r\nSo I think we should assume a socket to have a \"binary\" format when reading, and as such should get the same results. That would be cutting the line on each \"LF\" character, and not dropping any preceding \"CR\" character. Which is what the current PR seems to be doing.\r\n\r\nSo a lot of fuss to conclude the PR looks good. But it would need some tests to prove the behavior.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alerque",
"comment_id": 1080556071,
"datetime": 1648468393000,
"masked_author": "username_2",
"text": "On the subject of tests, I would really prefer if all new tests were written in busted instead of the current ad-hock framework. Even if it's one test at a time lets start migrating that direction when we need a new test for something or to do some substantial fix to an existing test. I'll try to get a couple basic ones started so there is a pattern to follow.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alerque",
"comment_id": 1080557146,
"datetime": 1648468470000,
"masked_author": "username_2",
"text": "(This case of course is exacerbated by the fact that we're not current doing any CI testing on Windows at all; I'd be glad to facilitate if somebody with expertise knows how to make it go...)",
"title": null,
"type": "comment"
}
] | 3 | 13 | 4,174 | false | false | 4,174 | true |
mapbox/mapbox-gl-js | mapbox | 631,654,799 | 9,762 | null | [
{
"action": "opened",
"author": "fifiDesPlaines",
"comment_id": null,
"datetime": 1591368428000,
"masked_author": "username_0",
"text": "<!--\r\nHello! Thanks for contributing.\r\n\r\nThe answers to many \"how do I...?\" questions can be found in our [help documentation](https://mapbox.com/help). If you can't find the answer there, the best place to ask is either [Stack Overflow](https://stackoverflow.com/questions/tagged/mapbox-gl-js) or [Mapbox support](https://mapbox.com/contact/).\r\n\r\nHowever, if you have a question that isn't addressed in the documentation but should be, please do let us know by filling out the template below! As a general rule, if a question is about _how Mapbox GL JS works_ rather than your specific use case, we will try to address it here or by improving the documentation. Otherwise, we might close the issue here and instead recommend asking on Stack Overflow or contacting support.\r\n\r\n-->\r\nHi, \r\n\r\n**mapbox-gl-js version**: 1.10.1\r\n\r\n### Question\r\n\r\nI would like to construct a kind of temperature-like map. I would like to fill the map depending on the value of my dots and not their densities. The closest kind of layer that I can use is the Heatmap... Is there a way to display a heatmap whose color depends on an aggregation of the point's properties composing it instead of their densities? \r\n\r\nSomething like that : https://github.com/optimisme/javascript-temperatureMap \r\nwhere dots will be kind of cluster's centers of other points.\r\n\r\n### Links to related documentation\r\n\r\nhttps://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#heatmap\r\nThe doc says : \"Defines the color of each pixel based on its density value in a heatmap. Should be an expression that uses [\"heatmap-density\"] as input.\"\r\n\r\nhttps://docs.mapbox.com/help/tutorials/make-a-heatmap-with-mapbox-gl-js/\r\nHere, you've constructed the heatmap from the density, Is it possible to construct an expression based on an average of property of the geojson ?\r\n\r\n<!-- Include links to the specific section(s) of the documentation where you would have expected to find an answer to this question. -->",
"title": "Aggregate function for the property heatmap-color",
"type": "issue"
},
{
"action": "created",
"author": "mourner",
"comment_id": 640061245,
"datetime": 1591450088000,
"masked_author": "username_1",
"text": "Currently this is not possible with the heatmap layer — the way it is technically implemented makes the rendering dependent on the density of points. We might want to explore adding an option for an alternative rendering. For now, you could try e.g. using blurred circles to represent the temperatures, kind of like a scatterplot.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jda-tid",
"comment_id": 640233053,
"datetime": 1591542509000,
"masked_author": "username_2",
"text": "It would be perfect to show temperature or internet speed from user doesn't matter the amount or density of users that we have.\r\n\r\nUsing the explained heatmap it doesn't make any sense as \"temperature\" can't be added depending on the amount of dots.\r\n\r\nFrom the docs: \r\n\r\nIt is even mentioned the second kind of heatmap, in which the value is the average of the points, instead of the density. However, only one is explained.\r\n\r\nhttps://docs.mapbox.com/help/tutorials/make-a-heatmap-with-mapbox-gl-js/#what-is-the-purpose-of-a-heatmap\r\n\r\n`Among maps you'll find on the web, there are two common categories of heatmaps: those that encourage the user to explore dense point data, and those that interpolate discrete values over a continuous surface, creating a smooth gradient between those points. The latter is less common and most often used in scientific publications or when a phenomenon is distributed over an area in a predictable way. For example, your town may only have a few weather stations, but your favorite weather app displays a smooth gradient of temperatures across the entire area of your town. For your local weather service, it is reasonable to assume that, if two adjacent stations report different temperatures, the temperature between them will transition gradually from one to the next.`\r\n\r\nMore info about these kind of maps:\r\n\r\nhttps://mgimond.github.io/Spatial/spatial-interpolation.html\r\n\r\nSample:\r\n\r\n\r\n\r\nPossible result: \r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fifiDesPlaines",
"comment_id": 640599232,
"datetime": 1591622508000,
"masked_author": "username_0",
"text": "@username_1 , I tried this approach, but the result is not satisfying for the moment. I'm trying to cut the map in tiles with their size depending on the zoom, and to calculate the fill color from an aggregation function.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Rylern",
"comment_id": 936511009,
"datetime": 1633534433000,
"masked_author": "username_3",
"text": "Hi, if anyone still needs this feature I used a custom Mapbox layer to create what you're describing, take a look here:\r\nhttps://github.com/username_3/InterpolateHeatmapLayer",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "royyeah",
"comment_id": 1073911995,
"datetime": 1647870083000,
"masked_author": "username_4",
"text": "@username_3 We tried your approach, but the rendering is not fast / pixel-perfect enough for our use case. We currently went for the approach suggested by @username_1 (blurred circles to represent the temperatures), but we keep our eyes open for something better in the future!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dson",
"comment_id": 1077703016,
"datetime": 1648132886000,
"masked_author": "username_5",
"text": "Any update on this? The heatmap layer looks amazing! It's a shame we can't use it for this other common use case as well. :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Rylern",
"comment_id": 1077824207,
"datetime": 1648140164000,
"masked_author": "username_3",
"text": "Actually you can customize the colors, take a look at the `valueToColor` parameter. It allows you to define the function that maps a value to a color. Tell me the color scale you want, and I can help you define this parameter.\r\nWhen you say it's not rendering fast enough, what do you mean exactly? Is it when you first load the map, or after when you change the map location?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "royyeah",
"comment_id": 1079120810,
"datetime": 1648220812000,
"masked_author": "username_4",
"text": "@username_3 Oh, you’re right, my bad. I confused it with another approach we tried. I think your solution works pretty well for visualizing value distribution on a global scale. For our use case however we ran into some issues we couldn’t solve:\r\n\r\n* It covers the entire map. There is an option to limit it to a specific area, but we would prefer something more like the Mapbox heatmap, where the effect only applies where the points are.\r\n* The points have to be passed at layer creation, which means we’ll have to recreate the layer every time we want to update the values. It would be nice if we could use a datasource for the points.\r\n\r\n<img width=\"714\" alt=\"Screenshot 2022-03-25 at 13 30 46\" src=\"https://user-images.githubusercontent.com/4243242/160147013-6712b1fa-b966-42cd-9c23-6826ef2661f1.png\">",
"title": null,
"type": "comment"
}
] | 6 | 9 | 5,919 | false | false | 5,919 | true |
macports/macports-ports | macports | 380,065,131 | 3,008 | {
"number": 3008,
"repo": "macports-ports",
"user_login": "macports"
} | [
{
"action": "opened",
"author": "chrstphrchvz",
"comment_id": null,
"datetime": 1542086692000,
"masked_author": "username_0",
"text": "#### Description\r\n\r\n<!-- Note: it is best make pull requests from a branch rather than from master -->\r\n\r\n###### Type(s)\r\n<!-- update (title contains \": U(u)pdate to\"), submission (new Portfile) and CVE Identifiers are auto-detected, replace [ ] with [x] to select -->\r\n\r\n- [ ] bugfix\r\n- [x] enhancement\r\n- [ ] security fix\r\n\r\n\r\n###### Verification <!-- (delete not applicable items) -->\r\nHave you\r\n\r\n- [x] followed our [Commit Message Guidelines](https://trac.macports.org/wiki/CommitMessages)?\r\n- [x] squashed and [minimized your commits](https://guide.macports.org/#project.github)?\r\n- [x] checked that there aren't other open [pull requests](https://github.com/macports/macports-ports/pulls) for the same change?\r\n\r\n<!-- Use \"skip notification\" (surrounded with []) to avoid notifying maintainers -->\r\n\r\n\r\n@MarcusCalhoun-Lopez Thanks for adding this port!",
"title": "tkimg: fix spelling in comment",
"type": "issue"
},
{
"action": "created",
"author": "pmetzger",
"comment_id": 438309300,
"datetime": 1542123270000,
"masked_author": "username_1",
"text": "Merged. Thanks for the fix, @username_0!",
"title": null,
"type": "comment"
}
] | 3 | 3 | 1,383 | false | true | 901 | true |
alibaba/pipcook | alibaba | 598,767,339 | 84 | null | [
{
"action": "opened",
"author": "yorkie",
"comment_id": null,
"datetime": 1586768320000,
"masked_author": "username_0",
"text": "We have some debug logs, it's better to use [debug](https://github.com/visionmedia/debug) instead.",
"title": "core, cli: use debug instead of raw console",
"type": "issue"
},
{
"action": "created",
"author": "FeelyChau",
"comment_id": 921635881,
"datetime": 1631869645000,
"masked_author": "username_1",
"text": "All the debug info has used `debug` now.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "FeelyChau",
"comment_id": null,
"datetime": 1631869645000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 138 | false | false | 138 | false |
istio/api | istio | 508,610,457 | 1,135 | {
"number": 1135,
"repo": "api",
"user_login": "istio"
} | [
{
"action": "opened",
"author": "geeknoid",
"comment_id": null,
"datetime": 1571332365000,
"masked_author": "username_0",
"text": "",
"title": "Fix bad alias syntax.",
"type": "issue"
}
] | 2 | 2 | 66 | false | true | 0 | false |
Sage-Bionetworks/dccmonitor | Sage-Bionetworks | 569,073,761 | 53 | null | [
{
"action": "opened",
"author": "Aryllen",
"comment_id": null,
"datetime": 1582304499000,
"masked_author": "username_0",
"text": "Annotations module fails if the specimen/individual IDs that are being joined have different column types (e.g. character and numeric). A fix would be to make all column types character before joining.",
"title": "Cannot join IDs with different column types",
"type": "issue"
},
{
"action": "closed",
"author": "Aryllen",
"comment_id": null,
"datetime": 1582307435000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 201 | false | false | 201 | false |
lief-project/LIEF | lief-project | 622,823,516 | 418 | null | [
{
"action": "opened",
"author": "pdreiter",
"comment_id": null,
"datetime": 1590099309000,
"masked_author": "username_0",
"text": "[43, 81, 0, 0, 52, **81**, 0, 0]\r\n```\r\n\r\n**Environment (please complete the following information):**\r\n - Ubuntu 19.04\r\n - Target format: ELF\r\n - LIEF commit version: 0.10.1-bfe5415\r\n\r\n**Additional context**",
"title": "Last entry in .data section content is not updated to new offset when segment is added",
"type": "issue"
},
{
"action": "created",
"author": "romainthomas",
"comment_id": 633206848,
"datetime": 1590314256000,
"masked_author": "username_1",
"text": "I think I understand the issue but the modification seems not generic ? Also, did you check the relocations ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pdreiter",
"comment_id": 633239343,
"datetime": 1590330508000,
"masked_author": "username_0",
"text": "I did check the relocation sections, but I will check again.\r\nAnything in particular you want me to report? \r\n I will also update with more info about the problematic global symbol.\r\n\r\nI’m rephrasing what I have seen in case my first attempt lacked clarity:\r\nThe global .data symbol whose addresses are partially converted is an 9 element array of const char*s. LIEF updated EPFD[7] to the new virtual address in .rodata, but EPFD[8] contains the unmodified offset.\r\n\r\n\r\nUpdate: the test passed after manually updating .data’s EPFD[8] to the new virtual address.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pdreiter",
"comment_id": 633255599,
"datetime": 1590337492000,
"masked_author": "username_0",
"text": "164\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pdreiter",
"comment_id": 657779791,
"datetime": 1594672473000,
"masked_author": "username_0",
"text": "I was able to generate a VERY simple scenario that duplicates this bug. \r\nPlease note that I found this with **32b ELF** executables.\r\n\r\ncontents of test_418.c\r\n```\r\n#include <stdio.h>\r\n//--------------------------------------------------------\r\n// basic scenario to test out lief bug 418 filed by username_0\r\n//--------------------------------------------------------\r\n\r\nstatic char* words[] = {\r\n\"hello\",\"my\",\"baby\", // # 0-2\r\n\"hello\",\"my\",\"honey\", // # 3-5\r\n\"hello\",\"my\",\"ragtime\",\"gal\" // # 6-9\r\n};\r\n\r\n\r\nvoid main(){\r\n\r\nfor (int i=0;i<10;i++) {\r\n printf(words[i]);\r\n printf(\" \");\r\n}\r\n\r\n}\r\n```\r\nCOMPILE:\r\n`gcc -m32 test_418.c -o lief_test_418`\r\n\r\nlief manipulation:\r\n```\r\nimport lief\r\nx=lief.parse(\"lief_test_418\")\r\nx.add(lief.ELF.Segment())\r\nx.write(\"lief_bug_418\")\r\n```\r\noutput of \"./lief_test_418\":\r\n`hello my baby hello my honey hello my ragtime gal `\r\noutput of \"./lief_bug_418\"\r\n`hello my baby hello my honey hello my ragtime ` \r\n\r\nI cannot duplicate this error with a 64b ELF binary input: \r\n`gcc test_418.c -o lief64_test_418`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pdreiter",
"comment_id": 657780549,
"datetime": 1594672574000,
"masked_author": "username_0",
"text": "@username_1 - I hope that this is good enough for you to root cause!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "romainthomas",
"comment_id": null,
"datetime": 1618483805000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 7 | 2,009 | false | false | 2,009 | true |
CoinAlpha/hummingbot | CoinAlpha | 679,677,346 | 2,218 | null | [
{
"action": "opened",
"author": "lacvapps",
"comment_id": null,
"datetime": 1597546435000,
"masked_author": "username_0",
"text": "**Describe the bug**\r\n// A clear and concise description of what the bug is.\r\n\r\nOn the \"Useful Commands\" list\r\n\"Import **a** existing bot by loading a configuration file\"\r\n_should be_\r\nImport **an**",
"title": "[BUG] Grammatical Error in Hummingbot ui ",
"type": "issue"
},
{
"action": "created",
"author": "PtrckM",
"comment_id": 674477900,
"datetime": 1597552657000,
"masked_author": "username_1",
"text": "thank you for pointing it out, I'll make some PR for the update.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "kmcgowan2000",
"comment_id": null,
"datetime": 1602604057000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 262 | false | false | 262 | false |
sustainers/website | sustainers | 386,268,822 | 198 | null | [
{
"action": "opened",
"author": "jdorfman",
"comment_id": null,
"datetime": 1543596867000,
"masked_author": "username_0",
"text": "An _OSF_ lesson for @Mandihamza \r\n\r\nIn today's lesson we will be doing the following:\r\n\r\n- [ ] Set up `sustainers/website` dev env\r\n- [ ] Create branch\r\n- [ ] Download white Sustain logo\r\n - [ ] Replace current logo\r\n - [ ] Run svgo to optimize logo\r\n- [ ] Create PR\r\n\r\n\r\n\r\nTopics that will be covered:\r\n* Git\r\n* Ruby\r\n * Bundler\r\n * Jekyll\r\n* npm\r\n\r\npart of #197",
"title": "Replace Summit Logo with Sustain",
"type": "issue"
}
] | 1 | 1 | 475 | false | false | 475 | false |
lyrgard/ffbeEquip | null | 449,209,048 | 352 | null | [
{
"action": "opened",
"author": "Darwe-Canine",
"comment_id": null,
"datetime": 1559041938000,
"masked_author": "username_0",
"text": "When building 7* Sabin for Physical Damage Multicast, then adding a 100% DEF bonus to the enemy and rebuilding, the Build Goal Calculated Value is the same, but the Physical Damage on a 100 DEF Enemy with a 1x Skill gets cut in half (see photos for example). If this is intended, it isn't intuitive, and should be clarified somehow.\r\n\r\n\r\n\r\n",
"title": "P_Damage_Multicast and DEF Buff",
"type": "issue"
},
{
"action": "closed",
"author": "Darwe-Canine",
"comment_id": null,
"datetime": 1560899556000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 2 | 560 | false | false | 560 | false |
conda-forge/boto3-feedstock | conda-forge | 551,141,308 | 470 | {
"number": 470,
"repo": "boto3-feedstock",
"user_login": "conda-forge"
} | [
{
"action": "opened",
"author": "tkelman",
"comment_id": null,
"datetime": 1579223060000,
"masked_author": "username_0",
"text": "<!--\r\nThank you for pull request.\r\nBelow are a few things we ask you kindly to self-check before getting a review. Remove checks that are not relevant.\r\n-->\r\nChecklist\r\n* [x] Used a [fork of the feedstock to propose changes](https://conda-forge.org/docs/maintainer/updating_pkgs.html#forking-and-pull-requests)\r\n* [ ] Bumped the build number (if the version is unchanged)\r\n* [x] Reset the build number to `0` (if the version changed)\r\n* [ ] [Re-rendered]( https://conda-forge.org/docs/maintainer/updating_pkgs.html#rerendering-feedstocks ) with the latest `conda-smithy` (Use the phrase <code>@<space/>conda-forge-admin, please rerender</code> in a comment in this PR for automated rerendering)\r\n* [x] Ensured the license file is being packaged.\r\n\r\n<!--\r\nPlease note any issues this fixes using [closing keywords]( https://help.github.com/articles/closing-issues-using-keywords/ ):\r\n-->\r\n\r\n<!--\r\nPlease add any other relevant info below:\r\n-->",
"title": "update to 1.11.4",
"type": "issue"
}
] | 2 | 2 | 1,281 | false | true | 942 | false |
dask/dask | dask | 662,122,446 | 6,430 | {
"number": 6430,
"repo": "dask",
"user_login": "dask"
} | [
{
"action": "opened",
"author": "TomAugspurger",
"comment_id": null,
"datetime": 1595269483000,
"masked_author": "username_0",
"text": "Closes https://github.com/dask/dask/issues/6330\r\n\r\ntest-upstream",
"title": "Compatibility for NumPy dtype deprecation",
"type": "issue"
},
{
"action": "created",
"author": "mrocklin",
"comment_id": 661262134,
"datetime": 1595270106000,
"masked_author": "username_1",
"text": "+1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "eric-wieser",
"comment_id": 661802235,
"datetime": 1595331262000,
"masked_author": "username_2",
"text": "https://github.com/pydata/sparse/issues/383 is unrelated to the deprecation - this is a due to a big change by @seberg.",
"title": null,
"type": "comment"
}
] | 3 | 3 | 185 | false | false | 185 | false |
vgramm/github-slideshow | null | 515,752,899 | 1 | null | [
{
"action": "closed",
"author": "vgramm",
"comment_id": null,
"datetime": 1572557337000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "vgramm",
"comment_id": null,
"datetime": 1572557341000,
"masked_author": "username_0",
"text": "# :wave: Welcome to GitHub Learning Lab's \"Introduction to GitHub\"\n\nTo get started, I’ll guide you through some important first steps in coding and collaborating on GitHub.\n\n:point_down: _This arrow means you can expand the window! Click on them throughout the course to find more information._\n<details><summary>What is GitHub?</summary>\n<hr>\n\n## What is GitHub?\n\nI'm glad you asked! Many people come to GitHub because they want to contribute to open source <sup>[:book:](https://help.github.com/articles/github-glossary/#open-source)</sup> projects, or they're invited by teammates or classmates who use it for their projects. Why do people use GitHub for these projects?\n\n**At its heart, GitHub is a collaboration platform.**\n\nFrom software to legal documents, you can count on GitHub to help you do your best work with the collaboration and security tools your team needs. With GitHub, you can keep projects completely private, invite the world to collaborate, and streamline every step of your project.\n\n**GitHub is also a powerful version control tool.**\n\nGitHub uses Git <sup>[:book:](https://help.github.com/articles/github-glossary/#git)</sup>, the most popular open source version control software, to track every contribution and contributor <sup>[:book:](https://help.github.com/articles/github-glossary/#contributor)</sup> to your project--so you know exactly where every line of code came from.\n\n**GitHub helps people do much more.**\n\nGitHub is used to build some of the most advanced technologies in the world. Whether you're visualizing data or building a new game, there's a whole community and set of tools on GitHub that can get you to the next step. This course starts with the basics, but we'll dig into the rest later!\n\n:tv: [Video: What is GitHub?](https://www.youtube.com/watch?v=w3jLJU7DT5E)\n<hr>\n</details><br>\n\n<details><summary>Exploring a GitHub repository</summary>\n<hr>\n\n## Exploring a GitHub repository\n\n:tv: [Video: Exploring a repository](https://www.youtube.com/watch?v=R8OAwrcMlRw)\n\n### More features\n\nThe video covered some of the most commonly-used features. Here are a few other items you can find in GitHub repositories:\n\n- Project boards: Create Kanban-style task tracking board within GitHub\n- Wiki: Create and store relevant project documentation\n- Insights: View a drop-down menu that contains links to analytics tools for your repository including:\n - Pulse: Find information about the work that has been completed and the work that’s in-progress in this project dashboard\n - Graphs: Graphs provide a more granular view of the repository activity including who contributed to the repository, who forked it, and when they completed the work\n\n### Special Files\n\nIn the video you learned about a special file called the README.md. Here are a few other special files you can add to your repositories:\n\n- CONTRIBUTING.md: The `CONTRIBUTING.md` is used to describe the process for contributing to the repository. A link to the `CONTRIBUTING.md` file is shown anytime someone creates a new issue or pull request.\n- ISSUE_TEMPLATE.md: The `ISSUE_TEMPLATE.md` is another file you can use to pre-populate the body of an issue. For example, if you always need the same types of information for bug reports, include it in the issue template, and every new issue will be opened with your recommended starter text.\n\n<hr>\n</details>\n\n### Using issues\n\nThis is an issue <sup>[:book:](https://help.github.com/articles/github-glossary/#issue)</sup>: a place where you can have conversations about bugs in your code, code review, and just about anything else.\n\nIssue titles are like email subject lines. They tell your collaborators what the issue is about at a glance. For example, the title of this issue is Getting Started with GitHub.\n\n\n<details><summary>Using GitHub Issues</summary>\n\n## Using GitHub issues\n\nIssues are used to discuss ideas, enhancements, tasks, and bugs. They make collaboration easier by:\n\n- Providing everyone (even future team members) with the complete story in one place\n- Allowing you to cross-link to other issues and pull requests <sup>[:book:](https://help.github.com/articles/github-glossary/#pull-request)</sup>\n- Creating a single, comprehensive record of how and why you made certain decisions\n- Allowing you to easily pull the right people and teams into a conversation with @-mentions\n\n:tv: [Video: Using issues](https://www.youtube.com/watch?v=Zhj46r5D0nQ)\n\n<hr>\n</details>\n\n<details><summary>Managing notifications</summary>\n<hr>\n\n## Managing notifications\n\n:tv: [Video: Watching, notifications, stars, and explore](https://www.youtube.com/watch?v=ocQldxF7fMY)\n\nOnce you've commented on an issue or pull request, you'll start receiving email notifications when there's activity in the thread. \n\n### How to silence or unmute specific conversations\n\n1. Go to the issue or pull request\n2. Under _\"Notifications\"_, click the **Unsubscribe** button on the right to silence notifications or **Subscribe** to unmute them\n\nYou'll see a short description that explains your current notification status.\n\n### How to customize notifications in Settings\n\n1. Click your profile icon\n2. Click **Settings**\n3. Click **Notifications** from the menu on the left and [adjust your notification preferences](https://help.github.com/articles/managing-notification-delivery-methods/)\n\n### Repository notification options\n\n* **Watch**: You'll receive a notification when a new issue, pull request or comment is posted, and when an issue is closed or a pull request is merged \n* **Not watching**: You'll no longer receive notifications unless you're @-mentioned\n* **Ignore**: You'll no longer receive any notifications from the repository\n\n### How to review notifications for the repositories you're watching\n\n1. Click your profile icon\n2. Click **Settings**\n3. Click **Notification** from the menu on the left\n4. Click on the [repositories you’re watching](https://github.com/watching) link\n5. Select the **Watching** tab\n6. Click the **Unwatch** button to disable notifications, or **Watch** to enable them\n\n<hr>\n</details>\n\n<hr>\n<h3 align=\"center\">Keep reading below to find your first task</h3>",
"title": "Getting Started with GitHub",
"type": "issue"
},
{
"action": "closed",
"author": "vgramm",
"comment_id": null,
"datetime": 1572557346000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 6 | 12,290 | false | true | 6,145 | false |
jaredreich/pell | null | 242,453,147 | 17 | null | [
{
"action": "opened",
"author": "Valkryst",
"comment_id": null,
"datetime": 1499879658000,
"masked_author": "username_0",
"text": "When the mouse cursor is overtop of a button on the formatting bar (Bold, Italic, Underlined, etc...), the button's background color should change to indicate that it will be used when the user clicks.\r\n\r\nIt's a little difficult to tell which option is selected until you click, in some cases.",
"title": "Highlight Formatting Option on Mouseover",
"type": "issue"
},
{
"action": "created",
"author": "deforder",
"comment_id": 314963537,
"datetime": 1499918132000,
"masked_author": "username_1",
"text": "Putting `.pell-button:hover {background-color:#ebebe0}` in `pell.css` or `pell.min.css` can do the trick.\r\n\r\nI don't think there should be any change in original file since this feature seem like an extension (some people might prefer other color or just want it transparent like this).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jaredreich",
"comment_id": 314965594,
"datetime": 1499919146000,
"masked_author": "username_2",
"text": "As @username_1 suggests, customize the CSS to your heart's content 😄",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jaredreich",
"comment_id": null,
"datetime": 1499919147000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 647 | false | false | 647 | true |
OpenEastleigh/eastleigh-manifesto | OpenEastleigh | 577,345,162 | 15 | null | [
{
"action": "opened",
"author": "pavsmith",
"comment_id": null,
"datetime": 1583592684000,
"masked_author": "username_0",
"text": "Improve cycling routes around Eastleigh by improving cycling routes, cannibalising existing road space where necessary. Reducing overall traffic speeds via both changing maximum speeds and traffic management infrastructure. Discourage the use of cars by implementing a highly effective Park and Ride and cutting back on parking, especially free parking.",
"title": "improve cycling routes and rider safety. discourage car use.",
"type": "issue"
},
{
"action": "created",
"author": "jt-nti",
"comment_id": 596105514,
"datetime": 1583598571000,
"masked_author": "username_1",
"text": "[Improve cycling routes](https://groupthink.openeastleigh.uk/proposals/16) is open and ready for comments!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jt-nti",
"comment_id": null,
"datetime": 1583598572000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 459 | false | false | 459 | false |
EncoreTechnologies/puppet-patching | EncoreTechnologies | 581,242,967 | 40 | {
"number": 40,
"repo": "puppet-patching",
"user_login": "EncoreTechnologies"
} | [
{
"action": "opened",
"author": "vchepkov",
"comment_id": null,
"datetime": 1584198057000,
"masked_author": "username_0",
"text": "add `hostname` as a choice for patching::snapshot_vmware::target_name_property\r\n\r\nit can be used in cases where target discovery uses fully qualified domain names\r\nand VM names don't have domain name component",
"title": "hostname as target_name_property",
"type": "issue"
},
{
"action": "created",
"author": "vchepkov",
"comment_id": 611098097,
"datetime": 1586368016000,
"masked_author": "username_0",
"text": "Rebased",
"title": null,
"type": "comment"
}
] | 1 | 2 | 216 | false | false | 216 | false |
kubeflow/pipelines | kubeflow | 567,256,895 | 3,110 | null | [
{
"action": "opened",
"author": "sakaia",
"comment_id": null,
"datetime": 1582075397000,
"masked_author": "username_0",
"text": "The frontend uses PhantomJS. [frontend/Dockerfile](https://github.com/kubeflow/pipelines/blob/master/frontend/Dockerfile)\r\nRegrettably, the PhantomJS will stop development.[Archiving the project: suspending the development #15344](https://github.com/ariya/phantomjs/issues/15344)\r\nOf course, kubernetes/pipelines used the PhantomJS in build time only.\r\n[update dockerfile and add build step of frontend #567](https://github.com/kubeflow/pipelines/pull/567)\r\n\r\nIs there any plan to move from PhantomJS?\r\nIf so, which package you will plan to use?",
"title": "[Question] PhantomJS",
"type": "issue"
},
{
"action": "created",
"author": "rmgogogo",
"comment_id": 588016633,
"datetime": 1582082772000,
"masked_author": "username_1",
"text": "@username_2",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Bobgy",
"comment_id": 588032255,
"datetime": 1582087284000,
"masked_author": "username_2",
"text": "Thanks @username_0 for raising the question.\r\n\r\nI took this opportunity to investigate a little. Here's what I found:\r\n* We used to use backstopjs for visual regression testing (which used phantomjs internally, but no longer now): https://github.com/kubeflow/pipelines/blob/master/frontend/backstop.ts\r\n* But somehow after main maintainer for kfp frontend changed a few times, we simply don't know or use it any more.\r\n\r\nBackstop js itself has removed phantomjs support in a later release: https://github.com/garris/BackstopJS/issues/772#issuecomment-456057756\r\n\r\nWe have a few options:\r\n1. Restore backstop js tests\r\n2. Just delete them\r\n\r\nSo far, I don't feel we have a strong need for visual regression testing, so I will go with deletion to simplify maintenance.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Bobgy",
"comment_id": 588032740,
"datetime": 1582087423000,
"masked_author": "username_2",
"text": "And FYI, for UI integration testing, we are using selenium (I'm thinking about replacing it with cypress later, but not now).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Bobgy",
"comment_id": 588034685,
"datetime": 1582087952000,
"masked_author": "username_2",
"text": "Sent above PR to remove backstop js, and thus also dependency of phantom js.",
"title": null,
"type": "comment"
}
] | 4 | 6 | 1,514 | false | true | 1,514 | true |
hedgehogqa/scala-hedgehog | hedgehogqa | 627,684,442 | 152 | {
"number": 152,
"repo": "scala-hedgehog",
"user_login": "hedgehogqa"
} | [
{
"action": "opened",
"author": "Kevin-Lee",
"comment_id": null,
"datetime": 1590829193000,
"masked_author": "username_0",
"text": "# Issue #145 - Publish to Maven Central\r\n* Publish tagged artifacts to maven central\r\n* Fix bintrayRepository in build.sbt and made it use ENV var first\r\n* Add compulsory environment variable validation to the release script (`BINTRAY_USER` and `BINTRAY_PASS`)\r\n* Use more environment variables for easy customization\r\n* Now it publishes to only maven repo (no more generic one with ivy and maven styles)\r\n* Add cache setup for Travis\r\n* sbt 1.2.8 => 1.3.10",
"title": "Publish to Maven Central",
"type": "issue"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636302327,
"datetime": 1590829411000,
"masked_author": "username_0",
"text": "@username_1 \r\n* I wanted to test it by publishing it to my personal repo but I couldn't as its org name is different from my personal one.\r\n* I really think that Hedgehog is deployed to only one repository (i.e. Maven). It looks confusing when there are two repos and two different style (i.e. ivy and maven). Since sbt never uses ivy style unless it's set explicitly, I don't think it's necessary at all to have ivy style artifacts.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "charleso",
"comment_id": 636401331,
"datetime": 1590884026000,
"masked_author": "username_1",
"text": "Sorry Kevin, even though I agree it's cleaner, it means people who already have those ivy settings will be very confused. I'm happy to \"deprecate\" it, but is there any reason to not keep publishing to the old locations for now?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "charleso",
"comment_id": 636401503,
"datetime": 1590884140000,
"masked_author": "username_1",
"text": "@username_0 Thanks heaps for that. If you don't mind adding back in the old publish set, and maybe verify whether/what you need to set in env vs sbt setting then I'm definitely happy/keen to merge this.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636403181,
"datetime": 1590885226000,
"masked_author": "username_0",
"text": "Once we have Hedgehog published to Maven, we don't need to worry about having this extra resolver in `build.sbt` and that's why I removed it. Whether others have ivy setting or not, it will work as it's available in Maven Central.\r\n\r\nOh... It's important because of non-tagged release? I wasn't sure if you want to keep non-tagged release after this Maven Central release. \r\n\r\nAnyway, I will put it back.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636403866,
"datetime": 1590885746000,
"masked_author": "username_0",
"text": "I needed them for test deploy to my repo (e.g. https://bintray.com/kevinlee/maven/hedgehog-core). After I finished it, I thought that it didn't harm anything but useful so left it there. Though I'm not sure if you're talking about all ENV vars in the `.travis.yml` and release script or the ones in `build.sbt` or both. \r\n\r\nAnyway, I can put the old ones back if you want. What do you think?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636404765,
"datetime": 1590886503000,
"masked_author": "username_0",
"text": "@username_1 I did fixup commit and am waiting for the answers to the remaining issues above. Please let me know and when I get the final answer, I will do `autosquash`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636405436,
"datetime": 1590887013000,
"masked_author": "username_0",
"text": "@username_1 I'll do some test commits and let you know once it's ready for reviewing again. I'd like to test it with my repo.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kevin-Lee",
"comment_id": 636406881,
"datetime": 1590888212000,
"masked_author": "username_0",
"text": "@username_1 I created my personal generic repo for testing. \r\nhttps://bintray.com/kevinlee/scala-hedgehog\r\n\r\nand these are maven ones.\r\n* https://bintray.com/kevinlee/maven/hedgehog-core\r\n* https://bintray.com/kevinlee/maven/hedgehog-runner\r\n* https://bintray.com/kevinlee/maven/hedgehog-sbt\r\n\r\nAs I already mentioned, I can't do a test for publishing to the Maven Central.",
"title": null,
"type": "comment"
}
] | 2 | 9 | 2,771 | false | false | 2,771 | true |
http4s/http4s | http4s | 623,362,430 | 3,452 | null | [
{
"action": "opened",
"author": "nelusnegur",
"comment_id": null,
"datetime": 1590167071000,
"masked_author": "username_0",
"text": "When I use an HTTP client configured with the GZip client middleware to perform a HEAD request, and provided the HTTP server can return a compressed response, the GZip middleware crashes during the processing of the response with the following error:\r\n\r\n```\r\nfs2.compress$NonProgressiveDecompressionException: buffer size 32768 is too small; gunzip cannot make progress\r\n fs2.compress$.$anonfun$gunzip$3(compress.scala:239)\r\n fs2.compress$.$anonfun$gunzip$3$adapted(compress.scala:221)\r\n fs2.Pull$.$anonfun$flatMap$1(Pull.scala:51)\r\n fs2.internal.FreeC$$anon$1.cont(FreeC.scala:29)\r\n fs2.internal.FreeC$ViewL$$anon$9$$anon$10.cont(FreeC.scala:196)\r\n fs2.internal.FreeC$ViewL$.mk(FreeC.scala:185)\r\n fs2.internal.FreeC$ViewL$.apply(FreeC.scala:176)\r\n fs2.internal.FreeC.viewL(FreeC.scala:79)\r\n fs2.internal.Algebra$.go$1(Algebra.scala:178)\r\n fs2.internal.Algebra$.$anonfun$compileLoop$7(Algebra.scala:219)\r\n fs2.internal.Algebra$.$anonfun$compileLoop$1(Algebra.scala:197)\r\n monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:170)\r\n monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101)\r\n monix.eval.internal.TaskRestartCallback$$anon$1.run(TaskRestartCallback.scala:118)\r\n ...\r\n```\r\nHere is small Scala ammonite script, which defines an HTTP client configured with GZip middleware, executes a HEAD request and crashes with the error mentioned above:\r\n\r\n```scala\r\nimport $ivy.`org.http4s::http4s-blaze-client:0.21.4`\r\nimport $ivy.`io.monix::monix:3.2.1`\r\nimport $ivy.`co.fs2::fs2-core:2.3.0`\r\n\r\nimport monix.eval.Task\r\nimport monix.eval.Task._\r\nimport monix.execution.Scheduler\r\nimport monix.execution.Scheduler.Implicits.global\r\nimport org.http4s._\r\nimport org.http4s.client.blaze._\r\nimport org.http4s.client._\r\nimport org.http4s.client.dsl.io._\r\nimport org.http4s.client.blaze.BlazeClientBuilder\r\nimport org.http4s.client.middleware.GZip\r\n\r\nimport scala.concurrent.Await\r\nimport scala.concurrent.duration._\r\n\r\nval (httpClient, shutdown) =\r\n BlazeClientBuilder(global)\r\n .withMaxTotalConnections(3)\r\n .withMaxWaitQueueLimit(5)\r\n .allocated\r\n .map { case (client, shutdown) => GZip()(client) -> shutdown }\r\n .runSyncUnsafe(10.seconds)\r\n\r\nval headRequest = Request[Task](Method.HEAD).withUri(Uri.uri(\"https://doc.akka.io\"))\r\nval task = httpClient.expect[String](headRequest)\r\nval future = task.runToFuture\r\n\r\nAwait.result(future, 3.seconds)\r\n```\r\n\r\nThe error message is a bit misleading, because an HTTP HEAD response doesn't contain a response body regardless what is contained in the headers response. I tried to put the whole implementation of the [GZip client middleware](https://github.com/http4s/http4s/blob/master/client/src/main/scala/org/http4s/client/middleware/GZip.scala) in the above script, and I get the following exception:\r\n```\r\njava.io.EOFException\r\n fs2.compression$.$anonfun$gunzip$6(compression.scala:379)\r\n fs2.compression$.$anonfun$gunzip$6$adapted(compression.scala:368)\r\n fs2.Pull$.$anonfun$flatMap$1(Pull.scala:51)\r\n fs2.internal.FreeC$$anon$1.cont(FreeC.scala:29)\r\n fs2.internal.FreeC$ViewL$$anon$9$$anon$10.<init>(FreeC.scala:194)\r\n ...\r\n```\r\nAccording to this stack trace, when GZip middleware tries to decompress the response body of a HEAD request, it fails because the response body stream is empty; there are no gzip header bytes which the `fs2.compression.gunzip` method expects. \r\n\r\nThe [RFC 7230, section 3.3.3](https://tools.ietf.org/html/rfc7230#section-3.3.3) says that the response body of a HEAD request is empty regardless of the headers present in the response (in our case, `Content-Encoding`), because they show what headers will contain the GET request on the same URI. So, the GZip client middleware should not decompress a HEAD response body even when the `Content-Encoding: gzip` header is present. I will open a PR to fix this.",
"title": "GZip client middleware crashes when processing an HTTP HEAD response",
"type": "issue"
},
{
"action": "created",
"author": "rossabaker",
"comment_id": 645756014,
"datetime": 1592452526000,
"masked_author": "username_1",
"text": "Leaving this one open to track the [related empty body issue](https://github.com/http4s/http4s/pull/3476#discussion_r438229941), even though #3476 covers the common cause. The HEAD will be fixed in 0.21.5.\r\n\r\nAssigning to @username_0, since he expressed interest in the follow-up work as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rossabaker",
"comment_id": 649597880,
"datetime": 1593096611000,
"masked_author": "username_1",
"text": "Note that this is still not properly fixed in 0.21.5. We'll try again. See #3476.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "rossabaker",
"comment_id": null,
"datetime": 1593096612000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "rossabaker",
"comment_id": null,
"datetime": 1593096615000,
"masked_author": "username_1",
"text": "When I use an HTTP client configured with the GZip client middleware to perform a HEAD request, and provided the HTTP server can return a compressed response, the GZip middleware crashes during the processing of the response with the following error:\r\n\r\n```\r\nfs2.compress$NonProgressiveDecompressionException: buffer size 32768 is too small; gunzip cannot make progress\r\n fs2.compress$.$anonfun$gunzip$3(compress.scala:239)\r\n fs2.compress$.$anonfun$gunzip$3$adapted(compress.scala:221)\r\n fs2.Pull$.$anonfun$flatMap$1(Pull.scala:51)\r\n fs2.internal.FreeC$$anon$1.cont(FreeC.scala:29)\r\n fs2.internal.FreeC$ViewL$$anon$9$$anon$10.cont(FreeC.scala:196)\r\n fs2.internal.FreeC$ViewL$.mk(FreeC.scala:185)\r\n fs2.internal.FreeC$ViewL$.apply(FreeC.scala:176)\r\n fs2.internal.FreeC.viewL(FreeC.scala:79)\r\n fs2.internal.Algebra$.go$1(Algebra.scala:178)\r\n fs2.internal.Algebra$.$anonfun$compileLoop$7(Algebra.scala:219)\r\n fs2.internal.Algebra$.$anonfun$compileLoop$1(Algebra.scala:197)\r\n monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:170)\r\n monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101)\r\n monix.eval.internal.TaskRestartCallback$$anon$1.run(TaskRestartCallback.scala:118)\r\n ...\r\n```\r\nHere is small Scala ammonite script, which defines an HTTP client configured with GZip middleware, executes a HEAD request and crashes with the error mentioned above:\r\n\r\n```scala\r\nimport $ivy.`org.http4s::http4s-blaze-client:0.21.4`\r\nimport $ivy.`io.monix::monix:3.2.1`\r\nimport $ivy.`co.fs2::fs2-core:2.3.0`\r\n\r\nimport monix.eval.Task\r\nimport monix.eval.Task._\r\nimport monix.execution.Scheduler\r\nimport monix.execution.Scheduler.Implicits.global\r\nimport org.http4s._\r\nimport org.http4s.client.blaze._\r\nimport org.http4s.client._\r\nimport org.http4s.client.dsl.io._\r\nimport org.http4s.client.blaze.BlazeClientBuilder\r\nimport org.http4s.client.middleware.GZip\r\n\r\nimport scala.concurrent.Await\r\nimport scala.concurrent.duration._\r\n\r\nval (httpClient, shutdown) =\r\n BlazeClientBuilder(global)\r\n .withMaxTotalConnections(3)\r\n .withMaxWaitQueueLimit(5)\r\n .allocated\r\n .map { case (client, shutdown) => GZip()(client) -> shutdown }\r\n .runSyncUnsafe(10.seconds)\r\n\r\nval headRequest = Request[Task](Method.HEAD).withUri(Uri.uri(\"https://doc.akka.io\"))\r\nval task = httpClient.expect[String](headRequest)\r\nval future = task.runToFuture\r\n\r\nAwait.result(future, 3.seconds)\r\n```\r\n\r\nThe error message is a bit misleading, because an HTTP HEAD response doesn't contain a response body regardless what is contained in the headers response. I tried to put the whole implementation of the [GZip client middleware](https://github.com/http4s/http4s/blob/master/client/src/main/scala/org/http4s/client/middleware/GZip.scala) in the above script, and I get the following exception:\r\n```\r\njava.io.EOFException\r\n fs2.compression$.$anonfun$gunzip$6(compression.scala:379)\r\n fs2.compression$.$anonfun$gunzip$6$adapted(compression.scala:368)\r\n fs2.Pull$.$anonfun$flatMap$1(Pull.scala:51)\r\n fs2.internal.FreeC$$anon$1.cont(FreeC.scala:29)\r\n fs2.internal.FreeC$ViewL$$anon$9$$anon$10.<init>(FreeC.scala:194)\r\n ...\r\n```\r\nAccording to this stack trace, when GZip middleware tries to decompress the response body of a HEAD request, it fails because the response body stream is empty; there are no gzip header bytes which the `fs2.compression.gunzip` method expects. \r\n\r\nThe [RFC 7230, section 3.3.3](https://tools.ietf.org/html/rfc7230#section-3.3.3) says that the response body of a HEAD request is empty regardless of the headers present in the response (in our case, `Content-Encoding`), because they show what headers will contain the GET request on the same URI. So, the GZip client middleware should not decompress a HEAD response body even when the `Content-Encoding: gzip` header is present. I will open a PR to fix this.",
"title": "GZip client middleware crashes when processing an HTTP HEAD response",
"type": "issue"
},
{
"action": "closed",
"author": "rossabaker",
"comment_id": null,
"datetime": 1618170362000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "rossabaker",
"comment_id": 817361999,
"datetime": 1618170362000,
"masked_author": "username_1",
"text": "Oh, this looks like it was fixed. :tada:",
"title": null,
"type": "comment"
}
] | 2 | 7 | 8,119 | false | false | 8,119 | true |
servicecatalog/oscm-commons | servicecatalog | 503,443,926 | 13 | {
"number": 13,
"repo": "oscm-commons",
"user_login": "servicecatalog"
} | [
{
"action": "opened",
"author": "cworf91",
"comment_id": null,
"datetime": 1570453980000,
"masked_author": "username_0",
"text": "<!-- Reviewable:start -->\nThis change is [<img src=\"https://reviewable.io/review_button.svg\" height=\"34\" align=\"absmiddle\" alt=\"Reviewable\"/>](https://reviewable.io/reviews/servicecatalog/oscm-commons/13)\n<!-- Reviewable:end -->",
"title": "Task/synchronize users",
"type": "issue"
},
{
"action": "created",
"author": "cworf91",
"comment_id": 539004669,
"datetime": 1570454074000,
"masked_author": "username_0",
"text": "Travisjob fails, because the TimerType is unknown. See pullrequest: \r\nhttps://github.com/servicecatalog/oscm-interfaces/pull/31",
"title": null,
"type": "comment"
}
] | 1 | 2 | 355 | false | false | 355 | false |
mafintosh/why-is-node-running | null | 577,159,409 | 51 | null | [
{
"action": "opened",
"author": "deepal",
"comment_id": null,
"datetime": 1583526557000,
"masked_author": "username_0",
"text": "When a timer is `unref`ed, it will not keep the node process running. Therefore, `unref`ed timers should be excluded from the output. \r\n\r\n**Observed Behaviour**\r\n\r\ne.g, Consider the following example.\r\n\r\n```js\r\nconst log = require('./');\r\n\r\nsetTimeout(() => {}, 99999999).unref()\r\nprocess.stdin.on('data', () => {});\r\n\r\nsetTimeout(function() {\r\n this.unref()\r\n log()\r\n})\r\n```\r\n\r\nThe output for the above script is:\r\n\r\n```\r\nThere are 3 handle(s) keeping the process running\r\n\r\n# Timeout\r\n/Users/username_0/Projects/why-is-node-running/abc.js:3 - setTimeout(() => {}, 99999999)\r\n\r\n# TTYWRAP\r\n/Users/username_0/Projects/why-is-node-running/abc.js:4 - process.stdin.on('data', () => {});\r\n\r\n# Timeout\r\n/Users/username_0/Projects/why-is-node-running/abc.js:6 - setTimeout(function() {\r\n```\r\n\r\n**Expected Behaviour**\r\n\r\nWe should not see the two `Timeout`s as they are `unref`ed and are not keep the process from exiting.\r\n\r\n_Happy to raise a PR for this._",
"title": "Unref'ed timers should be excluded as they do not keep Node process running",
"type": "issue"
},
{
"action": "created",
"author": "deepal",
"comment_id": 595966443,
"datetime": 1583528982000,
"masked_author": "username_0",
"text": "This is fixed in 2.1.1",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "deepal",
"comment_id": null,
"datetime": 1583528985000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 966 | false | false | 966 | true |
valstu/korona-info | null | 581,795,880 | 18 | {
"number": 18,
"repo": "korona-info",
"user_login": "valstu"
} | [
{
"action": "opened",
"author": "valstu",
"comment_id": null,
"datetime": 1584300828000,
"masked_author": "username_0",
"text": "",
"title": "Add translations to production",
"type": "issue"
}
] | 2 | 2 | 296 | false | true | 0 | false |
appsembler/figures | appsembler | 639,076,569 | 226 | {
"number": 226,
"repo": "figures",
"user_login": "appsembler"
} | [
{
"action": "opened",
"author": "johnbaldwin",
"comment_id": null,
"datetime": 1592248350000,
"masked_author": "username_0",
"text": "This commit should dramatically improve the query performance for the\r\nenrollment metrics pipeline\r\n\r\nWhat was wrong?\r\n\r\nQueries were very slow because of a 'LMIT 1' issues with MySQL. For a starting point, see here\r\n\r\n https://stackoverflow.com/questions/15460133/mysql-dramatically-slower-query-execution-if-use-limit-1-instead-of-limit-5\r\n\r\nIn Django, we were doing a filter query that returns a single record or\r\n`None`. Examples:\r\n\r\n```\r\nStudentModule.objects.filter(**filter_args).latest('modified')\r\nStudentModule.objects.filter(**filter_args).order_by('-modified).first()\r\n```\r\n\r\nQuery functions such as `latest`, `first`, `last` and so on add a `LIMIT\r\n1` to the underlying SQL query, which has apparent negative performance on the query analyzer\r\n\r\nTo address this, we do two things\r\n\r\n1. For the specified course, we filter the StudentModule records\r\n2. For the specifid learner in the course, we filter\r\n\r\nAlso, LearnerCourseGradesMetrics queries are slow as the model needs indexing\r\non fields including site, course, and learner. We address this twofold\r\n\r\n1. We will add indexing to the needed fields after we prune old records.\r\nThis is so we're not indexing records we are just going to delete anyway\r\n2. We filter all LearnerCourseGradeMetrics records for the specified\r\ncourse\r\n\r\nThis commit performs #2 above to then filter from this queryset to find\r\nLearnerCourseGradeMetrics records for the specified learner in the\r\ncourse\r\n\r\nEnrollment Metrics tests have been updated to reflect changes in the\r\nproduction code",
"title": "Updates pipeline enrollment metrics queries to improve performance",
"type": "issue"
}
] | 2 | 2 | 1,538 | false | true | 1,538 | false |
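The figures record above argues that Django's single-record helpers (`latest()`, `first()`) append a `LIMIT 1` that MySQL's planner can mishandle, and that pre-filtering the course's rows avoids it. A minimal sketch of the *shape* of the two queries — the exact SQL depends on Django's version and backend, and the table/column names here are illustrative, not the real edx-platform schema:

```python
# Hedged sketch: roughly the SQL the two Django query styles produce.
# Table and column names are assumptions for illustration only.

def latest_sql(table, course_id, student_id):
    """SQL shape for StudentModule.objects.filter(...).latest('modified').
    Note the trailing LIMIT 1 that the linked Stack Overflow report says
    the MySQL planner handles poorly on some index layouts."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE course_id = '{course_id}' AND student_id = {student_id} "
        f"ORDER BY modified DESC LIMIT 1"
    )

def prefiltered_sql(table, course_id):
    """SQL shape for the two-step approach in the PR: fetch the course's
    rows once (no LIMIT 1), then pick each learner's newest row in Python."""
    return f"SELECT * FROM {table} WHERE course_id = '{course_id}'"

print(latest_sql("courseware_studentmodule", "course-v1:Demo", 42))
print(prefiltered_sql("courseware_studentmodule", "course-v1:Demo"))
```

The trade-off is moving the per-learner "newest row" selection out of SQL and into application code, in exchange for a query the planner optimizes well.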
goto-bus-stop/react-abstract-autocomplete | null | 534,739,386 | 200 | null | [
{
"action": "closed",
"author": "goto-bus-stop",
"comment_id": null,
"datetime": 1580644634000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 10,552 | false | true | 0 | false |
googleapis/google-cloud-cpp | googleapis | 453,209,807 | 2,744 | {
"number": 2744,
"repo": "google-cloud-cpp",
"user_login": "googleapis"
} | [
{
"action": "opened",
"author": "coryan",
"comment_id": null,
"datetime": 1559850785000,
"masked_author": "username_0",
"text": "Probably because I am lazy, I started using unsigned literals when\r\ncomparing to an unsigned. This is not needed, warnings are produced when\r\ncomparing signed vs. unsigned *variables*, but the literals are promoted\r\nto the larger (unsigned) type.\r\n\r\nThis fixes #2742.\n\n<!-- Reviewable:start -->\n---\nThis change is [<img src=\"https://reviewable.io/review_button.svg\" height=\"34\" align=\"absmiddle\" alt=\"Reviewable\"/>](https://reviewable.io/reviews/googleapis/google-cloud-cpp/2744)\n<!-- Reviewable:end -->",
"title": "Remove unnecessary 'unsigned' literals.",
"type": "issue"
},
{
"action": "created",
"author": "coryan",
"comment_id": 499664387,
"datetime": 1559854563000,
"masked_author": "username_0",
"text": "It was not a hassle at all. And (good news!) we had no such warning enabled.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 579 | false | true | 579 | false |
AdobeDocs/experience-manager-cloud-manager.en | AdobeDocs | 676,951,832 | 25 | null | [
{
"action": "opened",
"author": "kwin",
"comment_id": null,
"datetime": 1597158182000,
"masked_author": "username_0",
"text": "It is crucial to know which server ids are already taken by the default settings.xml and whether certain ids are processed via a mirror.",
"title": "Password-Protected Maven Repository Support does not give details about allowed server ids",
"type": "issue"
},
{
"action": "created",
"author": "justinedelson",
"comment_id": 672010823,
"datetime": 1597159191000,
"masked_author": "username_1",
"text": "Agreed -- I created #26 to add some language about \"reserved\" server ids.\r\n\r\n@username_0 I don't fully understand what you mean about mirrors and how it is related. Are you looking for some language that says that we will *never* define a mirror for `*` but only:\r\n\r\n* `central`\r\n* `adobe-*`\r\n* `cloud-manager-*`\r\n\r\n? But I'm guessing that this is really the concern -- you want some guarantee that Cloud Manager will never try to mirror a custom repository (`myco-repository` in the example).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kwin",
"comment_id": 672069225,
"datetime": 1597163134000,
"masked_author": "username_0",
"text": "Exactly, thanks for the prompt PR.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "justinedelson",
"comment_id": 672283497,
"datetime": 1597180539000,
"masked_author": "username_1",
"text": "OK. Created https://github.com/AdobeDocs/experience-manager-cloud-manager.en/pull/27 for the mirror language.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "bohnertchris",
"comment_id": null,
"datetime": 1597387875000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "bohnertchris",
"comment_id": 673918890,
"datetime": 1597387875000,
"masked_author": "username_2",
"text": "Great collaboration on this. Thanks for the PR, @username_1. Since that seemed to solve the issue, I am closing this.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 886 | false | false | 886 | true |
mwaskom/seaborn | null | 675,763,584 | 2,188 | {
"number": 2188,
"repo": "seaborn",
"user_login": "mwaskom"
} | [
{
"action": "opened",
"author": "MaozGelbart",
"comment_id": null,
"datetime": 1597003775000,
"masked_author": "username_0",
"text": "This PR adds intersphinx links to `heatmap` and `clustermap`. It also updates a url in the installation guide to its most recent version.",
"title": "Improve matrix functions docstrings",
"type": "issue"
},
{
"action": "created",
"author": "mwaskom",
"comment_id": 671608303,
"datetime": 1597096343000,
"masked_author": "username_1",
"text": "Thank you!",
"title": null,
"type": "comment"
}
] | 3 | 3 | 147 | false | true | 147 | false |
kubernetes-sigs/cri-tools | kubernetes-sigs | 600,870,754 | 588 | {
"number": 588,
"repo": "cri-tools",
"user_login": "kubernetes-sigs"
} | [
{
"action": "opened",
"author": "johscheuer",
"comment_id": null,
"datetime": 1587027789000,
"masked_author": "username_0",
"text": "This PR adds precompiled binaries for `darwin` to the release.\r\n\r\nI tested the darwin support locally with my Mac:\r\n\r\n```bash\r\n$ uname -a\r\nDarwinMacBook-Pro.local 19.4.0 Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64 i386 MacBookPro15,2 Darwin\r\n```\r\n\r\nand (nearly) everything worked:\r\n\r\n```bash\r\n$ make all\r\nCGO_ENABLED=0 GO111MODULE=on go test -mod=vendor -c -o /Users/jscheuermann/go/src/github.com/kubernetes-sigs/cri-tools/_output/critest \\\r\n -ldflags '-X github.com/kubernetes-sigs/cri-tools/pkg/version.Version=1.18.0-6-g719b9ad' \\\r\n -tags 'selinux' \\\r\n github.com/kubernetes-sigs/cri-tools/cmd/critest\r\nCGO_ENABLED=0 GO111MODULE=on go build -mod=vendor -o /Users/jscheuermann/go/src/github.com/kubernetes-sigs/cri-tools/_output/crictl \\\r\n -ldflags '-X github.com/kubernetes-sigs/cri-tools/pkg/version.Version=1.18.0-6-g719b9ad' \\\r\n -tags 'selinux' \\\r\n github.com/kubernetes-sigs/cri-tools/cmd/crictl\r\n$ make install\r\ninstall -D -m 755 /Users/jscheuermann/go/src/github.com/kubernetes-sigs/cri-tools/_output/critest /usr/local/bin/critest\r\ninstall -D -m 755 /Users/jscheuermann/go/src/github.com/kubernetes-sigs/cri-tools/_output/crictl /u\r\nsr/local/bin/crictl\r\n# Setup the unix socket to the remote Linux machines\r\n$ sudo ssh -L /var/run/dockershim.sock:/var/run/dockershim.sock dev-box\r\n$ sudo crictl ps\r\nCONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID\r\n2d2932ba8b376 5dd8f24429b48 About an hour ago Running kube-controller-manager 2 73c30628d302e\r\na2e4e820fe120 8d2e2e5a92ace About an hour ago Running kube-scheduler 2 8b7430f7450d1\r\n3c0af35ed62ca 628f0e52ae53b About an hour ago Running kube-apiserver 0 4c4d3baeb845e\r\ned250727561f7 calico/kube-controllers@sha256:0fa9d599e2147a5ee6d951c7f18f59144a6459c2346864a4b31c6ab38668a393 47 hours ago Running calico-kube-controllers 0 2358f406d3e55\r\n929e2a9efa221 70f311871ae12 47 hours ago Running coredns 0 
f3fc8e8463397\r\n438b303754b43 70f311871ae12 47 hours ago Running coredns 0 0c5bdc574d8af\r\ne79655b1e7589 calico/node@sha256:9519301b28c79741d3c753b23ae1a71c970789ed77c40ea3fa767693591aea98 47 hours ago Running calico-node 0 e62cf4d069a83\r\necd6758c00f66 87a399dffea64 47 hours ago Running kube-proxy 0 132ee961678a0\r\n0771860f6ac23 303ce5db0e90d 47 hours ago Running etcd 0 e20a8c3c544ca\r\n```\r\n\r\nThe only thing that didn't work was `crictl exec` with the following error:\r\n\r\n```bash\r\nFATA[0000] execing command in container failed: error sending request: Post \"http://127.0.0.1:38865/exec/9K-dNcok\": dial tcp 127.0.0.1:38865: connect: connection refused\r\n```\r\n\r\nBut this error is expected since I didn't add `-L 127.0.0.1:38865:127.0.0.1:38865` to the ssh command.\r\n\r\nFixes: #573",
"title": "Add darwin to releases",
"type": "issue"
}
] | 2 | 4 | 7,350 | false | true | 4,031 | false |
aws/aws-cdk | aws | 544,541,958 | 5,608 | {
"number": 5608,
"repo": "aws-cdk",
"user_login": "aws"
} | [
{
"action": "opened",
"author": "rix0rrr",
"comment_id": null,
"datetime": 1577964436000,
"masked_author": "username_0",
"text": "Immutably imported `Role`s could not be used for CodeBuild\r\n`Project`s, because they would create a policy but be unable\r\nto attach it to the Role. That leaves an unattached Policy,\r\nwhich is invalid.\r\n\r\nFix this by making `Policy` objects only render to an `AWS::IAM::Policy`\r\nresource if they actually have any effect. It is perfectly allowed to\r\ncreate new unattached Policy objects, or have empty Policy objects.\r\nOnly if and when they actually need to mutate the policy of an IAM\r\nidentity will they render themselves to the CloudFormation template.\r\nBeing able to abstract away these kinds of concerns is exactly the value\r\nof a higher-level programming model.\r\n\r\nTo allow for the rare cases where an empty Policy object would be\r\nconsidered a programming error, we still have the flag `mustCreate`\r\nwhich triggers the legacy behavior of alwyas creating the\r\n`AWS::IAM::Policy` resource which must have a statement and be\r\nattached to an identity.\r\n\r\n\r\n\r\n----\r\n\r\n*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*\r\n\r\n<!-- \r\nPlease read the contribution guidelines and follow the pull-request checklist:\r\nhttps://github.com/aws/aws-cdk/blob/master/CONTRIBUTING.md\r\n -->",
"title": "fix(codebuild): cannot use immutable roles for Project",
"type": "issue"
}
] | 3 | 12 | 5,806 | false | true | 1,239 | false |
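The aws-cdk record above describes making `Policy` render an `AWS::IAM::Policy` resource only when it would actually have an effect (statements present and an identity attached), with a `mustCreate` escape hatch for the legacy always-create behavior. A minimal Python sketch of that rendering rule — this is not the real CDK implementation, and all class/method names here are hypothetical:

```python
# Hedged sketch of "render only if the policy has any effect".
# Hypothetical names; the real aws-cdk Policy class is TypeScript and richer.

class Policy:
    def __init__(self, must_create=False):
        self.statements = []      # policy statements added so far
        self.attached_roles = []  # identities this policy is attached to
        self.must_create = must_create

    def add_statement(self, statement):
        self.statements.append(statement)

    def attach_to_role(self, role_name):
        self.attached_roles.append(role_name)

    def render(self):
        """Emit an AWS::IAM::Policy fragment only when the policy would do
        something (or when the caller opted into legacy always-create)."""
        has_effect = bool(self.statements) and bool(self.attached_roles)
        if not self.must_create and not has_effect:
            return None  # empty/unattached policies vanish from the template
        return {
            "Type": "AWS::IAM::Policy",
            "Statements": list(self.statements),
            "Roles": list(self.attached_roles),
        }
```

With this rule, code paths that create a policy against an immutably imported role simply produce nothing, instead of an invalid unattached `AWS::IAM::Policy` resource.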
rust-lang/cargo | rust-lang | 602,102,156 | 8,127 | null | [
{
"action": "opened",
"author": "phil-opp",
"comment_id": null,
"datetime": 1587141728000,
"masked_author": "username_0",
"text": "<!-- Thanks for filing a 🙋 feature request 😄! -->\r\n\r\n**Describe the problem you are trying to solve**\r\n<!-- A clear and concise description of the problem this feature request is trying to solve. -->\r\n\r\nThe cargo command line API has some `-Z` flags that allow to opt-in to unstable functionality. Examples are `-Z build-std`, which recompiles the sysroot for custom targets, or `-Z features=build_dep`, which fixes the long-standing problem tracked by https://github.com/rust-lang/cargo/issues/5730.\r\n\r\nThe problem with these flags is that they need to be explicitly passed to every invocation of cargo, which has many downsides:\r\n\r\n- It makes the build/check command much more cumbersome to type.\r\n- Your project might not build if you forget to pass the flags (e.g. with `-Z build-std`).\r\n - This means that you need to adjust all `cargo` invocations in CI scripts etc.\r\n - Users of your (nightly) crate need to use the adjusted build command too.\r\n - Tools like `rust-analyzer` also need to be adjusted to pass the required flag.\r\n- The changed build command is not recorded in the git history. So if you check out an old commit at some point, you need to manually remember which build command was used at that time.\r\n\r\nAs a result of these downsides, many people refrain from using unstable command line features for their (nightly) projects. This means that the features are less tested by the community before they are stabilized at some point.\r\n\r\n**Describe the solution you'd like**\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\nTo solve these problems, I think there should be a way to enable cargo command line flags from a configuration file. One possibility for this could be to introduce a new `build.cargoflags` key in `.cargo/config` files, similar to the existing `build.rustflags` and `build.rustdocsflags` keys. 
This would be a relatively simple change that solves all the mentioned problems.\r\n\r\n**Notes**\r\n<!-- Any additional context or information you feel may be relevant to the issue. -->\r\n\r\nGiven that it's already possible to enable unstable features of the `rustc` command line API through `build.rustflags`, there is already a precedent for enabling unstable command line features from `.cargo/config` files.",
"title": "Add a `build.cargoflags` configuration key",
"type": "issue"
},
{
"action": "created",
"author": "bearcage",
"comment_id": 647187357,
"datetime": 1592776864000,
"masked_author": "username_1",
"text": "It looks like there's some precedent for a different approach to globally enabling cargo Z flags in [how mtime_on_use is configured](https://github.com/rust-lang/cargo/blob/master/src/cargo/util/config/mod.rs#L745-L749). Doing this for all unstable flags would (I think) be as simple as modifying the snippet linked above to something like this:\r\n\r\n```\r\nif let Some(unstable_key) = key.strip_prefix(\"unstable.\") {\r\n if let Err(err) = self.unstable_flags.add(unstable_key, value) {\r\n // error out on invalid flag? probably a good idea.\r\n }\r\n}\r\n```\r\n\r\nI put together #8393 to see how that'd work, and I think it'd satisfy what you're trying to do @username_0 -- lmk if you agree or if you would still prefer to see something like a cargoflags config entry.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "phil-opp",
"comment_id": 647356760,
"datetime": 1592813182000,
"masked_author": "username_0",
"text": "Yes, I think this would solve my issue. I didn't know about the `unstable` table, but I really like the approach.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bearcage",
"comment_id": 649131126,
"datetime": 1593042274000,
"masked_author": "username_1",
"text": "Glad to hear it -- no problem!\r\n\r\nI didn't know about unstable either until I saw mtime-on-use doing it, and I figured... why not?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "bors",
"comment_id": null,
"datetime": 1594339843000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 5 | 3,273 | false | false | 3,273 | true |
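The cargo record above settles on the `unstable` config table (as prototyped for `mtime-on-use` and generalized in the PR referenced in the thread) rather than a new `build.cargoflags` key. A hedged sketch of what such a `.cargo/config.toml` entry might look like — the exact key names and accepted value shapes depend on the cargo nightly in use, so treat this as illustrative:

```toml
# Assumed sketch: each `unstable.<flag>` entry acts like `-Z <flag>` on every
# cargo invocation, so CI scripts and tools don't need the flag spelled out.
[unstable]
build-std = ["core", "alloc"]  # like `-Z build-std=core,alloc`
mtime-on-use = true            # like `-Z mtime-on-use`
```

Because the file lives in the repository, the required flags are recorded in git history and travel with old commits, addressing the reproducibility concern in the original request.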
PreAngel/venture-sprint.com | PreAngel | 642,291,524 | 308 | null | [
{
"action": "opened",
"author": "nkshuihan",
"comment_id": null,
"datetime": 1592620407000,
"masked_author": "username_0",
"text": "Change “ventures ” to \"design sprint\" in the same format as “news”",
"title": "change ventures ",
"type": "issue"
},
{
"action": "created",
"author": "huan",
"comment_id": 646945114,
"datetime": 1592631306000,
"masked_author": "username_1",
"text": "Do you mean you want to change the `Ventures` from our navigation menu?\r\n\r\nCurrently, our menu is as the following:\r\n\r\n```\r\nVenture SPRINT\r\nNews\r\nPortfolios\r\nVentures\r\nContact\r\nAbout\r\n```\r\n\r\nPlease confirm that you want to change as the following:\r\n\r\n```diff\r\nVenture SPRINT\r\nNews\r\nPortfolios\r\n- Ventures\r\n+ Design Sprint\r\nContact\r\nAbout\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nkshuihan",
"comment_id": 646959183,
"datetime": 1592639247000,
"masked_author": "username_0",
"text": "yes,exactly",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "huan",
"comment_id": 646999864,
"datetime": 1592661806000,
"masked_author": "username_1",
"text": "I have changed the navigation as requested, please let me know if it's what we want. Thanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nkshuihan",
"comment_id": 647074210,
"datetime": 1592710479000,
"masked_author": "username_0",
"text": "I've read it . Not only need to change the name of the navigation , but also the format and presentation mode should be changed to be consistent with news",
"title": null,
"type": "comment"
}
] | 2 | 5 | 666 | false | false | 666 | false |
pulumi/pulumi | pulumi | 604,043,355 | 4,457 | null | [
{
"action": "opened",
"author": "amichel",
"comment_id": null,
"datetime": 1587479304000,
"masked_author": "username_0",
"text": "After upgrade to 2.0 existing workflows fail with:\r\n**error: --yes must be passed in to proceed when running in non-interactive mode**\r\nCurrent workaround is to pass **up --yes** in step args.\r\nI think this should be passed in entrypoint.sh by default",
"title": "Existing Github Actions Workflow fails on missing --yes after upgrade to 2.0",
"type": "issue"
},
{
"action": "closed",
"author": "stack72",
"comment_id": null,
"datetime": 1589383855000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 251 | false | false | 251 | false |
Adlik/Adlik | Adlik | 667,645,905 | 243 | {
"number": 243,
"repo": "Adlik",
"user_login": "Adlik"
} | [
{
"action": "opened",
"author": "KellyZhang2020",
"comment_id": null,
"datetime": 1596010625000,
"masked_author": "username_0",
"text": "",
"title": "Fix the bug of compile .h5 model to tflite model",
"type": "issue"
},
{
"action": "created",
"author": "EFanZh",
"comment_id": 666069231,
"datetime": 1596079274000,
"masked_author": "username_1",
"text": "bors r+",
"title": null,
"type": "comment"
}
] | 3 | 3 | 135 | false | true | 7 | false |