| repo | org | issue_id | issue_number | pull_request | events | text_size | bot_issue | modified_by_bot | user_count | event_count | modified_usernames |
|---|---|---|---|---|---|---|---|---|---|---|---|
tl-its-umich-edu/my-learning-analytics | tl-its-umich-edu | 560,533,556 | 875 | {
"number": 875,
"repo": "my-learning-analytics",
"user_login": "tl-its-umich-edu"
} | [
{
"action": "opened",
"author": "ssciolla",
"comment_id": null,
"datetime": 1580925184000,
"masked_author": "username_0",
"text": "Responding to changes from PR #845, this PR adjusts `globals.js` to use an empty object if it is unable to find the`myla_globals` JSON element in the DOM. Along with an extra condition checking whether `mylaGlobals.primary_color` is `undefined` allows tests to import and consume `siteTheme` without an error. I also updated the hex values in the snapshot for `createHistogram.test.js` to reflect the new secondary color from PR #853. This PR aims to resolve issue #874.",
"title": "Add fallback empty object for mylaGlobals and update createHistogram test (#874)",
"type": "issue"
},
{
"action": "created",
"author": "pushyamig",
"comment_id": 582539539,
"datetime": 1580926342000,
"masked_author": "username_1",
"text": "All the unit tests are passing",
"title": null,
"type": "comment"
}
] | 499 | false | false | 2 | 2 | false |
BoboTiG/ebook-reader-dict | null | 782,588,354 | 533 | {
"number": 533,
"repo": "ebook-reader-dict",
"user_login": "BoboTiG"
} | [
{
"action": "opened",
"author": "BoboTiG",
"comment_id": null,
"datetime": 1610191854000,
"masked_author": "username_0",
"text": "That's better, thanks @username_1 ;)",
"title": "fix #501: Improve aliases retrieval",
"type": "issue"
},
{
"action": "created",
"author": "lasconic",
"comment_id": 757139102,
"datetime": 1610193614000,
"masked_author": "username_1",
"text": "We could probably use the same trick in other scripts.",
"title": null,
"type": "comment"
}
] | 88 | false | true | 2 | 2 | true |
SilenceLove/HXPhotoPicker | null | 775,714,708 | 433 | null | [
{
"action": "opened",
"author": "qunwang6",
"comment_id": null,
"datetime": 1609221532000,
"masked_author": "username_0",
"text": "It seems I can only configure whether a video auto-plays during preview. Is there a setting to stop playback after the video has been previewed once?",
    "title": "Video preview play count",
"type": "issue"
},
{
"action": "created",
"author": "SilenceLove",
"comment_id": 752048252,
"datetime": 1609242539000,
"masked_author": "username_1",
"text": "There is currently no such configuration",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cike534222598",
"comment_id": 918934363,
"datetime": 1631608343000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "comment"
}
] | 44 | false | false | 3 | 3 | false |
enso-org/enso | enso-org | 824,795,362 | 1,557 | null | [
{
"action": "opened",
"author": "mwu-tow",
"comment_id": null,
"datetime": 1615225255000,
"masked_author": "username_0",
"text": "### General Summary\r\nIf the visualization sets preprocessor code that yields an error, this error is not reported in any way to IDE.\r\n\r\nThis is really troublesome, as developing or debugging visualizations when there is something wrong in the preprocessor code becomes guesswork.\r\n\r\n### Steps to Reproduce\r\nCreate a visualization that sets nonsensical preprocessor expression. Wait for updates.\r\n\r\nSee the example request IDE makes under such condition and the reply:\r\n\r\n```json\r\n{\r\n \"jsonrpc\": \"2.0\",\r\n \"id\": 18,\r\n \"method\": \"executionContext/attachVisualisation\",\r\n \"params\": {\r\n \"expressionId\": \"1f9ca20f-5143-4010-819d-7b4a4768e477\",\r\n \"visualisationConfig\": {\r\n \"executionContextId\": \"38358e93-60ef-400f-bb25-c084e51af57d\",\r\n \"expression\": \"hey -> this is pure nonsense\",\r\n \"visualisationModule\": \"Unnamed.Main\"\r\n },\r\n \"visualisationId\": \"a1cced79-1e4b-47f6-a85d-73f766325372\"\r\n }\r\n}\r\n\r\n{\r\n \"jsonrpc\": \"2.0\",\r\n \"id\": 18,\r\n \"result\": null\r\n}\r\n```\r\n\r\n\r\n### Expected Result\r\n\r\n1) if expression can feasibly be proven to be incorrect, attaching should fail;\r\n2) if expression's validity depends on runtime conditions, the error should be reported through appropriate channels (binary connection?).\r\n\r\nBasically, IDE needs to observe some results of visualization, be it correct values or errors on whatever stage.\r\n\r\n### Actual Result\r\nAttaching succeeds but visualization never receives any data. No error nor diagnostics are provided to explain this silence.\r\n\r\nIn the logs one can find line \r\n```\r\n[warn] [2021-03-08T17:28:42.928Z] [org.enso.languageserver.protocol.json.JsonConnectionController] Received unknown message: VisualisationEvaluationFailed(38358e93-60ef-400f-bb25-c084e51af57d,Compile error: Variable `is` is not defined.)\r\n``` \r\nbut it is not easily accessible.\r\n\r\n### Enso Version\r\n0.2.6",
"title": "Visualization Errors Are Not Reported",
"type": "issue"
},
{
"action": "closed",
"author": "4e6",
"comment_id": null,
"datetime": 1619011943000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 1,891 | false | false | 2 | 2 | false |
rapidsai/cuml | rapidsai | 704,487,124 | 2,843 | null | [
{
"action": "opened",
"author": "cjnolet",
"comment_id": null,
"datetime": 1600444609000,
"masked_author": "username_0",
"text": "The increase in the number of Python warnings in our CI logs from our pytests have made it increasingly challenging to hunt through the logs searching for potential CI failures. \r\n\r\nIn [this CI log] (https://gpuci.gpuopenanalytics.com/blue/rest/organizations/jenkins/pipelines/rapidsai/pipelines/gpuci/pipelines/cuml/pipelines/prb/pipelines/cuml-gpu-build/runs/14624/log/?start=0), almost half of the log is filled up with warnings that aren't relevant at all to the tests. We should probably ignore many of these warnings.",
"title": "[BUG] Ignore excessive warnings in pytests / CI logs",
"type": "issue"
},
{
"action": "created",
"author": "wphicks",
"comment_id": 698461025,
"datetime": 1600965966000,
"masked_author": "username_1",
"text": "Just to confirm, you're talking about the huge number of warnings like the following, yes?\r\n\r\n```\r\n /opt/conda/envs/rapids/lib/python3.8/site-packages/sklearn/utils/validation.py:68: FutureWarning: Pass eps=0.9109496478186582 as keyword args. From version 0.25 passing these as positional arguments will result in an error\r\n warnings.warn(\"Pass {} as keyword args. From version 0.25 \"\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "JohnZed",
"comment_id": 728209573,
"datetime": 1605547676000,
"masked_author": "username_2",
"text": "Possible staging / elements:\r\n\r\n- [ ] Hide scikit-learn warnings by default\r\n- [ ] Add python wrapper that allows us to fail on any warnings (with ability for expected warnings)\r\n- [ ] Reroute C++ logging through python logger to allow it to participate in python fail options\r\n\r\nSee also: adding log_once / warn_once to reduce spew consistently",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "JohnZed",
"comment_id": 728211618,
"datetime": 1605547873000,
"masked_author": "username_2",
"text": "See also:\r\n - [ ] Adding custom SummaryWriter that will make it easier to find and view error",
"title": null,
"type": "comment"
}
] | 1,354 | false | true | 3 | 4 | false |
ppc64le-cloud/builds | ppc64le-cloud | 845,798,393 | 1,868 | {
"number": 1868,
"repo": "builds",
"user_login": "ppc64le-cloud"
} | [
{
"action": "opened",
"author": "ltccci",
"comment_id": null,
"datetime": 1617163175000,
"masked_author": "username_0",
"text": "This is an automated PR via build-bot",
"title": "Update golang/master build",
"type": "issue"
},
{
"action": "created",
"author": "ltccci",
"comment_id": 810741012,
"datetime": 1617163180000,
"masked_author": "username_0",
"text": "[APPROVALNOTIFIER] This PR is **NOT APPROVED**\n\nThis pull-request has been approved by: *<a href=\"https://github.com/ppc64le-cloud/builds/pull/1868#\" title=\"Author self-approved\">username_0</a>*\nTo complete the [pull request process](https://git.k8s.io/community/contributors/guide/owners.md#the-code-review-process), please assign after the PR has been reviewed.\nYou can assign the PR to them by writing `/assign ` in a comment when ready.\n\nThe full list of commands accepted by this bot can be found [here](https://go.k8s.io/bot-commands?repo=ppc64le-cloud%2Fbuilds).\n\n<details open>\nNeeds approval from an approver in each of these files:\n\n- **[OWNERS](https://github.com/ppc64le-cloud/builds/blob/master/OWNERS)**\n\nApprovers can indicate their approval by writing `/approve` in a comment\nApprovers can cancel approval by writing `/approve cancel` in a comment\n</details>\n<!-- META={\"approvers\":[]} -->",
"title": null,
"type": "comment"
}
] | 939 | false | false | 1 | 2 | true |
hyperledger/fabric | hyperledger | 748,322,933 | 2,156 | {
"number": 2156,
"repo": "fabric",
"user_login": "hyperledger"
} | [
{
"action": "opened",
"author": "psingh-io",
"comment_id": null,
"datetime": 1606072285000,
"masked_author": "username_0",
"text": "#### Type of change\r\n- Improvement to cryptogen tool\r\n\r\n#### Description\r\nThis is an improvement on cryptogen tool to allow for custom expiry times for CA, Identity and TLS certificates for each organization. The expiration times can be customized using following configuration for each organization:\r\n CACertExpiry: 131400h\r\n IdentityCertExpiry: 131400h\r\n TLSCertExpiry: 131400h\r\n\r\nThe default expiry time remains 15 years",
"title": "Added support to specify CA, Identity and TLS cert expiry times in cryptogen",
"type": "issue"
},
{
"action": "created",
"author": "sykesm",
"comment_id": 732253926,
"datetime": 1606147342000,
"masked_author": "username_1",
"text": "Re-running PR checks. Some failures related to ubuntu mirror issues and units failed with FAB-16233.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sykesm",
"comment_id": 733774789,
"datetime": 1606317993000,
"masked_author": "username_1",
"text": "@username_0 I plan to review these changes shortly but I wanted to point out that this PR was opened against release-2.3 instead of master. Our general process is to make the changes that introduce new features in master and to pull them back to other branches as appropriate. This helps ensure changes don't unintentionally dropped along the way.\r\n\r\nCan you please retarget this PR to master?\r\n\r\nThanks.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "psingh-io",
"comment_id": 733837883,
"datetime": 1606324413000,
"masked_author": "username_0",
"text": "Sure, I will do this later today.\n\nThanks",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "caod123",
"comment_id": 754045450,
"datetime": 1609774547000,
"masked_author": "username_2",
"text": "@username_0 it seems after you changed the base branch to master, a couple of unnecessary commits from the rebased `release-2.3` branch made it into this PR. Can you clean up the commit log so only your commit is on this PR against `master`? Also if possible can you address @username_1's comment regarding the missing integration tests for verifying this behavior.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "psingh-io",
"comment_id": 754291537,
"datetime": 1609803836000,
"masked_author": "username_0",
"text": "I will do the cleanup by tomorrow.\n\nThere is a separate ticket for integration test and I will pick that after\nthis. As it was pointed out by @username_1 <https://github.com/username_1>'s, the\nintegration tests did not exist for cryptogen before these changes.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sykesm",
"comment_id": 774051136,
"datetime": 1612533768000,
"masked_author": "username_1",
"text": "The goal of that comment was to highlight that we're missing integration tests for this tool and that I believe we need them before we add additional capabilities.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "denyeart",
"comment_id": 831531816,
"datetime": 1620075738000,
"masked_author": "username_3",
"text": "@username_0 Do you intend to address the comments?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "denyeart",
"comment_id": 832621123,
"datetime": 1620214865000,
"masked_author": "username_3",
"text": "No response from @username_0 in many months so let's close, can be re-opened later.",
"title": null,
"type": "comment"
}
] | 1,882 | false | false | 4 | 9 | true |
linz/geospatial-data-lake | linz | 892,766,769 | 710 | null | [
{
"action": "opened",
"author": "billgeo",
"comment_id": null,
"datetime": 1621201705000,
"masked_author": "username_0",
"text": "### Enabler\n\n<!-- A description of the enabler that covers what needs to be done why it needs to be done. It should be understandable by all members of the team -->\n\nSo that we can POC a STAC brower and/or API, we want to research the options for implementing STAC browsing functionality\n\n#### Acceptance Criteria\n\n<!-- Requirements to accept this enabler as completed -->\n\n- [ ] All available options have been investigated for pros and cons, including technology fit, user experience, maturity and effort for the product team to incorporate\n- [ ] Options presented to the team with recommendations\n- [ ] Team decision on best option to use in the POC\n\n#### Tasks\n\n<!-- Tasks needed to complete this enabler -->\n\n\n- [ ] Incorporate @jacobus POC on a bespoke index using AWS services like Elasticsearch etc\n- [ ] Investigated options in this server list here: https://stacindex.org/ecosystem?category=Server\n- [ ] Investigated options in this GUI list here: https://stacindex.org/ecosystem?category=Visualization\n- [ ] Could also consider https://github.com/linz/linz-data-importer\n- [ ] Looked in other places?\n- [ ] Results written up in confluence\n- [ ] Present results to the team\n- [ ] Decision made by the team \n\n#### Additional context\n\n\n\n#### Definition of Ready\n\n- [ ] This story is **ready** to work on, according to the\n [team's definition](https://confluence.linz.govt.nz/pages/viewpage.action?pageId=87930423)\n\n#### Definition of Done\n\n- [ ] This story is **done**, according to the\n [team's definition](https://confluence.linz.govt.nz/pages/viewpage.action?pageId=87930423)\n\n<!-- Please add one or more of these labels: 'spike', 'refactor', 'architecture', 'infrastructure', 'compliance' -->",
"title": "Spike: research STAC browsers",
"type": "issue"
},
{
"action": "closed",
"author": "MitchellPaff",
"comment_id": null,
"datetime": 1623382775000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 1,715 | false | false | 2 | 2 | false |
microsoft/vscode | microsoft | 666,109,970 | 103,400 | null | [
{
"action": "opened",
"author": "gleborgne",
"comment_id": null,
"datetime": 1595840083000,
"masked_author": "username_0",
"text": "<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->\r\n<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->\r\n<!-- Please search existing issues to avoid creating duplicates. -->\r\n<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->\r\n\r\n<!-- Use Help > Report Issue to prefill these. -->\r\n- VSCode Version: 1.47.2\r\n- OS Version: Windows 10 (update 2004)\r\n\r\nSteps to Reproduce:\r\n\r\n1. Start node.js debugging on a project with Typescript\r\n\r\n<!-- Launch with `code --disable-extensions` to check. -->\r\nDoes this issue occur when all extensions are disabled?: Yes\r\n\r\n\r\nSince last vscode version update, when starting to debug I have the error bellow. tslib is at version 2.0.0. I tried downgrading it to 1.X but still have the same error. I saw other bugs that looks related and if I add \"debug.javascript.usePreview\": false to vscode's settings, everything is working fine.\r\n\r\nTypeError: \r\n at c:\\Users\\X\\AppData\\Local\\Programs\\Microsoft VS Code\\resources\\app\\extensions\\ms-vscode.js-debug\\src\\bootloader.bundle.js:16:7581\r\n at Object.<anonymous> (c:\\Users\\X\\AppData\\Local\\Programs\\Microsoft VS Code\\resources\\app\\extensions\\ms-vscode.js-debug\\src\\bootloader.bundle.js:16:7609)\r\n at Object.__decorate (c:\\projects\\ABCD\\master\\front\\node_modules\\tslib\\tslib.js:95:96)",
"title": "Error while debugging node.js on windows",
"type": "issue"
},
{
"action": "created",
"author": "bdunn313",
"comment_id": 664545815,
"datetime": 1595872386000,
"masked_author": "username_1",
"text": "I can confirm the exact same behavior, and setting the `\"debug.javascript.usePreview\": false` does prevent the issue from happening.\r\n\r\nI observe that there _may_ be a memory leak in the preview implementation - when starting the debugger, vscode rapidly grows from ~700mb memory to as much as it can grab (I've seen north of 5GB mem taken up).\r\n\r\nVSCode Version: 1.47.2\r\nOS Version: Windows 10 v2004 Build 19041.388",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "connor4312",
"comment_id": 666559959,
"datetime": 1596131309000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "prbans",
"comment_id": 668829661,
"datetime": 1596575823000,
"masked_author": "username_3",
"text": "@username_2 pinged you offline, I can show you repro and share logs in private",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "connor4312",
"comment_id": 671580185,
"datetime": 1597092227000,
"masked_author": "username_2",
"text": "Closing due to lack of info",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "connor4312",
"comment_id": null,
"datetime": 1597092228000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 1,931 | false | false | 4 | 6 | true |
axe-api/axe-api | axe-api | 910,790,582 | 39 | null | [
{
"action": "opened",
"author": "ozziest",
"comment_id": null,
"datetime": 1622748518000,
"masked_author": "username_0",
"text": "We don't have a filename standard in the project. We should add the eslint plugin to manage that;\r\n\r\n[filename-case](https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/filename-case.md)",
"title": "Filename standards",
"type": "issue"
},
{
"action": "closed",
"author": "ozziest",
"comment_id": null,
"datetime": 1622923850000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 209 | false | false | 1 | 2 | false |
numpy/numpy | numpy | 802,842,346 | 18,355 | null | [
{
"action": "opened",
"author": "MahendraCherukupalli",
"comment_id": null,
"datetime": 1612666820000,
"masked_author": "username_0",
"text": "<!-- Please describe the issue in detail here, and fill in the fields below -->\r\n\r\n### Reproducing code example:\r\n\r\n<!-- A short code example that reproduces the problem/missing feature. It should be\r\nself-contained, i.e., possible to run as-is via 'python myproblem.py' -->\r\n\r\n```python\r\nimport numpy as np\r\n<< your code here >>\r\nimport numpy as np\r\nimport pandas as pd\r\ndf=pd.read_csv('link')\r\ndf.info() and df.describe() gives error as \"TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type\" and also plotting(df.plot()) gives same error.\r\n```\r\n\r\n### Error message:\r\n\r\n<!-- If you are reporting a segfault please include a GDB traceback, which you\r\ncan generate by following\r\nhttps://github.com/numpy/numpy/blob/master/doc/source/dev/development_environment.rst#debugging -->\r\n\r\n<!-- Full error message, if any (starting from line Traceback: ...) -->\r\n\r\n### NumPy/Python version information:\r\n\r\n<!-- Output from 'import sys, numpy; print(numpy.__version__, sys.version)' -->",
"title": "TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type",
"type": "issue"
},
{
"action": "created",
"author": "seberg",
"comment_id": 774610226,
"datetime": 1612678238000,
"masked_author": "username_1",
"text": "There is a unfortuate incompatibility with old pandas and 1.20. Updating pandas to a newer version should fix it, see also https://github.com/pandas-dev/pandas/issues/39520#issuecomment-772630011\r\n\r\nUpdating to `pandas>=1.0.5` should solve it. Supposedly also `pandas==0.25.3` works. If you are stuck with pandas 0.24.x you may not be able to usse numpy 1.20.x though, unfortunaly.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jessdwitch",
"comment_id": 775414881,
"datetime": 1612815243000,
"masked_author": "username_2",
"text": "@username_1 Any idea why someone might be suddenly seeing this error when calling `df.to_csv` with `pandas==0.24.2` and `numpy==1.16.5`? Based on what you're saying above, it seems like those should be in the \"OK\" range.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 775422571,
"datetime": 1612815692000,
"masked_author": "username_1",
"text": "@username_2 I am not sure if there are other (likely) ways to get this error, and I doubt it is possible with the above example (or a similar one, such as creating an empty dataframe IIRC).\r\n\r\nCan you double check `np.__version__` and `pd.__version__` in your shell to make sure you are running the expected versions? Sometimes you might get multiple versions or so, which can quickly become very confusing.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jessdwitch",
"comment_id": 775454519,
"datetime": 1612817676000,
"masked_author": "username_2",
"text": "Thank you for the quick reply! I did check those already, since there _are_ multiple versions installed (`numpy==1.20.0` and `pandas==0.25.3` when conda is deactivated, and the versions noted above when the conda environment the script is running in is activated). I double checked the logs, and while I don't have the specific versions printing to the log, I do see the env activating and confirmed `0.24.2` and `1.16.5` were the ones installed to that env.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 775464887,
"datetime": 1612818725000,
"masked_author": "username_1",
"text": "Hmm, can you post an example and the full traceback? I am wondering if I mixed up the issues and this one was not related to 1.20.x at all.\r\n\r\nTo be honest, if you are not running 1.20.x, then searching/asking on pandas is more likely to be successfull.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jessdwitch",
"comment_id": 775470350,
"datetime": 1612819283000,
"masked_author": "username_2",
"text": "Gotcha. Tbh asking around pandas was my first thought, but this is literally the only google result with this error (well, this and an SO thread Mahendra created). Here's the traceback starting with the `to_csv` call. If that doesn't spark anything, I'll leave you be and pop in over there, but thank you regardless for taking a look.\r\n\r\n```\r\n File \"/root/miniconda/lib/python3.7/site-packages/annotation/summarize_annotation.py\", line 159, in create_cluster_table\r\n pd.DataFrame(cluster_dict).to_csv(os.path.join(directory, 'report.csv'), index=False)\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/frame.py\", line 411, in __init__\r\n mgr = init_dict(data, index, columns, dtype=dtype)\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/internals/construction.py\", line 257, in init_dict\r\n return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/internals/construction.py\", line 82, in arrays_to_mgr\r\n arrays = _homogenize(arrays, index, dtype)\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/internals/construction.py\", line 323, in _homogenize\r\n val, index, dtype=dtype, copy=False, raise_cast_failure=False\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/internals/construction.py\", line 712, in sanitize_array\r\n subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)\r\n File \"/root/miniconda/lib/python3.7/site-packages/pandas/core/dtypes/cast.py\", line 1233, in construct_1d_arraylike_from_scalar\r\n subarr = np.empty(length, dtype=dtype)\r\nTypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 775521753,
"datetime": 1612825405000,
"masked_author": "username_1",
"text": "Sorry, do you have a minimal reproducer, preferably printing out the versions? It does look like this issue, but in that case the pandas code in `contruct_1d_arraylike_from_scalar` would include `isinstance(dtype, (np.dtype, type(np.dtype)))` or so (which is incorrect, because `type(np.dtype)` changed and was always weird).\r\n\r\nLong story short, it looks a lot like this issue but I if you are really not on NumPy 1.20 `type(np.dtype) is type` and I am not aware of what might be wrong.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jessdwitch",
"comment_id": 776890968,
"datetime": 1612979066000,
"masked_author": "username_2",
"text": "Hey sorry for disappearing, but you were totally right. For some reason the Conda environment was using the pandas from within the env, but the numpy from outside, causing the conflict. We ended up just downgrading the numpy from outside the env to match the one within the env and everything is happy now. Thank you so much!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mattip",
"comment_id": 777250395,
"datetime": 1613028449000,
"masked_author": "username_3",
"text": "Closing. Please reopen if needed.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mattip",
"comment_id": null,
"datetime": 1613028450000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "aarthim123",
"comment_id": 786297555,
"datetime": 1614295232000,
"masked_author": "username_4",
"text": "Hi I tried updating pandas to 1.0.5 and I still get the same error message. TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type . I have a numpy version of 1.20",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mattip",
"comment_id": 786438326,
"datetime": 1614320209000,
"masked_author": "username_3",
"text": "@username_4 this issue is specifically about the error in the title. Please open an error on an appropriate issue tracker for problems with installing pandas on conda please use a more appropriate forum. You might have better luck searching for similar error messages. If you do open an issue, you probably should report the output of `conda list` in order to untangle what you have installed and what you might need.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MahendraCherukupalli",
"comment_id": 786444521,
"datetime": 1614321249000,
"masked_author": "username_0",
"text": "@username_4 use ..pip install numpy==1.16.5",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RichardScottOZ",
"comment_id": 820806412,
"datetime": 1618530296000,
"masked_author": "username_5",
"text": "I just saw this:-\r\n\r\n<class 'numpy.ndarray'> <class 'numpy.ndarray'>\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-15-14380fd16e5f> in <module>\r\n 49 print(type(v_list[i]), type(face_list[i]) )\r\n 50 #dataset = pd.DataFrame({'Group':'Neoprot-Ordovician','Surface': s, 'X': v_list[i][:, 0], 'Y': v_list[i][:, 1], 'Z': v_list[i][:, 2], 'SG': -1 })\r\n---> 51 dataset = pd.DataFrame({'Group':'Neoprot-Ordovician','Surface': s, 'X': v_list[i][:, 0].astype(float) })\r\n 52 dataset[\"Name\"] = metadata_list[i][\"NAME\"]\r\n 53 dataset[\"CRS\"] = str(metadata_list[i][\"CRS\"])\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\frame.py in __init__(self, data, index, columns, dtype, copy)\r\n 433 )\r\n 434 elif isinstance(data, dict):\r\n--> 435 mgr = init_dict(data, index, columns, dtype=dtype)\r\n 436 elif isinstance(data, ma.MaskedArray):\r\n 437 import numpy.ma.mrecords as mrecords\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\internals\\construction.py in init_dict(data, index, columns, dtype)\r\n 252 arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays\r\n 253 ]\r\n--> 254 return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)\r\n 255 \r\n 256 \r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\internals\\construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype)\r\n 67 \r\n 68 # don't force copy because getting jammed in an ndarray anyway\r\n---> 69 arrays = _homogenize(arrays, index, dtype)\r\n 70 \r\n 71 # from BlockManager perspective\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\internals\\construction.py in _homogenize(data, index, dtype)\r\n 320 val = dict(val)\r\n 321 val = lib.fast_multiget(val, oindex.values, default=np.nan)\r\n--> 322 val = sanitize_array(\r\n 323 val, index, dtype=dtype, copy=False, raise_cast_failure=False\r\n 324 )\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\construction.py in sanitize_array(data, index, dtype, copy, raise_cast_failure)\r\n 463 value = maybe_cast_to_datetime(value, dtype)\r\n 464 \r\n--> 465 subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)\r\n 466 \r\n 467 else:\r\n\r\n~\\AppData\\Local\\Continuum\\anaconda3\\envs\\gemgis\\lib\\site-packages\\pandas\\core\\dtypes\\cast.py in construct_1d_arraylike_from_scalar(value, length, dtype)\r\n 1459 value = ensure_str(value)\r\n 1460 \r\n-> 1461 subarr = np.empty(length, dtype=dtype)\r\n 1462 subarr.fill(value)\r\n 1463 \r\n\r\nTypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RichardScottOZ",
"comment_id": 820807222,
"datetime": 1618530420000,
"masked_author": "username_5",
"text": "pandas 1.0.1 py38he350917_0 conda-forge\r\nnumpy 1.20.2 py38h09042cb_0 conda-forge",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 820823017,
"datetime": 1618532800000,
"masked_author": "username_1",
"text": "Please just update your pandas. If you want to stay on `1.0.x` that is fine. `1.0.5` is just a bug-fix release that was reported to also include a fix for this. If 1.0.5 is not sufficient to fix it, let us know so others can find a solution easier.\r\nThe pandas version you list are clearly mentioned as _not_ compatible in the pandas issue.\r\n\r\nIf you don't want to upgrade pandas for whatever reason, you may have to stick to NumPy 1.19.x as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RichardScottOZ",
"comment_id": 820827512,
"datetime": 1618533624000,
"masked_author": "username_5",
"text": "For reference if someone else comes across it - it happened due to installing something else downgrading the pandas version.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MigueL71994",
"comment_id": 1029684903,
"datetime": 1643954447000,
"masked_author": "username_6",
"text": "pip install --upgrade numpy\r\npip install --upgrade pandas",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dinhtrang24",
"comment_id": 1035990690,
"datetime": 1644569470000,
"masked_author": "username_7",
"text": "Hi, I've already upgrade both pandas (0.24.2) and numpy (1.21.5). When I tried data.info(), it still doesn't work. Any thoughts?\r\n\r\n`---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-12-6208d269f320> in <module>\r\n----> 1 data.info()\r\n\r\n/opt/apps/apps/binapps/anaconda3/2019.07/lib/python3.7/site-packages/pandas/core/frame.py in info(self, verbose, buf, max_cols, memory_usage, null_counts)\r\n 2503 self.index._is_memory_usage_qualified()):\r\n 2504 size_qualifier = '+'\r\n-> 2505 mem_usage = self.memory_usage(index=True, deep=deep).sum()\r\n 2506 lines.append(\"memory usage: {mem}\\n\".format(\r\n 2507 mem=_sizeof_fmt(mem_usage, size_qualifier)))\r\n\r\n/opt/apps/apps/binapps/anaconda3/2019.07/lib/python3.7/site-packages/pandas/core/frame.py in memory_usage(self, index, deep)\r\n 2597 if index:\r\n 2598 result = Series(self.index.memory_usage(deep=deep),\r\n-> 2599 index=['Index']).append(result)\r\n 2600 return result\r\n 2601 \r\n\r\n/opt/apps/apps/binapps/anaconda3/2019.07/lib/python3.7/site-packages/pandas/core/series.py in __init__(self, data, index, dtype, name, copy, fastpath)\r\n 260 else:\r\n 261 data = sanitize_array(data, index, dtype, copy,\r\n--> 262 raise_cast_failure=True)\r\n 263 \r\n 264 data = SingleBlockManager(data, index, fastpath=True)\r\n\r\n/opt/apps/apps/binapps/anaconda3/2019.07/lib/python3.7/site-packages/pandas/core/internals/construction.py in sanitize_array(data, index, dtype, copy, raise_cast_failure)\r\n 640 \r\n 641 subarr = construct_1d_arraylike_from_scalar(\r\n--> 642 value, len(index), dtype)\r\n 643 \r\n 644 else:\r\n\r\n/opt/apps/apps/binapps/anaconda3/2019.07/lib/python3.7/site-packages/pandas/core/dtypes/cast.py in construct_1d_arraylike_from_scalar(value, length, dtype)\r\n 1185 value = to_str(value)\r\n 1186 \r\n-> 1187 subarr = np.empty(length, dtype=dtype)\r\n 1188 subarr.fill(value)\r\n 1189 \r\n\r\nTypeError: 
Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type\r\n`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jtlz2",
"comment_id": 1039957257,
"datetime": 1644911375000,
"masked_author": "username_8",
"text": "Similarly - this is still an issue for me",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 1040603696,
"datetime": 1644948172000,
"masked_author": "username_1",
"text": "The last time I saw someone who updated their pandas version and then still had the problem, had an environment issue that made them pick up the wrong pandas version after all. Please double check `pandas.__version__` and `numpy.__version__` in whatever you are running (e.g. the script) itself?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "levalencia",
"comment_id": 1041664383,
"datetime": 1645025216000,
"masked_author": "username_9",
"text": "I am having the same issue.\r\nOn a new compute instance in Azure ML, I am using Python 3.8 Kernel.\r\n\r\nI checked the versions:\r\npandas 0.25.3\r\nnumpy 1.22.2\r\n\r\n\r\n`.describe `gives the same error;\r\n`TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type\r\n`\r\nI also tried:\r\n`scaled_df.select_dtypes(include=[np.number])`\r\n\r\nsame error:\r\n`TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type`\r\n\r\n\r\nWhat should I do?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dinhtrang24",
"comment_id": 1063952359,
"datetime": 1646911436000,
"masked_author": "username_7",
"text": "Yes. I have checked both script and terminal again. \r\n\r\npandas == 0.24.2 \r\nnumpy == 1.21.5\r\n\r\nThe error doesn't not disappear",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "seberg",
"comment_id": 1064117711,
"datetime": 1646922232000,
"masked_author": "username_1",
"text": "@username_7 that pandas version is known to be too old. I had asked Luis, because I though 0.24.3 may be new enough (I am not quite sure). You have to either upgrade pandas, since it is an old version, or if you are stuck with such an old pandas version downgrade NumPy.\r\n(Or I suppose apply a patch to pandas, but unless you have very concrete reasons for using that pandas version, you should upgrade.)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dinhtrang24",
"comment_id": 1066656363,
"datetime": 1647256357000,
"masked_author": "username_7",
"text": "@username_1 Hi, thanks for the reply. I figured out the problem is that I have installed different versions of pandas and numpy using pip and pip3. After upgrading and matching their versions, the problem was solved.",
"title": null,
"type": "comment"
}
] | 13,723 | false | false | 10 | 26 | true |
delphidabbler/delphidabbler.github.io | null | 639,425,684 | 55 | null | [
{
"action": "opened",
"author": "delphidabbler",
"comment_id": null,
"datetime": 1592291312000,
"masked_author": "username_0",
"text": "Getting many 404s got the old tips pages of form `/tips/99`\r\n\r\nShort term fix would to redirect to Delphi-tips github page using HTML preview. Use temporary redirect code.\r\n\r\nLonger term it would be better to restore the tips, preferably with a public API on the server and a web app on the site.",
"title": "Do redirect for Delphi Tips",
"type": "issue"
},
{
"action": "closed",
"author": "delphidabbler",
"comment_id": null,
"datetime": 1592317912000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 296 | false | false | 1 | 2 | false |
Azure-Samples/cognitive-services-speech-sdk | Azure-Samples | 602,931,519 | 592 | null | [
{
"action": "opened",
"author": "viju2008",
"comment_id": null,
"datetime": 1587357220000,
"masked_author": "username_0",
"text": "Dear Sir\r\n\r\nHow can we disable partial results to the Wesocket endpoint. I do not want partial results as I cannot work on the partial results in my domain as every word is important. The whole audio should be processed in one shot and as a whole. I have a high speed network between the SDK and Speech recognition endpoint. And telling the Speech endpoint that whole speech has to be processed I beleive will reduce the processing time and also improce throughput as it can save the compute time spent of making partial results and sending over network.\r\n\r\n\r\nAlso i saw that the audio is being sent in chunks over the network to the WebSocket endpoint at Azure end and not in one single go. Is it possible to specify the chunk size and increase it.\r\n\r\nRegards,",
"title": "Disable Partial results and increase Chunk size of audio being sent over network",
"type": "issue"
},
{
"action": "created",
"author": "oscholz",
"comment_id": 619318112,
"datetime": 1587788642000,
"masked_author": "username_1",
"text": "@username_0 Thank you for getting in touch! I am checking with the feature team on this, and we'll get back to you soon.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viju2008",
"comment_id": 620089819,
"datetime": 1588004526000,
"masked_author": "username_0",
"text": "Any Updates.\r\n\r\nRegards,\r\n\r\nVijay",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amitkumarshukla",
"comment_id": 620291018,
"datetime": 1588030664000,
"masked_author": "username_2",
"text": "@username_0 Are you looking for disabling hypothesis ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amitkumarshukla",
"comment_id": 620295670,
"datetime": 1588031641000,
"masked_author": "username_2",
"text": "@username_0 From here https://docs.microsoft.com/en-us/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet\r\ntry setting this SpeechServiceResponse_StablePartialResultThreshold\r\nlike \r\nconfig.SetProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, \"100\");\r\n\r\nPlay with different numbers and see what fits your requirement.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viju2008",
"comment_id": 620518426,
"datetime": 1588069455000,
"masked_author": "username_0",
"text": "I tried setting is as told config.setProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, \"200\");\r\n\r\n\r\nBut it is stilling recognizing partially. Below is the output\r\n\r\nRECOGNIZING: Text=I need\r\nRECOGNIZING: Text=I need to\r\nRECOGNIZING: Text=I need to transfer\r\nRECOGNIZING: Text=I need to transfer seventh\r\nRECOGNIZED: Text=I need to transfer 7000 rupees to sham.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amitkumarshukla",
"comment_id": 620875403,
"datetime": 1588110848000,
"masked_author": "username_2",
"text": "You have option not to subscribe to recognizing event and in that case the sdk wont callback the app for every intermediate. Do let me know if that solves your problem",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viju2008",
"comment_id": 620972785,
"datetime": 1588131508000,
"masked_author": "username_0",
"text": "Will disabling the event listening for partial results actually stop the speech endpoint or containers from producing partial results and sending them across the network Or will the sdk simply not display it. I wish that the partial results should not travel over the network also",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amitkumarshukla",
"comment_id": 621348251,
"datetime": 1588180652000,
"masked_author": "username_2",
"text": "@username_0 Even though you disable the event but the intermediate result will travel over the network. Please keep in mind the data which comes from service is a json payload which is text data. The amount of data received is much more lower than the amount of data sent to the service. So I can guarantee you that your data usage will never be a bottleneck for the incoming traffic.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viju2008",
"comment_id": 621398418,
"datetime": 1588186471000,
"masked_author": "username_0",
"text": "But the purpose of disabling partial results is not served. Its not only about the network overheads also the container need not waste time in making partial results and can in one shot take the hold file and process the same. \r\n\r\nThe Main reason for the post was so that the response time is made better from container by disabling partial results over the network as well as the compute time involved for the container in partial results. As the partial results are not of any use in my case. Hence if the chunk size could be increased and the whole audio is taken and processed in one go the response time of the container will be better in my case",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "amitkumarshukla",
"comment_id": 621469251,
"datetime": 1588195046000,
"masked_author": "username_2",
"text": "@username_0 That is a great input to us. Please put a feature request at https://cognitive.uservoice.com/forums/912208-speech-service\r\n\r\nOur team will look at this and will work on it based on our priority.\r\nThanks a lot.\r\nClosing it for the time being.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "amitkumarshukla",
"comment_id": null,
"datetime": 1588195046000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "viju2008",
"comment_id": 626482073,
"datetime": 1589175724000,
"masked_author": "username_0",
"text": "Hello @username_2 \r\n\r\nI have put a feature request. Please provide update on progress",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jzhw0130",
"comment_id": 644533993,
"datetime": 1592284134000,
"masked_author": "username_3",
"text": "waiting too",
"title": null,
"type": "comment"
}
] | 3,555 | false | false | 4 | 14 | true |
matrix-org/matrix-react-sdk | matrix-org | 805,181,321 | 5,634 | {
"number": 5634,
"repo": "matrix-react-sdk",
"user_login": "matrix-org"
} | [
{
"action": "opened",
"author": "jaiwanth-v",
"comment_id": null,
"datetime": 1612934599000,
"masked_author": "username_0",
"text": "Fixes vector-im/element-web#16273 \r\n\r\n",
"title": "Added loading and disabled the button while searching for server",
"type": "issue"
},
{
"action": "created",
"author": "t3chguy",
"comment_id": 777409052,
"datetime": 1613045618000,
"masked_author": "username_1",
"text": "Thanks for your contribution!",
"title": null,
"type": "comment"
}
] | 192 | false | false | 2 | 2 | false |
BryanWilhite/songhay-web-components | null | 638,538,667 | 17 | null | [
{
"action": "opened",
"author": "BryanWilhite",
"comment_id": null,
"datetime": 1592196352000,
"masked_author": "username_0",
"text": "the list of thumbs used in my [Angular work](https://github.com/username_0/songhay-ng-workspace#songhayplayer-video-you-tube-project) needs to made more flexible:\r\n\r\n\r\n\r\nit needs to work in Angular _and_ on Web standards (the shadow DOM)\r\n\r\n:book: https://www.grapecity.com/blogs/using-web-components-in-angular",
"title": "proposed component: responsive list of thumbnails",
"type": "issue"
}
] | 422 | false | false | 1 | 1 | true |
zeebe-io/zeebe | zeebe-io | 816,471,118 | 6,448 | {
"number": 6448,
"repo": "zeebe",
"user_login": "zeebe-io"
} | [
{
"action": "opened",
"author": "npepinpe",
"comment_id": null,
"datetime": 1614262140000,
"masked_author": "username_0",
"text": "## Description\r\n\r\nAs part of #6175, we will need a way to walk the scope hierarchy of variable scopes in order to properly merge variable documents, and this will be done outside of the `VariableState`. The simplest solution is simply to expose `getParentScopeKey` which returns either the parent scope key, or `NO_PARENT` if there is none.\r\n\r\nIf we end up walking this hierarchy a lot, then we could always introduce a more complex abstraction, but this is fine for now.\r\n\r\n## Related issues\r\n\r\n<!-- Which issues are closed by this PR or are related -->\r\n\r\nrelated to #6175 \r\nblocked by #6446\r\n\r\n## Definition of Done\r\n\r\n_Not all items need to be done depending on the issue and the pull request._\r\n\r\nCode changes:\r\n* [x] The changes are backwards compatibility with previous versions\r\n* [ ] If it fixes a bug then PRs are created to [backport](https://github.com/zeebe-io/zeebe/compare/stable/0.24...develop?expand=1&template=backport_template.md&title=[Backport%200.24]) the fix to the last two minor versions. You can trigger a backport by assigning labels (e.g. `backport stable/0.25`) to the PR, in case that fails you need to create backports manually.\r\n\r\nTesting:\r\n* [x] There are unit/integration tests that verify all acceptance criterias of the issue\r\n* [x] New tests are written to ensure backwards compatibility with further versions\r\n* [x] The behavior is tested manually\r\n* [ ] The change has been verified by a QA run\r\n* [ ] The impact of the changes is verified by a benchmark \r\n\r\nDocumentation: \r\n* [ ] The documentation is updated (e.g. BPMN reference, configuration, examples, get-started guides, etc.)\r\n* [ ] New content is added to the [release announcement](https://drive.google.com/drive/u/0/folders/1DTIeswnEEq-NggJ25rm2BsDjcCQpDape)",
"title": "Provide a way to get the parent scope key of a variable scope",
"type": "issue"
},
{
"action": "created",
"author": "npepinpe",
"comment_id": 785921945,
"datetime": 1614262175000,
"masked_author": "username_0",
"text": "This PR currently includes commits from #6446 as it builds on top of it - it'll probably be easier to review once that one is merged.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "npepinpe",
"comment_id": 786560505,
"datetime": 1614335485000,
"masked_author": "username_0",
"text": "bors r+",
"title": null,
"type": "comment"
}
] | 1,898 | false | true | 1 | 3 | false |
thepirat000/Audit.NET | null | 773,514,783 | 350 | null | [
{
"action": "opened",
"author": "Tsukasa42",
"comment_id": null,
"datetime": 1608703940000,
"masked_author": "username_0",
"text": "**Describe the bug**\r\nAssigning your own value to Environment.Exception is overwritten when the event ends.\r\n\r\n**To Reproduce**\r\nscope.Event.Environment.Exception = $\"{ex.GetType().Name}: {ex.Message}\";\r\n\r\n**Expected behavior**\r\nWe should be allowed to set a custom exception message. While testing out the library on MacOS, the exception data is returned as a COMException with no actual details on the error.\r\n\r\n**Libraries (specify the Audit.NET extensions being used including version):**\r\nFor example:\r\n - Audit.Net 16.2.0\r\n\r\n**Target .NET framework:**\r\nFor example:\r\n - .NET Core 3.0",
"title": "Allow Custom Exception Message",
"type": "issue"
},
{
"action": "created",
"author": "thepirat000",
"comment_id": 750788872,
"datetime": 1608795242000,
"masked_author": "username_1",
"text": "Note you can have extra fields on the `AuditEvent` or the `AuditEventEnvironment` by using the `CustomFields` property, for example:\r\n\r\n```c#\r\nscope.Event.Environment.CustomFields[\"ExceptionMessage\"] = $\"{ex.GetType().Name}: {ex.Message}\";\r\n```\r\n\r\nIf you use JSON serialization, that will be serialized as another property of the Environment object",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "thepirat000",
"comment_id": null,
"datetime": 1609648152000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 937 | false | false | 2 | 3 | false |
kubernetes/kube-state-metrics | kubernetes | 823,132,945 | 1,407 | {
"number": 1407,
"repo": "kube-state-metrics",
"user_login": "kubernetes"
} | [
{
"action": "opened",
"author": "lilic",
"comment_id": null,
"datetime": 1614953831000,
"masked_author": "username_0",
"text": "This PR brings in the release-2.0 branch into master, to sync master with it.\r\n\r\ncc @username_1 @tariq1890 @mrueg please have a look. I had a lot of conflicts to resolve due to branches diverging so having a closer look might be good!",
"title": "Merge release-2.0 branch into master",
"type": "issue"
},
{
"action": "created",
"author": "lilic",
"comment_id": 791449555,
"datetime": 1614954152000,
"masked_author": "username_0",
"text": "cc @kakkoyun",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brancz",
"comment_id": 791453482,
"datetime": 1614954535000,
"masked_author": "username_1",
"text": "Awesome!\r\n\r\n/lgtm\r\n/approve",
"title": null,
"type": "comment"
}
] | 269 | false | true | 2 | 3 | true |
dpk/et-book | null | 820,237,622 | 16 | null | [
{
"action": "opened",
"author": "dpk",
"comment_id": null,
"datetime": 1614706716000,
"masked_author": "username_0",
"text": "Need:\r\n\r\n* A version of f with a shorter hook for use in German (and probably Dutch, etc). E.g. in the word Auflage, because of the morpheme boundary (Auf|lage), it is technically wrong to use an fl ligature, and an f with a shorter hook should come before the l instead. Also for combinations like få, fä, fü, the umlaut can be uncomfortably close to the hook of the f (or just collide with it) and a ligature for those combinations is probably the wrong thing to do. Then we’ll also need a version of ff ending in the short hooked version (Stoff|farbe, Schiff|bau), but no fff, thankfully, as there’s always a morpheme boundary there.\r\n* fh, fb (halfback), etc with the crossbar connecting to the following character like in fl … then also ffh (offhand), ffb (offbeat).\r\n* An fj ligature would be nice to have, for better support of languages where that’s a common combination (and loanwords in English like fjord).\r\n\r\nBut I’ll need to get comfortable with glyph editing to add these.\r\n\r\nEB Garamond has a comprehensive set, including feature files that handle language specific rules in some detail.",
"title": "More and better ligatures",
"type": "issue"
},
{
"action": "created",
"author": "dpk",
"comment_id": 789121383,
"datetime": 1614710028000,
"masked_author": "username_0",
"text": "As an example for what the f with shorter hook might look like:\r\n\r\n```xml\r\n<?xml version=\"1.0\"?>\r\n<glyph name=\"f.deu\" format=\"1\">\r\n <advance width=\"321\"/>\r\n <unicode hex=\"0066\"/>\r\n <outline>\r\n <contour>\r\n <point x=\"324\" y=\"680\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"320.754\" y=\"660.562\"/>\r\n <point x=\"321\" y=\"650\"/>\r\n <point x=\"305\" y=\"649\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"286.011\" y=\"647.813\"/>\r\n <point x=\"278\" y=\"670\"/>\r\n <point x=\"263\" y=\"670\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"226.654\" y=\"670\"/>\r\n <point x=\"204\" y=\"627\"/>\r\n <point x=\"191\" y=\"605\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"180.762\" y=\"587.674\"/>\r\n <point x=\"175.333\" y=\"550.005\"/>\r\n <point x=\"176\" y=\"504\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"177\" y=\"435\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"177\" y=\"424\"/>\r\n <point x=\"178\" y=\"415\"/>\r\n <point x=\"192\" y=\"415\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"266\" y=\"415\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"271\" y=\"415\"/>\r\n <point x=\"279\" y=\"411\"/>\r\n <point x=\"280\" y=\"406\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"281\" y=\"405\"/>\r\n <point x=\"287\" y=\"361\"/>\r\n <point x=\"287\" y=\"361\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"286\" y=\"356\"/>\r\n <point x=\"283\" y=\"353\"/>\r\n <point x=\"275\" y=\"353\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"190\" y=\"354\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"182\" y=\"354\"/>\r\n <point x=\"177\" y=\"345\"/>\r\n <point x=\"177\" y=\"341\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"177\" y=\"128\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"177\" y=\"33\"/>\r\n <point x=\"181\" y=\"33\"/>\r\n <point x=\"246\" y=\"34\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"258\" y=\"34\"/>\r\n <point x=\"270\" y=\"35\"/>\r\n <point x=\"280\" y=\"34\" type=\"curve\" 
smooth=\"yes\"/>\r\n <point x=\"291\" y=\"33\"/>\r\n <point x=\"292\" y=\"24\"/>\r\n <point x=\"293\" y=\"13\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"293\" y=\"5\"/>\r\n <point x=\"289\" y=\"-3\"/>\r\n <point x=\"284\" y=\"-3\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"220\" y=\"-1\"/>\r\n <point x=\"94\" y=\"0\"/>\r\n <point x=\"32\" y=\"-3\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"26\" y=\"-3\"/>\r\n <point x=\"24\" y=\"6\"/>\r\n <point x=\"24\" y=\"13\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"24\" y=\"16\"/>\r\n <point x=\"24\" y=\"21\"/>\r\n <point x=\"26\" y=\"24\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"29\" y=\"32\"/>\r\n <point x=\"37\" y=\"31\"/>\r\n <point x=\"44\" y=\"31\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"93\" y=\"31\"/>\r\n <point x=\"97\" y=\"51\"/>\r\n <point x=\"97\" y=\"132\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"97\" y=\"340\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"95\" y=\"351\"/>\r\n <point x=\"91\" y=\"355\"/>\r\n <point x=\"85\" y=\"356\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"22\" y=\"356\" type=\"line\" smooth=\"yes\"/>\r\n <point x=\"15\" y=\"356\"/>\r\n <point x=\"16\" y=\"382\"/>\r\n <point x=\"22\" y=\"386\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"53\" y=\"393\"/>\r\n <point x=\"76\" y=\"401\"/>\r\n <point x=\"97\" y=\"413\" type=\"curve\"/>\r\n <point x=\"97\" y=\"436\"/>\r\n <point x=\"97.685\" y=\"442.015\"/>\r\n <point x=\"99\" y=\"467\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"103\" y=\"543\"/>\r\n <point x=\"118.798\" y=\"596.267\"/>\r\n <point x=\"173\" y=\"656\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"222\" y=\"710\"/>\r\n <point x=\"267\" y=\"726\"/>\r\n <point x=\"295\" y=\"726\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"298\" y=\"726\"/>\r\n <point x=\"318.667\" y=\"725.222\"/>\r\n <point x=\"321\" y=\"719\" type=\"curve\" smooth=\"yes\"/>\r\n <point x=\"324\" y=\"711\"/>\r\n </contour>\r\n </outline>\r\n</glyph>\r\n```\r\n\r\nBut 
this is just a draft glyph drawn up by someone who knows very little about font editing. But it doesn’t collide with l (Auflage) or b (Laufband), for instance",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dpk",
"comment_id": 791665874,
"datetime": 1614975787000,
"masked_author": "username_0",
"text": "fj will probably make it into 2.0 — I just designed it (and dcroat) in all weights and styles 🎉",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dpk",
"comment_id": 792726403,
"datetime": 1615206838000,
"masked_author": "username_0",
"text": "[The specimen from the SB Berlin](https://tw.staatsbibliothek-berlin.de/ma10568) appears to include a shorter-hooked f to use as a basis for the design.",
"title": null,
"type": "comment"
}
] | 5,239 | false | false | 1 | 4 | false |
vitech-team/SDLC | vitech-team | 850,991,814 | 115 | null | [
{
"action": "opened",
"author": "serhiykrupka",
"comment_id": null,
"datetime": 1617687349000,
"masked_author": "username_0",
"text": "Extend JX dashboard with resources which been produced by pipeline.\r\n\r\nDifferent pipelines can generate different reports like junit, owasp, large tests, etc. so users should be able to navigate from JX dashboard/pipeline to report in a user-friendly way.",
"title": "Enhance JX dashboard ",
"type": "issue"
}
] | 255 | false | false | 1 | 1 | false |
kubernetes-sigs/cluster-api | kubernetes-sigs | 652,405,013 | 3,296 | null | [
{
"action": "opened",
"author": "randomvariable",
"comment_id": null,
"datetime": 1594134753000,
"masked_author": "username_0",
"text": "<!-- NOTE: ⚠️ For larger proposals, we follow the CAEP process as outlined in https://sigs.k8s.io/cluster-api/CONTRIBUTING.md. -->\r\n\r\n**User Story**\r\n\r\nAs a operator, I deploy a cluster in my infrastructure provider. The infrastructure has a CIDR range of 192.168.0.0/24, and I set a pod and service CIDR range of 192.168.0.0/16.\r\n\r\nI see intermittent communication failures, but Cluster API hasn't told me what is wrong.\r\n\r\n**Detailed Description**\r\n\r\nCluster API should inform a user that there is overlap between their infrastructure networking and that for the pods and service CIDRs, such that it could cause communication issues. This ideally would be a condition on the Cluster resource, that infrastructure providers could inform.\r\n\r\n**Anything else you would like to add:**\r\n\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n/kind feature",
"title": "Cluster condition, consumable by Infrastructure Providers for machine IP / pod & service CIDR overlap",
"type": "issue"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 654966027,
"datetime": 1594138007000,
"masked_author": "username_1",
"text": "/milestone v0.3.x\r\n/help",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 668175962,
"datetime": 1596479464000,
"masked_author": "username_1",
"text": "/milestone v0.4.0",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 668176144,
"datetime": 1596479488000,
"masked_author": "username_1",
"text": "/help\r\n/priority important-longterm",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 704429728,
"datetime": 1602005119000,
"masked_author": "username_2",
"text": "/assign\r\nI would like to take a look at this",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fabriziopandini",
"comment_id": 704439244,
"datetime": 1602006177000,
"masked_author": "username_3",
"text": "What about having a validation webhook that prevents misconfiguration instead of reporting errors after the cluster is created?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 704483359,
"datetime": 1602010373000,
"masked_author": "username_0",
"text": "@username_3 Could do, but would have to be infrastructure specific then, which is fine as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 704483850,
"datetime": 1602010417000,
"masked_author": "username_0",
"text": "@username_3 would need validation webhooks performed by each infraprovider, since the cluster object has no idea of the infrastructure networks being used.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 704484045,
"datetime": 1602010437000,
"masked_author": "username_0",
"text": "Additionally in DHCP environments, even the infra provider doesn't know what the networks are.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 704971201,
"datetime": 1602080582000,
"masked_author": "username_2",
"text": "Excuse my Newbieness, but do we need to dynamically check the network CIDR range? For this validation check, why should we not just assume that the CIDR ranges the users entered [here](https://github.com/kubernetes-sigs/cluster-api/blob/7fdecfe013260f6ca4dd5afa1ae72a37766f7506/api/v1alpha3/cluster_types.go#L67) are valid, and just check if they overlap?\r\n\r\nIf we want to validate the CIDR ranges themselves, I think that should be a separate step, and should be handled by the infra providers",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 705120414,
"datetime": 1602095770000,
"masked_author": "username_0",
"text": "We don't know the network range of the infrastructure from that information, which is the main problem that pops up. That struct only tells us the CNI networking.\r\n\r\nShould probably be along the lines of comparing Machine.Status.Addresses vs. the CIDRs in the struct above, and add a condition representing a clash.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 705581470,
"datetime": 1602164956000,
"masked_author": "username_2",
"text": "Thanks, that makes more sense. My next question would be why a validation webhook? Do we do a simlar things for other validation steps? Do we have a process for validating configs?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 705832390,
"datetime": 1602192352000,
"masked_author": "username_2",
"text": "@username_0 for clarification, by infraprovider do you mean the underlying infra, or cluster-api-provider-foo?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 705833532,
"datetime": 1602192511000,
"masked_author": "username_2",
"text": "Can we define an interface version of our [crd validation webhooks](https://github.com/kubernetes-sigs/cluster-api/blob/9cbf8be8d643203d99aca6714825abdabcbca0b1/api/v1alpha3/cluster_webhook.go#L64) then have the cluster-api-providers implement it downstream?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 708353694,
"datetime": 1602676714000,
"masked_author": "username_0",
"text": "@username_2 There's no guarantee an infra provider knows what the infrastructure network will be ahead of time, so it can't happen via a webhook.\r\n\r\nIf we take vSphere or bare metal as an example, you may very well be putting a machine in an L2 network with some external DHCP. There's nothing on the machine specification that mentions an IP address at that time.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iamemilio",
"comment_id": 709494404,
"datetime": 1602784749000,
"masked_author": "username_2",
"text": "Given those restrictions, the only way this can really work on all platforms the same way is to check the ip assigned to each machine against the service and pods network CIDRS and throw and error or warning in the machine-controller logs if they overlap. Is that an acceptable solution?\r\n\r\nAnother option to consider is how things are handled in OpenShift. We require admins to do a little legwork upfront and tell us the CIDR of the network our nodes will get their IPs from. This makes the validation easy and upfront. In the CAPI case, this would mean that we would add another optional param to the ClusterNetwork object: `Machines *NetworkRanges`. If this is provided, we can quickly check for overlap with a webhook. The big question is if this is worth adding a parameter to the cluster API.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncdc",
"comment_id": 709514957,
"datetime": 1602786880000,
"masked_author": "username_4",
"text": "Logs are not user-facing. The only available user-facing status indicator is the conditions array in status. The original description above suggests making this a condition in Cluster status.\r\n\r\nI'll defer to @username_5 @CecileRobertMichon @username_0 etc. for thoughts on the suggested new API field.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "detiber",
"comment_id": 709591038,
"datetime": 1602796082000,
"masked_author": "username_5",
"text": "I'm not necessarily sure I'm a fan of adding a field for this unless we have a comprehensive proposal around it, how it would impact providers, and how it would impact users.\r\n\r\nI want to make sure we aren't creating a situation where it's really easy for a user to shoot themselves in the foot, and that we aren't necessarily imposing requirements on providers that are unreasonable.\r\n\r\nFor example, with AWS, I could easily see us being able to report the network range in use by the VPC (either cluster-api managed or user-provided) in a way that the Cluster resource could verify things (likely not at admission time, but could still short circuit the feedback loop on conditions before things get too far along). I'm not sure how feasible that approach would be for vSphere, metal3, packet, or other types of bare metal'ish providers.\r\n\r\nAlso depending on the use case we are talking about (such as a managed service provider model), it may not be reasonable to expect the end user to know what network address block will be used in advance.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "randomvariable",
"comment_id": 710026184,
"datetime": 1602852720000,
"masked_author": "username_0",
"text": "Agree with @username_5 .\r\n\r\nJust to clarify the use case for having a condition on the cluster object:\r\n\r\nHaving a tool like the at-the-glance status in https://github.com/kubernetes-sigs/cluster-api/issues/3802 could show the problem across infrastructure providers.\r\n\r\nConcretely, just thinking of a defined `ConditionType` constant for the condition and leaving the rest to infra providers.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fabriziopandini",
"comment_id": 760508293,
"datetime": 1610662289000,
"masked_author": "username_3",
"text": "/remove-lifecycle stale",
"title": null,
"type": "comment"
}
] | 5,755 | false | true | 6 | 20 | true |
mgoltzsche/helm-kustomize-plugin | null | 647,316,106 | 1 | null | [
{
"action": "opened",
"author": "daurnimator",
"comment_id": null,
"datetime": 1593432373000,
"masked_author": "username_0",
"text": "To ensure that a chart doesn't change out from under me, I'd like to include checksums in my generator (the same checksum you'd see in requirements.lock)",
"title": "Support specifying checksum(s)",
"type": "issue"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 651249649,
"datetime": 1593450721000,
"masked_author": "username_1",
"text": "Makes sense!\r\nWould you like to create a PR?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "scjudd",
"comment_id": 657723137,
"datetime": 1594665344000,
"masked_author": "username_2",
"text": "Could most of [`helm.LoadChart`](https://github.com/username_1/helm-kustomize-plugin/blob/master/pkg/helm/helm.go#L235-L262) be replaced with a call to [`man.Build()`](https://github.com/username_1/helm-kustomize-plugin/blob/master/vendor/k8s.io/helm/pkg/downloader/manager.go#L62-L103)? From looking over this a bit today, it seems that `LoadChart` is doing a subset of the work of Helm's download manager's `Build` method.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 657739996,
"datetime": 1594667484000,
"masked_author": "username_1",
"text": "`helm.LoadChart` is also downloading dependencies while `man.Build()` isn't if I am not mistaken but you're right: this can probably be solved more elegantly or rather without copying code that helm provides anyway. I think I simply copied it from [ContainerSolutions/helm-convert](https://github.com/ContainerSolutions/helm-convert/blob/v0.5.1/pkg/helm/helm.go).\r\n\r\nAlso this repo has a [git_chart branch](https://github.com/username_1/helm-kustomize-plugin/tree/git_chart) which provides an additional but hacky feature: Referencing helm charts using go-getters and excluding particular resources like in [this example](https://github.com/username_1/kustomizations/blob/master/linkerd/base/linkerd-chart.yaml) where I fetched linkerd's helm chart from its git repo because they don't publish it but ship their own CLI that uses it. However I did not merge the feature into master because this is something you usually don't want to do.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "scjudd",
"comment_id": 657749613,
"datetime": 1594668739000,
"masked_author": "username_2",
"text": "Right, I didn't consider the initial download/unpack logic.\n\nWell, I think I'll take a stab at this. :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 657818729,
"datetime": 1594678258000,
"masked_author": "username_1",
"text": "@username_2 great!\r\nTo avoid potential conflicts and since I was using the go getter and resource filter features already I merged my very long-lived feature branch into master now.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 699110283,
"datetime": 1601061714000,
"masked_author": "username_1",
"text": "Solved by PR #5",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mgoltzsche",
"comment_id": null,
"datetime": 1601061714000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "mgoltzsche",
"comment_id": null,
"datetime": 1601576578000,
"masked_author": "username_1",
"text": "To ensure that a chart doesn't change out from under me, I'd like to include checksums in my generator (the same checksum you'd see in requirements.lock)",
"title": "Support specifying checksum(s)",
"type": "issue"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 702330679,
"datetime": 1601578285000,
"masked_author": "username_1",
"text": "I have to reopen this to clarify some requirements because I have to change some of the logic when upgrading to helm 3. The features for helm 2 done in PR #5 can be tested in release v0.9.2.\r\n\r\nPeople (including me) were confused about the actual feature that should be implemented:\r\n* Supporting `requirements.lock`: As I understand this is to lock to particular chart versions (in case the requirements.yaml specifies version ranges). In addition PR #5 introduced the field `lockFile` in the generator config so that you could point it to an existing `requirements.lock` - here it was unclear how the user could easily update it and what its purpose is. Do you think this is needed and what would be your use case? Do you need it because you want to refer to a chart by version range within the generator config?\r\n* Verifying an actual chart: this can be done using the generator config fields `verify` (bool) and `keyring` - should work but I didn't try it yet.\r\n\r\nThe changes introduced in PR #5 unfortunately don't work this way with the helm 3 code anymore (since e.g. `HashReq()` is inside a helm-internal package, see WIP PR #8). Therefore before I remove or reimplement the feature I need to ask if you really need to specify a lock file for a remote chart without specifying a local umbrella chart or rather if you need the `lockFile` field within the generator config? 
(if it is just about version ranges you could as well specify that single version and don't need another `requirements.lock` for it)\r\ncc @username_0 @username_2 @james-callahan\r\n\r\n_If you need the `lockFile` field I'd generate a temporary chart and place the chart reference from the generator config file into the `requirements.yaml` file of that temporary chart and nest values correspondingly to make helm deal with the `requirements.lock` and copy it back and forth to the project directory so that the user doesn't need other tools and knowledge in order to create that lock file initially - other opinions are welcome._",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "daurnimator",
"comment_id": 725230234,
"datetime": 1605076247000,
"masked_author": "username_0",
"text": "@username_1 what I want to do is lock to a particular version of a chart, so that what I audit and test locally (e.g. cert-manager 1.0.1) is the exact same thing that gets deployed onto my cluster.\r\n\r\n`verify` and `keyring` only check that it was signed by a given author; but I don't trust the author to not push out a malicious update in future overwriting their previous chart.\r\n\r\nI was under the impression that the checksums in `requirements.lock` would protect against this issue, but perhaps not?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 725687147,
"datetime": 1605132325000,
"masked_author": "username_1",
"text": "That's not what the digest within the `requirements.lock` is for. As I understand it is to make the user rebuild her chart dependencies after she changed `requirements.yaml` (no idea why helm doesn't do it for us). You can see [here](https://github.com/helm/helm/blob/d481bc6cf32adf8a4684d58ae97002a9d7e70223/internal/resolver/resolver.go#L144) that the digest is just the hash of `requirements.yaml`.\r\nI don't think helm supports the feature you're looking for.\r\nHowever I think preventing redeployment would be very bad for desaster recovery.\r\n\r\nInstead I recommend to use [kpt](https://github.com/GoogleContainerTools/kpt) to render a helm chart into a git repo using a kpt function/sink and commit and push the result (see [here](https://opensource.googleblog.com/2020/03/kpt-packaging-up-your-kubernetes.html)). Your CD pipeline (`kpt live apply .`) can always deploy the plain rendered manifests from the repo and you can reliably audit what gets deployed. Also you CD pipeline doesn't depend on rendering technology like helm or kustomize and availability of their repositories (kpt can also manage other (helm/kustomize/...) repositories within your repository).\r\nThere is also an official helm-template kpt function.\r\n\r\nSoon I ll ship this project as kpt function/sink container image as well since it supports a few more features. As I understand, since they originate from kustomize, kpt functions will soon also be available within kustomize directly and eventually replace kustomize' plugins as they are today - the latter is just me guessing.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 744085228,
"datetime": 1607901153000,
"masked_author": "username_1",
"text": "I am closing this since `requirements.lock` is supported for local charts but does not guarantee that a chart does not change on the server.\r\n\r\n@username_0 you can guarantee that manifests don't change after you have audited them using the [kpt functionality](https://github.com/username_1/khelm#kpt-function) for which I've also prepared an [example](https://github.com/username_1/khelm/tree/master/example/kpt/cert-manager) - though in practice you would also commit `example/kpt/cert-manager/static/generated-manifest.yaml` with the repository to be sure it doesn't change after you audited it and avoid dependencies within the CD pipeline (I just didn't do that to avoid polluting the repo for the sake of an example).",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mgoltzsche",
"comment_id": null,
"datetime": 1607901153000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "mgoltzsche",
"comment_id": 792324543,
"datetime": 1615140036000,
"masked_author": "username_1",
"text": "fyi: there is now an [example](https://github.com/username_1/khelm/tree/master/example/kpt/linkerd) that shows how to pull in charts from other git repositories as kpt dependencies and render them with khelm.",
"title": null,
"type": "comment"
}
] | 6,999 | false | false | 3 | 15 | true |
henriquehbr/svelte-typewriter | null | 762,737,045 | 31 | null | [
{
"action": "opened",
"author": "evdama",
"comment_id": null,
"datetime": 1607711682000,
"masked_author": "username_0",
"text": "If you put a `&` in your text for example then you get a jumping left/right effect which is bad\r\n\r\n```js\r\n<script>\r\n\timport Typewriter from 'svelte-typewriter'\r\n</script>\r\n\r\n<Typewriter loop interval={ 100 }>\r\n <div>normal text doesn't jump</div>\r\n <div>text with ambersand & then some more</div> // <--- jumping effect when processing &\r\n</Typewriter>\r\n```\r\n\r\n\r\nRepl to reproduce https://svelte.dev/repl/5698397692c54c2bb5fd03f5ea6ca8cc?version=3.31.0",
"title": "text with ambersand 'jumps'",
"type": "issue"
},
{
"action": "created",
"author": "henriquehbr",
"comment_id": 743445420,
"datetime": 1607723463000,
"masked_author": "username_1",
"text": "Indeed, just checked the repl, at the moment i'm solving conflicts caused by breaking changes on the dependencies, but i'll look into it as soon as possible\r\n\r\nMany thanks for reporting this!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "henriquehbr",
"comment_id": null,
"datetime": 1608039888000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "henriquehbr",
"comment_id": 745297785,
"datetime": 1608040184000,
"masked_author": "username_1",
"text": "A fix for this bug has been released on [`2.4.4`](https://github.com/username_1/svelte-typewriter/blob/v2.4.4/CHANGELOG.md#v244)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "evdama",
"comment_id": 745410769,
"datetime": 1608050219000,
"masked_author": "username_0",
"text": "Excellent, I just checked and yes, it's fixed!\r\nThanks a lot for your time and effort... it's much appreciated for this awesome package 👍",
"title": null,
"type": "comment"
}
] | 919 | false | false | 2 | 5 | true |
EasyCorp/EasyAdminBundle | EasyCorp | 708,540,561 | 3,808 | {
"number": 3808,
"repo": "EasyAdminBundle",
"user_login": "EasyCorp"
} | [
{
"action": "opened",
"author": "horlyk",
"comment_id": null,
"datetime": 1600991267000,
"masked_author": "username_0",
"text": "There is a problem for the nested collections when the **Add** button stops working.\r\n\r\nMy case: I have a main entity **A**. This entity should contain a collection of entities **B**. Entity **B** should contain a collection of entities **C**. **Add** button works well for adding new entities **B**, but for **C** it stops working.\r\n\r\nI've adjusted JS to work with any level of nesting collections.\r\n\r\nPS: there is one more bug with `prototype_name` option. If you leave it as a default `__name__`, the script will work not correctly. For example: `entity[__name__]colletionItem[__name__]field`, will be replaced for both, even though it should be replaced only for the first or the second.",
"title": "Fixed form-type-collection script for nested collections",
"type": "issue"
},
{
"action": "created",
"author": "horlyk",
"comment_id": 698883217,
"datetime": 1601034244000,
"masked_author": "username_0",
"text": "There is an issue on edit form. Will try to fix.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "horlyk",
"comment_id": 698918719,
"datetime": 1601039365000,
"masked_author": "username_0",
"text": "Fixed in a quick way. Actually, to achieve better results, add button and a collection itself should have some relation. Current code is not flexible enough :(",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "javiereguiluz",
"comment_id": 699493114,
"datetime": 1601125473000,
"masked_author": "username_1",
"text": "Thanks for fixing this bug! Sadly the form collections and the JS code is a bit weak, but thanks to your improvements and future changes we might do, this will improve. Thanks!",
"title": null,
"type": "comment"
}
] | 1,074 | false | false | 2 | 4 | false |
hasadna/anyway | hasadna | 659,272,207 | 1,390 | null | [
{
"action": "opened",
"author": "Mano3",
"comment_id": null,
"datetime": 1594993725000,
"masked_author": "username_0",
"text": "Since @username_1 incredible refactor, a lot of files became deprecated, there are also some functions that are deprecated in files that are being used.\r\nI would appreciate if only code that is usable in production (via the current scraping flow) would remain in the codebase, any legacy code has got to go.\r\nFor example, as far as I understand news_flash folder is deprecated and everything is handled via location_extraction.py and news_flash_*.py files in the parsers folder. This is just an example, there are more legacy files in this folder regarding news flash parsing.\r\nMaybe they all should be moved to a newly created news_flash for organization of files.",
"title": "Remove legacy code from news flash parser folder",
"type": "issue"
},
{
"action": "created",
"author": "elazarg",
"comment_id": 662955394,
"datetime": 1595503874000,
"masked_author": "username_1",
"text": "Oh. I thought I have removed them already.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Mano3",
"comment_id": null,
"datetime": 1595605025000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 704 | false | false | 2 | 3 | true |
RedHatInsights/insights-results-smart-proxy | RedHatInsights | 723,157,003 | 314 | {
"number": 314,
"repo": "insights-results-smart-proxy",
"user_login": "RedHatInsights"
} | [
{
"action": "opened",
"author": "Sergey1011010",
"comment_id": null,
"datetime": 1602849633000,
"masked_author": "username_0",
"text": "## Type of change\r\n\r\nPlease delete options that are not relevant.\r\n\r\n- Bug fix (non-breaking change which fixes an issue)\r\n- Refactor (refactoring code, removing useless files)\r\n\r\n## Testing steps\r\n\r\n`make before_commit`",
"title": "Some refactoring",
"type": "issue"
}
] | 220 | false | true | 1 | 1 | false |
harryjubb/quacker | null | 845,433,453 | 15 | null | [
{
"action": "opened",
"author": "harryjubb",
"comment_id": null,
"datetime": 1617145738000,
"masked_author": "username_0",
"text": "**Is your feature request related to a problem? Please describe.**\r\nA group of shortcuts may be related, e.g. a set of shortcuts for window resizing. It would be useful to be able to organise these and collapse them independently\r\n\r\n**Describe the solution you'd like**\r\nOne-level folder-type hierarchy to organise shortcut cards within.\r\n\r\n**Describe alternatives you've considered**\r\nCould do this implicitly by adding re-ordering, but not as nice.\r\n\r\n**Additional context**\r\nN/A",
"title": "Ability to group related shortcuts",
"type": "issue"
}
] | 481 | false | false | 1 | 1 | false |
iterative/dvc.org | iterative | 798,025,440 | 2,130 | {
"number": 2130,
"repo": "dvc.org",
"user_login": "iterative"
} | [
{
"action": "opened",
"author": "jorgeorpinel",
"comment_id": null,
"datetime": 1612161959000,
"masked_author": "username_0",
"text": "- [ ] UPDATE: After this, (make 1+ PRs) from #2132",
"title": "remote: terminology and sample standardization",
"type": "issue"
},
{
"action": "created",
"author": "jorgeorpinel",
"comment_id": 771855030,
"datetime": 1612289437000,
"masked_author": "username_0",
"text": "These old changes don't carry > 2.0 code (from master). I apply them to `v1` first because otherwise in the process of merging to master (if there are conflicts) the branch could get \"contaminated\" with >2.0 code, and it would be harder to backport.",
"title": null,
"type": "comment"
}
] | 299 | false | false | 1 | 2 | false |
rancher/rancher | rancher | 734,760,493 | 29,906 | null | [
{
"action": "opened",
"author": "rmweir",
"comment_id": null,
"datetime": 1604345050000,
"masked_author": "username_0",
"text": "<!--\r\nPlease search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue\r\nFor security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.\r\n-->\r\n\r\n**What kind of request is this (question/bug/enhancement/feature request):**\r\nenhancement\r\n\r\n**Steps to reproduce (least amount of steps as possible):**\r\n1. Create or register private EKS cluster (a cluster that only has Private Access enabled).\r\n\r\n**Result:**\r\nError: \"Failed to communicate with cluster.\r\nEven if a cluster is created as Public/Private then has public disabled, the agent will connect but health checks will error.\r\n\r\n**Desire:**\r\nRegistering a private EKS cluster should work and rancher should be able to communicate with it.\r\nCreating a private EKS cluster should work. This flow will still require the user to use a command to deploy the agent, similar to regular imported clusters, described here: https://github.com/rancher/rancher/issues/28356.\r\n\r\n**Notes:**\r\nIt is possible to import a private cluster through regular import. This is because imported clusters tunnel their kubernetes requests to the control plane node and communicate with the kubernetes service IP instead of the public endpoint. The solution for this should probably use this same approach for EKS clusters. Requests should be tunneled and instead of using the public Kube API endpoint, the kubernetes service ip should be used.\r\n\r\nThis is blocking: https://github.com/rancher/rancher/issues/28356",
"title": "[EKSv2]: Rancher should be able to fully connect to created/register private EKS clusters",
"type": "issue"
},
{
"action": "closed",
"author": "rmweir",
"comment_id": null,
"datetime": 1604347668000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1,643 | false | false | 1 | 2 | false |
openshift/cluster-network-operator | openshift | 723,966,444 | 840 | {
"number": 840,
"repo": "cluster-network-operator",
"user_login": "openshift"
} | [
{
"action": "opened",
"author": "rcarrillocruz",
"comment_id": null,
"datetime": 1603013298000,
"masked_author": "username_0",
"text": "",
"title": "Bump dependencies of k8s to 0.19",
"type": "issue"
},
{
"action": "created",
"author": "rcarrillocruz",
"comment_id": 711803715,
"datetime": 1603094889000,
"masked_author": "username_0",
"text": "We need to land https://github.com/openshift/release/pull/12907 , unit tests are running on golang 1.13 thus the crypto failures as those symbols naming changed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rcarrillocruz",
"comment_id": 712339018,
"datetime": 1603130170000,
"masked_author": "username_0",
"text": "/retest",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rcarrillocruz",
"comment_id": 712347213,
"datetime": 1603130696000,
"masked_author": "username_0",
"text": "/assign @username_1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danwinship",
"comment_id": 712356758,
"datetime": 1603131539000,
"masked_author": "username_1",
"text": "/lgtm\r\nif it passes enough tests to merge...",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanluisvaladas",
"comment_id": 712783984,
"datetime": 1603193289000,
"masked_author": "username_2",
"text": "/lgtm",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rcarrillocruz",
"comment_id": 712884203,
"datetime": 1603203284000,
"masked_author": "username_0",
"text": "/override ci/prow/e2e-aws-sdn-multi",
"title": null,
"type": "comment"
}
] | 272 | false | true | 3 | 7 | true |
DemocracyClub/UK-Polling-Stations | DemocracyClub | 574,789,516 | 2,638 | null | [
{
"action": "opened",
"author": "polling-bot-4000",
"comment_id": null,
"datetime": 1583252991000,
"masked_author": "username_0",
"text": "EMS: Xpress DC\nFiles:\n- `E06000046/2020-03-03T16:29:11.988826/iow.gov.uk-1583251766000-.tsv`",
"title": "Import E06000046-Isle of Wight Council",
"type": "issue"
},
{
"action": "closed",
"author": "chris48s",
"comment_id": null,
"datetime": 1585562701000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 92 | false | false | 2 | 2 | false |
openlawteam/molochv3-ui | openlawteam | 780,508,350 | 20 | null | [
{
"action": "opened",
"author": "jtrein",
"comment_id": null,
"datetime": 1609935072000,
"masked_author": "username_0",
"text": "_Jotting some initial notes down._\r\n\r\nWe need to make standard the way we submit to Moloch and Snapshot as many actions involve both happening in \"parallel\".\r\n\r\n- [ ] Submitting proposals should be handled by a service which encapsulates other smaller services for submitting to Moloch and Snapshot. Hook?\r\n- [ ] Retry Snapshot (`HTTP`) if failed. Use a hook (search for a good one we can adapt from the interwebz)?\r\n- [ ] Handle case where internet cuts out (etc.) and one succeeds and the other doesn't/can't. How to nicely pick up where user left off?\r\n- [ ] Unit tests\r\n\r\ncc: @jdville03",
"title": "Standard for Moloch + Snapshot proposal submission",
"type": "issue"
},
{
"action": "created",
"author": "jtrein",
"comment_id": 790710651,
"datetime": 1614872561000,
"masked_author": "username_0",
"text": "Mostly handled as per the initial proposal flow's with onboarding and transfer. Will open new issues for improvements related to redundancies.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jtrein",
"comment_id": null,
"datetime": 1614872561000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 732 | false | false | 1 | 3 | false |
trailofbits/blight | trailofbits | 765,825,588 | 15,424 | null | [
{
"action": "opened",
"author": "13140772047",
"comment_id": null,
"datetime": 1607915961000,
"masked_author": "username_0",
"text": "兴海找妹子包夜服务美女【+V:10771909】靓妹兴海妹子多少钱一晚上门服务(十微IO77I9O9)由芒果、银河酷娱联合出品,华晨美创承制的甜宠爱情偶像剧《奈何又如何》于近日在长沙隆重开机。导演吴强,主演宣璐、赵志伟、金雯昕、刘胤君、刘宥畅、王倩等齐齐亮相,各出品方、承制方代表也出席了开机仪式。“甜宠爆款制造机”导演打造超强热门,准备好你的少女心!曾执导《克拉恋人》、《双世宠妃》、《奈何要娶我》、《人间至味是清欢》的导演吴强,操刀多部爆款偶像剧,作品收视与播放量都表现出众。吴导擅长从细节之处出发,将演员的个人特质用观众喜爱的方式表达,打造出广受好评的剧集,足见导演对于女性情感的细腻感知和精准把握。当热门遭遇“甜宠爆款制造”团队,一定会击中你的少女心!开机现场,导演吴强表达了对新剧的强烈期待,面对长沙突降的冬雨,导演不仅用激动人心的话鼓舞了开机士气,还暖心地提醒大家长沙的冬天是“魔法攻击”,需多注意防寒保暖。“叮嘱式发言”提前将全体剧组人员带入了大家庭的温馨氛围。宣璐赵志伟首次合作,演绎超强职场《陈情令》中温柔端庄的“师姐”宣璐摇身一变,成为职场白骨精聂星辰;《我只喜欢你》中的暖心哥哥赵志伟把头发梳成大人的样子,化身极致挑剔强迫症霸总严景致。两位主演一改以往的风格,演绎史上最强与秘书组合,职场上惺惺相惜,情场上棋逢对手,玩转高智商职场爱恋。超强的超甜爱恋之下,涌动着旧日情感纠葛的秘密。初次见面,聂星辰就对“难搞”严景致的日常习惯了如指掌,一次出击住总裁办全场。究竟是什么让初次见面的二人似乎相识多年?蜜恋开始之后,霸总屡出,聂星辰如何见招拆招,出奇制胜,再获霸总心?另类多元视角呈现爱情新样本除了有强迫症霸总和完美特助,剧中还将打造多组:刘胤君饰演的花花公子赵远方和王倩饰演的漂亮女明星甄念从相互看不顺眼到最终相爱,两人学会了在爱情里面对最真实的自己。金雯昕饰演的元气追星女孩儿童欣苦恋帅气医生,一对颇具反差萌的情侣,在带给大家搞笑逗趣的同时,让人体验别样的甜蜜与宠爱。在一波多折的寻爱之路上,恋爱中的男女们共同成长蜕变。三对风格迥异的,浓缩出了当代都市爱恋现状的经典模式,全方位满足了观众们对爱情的不同想象。热门、超强编剧,加上“最懂少女心”导演的加持,搭配契合度的制片班底。全剧以轻松幽默的风格将职场、甜宠、失忆等流行元素融合,力图打造又一部浪漫甜蜜、少女心爆棚的偶像剧。甜宠之外,本剧还聚焦于都市青年的职场与爱情,展现当代青年人面对困难时努力克服的勇气与决心。声明:中华娱乐网刊载此文出于传递更多信息之目的,并非意味着赞同其观点或证实其描述。版权归作者所有,更多同类文章敬请浏览:综合资讯防较殉煌椅幽毡倘戮笛问陀炕孕倘https://github.com/trailofbits/blight/issues/8148?cbnid <br />https://github.com/trailofbits/blight/issues/5501?dunnj <br />https://github.com/trailofbits/blight/issues/2830?24314 <br />https://github.com/trailofbits/blight/issues/2254?32305 <br />https://github.com/trailofbits/blight/issues/13204?38532 <br />wlyqhgiabkjnmvgfqkviwfxpy",
"title": "兴海哪有特殊服务的洗浴-济南生活圈",
"type": "issue"
}
] | 1,458 | false | false | 1 | 1 | false |
shiftleft-staging/shiftleft-java-demo | null | 858,023,770 | 1 | {
"number": 1,
"repo": "shiftleft-java-demo",
"user_login": "shiftleft-staging"
} | [
{
"action": "opened",
"author": "shiftleft-staging",
"comment_id": null,
"datetime": 1618414902000,
"masked_author": "username_0",
"text": "\n\nThis pull request adds a GitHub Action workflow file that executes ShiftLeft NextGen SAST (NG SAST) on this PR. Once merged, it will also execute NG SAST on all future PRs opened in this repo.\n\n### Visit [shiftleft.io](https://www.stg.shiftleft.io/findingsSummary/shiftleft-java-demo?apps=shiftleft-java-demo&isApp=1) to see the security findings for this repository.\n\n## We've done a few things on your behalf\n\n- Forked this demo application and opened a pull request\n- Generated a unique secret `SHIFTLEFT_ACCESS_TOKEN` to allow GitHub Actions in this repository to communicate with the ShiftLeft API\n- Created a [GitHub Action](https://github.com/username_0/shiftleft-java-demo/blob/demo-branch-1618414897/.github/workflows/shiftleft.yml) that will send this pull request to ShiftLeft for analysis\n- Added a status check that displays the result of the GitHub Action\n\nQuestions? Comments? Want to learn more? [Get in touch with us](https://www.shiftleft.io/contact/) or check out [our documentation](https://docs.shiftleft.io).",
"title": "Add GitHub Action: ShiftLeft NextGen Static Analysis",
"type": "issue"
}
] | 1,101 | false | true | 1 | 1 | true |
rht-labs/ubiquitous-journey | rht-labs | 719,996,267 | 194 | {
"number": 194,
"repo": "ubiquitous-journey",
"user_login": "rht-labs"
} | [
{
"action": "opened",
"author": "mvmaestri",
"comment_id": null,
"datetime": 1602577474000,
"masked_author": "username_0",
"text": "An app of apps for ready-to-use pipelines examples. The chart includes an example of a pipeline developed in Tekton, the peaceful cat 🐈. It contains the main steps of a continuous software delivery process. It enforces a strict semantic version validation strategy, managing tag increments for you. Develop, features, releases, patches and hotfixes flows are supported.\r\n\r\n",
"title": "UJ-pipelines with tekton-demo",
"type": "issue"
},
{
"action": "created",
"author": "springdo",
"comment_id": 981880987,
"datetime": 1638209025000,
"masked_author": "username_1",
"text": "I think given this tekton starter is already in the helm-charts repo in CoP land where it will get more exposure and this repo is supposed to be more focused on just gitOps as a method for CD, we should close it out",
"title": null,
"type": "comment"
}
] | 700 | false | false | 2 | 2 | false |
stealthcopter/AndroidNetworkTools | null | 730,199,810 | 78 | null | [
{
"action": "opened",
"author": "babajasoos",
"comment_id": null,
"datetime": 1603784320000,
"masked_author": "username_0",
"text": "",
"title": "gethostname should return name instead of ip address again",
"type": "issue"
},
{
"action": "created",
"author": "ForionsAcc",
"comment_id": 718886736,
"datetime": 1603990581000,
"masked_author": "username_1",
"text": "Same here, public addresses works fine but not local.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ForionsAcc",
"comment_id": 720977866,
"datetime": 1604392661000,
"masked_author": "username_1",
"text": "Hi, this has to do with the DHCP and DNS server on your local router, is there any other way to get the device name, so its not depended on the router configs.",
"title": null,
"type": "comment"
}
] | 212 | false | false | 2 | 3 | false |
kubernetes-sigs/cluster-api | kubernetes-sigs | 709,123,123 | 3,696 | null | [
{
"action": "opened",
"author": "sedefsavas",
"comment_id": null,
"datetime": 1601054009000,
"masked_author": "username_0",
"text": "Currently, we are using Kind v0.7.1 for CAPD and e2e tests.\r\nKind v0.9.0 came out recently, we may want to wait until v0.9.1.",
"title": "Update Kind to v0.9.x for CAPD and e2e tests",
"type": "issue"
},
{
"action": "created",
"author": "vincepri",
"comment_id": 699048828,
"datetime": 1601054051000,
"masked_author": "username_1",
"text": "/milestone v0.4.0",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sedefsavas",
"comment_id": 699057018,
"datetime": 1601055052000,
"masked_author": "username_0",
"text": "/good-first-issue\r\n/help",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "evalsocket",
"comment_id": 699663708,
"datetime": 1601227755000,
"masked_author": "username_2",
"text": "/assign @username_2",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tcordeu",
"comment_id": 702234240,
"datetime": 1601567988000,
"masked_author": "username_3",
"text": "Hey @username_2 can I work on these one? If so, would you mind helping me?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "evalsocket",
"comment_id": 702236020,
"datetime": 1601568167000,
"masked_author": "username_2",
"text": "/unassign @username_2 \n/assign @username_3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tcordeu",
"comment_id": 703132064,
"datetime": 1601743899000,
"masked_author": "username_3",
"text": "As Kind v0.9.0 depends on apimachinery v0.18.8 we should wait on https://github.com/kubernetes-sigs/cluster-api/pull/3735 to be merged (or to at least be stable), right?\r\ncc @username_1 \r\n\r\nPS: Sorry for the questions, this is my first issue.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fabriziopandini",
"comment_id": 707784297,
"datetime": 1602599901000,
"masked_author": "username_4",
"text": "@username_3 #3735 is merged now. Are you still up to work on this issue\r\n?",
"title": null,
"type": "comment"
}
] | 609 | false | true | 5 | 8 | true |
NCAR/VAPOR | NCAR | 836,204,623 | 2,656 | null | [
{
"action": "opened",
"author": "clyne",
"comment_id": null,
"datetime": 1616173722000,
"masked_author": "username_0",
"text": "There is a setting on the Preferences menu to set (and lock) the window size. This is particularly useful for people who want to capture a number of outputs images, often over numerous session, and want the images to be a consistent size. However, the size specification applies to the overall application window, not the renderer window, limiting the usability of the feature.",
"title": "Window size feature should apply to visualization window, not entire application",
"type": "issue"
},
{
"action": "created",
"author": "StasJ",
"comment_id": 803039573,
"datetime": 1616179643000,
"masked_author": "username_1",
"text": "@username_0 That is why we have the \"Custom Output Size\" option under Viewpoint. I don't know what the point of the original option you are referring to is for, we could just remove it.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "clyne",
"comment_id": 803150835,
"datetime": 1616190218000,
"masked_author": "username_0",
"text": "We should probably discuss whether it makes sense to have both of these? The FrameBuffer control is clearly much more useful, but not sure we should do away with the viewpoint preference. \r\n\r\ncc: @username_3 , @username_2",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "shaomeng",
"comment_id": 803216297,
"datetime": 1616203546000,
"masked_author": "username_2",
"text": "yeah, it seems redundant to me. I'd vote to remove the options under global preference menu, and keep the one that's under viewpoint.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgpearse",
"comment_id": 804302667,
"datetime": 1616438480000,
"masked_author": "username_3",
"text": "Removing it is fine with me.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "clyne",
"comment_id": null,
"datetime": 1616551257000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 935 | false | false | 4 | 6 | true |
campsych/concerto-platform | campsych | 663,285,020 | 294 | null | [
{
"action": "opened",
"author": "csancineto",
"comment_id": null,
"datetime": 1595363012000,
"masked_author": "username_0",
"text": "### Concerto Platform version\r\n5.0.13\r\n\r\n### Expected behavior\r\nText using unicode to be shown correctly \r\n\r\n### Actual behavior\r\nAccented texts are displayed incorrectly. The templates are in UTF-8 but they are shown in a different codification.\r\n\r\n### Steps to reproduce the issue\r\nI'm using Concerto installed in my University's server. When I use the internal container databases, the test is presented correctly. When I use external sql databases (to improve the performance), all the texts are presented incorrectly when I run the tests. When the templates are opened internally, all the texts are in UTF-8; when presented, the same templates are corrupt. \r\n\r\nA simple test that shows the problem is in the following link:\r\nhttps://concerto.sites.ufsc.br/test/teste\r\n\r\n<img width=\"1182\" alt=\"Captura de Tela 2020-07-21 às 17 18 51\" src=\"https://user-images.githubusercontent.com/68610931/88102821-908d9a00-cb76-11ea-9296-b917374ec0c1.png\">\r\n\r\n<img width=\"1182\" alt=\"Captura de Tela 2020-07-21 às 17 18 39\" src=\"https://user-images.githubusercontent.com/68610931/88102795-87043200-cb76-11ea-9193-d37e46caaae5.png\">",
"title": "UTF8 issues when using external sql databases",
"type": "issue"
},
{
"action": "created",
"author": "bkielczewski",
"comment_id": 662441035,
"datetime": 1595423128000,
"masked_author": "username_1",
"text": "Hi. \r\n\r\nIs your Concerto installation on University's server a bare-metal installation, not using the container we provide? \r\n\r\nIf so, this would look like the operating system doesn't have `en_US.UTF8` locale enabled. R interpreter needs this to handle UTF-8 properly in the strings. It is used when you run the test and isn't involved when you just use the admin panel. This would explain why you see the text correctly there but not when running the test. \r\n\r\nTo address this on the server Concerto is running on add this line to `/etc/locale.gen`:\r\n\r\n en_US.UTF-8 UTF-8\r\n\r\nThen execute:\r\n\r\n locale-gen \"en_US.UTF-8\"\r\n\r\nBest,\r\nb.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "csancineto",
"comment_id": 662639799,
"datetime": 1595445195000,
"masked_author": "username_0",
"text": "Thank you very much. I forwarded your answer to the professional responsible for installing the Concert. As soon as I get a return from him, I'll indicate if it worked.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "josenorberto",
"comment_id": 666565447,
"datetime": 1596131890000,
"masked_author": "username_2",
"text": "Hi!\r\n\r\nI was in contact with @username_0 to address this issue.\r\nI made a pull request (https://github.com/campsych/concerto-platform/pull/295) with the solution to the encoding problem we were facing with our University's MySQL server.\r\n\r\nBest regards,\r\nJosé Norberto Guiz Fernandes Corrêa",
"title": null,
"type": "comment"
}
] | 2,215 | false | false | 3 | 4 | true |
gilbarbara/logos | null | 768,110,938 | 398 | null | [
{
"action": "opened",
"author": "bromso",
"comment_id": null,
"datetime": 1608061902000,
"masked_author": "username_0",
"text": "Would be awesome if the current LinkedIn logo could be updated.\r\n\r\nAnd if you could please add like just the 'in' logo so to speak.\r\nDownload link from the official LinkedIn brand guideline site: https://content.linkedin.com/content/dam/me/brand/en-us/brand-home/downloads/LinkedIn-Logos.zip\r\n\r\nhttps://brand.linkedin.com/en-us\r\n\r\nThe 'in' logo I'm refering to: https://content.linkedin.com/content/dam/me/business/en-us/amp/brand-site/v2/bg/LI-Bug.svg.original.svg\r\n\r\nThank in advance!",
"title": "Add/Update LinkedIn logo",
"type": "issue"
},
{
"action": "created",
"author": "gilbarbara",
"comment_id": 751870106,
"datetime": 1609190962000,
"masked_author": "username_1",
"text": "Updated in a02fbe40d0c2e0658ac614783e452fca9d0f17f1\r\nThanks",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gilbarbara",
"comment_id": null,
"datetime": 1609190962000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 545 | false | false | 2 | 3 | false |
lukeautry/tsoa | null | 732,898,872 | 834 | null | [
{
"action": "opened",
"author": "asdf123101",
"comment_id": null,
"datetime": 1604036463000,
"masked_author": "username_0",
"text": "<!--- Provide a general summary of the issue in the Title above -->\r\n\r\n## Sorting\r\n\r\n- **I'm submitting a ...**\r\n\r\n - [x] bug report\r\n - [ ] feature request\r\n - [ ] support request\r\n\r\n- I confirm that I\r\n - [x] used the [search](https://github.com/lukeautry/tsoa/search?type=Issues) to make sure that a similar issue hasn't already been submit\r\n\r\n## Expected Behavior\r\n`@Example()` should read variable value imported from another file.\r\n\r\nWith \r\n```ts\r\n// file a.ts\r\nexport const example = {someKey: 'someValue'}\r\n// file b.ts\r\nimport example from a\r\n...\r\n@Example(a)\r\n...\r\n```\r\ntsoa should generate the correct response example from the imported variable.\r\n\r\n## Current Behavior\r\nIt now creates an empty object `{}` if the variable is imported. It works if the example value is directly supplied to the decorator or defined in the same file.\r\n\r\n## Possible Solution\r\nFeel like a TS module resolution issue.\r\n\r\n## Steps to Reproduce\r\nSee expected behavior section.\r\n\r\n## Context (Environment)\r\n\r\nVersion of the library: 3.4.0\r\nVersion of NodeJS: 12.4.0\r\n\r\n- Confirm you were using yarn not npm: [x]",
"title": "@Example doesn't read value from an imported variable",
"type": "issue"
},
{
"action": "created",
"author": "DigohD",
"comment_id": 743219916,
"datetime": 1607696504000,
"masked_author": "username_1",
"text": "This also seems to only be an issue when specVersion is 3. When specVersion is set to 2 it works as expected.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nikosgpet",
"comment_id": 919539371,
"datetime": 1631656410000,
"masked_author": "username_2",
"text": "Wondering if there is any update, because I just had the same issue.",
"title": null,
"type": "comment"
}
] | 1,280 | false | true | 3 | 3 | false |
mekanism/Mekanism | mekanism | 556,376,650 | 5,823 | null | [
{
"action": "opened",
"author": "Zoratan",
"comment_id": null,
"datetime": 1580234601000,
"masked_author": "username_0",
"text": "Issue description:\r\n\r\nRotary Condensentrator does not accept water, singleplayer and dedicated Server.\r\nLooks like there is no Water Vapor (replaced by Steam) gas tank in creative.\r\n\r\nEven tried to set the water pipe to push. \r\nIt connects but the condensentrator does not take anything in.\r\nCondensentrating steam produces Liquid steam...\r\n\r\nVersion (make sure you are on the latest version before reporting):\r\n\r\nForge: forge-1.15.2-31.0.1\r\nMekanism: 9.9.5",
"title": "[1.15.2] Rotary Condensentrator not accepting water",
"type": "issue"
},
{
"action": "created",
"author": "pupnewfster",
"comment_id": 579383502,
"datetime": 1580235253000,
"masked_author": "username_1",
"text": "Opened up a 1.12 instance and seems that I hadn't actually noticed mekanism had something called \"water vapor\" (maybe that is just the localized name it was using), I will try to look into this at some point and figure out what functionality I accidentally killed off.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "term2112",
"comment_id": 579384813,
"datetime": 1580235451000,
"masked_author": "username_2",
"text": "I can confirm in is in 9.9.6\r\n\r\n\r\n\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "winsrp",
"comment_id": 579849420,
"datetime": 1580316333000,
"masked_author": "username_3",
"text": "I'm just getting to know this mod, and I can confirm this on 1.5.2, What broke is ore processing for tier 4 (5X)\r\n\r\n[https://wiki.aidancbrady.com/w/images/aidancbrady/a/a2/Mekanism-flowchart-v7-simplified.png](url)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "winsrp",
"comment_id": 579979114,
"datetime": 1580335045000,
"masked_author": "username_3",
"text": "Steam seems to be the most common form of this gas (I mean... water vapor is steam XD), so might as well just make it steam which would also make it compatible with other mods,, but also I think you have other machines that make steam right? So would that break this loop, or would it actually add an alternative for it?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Zoratan",
"comment_id": 579993997,
"datetime": 1580337340000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pupnewfster",
"comment_id": 580012885,
"datetime": 1580340691000,
"masked_author": "username_1",
"text": "The reason I for now just added water vapor additionally instead of removing one of the types of steam. Is I don't want to have to go through/figure out currently the implications of being able to get steam via just the condensentrator instead of having to get it via other means. Eventually we will figure out how we want to handle the production/usage of steam and water vapor.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "winsrp",
"comment_id": 580046611,
"datetime": 1580348850000,
"masked_author": "username_3",
"text": "looking around in the wiki it seems that the other thing that produces steam is the Thermoelectric Boiler, which is far more expensive in terms of materials, and the output is much greater, that said, I don't think nothing would break, the Rotary condensentrator works as a small supplier of steam for the ore setup, which is just what you need, and the Thermoelectric Boiler is something bigger for the turbine, which could eventually replace the condensentrator as the steam provider for the setup, but it still has lots more uses so I think from a user perspective that its fine to change it to making just steam instead of steam with another name.\r\n\r\nAlso checking the wiki:\r\n\r\nWater Vapor\r\nMade by:\r\n\r\nRotary Condensentrator using Water\r\nUsed by:\r\n\r\nChemical Infuser to make Sulfuric Acid -> [the one on this setup for ore production]\r\nChemical Injection Chamber with Dirt to make Clay -> [the only other use ¯\\_(ツ)_/¯ ]\r\n\r\nAnd that's about it... so I think that for the actual uses it has... steam would work just fine.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pupnewfster",
"comment_id": 580047803,
"datetime": 1580349145000,
"masked_author": "username_1",
"text": "Closing this for now as it is \"fixed\" in 9.9.7 which is released and on curseforge. I will end up looking back at this when I eventually look more into how things will actually be handled.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pupnewfster",
"comment_id": null,
"datetime": 1580349146000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "closeshot",
"comment_id": 596086585,
"datetime": 1583586469000,
"masked_author": "username_4",
"text": "I found that it accepts water now version 1.15.2 9.9.14.406 however it is not producing water vapor",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pupnewfster",
"comment_id": 596092211,
"datetime": 1583590122000,
"masked_author": "username_1",
"text": "#5940",
"title": null,
"type": "comment"
}
] | 3,211 | false | false | 5 | 12 | false |
marrobHD/firetv-card | null | 854,688,671 | 9 | null | [
{
"action": "opened",
"author": "broyuken",
"comment_id": null,
"datetime": 1617986835000,
"masked_author": "username_0",
"text": "Why are there 2 power buttons? Can I control 2 devices with this card? IE Sound bar and TV?\r\n\r\nAlso, could you allow separate services for turn_on and turn_off like how universal media players do it?\r\n\r\n```\r\ncommands:\r\n turn_on:\r\n service: switch.turn_on\r\n data:\r\n entity_id: switch.living_room_tv_power\r\n turn_off:\r\n service: switch.turn_off\r\n data:\r\n entity_id: switch.living_room_tv_power\r\n```",
"title": "2 power buttons?",
"type": "issue"
}
] | 420 | false | false | 1 | 1 | false |
SabakiHQ/Sabaki | SabakiHQ | 899,173,223 | 789 | null | [
{
"action": "opened",
"author": "psygo",
"comment_id": null,
"datetime": 1621821140000,
"masked_author": "username_0",
"text": "I'm trying to create some [new themes of my own](https://github.com/FanaroEngineering/fanaro_sabaki_theme_collection) but have encountered 2 issues with trying to modify the following characteristics:\r\n\r\n- The grid color\r\n- Sabaki's background to the Go board — I've only managed to do it outside the `.asar` file, inside the preferences' theming tab\r\n\r\nAre there ways of modifying those characteristics and embed them in the `.asar` file? If so, is there documentation about it — I couldn't find it neither [here](https://github.com/SabakiHQ/Shudan/tree/master/docs#styling) nor [here](https://github.com/SabakiHQ/Sabaki/blob/master/docs/guides/create-themes.md) —?\r\n\r\nAdditionally, are there ways of modifying more of Sabaki's styles through CSS?\r\n\r\nAnd does anyone know how to open the inspector on Linux? I've tried googling it and using the shortcuts I use on my browser and on Windows but nothing really worked.",
"title": "Help with Some Styling Variables for Themes",
"type": "issue"
},
{
"action": "created",
"author": "ParmuzinAlexander",
"comment_id": 855891353,
"datetime": 1623069710000,
"masked_author": "username_1",
"text": "Grid color\r\n`.shudan-goban {--shudan-board-foreground-color: #000;}`\r\nBoard color\r\n```\r\n.shudan-goban {--shudan-board-background-color: #FFF;}\r\n.shudan-goban-image {background-image: none;}\r\n```\r\n\r\nYou may be interested in some of the tricks that I personally use.\r\nP.S. Disable bar very extreme change, use only background settings for fixing bug https://github.com/SabakiHQ/Sabaki/pull/542\r\n```\r\n/* Stones texture */\r\n.shudan-stone-image.shudan-sign_1 {background-image: url('0.png');}\r\n.shudan-stone-image.shudan-sign_-1 {background-image: url('1.png');}\r\n.shudan-stone-image.shudan-sign_-1.shudan-random_1 {background-image: url('2.png');}\r\n.shudan-stone-image.shudan-sign_-1.shudan-random_2 {background-image: url('3.png');}\r\n.shudan-stone-image.shudan-sign_-1.shudan-random_3 {background-image: url('4.png');}\r\n.shudan-stone-image.shudan-sign_-1.shudan-random_4 {background-image: url('5.png');}\r\n/* 100% stones size */\r\n.shudan-vertex .shudan-stone {top: 0; left: 0; width: 100%; height: 100%;}\r\n/* Black lines and disable borders */\r\n.shudan-goban {--shudan-board-border-width: 0; --shudan-board-foreground-color: #000;}\r\n.shudan-goban:not(.shudan-coordinates) {padding: 0;}\r\n/* Board texture */\r\n.shudan-goban-image {background-image: url('board1.png');}\r\n/* Disable stones shadow */\r\n.shudan-vertex:not(.shudan-sign_0) .shudan-shadow {background: none; box-shadow: none;}\r\n/* Disable bar */\r\n#bar {visibility: hidden;}\r\nmain {bottom: 0; background: #f0f0f0 url('background1.png') left top;}\r\n/* Disable goban shadow */\r\n#goban {box-shadow: none;}\r\n/* Disable last move marker */\r\n.shudan-vertex.shudan-marker_point.shudan-sign_1 .shudan-marker {background: none;}\r\n.shudan-vertex.shudan-marker_point.shudan-sign_-1 .shudan-marker {background: none;}\r\n/* Heat map */\r\n.shudan-vertex .shudan-heat {transition: opacity 0s, box-shadow 0s;}\r\n.shudan-vertex.shudan-heat_9 .shudan-heat {background: #009900; box-shadow: 0 0 0 .5em #009900; opacity: 1;}\r\n.shudan-vertex.shudan-heat_8 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_7 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_6 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_5 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_4 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_3 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_2 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex.shudan-heat_1 .shudan-heat {background: none; box-shadow: none; opacity: 1;}\r\n.shudan-vertex .shudan-heatlabel {color: white; font-size: .38em; line-height: 1; text-shadow: none; opacity: 1;}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "psygo",
"comment_id": 859955271,
"datetime": 1623453962000,
"masked_author": "username_0",
"text": "```css\r\n.shudan-goban {\r\n --shudan-board-foreground-color: #FFF; \r\n}\r\n```\r\n\r\nsolved the color of the grid for me. Thanks a lot, @username_1. Before I close this issue, could you please leave a reference to where you ended up finding all these properties?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ParmuzinAlexander",
"comment_id": 860026076,
"datetime": 1623489239000,
"masked_author": "username_1",
"text": "@username_0 Source code like https://github.com/SabakiHQ/Shudan/blob/master/css/goban.css or inspect web version https://github.com/SabakiHQ/Sabaki/releases/download/v0.43.3/sabaki-v0.43.3-web.zip\r\nBut after adding settings for textures, I don't see the point in simple themes. Maybe things like custom Heat map in lizzie \\ katrain style will be more useful. Using themes as an addon or plugin, unfortunately only one...",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "psygo",
"comment_id": null,
"datetime": 1623505121000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "psygo",
"comment_id": 860054855,
"datetime": 1623505121000,
"masked_author": "username_0",
"text": "I wish there were more documentation for all this. Even the names are not quite great — how am I supposed to guess `--shudan-board-foreground-color` means the grid color? Anyway, I think this will do for now.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yishn",
"comment_id": 920753041,
"datetime": 1631785666000,
"masked_author": "username_2",
"text": "For future reference: https://github.com/SabakiHQ/Shudan/blob/master/docs/README.md#styling\r\n\r\nForeground color also refers to marker/label/arrow/line color on non-stone vertices.",
"title": null,
"type": "comment"
}
] | 4,827 | false | false | 3 | 7 | true |
googleapis/google-cloud-cpp | googleapis | 779,342,077 | 5,676 | {
"number": 5676,
"repo": "google-cloud-cpp",
"user_login": "googleapis"
} | [
{
"action": "opened",
"author": "dopiera",
"comment_id": null,
"datetime": 1609868488000,
"masked_author": "username_0",
"text": "This fixes #5670.\r\n\r\nBefore this PR, `NotifyOnStateChange()` could be called on a\r\n`grpc::CompletionQueue` which was shut down via `Shutdown()`. This PR\r\nmakes `AsyncConnectionReadyFuture` use\r\n`CompletionQueueImpl::StartOperation` to make sure that the whole\r\noperation either fails with `StatusCode::kCancelled` or there is a\r\nguarantee that the `CompletionQueue` is not shut down.\n\n<!-- Reviewable:start -->\n---\nThis change is [<img src=\"https://reviewable.io/review_button.svg\" height=\"34\" align=\"absmiddle\" alt=\"Reviewable\"/>](https://reviewable.io/reviews/googleapis/google-cloud-cpp/5676)\n<!-- Reviewable:end -->",
"title": "fix: don't wait for state change on shut down CQ",
"type": "issue"
},
{
"action": "created",
"author": "coryan",
"comment_id": 760286368,
"datetime": 1610639688000,
"masked_author": "username_1",
"text": "I think #5701 addresses the same problem, closing for now.",
"title": null,
"type": "comment"
}
] | 677 | false | false | 2 | 2 | false |
SimpleAppProjects/SimpleWeather-Windows | SimpleAppProjects | 646,867,473 | 363 | null | [
{
"action": "opened",
"author": "thewizrd",
"comment_id": null,
"datetime": 1593331710000,
"masked_author": "username_0",
"text": "### Version 3.3.0.0(3.3.0.0) ###\n\n\n### Stacktrace ###\n\n\n__Interop.ComCallHelpers.Call($__ComObject __this, RuntimeTypeHandle __typeHnd, Int32 __targetIndex) in Call at 15732480:0;__Interop\n\n__Interop.ForwardComStubs.Stub_19<System.__Canon>(Void* InstParam, $__ComObject __this, Int32 __targetIndex) in Stub_19 at 16707566:0;__Interop.ForwardComStubs\n\n\n\n### Reason ###\n\n\nSystem.Exception\n\n\n### Link to App Center ###\n\n\n* [https://appcenter.ms/users/username_0.dev/apps/SimpleWeather/crashes/errors/1183154435u](https://appcenter.ms/users/username_0.dev/apps/SimpleWeather/crashes/errors/1183154435u)",
"title": "Fix System.Exception in ComCallHelpers.Call ($__ComObject __this, RuntimeTypeHandle __typeHnd, Int32 __targetIndex)",
"type": "issue"
},
{
"action": "closed",
"author": "thewizrd",
"comment_id": null,
"datetime": 1594834563000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 594 | false | false | 1 | 2 | true |
HL7/fhir | HL7 | 812,676,811 | 1,104 | {
"number": 1104,
"repo": "fhir",
"user_login": "HL7"
} | [
{
"action": "opened",
"author": "yunwwang",
"comment_id": null,
"datetime": 1613844988000,
"masked_author": "username_0",
"text": "## HL7 FHIR Pull Request\r\n\r\n_Note: No pull requests will be accepted against `./source` unless logged in the_ [HL7 Jira issue tracker](https://jira.hl7.org/projects/FHIR/issues/).\r\n\r\nIf you made changes to any files within `./source` please indicate the Jira tracker number this pull request is associated with: ` `\r\n\r\n## Description\r\n\r\nFHIR-26831\r\nFHIR-29660\r\nFHIR-31075\r\nFHIR-21243",
"title": "Update Questionnaire and QuestionnaireResponse",
"type": "issue"
},
{
"action": "created",
"author": "yunwwang",
"comment_id": 785261645,
"datetime": 1614189415000,
"masked_author": "username_0",
"text": "Will reorganize tickets",
"title": null,
"type": "comment"
}
] | 408 | false | false | 1 | 2 | false |
apache/hudi | apache | 853,721,526 | 2,793 | {
"number": 2793,
"repo": "hudi",
"user_login": "apache"
} | [
{
"action": "opened",
"author": "TeRS-K",
"comment_id": null,
"datetime": 1617905412000,
"masked_author": "username_0",
"text": "## What is the purpose of the pull request\r\n\r\nThis pull request supports ORC storage in hudi.\r\n\r\n## Brief change log\r\n\r\nIn two separate commits:\r\n- Implemented HoodieOrcWriter\r\n - Added HoodieOrcConfigs\r\n - Added AvroOrcUtils that writes Avro record **to** VectorizedRowBatch\r\n - Used orc-core:no-hive module (`no-hive` is needed because spark-sql uses no-hive version of orc and it would become easier for spark integration)\r\n- Implemented HoodieOrcReader\r\n - Read Avro records **from** VectorizedRowBatch\r\n - Implemented OrcReaderIterator\r\n - Implemented ORC utility functions \r\n\r\n## Verify this pull request\r\n\r\n- Added unit tests for \r\n - reader/writer creation\r\n - AvroOrcUtils\r\n- (local) Wrote a small tool that reads from ORC/Parquet files and writes to ORC/Parquet files, verified that the records in the input/output files are identical using spark.read.orc/spark.read.parquet.\r\n- (local) Changed the HoodieTableConfig.DEFAULT_BASE_FILE_FORMAT to force the tests to run with ORC as the base format. Some changes need to be made, but I'm leaving it out of this PR to get some initial feedback on the reader/writer implementation first.\r\n For all tests to pass with ORC as the base file format:\r\n - Understand schema evolution in ORC (ref TestUpdateSchemaEvolution)\r\n - Add ORC support for places that have hardcoded ParquetReader or sqlContext.read().parquet() \r\n - Add ORC support for bootstrap op\r\n - Hive engine integration with ORC (implement HoodieOrcInputFormat, and more)\r\n - Spark engine integration with ORC (implement HoodieInternalRowOrcWriter, and more)\r\n - Add ORC support for HoodieSnapshotExporter\r\n - Implement HDFSOrcImporter\r\n - and possibly more.\r\n\r\n\r\n## Committer checklist\r\n\r\n - [ ] Has a corresponding JIRA in PR title & commit\r\n \r\n - [ ] Commit message is descriptive of the change\r\n \r\n - [ ] CI is green\r\n\r\n - [ ] Necessary doc changes done or have another open PR\r\n \r\n - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.",
"title": "[HUDI-57] Support ORC Storage",
"type": "issue"
},
{
"action": "created",
"author": "n3nash",
"comment_id": 816037894,
"datetime": 1617905982000,
"masked_author": "username_1",
"text": "@prashantwason Can you review this ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TeRS-K",
"comment_id": 816163413,
"datetime": 1617913917000,
"masked_author": "username_0",
"text": "The build is currently failing with error `ERROR: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit`, it doesn't seem to be related to my change. How can I trigger a rebuild?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yanghua",
"comment_id": 816495293,
"datetime": 1617955102000,
"masked_author": "username_2",
"text": "option 1: close and reopen the PR;\r\noption 2: push an empty commit via git command",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TeRS-K",
"comment_id": 819244201,
"datetime": 1618378961000,
"masked_author": "username_0",
"text": "ORC is well-integrated with hive, so hive already has OrcInputFormat, OrcOutputFormat etc. With my latest change to the HoodieInputFormatUtils class, I was able to sync hudi orc format table to hive metastore (tested with deltastreamer).\r\nHowever, we do still need to implement HoodieOrcInputFormat & HoodieRealtimeOrcInputFormat. I have done some work on that but it's not tested yet.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "n3nash",
"comment_id": 859069045,
"datetime": 1623359257000,
"masked_author": "username_1",
"text": "Closing this in favor of -> https://github.com/apache/hudi/pull/2999",
"title": null,
"type": "comment"
}
] | 2,870 | false | true | 3 | 6 | false |
pytorch/pytorch | pytorch | 860,157,571 | 56,295 | null | [
{
"action": "opened",
"author": "befelix",
"comment_id": null,
"datetime": 1618605822000,
"masked_author": "username_0",
"text": "```\r\nwhich has a hard-coded dimension that replaces the ellipsis according to the batch-dimension of the example tensor. In contrast, `jit.script` identifies the ellipsis and compiles the code to:\r\n```python\r\ndef select(x: Tensor) -> Tensor:\\n return torch.select(x, -1, 0)\\n'\r\n```\r\n\r\n## Motivation\r\n\r\nWhile `jit.script` has come a long way, tracing is often still the easiest to get to jit'ed code. Having indexing operations with ellipsis traced as hard-coded indeces makes it more difficult to use tracing with different batch sizes and can lead to subtle indexing errors as in the example above.",
"title": "[torch.jit.trace] Indexing with ellipsis fixes the batch dimension",
"type": "issue"
},
{
"action": "created",
"author": "gmagogsfm",
"comment_id": 822112447,
"datetime": 1618796518000,
"masked_author": "username_1",
"text": "@username_3 @username_2 Any thoughts? It feels like a fundamental limitation of `trace` to me",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "suo",
"comment_id": 823448238,
"datetime": 1618938145000,
"masked_author": "username_2",
"text": "The current implementation the tracer records operations at the `aten` level, so it only sees advanced indexing operations after they've been desugared into the actual `aten::select` call. So yeah, in that sense it's a fundamental limitation of the current approach.\r\n\r\nPossible options, listed in no particular order of practicality:\r\n1. We could create an aten op that captures the fact that advanced indexing was done, although that increases the complexity of the PT core to solve an implementation detail of the tracer.\r\n2. We could use FX to trace this, although I'm not sure whether it captures this kind of higher-level indexing either (cc @username_3)\r\n3. You can nest a scripted function within a traced one to \"preserve\" the indexing call (this might be ugly/hard to know where the scripted functions need to be inserted)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gmagogsfm",
"comment_id": 823455299,
"datetime": 1618938703000,
"masked_author": "username_1",
"text": "Hi @username_0,\r\n\r\nOptional #3 that @username_2 proposed can be done quickly in your code, could you give that a try? option #1 and #2 would take more time to develop than #3. So #3 is best shot to get unblocked.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gmagogsfm",
"comment_id": null,
"datetime": 1618938715000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "jamesr66a",
"comment_id": 823664588,
"datetime": 1618961211000,
"masked_author": "username_3",
"text": "@username_2 FX traces the surface-level `getitem` call so it's plausible:\r\n\r\n```\r\nimport torch\r\nimport torch.fx\r\n\r\ndef idx(x, z):\r\n return x[..., z]\r\n\r\ntraced = torch.fx.symbolic_trace(idx)\r\nprint(traced.graph)\r\n\"\"\"\r\ngraph(x, z):\r\n %getitem : [#users=1] = call_function[target=operator.getitem](args = (%x, (Ellipsis, %z)), kwargs = {})\r\n return getitem\r\n\"\"\"\r\n```",
"title": null,
"type": "comment"
}
] | 2,081 | false | false | 4 | 6 | true |
sentrysoftware/studioX-templates | sentrysoftware | 810,202,143 | 46 | {
"number": 46,
"repo": "studioX-templates",
"user_login": "sentrysoftware"
} | [
{
"action": "opened",
"author": "RazeemM",
"comment_id": null,
"datetime": 1613569835000,
"masked_author": "username_0",
"text": "",
"title": "Dell EMC Isilon OneFS REST API",
"type": "issue"
},
{
"action": "created",
"author": "MohammedSentry",
"comment_id": 782108342,
"datetime": 1613744823000,
"masked_author": "username_1",
"text": "@username_0 I think there is a conversion issue for the Used HDD Capacity of the StoragePool\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MohammedSentry",
"comment_id": 782122341,
"datetime": 1613746246000,
"masked_author": "username_1",
"text": "@username_0 can you display the Storagepool values in GB instead of MB ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RazeemM",
"comment_id": 782148331,
"datetime": 1613748544000,
"masked_author": "username_0",
"text": "@username_1 Units modified to GB.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MohammedSentry",
"comment_id": 783421395,
"datetime": 1614004804000,
"masked_author": "username_1",
"text": "@username_0 no more USED HDD CAPACITY ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bertysentry",
"comment_id": 785932241,
"datetime": 1614263088000,
"masked_author": "username_2",
"text": "@username_0 no more USED HDD CAPACITY?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RazeemM",
"comment_id": 785979199,
"datetime": 1614266327000,
"masked_author": "username_0",
"text": "It is there.\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "RazeemM",
"comment_id": 786001895,
"datetime": 1614268238000,
"masked_author": "username_0",
"text": "There was no Used HDD Capacity for Nodepool. It is there for Storagepool.",
"title": null,
"type": "comment"
}
] | 577 | false | false | 3 | 8 | true |
jlippold/tweakCompatible | null | 751,057,139 | 134,453 | null | [
{
"action": "opened",
"author": "MEGSystem",
"comment_id": null,
"datetime": 1606331183000,
"masked_author": "username_0",
"text": "```\r\n{\r\n \"packageId\": \"com.rpgfarm.a-font\",\r\n \"action\": \"working\",\r\n \"userInfo\": {\r\n \"arch32\": false,\r\n \"packageId\": \"com.rpgfarm.a-font\",\r\n \"deviceId\": \"iPhone8,1\",\r\n \"url\": \"http://cydia.saurik.com/package/com.rpgfarm.a-font/\",\r\n \"iOSVersion\": \"14.2\",\r\n \"packageVersionIndexed\": true,\r\n \"packageName\": \"A-Font\",\r\n \"category\": \"Tweaks\",\r\n \"repository\": \"MERONA Repo\",\r\n \"name\": \"A-Font\",\r\n \"installed\": \"1.8.3\",\r\n \"packageIndexed\": true,\r\n \"packageStatusExplaination\": \"A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.\",\r\n \"id\": \"com.rpgfarm.a-font\",\r\n \"commercial\": false,\r\n \"packageInstalled\": true,\r\n \"tweakCompatVersion\": \"0.1.5\",\r\n \"shortDescription\": \"Change your font!\",\r\n \"latest\": \"1.8.3\",\r\n \"author\": \"Baw Appie\",\r\n \"packageStatus\": \"Unknown\"\r\n },\r\n \"base64\": \"eyJhcmNoMzIiOmZhbHNlLCJwYWNrYWdlSWQiOiJjb20ucnBnZmFybS5hLWZvbnQiLCJkZXZpY2VJZCI6ImlQaG9uZTgsMSIsInVybCI6Imh0dHA6XC9cL2N5ZGlhLnNhdXJpay5jb21cL3BhY2thZ2VcL2NvbS5ycGdmYXJtLmEtZm9udFwvIiwiaU9TVmVyc2lvbiI6IjE0LjIiLCJwYWNrYWdlVmVyc2lvbkluZGV4ZWQiOnRydWUsInBhY2thZ2VOYW1lIjoiQS1Gb250IiwiY2F0ZWdvcnkiOiJUd2Vha3MiLCJyZXBvc2l0b3J5IjoiTUVST05BIFJlcG8iLCJuYW1lIjoiQS1Gb250IiwiaW5zdGFsbGVkIjoiMS44LjMiLCJwYWNrYWdlSW5kZXhlZCI6dHJ1ZSwicGFja2FnZVN0YXR1c0V4cGxhaW5hdGlvbiI6IkEgbWF0Y2hpbmcgdmVyc2lvbiBvZiB0aGlzIHR3ZWFrIGZvciB0aGlzIGlPUyB2ZXJzaW9uIGNvdWxkIG5vdCBiZSBmb3VuZC4gUGxlYXNlIHN1Ym1pdCBhIHJldmlldyBpZiB5b3UgY2hvb3NlIHRvIGluc3RhbGwuIiwiaWQiOiJjb20ucnBnZmFybS5hLWZvbnQiLCJjb21tZXJjaWFsIjpmYWxzZSwicGFja2FnZUluc3RhbGxlZCI6dHJ1ZSwidHdlYWtDb21wYXRWZXJzaW9uIjoiMC4xLjUiLCJzaG9ydERlc2NyaXB0aW9uIjoiQ2hhbmdlIHlvdXIgZm9udCEiLCJsYXRlc3QiOiIxLjguMyIsImF1dGhvciI6IkJhdyBBcHBpZSIsInBhY2thZ2VTdGF0dXMiOiJVbmtub3duIn0=\",\r\n \"chosenStatus\": \"working\",\r\n \"notes\": \"\"\r\n}\r\n```",
"title": "`A-Font` working on iOS 14.2",
"type": "issue"
}
] | 1,861 | false | false | 1 | 1 | false |
AhmedAmin90/ror-social-scaffold | null | 817,398,548 | 4 | null | [
{
"action": "opened",
"author": "mricanho",
"comment_id": null,
"datetime": 1614349463000,
"masked_author": "username_0",
"text": "Hello Team!\r\n\r\nYou did an excellent job in your first two milestones, just a few things to work on:\r\n\r\n- You need to refactor your friendship code, all of this is in preparation for the milestone.\r\n- Take out all the logic on your views, this is the best practice.\r\n\r\nHappy coding!",
"title": "Peer to peer code review",
"type": "issue"
}
] | 281 | false | false | 1 | 1 | false |
JuliaRegistries/General | JuliaRegistries | 776,796,751 | 27,130 | {
"number": 27130,
"repo": "General",
"user_login": "JuliaRegistries"
} | [
{
"action": "opened",
"author": "JuliaRegistrator",
"comment_id": null,
"datetime": 1609392325000,
"masked_author": "username_0",
"text": "- Registering package: ArrayInterface\n- Repository: https://github.com/SciML/ArrayInterface.jl\n- Created by: @chriselrod\n- Version: v2.14.11\n- Commit: 0256272ce42846753edf055aae66191116f1ff7e\n- Reviewed by: @chriselrod\n- Reference: https://github.com/SciML/ArrayInterface.jl/commit/0256272ce42846753edf055aae66191116f1ff7e#commitcomment-45540590\n<!-- bf0c69308befbd3ccf2cc956ac8a46712550b79fc9bfb5e4edf8f833f05f4c18b06eddad8845b45beb9f45c2b8020dd6052a85cb81bea51da7fec03da3249e2705da7c5893884fa78a1359a9fe123114c0495d8c2338f2d819d9640571c424920cc3378da8c50b036aa90aa540f08fbfbeeb3053ad2642eeeb1467dd964eb1ca067b1c80d36d503de955557ef34211fe5664b8c2d873927252fcf0aef20a15c4c19fa7801ea2aa39c57f553547234bb993cb9adc967564405d2544fc239149239db0738c3c9691dc9bc00dff1df035e202e806a49a7f1b9fdd9eaa502b02e685 -->",
"title": "New version: ArrayInterface v2.14.11",
"type": "issue"
}
] | 803 | false | true | 1 | 1 | false |
reapit/foundations | reapit | 795,166,998 | 3,357 | null | [
{
"action": "opened",
"author": "plittlewood-rpt",
"comment_id": null,
"datetime": 1611758939000,
"masked_author": "username_0",
"text": "The logging is a little light in the data marketplace app, particularly in interactions with the third party platforms. This makes it difficult to isolate exact points of failure in some scenarios. We should enhance the logging capabilities of this service",
"title": "Improve logging",
"type": "issue"
},
{
"action": "closed",
"author": "cbryanreapit",
"comment_id": null,
"datetime": 1611868392000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 256 | false | false | 2 | 2 | false |
OpenAPITools/openapi-generator | OpenAPITools | 848,584,179 | 9,152 | null | [
{
"action": "opened",
"author": "Kink77",
"comment_id": null,
"datetime": 1617292036000,
"masked_author": "username_0",
"text": "<!--\r\nPlease follow the issue template below for bug reports and feature requests.\r\nAlso please indicate in the issue title which language/library is concerned. Eg: [JAVA] Bug generating foo with bar \r\n-->\r\n\r\n##### Description\r\n\r\nGetting a stackoverflow exception when trying to generate a client for csharp-netcore\r\n\r\n##### openapi-generator version\r\n\r\nversion 5.1.0\r\n\r\n##### OpenAPI declaration file content or url\r\n\r\nhttps://gist.github.com/username_0/673cbfe68f6c94afdee6ff5c7f0fe9a8\r\n\r\n##### Command line used for generation\r\n\r\njava -jar openapi-generator-cli-5.1.0.jar generate -i ./openapi-spec.json -c ./config.json -g csharp-netcore\r\n\r\nYou can find openapi-spec.json here : \r\nhttps://gist.github.com/username_0/673cbfe68f6c94afdee6ff5c7f0fe9a8\r\nYou can find config.json here : \r\nhttps://gist.github.com/username_0/48efb0951e4d21f682948138bcd6eb52\r\n\r\n##### Steps to reproduce\r\n\r\ndownload both openapi-spec.json and config.json, put them in the same directory as the openapi-generator-cli-5.1.0.jar and run the command line java -jar openapi-generator-cli-5.1.0.jar generate -i ./openapi-spec.json -c ./config.json -g csharp-netcore\r\n\r\nYou will get the following error : \r\nException in thread \"main\" java.lang.StackOverflowError\r\n\tat com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:740)\r\n\r\nThe culprit is the schema JToken referencing itself which creates a cyclic reference\r\n\r\n##### Related issues/PRs\r\n\r\n##### Suggest a fix/enhancement\r\n\r\nWIth the swagger generator, I end up with a Dictionary<string, JToken> for that schema",
"title": "Openapi Generator stackoverflow exception for csharp-netcore",
"type": "issue"
},
{
"action": "created",
"author": "wing328",
"comment_id": 812823266,
"datetime": 1617432694000,
"masked_author": "username_1",
"text": "```\r\n \"components\": {\r\n \"schemas\": {\r\n \"JToken\": {\r\n \"type\": \"array\",\r\n \"items\": {\r\n \"$ref\": \"#/components/schemas/JToken\"\r\n }\r\n }\r\n }\r\n }\r\n```\r\n`JToken` is an array of itself, which is causing the issue.\r\n\r\nWhat does the JSON payload look like?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Kink77",
"comment_id": 813392745,
"datetime": 1617629293000,
"masked_author": "username_0",
"text": "It's basically a [JObject from Newtonsoft.JSON](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_Linq_JObject.htm)\r\n\r\nWhen I try to generate a client with swagger codegen, it gives me a dictionary of <string, JToken>. Open Api Generator seems to do an endless loop and cause a stackoverflow.\r\n\r\nI managed to generate a client by fiddling with the openapi spec. Here's the [fixed openapi-spec](https://gist.github.com/username_0/c4352f4b02102937f256fafe4eb552a0) I used to make it work. \r\n\r\nSo I'm not too sure how to go about having a property that is a JObject translated to an openapi spec and back to a JObject in the client.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "JesperG",
"comment_id": 852802338,
"datetime": 1622618220000,
"masked_author": "username_2",
"text": "I also ran into this issue because we use JToken in an API. It reached the same endless loop until stack overflowed.\r\nThank you for your insight as it allowed me to manually modify the .json file (change the TJoken type to object instead of an array of itself) in order to make the generator not failing.\r\nI did not need the endpoints which are using the JToken, so I did not look further into how to use this, sorry.\r\nThis is to confirm that there is an issue.",
"title": null,
"type": "comment"
}
] | 2,962 | false | false | 3 | 4 | true |
PlaceOS/drivers | PlaceOS | 838,177,173 | 123 | null | [
{
"action": "opened",
"author": "jeremyw24",
"comment_id": null,
"datetime": 1616454885000,
"masked_author": "username_0",
"text": "API: https://developer.cisco.com/meraki/mv-sense/#!overview/example-use-cases",
"title": "Cisco MV Sense",
"type": "issue"
},
{
"action": "closed",
"author": "stakach",
"comment_id": null,
"datetime": 1628674175000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 77 | false | false | 2 | 2 | false |
aiidateam/aiida-quantumespresso | aiidateam | 827,688,852 | 661 | null | [
{
"action": "opened",
"author": "mbercx",
"comment_id": null,
"datetime": 1615380664000,
"masked_author": "username_0",
"text": "When running the `hp.x` calculation with the `HpCalculation` in the `aiida-quantumespresso-hp` package, you can provide the `parent_scf` input. However, here it is critical that the atoms that the Hubbard atoms are provided first in the `ATOMIC_POSITIONS` list, else the following error is raised by the `hp.x` code:\r\n\r\n```console\r\n WARNING! All Hubbard atoms must be listed first in the ATOMIC_POSITIONS card of PWscf\r\n Stopping...\r\n```\r\n\r\nIn order to avoid this, perhaps we can put the atoms that have Hubbard values assigned first in this list by default?",
"title": "`BasePwCpInputGenerator`: Place Hubbard first in `ATOMIC_POSITIONS` list",
"type": "issue"
},
{
"action": "created",
"author": "mbercx",
"comment_id": 809228635,
"datetime": 1617010293000,
"masked_author": "username_0",
"text": "Pinging @username_1 for comments. :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MackeEric",
"comment_id": 811801146,
"datetime": 1617271408000,
"masked_author": "username_1",
"text": "Thinking of future extensions to `aiida-quantumespresso-hp` to support the calculation of Hubbard V parameters, I would also be in favour of providing at least an option that allows to sort the positions by Hubbard species. This is because the SCF before the first hp.x calculation (for Hubbard V) requires to set a input such as `hubbard_v(i,i,1) = 1.d-7` where `i` is the number of the Hubbard atom in the `ATOMIC_POSITIONS` list. If we could order the positions in a way that puts the Hubbard species first, the input generator could simply set all `hubbard_v(i,i,1)` from i=1 to i=N<sub>Hubbard<sub> to a finite value and we're done.",
"title": null,
"type": "comment"
}
] | 1,238 | false | false | 2 | 3 | true |
jhk0530/aladin | null | 801,988,876 | 452 | null | [
{
"action": "opened",
"author": "jhk0530",
"comment_id": null,
"datetime": 1612516161000,
"masked_author": "username_0",
"text": "- 파이썬과 리액트를 활용한 주식 자동거래 시스템 구축\n- 카카오톡, 라인, 아이 메시지 & 페이스북 메신저와 함께하는 이모티콘으로 돈벌기\n- 컴퓨팅 사고력을 키우는 코딩\n- 팜 1 : 지하 농장\n- 하이퍼레저 패브릭 실전 프로젝트\n- 하이퍼레저 패브릭 철저 입문\n- 문과생, 데이터 사이언티스트 되다\n- 스마트한 생활을 위한 버전 2 : 엑셀 2010 활용\n- 디지털 포렌식\n- 2020 시나공 워드프로세서 실기\n- 월드 오브 사이버펑크 2077\n- 2020 시나공 컴퓨터활용능력 1급 실기\n- 2020 이기적 컴퓨터활용능력 1급 실기 기본서 : 무료 동영상 전강 & 채점 프로그램 제공\n- 큐비코가 이상해!\n- 일상을 아름답게 담아내는 사진촬영\n- 삐딱하게 바라본 4차 산업혁명\n- 두렵지 않은 코딩교육\n- 15초면 충분해, 틱톡!\n- 예제로 배우는 Visual C++ MFC 2017 윈도우 프로그래밍\n- 유닉스의 탄생\n- 3D프린팅 수업을 위한 틴커캐드 디자인 4\n- 소프트웨어 인사이더\n- Microsoft Power BI 기본 + 활용",
"title": "알라딘 잠실새내역점 새로 등록된 IT 도서 알림(2021년 02월 05일)",
"type": "issue"
},
{
"action": "closed",
"author": "jhk0530",
"comment_id": null,
"datetime": 1612531215000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 529 | false | false | 1 | 2 | false |
miguelgrinberg/oreilly-flask-apis-video | null | 611,202,459 | 15 | null | [
{
"action": "opened",
"author": "nataly-obr",
"comment_id": null,
"datetime": 1588429844000,
"masked_author": "username_0",
"text": "**Description**\r\nsession.commit() method produces IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint.\r\n\r\n**To Reproduce**\r\n``` python\r\nclass Currency(Base):\r\n __tablename__ = 'currency'\r\n\r\n currency_code = db.Column('currency_code', db.String(3), primary_key=True)\r\n currency_name = db.Column('currency_name', db.String(140))\r\n\r\nSession = sessionmaker(bind=self.engine)\r\nsession = Session()\r\ncurrency_list = [Currency(currency_code = 'USD'), Currency(currency_code = 'EUR')]\r\ncurrency_list = list(map(lambda x: session.merge(x), currency_list))\r\nsession.bulk_save_objects(currency_list, update_changed_only=False)\r\nsession.commit() \r\n```\r\n**Additional information**\r\n\r\n1. If I modify this code by adding session.commit() after session.merge(), everything works fine:\r\n``` python\r\n...\r\ncurrency_list = list(map(lambda x: session.merge(x), currency_list))\r\nsession.commit() \r\nsession.bulk_save_objects(currency_list, update_changed_only=False)\r\nsession.commit() \r\n```\r\n2. The table 'currency' is empty, so the currency_code that I'm trying to add doesn't exist in the database.\r\n3. Also no problem with adding this currency_code via \r\n``` python\r\nconnection = psycopg2.connect(...) \r\n... \r\nstatement = \"INSERT INTO {0} ({1}) VALUES ({2}); \".format(table, columns, values) \r\ncrsr.execute(statement) \r\nconnection.commit()\r\n```\r\n**Error**\r\n``` python\r\nsqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint \"currency_pkey\"\r\nDETAIL: Key (currency_code)=(EUR) already exists.\r\n```",
"title": "sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint",
"type": "issue"
},
{
"action": "closed",
"author": "nataly-obr",
"comment_id": null,
"datetime": 1588430052000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "reopened",
"author": "nataly-obr",
"comment_id": null,
"datetime": 1588430171000,
"masked_author": "username_0",
"text": "**Description**\r\nsession.commit() method produces IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint.\r\n\r\n**To Reproduce**\r\n``` python\r\nclass Currency(Base):\r\n __tablename__ = 'currency'\r\n\r\n currency_code = db.Column('currency_code', db.String(3), primary_key=True)\r\n currency_name = db.Column('currency_name', db.String(140))\r\n\r\nSession = sessionmaker(bind=self.engine)\r\nsession = Session()\r\ncurrency_list = [Currency(currency_code = 'USD'), Currency(currency_code = 'EUR')]\r\ncurrency_list = list(map(lambda x: session.merge(x), currency_list))\r\nsession.bulk_save_objects(currency_list, update_changed_only=False)\r\nsession.commit() \r\n```\r\n**Additional information**\r\n\r\n1. If I modify this code by adding session.commit() after session.merge(), everything works fine:\r\n``` python\r\n...\r\ncurrency_list = list(map(lambda x: session.merge(x), currency_list))\r\nsession.commit() \r\nsession.bulk_save_objects(currency_list, update_changed_only=False)\r\nsession.commit() \r\n```\r\n2. The table 'currency' is empty, so the currency_code that I'm trying to add doesn't exist in the database.\r\n3. Also no problem with adding this currency_code via \r\n``` python\r\nconnection = psycopg2.connect(...) \r\n... \r\nstatement = \"INSERT INTO {0} ({1}) VALUES ({2}); \".format(table, columns, values) \r\ncrsr.execute(statement) \r\nconnection.commit()\r\n```\r\n**Error**\r\n``` python\r\nsqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint \"currency_pkey\"\r\nDETAIL: Key (currency_code)=(EUR) already exists.\r\n```",
"title": "sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint",
"type": "issue"
},
{
"action": "closed",
"author": "nataly-obr",
"comment_id": null,
"datetime": 1588430325000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3,174 | false | false | 1 | 4 | false |
dotnet/docfx | dotnet | 758,838,460 | 6,866 | null | [
{
"action": "opened",
"author": "alexhelms",
"comment_id": null,
"datetime": 1607374260000,
"masked_author": "username_0",
"text": "**Operating System**: Windows\r\n\r\n**DocFX Version Used**: 2.56.5\r\n\r\n**Template used**: default\r\n\r\n**Steps to Reproduce**:\r\n\r\n1. Create a property like `public string MyString { get; init; }`\r\n2. Run docfx\r\n\r\n**Expected Behavior**: Documentation is generated.\r\n\r\n**Actual Behavior**: An exception is thrown, see below for stack trace.\r\n\r\nTo find this I had to run docfx from source and in debug mode and trace back from the exception. The hint I had was the name of the property in a function further up the call stack.\r\n\r\nI understand #6805 is for C# 9 support but I don't see any implementation or notes about `init` properties. At work we recently upgraded to .NET 5 and this is the first time I tried to use an `init` property and our CI failed during doc generation and took a considerable amount of time to find the root cause.\r\n\r\nThanks!\r\n\r\n```\r\nMicrosoft.DocAsCode.Exceptions.DocfxException: Unable to generate spec reference for !: ---> System.IO.InvalidDataException: Fail to parse id for symbol in namespace .\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.YamlModelGenerator.AddSpecReference(ISymbol symbol, IReadOnlyList`1 typeGenericParameters, IReadOnlyList`1 methodGenericParameters, Dictionary`2 references, SymbolVisitorAdapter adapter)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.AddSpecReference(ISymbol symbol, IReadOnlyList`1 typeGenericParameters, IReadOnlyList`1 methodGenericParameters)\r\n --- End of inner exception stack trace ---\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.AddSpecReference(ISymbol symbol, IReadOnlyList`1 typeGenericParameters, IReadOnlyList`1 methodGenericParameters)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.AddMethodSyntax(IMethodSymbol symbol, MetadataItem result, IReadOnlyList`1 typeGenericParameters, IReadOnlyList`1 methodGenericParameters)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.VisitMethod(IMethodSymbol symbol)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.VisitNamedType(INamedTypeSymbol symbol)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.VisitDescendants[T](IEnumerable`1 children, Func`2 getChildren, Func`2 filter)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.VisitNamespace(INamespaceSymbol symbol)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.SymbolVisitorAdapter.VisitAssembly(IAssemblySymbol symbol)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.RoslynMetadataExtractor.Extract(ExtractMetadataOptions options)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.ExtractMetadataWorker.GetMetadataFromProjectLevelCache(IBuildController controller, IInputParameters key)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.ExtractMetadataWorker.<SaveAllMembersFromCacheAsync>d__13.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.DocAsCode.Metadata.ManagedReference.ExtractMetadataWorker.<ExtractMetadataAsync>d__11.MoveNext()\r\n 0 Warning(s)\r\n 1 Error(s)\r\n```",
"title": "Support C# 9 init properties",
"type": "issue"
}
] | 3,303 | false | false | 1 | 1 | false |
HMS-Core/hms-flutter-plugin | HMS-Core | 801,838,845 | 62 | null | [
{
"action": "opened",
"author": "davidzou",
"comment_id": null,
"datetime": 1612500261000,
"masked_author": "username_0",
"text": "**Description**\r\nThe Splash Ad do not be showed.\r\n\r\n**Logs**\r\n```\r\n [+1012 ms] I/HiAdSDK.RealtimeAdMediator( 5144): doOnShowSloganEnd\r\n [ +1 ms] I/HiAdSDK.RealtimeAdMediator( 5144): Ad fails to display or loading timeout, ad dismiss\r\n [ ] I/HiAdSDK.AdMediator( 5144): ad failed:499\r\n [ ] I/HiAdSDK.AdMediator( 5144): ad is already failed\r\n [ ] I/HiAdSDK.AdMediator( 5144): notifyAdDismissed\r\n [ ] I/HiAdSDK.AdMediator( 5144): ad already dismissed\r\n```\r\n\r\n**Environment**\r\n - Platform: Flutter\r\n - Kit: Ads\r\n - Kit Version : 13.4.35+300\r\n - OS Version : any\r\n - Android Studio version (if applicable) [4.1]\r\n - Platform version (if applicable)\r\n - Node Version (if applicable)\r\n - Your Location/Region (if applicable) CN",
"title": "The Splash Ad do not be showed. Error code 499.",
"type": "issue"
},
{
"action": "created",
"author": "furkansarihan",
"comment_id": 773862414,
"datetime": 1612511664000,
"masked_author": "username_1",
"text": "Hello @username_0,\r\n\r\n1. Make sure HMS Core version is 4.0+ in your device.\r\n2. Don't set the country/Region to China if your phone is not ChinaROM.\r\n\r\nAlso you can test your apk in [cloud debugging](https://developer.huawei.com/consumer/en/console#/openCard/AppService/1045) to make sure there is no issue in your implementation.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Mike-mei",
"comment_id": null,
"datetime": 1616481795000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 1,089 | false | false | 3 | 3 | true |
flashlight/flashlight | flashlight | 860,701,960 | 547 | null | [
{
"action": "opened",
"author": "waynelapierre",
"comment_id": null,
"datetime": 1618762451000,
"masked_author": "username_0",
"text": "### Feature Description\r\nIt would be great if the flashlight library can be used in R. \r\n\r\n#### Use Case\r\nMany users prefer R over Python for data analytics. They can benefit from being able to use the flashlight library in R. \r\n\r\n#### Additional Context\r\nThe Rcpp package can be used to integrate R and C++.",
"title": "Using from R Language",
"type": "issue"
},
{
"action": "created",
"author": "jacobkahn",
"comment_id": 822868658,
"datetime": 1618877145000,
"masked_author": "username_1",
"text": "@username_0 — as with other bindings, we probably don't have the bandwidth or knowhow to create robust R bindings ourselves, but we'd welcome a PR and would absolutely be interested in discussing this further. Would you/others be able to support a first push on this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "waynelapierre",
"comment_id": 822880486,
"datetime": 1618878400000,
"masked_author": "username_0",
"text": "This is beyond my capability. Since someone else at Facebook is doing a great job maintaining the R binding of the prophet project, they may be interested in this project as well. \r\nhttps://github.com/facebook/prophet",
"title": null,
"type": "comment"
}
] | 796 | false | false | 2 | 3 | true |
javve/list.js | null | 754,585,396 | 699 | null | [
{
"action": "opened",
"author": "machupichu123",
"comment_id": null,
"datetime": 1606843598000,
"masked_author": "username_0",
"text": "I found a problem caused by a different version.\r\n\r\nI can't search the symbols `#` and `-` when using a var 2.3.0.\r\n\r\nFor example, here is the code.\r\n[https://jsfiddle.net/veduozjf/](https://jsfiddle.net/veduozjf/)\r\n\r\nIn case of I use var 1.5.0 result shows 'Mark #Twain',\r\nBut var 2.3.0 shows nothing.",
"title": "function is not wark at item option at var 2.3.0 ",
"type": "issue"
},
{
"action": "created",
"author": "sheffieldnick",
"comment_id": 736749760,
"datetime": 1606848824000,
"masked_author": "username_1",
"text": "I think it is probably a bug in `setSearchString()` where it is escaping regular expression characters (like #, -, etc) but the new faster multiword search code in v2.3.0 doesn't use regexp, so those characters shouldn't be escaped?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "machupichu123",
"comment_id": 737759124,
"datetime": 1606985545000,
"masked_author": "username_0",
"text": "Thank you. I solved it by commenting out line 744 of 2.3.0.\r\n\r\n744 s = s.replace(/[-[\\]{}()*+?.,\\\\^$|#]/g, '\\\\$&'); // Escape regular expression characters\r\n↓\r\n744 // s = s.replace(/[-[\\]{}()*+?.,\\\\^$|#]/g, '\\\\$&'); // Escape regular expression characters",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "machupichu123",
"comment_id": 737770762,
"datetime": 1606986800000,
"masked_author": "username_0",
"text": "Thank you. I solved it by commenting out line 744 of 2.3.0.\r\n\r\n744 s = s.replace(/[-[]{}()+?.,\\^$|#]/g, '\\$&'); // Escape regular expression characters\r\n↓\r\n744 // s = s.replace(/[-[]{}()+?.,\\^$|#]/g, '\\$&'); // Escape regular expression characters",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "machupichu123",
"comment_id": 737771103,
"datetime": 1606986838000,
"masked_author": "username_0",
"text": "I am expecting great things of you!\r\nWhat do you think of this issue?\r\n[https://github.com/javve/list.js/issues/698](https://github.com/javve/list.js/issues/698)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sheffieldnick",
"comment_id": 737781458,
"datetime": 1606987953000,
"masked_author": "username_1",
"text": "I suggest renaming this issue to something specific, like \"Search on regexp characters broken by v2.3.0\"",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "machupichu123",
"comment_id": 737825839,
"datetime": 1606990216000,
"masked_author": "username_0",
"text": "I Fixed the title.\r\nWhy did you thumbs down?\r\nAre you angry with me, by any chance?\r\nI'm concerning about that if my English makes sense or not.",
"title": null,
"type": "comment"
}
] | 1,445 | false | false | 2 | 7 | false |
celo-org/celo-monorepo | celo-org | 832,088,241 | 7,439 | null | [
{
"action": "opened",
"author": "yorhodes",
"comment_id": null,
"datetime": 1615833540000,
"masked_author": "username_0",
"text": "### Expected Behavior\r\n\r\nCelo contracts verified on https://sourcify.dev/ \r\n\r\n### Current Behavior\r\n\r\nNo verified contracts on sourcify (or blockscout)\r\n\r\nhttps://docs.blockscout.com/for-projects/premium-features/contracts-verification-via-sourcify",
"title": "Verify released contracts using sourcify ",
"type": "issue"
},
{
"action": "created",
"author": "zviadm",
"comment_id": 812718493,
"datetime": 1617398057000,
"masked_author": "username_1",
"text": "Just a note that might be helpful. I recently integrated fetching verified contract metadata from sourcify in Celo Terminal. It was surprisingly easy to do and also verification side was very easy and smooth too directly with https://sourcify.dev.\r\n\r\nOne really nice thing that is also possible with `sourcify` is that once you verify contracts on alfajores or baklava, you don't even need to do anything to verify them on mainnet as long as contracts match exactly, since it will do automatic bytecode matching and find appropriate metadata + source. \r\n\r\nFor verification itself, using sourcify api directly can be easier than having to go through blockscout: https://github.com/ethereum/sourcify/blob/master/docs/api/server/verification1/verify.md\r\nhttps://github.com/ethereum/sourcify#api",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yorhodes",
"comment_id": 820584906,
"datetime": 1618505956000,
"masked_author": "username_0",
"text": "@kevjue worked with this tool during ubeswap audit, could be a good candidate",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yorhodes",
"comment_id": 843456835,
"datetime": 1621364653000,
"masked_author": "username_0",
"text": "blocked on https://github.com/celo-org/celo-monorepo/issues/7855",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yorhodes",
"comment_id": 863399916,
"datetime": 1623948576000,
"masked_author": "username_0",
"text": "https://github.com/ethereum/sourcify/issues/468",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "yorhodes",
"comment_id": null,
"datetime": 1629909309000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1,228 | false | false | 2 | 6 | false |
OPS-E2E-PPE/E2E_DocFxV3 | OPS-E2E-PPE | 770,369,213 | 12,297 | {
"number": 12297,
"repo": "E2E_DocFxV3",
"user_login": "OPS-E2E-PPE"
} | [
{
"action": "opened",
"author": "OPSTestPPE",
"comment_id": null,
"datetime": 1608239351000,
"masked_author": "username_0",
"text": "",
"title": "pronly-true-warning: changed includes file reported in parent file",
"type": "issue"
},
{
"action": "created",
"author": "e2ebd2",
"comment_id": 747703551,
"datetime": 1608239385000,
"masked_author": "username_1",
"text": "Docs Build status updates of commit _[544c9b0](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/commits/544c9b066de6b679355bc3dca6b386e05d355064)_: \n\n### :white_check_mark: Validation status: passed\r\n\r\n\r\nFile | Status | Preview URL | Details\r\n---- | ------ | ----------- | -------\r\n[E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/pronly-on-skipLevel-warning/E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md) | :white_check_mark:Succeeded | [View](https://ppe.docs.microsoft.com/en-us/E2E_DocFxV3/pr-only/skiplevel?branch=pr-en-us-12297) |\r\n\r\nFor more details, please refer to the [build report](https://opbuildstoragesandbox2.blob.core.windows.net/report/2020%5C12%5C17%5C8b165afe-a866-8c48-e5c3-ead94440f27e%5CPullRequest%5C202012172109142210-12297%5Cworkflow_report.html?sv=2016-05-31&sr=b&sig=z6ZgiDyItRqq36A8KcQVBm0l3TfnjmPwZxOAVgFzNoU%3D&st=2020-12-17T21%3A04%3A44Z&se=2021-01-17T21%3A09%3A44Z&sp=r).\r\n\r\n**Note:** Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the [broken link report](https://opportal-sandbox.azurewebsites.net/#/repos/8b165afe-a866-8c48-e5c3-ead94440f27e?tabName=brokenlinks).\r\n\r\nFor any questions, please:<ul><li>Try searching in the <a href=\"https://review.docs.microsoft.com/en-us/search/?search=&category=All&scope=help-docs&category=All&branch=master\">Docs contributor and Admin Guide</a></li><li>See the <a href=\"https://review.docs.microsoft.com/en-us/help/onboard/faq?branch=master\">frequently asked questions</a></li><li>Post your question in the <a href=\"https://teams.microsoft.com/l/channel/19%3a7ecffca1166a4a3986fed528cf0870ee%40thread.skype/General?groupId=de9ddba4-2574-4830-87ed-41668c07a1ca&tenantId=72f98bf-86f1-41af-91ab-2d7cd011db47\">Docs support channel</a></li></ul>",
"title": null,
"type": "comment"
}
] | 1,885 | false | false | 2 | 2 | false |
folio-org/mod-circulation | folio-org | 654,073,762 | 599 | {
"number": 599,
"repo": "mod-circulation",
"user_login": "folio-org"
} | [
{
"action": "opened",
"author": "bohdan-suprun",
"comment_id": null,
"datetime": 1594302352000,
"masked_author": "username_0",
"text": "Includes:\r\n* https://github.com/folio-org/mod-circulation/pull/588\r\n* https://github.com/folio-org/mod-circulation/pull/590\r\n* https://github.com/folio-org/mod-circulation/pull/584\r\n\r\n## Purpose\r\n<!--\r\n Why are you making this change? There is nothing more important\r\n to provide to the reviewer and to future readers than the cause\r\n that gave rise to this pull request. Be careful to avoid circular\r\n statements like \"the purpose is to update the schema.\" and\r\n instead provide an explanation like \"there is more data to be provided and stored for Purchase Orders \r\n which is currently missing in the schema\"\r\n\r\n The purpose may seem self-evident to you now, but the standard to\r\n hold yourself to should be \"can a developer parachuting into this\r\n project reconstruct the necessary context merely by reading this\r\n section.\"\r\n\r\n If you have a relevant JIRA issue, add a link directly to the issue URL here.\r\n Example: https://issues.folio.org/browse/MODORDERS-70\r\n -->\r\n\r\n## Approach\r\n<!--\r\n How does this change fulfill the purpose? It's best to talk\r\n high-level strategy and avoid code-splaining the commit history.\r\n\r\n The goal is not only to explain what you did, but help other\r\n developers *work* with your solution in the future.\r\n-->\r\n\r\n#### TODOS and Open Questions\r\n<!-- OPTIONAL\r\n- [ ] Use GitHub checklists. When solved, check the box and explain the answer.\r\n-->\r\n\r\n## Learning\r\n<!-- OPTIONAL\r\n Help out not only your reviewer, but also your fellow developer!\r\n Sometimes there are key pieces of information that you used to come up\r\n with your solution. Don't let all that hard work go to waste! A\r\n pull request is a *perfect opportunity to share the learning that\r\n you did. 
Add links to blog posts, patterns, libraries or addons used\r\n to solve this problem.\r\n-->\r\n\r\n## Pre-Merge Checklist:\r\nBefore merging this PR, please go through the following list and take appropriate actions.\r\n\r\n- Does this PR meet or exceed the expected quality standards?\r\n - [ ] Code coverage on new code is 80% or greater\r\n - [ ] Duplications on new code is 3% or less\r\n - [ ] There are no major code smells or security issues\r\n- Does this introduce breaking changes?\r\n - [ ] Were any API paths or methods changed, added or removed?\r\n - [ ] Were there any schema changes?\r\n - [ ] Did any of the interface versions change?\r\n - [ ] Were permissions changed, added, or removed?\r\n - [ ] Are there new interface dependencies?\r\n - [ ] There are no breaking changes in this PR.\r\n \r\nIf there are breaking changes, please **STOP** and consider the following:\r\n\r\n- What other modules will these changes impact?\r\n- Do JIRAs exist to update the impacted modules?\r\n - [ ] If not, please create them\r\n - [ ] Do they contain the appropriate level of detail? Which endpoints/schemas changed, etc.\r\n - [ ] Do they have all they appropriate links to blocked/related issues?\r\n- Are the JIRAs under active development? \r\n - [ ] If not, contact the project's PO and make sure they're aware of the urgency.\r\n- Do PRs exist for these changes?\r\n - [ ] If so, have they been approved?\r\n\r\nIdeally all of the PRs involved in breaking changes would be merged in the same day to avoid breaking the folio-testing environment. Communication is paramount if that is to be achieved, especially as the number of intermodule and inter-team dependencies increase. \r\n\r\nWhile it's helpful for reviewers to help identify potential problems, ensuring that it's safe to merge is ultimately the responsibility of the PR assignee.",
"title": "Bug/fix release 19.0.6",
"type": "issue"
}
] | 3,523 | false | true | 1 | 1 | false |
keyonvafa/tbip | null | 700,662,921 | 3 | null | [
{
"action": "opened",
"author": "yoyoyyono",
"comment_id": null,
"datetime": 1600036864000,
"masked_author": "username_0",
"text": "Hi, I've been using your repository a few weeks ago to estimate social media user ideal points. However, I have noticed that when I try to run the model with more than 800 authors, the model does not converge. Specifically the ELBO returned nan values. Have you ever run the model with more than 800 authors? Also, do you know some article that discusses variational inference convergence problems? I think my problem may be due to the number of parameters but I am not sure.\nI would appreciate if you could guide me, Thanks!",
"title": "Max authors",
"type": "issue"
},
{
"action": "created",
"author": "keyonvafa",
"comment_id": 691737652,
"datetime": 1600038006000,
"masked_author": "username_1",
"text": "Hi, hmm. I haven't tried running with 800 authors but that shouldn't be the nan issue (each author is only adding one extra parameter to the model). Out of curiosity, what happens if you keep the dataset the same but change the author indices so that there are only 2 authors (i.e. incorrectly label the authors)? I assume the nans would still be there, but if they're not, that would confirm that the issue is with the number of authors.\r\n\r\nAre you using the TensorFlow or PyTorch implementation? And what is the vocabulary size and the number of documents you're using?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yoyoyyono",
"comment_id": 730111825,
"datetime": 1605757881000,
"masked_author": "username_0",
"text": "Sorry for the lateness of my reply, I had to pause the project for a while. Changing the author indices leaving only 2 authors solved the problem. However, days later I was able to find the root of my problem and it was not the number of authors. The problem was generated because I had authors with 0 vocabulary words and the optimization placed a 0 in the rate parameter of the Poisson distribution, generating Nans in the log_prob.\r\nHowever, eliminating the authors with 0 words in the vocabulary, I have been able to estimate ideal points for datasets with 100,000 authors.\r\nThanks for the answer and for the excellent tutorial on Google Colab.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "keyonvafa",
"comment_id": null,
"datetime": 1605806581000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "keyonvafa",
"comment_id": 730521223,
"datetime": 1605806581000,
"masked_author": "username_1",
"text": "Great, I'm glad it's working now. And thank you!",
"title": null,
"type": "comment"
}
] | 1,792 | false | false | 2 | 5 | false |
statsmodels/statsmodels | statsmodels | 653,455,681 | 6,865 | {
"number": 6865,
"repo": "statsmodels",
"user_login": "statsmodels"
} | [
{
"action": "opened",
"author": "bashtage",
"comment_id": null,
"datetime": 1594227718000,
"masked_author": "username_0",
"text": "Mark as not implemented to stop use\r\n\r\ncloses #2347\r\n\r\n- [ ] closes #xxxx\r\n- [ ] tests added / passed. \r\n- [ ] code/documentation is well formatted. \r\n- [ ] properly formatted commit message. See \r\n [NumPy's guide](https://docs.scipy.org/doc/numpy-1.15.1/dev/gitwash/development_workflow.html#writing-the-commit-message). \r\n\r\n<details>\r\n\r\n\r\n**Notes**:\r\n\r\n* It is essential that you add a test when making code changes. Tests are not \r\n needed for doc changes.\r\n* When adding a new function, test values should usually be verified in another package (e.g., R/SAS/Stata).\r\n* When fixing a bug, you must add a test that would produce the bug in master and\r\n then show that it is fixed with the new code.\r\n* New code additions must be well formatted. Changes should pass flake8. If on Linux or OSX, you can\r\n verify you changes are well formatted by running \r\n ```\r\n git diff upstream/master -u -- \"*.py\" | flake8 --diff --isolated\r\n ```\r\n assuming `flake8` is installed. This command is also available on Windows \r\n using the Windows System for Linux once `flake8` is installed in the \r\n local Linux environment. While passing this test is not required, it is good practice and it help \r\n improve code quality in `statsmodels`.\r\n* Docstring additions must render correctly, including escapes and LaTeX.\r\n\r\n</details>",
"title": "MAINT: Mark VAR from_formula as NotImplemented",
"type": "issue"
}
] | 1,329 | false | true | 1 | 1 | false |
lihangleo2/ShadowLayout | null | 757,921,960 | 74 | null | [
{
"action": "opened",
"author": "dengyiqian",
"comment_id": null,
"datetime": 1607260055000,
"masked_author": "username_0",
"text": "如果有多个子view FrameLayout排版很不方便,继承ConstraintLayout布局就很灵活了",
"title": "多个子view排版不够灵活",
"type": "issue"
},
{
"action": "created",
"author": "lihangleo2",
"comment_id": 741395654,
"datetime": 1607477244000,
"masked_author": "username_1",
"text": "暂且不支持多个子view排布,ShadowLayout只支持一个子view,如果有多布局,你可以当前子view用LinearLayout或RelativeLayout有点类似ScrollView的使用",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "lihangleo2",
"comment_id": null,
"datetime": 1607477244000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 155 | false | false | 2 | 3 | false |
machty/ember-concurrency | null | 641,659,431 | 362 | {
"number": 362,
"repo": "ember-concurrency",
"user_login": "machty"
} | [
{
"action": "opened",
"author": "chancancode",
"comment_id": null,
"datetime": 1592533758000,
"masked_author": "username_0",
"text": "Had I known that this is a thing, I would probably have written the types a bit differently, but this should work and is non-breaking.\r\n\r\nI didn't make `EncapsulatedTaskInstance` public because I had this lingering suspicious that we'll need to add a second generic parameter to `TaskInstance` for the args. I don't want to lockdown that possibility, or deal with strangely mismatched ordering of parameters between `TaskInstance` vs `EncapsulatedTaskInstance`, so I kept that private.\r\n\r\nFor the time being, the public way to type it (as seen in the tests) is `TaskInstance<T> & { ... }`, which I think is plenty acceptable.\r\n\r\nGot to update e-c-async and e-c-ts to account for this, then update the Octane tests.\r\n\r\ncc @username_1",
"title": "Allow accessing encapsulated task state",
"type": "issue"
},
{
"action": "created",
"author": "andreyfel",
"comment_id": 646675487,
"datetime": 1592577738000,
"masked_author": "username_1",
"text": "Hi @username_0! Thank you for this PR! It seems that it is covering my use case! I've tested against that branch and it works!\r\n\r\nThe only thing I'm missing now is how to type `this` within the `perform` function of the encapsulated task. If I want to set a property on task instance inside perform what type does it have? EncapsulatedTaskInstance?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chancancode",
"comment_id": 646844755,
"datetime": 1592597713000,
"masked_author": "username_0",
"text": "I don't think we should block on making `EncapsulatedTaskInstance` public. It is very rare that you would have to type it (please show some examples – most cases I know of are either implicit or can be inferred, and I think it's rare that you would want to use this in a parameter or return position).\r\n\r\nI do think people will copy the type alias, and I think that's not ideal but not so bad. If we a public one, it will be compatible with and won't break these existing usages.\r\n\r\nI do think we should work towards making it public, we just have to address the other blocker first– whether we need to add the second generic to `TaskInstance` – and that's far from a hypothetical/theoretical issue. I don't think this PR is a good place to address that but we can also come to a decision on that fairy soon and fix both.\r\n\r\nIn the meantime, I think this adds enough values to those who need the feature and it's not obvious to me that they would be inconvenienced by this during normal usage.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chancancode",
"comment_id": 646846670,
"datetime": 1592598070000,
"masked_author": "username_0",
"text": "Why did you have to type it? There are tests that confirms the `this` inference works correctly: https://github.com/username_0/ember-concurrency/blob/encapsulated-task-v2/tests/types/ember-concurrency-test.ts#L2347-L2348\r\n\r\nIs that not working for you for some reason?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andreyfel",
"comment_id": 647405720,
"datetime": 1592818819000,
"masked_author": "username_1",
"text": "@username_0, yes, it works in a basic case. But I have a little bit more complex case.\r\nThe idea is to have a task which exposes a property isRunningLong which should turn true in\r\ncase if the task takes longer than `loadingTimeout`. I'm using an encapsulated task with an inner task for it:\r\n\r\n```\r\nexport interface LongRunningTaskDescriptor<T, Args extends any[]> extends EncapsulatedTaskDescriptor<T, Args> {\r\n isRunningLong: boolean,\r\n timeoutTask: TaskProperty<void, []>,\r\n}\r\n\r\nexport function longRunningTask<T, Args extends any[]>(fn: TaskFunction<T, Args>, loadingTimeout = 300): LongRunningTaskDescriptor<T, Args> {\r\n return {\r\n isRunningLong: false,\r\n\r\n timeoutTask: task(function * (this: LongRunningTaskDescriptor<T, Args>) {\r\n yield timeout(loadingTimeout);\r\n set(this, 'isRunningLong', true);\r\n }).drop(),\r\n\r\n * perform(...args): TaskGenerator<T> {\r\n try {\r\n this.timeoutTask.perform();\r\n return yield * fn.apply(this.context, args);\r\n } finally {\r\n this.timeoutTask.cancelAll();\r\n set(this, 'isRunningLong', false);\r\n }\r\n },\r\n };\r\n}\r\n```\r\n\r\nIt is used like this with new e-c and e-c-d:\r\n```\r\n@task({ restartable: true })\r\nfetchSmth = longRunningTask(function * () {\r\n yield fetchSmth();\r\n}, 600)\r\n```\r\n\r\nSo, I have to specify type of `this` for the `timeoutTask`. And inside the `perform` function this.timeoutTask.perform doesn't work. And `taskFor` doesn't work here.\r\nAlso typescript says that this.context doesn't exist. I'm not sure if it is a type issue or context is not a public filed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "maxfierke",
"comment_id": 647821849,
"datetime": 1592869046000,
"masked_author": "username_2",
"text": "`context` is sort of \"intimate\", undocumented API. It's useful for implementing extensions onto e-c, so perhaps its worth making public but its not something we'd expect people to use often (though with encapsulated tasks, it might make more sense.)\r\n\r\nI think a composition like that to create an encapsulated task is a bit of an advanced use-case, so if it's working in the basic case, it may be worth deferring for later unless addressing it is fairly straightforward.",
"title": null,
"type": "comment"
}
] | 4,398 | false | false | 3 | 6 | true |
emilybache/GildedRose-Refactoring-Kata | null | 874,741,177 | 229 | {
"number": 229,
"repo": "GildedRose-Refactoring-Kata",
"user_login": "emilybache"
} | [
{
"action": "opened",
"author": "Pen-y-Fan",
"comment_id": null,
"datetime": 1620061087000,
"masked_author": "username_0",
"text": "- update README.md with latest PHP information\r\n- update composer.json to support PHP 7.3 to PHP8\r\n - active support for PHP 7.2 ended 6 Dec 2020\r\n - PHP8 was released 26-Nov-2020\r\n - update the dependencies\r\n- PHPUnit now version 9.5 and config file updated\r\n- ECS now version 9.3 and config file changed from `ecs.yaml` to `ecs.php`\r\n\r\nApprovalTest removed, in line with latest readme, all set for refactoring :)\r\n\r\nTested with PHP 7.3, 7.4 and 8.0 one failing \"fixme\" != \"foo\" test!",
"title": "Add PHP8",
"type": "issue"
},
{
"action": "created",
"author": "emilybache",
"comment_id": 831703936,
"datetime": 1620108449000,
"masked_author": "username_1",
"text": "Thankyou!",
"title": null,
"type": "comment"
}
] | 497 | false | false | 2 | 2 | false |
pytorch/pytorch | pytorch | 704,166,113 | 44,938 | null | [
{
"action": "opened",
"author": "nightlessbaron",
"comment_id": null,
"datetime": 1600415044000,
"masked_author": "username_0",
"text": "## 🚀 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nAdding DataParallel method for CPU\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nRecently, I have been working on few-shot learning and meta-learning models which in particular need small datasets. Let's take MAML for example. We have scope for parallelizing the inner loop and data parallelism seemed to be the answer for that. While, using GPUs does the work, we can also make use of CPUs for the same, as the data is sufficiently small.\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nMy question is, is it worth looking into adding an option for CPUs in DataParallel module along with GPUs?",
"title": "DataParallel on CPU",
"type": "issue"
},
{
"action": "created",
"author": "agolynski",
"comment_id": 694897420,
"datetime": 1600438857000,
"masked_author": "username_1",
"text": "@username_0 Would https://pytorch.org/docs/stable/multiprocessing.html work for your usecase or you are proposing PT to have data parallel wrapper for this?",
"title": null,
"type": "comment"
}
] | 1,054 | false | false | 2 | 2 | true |
ericoporto/ImGi | null | 812,987,372 | 22 | null | [
{
"action": "opened",
"author": "ericoporto",
"comment_id": null,
"datetime": 1613952180000,
"masked_author": "username_0",
"text": "I think the library is not fully featured yet but it's a bit big already. Find out what is possible to cut, maybe use some macros and overall try to reduce a bit the line count of the file.",
"title": "Reduce size of the .scm",
"type": "issue"
},
{
"action": "created",
"author": "ericoporto",
"comment_id": 782973006,
"datetime": 1613955610000,
"masked_author": "username_0",
"text": "Probably I may have left something unused in the renderer since it's a huge part of the code now.",
"title": null,
"type": "comment"
}
] | 286 | false | false | 1 | 2 | false |
agrbin/svgtex | null | 852,104,057 | 20 | null | [
{
"action": "opened",
"author": "alusiani",
"comment_id": null,
"datetime": 1617780601000,
"masked_author": "username_0",
"text": "Thanks for this nice piece of software!\r\n\r\nAlberto",
"title": "failure to render a long math formula",
"type": "issue"
},
{
"action": "created",
"author": "agrbin",
"comment_id": 817331753,
"datetime": 1618157453000,
"masked_author": "username_1",
"text": "Thank you for the message and thanks for the nice words!\r\n\r\nIt's been a long time since I looked at this code :) I won't be able to help quickly.",
"title": null,
"type": "comment"
}
] | 195 | false | false | 2 | 2 | false |
gdamore/tcell | null | 733,452,624 | 404 | null | [
{
"action": "opened",
"author": "walles",
"comment_id": null,
"datetime": 1604085633000,
"masked_author": "username_0",
"text": "Hi!\r\n\r\nIn 1.x, `tcell.Color(74)` used to return a value matching `tcell.Color74.`\r\n\r\nIn 2.0, those values are different.\r\n\r\nIs this something I should worry about?\r\n\r\n Regards /Johan\r\n\r\n# Repro\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"fmt\"\r\n\t\"strconv\"\r\n\r\n\t\"github.com/username_1/tcell\"\r\n)\r\n\r\nfunc main() {\r\n\tcolorValue := tcell.Color(74)\r\n\r\n\tfmt.Printf(\"Created color value: %s\\n\", strconv.FormatInt(int64(colorValue), 16))\r\n\tfmt.Printf(\"Constant color value: %s\\n\", strconv.FormatInt(int64(tcell.Color74), 16))\r\n\tif colorValue == tcell.Color74 {\r\n\t\tfmt.Println(\"Equal, good.\")\r\n\t} else {\r\n\t\tfmt.Println(\"Not equal, bad!\")\r\n\t}\r\n}\r\n```",
"title": "In 2.0, color constants don't match created colors",
"type": "issue"
},
{
"action": "closed",
"author": "gdamore",
"comment_id": null,
"datetime": 1604087283000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 719761660,
"datetime": 1604087283000,
"masked_author": "username_1",
"text": "You should not worry about it. The numeric value of the color includes a bit indicating that the color is \"valid\". The low order bits are still 74 (or whatever).\r\n\r\nThe reason for adding in an extra high order bit is so that we can treat 0 (normally black) as an uninitialized value.\r\n\r\nThe numeric values are \"private\" -- if you want the actual RGB value, you can ask for it. We don't give you the palette index as a value -- mostly because we didn't see a need for that, but if you want to have that I'm sure its an API we could provide.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 719761852,
"datetime": 1604087309000,
"masked_author": "username_1",
"text": "(These changes in the color handling were why tcell was bumped to v2 btw...)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "walles",
"comment_id": 719895462,
"datetime": 1604127978000,
"masked_author": "username_0",
"text": "So I have a function returning different colors depending on its input, and I want to test it.\r\n\r\nCurrently the test looks like this, and it works on 1.4 and fails on 2.0:\r\nhttps://github.com/username_0/moar/blob/a174d861801c4c206c116635638d430c08361c19/m/ansiTokenizer_test.go#L93-L99\r\n\r\n```go\r\nfunc TestConsumeCompositeColorHappy(t *testing.T) {\r\n\t// 8 bit color\r\n\t// Example from: https://github.com/username_0/moar/issues/14\r\n\tnewIndex, color, err := consumeCompositeColor([]string{\"38\", \"5\", \"74\"}, 0)\r\n\tassert.NilError(t, err)\r\n\tassert.Equal(t, newIndex, 3)\r\n\tassert.Equal(t, *color, tcell.Color74)\r\n```\r\n\r\nWhat would be the best way for me to phrase this test with tcell 2.0?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "walles",
"comment_id": 723466729,
"datetime": 1604767076000,
"masked_author": "username_0",
"text": "@username_1 should I turn my last comment here into a new ticket, do you want to re-open this one or something else?\r\n\r\nI would *like* to re-open this issue until it has been hashed out, but I don't have permissions to do so.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 723477256,
"datetime": 1604772965000,
"masked_author": "username_1",
"text": "Wait a minute -- you're trying to assert that this color (tcell.Color74) has a specific numeric value? That specifically is not part of our public API.\r\n\r\nHowever.... if you want to get the TrueColor value of tcell.Color74, you can do that via a call to the colors RGB() or Hex() methods. That will give you a value. Maybe that will resolve what you want?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 723477498,
"datetime": 1604773120000,
"masked_author": "username_1",
"text": "Specifically try \"tcell.Color74.Hex()\" to get the 24 bit value, or tcell.Color74.RGB() to return the red, green, blue components. Admittedly when using palette colors this is based on the published default palette of xterm, which may or may not match what folks have configured. (Generally palette entries above >= 16 are rarely if ever modified by themes, but the lower 16 entries are frequently changed via themes.)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 723477743,
"datetime": 1604773253000,
"masked_author": "username_1",
"text": "Btw, if you need to create a color a specific RGB value, you should use NewRGBColor(). It will create the precise value, or as close a match as we can using the palette if direct RGB colors are not supported by the terminal.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "walles",
"comment_id": 723540102,
"datetime": 1604819781000,
"masked_author": "username_0",
"text": "Well, verifying that my color has a specific numeric value is *exactly* what I want to do.\r\n\r\n# Background\r\nIn this case, [moar](https://github.com/username_0/moar) is a pager.\r\n\r\nAnd the test case outlined above is for verifying that it converts an input ANSI escape sequence requesting color 74 to `tcell.Color74`.\r\n\r\nSince this is a pager, it should display the same thing as `cat` would, but with paging.\r\n\r\nSo **I can't convert to RGB**, that would make `moar` display the wrong things, by not following the terminal theming.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "walles",
"comment_id": 730358716,
"datetime": 1605790701000,
"masked_author": "username_0",
"text": "@username_1 any opinions about this use case? ^",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 730402486,
"datetime": 1605795226000,
"masked_author": "username_1",
"text": "The colors starting at index 1 will have the same offset. So compare the difference to color1 if you need that verification. \n\nAlternatively just compare the RGB values which should be good enough for your self tests.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "walles",
"comment_id": 739562080,
"datetime": 1607287991000,
"masked_author": "username_0",
"text": "Given that I have a variable containing the number `74`, what do you think would be the best way for me to turn that into `tcell.Color74`?\r\n\r\nI have [a workaround](https://github.com/username_0/moar/blob/08db9e9cd6232ca0c13c7b434d69c6d16321fd5d/m/ansiTokenizer.go#L426-L432) for how to do that, but it doesn't feel great, so I'd like to know what you think would be the best way.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 740055057,
"datetime": 1607361169000,
"masked_author": "username_1",
"text": "Take 73 (74-1) and add it to Color1.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gdamore",
"comment_id": 740055707,
"datetime": 1607361239000,
"masked_author": "username_1",
"text": "Maybe I should add a method to create a color from a palette index.",
"title": null,
"type": "comment"
}
] | 4,410 | false | false | 2 | 15 | true |
ngageoint/scale-ui | ngageoint | 456,251,953 | 123 | null | [
{
"action": "opened",
"author": "cshamis",
"comment_id": null,
"datetime": 1560519192000,
"masked_author": "username_0",
"text": "Permission denied error when trying to view docs in new ui.",
"title": "Docs in 7.0 don't work",
"type": "issue"
},
{
"action": "created",
"author": "ericsvendsen-mil",
"comment_id": 502274976,
"datetime": 1560547876000,
"masked_author": "username_1",
"text": "This is most likely due to an incorrect value specified in the runtime config for the `documentation` property. As a result this can be fixed where it is currently deployed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "emimaesmith",
"comment_id": 502658701,
"datetime": 1560773858000,
"masked_author": "username_2",
"text": "I also didn't build the docs when I deployed Scale the last few times because it was taking 10+ minutes to run the build. Taking out building the docs dropped the build time in half.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "mikenholt",
"comment_id": null,
"datetime": 1563807126000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
}
] | 414 | false | false | 4 | 4 | false |
asvetliakov/vscode-neovim | null | 775,826,454 | 468 | null | [
{
"action": "opened",
"author": "sombra-yevstakhii-krul",
"comment_id": null,
"datetime": 1609238791000,
"masked_author": "username_0",
"text": "yesterday everything worked perfectly so I assume the latest update broke something. When I use commands like `h`/`j`/`k`/`l` or even `gg` or `0` cursor moves in neovim but vscode shows it in its previous position. If I enter insert mode and then move cursor it updates correctly, search also works fine. Tried empty vimrc, didn't help.\r\n\r\nUsing macOS Big Sur 11.0.1\r\n\r\nneovim version:\r\n\r\n",
"title": "Most motion commands stopped working after update",
"type": "issue"
},
{
"action": "created",
"author": "David-Else",
"comment_id": 752048893,
"datetime": 1609242699000,
"masked_author": "username_1",
"text": "I am a bit unclear about what you mean. You are saying NeoVim works fine with h/j/k/l but VS Code with the extension in normal mode does not? What does 'previous position' refer to? Are you running an actual instance of NeoVim as well as VS Code?\r\n\r\nIt works fine here with `NVIM v0.5.0-dev+975-ga58c5509d\r\n` from a day or two ago and the latest NeoVim 0.0.72 extension on Linux.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sombra-yevstakhii-krul",
"comment_id": 752052500,
"datetime": 1609243529000,
"masked_author": "username_0",
"text": "I meant that it seems like cursor position doesn't update in vscode when it updates in neovim, hopefully a video will help:\r\n\r\nhttps://user-images.githubusercontent.com/50657402/103282348-278c3700-49de-11eb-9347-a422c9c35f9b.mov\r\n\r\nIn the video I press `j`/`k` a bunch of times, then use insert mode, then press `x` in normal a few times. I'm using [quick-scope](https://github.com/unblevable/quick-scope) that highlights unique characters in each word on current line so that I can easily use `f`/`F` to jump there. As you can see quick scope (vim plugin) updates, but vscode doesn't until I go to insert mode and move cursor.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sombra-yevstakhii-krul",
"comment_id": 752054513,
"datetime": 1609243978000,
"masked_author": "username_0",
"text": "Update: tried installing different versions, 0.0.63 works, everything above it doesn't",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "asvetliakov",
"comment_id": 752079001,
"datetime": 1609249369000,
"masked_author": "username_2",
"text": "@username_0 can you enable debug logs in ext settings and upload it somewhere?\r\n\r\nAlso can you check if the issue still happens with default vscode settings and optionally with all extensions disabled?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sombra-yevstakhii-krul",
"comment_id": 752125537,
"datetime": 1609256203000,
"masked_author": "username_0",
"text": "Sorry for not replying for so long, updating neovim required also updating xcode and then it took me some time to realize I also need to update luajit (with `--HEAD` flag in brew)\r\n\r\nAnyway update solved the issue, thanks",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "sombra-yevstakhii-krul",
"comment_id": null,
"datetime": 1609256203000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2,026 | false | false | 3 | 7 | true |
Molunerfinn/PicGo | null | 874,323,863 | 666 | null | [
{
"action": "opened",
"author": "vay1314",
"comment_id": null,
"datetime": 1620027979000,
"masked_author": "username_0",
"text": "<!--\r\nPicGo Issue template\r\nPlease follow this template when submitting, otherwise the issue will be closed.\r\n**Before asking, please make sure you have read the FAQ, the configuration manual, and the closed issues. Otherwise duplicate questions will also be closed!**\r\n-->\r\n\r\n**Declaration: I have carefully read the [documentation](https://picgo.github.io/PicGo-Doc/), the [FAQ](https://github.com/username_1/PicGo/blob/dev/FAQ.md), and searched the closed [issues](https://github.com/username_1/PicGo/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aclosed) and still could not find an answer, so I am opening a new issue.**\r\n\r\n## Issue type\r\n\r\nBug Report\r\n\r\n## PicGo information\r\n\r\nPicGo version: 2.3.0-beta6\r\nPlatform: Windows\r\n\r\n## Reproduction\r\n\r\n<!-- Fill in this section for a Bug Report -->\r\n<!-- Please attach relevant screenshots -->\r\n<!-- Please attach PicGo's error log (as text), otherwise the cause cannot be determined -->\r\n<!-- The log can be found under PicGo Settings -> Set log file -> Open -->\r\nAfter pasting an image in Typora it is automatically uploaded to the image host, but PicGo reports a pending previous task every time, even though there is actually no upload task.\r\nThe plugin used is ssh-scp-uploader\r\nScreenshots and log below\r\n\r\n\r\n\r\n2021-05-03 15:32:48 [PicGo INFO] [PicGo Server] get the request \r\n2021-05-03 15:32:48 [PicGo INFO] [PicGo Server] upload files in list \r\n2021-05-03 15:32:48 [PicGo INFO] Before transform \r\n2021-05-03 15:32:48 [PicGo INFO] Transforming... Current transformer is [path] \r\n2021-05-03 15:32:48 [PicGo INFO] [PicGo Server] get the request \r\n2021-05-03 15:32:48 [PicGo INFO] [PicGo Server] upload files in list \r\n2021-05-03 15:32:48 [PicGo WARN] [PicGo Server] upload failed, see picgo.log for more detail ↑ \r\n2021-05-03 15:32:48 [PicGo WARN] [PicGo Server] upload failed, see picgo.log for more detail ↑ \r\n2021-05-03 15:32:48 [PicGo INFO] Before upload \r\n2021-05-03 15:32:48 [PicGo INFO] beforeUploadPlugins: renameFn running \r\n2021-05-03 15:32:48 [PicGo INFO] Uploading... Current uploader is [ssh-scp-uploader] \r\n2021-05-03 15:32:50 [PicGo SUCCESS] \r\nhttps://xxxxxxxxxxxxxxx/uploads/2021/05/image-20210503153248908.png \r\n## Feature request\r\n\r\n<!-- Fill in this section for a Feature Request -->\r\n<!-- Describe in detail the feature you envision or the improvement to an existing feature. -->\r\n\r\n---\r\n\r\n<!-- \r\n Finally, if you like PicGo, consider giving it a star~\r\n If you can, please buy me a coffee? There is a sponsorship QR code on the homepage, thank you for your support! \r\n -->",
"title": "PicGo reports a pending previous task",
"type": "issue"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 831097551,
"datetime": 1620029232000,
"masked_author": "username_1",
"text": "The log does show that two upload requests were received; the next request came in before the previous one had finished. It is normal for this message to appear while the previous upload has not completed.\n\nThis will be improved later",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "vay1314",
"comment_id": 831137479,
"datetime": 1620033836000,
"masked_author": "username_0",
"text": "So PicGo automatically ran the upload twice at the same time, right? One more question: previously, after pasting a local image in Typora, once PicGo uploaded it successfully it would automatically replace the local path in Typora with the uploaded network URL. Since this pending-previous-task message appeared, the path is no longer replaced even when the upload succeeds. Is this a Typora issue or a PicGo issue?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 831143378,
"datetime": 1620034521000,
"masked_author": "username_1",
"text": "For a normal clipboard image paste, Typora should not send two requests. PicGo will handle it on its side later",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rizaust",
"comment_id": 833174487,
"datetime": 1620267404000,
"masked_author": "username_2",
"text": "+1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "liyunfu123",
"comment_id": 833434600,
"datetime": 1620298885000,
"masked_author": "username_3",
"text": "Gitee also shows the pending-previous-task message; I reinstalled the 2.2 stable release over it (no pending-task message) and then went to 2.3 beta5 (the message appears)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 833438041,
"datetime": 1620299255000,
"masked_author": "username_1",
"text": "This will be fixed in the next beta release",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 833685898,
"datetime": 1620320219000,
"masked_author": "username_1",
"text": "https://github.com/typora/typora-issues/issues/4379\r\n\r\nReported to Typora",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 833691462,
"datetime": 1620320690000,
"masked_author": "username_1",
"text": "Anyone affected can downgrade to Typora 0.9.x for now",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ytfsL",
"comment_id": 834987227,
"datetime": 1620441523000,
"masked_author": "username_4",
"text": "I ran into this problem too. It is a Typora issue: when testing uploads with Typora, two images are uploaded",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Molunerfinn",
"comment_id": null,
"datetime": 1620528746000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 835645135,
"datetime": 1620528906000,
"masked_author": "username_1",
"text": "The next release will fix the \"pending previous task\" message; every upload task is now independent, so the pending-task problem no longer occurs.\r\n\r\nHowever, Typora still issues two simultaneous requests to upload the same image, which causes problems on some image hosts. For example, GitHub does not allow uploading a file with a duplicate name, so once the first image uploads successfully, the second identical one fails. Some other cloud providers, such as Tencent Cloud and Alibaba Cloud, are not affected. This needs a fix from Typora. If it affects your usage, please downgrade Typora to 0.9.98.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Molunerfinn",
"comment_id": 835791560,
"datetime": 1620561411000,
"masked_author": "username_1",
"text": "Update: the latest Typora 0.10.9 has fixed this",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ysinx",
"comment_id": 868168327,
"datetime": 1624590351000,
"masked_author": "username_5",
"text": "I ran into this problem with picgo core as well",
"title": null,
"type": "comment"
}
] | 2,549 | false | false | 6 | 14 | true |
AppsFlyerSDK/appsflyer-flutter-plugin | AppsFlyerSDK | 863,484,815 | 120 | null | [
{
"action": "opened",
"author": "oyzxchi",
"comment_id": null,
"datetime": 1618985456000,
"masked_author": "username_0",
"text": "I want to use appsflyer to get advertising information from Facebook, such as Facebook advertising campaign, ads set, Ads and other information. The information I see on af is \"Get af_dp in onConversionDataSuccess callback\". I want to know which interface method should be used To get the onConversionDataSuccess callback\r\n\r\nMy appsflyer_sdk version is 6.2.4+1-flutterv1\r\n\r\nflutter doctor:\r\nDoctor summary (to see all details, run flutter doctor -v):\r\n[✓] Flutter (Channel stable, 1.22.2)\r\n[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.2)\r\n[✓] Xcode - develop for iOS and macOS (Xcode 12.3)\r\n[✓] Android Studio (version 4.0)\r\n[✓] VS Code (version 1.54.1)\r\n[✓] Connected device (1 available)",
"title": "How to get campaign name of FB ADS by AppsFlyer?",
"type": "issue"
},
{
"action": "closed",
"author": "GM-appsflyer",
"comment_id": null,
"datetime": 1619695333000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 722 | false | false | 2 | 2 | false |
aws/aws-cdk | aws | 856,681,935 | 14,130 | null | [
{
"action": "opened",
"author": "tom10271",
"comment_id": null,
"datetime": 1618299993000,
"masked_author": "username_0",
"text": "<!--\r\ndescription of the bug:\r\n-->\r\n\r\n### Reproduction Steps\r\nIt is hard for me to write everything down here or to prepare a minimal reproducible example. I don't have time for that\r\n\r\nConsider that I have a VPC with RDS and EC2, everything working, and then I added these.\r\n\r\n```ts\r\nexport function storageSetup(scope: Construct) {\r\n STORAGE.EFS.testing = new FileSystem(scope, 'App testing server EFS', {\r\n vpc: NETWORKING.VPC,\r\n performanceMode: PerformanceMode.GENERAL_PURPOSE,\r\n enableAutomaticBackups: true,\r\n removalPolicy: RemovalPolicy.RETAIN,\r\n lifecyclePolicy: LifecyclePolicy.AFTER_90_DAYS,\r\n });\r\n}\r\n```\r\n\r\nAnd run `cdk diff`, the diff is to delete everything but create the EFS.\r\n\r\n<!--\r\nminimal amount of code that causes the bug (if possible) or a reference:\r\n-->\r\n\r\n### What did you expect to happen?\r\n\r\n<!--\r\nWhat were you trying to achieve by performing the steps above?\r\n-->\r\n\r\n### What actually happened?\r\n\r\n<!--\r\nWhat is the unexpected behavior you were seeing? If you got an error, paste it here.\r\n-->\r\n\r\n\r\n### Environment\r\n\r\n - **CDK CLI Version :** 1.95.1 (build ed2bbe6)\r\n - **Framework Version:** ???\r\n - **Node.js Version:** <!-- Version of Node.js (run the command `node -v`) --> v14.15.1\r\n - **OS :** macOS Catalina 10.15.2 (19C57)\r\n - **Language (Version):** <!-- [all | TypeScript (3.8.3) | Java (8)| Python (3.7.3) | etc... ] --> TS ~3.9.7\r\n\r\n### Other\r\n\r\n<!-- e.g. detailed explanation, stacktraces, related issues, suggestions on how to fix, links for us to have context, eg. associated pull-request, stackoverflow, slack, etc -->\r\n\r\n\r\n\r\n\r\n--- \r\n\r\nThis is :bug: Bug Report",
"title": "EFS: No clue but adding EFS into my existing stack result in deletion of everything",
"type": "issue"
},
{
"action": "created",
"author": "tom10271",
"comment_id": 818526575,
"datetime": 1618300388000,
"masked_author": "username_0",
"text": "And here is how CDK no longer delete everything but just to create the EFS is for me to create EFS with specifying which subnet to use\r\n\r\n```ts\r\n STORAGE.EFS.testing = new FileSystem(scope, 'App testing server EFS', {\r\n vpc: NETWORKING.VPC,\r\n performanceMode: PerformanceMode.GENERAL_PURPOSE,\r\n enableAutomaticBackups: true,\r\n removalPolicy: RemovalPolicy.RETAIN,\r\n lifecyclePolicy: LifecyclePolicy.AFTER_90_DAYS,\r\n vpcSubnets: {\r\n subnets: [NETWORKING.PUBLIC_SUBNET_2B]\r\n },\r\n });\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iliapolo",
"comment_id": 820212874,
"datetime": 1618473810000,
"masked_author": "username_1",
"text": "@username_0 I am not able to reproduce this. \r\n\r\nUsing the following code:\r\n\r\n```ts\r\nconst vpc = new ec2.Vpc(this, 'Vpc');\r\n\r\n// Create the Database\r\nnew docdb.DatabaseCluster(this, 'todos-database', {\r\n masterUser: {\r\n username: 'epolon', // NOTE: 'admin' is reserved by DocumentDB\r\n },\r\n instanceType: ec2.InstanceType.of(ec2.InstanceClass.R5, ec2.InstanceSize.LARGE),\r\n vpcSubnets: {\r\n subnetType: ec2.SubnetType.PRIVATE\r\n },\r\n vpc,\r\n dbClusterName: \"todos-database-cluster\",\r\n removalPolicy: cdk.RemovalPolicy.DESTROY,\r\n});\r\n```\r\n\r\nIf I add an `efs.FileSystem`, I get the expected diff:\r\n\r\n```ts\r\nnew efs.FileSystem(this, 'FileSystem', {\r\n vpc,\r\n performanceMode: efs.PerformanceMode.GENERAL_PURPOSE,\r\n enableAutomaticBackups: true,\r\n removalPolicy: cdk.RemovalPolicy.RETAIN,\r\n lifecyclePolicy: efs.LifecyclePolicy.AFTER_90_DAYS,\r\n});\r\n```\r\n\r\n```console\r\n───┬────────────────────────────────────────┬─────┬────────────┬─────────────────┐\r\n│ │ Group │ Dir │ Protocol │ Peer │\r\n├───┼────────────────────────────────────────┼─────┼────────────┼─────────────────┤\r\n│ + │ ${FileSystem/EfsSecurityGroup.GroupId} │ Out │ Everything │ Everyone (IPv4) │\r\n└───┴────────────────────────────────────────┴─────┴────────────┴─────────────────┘\r\n(NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299)\r\n\r\nResources\r\n[+] AWS::EFS::FileSystem FileSystem FileSystem8A8E25C0 \r\n[+] AWS::EC2::SecurityGroup FileSystem/EfsSecurityGroup FileSystemEfsSecurityGroup212D3ACB \r\n[+] AWS::EFS::MountTarget FileSystem/EfsMountTarget1 FileSystemEfsMountTarget1586453F0 \r\n[+] AWS::EFS::MountTarget FileSystem/EfsMountTarget2 FileSystemEfsMountTarget24B8EBB43 \r\n[+] AWS::EFS::MountTarget FileSystem/EfsMountTarget3 FileSystemEfsMountTarget37C2F9139 \r\n```\r\n\r\nAre you able to reproduce this in a clean installation? \r\nIs it possible something in your configuration is causing this? We need an isolated reproduction to keep investigating it.",
"title": null,
"type": "comment"
}
] | 4,235 | false | true | 2 | 3 | true |
Road-of-CODEr/we-hate-js | Road-of-CODEr | 845,651,923 | 50 | null | [
{
"action": "opened",
"author": "hayoung0Lee",
"comment_id": null,
"datetime": 1617156235000,
"masked_author": "username_0",
"text": "- [ ] Chapter 16: Property attributes\r\n- [ ] Chapter 17: Object creation by constructor functions\r\n- [ ] Chapter 18: Functions and first-class objects\r\n- [ ] Chapter 19: Prototypes\r\n- [ ] Chapter 20: strict mode",
"title": "Chapters 16-20 study",
"type": "issue"
},
{
"action": "closed",
"author": "1ilsang",
"comment_id": null,
"datetime": 1626546560000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 117 | false | false | 2 | 2 | false |
microsoft/FluidFramework | microsoft | 878,170,895 | 6,065 | {
"number": 6065,
"repo": "FluidFramework",
"user_login": "microsoft"
} | [
{
"action": "opened",
"author": "ChumpChief",
"comment_id": null,
"datetime": 1620342663000,
"masked_author": "username_0",
"text": "Resolves #6015\r\n\r\nIf `createSubDirectory()` is called with a subdirectory name that already exists, we retrieve the already-existing subdirectory and return it rather than creating a new one. This PR documents that more explicitly, plus adds a couple tests that prove it more explicitly (it was tangentially tested in other cases, but this is more exhaustive).\r\n\r\n~~While looking at this, I noticed that we would still submit a `createSubDirectory` op in the case that it already existed -- this was benign since all collaborators would just ignore it basically, but we can eliminate this op traffic as unnecessary.~~\r\n\r\nActually this is revealing we have some eventual consistency bugs for simultaneous delete/create subdirectory in combination with key sets within that deleted/created subdirectory. Reverting the directory changes in favor of sorting those out before changing behavior -- so this is just a documentation and test PR now.",
"title": "Double createSubDirectory handling",
"type": "issue"
},
{
"action": "created",
"author": "ChumpChief",
"comment_id": 833972550,
"datetime": 1620347711000,
"masked_author": "username_0",
"text": "Filed #6069 for the eventual consistency issues discovered.",
"title": null,
"type": "comment"
}
] | 1,001 | false | false | 1 | 2 | false |
EGI-Federation/documentation | EGI-Federation | 900,778,289 | 218 | {
"number": 218,
"repo": "documentation",
"user_login": "EGI-Federation"
} | [
{
"action": "opened",
"author": "mviljoen-egi",
"comment_id": null,
"datetime": 1621947618000,
"masked_author": "username_0",
"text": "improved top-level description of this section\r\n\r\n<!--\r\nA good PR should describe what benefit this brings to the repository.\r\nIdeally, there is an existing issue which the PR address.\r\n\r\nPlease check the [Contributing guide](https://docs.egi.eu/about/contributing/)\r\nfor style requirements and advice.\r\n-->\r\n\r\n# Summary\r\n\r\n<!-- Describe in plain English what this PR does -->\r\nimproved top-level description of this section\r\n---\r\n\r\n<!-- Add, if any, the related issue here, e.g. #6 -->\r\n\r\n**Related issue :**",
"title": "Update _index.md",
"type": "issue"
},
{
"action": "created",
"author": "gwarf",
"comment_id": 847897125,
"datetime": 1621951344000,
"masked_author": "username_1",
"text": "Thanks @username_0, it's still failing the tests due to spaces at end of lines: https://github.com/EGI-Federation/documentation/pull/218/checks?check_run_id=2665456529\r\n\r\nCan you please edit this PR to fix this?\r\nYou should be able to edit the file by going to the https://github.com/EGI-Federation/documentation/pull/218/files tabs and using `Edit file` from the `...` menu.",
"title": null,
"type": "comment"
}
] | 886 | false | false | 2 | 2 | true |
davodesign84/react-native-mixpanel | null | 734,046,295 | 261 | null | [
{
"action": "opened",
"author": "AashJ",
"comment_id": null,
"datetime": 1604260098000,
"masked_author": "username_0",
"text": "What is the alternative for mixpanel.people.set as seen in the node.js documentation (https://developer.mixpanel.com/docs/nodejs)? \r\n\r\nMy use case is I already have user profiles stored in a database, and am now adding in a mixpanel integration. I want to create user profiles if they don't already exist, but identify otherwise.",
"title": "Mixpanel.people.set",
"type": "issue"
},
{
"action": "created",
"author": "eugenetraction",
"comment_id": 724392535,
"datetime": 1604972169000,
"masked_author": "username_1",
"text": "If I'm not mistaken this is the syntax: \r\n\r\n`// Set People properties (warning: if no mixpanel profile has been assigned to the current user when this method is called, it will automatically create a new mixpanel profile and the user will no longer be anonymous in Mixpanel)\r\nMixpanel.set({\"$email\": \"elvis@email.com\"});\r\n\r\n// Set People Properties Once (warning: if no mixpanel profile has been assigned to the current user when this method is called, it will automatically create a new mixpanel profile and the user will no longer be anonymous in Mixpanel)\r\nMixpanel.setOnce({\"$email\": \"elvis@email.com\", \"Created\": new Date().toISOString()});`",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "AashJ",
"comment_id": null,
"datetime": 1614021064000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 975 | false | false | 2 | 3 | false |
bukalapak/snowboard | bukalapak | 788,494,371 | 169 | null | [
{
"action": "opened",
"author": "X-Ray-Jin",
"comment_id": null,
"datetime": 1610997266000,
"masked_author": "username_0",
"text": "Anyone an idea how to fix that? I tried installing Protagonist and Node-Gyp in various versions manually but that doesn't help. Tried as normal and as admin user. Windows build tools are installed, too.",
"title": "Global Snowboard installation fails",
"type": "issue"
},
{
"action": "created",
"author": "u4989190",
"comment_id": 810703024,
"datetime": 1617156564000,
"masked_author": "username_1",
"text": "you may take a look at this issue [119](https://github.com/bukalapak/snowboard/issues/119)\r\nor you may run snowboard in docker instead installing it on your local machine.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "u4989190",
"comment_id": 812957425,
"datetime": 1617502178000,
"masked_author": "username_1",
"text": "@username_0 \r\nthank you for your recommendation. I managed to install snowboard at last (not on Windows though, I did it on WSL2), but still failed to parse the apib file. It seems that another team member, who is using a Mac, edited the file with LF breaks, and since Windows is CRLF, it just doesn't work out. I just gave up and asked him to generate a static HTML for me so I will not parse the apib files.\r\n\r\nI am using API Blueprint only because our client chose the tech stack. For those projects where I am the leader, I usually go with OpenAPI, and if you are building REST API endpoints, I strongly encourage you to take a look at it. It goes well with a lot of well-developed tools, such as Swagger, Stoplight etc, and the community is thriving. Anyway, good luck!",
"title": null,
"type": "comment"
}
] | 1,153 | false | false | 2 | 3 | true |
shibing624/pycorrector | null | 833,439,711 | 196 | null | [
{
"action": "opened",
"author": "cxy86121-sudo",
"comment_id": null,
"datetime": 1615961723000,
"masked_author": "username_0",
"text": "你是我的小宝呗 []\r\n\r\n\r\nUsing the correction module, after trying a few examples I found that many of them cannot be corrected, such as the two examples above.",
"title": "Experience using the pycorrector.correct module",
"type": "issue"
},
{
"action": "created",
"author": "shibing624",
"comment_id": 802499933,
"datetime": 1616122092000,
"masked_author": "username_1",
"text": "This is a long-tail missed-recall case; I suggest adding it to the confusion dict yourself to handle it.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "shibing624",
"comment_id": null,
"datetime": 1620569185000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 85 | false | false | 2 | 3 | false |
dotnet/AspNetCore.Docs | dotnet | 795,479,495 | 21,335 | {
"number": 21335,
"repo": "AspNetCore.Docs",
"user_login": "dotnet"
} | [
{
"action": "opened",
"author": "Rick-Anderson",
"comment_id": null,
"datetime": 1611784893000,
"masked_author": "username_0",
"text": "Fixes #21315\r\n[Internal review URL](https://review.docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&branch=pr-en-us-21335#command-line)",
"title": "Kestrel config command line",
"type": "issue"
},
{
"action": "created",
"author": "Rick-Anderson",
"comment_id": 768837682,
"datetime": 1611816301000,
"masked_author": "username_0",
"text": "@natenho please review.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Rick-Anderson",
"comment_id": 773627161,
"datetime": 1612475515000,
"masked_author": "username_0",
"text": "@serpent5 please review my latest to see if I've addressed `@halter73` feedback. No hurry.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Rick-Anderson",
"comment_id": 778849804,
"datetime": 1613340601000,
"masked_author": "username_0",
"text": "@serpent5 much appreciated",
"title": null,
"type": "comment"
}
] | 313 | false | false | 1 | 4 | false |
TerriaJS/terriajs | TerriaJS | 753,439,424 | 5,042 | {
"number": 5042,
"repo": "terriajs",
"user_login": "TerriaJS"
} | [
{
"action": "opened",
"author": "nf-s",
"comment_id": null,
"datetime": 1606739447000,
"masked_author": "username_0",
"text": "### Revert magda config ID patch + add shareKey\r\n\r\nFixes https://github.com/TerriaJS/terriajs/issues/4978 https://github.com/TerriaJS/qld-digital-twin/issues/237 https://github.com/TerriaJS/terrace/issues/142\r\n\r\nPartially fixes (chill github don't close the issue when I merge this) https://github.com/TerriaJS/terriajs/issues/4774 \r\n\r\n### Checklist\r\n\r\n- [x] Mostly reverting things - no Tests needed\r\n- [x] I've updated CHANGES.md with what I changed.",
"title": "Revert magda config ID patch + add shareKey",
"type": "issue"
},
{
"action": "created",
"author": "nf-s",
"comment_id": 740432396,
"datetime": 1607411806000,
"masked_author": "username_0",
"text": "Closed due to decision made in https://github.com/TerriaJS/terriajs/pull/5056\r\nWe will be keeping the Magda Reference root group id as `\"/\"`",
"title": null,
"type": "comment"
}
] | 596 | false | false | 1 | 2 | false |
Unity-Technologies/datasetinsights | Unity-Technologies | 700,006,017 | 93 | null | [
{
"action": "opened",
"author": "adason",
"comment_id": null,
"datetime": 1599881209000,
"masked_author": "username_0",
"text": "**Why you need this feature:**\r\nThe current GCSDownloader tries to download file sequentially. \r\n\r\n**Describe the solution you'd like:**\r\n\r\nUse multi-threading to speedup the download process. \r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]",
"title": "Improve GCSDownloader to enable multi-threading",
"type": "issue"
}
] | 305 | false | false | 1 | 1 | false |
cyrilou242/covid-19 | null | 615,246,275 | 51 | {
"number": 51,
"repo": "covid-19",
"user_login": "cyrilou242"
} | [
{
"action": "opened",
"author": "donok1",
"comment_id": null,
"datetime": 1589053134000,
"masked_author": "username_0",
"text": "I took the liberty of replacing the spaces before detached punctuation marks («»:!?)\r\n\r\nThere are also a few \"\" and 2-3 footnote adjustments.\r\n\r\nand the R < 1 from #50",
"title": "non-breaking space + a few details",
"type": "issue"
},
{
"action": "created",
"author": "donok1",
"comment_id": 626225836,
"datetime": 1589053226000,
"masked_author": "username_0",
"text": "I will try to reread the whole thing once more tomorrow. Have a good evening ;)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cyrilou242",
"comment_id": 626228400,
"datetime": 1589054598000,
"masked_author": "username_1",
"text": "great! I pulled your branch, rebased (I had made a few changes in the meantime) and merged, so I am closing this one.\r\n\r\nI am not touching anything until tomorrow evening!",
"title": null,
"type": "comment"
}
] | 393 | false | false | 2 | 3 | false |
styled-components/styled-components | styled-components | 634,041,801 | 3,166 | null | [
{
"action": "opened",
"author": "dimaqq",
"comment_id": null,
"datetime": 1591585484000,
"masked_author": "username_0",
"text": "I'm using:\r\n* styled-components@5.1.1 with `pure: true`\r\n* import them `from \"styled-components/macro\"`\r\n* in a create-react-app app\r\n* targeting only evergreen browsers\r\n\r\nBuilding the app, I find this in `2.<hash>.chunk.js`:\r\n\r\n```js\r\n var b = \"undefined\" !== typeof e && (Object({\r\n NODE_ENV: \"production\",\r\n PUBLIC_URL: \"\",\r\n WDS_SOCKET_HOST: void 0,\r\n WDS_SOCKET_PATH: void 0,\r\n WDS_SOCKET_PORT: void 0,\r\n REACT_APP_GIT_SHA: \"d317dc12351f14768f919735d4486a651224f448\"\r\n }).REACT_APP_SC_ATTR || Object({\r\n NODE_ENV: \"production\",\r\n PUBLIC_URL: \"\",\r\n WDS_SOCKET_HOST: void 0,\r\n WDS_SOCKET_PATH: void 0,\r\n WDS_SOCKET_PORT: void 0,\r\n REACT_APP_GIT_SHA: \"d317dc12351f14768f919735d4486a651224f448\"\r\n }).SC_ATTR) || \"data-styled\",\r\n w = \"undefined\" !== typeof window && \"HTMLElement\" in\r\n window,\r\n _ = \"boolean\" === typeof SC_DISABLE_SPEEDY &&\r\n SC_DISABLE_SPEEDY || \"undefined\" !== typeof e && (\r\n Object({\r\n NODE_ENV: \"production\",\r\n PUBLIC_URL: \"\",\r\n WDS_SOCKET_HOST: void 0,\r\n WDS_SOCKET_PATH: void 0,\r\n WDS_SOCKET_PORT: void 0,\r\n REACT_APP_GIT_SHA: \"d317dc12351f14768f919735d4486a651224f448\"\r\n }).REACT_APP_SC_DISABLE_SPEEDY || Object({\r\n NODE_ENV: \"production\",\r\n PUBLIC_URL: \"\",\r\n WDS_SOCKET_HOST: void 0,\r\n WDS_SOCKET_PATH: void 0,\r\n WDS_SOCKET_PORT: void 0,\r\n REACT_APP_GIT_SHA: \"d317dc12351f14768f919735d4486a651224f448\"\r\n }).SC_DISABLE_SPEEDY) || !1,\r\n```\r\n\r\nWould it be possible not to include the entire (?) environment in the build?\r\n\r\nI was trying to set up `REACT_APP_GIT_SHA` env var and use it **very carefully** so that most of the build artefacts are not changed. Here, `styled-components` thwarted my attempts.\r\n\r\nI think, if someone needs to inject a `REACT_APP_SECRET_VALUE` they'd have the same problem.",
"title": "Is it possible not to pull env vars into build?",
"type": "issue"
},
{
"action": "closed",
"author": "probablyup",
"comment_id": null,
"datetime": 1592233252000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "probablyup",
"comment_id": 644188569,
"datetime": 1592233252000,
"masked_author": "username_1",
"text": "Whatever that is isn't coming from our codebase",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dimaqq",
"comment_id": 644539520,
"datetime": 1592285350000,
"masked_author": "username_0",
"text": "Hmm I was rather certain that `SC_` variables were for styled-components.\r\nThere's even a test in this codebase for correct treatment of `REACT_APP_SC_DISABLE_SPEEDY`:\r\nhttps://github.com/styled-components/styled-components/blob/147b0e9a1f10786551b13fd27452fcd5c678d5e0/packages/styled-components/src/test/constants.test.js#L117-L122",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dimaqq",
"comment_id": 644541124,
"datetime": 1592285690000,
"masked_author": "username_0",
"text": "```\r\nnode_modules/styled-components/primitives/dist/styled-components-primitives.esm.js\r\n154:var DISABLE_SPEEDY = typeof SC_DISABLE_SPEEDY === 'boolean' && SC_DISABLE_SPEEDY || typeof process !== 'undefined' && (process.env.REACT_APP_SC_DISABLE_SPEEDY || process.env.SC_DISABLE_SPEEDY) || process.env.NODE_ENV !== 'production'; // Shared empty execution context when generating static styles\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dimaqq",
"comment_id": 647213287,
"datetime": 1592788639000,
"masked_author": "username_0",
"text": "My best guess is that merely referencing `process.env.REACT_APP_xxx` tricks CRA/webpack into including the entire \"visible\" part of env. 🤔",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dimaqq",
"comment_id": 664066167,
"datetime": 1595811278000,
"masked_author": "username_0",
"text": "So, ugh, is there perhaps some other way to let users control \"disable speedy\" than through env?",
"title": null,
"type": "comment"
}
] | 3,398 | false | false | 2 | 7 | false |
cryptolandtech/moonlet | cryptolandtech | 350,426,969 | 11 | null | [
{
"action": "opened",
"author": "i3uRi",
"comment_id": null,
"datetime": 1534253132000,
"masked_author": "username_0",
"text": "*As a user I want to be able to create a password for my Extension wallet so that I can have another layer of security*\n\n- [ ] Display dialog box\n- [ ] Use this \"Re-confirm Backup\" title\n- [ ] Use this \"Are you sure that you have saved your secret phrase in a secure location?\" message\n- [ ] When click on Yes, it should trigger next journey step page\n- [ ] When click on No, it should trigger previous step\n\nPlease [use this visual asset as a guideline](https://gallery.io/projects/MCHbtQVoQ2HCZTlak4tp1lmj/files/MCHtA7U1iMGr61SXmvJTUH2T22Kn7I1KHFQ) and [check this demo in order to map the user journey](https://gallery.io/projects/MCHbtQVoQ2HCZTlak4tp1lmj)",
"title": "Create Password Authentication",
"type": "issue"
},
{
"action": "closed",
"author": "krisboit",
"comment_id": null,
"datetime": 1540060745000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 659 | false | false | 2 | 2 | false |
pwa-builder/PWABuilder | pwa-builder | 899,883,990 | 1,733 | {
"number": 1733,
"repo": "PWABuilder",
"user_login": "pwa-builder"
} | [
{
"action": "opened",
"author": "jgw96",
"comment_id": null,
"datetime": 1621879600000,
"masked_author": "username_0",
"text": "Fixes #\r\n<!-- Link to relevant issue (for ex: #1234) which will automatically close the issue once the PR is merged -->\r\n\r\n## PR Type\r\n<!-- Please uncomment one or more that apply to this PR -->\r\n\r\nBugfix\r\n<!-- - Feature -->\r\n<!-- - Code style update (formatting) -->\r\n<!-- - Refactoring (no functional changes, no api changes) -->\r\n<!-- - Build or CI related changes -->\r\n<!-- - Documentation content changes -->\r\n<!-- - Sample app changes -->\r\n<!-- - Other... Please describe: -->\r\n\r\n\r\n## Describe the current behavior?\r\n<!-- Please describe the current behavior that is being modified or link to a relevant issue. -->\r\nThis fixes a bug where even if the non-canonical URL is being used for testing (such as https://pinterest.com) we will update the URL being used for packaging with the canonical URL (from our backend manifest finder services) in the background. This adds no new latency for the user to init the tests, but also ensures that once it comes to packaging the correct URL is being used (this is especially important for Android).\r\n\r\n## Describe the new behavior? \r\nOnce we have the canonical URL from our manifest finder services I call a function to update the URL being worked on with this more correct URL. This all happens without any need for action from the user.\r\n\r\n## PR Checklist\r\n\r\n- [x] Test: run `npm run test` and ensure that all tests pass\r\n- [x] Target master branch (or an appropriate release branch if appropriate for a bug fix)\r\n- [x] Ensure that your contribution follows [standard accessibility guidelines](https://docs.microsoft.com/en-us/microsoft-edge/accessibility/design). Use tools like https://webhint.io/ to validate your changes.\r\n\r\n\r\n## Additional Information",
"title": "feat(): update url being used for packaging with canonical URL in the background",
"type": "issue"
}
] | 1,710 | false | true | 1 | 1 | false |
MicrosoftDocs/azure-docs | MicrosoftDocs | 732,677,137 | 65,260 | {
"number": 65260,
"repo": "azure-docs",
"user_login": "MicrosoftDocs"
} | [
{
"action": "opened",
"author": "ChristopherMank",
"comment_id": null,
"datetime": 1604007012000,
"masked_author": "username_0",
"text": "Add detail that the Azure DevOps Admin role doesn't give permissions inside of Azure DevOps itself. That was not clear previously.",
"title": "Add detail about Azure DevOps permissions",
"type": "issue"
},
{
"action": "created",
"author": "PRMerger14",
"comment_id": 719035881,
"datetime": 1604007032000,
"masked_author": "username_1",
"text": "@username_0 : Thanks for your contribution! The author(s) have been notified to review your proposed change.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ChristopherMank",
"comment_id": 722462652,
"datetime": 1604591374000,
"masked_author": "username_0",
"text": "Thanks @username_2 ! Updated per your suggestion.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "curtand",
"comment_id": 722467475,
"datetime": 1604591847000,
"masked_author": "username_2",
"text": "#sign-off \r\n\r\n@username_0 Awesome, thanks!",
"title": null,
"type": "comment"
}
] | 336 | false | false | 3 | 4 | true |
hotosm/tasking-manager | hotosm | 779,232,731 | 4,050 | {
"number": 4050,
"repo": "tasking-manager",
"user_login": "hotosm"
} | [
{
"action": "opened",
"author": "dakotabenjamin",
"comment_id": null,
"datetime": 1609863041000,
"masked_author": "username_0",
"text": "In order to allow for non-hotosm urls (such as tasks.teachosm.org) in the cfn template, we add a condition for the url parameter to check if its a hotosm.org domain. We will not create the Route53 record in the case that it is an external url. \r\n\r\nIn this case currently it only works when we also control the ssl certificate in ACM, so it is not fully generalized. that should come with future development. \r\n\r\n@willemarcel I would like to test this with `staging-test` before merging.",
"title": "Add condition for custom URL in cfn template",
"type": "issue"
},
{
"action": "created",
"author": "dakotabenjamin",
"comment_id": 754796980,
"datetime": 1609869358000,
"masked_author": "username_0",
"text": "Everything works on staging using `staging-test` branch so this is ready for merge",
"title": null,
"type": "comment"
}
] | 568 | false | false | 1 | 2 | false |
AdoptOpenJDK/openjdk-tests | AdoptOpenJDK | 827,925,652 | 2,357 | null | [
{
"action": "opened",
"author": "andrew-m-leonard",
"comment_id": null,
"datetime": 1615390592000,
"masked_author": "username_0",
"text": "https://ci.adoptopenjdk.net/job/Test_openjdk8_j9_extended.openjdk_x86-64_linux/25/console\r\nFails both Hotspot and OpenJ9:\r\n```\r\n12:30:22 openjdk version \"1.8.0_292\"\r\n12:30:22 OpenJDK Runtime Environment (build 1.8.0_292-202103091156-b05)\r\n12:30:22 Eclipse OpenJ9 VM (build openj9-0.26.0-m1, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20210309_972 (JIT enabled, AOT enabled)\r\n12:30:22 OpenJ9 - b227feba2\r\n12:30:22 OMR - 4665e2f72\r\n12:30:22 JCL - 1780cbc92b based on jdk8u292-b05)\r\n```\r\n```\r\n13:29:39 java.lang.NullPointerException\r\n13:29:39 \tat sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1264)\r\n13:29:39 \tat sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:219)\r\n13:29:39 \tat sun.awt.FontConfiguration.init(FontConfiguration.java:107)\r\n13:29:39 \tat sun.awt.X11FontManager.createFontConfiguration(X11FontManager.java:774)\r\n13:29:39 \tat sun.font.SunFontManager$2.run(SunFontManager.java:441)\r\n13:29:39 \tat java.security.AccessController.doPrivileged(AccessController.java:682)\r\n13:29:39 \tat sun.font.SunFontManager.<init>(SunFontManager.java:386)\r\n13:29:39 \tat sun.awt.FcFontManager.<init>(FcFontManager.java:35)\r\n13:29:39 \tat sun.awt.X11FontManager.<init>(X11FontManager.java:57)\r\n13:29:39 \tat java.lang.J9VMInternals.newInstanceImpl(Native Method)\r\n13:29:39 \tat java.lang.Class.newInstance(Class.java:2038)\r\n13:29:39 \tat sun.font.FontManagerFactory$1.run(FontManagerFactory.java:83)\r\n13:29:39 \tat java.security.AccessController.doPrivileged(AccessController.java:682)\r\n13:29:39 \tat sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)\r\n13:29:39 \tat sun.font.SunFontManager.getInstance(SunFontManager.java:250)\r\n13:29:39 \tat sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:264)\r\n13:29:39 \tat sun.font.FontDesignMetrics.getMetrics(FontDesignMetrics.java:250)\r\n13:29:39 \tat sun.awt.X11.XComponentPeer.getFontMetrics(XComponentPeer.java:683)\r\n13:29:39 \tat sun.awt.X11.XLabelPeer.getFontMetrics(XLabelPeer.java:48)\r\n13:29:39 \tat sun.awt.X11.XLabelPeer.getMinimumSize(XLabelPeer.java:70)\r\n13:29:39 \tat sun.awt.X11.XComponentPeer.getPreferredSize(XComponentPeer.java:606)\r\n13:29:39 \tat java.awt.Component.preferredSize(Component.java:2643)\r\n13:29:39 \tat java.awt.Component.getPreferredSize(Component.java:2626)\r\n13:29:39 \tat java.awt.GridBagLayout.GetLayoutInfo(GridBagLayout.java:1115)\r\n13:29:39 \tat java.awt.GridBagLayout.getLayoutInfo(GridBagLayout.java:916)\r\n13:29:39 \tat java.awt.GridBagLayout.preferredLayoutSize(GridBagLayout.java:736)\r\n13:29:39 \tat java.awt.Container.preferredSize(Container.java:1799)\r\n13:29:39 \tat java.awt.Container.getPreferredSize(Container.java:1783)\r\n13:29:39 \tat java.awt.Window.pack(Window.java:809)\r\n13:29:39 \tat com.sun.javatest.regtest.agent.AppletWrapper$AppletRunnable.run(AppletWrapper.java:142)\r\n13:29:39 \tat java.lang.Thread.run(Thread.java:823)\r\n13:29:39 STATUS:Failed.Applet thread threw exception: java.lang.NullPointerException\r\n```\r\nPossibly a fontconfig setup issue? or an upstream problem?",
"title": "extended.openjdk failure: javax/imageio/AppletResourceTest.java : FontConfiguration NPE",
"type": "issue"
},
{
"action": "created",
"author": "aahlenst",
"comment_id": 795640622,
"datetime": 1615391540000,
"masked_author": "username_1",
"text": "@username_0 I suspect a machine issue. [See the list of libraries that need to be present](https://blog.adoptopenjdk.net/2021/01/prerequisites-for-font-support-in-adoptopenjdk/).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "lumpfish",
"comment_id": 817959416,
"datetime": 1618245636000,
"masked_author": "username_2",
"text": "I think this affects a few tests and is being seen when tests are running on docker images:\r\nFailing test run:https://ci.adoptopenjdk.net/job/Test_openjdk11_j9_extended.openjdk_x86-64_linux/16/consoleFull\r\nTo rerun: https://ci.adoptopenjdk.net/job/Grinder/parambuild/?JDK_VERSION=11&JDK_IMPL=openj9&JDK_VENDOR=adoptopenjdk&BUILD_LIST=openjdk&PLATFORM=x86-64_linux_mixed&TARGET=jdk_imageio_0\r\nMachine: test-docker-fedora33-x64-1\r\nFailing tests:\r\n```\r\njavax/imageio/plugins/shared/ImageWriterCompressionTest.java\r\njavax/imageio/AppletResourceTest.java\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 817963154,
"datetime": 1618245986000,
"masked_author": "username_3",
"text": "Thanks - attaching to https://github.com/AdoptOpenJDK/openjdk-tests/issues/2138",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 818966515,
"datetime": 1618338980000,
"masked_author": "username_3",
"text": "Haven't managed to properly look at this today other than trying l grinders on a real and dockerised Ubuntu of the same level. Will aim to continue tomorrow",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 819637631,
"datetime": 1618416538000,
"masked_author": "username_3",
"text": "It's not specific to any particular docker image - seen on Ubuntu and Fedora ones",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 819656055,
"datetime": 1618418072000,
"masked_author": "username_3",
"text": "As Andreas suggested, adding `fontconfig` has allowed it to pass on [Fedora](https://ci.adoptopenjdk.net/job/Grinder/75/) and [Ubuntu](https://ci.adoptopenjdk.net/job/Grinder/76/). I'll get that deployed to all the docker systems and added into the playbooks.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "sxa",
"comment_id": null,
"datetime": 1620908870000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "lumpfish",
"comment_id": 858485034,
"datetime": 1623318993000,
"masked_author": "username_2",
"text": "I've seen another occurrence on machine `test-docker-fedora33-x64-1` (https://ci.adoptopenjdk.net/view/Test_openjdk/job/Test_openjdk8_j9_extended.openjdk_x86-64_linux_testList_1/2/console)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 858503525,
"datetime": 1623320573000,
"masked_author": "username_3",
"text": "Hmmm `fontconfig` is installed on there so I'm not sure what's going on with that system ([replicated on grinder](https://ci.adoptopenjdk.net/view/Test_grinder/job/Grinder/789/console)) - I'll try on the other 2 Fedora33 machines - [790](https://ci.adoptopenjdk.net/view/Test_grinder/job/Grinder/790/console) [791](https://ci.adoptopenjdk.net/view/Test_grinder/job/Grinder/791/console)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sxa",
"comment_id": 858528710,
"datetime": 1623323178000,
"masked_author": "username_3",
"text": "Hmmm the last two are failing but with a different error\r\n```\r\n11:28:13 STDOUT:\r\n11:28:13 STDERR:\r\n11:28:13 java.lang.RuntimeException: Test failed. Did not get expected IIOException\r\n11:28:13 \tat MaxLengthKeywordReadTest.main(MaxLengthKeywordReadTest.java:63)\r\n11:28:13 \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n11:28:13 \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\r\n11:28:13 \tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n11:28:13 \tat java.base/java.lang.reflect.Method.invoke(Method.java:566)\r\n11:28:13 \tat com.sun.javatest.regtest.agent.MainWrapper$MainThread.run(MainWrapper.java:127)\r\n11:28:13 \tat java.base/java.lang.Thread.run(Thread.java:836)\r\n11:28:13 \r\n11:28:13 JavaTest Message: Test threw exception: java.lang.RuntimeException: Test failed. Did not get expected IIOException\r\n11:28:13 JavaTest Message: shutting down test\r\n11:28:13 \r\n11:28:13 STATUS:Failed.`main' threw exception: java.lang.RuntimeException: Test failed. Did not get expected IIOException\r\n11:28:13 rerun:\r\n11:28:13 cd /home/jenkins/workspace/Grinder/openjdk-tests/TKG/output_16233198211531/jdk_imageio_0/work/javax/imageio/plugins/png/MaxLengthKeywordReadTest && \\\r\n```",
"title": null,
"type": "comment"
}
] | 6,268 | false | false | 4 | 11 | true |
kiali/kiali | kiali | 858,633,753 | 3,886 | null | [
{
"action": "opened",
"author": "aljesusg",
"comment_id": null,
"datetime": 1618475316000,
"masked_author": "username_0",
"text": "Should we show a degraded status if the user scale down the app ?\r\n\r\n\r\n\r\n\r\nI think that Iddle status should have more priority despite the last X minutes the health was degraded or failure. \r\n\r\nWhat do you think?",
"title": "zero replicas vs Degraded status",
"type": "issue"
},
{
"action": "created",
"author": "lucasponce",
"comment_id": 820238292,
"datetime": 1618476028000,
"masked_author": "username_1",
"text": "I think that's due it's taking the 1m telemetry info just when the resource is scale down.\r\nOnce that workload stops receiving telemetry it will show the not ready.\r\n\r\nTo me it's fine to keep it as it's as it tells user (it has traffic in the past but now you won't get more traffic).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "aljesusg",
"comment_id": 820255603,
"datetime": 1618477342000,
"masked_author": "username_0",
"text": "yes I agree the traffic by default is the last minute but maybe this could be confuse from user . is aw this checking another issue and I wanted to open a discussion about this",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "aljesusg",
"comment_id": null,
"datetime": 1618477374000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 812 | false | false | 2 | 4 | false |
leoasis/react-sound | null | 900,131,817 | 96 | null | [
{
"action": "opened",
"author": "ridhwaans",
"comment_id": null,
"datetime": 1621903089000,
"masked_author": "username_0",
"text": "React-sound audio playback works as expected with `react-router-dom`. The audio doesnt suspend or shutdown when using `<Link>` \r\n\r\nUsing `goBack()` and `goForward()` from the `useHistory` hook stops the audio playing for some reason, even though it does not reload/refresh the page. it behaves like `<Link>` to the previous browserrouter `<Route>` \r\nconsole log is \r\n```\r\nsoundmanager2.js:1307 sound0: Destruct\r\nsoundmanager2.js:1307 sound0: Removing event listeners\r\nsoundmanager2.js:1307 sound0: stop()\r\nsoundmanager2.js:1307 sound0: unload()\r\n```",
"title": "React-sound soundmanager doesnt work with useHistory hook",
"type": "issue"
}
] | 552 | false | false | 1 | 1 | false |
webgriffe/SyliusAkeneoPlugin | webgriffe | 836,868,078 | 64 | null | [
{
"action": "opened",
"author": "mmenozzi",
"comment_id": null,
"datetime": 1616261253000,
"masked_author": "username_0",
"text": "After #60 empty attributes values are properly passed to all value handlers during product import. So now, for attributes that are empty on Akeneo, the AttributeValueHandler always sets empty product attributes on Sylius. Instead it should remove them from products.",
"title": "AttributeValueHandler should remove attributes from product when there's no value from Akeneo",
"type": "issue"
},
{
"action": "closed",
"author": "mmenozzi",
"comment_id": null,
"datetime": 1616766033000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 266 | false | false | 1 | 2 | false |
sindresorhus/refined-github | null | 750,062,710 | 3,764 | null | [
{
"action": "opened",
"author": "ElijahLynn",
"comment_id": null,
"datetime": 1606251406000,
"masked_author": "username_0",
"text": "I have searched the issues and installed the extension but don't see functionality for hiding `TimelineItem`s. \r\n\r\nMy request is to be able to hide/toggle all Timelineitems from an issue or pull request view/page. \r\n\r\n\r\n\r\n\r\nGitLab has this built-in and I think GitHub should too, but next best thing would be for Refined GitHub to have this feature:\r\n",
"title": "Toggle/Hide TimelineItem",
"type": "issue"
},
{
"action": "created",
"author": "fregante",
"comment_id": 733258740,
"datetime": 1606255357000,
"masked_author": "username_1",
"text": "This would be an alternative solution to https://github.com/sindresorhus/refined-github/issues/615, but it doesn't feel like an action I would need to take often, if ever. Why do you need/want to hide events?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ElijahLynn",
"comment_id": 733278237,
"datetime": 1606258177000,
"masked_author": "username_0",
"text": "Thanks for the reply. I routinely have issues that have like 20 timeline events and it is really noisy. My use case is nearly every day I would want it on, and I would want it defaulted to my last selection, which will most of the time be turned off. And only if I need an important bit of info will I toggle it on temporarily. Here is a sample issue where it gets out of control: \r\n\r\n\r\n\r\n\r\nsource: https://github.com/department-of-veterans-affairs/va.gov-cms/issues/910",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ElijahLynn",
"comment_id": 733279337,
"datetime": 1606258377000,
"masked_author": "username_0",
"text": "Interesting to note that _some_ of the timeline events are only shown with the ZenHub extension enabled, which I just disabled and took this same screenshot:\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "fregante",
"comment_id": 751500041,
"datetime": 1609093162000,
"masked_author": "username_1",
"text": "Maybe #3847",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ElijahLynn",
"comment_id": 775398011,
"datetime": 1612813618000,
"masked_author": "username_0",
"text": "Looks hopeful, subscribed, thanks!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "fregante",
"comment_id": null,
"datetime": 1614920912000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 1,673 | false | false | 2 | 7 | false |