repo | org | issue_id | issue_number | pull_request | events | user_count | event_count | text_size | bot_issue | modified_by_bot | text_size_no_bots | modified_usernames |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
nodkz/graphql-compose-mongoose | null | 241,410,903 | 41 | null | [
{
"action": "opened",
"author": "smooJitter",
"comment_id": null,
"datetime": 1499473667000,
"masked_author": "username_0",
"text": "Suppose I have subsets that on a document like so,\r\n\r\nfriends : [ {userId: string, meta: {}},... ]\r\n\r\nWhere meta contains discriminatory for the type of friend\r\n\r\nHow whould I create a relation on freinds that resolves to friends where meta.mutual = 1, and returning a list of friend Profile objects where the Profile object represents a subset user fields e.g.,\r\n{ username, avatar, userId, ....}?",
"title": "Sub documents.",
"type": "issue"
},
{
"action": "created",
"author": "smooJitter",
"comment_id": 313821960,
"datetime": 1499474201000,
"masked_author": "username_0",
"text": "One alternative would be to restructure the model\r\n\r\nfriends: [ userids....]\r\nmutual: [userIds...]\r\nlikes: [ userId]\r\nlikesMe: [userId...]\r\n\r\nThis is easier but I still would need the fields to resolve to objects of a type Friend {fields...}",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nodkz",
"comment_id": 313998298,
"datetime": 1499660463000,
"masked_author": "username_1",
"text": "You may do something like this:\r\n```js\r\nUserTC.setFields({\r\n // add new field to User type with name `mutualFriends`\r\n mutualFriends: {\r\n // require `friends` field from db, when you use just `mutualFriends` in query\r\n projection: { friends: true },\r\n ...UserTC\r\n // get standard resolver findMany which returns list of Users \r\n .getResolver('findMany')\r\n // wrap it with additional logic, which obtains list of mutual friends from `source.friends`\r\n .wrapResolve(next => rp => {\r\n const mutual = rp.source.friends.filter(o => o.meta.mutual).map(o => o.userId) || [];\r\n if (mutual.length === 0) return [];\r\n rp.rawQuery = {\r\n _id: { $in: mutual }, // may be you'll need to convert stringId to MongoId\r\n };\r\n return next(rp);\r\n }),\r\n});\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "smooJitter",
"comment_id": 316102068,
"datetime": 1500391872000,
"masked_author": "username_0",
"text": "I'm sorry perhaps I should have added more clarity. I believe I am asking about a more common use case. The example above is a bit misleading. \r\n\r\nMy user model looks like this\r\n\r\n```\r\nconst UserPreference = new mongoose.Schema( {\r\n typeOfPreference: { type: [String], enum: ['FOOD', 'BEVERAGE']\r\n ...\r\n});\r\n\r\nconst UserSchema = new mongoose.Schema( {\r\n username: String,\r\n preferences: [UserPreference],\r\n eventsAttended: [{ type: ObjectId, ref: 'Events' }],\r\n });\r\n```\r\nI can actual use this to highlight 2 common use cases. \r\n\r\n1. FILTERED RELATIONS: The events field contains an array of ObjectIds. I like for it to resolve to a subset of fields from the events collection (primary) and in an advance case add filters on a subset of the subset fields, e.g., { name, description, typeOfEvent } where type of event has a filter. \r\n\r\n2. FILTER SUB DOCUMENTS: The preference field contains an array of subdocuments. I would like add a filter on the discriminator key (typeOfPreference). Do I need add a resolver (e.g., getUserPreferencesByType)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nodkz",
"comment_id": 317247448,
"datetime": 1500810426000,
"masked_author": "username_1",
"text": "### FILTERED RELATIONS\r\n\r\nYou may use standard resolver `findMany`, `connection` and extend filter for Events from User data in such manner:\r\n```js\r\nUserTC.addRelation('myEventsWithFilter', () => ({\r\n resolver: () =>\r\n EventsTC.getResolver('findMany').wrapResolve(next => rp => {\r\n // With object-path package set filter by one line \r\n // objectPath.set(rp, 'args.filter._ids', rp.source.eventIds);\r\n // or \r\n if (!rp.args.filter) rp.args.filter = {}; // ensure that `filter` exists \r\n rp.args.filter._ids = rp.source.eventIds; // set `_ids` from current User doc\r\n \r\n // call standard `findMany` resolver with extended filter \r\n return next(rp);\r\n }),\r\n projection: { eventIds: true },\r\n}));\r\n``` \r\nPS. Upgrade to graphql-compose-mongoose@1.6.0 where added `filter._ids` field for `findMany` resolver.\r\n\r\n### FILTER SUB DOCUMENTS\r\nIn such case you need to extend resolver with your custom filters via `addFilterArg`, `addSortArg`. Something like this:\r\n\r\n```js\r\nUserTC.setResolver(\r\n 'connection', // providing same name for replacing standard resolver `connection`, or you may set another name for keepeng standard resolver untoched\r\n UserTC.getResolver('connection')\r\n .addFilterArg({\r\n name: 'region',\r\n type: '[String]',\r\n description: 'Region, Country, City',\r\n query: (rawQuery, value) => {\r\n if (value.length === 1) {\r\n rawQuery['location.name'] = value[0];\r\n } else {\r\n rawQuery['location.name'] = { $in: value };\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'salaryMax',\r\n type: CvTC.get('$connection.@filter.salary'),\r\n description: 'Max salary',\r\n query: (rawQuery, value) => {\r\n if (value.total > 0) {\r\n rawQuery['salary.total'] = { $gte: 1, $lte: value.total };\r\n }\r\n if (value.currency) {\r\n rawQuery['salary.currency'] = value.currency;\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'specializations',\r\n type: '[String]',\r\n description: 'Array of profession areas 
(any)',\r\n query: (rawQuery, value) => {\r\n rawQuery.specializations = { $in: value };\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'employments',\r\n type: '[String]',\r\n description: 'Array of employment (any)',\r\n query: (rawQuery, value) => {\r\n rawQuery.employment = { $in: value };\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'totalExperienceMin',\r\n type: 'Int',\r\n description: 'Min expirience in months',\r\n query: (rawQuery, value) => {\r\n rawQuery.totalExperience = { $gte: value };\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'ageRange',\r\n type: 'input AgeRange { min: Int, max: Int }',\r\n description: 'Filter by age range (in years)',\r\n query: (rawQuery, value) => {\r\n const d = new Date();\r\n const month = d.getMonth();\r\n const day = d.getDate();\r\n const year = d.getFullYear();\r\n let minAge = value.min || 0;\r\n let maxAge = value.max || 0;\r\n if (!minAge && !maxAge) return;\r\n if (minAge > maxAge && minAge && maxAge) [minAge, maxAge] = [maxAge, minAge];\r\n rawQuery.birthday = {};\r\n if (maxAge) {\r\n rawQuery.birthday.$gte = new Date(year - maxAge - 1, month, day);\r\n }\r\n if (minAge) {\r\n rawQuery.birthday.$lte = new Date(year - minAge, month, day);\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'periodMaxH',\r\n type: 'Int',\r\n description: 'Filter by created date (in hours)',\r\n query: (rawQuery, value) => {\r\n if (value > 0 && value < 99999) {\r\n const curDate = new Date();\r\n const pastDate = new Date();\r\n pastDate.setTime(pastDate.getTime() - value * 3600000);\r\n rawQuery.createdAt = {\r\n $gt: pastDate,\r\n $lt: curDate,\r\n };\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'langs',\r\n type: '[String]',\r\n description: 'Language list (all)',\r\n query: (rawQuery, value) => {\r\n if (value.length === 1) {\r\n rawQuery['languages.ln'] = value[0];\r\n } else {\r\n rawQuery['languages.ln'] = { $all: value };\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'citizenships',\r\n type: '[String]',\r\n description: 
'Citizenship list (any)',\r\n query: (rawQuery, value) => {\r\n if (value.length === 1) {\r\n rawQuery.citizenship = value[0];\r\n } else {\r\n rawQuery.citizenship = { $in: value };\r\n }\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'hasPhoto',\r\n type: 'Boolean',\r\n query: (rawQuery, value) => {\r\n rawQuery.avatarUrl = { $exists: value };\r\n },\r\n })\r\n .addSortArg({\r\n name: 'RELEVANCE',\r\n description: 'Sort by text score or date',\r\n value: resolveParams => {\r\n if (resolveParams.rawQuery && resolveParams.rawQuery.$text) {\r\n return { score: { $meta: 'textScore' } };\r\n }\r\n return { createdAt: -1 };\r\n },\r\n })\r\n .addFilterArg({\r\n name: 'q',\r\n type: 'String',\r\n description: 'Text search',\r\n query: (rawQuery, value, resolveParams) => {\r\n rawQuery.$text = { $search: value, $language: 'ru' };\r\n resolveParams.projection.score = { $meta: 'textScore' };\r\n },\r\n })\r\n .addSortArg({\r\n name: 'SALARY_ASC',\r\n value: resolveParams => {\r\n if (!resolveParams.rawQuery) resolveParams.rawQuery = {};\r\n resolveParams.rawQuery['salary.total'] = { $gt: 0 };\r\n return { 'salary.total': 1 };\r\n },\r\n })\r\n .addSortArg({\r\n name: 'SALARY_DESC',\r\n value: resolveParams => {\r\n if (!resolveParams.rawQuery) resolveParams.rawQuery = {};\r\n resolveParams.rawQuery['salary.total'] = { $gt: 0 };\r\n return { 'salary.total': -1 };\r\n },\r\n })\r\n .addSortArg({\r\n name: 'DATE_ASC',\r\n value: { createdAt: 1 },\r\n })\r\n .addSortArg({\r\n name: 'DATE_DESC',\r\n value: { createdAt: -1 },\r\n })\r\n);\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danielmahon",
"comment_id": 373926748,
"datetime": 1521298995000,
"masked_author": "username_2",
"text": "@username_1 I'm trying to filter relations based on the above and not having any luck, see #96. Any suggestions? I notice my `wrapResolve` and `query` functions never run so I'm sure I'm missing something.",
"title": null,
"type": "comment"
}
] | 3 | 6 | 8,834 | false | false | 8,834 | true |
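The thread above centers on the `wrapResolve` pattern: a wrapper takes the inner resolver and returns a new resolve function that adjusts `resolveParams` before delegating. The following is a plain-JavaScript sketch of just that control flow, with no graphql-compose-mongoose involved; `findMany` here is a hypothetical stand-in that merely echoes the filter it receives, so only the wrapping logic mirrors the comment.

```javascript
// Sketch of the wrapResolve pattern discussed above (assumption: no real
// graphql-compose resolvers here; `findMany` is a stand-in).

// A wrapper receives the inner resolver (`next`) and returns a new
// resolve function that may mutate resolveParams (`rp`) before delegating.
function wrapResolve(resolve, wrapper) {
  return wrapper(resolve);
}

// Stand-in resolver: echoes back the filter it was called with.
const findMany = rp => ({ filterUsed: rp.args.filter });

// Mirrors the `myEventsWithFilter` comment: ensure `args.filter` exists,
// then copy the parent document's `eventIds` into `filter._ids`.
const myEventsWithFilter = wrapResolve(findMany, next => rp => {
  if (!rp.args.filter) rp.args.filter = {};
  rp.args.filter._ids = rp.source.eventIds;
  return next(rp);
});

const out = myEventsWithFilter({ source: { eventIds: ['a1', 'b2'] }, args: {} });
console.log(out.filterUsed._ids); // the parent's eventIds flow into the filter
```

The same shape underlies the `mutualFriends` example earlier in the thread: the wrapper inspects `rp.source`, derives a constraint, and hands the enriched params to the standard resolver.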
Nethereum/Nethereum | Nethereum | 243,892,645 | 153 | null | [
{
"action": "opened",
"author": "jerryhorak",
"comment_id": null,
"datetime": 1500425622000,
"masked_author": "username_0",
"text": "First off, great library. Further support for the Parity modules would be a great addition if it's not in the road map - particularly, I find the trace module to be most effective for transactional auditing compared to other implementations.\r\n\r\nIf I can work through all my issues with framework dependencies, I can chip in.",
"title": "Parity Modules",
"type": "issue"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 317159448,
"datetime": 1500704228000,
"masked_author": "username_1",
"text": "Thanks for the feedback, I'll prioritise those.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nvtaveras",
"comment_id": 341041243,
"datetime": 1509526951000,
"masked_author": "username_2",
"text": "@username_1 Hi Juan, \r\n\r\nI was wondering if there's any work currently being done on this? I ran into a case where I need to use Parity's trace module as well.\r\n\r\nI wouldn't mind having a try at implementing this with some guidance, but I'm not very familiar with how modules that are specific to parity/geth are abstracted in the code.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 341042115,
"datetime": 1509527062000,
"masked_author": "username_1",
"text": "Ill push something this week for the trace modules, I got pushed with other priorities.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 341042760,
"datetime": 1509527145000,
"masked_author": "username_1",
"text": "Feel free to ask / remind me any time if you dont see something moving.. too many things too little time :D",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nvtaveras",
"comment_id": 341043706,
"datetime": 1509527273000,
"masked_author": "username_2",
"text": "Awesome, thanks a lot for your work! :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "milkeg",
"comment_id": 341678814,
"datetime": 1509708449000,
"masked_author": "username_3",
"text": "Hello Juan,\r\nAwesome, waiting for it too :)\r\nThank you",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "milkeg",
"comment_id": 346740288,
"datetime": 1511497419000,
"masked_author": "username_3",
"text": "Hi @username_1 \r\nAny update on this issue?\r\nThanks a lot",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 351153220,
"datetime": 1513105400000,
"masked_author": "username_1",
"text": "I forgot to mention, trace modules are the latest release.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "milkeg",
"comment_id": 351647242,
"datetime": 1513241577000,
"masked_author": "username_3",
"text": "Thank you very much Juan :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nvtaveras",
"comment_id": 355452025,
"datetime": 1515117216000,
"masked_author": "username_2",
"text": "@username_1 Quick question: Is there any particular reason why you didn't add the DTO's for the trace_transaction RPC call? Is it because of the lack of documentation for the response in Parity's Wiki? I noticed the implementation for the method returns a JObject.\r\n\r\nI'm working on this and can contribute it to the project, but want to know what is your take on this first.\r\n\r\nThanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 355472898,
"datetime": 1515128081000,
"masked_author": "username_1",
"text": "Documentation it is one of the reason, but also the dynamic nature of the structure of logs. Feel free to make a pull request with a DTO and let's see how it goes. We can always revert if needed (Everyone will understand)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nvtaveras",
"comment_id": 355485998,
"datetime": 1515135458000,
"masked_author": "username_2",
"text": "By looking at the structure of the response ([0]) I'm not entirely sure of the meaning of the \"traceAddress\" array and the type of data it can contain (seems to be an array of Integers?).\r\n\r\nWould you be agains't of a DTO which excludes that field? \r\n\r\n[0]: https://github.com/paritytech/parity/wiki/JSONRPC-trace-module#trace_transaction",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 355545819,
"datetime": 1515156040000,
"masked_author": "username_1",
"text": "Parity docs get outdated very quickly. What you could do is create an extra method to deserialise the JToken so you get the best of both worlds",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 355549487,
"datetime": 1515157298000,
"masked_author": "username_1",
"text": "So if you miss an attribute it can be added later",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 396160555,
"datetime": 1528705030000,
"masked_author": "username_1",
"text": "Change the Issue now to add DTOs",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "juanfranblanco",
"comment_id": 889675712,
"datetime": 1627628376000,
"masked_author": "username_1",
"text": "Parity / OE is getting deprecated.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "juanfranblanco",
"comment_id": null,
"datetime": 1627628376000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 4 | 18 | 2,350 | false | false | 2,350 | true |
wandoulabs/codis | wandoulabs | 112,122,222 | 490 | null | [
{
"action": "opened",
"author": "aylazhang",
"comment_id": null,
"datetime": 1445252461000,
"masked_author": "username_0",
"text": "环境: \r\ncentos 5.x\r\ngo 1.2.2\r\n\r\n按照文档,(https://github.com/wandoulabs/codis/blob/master/doc/tutorial_zh.md)\r\n执行./bootstrap.sh 的时候,报错:\r\n\r\n../../ugorji/go/codec/encode.go:1220: undefined: sync.Pool\r\n../../ugorji/go/codec/encode.go:1230: undefined: sync.Pool\r\n../../ugorji/go/codec/json.go:148: undefined: sync.Pool\r\n......\r\ngo build -o bin/codis-proxy ./cmd/proxy\r\n....\r\n../../ugorji/go/codec/encode.go:1220: undefined: sync.Pool\r\n../../ugorji/go/codec/encode.go:1230: undefined: sync.Pool\r\n../../ugorji/go/codec/json.go:148: undefined: sync.Pool\r\nmake: *** [build-proxy] Error 2",
"title": "../../ugorji/go/codec/encode.go:1220: undefined: sync.Pool",
"type": "issue"
},
{
"action": "created",
"author": "yangzhe1991",
"comment_id": 149186253,
"datetime": 1445252858000,
"masked_author": "username_1",
"text": "不确定是不是go的版本太旧导致的,升级到1.4.3或者1.5.1看看?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yangzhe1991",
"comment_id": 149186376,
"datetime": 1445252911000,
"masked_author": "username_1",
"text": "http://studygolang.com/articles/1673 看上去是这个问题, sync.Pool是1.3加入的。建议升级到1.4.3或者1.5.1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "aylazhang",
"comment_id": 149407162,
"datetime": 1445308057000,
"masked_author": "username_0",
"text": "@username_1 , \r\ncentos 5.11 , go go1.4.3.linux-amd64.tar.gz\r\n\r\n在 go get -u -d github.com/wandoulabs/codis 之后,./bootstrap.sh的时候,又报\r\nFATAL: kernel too old\r\n....",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "yangzhe1991",
"comment_id": 149409129,
"datetime": 1445308469000,
"masked_author": "username_1",
"text": "这就没办法了。。。go对系统版本有点要求,toooold是不行的。。。你试试1.3?\r\n\r\n\r\n\r\n-- \r\nThanks,\r\nPhil Yang",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "yangzhe1991",
"comment_id": null,
"datetime": 1445582588000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 6 | 921 | false | false | 921 | true |
bitrise-io/bitrise-contrib | bitrise-io | 182,097,687 | 25 | {
"number": 25,
"repo": "bitrise-contrib",
"user_login": "bitrise-io"
} | [
{
"action": "opened",
"author": "fadookie",
"comment_id": null,
"datetime": 1476128315000,
"masked_author": "username_0",
"text": "Here is the PR you requested.\r\n\r\nI omitted my bitrise username. I don't feel comfortable with publishing this to the whole internet unless it's really necessary, as it seems like it would just open up a potential attack vector and not provide any useful information to the community. Also I was not sure how to specify that our organization should be credited with the contributor discount. I already sent you an email with the web link to our org in bitrise.\r\n\r\nThanks,\r\nEliot",
"title": "Contributor information for Eliot Lash/Pear Therapeutics",
"type": "issue"
},
{
"action": "created",
"author": "viktorbenei",
"comment_id": 252726483,
"datetime": 1476128391000,
"masked_author": "username_1",
"text": "Sure, no problem at all, this is perfect ;)\r\n\r\nThanks for your contrib! 🚀",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viktorbenei",
"comment_id": 252726646,
"datetime": 1476128448000,
"masked_author": "username_1",
"text": "Just one thing though, it was quite some time ago, could you please ping me again through email/onsite chat on Bitrise.io ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viktorbenei",
"comment_id": 252727791,
"datetime": 1476128794000,
"masked_author": "username_1",
"text": "Nevermind, found our previous conversation ;)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "viktorbenei",
"comment_id": 252727825,
"datetime": 1476128806000,
"masked_author": "username_1",
"text": "(just sent you the instructions there)",
"title": null,
"type": "comment"
}
] | 2 | 5 | 756 | false | false | 756 | false |
zulip/zulip | zulip | 259,210,387 | 6,604 | {
"number": 6604,
"repo": "zulip",
"user_login": "zulip"
} | [
{
"action": "opened",
"author": "HarshitOnGitHub",
"comment_id": null,
"datetime": 1505922055000,
"masked_author": "username_0",
"text": "If the emoji image doesn't get loaded then the alt gets misaligned with the rest of the text:\r\n\r\n\r\nI am not sure if it is easy/possible to fix. This issue also exists in the webapp:\r\n\r\n\r\nEmoji web PR will resolve this problem anyway.\r\n\r\nFixes: #6579.",
"title": "missed_messages: Fixes misalignment of emojis with the text and removes the ugle inactive scrollbar.",
"type": "issue"
},
{
"action": "created",
"author": "timabbott",
"comment_id": 330896185,
"datetime": 1505922698000,
"masked_author": "username_1",
"text": "The web part of the problem will be fixed by the emoji web PR; but the emails one won't be, right, since we need to keep using individual images in the emails indefinitely, right?\r\n\r\nAnyway, this looks good, I'm happy to merge once Travis passes.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "HarshitOnGitHub",
"comment_id": 330897576,
"datetime": 1505922971000,
"masked_author": "username_0",
"text": "Yeah I should have written \"Emoji web PR will resolve the webapp problem anyway.\" :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timabbott",
"comment_id": 330923809,
"datetime": 1505928652000,
"masked_author": "username_1",
"text": "Merged, thanks @username_0!",
"title": null,
"type": "comment"
}
] | 2 | 4 | 832 | false | false | 832 | true |
twosigma/git-meta | twosigma | 181,242,717 | 66 | null | [
{
"action": "opened",
"author": "novalis",
"comment_id": null,
"datetime": 1475695346000,
"masked_author": "username_0",
"text": "",
"title": "synthetic branches: fetch/pull",
"type": "issue"
},
{
"action": "closed",
"author": "bpeabody",
"comment_id": null,
"datetime": 1480367709000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 0 | false | false | 0 | false |
zappan/node-marionette-boilerplate | null | 62,959,835 | 3 | null | [
{
"action": "opened",
"author": "Poline",
"comment_id": null,
"datetime": 1426764012000,
"masked_author": "username_0",
"text": "Help me please\r\nHere is my app.js\r\nvar express = require('express');\r\nvar http = require('http');\r\nvar path = require('path');\r\nvar config = require('config');\r\nvar log = require('libs/log')(module);\r\nvar app = express();\r\napp.engine('ejs', require('ejs-locals')); \r\napp.set('views', __dirname + '/templates');\r\napp.set('view engine', 'ejs'); \r\napp.use(express.favicon());\r\nif(app.get('env') == 'development') {\r\n app.use(express.logger('dev'));\r\n} else{\r\n app.use(express.logget('default'));\r\n}\r\napp.use(express.bodyParser());\r\napp.use(express.cookieParser());\r\napp.use(app.router);\r\napp.get('/', function(req, res, next){\r\n res.render(\"index\",{\r\n body: '<b>Hello</b>'\r\n });\r\n});\r\napp.use(express.static(path.join(__dirname + 'public')));\r\napp.use(function(err, req, res, next){\r\n if (app.get('env') == 'development'){\r\n var errorHandler = express.errorHandler();\r\n errorHandler(err, req, res, next);\r\n } else{\r\n res.send(500);\r\n }\r\n});\r\nhttp.createServer(app).listen(config.get('port'), function(){\r\n log.info('Express server listening on port ' + config.get('port'));\r\n});\r\n\r\nThis is the error\r\nExpress\r\n500 Error: Failed to lookup view \"index\"\r\nat Function.app.render (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\application.js:495:17)\r\nat ServerResponse.res.render (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\response.js:760:7)\r\nat d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\app.js:36:9\r\nat callbacks (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\router\\index.js:164:37)\r\nat param (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й 
семестр\\АИС\\registrationform\\node_modules\\express\\lib\\router\\index.js:138:11)\r\nat pass (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\router\\index.js:145:5)\r\nat Router._dispatch (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\router\\index.js:173:5)\r\nat Object.router (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\lib\\router\\index.js:33:10)\r\nat next (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\node_modules\\connect\\lib\\proto.js:190:15)\r\nat Object.cookieParser [as handle] (d:\\Пользователи\\Polina\\Documents\\Универ\\2й курс\\2й семестр\\АИС\\registrationform\\node_modules\\express\\node_modules\\connect\\lib\\middleware\\cookieParser.js:60:5)",
"title": "Express 500 Error: Failed to lookup view \"index\"",
"type": "issue"
},
{
"action": "closed",
"author": "zappan",
"comment_id": null,
"datetime": 1426782090000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "zappan",
"comment_id": 83649350,
"datetime": 1426782090000,
"masked_author": "username_1",
"text": "don't see an issue report here. closing as invalid.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 2,720 | false | false | 2,720 | false |
JuliaLang/julia | JuliaLang | 51,528,608 | 9,295 | null | [
{
"action": "opened",
"author": "Keno",
"comment_id": null,
"datetime": 1418198035000,
"masked_author": "username_0",
"text": "Now that we have the nice syntax for pairs, I think it would be convenient for\r\n```\r\na = Dict()\r\npush!(a,1=>2)\r\n```\r\nto work. At the moment we have to use\r\n```\r\npush!(a,1,2)\r\n```\r\nwhich probably should be reflected in the docs for push!, which are right now:\r\n```\r\n push!(collection, items...) → collection\r\n\r\n Insert items at the end of collection.\r\n\r\n push!([1,2,3], 4) == [1,2,3,4]\r\n```",
"title": "push! on Dicts",
"type": "issue"
},
{
"action": "created",
"author": "bicycle1885",
"comment_id": 71299741,
"datetime": 1422073261000,
"masked_author": "username_1",
"text": "@username_2 \r\nI cherry-picked your half fix and added varargs support and some tests.\r\nBut it breaks very rare (I hope) backward compatibility: `push!`ing a pair of `Pair` key and `Pair` value.\r\n\r\nOne workaround would be using explicit type parameters:\r\n```julia\r\npush!{K,V}(::Associative{K,V}, ::Pair{K,V}, ::Pair{K,V})\r\n```\r\nbut this will change the semantic of `push!`.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ivarne",
"comment_id": null,
"datetime": 1422093681000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 760 | false | false | 760 | true |
chicagopython/chipy.org | chicagopython | 165,443,344 | 137 | null | [
{
"action": "opened",
"author": "JoeJasinski",
"comment_id": null,
"datetime": 1468452001000,
"masked_author": "username_0",
"text": "I've been getting talk submissions with no way to contact the presenter.",
"title": "Presenter Email is not a required field on Talk Submission form",
"type": "issue"
},
{
"action": "created",
"author": "JoeJasinski",
"comment_id": 327678737,
"datetime": 1504757388000,
"masked_author": "username_0",
"text": "This is fixed in this pull request, ready for review: https://github.com/chicagopython/chipy.org/pull/153",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "JoeJasinski",
"comment_id": 327678757,
"datetime": 1504757400000,
"masked_author": "username_0",
"text": "@emperorcezar and @brianray ^",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "JoeJasinski",
"comment_id": 344125254,
"datetime": 1510626225000,
"masked_author": "username_0",
"text": "closed per https://github.com/chicagopython/chipy.org/pull/153",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "JoeJasinski",
"comment_id": null,
"datetime": 1510626227000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 5 | 268 | false | false | 268 | false |
DestinyItemManager/DIM | DestinyItemManager | 234,369,389 | 1,705 | null | [
{
"action": "opened",
"author": "philkernick",
"comment_id": null,
"datetime": 1496877475000,
"masked_author": "username_0",
"text": "Thanks for creating the is:triumph filter.\r\n\r\nIt is working correctly for emblems, but not for other items.\r\n\r\n1. Things it selects, but shouldn't.\r\n\r\nAll of the Titan/Hunter/Warlock Vanguard armor sets. These were introduced in RoI, and are selected by is:roi. They shouldn't be in is:aot.\r\n\r\n2. Things it doesn't select, but should.\r\n\r\nThe \"* of Triumph\" armor sets that drop from Treasure of Ages boxes.\r\nThe Treasure of Ages boxes themselves.\r\nThe \"* of Legends\" class items that drop from the AoT speaker quest.\r\nAge of Triumph ornaments.\r\nThe new ornaments, ships and shaders that come from the treasure boxes. See the Rewards (New Items) section here: https://www.destinygamewiki.com/wiki/Treasure_of_Ages\r\n\r\n3. Things that I think should also be in the is:aot filter:\r\n\r\nAll of the elemental exotic primaries from the raids.\r\nThe legendary raid weapons that drop from the updated raids.\r\nThe raid armor sets that drop from the updated raids.\r\nThe new raid sparrows and ghosts that drop from the updated raids.",
"title": "Age of Triumph filter needs additional sources",
"type": "issue"
},
{
"action": "created",
"author": "delphiactual",
"comment_id": 306953378,
"datetime": 1496877706000,
"masked_author": "username_1",
"text": "not gonna happen",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bhollis",
"comment_id": 307462898,
"datetime": 1497032356000,
"masked_author": "username_2",
"text": "@username_1 what's left for this after your PR?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "delphiactual",
"comment_id": 307502028,
"datetime": 1497043024000,
"masked_author": "username_1",
"text": "@username_2 I'll have to check the items listed in number 2 and add some of them to the missing sources",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "delphiactual",
"comment_id": 307516941,
"datetime": 1497048379000,
"masked_author": "username_1",
"text": "complete with https://github.com/DestinyItemManager/DIM/pull/1706/commits/ee9f7833f049bc983f83999c60e7afbfe2e2c4d7",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "delphiactual",
"comment_id": null,
"datetime": 1497048379000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 6 | 1,299 | false | false | 1,299 | true |
spring-projects/spring-boot | spring-projects | 163,962,075 | 6,333 | null | [
{
"action": "opened",
"author": "jkirsch",
"comment_id": null,
"datetime": 1467758309000,
"masked_author": "username_0",
"text": "This is a feature request.\r\n\r\nUsing an embedded instance of elastic search that should be used by hibernate search in a spring boot autoconfigured environment exhibits a race condition.\r\nElastic search starts in the background and when hibernate gets initialized it tries to connect to elastic search, which it is not up yet, thus the app fails to start.\r\n\r\nHow to reproduce\r\n\r\nUsing spring boot `1.4.0.M3` and the following additional JPA dependencies. Assuming `@Field` annotations on the jpa entity classes.\r\n\r\n```xml\r\n<dependency>\r\n <groupId>org.springframework.boot</groupId>\r\n <artifactId>spring-boot-starter-data-elasticsearch</artifactId>\r\n <version>1.4.0.M3</version>\r\n</dependency>\r\n\r\n<dependency>\r\n <groupId>org.hibernate</groupId>\r\n <artifactId>hibernate-search-backend-elasticsearch</artifactId>\r\n <version>5.6.0.Alpha3</version>\r\n</dependency\r\n```\r\nUsing the following properties\r\n\r\n```\r\nspring.data.elasticsearch.properties.http.enabled= true\r\nspring.jpa.properties.hibernate.search.default.indexmanager= elasticsearch\r\n```\r\n\r\nIt only works, when `index_management_strategy` is set to [none](https://github.com/hibernate/hibernate-search/blob/f78306e8c5b1b8efe6069aa129af5fcb58f045fb/documentation/src/main/asciidoc/elasticsearch-integration.asciidoc#configuration) which skips the initial index configuration. But this means during hibernate setup nothing is send to elastic search, only later when new entities are created. At that time elastic search has finished loading and it works. But unfortunately, because no index has been configured, it uses the default index, which cannot be configured from the properties, thus misses any special `Analyzer` or `Tokenizer` set on the entity classes.\r\n\r\n### Why would one want that?\r\nIt is nice to configure everything together for integration tests. 
Furthermore, I'm deploying as a fat jar into openshift, so having only one jar to manage eases deployment - instead of starting elastic search separately.\r\n\r\n### The problem\r\nThe problem comes from the background initialization of elastic search, it finishes initialization, even though its not up yet.\r\n\r\n### How to solve\r\nNot sure, I posted the question on [stackoverflow](http://stackoverflow.com/questions/37930019/spring-boot-hibernate-search-elastic-search-embedded-fails-to-start) without success.\r\nPotentially there should be some form of a busy wait on elastic search being up (responds to host:9200) during the autoconfiguration process",
"title": "Better support for hibernate search with embedded elastic search",
"type": "issue"
},
{
"action": "created",
"author": "wilkinsona",
"comment_id": 230711593,
"datetime": 1467794112000,
"masked_author": "username_1",
"text": "@username_0 Can you please provide a small sample that reproduces the problem? I've tried modifying our Data JPA sample but have been unable to cause it to fail to start.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jkirsch",
"comment_id": 230724592,
"datetime": 1467797681000,
"masked_author": "username_0",
"text": "Sure, \r\nI'll look into it. To get it to fail, one needs to set the `index_management_strategy` to for example `CREATE_DELETE`\r\n`spring.jpa.properties.hibernate.search.default.indexmanager.elasticsearch=CREATE_DELETE`",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jkirsch",
"comment_id": 230866037,
"datetime": 1467830526000,
"masked_author": "username_0",
"text": "Please find attached a small demo project, that exhibits the problem\r\n\r\n[demo.zip](https://github.com/spring-projects/spring-boot/files/350664/demo.zip)\r\n\r\nI also upgraded to spring boot 1.4 RC1, but it has the same error.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "wilkinsona",
"comment_id": 231037119,
"datetime": 1467885951000,
"masked_author": "username_1",
"text": "@username_0 Thanks for the sample, I've now reproduced the problem.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "wilkinsona",
"comment_id": 231061324,
"datetime": 1467893891000,
"masked_author": "username_1",
"text": "There's no race condition here as far as I can tell, it's a simple ordering problem.\r\n\r\nThe problem is that there's no dependency between the `EntityManagerFactory` bean and the Elasticsearch `Client` bean so there's no guarantee that Elasticsearch will start before Hibernate. As it happens, Hibernate starts first and then fails to connect to Elasticsearch. \r\n\r\nThis can be fixed by setting up a dependency between the two beans. An easy way to do that is with a subclass of `EntityManagerFactoryDependsOnPostProcessor`:\r\n\r\n```java\r\n@Configuration\r\nstatic class ElasticsearchJpaDependencyConfiguration extends EntityManagerFactoryDependsOnPostProcessor {\r\n\r\n\tpublic ElasticsearchJpaDependencyConfiguration() {\r\n\t\tsuper(\"elasticsearchClient\");\r\n\t}\r\n\r\n}\r\n```\r\n\r\nWith this change in place, Elasticsearch starts before Hibernate. Unfortunately, there's still a failure:\r\n\r\n```\r\nCaused by: org.hibernate.search.exception.SearchException: HSEARCH400007: Elasticsearch request failed.\r\n Request:\r\n========\r\nOperation: \r\nData:\r\nnull\r\nResponse:\r\n=========\r\nStatus: 408\r\nError message: 408 Request Timeout\r\n\r\n\r\n\tat org.hibernate.search.backend.elasticsearch.client.impl.JestClient.executeRequest(JestClient.java:89) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.backend.elasticsearch.client.impl.JestClient.executeRequest(JestClient.java:80) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.backend.elasticsearch.impl.ElasticsearchIndexManager.waitForIndexCreation(ElasticsearchIndexManager.java:189) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.backend.elasticsearch.impl.ElasticsearchIndexManager.createIndex(ElasticsearchIndexManager.java:174) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat 
org.hibernate.search.backend.elasticsearch.impl.ElasticsearchIndexManager.initializeIndex(ElasticsearchIndexManager.java:154) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.backend.elasticsearch.impl.ElasticsearchIndexManager.setSearchFactory(ElasticsearchIndexManager.java:143) ~[hibernate-search-backend-elasticsearch-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.indexes.impl.IndexManagerHolder.setActiveSearchIntegrator(IndexManagerHolder.java:188) ~[hibernate-search-engine-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.engine.impl.MutableSearchFactoryState.setActiveSearchIntegrator(MutableSearchFactoryState.java:225) ~[hibernate-search-engine-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.spi.SearchIntegratorBuilder.buildNewSearchFactory(SearchIntegratorBuilder.java:226) ~[hibernate-search-engine-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.spi.SearchIntegratorBuilder.buildSearchIntegrator(SearchIntegratorBuilder.java:117) ~[hibernate-search-engine-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.search.hcore.impl.HibernateSearchSessionFactoryObserver.sessionFactoryCreated(HibernateSearchSessionFactoryObserver.java:75) ~[hibernate-search-orm-5.6.0.Alpha3.jar:5.6.0.Alpha3]\r\n\tat org.hibernate.internal.SessionFactoryObserverChain.sessionFactoryCreated(SessionFactoryObserverChain.java:35) ~[hibernate-core-5.0.9.Final.jar:5.0.9.Final]\r\n\tat org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:530) ~[hibernate-core-5.0.9.Final.jar:5.0.9.Final]\r\n\tat org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:444) ~[hibernate-core-5.0.9.Final.jar:5.0.9.Final]\r\n\tat org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:879) ~[hibernate-entitymanager-5.0.9.Final.jar:5.0.9.Final]\r\n\t... 
21 common frames omitted\r\n```\r\n\r\n`ElasticsearchIndexManager.waitForIndexCreation` waits for the health to be green. With the configuration in the sample application it only ever reaches yellow:\r\n\r\n```\r\n2016-07-07 13:14:13.806 INFO 15865 --- [pdateTask][T#1]] org.elasticsearch.cluster.metadata : [Freedom Ring] [com.example.model.person] creating index, cause [api], templates [], shards [5]/[1], mappings []\r\n2016-07-07 13:14:13.848 INFO 15865 --- [pdateTask][T#1]] o.e.cluster.routing.allocation : [Freedom Ring] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[com.example.model.person][4]] ...]).\r\n```\r\n\r\nI don't know enough about Elasticsearch and Hibernate Search to know why the cluster doesn't become green. In our Elasticsearch sample the cluster does become green so I suspect it's a side-effect of how Hibernate Search configures Elasticsearch.\r\n\r\n@username_0 Does this give you what enough to get things going?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jkirsch",
"comment_id": 231154128,
"datetime": 1467913542000,
"masked_author": "username_0",
"text": "Thanks so much for looking into this.\r\nIndeed the ordering constraint is what I was looking for, as originally described on stack overflow - didn't know there is such a simple fix available.\r\n\r\nI updated the sample and was able to run it.\r\n\r\nThe reason for status yellow, is that there are no replicas defined in the cluster [doc](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) . Which is understandable, as this is a single node deployment.\r\n\r\nThis can be easily fixed, by setting the number of replicas to `0` instead of the default `1`\r\n\r\n```properties\r\nspring.data.elasticsearch.properties.index.number_of_replicas=0\r\n```\r\n\r\nFeels like this should be documented somewhere. \r\nOnce more, thanks for quickly looking into this :+1:",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "philwebb",
"comment_id": 231190438,
"datetime": 1467921705000,
"masked_author": "username_2",
"text": "@username_1 I'm tempted to add `ElasticsearchJpaDependencyConfiguration` into 1.4. What do you think?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "wilkinsona",
"comment_id": 231195909,
"datetime": 1467922984000,
"masked_author": "username_1",
"text": "I'm not sure... It's only necessary when Hibernate Search is being used and it's configured to use Elasticsearch as its index manager. I wonder if it's possible that setting up the dependency would do some harm when that's not the case? I can't think of a case where it would, but I'm a bit wary as I can recall a problem with one of the caches we support where it can either depend on Hibernate or Hibernate can depend on it depending on how it's being used.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "philwebb",
"comment_id": 231197652,
"datetime": 1467923378000,
"masked_author": "username_2",
"text": "OK, we'll just make add a general HowTo about `EntityManagerFactoryDependsOnPostProcessor` and use this as the example.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "wilkinsona",
"comment_id": null,
"datetime": 1468408833000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 11 | 9,342 | false | false | 9,342 | true |
ClusterHQ/flocker | ClusterHQ | 43,347,682 | 751 | null | [
{
"action": "opened",
"author": "dwgebler",
"comment_id": null,
"datetime": 1411338579000,
"masked_author": "username_0",
"text": "**Replaced by** https://clusterhq.atlassian.net/browse/FLOC-751\n\nFlocker does not support the fig build directive, used to specify a path to a Dockerfile and build that image, instead of using an existing image, for the deployment of an application.\r\n\r\nhttp://www.fig.sh/yml.html",
"title": "Support Fig directive: build",
"type": "issue"
},
{
"action": "created",
"author": "adamtheturtle",
"comment_id": 69352151,
"datetime": 1420818576000,
"masked_author": "username_1",
"text": "We are moving our development planning to JIRA. This issue is now being tracked at https://clusterhq.atlassian.net/browse/FLOC-751. You are welcome to file additional issues in GitHub if that's easier for you.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "adamtheturtle",
"comment_id": null,
"datetime": 1420818576000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 488 | false | false | 488 | false |
containerd/containerd | containerd | 278,296,287 | 1,842 | {
"number": 1842,
"repo": "containerd",
"user_login": "containerd"
} | [
{
"action": "opened",
"author": "dmcgowan",
"comment_id": null,
"datetime": 1512081856000,
"masked_author": "username_0",
"text": "Updated to use labels to retain snapshot and a unique key to prevent interference with parallel tests.",
"title": "Fix close twice test to retain snapshots",
"type": "issue"
},
{
"action": "created",
"author": "jessvalarezo",
"comment_id": 348348185,
"datetime": 1512082661000,
"masked_author": "username_1",
"text": "LGTM",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "crosbymichael",
"comment_id": 348352184,
"datetime": 1512083811000,
"masked_author": "username_2",
"text": "LGTM",
"title": null,
"type": "comment"
}
] | 4 | 4 | 110 | false | true | 110 | false |
JPDSousa/ExtendedCLI | null | 263,240,160 | 9 | {
"number": 9,
"repo": "ExtendedCLI",
"user_login": "JPDSousa"
} | [
{
"action": "opened",
"author": "cwancowicz",
"comment_id": null,
"datetime": 1507231962000,
"masked_author": "username_0",
"text": "",
"title": "improved test coverage",
"type": "issue"
}
] | 2 | 2 | 0 | false | true | 0 | false |
jiasir/playback | null | 134,166,402 | 13 | null | [
{
"action": "opened",
"author": "pootow",
"comment_id": null,
"datetime": 1455679363000,
"masked_author": "username_0",
"text": "I'm using os-beta, see error bellow\r\n\r\nError: Failed to launch instance \"nn\": Please try again later [Error: Unexpected error while running command. Command: ssh 10.32.151.16 mkdir -p /var/lib/nova/instances/e8f1f147-9172-4125-9584-c083a23c3b4d Exit code: 255 Stdout: u'' Stderr: u'Host key verification failed.\\r\\n'].",
"title": "resize instance faild",
"type": "issue"
},
{
"action": "closed",
"author": "jiasir",
"comment_id": null,
"datetime": 1456281861000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "jiasir",
"comment_id": 188028358,
"datetime": 1456281861000,
"masked_author": "username_1",
"text": "Thanks for your reporting. I fixed it, please try it again.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 377 | false | false | 377 | false |
airbrake/airbrake | airbrake | 105,628,878 | 415 | null | [
{
"action": "opened",
"author": "kyrylo",
"comment_id": null,
"datetime": 1441814678000,
"masked_author": "username_0",
"text": "Due to our internal changes [the build started failing](https://circleci.com/gh/airbrake/airbrake/303), because some tests are making real HTTP requests.",
"title": "Build failure on master",
"type": "issue"
},
{
"action": "closed",
"author": "shifi",
"comment_id": null,
"datetime": 1441857026000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 153 | false | false | 153 | false |
Homebrew/homebrew-core | Homebrew | 257,846,030 | 18,087 | null | [
{
"action": "opened",
"author": "kenahoo",
"comment_id": null,
"datetime": 1505420792000,
"masked_author": "username_0",
"text": "- [x] Confirmed this is a problem with `brew install`ing one, specific Homebrew/homebrew-core formula (not cask or tap) and not every time you run `brew`? If it's a general `brew` problem please file this issue at https://github.com/Homebrew/brew/issues/new. If it's a `brew cask` problem please file this issue at https://github.com/Homebrew/caskroom/homebrew-cask/new. If it's a tap (e.g. Homebrew/homebrew-php) problem please file this issue at the tap.\r\n- [x] Ran `brew update` and retried your prior step?\r\n- [x] Ran `brew doctor`, fixed all issues and retried your prior step?\r\n- [x] Ran `brew gist-logs <formula>` (where `<formula>` is the name of the formula that failed) and included the output link?\r\n- [x] If `brew gist-logs` didn't work: ran `brew config` and `brew doctor` and included their output with your issue?\r\n\r\n\r\n```\r\n% brew config\r\nHOMEBREW_VERSION: 1.3.2\r\nORIGIN: https://github.com/Homebrew/brew.git\r\nHEAD: 751334a257d81851e68da7ab390982d4e9fdf909\r\nLast commit: 10 days ago\r\nCore tap ORIGIN: https://github.com/Homebrew/homebrew-core\r\nCore tap HEAD: c010d3fdd3b111462e0c34f3efe4a9ec3e583af7\r\nCore tap last commit: 3 hours ago\r\nHOMEBREW_PREFIX: /usr/local\r\nHOMEBREW_REPOSITORY: /usr/local/Homebrew\r\nHOMEBREW_CELLAR: /usr/local/Cellar\r\nHOMEBREW_BOTTLE_DOMAIN: https://homebrew.bintray.com\r\nCPU: octa-core 64-bit haswell\r\nHomebrew Ruby: 2.0.0-p648\r\nClang: 8.1 build 802\r\nGit: 2.11.0 => /Applications/Xcode.app/Contents/Developer/usr/bin/git\r\nPerl: /usr/bin/perl\r\nPython: /usr/local/opt/python/libexec/bin/python => /usr/local/Cellar/python/2.7.13_1/Frameworks/Python.framework/Versions/2.7/bin/python2.7\r\nRuby: /usr/bin/ruby => /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby\r\nJava: 1.8.0_112, 1.8.0_60, 1.7.0_75\r\nmacOS: 10.12.6-x86_64\r\nXcode: 8.3.3\r\nCLT: 8.3.2.0.1.1492020469\r\nX11: 2.7.11 => /opt/X11\r\n```\r\n\r\n```\r\n% brew doctor \r\nPlease note that these warnings are just used 
to help the Homebrew maintainers\r\nwith debugging if you file an issue. If everything you use Homebrew for is\r\nworking fine: please don't worry and just ignore them. Thanks!\r\n\r\nWarning: \"config\" scripts exist outside your system or Homebrew directories.\r\n`./configure` scripts often look for *-config scripts to determine if\r\nsoftware packages are installed, and what additional flags to use when\r\ncompiling and linking.\r\n\r\nHaving additional scripts in your path can confuse software installed via\r\nHomebrew if the config script overrides a system or Homebrew provided\r\nscript of the same name. We found the following \"config\" scripts:\r\n /Library/Frameworks/Python.framework/Versions/3.6/bin/python3-config\r\n /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6-config\r\n /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6m-config\r\n /Library/Frameworks/Python.framework/Versions/3.5/bin/python3-config\r\n /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5-config\r\n /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5m-config\r\n\r\nWarning: Python is installed at /Library/Frameworks/Python.framework\r\n\r\nHomebrew only supports building against the System-provided Python or a\r\nbrewed Python. In particular, Pythons installed to /Library can interfere\r\nwith other software installs.\r\n\r\nWarning: You have unlinked kegs in your Cellar\r\nLeaving kegs unlinked can lead to build-trouble and cause brews that depend on\r\nthose kegs to fail to run properly once built. Run `brew link` on these:\r\n libtensorflow\r\n```\r\n\r\n`brew doctor` does have some output, but I don't think it has anything to do with the issue here. 
Let me know if I'm mistaken.\r\n\r\nI have `source-highlight` configured as shown:\r\n\r\n```\r\n[KenMacBook-2:~] % echo $LESS\r\n-eiMqR\r\n[KenMacBook-2:~] % echo $LESSOPEN\r\n| src-hilite-lesspipe.sh %s\r\n[KenMacBook-2:~] % which src-hilite-lesspipe.sh \r\n/usr/local/bin/src-hilite-lesspipe.sh\r\n```\r\n\r\nWhen I try to view a file using `less`, I always get the error `\"source-highlight: cannot find input file anywhere outlang.map\"`. Even if I invoke `source-highlight` without any arguments, it seems to have this trouble:\r\n\r\n```\r\n[KenMacBook-2:~] % less tiles.R \r\nsource-highlight: cannot find input file anywhere outlang.map\r\n[KenMacBook-2:~] % source-highlight --failsafe --infer-lang -f esc --style-file=esc.style -i tiles.R \r\nsource-highlight: cannot find input file anywhere outlang.map\r\n[KenMacBook-2:~] % source-highlight \r\nsource-highlight: cannot find input file anywhere outlang.map\r\n```\r\n\r\nIf I manually add a `--data-dir` parameter to the `source-highlight` call, it doesn't complain anymore (and the output is colorized):\r\n\r\n```\r\n[KenMacBook-2:~] % source-highlight --failsafe --infer-lang -f esc --style-file=esc.style -i tiles.R --data-dir=/usr/local/share/source-highlight \r\nhexagon <- function(x, y, side, ...) {\r\n polygon(x=x+side/2*c(0, 1, 3, 4, 3, 1),\r\n y=y+side*sqrt(3)/2*c(0, -1, -1, 0, 1, 1),\r\n ...)\r\n}\r\n\r\nplot(3,3, type='n')\r\nhexagon(2,3,1)\r\n```\r\n\r\nHas my configuration gotten messed up? I'm inclined to think this is an issue with the brew recipe (or my configuration) rather than the underlying software.",
"title": "source-highlight: cannot find input file anywhere outlang.map",
"type": "issue"
},
{
"action": "created",
"author": "dunn",
"comment_id": 330600278,
"datetime": 1505839458000,
"masked_author": "username_1",
"text": "`outlang.map` exists in the `src` directory of the tarball, but I guess `make install` doesn't copy it to the correct location. Could you report that as a bug upstream?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ilovezfs",
"comment_id": null,
"datetime": 1505839869000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "kenahoo",
"comment_id": 331181773,
"datetime": 1506005632000,
"masked_author": "username_0",
"text": "I tried for a few days to open an account at savannah.gnu.org so I could report this, but their registration process seems hosed and I can't get through it. I could try to report *that* bug, but...",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kenahoo",
"comment_id": 361697988,
"datetime": 1517338772000,
"masked_author": "username_0",
"text": "This is still broken in source-highlight version 3.1.8_8. I seem to have successfully submitted this as a bug upstream, looks like they don't require login for submitters anymore.",
"title": null,
"type": "comment"
}
] | 3 | 5 | 5,744 | false | false | 5,744 | false |
mozilla/pdf.js | mozilla | 4,929,778 | 1,803 | null | [
{
"action": "opened",
"author": "levram",
"comment_id": null,
"datetime": 1338991349000,
"masked_author": "username_0",
"text": "### The problem\nWe tried a couple of pdfs in iOS and text selection is sometimes uncomfortable: it switches to block selection mode from character selection mode inproperly. \nUsually you can only select one line in character selection mode **at most**, and it switches to block selection mode if the selection contains a line break (more precisely one div ends and an other one starts). We also observed this behaviour when the selection contains any font style changes (because this also means a div end and a div start).\n\nUsual use case is that a user wants to select an exact whole sentence, and because of this phenomena the user simply can't (the selection contains excess/unneeded words at the beginning and at the end).\n\nDo you have any solution for our situtation?\nIf not, this is what we came up with.\n\n### A possible solution\nWe would write a paragraph recognizer algorithm which gathers visually related texts into one div (or a p tag) with proper inner and outer positioning (linebreaking, etc)\n\n#### This will have two advantages:\n\n- selection will work as intended within a paragraph\n- this might increase performance because of the less DOM elements\n\n\nWe would like your advice in this matter before we proceed any further.\n\nThanks in advance,\nViktor",
"title": "Uncomfortable textselection in iOS",
"type": "issue"
},
{
"action": "created",
"author": "twigbranch",
"comment_id": 228924107,
"datetime": 1467077440000,
"masked_author": "username_1",
"text": "Is this issue being addressed?\r\n\r\nWhenever I try to select text on an iPhone, I am able to select a word by long pressing on it, but when I try to expand the select area by dragging the iOS handles, the select area suddenly shifts to large blocks and I can no longer select by individual words.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "timvandermeij",
"comment_id": 955750058,
"datetime": 1635699640000,
"masked_author": "username_2",
"text": "Closing since this is a very old issue and many text selection improvements have been made in the meantime, including combining tags where possible.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "timvandermeij",
"comment_id": null,
"datetime": 1635699641000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 1,707 | false | false | 1,707 | false |
google/closure-compiler | google | 185,253,733 | 2,102 | null | [
{
"action": "opened",
"author": "brad4d",
"comment_id": null,
"datetime": 1477435519000,
"masked_author": "username_0",
"text": "I believe the right way to do this is to transpile the call to `super(...)` like this.\r\n```javascript\r\nclass FooError extends Error {\r\n /**\r\n * @param {string} msg\r\n * @param {string|!Error=} cause\r\n */\r\n constructor(msg, cause = undefined) {\r\n super(msg);\r\n /** @const */ this.cause = cause;\r\n }\r\n```\r\nbecomes\r\n```javascript\r\n/**\r\n * @constructor @struct\r\n * @param {string} msg\r\n * @param {string|!Error=} cause\r\n * @extends {Error}\r\n */\r\nvar FooError = function(msg, cause) {\r\n var $jscomp$tmp$error = new Error(msg);\r\n /** @const */ this.message = $jscomp$tmp$error.message;\r\n /** @const */ this.stack = $jscomp$tmp$error.stack;\r\n /** @const */ this.cause = cause;\r\n};\r\n$jscomp.inherits(FooError, Error);\r\n```\r\n\r\nUnfortunately, there's some code I've seen in the wild (angular2) that does its own workaround and will be broken by this change.\r\ne.g.\r\n```javascript\r\nclass FooError extends Error {\r\n /**\r\n * @param {string} msg\r\n * @param {string|!Error=} cause\r\n */\r\n constructor(msg, cause = undefined) {\r\n // This code assumes that super() will return a new Error object, but the spec says it should return\r\n // the (possibly new) value of this.\r\n const nativeError = super(msg);\r\n /** @const @private */ nativeError_ = nativeError;\r\n /** @const */ this.cause = cause;\r\n }\r\n // getters and setters to forward accesses to nativeError.\r\n}\r\n```",
"title": "Make extension of Error with ES6 classes work correctly.",
"type": "issue"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 256207808,
"datetime": 1477437812000,
"masked_author": "username_0",
"text": "Forgot something in my suggested transpilation.\r\nThe spec says that the value returned by `super()` must be the (possibly replaced) value of `this`.\r\nSo, if we have `const e = super(msg);`, that should transpile to\r\n```javascript\r\nvar $jscomp$tmp$error = new Error(msg);\r\n/** @const */ this.message = $jscomp$tmp$error.message;\r\n/** @const */ this.stack = $jscomp$tmp$error.stack;\r\n/** @const */ var e = this;\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mprobst",
"comment_id": 256269482,
"datetime": 1477466654000,
"masked_author": "username_1",
"text": "Ah, ES6 and Errors. I wonder if we can find a code pattern that:\r\n\r\n* allows user code to work correctly before being compiled\r\n* without creating a secondary `Error` object indirection\r\n* and gets correctly compiled by JSCompiler\r\n\r\nBy the way, it seems like this compiler transformation changes the semantics of the original JavaScript code that the user wrote. After all, according to spec, extending `Error` is supposed to work one way, and during compilation we change it here to work another way. I'd be a bit worried about unintended side effects of that, and about the semantic mismatch between compiled and non-compiled code. WDYT?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ChadKillingsworth",
"comment_id": 256342826,
"datetime": 1477487584000,
"masked_author": "username_2",
"text": "I've never seen this and I've looked. I've seen plenty of posts saying that it can't be done (which doesn't mean they are correct). Also comes up when extending `Date` and `Promise`.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 256383145,
"datetime": 1477495769000,
"masked_author": "username_0",
"text": "The ES6 spec effectively says this:\r\n- The prototype of the class being created gets propagated up the chain of super classes to the highest one.\r\n- An object using that prototype gets implicitly created and passed as `this` to the most-super class constructor. From there (barring `super()` returning some other value), all constructors will act on this object, in order.\r\n\r\nWhat the transpiled code I suggested is doing is this.\r\n- An object using the prototype of the class being created is implicitly created and passed to the child class constructor.\r\n- Because we cannot successfully pass this instance to the Error constructor for initialization, we instead create a new Error & copy the results of its initialization (the two fields) over to the object that has the correct prototype chain.\r\n\r\nThis isn't ideal but has close-enough to the same result.\r\nI'm only proposing this for the `*Error` classes, because\r\n- Inheriting from them is such a clearly desirable thing to do.\r\n- There are only a small number of well-known fields to be copied.\r\n\r\nOne thing I failed to make clear...\r\nThe example of how this is currently written in Angular2, by saving the value returned by calling `super()`, won't work uncompiled according to the spec. The spec says that `super()` should always return `this`, which means the `this.nativeError_` it stores is a circular reference to itself. This will make all of the get and set methods that redirect to it into infinite loops.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 256388859,
"datetime": 1477496657000,
"masked_author": "username_0",
"text": "See also this advice for creating custom error types.\r\nhttps://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error#Custom_Error_Types",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "brad4d",
"comment_id": 257005320,
"datetime": 1477682253000,
"masked_author": "username_0",
"text": "Fix for this submitted internally. It will be in the next push to github.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "blickly",
"comment_id": null,
"datetime": 1477701512000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
}
] | 4 | 8 | 4,330 | false | false | 4,330 | false |
go-openapi/errors | go-openapi | 284,165,796 | 10 | null | [
{
"action": "opened",
"author": "fredbi",
"comment_id": null,
"datetime": 1513949991000,
"masked_author": "username_0",
"text": "Now go-openapi/validate insure the MultipleOf factor must be positive as required by spec [strange that it does not pops up at json-schema validation time]\r\n\r\nSuggest an additional generic message for that here.\r\n\r\nP.R following up.",
"title": "Add MultipleOf factor must be positive",
"type": "issue"
},
{
"action": "closed",
"author": "casualjim",
"comment_id": null,
"datetime": 1514304962000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 232 | false | false | 232 | false |
openshift/ansible-service-broker | openshift | 281,741,695 | 588 | null | [
{
"action": "opened",
"author": "aliok",
"comment_id": null,
"datetime": 1513170112000,
"masked_author": "username_0",
"text": "**What happened**:\r\nI tried the \"Getting Started with the Ansible Service Broker\" document here on my Mac: \r\nhttps://github.com/openshift/ansible-service-broker#getting-started-with-the-ansible-service-broker\r\n\r\n**What you expected to happen**:\r\nBeing able to start a local OpenShift cluster with ASB enabled.\r\n\r\n**How to reproduce it**:\r\nTry this on a Mac:\r\n```\r\nwget https://raw.githubusercontent.com/openshift/ansible-service-broker/master/scripts/run_latest_build.sh\r\nchmod +x run_latest_build.sh\r\n./run_latest_build.sh\r\n```\r\n\r\n#### More details:\r\n\r\n* Command `ip` doesn't exist on a Mac: https://github.com/openshift/ansible-service-broker/blob/master/scripts/run_latest_build.sh#L36\r\nI modified the file manually to solve this problem.",
"title": "Getting started guide doesn't work on macOS",
"type": "issue"
},
{
"action": "created",
"author": "rthallisey",
"comment_id": 351734436,
"datetime": 1513263489000,
"masked_author": "username_1",
"text": "@username_0 can you provide the command you used on mac?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rthallisey",
"comment_id": 356338949,
"datetime": 1515515740000,
"masked_author": "username_1",
"text": "@jwmatthews or @username_2 can either of you try this on you mac and update the docs if there's an issue?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "shawn-hurley",
"comment_id": null,
"datetime": 1515682167000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 899 | false | false | 899 | true |
mizzy/specinfra | null | 107,071,254 | 482 | {
"number": 482,
"repo": "specinfra",
"user_login": "mizzy"
} | [
{
"action": "opened",
"author": "nurse",
"comment_id": null,
"datetime": 1442522931000,
"masked_author": "username_0",
"text": "",
"title": "Fix freebsd file stat",
"type": "issue"
},
{
"action": "created",
"author": "mizzy",
"comment_id": 141345702,
"datetime": 1442552620000,
"masked_author": "username_1",
"text": "Thanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mizzy",
"comment_id": 141346095,
"datetime": 1442552721000,
"masked_author": "username_1",
"text": "Released as v2.43.7.",
"title": null,
"type": "comment"
}
] | 2 | 3 | 27 | false | false | 27 | false |
CryZe/livesplit-core | null | 204,616,693 | 10 | null | [
{
"action": "opened",
"author": "CryZe",
"comment_id": null,
"datetime": 1485963278000,
"masked_author": "username_0",
"text": "The Splits Editor should act similar to a component. So pretty much everything about it should be implemented in livesplit-core. The Splits Editor provides a state object that contains all the necessary information for visualizing it in any kind of UI.\r\n\r\nThings to clear up: Should the Splits Editor directly modify the Run object that is in use by the timer and keep a clone in case we need to cancel our progress, or should we just modify the clone in the first place and replace the Run used by the timer once the changes are approved. The former case has a lot of issues with potential race conditions and forces us to introduces reference counting and reader writer locks in most of the code we already wrote. However it models what the original LiveSplit is doing more closely.",
"title": "Implement the Splits Editor",
"type": "issue"
},
{
"action": "created",
"author": "CryZe",
"comment_id": 277272426,
"datetime": 1486134841000,
"masked_author": "username_0",
"text": "Currently sketching out the whole thing without writing any actually UI for it, to reduce the scope of the whole situation. Also for the same reason I'm not bothering with any kind of live-editing for now. It shouldn't be all too hard switching to live-editing later.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "CryZe",
"comment_id": null,
"datetime": 1515591367000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 1,051 | false | false | 1,051 | false |
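The design question raised in the livesplit-core thread — mutate the live `Run` under locks, or edit a private clone and swap it in on approval — can be sketched language-neutrally. This is a hedged illustration in Python (livesplit-core itself is Rust); `RunEditor`, its methods, and the dict-based timer are invented names, not the project's API:

```python
import copy

class RunEditor:
    """Edit a deep copy of a run; the shared run is only replaced on commit.

    Minimal sketch of the clone-and-swap approach discussed in the thread;
    names here are hypothetical, not livesplit-core's real types.
    """

    def __init__(self, timer):
        self.timer = timer
        self.draft = copy.deepcopy(timer["run"])  # work on a private clone

    def rename_segment(self, index, name):
        self.draft["segments"][index] = name      # only the draft changes

    def commit(self):
        # Atomically publish the edited run: readers only ever saw the old
        # object, so no reader-writer locking is needed while editing.
        self.timer["run"] = self.draft

    def cancel(self):
        pass  # the draft is simply discarded; live state was never touched

timer = {"run": {"segments": ["Level 1", "Level 2"]}}
editor = RunEditor(timer)
editor.rename_segment(0, "Tutorial")
assert timer["run"]["segments"][0] == "Level 1"   # live run untouched
editor.commit()
assert timer["run"]["segments"][0] == "Tutorial"  # swap happened on commit
```

The clone-and-swap variant avoids the race conditions the issue mentions, at the cost of diverging from what the original LiveSplit does.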
yansongda/pay | null | 263,020,511 | 19 | null | [
{
"action": "opened",
"author": "lifenglibao",
"comment_id": null,
"datetime": 1507186257000,
"masked_author": "username_0",
"text": "## 问题描述\r\n使用你列子中的回调方法,支付宝回调用回调地址后数据是以POST方式返回回来的,请问你的notify 函数规定了类型,导致我一直接收不到支付宝回调信息,我改为了$_POST去接收,但是这样就需要修改你封装好的文件里面的东西,所以想问一下,是我这边有什么代码没有配置上还是怎么回事\r\n\r\n## 代码\r\n<?php\r\n\r\nuse Yansongda\\Pay\\Pay;\r\nuse Illuminate\\Http\\Request;\r\nuse Zhongchuan\\Payment\\PaymentModeType;\r\nuse Zhongchuan\\Payment\\UserPaymentService;\r\n\r\nclass Alipay_notify_controller extends Payment_Base_Controller\r\n{\r\n\tpublic function __construct()\r\n {\r\n parent::__construct();\r\n $this->load->model('paymentrequest/Payment_request_model');\r\n }\r\n\r\n public function notify(Request $request)\r\n {\r\n file_put_contents('notify.txt', $request, FILE_APPEND);\r\n\r\n $pay = new Pay($this->alipay_config);\r\n\r\n if ($pay->driver('alipay')->gateway('app')->verify($request->all())) {\r\n\r\n file_put_contents('notify.txt', $request, FILE_APPEND);\r\n }\r\n }\r\n}\r\n\r\n## 报错详情\r\n\r\n<p>Severity: 4096</p>\r\n<p>Message: Argument 1 passed to Alipay_notify_controller::notify() must be an instance of Illuminate\\Http\\Request, none given</p>\r\n<p>Filename: controllers/Alipay_notify_controller.php</p>\r\n<p>Line Number: 16</p>",
"title": "支付宝回调问题",
"type": "issue"
},
{
"action": "created",
"author": "yansongda",
"comment_id": 334391774,
"datetime": 1507191182000,
"masked_author": "username_1",
"text": "示例代码中使用了 Request 类,并自动注入到了Controller 的 notify 方法中,如果没有使用这个类,您可以直接传递 $_POST 全局变量到 verify 中即可。\r\n\r\n感谢支持!!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "yansongda",
"comment_id": null,
"datetime": 1507455836000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 1,217 | false | false | 1,217 | false |
awesome-print/awesome_print | awesome-print | 221,409,141 | 309 | {
"number": 309,
"repo": "awesome_print",
"user_login": "awesome-print"
} | [
{
"action": "opened",
"author": "matrinox",
"comment_id": null,
"datetime": 1492036372000,
"masked_author": "username_0",
"text": "Bug fix: Duplicate options before mutating it\r\n\r\nChanges:\r\n- calls to `inspector.awesome(x)` are replaced with `x.ai(@options)`\r\n\r\nI noticed that ai calls `inspector.awesome` so I figured we can change it `#ai` without breaking much. I know this way is less efficient but for debugging, it's not going to be a huge concern.",
"title": "Allow any object to override awesome print via #ai",
"type": "issue"
},
{
"action": "created",
"author": "thiagofm",
"comment_id": 311883762,
"datetime": 1498720802000,
"masked_author": "username_1",
"text": "I quite like this change but I would like to hear other opnions on that.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "matrinox",
"comment_id": 312343640,
"datetime": 1498847985000,
"masked_author": "username_0",
"text": "@username_1 How do you guys do documentation? Let me know and I'll submit it as part of this PR",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "matrinox",
"comment_id": 312344618,
"datetime": 1498848229000,
"masked_author": "username_0",
"text": "I'm happy with most of the code except one: the `top_layer` option. It's been a while since I've worked on this but from what I remember, I had to add the `top_layer` option to get around everything being tagged with `<pre>` tags after the root layer. Basically, I needed to separate the first call from the subsequent calls but I couldn't do that without more understanding; this was the simplest solution I could come up with.\r\n\r\nI'm not too happy with it so I would love to hear how you guys would solve it.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 998 | false | false | 998 | true |
Jocs/aganippe | null | 275,163,055 | 1 | null | [
{
"action": "opened",
"author": "Jocs",
"comment_id": null,
"datetime": 1511101718000,
"masked_author": "username_0",
"text": "### when I write emoji name `1s\\_place\\_medio`, it will be treat as emphasize syntax.\r\n\r\n[it is a bug]\r\n\r\n### Steps to reproduce\r\n\r\n1. [input \\:1s_place_madal\\:]\r\n2. [I didn't get 🥇 ]\r\n3. [but got 1s*place*madal]",
"title": "Emoji name with `_`, will be treat as emphasize syntax",
"type": "issue"
},
{
"action": "created",
"author": "Jocs",
"comment_id": 345525702,
"datetime": 1511105896000,
"masked_author": "username_0",
"text": "H~2~O ==HELLO==",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Jocs",
"comment_id": null,
"datetime": 1511532526000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 227 | false | false | 227 | false |
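The bug above — underscores inside `:emoji_name:` tokens being consumed by emphasis parsing — is commonly fixed by substituting emoji tokens before the emphasis rules run. A minimal sketch follows; the one-entry emoji table and the deliberately naive emphasis rule are illustrations, not aganippe's actual parser:

```python
import re

# Assumed one-entry emoji table; a real implementation would ship a full map.
EMOJI = {"1st_place_medal": "\U0001F947"}

def render(text):
    # Pass 1: replace :name: tokens first, so their underscores never reach
    # the emphasis pass. Unknown names are left untouched.
    def sub_emoji(m):
        return EMOJI.get(m.group(1), m.group(0))
    text = re.sub(r":([a-z0-9_+-]+):", sub_emoji, text)
    # Pass 2: naive emphasis rule (_word_ -> <em>word</em>), for illustration.
    return re.sub(r"_([^_]+)_", r"<em>\1</em>", text)

print(render("I won :1st_place_medal: today"))  # emoji survives intact
```

Running the emoji pass first is what prevents `:1st_place_medal:` from turning into `:1st<em>place</em>medal:`.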
jMonkeyEngine/jmonkeyengine | jMonkeyEngine | 256,453,108 | 721 | {
"number": 721,
"repo": "jmonkeyengine",
"user_login": "jMonkeyEngine"
} | [
{
"action": "opened",
"author": "shadowislord",
"comment_id": null,
"datetime": 1504977846000,
"masked_author": "username_0",
"text": "",
"title": "Try to fix Travis-CI buffer overflow on JDK7",
"type": "issue"
},
{
"action": "created",
"author": "stephengold",
"comment_id": 328361634,
"datetime": 1505067784000,
"masked_author": "username_1",
"text": "Thank you!",
"title": null,
"type": "comment"
}
] | 2 | 2 | 10 | false | false | 10 | false |
istio/proxy | istio | 210,906,713 | 133 | {
"number": 133,
"repo": "proxy",
"user_login": "istio"
} | [
{
"action": "opened",
"author": "ayj",
"comment_id": null,
"datetime": 1488317250000,
"masked_author": "username_0",
"text": "",
"title": "update base debug docker image reference",
"type": "issue"
}
] | 2 | 3 | 69 | false | true | 0 | false |
ROCm-Developer-Tools/HIP | ROCm-Developer-Tools | 248,971,247 | 151 | null | [
{
"action": "opened",
"author": "merlinzone",
"comment_id": null,
"datetime": 1502271064000,
"masked_author": "username_0",
"text": "Does HIP support MPI or hip-aware-mpi?\r\nhttps://devblogs.nvidia.com/parallelforall/introduction-cuda-aware-mpi/",
"title": "Does HIP support MPI",
"type": "issue"
},
{
"action": "created",
"author": "gargrahul",
"comment_id": 484678774,
"datetime": 1555619353000,
"masked_author": "username_1",
"text": "@username_0 Yes, it should. Please share further details if you are seeing any issues.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gargrahul",
"comment_id": null,
"datetime": 1555619354000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "andrewcorrigan",
"comment_id": 785152961,
"datetime": 1614180213000,
"masked_author": "username_2",
"text": "Is there an API available to detect support? With CUDA, I have a check like the following to detect CUDA-aware MPI. What would the equivalent be for detecting HIP-aware MPI?\r\n```\r\n#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT\r\n if(MPIX_Query_cuda_support() == 1)\r\n {\r\n std::cout << \"Detected CUDA-Aware MPI\" << std::endl;\r\n }\r\n else\r\n#endif\r\n {\r\n std::cout << \"CUDA-Aware MPI is not supported\" << std::endl;\r\n }\r\n```\r\n\r\nSorry if this is obvious,[ I see extensive documentation on how to do this for CUDA](https://www.open-mpi.org/faq/?category=runcuda), but am struggling to find information for HIP.",
"title": null,
"type": "comment"
}
] | 3 | 4 | 852 | false | false | 852 | true |
timothycrosley/isort | null | 108,179,273 | 347 | null | [
{
"action": "opened",
"author": "joaoponceleao",
"comment_id": null,
"datetime": 1443116687000,
"masked_author": "username_0",
"text": "Hi,\r\nI haven't managed to get this to work, wondered if there's a way.\r\nUsing isort, with the config in an editorconfig file.\r\nUse case:\r\nIn django, one ends up having multiple apps/modules inside the project. A common way to organize these, is to use an 'projectName_*' prefix (i.e. something_accounts, something_api, etc...). I would like to have all these sorted by isort within a section (first_party or named section), but the only way i've managed to do this is by referencing them explicitly one by one in editorconfig.\r\nCan i not simply do something like `first_party=something_.\\S*`?\r\nIt would make things much easier and not have update the config everytime a new project app is created.",
"title": "Wildcards for import groups",
"type": "issue"
},
{
"action": "created",
"author": "timothycrosley",
"comment_id": 143109764,
"datetime": 1443150174000,
"masked_author": "username_1",
"text": "HI @username_0, \r\n\r\nI agree this could be very useful! I'll try to make sure and implement a solution for this before the next release of isort.\r\n\r\nThanks!\r\n\r\nTimothy",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "lundberg",
"comment_id": 386966450,
"datetime": 1525673286000,
"masked_author": "username_2",
"text": "@username_0 think we have the same issue, could you try my pull request and see if the solution fits and solves your problem?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "timothycrosley",
"comment_id": null,
"datetime": 1527563250000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 4 | 994 | false | false | 994 | true |
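The wildcard-section idea discussed in this thread can be sketched with glob matching: classify each imported module against per-section patterns instead of listing every app explicitly. The `SECTION_PATTERNS` mapping and `classify` helper below are illustrative, not isort's real configuration keys:

```python
import fnmatch

# Assumed section-to-pattern mapping; in a real tool this would come from
# configuration (e.g. an editorconfig/setup.cfg section).
SECTION_PATTERNS = {
    "FIRSTPARTY": ["something_*"],   # matches something_accounts, something_api, ...
    "STDLIB": ["os", "sys", "re"],
}

def classify(module):
    """Return the first section whose glob patterns match the module name."""
    for section, patterns in SECTION_PATTERNS.items():
        if any(fnmatch.fnmatch(module, p) for p in patterns):
            return section
    return "THIRDPARTY"             # fallback for everything else

assert classify("something_accounts") == "FIRSTPARTY"
assert classify("os") == "STDLIB"
assert classify("requests") == "THIRDPARTY"
```

With this approach, new `something_*` apps are picked up without touching the config, which is exactly the maintenance burden the issue describes.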
sensu/sensu-go | sensu | 232,061,679 | 186 | null | [
{
"action": "opened",
"author": "palourde",
"comment_id": null,
"datetime": 1496075192000,
"masked_author": "username_0",
"text": "**As an operator, I can emit API keys that have no expiration so they can be used by third-party services**\r\n\r\n- [ ] The API keys are in fact JWT\r\n- [ ] The tokens have no expiration\r\n- [ ] The API keys can be revoked",
"title": "API keys",
"type": "issue"
},
{
"action": "created",
"author": "echlebek",
"comment_id": 393623714,
"datetime": 1527790215000,
"masked_author": "username_1",
"text": "@username_0 is this still valid?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "grepory",
"comment_id": 393624389,
"datetime": 1527790350000,
"masked_author": "username_2",
"text": "@username_1 I think the implementation could use further discussion.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "palourde",
"comment_id": null,
"datetime": 1544716026000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "palourde",
"comment_id": 447016327,
"datetime": 1544716026000,
"masked_author": "username_0",
"text": "Closed in favor of https://github.com/sensu/sensu-go/issues/2534",
"title": null,
"type": "comment"
}
] | 3 | 5 | 377 | false | false | 377 | true |
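The three requirements in the story above — keys are JWTs, no expiration, revocable — can be sketched with the standard library alone. This is not Sensu's implementation; the secret, the claim names, and the in-memory revocation set are all stand-ins:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"   # illustrative only; a real deployment uses a managed key
REVOKED = set()           # in-memory stand-in for a persistent revocation store

def _b64(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def issue_key(subject: str) -> str:
    """Mint a non-expiring API key as an HS256 JWT: no 'exp' claim is set."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": subject}).encode())
    signing_input = header + b"." + payload
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_key(token: str) -> bool:
    """Check the signature and that the key has not been revoked."""
    signing_input, _, sig = token.encode().rpartition(b".")
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected) and token not in REVOKED

key = issue_key("third-party-service")
assert verify_key(key)
REVOKED.add(key)          # revocation works because verification checks the set
assert not verify_key(key)
```

Because the token carries no `exp` claim, revocation has to be an explicit server-side check — which is why the story lists it as a separate requirement.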
twitter/heron | twitter | 281,283,116 | 2,618 | {
"number": 2618,
"repo": "heron",
"user_login": "twitter"
} | [
{
"action": "opened",
"author": "kramasamy",
"comment_id": null,
"datetime": 1513063625000,
"masked_author": "username_0",
"text": "",
"title": "convert git repo for bazel dockers rules to artifact",
"type": "issue"
},
{
"action": "created",
"author": "nwangtw",
"comment_id": 350970310,
"datetime": 1513064583000,
"masked_author": "username_1",
"text": "Thanks~",
"title": null,
"type": "comment"
}
] | 2 | 2 | 7 | false | false | 7 | false |
hbristow/gridengine | null | 255,560,019 | 2 | {
"number": 2,
"repo": "gridengine",
"user_login": "hbristow"
} | [
{
"action": "opened",
"author": "lpfann",
"comment_id": null,
"datetime": 1504694701000,
"masked_author": "username_0",
"text": "",
"title": "fixed dict update method for python 3 compatibility",
"type": "issue"
},
{
"action": "created",
"author": "matteoferla",
"comment_id": 980685130,
"datetime": 1638030839000,
"masked_author": "username_1",
"text": "Here is a snippet that applies during runtime the above fix (before running the `GridEngineScheduler`) without altering files and stuff.\r\nI thought I had pip-installed this module from pypi but in writing this comment here I realised it was pip-installed from git, so I should have simply installed the fork whence the merge request came... Oh well.\r\n\r\n```\r\ndef monkeypatch():\r\n # patch as in https://github.com/hbristow/gridengine/pull/2\r\n import inspect, gridengine\r\n from gridengine import job, settings\r\n old = 'resources = dict(self.resources.items() + resources.items())'\r\n new = 'resources = resources.update(self.resources) if resources else self.resources'\r\n old_code = inspect.getsource(gridengine.GridEngineScheduler.schedule)\r\n new_code = old_code.replace(old, new)\r\n exec('class Dummy(gridengine.GridEngineScheduler):\\n'+new_code, globals())\r\n gridengine.GridEngineScheduler = Dummy\r\n gridengine.schedulers.GridEngineScheduler = Dummy\r\nmonkeypatch()\r\n```",
"title": null,
"type": "comment"
}
] | 2 | 2 | 998 | false | false | 998 | false |
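The runtime patch in the comment above rewrites the method source with `exec`; a generally more robust pattern is to override just the broken method in a subclass and rebind the name. Sketched below on a stand-in class so it runs without gridengine installed — the class and attribute names mirror the thread but are not the library's real internals:

```python
class GridEngineScheduler:          # stand-in for gridengine's class
    resources = {}

    def schedule(self, resources=None):
        # Stands in for the Python-2-only line the PR fixes:
        #   resources = dict(self.resources.items() + resources.items())
        raise TypeError("py2-only dict concatenation")

class PatchedScheduler(GridEngineScheduler):
    resources = {"mem": "1G"}

    def schedule(self, resources=None):
        # Python-3-safe merge: copy our defaults, then layer the caller's on top.
        merged = dict(self.resources)
        if resources:
            merged.update(resources)
        return merged

# Rebind wherever the rest of the code looks the class up, instead of
# regenerating source text with exec:
scheduler_cls = PatchedScheduler
assert scheduler_cls().schedule({"cpus": 2}) == {"mem": "1G", "cpus": 2}
```

Subclassing keeps the patch inspectable and survives upstream formatting changes, which the string-replacement approach does not.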
mzgoddard/preact-render-spy | null | 239,543,092 | 15 | {
"number": 15,
"repo": "preact-render-spy",
"user_login": "mzgoddard"
} | [
{
"action": "opened",
"author": "gnarf",
"comment_id": null,
"datetime": 1498754627000,
"masked_author": "username_0",
"text": "I've seen this before with some libs using display names like`wrapped(Component)` when you create a wrapped component",
"title": "Allow searching for components by displayName even if lowercased",
"type": "issue"
},
{
"action": "created",
"author": "mzgoddard",
"comment_id": 312037269,
"datetime": 1498757521000,
"masked_author": "username_1",
"text": "👍",
"title": null,
"type": "comment"
}
] | 2 | 2 | 118 | false | false | 118 | false |
nunit/nunit | nunit | 231,767,747 | 2,195 | null | [
{
"action": "opened",
"author": "jnm2",
"comment_id": null,
"datetime": 1495851337000,
"masked_author": "username_0",
"text": "Right now I'm not seeing an API that allows this. I would like to be able to write `Contains.Substring(\"str\").Using(StringComparison.OrdinalIgnoreCase)`.",
"title": "Contains.Substring with custom StringComparison",
"type": "issue"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311145424,
"datetime": 1498502289000,
"masked_author": "username_0",
"text": "@nunit/framework-team Any objections to adding this to the backlog?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CharliePoole",
"comment_id": 311154910,
"datetime": 1498504526000,
"masked_author": "username_1",
"text": "For an easyfix, I would suggest telling the possible contributor where the fix needs to be made in our source code.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CharliePoole",
"comment_id": 311155169,
"datetime": 1498504590000,
"masked_author": "username_1",
"text": "Although, easyfix for an item not on the backlog means it will show up in searches, possibly confusing people who don't see the pipelines.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311159866,
"datetime": 1498505809000,
"masked_author": "username_0",
"text": "Yeah, okay. For the time being I won't apply easyfix without it being approved for implementation, even if I think it is an easy fix.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ChrisMaddock",
"comment_id": 311275816,
"datetime": 1498548356000,
"masked_author": "username_2",
"text": "No objection from me",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rprouse",
"comment_id": 311837108,
"datetime": 1498699842000,
"masked_author": "username_3",
"text": "Speaking earlier of feature creep, this is one feature that I think could be useful to a wider audience, so I am okay with the idea 👍",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311849403,
"datetime": 1498705366000,
"masked_author": "username_0",
"text": "Also, [`string.Contains(string, StringComparison)` is being added](https://github.com/dotnet/corefx/issues/20846) to .NET Core.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311850538,
"datetime": 1498705892000,
"masked_author": "username_0",
"text": "Notes for implementers:\r\n\r\n[`StringConstraint`](https://github.com/nunit/nunit/blob/master/src/NUnitFramework/framework/Constraints/StringConstraint.cs) needs a `Using(StringComparison)` method, very similar to the `CaseInsensitive` property, and the `Matches` implementation in every subclass needs to use it if specified.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311850933,
"datetime": 1498706070000,
"masked_author": "username_0",
"text": "Ooh, quick design question that just occurred to me: I think we should throw (and document) an `InvalidOperationException` if you use _both_ `.Using(StringComparison) and `.IgnoreCase` in either order... unless we want to define some fancy rules. Either way what I don't think we should do is have one silenty override the other because it will be misleading to anyone reading the code.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CharliePoole",
"comment_id": 311858695,
"datetime": 1498710079000,
"masked_author": "username_1",
"text": "@username_0 Re design issue: I think we have to do whatever we already do in other classes that allow both `.Using` and `.IgnoreCase`. If we don't it will be very confusing for users.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 311862296,
"datetime": 1498712113000,
"masked_author": "username_0",
"text": "@username_1 I didn't think we had any, but let's follow precedent.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "CharliePoole",
"comment_id": 311863046,
"datetime": 1498712522000,
"masked_author": "username_1",
"text": "EqualConstraint, for example, allows both.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MhdTlb",
"comment_id": 312404188,
"datetime": 1498874752000,
"masked_author": "username_4",
"text": "Can I work on this feature?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jnm2",
"comment_id": 312405515,
"datetime": 1498876613000,
"masked_author": "username_0",
"text": "@username_4 Awesome, you are welcome to! Let us know if you need anything.\r\nWe'll also send an invitation so you can be assigned to this issue.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MhdTlb",
"comment_id": 312466311,
"datetime": 1498961565000,
"masked_author": "username_4",
"text": "I pushed required changes under new branch username_4/nunit/issue-2195",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rprouse",
"comment_id": 317576146,
"datetime": 1500936274000,
"masked_author": "username_3",
"text": "Fixed by #2294",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "rprouse",
"comment_id": null,
"datetime": 1500936275000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
}
] | 5 | 18 | 2,128 | false | false | 2,128 | true |
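For comparison outside C#, the semantics requested in this thread — substring containment under a chosen case sensitivity — look like this in Python. `casefold` approximates `OrdinalIgnoreCase` for most text; this is an illustration, not NUnit's implementation:

```python
def contains_substring(haystack: str, needle: str, ignore_case: bool = False) -> bool:
    """Substring containment, optionally case-insensitive.

    Mirrors the Contains.Substring(...).IgnoreCase idea from the thread;
    casefold() handles cases plain lower() misses (e.g. German eszett).
    """
    if ignore_case:
        return needle.casefold() in haystack.casefold()
    return needle in haystack

assert contains_substring("Hello World", "world", ignore_case=True)
assert not contains_substring("Hello World", "world")
```

The design question from the thread applies here too: passing both a comparison mode and a separate ignore-case flag invites conflicting settings, which is why the discussion leaned on following `EqualConstraint`'s precedent.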
bcit-ci/codeigniter3-translations | bcit-ci | 199,350,702 | 347 | {
"number": 347,
"repo": "codeigniter3-translations",
"user_login": "bcit-ci"
} | [
{
"action": "created",
"author": "jim-parry",
"comment_id": 271075642,
"datetime": 1483785166000,
"masked_author": "username_0",
"text": ":)",
"title": null,
"type": "comment"
}
] | 2 | 2 | 111 | false | true | 2 | false |
chef-cookbooks/httpd | chef-cookbooks | 137,860,982 | 68 | null | [
{
"action": "opened",
"author": "istopopoki",
"comment_id": null,
"datetime": 1456921542000,
"masked_author": "username_0",
"text": "Hi,\r\nUnless I've missed something, I don't see a way to specify the content of a module .conf file through httpd_module. Would a 'source' parameter (similar to httpd_config) make sense ?\r\nSomething like:\r\n\r\nhttpd_module 'wsgi' do\r\n action :create\r\n source 'wsgi.conf.erb'\r\nend\r\n\r\nThanks !",
"title": "module file content",
"type": "issue"
},
{
"action": "created",
"author": "iennae",
"comment_id": 197173486,
"datetime": 1458110182000,
"masked_author": "username_1",
"text": "For the module resource it uses the filename parameter. If you look at the README though:\r\n\r\nfilename - The filename of the shared object to be rendered into the load config snippet. This can usually be omitted, and defaults to a generated value looked up in an internal map.\r\n\r\nCheck out the module resources in the libraries directory.\r\n\r\nBased on the template name you are using though 'wsgi.conf.erb' that looks like you are wanting to also use the config resource which handles the configuration aspects of Apache. With that resource you have additional parameters:\r\n\r\n* config_name - The name of the config on disk\r\n* cookbook - The cookbook that the source template is found in. Defaults to the current cookbook.\r\n* httpd_version - Used to calculate the configuration's disk path. Defaults to the platform's native Apache version.\r\n* instance - The httpd_service instance the config is meant for. Defaults to 'default'\r\n* source - The ERB format template source used to render the file.\r\n* variables - A hash of variables passed to the underlying template resource\r\n\r\nOf special interest here is that your wrapper cookbook would be where you have your templates defined so you don't need to fork the httpd cookbook.\r\n\r\nLet me know if this explains your question or if you need additional help.\r\n\r\nThanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "iennae",
"comment_id": 199885451,
"datetime": 1458663138000,
"masked_author": "username_1",
"text": "If you need additional help, please do reopen this issue. For now, I'm closing this issue. Thanks!",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "iennae",
"comment_id": null,
"datetime": 1458663138000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 1,699 | false | false | 1,699 | false |
react-component/menu | react-component | 281,447,170 | 112 | {
"number": 112,
"repo": "menu",
"user_login": "react-component"
} | [
{
"action": "opened",
"author": "evgenykochetkov",
"comment_id": null,
"datetime": 1513095897000,
"masked_author": "username_0",
"text": "Addresses one of the problems described in #108",
"title": "Allow passing custom placements config",
"type": "issue"
},
{
"action": "created",
"author": "yesmeck",
"comment_id": 354257722,
"datetime": 1514453116000,
"masked_author": "username_1",
"text": "@username_2 帮忙 review 下\r\n\r\ncc @username_3",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "benjycui",
"comment_id": 354708531,
"datetime": 1514874084000,
"masked_author": "username_2",
"text": "Code is OK, just update the API name.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "valleykid",
"comment_id": 364729041,
"datetime": 1518333464000,
"masked_author": "username_3",
"text": "Thanks for your PR, I'll merge asap.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "afc163",
"comment_id": 384529061,
"datetime": 1524724410000,
"masked_author": "username_4",
"text": "Conflict",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "picodoth",
"comment_id": 390461035,
"datetime": 1526797872000,
"masked_author": "username_5",
"text": "@username_0 will you be able to do a rebase? Let's close this one soon if possible, thanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "borisrorsvort",
"comment_id": 394282139,
"datetime": 1528102415000,
"masked_author": "username_6",
"text": "@username_5 Would be nice to update the examples page with an example similar to this one https://github.com/react-component/menu/issues/122#issuecomment-389991276",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "picodoth",
"comment_id": 395951482,
"datetime": 1528533338000,
"masked_author": "username_5",
"text": "seeing https://github.com/react-component/menu/pull/156",
"title": null,
"type": "comment"
}
] | 8 | 10 | 1,124 | false | true | 480 | true |
MongoEngine/mongoengine | MongoEngine | 183,103,016 | 1,388 | null | [
{
"action": "opened",
"author": "BenCoDev",
"comment_id": null,
"datetime": 1476465044000,
"masked_author": "username_0",
"text": "How could it be easier to integration custom field validation with current field validation ?\r\n\r\nImplementing it with keyword validation on Field does not enable to customize ValidationError object.\r\n\r\nAre there good practices about it with the current V of mongoengine ?\r\n\r\nCheers,",
"title": "Custom validation of fields",
"type": "issue"
},
{
"action": "closed",
"author": "BenCoDev",
"comment_id": null,
"datetime": 1476522576000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "wendy0402",
"comment_id": 293738717,
"datetime": 1492041098000,
"masked_author": "username_1",
"text": "hey @username_0 , can you share your strategy for this kind of case?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "neuronist",
"comment_id": 505529935,
"datetime": 1561481563000,
"masked_author": "username_2",
"text": "@JWDobken it's a bit dangerous to implement the validate method like that.\r\n\r\nYou're overriding the `StringField` validate method completely and `name_validation` is not calling the [`StringField.validate` method](https://github.com/MongoEngine/mongoengine/blob/master/mongoengine/fields.py#L91-L102) with super. This means that you can add a name bigger than 255 characters because you completely ignored that check. If you want to try it for yourself go ahead and do `mydoc = MyDoc(name=\"a\"*300)` it runs and saves when it shouldn't. \r\n\r\nI found the [clean method approach](http://docs.mongoengine.org/guide/document-instances.html?highlight=clean%20document#pre-save-data-validation-and-cleaning) enough for my use case. Your example would look like this and now it also won't allow for names bigger than 255 characters and it should. I know this is not a specific field check but you can still encapsulate the checking of each field in a method and put them all on the clean method.\r\n\r\n```python\r\nfrom mongoengine import Document, StringField, ValidationError\r\n\r\nclass MyDoc(Document):\r\n name = StringField(min_length=3, max_length=255, required=True)\r\n \r\n @staticmethod\r\n def name_validation(name):\r\n if not name.startswith(\"a\"):\r\n raise ValidationError(\"name does not start with 'a'\")\r\n \r\n \r\n def clean(self):\r\n self.name_validation(self.name)\r\n\r\nmydoc = MyDoc(name=\"aaa\"*256)\r\n\r\n# will throw ValidationError: ValidationError (MyDoc:None) \r\n# (String value is too long: ['name'])\r\nmydoc.save() \r\n\r\n# ValidationError: ValidationError (MyDoc:None) \r\n# (name does not start with 'a': ['__all__'])\r\nmydoc = MyDoc(name=\"aaa\"*256).save()\r\n```\r\n\r\nI tried making it work with super but I wasn't sure how to access StringField validate, the closest I got was this but it isn't working, and I'm not willing to keep trying to make it work as the clean method works for me now.\r\n\r\nI hope this helps as I was also a bit 
lost when trying to find a good way to do my custom checks before saving a document. If there's a better way or someone can explain how to properly override a Field validate method please do it.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "bagerard",
"comment_id": 505568648,
"datetime": 1561487715000,
"masked_author": "username_3",
"text": "The feature isn't well know and the online doc isn't up to date (due to a bug) but any fields can take a `validation` parameter which is a callable\r\nE.g:\r\n```\r\n def _not_empty(z):\r\n if not z:\r\n raise ValidationError('cantbeempty')\r\n\r\n class Person(Document):\r\n name = StringField(validation=_not_empty)\r\n```\r\n\r\nNote that:\r\n- mongoengine>=0.18 expects the callable to raise a ValidationError\r\n- mongoengine < 0.18 expects it to return a boolean (without possibility to customize the error message, its a generic one)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "neuronist",
"comment_id": 505768689,
"datetime": 1561536538000,
"masked_author": "username_2",
"text": "Thanks for that merge request, it will be very helpful. I didn't try the validation arg because I didn't get how it was supposed to use, also reading this \"Generally this is deprecated in favor of the FIELD.validate method\" confused me and led me to find another option like clean.",
"title": null,
"type": "comment"
}
] | 4 | 6 | 3,353 | false | false | 3,353 | true |
ray-project/ray | ray-project | 218,095,611 | 415 | null | [
{
"action": "opened",
"author": "robertnishihara",
"comment_id": null,
"datetime": 1490856099000,
"masked_author": "username_0",
"text": "@username_1 uncovered the following issue.\r\n\r\nOn a 2-node cluster, start the head node with 0 CPUs, and start the other node normally.\r\n\r\nOn the head node:\r\n\r\n```\r\n./scripts/start_ray.sh --head --redis-port=6379 --num-cpus=0\r\n```\r\n\r\nOn the remote node:\r\n\r\n```\r\n./scripts/start_ray.sh --redis-address=<HEAD-NODE-IP:6379>\r\n```\r\n\r\nOn the head node, connect to the cluster and launch 200000 tasks.\r\n\r\n```python\r\nimport ray\r\n\r\nray.init(redis_address=\"<HEAD-NODE-IP:6379>\")\r\n\r\n@ray.remote\r\ndef f():\r\n pass\r\n\r\nl = [f.remote() for _ in range(200000)]\r\n```\r\n\r\nIf you do this, often not all of the tasks are executed. You can see this as follows.\r\n\r\nIf you look at the task table, e.g.,\r\n\r\n```python\r\nr = ray.worker.global_worker.redis_client\r\ntask_ids = r.keys(\"TT:*\")\r\ntask_info = [r.hgetall(task_id) for task_id in task_ids]\r\ntask_states = [info[b\"state\"] for info in task_info] # This line takes maybe 20 seconds.\r\n\r\n# Look at counts of different states.\r\n[(state, task_states.count(state)) for state in set(task_states)]\r\n```\r\n\r\nAll tasks should be in the DONE state `16`, but sometimes a ton of them are in the SCHEDULED state `2` or even the RUNNING state `8`, but nothing is in fact happening. It's almost as if the local scheduler (on the head node)'s pubsub connection to Redis which listens for tasks assigned to it has died (or something like that).\r\n\r\nI added a CHECK which checks to make sure the publish command in the redis module that assigns a task to a local scheduler is actually received by the local scheduler (we can do this because PUBLISH returns an integer equal to the number of subscribers that received the message). And this CHECK failed at one point in the computation, so I think we're FINALLY hitting the point where Redis pubsub is dropping messages (and once it drops one message, it may just kill that pubsub connection.",
"title": "Tasks not being executed when a sufficiently large number of tasks are scheduled remotely.",
"type": "issue"
},
{
"action": "created",
"author": "atumanov",
"comment_id": 290958626,
"datetime": 1491097005000,
"masked_author": "username_1",
"text": "Update:\r\nThe primary cause is overloading the redis client (local scheduler) pubsub buffer, configured by default to be 32MB size as a hard limit and only an 8MB soft-limit. If the latter persists for more than 60s (which was the case), the redis client is disconnected. Source:\r\nhttps://github.com/antirez/redis/blob/d680eb6dbdf2d2030cb96edfb089be1e2a775ac1/redis.conf\r\n\r\nTo get the configured limit:\r\n`CONFIG GET client-output-buffer-limit` inside a redis-client attached to the running redis-server.\r\n\r\nWe address this by increasing both the hard and soft limits for the redis client pubsub buffer to 128MB:\r\n`CONFIG SET client-output-buffer-limit \"normal 0 0 0 slave 268435456 67108864 60 pubsub 134217728 134217728 60\"`\r\n\r\nThe secondary cause is numerous retries triggered due to using a default redis retry data struct with unlimited retries. Over 200 thousand retries were observed at one point when issuing 1M tasks. We will separately investigate disabling retries.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "robertnishihara",
"comment_id": 292749840,
"datetime": 1491690287000,
"masked_author": "username_0",
"text": "The example described in this issue seems to work after $442, but the problem will likely still arise under extremely stressful workloads.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "robertnishihara",
"comment_id": 392297486,
"datetime": 1527382373000,
"masked_author": "username_0",
"text": "This has mostly been fixed by using more Redis shards and such.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "robertnishihara",
"comment_id": null,
"datetime": 1527382373000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 3,021 | false | false | 3,021 | true |
Microsoft/pxt | Microsoft | 194,366,218 | 889 | null | [
{
"action": "opened",
"author": "whaleygeek",
"comment_id": null,
"datetime": 1481211222000,
"masked_author": "username_0",
"text": "microbit.org-ticket: 456\r\n\r\nCustomer has reported that having a default value of 100 for L, means that the RGB equivalent is always white (as opposed to HSV which does not suffer this side effect). The UX of this is therefore slightly strange, because changing H or S does nothing with the default value for L=100.\r\n\r\nCould I suggest perhaps setting default L=50 so that then any change to any of the 3 parameters will make a visible change to the output?\r\n\r\n",
"title": "Default for HSL (L=100) always generates white",
"type": "issue"
},
{
"action": "created",
"author": "whaleygeek",
"comment_id": 265770334,
"datetime": 1481211452000,
"masked_author": "username_0",
"text": "Also the colour swatches for HSL on this page, make it obvious that this is indeed the case for L=max:\r\n(under heading Swatches/HSL)\r\n\r\nhttps://en.wikipedia.org/wiki/HSL_and_HSV",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pelikhan",
"comment_id": 265782749,
"datetime": 1481214132000,
"masked_author": "username_1",
"text": "Fixed in 0.2.5. https://github.com/Microsoft/pxt-neopixel/commit/af3a651b8e20b39316512532e7097e0cfc5748d6",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pelikhan",
"comment_id": null,
"datetime": 1481214140000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "whaleygeek",
"comment_id": 267188084,
"datetime": 1481757637000,
"masked_author": "username_0",
"text": "Great, thanks very much for your help, I'll let the customer know, they will be pleased!",
"title": null,
"type": "comment"
}
] | 2 | 5 | 968 | false | false | 968 | false |
GoogleCloudPlatform/getting-started-java | GoogleCloudPlatform | 134,790,147 | 20 | {
"number": 20,
"repo": "getting-started-java",
"user_login": "GoogleCloudPlatform"
} | [
{
"action": "opened",
"author": "lesv",
"comment_id": null,
"datetime": 1455864873000,
"masked_author": "username_0",
"text": "",
"title": "Update to non-combat runtime...",
"type": "issue"
},
{
"action": "created",
"author": "lesv",
"comment_id": 186090325,
"datetime": 1455864984000,
"masked_author": "username_0",
"text": "@username_1 PTAL",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "shun-fan",
"comment_id": 186270419,
"datetime": 1455896965000,
"masked_author": "username_1",
"text": "LGTM",
"title": null,
"type": "comment"
}
] | 2 | 3 | 18 | false | false | 18 | true |
bryankeller/BLKFlexibleHeightBar | null | 70,866,492 | 28 | {
"number": 28,
"repo": "BLKFlexibleHeightBar",
"user_login": "bryankeller"
} | [
{
"action": "opened",
"author": "liuxuan30",
"comment_id": null,
"datetime": 1429944610000,
"masked_author": "username_0",
"text": "add customizing snapping animate duration property, so whoever uses definers can define the snapping animate duration. default is 0.15 secs\r\n\r\n```Objective-C\r\n * Determines the animation duration while snapping\r\n */\r\n@property (nonatomic, assign) CGFloat snappingAnimateDuration;\r\n\r\n/**\r\n```",
"title": "add customizing snapping animate duration property",
"type": "issue"
},
{
"action": "created",
"author": "bryankeller",
"comment_id": 96336203,
"datetime": 1430032455000,
"masked_author": "username_1",
"text": "I'll take a closer look this week, but I think this will be a good change. Thank you!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "liuxuan30",
"comment_id": 96468123,
"datetime": 1430100936000,
"masked_author": "username_0",
"text": "Sure, it is a simple change. I need to snap my navi bar slower in my app. Not sure if other people want it",
"title": null,
"type": "comment"
}
] | 2 | 3 | 483 | false | false | 483 | false |
cncjs/cncjs | cncjs | 258,000,239 | 201 | {
"number": 201,
"repo": "cncjs",
"user_login": "cncjs"
} | [
{
"action": "opened",
"author": "yeyeto2788",
"comment_id": null,
"datetime": 1505472462000,
"masked_author": "username_0",
"text": "There are some things still pending to translate. I'll do it as soon as I can.\r\n\r\nRegards.",
"title": "Spanish translations",
"type": "issue"
}
] | 2 | 2 | 377 | false | true | 90 | false |
ionic-team/ionic | ionic-team | 203,487,546 | 10,195 | null | [
{
"action": "opened",
"author": "escobar5",
"comment_id": null,
"datetime": 1485464452000,
"masked_author": "username_0",
"text": "**Ionic version:** (check one with \"x\")\r\n[ ] **1.x**\r\n[X] **2.x**\r\n\r\n**I'm submitting a ...** (check one with \"x\")\r\n[X] bug report\r\n[ ] feature request\r\n[ ] support request => Please do not submit support requests here, use one of these channels: https://forum.ionicframework.com/ or http://ionicworldwide.herokuapp.com/\r\n\r\n**Current behavior:**\r\nWhen you have one <ion-slide> inside another, the length() of the parent one is wrong, the length() is returning the sum of all slides (parent and children)\r\n\r\n**Expected behavior:**\r\nThe parent slide should return only it's own count of slides.\r\n\r\n**Steps to reproduce:**\r\nOpen the plunkr, and click on the button \"Check length\", the length must be 4 but it is 16.\r\n\r\nhttp://plnkr.co/edit/wL8NXdQeqdbUayQvqWJW?p=preview",
"title": "Nested Slides, wrong length()",
"type": "issue"
}
] | 2 | 3 | 995 | false | true | 769 | false |
serenity-bdd/serenity-core | serenity-bdd | 274,858,919 | 1,031 | null | [
{
"action": "opened",
"author": "pavi200863",
"comment_id": null,
"datetime": 1510923877000,
"masked_author": "username_0",
"text": "GetDriver() function is calling initPagesObjectUsing(driver) function , which is being called again ThucydidesWebDriverSupport.initialize function also , do we require to call initPagesObjectUsing(driver) this in each getDriver() function ? \r\n\r\npublic static WebDriver getDriver() {\r\n\r\n initialize();\r\n\r\n if (webdriverManagerThreadLocal.get() == null) {\r\n return null;\r\n }\r\n\r\n WebDriver driver;\r\n\r\n if (defaultDriverType.get() != null) {\r\n driver = getWebdriverManager().getWebdriver(defaultDriverType.get());\r\n } else {\r\n driver = (getWebdriverManager().getCurrentDriver() != null) ?\r\n getWebdriverManager().getCurrentDriver() : getWebdriverManager().getWebdriver();\r\n }\r\n\r\n initPagesObjectUsing(driver);\r\n\r\n return driver;\r\n\r\n }",
"title": "multiple calls to function initPagesObjectUsing(driver);",
"type": "issue"
},
{
"action": "created",
"author": "wakaleo",
"comment_id": 345240304,
"datetime": 1510924374000,
"masked_author": "username_1",
"text": "I'm not sure to be honest; if it is causing you problems, why not write a unit test to reproduce the incorrect behaviour and propose a PR?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "wakaleo",
"comment_id": null,
"datetime": 1644678781000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 992 | false | false | 992 | false |
gigocabrera/MoneyLeash2 | null | 156,563,612 | 1 | null | [
{
"action": "opened",
"author": "yawarts",
"comment_id": null,
"datetime": 1464110664000,
"masked_author": "username_0",
"text": "Any clue on how to convert this to Firebase sdk 3.0 compatible?\r\n\r\nI've followed that tutorial without exit: https://firebase.google.com/support/guides/firebase-web#update_your_client_version_numbered",
"title": "Firebase sdk 3.0",
"type": "issue"
},
{
"action": "created",
"author": "gigocabrera",
"comment_id": 221873025,
"datetime": 1464269906000,
"masked_author": "username_1",
"text": "Looking into it...",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gigocabrera",
"comment_id": 230122146,
"datetime": 1467493732000,
"masked_author": "username_1",
"text": "Converted to Firebase sdk 3.0.2\r\n\r\n(this is still a work in progress...)",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gigocabrera",
"comment_id": null,
"datetime": 1467493732000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 290 | false | false | 290 | false |
diesel-rs/diesel | diesel-rs | 253,010,561 | 1,130 | null | [
{
"action": "opened",
"author": "konstin",
"comment_id": null,
"datetime": 1503693107000,
"masked_author": "username_0",
"text": "## Setup\r\n\r\n### Versions\r\n\r\n- **Rust:** rustc 1.21.0-nightly (8c303ed87 2017-08-20)\r\n- **Diesel:** 0.15.2\r\n- **Database:** mysql 5.7.1\r\n- **Operating System:** Ubuntu\r\n\r\n### Feature Flags\r\n\r\n- **diesel:** `[\"mysql\", \"chrono\", \"large-tables\"]`\r\n- **diesel_codegen:** `[\"mysql\"]`\r\n\r\n## Problem Description\r\n\r\nWhen trying to load a value such as `0000-00-00 00:00:00` into a NaiveDateTime, there is panic inside chrono, which is called by diesel. \r\n\r\n### Steps to reproduce\r\n\r\n * Create a table in mysql with a timestamp field and add an entry with `0000-00-00 00:00:00`\r\n * Try to load that row into a struct with a `NaiveDateTime` field.\r\n * The process will panic: `thread '<unnamed>' panicked at 'invalid or out-of-range date', /checkout/src/libcore/option.rs:819:4`\r\n\r\n### Full backtrace \r\n\r\nThis crash happened trying to read a table with multiple columns, therefore the bigger generic. I'm still sure that the problem is the `0000-00-00 00:00:00` as the crash doesn't occur after removing them.\r\n\r\n```\r\nthread '<unnamed>' panicked at 'invalid or out-of-range date', /checkout/src/libcore/option.rs:819:4\r\nstack backtrace:\r\n 0: 0x561ed0753873 - std::sys::imp::backtrace::tracing::imp::unwind_backtrace::h80d78ba3b40687b5\r\n at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49\r\n 1: 0x561ed074fa14 - std::sys_common::backtrace::_print::h47b9b32fe06dd6eb\r\n at /checkout/src/libstd/sys_common/backtrace.rs:71\r\n 2: 0x561ed0756263 - std::panicking::default_hook::{{closure}}::h006dcf643a2d1ee4\r\n at /checkout/src/libstd/sys_common/backtrace.rs:60\r\n at /checkout/src/libstd/panicking.rs:381\r\n 3: 0x561ed0755fc2 - std::panicking::default_hook::h1e56c296d63316e2\r\n at /checkout/src/libstd/panicking.rs:397\r\n 4: 0x561ed0756767 - std::panicking::rust_panic_with_hook::h218401524ff20a29\r\n at /checkout/src/libstd/panicking.rs:611\r\n 5: 0x561ed07565c4 - 
std::panicking::begin_panic::h1668556d5aa9a913\r\n at /checkout/src/libstd/panicking.rs:572\r\n 6: 0x561ed0756539 - std::panicking::begin_panic_fmt::h1ac0ef5f67ba5408\r\n at /checkout/src/libstd/panicking.rs:522\r\n 7: 0x561ed07564ca - rust_begin_unwind\r\n at /checkout/src/libstd/panicking.rs:498\r\n 8: 0x561ed078f690 - core::panicking::panic_fmt::h121b79d1b9922ab6\r\n at /checkout/src/libcore/panicking.rs:71\r\n 9: 0x561ed078f6fd - core::option::expect_failed::h297561050155cf3c\r\n at /checkout/src/libcore/option.rs:819\r\n 10: 0x561ed0698469 - <core::option::Option<T>>::expect::hdbbee987ca7eef93\r\n at /checkout/src/libcore/option.rs:302\r\n 11: 0x561ed0699413 - chrono::naive::date::NaiveDate::from_ymd::h1d300ec380bb8f85\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/chrono-0.4.0/src/naive/date.rs:162\r\n 12: 0x561ed0696ad8 - diesel::mysql::types::date_and_time::<impl diesel::types::FromSql<diesel::types::Timestamp, diesel::mysql::backend::Mysql> for chrono::naive::datetime::NaiveDateTime>::from_sql::h2cd8656b153b5a5e\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/mysql/types/date_and_time.rs:67\r\n 13: 0x561ed02f25d9 - diesel::types::impls::option::<impl diesel::types::FromSql<diesel::types::Nullable<ST>, DB> for core::option::Option<T>>::from_sql::h103452ff4257f426\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/types/impls/option.rs:40\r\n 14: 0x561ed02f255b - diesel::types::impls::date_and_time::chrono::<impl diesel::types::FromSqlRow<diesel::types::Nullable<diesel::types::Timestamp>, DB> for core::option::Option<chrono::naive::datetime::NaiveDateTime>>::build_from_row::hb283b609ed8bf430\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/types/impls/mod.rs:80\r\n 15: 0x561ed02f90b1 - diesel::types::impls::tuples::<impl diesel::types::FromSqlRow<(SA, SB, SC, SD, SE, SF, SG, SH, SI, SJ, SK, SL, SM, SN, SO, SP, SQ, SR, SS, ST, SU, 
SV), DB> for (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V)>::build_from_row::h1c1a76ea7195c494\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/types/impls/tuples.rs:44\r\n 16: 0x561ed031b60c - <diesel::mysql::connection::MysqlConnection as diesel::connection::Connection>::query_by_index::{{closure}}::h285a5d3f187b6877\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/mysql/connection/mod.rs:76\r\n 17: 0x561ed02f153a - diesel::mysql::connection::stmt::iterator::StatementIterator::map::hca914eae90b9f9d4\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/mysql/connection/stmt/iterator.rs:32\r\n 18: 0x561ed031a37a - <diesel::mysql::connection::MysqlConnection as diesel::connection::Connection>::query_by_index::h243772f5897a7a3e\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/mysql/connection/mod.rs:75\r\n 19: 0x561ed03103fe - <T as diesel::query_dsl::load_dsl::LoadQuery<Conn, U>>::internal_load::hc0dd74066aff7f0f\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/query_dsl/load_dsl.rs:22\r\n 20: 0x561ed03049ae - diesel::query_dsl::load_dsl::LoadDsl::load::hc5d00ac392b993bf\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/diesel-0.15.2/src/query_dsl/load_dsl.rs:33\r\n 21: 0x561ed032af65 - rustparl::paper_from_id::ha0948d4dfa92ae76\r\n at src/main.rs:58\r\n 22: 0x561ed032a9e9 - rustparl::rocket_route_fn_paper_from_id::hb37ad0a6f2a06613\r\n at src/main.rs:42\r\n 23: 0x561ed0547813 - rocket::rocket::Rocket::route::h7dceb302abf37427\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/rocket-0.3.2/src/rocket.rs:287\r\n 24: 0x561ed0545bbf - rocket::rocket::Rocket::dispatch::h320edf44505c98be\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/rocket-0.3.2/src/rocket.rs:223\r\n 25: 0x561ed05423ff - <rocket::rocket::Rocket as 
hyper::server::Handler>::handle::h552e9111ee31058e\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/rocket-0.3.2/src/rocket.rs:75\r\n 26: 0x561ed044730b - <hyper::server::Worker<H>>::keep_alive_loop::hba6889363156fefc\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.10.12/src/server/mod.rs:337\r\n 27: 0x561ed0448195 - <hyper::server::Worker<H>>::handle_connection::h3e27405dedee55a3\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.10.12/src/server/mod.rs:283\r\n 28: 0x561ed04cd747 - hyper::server::handle::{{closure}}::h8a5cb571581578d8\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.10.12/src/server/mod.rs:242\r\n 29: 0x561ed04cde7a - hyper::server::listener::spawn_with::{{closure}}::hd71bcd07197100bd\r\n at /home/konsti/.cargo/registry/src/github.com-1ecc6299db9ec823/hyper-0.10.12/src/server/listener.rs:50\r\n 30: 0x561ed0449417 - std::sys_common::backtrace::__rust_begin_short_backtrace::hab1fb1c24a4569cd\r\n at /checkout/src/libstd/sys_common/backtrace.rs:136\r\n 31: 0x561ed045abed - std::thread::Builder::spawn::{{closure}}::{{closure}}::h6cfb8020d4cbe3cf\r\n at /checkout/src/libstd/thread/mod.rs:394\r\n 32: 0x561ed0414117 - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::hd7f09f7017ca2311\r\n at /checkout/src/libstd/panic.rs:296\r\n 33: 0x561ed045b1cf - std::panicking::try::do_call::hc7df21488714d3f6\r\n at /checkout/src/libstd/panicking.rs:480\r\n 34: 0x561ed075d6fc - __rust_maybe_catch_panic\r\n at /checkout/src/libpanic_unwind/lib.rs:98\r\n 35: 0x561ed045b09c - std::panicking::try::hb012c214aca5b861\r\n at /checkout/src/libstd/panicking.rs:459\r\n 36: 0x561ed04591d2 - std::panic::catch_unwind::h9532f6b5946a25e8\r\n at /checkout/src/libstd/panic.rs:361\r\n 37: 0x561ed045a6c0 - std::thread::Builder::spawn::{{closure}}::h84655cdd1ab51ba4\r\n at /checkout/src/libstd/thread/mod.rs:393\r\n 38: 0x561ed04a70e8 - <F as 
alloc::boxed::FnBox<A>>::call_box::h95b9ceca574e53bd\r\n at /checkout/src/liballoc/boxed.rs:682\r\n 39: 0x561ed075569b - std::sys::imp::thread::Thread::new::thread_start::h505201887c39140f\r\n at /checkout/src/liballoc/boxed.rs:692\r\n at /checkout/src/libstd/sys_common/thread.rs:21\r\n at /checkout/src/libstd/sys/unix/thread.rs:84\r\n 40: 0x7feb7d6196d9 - start_thread\r\n 41: 0x7feb7d13cd7e - __clone\r\n 42: 0x0 - <unknown>\r\n```\r\n\r\n\r\n<!--\r\nPlease include as much of your codebase as needed to reproduce the error. If the relevant files are large, please consider linking to a public repository or a [Gist](https://gist.github.com/).\r\n\r\nPlease post as much of your database schema as necessary. If you are using `infer_schema!`, you can use `diesel print-schema` and post the relevant parts from that.\r\n-->\r\n\r\n## Checklist\r\n\r\n- [x] I have already looked over the [issue tracker](https://github.com/diesel-rs/diesel/issues) for similar issues.",
"title": "Crash with special timestamp value in mysql (`0000-00-00 00:00:00`) through chrono",
"type": "issue"
},
{
"action": "created",
"author": "killercup",
"comment_id": 325032816,
"datetime": 1503694698000,
"masked_author": "username_1",
"text": "Very good catch!\r\n\r\nLooks like an easy fix, luckily. From the stack trace: In this code\r\n\r\nhttps://github.com/diesel-rs/diesel/blob/1f8118d2b1e24288deac6d3a1cfbc3d56a125915/diesel/src/mysql/types/date_and_time.rs#L67-L71\r\n\r\nwe are calling chrono's [from_ymd](\r\nhttps://github.com/chronotope/chrono/blob/fe529c801609ea1063d901a17adba229682405ab/src/naive/date.rs#L161-L163) but could just as well call the `from_ymd_opt` method just below which would not panic.\r\n\r\nSame for `and_hms_micro` and probably many other chrono calls!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexeyzab",
"comment_id": 326297391,
"datetime": 1504186605000,
"masked_author": "username_2",
"text": "Hi there! I'd like to do this one.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "killercup",
"comment_id": 326297844,
"datetime": 1504186699000,
"masked_author": "username_1",
"text": "@username_2 great! It's yours :)\r\n\r\nIf you need any help, feel free to ask here or on <https://gitter.im/diesel-rs/diesel>!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgrif",
"comment_id": 327307915,
"datetime": 1504646696000,
"masked_author": "username_3",
"text": "Fixed by #1137",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "sgrif",
"comment_id": null,
"datetime": 1504646696000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "konstin",
"comment_id": 327309151,
"datetime": 1504646996000,
"masked_author": "username_0",
"text": "Thanks for fix! \r\n\r\nIs it right that the current solution still means that the query will fail by returning an `Error`? That would imply that it is not possible to query fields with that value, which is kind of bad in a real world scenario like the one I had where the db contains those values.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgrif",
"comment_id": 327309960,
"datetime": 1504647207000,
"masked_author": "username_3",
"text": "There's nothing else we can do here. `chrono` doesn't support dates with a day or month of 0. There is no non-error type we can return with chrono. You can either turn on the `NO_ZERO_DATE` SQL mode (in which case they will be converted to `NULL`, and you should ensure that the column is nullable to avoid errors), or you can load them into a type other than one from chrono which supports zero dates (the only such type I'm aware of is [`MYSQL_TIME`](http://docs.diesel.rs/mysqlclient_sys/type.MYSQL_TIME.html)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgrif",
"comment_id": 327310025,
"datetime": 1504647230000,
"masked_author": "username_3",
"text": "Or option C: Use PG. 😉",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "konstin",
"comment_id": 327311433,
"datetime": 1504647578000,
"masked_author": "username_0",
"text": "Thanks for the detailed answer, I see the point now.\r\n\r\nUsing PG isn't an option here as I use data from a different program that is tied to mysql.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgrif",
"comment_id": 327313741,
"datetime": 1504648197000,
"masked_author": "username_3",
"text": "The PG answer was a joke. :wink:",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "konstin",
"comment_id": 327314592,
"datetime": 1504648435000,
"masked_author": "username_0",
"text": "I got that it was joke, but sometimes jokes become solutions ([though maybe not the one you want](http://www.montulli.org/theoriginofthe%3Cblink%3Etag)), and I've got nothing against giving PG a shot",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "frol",
"comment_id": 397833529,
"datetime": 1529176553000,
"masked_author": "username_4",
"text": "This is my take on getting this weird MySQL 0000-00-00 date handled as `None` on the Rust side.\r\n\r\nI have created a custom field `MysqlNaiveDate`:\r\n\r\n```rust\r\nuse chrono;\r\nuse mysqlclient_sys;\r\n\r\nuse diesel::mysql::Mysql;\r\nuse diesel::sql_types::Date;\r\nuse diesel::deserialize::{self, FromSql};\r\n\r\n#[derive(Debug, FromSqlRow)]\r\npub struct MysqlNaiveDate(Option<chrono::NaiveDate>);\r\n\r\nimpl FromSql<Date, Mysql> for MysqlNaiveDate {\r\n fn from_sql(bytes: Option<&[u8]>) -> deserialize::Result<Self> {\r\n let mysql_time = <mysqlclient_sys::MYSQL_TIME as FromSql<Date, Mysql>>::from_sql(bytes)?;\r\n Ok(MysqlNaiveDate(\r\n if mysql_time.day == 0 && mysql_time.month == 0 && mysql_time.year == 0 {\r\n None\r\n } else {\r\n Some(\r\n chrono::NaiveDate::from_ymd_opt(\r\n mysql_time.year as i32,\r\n mysql_time.month as u32,\r\n mysql_time.day as u32,\r\n ).ok_or_else(|| format!(\"Unable to convert {:?} to chrono\", mysql_time))?\r\n )\r\n }\r\n ))\r\n }\r\n}\r\n```\r\n\r\nand use it like this:\r\n\r\n```rust\r\n#[derive(Debug, Queryable)]\r\npub struct User {\r\n pub user_id: u32,\r\n pub email: String,\r\n pub birthday: MysqlNaiveDate,\r\n}\r\n```\r\n\r\nHere is the `schema.rs` (generated for the existing DB, so I don't need to implement `ToSql` for `MysqlNaiveDate`):\r\n\r\n```rust\r\ntable! {\r\n users (user_id) {\r\n user_id -> Unsigned<Integer>,\r\n email -> Varchar,\r\n birthday -> Date,\r\n }\r\n}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mro95",
"comment_id": 567036347,
"datetime": 1576676570000,
"masked_author": "username_5",
"text": "I still get this error when i try to load a NaiveDateTime with value 0000-00-00 00:00:00 from MySQL\r\n```\r\nthread 'tokio-runtime-worker' panicked at 'Error loading prices from database for product #1057: \r\nDeserializationError(\"Cannot parse this date: st_mysql_time { year: 0, month: 0, day: 0, hour: 0, minute: 0, second: 0, second_part: 0, neg: 0, time_type: MYSQL_TIMESTAMP_DATE }\")',\r\n src/libcore/result.rs:1189:5\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "weiznich",
"comment_id": 567056072,
"datetime": 1576679577000,
"masked_author": "username_6",
"text": "@username_5 That's expected behaviour.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mro95",
"comment_id": 567065147,
"datetime": 1576680813000,
"masked_author": "username_5",
"text": "It is? In MySQL it is valid to store.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "weiznich",
"comment_id": 567065708,
"datetime": 1576680885000,
"masked_author": "username_6",
"text": "@username_5 Yes",
"title": null,
"type": "comment"
}
] | 7 | 17 | 14,224 | false | false | 14,224 | true |
zaproxy/zaproxy | zaproxy | 101,885,405 | 1,813 | null | [
{
"action": "opened",
"author": "K3y8oardCowboy",
"comment_id": null,
"datetime": 1439988629000,
"masked_author": "username_0",
"text": "When I open Zaproxy through a terminal it shows that it found a older version of java then what I currently have since I just updated to the latest version today and verified it through cmd:\"java -version\".\r\n\r\nNext part of the issue is when it finally opens the interface it shows a dialog box stating that I wont be able to use the render html feature due to JavaFX error/issue.\r\n\r\nLast possible issue when Zaproxy is after it has been open for at least 12hr's in the process of 2 active scans about 58% complete, it is extremely laggy, and sometimes it has frozen to where the whole window is gray.\r\n\r\nI had htop running while currently using Zaproxy during that session and it showed at least 4 sessions of java open using almost all of available processing power. I was wondering what do I need to edit so Zaproxy will use the current installation of Java on this box?",
"title": "Java/JavaFX error; Kali 2.0",
"type": "issue"
},
{
"action": "closed",
"author": "psiinon",
"comment_id": null,
"datetime": 1439989547000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 1,021 | false | true | 872 | false |
agmen-hu/node-datapumps | agmen-hu | 220,732,306 | 43 | {
"number": 43,
"repo": "node-datapumps",
"user_login": "agmen-hu"
} | [
{
"action": "opened",
"author": "serenitygrant",
"comment_id": null,
"datetime": 1491849353000,
"masked_author": "username_0",
"text": "",
"title": "Added the ability to log errors with a logging library.",
"type": "issue"
},
{
"action": "created",
"author": "serenitygrant",
"comment_id": 294482265,
"datetime": 1492434344000,
"masked_author": "username_0",
"text": "@username_1 Can this be merged? I want to test the logging in AWS on EC2.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "serenitygrant",
"comment_id": 294845362,
"datetime": 1492522371000,
"masked_author": "username_0",
"text": "@username_1 Check is in. It'll default to console if the logger doesn't have that function. I hope to make it more robust for different types on loggers later on as well.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "novaki",
"comment_id": 295446041,
"datetime": 1492636475000,
"masked_author": "username_1",
"text": "@username_0 I've updated the method to check logger method on call, and removed logging to console when the logger is not valid. Logging to console instead of reporting the error to the developer may lead to unexpected/difficult to debug situations. Included the logErrorsToLogger method in 0.5.1 release.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "serenitygrant",
"comment_id": 295449211,
"datetime": 1492637007000,
"masked_author": "username_0",
"text": "Makes sense! Thanks!",
"title": null,
"type": "comment"
}
] | 2 | 5 | 563 | false | false | 563 | true |
fabric8io/fabric8-planner | fabric8io | 219,817,132 | 1,522 | null | [
{
"action": "opened",
"author": "naina-verma",
"comment_id": null,
"datetime": 1491465865000,
"masked_author": "username_0",
"text": "Two scrollbars on Detail view page \r\nSteps to reproduce:\r\n- Generate plenty on comments \r\nand check that the Detail view page shows up 2 scrollbars\r\n\r\nVideo Link:\r\nhttps://drive.google.com/a/redhat.com/file/d/0B1qUq_laG6COOHZNRkNrN0NIc1U/view?usp=sharing",
"title": "Two scrollbars on Detail view page ",
"type": "issue"
},
{
"action": "created",
"author": "naina-verma",
"comment_id": 292099663,
"datetime": 1491465884000,
"masked_author": "username_0",
"text": "@username_1 can you please look into it !",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SMahil",
"comment_id": 292159328,
"datetime": 1491481946000,
"masked_author": "username_1",
"text": "@username_0 I have checked on chrome and firefox, It seems fine to me, there are no two scrollbars.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "SMahil",
"comment_id": 292159538,
"datetime": 1491481999000,
"masked_author": "username_1",
"text": "@username_0 after creating more than 20 comments refresh the page , I am unable to see only 20 comments. Rest are getting deleted",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "nimishamukherjee",
"comment_id": 294448955,
"datetime": 1492426012000,
"masked_author": "username_2",
"text": "@username_0 please verify",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "naina-verma",
"comment_id": null,
"datetime": 1492601108000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "naina-verma",
"comment_id": 295227425,
"datetime": 1492601108000,
"masked_author": "username_0",
"text": "Verified !",
"title": null,
"type": "comment"
}
] | 3 | 7 | 557 | false | false | 557 | true |
kubernetes/kops | kubernetes | 186,171,522 | 753 | null | [
{
"action": "opened",
"author": "Shrugs",
"comment_id": null,
"datetime": 1477868231000,
"masked_author": "username_0",
"text": "We wanted to migrate from a kube-up-created 1.2.x cluster to a kops-managed 1.4.0 cluster with 0 downtime. After reading the docs in [upgrade_from_k8s_12.md](https://github.com/kubernetes/kops/blob/master/docs/upgrade_from_k8s_12.md), we decided to go with the following strategy to avoid the possibility of downtime. The following steps are taken from the writeup I did internally to document our approach.\r\n\r\n1. Delegate cluster-level dns resolution to Route53 by adding appropriate NS records pointing `cluster.example.com` to Route53's Hosted Zone's nameservers.\r\n2. Create the new cluster's configuration files with kops\r\n - `kops create cluster --cloud=aws --zones=us-east-1a,us-east-1b --admin-access=12.34.56.78/32 --dns-zone=cluster.example.com --kubernetes-version=1.4.0 --node-count=14 --node-size=c3.xlarge --master-zones=us-east-1a --master-size=m4.large --vpc=vpc-123abcdef --network-cidr=172.20.0.0/16 cluster.example.com`\r\n - note that the `--network-cidr` is the cidr of the existing VPC.\r\n - Note that `kops` will propose re-naming the existing VPC but the change never occurs. We eventually manually changed it to the correct value.\r\n4. Verify that the CIDR on each of the zone subnets does not overlap with an existing subnet's.\r\n5. Verify the planned changes with `kops update cluster cluster.example.com`\r\n5. Create the cluster with `kops update cluster cluster.example.com --yes`\r\n6. Wait around for the cluster to fully come up and be available. `k get nodes` should return `(master + minions) = 15` available nodes.\r\n7. (Optional) Create the Dashboard with `kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml`\r\n8. Add the generated `nodes.cluster.example.com` security group to the resources that may need it (i.e. ElastiCache, RDS, etc)\r\n9. Deploy the same resource configuration to the new cluster.\r\n10. Wait patiently while everything happens.\r\n11. 
Test that everything \"just works\" by hitting the ELB's dns entry directly; you should be able to use the site as expected.\r\n13. To transition traffic from the old cluster to the new cluster, update the `CNAME` record for `example.com` to point to the new ELB's DNS name.\r\n - note that if you're proxying through cloudflare, changes are instantaneous because it's technically a reverse proxy and not a DNS change.\r\n - if not using cloudflare or another reverse proxy solution, you'll want to update you DNS record's TTL to a very low duration about 48 hours in advance of this change (and then change it back to the previous value once the shift has been finalized).\r\n14. Rejoice.\r\n15. Once traffic has shifted from the old cluster (relatively quickly), delete the old resources after confirming that traffic has stabilized and that no new errors are generated.\r\n\r\n## Resources\r\n\r\nThis is a list of resource the old cluster is using that should be deleted. This may not be complete.\r\n\r\n- autoscaling group (`kubernetes-minion-group-us-east-1a`)\r\n- launch config (`kubernetes-minion-group-us-east-1a` and `kubernetes-minion-group-us-east-1aCopy`)\r\n- all ec2 instances (`tag:KubernetesCluster : kubernetes`)\r\n- all associated EBS volumes (most likely released when the instances are terminated)\r\n- security groups (`tag:KubernetesCluster : kubernetes`)\r\n\r\n## Recovery/Rollback\r\n\r\nThe only part of this procedure that should affect the users actively using the site is the DNS swap, which should be relatively instantaneous because we're using Cloudflare as a reverse proxy, not just as a nameserver. In the event that we need to swap back to the old cluster, we can replace the example.com CNAME record with the original ELB's DNS name.\r\n\r\n\r\n---\r\n\r\nI'm willing to write up a PR to add this to the document about migrations from previous clusters.\r\n\r\nAre there any downsides we can surface around this approach? 
I believe kube-up also uses the route table, so we're unable to use this strategy for a cluster that requires `>25` nodes (since 50 is the artificial limitation on VPC Route Tables). If you're able to gradually transition traffic from one cluster to another (perhaps another load balancer to split traffic between clusters) you could slowly scale the new cluster up and the old cluster down to avoid hitting the 50-entry cap on the route table. Not an ideal situation, though.",
"title": "[docs] Discussing alternative cluster migration strategies",
"type": "issue"
},
{
"action": "created",
"author": "chrislovecnm",
"comment_id": 257190838,
"datetime": 1477872753000,
"masked_author": "username_1",
"text": "@justinsb what do you think?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Shrugs",
"comment_id": 258892266,
"datetime": 1478537632000,
"masked_author": "username_0",
"text": "Bumping. If this is something we want to add to the https://github.com/kubernetes/kops/blob/master/docs/upgrade_from_k8s_12.md doc, I can make a PR.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chrislovecnm",
"comment_id": 260153346,
"datetime": 1478990365000,
"masked_author": "username_1",
"text": "@username_0 been at kubecon 😀 \r\n\r\nActually, the name of that doc sucks so rename it. The content is great, make it AWESOME. When have I turned down a PR? You the man.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "chrislovecnm",
"comment_id": null,
"datetime": 1485679081000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 4,658 | false | false | 4,658 | true |
aws/aws-sdk-js | aws | 177,988,698 | 1,150 | null | [
{
"action": "opened",
"author": "somprabhsharma",
"comment_id": null,
"datetime": 1474357553000,
"masked_author": "username_0",
"text": "I am getting this error most of the times while fetching batch items from dynamodb. \r\n`1 validation error detected: Value '[]' at 'requestItems.Auth.member.keys' failed to satisfy constraint: Member must have length greater than or equal to 1`\r\n\r\nI can understand that it is a validation error means what I am querying is somewhere wrong. \r\nBut please can you explain when this error comes ? I searched about it everywhere but unable to find a reason for it. \r\n\r\nMy request looks like this-\r\n`var keys = [{uid: {'S':'sasdfa'}}, .... ,{uid: {'S':'sasdfa'}}]\r\nvar batchParams = {RequestItems: {'Auth': { Keys: keys,ConsistentRead: false}}}\r\ndynamoDb.batchGetItem(batchParams, function (err, data) {})`",
"title": "Keep getting error in batchGetItem in dynamodb",
"type": "issue"
},
{
"action": "created",
"author": "somprabhsharma",
"comment_id": 251308140,
"datetime": 1475564017000,
"masked_author": "username_0",
"text": "@AdityaManohar Please provide information on this ?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "LiuJoyceC",
"comment_id": 251526341,
"datetime": 1475618383000,
"masked_author": "username_1",
"text": "Hi @username_0 \r\nThanks for reporting the issue. I'm looking into this.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "LiuJoyceC",
"comment_id": 251834032,
"datetime": 1475712220000,
"masked_author": "username_1",
"text": "Hi @username_0 \r\nIt looks like this error is coming from the service, not from the SDK, as the SDK does not display the error you provided above for param validation. I could not reproduce the error with the example you provided above. Can you log out `this.httpResponse.body` and `this.httpResponse.statusCode` inside the callback function you supplied to `batchGetItem`? If these are defined, then the error is coming from the service's response, not from the SDK. Can you verify if that is the case?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "somprabhsharma",
"comment_id": 252162498,
"datetime": 1475821323000,
"masked_author": "username_0",
"text": "I am also not able to reproduce the issue with all possible scenarios of invalid or valid input in my dev environment. But the error is coming more than 100 times a minute in production. That is why I needed your help to determine the possible cause of this scenario.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chrisradek",
"comment_id": 252287564,
"datetime": 1475855120000,
"masked_author": "username_2",
"text": "@username_0 \r\nIs it possible that sometimes your `keys` array has a length of 0? That's what the validation error is suggesting. If you add a check to ensure that you only call `batchGetItem` when `keys.length >== 0`, do you still see these errors?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jeskew",
"comment_id": null,
"datetime": 1487875623000,
"masked_author": "username_3",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "jeskew",
"comment_id": 282083022,
"datetime": 1487875623000,
"masked_author": "username_3",
"text": "Closing due to inactivity. If checking the keys array as described in the previous comment does not resolve the original problem faced, please feel free to reopen this issue.",
"title": null,
"type": "comment"
}
] | 5 | 9 | 2,274 | false | true | 2,024 | true |
conda/conda | conda | 165,595,311 | 3,072 | {
"number": 3072,
"repo": "conda",
"user_login": "conda"
} | [
{
"action": "opened",
"author": "dorzel",
"comment_id": null,
"datetime": 1468511520000,
"masked_author": "username_0",
"text": "create empty environments when no packages are supplied in create",
"title": "WIP Create empty environments",
"type": "issue"
},
{
"action": "created",
"author": "kalefranz",
"comment_id": 232720699,
"datetime": 1468514290000,
"masked_author": "username_1",
"text": "Will definitely need at least one test in `test_create` for this. Read through the tests that are currently there for examples and inspiration.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dorzel",
"comment_id": 233006313,
"datetime": 1468601575000,
"masked_author": "username_0",
"text": "@username_1 all tests passed",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "kalefranz",
"comment_id": 233102959,
"datetime": 1468635123000,
"masked_author": "username_1",
"text": "This PR can be merged by Monday for 4.2.0rc1 as is, but there's still a bit more work to do here and there I think. One thing that I think we probably need is some additional unit tests for `History` now.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "dorzel",
"comment_id": 233669691,
"datetime": 1468942080000,
"masked_author": "username_0",
"text": "@username_1 let me know if this looks good",
"title": null,
"type": "comment"
}
] | 4 | 33 | 8,279 | false | true | 482 | true |
zzzeek/sqlalchemy | null | 140,787,715 | 248 | {
"number": 248,
"repo": "sqlalchemy",
"user_login": "zzzeek"
} | [
{
"action": "opened",
"author": "xflr6",
"comment_id": null,
"datetime": 1457987001000,
"masked_author": "username_0",
"text": "`None` and bools rendered as literal (`IS DISTINCT FROM NULL/true/false`)\r\n\r\nThe sqlite dialect could render the operator(s) as `IS NOT ...`: SQLite lacks `IS DISTINCT FROM` but accepts non-booleans in `IS (NOT)`.",
"title": "Add IS (NOT) DISTINCT FROM operator",
"type": "issue"
},
{
"action": "created",
"author": "zzzeek",
"comment_id": 196515538,
"datetime": 1457988538000,
"masked_author": "username_1",
"text": "this needs an implementation in default_comparators.py as well; see match_op and notmatch_op for how those work, including negation. Then take a look at test_operators -> MatchTest where we are testing string compilation as well as negation.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xflr6",
"comment_id": 196553518,
"datetime": 1457995644000,
"masked_author": "username_0",
"text": "Does it need more than the two tuples in `operator_lookup` in default_comparators.py?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zzzeek",
"comment_id": 196622494,
"datetime": 1458009292000,
"masked_author": "username_1",
"text": "mmm if the test is doing the right thing then that might be all it needs",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xflr6",
"comment_id": 196723574,
"datetime": 1458031744000,
"masked_author": "username_0",
"text": "Yes, works for me. What do you think about the sqlite workaround?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zzzeek",
"comment_id": 196846862,
"datetime": 1458052409000,
"masked_author": "username_1",
"text": "took me awhile to understand that. I think it's OK, I cant get a clear answer on what SQLite defines here looking at http://sqlite.org/datatype3.html but it appears to act like the distinct version elsewhere.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xflr6",
"comment_id": 196853565,
"datetime": 1458053462000,
"masked_author": "username_0",
"text": "Sorry, I'll try to be more specific next time. I was unsure if such sql 'rewriting' would be too much magic for core (it should be equivalent though, given sqlite's 'dynamic' typing).\r\n\r\nMaybe there should be a note on this: In the docstring of the operator or rather in the dialect docs (maybe you also want to take over and fill in docs/details to match the general style)?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zzzeek",
"comment_id": 196881672,
"datetime": 1458056096000,
"masked_author": "username_1",
"text": "oh i understood what you were asking, I was trying to understand what \"IS DISTINCT FROM\" does that's different from \"IS\" :). It is true the \"SQL rewriting\" thing is risky, but in this case it's a very special operator people normally aren't going to use and SQLite really seems to bundle \"IS DISTINCT FROM\" into their \"IS\" operator. We can add a note to the docstring of the operator: \"on databases where the IS operator supplies the same functionatliyy, e.g. sqlite, it will produce \"IS\"\" / <same thing for IS NOT DISTINCT FROM\"",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xflr6",
"comment_id": 196906435,
"datetime": 1458059102000,
"masked_author": "username_0",
"text": "Thanks, added.\r\n\r\nI know :) (I should have added newline after my first sentence)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "zzzeek",
"comment_id": 208685428,
"datetime": 1460430993000,
"masked_author": "username_1",
"text": "Dear contributor -\n\nThis pull request is being moved to Gerrit, at https://gerrit.sqlalchemy.org/42, where it may be tested and reviewed more closely. As such, the pull request itself is being marked \"closed\" or \"declined\", however your contribution is merely being moved to our central review system. Please register at https://gerrit.sqlalchemy.org#/register/ to send and receive comments regarding this item.",
"title": null,
"type": "comment"
}
] | 2 | 10 | 2,289 | false | false | 2,289 | false |
balderdashy/sails | balderdashy | 195,261,172 | 3,929 | null | [
{
"action": "opened",
"author": "junBryl",
"comment_id": null,
"datetime": 1481637858000,
"masked_author": "username_0",
"text": "",
"title": "Model.PublishDestroy throws an error and exits sails, if Model has a collection",
"type": "issue"
},
{
"action": "created",
"author": "junBryl",
"comment_id": 266908810,
"datetime": 1481676432000,
"masked_author": "username_0",
"text": "Hello @karsasmus,\r\n\r\n`destroy()` also returns the object that was destroyed, even if it was not stated in the docs.\r\nI also tried using only the err object but the same thing happens.\r\n\r\n```\r\n Job.destroy({\r\n id: req.param('id')\r\n }).exec(function destroy(err) {\r\n\r\n Job.publishDestroy(req.param('id'), undefined, {\r\n previous: {\r\n title: 'test'\r\n }\r\n });\r\n\r\n return res.ok();\r\n });\r\n\r\n```\r\nIt seems the problem is in `publishDestroy()`. It does not accept `undefined` if the model has a collection.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mikermcneil",
"comment_id": 267453185,
"datetime": 1481838262000,
"masked_author": "username_1",
"text": "@username_0 Thanks for the heads up- looking into it",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "sgress454",
"comment_id": 267456189,
"datetime": 1481839033000,
"masked_author": "username_2",
"text": "@username_0 `.publishDestroy()` does expect that if the `previous` option is provided, that it is a fully-populated record. This way it can do any `publishUpdate` or `publishRemove` calls necessary to inform subscribers to associated records that the association no longer exists. But, it's not clear in the documentation that `previous` is meant to be used this way (and in any case it ought to fail more gracefully).\r\n\r\nIf someone wants to do a pull request to make `publishDestroy` fail gracefully if it can't find a property, here's the line that needs to be fixed: https://github.com/balderdashy/sails/blob/0.12/lib/hooks/pubsub/index.js#L818\r\nIt just needs to check that `previous[association.alias]` exists before accessing `previous[association.alias].length`.\r\n\r\nIn the meantime, to work around this in the original example, you'd just need to add an empty array to the `previous` value for any associated collections a Job might have, e.g. if it has `users` and `tasks` collections:\r\n\r\n```\r\nprevious: {\r\n title: job[0].title,\r\n users: [],\r\n tasks: []\r\n}\r\n```\r\n \r\nAlso note that the ability to call these resourceful pubsub methods is being removed in Sails 1.0, precisely because their usage was confusing!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "junBryl",
"comment_id": 267565667,
"datetime": 1481884614000,
"masked_author": "username_0",
"text": "@username_1 , @username_2 Thank you guys for the response. Finally, I can make it work. Thanks. haha.\r\n\r\nToo bad it will be removed in 1.0. It was working, the lack of documentation and meaningful error logs is what made it confusing to use.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "mikermcneil",
"comment_id": 272885241,
"datetime": 1484579004000,
"masked_author": "username_1",
"text": "The inconvenient truth of this new approach is that you have to enforce consistency of the data you send in your broadcasted messages-- but since it is explicit and self-documenting, you can use any of the same approaches you'd use anywhere else in Sails. For example, you can make your own `Job.publishCancel()` model method:\r\n\r\n```javascript\r\n/**\r\n * Job.publishCancel()\r\n *\r\n * Broadcast a message to connected socket clients who are subscribed to\r\n * any of a particular set of jobs, letting them know the ids of all of the jobs\r\n * being canceled.\r\n * \r\n * @param {Array} ids\r\n * An array of job ids (numbers).\r\n * @param {Ref?} reqToOmitMaybe\r\n * Optional. A virtual socket request (`req`) that, if specified, will be\r\n * omitted from this broadcast.\r\n *\r\n * @broadcasts {Dictionary}\r\n * @property {String} verb\r\n * @property {Array} ids\r\n */\r\npublishCancel: function (ids, reqToOmitMaybe){\r\n Job.publish(ids, {\r\n verb: 'canceled',\r\n ids: ids\r\n }, reqToOmitMaybe);\r\n}\r\n```\r\n\r\n----------------------------\r\n\r\nAll that said, if you would prefer to be able to continue to use the CRUD-oriented RPS methods, there are a few different ways to do that:\r\n\r\n1. define your own `publishDestroy()`, etc. as default model methods in `config/models.js` _(This is the best solution, I think)_\r\n2. override the `pubsub` hook by copying the code in Sails v0.12 _(bad idea IMO -- you'd have to pin your SVRs if you aren't already, and then any time you upgrade, even a patch release, you'd have to check that everything worked first, since you'd be completely overriding the pubsub hook, and could potentially have stuff that gets out of date)_\r\n3. publish a new hook (e.g. `v012-pubsub`) that _just_ includes the methods that were removed. 
_(to me, that seems like a good idea for short-term compatibility, but could still get confusing in the long run)_\r\n\r\n\r\nAnyway, sorry for the long-winded post (I've been up all night and got ramblesome). I hope it helps provide a bit of extra background anyway! And who knows, it might help other folks with similar questions about this change in Sails v1. \r\n\r\nAnd if you want more info on any of the workarounds I just mentioned, just ask in here -- I'll try my best to come back through issues again later this week.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "junBryl",
"comment_id": null,
"datetime": 1484820231000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "junBryl",
"comment_id": 273731085,
"datetime": 1484820231000,
"masked_author": "username_0",
"text": "@username_1 No problem, it's fine.",
"title": null,
"type": "comment"
},
{
"action": "reopened",
"author": "junBryl",
"comment_id": null,
"datetime": 1484820461000,
"masked_author": "username_0",
"text": "",
"title": "Model.PublishDestroy throws an error and exits sails, if Model has a collection",
"type": "issue"
},
{
"action": "created",
"author": "junBryl",
"comment_id": 273731965,
"datetime": 1484820461000,
"masked_author": "username_0",
"text": "@username_1 \r\nI'm in favor with the changes.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "junBryl",
"comment_id": 273732655,
"datetime": 1484820634000,
"masked_author": "username_0",
"text": "@username_1 I'm in favor with the changes.",
"title": null,
"type": "comment"
}
] | 4 | 13 | 5,107 | false | true | 4,523 | true |
cdent/gabbi | null | 215,524,178 | 210 | {
"number": 210,
"repo": "gabbi",
"user_login": "cdent"
} | [
{
"action": "opened",
"author": "cdent",
"comment_id": null,
"datetime": 1490036783000,
"masked_author": "username_0",
"text": "Each of the response and content handlers has an expected type that\r\nthe test data should be in (in the YAML file). For example\r\nresponse_strings should be a list. These changes add some\r\nenforcement of those types, without which it is possible to (rarely)\r\nget false positives when using a str value where a list value is\r\nexpected and the assertions using that value use 'in'.\r\n\r\nFixes #209",
"title": "Enforce type of response/content-handler test data",
"type": "issue"
},
{
"action": "created",
"author": "cdent",
"comment_id": 287865417,
"datetime": 1490036794000,
"masked_author": "username_0",
"text": "Please sanity check @FND",
"title": null,
"type": "comment"
}
] | 1 | 2 | 416 | false | false | 416 | false |
geerlingguy/drupal-vm | null | 231,156,329 | 1,384 | {
"number": 1384,
"repo": "drupal-vm",
"user_login": "geerlingguy"
} | [
{
"action": "opened",
"author": "danepowell",
"comment_id": null,
"datetime": 1495656485000,
"masked_author": "username_0",
"text": "I was confused when trying to set up a local search core, because I used `collection1` as the core name (according to the docs), and then got an error message that schema.xml wasn't created correctly. Turns out the post-provision script creates a new search core `d8` that should be used instead of `collection1`.",
"title": "Update solr.md",
"type": "issue"
},
{
"action": "created",
"author": "geerlingguy",
"comment_id": 303887016,
"datetime": 1495671022000,
"masked_author": "username_1",
"text": "Good catch—`d8` is the default search core name that the Solr defaults module ships with (which is why I chose that core name for the default).",
"title": null,
"type": "comment"
}
] | 2 | 2 | 456 | false | false | 456 | false |
newcontext-oss/kitchen-terraform | newcontext-oss | 195,582,544 | 68 | {
"number": 68,
"repo": "kitchen-terraform",
"user_login": "newcontext-oss"
} | [
{
"action": "opened",
"author": "xmik",
"comment_id": null,
"datetime": 1481733761000,
"masked_author": "username_0",
"text": "I decided to give kitchen-terraform a try and tested it with OpenStack Terraform provider. It took me a while to set it up together, so I think adding it as example will make it easier for others to use kitchen-terraform with OpenStack.\r\n\r\nI provided a `README.md` which should explain well enough what this example does. I wanted to make it as short and simple as possible, so I didn't repeat information from other examples. It should be easy to run for anyone new to kitchen-terraform.\r\n\r\nWould you like to merge it? Or want to correct something? One thing that I don't like here is that I have 2 identical files: `examples/openstack/variables.tf` and `examples/openstack/test/fixtures/0.7/variables.tf`. But whenever I removed any variable from the latter, there were errors.\r\n\r\nAnd I'm happy to find out that kitchen-terraform exists, there is definitely a need for such a tool!",
"title": "Add example for OpenStack Terraform provider",
"type": "issue"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 267370145,
"datetime": 1481818779000,
"masked_author": "username_1",
"text": "Hi @username_0:\r\n\r\nThanks for your interest in the project! We're always happy to have more contributions to the documentation and examples. I will try to review your example soon and hopefully we can ship it with the 0.4.0 release.\r\n\r\nWe may be able to solve your concern with the redundant variable file in the test fixture module. The reasons why the detailed example uses test fixture modules are to illustrate how it can be done and to provide a way for us to test compatibility with different versions of Terraform. I have only skimmed through your code but I believe that you can simplify your example by removing the test fixture and testing the example module directly as the fixture module is not adding any additional configuration.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 267397586,
"datetime": 1481825025000,
"masked_author": "username_0",
"text": "I removed the redundant fixture directory and it works as you said. Now it feels much better. Let me know about any other things to correct and I'll later stash all the commits.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 267797847,
"datetime": 1482024925000,
"masked_author": "username_1",
"text": "Do you know of an implementation of OpenStack that could used freely with this example?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 267819356,
"datetime": 1482065330000,
"masked_author": "username_0",
"text": "I updated the PR. Now when I use kitchen-terraform from the source code (instead of 0.3.0) I see repeated, unnecessary output on `kitchen verify`. I take it that `username_1-0.4.0` branch is not yet ready for release and perhaps you are aware of that output. (I can post here the output or create a separate issue if you want).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 267819554,
"datetime": 1482065606000,
"masked_author": "username_0",
"text": "You mean some public, free for open source projects OpenStack cloud to test this example? No, I don't. We use it as a private cloud. If I find any, I'll write here.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 268100078,
"datetime": 1482187286000,
"masked_author": "username_1",
"text": "@username_0 Are you talking about Terraform command output? If so, 0.5.0 will introduce improved logging that pipes the output of commands like `output` and `version` to the debug logger.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 268264565,
"datetime": 1482246012000,
"masked_author": "username_0",
"text": "Yes, that is what I'm talking about. I put that output into gist: https://gist.github.com/username_0/53d2e997772379e512794a0a5bd4b4ab (just for reference). Nice that you are planning to improve it.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 268366242,
"datetime": 1482269790000,
"masked_author": "username_1",
"text": "@username_0: I looked at your gist and it is indeed expected behaviour. The verbosity and redundant commands will be solved in 0.5.0.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 268790614,
"datetime": 1482409998000,
"masked_author": "username_0",
"text": "I rebased the branch and corrected the readme and build errors from Travis CI. Should I squash the branch?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 268806301,
"datetime": 1482415637000,
"masked_author": "username_0",
"text": "Do you, @username_1 , know how to fix the Travis build? Has this error happened ever before?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 268835397,
"datetime": 1482423957000,
"masked_author": "username_1",
"text": "@username_0: The error is due to a misconfiguration in the base branch's release logic. I'll submit the correction and then you can rebase once again. 😓 \r\n\r\nYou don't need to squash your local branch as we can squash and merge on GitHub.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ncs-alane",
"comment_id": 268843418,
"datetime": 1482426129000,
"masked_author": "username_1",
"text": "@username_0: The upstream issue should be solved. Please rebase again.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "xmik",
"comment_id": 268975535,
"datetime": 1482491802000,
"masked_author": "username_0",
"text": "Rebased and verified again and the Travis build is finally green! :-)",
"title": null,
"type": "comment"
}
] | 2 | 14 | 3,434 | false | false | 3,434 | true |
cms-sw/cmssw | cms-sw | 128,622,164 | 13,060 | {
"number": 13060,
"repo": "cmssw",
"user_login": "cms-sw"
} | [
{
"action": "opened",
"author": "isobelojalvo",
"comment_id": null,
"datetime": 1453751447000,
"masked_author": "username_0",
"text": "In 76X we were missing some necessary tau discriminators and had an issue with a new decay mode in the decay mode finding algorithm. The fix was applied in 76X by re-running the Taus at the miniAOD step. This pull request makes the RECO/AOD sequence the same as what is run in the miniAOD sequence in 76X (i.e. integration of needed discriminators + bugfix) and removes the 're-running' of the taus in miniAOD.",
"title": "Integration of 76X Tau re-miniAOD into main 80X Sequence",
"type": "issue"
},
{
"action": "created",
"author": "slava77",
"comment_id": 174646529,
"datetime": 1453752784000,
"masked_author": "username_1",
"text": "@username_0 what is the relationship between this PR and #13042 ?\r\nIt looks like the same code is modified, also, the original branch in #13402 was removed, which I guess means that #13402 should be closed.\r\nPlease clarify.\r\nThank you.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 174651335,
"datetime": 1453753564000,
"masked_author": "username_0",
"text": "Hi, The 80xReMiniAOD should be used and the 80xReMiniAODv2 had been created as a dummy for testing purposes. I should have notified AJ of this earlier! Apologies! Could you please work off this pull request?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 174654749,
"datetime": 1453754151000,
"masked_author": "username_1",
"text": "@cmsbuild please test",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 174953645,
"datetime": 1453805328000,
"masked_author": "username_0",
"text": "Checking the unit tests I see they are run on the following sample:\r\n\r\n25-Jan-2016 22:04:29 CET Successfully opened file root://eoscms.cern.ch//eos/cms/store/relval/CMSSW_7_6_0_pre7/RelValProdTTbar_13/AODSIM/76X_mcRun2_asymptotic_v5-v1/00000/0E9A5DE8-1D71-E511-A205-00261894380D.root\r\n\r\nThe error we see is that MVA6 is not present. This is certainly true: we did not have MVA6 in 7_6_0 at MVA6. Am I missing something? @username_1 @username_2 @andrewj314",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175194978,
"datetime": 1453837318000,
"masked_author": "username_2",
"text": "@username_0: For the RECO product of workflow 134.805_RunMET2015C, I find 64 added tau discriminators and 16 removed, for a net gain of 48 discriminators. I think we want to keep the net addition of discriminators to be less than 10. Could you please remove more obsolete and unneeded products?\r\n\r\nI see no change in Mini-AOD event content, and I'm still checking AOD.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175196776,
"datetime": 1453837659000,
"masked_author": "username_2",
"text": "@username_0: The PR description does not describe the results of \"fixing\" the decay mode algorithm. There are numerous differences showing up for workflow 134.805_RunMET2015C. Are these differences expected? If so, please add an explanation of the expected differences to the PR description. Here are some examples:\r\n\r\n\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 175245569,
"datetime": 1453844542000,
"masked_author": "username_0",
"text": "@username_2 Thanks for the plots. Could you tell me if red is from the current pull request or from the old? (or how can I check this myself?) Thank you!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175261446,
"datetime": 1453846723000,
"masked_author": "username_2",
"text": "@username_0: Yes, red in the plots is from the PR, and black is from the baseline I used, 800pre5.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175266174,
"datetime": 1453847181000,
"masked_author": "username_2",
"text": "@username_0: I checked AOD event content also, by using workflow 1000.0_RunMinBias2011A. Again, this PR adds 64 new tau discriminators, while only removing 16. We should be aiming to remove almost as many old, unneeded discriminators as new ones being added.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 175274049,
"datetime": 1453848079000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175287972,
"datetime": 1453849987000,
"masked_author": "username_2",
"text": "@username_1: With this PR, there are 140 tau discriminators in AOD for workflow 1000.0_RunMinBias2011A.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175290536,
"datetime": 1453850668000,
"masked_author": "username_2",
"text": "@username_0: Concerning the unit test failures, the baseline used by Jenkins, CMSSW_8_0_X_2016-01-25-1100, performs the same unit tests successfully. It would seem this PR somehow adds the dependence on MVA6, as indicated by this error message:\r\n```\r\nPrincipal::getByToken: Found zero products matching all criteria\r\nLooking for type: reco::PFTauDiscriminator\r\nLooking for module label: hpsPFTauDiscriminationByMVA6LooseElectronRejection\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 175292207,
"datetime": 1453851031000,
"masked_author": "username_0",
"text": "Hi @username_2, First: apologies about to go offline in a few minutes and will address other issues in the morning. Second: concerning the MVA6, yes, I see it is missing. However, I think this is due to the pattuplization test running on a 76x AOD sample. Dumping the event content of that file shows MVA6 is not there: This is the correct and expected behavior.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 175329702,
"datetime": 1453858040000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 175711784,
"datetime": 1453910698000,
"masked_author": "username_0",
"text": "@username_1 @username_2 \r\nRelval sample:\r\nOK, we will see about the new relval sample. I am actually not used to the workflow for the creation of a new relval. Any guidance would help. I will also check with our tau relval experts. In the mean time, is it allowed to run on a privately produce sample instead? And can this pull request be integrated before the new relval is produced? (I am worrying about the timescale here)\r\n\r\nNumber of Discriminants:\r\nWe have a preference to do the discriminant clean up in a separate branch so that we can keep track if there is an error in the clean up. I created a new branch[*] in cms-tau-pog and will do a pull request when this branch is edited. It should be fast. \r\n\r\nDQM Plots:\r\nIt is expected that the efficiency will increase and this should be in line with what we see in 76xReminiAOD. In the plots, the low efficiency in the black was apparently due to the 3prong+1pi0 integration. It is removed in the current pull request. We are working towards reintegration in the NewDecayModes and not in the OldDecayModes.\r\n\r\n[*]https://github.com/cms-tau-pog/cmssw/tree/80xReMiniAOD-RemoveOldDisc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 175720214,
"datetime": 1453911625000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 175740586,
"datetime": 1453913527000,
"masked_author": "username_1",
"text": "About the unit tests, I was thinking that we could generate an AOD file on the fly for the case where we know well that the new AOD format is needed.\r\n\r\n@gpetruc I seem to remember we did something to make the tests running PAT on AOD inputs to complete in a similar case. Do you remember what was done?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 175742134,
"datetime": 1453913804000,
"masked_author": "username_0",
"text": "The discriminants for removal are listed in the attached files created by Aruna\r\n\r\nA rough count shows approximately 1/3 can certainly be removed. The increase overall is partially due to the addition of a new MVA6 electron discriminator and keeping the MVA5, we can remove MVA5 in the next cycle after analysts have had a chance to fully validate MVA6.\r\n\r\n[PatTauID_Discriminators_80X.txt](https://github.com/cms-sw/cmssw/files/106993/PatTauID_Discriminators_80X.txt)\r\n[RecoTauID_Discriminators_80X.txt](https://github.com/cms-sw/cmssw/files/106992/RecoTauID_Discriminators_80X.txt)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 175915937,
"datetime": 1453938895000,
"masked_author": "username_2",
"text": "@username_0, @username_1: I am editing the list of tau discriminators here: https://twiki.cern.ch/twiki/bin/view/CMS/RecoTauDiscrim",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 176260586,
"datetime": 1453998136000,
"masked_author": "username_0",
"text": "As suggested, I removed the new inputs. I will put them back in for pre6. Could this be tested?\r\n\r\nThanks",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 176263872,
"datetime": 1453998507000,
"masked_author": "username_1",
"text": "@cmsbuild please test",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 176343781,
"datetime": 1454007867000,
"masked_author": "username_0",
"text": "@username_2 concerning the twiki update, I have pointed Arun to the twiki and he will modify as necessary he has also already committed the cleaned up branch. Depending on the release schedule, it would be nice to have the cleaned up branch added to the next release. Let me know if this is a problem for you.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 177000804,
"datetime": 1454106926000,
"masked_author": "username_2",
"text": "+1\r\n\r\nFor #13060 a20e68d1d9f843be26c134fcb066a63c992ff55f\r\n\r\nAdds some new tau discriminators and removes a few obsolete ones. Also, removes the 3prong1pi0 decay mode from the AOD sequence.\r\n\r\nFollow-up PRs will delete additional unneeded tau discriminators and will add new input for PAT algos after they become available in the next pre-release, as discussed above.\r\n\r\nThe code changes are satisfactory. Jenkins [tests](https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-13060/10818/summary.html) against baseline CMSSW_8_0_X_2016-01-27-2300 show numerous, insignificant differences related to tau decay modes, plus efficiency increases that are expected, as discussed above. Tests of workflow 25202.0_TTbar_13 with 70 events against baseline CMSSW_8_0_0_pre5 show similar differences as those from the Jenkins test results and the plots shown above. An additional example difference plot from this 25202 test is shown below.\r\n\r\nMemory and timing measurements show no significant increase. Two timing measurements show no increase, while a third measurement shows an overall timing increase of 1%. At worst, the new tau discriminators may increase times by 18 ms per event, but we expect a follow-up PR that will delete more obsolete discriminators and reduce this timing increase.\r\n\r\nExample of change in decay mode finding frequencies:\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cvuosalo",
"comment_id": 177011400,
"datetime": 1454108749000,
"masked_author": "username_2",
"text": "Additional timing measurements from test of workflow 25202 (without Validation step) against baseline CMSSW_8_0_0_pre5:\r\n\r\n```\r\nTotal time formerly used by deleted discriminators: 2.8 ms/ev\r\nTotal time used by new discriminators: 62.3 ms/ev\r\n\r\nNew discriminators taking > 3 ms/ev\r\n hpsPFTauDiscriminationByLooseCombinedIsolationDBSumPtCorr3HitsdR03 4.90777 ms/ev\r\n hpsPFTauDiscriminationByMediumCombinedIsolationDBSumPtCorr3HitsdR03 4.88347 ms/ev\r\n hpsPFTauDiscriminationByTightCombinedIsolationDBSumPtCorr3HitsdR03 4.88874 ms/ev\r\n hpsPFTauDiscriminationByIsolationMVArun2v1PWnewDMwLTraw 3.37712 ms/ev\r\n hpsPFTauPUcorrPtSumdR03 4.82939 ms/ev\r\n hpsPFTauNeutralIsoPtSumWeightdR03 3.78463 ms/ev\r\n hpsPFTauDiscriminationByIsolationMVArun2v1DBdR03oldDMwLTraw 3.28005 ms/ev\r\n hpsPFTauDiscriminationByIsolationMVArun2v1PWdR03oldDMwLTraw 3.40165 ms/ev\r\n\r\n\r\nRetained discriminator that uses the most time, change from baseline to PR:\r\n hpsPFTauDiscriminationByMVA5rawElectronRejection 15.8576 ms/ev -> 12.6722 ms/ev\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "deguio",
"comment_id": 177861587,
"datetime": 1454317118000,
"masked_author": "username_3",
"text": "+1",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 177934890,
"datetime": 1454327597000,
"masked_author": "username_0",
"text": "Do we (tau group) have any action items for this pull request? Can it be integrated?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "slava77",
"comment_id": 177991190,
"datetime": 1454336454000,
"masked_author": "username_1",
"text": "@username_0 @username_4 \r\nanalysis signature is still pending here.\r\nThere is one more PR (13100) that depends on integration of this one.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidlange6",
"comment_id": 177998093,
"datetime": 1454337672000,
"masked_author": "username_4",
"text": "Right - its been signed by other groups for only a few hours….\r\n\r\n>",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "isobelojalvo",
"comment_id": 178000628,
"datetime": 1454338153000,
"masked_author": "username_0",
"text": "No worries, just want to keep in mind the release closure cycle and not get 'ding-ed'. - Thanks",
"title": null,
"type": "comment"
}
] | 6 | 38 | 11,178 | false | true | 9,460 | true |
LiveTex/Node-Pg | LiveTex | 107,323,499 | 15 | null | [
{
"action": "opened",
"author": "khusamov",
"comment_id": null,
"datetime": 1442658532000,
"masked_author": "username_0",
"text": "make: Entering directory `/home/ubuntu/.nvm/versions/node/v4.1.0/lib/node_modules/livetex-node-pg/build'\r\n CXX(target) Release/obj.target/pg/src/connection.o\r\n../src/connection.cc:11:31: fatal error: jemalloc/jemalloc.h: No such file or directory\r\n #include <jemalloc/jemalloc.h>\r\n ^\r\ncompilation terminated.\r\nmake: *** [Release/obj.target/pg/src/connection.o] Error 1\r\nmake: Leaving directory `/home/ubuntu/.nvm/versions/node/v4.1.0/lib/node_modules/livetex-node-pg/build'\r\ngyp ERR! build error \r\ngyp ERR! stack Error: `make` failed with exit code: 2\r\ngyp ERR! stack at ChildProcess.onExit (/home/ubuntu/.nvm/versions/node/v4.1.0/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:270:23)\r\ngyp ERR! stack at emitTwo (events.js:87:13)\r\ngyp ERR! stack at ChildProcess.emit (events.js:172:7)\r\ngyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)\r\ngyp ERR! System Linux 3.14.13-c9\r\ngyp ERR! command \"/home/ubuntu/.nvm/versions/node/v4.1.0/bin/node\" \"/home/ubuntu/.nvm/versions/node/v4.1.0/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"\r\ngyp ERR! cwd /home/ubuntu/.nvm/versions/node/v4.1.0/lib/node_modules/livetex-node-pg\r\ngyp ERR! node -v v4.1.0\r\ngyp ERR! node-gyp -v v3.0.3\r\ngyp ERR! not ok \r\nnpm ERR! Linux 3.14.13-c9\r\nnpm ERR! argv \"/home/ubuntu/.nvm/versions/node/v4.1.0/bin/node\" \"/home/ubuntu/.nvm/versions/node/v4.1.0/bin/npm\" \"install\" \"livetex-node-pg\" \"-g\"\r\nnpm ERR! node v4.1.0\r\nnpm ERR! npm v2.14.3\r\nnpm ERR! code ELIFECYCLE\r\n\r\nnpm ERR! livetex-node-pg@2.0.1 install: `node-gyp rebuild && make cpp && make js`\r\nnpm ERR! Exit status 1\r\nnpm ERR! \r\nnpm ERR! Failed at the livetex-node-pg@2.0.1 install script 'node-gyp rebuild && make cpp && make js'.\r\nnpm ERR! This is most likely a problem with the livetex-node-pg package,\r\nnpm ERR! not with npm itself.\r\nnpm ERR! Tell the author that this fails on your system:\r\nnpm ERR! 
node-gyp rebuild && make cpp && make js\r\nnpm ERR! You can get their info via:\r\nnpm ERR! npm owner ls livetex-node-pg\r\nnpm ERR! There is likely additional logging output above.\r\n\r\nnpm ERR! Please include the following file with any support request:\r\nnpm ERR! /home/ubuntu/workspace/npm-debug.log\r\nusername_0@cardinal:~/workspace (master) $",
"title": "compilation terminated",
"type": "issue"
},
{
"action": "created",
"author": "divergence082",
"comment_id": 141813825,
"datetime": 1442770609000,
"masked_author": "username_1",
"text": "u need to install lib jemalloc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "khusamov",
"comment_id": 148806829,
"datetime": 1445021872000,
"masked_author": "username_0",
"text": "thx",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "khusamov",
"comment_id": null,
"datetime": 1445021872000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 4 | 2,325 | false | false | 2,325 | true |
spinnaker/igor | spinnaker | 257,144,682 | 187 | {
"number": 187,
"repo": "igor",
"user_login": "spinnaker"
} | [
{
"action": "opened",
"author": "jeyrschabu",
"comment_id": null,
"datetime": 1505242288000,
"masked_author": "username_0",
"text": "- Only log on actions, removed some logs causing noise",
"title": "chore(jenkins): Make logs less chatty",
"type": "issue"
},
{
"action": "created",
"author": "ajordens",
"comment_id": 328963423,
"datetime": 1505245400000,
"masked_author": "username_1",
"text": "Seems alright, haven't had to look at igor logs in awhile so if we don't think these are valuable when debugging lets get this merged!",
"title": null,
"type": "comment"
}
] | 2 | 2 | 188 | false | false | 188 | false |
opentoonz/opentoonz | opentoonz | 195,131,723 | 962 | null | [
{
"action": "opened",
"author": "ideasman42",
"comment_id": null,
"datetime": 1481591069000,
"masked_author": "username_0",
"text": "On Linux, using a system SuperLU, running CMake will pick the thirdparty include.\r\n\r\nThis is because its looking in the thirdparty hints path first: https://github.com/opentoonz/opentoonz/blob/master/toonz/cmake/FindSuperLU.cmake#L7 \r\n\r\nSuggest to add an option: `WITH_SYSTEM_SUPERLU` which would disable using thirdparty paths when enabled.",
"title": "CMake/Linux uses thirdparty include with system library by default",
"type": "issue"
},
{
"action": "created",
"author": "jabarrera",
"comment_id": 266697297,
"datetime": 1481623461000,
"masked_author": "username_1",
"text": "The solution proposed in #958 is not enough for this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ideasman42",
"comment_id": 267800227,
"datetime": 1482029582000,
"masked_author": "username_0",
"text": "@username_1, that PR scanned the path for the term third-party.\r\n\r\nInstead, I'm suggesting to make this a build option.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ideasman42",
"comment_id": 278815039,
"datetime": 1486684483000,
"masked_author": "username_0",
"text": "Fixed cead1bb2cae97c48b833bec28588646d4bc9a693",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ideasman42",
"comment_id": null,
"datetime": 1486684484000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 5 | 558 | false | false | 558 | true |
pannal/Sub-Zero.bundle | null | 223,804,275 | 269 | null | [
{
"action": "opened",
"author": "ajkis",
"comment_id": null,
"datetime": 1493037200000,
"masked_author": "username_0",
"text": "Full logs attached",
"title": "[BUG] CRITICAL (agentkit:1078) - Exception in the update function of agent named 'Sub-Zero Subtitles (Movies, 1.4.27.973)', called with guid 'com.plexapp.agents.themoviedb://390054?lang=en' (most recent call last):",
"type": "issue"
},
{
"action": "created",
"author": "pannal",
"comment_id": 296656611,
"datetime": 1493038347000,
"masked_author": "username_1",
"text": "Hmm, shouldn't have an effect on your user experience. The error will be fixed, though. Thank you",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pannal",
"comment_id": null,
"datetime": 1493038600000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "pannal",
"comment_id": 296658109,
"datetime": 1493038600000,
"masked_author": "username_1",
"text": "Is fixed in 2.0, which is currently in beta.",
"title": null,
"type": "comment"
}
] | 2 | 4 | 159 | false | false | 159 | false |
atom/first-mate | atom | 263,867,466 | 102 | null | [
{
"action": "opened",
"author": "gandm",
"comment_id": null,
"datetime": 1507549737000,
"masked_author": "username_0",
"text": "Injection grammars work if using `injectionSelector` but not if using `injections`\r\n\r\nIt appears grammars are only added as injection grammars if they have the injectionSelector property (first example below) which makes no sense inside a nested set of injections as that property doesn't exist (second example below). Atom adds grammars as Injection grammars [here](https://github.com/atom/first-mate/blob/master/src/grammar-registry.coffee#L82) and are only used as rules if they are marked as such [here](https://github.com/atom/first-mate/blob/9295b5632a2eb5d0716da736aba08ce3f4e3237e/src/rule.coffee#L75)\r\ne.g.\r\nThis works\r\n```json\r\n{\r\n \"scopeName\": \"working.injector\",\r\n \"injectionSelector\": \"L:source.js.embedded.html\",\r\n \"patterns\": [\r\n { \"include\": \"source.js.jsx\" }\r\n ]\r\n}\r\n```\r\nThis doesn't\r\n```json\r\n{\r\n \"scopeName\": \"some.injector\",\r\n \"injections\": {\r\n \"L:source.js.embedded.html\": {\r\n \"patterns\": [\r\n { \"include\": \"source.js.jsx\" }\r\n ]\r\n }\r\n }\r\n}\r\n```",
"title": "Injection grammars using the 'injections' property not working.",
"type": "issue"
},
{
"action": "created",
"author": "50Wliu",
"comment_id": 336857223,
"datetime": 1508152809000,
"masked_author": "username_1",
"text": "@username_0 I might need a bit more background. As far as I know, injection selectors are not meant to be actual grammars that you can select using the Grammar Selector. They're more for auxiliary highlighting (like language-todo). Injection grammars are meant to be selected and need to provide the scopes that they inject in themselves (language-php).",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gandm",
"comment_id": 336866733,
"datetime": 1508155570000,
"masked_author": "username_0",
"text": "As far as I can see `language-php` doesn't work either. It has an injection [here](https://github.com/atom/first-mate/blob/7195ec726bad720903331092fa9e0ce822b1e82e/spec/fixtures/php.json#L17) that should inject some patterns and eventually the PHP grammar into a scope `L:source.js.embedded.html` as a LEFT rule set. So an html file using `language-html` with a `<?php` inside a `<script>` and `</script>` which has a scope of `source.js.embedded.html` should highlight as PHP but it doesn't.\r\n\r\nI've disabled `language-javascript` to stop it parsing any `source.js` code inside an html file as shown in the image below. PHP isn't highlighted at all which it should be if the above injection of rules was working.\r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "50Wliu",
"comment_id": 336871086,
"datetime": 1508156784000,
"masked_author": "username_1",
"text": "`injections` injects into _itself`, `injectionSelector` injects into other grammars. So PHP needs to be the active grammar in order for PHP highlighting to work, while TODO doesn't.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "50Wliu",
"comment_id": 336875660,
"datetime": 1508157949000,
"masked_author": "username_1",
"text": "What Atom version are you using?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "gandm",
"comment_id": 336876031,
"datetime": 1508158039000,
"masked_author": "username_0",
"text": "Atom 1.21.1",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gandm",
"comment_id": null,
"datetime": 1508179327000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "gandm",
"comment_id": 336990162,
"datetime": 1508179327000,
"masked_author": "username_0",
"text": "Putting a space in front of the `<?php` works. Must be an issue with the inject regex patterns in language-php rather than the injection. \r\n\r\nI'll close this as it appears to work as designed.",
"title": null,
"type": "comment"
}
] | 2 | 8 | 2,592 | false | false | 2,592 | true |
aspnet/HttpAbstractions | aspnet | 57,352,466 | 190 | null | [
{
"action": "opened",
"author": "danroth27",
"comment_id": null,
"datetime": 1423677561000,
"masked_author": "username_0",
"text": "Currently Microsoft.AspNet.Http.Core depends on Microsoft.AspNet.Http, which seems backwards.",
"title": "Http should depend on HttpCore, not the other way around",
"type": "issue"
},
{
"action": "created",
"author": "Tratcher",
"comment_id": 73933854,
"datetime": 1423678828000,
"masked_author": "username_1",
"text": "Http contains the abstract HttpContext that most apps will reference directly, where Http.Core contains an actual implementation primarily referenced by Hosting. The naming of Http.Core could be better, but I think swapping it with Http would be detrimental to HttpContext.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danroth27",
"comment_id": 73936684,
"datetime": 1423679723000,
"masked_author": "username_0",
"text": "OK - I think we just need to come up with a better name for Http.Core. How about Http.Abstractions?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danroth27",
"comment_id": 73937237,
"datetime": 1423679902000,
"masked_author": "username_0",
"text": "How about Http -> Http.Abstractions?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tratcher",
"comment_id": 73975301,
"datetime": 1423692455000,
"masked_author": "username_1",
"text": "I don't recommend it. Just about everything in the pipeline references HttpContext, so it should stay in the top level Http namespace and package for ease of discovery and usage.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "danroth27",
"comment_id": 73990639,
"datetime": 1423698654000,
"masked_author": "username_0",
"text": "We can argue about the new names, (I'm fine with just renaming Http.Core to something else), but the current names have to change somehow. Everywhere else that I know of in Core is the thing that other stuff depends on, not the other way around. Think .NET Core vs .NET Framework.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidfowl",
"comment_id": 73991401,
"datetime": 1423699046000,
"masked_author": "username_2",
"text": "Http.Impl yay java",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidfowl",
"comment_id": 77351877,
"datetime": 1425556568000,
"masked_author": "username_2",
"text": "@username_0 Did you have any more thoughts on this?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidfowl",
"comment_id": 89801841,
"datetime": 1428251371000,
"masked_author": "username_2",
"text": "We decided to swap the names:\r\n\r\n- Http.Core - Abstractions\r\n- Http - Implementations\r\n\r\n/cc @glennc @muratg @lodejard",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Tratcher",
"comment_id": 93056966,
"datetime": 1429044948000,
"masked_author": "username_1",
"text": "Put everything in the Http namespace.\r\n- Http.Core - Abstract base classes, used to implement middleware\r\n- Http - Implementations, only referenced by Hosting & tests",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "Tratcher",
"comment_id": null,
"datetime": 1429547879000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 11 | 1,311 | false | false | 1,311 | true |
BlakeGuilloud/ganon | null | 267,563,362 | 410 | {
"number": 410,
"repo": "ganon",
"user_login": "BlakeGuilloud"
} | [
{
"action": "opened",
"author": "MahikanthNag",
"comment_id": null,
"datetime": 1508742865000,
"masked_author": "username_0",
"text": "added a method signature for counting number of words in a string\r\nadded test file for the above method test/countNoOfWords.test.js\r\n\r\n<!-- Make sure to replace this with the related Issue so we can keep track of\r\neverything! e.g. Closes #187 -->\r\nCloses #theRelatedIssueNumber\r\n\r\n<!-- Make sure these boxes are checked before submitting this pull request! Thank you!! -->\r\n<!-- To check the boxes, simply replace \"[]\" with \"[x] -->\r\n\r\n- [x] Running `yarn lint` does not trigger any linter errors\r\n- [x] The test of the method you have fixed is passing\r\n- [x] You have written a new skeleton method for someone else to work on!\r\n- [x] You have written tests to accompany your skeleton method",
"title": "wrote logic for LCM method ",
"type": "issue"
},
{
"action": "created",
"author": "ktilcu",
"comment_id": 338662296,
"datetime": 1508766054000,
"masked_author": "username_1",
"text": "@username_0 Thanks for the contribution! Could you please write a new issue for your added method? It would help us out a ton.",
"title": null,
"type": "comment"
}
] | 2 | 2 | 819 | false | false | 819 | true |
JakeWharton/butterknife | null | 158,712,034 | 608 | null | [
{
"action": "opened",
"author": "chrenjay",
"comment_id": null,
"datetime": 1465228484000,
"masked_author": "username_0",
"text": "It was worked normal before. I don't know why it doesn't work normal suddenly. \r\nWhen I add a new field, and it doesn't bind to view. Then I clean and rebuild the project, and all field aren't bound.\r\n\r\nBelow is my gradle file\r\n```\r\n// Top-level build file where you can add configuration options common to all sub-projects/modules.\r\n\r\nbuildscript {\r\n ext.kotlin_version = '1.0.2'\r\n repositories {\r\n jcenter()\r\n mavenCentral()\r\n }\r\n dependencies {\r\n classpath 'com.android.tools.build:gradle:2.1.0'\r\n classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'\r\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version\"\r\n\r\n classpath \"io.realm:realm-gradle-plugin:1.0.0\"\r\n\r\n // NOTE: Do not place your application dependencies here; they belong\r\n // in the individual module build.gradle files\r\n }\r\n}\r\n\r\nallprojects {\r\n repositories {\r\n jcenter()\r\n }\r\n}\r\n\r\ntask clean(type: Delete) {\r\n delete rootProject.buildDir\r\n}\r\n\r\n```\r\n\r\n```\r\napply plugin: 'com.android.application'\r\napply plugin: 'com.neenbedankt.android-apt'\r\napply plugin: 'kotlin-android'\r\napply plugin: 'realm-android'\r\n\r\nandroid {\r\n compileSdkVersion 23\r\n buildToolsVersion \"23.0.3\"\r\n\r\n defaultConfig {\r\n applicationId \"org.watching\"\r\n minSdkVersion 21\r\n targetSdkVersion 23\r\n versionCode 1\r\n versionName \"1.0\"\r\n }\r\n buildTypes {\r\n release {\r\n minifyEnabled false\r\n proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'\r\n }\r\n }\r\n}\r\n\r\ndependencies {\r\n compile fileTree(include: ['*.jar'], dir: 'libs')\r\n testCompile 'junit:junit:4.12'\r\n compile 'com.android.support:appcompat-v7:23.4.0'\r\n compile 'com.github.bumptech.glide:glide:3.7.0'\r\n compile 'com.android.support:recyclerview-v7:23.4.0'\r\n compile 'com.google.code.gson:gson:2.4.0'\r\n compile 'com.squareup:otto:1.3.5'\r\n compile 'com.squareup.retrofit2:retrofit:2.0.2'\r\n compile 
'com.squareup.retrofit2:converter-gson:2.0.2'\r\n compile 'com.squareup.retrofit2:adapter-rxjava:2.0.2'\r\n compile 'io.reactivex:rxandroid:1.2.0'\r\n compile 'io.reactivex:rxjava:1.1.4'\r\n\r\n compile 'com.jakewharton:butterknife:8.0.1'\r\n apt 'com.jakewharton:butterknife-compiler:8.0.1'\r\n\r\n compile 'com.android.support:design:23.4.0'\r\n compile \"org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version\"\r\n compile 'com.android.support:cardview-v7:23.4.0'\r\n}\r\n```\r\n\r\nMy gradle file ought to right, I think there is something conflict?",
"title": "It doesn't work normal sometime",
"type": "issue"
},
{
"action": "created",
"author": "JakeWharton",
"comment_id": 224003377,
"datetime": 1465228667000,
"masked_author": "username_1",
"text": "Looks fine. Can you see the generated files in the\nbuild/generated/source/apt/ folder?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chrenjay",
"comment_id": 224006301,
"datetime": 1465229299000,
"masked_author": "username_0",
"text": "Yes, I can see, it has a debug folder and a test\\debug folder, but all are empty.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chrenjay",
"comment_id": 224280957,
"datetime": 1465306335000,
"masked_author": "username_0",
"text": "@username_1 I'm so sorry to trouble with you, I'm find there is something wrong with kotlin plugin, and may be it's the reason case butterknife don't work. Because when I delete kotlin plugin, my project work normal.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "chrenjay",
"comment_id": null,
"datetime": 1465306345000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "felipepaiva",
"comment_id": 243851067,
"datetime": 1472667014000,
"masked_author": "username_2",
"text": "Guys, for this problem i just solved it duplicating apt line declaration and adding 'k' like this:\r\n apt 'com.jakewharton:butterknife-compiler:8.4.0'\r\n kapt 'com.jakewharton:butterknife-compiler:8.4.0'\r\nJust that!",
"title": null,
"type": "comment"
}
] | 3 | 6 | 3,178 | false | false | 3,178 | true |
OsnaCS/plantex | OsnaCS | 165,977,047 | 2 | null | [
{
"action": "opened",
"author": "LukasKalbertodt",
"comment_id": null,
"datetime": 1468762606000,
"masked_author": "username_0",
"text": "The `AxialPoint` type is currently very incomplete. It should implement all fitting traits from `cgmath` and overload many operators:\r\n\r\n- overload many useful operators (including `Index`, required by `Array` anyway)\r\n- implement `cgmath::{Zero, Array, MetricSpace, VectorSpace, InnerSpace}`\r\n\r\nEverything should be done similar to [cgmath::Point2](http://bjz.github.io/cgmath/cgmath/struct.Point2.html).\r\n\r\nAdditionally multiple unit tests should be added to ensure correctness.\r\n\r\n*Note*: this issue is similar to #1",
"title": "Add features and tests to `AxialPoint`",
"type": "issue"
},
{
"action": "created",
"author": "FischerTo",
"comment_id": 234212006,
"datetime": 1469095670000,
"masked_author": "username_1",
"text": "cgmath::InnerSpace is not going to be implemented.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jonas-schievink",
"comment_id": 234212272,
"datetime": 1469095754000,
"masked_author": "username_2",
"text": "Makes sense, it's meant for vectors, not points",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "FischerTo",
"comment_id": 234247077,
"datetime": 1469106327000,
"masked_author": "username_1",
"text": "Zero and VectorSpace are also not required, as they are implemented in axial_vector.rs",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "LukasKalbertodt",
"comment_id": null,
"datetime": 1469183077000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 5 | 702 | false | false | 702 | false |
alexdobin/STAR | null | 271,688,209 | 342 | null | [
{
"action": "opened",
"author": "charlesdavid",
"comment_id": null,
"datetime": 1510021315000,
"masked_author": "username_0",
"text": "Hi Alex, \r\nI am attempting to use the 2-pass method on my data, but STAR is crashing with a seg fault:\r\nThe input files were verified correct format, no errors.\r\nSTAR reads in the genome, performs the first pass, but crashes when trying to run the second pass.\r\nI have also tried to manually feed in the SJs running STAR the second time, but still crashes.\r\nThere is only a header in the `Log.progress.out` and `Log.out` ends with `Created thread # 1-31` as I am using 32 threads. FYI, this happens for both versions 2.5.2b and 2.5.3a.\r\nHere is the error message:\r\n\r\n`/home/cflcyd/.lsbatch/1510019549.586204: line 8: 37165 Segmentation fault (core dumped) STAR --runThreadN 32 --genomeDir 007.STAR/index --readFilesIn Test_sortmerna_trimmomatic_MIA_R1.fastq Test_sortmerna_trimmomatic_MIA_R2.fastq --chimSegmentMin 15 --chimJunctionOverhangMin 15 --alignMatesGapMax 20000 --alignIntronMax 20000 --outSAMtype BAM Unsorted --outSAMprimaryFlag AllBestScore --outSAMstrandField intronMotif --sjdbFileChrStartEnd Test_sortmerna_trimmomatic_MIA_SJ.out.tab --outFileNamePrefix TEST_ --outQSconversionAdd -31`",
"title": "STAR Two-Pass Mode Not Working: Seg Fault",
"type": "issue"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 343319816,
"datetime": 1510268357000,
"masked_author": "username_1",
"text": "Hi Charles,\r\n\r\ncould you please send me the Log.out file?\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "charlesdavid",
"comment_id": 343783296,
"datetime": 1510534351000,
"masked_author": "username_0",
"text": "Hi Alex,\r\n\r\nHere it is.\r\n[TEST_Log.out.txt](https://github.com/username_1/STAR/files/1465274/TEST_Log.out.txt)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 344338945,
"datetime": 1510681404000,
"masked_author": "username_1",
"text": "Hi Charles,\r\n\r\nnothing suspicious in the Log.out file.\r\nTo narrow down the problem, Could you please try the following:\r\n1. Map with default parameters, only set the --genomeDir and --readFilesIn\r\n2. Map a few thousand reads and map them.\r\n3. Generate a genome for just one chromosome and map a few reads to it.\r\n\r\nIf none of the above works, could you send me the one chromosome and a few reads that still cause the seg-fault for further testing.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 474397286,
"datetime": 1553005982000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nI also have a problem with the two-pass mode (STAR 2.7.0e).\r\n\r\nI get a core dump after mapping > 100 mio reads in the 2nd round. In the first round it's all fine. Happens with the option --twopassMode Basic as well as if I do it manually (in both cases only in the 2nd round). I tried once to remove the SJs at the ends of the chromosome (closer than 250 bp) but this did not help. I could narrow it down to 150'000 reads. \r\n\r\n[Here is a link to all the files that I used for the test below:](https://www.dropbox.com/s/a19hvnyd8ucn8ov/forAlex.tar.gz?dl=1)\r\n\r\n```\r\nTESTFOLDER=\"$HOME/temp\"\r\nSAMPLE=\"Bb-tachyzoites\"\r\nSTARIDXPASS=\"$TESTFOLDER/2ndPassIndex\"\r\nINPUTSEQ=\"$TESTFOLDER/withStructAndSmallVar.fasta\"\r\nSJTABLE=\"$TESTFOLDER/${SAMPLE}SJ.out.tab\"\r\nFORWARD=\"$TESTFOLDER/${SAMPLE}_R1_bugTest_select.tr.fq\"\r\nREVERSE=\"$TESTFOLDER/${SAMPLE}_R2_bugTest_select.tr.fq\"\r\nOUTPREFIX=\"$TESTFOLDER/${SAMPLE}_2ndPass\"\r\n\r\nSTAR --runThreadN 4 --runMode genomeGenerate --genomeDir $STARIDXPASS --sjdbFileChrStartEnd $SJTABLE --sjdbOverhang 149 --genomeFastaFiles $INPUTSEQ\r\n\r\nSTAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD $REVERSE --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n\r\n# returns:\r\nMar 19 15:22:43 ..... started STAR run\r\nMar 19 15:22:43 ..... loading genome\r\nMar 19 15:22:44 ..... started mapping\r\nSegmentation fault (core dumped)\r\n```\r\n\r\nWhat seems very odd is that I cannot narrow it down any further. If I remove the first half of the reads it does not crash. If I remove the second half it does not crash either. 
It only crashes using all of the remaining 150'000 reads.\r\n\r\n```\r\nwc -l $FORWARD\r\n\r\nhead -n 300000 $FORWARD > $FORWARD.firstHalf\r\nhead -n 300000 $REVERSE > $REVERSE.firstHalf\r\nSTAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD.firstHalf $REVERSE.firstHalf --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n\r\n# no crash, finishes as usual\r\n\r\ntail -n 300000 $FORWARD > $FORWARD.secondHalf\r\ntail -n 300000 $REVERSE > $REVERSE.secondHalf\r\nSTAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD.secondHalf $REVERSE.secondHalf --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n\r\n# no crash, finishes as usual\r\n```\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 474439228,
"datetime": 1553010870000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\ncould you please try the latest patch on GitHub [master](https://github.com/username_1/STAR/archive/master.zip).\r\nI have fixed a bug with 2-pass, it may resolve your issue. if not I will look into it.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 474449312,
"datetime": 1553011905000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nThanks. I downloaded the archive from the link and I used the binary in \"STAR-master/bin/Linux_x86_64_static/\". Still the same behavior.\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 474519553,
"datetime": 1553020991000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nthanks for trying it - I will look into it closely now.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 474612060,
"datetime": 1553035142000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nthis looks like an issue with a small genome rather than 2-pass: you need to scale down --genomeSAindexNbases when generating the genome, for both 1st and 2nd pass (even if there were no problem in the 1st pass).\r\n\r\n--genomeSAindexNbases 12 worked with your fastq, but I would go lower to 10 just to be safe.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 474732559,
"datetime": 1553070300000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nThanks for the suggestion. I tried --genomeSAindexNbases 12 with the version from the master and 2.7.0e. Both crash (two different machines by the way - Ubuntu Server 16.04 on Intel and Kubuntu 18.04 on AMD). Even if I go down to --genomeSAindexNbases 10 or --genomeSAindexNbases 6 it crashes (I double-checked the genomeParameters.txt). It still only happens if I use all reads and not if I split the fastqs further.\r\n\r\nOn which OS did you test it?\r\n\r\nIt seems to be related at least partly to the splice junctions as it only happens if I insert them. I tried now to remove all junctions at the contig ends (first and last 1 kb this time) and all junctions on contigs shorter than 1 Mb. Still crashes.\r\n\r\nAny other ideas?\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 475388475,
"datetime": 1553199874000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\ndo the crashes happen on the same small dataset as before?\r\nAlso, could you try the pre-compiled executables from the bin/ ?\r\nMy systems is CentOS-7.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 475530947,
"datetime": 1553242749000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nYes, same small dataset. \r\n\r\nI used only precompiled binaries so far. The 2.7.0e is from miniconda and the other one from the link you posted previously (master.zip). \r\n\r\nI tested it briefly with a CentOS-7.6.1810. Crashed as well. The same test as at the beginning with the precompiled executable from the master.zip:\r\n\r\n```\r\nTESTFOLDER=\"$HOME/temp\"\r\nSAMPLE=\"Bb-tachyzoites\"\r\nSTARIDXPASS=\"$TESTFOLDER/2ndPassIndex\"\r\nINPUTSEQ=\"$TESTFOLDER/withStructAndSmallVar.fasta\"\r\nSJTABLE=\"$TESTFOLDER/${SAMPLE}SJ.out.tab\"\r\nFORWARD=\"$TESTFOLDER/${SAMPLE}_R1_bugTest_select.tr.fq\"\r\nREVERSE=\"$TESTFOLDER/${SAMPLE}_R2_bugTest_select.tr.fq\"\r\nOUTPREFIX=\"$TESTFOLDER/${SAMPLE}_2ndPass\"\r\n\r\nSTAR --runThreadN 4 --runMode genomeGenerate --genomeSAindexNbases 10 --genomeDir $STARIDXPASS --sjdbFileChrStartEnd $SJTABLE --sjdbOverhang 149 --genomeFastaFiles $INPUTSEQ\r\n\r\nSTAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD $REVERSE --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n\r\n# segmentation fault\r\n```\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 475657041,
"datetime": 1553267264000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nI am running it with exactly the same parameters and it runs fine...\r\nI will run it with valgrind to see if there any hidden problems.\r\nOn your side, could you try to compile on your machine - maybe there is some kind of incompatibility there. And then please send me the Log.out of a failed run.\r\n\r\nThanks!\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 476096306,
"datetime": 1553502017000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nHere is the log from genomeGenerate:\r\n\r\n[Log.out.zip](https://github.com/username_1/STAR/files/3002091/Log.out.zip)\r\n\r\nAnd here from alignReads (precompiled executable from the master.zip, static linkage):\r\n\r\n[Bb-tachyzoites_2ndPassLog.out.zip](https://github.com/username_1/STAR/files/3002142/Bb-tachyzoites_2ndPassLog.out.zip)\r\n\r\nThen I also tried compiling it myself:\r\n\r\n```\r\ngit clone https://github.com/username_1/STAR\r\ncd STAR/source\r\nmake -j 4 STAR\r\ncd\r\n\r\nTESTFOLDER=\"$HOME/temp\"\r\nSAMPLE=\"Bb-tachyzoites\"\r\nSTARIDXPASS=\"$TESTFOLDER/2ndPassIndex\"\r\nINPUTSEQ=\"$TESTFOLDER/withStructAndSmallVar.fasta\"\r\nSJTABLE=\"$TESTFOLDER/${SAMPLE}SJ.out.tab\"\r\nFORWARD=\"$TESTFOLDER/${SAMPLE}_R1_bugTest_select.tr.fq\"\r\nREVERSE=\"$TESTFOLDER/${SAMPLE}_R2_bugTest_select.tr.fq\"\r\nOUTPREFIX=\"$TESTFOLDER/${SAMPLE}_2ndPass\"\r\n\r\n$HOME/STAR/source/STAR --runThreadN 4 --runMode genomeGenerate --genomeSAindexNbases 10 --genomeDir $STARIDXPASS --sjdbFileChrStartEnd $SJTABLE --sjdbOverhang 149 --genomeFastaFiles $INPUTSEQ\r\n\r\n$HOME/STAR/source/STAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD $REVERSE --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n```\r\n\r\nDoes not work, crashes as well. Here is the log:\r\n\r\n[self_compiled_Bb-tachyzoites_2ndPassLog.out.zip](https://github.com/username_1/STAR/files/3002160/self_compiled_Bb-tachyzoites_2ndPassLog.out.zip)\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 476396123,
"datetime": 1553551619000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nnothing suspicious in the Log files, and valgrind did not catch any relevant problems.\r\nCould you download the genome I generated and try mapping to it?\r\nIt will tell us whether the error is in the genome generation or mapping.\r\n\r\nThanks!\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 476397531,
"datetime": 1553551928000,
"masked_author": "username_1",
"text": "Sorry, forgot the link:\r\nhttp://labshare.cshl.edu/shares/gingeraslab/www-data/dobin/STAR/Tests/Marc/",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 476519196,
"datetime": 1553588306000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nI tested it with your genome index. Still crashes. The log looks similar:\r\n\r\n[Bb-tachyzoites_2ndPassLog.out.zip](https://github.com/username_1/STAR/files/3007121/Bb-tachyzoites_2ndPassLog.out.zip)\r\n\r\nI also tested running it on a single thread (crash) and typing the entire command again without any variables (to double check that there are no odd characters, still crash, can't see anything special with the hex editor either).\r\n\r\nAre environment variables or locales important/used?\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 476692206,
"datetime": 1553612823000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nSTAR does not use any env variables or locales...\r\nSo the problem seems to be with the mapping itself, not genome generation.\r\n\r\nI think you would have to try to debug it on your side since I cannot reproduce the problem.\r\nIf you are willing to do it, here are the steps:\r\n$ cd source\r\n$ make clean\r\n$ make gdb\r\n$ gdb --args $HOME/STAR/source/STAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir $STARIDXPASS --readFilesIn $FORWARD $REVERSE --outFileNamePrefix $OUTPREFIX --outSAMtype BAM Unsorted\r\n\r\nand inside gdb:\r\n(gdb) r\r\nThe code will run and we should be able to see where seg-fault happens.\r\n\r\nThanks!\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 476800360,
"datetime": 1553626806000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nThanks for the suggestion. I'm out of office tomorrow but I will test it on Thursday.\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 477495678,
"datetime": 1553761422000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nI tested it. It runs perfectly fine with the debug build. Does not matter if in gdb or directly in the console. I verified the crash of the release build again by doing a quick \"make clean\" and \"make STAR\" again. Still crashes.\r\n\r\nWell, I will just use the debug build for now.\r\n\r\nBest regards,\r\n\r\nMarc\r\n\r\nHere's the output from the debugger:\r\n\r\n(gdb) r\r\nStarting program: /home/marc/STAR/source/STAR --runThreadN 4 --runMode alignReads --limitBAMsortRAM 20000000000 --genomeDir /home/marc/temp/2ndPassIndex --readFilesIn /home/marc/temp/Bb-tachyzoites_R1_bugTest_select.tr.fq /home/marc/temp/Bb-tachyzoites_R2_bugTest_select.tr.fq --outFileNamePrefix /home/marc/temp/Bb-tachyzoites_2ndPass --outSAMtype BAM Unsorted\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\r\nMar 28 08:43:16 ..... started STAR run\r\nMar 28 08:43:16 ..... loading genome\r\nMar 28 08:43:16 ..... started mapping\r\n[New Thread 0x7fff906d9700 (LWP 5570)]\r\n[New Thread 0x7fff8fed8700 (LWP 5571)]\r\n[New Thread 0x7fff8f6d7700 (LWP 5572)]\r\n[Thread 0x7fff8f6d7700 (LWP 5572) exited]\r\n[Thread 0x7fff8fed8700 (LWP 5571) exited]\r\n[Thread 0x7fff906d9700 (LWP 5570) exited]\r\nMar 28 08:55:36 ..... finished mapping\r\nMar 28 08:55:36 ..... finished successfully\r\n[Inferior 1 (process 5566) exited normally]",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 477631431,
"datetime": 1553785032000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nthanks for the tests! So it's a Heisenbug... which are hard to debug.\r\nI will try to run valgrind withoutdebug compilation, maybe it will give us a hint.\r\n\r\nDebug compiling is done with -O0, i.e. without any optimizations, which makes the code ~3 times slower.\r\nPlease try to compile with (after make clean) and let me know if it crashes or not:\r\n$ make CXXFLAGS=\"-O2 -pipe -std=c++11 -Wall -Wextra -fopenmp\"\r\nand if it still crashes with -O1\r\nThese should be faster - if they work.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 477636837,
"datetime": 1553785777000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nI tried\r\n\r\n```\r\ncd STAR/source\r\nmake clean\r\nmake CXXFLAGS=\"-O2 -pipe -std=c++11 -Wall -Wextra -fopenmp\"\r\n# also tried with STAR at after the CXXFLAGS:\r\n# make CXXFLAGS=\"-O2 -pipe -std=c++11 -Wall -Wextra -fopenmp\" STAR\r\n```\r\n\r\nand I got:\r\n\r\ng++: warning: parametersDefault.xxd: linker input file unused because linking not done\r\ng++: warning: htslib: linker input file unused because linking not done\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp ParametersSolo.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloRead.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloRead_record.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloReadBarcode.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloReadBarcode_getCBandUMI.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloReadFeature.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloReadFeature_record.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloReadFeature_inputRecords.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp Solo.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloFeature.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloFeature_collapseUMI.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloFeature_outputResults.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SoloFeature_processRecords.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp ReadAlign_outputAlignments.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp ReadAlign.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp STAR.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SharedMemory.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp PackedArray.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp SuffixArrayFuns.cpp\r\ng++ -c -O2 -pipe -std=c++11 -Wall -Wextra -fopenmp Parameters.cpp\r\nParameters.cpp: In member function ‘void 
Parameters::inputParameters(int, char**)’:\r\nParameters.cpp:340:62: error: ‘COMPILATION_TIME_PLACE’ was not declared in this scope\r\n inOut->logMain << \"STAR compilation time,server,dir=\" << COMPILATION_TIME_PLACE << \"\\n\"<<flush;\r\n ^~~~~~~~~~~~~~~~~~~~~~\r\nMakefile:66: recipe for target 'Parameters.o' failed\r\nmake: *** [Parameters.o] Error 1\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 477638456,
"datetime": 1553785993000,
"masked_author": "username_1",
"text": "Sorry, it should be\r\n$ make CXXFLAGS=\"-O1 -pipe -std=c++11 -Wall -Wextra -fopenmp -D'COMPILATION_TIME_PLACE=1'\"",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 477644623,
"datetime": 1553786835000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nOk, that worked for the compilation. Though, both, -O2 and -O1, crash.\r\n\r\nLet me know if there is something else I can test.\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 477800832,
"datetime": 1553813797000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nthanks a lot for the tests!\r\nI run valgrind with optimized code, and it did catch a problem, which may (fingers crossed) have caused the seg-fault you observed. \r\nPlease try the code on GitHub master:\r\nhttps://github.com/username_1/STAR/archive/master.zip\r\nEither pre-compiled executables or compiled with simple \"make\".\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 477905250,
"datetime": 1553846480000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nThanks for the fix. I compiled the new code and ran it as always. No crash this time. Seems to be fixed :)\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "MWSchmid",
"comment_id": 477914203,
"datetime": 1553848533000,
"masked_author": "username_2",
"text": "Hi Alex\r\n\r\nThanks for the fix. I compiled the new code and ran it as always. No crash this time. Seems to be fixed :)\r\n\r\nBest regards,\r\n\r\nMarc",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "alexdobin",
"comment_id": null,
"datetime": 1553870523000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "alexdobin",
"comment_id": 478022917,
"datetime": 1553870523000,
"masked_author": "username_1",
"text": "Hi Marc,\r\n\r\nthank you very much for your help with debugging it.\r\nI would not be able to do it without your prompt tests.\r\n\r\nCheers\r\nAlex",
"title": null,
"type": "comment"
}
] | 3 | 29 | 15,671 | false | false | 15,671 | true |
OmniSharp/omnisharp-roslyn | OmniSharp | 70,587,108 | 184 | null | [
{
"action": "opened",
"author": "nosami",
"comment_id": null,
"datetime": 1429851735000,
"masked_author": "username_0",
"text": "\r\n\r\nEvents have the word 'event' in front of them.\r\n\r\nAlso, this looks most likely to be a bug in omnisharp-roslyn.\r\nhttps://github.com/OmniSharp/omnisharp-atom/issues/138\r\n\r\n",
"title": "Intellisense bugs",
"type": "issue"
},
{
"action": "closed",
"author": "ctolkien",
"comment_id": null,
"datetime": 1429926023000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 419 | false | false | 419 | false |
Yalantis/Koloda | Yalantis | 146,298,699 | 139 | null | [
{
"action": "opened",
"author": "EduardJS",
"comment_id": null,
"datetime": 1459945653000,
"masked_author": "username_0",
"text": "As the title suggests, I'm having a hard time finding out how to set the images to have an `ScaleAspectToFit` ?",
"title": "How to set the aspect fill ?",
"type": "issue"
},
{
"action": "created",
"author": "AEugene",
"comment_id": 215428538,
"datetime": 1461851196000,
"masked_author": "username_1",
"text": "hi @username_0 . If you're using example provided in repo, you can set `contentMode` to UIImageView instance, which you returns in `viewForCardAtIndex`",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "EduardJS",
"comment_id": null,
"datetime": 1462868692000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 260 | false | false | 260 | true |
i-schuetz/SwiftCharts | null | 220,530,161 | 254 | null | [
{
"action": "opened",
"author": "najmul-csebuet",
"comment_id": null,
"datetime": 1491795660000,
"masked_author": "username_0",
"text": "",
"title": "Can we show the label on top of the bar instead of the y axis.. or show them in k like 10k instead of 10,000 if the numbers are that high?",
"type": "issue"
},
{
"action": "closed",
"author": "i-schuetz",
"comment_id": null,
"datetime": 1501096742000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 0 | false | false | 0 | false |
zig-lang/zig | zig-lang | 244,313,374 | 407 | null | [
{
"action": "opened",
"author": "tiehuis",
"comment_id": null,
"datetime": 1500545991000,
"masked_author": "username_0",
"text": "For example in the generated [here](https://username_0.github.io/iterative-replacement-of-c-with-zig) `#ifndef COMPUTE.ZIG_COMPUTE.ZIG_H` is invalid and everything after the `.` is not considered part of the token.\r\n\r\nSince presumably these can contain any input characters, some sort of suitable normalization needs to be performed so the guards are valid identifiers.",
"title": "Make generated C header file guards conform to standard naming",
"type": "issue"
},
{
"action": "created",
"author": "raulgrell",
"comment_id": 319708447,
"datetime": 1501687857000,
"masked_author": "username_1",
"text": "Could we simply use a double underscore `__` instead of the `.`?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andrewrk",
"comment_id": 319751238,
"datetime": 1501696995000,
"masked_author": "username_2",
"text": "Yes. It's not a hard problem to solve, it's just that area of code hasn't been fully developed yet. It actually could be a good area for a new contributor to ease into the codebase",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "raulgrell",
"comment_id": 319754241,
"datetime": 1501697666000,
"masked_author": "username_1",
"text": "I'll give it a shot. I'm working on a little game engine that i'd like to be able to mix and match c and zig code, so that for example, you can write the platform layer in C, which calls zig engine code that uses a c library for netcode.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andrewrk",
"comment_id": 319786321,
"datetime": 1501705171000,
"masked_author": "username_2",
"text": "That's a great use case. You're probably going to run into some issues. When you do, please report them and we'll get them all sorted out. I prioritize fixing issues that are blocking zig users from working on their projects.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "raulgrell",
"comment_id": 319815790,
"datetime": 1501712684000,
"masked_author": "username_1",
"text": "I've run into plenty of issues already =P mostly stuff that's easy to work around like TODO asserts and other asserts that don't give an indication of where the issue is. I've wanted to report them but i havent been able to reproduce them with a simple case. I hope to be able to put my code up in the next couple of weeks, i just haven't had enough time lately. Ive also got a few things that might be useful for the stdlib. Theyll be in my 'tick' and 'zig-misc' repos respectively",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "raulgrell",
"comment_id": 319815994,
"datetime": 1501712748000,
"masked_author": "username_1",
"text": "Also, i havent started the audio layer yet, is your libsoundio library usable from zig?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andrewrk",
"comment_id": 319819051,
"datetime": 1501713781000,
"masked_author": "username_2",
"text": "There are tricks to figure out what code causes the asserts if you run in a debugger. You can find out the source AST node and from that print the file, line, and column. \n\nLibsoundio is indeed directly usable from zig - it was one of the examples that I made sure all the symbols from the .h file were recognized.\n\nI'll have a look at your zig repos. Exciting stuff!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "tiehuis",
"comment_id": 320171520,
"datetime": 1501829382000,
"masked_author": "username_0",
"text": "Just a reminder that you'll probably want to handle arbitrary input and not just dots.\r\n\r\nFor example `Ω%4.zig` will result in the header guard `Ω%4Ω%4_Ω%4_H`.\r\n\r\nValid identifiers are the usual `[A-Za-z_][A-Za-z0-9_]*` form. I'd say this nearly calls a specific name-mangling strategy as well since foreign characters in files could easily map to the same identifier if a simple replacement strategy is used.\r\n\r\ne.g. `%%##$,zig` and `^#&^%.zig` would map to the same value if just performing a replacement with underscores.\r\n\r\nJust something to think about, starting off simple will already be an improvement!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "andrewrk",
"comment_id": 320270596,
"datetime": 1501858633000,
"masked_author": "username_2",
"text": "Agreed. I would propose something similar to the way percent escaping works in URLs. E.g. choose `_` as the special character, and then anything that is not `[a-zA-Z][a-zA-Z0-9]*` gets `_xx` where xx is the hex code for the byte outside the alphabet.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "raulgrell",
"comment_id": 320277897,
"datetime": 1501860335000,
"masked_author": "username_1",
"text": "That's a pretty good solution. Should we surround the special characters with underscores instead of just a prefix? That way the name mangling could support unicode and actually be fully reversible in the case a special character is followed by something hex-like.\r\n\r\n- `Ω%4.zig` would become `_03A9__25_4_2E_ZIG_H`\r\n- `compute_helper.zig` would become `COMPUTE_5F_HELPER_2E_ZIG_H`\r\n\r\nwhere `2E = .` and `5F == _`.\r\n\r\nIf we want to keep the mangled names prettier in the usual case of `[a-zA-Z][a-zA-Z0-9]*` filenames, the dot can be a special case and convert to a double underscore.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "andrewrk",
"comment_id": null,
"datetime": 1502062362000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 12 | 3,454 | false | false | 3,454 | true |
chaosbot/Chaos | null | 231,558,784 | 276 | {
"number": 276,
"repo": "Chaos",
"user_login": "chaosbot"
} | [
{
"action": "opened",
"author": "MUCHZER",
"comment_id": null,
"datetime": 1495787112000,
"masked_author": "username_0",
"text": "Website was _meh_, now website is **wow**!",
"title": "Updated the website to look better",
"type": "issue"
},
{
"action": "created",
"author": "MUCHZER",
"comment_id": 304225930,
"datetime": 1495787962000,
"masked_author": "username_0",
"text": "Preview of the modifications for the most lazy\r\n",
"title": null,
"type": "comment"
}
] | 2 | 3 | 328 | false | true | 140 | false |
RohanNagar/thunder | null | 290,111,734 | 33 | null | [
{
"action": "opened",
"author": "RohanNagar",
"comment_id": null,
"datetime": 1516397155000,
"masked_author": "username_0",
"text": "If we can build a docker image that runs thunder, we can open ourselves up to much easier deployment, testing, and autoscaling.\r\n\r\nWe can then run Thunder on Amazon ECS and eventually Amazon EKS (Managed Kubernetes).",
"title": "Containerize Thunder with Docker",
"type": "issue"
},
{
"action": "created",
"author": "RohanNagar",
"comment_id": 380238802,
"datetime": 1523392709000,
"masked_author": "username_0",
"text": "Thunder is now on Docker Hub: https://hub.docker.com/r/rohannagar/thunder/",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "RohanNagar",
"comment_id": null,
"datetime": 1523393014000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "RohanNagar",
"comment_id": 380240206,
"datetime": 1523393014000,
"masked_author": "username_0",
"text": "Marking this issue as closed. Docker images are being built and pushed using GitLab",
"title": null,
"type": "comment"
}
] | 1 | 4 | 373 | false | false | 373 | false |
SexualRhinoceros/MusicBot | null | 132,712,813 | 88 | null | [
{
"action": "opened",
"author": "Rapidfire88",
"comment_id": null,
"datetime": 1455114403000,
"masked_author": "username_0",
"text": "!promote to insert currently playing song to backuplist.txt\r\n!promote <URL> to insert <URL> into backuplist.txt\r\n!promoteprev to insert previous played song to backuplist.txt",
"title": "!promote to promote currently playing song to backuplist.txt",
"type": "issue"
},
{
"action": "created",
"author": "imayhaveborkedit",
"comment_id": 182431514,
"datetime": 1455118726000,
"masked_author": "username_1",
"text": "When I write static playlists the autoplaylist will be made into one of those playlists. I'll have management commands for those, including adding songs.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "jaydenkieran",
"comment_id": null,
"datetime": 1514663585000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 328 | false | false | 328 | false |
zalando-incubator/mate | zalando-incubator | 204,172,280 | 78 | {
"number": 78,
"repo": "mate",
"user_login": "zalando-incubator"
} | [
{
"action": "opened",
"author": "ideahitme",
"comment_id": null,
"datetime": 1485822833000,
"masked_author": "username_0",
"text": "addresses the panic mentioned https://github.com/zalando-incubator/mate/issues/77",
"title": "check the slice length",
"type": "issue"
},
{
"action": "created",
"author": "linki",
"comment_id": 276382141,
"datetime": 1485873956000,
"masked_author": "username_1",
"text": "👍",
"title": null,
"type": "comment"
}
] | 3 | 4 | 654 | false | true | 82 | false |
apple/swift | apple | 139,544,099 | 1,596 | {
"number": 1596,
"repo": "swift",
"user_login": "apple"
} | [
{
"action": "opened",
"author": "practicalswift",
"comment_id": null,
"datetime": 1457520959000,
"masked_author": "username_0",
"text": "<!-- Please complete this template before creating pull request. -->\r\n#### What's in this pull request?\r\nMake sure multi line docstrings start without a leading new line.\r\n\r\n#### Resolved bug number: ([SR-](https://bugs.swift.org/browse/SR-))\r\n<!-- If this pull request resolves any bugs from Swift bug tracker -->\r\n\r\n* * * *\r\n\r\n<!-- This selection should only be completed by Swift admin -->\r\nBefore merging this pull request to apple/swift repository:\r\n- [ ] Test pull request on Swift continuous integration.\r\n\r\n<details>\r\n <summary>Triggering Swift CI</summary>\r\n\r\nThe swift-ci is triggered by writing a comment on this PR addressed to the GitHub user @swift-ci. Different tests will run depending on the specific comment that you use. The currently available comments are:\r\n\r\n**Smoke Testing**\r\n\r\n Platform | Comment\r\n ------------ | -------------\r\n All supported platforms | @swift-ci Please smoke test\r\n OS X platform | @swift-ci Please smoke test OS X platform\r\n Linux platform | @swift-ci Please smoke test Linux platform\r\n\r\n **Validation Testing**\r\n\r\n Platform | Comment\r\n ------------ | -------------\r\n All supported platforms | @swift-ci Please test\r\n OS X platform | @swift-ci Please test OS X platform\r\n Linux platform | @swift-ci Please test Linux platform\r\n\r\nNote: Only members of the Apple organization can trigger swift-ci.\r\n</details>\r\n<!-- Thank you for your contribution to Swift! -->",
"title": "[Python] Make sure multi line docstrings start without a leading new line",
"type": "issue"
},
{
"action": "created",
"author": "gribozavr",
"comment_id": 194405801,
"datetime": 1457543719000,
"masked_author": "username_1",
"text": "Why is this an improvement?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "practicalswift",
"comment_id": 194548337,
"datetime": 1457563297000,
"masked_author": "username_0",
"text": "I've seen some style guides recommending putting the summary line on the same line as the opening quotes (see for example the [OpenStack Python Guidelines](https://git.openstack.org/cgit/openstack-dev/hacking/plain/HACKING.rst), rule `H404`) and therefore incorrectly thought about it as being part of the official docstring conventions.\r\n\r\nClosing this PR :-)",
"title": null,
"type": "comment"
}
] | 2 | 3 | 1,933 | false | false | 1,933 | false |
hjylewis/trashable-react | null | 278,556,808 | 3 | null | [
{
"action": "opened",
"author": "glasser",
"comment_id": null,
"datetime": 1512153494000,
"masked_author": "username_0",
"text": "I think it makes sense that if registerPromise is called after unmount, that it should return something like `new Promise(() => {})`, ie a Promise that never will resolve.\r\n\r\nI ran into this in this situation: I had forgotten to register one particular Promise, and so the component unmounted (because of a client-side redirect that occurred on page load) before most of my Promises even ran, and so the fact that I registered later Promises was irrelevant.\r\n\r\nHere's how I'm using this by the way:\r\n\r\n```js\r\n componentDidMount() {\r\n this.pollInterval = setInterval(() => this.poll(), 30 * 1000);\r\n this.poll();\r\n }\r\n componentWillUnmount() {\r\n if (this.pollInterval) {\r\n clearInterval(this.pollInterval);\r\n this.pollInterval = null;\r\n }\r\n }\r\n poll() {\r\n this.props.registerPromise(Promise.resolve())\r\n .then(() => this.props.registerPromise(this.checkStatus()))\r\n .then(details => details || this.props.registerPromise(this.checkIncident()))\r\n .then(details => details || this.props.registerPromise(this.checkMaintenance()))\r\n .then(details => this.setState({details}));\r\n }\r\n```\r\n\r\nWhen I was missing the registerPromise around the initial Promise.resolve() I would trigger the issue described here every time I loaded the page that did a redirect (and thus a component unmount) on startup.\r\n\r\nArguably this was me mis-using the API by forgetting a registration, but it still seems like this should have been able to work.",
"title": "Maybe should pre-trash Promises if registerPromise called after unmount?",
"type": "issue"
},
{
"action": "created",
"author": "hjylewis",
"comment_id": 348621914,
"datetime": 1512164533000,
"masked_author": "username_1",
"text": "Hmm, interesting. So I don't want to encourage calling `registerPromise` after unmounting since that references `this` and prevents the React object from getting GC'd.\r\n\r\nBut on the other hand, what happens *now* when you don't have the `registerPromise` around the initial `Promise.resolve()`? Do to promises go on to be called? 😬",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "glasser",
"comment_id": 348633235,
"datetime": 1512167976000,
"masked_author": "username_0",
"text": "Yep!\r\n\r\nSo arguably maybe the (or \"a\") problem is that I'm registering the promise asynchronously, and I should only ever be calling registerPromise at a time that is \"known untrashed\"?\r\n\r\nHonestly I kinda just want to have to call registerPromise once and have it be \"viral\".\r\n\r\nie, maybe sooooomehow, the way that base `trashable` should work is that if you trash a Promise, you automatically trash any Promise made from a chain of `then` or `catch` on it too. Is that too awful?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "hjylewis",
"comment_id": 348733404,
"datetime": 1512264642000,
"masked_author": "username_1",
"text": "Yeah, I think that would be the goal. It IS annoying to have to wrap all those chained promises.\r\n\r\nIf we were able to create an actual TrashablePromise as proposed [here](https://github.com/username_1/trashable/issues/6#issuecomment-348322130) in username_1/trashable#6, we might be able to wrap the returned values of the handles to chain together Trashable Promises so you could write:\r\n```\r\n this.props.registerPromise(Promise.resolve())\r\n .then(() => this.firstPromise())\r\n .then(() => this.secondPromise())\r\n .then(() => this.thirdPromise());\r\n```\r\ninstead of:\r\n```\r\n this.props.registerPromise(Promise.resolve())\r\n .then(() => this.props.registerPromise(this.firstPromise()))\r\n .then(() => this.props.registerPromise(this.secondPromise()))\r\n .then(() => this.props.registerPromise(this.thirdPromise()));\r\n```\r\nWhich would be great. Not sure if possible though...",
"title": null,
"type": "comment"
}
] | 2 | 4 | 3,188 | false | false | 3,188 | true |
chirag04/react-native-in-app-utils | null | 140,761,586 | 21 | {
"number": 21,
"repo": "react-native-in-app-utils",
"user_login": "chirag04"
} | [
{
"action": "opened",
"author": "fisch0920",
"comment_id": null,
"datetime": 1457980713000,
"masked_author": "username_0",
"text": "See #15.\r\n\r\nThe most obvious use case this improves is previously in-app purchases would never get a callback if the user cancelled the purchase from the confirmation popup.",
"title": "More robust error handling",
"type": "issue"
},
{
"action": "created",
"author": "chirag04",
"comment_id": 196480723,
"datetime": 1457982758000,
"masked_author": "username_1",
"text": "Thanks for the PR. It's a breaking change that we now send the error callback on cancel. Can you document that as well?\r\n\r\nWe can bump the major version then.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "grabbou",
"comment_id": 196490098,
"datetime": 1457984390000,
"masked_author": "username_2",
"text": "Hey, re: `RCTUtils`, I think you can just:\r\n```obj-c\r\nswitch (transaction.transactionState) {\r\n case SKPaymentTransactionStateFailed: {\r\n NSString *key = RCTKeyForInstance(transaction.payment.productIdentifier);\r\n RCTResponseSenderBlock callback = _callbacks[key];\r\n if (callback) {\r\n callback(@[RCTJSErrorFromNSError(transaction.error)]);\r\n [_callbacks removeObjectForKey:key];\r\n } else {\r\n RCTLogWarn(@\"No callback registered for transaction with state failed.\");\r\n }\r\n [[SKPaymentQueue defaultQueue] finishTransaction:transaction];\r\n break;\r\n}\r\n```",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chirag04",
"comment_id": 197073930,
"datetime": 1458086663000,
"masked_author": "username_1",
"text": "@username_0 can you incorporate @username_2 feedback?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "DenisIzmaylov",
"comment_id": 214497301,
"datetime": 1461613776000,
"masked_author": "username_3",
"text": "Ping",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "DenisIzmaylov",
"comment_id": 214815949,
"datetime": 1461690945000,
"masked_author": "username_3",
"text": "@username_2 Could you please explain what does it mean? How I should to update your code to get it working? \r\n",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "grabbou",
"comment_id": 214826980,
"datetime": 1461693141000,
"masked_author": "username_2",
"text": "wrap it with `@(...)` - that converts `int` to NSNumber which is an Obj-c object. Looks like a bug in my snippet!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "DenisIzmaylov",
"comment_id": 214830518,
"datetime": 1461693747000,
"masked_author": "username_3",
"text": "It seemed just missed out `RCTUtils.h` :)",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "DenisIzmaylov",
"comment_id": 214832520,
"datetime": 1461694159000,
"masked_author": "username_3",
"text": "I've created alternative PR to move this fix forward.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "chirag04",
"comment_id": 214916884,
"datetime": 1461712844000,
"masked_author": "username_1",
"text": "Thanks for all the help here guys!",
"title": null,
"type": "comment"
}
] | 4 | 10 | 1,450 | false | false | 1,450 | true |
PX4/Firmware | PX4 | 164,737,971 | 5,023 | null | [
{
"action": "opened",
"author": "dagar",
"comment_id": null,
"datetime": 1468190518000,
"masked_author": "username_0",
"text": "Options - \r\nOverride in pwm module or turn motor_test into a module and disable attitude controller output?\r\n\r\n\r\nhttps://github.com/mavlink/mavlink/pull/580",
"title": "implement MAV_CMD_DO_MOTOR_TEST",
"type": "issue"
},
{
"action": "created",
"author": "LorenzMeier",
"comment_id": 231615679,
"datetime": 1468191025000,
"masked_author": "username_1",
"text": "I think we need a direct actuator_passthrough topic to which all the topics subscribe and what they put onto the output. We could implement it accordingly: A flag wether this is scaled (-1..1 or PWM) and what the timeout should be. While the timeout is still running the mixer would be locked out.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "TSC21",
"comment_id": 367500955,
"datetime": 1519252590000,
"masked_author": "username_2",
"text": "Still relevant I guess.",
"title": null,
"type": "comment"
},
{
"action": "reopened",
"author": "TSC21",
"comment_id": null,
"datetime": 1519252590000,
"masked_author": "username_2",
"text": "Options - \nOverride in pwm module or turn motor_test into a module and disable attitude controller output?\nShould we make motor_ramp a part of this?\n\nhttps://github.com/mavlink/mavlink/pull/580",
"title": "implement MAV_CMD_DO_MOTOR_TEST",
"type": "issue"
},
{
"action": "closed",
"author": "dagar",
"comment_id": null,
"datetime": 1548032037000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "dagar",
"comment_id": 455919139,
"datetime": 1548032037000,
"masked_author": "username_0",
"text": "Duplicate https://github.com/PX4/Firmware/issues/10782",
"title": null,
"type": "comment"
}
] | 4 | 8 | 963 | false | true | 723 | false |
rapid7/nexpose-client-python | rapid7 | 262,888,691 | 32 | null | [
{
"action": "opened",
"author": "gschneider-r7",
"comment_id": null,
"datetime": 1507142308000,
"masked_author": "username_0",
"text": "The following things in run_demo.py are not working after the py3 updates:\r\n\r\n- [x] DemonstrateVulnerabilityAPI() (fixed in #31)\r\n- [ ] DemonstrateBackupAPI()\r\n- [ ] DemonstrateCriteriaAPI()\r\n- [ ] DemonstrateSharedCredentialsAPI()\r\n- [ ] DemonstrateAssetFilterAPI()\r\n- [ ] DemonstrateDiscoveryConnectionAPI()\r\n- [ ] DemonstrateUserAPI()",
"title": "Fix things that stopped working after py3 updates",
"type": "issue"
},
{
"action": "created",
"author": "grobinson-r7",
"comment_id": 334955853,
"datetime": 1507400242000,
"masked_author": "username_1",
"text": "@username_0 , that should be all of the above covered with \r\nPRs#34,35,36. I don't think `DemonstrateDiscoveryConnectionAPI()` is broke as the error in the nsc.log file indicates a connection problem rather than an actual problem with the python client.\r\n`2017-10-07T19:14:45 [WARN] Failed while connecting to discovery center.`",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "gschneider-r7",
"comment_id": null,
"datetime": 1507582359000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 668 | false | false | 668 | true |
TheThingsNetwork/arduino-device-lib | TheThingsNetwork | 181,969,121 | 84 | {
"number": 84,
"repo": "arduino-device-lib",
"user_login": "TheThingsNetwork"
} | [
{
"action": "opened",
"author": "Nicolasdejean",
"comment_id": null,
"datetime": 1476091136000,
"masked_author": "username_0",
"text": "#65 #30",
"title": "New branch for airtime",
"type": "issue"
}
] | 2 | 2 | 428 | false | true | 7 | false |
aschuch/AwesomeCache | null | 162,925,640 | 76 | null | [
{
"action": "opened",
"author": "jdulb17",
"comment_id": null,
"datetime": 1467206980000,
"masked_author": "username_0",
"text": "Hello, I installed AwesomeCache into my Xcode project through the manual installation method; however, when I attempt to run my program, there is an error in the Cache.swift file that says \" Use of unresolved identifier '_awesomeCache_unarchiveObjectSafely' \" (Line 235 on the file). Is there a fix for this or is this possibly an issue that resulted from using the manual installation? Thanks!",
"title": "Unresolved Identifier in Cache.swift File",
"type": "issue"
},
{
"action": "created",
"author": "aschuch",
"comment_id": 229915058,
"datetime": 1467369312000,
"masked_author": "username_1",
"text": "This is an error in the documentation. Besides the two Swift files, you'd also need to add the `NSKeyedUnarchiverWrapper.h/.m` files to your project. I'll keep this issue open and add better documentation for manual installation soon.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ferologics",
"comment_id": 248356554,
"datetime": 1474389141000,
"masked_author": "username_2",
"text": "bump",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ClearerMe",
"comment_id": 257217035,
"datetime": 1477892007000,
"masked_author": "username_3",
"text": "@username_0 , @username_1 , In addition to these, you need create a swift-ObjectiveC-bridging file, and in this file, you need import \r\nNSKeyedUnarchiverWrapper.h. Binggo, it works",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ClearerMe",
"comment_id": 257217112,
"datetime": 1477892050000,
"masked_author": "username_3",
"text": "@username_0 , @username_1 , In addition to these, you need create a swift-ObjectiveC-bridging file, and in this file, you need import NSKeyedUnarchiverWrapper.h file. Binggo, it works~",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "aschuch",
"comment_id": null,
"datetime": 1479246134000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "aschuch",
"comment_id": 260778385,
"datetime": 1479246134000,
"masked_author": "username_1",
"text": "Thanks everyone. I've added this info to the Readme.",
"title": null,
"type": "comment"
}
] | 4 | 7 | 1,038 | false | false | 1,038 | true |
jxtech/wechatpy | jxtech | 205,555,241 | 183 | null | [
{
"action": "opened",
"author": "garyhurtz",
"comment_id": null,
"datetime": 1486380414000,
"masked_author": "username_0",
"text": "I am using wechatpy to add wechat login capability to a website, and I found that WeChat's servers refuse to accept the URLs generated by WeChatOAuth.qrconnect_url.\r\n\r\nAfter some investigation, I found that this method does not encode URLs correctly.\r\n\r\nWeChat servers will accept the URLs after changing:\r\n\r\n redirect_uri = quote(self.redirect_uri)\r\n\r\nto\r\n\r\n redirect_uri = quote(self.redirect_uri, safe=u'')\r\n\r\nWanted to pass this along to the team and other users who are unable to present their users with the login QR code.\r\n-gary",
"title": "qrconnect_url encoding issues",
"type": "issue"
},
{
"action": "created",
"author": "messense",
"comment_id": 277658363,
"datetime": 1486381331000,
"masked_author": "username_1",
"text": "Would you like to open a PR to fix it?\n\n发自我的 iPhone",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "messense",
"comment_id": null,
"datetime": 1487509432000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "Brightcells",
"comment_id": 313893641,
"datetime": 1499567727000,
"masked_author": "username_2",
"text": "Why not use quote_plus instead of quote",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "rockallite",
"comment_id": 403340438,
"datetime": 1531102489000,
"masked_author": "username_3",
"text": "@username_1 For Python 2.7, commit https://github.com/jxtech/wechatpy/commit/3610e8112fbee2d224c11b961ec73ea3ce65e0af introduce a new problem: cache pollution in `quote()`. It causes `UnicodeDecodeError` in subsequent calls to `quote()` with a non-ascii value, if any function in wechatpy which involves in calling `quote(..., safe='')` (the `safe` parameter is a unicode) gets called first in a Python process.\r\n\r\nI can confirm that this bug still exists in wechatpy 1.7.4.\r\n\r\nFor example, after starting a Django dev server, first make an OAuth request in WeChat devtools, then make a fuzzy search with non-ascii characters in Django admin. A typical traceback would look like this:\r\n\r\n```\r\n ...\r\n File \"/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py\", line 1297, in quote\r\n if not s.rstrip(safe):\r\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128)\r\n```\r\n\r\nThe corresponding interactive traceback view in Django would look like this:\r\n\r\n```\r\n...\r\n/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py in quote\r\n1290. (quoter, safe) = _safe_quoters[cachekey]\r\n1291. except KeyError:\r\n1292. safe_map = _safe_map.copy()\r\n1293. safe_map.update([(c, c) for c in safe])\r\n1294. quoter = safe_map.__getitem__\r\n1295. safe = always_safe + safe\r\n1296. _safe_quoters[cachekey] = (quoter, safe)\r\n1297. if not s.rstrip(safe): ...\r\n1298. return s\r\n1299. return ''.join(map(quoter, s))\r\n1300.\r\n1301. def quote_plus(s, safe=''):\r\n1302. \"\"\"Quote the query fragment of a URL; replacing ' ' with '+'\"\"\"\r\n1303. 
if ' ' in s:\r\n\r\nLocal vars\r\nVariable | Value\r\n -- | --\r\ncachekey | ('', 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_.-')\r\n quoter | <built-in method __getitem__ of dict object at 0x10e5b4d70>\r\n s | '\\xe4\\xb8\\xad'\r\n safe | u'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_.-'\r\n```\r\n\r\nAs you can see, the `safe` variable from `_safe_quoters` dict (as a cache) becomes a unicode string, which ought to be a byte string.\r\n\r\nIf you make the fuzzy search in Django admin first, everything is fine.\r\n\r\nThe proper fix in wechatpy should be calling `quote` with byte strings, like this: `quote(self.redirect_url, safe=b'')` (because of the `from __future__ import unicode_literals` statement at the top of a file).\r\n\r\nA temporary fix in an existing (Python 2.7) project would be calling the following code in a bootstrap script, so the `quote()` cache is filled with a proper value:\r\n\r\n```\r\nfrom urllib import quote\r\n\r\n\r\nquote('non-empty-string', safe='')\r\n```\r\n\r\nA good place for a Django project would be `<project_name>/__init__.py`.",
"title": null,
"type": "comment"
},
{
"action": "reopened",
"author": "messense",
"comment_id": null,
"datetime": 1531102682000,
"masked_author": "username_1",
"text": "I am using wechatpy to add wechat login capability to a website, and I found that WeChat's servers refuse to accept the URLs generated by WeChatOAuth.qrconnect_url.\r\n\r\nAfter some investigation, I found that this method does not encode URLs correctly.\r\n\r\nWeChat servers will accept the URLs after changing:\r\n\r\n redirect_uri = quote(self.redirect_uri)\r\n\r\nto\r\n\r\n redirect_uri = quote(self.redirect_uri, safe=u'')\r\n\r\nWanted to pass this along to the team and other users who are unable to present their users with the login QR code.\r\n-gary",
"title": "qrconnect_url encoding issues",
"type": "issue"
}
] | 4 | 6 | 3,978 | false | false | 3,978 | true |
itchio/itch | itchio | 178,019,082 | 928 | null | [
{
"action": "opened",
"author": "fasterthanlime",
"comment_id": null,
"datetime": 1474366104000,
"masked_author": "username_0",
"text": "Two low-hanging fruits:\r\n\r\n * With predictable unar/butler output, there's no need for a staging folder anymore - we can cut down 1 scan, 1 full read, 1 full write\r\n * `butler unzip` works great for 4GB+ archives but it's slower than unar. Use buffers & decompress files in parallel to use CPU at its full potential\r\n\r\nA good candidate for testing is https://ackkstudios.itch.io/yiik-episode-prime - 482MB archive, decompresses to 3,2GB",
"title": "Improve install performance",
"type": "issue"
},
{
"action": "created",
"author": "fasterthanlime",
"comment_id": 422429178,
"datetime": 1537282810000,
"masked_author": "username_0",
"text": "v25 ships with resumable on-the-fly decompression - the limiting factor is now more often the network than it is the cpu, since we decompress data as soon as we get it.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "fasterthanlime",
"comment_id": null,
"datetime": 1537282810000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 606 | false | false | 606 | false |
sendgrid/csharp-http-client | sendgrid | 155,506,394 | 4 | null | [
{
"action": "opened",
"author": "mastan",
"comment_id": null,
"datetime": 1463579415000,
"masked_author": "username_0",
"text": "#### Issue Summary\r\nAccess forbidden error with Visual Studio 2013-Full working code from https://github.com/sendgrid/csharp-http-client/blob/master/Example/Example.cs \r\n#### Steps to Reproduce\r\n\r\n\r\n\r\n1. Downloaded source from https://github.com/sendgrid/csharp-http-client/blob/master/Example/Example.cs\r\n2. Changed Environment Variable in local box\r\n3. Executed the project\r\n\r\nConsole shows an error as attached.\r\n\r\n#### Technical details:\r\n\r\n* csharp-http-client Version: master (latest commit: [commit number])\r\n* CSharp Version: 4.5",
"title": "Access forbidden",
"type": "issue"
},
{
"action": "created",
"author": "thinkingserious",
"comment_id": 220470435,
"datetime": 1463697104000,
"masked_author": "username_1",
"text": "Hello @username_0,\r\n\r\nIt looks like you are using our API Key ID, your API key should start with \"SG.\" and should be about three times the size of what you have above.\r\n\r\nNote that when you create a new API Key, the full key is only displayed once for your security. If you need to create a new key, you can do so here: https://app.sendgrid.com/settings/api_keys\r\n\r\nAlso, if you want to use a library to access SendGrid's API, then please take a look at our official library here: https://github.com/sendgrid/sendgrid-csharp (please note the beta message at the top, as we have a major update right around the corner). This library is meant to be a general purpose library for any API.\r\n\r\nThanks!\r\n\r\nWith Best Regards,\r\n\r\nElmer",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "thinkingserious",
"comment_id": null,
"datetime": 1463697104000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 1,498 | false | false | 1,498 | true |
beakerbrowser/beaker | beakerbrowser | 202,515,009 | 269 | null | [
{
"action": "opened",
"author": "pmario",
"comment_id": null,
"datetime": 1485172638000,
"masked_author": "username_0",
"text": "[12:51:26] Using gulpfile ~/git/beaker/gulpfile.js\r\n[12:51:26] Starting 'bundle'...\r\n[12:51:26] Starting 'less'...\r\nTreating 'browser-es-module-loader/dist/babel-browser-build' as external dependency\r\nTreating 'browser-es-module-loader/dist/browser-es-module-loader' as external dependency\r\nTreating 'jayson/promise' as external dependency\r\n[12:51:27] Finished 'less' after 444 ms\r\n[12:51:28] Finished 'bundle' after 1.38 s\r\n[12:51:28] Starting 'build'...\r\n[12:51:28] Finished 'build' after 18 μs\r\n[12:51:28] Starting 'watch'...\r\n[12:51:28] Finished 'watch' after 7.9 ms\r\n[12:51:28] Starting 'start-watch'...\r\nSpawning electron /home/mario/git/beaker/node_modules/electron/dist/electron\r\n[12:51:28] Finished 'start-watch' after 6.98 ms\r\n[IPFS] Error fetching IPFS daemon version: ECONNREFUSED\r\nevents.js:160\r\n throw er; // Unhandled 'error' event\r\n ^\r\n\r\nError: watch /home/mario/git/beaker/app/node_modules/es5-ext/string/#/contains/is-implemented.js ENOSPC\r\n at exports._errnoException (util.js:1022:11)\r\n at FSWatcher.start (fs.js:1429:19)\r\n at Object.fs.watch (fs.js:1456:11)\r\n at createFsWatchInstance (/home/mario/git/beaker/node_modules/chokidar/lib/nodefs-handler.js:37:15)\r\n at setFsWatchListener (/home/mario/git/beaker/node_modules/chokidar/lib/nodefs-handler.js:80:15)\r\n at FSWatcher.NodeFsHandler._watchWithNodeFs (/home/mario/git/beaker/node_modules/chokidar/lib/nodefs-handler.js:228:14)\r\n at FSWatcher.NodeFsHandler._handleFile (/home/mario/git/beaker/node_modules/chokidar/lib/nodefs-handler.js:255:21)\r\n at FSWatcher.<anonymous> (/home/mario/git/beaker/node_modules/chokidar/lib/nodefs-handler.js:473:21)\r\n at FSReqWrap.oncomplete (fs.js:123:15)\r\n\r\nnpm ERR! Linux 4.8.0-34-generic\r\nnpm ERR! argv \"/usr/bin/nodejs\" \"/usr/bin/npm\" \"run\" \"watch\"\r\nnpm ERR! node v6.9.4\r\nnpm ERR! npm v3.10.10\r\nnpm ERR! code ELIFECYCLE\r\nnpm ERR! @ watch: `gulp start-watch`\r\nnpm ERR! Exit status 1\r\nnpm ERR! 
\r\nnpm ERR! Failed at the @ watch script 'gulp start-watch'.\r\nnpm ERR! Make sure you have the latest version of node.js and npm installed.\r\nnpm ERR! If you do, this is most likely a problem with the package,\r\nnpm ERR! not with npm itself.\r\nnpm ERR! Tell the author that this fails on your system:\r\nnpm ERR! gulp start-watch\r\nnpm ERR! You can get information on how to open an issue for this project with:\r\nnpm ERR! npm bugs \r\nnpm ERR! Or if that isn't available, you can get their info via:\r\nnpm ERR! npm owner ls \r\nnpm ERR! There is likely additional logging output above.\r\n\r\nnpm ERR! Please include the following file with any support request:\r\nnpm ERR! /home/mario/git/beaker/npm-debug.log\r\n\r\n```",
"title": "npm run watch throws Error - ubuntu",
"type": "issue"
},
{
"action": "created",
"author": "pfrazee",
"comment_id": 274520881,
"datetime": 1485185825000,
"masked_author": "username_1",
"text": "Actually that exception isnt related to the IPFS error -- or, it shouldnt be, anyway.\r\n\r\nYou're getting an ENOSPC error from the file-watcher. Is it possible you have some volume that's out of space?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pmario",
"comment_id": 274681496,
"datetime": 1485224178000,
"masked_author": "username_0",
"text": "It's a new laptop 256GB SSD and only about 20GB used atm.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "pfrazee",
"comment_id": 274690438,
"datetime": 1485227973000,
"masked_author": "username_1",
"text": "That error code is misleading. Google for that error + file watching. You\nmay need to increase a limit in the sys config",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "pmario",
"comment_id": null,
"datetime": 1485255132000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "pmario",
"comment_id": 274770663,
"datetime": 1485255132000,
"masked_author": "username_0",
"text": "thx. It works now. did search for `ENOSPC file watching` as suggested and found: \r\nhttps://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers which also gives some more details.",
"title": null,
"type": "comment"
}
] | 2 | 6 | 3,229 | false | false | 3,229 | false |
ssy341/datatables-cn | null | 269,141,900 | 254 | null | [
{
"action": "opened",
"author": "ltqTest",
"comment_id": null,
"datetime": 1509117359000,
"masked_author": "username_0",
"text": "The \"PHP - PHP-CS-Fixer Path (cs_fixer_path)\" configuration option has been deprecated. Please switch to using the option in section \"Executables\" (near the top) in subsection \"PHP-CS-Fixer\" labelled \"Path\" in Atom-Beautify package settings.\r\n How to solve?",
"title": "When I configured php-cs-fixer, the following error occurred. ",
"type": "issue"
},
{
"action": "created",
"author": "ssy341",
"comment_id": 340487362,
"datetime": 1509378484000,
"masked_author": "username_1",
"text": "@username_0 I am sorry I can‘t help you. I think your issue is not about jQuery DataTables plugin.",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "ssy341",
"comment_id": null,
"datetime": 1509378504000,
"masked_author": "username_1",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 3 | 353 | false | false | 353 | true |
piotrmurach/github | null | 220,589,976 | 309 | {
"number": 309,
"repo": "github",
"user_login": "piotrmurach"
} | [
{
"action": "opened",
"author": "samphilipd",
"comment_id": null,
"datetime": 1491817336000,
"masked_author": "username_0",
"text": "",
"title": "Projects columns",
"type": "issue"
},
{
"action": "created",
"author": "samphilipd",
"comment_id": 292905540,
"datetime": 1491818629000,
"masked_author": "username_0",
"text": "@username_1 Should this also have unit specs?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "piotrmurach",
"comment_id": 292929908,
"datetime": 1491825912000,
"masked_author": "username_1",
"text": "I would add unit tests as well, they tend to catch different kind of bugs related to the parsing of arguments etc... \r\n\r\nAnother thing that I thought I will do is to add you as collaborator to my `github_api_test` repository which I use for recording cassettes. It's a safe sandbox that I think it would be good for feature tests to us this repo to help with future maintenance.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "samphilipd",
"comment_id": 292941219,
"datetime": 1491829003000,
"masked_author": "username_0",
"text": "@username_1 unit tests added.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "samphilipd",
"comment_id": 293198880,
"datetime": 1491901990000,
"masked_author": "username_0",
"text": "@username_1 ready for merge",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "piotrmurach",
"comment_id": 293232148,
"datetime": 1491911107000,
"masked_author": "username_1",
"text": "Will merge later today.",
"title": null,
"type": "comment"
}
] | 3 | 23 | 5,647 | false | true | 505 | true |
remomueller/documentation | null | 127,722,169 | 7 | null | [
{
"action": "opened",
"author": "remomueller",
"comment_id": null,
"datetime": 1453307443000,
"masked_author": "username_0",
"text": "Using 2.4.8 fixes the issues with certain gems.\r\n\r\n```\r\ngem update --system 2.4.8 --no-ri --no-rdoc\r\n```",
"title": "Ruby Gems version 2.5.1 has issues with undefined method \"this\"",
"type": "issue"
},
{
"action": "created",
"author": "jegodwin",
"comment_id": 190293210,
"datetime": 1456765733000,
"masked_author": "username_1",
"text": "+1 on this. Thanks @username_0; I've been dealing with it for a while and couldn't find a solution.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "remomueller",
"comment_id": 190311064,
"datetime": 1456768763000,
"masked_author": "username_0",
"text": "Glad it helped! Thanks for the note on Rbenv. I just tried this again against RubyGems 2.6.1 but the issue still exists, and it also looks like the issue in the RubyGems repository is still open for the time being.\r\n\r\nMainly added this as a reminder to my future self in case I run into this again, and will probably add it to the [miscellaneous issues](https://github.com/username_0/documentation/blob/master/macosx/199-miscellaneous.md) I've run into the past.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jegodwin",
"comment_id": 190311471,
"datetime": 1456768811000,
"masked_author": "username_1",
"text": ":+1:",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "remomueller",
"comment_id": null,
"datetime": 1460392952000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
},
{
"action": "created",
"author": "remomueller",
"comment_id": 208440897,
"datetime": 1460392952000,
"masked_author": "username_0",
"text": "Looks like this is fixed with the RubyGems [2.6.3](https://rubygems.org/gems/rubygems-update/versions/2.6.3), closing issue.",
"title": null,
"type": "comment"
}
] | 2 | 6 | 795 | false | false | 795 | true |
GuillaumeSalles/redux.NET | null | 214,087,385 | 45 | null | [
{
"action": "opened",
"author": "cmeeren",
"comment_id": null,
"datetime": 1489501283000,
"masked_author": "username_0",
"text": "Please make this installable in .NET Standard class libraries.\r\n\r\nI've just come across Redux in general and this package in particular, and I'm pumped and ready to get started, but I'm afraid lacking .NET Standard support is a deal breaker. 😢",
"title": "Make compatible with .NET Standard",
"type": "issue"
},
{
"action": "created",
"author": "cmeeren",
"comment_id": 286438743,
"datetime": 1489501828000,
"masked_author": "username_0",
"text": "Btw, I successfully compiled the two classes (`IAction` and `Store`) when copied into a .NET Standard 1.0 project, so I expect it's just a matter of updating some metadata and publish to Nuget.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "GuillaumeSalles",
"comment_id": 286446944,
"datetime": 1489503379000,
"masked_author": "username_1",
"text": "Hi @username_0,\r\n\r\nI will publish a new version today.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "GuillaumeSalles",
"comment_id": 286633058,
"datetime": 1489549591000,
"masked_author": "username_1",
"text": "New release : https://www.nuget.org/packages/Redux.NET/2.0.0\r\n\r\nI think I upgraded the nuget correctly but I'm waiting for your approval to see if everything work fine on your side. 👍\r\n\r\nThanks",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cmeeren",
"comment_id": 286837678,
"datetime": 1489602725000,
"masked_author": "username_0",
"text": "Thanks! Installed fine. For the record, here's the packages that were installed. Don't know why `System.ComponentModel` and `System.Runtime.InteropServices.WindowsRuntime` were installed - are those dependencies of `System.Reactive`?\r\n\r\n```\r\nSuccessfully installed 'Redux.NET 2.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.ComponentModel 4.0.1' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Reactive 3.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Reactive.Core 3.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Reactive.Interfaces 3.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Reactive.Linq 3.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Reactive.PlatformServices 3.0.0' to MyProject.Core.Redux\r\nSuccessfully installed 'System.Runtime.InteropServices.WindowsRuntime 4.0.1' to MyProject.Core.Redux\r\n```",
"title": null,
"type": "comment"
}
] | 2 | 5 | 1,569 | false | false | 1,569 | true |
cisco/elsy | cisco | 195,564,024 | 40 | {
"number": 40,
"repo": "elsy",
"user_login": "cisco"
} | [
{
"action": "opened",
"author": "joeygibson",
"comment_id": null,
"datetime": 1481729869000,
"masked_author": "username_0",
"text": "Fixes #4 \r\n\r\nI was able to replicate #4 by adding code into the `RemoveContainersOfImage` in `docker.go` that changed `container.ID` to one that would not be found. `client.InspectContainer` returned `nil` into the `inspection` var, and a `404` error, which we noticed, but then we went ahead and tried to use the `inspection` var, which was nil, and the panic happened. \r\n\r\nI tried a few scenarios to test this, including a container that committed suicide, but I couldn't get the timing right.",
"title": "Fixes #4",
"type": "issue"
},
{
"action": "created",
"author": "paulcichonski",
"comment_id": 267066731,
"datetime": 1481730006000,
"masked_author": "username_1",
"text": "good find!",
"title": null,
"type": "comment"
}
] | 2 | 2 | 506 | false | false | 506 | false |
omegaup/omegaup | omegaup | 167,246,902 | 833 | null | [
{
"action": "opened",
"author": "frcepeda",
"comment_id": null,
"datetime": 1469385442000,
"masked_author": "username_0",
"text": "If you open up the edition page and edit, say, both the English and Spanish descriptions of a problem, the only one that gets saved when you hit submit is the one that was selected in the editor dropdown. (And the other edits get discarded!)",
"title": "Editing multiple languages' descriptions in the UI only saves one",
"type": "issue"
},
{
"action": "created",
"author": "elendil326",
"comment_id": 307658173,
"datetime": 1497216618000,
"masked_author": "username_1",
"text": "@username_2 esto aún hace repro?",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "lhchavez",
"comment_id": null,
"datetime": 1509760270000,
"masked_author": "username_2",
"text": "",
"title": null,
"type": "issue"
}
] | 3 | 3 | 271 | false | false | 271 | true |
Telmate/terraform-provider-proxmox | Telmate | 268,175,780 | 7 | null | [
{
"action": "opened",
"author": "cc12258",
"comment_id": null,
"datetime": 1508877152000,
"masked_author": "username_0",
"text": "I'm hoping you can help me troubleshoot this issue, I've been stuck at this same point and can't see how to fix it. When bringing up the VM I am getting stuck at the ssh forwarding. I see the port setup and listening on the target node. However when trying to ssh to the port it times out. It appears that it may be having trouble with the port forward. \r\n\r\nHere are some logs:\r\n\r\n2017-10-24T14:22:02.622-0600 [DEBUG] plugin.terraform-provider-proxmox: 2017/10/24 14:22:02 handshake error: ssh: handshake failed: read tcp xxxxx:52843-> xxxxx:22115: read: connection reset by peer\r\n2017-10-24T14:22:04.626-0600 [DEBUG] plugin.terraform-provider-proxmox: 2017/10/24 14:22:04 opening new ssh session\r\n2017-10-24T14:22:04.626-0600 [DEBUG] plugin.terraform-provider-proxmox: 2017/10/24 14:22:04 ssh session open error: 'client not available', attempting reconnect\r\n2017-10-24T14:22:04.626-0600 [DEBUG] plugin.terraform-provider-proxmox: 2017/10/24 14:22:04 connecting to TCP connection for SSH\r\n2017-10-24T14:22:04.657-0600 [DEBUG] plugin.terraform-provider-proxmox: 2017/10/24 14:22:04 handshaking with SSH\r\n\r\nThat repeats and then eventually times out. I'm not sure how to troubleshoot this, is there a command in proxmox to see how the usernet is configured? I see this command in the API code:\r\n\r\nnetdev_add user,id=net1,hostfwd=tcp::\"+sshPort+\"-:22\r\n\r\nSo assuming that is forwarding to the host somehow, but I'd like to see that forward config so I can further troubleshoot. It seems to be forwarding to the wrong place potentially.",
"title": "Struggling with SSHForward",
"type": "issue"
},
{
"action": "created",
"author": "ggongaware",
"comment_id": 339123312,
"datetime": 1508877441000,
"masked_author": "username_1",
"text": "The ssh port selected is always 22000 + the vmID number, in your case it was vm ID 115 made an sshPort forward at 22115.\r\n\r\nPerhaps there's iptables or something blocking these ports?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cc12258",
"comment_id": 339125063,
"datetime": 1508877827000,
"masked_author": "username_0",
"text": "I checked iptables and no it's allowing everything both on the guest and the target node.\r\n\r\nIs there some way to see how that port forward is configured? I see that the target node is listening on port 22115. I just don't know where that port forward is actually going.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ggongaware",
"comment_id": 339126134,
"datetime": 1508878085000,
"masked_author": "username_1",
"text": "When I run sometime like: \r\n```bash\r\nnetstat -lnp | grep 22115\r\n```\r\n\r\nI see that the port is tied to the qemu process that runs the vm. \r\n\r\nIf you open a VNC console to that VM, you should see that eth1 has an IP address that routes through the mini qemu NAT network.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cc12258",
"comment_id": 339134182,
"datetime": 1508879762000,
"masked_author": "username_0",
"text": "When I check eth1 on the VM it shows that it's down. Do I need to preseed that interface config for DHCP in the template?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "ggongaware",
"comment_id": 339134645,
"datetime": 1508879864000,
"masked_author": "username_1",
"text": "Yes, that temporary little qemu network on eth1 works well as dhcp.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "cc12258",
"comment_id": 339140173,
"datetime": 1508881019000,
"masked_author": "username_0",
"text": "Looks like adding this to /etc/network/interfaces in the template fixed it:\r\n\r\n``",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "cc12258",
"comment_id": null,
"datetime": 1508881019000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 8 | 2,522 | false | false | 2,522 | false |
ARM-software/arm-trusted-firmware | ARM-software | 284,514,062 | 1,209 | {
"number": 1209,
"repo": "arm-trusted-firmware",
"user_login": "ARM-software"
} | [
{
"action": "opened",
"author": "Leo-Yan",
"comment_id": null,
"datetime": 1514275822000,
"masked_author": "username_0",
"text": "When some interrupts are configured as group 1 in GICv2, these\r\ninterrupts trigger FIQ signal; this results in the Linux kernel panic\r\nby reporting log: \"Bad mode in FIQ handler detected on CPU0, code\r\n0x00000000 -- Unknown/Uncategorized\". Unfortunately from kernel side it\r\nhas no permission to read the GIC register for group 1 interrupts so we\r\nhave no chance to get to know which interrupt is configured as secure\r\ninterrupt and cause the kernel panic.\r\n\r\nFor upper reason, we can enable exception handling framework in ARM-TF;\r\nafter enable the EHF then the FIQ is to be routed into EL3 level for\r\nexception handling and EL3 has permission to read interrupt number so\r\ncan easily locate which interrupt causes issue. For enabling EHF, except\r\nthis patch we also need pass below parameters for ARM-TF building:\r\n\r\n \"EL3_EXCEPTION_HANDLING=1 GICV2_G0_FOR_EL3=1\"\r\n\r\nIf we need integrate service into ARM-TF with specific interrupt routing\r\nmodel, we can simply remove building options \"EL3_EXCEPTION_HANDLING=1\r\nGICV2_G0_FOR_EL3=1\" so can avoid conflict issue.",
"title": "Hikey960: Enable exception handling framework",
"type": "issue"
},
{
"action": "created",
"author": "soby-mathew",
"comment_id": 355305191,
"datetime": 1515078288000,
"masked_author": "username_1",
"text": "Hi @username_0,\r\nI think you meant Group0 interrupts as they are the secure interrupts whereas Group1 in Non Secure Interrupts.\r\nOnly secure world can configure Interrupts as Group0 on GICv2. So if there are some interrupts misconfigured as Group0, it should be fixed in the Secure Software. Also, there is no handler registered for the EHF priorities registered and this will cause panic at runtime. I didn't understand the intention of this patch.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Leo-Yan",
"comment_id": 356166997,
"datetime": 1515467415000,
"masked_author": "username_0",
"text": "Hi @username_1 @jeenu-arm,\r\n\r\nI have not clearly explain this patch's purpose, so let me elaborate for this:\r\n\r\nFor example, if the system has enabled interrupt routing model, the non secure OS handle non secure interrupt and secure OS (e.g. after we have enabled OP-TEE) handle the secure interrupt, then the secure and non-secure interrupts can be handle well.\r\n\r\nBut if the system has not enabled OP-TEE, and one interrupt is wrongly configured with secure interrupt in group 0; for this case the FIQ will not be routed to EL3 or S-EL1. So the kernel side directly reports the panic and dump the backtrace. We have this kind bug on Hikey960: https://bugs.96boards.org/show_bug.cgi?id=614; from the kernel panic log, we can know there have FIQ happens in non-secure world, but the Linux kernel (in NS-EL1) we cannot know which interrupt number trigger this panic, this is because the Linux kernel has no permission to read back the secure interrupt number and secure registers in GIC.\r\n\r\nSo for the case if we has not enabled interrupt routing model for secure interrupt, if we want to quickly root cause which interrupt introduce FIQ panic, I think we can enable EHF for the debugging. We can trigger the panic in ARM-TF and ARM-TF can help dump more clearly information for this.\r\n\r\nOn Hikey960 we can enable EHF firstly when we have not enabled OP-TEE, after the platform has enabled OP-TEE then we can disable EHF by removing building options \"EL3_EXCEPTION_HANDLING=1 GICV2_G0_FOR_EL3=1\".\r\n\r\nSo essentially the purpose to enable EHF is for debug FIQ panic, we can easily locate the wrongly configured interrupt.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "soby-mathew",
"comment_id": 356258759,
"datetime": 1515497485000,
"masked_author": "username_1",
"text": "Ok, I see. But this is not the intended purpose of EHF and EHF needs a priority handler installed at every priority of interrupt expected.\r\n\r\nIdeally in the platform layer the interrupt should only be enabled in GIC only if Optee is enabled. Since you are trying to catch misconfigured Secure interrupts, I would suggest to install a dummy S-EL1 interrupt handler which prints the Secure Interrupt information and panics. This dummy handler can be installed if there is no SPD (i.e. SPD_none). Would that work for you?\r\n\r\nYou can do this from the platform code.\r\n```\r\n#ifdef SPD_NONE\r\n \t\t\tflags = 0;\r\n\t\t\tset_interrupt_rm_flag(flags, NON_SECURE);\r\n\t\t\trc = register_interrupt_type_handler(INTR_TYPE_S_EL1,\r\n\t\t\t\t\t\thikey_debug_fiq_handler,\r\n\t\t\t\t\t\tflags);\r\n#endif\r\n```\r\n\r\nThis will catch all the FIQ triggered in NS world irrespective of priority and also means you dont have to compile ARM TF with different options if OP-TEE is not enabled.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Leo-Yan",
"comment_id": 356509614,
"datetime": 1515564705000,
"masked_author": "username_0",
"text": "Thanks for guidance, @username_1. I will try your suggestion and will let you know if it works on Hikey960. If so I will drop this PR and send new one. Thanks!",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidcunado-arm",
"comment_id": 357923567,
"datetime": 1516099856000,
"masked_author": "username_2",
"text": "@username_0 \r\nGiven your last comment, can we close this PR?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "davidcunado-arm",
"comment_id": 359038602,
"datetime": 1516383805000,
"masked_author": "username_2",
"text": "@username_0 \r\nI'm going to close this PR - please re-open if needed.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "Leo-Yan",
"comment_id": 359326086,
"datetime": 1516597428000,
"masked_author": "username_0",
"text": "@username_2 sorry for late response. It's good to close this PR and I have committed one new patch for this.",
"title": null,
"type": "comment"
}
] | 4 | 10 | 4,544 | false | true | 4,464 | true |
boomerdigital/solidus_amazon_payments | boomerdigital | 170,244,851 | 30 | {
"number": 30,
"repo": "solidus_amazon_payments",
"user_login": "boomerdigital"
} | [
{
"action": "opened",
"author": "acreilly",
"comment_id": null,
"datetime": 1470769357000,
"masked_author": "username_0",
"text": "https://github.com/boomerdigital/solidus_amazon_payments/issues/26",
"title": "Cancel Payments",
"type": "issue"
},
{
"action": "created",
"author": "jordan-brough",
"comment_id": 238676389,
"datetime": 1470773645000,
"masked_author": "username_1",
"text": "@username_0 thanks for tackling this!\r\n\r\nIt would be great to avoid monkey-patching `Spree::Payment::Processing` if possible. I wonder if we ought to be setting the response code on these payments, and if we did if that might provide a clean way to get at the data that we need?",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "acreilly",
"comment_id": 248954325,
"datetime": 1474561459000,
"masked_author": "username_0",
"text": "@username_1 @manmartinez Please review.",
"title": null,
"type": "comment"
},
{
"action": "created",
"author": "jordan-brough",
"comment_id": 254413744,
"datetime": 1476770055000,
"masked_author": "username_1",
"text": "FYI: Github is hiding it as \"outdated\" but I've left another comment [here](https://github.com/boomerdigital/solidus_amazon_payments/pull/30#discussion_r83785289).",
"title": null,
"type": "comment"
}
] | 2 | 4 | 548 | false | false | 548 | true |
18F/FEC | 18F | 208,163,174 | 939 | null | [
{
"action": "closed",
"author": "noahmanger",
"comment_id": null,
"datetime": 1488505483000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 2 | 2 | 307 | false | true | 0 | false |
uploadcare/uploadcare-widget | uploadcare | 156,560,531 | 339 | null | [
{
"action": "opened",
"author": "homm",
"comment_id": null,
"datetime": 1464109709000,
"masked_author": "username_0",
"text": "Look into `imageSmoothingQuality` property and decide is it can be used instead of our resampling algorithm.\r\n\r\nhttps://developer.apple.com/library/mac/releasenotes/General/WhatsNewInSafari/Articles/Safari_9_1.html#//apple_ref/doc/uid/TP40014305-CH10-SW10",
"title": "Use builtin high quality canvas resampling when possible",
"type": "issue"
},
{
"action": "created",
"author": "homm",
"comment_id": 223805240,
"datetime": 1465123023000,
"masked_author": "username_0",
"text": "Fixed by #345",
"title": null,
"type": "comment"
},
{
"action": "closed",
"author": "homm",
"comment_id": null,
"datetime": 1465123025000,
"masked_author": "username_0",
"text": "",
"title": null,
"type": "issue"
}
] | 1 | 3 | 268 | false | false | 268 | false |