Dataset columns and observed value ranges:
added - string (date), from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created - timestamp[us] (date), from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id - string, lengths 4 to 10
metadata - dict
source - string, 2 classes
text - string, lengths 0 to 1.61M
2025-04-01T06:38:51.213572
2015-05-27T10:46:49
81395806
{ "authors": [ "goto-bus-stop", "jazzpi" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6528", "repo": "goto-bus-stop/wololobot", "url": "https://github.com/goto-bus-stop/wololobot/pull/3" }
gharchive/pull-request
Streamtime module Adds a !streamtime command to Wololobot that displays a countdown to the next stream. Times for the streams are fetched from a Schedule panel in the stream description. It is identified by either having the title 'Schedule' or an image (identified by the schedImage option passed to the module). The output of !streamtime can also be overwritten by mods: !streamtime overwrite <msg>: The output is overwritten with <msg>. If <msg> contains the string $iftime{...}, the part ... is output if there is a stream scheduled. If <msg> contains $time, that is replaced with a countdown to the next stream. !streamtime overwrite_time YYYY-MM-DD hh:mm [AM|PM] [timezone]: Overwrites the time of the next stream. !streamtime overwrite_discard: Discards any overwrites (messages and times) By default, the streamtimes are updated every 5 minutes. An update can be forced with !streamtime update. Sweet :eyes:
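The overwrite syntax described above lends itself to a simple string-substitution pass. The following is only an illustrative Python sketch of the documented $iftime{...} and $time rules; wololobot itself is a JavaScript bot, so this is not its actual code, and the countdown format and the next_stream value are assumptions.

import re
from datetime import datetime, timezone

def render_overwrite(template: str, next_stream: datetime | None) -> str:
    """Apply the documented !streamtime overwrite substitutions (illustrative only)."""
    # $iftime{...}: keep the inner text only when a stream is scheduled.
    def iftime(match: re.Match) -> str:
        return match.group(1) if next_stream is not None else ""
    out = re.sub(r"\$iftime\{([^}]*)\}", iftime, template)

    # $time: replace with a countdown to the next stream (format is an assumption).
    if next_stream is not None:
        delta = next_stream - datetime.now(timezone.utc)
        hours, rem = divmod(int(delta.total_seconds()), 3600)
        out = out.replace("$time", f"{hours}h {rem // 60}m")
    return out

# Example: overwrite message set by a mod.
print(render_overwrite("$iftime{Next stream in $time!}",
                       datetime(2025, 4, 2, 18, 0, tzinfo=timezone.utc)))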
2025-04-01T06:38:51.214570
2017-07-21T04:18:07
244560228
{ "authors": [ "gottfrois", "max-konin" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6529", "repo": "gottfrois/grape-knock", "url": "https://github.com/gottfrois/grape-knock/pull/4" }
gharchive/pull-request
Added compatibility with knock 2 & multiple entities authentication Hi, I fixed some errors with the new version of knock. Great! Thanks a lot!
2025-04-01T06:38:51.227599
2024-07-17T07:50:41
2412919015
{ "authors": [ "chrisdodd93" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6530", "repo": "govuk-one-login/observability-configuration", "url": "https://github.com/govuk-one-login/observability-configuration/pull/257" }
gharchive/pull-request
Updating SLO module and amending codeowners Description: Updating SLO dashboard module, amending codeowners and updating service health dashboard Ticket number: Checklist: [ ] Is my change backwards compatible? Please include evidence [ ] I have tested this and added output to Jira Comment: [ ] Documentation added (link) Comment: I'm going to close this PR and encompass the changes into a wider PR to include additional dashboard changes requested by TSD.
2025-04-01T06:38:51.233293
2022-03-22T11:02:04
1176616352
{ "authors": [ "cherrelleM1", "penx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6531", "repo": "govuk-react/govuk-react", "url": "https://github.com/govuk-react/govuk-react/issues/1063" }
gharchive/issue
MetaLinks won't import Describe the bug The MetaLinks component is not part of the exported components. To Reproduce Steps to reproduce the behavior: Go to 'Storybook govuk-react' Copy the footer snippet Footer with Meta Links into your code. Import the necessary components See error Expected behavior The MetaLinks component should just import and work just like the other components from the govuk-react package. Screenshots Desktop (please complete the following information): OS: Windows 10 Enterprise Browser: Chrome Thanks for reporting this! You should be able to use <Footer.MetaLinks> instead of <MetaLinks> We can fix the documentation by setting the displayName on MetaLinks to Footer.MetaLinks. https://github.com/govuk-react/govuk-react/blob/64df8c00ce6c5f78ca9269742c2feacdeb6afcba/components/footer/src/molecules/meta-links/index.tsx Would you like to raise a PR for this? @penx Thank you so much. That works. Yes, please... Updating the documentation will help future developers working on the govuk-react design system.
2025-04-01T06:38:51.238006
2017-08-18T06:05:41
251148114
{ "authors": [ "AndrewRayCode", "KiGniark", "luqingxuan" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6532", "repo": "gowravshekar/font-awesome-webpack", "url": "https://github.com/gowravshekar/font-awesome-webpack/issues/37" }
gharchive/issue
does it support webpack2? does it support webpack2? I just installed it on a project. It works well on webpack2. You just need to install font-awesome first: npm install font-awesome @KiGniark no, it does not work with webpack 2: https://github.com/gowravshekar/font-awesome-webpack/issues/33
2025-04-01T06:38:51.240087
2015-07-15T11:48:51
95172178
{ "authors": [ "bphenriques", "gpbl" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6533", "repo": "gpbl/SwiftChart", "url": "https://github.com/gpbl/SwiftChart/issues/11" }
gharchive/issue
cocoa pods support Please add support for cocoapods :) It's planned! :-) I need to add some tests first... Added! http://cocoadocs.org/docsets/SwiftChart/0.2.0/ 🎉
2025-04-01T06:38:51.241387
2021-07-11T19:01:04
941514044
{ "authors": [ "gpend" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6534", "repo": "gpend/calc-app", "url": "https://github.com/gpend/calc-app/issues/2" }
gharchive/issue
use padding instead of offsets in the display It might be better to use margins instead of offsets for each font. This has been fixed.
2025-04-01T06:38:51.335495
2018-10-13T22:39:54
369848223
{ "authors": [ "litherum" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6535", "repo": "gpuweb/WHLSL", "url": "https://github.com/gpuweb/WHLSL/issues/105" }
gharchive/issue
[WHLSL] Local variables should be statically allocated Migrated from https://bugs.webkit.org/show_bug.cgi?id=188402: At 2018-08-08T03:03:55Z<EMAIL_ADDRESS>wrote: The interpreter conforms to the spec; the Metal compiler doesn't (necessarily). For example, a call to foo() should return 1: thread int* bar(bool flag) { int x = 0; if (flag) x = 1; return &x; } int foo() { thread int* x = bar(false); thread int* y = bar(true); return (*x) * (*y); } The interpreter gets this right; when a VariableDecl is visited in Evaluator.js it only allocates a buffer for that variable if there wasn't one already. The compiler doesn't get it right; local variables are emitted in local scope and therefore the Metal compiler can choose to store them however it likes. They could be independent, or another function bar2() could alias its local variables with bar() if bar and bar2 are never both called. Metal Shading Language only permits constant variables to be static if they are declared in global scope. It doesn't permit statically declared variables in local scope. When functions are inlined this isn't a problem; all variables can be "statically" allocated by having them as local variables declared at the top of a shader function. However, not all functions can be (efficiently) inlined in the MSL output if they don't have reducible control flow. An inefficient solution would "statically" allocate all variables in the main shading functions and pass references to them to non-inlined functions. This awful approach could be mitigated by conservatively finding local variables that references are never created for, but it is far from ideal. It is worth noting that successive executions of the same function cannot rely on previous values stored at the same local variable because local variables are always zero-initialized when they are declared. The compiler and the interpreter behave correctly on the following program: thread int* bar(bool flag) { int x; // x is zero initialized twice if (flag) x = 1; return &x; } int foo() { thread int* x = bar(true); thread int* y = bar(false); return (*x) * (*y); } The result of foo() should be 0. Given that local variable values do not persist between calls, the largest benefit of statically allocating local variables is that references to local variables can be returned from a function, or passed out via "out" parameters. Many programming languages do not permit this, so a possible mitigation would be to disallow it. At 2018-09-06T00:50:56Z<EMAIL_ADDRESS>wrote: (In reply to Thomas Denney from comment #1) Created attachment 348987 [details] WIP This patch doesn’t yet support function arguments having their address taken, and I haven’t done any work on array references yet. It also contains my modified version of the standard library for faster parsing, so this is very much WIP. At 2018-09-06T00:49:45Z<EMAIL_ADDRESS>wrote: Created attachment 348987 WIP At 2018-09-11T03:16:11Z<EMAIL_ADDRESS>wrote: Created attachment 349372 Patch At 2018-09-11T00:10:03Z<EMAIL_ADDRESS>wrote: Created attachment 349358 WIP At 2018-09-11T00:11:10Z<EMAIL_ADDRESS>wrote: (In reply to Thomas Denney from comment #3) Created attachment 349358 [details] WIP This most recent patch is basically complete, but I’m going to wait on the two dependencies of this bug to be resolved before I put this up for review. At 2018-09-19T21:25:39Z<EMAIL_ADDRESS>wrote: Comment on attachment 350096 Patch View in context: https://bugs.webkit.org/attachment.cgi?id=350096&action=review Cool patch. 
Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:29 const entryPoints = []; This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:30 class OnlyVisitFuncDefsThatAreEntryPoints extends Visitor { How about "gatherEntryPointDefs" since visitors visit everything? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:41 const allVariablesAndFunctionParameters = new Set(); const functionsThatAreCalledByEntryPoints = new Set(); class FindAllVariablesAndFunctionParameters extends Visitor { This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:53 node.func.visit(new FindAllVariablesAndFunctionParameters(node.func)); Doesn't this have exponential runtime because it doesn't dedup functions? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:59 node._func = this._currentFunc; A more descriptive name, please Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:68 node._func = this._currentFunc; if (!this._currentFunc.isEntryPoint) allVariablesAndFunctionParameters.add(node); Why does visitVariableDecl() have a super call but visitFuncParameter not? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:83 program.add(ptrToGlobalStructType); This doesn't seem right, because the parser will never put a PtrType at the global level, so we probably shouldn't do that either. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:100 let counter = 0; const varToFieldMap = new Map(); for (let varOrParam of allVariablesAndFunctionParameters) { const fieldName = `field${counter++}_${varOrParam._func.name}_${varOrParam.name}`; globalStructType.add(new Field(varOrParam.origin, fieldName, varOrParam.type)); varToFieldMap.set(varOrParam, fieldName); } for (let func of functionsThatAreCalledByEntryPoints) { if (func.returnType.name !== "void") { const fieldName = `field${counter++}_return_${func.name}`; globalStructType.add(new Field(func.origin, fieldName, func.returnType)); func.returnFieldName = fieldName; } } This is a fairly self-contained block of code. I'd recommend moving it to its own function. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:139 get func() { return this._func; } Not sure if this is necessary if all callers are local to the class. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:157 const possibleAndOverloads = program.globalNameContext.get(Func, functionName); const callExpressionResolution = CallExpression.resolve(node.origin, possibleAndOverloads, functionName, [ this.globalStructVariableRef ], [ ptrToGlobalStructTypeRef ]); It's kind of unfortunate we have to reverse-engineer what would have happened earlier in the compiler. Can we run this stage earlier so we don't have to do this? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:171 return super.visitVariableRef(node); Isn't this an error? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:180 return new Assignment(node.origin, this._dereferencedCallExpressionForFieldName(node, node.type, varToFieldMap.get(node)), node.initializer.visit(this), node.type); Nodes need to get assigned (zero-filled) even if they don't have an initializer. We should add a test for this. 
Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:183 else if (node == this.variableDecl) return node; I'd move this first Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:199 const anonymousVariable = new AnonymousVariable(node.origin, type); What is the purpose of the anonymous variables? Why not assign directly into the global struct? Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:208 exprs.push(this._dereferencedCallExpressionForFieldName(node.func, node.func.returnType, node.func.returnFieldName)); Neat. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:210 node.argumentList = [ this.globalStructVariableRef ]; Are you sure it's wise for them all to be using the exact same VariableRef? Seems like we should store a creation lambda instead of the raw variable itself. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:220 if (node.value && this._func.returnFieldName) If these don't match, seems like this should be an error. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:223 return new CommaExpression(node.origin, [ new Assignment(node.origin, this._dereferencedCallExpressionForFieldName(this._func, this._func.returnType, this._func.returnFieldName), node.value, this._func.returnType), new Return(node.origin) ]); Indentation Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:240 if (node._newParameters) This pollutes the FuncDef nodes. I'd prefer a side-table. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:250 if (node.func.returnFieldName) node._returnType = node.resultType = TypeRef.wrap(program.types.get("void")); Cool. Tools/WebGPUShadingLanguageRI/EBufferBuilder.js:-33 constructor(program) { super(); this._program = program; } What? What's the point of this class if you can never construct it? I don't see any other constructors or static functions. Tools/WebGPUShadingLanguageRI/Func.js:57 set parameters(newValue) Not a great variable name. Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:51 function createFieldType() { return field.type.visit(new Rewriter()); } function createTypeRef() { return TypeRef.wrap(type); } Do we need these? Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:101 nativeFunc = new NativeFunc( field.origin, "operator." + field.name + "=", createTypeRef(), field.origin, "operator&." + field.name, new PtrType(field.origin, addressSpace, createFieldType()), [ new FuncParameter(field.origin, null, createTypeRef()), new FuncParameter(field.origin, null, createFieldType()) new FuncParameter( field.origin, null, new PtrType(field.origin, addressSpace, createTypeRef())) ], isCast, shaderType); setupImplementationData(nativeFunc, ([base, value], offset, structSize, fieldSize) => { let result = new EPtr(new EBuffer(structSize), 0); result.copyFrom(base, structSize); result.plus(offset).copyFrom(value, fieldSize); return result; setupImplementationData(nativeFunc, ([base], offset, structSize, fieldSize) => { base = base.loadValue(); if (!base) throw new WTrapError(field.origin.originString, "Null dereference"); return EPtr.box(base.plus(offset)); }); program.add(nativeFunc); diff really made a mess of things, didn't it At 2018-09-13T01:51:52Z<EMAIL_ADDRESS>wrote: rdar://problem/44403028 At 2018-09-19T07:24:12Z<EMAIL_ADDRESS>wrote: Created attachment 350096 Patch At 2018-09-21T02:10:04Z<EMAIL_ADDRESS>wrote: Comment on attachment 350175 Patch Rejecting attachment 350175 from commit-queue. 
Failed to run "['/Volumes/Data/EWS/WebKit/Tools/Scripts/webkit-patch', '--status-host=webkit-queues.webkit.org', '--bot-id=webkit-cq-02', 'land-attachment', '--force-clean', '--non-interactive', '--parent-command=commit-queue', 350175, '--port=mac']" exit_code: 2 cwd: /Volumes/Data/EWS/WebKit Logging in as<EMAIL_ADDRESS>Fetching: https://bugs.webkit.org/attachment.cgi?id=350175&action=edit Fetching: https://bugs.webkit.org/show_bug.cgi?id=188402&ctype=xml&excludefield=attachmentdata Processing 1 patch from 1 bug. Updating working directory Processing patch 350175 from bug 188402. Fetching: https://bugs.webkit.org/attachment.cgi?id=350175 Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Parsed 14 diffs from patch file(s). patching file Tools/ChangeLog Hunk #1 succeeded at 1 with fuzz 3. patching file Tools/WebGPUShadingLanguageRI/All.js patching file Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js patching file Tools/WebGPUShadingLanguageRI/CallExpression.js patching file Tools/WebGPUShadingLanguageRI/EBufferBuilder.js patching file Tools/WebGPUShadingLanguageRI/Func.js patching file Tools/WebGPUShadingLanguageRI/FuncDef.js patching file Tools/WebGPUShadingLanguageRI/Prepare.js patching file Tools/WebGPUShadingLanguageRI/Rewriter.js patching file Tools/WebGPUShadingLanguageRI/SPIRV.html patching file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js Hunk #1 FAILED at 20. 1 out of 1 hunk FAILED -- saving rejects to file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js.rej patching file Tools/WebGPUShadingLanguageRI/Test.html patching file Tools/WebGPUShadingLanguageRI/Test.js Hunk #1 succeeded at 8081 (offset 164 lines). patching file Tools/WebGPUShadingLanguageRI/index.html Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Parsed 14 diffs from patch file(s). patching file Tools/ChangeLog Hunk #1 succeeded at 1 with fuzz 3. patching file Tools/WebGPUShadingLanguageRI/All.js patching file Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js patching file Tools/WebGPUShadingLanguageRI/CallExpression.js patching file Tools/WebGPUShadingLanguageRI/EBufferBuilder.js patching file Tools/WebGPUShadingLanguageRI/Func.js patching file Tools/WebGPUShadingLanguageRI/FuncDef.js patching file Tools/WebGPUShadingLanguageRI/Prepare.js patching file Tools/WebGPUShadingLanguageRI/Rewriter.js patching file Tools/WebGPUShadingLanguageRI/SPIRV.html patching file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js Hunk #1 FAILED at 20. 1 out of 1 hunk FAILED -- saving rejects to file Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js.rej patching file Tools/WebGPUShadingLanguageRI/Test.html patching file Tools/WebGPUShadingLanguageRI/Test.js Hunk #1 succeeded at 8081 (offset 164 lines). patching file Tools/WebGPUShadingLanguageRI/index.html Failed to run "[u'/Volumes/Data/EWS/WebKit/Tools/Scripts/svn-apply', '--force', '--reviewer', u'Myles C. Maxfield']" exit_code: 1 cwd: /Volumes/Data/EWS/WebKit Updating OpenSource From https://git.webkit.org/git/WebKit 2a2836e6631..dee36913aef master -> origin/master Partial-rebuilding .git/svn/refs/remotes/origin/master/.rev_map.268f45cc-cd09-0410-ab3c-d52691b4dbfc ... 
Currently at 236296 = 2a2836e6631fd50250fbec7774a49ee368daa97b r236297 = 560dda40a46c0fea73db1cb6365debaee9273c3a r236298 = 9ff9defcd0e9c7e3712bd2f28cc498e9e99f7902 r236299 = dee36913aefb932eb3d82e2b3510193dac0212ff Done rebuilding .git/svn/refs/remotes/origin/master/.rev_map.268f45cc-cd09-0410-ab3c-d52691b4dbfc First, rewinding head to replay your work on top of it... Fast-forwarded master to refs/remotes/origin/master. Full output: https://webkit-queues.webkit.org/results/9290392 At 2018-09-20T06:33:31Z<EMAIL_ADDRESS>wrote: Created attachment 350175 Patch At 2018-09-20T02:49:41Z<EMAIL_ADDRESS>wrote: (In reply to Myles C. Maxfield from comment #8) Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:53 node.func.visit(new FindAllVariablesAndFunctionParameters(node.func)); Doesn't this have exponential runtime because it doesn't dedup functions? Damn, good catch. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:157 const possibleAndOverloads = program.globalNameContext.get(Func, functionName); const callExpressionResolution = CallExpression.resolve(node.origin, possibleAndOverloads, functionName, [ this.globalStructVariableRef ], [ ptrToGlobalStructTypeRef ]); It's kind of unfortunate we have to reverse-engineer what would have happened earlier in the compiler. Can we run this stage earlier so we don't have to do this? Annoyingly we need types for this stage, which are only fully annotated in the Checker stage. An earlier version of this patch tried doing this allocation but I couldn’t get it working reliably. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:171 return super.visitVariableRef(node); Isn't this an error? No, anonymous variables can be wrapped in VariableRefs. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:199 const anonymousVariable = new AnonymousVariable(node.origin, type); What is the purpose of the anonymous variables? Why not assign directly into the global struct? I’m going to add a comment into the code explaining why not, because it has now caught me out several times. Consider the case foo(foo(a, b), c). We initially evaluate c, and then evaluate foo(a, b) per the RTL calling convention. To evaluate foo(a, b) we evaluate b, then a, and place them in the global struct for the call to foo. However, this would mean that if c had previously been placed in the global struct then the outer foo wouldn’t see the value of evaluating c, but the value of evaluating b. Therefore all the arguments have to be evaluated into anonymous variables, then copied into the global struct, and then the function has to be called. The existing Metal code generator (MSLStatementEmitter.visitCallExpression) and interpreter (Evaluator._evaluateArguments) both respect this behavior and there are tests that catch this. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:210 node.argumentList = [ this.globalStructVariableRef ]; Are you sure it's wise for them all to be using the exact same VariableRef? Seems like we should store a creation lambda instead of the raw variable itself. There’s nothing in the compiler/interpreter at the moment that memoizes the evaluation or compilation of a node, so it would always be re-evaluated/re-compiled wherever it occurs, but this change seems harmless. Tools/WebGPUShadingLanguageRI/AllocateAtEntryPoints.js:240 if (node._newParameters) This pollutes the FuncDef nodes. I'd prefer a side-table. Cool, will do. Tools/WebGPUShadingLanguageRI/EBufferBuilder.js:-33 constructor(program) { super(); this._program = program; } What? 
What's the point of this class if you can never construct it? I don't see any other constructors or static functions. There is a default constructor (equivalent to constructor () { super(); }), and there are no construction sites that actually passed in the program object any more, nor is this._program used anywhere in the class. Tools/WebGPUShadingLanguageRI/SynthesizeStructAccessors.js:51 function createFieldType() { return field.type.visit(new Rewriter()); } function createTypeRef() { return TypeRef.wrap(type); } Do we need these? createFieldType() isn’t necessary (it works fine to continue using field.type) and createTypeRef is literally just used as a utility function 3 times (as you noticed, diff had a bad time with this file — I didn’t write these functions). I’ll get rid of them. At 2018-09-21T06:55:22Z<EMAIL_ADDRESS>wrote: Comment on attachment 350175 Patch Rejecting attachment 350175 from commit-queue. <EMAIL_ADDRESS>does not have committer permissions according to https://trac.webkit.org/browser/trunk/Tools/Scripts/webkitpy/common/config/contributors.json. If you do not have committer rights please read http://webkit.org/coding/contributing.html for instructions on how to use bugzilla flags. If you have committer rights please correct the error in Tools/Scripts/webkitpy/common/config/contributors.json by adding yourself to the file (no review needed). The commit-queue restarts itself every 2 hours. After restart the commit-queue will correctly respect your committer rights. At 2018-09-21T21:02:29Z<EMAIL_ADDRESS>wrote: Created attachment 350419 Patch At 2018-09-21T21:46:10Z<EMAIL_ADDRESS>wrote: Comment on attachment 350421 Patch Clearing flags on attachment: 350421 Committed r236361: https://trac.webkit.org/changeset/236361 At 2018-09-21T21:03:16Z<EMAIL_ADDRESS>wrote: Comment on attachment 350419 Patch Rejecting attachment 350419 from commit-queue. <EMAIL_ADDRESS>does not have committer permissions according to https://trac.webkit.org/browser/trunk/Tools/Scripts/webkitpy/common/config/contributors.json. If you do not have committer rights please read http://webkit.org/coding/contributing.html for instructions on how to use bugzilla flags. If you have committer rights please correct the error in Tools/Scripts/webkitpy/common/config/contributors.json by adding yourself to the file (no review needed). The commit-queue restarts itself every 2 hours. After restart the commit-queue will correctly respect your committer rights. At 2018-09-21T21:07:04Z<EMAIL_ADDRESS>wrote: Created attachment 350421 Patch At 2018-09-22T10:03:00Z<EMAIL_ADDRESS>wrote: *** Bug 189107 has been marked as a duplicate of this bug. ***
2025-04-01T06:38:51.347428
2022-12-08T11:33:26
1484435168
{ "authors": [ "douglaseggleton", "gr4vy-code" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6536", "repo": "gr4vy/gr4vy-embed", "url": "https://github.com/gr4vy/gr4vy-embed/pull/121" }
gharchive/pull-request
chore: upgrade dev dependencies Description Upgrades dev dependencies. Checklist [x] My code follows the style guidelines of this project [x] I have performed a self-review of my own changes [x] I have run yarn lint to make sure my changes pass all tests [x] I have run yarn test to make sure my changes pass all linters [x] I have pulled the latest changes from the upstream main branch [x] I have tested both the react and the CDN versions on local and integration environments [x] I have added the necessary labels to this PR in case a new release needs to be published after merging into main (e.g. release and patch) Contribution guidelines For contribution guidelines, styleguide, and other helpful information please see the CONTRIBUTING.md file in the root of this project. :rocket: PR was released in v2.15.0 :rocket:
2025-04-01T06:38:51.370594
2017-09-13T00:28:03
257223085
{ "authors": [ "spyhunter99" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6537", "repo": "gradle-fury/gradle-fury", "url": "https://github.com/gradle-fury/gradle-fury/issues/51" }
gharchive/issue
Add test targets for the newer android plugins and gradle versions ugh https://travis-ci.org/gradle-fury/gradle-fury/builds/274866548 so i did make progress with this, but ran into a whole lot of test failures. unreal how difficult it is to get everything working consistently across versions several issues starting with gradle 3.4 gradle api change which affects the dependency check plugin (updated version) https://github.com/jeremylong/dependency-check-gradle/issues/31 gradle api change which affected the maven-support script due to api change, dependencies declared in the pom are now no longer listed causing the validation tests to fail did as much as i can. gradle api changes break everything
2025-04-01T06:38:51.372081
2024-02-12T14:21:30
2130243321
{ "authors": [ "bigdaz", "martinfrancois" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6538", "repo": "gradle/actions", "url": "https://github.com/gradle/actions/pull/41" }
gharchive/pull-request
Improve documentation Improve grammar Improve clarity Fix small mistakes and word duplications Thanks very much for the documentation improvements! @bigdaz you're welcome, thanks for the quick response!
2025-04-01T06:38:51.375672
2018-10-26T16:48:54
492249990
{ "authors": [ "TLATER", "tculp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6539", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/10588" }
gharchive/issue
Source dependencies from local (file:...) git repo not allowed in --offline mode Git repositories can be defined as a url or a path. As a URL: sourceControl { gitRepository("https://github.com/${repoGroup}/${repoName}.git") { producesModule("${module}") } } As a path: sourceControl { gitRepository("../${repoPath}") { producesModule("${module}") } } Expected Behavior When the --offline parameter is given, I expect URL-based repos to fail, but path-based repos to succeed, since they do not require online functionality. Current Behavior When using a path-based repo, I still receive the following: Could not resolve all artifacts for configuration ':classpath'. Cannot resolve ${module}:1.0 (branch: dev) from Git repository at file:/Users/${repoPath}/ in offline mode. Context Some git repositories are not published online, but should still be usable as a source dependency. More commonly, a machine may not be connected to the internet, but may still have all of the required repositories cloned locally. This is my situation. Steps to Reproduce (for bugs) Attempt to assemble a project with a local, path-based sourceDependency with the --offline flag See the linked issue. I'd very much like to be able to do gradle offline build for nix packaging purposes. Any chance this could be reopened?
2025-04-01T06:38:51.395984
2022-05-13T18:35:12
1235563215
{ "authors": [ "DPUkyle", "adammurdoch", "bamboo", "big-guy", "liutikas", "nuhkoca", "rolgalan", "yigit" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6540", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/20778" }
gharchive/issue
IdentityTransform fails with FileNotFound after updating to 7.5-rc1 AndroidX Github Build started failing after updating Gradle to 7.5-rc-1 from 7.5-20220421031748+0000 First failure: https://github.com/androidx/androidx/commit/dc4af6559aeea3e9eb285857b06078bb152a56cf Expected Behavior compileReleaseJavaWithJavac task should wait for its dependencies. Current Behavior compileReleaseJavaWithJavac fails in IdentityTransform step with a FileNotFound exception. Unfortunately, the file is there after the build so my guess is that it is not waiting for its dependencies properly. Execution failed for task ':lifecycle:lifecycle-livedata-core:compileReleaseJavaWithJavac'. 2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Could not resolve all files for configuration ':lifecycle:lifecycle-livedata-core:releaseCompileClasspath'. 2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Failed to transform lifecycle-common.jar (project :lifecycle:lifecycle-common) to match attributes {artifactType=android-classes-jar, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.jvm.version=8, org.gradle.libraryelements=jar, org.gradle.usage=java-api}. 2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Execution failed for IdentityTransform: /Users/yboyar/src/androidx/out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs/lifecycle-common-2.6.0-alpha01.jar. 2022-05-13T11:22:48.790-0700 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > File/directory does not exist: /Users/yboyar/src/androidx/out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs/lifecycle-common-2.6.0-alpha01.jar Context Worked in 7.5-20220421031748+0000 (and before), started failing with 7.5-rc-1. Steps to Reproduce checkout AndroidX Github Repo Might also need the one time setup instructions cd <checkout-root>/activity ./gradlew --stop rm -rf ~/.gradle // important to reproduce ./gradlew buildOnServer --no-build-cache --no-configuration-cache If you re-run bOS it will succeed. You can also validate that the missing file is there after the first failure. Your Environment Build scan URL: https://ge.androidx.dev/s/sg2yfkiqoxpti/failure#1 also tried 7.5-20220511195339+0000. didn't work. Thanks @yigit. We'll try to reproduce this. Could you give 7.5-20220501001223+0000 a try? Looks like<PHONE_NUMBER>0934 (build) is the last good one and<PHONE_NUMBER>2320 is the first failure (build) @big-guy , triggered a new build for 7.5-20220501001223+0000: https://github.com/androidx/androidx/actions/runs/2321565000 That one failed too with the same error: https://github.com/androidx/androidx/runs/6429070066?check_suite_focus=true All the things seem to point to https://github.com/gradle/gradle/commit/0654460d8de07edb1358bfb774c760e47a55cf71 I was able to reproduce. Instead of purging the whole Gradle user home (~/.gradle), I just did: $ rm -rf ../out/activity-playground/activity-playground/lifecycle/lifecycle-common/build/libs from the activity subproject. This was enough to raise the error. I'll continue investigating. Hi @yigit and @liutikas, sorry for the delay in responding. We've been looking at the execution graph optimizations which exposed this issue. 
The reality is the behavior which analyzed the inputs to transformations has been incorrect since at least Gradle 7.4, but something about the androidx setup combined with our recent optimizations has teased out this bug. Unfortunately the proper long-term fix is too disruptive to add to 7.5-rc-2. Instead I've provided a suggested temporary workaround here. This should allow you to test with Gradle 7.5-rc-1, and we will make changes in either 7.5.1 or 7.6 which will remove the need for the workaround. @bamboo I think Adam is looking/working on this. Do you want us to reassign it to him? This issue is blocked by the root cause, which is being investigated as #20975. This is fixed now via https://github.com/gradle/gradle/pull/21292 Unfortunately, we are still seeing this problem with gradle 8: https://ge.androidx.dev/s/jpkurj73xajqw Unfortunately, we are still seeing this problem with gradle 8: https://ge.androidx.dev/s/jpkurj73xajqw I can confirm we are getting exactly the same problem with Gradle 8.2 and Gradle 8.2.1 only if we enable configuration cache. We detected while executing sonarqube, but the error happens during the kaptGenerateStubsDebugKotlin. Interestingly enough, if we run first kaptGenerateStubsDebugKotlin and then in a separate execution sonarqube, it finishes successfully. This happens for us in our CI (on Linux AMIs, with 16 cores) but not locally (on macOS, with 8-10 cores), not sure if the extra available workers or the OS might be connected. This is a project around 140 modules and it always fails in one of the first ones (required by another 14 modules). Unfortunately, we are still seeing this problem with gradle 8: https://ge.androidx.dev/s/jpkurj73xajqw I can confirm we are getting exactly the same problem with Gradle 8.2 and Gradle 8.2.1 only if we enable configuration cache. We detected while executing sonarqube, but the error happens during the kaptGenerateStubsDebugKotlin. Interestingly enough, if we run first kaptGenerateStubsDebugKotlin and then in a separate execution sonarqube, it finishes successfully. This happens for us in our CI (on Linux AMIs, with 16 cores) but not locally (on macOS, with 8-10 cores), not sure if the extra available workers or the OS might be connected. This is a project around 140 modules and it always fails in one of the first ones (required by another 14 modules). Hey @rolgalan, We are facing exactly the same issue. Were you be able to fix this? Hi @nuhkoca, is that still happening with Gradle 8.9? Hi @bamboo, yes even with Gradle 8.10. I am getting this exception Execution failed for task ':app:kaptGenerateStubsDebugKotlin'. Error while evaluating property 'friendPathsSet$kotlin_gradle_plugin_common' of task ':app:kaptGenerateStubsDebugKotlin'. Could not resolve all files for configuration ':app:debugCompileClasspath'. Failed to transform annotation.jar (project :core:annotation) to match attributes {artifactType=android-classes-jar, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.jvm.environment=standard-jvm, org.gradle.jvm.version=17, org.gradle.libraryelements=jar, org.gradle.usage=java-api, org.jetbrains.kotlin.platform.type=jvm}. Execution failed for IdentityTransform: /home/runner/work/path/to/project/core/annotation/build/libs/annotation.jar. File/directory does not exist: /home/runner/work/path/to/project/core/annotation/build/libs/annotation.jar We have the same scenario as @rolgalan, this exception only occurs in executing sonarqube task regardless of configuration cache. 
We have only one JVM module in our Android project and this started after adding that module, tho. Hey @bamboo, we figured out that this issue is actually originating from Sonar task. Nothing to do with Gradle. For now, we converted JVM library to Kotlin one and issue got resolved. Thanks for helping tho!
2025-04-01T06:38:51.404687
2018-09-23T17:12:42
362959628
{ "authors": [ "big-guy", "pkubowicz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6541", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/6864" }
gharchive/issue
Renaming unrelated task makes test task not up-to-date Expected Behavior Running tests is expensive, so they should be up-to-date when unrelated changes are done in build scripts Current Behavior Tests task is marked as not up-to-date Context I am running tests using Pact library, which produces contract files. I need to clean the output directory ($buildDir/foo in the simplified project below) before running tests, as the library has unpredictable merging mechanism. Steps to Reproduce (for bugs) Use https://github.com/pkubowicz/gradle-tests-uptodate Run ./gradlew barJar --console=verbose - tests are run Repeat ./gradlew barJar --console=verbose - everything up-to-date Edit build.gradle, changing baz123 task name to baz1234 note that barJar is not related to this task in any way Run ./gradlew barJar --console=verbose - tests are re-run, although nothing related to barJar has changed Your Environment Gradle 4.10.2, happens also on 4.8 Build scan URL: https://scans.gradle.com/s/em3pkt2jw7usg This is working as designed, but I understand it's a little confusing. Using doFirst or doLast in a build script adds the build script as an input to the task because that's the implementation of the closure. This means any change to the build script affects the doFirst or doLast actions and that affects up-to-dateness. That's the source of Task ':test' has additional actions that have changed. e.g., if you were doing this: def outputToDelete = file("$buildDir/foo") test { doFirst { delete outputToDelete } } It's not enough to track the plain text contents of the doFirst block, we need to track more. Like a task's class implementation, we track the classloader, which tracks all of the classes/jars that are part of it. For build scripts, this also includes the build script file itself. I think you have a few options: If you're using buildSrc, move the bit of logic that configures the test task into a plugin in buildSrc. This moves/limits the problem to just changes to buildSrc, which may be less frequent than changes to the build script. You could create a pact plugin and publish it somewhere. This could be an evolution from 1 above. This would behave like you would expect and the test task would only be out of date when the pact plugin changed (or any of the usual Gradle things). Delete these files in some other way. Maybe there's a way to configure pact to delete these files automagically? We had talked about deleting outputs automatically on the Gradle side if we knew the task wasn't incremental, but I don't think we ever did that, did we @wolfs?
2025-04-01T06:38:51.409731
2016-12-05T15:22:42
193526414
{ "authors": [ "ImGanesh", "lacasseio" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6542", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/issues/969" }
gharchive/issue
Extending the NativeComponentSpec to build partially linked objects Gradle by default only allows to build executables or libraries(shared and static) for C source files. For my build system, I would like to build a partially linked object from the C source set. For this, I would like to extend the NativeComponentSpec. Expected Behavior So, I would like to have two types of behaviours: To build a partially linked object from the C sources. The generated artifact should have a .o ending irrespective of OS'es. I would like to build a NativeExecutableSpec from the partially linked objects. So, it should just have a link step where the inputs to the linker will be partially linked objects. Current Behavior The current behavior does not support partial linking or providing an "object source set" to the linker. Hence, there is a need to extend the model. Context I have a main project with multiple sub-projects. My idea is to build the main project by linking the partially linked objects of the various sub-projects. Reason for this is because, we do generic embedded software development and one or more of our sub-projects would be re-used in other main projects. So, we would just like to create partially linked objects which can later on be linked with other projects in a second link phase. Steps to Reproduce (for bugs) Your Environment Build scan URL: Git is VCS. We use sparc-rtems-gcc and sparc-rtems-ld for our compilation and linking (version is 3.x). The sub-projects are mostly decoupled. And I am currently using a Gradle Multi-project build structure. I am just a beginner with Gradle. I have just heard the concept of build scans so will try to update my issue as soon as I have learnt to implement it in my current project. Thanks @ImGanesh for this feature request. For more information about build scans, refer to the getting started documentation here.
2025-04-01T06:38:51.412159
2020-11-24T20:04:27
750002565
{ "authors": [ "bot-gradle", "melix" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6543", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/15301" }
gharchive/pull-request
Use a named object container for the catalogs Context Suggestion from @big-guy For consistency with other DSLs in Gradle, use a named object container to declare catalogs. For the Groovy DSL it's actually a bit nicer. For the Kotlin DSL unfortunately it makes things a bit more verbose. @bot-gradle test this OK, I've already triggered ReadyForMerge build for you.
2025-04-01T06:38:51.413866
2021-08-04T13:47:33
960481054
{ "authors": [ "bamboo", "bot-gradle" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6544", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/17944" }
gharchive/pull-request
Backport: Don't lose task dependencies when zipping against provider with no dependencies Backport #17930 Your PR is queued. See the queue page for details. OK, I've already triggered a build for you.
2025-04-01T06:38:51.415124
2021-12-28T16:55:24
1090017650
{ "authors": [ "bot-gradle", "ljacomet" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6545", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/19443" }
gharchive/pull-request
Ignore always failing tests This either indicates a bug or a behavior change introduced in Gradle 7.3 @bot-gradle test and merge OK, I've already triggered a build for you.
2025-04-01T06:38:51.430070
2022-01-21T19:03:18
1110794583
{ "authors": [ "bamboo", "bot-gradle" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6546", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/19661" }
gharchive/pull-request
Introduce Task.doNotCacheConfigurationIf For optimistic incremental adoption of the configuration cache. The new API supports the scenario where one would like to use the configuration cache whenever it just works via the configuration-cache-problems=warn setting and have it automatically disabled whenever tasks for which configuration caching has been proven problematic are scheduled. @bot-gradle test ACC Sorry some internal error occurs, please contact the administrator @blindpirate @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bot-gradle test ACC Sorry some internal error occurs, please contact the administrator @blindpirate @bot-gradle help Currently, the following commands are supported: @bot-gradle test <BuildTrigger1> <BuildTrigger2> ... <BuildTriggerN> A trigger is a special build for this PR on TeamCity, common triggers are: SanityCheck/CompileAll/QuickFeedbackLinux/QuickFeedback/PullRequestFeedback/ReadyForNightly/ReadyForRelease Shortcuts: SC/CA/QFL/QF/PRF/RFN/RFR Specific builds: PT: PerformanceTest, all performance tests for Ready For Nightly stage. APT: AllPerformanceTest, all performance tests, including slow performance tests. AST: AllSmokeTestsPullRequestFeedback AFT: AllFunctionalTestsPullRequestFeedback ASB: AllSpecificBuildsPullRequestFeedback ACC: AllConfigCacheTestsPullRequestFeedback ACT: AllCrossVersionTestsReadyForNightly AFTN: AllFunctionalTestsReadyForNightly ACTR: AllCrossVersionTestsReadyForRelease AFTR: AllFunctionalTestsReadyForRelease @bot-gradle test and merge queues this PR for testing and merges if all tests pass by: Creating a merge commit from your PR branch HEAD and the target branch Running a ReadyForNightly build against the merge commit When it passes, fast-forward the target branch to this merge commit (i.e. merge the PR) The merge commit is called a pre-tested commit, which means that it fully tests the integration of your branch HEAD and latest master, instead of only testing your branch HEAD. @bot-gradle cancel cancel a running pre-tested commit build or remove it from queue @bot-gradle clean clear the conversation history @bot-gradle help display this message To run a command, simply submit a comment. For detailed instructions see here. @bot-gradle test AllConfigCacheTestsPullRequestFeedback @bot-gradle clean @bot-gradle test ACC Sorry some internal error occurs, please contact the administrator @blindpirate Sorry some internal error occurs, please contact the administrator @blindpirate We might consider this again in the future but it's more likely we introduce something at the project level. @bot-gradle test ACC OK, I've already triggered the following builds for you: AllConfigCacheTestsPullRequestFeedback build
2025-04-01T06:38:51.454740
2022-02-28T20:45:09
1154531753
{ "authors": [ "big-guy", "bigdaz", "blindpirate", "bot-gradle", "eskatos", "jvandort", "octylFractal" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6547", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/20054" }
gharchive/pull-request
Merge daemon defaults with user-supplied jvmargs Fixes #19750 Context Previously, setting any value for org.gradle.jvmargs caused all default settings to be lost. This often resulted in important defaults like -XXMaxMetaspaceSize being omitted when a user attempted to provide more memory to a build process. With this change, default jvmargs will be retained unless specifically overridden by a user-supplied argument. One exception is that setting either -Xmx or -Xms will cause the default heap size settings to be omitted, preventing user-supplied values from conflicting with default values (like having a min heap larger than max heap). Contributor Checklist [ ] Review Contribution Guidelines [ ] Make sure that all commits are signed off to indicate that you agree to the terms of Developer Certificate of Origin. [ ] Make sure all contributed code can be distributed under the terms of the Apache License 2.0, e.g. the code was written by yourself or the original code is licensed under a license compatible to Apache License 2.0. [ ] Check "Allow edit from maintainers" option in pull request so that additional changes can be pushed by Gradle team [ ] Provide integration tests (under <subproject>/src/integTest) to verify changes from a user perspective [ ] Provide unit tests (under <subproject>/src/test) to verify logic [ ] Update User Guide, DSL Reference, and Javadoc for public-facing changes [ ] Ensure that tests pass sanity check: ./gradlew sanityCheck [ ] Ensure that tests pass locally: ./gradlew <changed-subproject>:quickTest Gradle Core Team Checklist [ ] Verify design and implementation [ ] Verify test coverage and CI build status [ ] Verify documentation [ ] Recognize contributor in release notes @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bot-gradle test this @bot-gradle test this @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build OK, I've already triggered the following builds for you: PullRequestFeedback build @octylFractal Thanks for the feedback. Unfortunately this PR is hitting some hard-to-understand test failures that will require further investigation before this change could be merged. I'm not sure if/when I'll find time to do that. In the meantime, would it be helpful if I close this PR (or convert to draft?) I'm looking at the test failures now, so don't worry about that. @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bigdaz I looked into the "hard-to-understand test failures", and the underlying issue is that we used to let anything that set org.gradle.jvmargs run with unlimited Metaspace if not specified, and now we limit it to 256m. After some research, @DPUkyle and I came to the conclusion that it would be a good idea to increase this to perhaps 1G by default, or remove it entirely. What do you think? We should not remove the setting altogether. The entire reason for this PR is that users are setting org.gradle.jvmargs without specifying MaxMetaspaceSize, and the daemon process is consuming more and more memory until the process dies. See #19750 for details. Providing a higher default value might make sense. 
Or since these failures seem specific to the Kotlin compiler daemon (I think), perhaps the best fix is to ensure an appropriate MaxMetaspaceSize for this process. @eskatos I see you assigned this back to me, but I'm not sure what action can/should be taken. The current behaviour is that if you don't set any jvm args, then we set MaxMetaspaceSize=256m. If a user sets a completely unrelated jvm arg, then we don't provide any default value for MaxMetaspaceSize. Perhaps this makes sense: we provide a default set of jvm args and you can choose to override the entire set, but not one value in the set. But I've seen a number of users struggle with this: they give their build more memory (say -Xmx1024m), and their build starts to run out of memory! It's a slightly different out of memory error, but users don't really know how to fix this. There are a few options to address this: Do nothing, but perhaps improve the documentation so it's clear that setting -Xmx does more than just giving more memory. When a user sets just one of the values that we provide a default for, we only change that value, and leave the others in place. (That's what my PR tries to do). Provide a different (or higher) default value for MaxMetaspaceSize. We should probably document this. Provide a better model for specifying the memory settings for a build, allowing users to set values independently. ??? I could help out with any of 1-3. I think 4 would require more work and should be tackled by the BT team. WDYT? I assigned it back to you while triaging unassigned PRs and adding assignees to all team members/authors. I think this PR is a reasonable improvement and will address the current confusion. But as shown by the failing tests, this may break builds. I'm not sure if raising the default metaspace size to 1g would be a good move. Lots of builds won't need that much. @octylFractal, @big-guy, would the approaching 8.0 be a good time to address this? Can you take over making a decision on this? @bot-gradle test ReadyForNightly OK, I've already triggered the following builds for you: ReadyForNightly build @bot-gradle test this OK, I've already triggered the following builds for you: PullRequestFeedback build @bot-gradle test and merge OK, I've already triggered a build for you. Pre-tested commit build failed. The performance test asserts there's only one daemon involved in multiple iterations. However, this PR seems to change something in daemon compatibility so there are multiple daemons started. The performance test failure is reproducible. We're not able to look at this until 2025
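The merging rule this PR describes (keep daemon defaults unless a user-supplied argument overrides them, and drop the default heap settings whenever the user sets -Xmx or -Xms) can be sketched outside Gradle. The following Python sketch only illustrates that rule; the default values shown are assumptions for demonstration, and this is not Gradle's implementation.

# Illustrative sketch of the merge rule described in the PR, not Gradle's code.
# The default values below are assumptions for demonstration purposes.
DAEMON_DEFAULTS = ["-Xmx512m", "-XX:MaxMetaspaceSize=256m", "-XX:+HeapDumpOnOutOfMemoryError"]

def arg_key(arg: str) -> str:
    """Normalize a JVM flag to the part that identifies which setting it controls."""
    if arg.startswith(("-Xmx", "-Xms")):
        return arg[:4]
    return arg.split("=", 1)[0]

def merge_jvm_args(user_args: list[str]) -> list[str]:
    user_keys = {arg_key(a) for a in user_args}
    merged = []
    for default in DAEMON_DEFAULTS:
        key = arg_key(default)
        # If the user sets either -Xmx or -Xms, omit the default heap sizing entirely.
        if key in ("-Xmx", "-Xms") and user_keys & {"-Xmx", "-Xms"}:
            continue
        if key not in user_keys:
            merged.append(default)
    return merged + user_args

print(merge_jvm_args(["-Xmx4g"]))
# -> ['-XX:MaxMetaspaceSize=256m', '-XX:+HeapDumpOnOutOfMemoryError', '-Xmx4g']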
2025-04-01T06:38:51.458363
2024-01-15T10:07:39
2081682628
{ "authors": [ "blindpirate", "bot-gradle" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6548", "repo": "gradle/gradle", "url": "https://github.com/gradle/gradle/pull/27692" }
gharchive/pull-request
Run native related builds on Intel macs only Since 2024 there will be only 2 Intel Mac build agents. Because only native builds are architecture-dependent, this PR only executes native subprojects on Intel Macs. I have squashed this PR as 1f49fed23a4ec031d66d628e833546019d84c605 Can't cherry-pick due to merge conflict. You have to cherry-pick by yourself. Sorry some internal error occurs, please contact the administrator @blindpirate
2025-04-01T06:38:51.467416
2023-02-02T08:37:10
1567574930
{ "authors": [ "NissesSenap", "gitgaoxiang", "pb82" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6549", "repo": "grafana-operator/grafana-operator", "url": "https://github.com/grafana-operator/grafana-operator/pull/887" }
gharchive/pull-request
Fix port name not in effect and nodePort changed failed when update When grafana instance is created, the port name configured for the service does not take effect. is also not effective when changed grafana nodeport Description apiVersion: integreatly.org/v1alpha1 kind: Grafana metadata: name: grafana namespace: monitoring spec: client: preferService: true ingress: enabled: False config: log: mode: "console" level: "error" security: admin_user: "root" admin_password: "12345" log.frontend: enabled: true auth: disable_login_form: False disable_signout_menu: False auth.anonymous: enabled: True service: name: "grafana-service" labels: app: "grafana" type: "grafana-service" ports: - { nodePort: 30004, port: 3000, protocol: TCP, name: web } type: NodePort dashboardLabelSelector: - matchExpressions: - { key: app, operator: In, values: [grafana] } resources: # Optionally specify container resources limits: cpu: 800m memory: 800Mi requests: cpu: 100m memory: 100Mi bug1:When spec.service.port name is web,but deploy name is grafana. It's restricted in the program. bug2:If the first grafana deployment nodePort is 30001,when I changed cr nodePort to 30002,It didn't work Relevant issues/tickets Type of change [x] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) Checklist [ ] This change requires a documentation update [x] I have added tests that prove my fix is effective or that my feature works [ ] I have added a test case that will be used to verify my changes [ ] Verified independently on a cluster by reviewer Verification steps @gitgaoxiang can you please rebase? rebase Already done @pb82 Done. Don't know why the e2e failes. For some reason it has returned [ { "id": 1, "uid": "He5NIuAVk", "title": "grafana-operator-system", "uri": "db/grafana-operator-system", "url": "/dashboards/f/He5NIuAVk/grafana-operator-system", "slug": "", "type": "dash-folder", "tags": [], "isStarred": false, "sortMeta": 0 }, { "id": 3, "uid": "0ed390cdb20229700c1741b72138163ce2214445", "title": "Node' Exporter 'Full", "uri": "db/node-exporter-full", "url": "/d/0ed390cdb20229700c1741b72138163ce2214445/node-exporter-full", "slug": "", "type": "dash-db", "tags": [ "linux" ], "isStarred": false, "folderId": 1, "folderUid": "He5NIuAVk", "folderTitle": "grafana-operator-system", "folderUrl": "/dashboards/f/He5NIuAVk/grafana-operator-system", "sortMeta": 0 }, { "id": 2, "uid": "2150edaf610ab34b8f1050e5bcd5d4ca5903e1c2", "title": "Simple' 'Dashboard", "uri": "db/simple-dashboard", "url": "/d/2150edaf610ab34b8f1050e5bcd5d4ca5903e1c2/simple-dashboard", "slug": "", "type": "dash-db", "tags": [], "isStarred": false, "folderId": 1, "folderUid": "He5NIuAVk", "folderTitle": "grafana-operator-system", "folderUrl": "/dashboards/f/He5NIuAVk/grafana-operator-system", "sortMeta": 0 } ] This haven't happend before and it didn't on another PR that I just ran so I think this should be okay. I will merge the PR and if there is some issue after I will look in to it.
2025-04-01T06:38:52.335593
2023-06-20T18:47:39
1765967075
{ "authors": [ "kevinwcyu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6550", "repo": "grafana/iot-sitewise-datasource", "url": "https://github.com/grafana/iot-sitewise-datasource/issues/200" }
gharchive/issue
Property Value Aggregate Query Has Double the Expected Number of Data Points The following description was taken from 2 comments in https://github.com/grafana/iot-sitewise-datasource/issues/160 I tested the new version (1.9.2) with Grafana version 10.0.0. With my current setup, I get the following response: By downloading the data and looking at it, there are duplicates of most of the values. Without knowing, I suspect it is the paginated response that is duplicated. Using Boto3 - get_asset_property_aggregates I get the expected value of 600 data points. As the screenshot shows above, the expression fails when using the value aggregates (top right corner), only showing 1 data point. By simply taking the expression query away, the result would be the same as top left corner (as expected). Similarly, the expression does not get all the values for value history either. Again, I suspect that this is something with pagination/next token, not being able to show all the result. Please let me know if anything is unclear or you require more information. Originally posted by @egheie in https://github.com/grafana/iot-sitewise-datasource/issues/160#issuecomment-1592591464 Hi @kevinwcyu, no problem. Here are the screenshots, starting bottom left, going clockwise. Note: every quadrant will have the same query, just the panel is different. (Either Time series, or Stat with Calculation = Count). When doing this, I saw that the bottom left Stat was set to Get property value aggregates and not Get property value history. See new screenshot at the bottom, sorry for that. Screenshots of query editor: Bottom left Top left Top right Bottom right Updated screenshot: Originally posted by @egheie in https://github.com/grafana/iot-sitewise-datasource/issues/160#issuecomment-1594699586 Here's a python script to fetch a count of the data points to compare with the results from the dashboard. 
import boto3
from datetime import datetime

client = boto3.client('iotsitewise')

# get asset property values history
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iotsitewise/client/get_asset_property_value_history.html
# may need a paginator if there are more than 20000 data points
response = client.get_asset_property_value_history(
    assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual asset id
    propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual property id
    startDate=datetime.fromisoformat('2023-06-13T07:00:00Z'),  # <--- set to corresponding from time from the query
    endDate=datetime.fromisoformat('2023-06-21T06:59:59Z'),  # <--- set to corresponding to time from the query
    timeOrdering='ASCENDING',
    maxResults=20000
)
print(len(response['assetPropertyValueHistory']))

# get asset property aggregates
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iotsitewise/client/get_asset_property_aggregates.html
agg_paginator = client.get_paginator('get_asset_property_aggregates')
agg_iterator = agg_paginator.paginate(
    assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual asset id
    propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- replace with actual property id
    aggregateTypes=['AVERAGE'],
    resolution='1m',
    startDate=datetime.fromisoformat('2023-06-13T07:00:00Z'),  # <--- set to corresponding from time from the query
    endDate=datetime.fromisoformat('2023-06-21T06:59:59Z'),  # <--- set to corresponding to time from the query
    timeOrdering='ASCENDING',
    maxResults=250,
)
agg_count = 0
for p in agg_iterator:
    agg_count += len(p['aggregatedValues'])
print(agg_count)
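A quick way to check the pagination-duplication theory is to count repeated timestamps across the aggregate pages. This is only a sketch: it reuses the agg_paginator, IDs and datetime import from the script above, and assumes the boto3 response shape with 'aggregatedValues' entries that carry a 'timestamp' field.

from collections import Counter

# Tally how often each aggregate timestamp appears across all pages.
# If pagination duplicates data, some timestamps will show up more than once.
timestamp_counts = Counter()
for page in agg_paginator.paginate(
    assetId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',   # <--- hypothetical asset id
    propertyId='1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d',  # <--- hypothetical property id
    aggregateTypes=['AVERAGE'],
    resolution='1m',
    startDate=datetime.fromisoformat('2023-06-13T07:00:00+00:00'),
    endDate=datetime.fromisoformat('2023-06-21T06:59:59+00:00'),
    timeOrdering='ASCENDING',
    maxResults=250,
):
    for value in page['aggregatedValues']:
        timestamp_counts[value['timestamp']] += 1

duplicated = sum(1 for n in timestamp_counts.values() if n > 1)
print(f'{duplicated} duplicated timestamps out of {len(timestamp_counts)} unique timestamps')

If the dashboard shows roughly double the count while this prints zero duplicates, the duplication is happening on the datasource/plugin side rather than in the SiteWise API itself.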
2025-04-01T06:38:52.342138
2024-10-08T08:04:15
2572420634
{ "authors": [ "josemrs", "petewall" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6551", "repo": "grafana/k8s-monitoring-helm", "url": "https://github.com/grafana/k8s-monitoring-helm/issues/773" }
gharchive/issue
Allow tuning of .Values.configValidator.pullPolicy It would be really nice to be able to tune the pullPolicy of the configValidator pod. Just adding this to the values {{- with .Values.configValidator.nodeSelector }} and tweak the template validate-configuration.yaml this feature will be going away in v2. So I'd say either turn it off with configValidator.enabled=false. otherwise, if you want to put together a PR, I'd be happy to take a look and merge it in. It does not worth if it's going away. Is there an ETA for V2? Thanks Aiming for a release probably near the end of November, but subject to change.
2025-04-01T06:38:52.399852
2024-07-26T13:03:38
2432198828
{ "authors": [ "DeanHnter", "Yerkwell", "korniltsev" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6552", "repo": "grafana/pyroscope-rs", "url": "https://github.com/grafana/pyroscope-rs/issues/174" }
gharchive/issue
pprof panic on recent toolchains This simple program panics on recent toolchains use pyroscope::PyroscopeAgent; use pyroscope_pprofrs::{pprof_backend, PprofConfig}; fn main() { let pprof_config = PprofConfig::new().sample_rate(100); let pprof_backend = pprof_backend(pprof_config); let pprof_agent = PyroscopeAgent::builder("https://asd.net", "qwe") .basic_auth("xxx", "xxxx") .backend(pprof_backend) .build().unwrap(); let running_agent = pprof_agent.start().unwrap(); let (add_tag, _) = running_agent.tag_wrapper(); let _ = add_tag("connections".to_string(), 10.to_string()); let _ = add_tag("watchers".to_string(), 10.to_string()); running_agent.stop().unwrap().shutdown(); } panic: /home/korniltsev/.cargo/bin/cargo run --color=always --package pg --bin pg Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.06s Running `target/debug/pg` thread 'main' panicked at library/core/src/panicking.rs:221:5: unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed `isize::MAX` stack backtrace: 0: rust_begin_unwind at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/std/src/panicking.rs:661:5 1: core::panicking::panic_nounwind_fmt::runtime at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:112:18 2: core::panicking::panic_nounwind_fmt at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:122:5 3: core::panicking::panic_nounwind at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/panicking.rs:221:5 4: core::slice::raw::from_raw_parts::precondition_check at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ub_checks.rs:68:21 5: core::slice::raw::from_raw_parts at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ub_checks.rs:75:17 6: <pprof::collector::TempFdArrayIterator<T> as core::iter::traits::iterator::Iterator>::next at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/collector.rs:225:26 7: core::iter::traits::iterator::Iterator::fold at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/traits/iterator.rs:2587:29 8: <core::iter::adapters::chain::Chain<A,B> as core::iter::traits::iterator::Iterator>::fold at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/adapters/chain.rs:126:19 9: core::iter::traits::iterator::Iterator::for_each at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/iter/traits/iterator.rs:818:9 10: pprof::report::ReportBuilder::build at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/report.rs:110:17 11: pyroscope_pprofrs::Pprof::dump_report at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope_pprofrs-0.2.7/src/lib.rs:202:22 12: <pyroscope_pprofrs::Pprof as pyroscope::backend::backend::Backend>::add_rule at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope_pprofrs-0.2.7/src/lib.rs:180:13 13: pyroscope::pyroscope::PyroscopeAgent<pyroscope::pyroscope::PyroscopeAgentRunning>::tag_wrapper::{{closure}} at /home/korniltsev/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyroscope-0.5.7/src/pyroscope.rs:776:17 14: pg::main at ./src/main.rs:18:13 15: core::ops::function::FnOnce::call_once at /rustc/6292b2af620dbd771ebb687c3a93c69ba8f97268/library/core/src/ops/function.rs:250:5 note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. 
thread caused non-unwinding panic. aborting. Process finished with exit code 134 (interrupted by signal 6:SIGABRT) Looks like it's a problem in the pprof-rs crate, which won't be fixed anytime soon as it looks abandoned (last update was almost a year ago). As I understood from googling, the problem has always been there, but it only started panicking recently (as of Rust 1.78.0). So downgrading may be a workaround, maybe? https://github.com/tikv/pprof-rs/issues/232 I receive the same error with pyroscope for Rust on macOS on an M1: thread '' panicked at library/core/src/panicking.rs:219:5: unsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed isize::MAX stack backtrace: 0: rust_begin_unwind at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/std/src/panicking.rs:652:5 1: core::panicking::panic_nounwind_fmt::runtime at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:110:18 2: core::panicking::panic_nounwind_fmt at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:120:5 3: core::panicking::panic_nounwind at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/panicking.rs:219:5 4: core::slice::raw::from_raw_parts::precondition_check at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/ub_checks.rs:68:21 5: core::slice::raw::from_raw_parts at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library/core/src/ub_checks.rs:75:17 6: pprof::addr_validate::validate at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/addr_validate.rs:93:28 7: <pprof::backtrace::frame_pointer::Trace as pprof::backtrace::Trace>::trace at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/backtrace/frame_pointer.rs:114:17 8: perf_signal_handler at /Users/deanhunter/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pprof-0.12.1/src/profiler.rs:354:13 9: ___simple_bprintf note: Some details are omitted, run with RUST_BACKTRACE=full for a verbose backtrace. thread caused non-unwinding panic. aborting.
2025-04-01T06:38:52.420062
2019-11-25T07:47:59
527909912
{ "authors": [ "efvhi", "gretamosa", "karlie93", "manu2194", "ryantxu", "sbelondr", "speg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6553", "repo": "grafana/simple-react-panel", "url": "https://github.com/grafana/simple-react-panel/issues/8" }
gharchive/issue
Error loading: myorgid-simple-panel I read the Grafana documentation, started with this example, and am working my way towards integrating the funnel part, but I can't seem to get it working. I restarted the server, and when I tried opening this panel I got an error saying Error loading: myorgid-simple-panel. My Grafana version is 6.4.4 and my node.js version is 12.13.1. How do I solve this? My development environment is Windows x64.
Hello, try npm install before yarn build.
I have the same error. This is what I found in the browser console:
backend.js:6 Error loading panel plugin: myorgid-simple-panel TypeError: r.PanelPlugin is not a constructor at Module.eval (module.js:1) at n (module.js:1) at eval (module.js:1) at eval (module.js:1) at i (system.js:4) at system.js:4 at system.js:4 at O (system.js:4) at k (system.js:4) at system.js:4
Ok, I think you have a problem because you don't have the latest version. Therefore git pull and try again.
Hi Samuel, I've pulled the latest version and it's still not working. Still the same Grafana error, and the same console error. Have you tried building and running it on Grafana?
Yes, but I'm on Linux. You have the same mistake here: https://github.com/grafana/grafana/issues/20338 And the latest simple-react-panel pull repairs that: https://github.com/grafana/simple-react-panel/commit/ca7f48c685aa94c40eeabf63efbdf5eebe6baa14 (in src/SimplePanel.tsx and src/SimpleEditor.tsx) Did you install Grafana from the sources?
I'm on MacOS. All my files look exactly like they do in that repair. The "fixes" that the person in the thread you posted made were already implemented by me in my files. I installed Grafana through homebrew.
I just got it working. PanelPlugin should be imported from @grafana/ui and not @grafana/data. My old module.ts looked like:
import { PanelPlugin } from '@grafana/data';
import { SimpleOptions, defaults } from './types';
import { SimplePanel } from './SimplePanel';
import { SimpleEditor } from './SimpleEditor';
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setDefaults(defaults).setEditor(SimpleEditor);
My new module.ts looks the same, but with the first line replaced by import { PanelPlugin } from '@grafana/ui'.
Module '"../node_modules/@grafana/ui"' has no exported member 'PanelPlugin'. ??
I modified the module.ts according to your suggestion and it's still not working. When I run 'yarn dev' I get the error "@grafana/ui has no exported member 'PanelPlugin'".
Sorry guys, I forgot to share my full module.ts file. You need to add // @ts-ignore on the first line of your module.ts file. So the final version looks like:
// @ts-ignore
import { PanelPlugin } from '@grafana/ui';
import { SimpleOptions, defaults } from './types';
import { SimplePanel } from './SimplePanel';
import { SimpleEditor } from './SimpleEditor';
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setDefaults(defaults).setEditor(SimpleEditor);
That should solve your problem. Don't ask me why that works though.
I also can't get this to work. The plugin shows up in the panel list, but clicking on it gives the error loading: myorgid-simple-panel. The console complains:
keybindingSrv.ts:20 Error loading panel plugin: myorygid-simple-panel SyntaxError: Unexpected token '<' at eval (<anonymous>) at st (system.js:4) at system.js:4 at system.js:4 at O (system.js:4) at k (system.js:4) at system.js:4
I solved the problem by upgrading Grafana to 6.5.x. In any case, there are a lot of mismatches between alpha plugins (i.e. the piechart alpha panel) in Grafana core and this template. I think it should be reviewed to propose a unified usage of the @grafana libraries.
If this still exists... post again -- my guess is the plugin was not built.
2025-04-01T06:38:52.438676
2017-11-20T02:33:44
275220435
{ "authors": [ "bradleyfalzon", "coveralls", "keizo042" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6554", "repo": "grafov/m3u8", "url": "https://github.com/grafov/m3u8/pull/100" }
gharchive/pull-request
Add func (seg MediaSegment)String() related #98 Coverage decreased (-5.05%) to 66.278% when pulling 178325b3c3cfb74ad192bc583db21a09792757f0 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master. I'm concerned that we'd now have two different methods to write a segment. Can we not move the following to an unexported function, which takes a *m3u8.MediaSegment and *bytes.Buffer and writes to the provided buffer? https://github.com/grafov/m3u8/blob/master/writer.go#L483-L600 As long as performance isn't drastically affected. Coverage decreased (-4.7%) to 66.589% when pulling 9177d906dd8eefa5c7852b4d3725e4091a93af94 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master. Coverage decreased (-0.5%) to 70.83% when pulling dde4c977e438e088aad4f7f4c85d5c0623ef3e24 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master. @bradleyfalzon Thank you for your review. I'd fix that the functions writing XXX use provided buffer and replace duplicated code to unexported function. Thanks @keizo042 that change is better, but there's still some duplication of the if statements. We've created a few new unexported functions, which is good, but I'd think there should just be one function that both MediaSegment.String() and *MediaPlaylist.Encode() call. @bradleyfalzon I'd think there should just be one function that both MediaSegment.String() and *MediaPlaylist.Encode() call. Unless there's a specific reason we can't do that? I agree but there is two difference. in (p *MediaPlaylist) String, writing m3u8.Map and m3u8.Key depends on previous media segments. but in the function I request, I think it is good to write buffer if they exist. In addition, (p *MediaPlaylist) String() have previous durations cache. but (seg MediaSegment) String() is stateless. I'd like to remove caching procedure in (seg MediaSegment) String()`. so it needs writeKey and writeMap. if I shoud not break order of segments attribute, writeSCTE also. I feel need to create unexported function as rest part of writing media segment in order to remove duplicated code. I don't have good idea how it maanges caching yet. depends on previous media segments. Ah yes, I see. This is unfortunate. have previous durations cache Yeah, so this is so for very large playlists, we don't continue to call the intensive strconv.FormatFloat. I then understand why you've chosen this method, but I do prefer if we could remove the duplication completely. I don't mind duplicate code, but there's a lot of logic here that's being duplicated and that's what concerns me. What if there was a function like // writeSegment writes a string representation of seg to buf. // // durationCache is required to reduce the number of calls to repetitive strconv formats. // If playlist is non-nil, additional context is derived from the playlist. func writeSegment(seg MediaSegment, buf bytes.Buffer, durationCache map[float]string, playlist *MediaPlaylist, buf bytes.Buffer) Then the same if statements that used information from the playlist would first check if playlist is non-nil. -if p.Map == nil && seg.Map != nil { +if p != nil && p.Map == nil && seg.Map != nil { Or similar? I'm not 100% for my suggestion, just asking your thoughts. That's good idea. I'd like to show all infomation when the context is not provided. I think better like this. func (seg MediaSegment) write(buf bytes.Buffer, p *MediaPlaylist, durationCache map[string]float ) { ... 
if p != nil { if p.Map == nil && seg.Map != nil { writeMap(buf, seg.Map) } } else { writeMap(buf, seg.Map) } ... if p != nil { // original caching and conversion } else { buf.WriteString(strconv.FormatFloat(seg.Duration, 'f', 3, 32)) } ... } Coverage increased (+0.3%) to 71.644% when pulling 625523432ce599f5375de7ca5912c44bd8c72db0 on keizo042:media_segment_string into d137fcd412b91fee5939a56a9d0d5a83c81d39d3 on grafov:master. This is looking good to me I think. Could you run the benchmarks before and after to check for performance regressions? benchmark resutls Env CentOS 7.4 go version go1.8.3 linux/amd64 result [m3u8]$ go test -bench=. BenchmarkDecodeMasterPlaylist-12 50000 29740 ns/op BenchmarkDecodeMediaPlaylist-12<PHONE_NUMBER>0 ns/op BenchmarkEncodeMasterPlaylist-12 1000000 1426 ns/op BenchmarkEncodeMediaPlaylist-12<PHONE_NUMBER> ns/op PASS ok github.com/grafov/m3u8 7.455s [m3u8]$ git checkout media_segment_string Switched to branch 'media_segment_string' [m3u8]$ go test -bench=. BenchmarkDecodeMasterPlaylist-12 50000 29644 ns/op BenchmarkDecodeMediaPlaylist-12<PHONE_NUMBER>8 ns/op BenchmarkEncodeMasterPlaylist-12 1000000 1420 ns/op BenchmarkEncodeMediaPlaylist-12<PHONE_NUMBER> ns/op PASS ok github.com/grafov/m3u8 7.569s only Encode MediaPlaylist benchmark [m3u8]$ git checkout master Already on 'master' m3u8]$ go test -bench=BenchmarkEncodeMediaPlaylist BenchmarkEncodeMediaPlaylist-12<PHONE_NUMBER> ns/op PASS ok github.com/grafov/m3u8 2.000s [m3u8]$ git checkout media_segment_string Switched to branch 'media_segment_string' [m3u8]$ go test -bench=BenchmarkEncodeMediaPlaylist BenchmarkEncodeMediaPlaylist-12<PHONE_NUMBER> ns/op PASS ok github.com/grafov/m3u8 2.131s [m3u8]$ well... I think we prefer that peformance regression is under 0.1sec. I take inlining writeSCTE and invesitage in detail.
2025-04-01T06:38:52.477164
2016-01-22T07:55:44
128103281
{ "authors": [ "edenman", "emartynov", "grandstaish" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6555", "repo": "grandstaish/DataParcel", "url": "https://github.com/grandstaish/DataParcel/issues/1" }
gharchive/issue
Question: would it be possible to use extensions Having annotation is quite small code change. But would it be possible to have extension which accepts an object and saves/load it from parcel. At least current API could be improved with extension to avoid using util classes What do you mean by extensions? The next feature I'm thinking of adding is a way for apps to customize how a type should be parceled. E.g. by default a date is added to a parcel via serialization, but a more efficient way would be to read/write a long value directly to the parcel. Would this cover what you are after? I was thinking about better API. Instead of: val example = Example(42) val parcel = ExampleParcel.wrap(example) // e.g. use in a bundle someBundle.putParcelable("example", parcel) something like: val example = Example(42) example.putToBundle(someBundle) Oh extension methods. I'm not sure how you can do this since I'm generating java code. Do you know if it's possible? I've posted a question on the Kotlin forums. https://discuss.kotlinlang.org/t/annotating-static-java-methods-so-kotlin-can-pick-them-up-as-extension-functions-of-a-type/1431 Unfortunately this is not supported by Kotlin yet (see linked thread) Actually, I may have dismissed this too early. I'll investigate generating a kotlin class with the extension methods alongside the other generated classes. Not sure it's possible yet though. Could also do this with a paperparcel-kotlin library that is just a thin shim on top of paperparcel that adds the extension methods. That is a good idea. I wanted to try it again after this pull request is released and see if the problem is fixed first, because it sounds like it should fix what I was seeing. :+1: Closing due to new APIs replacing the need for extensions. The wrappers are more hidden to the library consumer now.
2025-04-01T06:38:52.484541
2018-04-02T16:36:17
310533981
{ "authors": [ "kikoseijo", "timsuchanek" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6556", "repo": "graphcool/graphql-playground", "url": "https://github.com/graphcool/graphql-playground/issues/629" }
gharchive/issue
All get lost on updates In every update all I had saved its lost. Is there I way I can recover it all? or don't overwrite by updates, its a nightmare!!!!! Thanks @kikoseijo ! We did put a warning into the release notes about exactly this. Did this happen to you through auto update? You can install an older release (<1.5) and all the data should still be there! Hey @timsuchanek, Yes its the auto update app for Mac, been happening on the last 3-4 updates, yes. Lets hope it will settle eventually... thanks for support,
2025-04-01T06:38:52.487982
2018-03-27T07:31:06
308852210
{ "authors": [ "aogriffiths" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6557", "repo": "graphcool/graphql-yoga", "url": "https://github.com/graphcool/graphql-yoga/issues/229" }
gharchive/issue
custom scalar Hi All This is more of a question than an issue. How do I create a custom scalar with graphql-yoga? I have attempted to use GraphQLScalarType from 'graphql' along the following lines:
import { GraphQLScalarType } from 'graphql'
//...
const MyScalar = new GraphQLScalarType({
  //...
})
//...
const resolvers = {
  Query: {
    //...
  },
  MyScalar: MyScalar
}
But the scalar value returns as null in all queries. I made it work with something along the lines of this:
const NLString = new GraphQLScalarType({
  name: 'NLString',
  description: 'New line terminated string',
  // invoked to parse client input that was passed through variables.
  // takes a plain JS object.
  parseValue(variable) {
    return variable.replace(/\n$/, "")
  },
  // invoked to parse client input that was passed inline in the query.
  // takes a value AST.
  parseLiteral(literal) {
    return literal.value.replace(/\n$/, "")
  },
  // invoked when serializing the result to send it back to a client.
  serialize: function(value) {
    return value + "\n"
  }
})
//...
const resolvers = {
  Query: {
    //...
  },
  NLString
}
Also, for completeness, here are some example GraphQL queries:
query {
  post(title:"example\n"){
    title
    body
  }
}
triggers parseLiteral(literal) with literal.value="example\n"
query Posts($title: NLString) {
  post(title:$title){
    title
    body
  }
}
with:
{ "title": "example\n" }
triggers parseValue(variable) with variable="example\n"
2025-04-01T06:38:52.514757
2018-09-09T05:18:53
358351026
{ "authors": [ "deniszh", "piotr1212", "rmrf" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6558", "repo": "graphite-project/carbon", "url": "https://github.com/graphite-project/carbon/issues/810" }
gharchive/issue
carbon-cache memory leak when put under LB Recently we put our graphite service under a load balancer, and since then we are seeing lots of log lines like the ones below. The IP addresses here are from LB connections. The carbon-cache process memory increases slowly and finally OOMs; we suspect the LB's short-lived connections are making carbon-cache leak memory.
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with <IP_ADDRESS>:41731 lost: Connection to the other side was lost in a non-clean fashion.
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with <IP_ADDRESS>:8879 lost: Connection to the other side was lost in a non-clean fashion.
09/09/2018 05:06:34 :: [listener] MetricLineReceiver connection with <IP_ADDRESS>:42744 lost: Connection to the other side was lost in a non-clean fashion.
@rmrf : It can be non-related; carbon memory can grow above the limit if not configured properly. What's your metric flow? How many carbon-caches are you running? Could you please share your carbon config? Are you sure it is caused by the lost connections? Can you reproduce this on a carbon behind the LB without any actual metrics? Which Twisted and Python versions are you using?
2025-04-01T06:38:52.523957
2019-01-08T12:43:02
396890432
{ "authors": [ "adamboutcher", "deniszh", "piotr1212", "ploxiln" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6559", "repo": "graphite-project/graphite-web", "url": "https://github.com/graphite-project/graphite-web/pull/2409" }
gharchive/pull-request
Remove /opt/graphite prefix and use setuptools See https://github.com/graphite-project/carbon/pull/835 for reasoning I'm not finished yet but made some changes to the docs which I would like to get some feedback on. I've tried to simplify the docs; IMO there were too many separate pages which you had to jump back and forth between. I've changed the default install (settings.py) so that running collectstatic is not needed. The static files can be served directly from the app with whitenoise. Serving from whitenoise should be fast enough for most installations. This eliminates the need for configuring the static dir in the webserver (simplifies installation). From what I've read, the whole purpose of collectstatic is for organisations which run multiple Django apps and have separated their static files from code (in repo), so that they can update static files without having to deploy code and vice versa. As graphite's static files haven't changed in years and they are in the code repo, I don't see a point in requiring collectstatic in the default install. Users can still run collectstatic if they want/need. Please ignore the GRAPHITE_ROOT commit, I'll remove it later. I think a rebase went a bit wrong, you ended up with a copy of the commit "fix dashboard graph metric list icon paths with URL_PREFIX" from the master branch in master: 0a037db4b2d864734e14dd6302bc71194f53e8d3 in this branch: 1ba4da55c08035cccfcdaae2220f8d384dbd1929 I think I merged instead of rebased. Anyway, cleaned up now. All looks good to me. I had another thought about storage dirs ... I think the original idea behind using /opt/graphite is those storage and log dirs, which would be awkward in the python site-packages directory. Maybe the thing to do is divorce them from the graphite application root, and default to /opt/graphite/storage and /opt/graphite/log regardless of the install prefix? And not mention them in setup.py at all? I suppose the downside is they would not be created by install. Just an idea - I suppose I always customize these dirs anyway. Good point. I'll have a look. But I'm busy at the moment with higher-priority stuff. This is a big painful change, but I think it's still relatively important. Just sayin' to appease the stale bot :) If this will fix the pip install, can we get it merged? Unfortunately, it's not that easy. That would break backward compatibility, and need more changes in carbon. Also, not sure if that fixes the issue too :/
2025-04-01T06:38:52.531787
2023-07-03T05:47:39
1785384176
{ "authors": [ "azf20", "huazhuangnan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6560", "repo": "graphprotocol/graph-node", "url": "https://github.com/graphprotocol/graph-node/issues/4732" }
gharchive/issue
[Bug] where or operation error structure Bug report Structure ... where: { or: { xxx_contains_nocase: "xxx", xxx_contains_nocase: "xxx"}} ... query is not work ... or: { xxx_contains_nocase: "xxx", xxx_contains_nocase: "xxx"} ... query work Relevant log output Invalid value provided for argument `where`: Object({\"or\": Object({\"name_contains_nocase\": String(\"xx\"), \"symbol_contains_nocase\": String(\"xx\")})}) IPFS hash No response Subgraph name or link to explorer https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2/graphql?query=query+MyQuery+{ ++tokens(where%3A+{or%3A+{name_contains_nocase%3A+"xx"%2C+symbol_contains_nocase%3A+"xx"}})+{ ++++name ++++symbol ++} } Some information to help us out [ ] Tick this box if this bug is caused by a regression found in the latest release. [ ] Tick this box if this bug is specific to the hosted service. [X] I have searched the issue tracker to make sure this issue is not a duplicate. OS information Windows hi @huazhuangnan I think you need the following query: query MyQuery { tokens( where: {or: [{name_contains_nocase: "xx"}, {symbol_contains_nocase: "xx"}]} ) { name symbol } }
2025-04-01T06:38:52.533873
2022-11-02T14:56:51
1433295299
{ "authors": [ "oleksandrmarkelov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6561", "repo": "graphprotocol/indexer", "url": "https://github.com/graphprotocol/indexer/issues/532" }
gharchive/issue
Not populated metrics for indexer service, version 0.20.4 Some metrics are missing (not populated) from the indexer service, such as indexer_service_queries_ok or indexer_service_queries_total. Version: graphval@node023:~/indexer/packages/indexer-cli/bin$ graph-indexer-service --version 0.20.4 However, I'm not sure if MIP sends queries exactly to my indexer. The issue was solved once queries started to arrive at the indexer service.
2025-04-01T06:38:52.544250
2023-04-21T19:14:24
1678977005
{ "authors": [ "tilacog" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6562", "repo": "graphprotocol/indexer", "url": "https://github.com/graphprotocol/indexer/issues/651" }
gharchive/issue
Add a protocolNetwork column to Rules and Actions table Depends on #650. Create a migration with these steps: Alter the tables: indexingRules and actions: add protocol_network column, which defaults to null Resolve every row protocol_network (There can be no nulls left behind!) This step can be difficult because we need to consider edge cases, such as when an indexer has allocations in the same subgraph for both layers. derive protocol_network for every rule and action (only need to do for status = queued or approved) gather rules, active allocations (across protocol networks), and actions: matching rules to active allocations to identify the network Alter the tables again to add a NOT NULL Add the protocolNetwork field to Rules and Actions ORM models. Also, add it to non-ORM types, like SubgraphDeployment. At this point, the compiler will surface every usage of the new field, we might make some wild discoveries. Update the indexer-cli sub-commands to require a protocol network identifier. indexingRules add the option to specify the protocol network (optional). If not specified, the value will resolve to the default rule example: indexer rules set <DeploymentId> protocolNetwork homestead allocationAmount 10000 actions update actions queue commands to require protocol network example: indexer actions queue allocate QmYN4ofRb5CUg1WdpLhhNTVCuiiAt29hBKGjTnnxYh9zYt homestead 1000 allocations update create command indexer allocations create <deployment-id> <protocol-network> <amount> <index-node> NOTE: validate that allocation ids are unique per-protocol network; if not, we may need to require protocol-network for allocations close and allocations reallocate and allocations get Update get command support filtering by protocol_network disputes Update get command Support filtering by protocol_network cost OPEN QUESTION: do we continue to use just deployment_id as the primary key, or do we also update to include the network? To state that another way: do we need cost models for each deployment to be different per network? If we do: we’ll need to update the set, get, and delete commands also to require protocol_network arg Fixed by #668
2025-04-01T06:38:52.558925
2022-05-10T16:18:33
1231400326
{ "authors": [ "chadian" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6563", "repo": "graphql-mocks/graphql-mocks", "url": "https://github.com/graphql-mocks/graphql-mocks/issues/169" }
gharchive/issue
Use lodash-es Instead of using individual lodash packages using lodash-es would be a better "es module" player and work with browser playground environments like google's playground elements. Also being an es module package would allow any build tools/bundlers to treeshake as needed. Using ramda instead since it supports both es and cjs modules. Tested with playground-elements and ramda is good to go. The original motivation behind this issue was to move toward a more compatible module that could support the "es-only" world without bundling, etc, and ramda does the trick.
2025-04-01T06:38:52.592948
2014-05-07T01:45:12
32948914
{ "authors": [ "whit537" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6564", "repo": "gratipay/gratipay.com", "url": "https://github.com/gratipay/gratipay.com/issues/2356" }
gharchive/issue
migrate README docs to Sphinx/RTD Following on from #1313, I'd like to continue migrating to Sphinx for documentation, turning next to the installation and configuration instructions in the README. The purpose is to give ourselves a more powerful documentation system than GitHub markdown files and wiki pages. Sphinx enables us to practice literate programming, pulling documentation from docstrings in our source code. I'm using Sphinx successfully on these other libraries: algorithm.py, dependency_injection.py,environment.py, filesystem_tree.py, and postgres.py. We're also moving to Sphinx for the Aspen docs. There's a sphinx-autobuild package that looks promising in terms of streamlining the doc workflow (auto-rebuild and live-reload). There's some potential hiccups, though, around both Mac OS and VirtualBox (i.e., Vagrant). See https://github.com/GaretJax/sphinx-autobuild/issues/6. So we'll have to figure out the best workflow. Want to back this issue? Post a bounty on it! We accept bounties via Bountysource. Closing in light of our decision to shut down Gratipay. Thank you all for a great run, and I'm sorry it didn't work out! 😞 💃
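For reference, the docstring-pulling side described above is just Sphinx's autodoc extension. A minimal conf.py sketch follows; the project name and the module referenced afterwards are illustrative assumptions, not the project's actual docs configuration.

# conf.py -- minimal Sphinx configuration sketch
extensions = [
    'sphinx.ext.autodoc',   # pull API documentation out of docstrings
    'sphinx.ext.viewcode',  # link the rendered docs back to the source
]
project = 'gratipay.com'
master_doc = 'index'

An .rst page can then use a directive such as .. automodule:: gratipay (with :members:) to render whatever docstrings the module defines.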
2025-04-01T06:38:52.611575
2022-09-09T13:43:37
1367850536
{ "authors": [ "graviteeio", "jhaeyaert" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6565", "repo": "gravitee-io/gravitee-policy-json-xml", "url": "https://github.com/gravitee-io/gravitee-policy-json-xml/pull/25" }
gharchive/pull-request
chore: cci update fix circleci config :tada: This PR is included in version 1.2.0 :tada: The release is available on: 1.2.0 GitHub release Your semantic-release bot :package::rocket:
2025-04-01T06:38:52.660351
2011-11-07T19:18:29
2166223
{ "authors": [ "brson", "elly" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6566", "repo": "graydon/rust", "url": "https://github.com/graydon/rust/issues/1150" }
gharchive/issue
rustc: add manifest so we can install with cargo Depends on #1149 Is this still part of the cargo plan? Do you still want it merged? Yes, this is dead now.
2025-04-01T06:38:52.720614
2018-06-14T07:44:42
332286898
{ "authors": [ "joppino", "rkaw92" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6567", "repo": "greatcare/pm2-zabbix", "url": "https://github.com/greatcare/pm2-zabbix/issues/27" }
gharchive/issue
Spaces in keys break zabbix-sender Here is an example. I'm simulating the parsing of the input from pm2-zabbix --monitor, which produces a key-value stream which is parsed by zabbix-sender and sent. I'm fetching just one row from this stream, and sending to zabbix: echo "xxxxx-be-preprod01 pm2.processes[xxxxx Frontend-0,cpu] 90" | zabbix_sender -vv --config /etc/zabbix/zabbix_agentd.conf -s hostname --input-file - zabbix_sender [25699]: DEBUG: answer [{"response":"success","info":"processed: 0; failed: 1; total: 1; seconds spent: 0.000028"}] info from server: "processed: 0; failed: 1; total: 1; seconds spent: 0.000028" So, if there are spaces in pm2 process, the server fails to process the monitor part. The "discover" part is working, though, because if we enclose the key with double quotes, it goes through: echo "xxxxxx-be-preprod01 "pm2.processes[xxxxx Frontend-0,cpu]" 90" | zabbix_sender -vv --config /etc/zabbix/zabbix_agentd.conf -s hostname --input-file - zabbix_sender [25131]: DEBUG: answer [{"response":"success","info":"processed: 1; failed: 0; total: 1; seconds spent: 0.000083"}] info from server: "processed: 1; failed: 0; total: 1; seconds spent: 0.000083" This is going to be addressed in PR #31.
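For illustration, here is a small Python sketch of the quoting workaround described above; it is not the plugin's actual code, just one way to wrap a key that contains a space in double quotes before feeding it to zabbix_sender over stdin (the host, key and config path are the ones from the example).

import subprocess

def send_item(host, key, value):
    # Quote the key so zabbix_sender treats "pm2.processes[xxxxx Frontend-0,cpu]"
    # as a single key even though the process name contains a space.
    line = f'{host} "{key}" {value}\n'
    subprocess.run(
        ['zabbix_sender', '--config', '/etc/zabbix/zabbix_agentd.conf', '--input-file', '-'],
        input=line.encode(),
        check=True,
    )

send_item('xxxxx-be-preprod01', 'pm2.processes[xxxxx Frontend-0,cpu]', 90)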
2025-04-01T06:38:52.790641
2022-05-26T14:46:51
1249670837
{ "authors": [ "greenpau", "jariz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6568", "repo": "greenpau/caddy-security", "url": "https://github.com/greenpau/caddy-security/issues/116" }
gharchive/issue
question: documentation regarding API keys is a bit sparse I added an API key from the auth portal settings page at /settings, but now what? I passed it as a bearer token a la Authorization: Bearer API_KEY, but that does not seem to work. The only doc page that mentions API keys is this one: https://authp.github.io/docs/authorize/basic_api_key_auth I have added with api key auth portal myportal realm local to my policy. Is there something obvious I'm missing here? Or am I just not understanding what purpose API keys are supposed to serve? My goal is to make a never expiring API key that I can use to give external services access to my services behind the authorize directive. Thanks in advance, this project is great @greenpau. @jariz , please share your config. it is not “authorization Bearer”. Rather, pass X-API-Token header with the value of the key you’d created @jariz , let’s keep it open. Will address it next week @jariz , please help promote this project … if you like of course 😃
2025-04-01T06:38:52.842389
2011-06-08T13:59:18
1024044
{ "authors": [ "davidmathei", "pothibo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6569", "repo": "gregbell/active_admin", "url": "https://github.com/gregbell/active_admin/issues/174" }
gharchive/issue
ActiveAdmin::Comment doesn't pick up ActiveRecord::Base.table_name_prefix (Rails 3.0.7, ActiveAdmin 0.2.2) The table_name property of ActiveAdmin::Comment is set before the configuration in application.rb is read and before ActiveRecord::Base is able to deal out pre- and suffixes for table names. Because of the load order of gems and application configuration this patch does not work:
# comments/comment.rb
class Comment < ActiveRecord::Base
  # self.table_name = "active_admin_comments"
  self.table_name = ActiveRecord::Migrator.proper_table_name("active_admin_comments")
  # ... more code ...
end
A simple workaround is:
# config/initializers/some_initializer.rb
ActiveAdmin::Comment.table_name = ActiveRecord::Migrator.proper_table_name(ActiveAdmin::Comment.table_name)
Is it possible to load ActiveAdmin::Comment more lazily? Then the first solution would work. +1 Just as a side note, is it possible to completely disable the comments when using rails g active_admin:install ? Because some of the solutions here only work fine if you don't have code already; if you're adding ActiveAdmin after you created your Rails project you will be faced with a namespace collision (Comment).
2025-04-01T06:38:52.868525
2018-07-21T15:11:15
343333262
{ "authors": [ "gregsdennis", "rtrianiguerra" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6570", "repo": "gregsdennis/Manatee.Trello", "url": "https://github.com/gregsdennis/Manatee.Trello/issues/243" }
gharchive/issue
Refresh methods hangs with WPF application but not with .Net Core. Describe the bug To Reproduce Hi! Playing around with your library and cannot figure out why Refresh method hangs when called from WPF app and not from .NET Core app. I have 3 projects in a soluction: TrelloHelper.Library: A .NET Standard Library (2.0), referencing Manatee.Trello 3.2.1 and dependencies (Json and WebApi) TrelloHelper.ConsoleApp: A .NET Core 2.0 Console, referencing TrelloHelper.Library TrelloHelper.WinApp: A .NET Framework 4.6.1 WPF Application referencing TrelloHelper.Library At the TrelloHelper.Library I have a simple method to connect to Trello, get a board and enumerate it's lists. Code bellow: public void TestTrelloConnection(string output) { string s; var stb = new StringBuilder(); var factory = new TrelloFactory(); var board = factory.Board(boardDEV); board.Lists.Refresh(); s = $"Board: {board}"; stb.AppendLine(s); Console.WriteLine(s); foreach (var list in board.Lists) { s = $" {list}"; stb.AppendLine(s); Console.WriteLine(s); } stb.AppendLine(s); Console.WriteLine(); s="Connection OK!"; stb.AppendLine(s); Console.WriteLine(s); output = s; } At the TrelloHelper.ConsoleApp, I have the following code, that works perfectily: static void Main(string[] args) { string s = "" ; TrelloHelperLib.TrelloHelper tr; tr = new TrelloHelperLib.TrelloHelper(); Console.WriteLine("Connecting to trello..."); tr.TestTrelloConnection(s); Console.WriteLine(s); Console.ReadLine(); } At TrelloHelper.WinApp, I have a button that calls the following method. But it does not work, hanging at the line "board.Lists.Refresh();" inside the method TestTrelloConnection : private void btTestarConexao_Click(object sender, RoutedEventArgs e) { string s = ""; tr.TestTrelloConnection(s); MessageBox.Show(s); } I don't know if it is a bug or if I'm doing something wrong using async/await in WPF Application ... Expected behavior I expected that the behavior would be the same at the .net core console and the .net wpf app. Desktop (please complete the following information): OS: Windows 10 Pro .Net Target 4.6.1, Core 2.0 e Standard 2.0 Version: Manatee.Trello 3.2.1, ManateeJson 2.3.0 and Manatee WebApi 2.0.3 (all using .net standard 2.0) Visual Studio Community 2017 15.7.3 Additional Information. Your project is awesome. Thanks! Thanks for reporting this. I can see that one problem is the async/await usage, but surprised that the console app works. I'm on my phone right now, though. I'll update later today after I can do some testing. @rtrianiguerra does this help? Are you still experiencing issues? Yes, @gregsdennis! You put me on the track! I've gone back to the C#'s book to review tasks/await/async, strugglered a little, but got it working! Thanks a lot! Now I'll begin to make something useful to publish on GitHub! 
These are the revised methods: Library public static async Task<string> TestTrelloConnection() { string s; var stb = new StringBuilder(); var factory = new TrelloFactory(); var board = factory.Board(boardDEV); await board.Lists.Refresh(); s = $"Board: {board}"; stb.AppendLine(s); foreach (var list in board.Lists) { s = $" {list}"; stb.AppendLine(s); } stb.AppendLine(s); s = "Connection OK!"; stb.AppendLine(s); return stb.ToString(); } ConsoleApp: class Program { static void Main(string[] args) { try { Test().Wait(); } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Console.ReadLine(); } static private async Task Test() { Console.WriteLine("Connecting Trello..."); var s = await TrelloHelperLib.TrelloHelper.TestTrelloConnection(); Console.WriteLine(s); } } WPFApp : private async void btTestarConexao_Click(object sender, RoutedEventArgs e) { var s = await Task.Run(() => TrelloHelperLib.TrelloHelper.TestTrelloConnection()); MessageBox.Show(s); } That's a lot better. Just remember that async void should only ever be used for event handlers. Outside of that, you should always return a task.
2025-04-01T06:38:52.870826
2019-09-27T11:32:57
499393635
{ "authors": [ "gracoes", "gregurco" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6571", "repo": "gregurco/jobeet-tutorial", "url": "https://github.com/gregurco/jobeet-tutorial/issues/36" }
gharchive/issue
Day 7 - Wrong variable name in List Pagination paragraph Close to the end of the paragraph there is this text. We added page in the URL path and defined default value, in case when page is not defined in the URL (ex: /category/design). Variable $path is added in arguments of the method. It will be injected automatically by name in path. Also we need parameter max_jobs_on_category and getParameter methods to access it. That’s why this controller extends now Symfony\Bundle\FrameworkBundle\Controller\Controller but not Symfony\Bundle\FrameworkBundle\Controller\AbstractController. Shouldn't it be the $page variable? @gracoes yes, you are right. Fixed in PR #40. Thank you for reporting :+1:
2025-04-01T06:38:52.872821
2019-05-21T13:59:14
446636362
{ "authors": [ "greim", "queejie" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6572", "repo": "greim/html-tokenizer", "url": "https://github.com/greim/html-tokenizer/issues/4" }
gharchive/issue
Another Strange Problem I'm not sure why, but the following line: const defaultEntityMap = require('./default-entity-map'); doesn't work when compiling with Angular/Webpack. The implicit .json extension isn't recognized, and I'm not sure why, since it is using node. Would it be possible to add the .json to your code base, by any chance? I'm trying to avoid having a separate copy of the code just to allow this to work. Thanks very much for this module. Hrm, using this project with WebPack has been working for me. Looking at my config, I have .json in my resolve.extensions: extensions: ['.js', '.json', '.ts', '.tsx'], Have you tried adding that?
2025-04-01T06:38:52.875155
2013-11-27T23:22:24
23421702
{ "authors": [ "a-bash", "grempe" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6573", "repo": "grempe/secretsharing", "url": "https://github.com/grempe/secretsharing/issues/1" }
gharchive/issue
Published to Rubygems. I'm very impressed with this implementation and looking forward to using it my own project. I notice that you have contributed some great code clean up to the originally published gem. Will these changes be contributed back to a new version of the published 'secretsharing' gem or will they remain on your fork? I'm just trying to figure out the best way to get your latest code into my project. Thanks! Andy FYI, version 1.0.0 of the gem has been released to rubygems. Please let me know if you see any issues. https://rubygems.org/gems/secretsharing :-)
2025-04-01T06:38:52.876314
2022-11-24T13:49:59
1463379223
{ "authors": [ "grepthat", "victorjarlow" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6574", "repo": "grepthat/libOpenDRIVE", "url": "https://github.com/grepthat/libOpenDRIVE/pull/51" }
gharchive/pull-request
Bug fix where isnan template specifier causes compilation errors Bug fix where isnan template specifier causes compilation errors, fixed it by removing the template specifier. Thanks for spotting! I didn't get an error compiling on clang v12.0 on MacOS.
2025-04-01T06:38:52.893731
2018-06-27T12:27:04
336207469
{ "authors": [ "aaroncox", "liondani" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6575", "repo": "greymass/eos-voter", "url": "https://github.com/greymass/eos-voter/issues/162" }
gharchive/issue
Give the ability to bid on EOS premium names (Name Action) It would be fantastic to let your userbase bid on premium names. It would add a lot of value to EOS to give MORE users the opportunity to bid on names. I am sure many who would not otherwise vote will do so if they find it easy from an already installed wallet. Tentatively slating this for the 0.6.x milestone, a couple of versions out still. There's been an incredible demand for account creation and account permissions, which will be the two builds between now and then.
2025-04-01T06:38:52.923066
2023-08-21T21:24:06
1860174322
{ "authors": [ "joshdr83", "kmax12" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6576", "repo": "gridstatus/gridstatusio", "url": "https://github.com/gridstatus/gridstatusio/issues/16" }
gharchive/issue
Timestamp issue on ERCOT SPP data pull? When I run the example "Retrieving data in local time", I'm still seeing the timestamp in UTC... It also looks like it isn't lining up with the ERCOT data that I am finding on the website. Or is the data 4 hours off?
It looks like the time zone conversion isn't happening correctly and the date is staying in UTC. What version of the gridstatus client library are you using?
>>> import gridstatusio
>>> import pandas
>>> gridstatusio.__version__
'0.4.0'
>>> pandas.__version__
'2.0.1'
Here is what I have up and running in this environment.
Here is what I have:
Thanks! I was able to reproduce and fix the error when I downgraded to that version of pandas. Just released version 0.5.0 of gridstatusio. If you upgrade to that, everything should work. Let me know if you see any other problems.
Hi Max, I seem to be getting a similar timestamp conversion error with the "ercot_fuel_mix" data after updating to the latest packages? I see solar production that appears to be indexed via UTC time even when asking for local? Thanks!
Hi Joshua - I'm not able to reproduce. Can you share the code you are using and the versions of pandas and gridstatus that you are using?
Right, but it looks like the solar doesn't start producing until late in the afternoon, well after the sun has come up, and keeps on producing well into the night after the sun has gone down?
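As a sanity check on the local-time behaviour discussed above, here is a minimal pandas-only sketch of what the conversion should look like for ERCOT data; the column name interval_start_utc is an assumption for illustration, not necessarily the exact field the API returns.

import pandas as pd

# Hypothetical frame standing in for an API result that is still in UTC.
df = pd.DataFrame(
    {'interval_start_utc': pd.date_range('2023-08-22', periods=3, freq='H', tz='UTC')}
)

# ERCOT is US/Central; if the client already returned local time,
# this conversion would be a no-op apart from the timezone label.
df['interval_start_local'] = df['interval_start_utc'].dt.tz_convert('US/Central')
print(df)

If the values the client prints differ from the tz-converted column by a fixed number of hours, it is the conversion step being skipped rather than the underlying data being wrong.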
2025-04-01T06:38:52.961362
2023-09-28T13:43:34
1917586249
{ "authors": [ "Kleywalker", "grigorii-zander" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6577", "repo": "grigorii-zander/zsh-npm-scripts-autocomplete", "url": "https://github.com/grigorii-zander/zsh-npm-scripts-autocomplete/issues/7" }
gharchive/issue
Bun support Hi.. Could you add bun.sh support? https://bun.sh Hi! Sure! Will do it this weekend
2025-04-01T06:38:52.979184
2016-08-17T18:33:08
171728611
{ "authors": [ "alansouzati", "tracybarmore" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6578", "repo": "grommet/grommet-docs", "url": "https://github.com/grommet/grommet-docs/pull/83" }
gharchive/pull-request
Misspelling in AccordionPanel headings of the word "Third" thanks
2025-04-01T06:38:53.035195
2024-10-07T17:24:49
2571030758
{ "authors": [ "brandon-groundlight", "tyler-romero" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6579", "repo": "groundlight/python-sdk", "url": "https://github.com/groundlight/python-sdk/pull/260" }
gharchive/pull-request
Makes source no longer required This should in theory make v0.18 backwards compatible. I need to check in on the test runner's permissions before I can merge this in, but the branch is here to unblock internal work. Test runners are updated, should be good to go. The fact that source is optional is very unintuitive I think (and hard to determine why by reading the code). Is it possible to add a tag or annotation or something that explains it's for compatibility between version x and version y, and that in fact it will be returned in version y?
2025-04-01T06:38:53.039373
2014-10-31T11:47:33
47385796
{ "authors": [ "Ahskaniz", "smccarthy" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6580", "repo": "groupon/Selenium-Grid-Extras", "url": "https://github.com/groupon/Selenium-Grid-Extras/issues/66" }
gharchive/issue
Selenium Grid Extras not restarting after N tests Hi Dima, I'm running the latest Grid Extras (v1.7.1) in a windows vm, and in the configuration I set the vm to restart after 10 tests. Despite this, the vm is not restarting. There's no tracking information in the logs, and I accumulate 100+ sessions without a restart. It does not seem to be browser-related, because it fails for ie/chrome/firefox. Thanks in advance @Ahskaniz Is this fixed for you in Selenium-Grid-Extras 1.10.0? @smccarthy I just stopped using this feature in favor of a handmade one. @Ahskaniz Ok thanks for the quick response! Closing as we think this is working correctly. If anyone finds that this is still an issue, please open a new issue.
2025-04-01T06:38:53.060439
2024-02-13T18:33:19
2132925042
{ "authors": [ "ST-DDT", "codeGuru775", "dsyer", "ocebenzer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6581", "repo": "grpc-ecosystem/grpc-spring", "url": "https://github.com/grpc-ecosystem/grpc-spring/issues/1053" }
gharchive/issue
this dependency seems to be incompatible with new version 3.x of spring boot. The context: upgrading my app to spring boot 3.x. The bug: "grpc-spring-boot-starter" seems to not have a version compatible with spring boot 3.x as of now. And since its auto-configuration is still present in the "spring.factories" file, some of the beans for these dependencies are not being autowired after migration to spring boot 3, resulting in application start-up failures.
Could you please post the error message along with the version you are using?
I'm having a similar problem. I'm following the Getting Started Guide, but Spring Boot 3 seems to ignore @GrpcService. Probably there is some kind of auto-detection issue, as described in the 3rd task of Implementing the Service. In my pom.xml, downgrading spring-boot-starter-parent seemed to work for me, i.e. from:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.2</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
to:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.7</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
You never said which version of grpc-spring you are using. Is it possibly not the latest?
You were right, I was using 2.15.0; 3.0.0 is released and beans are autowired there. What a simple mistake, thank you!
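For reference, the upgrade the thread converges on looks roughly like this in a pom.xml. This is a sketch assuming the net.devh starter coordinates; check the project's README for the current 3.x version before using:

<dependency>
    <groupId>net.devh</groupId>
    <artifactId>grpc-spring-boot-starter</artifactId>
    <version>3.0.0.RELEASE</version>
</dependency>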
2025-04-01T06:38:53.098725
2015-04-01T21:10:58
65779867
{ "authors": [ "ctiller", "louiscryan", "soltanmm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6582", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/issues/1167" }
gharchive/issue
Metadata value; null-terminated vs. sized There's a lack of consensus on the treatment of null-bytes in metadata values, as far as I can tell. There should be documentation here clarifying how the metadata should be treated. Note that GRPC appears to Do The Right Thing™ and respect the length for the metadata value when copying here (thus allowing null-bytes in the value). That this is not written down anywhere as a guarantee isn't particularly comfortable. And then there's what to do about metadata key suffixes. The purpose of this should be documented. The HTTP2 spec forbids null as a value in header values, which is why we require that arbitrary binary sequences (including ones with nulls) be encoded as base-64 on the wire and use the '-bin' suffix on the header name. Feel free to suggest clarification to https://github.com/grpc/grpc-common/blob/master/PROTOCOL-HTTP2.md I don't think people should need to jump across repositories to get an idea of the semantics of our code. #554 should address this.
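To illustrate the '-bin' convention described above: in the Python implementation, for example, a metadata key ending in '-bin' takes raw bytes and the library base64-encodes the value on the wire. The header name and payload below are made up for illustration:

import grpc

channel = grpc.insecure_channel('localhost:50051')
# Keys ending in '-bin' may carry arbitrary bytes, including nulls;
# gRPC base64-encodes the value before it reaches HTTP/2.
metadata = (('my-trace-bin', b'\x00\x01\xffraw-bytes'),)
# e.g.: stub.SomeMethod(request, metadata=metadata)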
2025-04-01T06:38:53.104848
2021-10-25T14:08:13
1035185171
{ "authors": [ "EwanValentine", "KiranHighNote", "atulsriv", "drfloob", "nflux-pyang" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6583", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/issues/27817" }
gharchive/issue
Issue with gRPC channel class missing request method What version of gRPC and what language are you using? What operating system (Linux, Windows,...) and version? OSx Big Sur 11.5.2 What runtime / compiler are you using (e.g. python version or version of gcc) Python 3.9 What did you do? Generated a Python client using protoc-gen-grpclib_python, attempted to use the generated client:
import sys
import asyncio
sys.path.insert(0, '../clients/python')
import grpc
from proto.hot_storage_pb2 import SetRequest, GetRequest
from proto.hot_storage_grpc import HotStorageServiceStub
from google.protobuf import struct_pb2 as struct

async def main():
    with grpc.insecure_channel('<IP_ADDRESS>:9000') as channel:
        stub = HotStorageServiceStub(channel=channel)
        await stub.Set(SetRequest(key='key', values=struct.Struct(fields={'value': struct.Value(string_value='value')})))
        result = await stub.Get(GetRequest(key='key'))
        print(result)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
What did you expect to see? I expected the code snippet to call my gRPC server and return a result. What did you see instead? Instead, I got the following error:
Traceback (most recent call last):
  File "/Users/ewanvalentine/work/hot-storage-rollout/playground/test.py", line 24, in <module>
    loop.run_until_complete(main())
  File<EMAIL_ADDRESS>line 642, in run_until_complete
    return future.result()
  File "/Users/ewanvalentine/work/hot-storage-rollout/playground/test.py", line 17, in main
    await stub.Set(SetRequest(key='key', values=struct.Struct(fields={'value': struct.Value(string_value='value')})))
  File "/Users/ewanvalentine/work/hot-storage-rollout/playground/venv/lib/python3.9/site-packages/grpclib/client.py", line 881, in __call__
    async with self.open(timeout=timeout, metadata=metadata) as stream:
  File "/Users/ewanvalentine/work/hot-storage-rollout/playground/venv/lib/python3.9/site-packages/grpclib/client.py", line 853, in open
    return self.channel.request(self.name, self._cardinality,
AttributeError: 'Channel' object has no attribute 'request'
Anything else we should know about your project / environment? $ pip freeze . generates:
asyncio==3.4.3
cachetools==4.2.4
certifi==2021.10.8
charset-normalizer==2.0.7
coverage==6.0.2
Cython==0.29.24
google-api-core==2.1.1
google-api-python-client==2.27.0
google-auth==2.3.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.53.0
greenlet==1.1.1
grpcio==1.41.0
grpcio-tools==1.41.0
grpclib==0.4.2
h2==4.1.0
hpack==4.0.0
httplib2==0.20.1
hyperframe==6.0.1
idna==3.3
msgpack==1.0.2
multidict==5.2.0
mypy-extensions==0.4.3
protobuf==3.12.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
pynvim==0.4.3
pyparsing==2.4.7
python-engineio==3.12.1
requests==2.26.0
rsa==4.7.2
ruamel.yaml==0.16.13
ruamel.yaml.clib==0.2.6
six==1.16.0
typed-argument-parser==1.7.1
typing-extensions==<IP_ADDRESS>
typing-inspect==0.7.1
uritemplate==4.1.1
urllib3==1.26.7
File "/Users/ewanvalentine/work/hot-storage-rollout/playground/venv/lib/python3.9/site-packages/grpclib/client.py", line 853, in open return self.channel.request(self.name, self._cardinality, AttributeError: 'Channel' object has no attribute 'request'
The error is in grpclib, which is a separate project: https://github.com/vmagamedov/grpclib. Oh! My apologies @drfloob, silly me! How did you solve it? I got the same problem. I am also having the same issue. How was that fixed? Anyone have any contributions to the above issue?
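The traceback points at the root cause: the stub was generated for grpclib, but the channel comes from grpcio (grpc.insecure_channel), which has no request method. A minimal sketch of the grpclib-native equivalent, reusing the module paths from the original post:

import asyncio
from grpclib.client import Channel
from proto.hot_storage_pb2 import GetRequest
from proto.hot_storage_grpc import HotStorageServiceStub

async def main():
    # grpclib stubs need grpclib's own Channel, not a grpcio channel
    channel = Channel('127.0.0.1', 9000)
    stub = HotStorageServiceStub(channel)
    result = await stub.Get(GetRequest(key='key'))
    print(result)
    channel.close()

asyncio.run(main())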
2025-04-01T06:38:53.106949
2015-09-12T09:32:35
106143453
{ "authors": [ "Zerqkboo", "jtattermusch" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6584", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/issues/3336" }
gharchive/issue
grpc cpp 0_11_0 create stub error with msvc 2013 with preprocessor definition _USE_32BIT_TIME_T I built my grpc 0_11_0 with default settings under /vsprojects and got all the cpp libs. When I used it in my project with a very simple rpc server and client, I found that NewStub was created without any error messages, but when I afterwards called an rpc method, the program crashed. I tested again without the preprocessor definition _USE_32BIT_TIME_T in all /vsprojects, and then everything was ok. My environment is msvc 2013 on windows 10. #4315
2025-04-01T06:38:53.125741
2015-07-19T18:08:16
95936594
{ "authors": [ "grpc-jenkins", "larsonmpdx", "nicolasnoble" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6585", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/pull/2506" }
gharchive/pull-request
changes to allow VS solution/project generation and grpc.mak generation for c++ tests this isn't fully polished but it gets OK coverage (about 10 c++ tests are not building because of posix includes, and a couple other random ones)
todo:
- switch back to using generate_projects.py (parallel version)
- clean up c++ props and split into gtest/gflags props
- decide if building .proto files in-place with a manual script is the best solution, or if there's another way
- add documentation for grpc.mak building and overall buildgen layout
- get versions of zlib/openssl with pdb files (debug symbols) to get rid of most compile warnings
Can one of the admins verify this patch? This is ok to test. not building windows tests are below. I am putting "platform" = posix into build.json for these for now
failed test - reason:
async_streaming_ping_pong_test - posix (sys/time.h, sys/signal.h)
async_unary_ping_pong_test - posix (sys/time.h, sys/signal.h)
client_crash_test - unresolved external in grpc++_test_util (don't know why)
server_crash_test - unresolved external in grpc++_test_util (don't know why)
interop_client - posix (unistd.h)
interop_server - posix (unistd.h)
interop_test - posix (unistd.h)
qps_interarrival_test - posix (sys/time.h, sys/signal.h)
qps_openloop_test - posix (SIGPIPE, sys/time.h, sys/signal.h)
qps_test - posix (SIGPIPE, sys/time.h, sys/signal.h)
sync_streaming_ping_pong_test - posix (SIGPIPE, sys/time.h, sys/signal.h)
sync_unary_ping_pong_test - posix (SIGPIPE, sys/time.h, sys/signal.h)
C test failures/timeouts, over 3 runs (failure count - fail/timeout - name):
3 FAILED: secure_endpoint_test
3 FAILED: initial_settings_frame_bad_client_test
3 TIMEOUT: chttp2_fullstack_compression_early_server_shutdown_finishes_inflight_calls_test
3 TIMEOUT: chttp2_fullstack_disappearing_server_unsecure_test
3 TIMEOUT: chttp2_simple_ssl_fullstack_request_with_flags_test
3 TIMEOUT: chttp2_fullstack_compression_early_server_shutdown_finishes_inflight_calls_unsecure_test
2 TIMEOUT: chttp2_fullstack_compression_cancel_after_invoke_unsecure_test
3 TIMEOUT: chttp2_fullstack_graceful_server_shutdown_unsecure_test
3 TIMEOUT: chttp2_fullstack_compression_disappearing_server_test
3 TIMEOUT: chttp2_fullstack_early_server_shutdown_finishes_inflight_calls_test
3 TIMEOUT: chttp2_fullstack_compression_graceful_server_shutdown_unsecure_test
2 TIMEOUT: chttp2_fullstack_cancel_after_invoke_test
1 TIMEOUT: chttp2_fullstack_cancel_after_invoke_unsecure_test
3 TIMEOUT: chttp2_fullstack_compression_graceful_server_shutdown_test
3 TIMEOUT: chttp2_fullstack_early_server_shutdown_finishes_inflight_calls_unsecure_test
3 TIMEOUT: chttp2_fullstack_disappearing_server_test
3 TIMEOUT: chttp2_fullstack_compression_disappearing_server_unsecure_test
3 TIMEOUT: chttp2_fullstack_compression_cancel_after_invoke_test
3 TIMEOUT: chttp2_simple_ssl_with_oauth2_fullstack_request_with_flags_test
3 TIMEOUT: chttp2_fullstack_graceful_server_shutdown_test
C++ timeouts (no regular failures). I think these are to do with opening a windows socket, see #2294:
thread_stress_test
mock_test
cli_call_test
async_end2end_test
end2end_test
Alright, thank you for your hard work on this :)
2025-04-01T06:38:53.128200
2015-10-02T20:13:42
109563873
{ "authors": [ "jtattermusch" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6586", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/pull/3605" }
gharchive/pull-request
Build and run per-language containers for interop tests -- introduce per-language docker images for interop tests -- run_interop_tests.py now works in several steps: it first builds all the language-specific images, then spins up all the servers, runs all the interop tests (each in a separate container), and finally it kills the servers and cleans up the docker images it has built (it leaves the "base" images, as those will rarely change). -- added support for java client/server (the script looks for java sources in ../grpc-java relative to the grpc repo root) -- added support for the --allow_flakes flag (fixes #3581) CC @ejona86
2025-04-01T06:38:53.130193
2015-10-06T18:07:26
110066628
{ "authors": [ "murgatroid99", "tbetbetbe" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6587", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/pull/3672" }
gharchive/pull-request
Node package cleanup This change removes several files that have become irrelevant, either because they now live in other repositories (like the contents of src/node/cli and src/node/bin) or because they were completely unused and untested (like src/node/examples/stock*). In addition, src/node/examples is now confusing in relation to the root examples directory, so it is split into clearer directories: src/node/performance for performance tests, and src/node/test/math, since the math service stuff is only used in tests. It also removes some directories from package.json that are not essential to using the library. LGTM, will merge once this is updated It's updated
2025-04-01T06:38:53.132994
2017-02-17T17:51:54
208511807
{ "authors": [ "murgatroid99" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6588", "repo": "grpc/grpc", "url": "https://github.com/grpc/grpc/pull/9766" }
gharchive/pull-request
Improve Node and libuv testing and test coverage This makes a number of related changes:
- The use of the libuv iomgr implementation can now be set at compile time when building Node (instead of hardcoded into the file). That setting currently defaults to false; we should set it back to true when we are more confident in that implementation.
- The Node tests now run on Node 7 by default, instead of Node 4.
- Portability tests have been added to run the tests on Node 4 and Node 6, and with the libuv iomgr instead of the default iomgr.
- When the debug config is specified, the Node tests actually use the debug build.
- Some core tests have been modified or added to run with the libuv iomgr.
- A portability test has been added to run the core tests with libuv.
Note: The core test lb_policies_test currently fails with libuv. This test is being disabled in parallel in #9765 This also seems to fix #9668. That test is passing in this PR, at least. The test that was failing in #9726 also passes here. TSAN failure: #9124. The interop failure is an infrastructure failure. The Mac failure appears to be a fluke: I have been unable to reproduce it in thousands of runs.
2025-04-01T06:38:53.159135
2016-04-29T12:47:22
151857913
{ "authors": [ "XhmikosR", "serhiy-yevtushenko" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6589", "repo": "gruntjs/grunt-contrib-csslint", "url": "https://github.com/gruntjs/grunt-contrib-csslint/issues/72" }
gharchive/issue
Feature Request: Fail when number of warnings found exceeds certain threshold When trying to use the CSSLint plugin on an already existing project, one issue that comes up often is that certain violations are never going to be fixed; therefore, one cannot start from zero violations. In such a situation, an option to fail the build only if the number of violations exceeds a certain count (watermark) would be extremely useful. I doubt this will ever be implemented. You can make a PR and discuss the implementation there.
2025-04-01T06:38:53.176907
2015-03-02T17:45:54
59515633
{ "authors": [ "AlexChesters", "Fishrock123", "JWGmeligMeyling", "Pedder", "SudoCat", "TheBox193", "ThomasHoadley", "alphanull", "awenro", "cb1kenobi", "crimann", "cwklausing", "digitalcraftco", "ekkis", "flekschas", "greenchapter", "infinityplusone", "jlujan", "juanbrujo", "lancepadgett", "maruf89", "mikehdt", "of6", "ogmios2", "ourcore", "paulhayes", "tannerlinsley", "tbremer", "tconroy", "teejayhh", "tuffz", "vemec", "vladikoff", "yumyo" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6590", "repo": "gruntjs/grunt-contrib-watch", "url": "https://github.com/gruntjs/grunt-contrib-watch/issues/415" }
gharchive/issue
Watch Error: FSEventStreamFlushSync(): failed assertion I am getting the following error after Running "watch" task restarts:
2015-03-02 11:19 grunt[7732] (FSEvents.framework) FSEventStreamFlushSync(): failed assertion '(SInt64)last_id > 0LL'
Happens with these versions:
$ node 0.11.6 - 0.12.0
$ npm 2.5.1
Related Grunt Config:
...
watch: {
  sass: {
    files: [
      '<%= configs.sassPaths.answ_sass + configs.minimatch %>',
      '<%= configs.sassPaths.sass_lib + configs.minimatch %>'
    ],
    options: {
      nospawn: true
    }
  }
}
...
Get exactly the same message when watch restarts. Happens on several projects with different configurations and on two different Macs. Tested with these npm versions:
$ node v0.12.0
$ npm 2.3.0, 2.5.0, 2.7.0, 2.7.1
This happens for me too
+1
Ditto
Same here. Can't seem to find a solution...
+1
Yeah, same here. Watch task seems to actually work, but nonetheless the message is a bit annoying. I am on Yosemite 10.10.2
Same here, what's the problem?
Same problem. Upgraded to node.js v0.12.1, and it is still there.
Same problem. Did anyone write a bug report to the nodejs guys?
Same here
$ node v0.12.2
$ npm 2.7.4
OSX 10.10.2 (14C1514)
grunt[22532] (FSEvents.framework) FSEventStreamFlushSync(): failed assertion '(SInt64)last_id > 0LL'
If anyone wants to try this in io.js, it should be fixed there.
Still happening for me as well
node v0.12.2
npm 2.7.5
grunt-contrib-watch 0.6.1
OS X 10.10.3
This seems to happen more when it has to run an uglify or concat task. But both are at the latest version.
Same here on node v0.12.0, npm 2.5.1, grunt-contrib-watch 0.6.1, OS X 10.9.5
Happens only if I set the option spawn: false. The tasks don't run correctly, it's not just the message for me.
Same problem - seems to be functioning fine, but always gives this error. Node: v0.12.2, NPM: 2.7.5, grunt: ^0.4.5, grunt-contrib-concat: ^0.5.1, grunt-contrib-imagemin: ^0.9.4, grunt-contrib-uglify: ^0.9.1, grunt-contrib-watch: ^0.6.1, grunt-sass: ^0.18.1, OS X 10.10.2
(FSEvents.framework) FSEventStreamFlushSync(): failed assertion '(SInt64)last_id > 0LL'
Can confirm. same here
Node v0.12.2
grunt-cli v0.1.13
grunt v0.4.5
grunt-contrib-watch 0.6.1
OS X 10.10.3
I'm also experiencing this issue
Node v0.12.2
Npm v2.7.5
Packages:
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
├──<EMAIL_ADDRESS>
└──<EMAIL_ADDRESS>
Same here. node: v0.12.0
Removing spawn from Watch > Scripts > Options:{ } stops the error. What is spawn anyway again?
Might this potentially be why? https://github.com/bdkjones/fseventsbug/wiki/realpath()-And-FSEvents
@digitalcraftco solution worked with v0.12.7, removing all spawns from watch: {} task (even if they're false)
@juanbrujo Thanks for the follow up. Indeed this has fixed the issue for newer versions.
Interesting. Is 0.11.6 the earliest node version this manifests in? We've had the above linked io.js issue for this for a while but haven't really been able to debug it.
@digitalcraftco It defines whether a new task should be spawned or not. See https://github.com/gruntjs/grunt-contrib-watch#optionsspawn
Removing spawn from options does avoid but not fix the issue.
+1 getting this as well still.
Interesting to note: as soon as you remove the spawn = false property, the compile time jumps up dramatically, in my case from 0.3sec to 4.3sec !!
I see similar messages in a C app using libuv and uv_fs_event* I wrote as well.
Linked to https://github.com/nodejs/node/issues/854 which mentions this issue.
Absolutely nothing changed in my dependencies for the last month at least, and I randomly just started seeing this
@maruf89 Did you change OS X versions?
No, I'm still on Yosemite 10.10.3 - the only thing that changed is I got a hard drive cable replaced, but no software/npm deps changed
Never had this problem on Yosemite, upgraded to El Capitan and now it happens...
+1 having the same issue
@maruf89, same for me, I had only changed the hard drive cable.
This is still an issue with Node.js 4.4.7 and 6.3.1. I'm on OS X 10.11.6. I'm NOT using chokidar, fsevents, gaze, or any other filesystem watching library. I'm only using the built-in fs.watch().
+1
What are the side-effects of this error? It started occurring in my project after I added imagemin
What are the side-effects of this error?
Unknown, probably nothing impactful? I think the OS just retries or something. Strange that multiple people would see it after a hard drive cable; maybe it's an odd apple hardware issue?
I am sure it has nothing to do with hardware. As Fishrock123 said, no apparent(!) side effects, but still annoying. So a fix would definitely be nice, since a lot of systems seem to be affected.
I'm having this issue with node v7.3.0 on OSX 10.11.6 (El Capitan) and grunt v0.4.5, grunt-cli v1.2.0
I can't make sense of why this worked, but I was having the exact same issue, and adding more specific selectors for the files I was watching resolved the issue. Here's a sample of my old Gruntfile:
watch: {
  css: { files: '**/*.scss', tasks: ['sass'] },
  js: { files: '**/*.js', tasks: ['rollup'] }
}
and here's my new Gruntfile:
watch: {
  css: { files: './assets/**/*.scss', tasks: ['sass'] },
  js: { files: './assets/**/*.js', tasks: ['rollup'] }
}
What's more - my old grunt process was using a huge chunk of my CPU, and with my more specific file selection, I use only 2-3%.
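For reference, a minimal sketch of the spawn option the thread keeps circling around (it is documented in the grunt-contrib-watch README linked above; the task and file names here are illustrative):

watch: {
  css: {
    files: ['src/**/*.scss'],
    tasks: ['sass'],
    // spawn: false runs tasks inside the watch process (faster, but shares state);
    // omit it (default: true) to spawn a child process per task run.
    options: { spawn: false }
  }
}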
2025-04-01T06:38:53.210415
2017-07-26T06:02:56
245614890
{ "authors": [ "grych", "vic" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6591", "repo": "grych/drab", "url": "https://github.com/grych/drab/pull/29" }
gharchive/pull-request
Fix template code for commander If people uncomment the example, it should actually compile :) Thanks a lot!
2025-04-01T06:38:53.242160
2017-01-31T17:11:34
204361567
{ "authors": [ "gsingers", "johanlindquist" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6597", "repo": "gsingers/slack-jira-plugin", "url": "https://github.com/gsingers/slack-jira-plugin/issues/31" }
gharchive/issue
Support for threaded messages When being triggered inside a message thread, the plugin still posts within the main channel - without having looked into too much detail in the code, it would be great if the plugin could use the thread_ts/ts attributes as discussed in [1]. Thanks, Johan [1] https://api.slack.com/docs/message-threading Thanks for the patch!
2025-04-01T06:38:53.243363
2023-08-03T15:26:40
1835282784
{ "authors": [ "gsproston-scottlogic" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6598", "repo": "gsproston-scottlogic/prompt-injection", "url": "https://github.com/gsproston-scottlogic/prompt-injection/issues/93" }
gharchive/issue
Character count Include a character count UI component which displays the length of the user's current message. If the CHARACTER_LIMIT defence mechanism is active, then this component should also inform the user when the message length is too long. @PuneetLoona
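A rough sketch of what such a component might look like, assuming a React/TypeScript frontend (component and prop names here are hypothetical, not taken from the project):

interface CharacterCountProps {
  message: string;
  // only set when the CHARACTER_LIMIT defence mechanism is active
  characterLimit?: number;
}

function CharacterCount({ message, characterLimit }: CharacterCountProps) {
  // flag when the user's message exceeds the active limit
  const overLimit = characterLimit !== undefined && message.length > characterLimit;
  return (
    <span className={overLimit ? 'character-count over-limit' : 'character-count'}>
      {message.length}
      {characterLimit !== undefined && ` / ${characterLimit}`}
      {overLimit && ' (message too long)'}
    </span>
  );
}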
2025-04-01T06:38:53.244970
2018-09-22T17:19:24
362874606
{ "authors": [ "erickoledadevrel", "kacrouse" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6599", "repo": "gsuitedevs/apps-script-samples", "url": "https://github.com/gsuitedevs/apps-script-samples/issues/73" }
gharchive/issue
Wrong variable referenced in Generating Google Slides from images Tutorial Lines 61, 65, and 66 of imageSlides reference the presentation variable. At that point in the tutorial, the deck variable is still being used to reference the presentation, so the script throws an error if you're following the tutorial step by step. Good catch, thanks!
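For illustration only (the exact statements on those tutorial lines are not reproduced here), the fix is the kind of rename sketched below; appendSlide is a real SlidesApp method, but the surrounding code is a hypothetical sketch:

// The tutorial refers to the presentation as `deck` at that point:
var deck = SlidesApp.openById(presentationId);
// Broken: `presentation` is never defined here, so this throws
// var slide = presentation.appendSlide(SlidesApp.PredefinedLayout.BLANK);
// Fixed: use the variable the tutorial actually defines
var slide = deck.appendSlide(SlidesApp.PredefinedLayout.BLANK);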
2025-04-01T06:38:53.246832
2023-07-13T17:44:59
1803491409
{ "authors": [ "gtfierro", "lazlop" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6600", "repo": "gtfierro/shacl-issues", "url": "https://github.com/gtfierro/shacl-issues/issues/2" }
gharchive/issue
Error using topquadrant shacl Hi Gabe, I'm getting an error running test cases using topquadrant shacl: docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/images/create?tag=1.4.2&fromImage=ghcr.io%2Ftopquadrant%2Fshacl: Internal Server Error ("Head "https://ghcr.io/v2/topquadrant/shacl/manifests/1.4.2": unauthorized") Did you encounter this error? I don't think I've run into this, but it might be an issue with docker configuration that we need to either document in the README or fix in a script. That worked! I thought I had done it but it appears not. It may be a good idea to add to the readme. Just added a note to the README!
2025-04-01T06:38:53.259711
2017-01-27T18:20:22
203704369
{ "authors": [ "EPashkin", "Susurrus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6601", "repo": "gtk-rs/gtk", "url": "https://github.com/gtk-rs/gtk/issues/432" }
gharchive/issue
Disable draw_value on Scale elements disables snapping to value I'm not positive if this is a gtk bug or a gtk-rs bug, but if I create a Scale element with a couple of set values that the selector should snap to, it works fine. But then if I call set_draw_value(false), the scale no longer snaps to the appropriate values. I've included sample code:
extern crate gtk;
use gtk::prelude::*;

fn main() {
    gtk::init().expect("");
    let window = gtk::Window::new(gtk::WindowType::Toplevel);
    window.set_default_size(400, 300);
    let scale = gtk::Scale::new_with_range(gtk::Orientation::Horizontal, 1.0, 2.0, 1.0);
    scale.set_draw_value(false); // Comment out this line to see "proper" behavior
    scale.add_mark(1.0, gtk::PositionType::Bottom, Some("1"));
    scale.add_mark(2.0, gtk::PositionType::Bottom, Some("2"));
    let vbox = gtk::Box::new(gtk::Orientation::Vertical, 0);
    vbox.pack_start(&scale, false, false, 0);
    window.add(&vbox);
    window.show_all();
    gtk::main();
}
with Cargo.toml:
[dependencies.gtk]
version = "0.1"
It seems snapping is unrelated to marks and is controlled by https://developer.gnome.org/gtk3/stable/GtkScale.html#gtk-scale-set-digits (scale.set_digits(1); in your case), and it works only when the current value is shown. This can be seen if you set max to 5.0 and add 3.5 (or 4.0) marks. If you need to disable smooth scrolling while not showing the current value, you can use this:
scale.connect_format_value(|_, _| String::new());
Thanks, yes I see what you're saying. So it seems like this is a GTK+ bug, either in their docs for not noting this limitation or in their code. I can't imagine why the behavior changes when changing appearance, however. I'll file a bug upstream and close this one once I do. Moving this discussion to GNOME's bug tracker: https://bugzilla.gnome.org/show_bug.cgi?id=777858
2025-04-01T06:38:53.281252
2024-07-30T14:09:15
2437939168
{ "authors": [ "gtsteffaniak" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6602", "repo": "gtsteffaniak/filebrowser", "url": "https://github.com/gtsteffaniak/filebrowser/issues/144" }
gharchive/issue
show currently logged in username in slideout action menu https://github.com/filebrowser/filebrowser/issues/2692 Added in https://github.com/gtsteffaniak/filebrowser/pull/157 0.2.7 included
2025-04-01T06:38:53.290904
2015-03-02T10:47:48
59465210
{ "authors": [ "UserStefan", "guaka", "simison" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6603", "repo": "guaka/hitchticker", "url": "https://github.com/guaka/hitchticker/issues/2" }
gharchive/issue
SMS integration let user enter phone number to link username with phone number? philipp has something where you could send an sms with your username and your number was automatically registered. Afterwards it was possible to send messages. I've used Nexmo in the past and it works well: https://www.nexmo.com/ https://www.twilio.com/ twilio didn't give me a DE number, trying nexmo now. Stuck with http://stackoverflow.com/q/29447706/1245190 now. Yay! First test worked! Our number is +491771789420. Works quite well now. todo: match phone number with user; continue with #9 and #11
2025-04-01T06:38:53.292471
2020-01-21T13:41:45
552884319
{ "authors": [ "NaveenShivanna86", "davidkhala" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6604", "repo": "guanzhi/GmSSL", "url": "https://github.com/guanzhi/GmSSL/pull/915" }
gharchive/pull-request
Bug Fix GmSSL client cannot communicate with TaSSL server; https://gi… Bug Fix: GmSSL client cannot communicate with TaSSL server. Issue: https://github.com/guanzhi/GmSSL/issues/913 Are you still working on it?
2025-04-01T06:38:53.294379
2015-06-11T23:11:12
87534008
{ "authors": [ "e2", "jamilabreu" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6605", "repo": "guard/guard-livereload", "url": "https://github.com/guard/guard-livereload/pull/136" }
gharchive/pull-request
Update livereload.js Fix for Chrome bug: https://github.com/livereload/livereload-extensions/issues/26#issuecomment-54250594 I can merge this if it helps others - please ask people to add +1s here to confirm. I've made it so you can change those values in the Guardfile: https://github.com/guard/guard-livereload/pull/147 If there are any issues there, please open a new PR.
2025-04-01T06:38:53.341007
2024-02-07T15:38:12
2123312301
{ "authors": [ "tjsilver" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6606", "repo": "guardian/janus-app", "url": "https://github.com/guardian/janus-app/pull/407" }
gharchive/pull-request
fix: use later aws sdk version in configTools What is the purpose of this change? Makes the configTools project use the same version of aws-sdk as the main app. What is the value of this change and how do we measure success? This is an attempt to mitigate a high severity vulnerability discovered by Dependabot: configTools relies on aws-scala, which has been deprecated and which in turn relies on an old version of aws-java-sdk-s3. Once this is merged, if successful, we should see the issue disappear. I have tested this locally, and the dependency tree for configTools shows that the vulnerable version has been replaced by a safe version: Before: (screenshot omitted) After: (screenshot omitted) Closing for now to see if Dependabot raises a PR. Re-opening as Dependabot didn't raise a PR to fix this.
2025-04-01T06:38:53.352070
2017-04-25T18:38:59
224234028
{ "authors": [ "todor-kolev" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6607", "repo": "guardian/scanamo", "url": "https://github.com/guardian/scanamo/issues/104" }
gharchive/issue
Implement 'Between' RangeKeyCondition Similar to BeginsWith, it would be great to have a Between RangeKeyCondition. https://github.com/guardian/scanamo/blob/9cb47068bc3f69709c5bdb1a792dd9a6d5562e01/src/main/scala/com/gu/scanamo/query/DynamoKeyCondition.scala#L26 http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html https://github.com/guardian/scanamo/pull/106
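A heavily hedged sketch of the shape such a condition might take, mirroring the BeginsWith definition in the linked file (all names here are illustrative; the actual implementation is the one merged in #106):

// hypothetical sketch only, not the merged code
case class Between[V: DynamoFormat](key: Symbol, lowerBound: V, upperBound: V)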
2025-04-01T06:38:53.378544
2017-10-25T13:10:39
268391523
{ "authors": [ "joakimsk", "oysstu", "pktrigg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6608", "repo": "guardiangeomatics/pyall", "url": "https://github.com/guardiangeomatics/pyall/pull/1" }
gharchive/pull-request
Restructure project to be installable as a python module Hi, I've created a pull request that restructures the project to be installable as a python module using setuptools. This allows pyall to be installed using "python setup.py install" and be importable from any project (use develop instead of install for development). You can also use setuptools to upload it to pypi if you want. Feel free to reject it if it does not fit your workflow. Regards, oysstu
This looks like a useful way forward; are the owners open to this change?
Hi Joakim, Good to prompt me on this. I think we should put pyall into pypi so it can be installed with pip. I moved the sources around and made some test scripts to demonstrate and clean up. That's all done. I also registered for pypi as a producer. I think we can get this done in the next week or 2 if you're ok with the idea?
@pktrigg wonderful! There is already a different project using pyall as package name, so you may need to re-name this repo - not sure what you would like to call it, but I would suggest pyemall - em being Kongsberg's designator for multibeams. However, if you plan to/do support other types of .all-data, then I guess pyemall would restrict the project. What do you think? Here is the other pyall: (link omitted)
It seems @oysstu fixed the packaging in this pull request, but it is old now. I am not sure how to best proceed; maybe accept the pull request then modify to fit the latest version, or ask if he can help package the latest version, making a pull request for you? I am not experienced with packaging of modules, but I bet @oysstu would be happy to bring his thoughts and suggestions to the table if we ask him. As far as I can be of any help here, please let me know, and I will try to read up on it in the coming weeks.
Some additional commits got automatically added to this PR because I committed them to my fork. Feel free to close this PR and copy anything you need. The code needs to be in a subdirectory with an init file, with setup.py in the root directory. I think adding a pyproject.toml with something like this is a good idea:
[build-system]
requires = [
    "setuptools>=42",
]
build-backend = "setuptools.build_meta"
That ensures that modern packaging tools work
2025-04-01T06:38:53.413477
2020-05-11T17:57:19
616070980
{ "authors": [ "glatorre", "jjhbw" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6610", "repo": "guigrpa/docx-templates", "url": "https://github.com/guigrpa/docx-templates/issues/122" }
gharchive/issue
Cannot populate docx template in node.js Hello, I cannot create a docx from a template. I don't receive any errors, but the output file is corrupted (16 bytes). I've tried on both OSX and Linux. This is my simple template (template.docx): +++=name+++ +++=surname+++ And this is my node.js code: const createReport = require ('docx-templates').default; const fs = require('fs'); const template = fs.readFileSync('./template.docx'); const buffer = createReport({ template, data: { name:"foo", surname: "foo2" }, }); fs.writeFileSync('./report.docx', buffer); Is this my fault? Thank you in advance Note that the createReport function returns a promise. In your example you are writing a promise to a file. It seems this is my fault: the example in the README is missing an await statement! I'm sorry for that. Thanks for the report! See the fix here: https://github.com/guigrpa/docx-templates/commit/336a66b15a63437880823ced34709686366855bc Be sure to reopen this issue if it doesn't solve the problem! Thank you. It was the problem.
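Putting the fix together, a corrected version of the snippet from the report (same file names as above), awaiting the promise before writing:

const createReport = require('docx-templates').default;
const fs = require('fs');

const template = fs.readFileSync('./template.docx');

(async () => {
  // createReport returns a Promise, so await it before writing to disk
  const buffer = await createReport({
    template,
    data: { name: 'foo', surname: 'foo2' },
  });
  fs.writeFileSync('./report.docx', buffer);
})();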
2025-04-01T06:38:53.423754
2021-10-07T07:12:16
1019697148
{ "authors": [ "guillim" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6611", "repo": "guillim/nuxt-starter-netlify-cms", "url": "https://github.com/guillim/nuxt-starter-netlify-cms/pull/29" }
gharchive/pull-request
chore(Blog): update “2021-10-06-liste-de-mariage” Automatically generated by Netlify CMS 🔮 Deploy Preview for lucileetguillaume canceled. 🔨 Explore the source changes: 191c04c45f37c55f2037d6eae31bd39bb96edbc9 🔍 Inspect the deploy log: https://app.netlify.com/sites/lucileetguillaume/deploys/615e9dd10da1850008d2eb37
2025-04-01T06:38:53.467896
2016-05-10T10:23:39
153972686
{ "authors": [ "gumblex", "saltydizz", "targetnull" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6612", "repo": "gumblex/ptproxy", "url": "https://github.com/gumblex/ptproxy/issues/9" }
gharchive/issue
ptargs iat-mode won't change even if I set iat-mode=1 in the json file. I set
{ ... "ptargs": "cert=AAAAAAAAAAAAAAAAAAAAAAAAAAAAA+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA;iat-mode=1", ... }
but when I run python3 ptproxy.py -s server.json:
===== Server information =====
"server": "[::]:5899",
"ptname": "obfs4",
"ptargs": "cert=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX;iat-mode=0",
==============================
2016-05-10 10:15:16 PT started successfully.
(I hid the cert string for security.) p.s. I installed obfs4 with: go get git.torproject.org/pluggable-transports/obfs4.git/obfs4proxy and when I ran it I found that its version is 0.0.7; could that be the problem?
"ptargs" in server mode is ignored. This "Server information" is intended to be filled into the client config.
@gumblex So does that mean the client could specify the cert arbitrarily? But when I use a different cert, it doesn't succeed. Also, could you give a more specific description of how to use SOCKS5? I replaced local with socks5 on the server; however, the client doesn't accept the socks5 content for local. Is there a solution? Thanks
@targetnull No. The client must specify the same cert as printed on the server side. The client is not aware of what inner protocol is being used. local is for the listening port. You should set your applications (e.g. browsers) to connect through the SOCKS5 server at the local address.
Thanks for your reply. I'm wondering how the server generates the cert; is there a private key embedded in your script?
The obfs4 private key is stored in obfs4_state.json. If it doesn't exist, obfs4 will generate a random new key.
Thanks, I see.
2025-04-01T06:38:53.471779
2023-04-27T17:05:17
1687231378
{ "authors": [ "fischman", "jdegenstein", "snoyer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6613", "repo": "gumyr/build123d", "url": "https://github.com/gumyr/build123d/issues/228" }
gharchive/issue
[Enhancement] Relative Line Instead of needing to invoke a position twice for a relative Line movement, there should be a dedicated method:
with BuildLine() as l:
    l1 = Line((0,25),(250/2-15,25))
    l2 = Line(l1@1, l1@1+(0,30/2))  # current: start point has to be repeated
    l2 = RelLine(l1@1, (0,30/2))  # proposed: automatically treats the second parameter as relative to the first
A related but separate proposal (originally in discord): BaseLineObjects could use the @1 of the immediately-preceding BaseLineObject for their start_point (or equivalent) instead of requiring the user to explicitly state this. related: #191 @fischman Following up on your comment above, I personally do not like the proposal to automatically use @1, as there are often times when I am creating a series of segments in the clockwise direction and then switch to counterclockwise because of what is known about the design. Here is a simple example showing a situation in which the angle/length of a line segment is not specified: (screenshot omitted) @jdegenstein any reason not to use @1 as the default, but let the user specify an override if needed? I would prefer to just keep it simple and let the user provide the Vector representing the starting point, which is the way most current BaseLineObject-derived classes work. End users can always write custom helpers that override this behavior if needed.
2025-04-01T06:38:53.484753
2024-01-11T21:20:54
2077608898
{ "authors": [ "artsiomkorzun", "franz1981", "gunnarmorling", "merykitty", "mtopolnik", "thomaswue" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6614", "repo": "gunnarmorling/1brc", "url": "https://github.com/gunnarmorling/1brc/pull/331" }
gharchive/pull-request
merykitty gave in to Unsafe Check List:
[x] Tests pass (./test.sh <username> shows no differences between expected and actual outputs)
[x] All formatting changes by the build are committed
[ ] Your launch script is named calculate_average_<username>.sh (make sure to match casing of your GH user name) and is executable
[x] Output matches that of calculate_average_baseline.sh
Execution time of merykittyunsafe:
Performance counter stats for 'sh calculate_average_merykittyunsafe.sh':
13497.35 msec task-clock:u # 10.750 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
227100 page-faults:u # 16.826 K/sec
<PHONE_NUMBER>0 cycles:u # 4.519 GHz
289560740 stalled-cycles-frontend:u # 0.47% frontend cycles idle
114022556 stalled-cycles-backend:u # 0.19% backend cycles idle
<PHONE_NUMBER>02 instructions:u # 1.87 insn per cycle # 0.00 stalled cycles per insn
<PHONE_NUMBER>9 branches:u # 945.553 M/sec
53292216 branch-misses:u # 0.42% of all branches
1.255587212 seconds time elapsed
12.568676000 seconds user
0.870567000 seconds sys
Execution time of merykitty:
Performance counter stats for 'sh calculate_average_merykitty.sh':
15482.62 msec task-clock:u # 11.502 CPUs utilized
0 context-switches:u # 0.000 /sec
0 cpu-migrations:u # 0.000 /sec
222547 page-faults:u # 14.374 K/sec
<PHONE_NUMBER>1 cycles:u # 4.586 GHz
362493890 stalled-cycles-frontend:u # 0.51% frontend cycles idle
130886807 stalled-cycles-backend:u # 0.18% backend cycles idle
<PHONE_NUMBER>86 instructions:u # 2.02 insn per cycle # 0.00 stalled cycles per insn
<PHONE_NUMBER>3 branches:u # 1.506 G/sec
45042342 branch-misses:u # 0.19% of all branches
1.346125882 seconds time elapsed
14.487117000 seconds user
0.916624000 seconds sys
Not sure if this is allowed; I have consciously avoided using Unsafe, and this is my attempt to truly push the challenge to its limit. The code consistently outpaces the solutions of Thomas and Roy in C2, but I don't think it will do so after taking AOT vs JIT into consideration, and after staring at the assembly for 3 hours I have not come up with any ideas.
@merykitty How much slower does it perform with aot?
@artsiomkorzun It is about 8 times slower with Graal. I do not have a Graal built with hsdis at hand, but looking at the code comments:
0x00007fadf73244c8: ; ImmutableOopMap {rax=Oop r10=Oop r11=Oop [16]=Oop [24]=Oop }
;*iinc {reexecute=1 rethrow=0 return_oop=0}
; - (reexecute) jdk.incubator.vector.ByteVector::ldLongOp@35 (line 368)
; - jdk.incubator.vector.ByteVector$ByteSpecies::ldLongOp@8 (line 4262)
; - jdk.incubator.vector.ByteVector::lambda$fromMemorySegment0Template$105@8 (line 3811)
; - jdk.incubator.vector.ByteVector$$Lambda/0x00007fad9c05e900::load@10
; - jdk.internal.vm.vector.VectorSupport::load@32 (line 428)
; - jdk.internal.misc.ScopedMemoryAccess::loadFromMemorySegmentScopedInternal@28 (line 361)
; - jdk.internal.misc.ScopedMemoryAccess::loadFromMemorySegment@31 (line 338)
; - jdk.incubator.vector.ByteVector::fromMemorySegment0Template@33 (line 3807)
; - jdk.incubator.vector.Byte256Vector::fromMemorySegment0@3 (line 938)
; - jdk.incubator.vector.ByteVector::fromMemorySegment@31 (line 3297)
; - dev.morling.onebrc.CalculateAverage_merykittyunsafe::iterate@37 (line 256)
I assume that jdk.internal.vm.vector.VectorSupport::load is not intrinsified properly; some other vector intrinsics seem to exhibit the same behaviour.
@merykitty Yes, we are aware of this issue with the Vector API.
Unfortunately it still changes quite often while in the incubator phase, and it is difficult for us to keep up with the intrinsifications, which for this kind of low-level compiler API are essential.
It doesn't seem to be a clear improvement on the eval machine for some reason. Here's the results from 10 runs:
Benchmark 1: timeout -v 300 ./calculate_average_merykitty.sh 2>&1
Time (mean ± σ): 3.268 s ± 0.127 s [User: 21.568 s, System: 0.741 s]
Range (min … max): 2.953 s … 3.430 s 10 runs
Summary merykitty: trimmed mean 3.286730414775, raw times 3.1829055004,2.9527902024,3.3237153764,3.3429483424,3.2571899574,3.4296059994,3.2995724264,3.2947934074000003,3.3001821494000003,3.2925361584000004
Leaderboard
| # | Result (m:s.ms) | Implementation | JDK | Submitter | Notes |
|---|-----------------|----------------|-----|-----------|-------|
| | 00:03.286 | [link](https://github.com/gunnarmorling/1brc/blob/main/src/main/java/dev/morling/onebrc/CalculateAverage_merykitty.java) | 21.0.1-open | [Quan Anh Mai](https://github.com/merykitty) | |
@gunnarmorling It is a separate entry merykittyunsafe instead of merykitty. I don't really want to add Unsafe to my original submission, so can this go as a separate entry? Thanks.
@gunnarmorling It is a separate entry merykittyunsafe instead of merykitty.
Ah, sorry, had missed that. Oh, boy 🤯 :
Benchmark 1: timeout -v 300 ./calculate_average_merykittyunsafe.sh 2>&1
Time (mean ± σ): 2.573 s ± 0.024 s [User: 15.982 s, System: 0.749 s]
Range (min … max): 2.521 s … 2.605 s 10 runs
Summary merykittyunsafe: trimmed mean 2.575439155045, raw times 2.57165876642,2.57405251042,2.55458195942,2.5576553414200003,2.52133204842,2.5802883164200003,2.59440042542,2.58838683042,2.60452826242,2.58248909042
Leaderboard
| # | Result (m:s.ms) | Implementation | JDK | Submitter | Notes |
|---|-----------------|----------------|-----|-----------|-------|
| | 00:02.575 | [link](https://github.com/gunnarmorling/1brc/blob/main/src/main/java/dev/morling/onebrc/CalculateAverage_merykittyunsafe.java) | 21.0.1-open | [merykittyunsafe](https://github.com/merykittyunsafe) | |
I don't really want to add Unsafe to my original submission, so can this go as a separate entry? Thanks.
Yes, we can do that. While there should be only one entry per participant by default, I am happy to make an exception for this case.
@gunnarmorling Wow, that is really impressive. I guess the test machine is much more capable of utilising the reduction in instruction count than mine. Thanks a lot for your help.
@merykitty Really cool vectorized solution! On my machine, utilizing all CPU cores, the solution becomes memory-bound, but I think given that the evaluation is done only on 8 cores out of the 64, the reduction in instructions is giving the speed-up.
Strangely enough, both merykitty and merykittyunsafe are very slow on the old Hetzner CCX33. 45 and 40 seconds, respectively (on the official dataset). On the same instance, royvanrijn is at 4.7 seconds. I know the new instance doesn't have AVX-512 support, either, so I wonder what should explain this.
@mtopolnik That's why, to some extent, I personally prefer SWAR when applicable, because it works the same across archs, although it adds some register pressure. Peak performance is obviously not comparable...
@mtopolnik Are you running with C2 or with Graal? Because the latter has not caught up with the development of the Vector API yet.
Yes, @mtopolnik, the numbers there seem to indicate you were running with Graal JIT with the missing intrinsifications for the incubator Vector API. Didn't btw know about "-Djdk.incubator.vector.VECTOR_ACCESS_OOB_CHECK=0" yet. Is this planned to stay and be the new unsafe ;-) ?
Oops, I have a script that calls prepare_author.sh and then calculate_average_author.sh. But now I see there's no prepare_merykitty*.sh.
2025-04-01T06:38:53.489925
2019-09-30T00:16:16
499993449
{ "authors": [ "JDZC", "vkosuri" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6615", "repo": "gunthercox/ChatterBot", "url": "https://github.com/gunthercox/ChatterBot/issues/1825" }
gharchive/issue
Default response on logic adapter hey, i'm just using the doc code for the default response:
def get_default_response(self, input_statement):
    from random import choice
    if self.default_responses:
        response = choice(self.default_responses)
    else:
        try:
            response = self.chatbot.storage.get_random()
        except StorageAdapter.EmptyDatabaseException:
            response = input_statement
        self.chatbot.logger.info(
            'No known response to the input was found. Selecting a random response.'
        )
    return response
but could I define one specific response instead of selecting a random one?
If you are looking for a specific response, please use the specific response adapter; for more information, please go through this link https://chatterbot.readthedocs.io/en/stable/logic/index.html#specific-response-adapter
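For reference, the specific response adapter linked above is configured roughly like this, per the ChatterBot docs (the input/output strings here are placeholders):

from chatterbot import ChatBot

bot = ChatBot(
    'Example Bot',
    logic_adapters=[
        {
            # returns output_text whenever the input matches input_text exactly
            'import_path': 'chatterbot.logic.SpecificResponseAdapter',
            'input_text': 'Help me!',
            'output_text': 'Ok, here is a link: http://chatterbot.rtfd.org'
        }
    ]
)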
2025-04-01T06:38:53.575308
2018-11-16T20:56:40
381756873
{ "authors": [ "Dzhuneyt", "zardilior" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6616", "repo": "guzzle/guzzle", "url": "https://github.com/guzzle/guzzle/issues/2205" }
gharchive/issue
https falling back to http
Q/A: Bug? no. New Feature? no. Version: (not specified).
Actual Behavior: The debug option prints http.
Expected Behavior: Https, due to the fact that the url specified is https.
Steps to Reproduce: this is returned when set to debug:
TCP_NODELAY set
* Connected to www.cedulaprofesional.sep.gob.mx (<IP_ADDRESS>) port 80 (#0)
> POST /cedula/buscaCedulaJson.action HTTP/1.1
Host: www.cedulaprofesional.sep.gob.mx
User-Agent: GuzzleHttp/6.2.1 curl/7.54.0 PHP/7.1.21
Content-Type: application/x-www-form-urlencoded
Content-Length: 170
this is the code causing it:
$cliente = new Client([
    'form_params' => [
        "json" => $message2
    ],
    'allow_redirects' => false
]);
$request = new Request('POST', 'https://www.cedulaprofesional.sep.gob.mx/cedula/buscaCedulaJson.action');
$response = $cliente->send($request, ["debug" => true]);
I am not able to reproduce your issue. Running on an Ubuntu machine with GuzzleHttp/6.3.3. Here's the minimalistic example code:
<?php
require '../vendor/autoload.php';
use GuzzleHttp\Client;
use GuzzleHttp\Psr7\Request;
$cliente = new Client([
    'form_params' => [
        "json" => []
    ],
    'allow_redirects' => false
]);
$request = new Request('POST', 'https://www.cedulaprofesional.sep.gob.mx/cedula/buscaCedulaJson.action');
$response = $cliente->send($request, ["debug" => true]);
The result is:
* Hostname www.cedulaprofesional.sep.gob.mx was found in DNS cache
* Trying <IP_ADDRESS>...
* Connected to www.cedulaprofesional.sep.gob.mx (<IP_ADDRESS>) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* SSL connection using TLSv1.2 / DHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=MX; ST=Distrito Federal; L=Ciudad De Mexico; O=Secretaría de Educación Pública; CN=sep.gob.mx
* start date: May 16 14:41:56 2017 GMT
* expire date: May 17 15:11:52 2019 GMT
* subjectAltName: www.cedulaprofesional.sep.gob.mx matched
* issuer: C=CA; O=AffirmTrust; OU=See www.affirmtrust.com/repository; CN=AffirmTrust Certificate Authority - OV1
* SSL certificate verify ok.
> POST /cedula/buscaCedulaJson.action HTTP/1.1
Host: www.cedulaprofesional.sep.gob.mx
Content-Length: 0
User-Agent: GuzzleHttp/6.3.3 curl/7.47.0 PHP/7.1.25-1+ubuntu16.04.1+deb.sury.org+1
Content-Type: application/x-www-form-urlencoded
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: Desconocido
< X-Powered-By: Servlet/3.0 JSP/2.2 (Desconocido Java/Oracle Corporation/1.7)
< Content-Language: es-MX
< Content-Length: 0
< Date: Thu, 03 Jan 2019 15:37:50 GMT
< X-Cache: MISS from SEP
< X-Cache-Lookup: MISS from SEP:80
< Via: 1.0 SEP (squid)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
<
* Connection #0 to host www.cedulaprofesional.sep.gob.mx left intact
As you can see, the request is done through HTTPS (port 443). Please confirm that you have correct root certificates installed on your system/server.
2025-04-01T06:38:53.617438
2024-11-20T22:24:49
2677358718
{ "authors": [ "3vcloud", "BearsMan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6617", "repo": "gwdevhub/GWToolboxpp", "url": "https://github.com/gwdevhub/GWToolboxpp/issues/1277" }
gharchive/issue
TB /cam fog off feature no longer works. After updating to v6.25, I have tried using /cam fog off to eliminate the fog in-game. As soon as I hit enter, the command doesn't work. Could this be a bug? It is still not working for me. I have tried in several areas. I don't even know why it won't turn off even if I hotkey it. Enter gloom and show a screenshot Enter gloom and show a screenshot Sure thing. Of course. I entered Gloom and used the hotkeys /cam fog off and /camera fog off to turn off the fog, which had no effect. Enter gloom and show a screenshot Okay, as it turns out, I got this message on Discord. It reads the following: GWCA / TB++: "Cam fog off" and "cam fog on" work. They work, but only the other way around. "Cam fog off" turns the fog on and "cam fog on" turns the fog off. @3vcloud
2025-04-01T06:38:53.661848
2023-04-11T10:45:59
1662223787
{ "authors": [ "StefanSalewski", "h-enk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6619", "repo": "h-enk/doks", "url": "https://github.com/h-enk/doks/issues/1036" }
gharchive/issue
content/en/_index.md not mentioned in tutorial Dear Sir, I started yesterday with Hugo and its tutorial. Today, I tried your Doks theme, again following the tutorial. In https://getdoks.org/docs/tutorial/set-configuration/ we have Open ./config/_default/params.toml Change these settings: ## Homepage title = "Doks" titleSeparator = "-" titleAddition = "Modern Documentation Theme" description = "Doks is a Hugo theme for building secure, fast, and SEO-ready documentation websites, which you can easily update and customize." I had the feeling that changing titleAddition or description had no effect. After finding the string literals with grep, I finally modified content/en/_index.md and the running local webserver immediately updated the main page in Firefox. Did I do something wrong, or may the tutorial be outdated? Maybe I did something wrong; I will repeat all actions later today from scratch and tell you if I indeed made a mistake. Well, from the comment "Set meta data for Search Engine Optimization (SEO) and Social Media." I get the feeling that it is no real issue. The content may be just metadata that never gets displayed. But in that case, it would be easier for beginners if the meta content were not identical to the shown text. Next problem: JSON-LD Change these settings: ... schemaAuthor = "Henk Verlinde" Well, should I really change the entries? I think the schemaAuthor is Henk Verlinde and not me? Thanks for using Doks! Did I do something wrong, or may the tutorial be outdated? No, you did not! You found a bug w/ Chromium browsers (will be fixed in the soon-to-be-released Doks 1.0). You'll just need to add --noHTTPCache to the start script in package.json, like so: "start": "exec-bin node_modules/.bin/hugo/hugo server --gc --bind=<IP_ADDRESS> --disableFastRender --baseURL=http://localhost --noHTTPCache", Thank you very much for your fast reply. [..] it would be easier for beginners if the meta content were not identical to the shown text. Got it, will be looking into auto-fill possibilities when using the CLI for setting up a new Doks project. Well, should I really change the entries? I think the schemaAuthor is Henk Verlinde and not me? No, the meta data should be completely yours. It's based on the Schema.org approach by Yoast SEO; for more background, see Schema - Background information
2025-04-01T06:38:53.674007
2017-02-23T16:46:48
209817996
{ "authors": [ "TomasKulhanek" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6620", "repo": "h2020-westlife-eu/west-life-wp6", "url": "https://github.com/h2020-westlife-eu/west-life-wp6/issues/26" }
gharchive/issue
PDB component integration into VF [x] litemol viewer of selected pdb file [x] pdb link component and other one entry components [x] prepare dataset panel, which allows to add entries from PDB using autocomplete component and refined by other component/viewers [x] widget to customize entity-id attribute for pdb-topology-viewer (N), data.firstelementofstructure[0].entity_id, filtering polypeptide molecules - issue #32 [x] show/hide all components - encapsulate in dataset - change icon (T) [x] show/hide some of the components, within dataset (T) [x] distinguish between pdb and uniprot entry - show PDB UniProt Viewer (T) [x] litemol viewer in dataset (T) [x] howto effectively rebootstrap already existing pdb component - detach and attach - [x] pdb prints component and other multiple entries components Tomas: [x] autocomplete component in Firefox [x] autocomplete component, works in chrome, test in chromium [ ] store submitted dataset as artifact of virtual folder - represent it as separated folder with metadata [ ] replacing the component in more elegant way (within entity-id, pdb-ids) [ ] jsonrender to show raw data from pdb summary and other APIs [x] autocomplete, fix when key press enter renders button click event Nurul: [ ] accordion to hide all details [ ] make component to choose assembly-id, analogy of entity-id before submitting D5.5 report: [x] address https://portal.west-life.eu/virtualfolder/ shows error message pops up: "Sorry, error when connecting backend web service at /metadataservice/files error:{"ResponseStatus": "ErrorCode":"UnauthorizedAccessException","Message":"Attempted to perform an unauthorized operation.","Errors":[]}} status:UnauthorizedAccessException" [x] There is a link at the bottom of the page " Development documentation at internal-wiki.west-life.eu/w/index.php?title=WP6". This is not appropriate in a page that has been delivered. [x] The report says that there is a demo at https://portal.west-life.eu/virtualfolder/test/index-dataset.html. When I visit this I see:[an error occurred while processing the directive] [x] on that page I also see " PDB or related item to add:". There is no help text to suggest what I should enter. There should be a tooltip listing the possible responses, and a placeholder within the input element. [ ] If I enter something unexpected, e.g. the gene name CFTR, then I get the unhelpful response "No hints." [ ] I guessed that a PDB accession code is expected, and entered one. This gave a useful page but with some errors - failed PDB-REDO etc., failed pdb components should not be rendered, relates to issue #39 [x] Then when I click “Publish dataset” I get “Sorry. Dataset not submitted at undefined error:404 status:Not Found” - remove
2025-04-01T06:38:54.057904
2016-02-14T14:04:31
133542005
{ "authors": [ "dgryski", "james-lawrence" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6621", "repo": "ha/doozer", "url": "https://github.com/ha/doozer/issues/42" }
gharchive/issue
unmaintained? Have you stopped maintaining doozer? No commits in 2 years; it's basically broken at this point due to the dependency path having changed (protobuf in particular). Yup. Use etcd instead.
2025-04-01T06:38:54.127663
2021-03-10T14:18:18
827810632
{ "authors": [ "habeanf", "yishairasowsky" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6622", "repo": "habeanf/yap", "url": "https://github.com/habeanf/yap/issues/8" }
gharchive/issue
restart api Great work you have done on the api! I just have trouble when it disconnects... is there a way to make it automatically reconnect, instead of me having to type src/yap/./yap api? This repository is no longer maintained. Please use https://github.com/onlplab/yap
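The auto-restart question above is left unanswered in the thread. One generic option, independent of yap itself, is a small supervisor process that re-runs the command whenever it exits. The following Go sketch is illustrative only: the src/yap working directory, the ./yap binary name, and the api argument are taken from the question and may not match other setups; a process manager such as systemd would achieve the same.

// supervisor.go: a minimal restart loop, not part of the yap project.
// Paths and arguments below are assumptions taken from the question above.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	for {
		cmd := exec.Command("./yap", "api") // hypothetical binary location; adjust to your checkout
		cmd.Dir = "src/yap"
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		log.Println("starting yap api")
		if err := cmd.Run(); err != nil {
			log.Printf("yap api exited: %v", err)
		}
		time.Sleep(5 * time.Second) // brief back-off before restarting
	}
}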
2025-04-01T06:38:54.186318
2015-02-10T15:09:49
57186708
{ "authors": [ "coveralls", "dblock", "maclover7" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6623", "repo": "hackcentral/hackcentral", "url": "https://github.com/hackcentral/hackcentral/pull/26" }
gharchive/pull-request
Fix api specs In your API tests you can examine the response: puts response.body I modified one of the tests: post<EMAIL_ADDRESS>application: FactoryGirl.attributes_for(:application), :format => :json puts response.body And it returned: {"error":"reimbursement_needed is missing, profile_id is missing, hackathon_id is missing"} This is because you're posting these fields under application, so the server gets params[:application][:hackathon_id], not just params[:hackathon_id]. You can decide to go either way, but obviously it needs to be consistent. Closes #25. Btw, check out https://github.com/yujinakayama/transpec and make all the RSpec expectations use a consistent syntax. Add config.raise_errors_for_deprecations! to RSpec spec_helper.rb then. Coverage remained the same at 85.32% when pulling 54875a3f23204bca9d6d21b775ab7bbe02415ffb on dblock:fix-api-specs into aa2369694149a5fbe4f52bd0456250559ca7e5fa on hackcentral:master. Awesome, thanks @dblock for your help with this! Merged.
2025-04-01T06:38:54.204439
2024-05-06T01:00:11
2279835974
{ "authors": [ "Shrey-Mehra", "reesericci" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6624", "repo": "hackclub/webring", "url": "https://github.com/hackclub/webring/pull/163" }
gharchive/pull-request
Remove Reese Armstrong I am no longer participating in the Hack Club webring. Many are saying this. Your pull request has been merged
2025-04-01T06:38:54.324026
2017-02-18T17:58:14
208658179
{ "authors": [ "brenns10", "matthewbentley" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6625", "repo": "hacsoc/love", "url": "https://github.com/hacsoc/love/pull/3" }
gharchive/pull-request
Implement a custom 500 page In particular, this special-cases the NoSuchEmployee error, which is caused by a user not being included in a database dump. I want to have a nicer experience for people who log in but for some reason weren't included in my LDAP dump. This tells the user they can email a support email (i.e. me) to have them added to the database. The general case 500 page doesn't have the support email address. Screenshots below. Builds are broken due to #4. Local make test passes, although I will probably want to add a test or two ensuring that the error pages have the right text. Looks good. I'll work on travis later @matthewbentley no worries I just fixed it - I'm reverting my makefile changes. I'll add a wiki note on how to make it work locally
2025-04-01T06:38:54.346442
2017-07-29T16:33:18
246531181
{ "authors": [ "haegul", "komarevtsevdn" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6626", "repo": "haegul/node-image-filter", "url": "https://github.com/haegul/node-image-filter/issues/9" }
gharchive/issue
Problem with png images I've tried to use your library to change the brightness of an image. For JPEG images it works fine, but for a PNG image it returns { data: undefined, type: 'png', width: 285, height: 177 }. I don't understand why. Thanks. @komarevtsevdn Hello, thank you for the report. I will check it. @komarevtsevdn I fixed this issue; please update node-image-filter to version 0.1.0: $ npm install<EMAIL_ADDRESS> I improved the save-image logic, so change your code like this: result.data.pipe(fs.createWriteStream(`result.${result.type}`)); // save local Thank you.
2025-04-01T06:38:54.361928
2022-07-11T19:28:38
1301127988
{ "authors": [ "haftamudesta", "youmari" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6627", "repo": "haftamudesta/desta451616-hotmail.com.github.io", "url": "https://github.com/haftamudesta/desta451616-hotmail.com.github.io/issues/3" }
gharchive/issue
portfolio deployment My portfolio was deployed before and the link is here: https://haftamudesta.github.io/desta451616-hotmail.com.github.io/portfolio.html Project Approved :trophy: :tada: 🟢 Hi @haftamudesta Your project is complete! You have done it exceptionally well. It's time to merge it :100: Congratulations! 🎉 To Highlight :+1: :heavy_check_mark: No linter errors. :heavy_check_mark: Detailed PR title and description. :heavy_check_mark: Good commit messages. Cheers and happy coding! 👏👏👏 Feel free to leave any questions or comments in the PR thread if something is not 100% clear. Please remember to tag me in your question so I can receive the notification. You can also connect with me on Slack. As described in the Code reviews limits policy, you have a limited number of reviews per project (check the exact number in your Dashboard). If you think that the code review was not fair, you can request a second opinion using this form.
2025-04-01T06:38:54.363887
2020-10-15T19:36:02
722618998
{ "authors": [ "Polarts", "cHidoriPunk" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6628", "repo": "haifa-dev/official-website", "url": "https://github.com/haifa-dev/official-website/issues/43" }
gharchive/issue
Add descriptions to request form's fields Some of the form's fields are too complicated to understand just by the title, so we should add some descriptions. I think the cleanest way would be a hover tooltip popping out of a help icon. Hover actions should work as taps on touch devices. Can you assign this to me, please? Done
2025-04-01T06:38:54.418647
2022-05-02T13:23:39
1222885080
{ "authors": [ "divVerent", "hajimehoshi" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6629", "repo": "hajimehoshi/ebiten", "url": "https://github.com/hajimehoshi/ebiten/issues/2085" }
gharchive/issue
cmd/ebitenmobile: support current NDK Hi, I've installed a fresh Android Studio with NDK 24.0.8215888. When trying to use ebitenmobile, I get: $ export ANDROID_HOME=$HOME/Android/Sdk $ export ANDROID_NDK_HOME=$HOME/Android/Sdk/ndk/24.0.8215888 $ go run github.com/hajimehoshi/ebiten/v2/cmd/ebitenmobile \ bind -target android -javapkg io.github.divverent.aaaaxy.android \ -androidapi 21 \ -o aaaaxy.aar \ github.com/divVerent/aaaaxy/internal/mobile 2022/05/02 09:20:33 gomobile [init] failed: gomobile: No compiler for 386 was found in the NDK (tried /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. I do not know why it tries to search SDK 16 binaries (and for x86 even, which is an odd platform for Android); this NDK comes with versions 21 to 32 for aarch64 and x86_64, as well as 19 to 32 for armv7a and i686. I would assume that passing -androidapi sets which NDK binaries it'll use, but to no avail. In general, the ebitenmobile instructions could be a LOT clearer - many things in there I just do not understand and will have to figure out by trial and error, not being an Android developer myself. Just in case, I did run the suggested command: /home/rpolzer/Android/Sdk/cmdline-tools/latest/bin/sdkmanager --update Warning: IO exception while downloading manifest [=== ] 10% Computing updates... No updates available [=======================================] 100% Computing updates... https://github.com/golang/go/issues/35030#issuecomment-1026887111 might be the same issue - however, before I am going to muck around in an Android SDK package-managed directory, I'd like to have confirmation that this is really necessary and that gomobile did not fix this elementary issue for three years. Is gomobile unmaintained? gomobile is maintained. Does gomobile work with the latest NDK? (I think it should) Going by Ebiten docs fails: [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ ~/go/bin/gomobile build github.com/divVerent/aaaaxy/internal/mobile /home/rpolzer/go/bin/gomobile: No compiler for 386 was found in the NDK (tried /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. However, this looks more promising: [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ ~/go/bin/gomobile build -androidapi 21 github.com/divVerent/aaaaxy/internal/mobile /home/rpolzer/go/bin/gomobile: go build github.com/divVerent/aaaaxy/internal/mobile failed: exit status 2 # github.com/divVerent/aaaaxy/internal/mobile internal/mobile/mobile.go:18:2: imported and not used: "github.com/hajimehoshi/ebiten/v2" as ebiten internal/mobile/mobile.go:25:2: undefined: flag This is good news, as it means the rest might be my fault. But it is worrying that the -androidapi flag doesn't work with ebitenmobile bind. After fixing these, gomobile build succeeds, but gomobile install fails because it doesn't find mobile.apk. Looks like the Ebiten docs are really rather incomplete - would be nice if it actually explained the steps to turn a desktop game into an Android one. Looks like the Ebiten docs are really rather incomplete - would be nice if it actually explained the steps to turn a desktop game into an Android one. I agree we need more documentation about ebitenmobile, but I want to know what was the issue. 
Doesn't ebitenmobile support the latest NDK, or is this no longer an issue? Hm... this is weird. The gomobile example doesn't actually use init() the same way as the example in the Ebiten docs. It just has a main(). Using the desktop main() file does work with gomobile, apart from LOTS of warnings about the app being blocked by Play Protect, being for an older version etc. - I assume all this can be fixed when using ebitenmobile properly. So, status with gomobile is that ONLY build works. In particular, bind tells me to use init, and init says: [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ ~/go/bin/gomobile init /home/rpolzer/go/bin/gomobile: No compiler for arm was found in the NDK (tried /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ ~/go/bin/gomobile init -androidapi=21 flag provided but not defined: -androidapi usage: /home/rpolzer/go/bin/gomobile init [-openal dir] If a OpenAL source directory is specified with -openal, init will build an Android version of OpenAL for use with gomobile build and gomobile install. It doesn't seem to support using any non-default Android NDK. What a mess. I think you are confused with gomobile-build and gomobile-bind. gomobile-build creates an executable while gomobile-bind creates a library (.aar). ebitenmobile has only the 'bind' feature. Yeah - I "assumed" (but may be wrong) that ebitenmobile bind uses gomobile bind. In any case, ebitenmobile bind does not seem to use the -androidapi option, which is the issue here. You can specify -androidapi: https://github.com/hajimehoshi/ebiten/blob/main/cmd/ebitenmobile/main.go#L103 As said, I can specify it, but it's ignored: go run github.com/hajimehoshi/ebiten/v2/cmd/ebitenmobile build -target android -javapkg io.github.divverent.aaaaxy.android -androidapi 21 -o aaaaxy.aar github.com/divVerent/aaaaxy/internal/mobile 2022/05/02 09:52:15 gomobile [init] failed: gomobile: No compiler for arm was found in the NDK (tried /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. Hmm? I found this value was not ignored on my machine (macOS): $ go run github.com/hajimehoshi/ebiten/v2/cmd/ebitenmobile bind -target android -javapkg com.hajimehoshi.goinovation -o ./mobile/android/inovation/inovation.aar -androidapi=999 ./mobile gomobile: No compiler for arm was found in the NDK (tried /Users/hajimehoshi/Library/Android/sdk/ndk-bundle/toolchains/llvm/prebuilt/darwin-x86_64/bin/armv7a-linux-androideabi999-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. exit status 1 That same command fails too for me - surprisingly even BEFORE noticing I don't actually have the goinovation app source checked out: [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ go run github.com/hajimehoshi/ebiten/v2/cmd/ebitenmobile bind -target android -javapkg com.hajimehoshi.goinovation -o ./mobile/android/inovation/inovation.aar -androidapi=999 ./mobile 2022/05/02 09:58:15 gomobile [init] failed: gomobile: No compiler for 386 was found in the NDK (tried /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang). Make sure your NDK version is >= r19c. Use `sdkmanager --update` to update it. 
exit status 1 exit status 1 Note how the path still contains 16. ebitenmobile build ebitenmobile build doesn' exist. Please try ebitenmobile bind $ ls /Users/hajimehoshi/Library/Android/sdk/ndk-bundle/toolchains/llvm/prebuilt/darwin-x86_64/bin/ aarch64-linux-android-as aarch64-linux-android21-clang aarch64-linux-android21-clang++ aarch64-linux-android22-clang aarch64-linux-android22-clang++ aarch64-linux-android23-clang aarch64-linux-android23-clang++ aarch64-linux-android24-clang aarch64-linux-android24-clang++ aarch64-linux-android26-clang aarch64-linux-android26-clang++ aarch64-linux-android27-clang aarch64-linux-android27-clang++ aarch64-linux-android28-clang aarch64-linux-android28-clang++ aarch64-linux-android29-clang aarch64-linux-android29-clang++ aarch64-linux-android30-clang aarch64-linux-android30-clang++ aarch64-linux-android31-clang aarch64-linux-android31-clang++ arm-linux-androideabi-as armv7a-linux-androideabi16-clang armv7a-linux-androideabi16-clang++ armv7a-linux-androideabi17-clang armv7a-linux-androideabi17-clang++ armv7a-linux-androideabi18-clang armv7a-linux-androideabi18-clang++ armv7a-linux-androideabi19-clang armv7a-linux-androideabi19-clang++ armv7a-linux-androideabi21-clang armv7a-linux-androideabi21-clang++ armv7a-linux-androideabi22-clang armv7a-linux-androideabi22-clang++ armv7a-linux-androideabi23-clang armv7a-linux-androideabi23-clang++ armv7a-linux-androideabi24-clang armv7a-linux-androideabi24-clang++ armv7a-linux-androideabi26-clang armv7a-linux-androideabi26-clang++ armv7a-linux-androideabi27-clang armv7a-linux-androideabi27-clang++ armv7a-linux-androideabi28-clang armv7a-linux-androideabi28-clang++ armv7a-linux-androideabi29-clang armv7a-linux-androideabi29-clang++ armv7a-linux-androideabi30-clang armv7a-linux-androideabi30-clang++ armv7a-linux-androideabi31-clang armv7a-linux-androideabi31-clang++ clang clang++ clang-12 clang-check clang-cl clang-format clang-tidy clangd dsymutil git-clang-format i686-linux-android-as i686-linux-android16-clang i686-linux-android16-clang++ i686-linux-android17-clang i686-linux-android17-clang++ i686-linux-android18-clang i686-linux-android18-clang++ i686-linux-android19-clang i686-linux-android19-clang++ i686-linux-android21-clang i686-linux-android21-clang++ i686-linux-android22-clang i686-linux-android22-clang++ i686-linux-android23-clang i686-linux-android23-clang++ i686-linux-android24-clang i686-linux-android24-clang++ i686-linux-android26-clang i686-linux-android26-clang++ i686-linux-android27-clang i686-linux-android27-clang++ i686-linux-android28-clang i686-linux-android28-clang++ i686-linux-android29-clang i686-linux-android29-clang++ i686-linux-android30-clang i686-linux-android30-clang++ i686-linux-android31-clang i686-linux-android31-clang++ ld ld.lld ld64.lld lld lld-link lldb lldb-argdumper lldb.sh llvm-addr2line llvm-ar llvm-as llvm-cfi-verify llvm-config llvm-cov llvm-cxxfilt llvm-dis llvm-dwarfdump llvm-dwp llvm-lib llvm-link llvm-lipo llvm-modextract llvm-nm llvm-objcopy llvm-objdump llvm-profdata llvm-ranlib llvm-rc llvm-readelf llvm-readobj llvm-size llvm-strings llvm-strip llvm-symbolizer pbcopy sancov sanstats scan-build scan-view x86_64-linux-android-as x86_64-linux-android21-clang x86_64-linux-android21-clang++ x86_64-linux-android22-clang x86_64-linux-android22-clang++ x86_64-linux-android23-clang x86_64-linux-android23-clang++ x86_64-linux-android24-clang x86_64-linux-android24-clang++ x86_64-linux-android26-clang x86_64-linux-android26-clang++ x86_64-linux-android27-clang 
x86_64-linux-android27-clang++ x86_64-linux-android28-clang x86_64-linux-android28-clang++ x86_64-linux-android29-clang x86_64-linux-android29-clang++ x86_64-linux-android30-clang x86_64-linux-android30-clang++ x86_64-linux-android31-clang x86_64-linux-android31-clang++ yasm Hm... so you do have version 16 for two platforms; I don't. We have different NDK versions and it seems like this incompatibility is new. Can we as a stopgap document the proper NDK version that works with Ebiten, including instructions how to install that one? I'll try to investigate the mechanism which NDK is chosen. But, could you try ebitenmobile bind instead of ebitenmobile build? You tried build at https://github.com/hajimehoshi/ebiten/issues/2085#issuecomment-1114908765 Sorry, I don't see the word "build" in https://github.com/hajimehoshi/ebiten/issues/2085#issuecomment-1114917037 - I thought I already reran with "bind"? I have the following v1 ones: [rpolzer@brlogenshfegle aaaaxy (git)-[mobile]-]$ ls /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/*21* /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang++ /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi21-clang /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi21-clang++ /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android21-clang /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android21-clang++ /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang /home/rpolzer/Android/Sdk/ndk/24.0.8215888/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang++ So yes, it should work... I'm not 100% sure but this might be an issue in gomobile rather than ebitenmobile, as ebitenmobile is just a (thin?) wrapper of gomobile. Yes, I do suspect this to be a gomobile issue of some sort - however "gomobile build" does work. My suspicion is that some code already runs before the -apiversion flag is parsed. On gomobile i found: https://github.com/golang/go/issues/52470 This issue also looks daunting: https://github.com/golang/go/issues/38439 - if this is true, then gomobile apps are no longer allowed on the Play Store due to only supporting the old v1 signature scheme. https://github.com/golang/go/issues/52470 Thanks. I reviewed the proposal. I'll update Ebiten as soon as possible after this is merged. I am not 100% convinced the proposed fix will be sufficient here, as I still don't know why it doesn't use the -androidapi flag and don't see anything in there fixing that part. I'm going to see if I can try out the change though. 
Seems like the proposed fix does help, although one still needs to pass -androidapi (but at least it's finally being respected): [rpolzer@carbonic go-inovation (git)-[main]-]$ EBITENMOBILE_GOMOBILE=$HOME/src/gomobile ANDROID_NDK_HOME=$HOME/Android/Sdk/ndk/24.0.8215888 ~/src/ebiten/cmd/ebitenmobile/ebitenmobile bind -target android -javapkg com.hajimehoshi.goinovation -o ./mobile/android/inovation/inovation.aar ./mobile gomobile: ANDROID_NDK_HOME specifies /home/rpolzer/Android/Sdk/ndk/24.0.8215888, which is unusable: unsupported API version 16 (not in 19..32) [rpolzer@carbonic go-inovation (git)-[main]-]$ EBITENMOBILE_GOMOBILE=$HOME/src/gomobile ANDROID_NDK_HOME=$HOME/Android/Sdk/ndk/24.0.8215888 ~/src/ebiten/cmd/ebitenmobile/ebitenmobile bind -target android -javapkg com.hajimehoshi.goinovation -o ./mobile/android/inovation/inovation.aar -androidapi 21 ./mobile [rpolzer@carbonic go-inovation (git)-[main]-]$ ls -la ./mobile/android/inovation/inovation.aar -rw-r----- 1 rpolzer primarygroup 27998497 May 3 05:31 ./mobile/android/inovation/inovation.aar Can we change the default of -androidapi to match the current NDK, or at least document this need in Ebiten's gomobile docs? 19 does seem to work as well, even though some platforms only start at 21. But IIRC this is because gomobile has hardcoded min versions per platform. Can we change the default of -androidapi to match the current NDK Which gomobile or ebitenmobile are you suggesting to fix? As I do not know the design of the two well enough - not sure. I'd hope for gomobile to fix this (i.e. it should "work out of the box" with whatever the current NDK is, especially given Android Studio does not offer previous NDK versions to install), but of course gomobile also has a desire to support previous NDKs - so just hardcoding the new minimum versions won't be quite right. Not sure if they have some compatibility promise that also prevents them from just bumping the flag default of -androidapi. What Ebiten however can do is to just start using the -androidapi option in the documentation's example command lines. In many cases you'll want to set that explicitly anyway (e.g. the Google Play docs say that to get on the Play Store, you soon need to target at least version 31: https://developer.android.com/google/play/requirements/target-sdk).
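For reference, the errors quoted above come from gomobile composing a compiler name of the form <triple><api>-clang and giving up when that exact file is missing from the NDK's bin directory. Below is a rough, illustrative Go sketch (not part of ebitenmobile or gomobile) that scans an installed NDK and prints which API levels each target triple actually ships; the ANDROID_NDK_HOME variable and the prebuilt/linux-x86_64 path segment are assumptions based on the listings in this thread (use darwin-x86_64 on macOS).

// ndkapis.go: rough sketch, lists the Android API levels an NDK toolchain
// provides per target triple by scanning the prebuilt bin directory.
// The directory layout is assumed from the listings quoted in this thread.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
)

func main() {
	bin := filepath.Join(os.Getenv("ANDROID_NDK_HOME"),
		"toolchains/llvm/prebuilt/linux-x86_64/bin") // darwin-x86_64 on macOS
	entries, err := os.ReadDir(bin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// e.g. "i686-linux-android21-clang" -> triple "i686-linux-android", api "21"
	re := regexp.MustCompile(`^(.*?)(\d+)-clang$`)
	apis := map[string][]string{}
	for _, e := range entries {
		if m := re.FindStringSubmatch(e.Name()); m != nil {
			apis[m[1]] = append(apis[m[1]], m[2])
		}
	}
	for triple, levels := range apis {
		fmt.Println(triple, levels)
	}
}

Running something like this against the NDK 24 install above would show, for example, that i686-linux-android only starts at API 19, which is why a default of 16 cannot resolve a compiler there.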
2025-04-01T06:38:54.424852
2023-06-23T16:40:20
1771746262
{ "authors": [ "divVerent", "hajimehoshi" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6630", "repo": "hajimehoshi/ebiten", "url": "https://github.com/hajimehoshi/ebiten/pull/2681" }
gharchive/pull-request
Fix antialias region size What issue is this addressing? Closes #2679 What type of issue is this addressing? bug What this PR does | solves Fixes coordinate transformation math on dstRegion. Previous code had x1 and y1 shifted by exactly (i.region.X, i.region.Y), which allowed it to exceed the width and height of the destination image. iOS MTLDebugRenderCommandEncoder detects this issue and crashes. No known test case for OpenGL or similar, as the rectangle is always strictly larger than or equal to the correct rectangle. It is POSSIBLE, but not reproduced, that this can in OpenGL cause DrawTriangles to draw on a different image than the destination image if they share an atlas. However, as this depends a lot on how the atlas is built, I am not sure if it can actually be triggered reliably, or even at all. LGTM. If we find it hard to add a test, it's OK, let's merge this without tests. What do you think? I tried finding a test case for this by detecting overwriting of pixels on other textures, however it appears that the texture that antialiasing uses doesn't end up on the atlas of the NewImage textures. Don't know why. If this can't be done, then yeah, let's merge this without test cases. At the very least we have test cases to ensure this isn't a regression. A destination image is often a different texture from the source, and I understand it is pretty hard to make a reliable test. OK, let's merge this without a test. Found the reason: https://github.com/hajimehoshi/ebiten/blob/main/internal/ui/image.go#L91 So the backing offscreen is only ever unmanaged or volatile. And only regular images ever end up on an atlas. So there's never any texture sharing, and in OpenGL, writing out of bounds of a texture is harmless - so yeah, no test case possible, except if we can produce the Metal crash by forcing the same debug backend. All good then - thanks! The other fact that makes this "untestable" is that region.X and region.Y are elsewhere clamped to be at least zero. Thus, the wrong region is only ever bigger, never smaller, than the correct region. Making the bug 100% harmless on OpenGL backends, except maybe a bit more wasteful regarding dirtying of texture space or similar.
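As a rough illustration of the fix described above (not the actual ebiten code; the rect type, the field names, and the clampToBounds function are hypothetical), the idea is to clamp the destination region so its far corner can never exceed the destination image's width and height:

// region_clamp.go: illustrative only. Clamp a destination rectangle so it
// never extends past the destination image, which is what the Metal debug
// layer (MTLDebugRenderCommandEncoder) rejects. Names are hypothetical.
package main

import "fmt"

type rect struct {
	x, y, width, height float32
}

// clampToBounds shrinks r so that it stays within a w x h destination image.
func clampToBounds(r rect, w, h float32) rect {
	x0 := max32(r.x, 0)
	y0 := max32(r.y, 0)
	x1 := min32(r.x+r.width, w)  // the buggy math kept the region offset here,
	y1 := min32(r.y+r.height, h) // letting x1/y1 exceed the image size
	return rect{x: x0, y: y0, width: x1 - x0, height: y1 - y0}
}

func min32(a, b float32) float32 {
	if a < b {
		return a
	}
	return b
}

func max32(a, b float32) float32 {
	if a > b {
		return a
	}
	return b
}

func main() {
	dst := clampToBounds(rect{x: 8, y: 8, width: 32, height: 32}, 24, 24)
	fmt.Printf("%+v\n", dst) // {x:8 y:8 width:16 height:16}
}

With the earlier math the far corner kept the (region.X, region.Y) offset, so x1/y1 could land outside the image even though the near corner was already clamped to zero, which matches the "only ever bigger, never smaller" observation above.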
2025-04-01T06:38:54.429387
2018-04-17T20:59:30
315233273
{ "authors": [ "JacquesCarette", "maymoo99" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6631", "repo": "hakaru-CS4ZP6/hakaru", "url": "https://github.com/hakaru-CS4ZP6/hakaru/issues/49" }
gharchive/issue
Inlining expected files to pass the 0-test, import feature and hk-maple As per https://github.com/hakaru-dev/hakaru/pull/154 the issues with failing the 0-test were solved by inlining everything. Most of the tests I've written related to the exponential distribution had the same issues and I solved them the same way. Ideally, we would have used an import feature to call distributions directly from the stdlib to write these test case files. I'd expect the exact same issues to arise in this case, making import useless for writing expected files. Isn't this inability to automatically inline a problem then? Not an issue for our project, but something I wanted to bring up in case it hasn't been considered. Writing expected test files is inherently a manual thing. You're testing that Simplify does all sorts of operations correctly, including inlining. So you need to write the expected files 'by hand'. You can, of course, use some machine assistance in deriving things.
2025-04-01T06:38:54.436380
2016-11-03T21:54:28
187200837
{ "authors": [ "haleksandre", "sidor555" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6632", "repo": "haleks/laravel-markdown", "url": "https://github.com/haleks/laravel-markdown/issues/8" }
gharchive/issue
config/markdown.php not created I have installed it in Laravel 5.3 but it's not working; even config/markdown.php was not created. Is there any solution? Have you pulled in the latest version? (0.3.0) I've noticed that in the documentation I'm still referring to 0.1.*, but at the time of that version Laravel 5.3 wasn't out yet. I will update the documentation shortly.
2025-04-01T06:38:54.439917
2023-10-09T17:04:24
1933474905
{ "authors": [ "toridoriv" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6633", "repo": "halfmoonui/halfmoon", "url": "https://github.com/halfmoonui/halfmoon/issues/142" }
gharchive/issue
Website is down I went to check the documentation and realized the website is down. It seems to be down for everyone 😞 Status on Is It Down Right Now It's working now 😅 I'll close the issue.
2025-04-01T06:38:54.468233
2022-08-04T23:38:25
1329279619
{ "authors": [ "hamboneZA" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:6634", "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/2108" }
gharchive/issue
⚠️ ClamAV has degraded performance In 2418bae, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 88 ms Resolved: ClamAV performance has improved in 04cc147.