117073907
Club Leader Pairing

From @hellyeah on September 28, 2015 16:40
Copied from original issue: hackedu/hackedu#272
As discussed, @gemmabusoni will be reaching out to @Bogidon to pair on planning.md and recap.md.

From @gemmabusoni on September 30, 2015 5:35
@jonleung mentioned that Bogdan is rather swamped this week. Haven't received feedback as to who I should potentially be partnering with instead.

From @hellyeah on October 21, 2015 16:08
@gemmabusoni how's this going?

From @gemmabusoni on October 9, 2015 16:15
Call with Jeremy scheduled for 3pm Saturday. I'll be taking it from CalHacks, in case anyone else wanted to be involved.

From @hellyeah on October 7, 2015 16:04
Gemma and Jeremy (Lowell)

From @gemmabusoni on October 25, 2015 6:16
@hellyeah Call was done on said day; he submitted to GitHub as well.

From @MaxWofford on October 25, 2015 12:54
@gemmabusoni Would you please link that here?

Closing this because it no longer seems relevant.
gharchive/issue
2015-11-16T08:24:15
2025-04-01T06:44:23.890192
{ "authors": [ "jonleung", "zachlatta" ], "repo": "hackedu/meta", "url": "https://github.com/hackedu/meta/issues/297", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2125301365
How to serialize [][] using Block and Flow style nesting

I encountered some problems when implementing Matrix4x4Formatter. I would like to ask how to implement it. First I implemented Vector4Formatter:

```csharp
using System.Collections.Generic;
using UnityEngine;
using VYaml.Emitter;
using VYaml.Parser;

namespace VYaml.Serialization
{
    public class Vector4Formatter : IYamlFormatter<Vector4>
    {
        public static readonly Vector4Formatter Instance = new();

        public void Serialize(ref Utf8YamlEmitter emitter, Vector4 value, YamlSerializationContext context)
        {
            var fs = new[] { value.x, value.y, value.z, value.w };
            emitter.BeginSequence(SequenceStyle.Flow);
            foreach (var f in fs) emitter.WriteFloat(f);
            emitter.EndSequence();
        }

        public Vector4 Deserialize(ref YamlParser parser, YamlDeserializationContext context)
        {
            if (parser.IsNullScalar())
            {
                parser.Read();
                return default;
            }

            var formatter = context.Resolver.GetFormatterWithVerify<List<float>>();
            var list = context.DeserializeWithAlias(formatter, ref parser);
            if (list.Count == 4) return new Vector4(list[0], list[1], list[2], list[3]);
            if (list.Count == 3) return new Vector4(list[0], list[1], list[2]);
            if (list.Count == 2) return new Vector4(list[0], list[1]);
            return default;
        }
    }
}
```

Then I implemented Matrix4x4Formatter based on this:

```csharp
using System.Collections.Generic;
using UnityEngine;
using VYaml.Emitter;
using VYaml.Parser;

namespace VYaml.Serialization
{
    public class Matrix4x4Formatter : IYamlFormatter<Matrix4x4>
    {
        public static readonly Matrix4x4Formatter Instance = new();

        public void Serialize(ref Utf8YamlEmitter emitter, Matrix4x4 value, YamlSerializationContext context)
        {
            var v4s = new[]
            {
                value.GetColumn(0), value.GetColumn(1), value.GetColumn(2), value.GetColumn(3)
            };
            context.Serialize(ref emitter, v4s);

            // emitter.BeginSequence();
            // foreach (var v4 in v4s)
            // {
            //     context.Serialize(ref emitter, v4);
            // }
            // emitter.EndSequence();

            // var fs = new[]
            // {
            //     new[] { value.m00, value.m10, value.m20, value.m30 },
            //     new[] { value.m01, value.m11, value.m21, value.m31 },
            //     new[] { value.m02, value.m12, value.m22, value.m32 },
            //     new[] { value.m03, value.m13, value.m23, value.m33 }
            // };
            //
            // emitter.BeginSequence();
            // foreach (var f in fs) Utils.WriteFloatArrayWithFlowStyle(ref emitter, f, context);
            // emitter.EndSequence();
        }

        public Matrix4x4 Deserialize(ref YamlParser parser, YamlDeserializationContext context)
        {
            if (parser.IsNullScalar())
            {
                parser.Read();
                return default;
            }

            var formatter = context.Resolver.GetFormatterWithVerify<List<Vector4>>();
            var list = context.DeserializeWithAlias(formatter, ref parser);
            if (list.Count == 4) return new Matrix4x4(list[0], list[1], list[2], list[3]);
            return default;
        }
    }
}
```

But the YAML string obtained is similar to this:

```yaml
matrix4X4: - [1, 0, 0, 0] - [0, 1, 0, 0] - [0, 0, 1, 0] - [0, 0, 0, 1]
```

As a result, it is impossible to deserialize.

I solved the problem temporarily, but I think it is a bug, and automatic line wrapping would be the correct behavior:

```csharp
public class Matrix4x4Formatter : IYamlFormatter<Matrix4x4>
{
    public static readonly Matrix4x4Formatter Instance = new();

    public void Serialize(ref Utf8YamlEmitter emitter, Matrix4x4 value, YamlSerializationContext context)
    {
        var fs = new[]
        {
            new[] { value.m00, value.m10, value.m20, value.m30 },
            new[] { value.m01, value.m11, value.m21, value.m31 },
            new[] { value.m02, value.m12, value.m22, value.m32 },
            new[] { value.m03, value.m13, value.m23, value.m33 }
        };

        emitter.BeginSequence();
        emitter.WriteRaw(ReadOnlySpan<byte>.Empty, false, true);
        foreach (var f in fs) Utils.WriteFloatArrayWithFlowStyle(ref emitter, f, context);
        emitter.EndSequence();
    }

    public Matrix4x4 Deserialize(ref YamlParser parser, YamlDeserializationContext context)
    {
        if (parser.IsNullScalar())
        {
            parser.Read();
            return default;
        }

        var formatter = context.Resolver.GetFormatterWithVerify<List<Vector4>>();
        var list = context.DeserializeWithAlias(formatter, ref parser);
        if (list.Count == 4) return new Matrix4x4(list[0], list[1], list[2], list[3]);
        return default;
    }
}

public static void WriteFloatArrayWithFlowStyle(ref Utf8YamlEmitter emitter, IEnumerable<float> value, YamlSerializationContext context)
{
    emitter.BeginSequence(SequenceStyle.Flow);
    foreach (var f in value) emitter.WriteFloat(f);
    emitter.EndSequence();
}
```

Thanks for the report. I'll fix this later.

My relevant code is at VYaml.UnityResolvers, you can use Matrix4x4 or List for testing.

@hadashiA I have done some testing and the following code temporarily solves the issue, but it may not be completely correct. I hope this can be helpful.
https://github.com/hadashiA/VYaml/blob/fef8dd721c6ee9f9cf0be84d5bd82e794e58485f/VYaml/Emitter/Utf8YamlEmitter.cs#L141-L151

```csharp
case EmitState.BlockSequenceEntry:
{
    switch (PreviousState)
    {
        case EmitState.BlockMappingValue:
            if (IsFirstElement) WriteRaw1(YamlCodes.Lf);
            break;
        case EmitState.BlockSequenceEntry:
            if (!IsFirstElement)
                for (int i = 0; i < options.IndentWidth; i++)
                    WriteRaw1(YamlCodes.Space);
            break;
    }

    var output = writer.GetSpan(currentIndentLevel * options.IndentWidth + BlockSequenceEntryHeader.Length + 1);
    var offset = 0;
    WriteIndent(output, ref offset);
    BlockSequenceEntryHeader.CopyTo(output[offset..]);
    offset += BlockSequenceEntryHeader.Length;
    output[offset++] = YamlCodes.FlowSequenceStart;
    writer.Advance(offset);
    break;
}
```

@hadashiA There is another issue: when serializing structures like List<float[][]>, you will get the following result:

```
- -[1, 0, 0, 0] -[1, 0, 0, 0] -[1, 0, 0, 0] -[1, 0, 0, 0]
- -[1, 0, 0, 0] -[1, 0, 0, 0] -[1, 0, 0, 0] -[1, 0, 0, 0]
```

@MonoLogueChi Thanks for the report. I believe I fixed your test case in #93. I'll probably release it as soon as I have a few more validations.
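For reference, the intended output shape discussed above - a block sequence whose items are flow sequences - can be sketched outside of VYaml with plain string building. This is a hypothetical helper written in Go for illustration, not part of VYaml or the game code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// matrixToYAML renders a 4x4 matrix as a YAML block sequence whose
// entries are flow sequences, i.e. the shape the fixed formatter
// is expected to emit:
//   - [1, 0, 0, 0]
//   - [0, 1, 0, 0]
//   ...
func matrixToYAML(m [4][4]float64) string {
	var b strings.Builder
	for _, row := range m {
		parts := make([]string, len(row))
		for i, f := range row {
			// 'g' with precision -1 keeps 1.0 as "1", matching the thread's output.
			parts[i] = strconv.FormatFloat(f, 'g', -1, 64)
		}
		b.WriteString("- [" + strings.Join(parts, ", ") + "]\n")
	}
	return b.String()
}

func main() {
	identity := [4][4]float64{{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}}
	fmt.Print(matrixToYAML(identity))
}
```

The key point is the newline after each flow sequence: without it, the entries run together on one line and the result is no longer parseable as a block sequence, which is exactly the bug reported here.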
Thanks, my test is normal.
https://github.com/u2sb/VYaml.UnityResolvers/blob/fc320766843a4b8052a6d51ab8c7caf351967506/Assets/Tests/VYamlTester.cs#L145-L173

output:

```yaml
Matrix4x4:
- - [-7.782528, 37.35536, -47.91584, 0]
  - [43.59058, 42.45243, -28.33409, 0]
  - [37.03134, -86.71092, -3.385229, 0]
  - [12.82171, 1.237767, -83.08399, 0]
- - [2.874351, -47.73241, -64.72952, 0]
  - [34.91239, -64.36443, -55.1876, 0]
  - [65.48726, -39.25167, -24.1115, 0]
  - [28.50582, 6.619106, -85.29636, 0]
```

https://github.com/u2sb/VYaml.UnityResolvers/blob/fc320766843a4b8052a6d51ab8c7caf351967506/Assets/Tests/VYamlTester.cs#L99-L116

output:

```yaml
RectOffset:
- [97, 93, 94, 30]
- [8, 47, 91, 2]
```
gharchive/issue
2024-02-08T14:23:00
2025-04-01T06:44:23.989476
{ "authors": [ "MonoLogueChi", "hadashiA" ], "repo": "hadashiA/VYaml", "url": "https://github.com/hadashiA/VYaml/issues/92", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
128923181
Recognizing file type when downloaded from google.

I'm attempting to download a simple google excel spreadsheet, but read_excel is not recognizing it and producing this message: Not an excel file

```r
library(downloader)
library(readxl)
leFile = "http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=xls"
tmp = tempfile(fileext=".xls")
download(leFile, destfile=tmp)
le = read_excel(tmp)
```

The resulting tempfile, though, can be opened by Excel.

@jnpaulson even though readxl can read in excel files, it doesn't mean it can read in every file that MS Excel can open. I don't think readxl supports google sheets at this time. I'd recommend checking out this link and using the package listed to read in and work with google sheets.

If I try to open the tempfile with extension .xls, as described above, Excel complains about the file format and file extension being mismatched. The above works for me if I change the extension of the downloaded file: fileext = ".xlsx".
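The mismatch the last commenter hit can be detected up front by sniffing the file's magic bytes instead of trusting the extension: legacy .xls files start with the OLE2/Compound File signature, while .xlsx files are ZIP archives. A minimal sketch of the technique (in Go, purely for illustration; readxl itself is an R package and does its own format detection):

```go
package main

import (
	"bytes"
	"fmt"
)

// sniffExcelKind guesses the real container format from the first bytes
// of a downloaded file: legacy .xls uses the OLE2/CFB signature
// (D0 CF 11 E0 A1 B1 1A E1), while .xlsx (like a Google Sheets export)
// is a ZIP archive starting with "PK\x03\x04".
func sniffExcelKind(head []byte) string {
	ole2 := []byte{0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1}
	zipSig := []byte{'P', 'K', 0x03, 0x04}
	switch {
	case bytes.HasPrefix(head, ole2):
		return "xls"
	case bytes.HasPrefix(head, zipSig):
		return "xlsx (zip container)"
	default:
		return "unknown"
	}
}

func main() {
	// A file served with a .xls extension but ZIP content is really .xlsx,
	// which is why renaming to .xlsx made read_excel succeed in the thread.
	fmt.Println(sniffExcelKind([]byte{'P', 'K', 0x03, 0x04, 0x14, 0x00}))
}
```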
gharchive/issue
2016-01-26T19:50:11
2025-04-01T06:44:24.003343
{ "authors": [ "audiolion", "jennybc", "jnpaulson" ], "repo": "hadley/readxl", "url": "https://github.com/hadley/readxl/issues/158", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
819935867
Test createMajorPath( ) in fileHandling.go #22

Test written and running successfully

Codecov Report

:exclamation: No coverage uploaded for pull request base (main@7ec9abe). The diff coverage is n/a.

```diff
@@           Coverage Diff           @@
##             main     #23   +/-   ##
======================================
  Coverage        ?   4.10%
======================================
  Files           ?       7
  Lines           ?     292
  Branches        ?       0
======================================
  Hits            ?      12
  Misses          ?     276
  Partials        ?       4
```

Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7ec9abe...6295343.
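The 4.10% figure follows directly from the counts in the table (12 hits out of 292 lines), assuming Codecov truncates rather than rounds to two decimals; a quick sketch of that arithmetic:

```go
package main

import (
	"fmt"
	"math"
)

// truncatedCoverage computes hits/lines as a percentage truncated to two
// decimal places (12/292 = 4.1095...% -> 4.10%). Truncation is an
// assumption here; rounding would give 4.11% instead.
func truncatedCoverage(hits, lines float64) float64 {
	return math.Floor(hits/lines*100*100) / 100
}

func main() {
	fmt.Printf("%.2f%%\n", truncatedCoverage(12, 292))
}
```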
gharchive/pull-request
2021-03-02T11:39:30
2025-04-01T06:44:24.011848
{ "authors": [ "KaHBrat", "codecov-io" ], "repo": "haevg-rz/go-updater", "url": "https://github.com/haevg-rz/go-updater/pull/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
826839958
I can't get gii to work. It goes to a 403 error. Can you help please? Thanks.

Thank you for your reply. I try to research first before I bother someone - I will try over 100 links before I contact someone. And that link was one of my first. But I happily found the issue: it was another extension that caused Gii not to work. I kept wondering why I was consistently "forbidden" on it, so I figured it might be another access extension that I used. Sure enough, once I removed it, Gii worked. But I want to say, thank you so much for responding. I really appreciate it. Have a great day!

P.S. I really love your work.
gharchive/issue
2021-03-10T00:15:31
2025-04-01T06:44:24.035871
{ "authors": [ "hellenas" ], "repo": "hail812/yii2-adminlte3", "url": "https://github.com/hail812/yii2-adminlte3/issues/5", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
203842622
Add generating CropObject UIDs to AnnotatorModel

When adding a CropObject and assigning an objid, the model should also directly assign a uid. The global namespace should be a config value (there is no way for MUSCIMarker to tell which dataset is being annotated). The document namespace should be somehow derived from the filename by default. However, it should also perhaps be possible to ask for one - perhaps a config value that on the default setting copies the current CropObjectList filename? (But what if it changes on export...?)

Done.
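A sketch of what such UID assignment could look like. The separator, the "global___document___objid" layout, and the filename-derived document namespace are hypothetical choices for illustration, not MUSCIMarker's actual scheme:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// makeUID builds a globally unique CropObject identifier from a
// configured global namespace, a document namespace derived from the
// annotation filename (base name without extension), and the
// per-document objid. The "___" separator is an assumption.
func makeUID(globalNS, filename string, objid int) string {
	base := filepath.Base(filename)
	doc := strings.TrimSuffix(base, filepath.Ext(base))
	return fmt.Sprintf("%s___%s___%d", globalNS, doc, objid)
}

func main() {
	// Hypothetical dataset namespace and annotation file:
	fmt.Println(makeUID("MFF-MUSCIMA", "/data/page_03.xml", 42))
}
```

Deriving the document namespace from the filename keeps UIDs stable across sessions, but as the issue notes, it breaks if the CropObjectList is exported under a different name.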
gharchive/issue
2017-01-29T00:51:12
2025-04-01T06:44:24.049072
{ "authors": [ "hajicj" ], "repo": "hajicj/MUSCIMarker", "url": "https://github.com/hajicj/MUSCIMarker/issues/158", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1299405753
internal/graphicsdriver/directx: low performance

Easiest way to reproduce for now is with my game AAAAXY, which currently (for this reason) defaults to OpenGL rather than DirectX:

Download AAAAXY from https://github.com/divVerent/aaaaxy/ (latest release will do).
Also download the file "benchmark.dem" from the github repository, and put it next to the exe file.
In PowerShell, run:

```powershell
$Env:EBITEN_GRAPHICS_LIBRARY = "opengl"
Measure-Command { .\aaaaxy-windows-amd64.exe -load_config=false -demo_play="benchmark.dem" -demo_timedemo -vsync=false }
...
TotalMilliseconds : 23103.9543

$Env:EBITEN_GRAPHICS_LIBRARY = "directx"
Measure-Command { .\aaaaxy-windows-amd64.exe -load_config=false -demo_play="benchmark.dem" -demo_timedemo -vsync=false }
...
TotalMilliseconds : 39587.4662
```

(To view runtime fps, run .\aaaaxy-windows-amd64.exe -load_config=false -vsync=false -show_fps, which shows me 110fps at the start of the game in OpenGL, and 19fps in DirectX - the lower difference in TotalMilliseconds is primarily due to loading time "equalizing" things somewhat.)

Issue may be GPU specific though - I have this issue on one of these: https://www.amazon.com/2019office】-Ultra-Light-High-Speed-High-Performance-Notebook/dp/B09CQ22335/ref=sr_1_3?keywords=7+inch+laptop&qid=1657310835&sr=8-3 - according to Device Manager I have an Intel(R) HD Graphics 500.

-vsync=false is most certainly not at fault - with vsync on, I can't reach 60fps either, which is very noticeable.

FYI on Linux, the same device shows 140fps at the starting point, and this benchmark takes 16.779 seconds wall clock time. glxinfo calls the GPU a "Mesa Intel(R) HD Graphics 500 (APL 2)".

https://github.com/divVerent/aaaaxy/blob/main/nodirectx_windows.go - FYI my workaround to default to OpenGL until this is resolved.

What draw calls are executed? You can see them with -tags=ebitendebug.
I failed to execute your aaaaxy:

```
2022/07/09 11:43:49.383029 [ERROR] cannot open out my version: could not open local:/generated/version.txt: open third_party/yd_pressure/assets/generated/version.txt: no such file or directory
goroutine 1 [running, locked to thread]:
runtime/debug.Stack()
        /usr/local/go/src/runtime/debug/stack.go:24 +0x65
runtime/debug.PrintStack()
        /usr/local/go/src/runtime/debug/stack.go:16 +0x19
github.com/divVerent/aaaaxy/internal/log.Fatalf({0x44f0a3d, 0x1d}, {0xc00013bf48, 0x1, 0x1})
        /Users/hajimehoshi/ebitengine-games/aaaaxy/internal/log/log.go:101 +0x3c
main.main()
        /Users/hajimehoshi/ebitengine-games/aaaaxy/main.go:98 +0x135
2022/07/09 11:43:49.383077 [FATAL] could not initialize game: could not initialize version: could not open local:/generated/version.txt: open third_party/yd_pressure/assets/generated/version.txt: no such file or directory
exit status 125
```

This likely means you got the wrong binary - the one from GitHub Actions requires a source checkout that has performed "make generate" with the correct GOOS and GOARCH. To reproduce, the binary here will work: https://github.com/divVerent/aaaaxy/releases/download/v1.2.141/aaaaxy-windows-amd64-v1.2.141.zip (just tested that on my Windows box).

Nevertheless, now building a "release" with ebitendebug in it so I can run that on Windows (don't have a dev environment there).

Uploaded an ebitendebug build on https://drive.google.com/drive/folders/1QfiiH53DsoV48EKIXF3U9V9yxVaR7txb?usp=sharing - will test it on the machines when I find time to see if anything suspicious is in the render calls list.
Typical draw call list on Linux/OpenGL (did a force-quit while the game screen was open so the blur behind the menu doesn't show up):

```
Update count per frame: 1
Internal image sizes:
  2: (16, 16)
  3: (16, 16)
  4: (1024, 512)
  5: (1024, 512)
  6: (2048, 1024)
  7: (1680, 1050)
  8: (2048, 2048)
  10: (1024, 512)
  11: (1024, 512)
  12: (1024, 512)
  13: (1024, 512)
  14: (128, 16)
Graphics commands:
  draw-triangles: dst: 11 <- src: [8, (nil), (nil), (nil)], dst region: (x:1, y:1, width:640, height:360), num of indices: 6, colorm: {}, mode: copy, filter: nearest, address: unsafe, even-odd: false
  draw-triangles: dst: 11 <- src: [8, (nil), (nil), (nil)], dst region: (x:1, y:1, width:640, height:360), num of indices: 1980, colorm: {}, mode: source-over, filter: nearest, address: unsafe, even-odd: false
  draw-triangles: dst: 12 <- src: [8, (nil), (nil), (nil)], dst region: (x:1, y:1, width:640, height:360), num of indices: 6, colorm: {}, mode: copy, filter: nearest, address: unsafe, even-odd: false
  draw-triangles: dst: 12 <- src: [8, (nil), (nil), (nil)], dst region: (x:1, y:1, width:640, height:360), num of indices: 1929, colorm: {}, mode: source-over, filter: nearest, address: unsafe, even-odd: false
  draw-triangles: dst: 13, shader, num of indices: 6, mode copy
  draw-triangles: dst: 12, shader, num of indices: 6, mode copy
  draw-triangles: dst: 4, shader, num of indices: 6, mode copy
  draw-triangles: dst: 13, shader, num of indices: 6, mode copy
  draw-triangles: dst: 10, shader, num of indices: 6, mode copy
  draw-triangles: dst: 5, shader, num of indices: 6, mode copy
  draw-triangles: dst: 6, shader, num of indices: 6, mode copy
  draw-triangles: dst: 7 (screen) <- src: [8, (nil), (nil), (nil)], dst region: (x:0, y:0, width:1680, height:1050), num of indices: 6, colorm: {}, mode: copy, filter: nearest, address: unsafe, even-odd: false
  draw-triangles: dst: 7 (screen) <- src: [6, (nil), (nil), (nil)], dst region: (x:0, y:0, width:1680, height:1050), num of indices: 6, colorm: {}, mode: copy, filter: nearest, address: unsafe, even-odd: false
```

This matches my expectations - there is screen clearing, tiles rendering, polygon rendering for the visible area, blurring that polygon, mixing the two together with the previous frame, blurring the output for the next frame, and finally copying all that stuff to the screen with a CRT filter.

Haven't checked yet if it looks any different when using DirectX.

The render call list seems to be the same when using the DirectX backend. I am sure I am using the backend because whenever I launch with DirectX, at early startup there is a white rectangle on the screen where my command prompt was - with OpenGL this doesn't happen.

From your result of ebitendebug, there is nothing odd. I'd like to modify and try your aaaaxy on my local machine (macOS). Would it be possible to build it myself?

@divVerent Could you try a32a137fa805f8dca08e499a85f6e84fb96361c8? Thanks,

Current profiling result (a32a137fa805f8dca08e499a85f6e84fb96361c8, -vsync=false)

I will try your change - I do not think this issue is vsync=off specific; however, "unnecessary flushes" is certainly a possibility.
> Although I'd be surprised if this is due to ReadPixels/ReplacePixels being on a different command chain - I never do those outside precaching at the start of the game or text rendering in my menu (in-game text is precached too to avoid performance loss).

Before the fix, commands were flushed every time DrawTriangles was called, regardless of copyCommandList usage. So, even though you don't call ReadPixels/ReplacePixels (and actually you don't call them), commands were flushed and then waiting happened unnecessarily.

Note: I cannot patch https://github.com/hajimehoshi/ebiten/commit/a32a137fa805f8dca08e499a85f6e84fb96361c8 on top of Ebiten v2.3.5, but I am going to retest against Ebiten main, which contains the change as well as https://github.com/hajimehoshi/ebiten/commit/0035ba0bd1a35c4a27c2933af17276af7b7b7e1d.

Note that I don't plan to backport this change to the 2.3 branch as this is just a performance improvement.

With your changes I now get 35fps at game start (OpenGL remains at 110fps). Way better than 19fps, so the flushing fixes certainly helped, but I'd really like to get up to 60fps before I can make DirectX mode default. At fastest render settings (in the menu: graphics=SVGA, quality=Lowest) I now get 150fps with DirectX, 215fps with OpenGL. Phasing up render settings on DirectX again:

- Moving graphics back to VGA (enabling dither.kage.tmpl) moves to 145fps.
- Moving quality up to low - still 145fps.
- Moving quality up to medium - 105fps, so this is a sizable change.
- Moving quality up to high - 100fps, so it didn't matter.
- Moving quality up to max - 35fps, that change is huge.

So the big steps are from low to medium, and from high to max. Looking in the source code (https://github.com/divVerent/aaaaxy/blob/0878d763d4bedad077d9416eaa13b2bd5e3251c3/internal/menu/settings.go#L255), they are:

- high to max: screen_filter from simple to linear2xcrt, i.e. using my CRT shader. No other delta.
- low to medium: screen_filter from nearest to simple, and draw_blurs from false to true.

Peculiarly, though, if I move quality to max but graphics to SVGA, I also get 100fps, which is very much acceptable. So the complex dither shader is expensive, and I can have either the dither shader or the CRT shader active, but not both, if I want to stay above 60fps. I wonder if the reality is that all complex shaders are more expensive in DirectX than in OpenGL mode, and that there is also a hard cap on the framerate (in OpenGL, at lowest possible settings, I can reach 220fps at most, BTW, but I bet it's then simply CPU bound by my render code).

Thank you for the trial! I'll take a look further. My current suspect is how much the shader programs are optimized.

BTW there quite certainly are things in those shaders I could maybe write better; if it helps, here are the template settings the dither shader runs with:

```
.BayerSize = 0
.RandomDither = 0
.PlasticDither = 1
.TwoColor = 1
```

The linear2xcrt shader runs with:

```
.CRT = 1
```

(this rather complex part can be turned off by passing -screen_filter=linear2x, which makes it a fancy upscaler but no longer contains the scanline and bending effect)

I simply added an optimization flag to D3DCompile (bf0f3d304bd5c92f26d9df2b5591d1f848a255f1). I'll take a look further later (maybe tomorrow), but my current guess is that the HLSL code generated by Kage might not be good. Thank you for a lot of helpful information.

I'd be happy if you could take a look at bf0f3d304bd5c92f26d9df2b5591d1f848a255f1. Thanks,

With my Windows PC (Vaio LAPTOP-31PU6LDL), the FPS was about 70 in the original aaaaxy with Ebitengine v2.3.5, and 110 with Ebitengine bf0f3d3. The FPS was 220 with OpenGL. So, the FPS should be increased but is still 2x lower than OpenGL. I'm trying to add more optimization.
Remaining tasks I can do are:

- Reduce local variables from HLSL outputs of Kage
- Analyze HLSL assembly results

> With my Windows PC (Vaio LAPTOP-31PU6LDL), the FPS was about 70 in the original aaaaxy with Ebitengine v2.3.5, and about 100 with Ebitengine https://github.com/hajimehoshi/ebiten/commit/bf0f3d304bd5c92f26d9df2b5591d1f848a255f1. The FPS was 220 with OpenGL. So, the FPS should be increased but is still 2x lower than OpenGL.

With the current latest commit b8367da7e235036e9c1a9834de50a0a604ec69d8, aaaaxy could keep 150-200 FPS!

On my machine, with Ebiten at b8367da7e235036e9c1a9834de50a0a604ec69d8 (TODO: should verify I actually built against that and there wasn't some caching effect): the game starts out at 21fps, but if I let it sit there, it soon moves to 31fps and stays there. At SVGA/Max I get 119fps. At VGA/High I get 122fps.

This is somewhat illogical - VGA/Max settings should never take longer to render than one frame of SVGA/Max plus one frame of VGA/High, which would yield 1/(1/119+1/122) ~ 60fps, but it's substantially slower than that. Any idea how those shaders could negatively interact with each other? They're in different render passes, after all.

Confirmed I was actually including current code - the comment from b8367da7e235036e9c1a9834de50a0a604ec69d8 is in the binary I tested.

One thing I will do later (likely not before the end of next week) is experiment with my shader code, commenting things out, to see which parts are the expensive parts.
There is a way to do this without recompiling (mainly a note for myself so I know how to speed this up when I have time for it):

```shell
aaaaxy-windows-amd64 -dump_embedded_assets=data
# make edits in data/assets/shaders/*
aaaaxy-windows-amd64 -cheat_replace_embedded_assets=data
```

As for a possible interaction between the shaders: both palette reduction (enabled when graphics is set to VGA or lower) and the CRT filter (enabled at max quality) add one render pass; the former adds a 640x360->640x360 pass, and the latter adds a 640x360->intermediate_res pass and changes Ebiten's final pass from 640x360->output_res to intermediate_res->output_res (where intermediate_res is the min of 2560x1440 and output_res).

Do note that this postprocessing uses the same input as the round of the two blur render passes that remember a blurred version of the previous screen contents for the fade-out effect in the "fog of war" area. As there is no data dependency on that output within the same frame, it is conceivable that these two operations might run partially in parallel (not sure how smart DirectX is, but OpenGL probably is not smart enough to do that kind of optimization).

-draw_outside=false disables the blur pass that remembers previous screen content, but keeps the two postprocessing shaders active - above 100fps with that.

With dither.kage.tmpl neutered (all commented out, and a Fragment function added that just returns imageSrc0UnsafeAt(texCoord)), I still get ~30fps. Same treatment also done to linear2xcrt.kage.tmpl, and I get 37fps. Still nowhere near the 100fps. So now I have ruled out the contents of the shaders (as seen above, optimization did help, but only to some extent); the slowness comes from the render passes themselves.

So the FPS is still around 40 with the default state, right?

> Do note that this postprocessing uses the same input as the round of the two blur render passes that remember a blurred version of the previous screen contents for the fade-out effect in the "fog of war" area.
> As there is no data dependency on that output within the same frame, it is conceivable that these two operations might run partially in parallel (not sure how smart DirectX is, but OpenGL probably is not smart enough to do that kind of optimization).

Are there any DirectX-level debugging tools that could tell me if any such interaction might exist? Like a DirectX equivalent of apitrace?

Sorry, but I'm not familiar with DirectX tools. It is possible that OpenGL implicitly executes some commands in parallel, while DirectX doesn't unless they are explicitly ordered. And Ebitengine doesn't specify parallel executions. I'm quite confused about what kind of shaders you use and how they interact in your application... A figure would be helpful. Thanks,

> So now I have ruled out the contents of the shaders (as seen above, optimization did help, but only to some extent); the slowness comes from the render passes themselves.

Very interesting. Perhaps, does the destination size matter?

> Issue may be GPU specific though - I have this issue on one of these: https://www.amazon.com/2019office】-Ultra-Light-High-Speed-High-Performance-Notebook/dp/B09CQ22335/ref=sr_1_3?keywords=7+inch+laptop&qid=1657310835&sr=8-3 - according to Device Manager I have an Intel(R) HD Graphics 500.

Celeron J4125 has UHD Graphics 600 instead of UHD Graphics 500.
https://www.intel.com/content/www/us/en/products/sku/197305/intel-celeron-processor-j4125-4m-cache-up-to-2-70-ghz/specifications.html
Could you confirm that this is the machine you are testing?

To be clear, I got a device that looks quite much the same on AliExpress and has all connectors in the same place - I assume the Amazon one is the same, but it is possible that the innards change without the exterior look changing. My device has a Celeron J3455 according to /proc/cpuinfo, so yeah, it isn't quite the same.
On Mon, Jul 11, 2022 at 2:08 AM Hajime Hoshi @.***> wrote: Issue may be GPU specific though - I have this issue on one of these: https://www.amazon.com/2019office】-Ultra-Light-High-Speed-High-Performance-Notebook/dp/B09CQ22335/ref=sr_1_3?keywords=7+inch+laptop&qid=1657310835&sr=8-3 according to Device Manager I have an Intel(R) HD Graphics 500. Celeron J4125 has UHD Graphics 600 instead of UHD Graphics 500. https://www.intel.com/content/www/us/en/products/sku/197305/intel-celeron-processor-j4125-4m-cache-up-to-2-70-ghz/specifications.html Could you confirm that this is the machine you are testing? — Reply to this email directly, view it on GitHub https://github.com/hajimehoshi/ebiten/issues/2188#issuecomment-1180003092, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAB5NMAUNFQEJSNTFCD65PTVTO26JANCNFSM53CA22LA . You are receiving this because you were mentioned.Message ID: @.***> I'm looking for a machine with the same chipset (Celeron J3455) https://www.amazon.co.jp/dp/B0875LXTRC https://www.amazon.co.jp/dp/B096S7Y23N https://www.amazon.co.jp/dp/B0B14Z49GD https://www.amazon.co.jp/dp/B09R4FWC4D https://www.amazon.co.jp/dp/B07TXYRXW4 I am not yet sure that this is the bottleneck. Yes, removing the pass fixed framrate, but removing the one that applies the palette (even if the shader is a NOP) fixes it too. Which makes me think that the issue may be something else. Am I e.g. exceeding some limit in VRAM usage? Does it otherwise matter how many passes run? But then why is the OpenGL backend not affected equally? On Mon, Jul 11, 2022, 09:10 Hajime Hoshi @.***> wrote: Thanks. The current performance bottleneck is the existence of the shader send P to CRT shader, output is C (typically at screen res, capped to 2560x1440) and whether the shader's content is empty or not doesn't matter. Do I understand correctly? 
— Reply to this email directly, view it on GitHub https://github.com/hajimehoshi/ebiten/issues/2188#issuecomment-1180390409, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAB5NMCWY6VLL7P4FPYIDB3VTQMKPANCNFSM53CA22LA . You are receiving this because you were mentioned.Message ID: @.***> It's possible that what the OpenGL driver does more sophisticated things than I do with DirectX. I'll take a look further after the machine I ordered arrives. I now tried revamping how textures are allocated to have different strategies rather than always using the same temp texture for the same purpose - but this does not change DirectX performance at all. This is in my branch managed-offscreens in my game - I am unsure if I really want to merge that, but it eliminates two 640x360 textures by default. Not seeing any differences even now - but also, peculiarly, I cannot run dxcap.exe to get a capture of DirectX usage. In capture mode (dxcap -file aaaaxy.vsglog -c aaaaxy-windows-amd64) just hangs around. I also can no longer reproduce getting 100fps, even with the binary that I had before; I will retest later, suspecting this simply to be some background activity. PIX (https://devblogs.microsoft.com/pix/download/) shows a lot of warnings about 131 "redundant transition to unused state" in a single frame, as well as some redundant ResourceBarriers. Maybe that is related? Can't do much in PIX, this laptop has a 800x480 screen and I can't reach half the UI of it. OK so FPS doesn't change... (though I believe the fix is necessary to use GPUs correctly) "redundant transition to unused state" might be a very good hint. PIX didn't work well on my Parallels machine. I'll try the new machine (Intel HD Graphics 500) later anyway. I think I could reproduce your issue, but the situation might be different. With the quality 'max', the FPS is around 36 at the first location. With the quality 'high', the FPS is around 40. With the quality 'medium', the FPS is around 45. 
With the quality 'low', the FPS is around 45. With the quality 'lowest', the FPS is around 60. In all the cases I disabled vsync.

PIX (https://devblogs.microsoft.com/pix/download/) shows a lot of warnings about 131 "redundant transition to unused state" in a single frame, as well as some redundant ResourceBarriers. Maybe that is related?

I couldn't see such warnings. How did you see them? I realized that FPS depends on the player's position, and in some places the FPS is actually less than 30. I'll take a look further.

I launched PIX, selected the game binary and set the environment variable EBITEN_GRAPHICS_LIBRARY to directx there, then launched the game from there and once all stabilized, hit print screen. I may then have had to click something in the bottom area to let it actually play back the frame, and the warnings view then showed something - including links to click to get more warnings.

As for the numbers on your system - interesting you do not get such a sharp cutoff. I assume in OpenGL mode the framerate is substantially higher for you too? To get the test more similar, maybe try hitting F (toggle full screen) then resize the window to about 800x480 (which is all my 7" laptop does)?

On Thu, Jul 14, 2022, 03:27 Hajime Hoshi @.***> wrote: PIX (https://devblogs.microsoft.com/pix/download/) shows a lot of warnings about 131 "redundant transition to unused state" in a single frame, as well as some redundant ResourceBarriers. Maybe that is related? I couldn't see such warnings. How did you see them?

I'll try pressing print screen later, thanks.

As for the numbers on your system - interesting you do not get such a sharp cutoff.
I assume in OpenGL mode the framerate is substantially higher for you too?

Yes, higher and more stable with OpenGL.

To get the test more similar, maybe try hitting F (toggle full screen) then resize the window to about 800x480 (which is all my 7" laptop does)?

I'm already using a window mode with 1280x720 size. I'll try 800x480 later but I don't think the window size matters here.

OK, I understand the cause. The number of draw calls exceeded 16 in some scenes, then flushing happened in the middle of a frame. I increased the number of descriptor tables for one frame so that more draw calls can be executed without flushing. Please try 8d74039617fcba3a14d9701b670381c50fdd9104.

Thank you very much! I'll try - right now also dug out another old laptop that had a Windows 7 license sticker, and managed to get Windows 10 on it (a Thinkpad L440) using the UEFI-integrated license (didn't know that is even possible). DirectX performance is acceptable there. Ebiten v2.3.5 (BTW: since which version will the name "Ebitengine" be used? Would you like to apply this retroactively, like in my game credits too, or should I change the name and references when I upgrade to v2.4?) however is unable to turn off vsync (but I saw some fixes regarding that). If I however turn off vsync in DirectX mode, there are some black areas near the top of the screen - basically kinda like tearing but worse, causing even black strips. Not an issue for me though, I can just document that vsync off can cause render glitches beyond just tearing ;) but on first test, fps go in the 2xx range both in OpenGL and DirectX once I turn off vsync, which is nice. "Real" benchmark series ongoing though.

The render glitch looks like this: https://photos.app.goo.gl/S4TEg98dZtWuwxoX6 Can maybe be explained by Ebiten clearing the output then blitting, rather than just blitting, and then just normal tearing on top of that? In any case, this isn't serious at all, but kinda expected without vsync.
(BTW: since which version will the name "Ebitengine" be used? Would you like to apply this retroactively, like in my game credits too, or should I change the name and references when I upgrade to v2.4?)

Please use Ebitengine for every version.

If I however turn off vsync in DirectX mode, there's some black areas near top of the screen - basically kinda like tearing but worse, causing even black strips.

Tearing is allowed explicitly when vsync is off. I think such tearing can happen with vsync off, but should we disable this?

fps go in the 2xx range both in OpenGL and DirectX once I turn off vsync

Is this the latest version? Or v2.3?

Can maybe be explained by Ebiten clearing the output then blitting, rather than just blitting, and then just normal tearing on top of that? In any case, this isn't serious at all, but kinda expected without vsync (even though with OpenGL it doesn't happen). Could possibly be made less impacting by not clearing (or only clearing the letterbox area, not the game area itself).

Again, I allowed tearing explicitly by the DirectX API. I'm not sure whether I should disallow this again.

Oh, I didn't suggest disabling tearing. Normally tearing just contains pixels of the previous and current frame - here it also contains black pixels from neither frame. But anyway, I consider vsync off a debugging mode, not something anyone should seriously use - so I'm fine with it.

Got the "proper" results from the Thinkpad L440: DirectX is about 20% slower in all modes, but at all settings well above 60fps so I don't care. Now back to the Intel 500 laptop...

Got the "proper" results from the Thinkpad L440: DirectX is about 20% slower in all modes, but at all settings well above 60fps so I don't care.

So, is this v2.3, or the latest?

That was on latest - on v2.3.5, disabling vsync doesn't work and leaves fps capped at 60.

But anyway, I consider vsync off a debugging mode, not something anyone should seriously use - so I'm fine with it.

Acknowledged.
That was on latest - on v2.3.5, disabling vsync doesn't work and leaves fps capped at 60.

Acknowledged.

Now to the low-end laptop. Still just 20fps at the starting position. I don't quite trust this result so I am going to verify if I can see that the latest code is actually in there and I wasn't just hit by some caching artifact.

Yeah, disregard that, that was an old binary. Got suspicious when I didn't see the nasty tearing ;)

Yay! 107fps at highest settings, starting location. Sure, OpenGL still gives me 110fps in the same place - but don't care, this is now an extremely tiny difference on the low-end laptop, and both are fully playable. I think we can consider this issue fixed, and I'll switch the default to DirectX in my game once these changes are in Ebitengine mainline.

Back with vsync on, I have stable ~60 fps. As intended.

Thank you very much for confirming this is fixed! I appreciate your patience with my bunch of fixes and trials. 🥳

BTW, I am rather surprised that DirectX support went so well - like, from the very start it at least rendered everything correctly. Good job!

BTW, I am rather surprised that DirectX support went so well - like, from the very start it at least rendered everything correctly. Good job!

This is because Ebitengine uses only a limited set of DirectX features, as Ebitengine is just a 2D game library. So making the graphics driver 'just' work was not so hard, but making it work correctly was really hard as we discussed here :-)
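The descriptor-table limit discussed above can be illustrated with a toy model. This is not Ebitengine's actual code — the function name and counting rule are assumptions for illustration — but it shows why raising the per-frame descriptor budget removes the mid-frame flushes that capped the frame rate:

```python
# Toy model of the mid-frame flush behavior: a backend with only
# table_count descriptor tables per frame must flush (submit and wait)
# every time the draw-call budget is exhausted before the frame ends.

def flushes_per_frame(draw_calls: int, table_count: int) -> int:
    """Number of mid-frame flushes forced by a limited descriptor budget."""
    if table_count <= 0:
        raise ValueError("table_count must be positive")
    # A flush happens each time the budget runs out before the frame ends;
    # the final submit at end-of-frame is not counted as a *mid-frame* flush.
    return max(0, (draw_calls - 1) // table_count)

# With the old budget of 16 tables, a scene issuing 40 draw calls flushes
# twice mid-frame; a larger budget removes mid-frame flushes entirely.
print(flushes_per_frame(40, 16))  # → 2
print(flushes_per_frame(40, 64))  # → 0
```

Under this model, any scene issuing more than 16 draw calls pays at least one mid-frame flush — which matches the explanation of the slowdown and why the fix increases the number of descriptor tables per frame.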
gharchive/issue
2022-07-08T20:08:40
2025-04-01T06:44:24.116498
{ "authors": [ "divVerent", "hajimehoshi" ], "repo": "hajimehoshi/ebiten", "url": "https://github.com/hajimehoshi/ebiten/issues/2188", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
276423272
Beta gamma transforms Adding transforms to Beta and Gamma distributions Rebased and resubmitted changes
gharchive/pull-request
2017-11-23T16:05:24
2025-04-01T06:44:24.125752
{ "authors": [ "maymoo99" ], "repo": "hakaru-dev/hakaru", "url": "https://github.com/hakaru-dev/hakaru/pull/121", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1825237062
Overloaded routine names Add support for overloaded routine names, fixes #36 Functional changes in the Schema Browser Clicking a routine (function or procedure) properly expands to display the correct parameters for the selected routine Tool tips on routines include the specific name for differentiation when names are overloaded Setting the tool tip rather than changing the tree item label keeps the view cleaner yet still provides the extra info Generate SQL action changed to use the specific name so the results are only generated for the selected routine ( previously all routines with the same name were generated ) Delete action changed to use the specific name so the selected routine can be deleted ( previously delete failed when multiple routines had the same name ) Setup script for testing CREATE SCHEMA OVERLOAD; CREATE OR REPLACE FUNCTION OVERLOAD.MULTI_FUNC ( P1 INTEGER DEFAULT 13 ) RETURNS INTEGER LANGUAGE SQL MODIFIES SQL DATA CONCURRENT ACCESS RESOLUTION DEFAULT FENCED NOT DETERMINISTIC CALLED ON NULL INPUT EXTERNAL ACTION NOT SECURED SPECIFIC OVERLOAD.MULTI_SQ RETURN P1 * P1; LABEL ON ROUTINE OVERLOAD.MULTI_FUNC(INT) IS 'Accepts One Integer and Returns Its Square'; COMMENT ON PARAMETER ROUTINE OVERLOAD.MULTI_FUNC (INT) (P1 IS 'Value to Square'); CREATE OR REPLACE FUNCTION OVERLOAD.MULTI_FUNC ( P1 INTEGER DEFAULT 13, P2 INTEGER DEFAULT 74 ) RETURNS INTEGER LANGUAGE SQL MODIFIES SQL DATA CONCURRENT ACCESS RESOLUTION DEFAULT FENCED NOT DETERMINISTIC CALLED ON NULL INPUT EXTERNAL ACTION NOT SECURED SPECIFIC OVERLOAD.MULTI_PROD RETURN P1 * P2; LABEL ON ROUTINE OVERLOAD.MULTI_FUNC(INT, INT) IS 'Accepts Two Integer and Returns the Product'; COMMENT ON PARAMETER ROUTINE OVERLOAD.MULTI_FUNC (INT, INT) (P1 IS 'First Value', P2 IS 'Second Value'); /* Creating OVERLOAD.MULTI_PROC [Procedure] */ CREATE OR REPLACE PROCEDURE OVERLOAD.MULTI_PROC () LANGUAGE SQL MODIFIES SQL DATA SPECIFIC OVERLOAD.MULTI_MONS PROGRAM TYPE SUB CONCURRENT ACCESS RESOLUTION DEFAULT DYNAMIC 
RESULT SETS 1 OLD SAVEPOINT LEVEL COMMIT ON RETURN NO BEGIN DECLARE MONITORS CURSOR WITH RETURN FOR SELECT * FROM QUSRSYS.QAUGDBPMD2; OPEN MONITORS; END; /* Setting label text for OVERLOAD.MULTI_PROC */ LABEL ON SPECIFIC ROUTINE OVERLOAD.MULTI_MONS IS 'Returns the Entire List of Monitors'; /* Creating OVERLOAD.MULTI_PROC [Procedure] */ CREATE OR REPLACE PROCEDURE OVERLOAD.MULTI_PROC ( IN CREATOR CHARACTER(10) DEFAULT '' ) LANGUAGE SQL MODIFIES SQL DATA SPECIFIC OVERLOAD.MULTI_MYMONS PROGRAM TYPE SUB CONCURRENT ACCESS RESOLUTION DEFAULT DYNAMIC RESULT SETS 1 OLD SAVEPOINT LEVEL COMMIT ON RETURN NO BEGIN DECLARE MONITORS CURSOR WITH RETURN FOR SELECT * FROM QUSRSYS.QAUGDBPMD2 WHERE "Created by" = CREATOR; OPEN MONITORS; END; /* Setting label text for OVERLOAD.MULTI_PROC */ LABEL ON SPECIFIC ROUTINE OVERLOAD.MULTI_MYMONS IS 'Returns the List of My Monitors'; /* Setting comment text for OVERLOAD.MULTI_PROC */ COMMENT ON PARAMETER SPECIFIC ROUTINE OVERLOAD.MULTI_MYMONS (CREATOR IS 'User That Created the Monitor'); I think I've made the required changes. Please take another look.
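The reason the PR keys the Delete and Generate SQL actions on the specific name can be sketched with a toy lookup: an overloaded routine name alone is ambiguous, while the pair (routine name, parameter signature) maps to exactly one specific name. The dict and helper names below are hypothetical — only the schema contents mirror the test script above:

```python
# Toy resolution table for the OVERLOAD schema created by the test script.
# Keyed by (routine name, parameter-type signature) -> SPECIFIC_NAME.
catalog = {
    ("MULTI_FUNC", ("INTEGER",)):           "MULTI_SQ",
    ("MULTI_FUNC", ("INTEGER", "INTEGER")): "MULTI_PROD",
    ("MULTI_PROC", ()):                     "MULTI_MONS",
    ("MULTI_PROC", ("CHARACTER",)):         "MULTI_MYMONS",
}

def specific_name(name, signature):
    """Resolve an overloaded routine to its unambiguous specific name."""
    return catalog[(name, tuple(signature))]

def drop_statement(name, signature):
    # Dropping by specific name removes only the selected overload,
    # which is what the PR changes the Delete action to do.
    return f"DROP SPECIFIC FUNCTION OVERLOAD.{specific_name(name, signature)}"

print(drop_statement("MULTI_FUNC", ["INTEGER"]))
# → DROP SPECIFIC FUNCTION OVERLOAD.MULTI_SQ
```

The same lookup explains the Generate SQL fix: generating by name would emit both `MULTI_FUNC` overloads, while the specific name selects exactly one.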
gharchive/pull-request
2023-07-27T21:11:31
2025-04-01T06:44:24.132599
{ "authors": [ "davecharron" ], "repo": "halcyon-tech/vscode-db2i", "url": "https://github.com/halcyon-tech/vscode-db2i/pull/94", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728860130
Added flex-box utility class for justify-content value space-evenly #67

Hey, thank you for the PR. Sorry for getting back to this so late, I was taking a break. Anyway, let me review the code and the compatibility first before making any other adjustments.

This particular CSS property seems to have around 95% compatibility at this moment, which is acceptable. Would you kindly make the same changes to the halfmoon-variables.css file please? I can create another branch and merge it there. I have been meaning to create a dev branch for some time, so I can put out the work I have done for v1.2.0, so we can start with this PR.

Sorry for the delay in getting back to you. I have made the requested changes to the halfmoon-variables.css file. Let me know if there are any more changes you require to this pull request.

Hello, in which version of halfmoon has this PR been released? Thanks

As it was merged in December and the latest version came out a bit over 2 months earlier, I'd presume this is still "next release"
gharchive/pull-request
2020-10-24T19:32:39
2025-04-01T06:44:24.135750
{ "authors": [ "KaKi87", "Toby222", "halfmoonui", "mansguiche" ], "repo": "halfmoonui/halfmoon", "url": "https://github.com/halfmoonui/halfmoon/pull/68", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
265519119
Is there any method called when clicking the textinput or when it's on focus? I wanted to run some functions behind when it gets focus or a user clicks on the textinput. Do you guys have any idea how we do that?

My bad, I just figured out the onFocus method.
gharchive/issue
2017-10-14T19:53:32
2025-04-01T06:44:24.154755
{ "authors": [ "bispul" ], "repo": "halilb/react-native-textinput-effects", "url": "https://github.com/halilb/react-native-textinput-effects/issues/68", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
178773081
structural variant calling (lumpy) error TypeError: %d format: a number is required, not numpy.float64

I am using speedseq v0.1.0 and got an error in the structural variant calling step:

Calculating insert distributions...
sambamba-view: (Broken pipe)
Library read groups: 140517_SN1440_0189_BC41CUACXX_4_CAGATCTG
Library read length: 51
sambamba-view: unable to write to stream
/scratch/genomic_med/apps/python/anaconda/default/lib/python2.7/site-packages/numpy/core/_methods.py:59: RuntimeWarning: Mean of empty slice.
  warnings.warn("Mean of empty slice.", RuntimeWarning)
/scratch/genomic_med/apps/python/anaconda/default/lib/python2.7/site-packages/numpy/core/_methods.py:70: RuntimeWarning: invalid
  ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
  File "/scratch/genomic_med/apps/lumpy/default//scripts/pairend_distro.py", line 106, in <module>
    (removed, upper_cutoff))
TypeError: %d format: a number is required, not numpy.float64
END at Thu Sep 22 16:45:11 CDT 2016

Thanks, Ming

Hi Ming, Can you download the latest dev svtyper and run the following?

svtyper -B my.bam -wl my.diagnostic.json

Then please either post the resulting JSON file here or email it to me. Thanks! If you could also post the BAM header it would be helpful.

./svtyper -B my_realigned.bam -wl my.diagnostic.json
Warning: VCF not found. Calculating library statistics...
Error: failed to build insert size histogram for paired-end reads.
Please ensure BAM file (my_realigned.bam) has inward facing, paired-end reads.
header of the bam @HD VN:1.3 SO:coordinate @SQ SN:1 LN:249250621 @SQ SN:2 LN:243199373 @SQ SN:3 LN:198022430 @SQ SN:4 LN:191154276 @SQ SN:5 LN:180915260 @SQ SN:6 LN:171115067 @SQ SN:7 LN:159138663 @SQ SN:8 LN:146364022 @SQ SN:9 LN:141213431 @SQ SN:10 LN:135534747 @SQ SN:11 LN:135006516 @SQ SN:12 LN:133851895 @SQ SN:13 LN:115169878 @SQ SN:14 LN:107349540 @SQ SN:15 LN:102531392 @SQ SN:16 LN:90354753 @SQ SN:17 LN:81195210 @SQ SN:18 LN:78077248 @SQ SN:19 LN:59128983 @SQ SN:20 LN:63025520 @SQ SN:21 LN:48129895 @SQ SN:22 LN:51304566 @SQ SN:X LN:155270560 @SQ SN:Y LN:59373566 @SQ SN:MT LN:16569 @SQ SN:GL000207.1 LN:4262 @SQ SN:GL000226.1 LN:15008 @SQ SN:GL000229.1 LN:19913 @SQ SN:GL000231.1 LN:27386 @SQ SN:GL000210.1 LN:27682 @SQ SN:GL000239.1 LN:33824 @SQ SN:GL000235.1 LN:34474 @SQ SN:GL000201.1 LN:36148 @SQ SN:GL000247.1 LN:36422 @SQ SN:GL000245.1 LN:36651 @SQ SN:GL000197.1 LN:37175 @SQ SN:GL000203.1 LN:37498 @SQ SN:GL000246.1 LN:38154 @SQ SN:GL000249.1 LN:38502 @SQ SN:GL000196.1 LN:38914 @SQ SN:GL000248.1 LN:39786 @SQ SN:GL000244.1 LN:39929 @SQ SN:GL000238.1 LN:39939 @SQ SN:GL000202.1 LN:40103 @SQ SN:GL000234.1 LN:40531 @SQ SN:GL000232.1 LN:40652 @SQ SN:GL000206.1 LN:41001 @SQ SN:GL000240.1 LN:41933 @SQ SN:GL000236.1 LN:41934 @SQ SN:GL000241.1 LN:42152 @SQ SN:GL000243.1 LN:43341 @SQ SN:GL000242.1 LN:43523 @SQ SN:GL000230.1 LN:43691 @SQ SN:GL000237.1 LN:45867 @SQ SN:GL000233.1 LN:45941 @SQ SN:GL000204.1 LN:81310 @SQ SN:GL000198.1 LN:90085 @SQ SN:GL000208.1 LN:92689 @SQ SN:GL000191.1 LN:106433 @SQ SN:GL000227.1 LN:128374 @SQ SN:GL000228.1 LN:129120 @SQ SN:GL000214.1 LN:137718 @SQ SN:GL000221.1 LN:155397 @SQ SN:GL000209.1 LN:159169 @SQ SN:GL000218.1 LN:161147 @SQ SN:GL000220.1 LN:161802 @SQ SN:GL000213.1 LN:164239 @SQ SN:GL000211.1 LN:166566 @SQ SN:GL000199.1 LN:169874 @SQ SN:GL000217.1 LN:172149 @SQ SN:GL000216.1 LN:172294 @SQ SN:GL000215.1 LN:172545 @SQ SN:GL000205.1 LN:174588 @SQ SN:GL000219.1 LN:179198 @SQ SN:GL000224.1 LN:179693 @SQ SN:GL000223.1 LN:180455 @SQ 
SN:GL000195.1 LN:182896 @SQ SN:GL000212.1 LN:186858 @SQ SN:GL000222.1 LN:186861 @SQ SN:GL000200.1 LN:187035 @SQ SN:GL000193.1 LN:189789 @SQ SN:GL000194.1 LN:191469 @SQ SN:GL000225.1 LN:211173 @SQ SN:GL000192.1 LN:547496 @RG ID:140517_SN1222_0252_BC3KYPACXX_5_CGACTGGA CN:GCC_02 LB:mylib PL:Illumina_HiSeq200 @PG ID:bwa PN:bwa CL:/scratch/genomic_med/apps/spseq/speedseq//bin/bwa mem -t 12 -C -p /risapps/reference/bwa-indexed/human_g1 @PG ID:SAMBLASTER CL:samblaster -i stdin -o stdout --excludeDups --addMateTags -d my_tmp_realn/disc_pipe Thanks! Thanks Ming. In the first command, SVTyper cannot find any paired-end, inward facing reads in your BAM file. You should have a look at the SAM flags in your BAM file and ensure that it was aligned properly
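The traceback in this thread is what happens when insert-size statistics are computed over zero qualifying read pairs (numpy warns "Mean of empty slice", returns NaN, and the `%d` format then rejects the float). A defensive sketch of the statistics step — not the actual pairend_distro.py code — that fails loudly instead:

```python
# Compute insert-size mean/stdev from template lengths (TLEN column),
# guarding against the empty-input case that crashed pairend_distro.py.

from statistics import mean, stdev

def insert_size_stats(template_lengths):
    """Mean/stdev of insert sizes, failing loudly on empty input."""
    sizes = [abs(t) for t in template_lengths if t != 0]
    if len(sizes) < 2:
        raise SystemExit(
            "Error: no paired-end, inward-facing reads found; "
            "check the SAM flags in your BAM file."
        )
    mu, sigma = mean(sizes), stdev(sizes)
    # Round to an int before %d formatting, so a float mean cannot
    # trigger "TypeError: %d format: a number is required".
    return "mean:%d, stdev:%d" % (round(mu), round(sigma))

print(insert_size_stats([300, -310, 295, -305]))  # → mean:302, stdev:6
```

As the maintainer notes, the underlying fix is in the data: if no read pairs are flagged as paired and inward-facing, no amount of formatting will produce a usable distribution.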
gharchive/issue
2016-09-23T02:59:52
2025-04-01T06:44:24.159445
{ "authors": [ "cc2qe", "crazyhottommy" ], "repo": "hall-lab/speedseq", "url": "https://github.com/hall-lab/speedseq/issues/91", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
131334808
ValidateEmail.valid?('email@randommail.com.') returns true

I'm using this gem with great success for my service for some time now and it works pretty well, except a simple case that keeps coming back: adding a stupid character at the end. E.g.

ValidateEmail.valid?('email@randommail.com') -> true (correct)
ValidateEmail.valid?('email@randommail.') -> false (correct)
ValidateEmail.valid?('email@randommail.com.') -> true (INCORRECT)
ValidateEmail.valid?('email@randommail.com/') -> true (INCORRECT)

I don't want to use MX validation because this would consume way too much time, but I would think that this would be an easy check since ending with a dot or a slash is not conformant (I get notified of errors when people try to send to those addresses).

+1 I am facing the same issue

Will using the domain validation option work? ValidateEmail.valid?('email@randommail.com.', domain: true)

ValidateEmail.valid?('bla@bla.de+1256') -> true (INCORRECT)

The domain option works correctly. Also with the examples of the OP. Still rather convinced that ValidateEmail.valid?('itsokay@site.com##') should return false even if the option domain: true is not passed. Currently it returns true without a domain option and false with a domain option.

Closing as it is integrated in 0.1.0. Solution: Domain validation is enabled by default.

I did not upgrade to the new version yet, but looking at the code, shouldn't this return false on the old version? (the new version only sets it to true by default)

ValidateEmail.valid?("email@randommail.com." , {:domain => true})
=> true

Or am I calling it wrong?

Rookie Ruby question most likely, but how is this going to use my params?

h1 = {:domain => true }
h2 = {:domain => false}
h1.merge(h2) #=> {:domain=>false}

def valid?(value, user_options={})
  options = {
    :mx => false,
    :domain => false,
    :message => nil
  }.merge(user_options)

shouldn't that be the opposite?

def valid?(value, user_options={})
  options = user_options.merge{ :mx => false, :domain => false, :message => nil }

?
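To the merge question above: in Ruby, `h1.merge(h2)` lets the argument's entries win on duplicate keys, so the gem's `defaults.merge(user_options)` is the correct order — user options override the defaults. The proposed reversal would let the defaults clobber whatever the caller passed. A sketch of the same defaults pattern (shown in Python, where the right-hand dict likewise wins; the function is a simplified stand-in, not the gem's actual code):

```python
# Defaults-then-overrides pattern: the caller's options take precedence,
# mirroring Ruby's defaults.merge(user_options).

def valid(value, user_options=None):
    defaults = {"mx": False, "domain": False, "message": None}
    options = {**defaults, **(user_options or {})}  # caller's keys win
    # ... real validation would consult options["domain"], options["mx"], etc.
    return options

print(valid("a@b.com", {"domain": True})["domain"])  # → True
print(valid("a@b.com")["domain"])                    # → False (default kept)
```

Note also that the reversed snippet in the question uses `merge{ ... }`, which Ruby parses as a block rather than a hash argument — written that way it would not even run.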
gharchive/issue
2016-02-04T12:40:46
2025-04-01T06:44:24.167825
{ "authors": [ "Nghi93", "Yaroslav-F", "chitrank-samaiya", "hallelujah", "jchatel", "mattecalcio" ], "repo": "hallelujah/valid_email", "url": "https://github.com/hallelujah/valid_email/issues/72", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2628161882
initialize

[ ] Package manager
[ ] List all versions of candidate
[ ] Install a specific version
[ ] Remove specific version
[ ] Switch between version
[ ] Integrate with ChatGPT/Gemini so user can chat directly with these tools on cmd
[ ] Other common tools for dev

https://github.com/warpy-ai/rustubble/tree/main
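The list/install/remove/switch items above can be sketched as a minimal in-memory model. Real version managers typically back this with a directory of installs plus a symlink for the active version; all names and paths here are hypothetical:

```python
# Minimal sketch of a version store: installed versions in a dict,
# the active version as a pointer (stand-in for a symlink).

class VersionStore:
    def __init__(self):
        self.installed = {}   # version -> install path
        self.current = None

    def install(self, version):
        self.installed[version] = f"~/.deto/versions/{version}"

    def remove(self, version):
        self.installed.pop(version, None)
        if self.current == version:
            self.current = None  # removing the active version unsets it

    def switch(self, version):
        if version not in self.installed:
            raise KeyError(f"{version} is not installed")
        self.current = version

    def list_versions(self):
        return sorted(self.installed)

store = VersionStore()
store.install("1.21.0")
store.install("1.22.1")
store.switch("1.22.1")
print(store.list_versions())  # → ['1.21.0', '1.22.1']
print(store.current)          # → 1.22.1
```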
gharchive/issue
2024-11-01T02:15:48
2025-04-01T06:44:24.171809
{ "authors": [ "halng" ], "repo": "halng/deto", "url": "https://github.com/halng/deto/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
56432300
Customize query module output The fields printed by the query module should be customizable. The default view should probably include the queueid (id). This is useful for dumping message logs based on queries. --fields
gharchive/issue
2015-02-03T20:28:47
2025-04-01T06:44:24.181916
{ "authors": [ "eriklax" ], "repo": "halonsecurity/halonctl", "url": "https://github.com/halonsecurity/halonctl/issues/9", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2243231549
🛑 Prometheus1 is down In 077508c, Prometheus1 (https://prometheus1.jenshamann.solutions) was down: HTTP code: 0 Response time: 0 ms Resolved: Prometheus1 is back up in 98b51d3 after 17 minutes.
gharchive/issue
2024-04-15T09:57:06
2025-04-01T06:44:24.184342
{ "authors": [ "hamannjens" ], "repo": "hamannjens/upptime", "url": "https://github.com/hamannjens/upptime/issues/154", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853695109
⚠️ ClamAV has degraded performance In a5b2bf3, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 72 ms Resolved: ClamAV performance has improved in f66c269.
gharchive/issue
2023-08-16T18:09:34
2025-04-01T06:44:24.186927
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/10036", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2245082297
⚠️ ClamAV has degraded performance In cf56784, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 211 ms Resolved: ClamAV performance has improved in 84a55bd after 5 minutes.
gharchive/issue
2024-04-16T04:46:47
2025-04-01T06:44:24.189251
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/12471", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1320694952
⚠️ ClamAV has degraded performance In 11858f1, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 171 ms Resolved: ClamAV performance has improved in e3bad91.
gharchive/issue
2022-07-28T10:03:48
2025-04-01T06:44:24.191624
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/1991", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1638392834
⚠️ ClamAV has degraded performance In c5c9a74, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 443 ms Resolved: ClamAV performance has improved in 3366828.
gharchive/issue
2023-03-23T21:55:00
2025-04-01T06:44:24.193967
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/6354", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1645516448
⚠️ ClamAV has degraded performance In 2792c76, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 213 ms Resolved: ClamAV performance has improved in 37d11eb.
gharchive/issue
2023-03-29T10:37:02
2025-04-01T06:44:24.196276
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/6471", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1756921549
⚠️ ClamAV has degraded performance In 51b69c9, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 69 ms Resolved: ClamAV performance has improved in 2311a64.
gharchive/issue
2023-06-14T13:32:38
2025-04-01T06:44:24.198847
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/8449", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1773347460
⚠️ ClamAV has degraded performance In cc56865, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 234 ms Resolved: ClamAV performance has improved in 7560b63.
gharchive/issue
2023-06-25T15:36:10
2025-04-01T06:44:24.201122
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/8689", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
19583025
Humble request for a new method .trigger

I would very much like something like a .trigger method. This would be useful for unit testing, as the unit test could directly trigger a hammer.js event. Preferably I would be able to pass in a mock hammer.js event object. What do you say?

I'm going to close this since I think we are going to move to actual functional testing with real events using webdriver.

Unfortunately not all drivers support this. I'm faced with nightwatch.js and I'm trying to trigger a 'panleft' event.
gharchive/issue
2013-09-16T22:14:04
2025-04-01T06:44:24.219788
{ "authors": [ "Fresheyeball", "achikin", "arschmitz" ], "repo": "hammerjs/hammer.js", "url": "https://github.com/hammerjs/hammer.js/issues/365", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
75400966
Rotation appears to rely on non-standard TouchList ordering consistency

The TouchEvents spec says nothing about the order of touches in a TouchList. But when computing the 'rotation' angle it appears that hammer always computes the angle between touches[0] and touches[1]. If the touches happen to be in different positions in the array from event to event, this will result in 180 degree flipping back and forth. We saw an example of this in Chrome, and although we're planning on fixing that case, it's possible there are others (e.g. depending on the OS behavior, and possibly behavior in other browsers). I raised this as an issue in the TouchEvents spec, but it sounds like we're unlikely to have the spec require touch points to have a consistent order in the list.

BTW, I say "appears" because I haven't explicitly tested this in current hammer.js builds. We saw the issue on this site, and looking at the current hammer.js code it still appears to me to be dependent on the TouchList ordering.

A simple solution to this would probably be to sort the two points by their 'id' (effectively making rotation be the angle between the lower ID touch and the higher ID touch).

@RByers Thanks for reporting this! Things have been slow on this project for a while, but we are ramping back up and will be sure to look at this and try and get it fixed!

Great, thanks!
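The "sort by id" fix proposed in this thread can be sketched as follows. This is illustrative only — touch points are modeled as plain dicts rather than real Touch objects — but it shows why the angle no longer depends on the (unspecified) TouchList ordering:

```python
# Compute the rotation angle from the lower-identifier touch to the
# higher-identifier touch, so reordering in the TouchList cannot flip
# the result by 180 degrees.

import math

def rotation_angle(touches):
    """Angle in degrees between the two touches, sorted by identifier."""
    a, b = sorted(touches[:2], key=lambda t: t["id"])
    return math.degrees(math.atan2(b["y"] - a["y"], b["x"] - a["x"]))

t1 = {"id": 1, "x": 0.0, "y": 0.0}
t2 = {"id": 2, "x": 10.0, "y": 0.0}

# Swapping the order in the list no longer flips the angle:
print(rotation_angle([t1, t2]))  # → 0.0
print(rotation_angle([t2, t1]))  # → 0.0
```

Without the sort, the second call would compute the angle from t2 to t1 and return 180 instead of 0 — exactly the back-and-forth flipping described in the report.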
gharchive/issue
2015-05-12T00:08:59
2025-04-01T06:44:24.224135
{ "authors": [ "RByers", "arschmitz", "crabmusket", "runspired" ], "repo": "hammerjs/hammer.js", "url": "https://github.com/hammerjs/hammer.js/issues/791", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1014480247
Conflict with Charm's "acquisition" enchantment ...as mentioned in https://www.curseforge.com/minecraft/mc-mods/treechop?comment=225 The issue is most likely on the Charm mod's end; see https://github.com/svenhjol/Charm/issues/566
gharchive/issue
2021-10-03T18:10:01
2025-04-01T06:44:24.225853
{ "authors": [ "hammertater" ], "repo": "hammertater/treechop", "url": "https://github.com/hammertater/treechop/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2341770193
Merge chatgpt-on-wechat 405372d add speech_recognition and xunefei voice module

Did you merge the upstream branch? I'm not sure how you merged it. It would be best to merge every commit, otherwise future syncs will produce a lot of conflicts.

Did you merge the upstream branch? I'm not sure how you merged it. It would be best to merge every commit, otherwise future syncs will produce a lot of conflicts.

All right. I've updated it, thanks.
gharchive/pull-request
2024-06-08T18:17:08
2025-04-01T06:44:24.292478
{ "authors": [ "ZimaBlueAI", "hanfangyuan4396" ], "repo": "hanfangyuan4396/dify-on-wechat", "url": "https://github.com/hanfangyuan4396/dify-on-wechat/pull/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
572331601
WANT TO SEE HOT PICS WITH ME? 💋 💋 💋 👇 👇 👇 https://is.gd/ms4xm3

[auto-reply] Thanks for your comment. However, the essential information is required. Please carefully fill out the form then reopen it.
gharchive/issue
2020-02-27T20:13:53
2025-04-01T06:44:24.299925
{ "authors": [ "gromovadarya90", "hankcs" ], "repo": "hankcs/HanLP", "url": "https://github.com/hankcs/HanLP/issues/1432", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2415218683
Chrome Web Store upload workflow broken

I think the token is expired:

npm warn exec The following package was not found and will be installed: chrome-webstore-upload-cli@3.3.0
- Fetching token
Error: Bad Request
    at throwIfNotOk (file:///home/runner/.npm/_npx/da388c0a4eab99e8/node_modules/chrome-webstore-upload/index.js:27:23)
    at APIClient.fetchToken (file:///home/runner/.npm/_npx/da388c0a4eab99e8/node_modules/chrome-webstore-upload/index.js:123:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async doAutoPublish (file:///home/runner/.npm/_npx/da388c0a4eab99e8/node_modules/chrome-webstore-upload-cli/cli.js:75:19)
    at async init (file:///home/runner/.npm/_npx/da388c0a4eab99e8/node_modules/chrome-webstore-upload-cli/cli.js:173:9)
    at async file:///home/runner/.npm/_npx/da388c0a4eab99e8/node_modules/chrome-webstore-upload-cli/cli.js:182:5 {
  response: undefined
}

Can you follow this guide and set the secrets in the repo settings again? https://github.com/fregante/chrome-webstore-upload-keys

For now I uploaded the extension manually and also updated the description:

Open-source extension to toggle your extensions. "One Click Extensions Manager" is intentionally very simple, it's only meant to let you toggle extensions on and off. If you need a full-featured extension manager, this isn't it.

Full list of features:
- List the installed extension in a popup
- Search the list of extensions
- Navigate the list via keyboard shortcuts
- Toggle extensions on and off
- Toggle all extensions at once
- Uninstall extensions
- Visit the options and homepage of each extension
- Undo/redo any changes (temporarily)

Also:
- Completely open source on GitHub
- No ads
- No data collection
- No superfluous features
- Well-maintained since 2017

Well-maintained since 2017

you saw those comments too, hah? 🤣
🤔 I wrote that because it's old, we fix things quickly, we update at least once a year, we respond to all messages here 😃

The latest comments it gets say it is outdated, lol. I was too lazy to respond in the store. Some guy even spammed it several months ago. You can just reply "If you encounter bugs, please open an issue on the GitHub repo. The extension works correctly in all the browsers I tested"

just created a new credential in GCP and updated the action secrets 🙌

great! I suppose after https://github.com/hankxdev/one-click-extensions-manager/issues/134 we can trigger a new release to test it out

Some guy even spammed it several months ago.

Oh, I just saw that review. Crazy how people get so pissed at absolutely nothing. 🤷‍♂️ It looks like I can't reply to them if a developer already answered. I was going to post something that can help others, rather than replying to the specific user:

The extension has been working well for 30000 users. If you encounter issues, please open an issue on our GitHub repo and we can look into it.

and

There has never been a single report of "lag" on our GitHub repo so that's clearly a lie. I don't understand what this extension has done to you to cause so much trouble in your life. This extension is extremely simple and we never claim otherwise. If you're looking for more options, use a different extension.

Do you mind if I reply directly?

no problem. just do it :)

It won't let me if your answer is there, just one reply is allowed

that's a crazy person. he keeps deleting and posting reviews. I think we better just ignore it.

I don't think he's actually deleting and posting again, but just editing and tweaking word by word. I invited him to open an issue but yeah, hopefully he'll just get a life after this.

he IS deleting and posting again, that's why my reply was not there. I did not delete my reply :)

Generally the response is lost when the review is changed. The same happens on Google Maps.
gharchive/issue
2024-07-18T04:54:25
2025-04-01T06:44:24.327816
{ "authors": [ "fregante", "hankxdev" ], "repo": "hankxdev/one-click-extensions-manager", "url": "https://github.com/hankxdev/one-click-extensions-manager/issues/132", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
234994551
/fingerprint: prevent running command on your own fingerprint

It should not be possible to accidentally run /fingerprint [my own fp].

We could also consider coloring the fp when running /fingerprint to query the status, so that untrusted fingerprints are shown as their otr fingerprint: [color]xxxxxx yyyyyy ... where color is red/(yellow|green) and your own fingerprint in green.

We could also consider restricting /fingerprint in a session to being the FP currently used, and issue an error message otherwise (when would you want to add a fingerprint for a person that you currently have an OTR session with that is using a different key?)

I think the coloring stuff would be nice, but I implemented the "reject own fp" in the commit referenced above. Initially, only the (currently) active fingerprint for this session was accepted (see 4bc00cc9a51cefaa7914337a53fa18bb0d779b0c which modified this behaviour) -- should we revert to the old behaviour?

I think it's nice to be able to load fingerprints off a napkin or similar before seeing the person online, but maybe the old behavior is better, I don't know. I think this (rejecting your own) should be done no matter what (it follows as a consequence of the old behavior, of course). Do you remember why it was changed?
gharchive/issue
2017-06-10T10:32:37
2025-04-01T06:44:24.331771
{ "authors": [ "cfcs", "hannesm" ], "repo": "hannesm/jackline", "url": "https://github.com/hannesm/jackline/issues/168", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
1670558381
Feedback from Tomasz Hi @hanwenzhang0317 This is my initial feedback about your research project. These are just some propositions of sample changes in your qmd file. Let's chat about these changes and establish the final version. My proposed changes imply some more work: [x] improve the efficiency of your remaining loops as in my commit cea7f8e1c2827d6ec88bb3c7e53765def21eec09 Good luck! Tomasz Hi @donotdespair , Thank you for giving the feedback! It is a really useful suggestion! I have incorporated it in my master branch! Thanks
gharchive/pull-request
2023-04-17T07:09:34
2025-04-01T06:44:24.420881
{ "authors": [ "donotdespair", "hanwenzhang0317" ], "repo": "hanwenzhang0317/MXCS-SVAR", "url": "https://github.com/hanwenzhang0317/MXCS-SVAR/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
75379838
Graph API no longer allows access by username Need another way of getting profile pics. 12th of May 2015: Issue magically solves itself :smile:.
gharchive/issue
2015-05-11T22:27:18
2025-04-01T06:44:24.489818
{ "authors": [ "harababurel" ], "repo": "harababurel/macefash", "url": "https://github.com/harababurel/macefash/issues/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
770290502
WaitForSelectorOptions API: Timeout should be TimeSpan

I would rather write code like this:

await page.WaitForSelectorAsync(folderSelection, new WaitForSelectorOptions() { Timeout = TimeSpan.FromSeconds(5) });

than this:

await page.WaitForSelectorAsync(folderSelection, new WaitForSelectorOptions() { Timeout = 5 });

I'd also probably prefer passing in a CancellationToken rather than having the Async method reify its own token. In that way I can more naturally express that a sequence of async methods should share the same CancellationToken logic. Another thing this would dovetail with is the ability to cancel tests in aggregate.

using System;
using System.Diagnostics;
using System.Threading;

public abstract class PuppeteerTestBase : IDisposable
{
    protected static readonly TimeSpan ExpectedTimeout = TimeSpan.FromMilliseconds(200);
    protected static readonly TimeSpan UnexpectedTimeout = Debugger.IsAttached ? Timeout.InfiniteTimeSpan : TimeSpan.FromSeconds(10);

    protected CancellationToken TimeoutToken => Debugger.IsAttached ? CancellationToken.None : this.timeoutTokenSource.Token;

    private readonly CancellationTokenSource timeoutTokenSource = new CancellationTokenSource(UnexpectedTimeout);
    private CancellationTokenRegistration timeoutLoggerRegistration;

    PuppeteerTestBase()
    {
        // initialize ITestOutputHelper (logging), timeoutToken, timeoutLoggerRegistration, and Browser instance
    }

    public void Dispose()
    {
        this.Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        this.timeoutLoggerRegistration.Dispose();
        this.timeoutTokenSource.Dispose();
    }
}

then in your test class:

public class XunitTest : PuppeteerTestBase
{
    [Fact]
    public async Task DoIt()
    {
        var page = await browser.NewPageAsync(this.TimeoutToken);
    }
}

ClickOptions, GoToAsync, etc. should all use CancellationToken instead of Timeout, IMHO, unless I misunderstand the intent. I guess, in some cases, the Timeout is actually a JavaScript timeout? e.g., https://github.com/hardkoded/puppeteer-sharp/blob/a5d0cf019b69484074d64922ac7e23ada1f05e93/lib/PuppeteerSharp/WaitTask.cs#L30

Closed due to inactivity. Feel free to reopen it if needed.
gharchive/issue
2020-12-17T18:57:54
2025-04-01T06:44:24.520058
{ "authors": [ "jzabroski", "kblok" ], "repo": "hardkoded/puppeteer-sharp", "url": "https://github.com/hardkoded/puppeteer-sharp/issues/1601", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2428116455
Migrate to system.text.json

Description

Work in progress. Early review is welcome @jnyrup @campersau. The build is (pretty) much green. If you have some time to review it I will really appreciate it!

My plan: create a pre-release doc and publish a beta version out of this branch, hoping for some users willing to test this out.

Searching for "Newtonsoft" currently yields these results, which could be removed:
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp/IPage.cs#L539
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp.DevicesFetcher/PuppeteerSharp.DevicesFetcher.csproj#L9
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp.Nunit/PuppeteerSharp.Nunit.csproj#L8
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp.Tests/DeviceRequestPromptTests/MockCDPSession.cs#L27
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp.Tests/DeviceRequestPromptTests/WaitForDevicePromptTests.cs#L6
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/lib/PuppeteerSharp.Tooling/PuppeteerSharp.Tooling.csproj#L17
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/samples/PupppeterSharpAspNetFrameworkSample/PupppeterSharpAspNetFrameworkSample/packages.config#L26
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/samples/PupppeterSharpAspNetFrameworkSample/PupppeterSharpAspNetFrameworkSample/PupppeterSharpAspNetFrameworkSample.csproj#L127-L129
https://github.com/hardkoded/puppeteer-sharp/blob/aef54b35b6943c2cc621986d74709e22f3e959f3/samples/PupppeterSharpAspNetFrameworkSample/PupppeterSharpAspNetFrameworkSample/Web.config#L47-L50

@jnyrup some updates: We can't set the JsonTypeInfoResolver by
default. That would force the user to do the same thing. If you build your project with AOT and you use some special types when you call some Evaluate function, you have to provide a serialization context using Puppeteer.ExtraJsonSerializerContext. You should do that as a first step. After that, the DefaultJsonSerializerSettings will be cached. I added a demo in the console app. Thoughts?

I don't have that much experience with AOT and JsonTypeInfoResolver, so can't provide the best feedback here. If I have two different types A and B and also two different source-generated JsonSerializerContexts, does the current design allow me to use them without using Puppeteer.ExtraJsonSerializerContext?

If I have two different types A and B and also two different source-generated JsonSerializerContexts, does the current design allow me to use them without using Puppeteer.ExtraJsonSerializerContext?

You could pass the result of a JsonTypeInfoResolver.Combine() call to Puppeteer.ExtraJsonSerializerContext.
gharchive/pull-request
2024-07-24T17:28:35
2025-04-01T06:44:24.528507
{ "authors": [ "campersau", "jnyrup", "kblok" ], "repo": "hardkoded/puppeteer-sharp", "url": "https://github.com/hardkoded/puppeteer-sharp/pull/2713", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1936353376
dynamic site url behind proxy

Details

Hello, I have a Nuxt 3 website that serves different domains, e.g.:
https://www.site1.com
https://www.site2.com
https://www.site3.com

The node instance is behind a reverse proxy (IIS with the urlrewrite module) with the following rules:
https://www.site1.com -> proxy to http://www.site1.local:3000
https://www.site2.com -> proxy to http://www.site2.local:3000
https://www.site3.com -> proxy to http://www.site3.local:3000

What happens is that the sitemap module returns the "local" address as the base address, e.g.: http://www.site1.local:3000/Login

How and where can I set the site url at runtime in Nuxt 3? I am able to read the original url (www.site1.com) using headers in event.node.req.headers

PS: the sitemap will be exactly the same for all websites, thanks in advance

Hey @niddu85

This module uses nuxt-site-config to handle the site URL. This module was built to support multi-tenancy like you have. You would do something like this, although I'd recommend opting for a switch statement. The reason why it doesn't use the request host directly is that the URLs need to use the canonical site URLs.

import { defineNuxtPlugin, updateSiteConfig, useRequestURL } from '#imports'

export default defineNuxtPlugin({
  enforce: 'post',
  setup() {
    const url = useRequestURL()
    updateSiteConfig({
      url: url.origin,
    })
  },
})

There may be some issues around the caching with this though, I'll need to look into that further. For now, you can disable caching with sitemap: { cacheTtl: false }. Let me know how you go! Keen to get this working in a multi-tenancy setup.

Yes, in the end I was able to solve it exactly with the solution you explained.
What was preventing it from working correctly was exactly the cache, so I disabled it and it started working. What I suggest is to clear the cache when updateSiteConfig is called. Thanks for your hint

Ideally, it would cache based on the site URL, but yes I agree :)

The caching should now be fixed in v4. It uses SWR cache and should cache based on the request origin headers.
gharchive/issue
2023-10-10T22:11:00
2025-04-01T06:44:24.553873
{ "authors": [ "harlan-zw", "niddu85" ], "repo": "harlan-zw/nuxt-simple-sitemap", "url": "https://github.com/harlan-zw/nuxt-simple-sitemap/issues/147", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2251268093
Update testGlobs pattern. Thanks for contributing to the Harness Developer Hub! Our code owners will review your submission. Description Please describe your changes: Update to Ruby for TI pipeline to update testGlobs pattern Jira/GitHub Issue numbers (if any): N/A Preview links/images (Internal contributors only): __________________ PR lifecycle We aim to merge PRs within one week or less, but delays happen sometimes. If your PR is open longer than two weeks without any human activity, please tag a code owner in a comment. PRs must meet these requirements to be merged: [ ] Successful preview build. [ ] Code owner review. [ ] No merge conflicts. [ ] Release notes/new features docs: Feature/version released to at least one prod environment. Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://662163ae1309f500a74db60d--harness-developer.netlify.app
gharchive/pull-request
2024-04-18T18:06:25
2025-04-01T06:44:24.565135
{ "authors": [ "bot-gitexp-user", "dewan-ahmed" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/6382", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2367541660
Hdh 272 add aida to right hand on each page fix

Thanks for contributing to the Harness Developer Hub! Our code owners will review your submission.

Description
Please describe your changes: __________________________________
Jira/GitHub Issue numbers (if any): ______________________________
Preview links/images (Internal contributors only): __________________

PR lifecycle
We aim to merge PRs within one week or less, but delays happen sometimes. If your PR is open longer than two weeks without any human activity, please tag a code owner in a comment.

PRs must meet these requirements to be merged:
[ ] Successful preview build.
[ ] Code owner review.
[ ] No merge conflicts.
[ ] Release notes/new features docs: Feature/version released to at least one prod environment.

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.

Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://667653bda58cfa5adb0138ae--harness-developer.netlify.app
gharchive/pull-request
2024-06-22T04:20:04
2025-04-01T06:44:24.571564
{ "authors": [ "CLAassistant", "bot-gitexp-user", "rohanmaharjan100" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/7163", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1465628236
runtime error for attached RIS file

This RIS file causes a runtime error. The error needs to be looked into carefully, to understand what is breaking down in our generic implementation for parsing RIS-formatted files. S0010465506001299.txt

A pull request has been created to address this issue.

The file actually has an error. The title field TI is incorrectly typed as T1. This file was downloaded directly from a journal, so the error is not on our end. However, the tool should handle errors better.
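A lenient parser could simply treat the journal's `T1` as an alias for the standard `TI` title tag. Below is a minimal Python sketch of the idea; the alias map and helper name are illustrative assumptions, not the tool's actual code:

```python
# Hypothetical lenient RIS tag handling: remap known journal quirks
# (e.g. T1 emitted instead of TI) before dispatching on the tag.
TAG_ALIASES = {"T1": "TI"}  # assumption: treat T1 as the title tag

def parse_ris_line(line):
    """Split one 'TAG  - value' RIS line into (tag, value), resolving aliases."""
    tag, _, value = line.partition("  - ")
    tag = tag.strip()
    return TAG_ALIASES.get(tag, tag), value.strip()

print(parse_ris_line("T1  - Some article title"))
# ('TI', 'Some article title')
```

With such a remap, a file like the attached one would parse instead of raising a runtime error, while genuinely unknown tags could still be reported.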
gharchive/issue
2022-11-28T00:17:19
2025-04-01T06:44:24.794667
{ "authors": [ "harrisonlabollita" ], "repo": "harrisonlabollita/ris-2-bib", "url": "https://github.com/harrisonlabollita/ris-2-bib/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2042799571
fix: Rewrote src/bin/hrtor.rs

close #66

Understood!
gharchive/pull-request
2023-12-15T02:49:22
2025-04-01T06:44:24.800069
{ "authors": [ "haruki7049" ], "repo": "haruki7049/hrtor", "url": "https://github.com/haruki7049/hrtor/pull/69", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1772843469
Issue with Spacy Dependency Version: issubclass() arg 1 must be a class

Dear harvardnlp/annotated-transformer maintainers,

I am writing to bring to your attention an issue I encountered while using your excellent project. The issue is with the version of Spacy specified in the requirements.txt file, which is currently set to spacy==3.2.

While using this version, I encountered an error that says issubclass() arg 1 must be a class spacy. This appears to be due to an incompatibility with newer versions of Pydantic. According to this thread on the Spacy GitHub, this issue has been resolved in Spacy version 3.2.6. Therefore, to ensure smooth operation of the project and avoid this error for other users, I would suggest updating the Spacy version in the requirements.txt file to spacy==3.2.6.

Thank you for your attention to this matter. I believe this minor adjustment will make the usage of this valuable tool more seamless for other developers in the future.

Best Regards,
hydway

hey, have you found any solution to that?

Upgrading to spacy==3.2.6 fixed my problem :)

Thanks brother, I will do that.

Thanks! It works well!!
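Since the fix is a one-line version bump, the pin in requirements.txt can even be updated mechanically. A small stdlib-only sketch (the helper is illustrative; only the spacy==3.2.6 pin itself comes from this issue):

```python
import re

def bump_pin(requirements_text, package, new_version):
    """Rewrite a 'package==x.y' pin to 'package==new_version' in requirements.txt content."""
    pattern = rf"^{re.escape(package)}==\S+$"
    return re.sub(pattern, f"{package}=={new_version}", requirements_text, flags=re.M)

print(bump_pin("spacy==3.2\ntorch==1.12", "spacy", "3.2.6"))
# spacy==3.2.6
# torch==1.12
```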
gharchive/issue
2023-06-24T17:20:45
2025-04-01T06:44:24.806543
{ "authors": [ "Hydway", "MaAleem08", "minsuk-sung" ], "repo": "harvardnlp/annotated-transformer", "url": "https://github.com/harvardnlp/annotated-transformer/issues/112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1089781555
[BUG]Can't add secondary disk as data volume at 2nd host of cluster

Describe the bug
3 bare-metal servers with the same hardware configuration.
sda: SSD drive, Harvester 1.0 installed
sdb: SATA drive, should be added as a data volume to the host

To Reproduce
Steps to reproduce the behavior:
At the Master Node, all works as expected.
At the 2nd Node, the data disk can be detected, but couldn't be added as a data volume to the host.
The 3rd Node has the same problem.

Expected behavior
The secondary disk should be added as a data volume at the 2nd and 3rd hosts as well.

Support bundle
supportbundle_b5a94f11-01e1-4181-ad74-b141d4d82809_2021-12-28T10-11-12Z.zip

Environment:
Harvester ISO version: 1.0
Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): Baremetal Dell R740

Additional context
All sdbs were formatted as ext4, mkfs -t ext4 /dev/sdb

I have the same problem with three Intel NUCs. On the first node I could add the second disk (sdb) as a data volume. On the 2nd and 3rd node it cannot be added although it is detected. All nodes are the same hardware.

I tried a second time to install Harvester 1.0 on the 3 nodes of the cluster. In order to figure out the reason causing the issue, I did the following:
Install Harvester 1.0 on the 3 nodes.
Before adding sdb on each node, check the availability of the sdbs.
1st node: disk-> /dev/sdb; partition-> /dev/sdb1
2nd node: disk-> /dev/sda 😮
3rd node: disk-> /dev/sda, same as 2nd node.
Get ready to add sdb on 1st node, but ... I got 2 sdbs. 😱
Check device information on 3 nodes. After installation, the SSD disk and hard disk swapped their names.
On the 2nd node, I was quick enough to add partition /dev/sda1 on the node.
On the 3rd node, I got the same result as the first installation.
On the 1st node, I added the wrong partition /dev/sdb1 and couldn't remove it.

My conclusion: It seems like the issue is a consequence of the swapped names of sda and sdb. Our 4th server, which has only sda, has no problem at all.
So, this issue could be caused only on nodes with more than 2 disk devices. Obviously!😂

Also seeing this issue. 3 node harvester cluster, first node I could add all of the disks. 2nd and 3rd, no dice.

Not familiar with Vue, but it seems that the getter method childParts doesn't exclude blockdevices from other nodes.
https://github.com/harvester/dashboard/blob/v1.0.0/models/harvester/harvesterhci.io.blockdevice.js#L5-L12
On the other hand, the blockdevices on the host resource do filter by blockdevice.spec.nodeName.
https://github.com/harvester/dashboard/blob/v1.0.0/edit/harvesterhci.io.host/index.vue#L71-L78
@n313893254, can you help verify if this is the root cause?

@weihanglo Thanks, It is the root cause.

Verified fixed on master-1be9cb5e-head (1/23). Close this issue.

Result
The secondary disk can be added as a data volume at the 2nd and 3rd host after adding the 1st node
Node 1, 2, 3 disk status before adding second disk
Node 1, 2, 3 disk status after adding second disk
Can correctly add second disk /dev/sdb to node 2
Can correctly add second disk /dev/sdb to node 3
Can correctly add second disk /dev/sdb with no gpt partition to node 1

Test Information
Test Environment: 3 nodes harvester on provo bare machine
Harvester version: master-1be9cb5e-head (1/23)
Disk status
node 1: 2nd disk /dev/sdb (no gpt partition)
node 2: 2nd disk /dev/sdb (with gpt partition)
node 3: 2nd disk /dev/sdb (with gpt partition)

Verify Steps
Prepare a 3 nodes harvester, each with a second disk /dev/sdb ready to be added to the host
Add the second disk on the node 1 disk page in host
Edit the disk page on node 2
Check the /dev/sdb disk is available
Add the second disk on the node 2 disk page in host
Edit the disk page on node 3
Check the /dev/sdb disk is available
Add the second disk on the node 3 disk page in host

Additional Context
If the second disk status is not displayed as scheduled, please press ctrl + r to refresh the UI

Below is the detailed check
Node 1, 2, 3 disk status before adding second disk
Node 1 before
adding second disk

Node 1 disk status

rancher@hpd8s7:~> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 3.2G 1 loop /
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 47M 0 part
├─sda2 8:2 0 50M 0 part /oem
├─sda3 8:3 0 15G 0 part /run/initramfs/cos-state
├─sda4 8:4 0 8G 0 part
├─sda5 8:5 0 60G 0 part /usr/local
└─sda6 8:6 0 196.3G 0 part /var/lib/longhorn
sdb 8:16 0 558.7G 0 disk
sdc 8:32 0 50G 0 disk /var/lib/kubelet/pods/6e484470-474b-4c3b-a98a-15c71192155a/volume-subpaths/pvc-3344c591-0ba3-41ba-ba2c-591ab543
sdd 8:48 0 10M 0 disk /var/lib/kubelet/pods/745127aa-d3aa-43c7-9c7e-d27ec4dcf043/volumes/kubernetes.io~csi/pvc-1298033b-a494-4efd-bd9

Node1 /dev/sdb has no gpt partition

hpd8s7:~ # gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.

Node 2 before adding second disk

Node 2 disk status

rancher@harvester-qg7wt:~> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 3.2G 1 loop /
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 47M 0 part
├─sda2 8:2 0 50M 0 part /oem
├─sda3 8:3 0 15G 0 part /run/initramfs/cos-state
├─sda4 8:4 0 8G 0 part
├─sda5 8:5 0 60G 0 part /usr/local
└─sda6 8:6 0 196.3G 0 part /var/lib/longhorn
sdb 8:16 0 558.7G 0 disk
└─sdb1 8:17 0 558.7G 0 part

Node2 /dev/sdb has a gpt partition

harvester-qg7wt:~ # gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Node 3 before adding disk

Node 3 disk status

rancher@harvester-d7fkx:~> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 3.2G 1 loop /
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 47M 0 part
├─sda2 8:2 0 50M 0 part /oem
├─sda3 8:3 0 15G 0 part /run/initramfs/cos-state
├─sda4 8:4 0 8G 0 part
├─sda5 8:5 0 60G 0 part /usr/local
└─sda6 8:6 0 196.3G 0 part /var/lib/longhorn
sdb 8:16 0 558.7G 0 disk
└─sdb1 8:17 0 558.7G 0 part

Node3 /dev/sdb has a gpt partition

harvester-d7fkx:~ # gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.

Add second disk on node 1
Unable to add disk without gpt partition
Node 1 disk status with second disk
Node 2 disk status with second disk
Node 3 disk status with second disk
Node 1, 2, 3 disk status after adding second disk
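The root-cause comment above boils down to a missing node filter in the `childParts` getter. Conceptually the fix is a one-line condition; here is a Python pseudocode sketch of the corrected filtering (`spec.nodeName` is the real field mentioned in the thread, while `parentDevice` and the dict shape are illustrative assumptions):

```python
def child_parts(blockdevices, disk_name, node_name):
    """Partitions of `disk_name` restricted to the node being edited.
    The missing `nodeName` check is what made disks from other nodes leak in."""
    return [
        bd for bd in blockdevices
        if bd["spec"]["nodeName"] == node_name  # the fix: exclude other nodes
        and bd["status"]["parentDevice"] == disk_name  # assumed field name
    ]
```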
gharchive/issue
2021-12-28T10:35:31
2025-04-01T06:44:24.840947
{ "authors": [ "Palando", "TachunLin", "ebauman", "insbire", "n313893254", "weihanglo" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/1755", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1320799130
[FEATURE] update windows-iso-image-base-template configuration

The problem
- The default OS Type is empty; if not selected, it would be set to Linux by default.
- The default Reserved Memory is empty; as this field affects Windows VM stability, we should fill in some value and notify users to increase it if their Windows VM crashes due to OOM.

The solution
- Set the default OS Type to Windows
- Set the default Reserved Memory to 128 (or 256)

Verified this bug has been fixed.

Test Information
Environment: qemu/KVM 2 nodes
Harvester Version: v1.0-2ddbc24c-head
ui-source Option: Auto
ui-index URL: https://releases.rancher.com/harvester-ui/dashboard/release-harvester-v1.0/index.html

Verify Steps:
Follow Steps to reproduce in https://github.com/harvester/harvester/issues/2592#issue-1320799130
gharchive/issue
2022-07-28T11:36:46
2025-04-01T06:44:24.846871
{ "authors": [ "lanfon72" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/2592", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2257769840
[ENHANCEMENT]

Is your enhancement related to a problem? Please describe.
With Harvester v1.3.0 we now support vGPU device passthrough. We need to allow users to specify vGPU device requirements during downstream cluster provisioning. This is already supported in the machine driver: https://github.com/harvester/docker-machine-driver-harvester/pull/50
The machine driver changes are included in rancher v2.7.9+ and v2.8.3+
A UI change is now needed in Rancher to allow passing vGPU requirements during downstream cluster provisioning.

Describe the solution you'd like

Describe alternatives you've considered

Additional context
related dashboard issue: https://github.com/rancher/dashboard/issues/10588
Rancher issues:
v2.9: https://github.com/rancher/dashboard/issues/10588
v2.8: https://github.com/rancher/dashboard/issues/10593

Test PASS, close as done.

Environment
Harvester Version: v1.4.0-dev-20240918
Profile: Single node QEMU/KVM (48C/128G/1.6T)
ui-source: Auto
Rancher Version: v2.9.1
Profile: harvester-vcluster (v1.29.9+k3s1)

Prerequisite
Create VM network mgmt-vlan1
Create VM image jammy-server-cloudimg-amd64.img
Enable required addons
rancher-vcluster: rancher-v2.9.1
pcidevices-controller
nvidia-driver-toolkit: Driver NVIDIA-Linux-x86_64-550.90.05-vgpu-kvm.run
Enable SR-IOV GPU Device and one vGPU Device
SR-IOV GPU Device
vGPU
Ref. https://docs.harvesterhci.io/v1.3/advanced/vgpusupport/

Steps
Given import Harvester to Rancher
When provisioning a new RKE2 guest cluster
Then can select the enabled vGPU in the vGPUs field in the Advanced section
Ref. HEP: https://github.com/harvester/harvester/pull/5752
And the guest cluster with the attached vGPU can be successfully provisioned
RKE2 cluster
vGPU is attached on the VM

Supportbundle
n/a

QA Check List
Update labels
[ ] This ticket: not-require/test-plan, not-require/release-note, require-ui, regression...
[ ] tests ticket: harvester/tests/issues/1240
gharchive/issue
2024-04-23T02:00:05
2025-04-01T06:44:24.865104
{ "authors": [ "albinsun", "bk201", "ibrokethecloud" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/5651", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2404574747
feat(Dockerfile): upgrade go to 1.22 IMPORTANT: Please do not create a Pull Request without creating an issue first. Problem: Solution: Related Issue: https://github.com/harvester/harvester/issues/6160 Test plan: @Mergify backport v1.4
gharchive/pull-request
2024-07-12T03:27:40
2025-04-01T06:44:24.868169
{ "authors": [ "FrankYang0529" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/pull/6163", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
61950850
Suggested Fix for #188: Animating a marker of multiple accidents. Animating a marker of multiple accidents when hovering over a sidebar entry. 2 suggested ways for fixing the issue. One by changing the icon of the correspondent marker. The second, by animating all the cluster. Thanks! Do you think the second option is better? It doesn't show which exact marker it is...
gharchive/pull-request
2015-03-16T02:55:21
2025-04-01T06:44:24.912875
{ "authors": [ "OmerSchechter", "danielhers" ], "repo": "hasadna/anyway", "url": "https://github.com/hasadna/anyway/pull/193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1990150881
it can be good to add somewhere a link to the github repo of the site

I think it would be good to add a link to the GitHub repo (maybe in the header?), as many other open-source sites do.

Can you assign this to me?

I think if we want an end user to report a bug to us, then it should be user-friendly and have some kind of bug-reporting form. And then it makes sense to add the GitHub icon. Nonetheless, I have already done that:

I think if we want an end user to report a bug to us, then it should be user-friendly and have some kind of bug-reporting form.

I agree with you about it, maybe we should open a new issue about it.

@xoRmalka @NoamGaash I see that the related PR was merged, so I'm closing this issue.
gharchive/issue
2023-11-13T08:45:07
2025-04-01T06:44:24.916374
{ "authors": [ "ArkadiK94", "shootermv", "xoRmalka" ], "repo": "hasadna/open-bus-map-search", "url": "https://github.com/hasadna/open-bus-map-search/issues/205", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2056615795
perf: reduce calls of agencyList api Description solves multiple calls to getAgenciesList api #346 screenshots could you please explain why it had two calls in the first place? just for my understanding the reason for 2 calls is - Dashboard page has two instances of OperatorSelector each one shown at different viewport (personally i dont see the reason for such behavior)
gharchive/pull-request
2023-12-26T18:24:19
2025-04-01T06:44:24.918463
{ "authors": [ "shootermv" ], "repo": "hasadna/open-bus-map-search", "url": "https://github.com/hasadna/open-bus-map-search/pull/348", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
223548987
limits.conf: Double nproc limits

Users seem to be running up against those pretty often, and we do not have unreasonable load...

Also, https://www.youtube.com/watch?v=9D-QD_HIfjA

@aerth That ought to solve your issue.

Also, https://www.youtube.com/watch?v=9D-QD_HIfjA

http://www.homestarrunner.com/fhqwhgads.html
gharchive/pull-request
2017-04-22T08:11:51
2025-04-01T06:44:24.931146
{ "authors": [ "KellerFuchs", "daurnimator" ], "repo": "hashbang/shell-etc", "url": "https://github.com/hashbang/shell-etc/pull/161", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1108401082
API De-coupling

Requirements
The API should become its own micro-service.

Definition of Done
Exclude one service from the UI service, move the Policy Engine to the Guardian Service; after that, the UI Service must become the API Gateway.

This issue was QA approved and closed on March 14th.
gharchive/issue
2022-01-19T18:03:18
2025-04-01T06:44:24.940603
{ "authors": [ "blockchain-biopharma", "prernaadev01" ], "repo": "hashgraph/guardian", "url": "https://github.com/hashgraph/guardian/issues/328", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1331977100
Resolve issues with some fields in eth_getBlock

Description:
Fix issues with values returned from getBlocks: timestamp and gasLimit.
Changes timestamp logic to always be the start of the period returned by the mirror node at /api/v1/blocks/:hashOrNumber
Changes gasLimit to be hardcoded as 15000000

Related issue(s):
Fixes #

Notes for reviewer:

Checklist
[ ] Documented (Code comments, README, etc.)
[ ] Tested (unit, integration, etc.)

Codecov Report
Merging #432 (912b117) into main (06e89ee) will decrease coverage by 0.09%. The diff coverage is 100.00%.

@@ Coverage Diff @@
## main #432 +/- ##
==========================================
- Coverage 73.65% 73.55% -0.10%
==========================================
Files 10 10
Lines 835 832 -3
Branches 137 135 -2
==========================================
- Hits 615 612 -3
Misses 172 172
Partials 48 48

Impacted Files | Coverage Δ
packages/relay/src/lib/eth.ts | 81.10% <100.00%> (-0.15%) :arrow_down:
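The two fixes are mechanical conversions from the mirror-node block record. A hedged Python sketch of what the relay would return for these fields (the `timestamp.from` value follows the mirror node's seconds.nanoseconds format; the function name is illustrative, not the relay's actual TypeScript code):

```python
def fixed_block_fields(mirror_block):
    """Derive the two corrected eth_getBlock fields from a mirror-node block."""
    start = mirror_block["timestamp"]["from"]        # e.g. "1659968514.060890949"
    return {
        "timestamp": hex(int(start.split(".")[0])),  # always the start of the period
        "gasLimit": hex(15_000_000),                 # hardcoded per this PR
    }

print(fixed_block_fields({"timestamp": {"from": "1659968514.060890949"}}))
# {'timestamp': '0x62f11c02', 'gasLimit': '0xe4e1c0'}
```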
gharchive/pull-request
2022-08-08T14:41:54
2025-04-01T06:44:24.948904
{ "authors": [ "Ivo-Yankov", "codecov-commenter" ], "repo": "hashgraph/hedera-json-rpc-relay", "url": "https://github.com/hashgraph/hedera-json-rpc-relay/pull/432", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2157841260
Implement node refresh command Once a network is deployed, if we need to refresh (setup & start) a single node using solo node setup -i node1, it fails because it generates a new config.txt assuming there is only one node. Also node start fails because it tries to setup mirror-node with one node. We should implement solo node refresh command to refresh a single/multiple node with the below functionalities: It shouldn't generate config.txt It shouldn't generate new keys It should not restart the mirror node It should dump all saved state It should allow destroying the whole pod.... It should only setup and start the specified nodes. [ ] required by #186 While solo node refresh is useful, but we need to allow adding a new node. solo state should maintain a list of all members so that we can generate config.txt when provisioning a new node or refreshing a node. :tada: This issue has been resolved in version 0.24.0 :tada: The release is available on: npm package (@release-0.24.x dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/issue
2024-02-28T00:20:48
2025-04-01T06:44:24.961164
{ "authors": [ "leninmehedy", "swirlds-automation" ], "repo": "hashgraph/solo", "url": "https://github.com/hashgraph/solo/issues/96", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
784836066
Fix code to make month and year visible Add Background behind Year and Month Detail To make it visible irrespective of background image color. One nice way is to use dark coloured image, for dark mode. @hashirshoaeb Is this issue still unsolved, if yes, please assign it to me. @cyber-venom003 It's on pending. As it is depending on #121 dependent on #163 Fix: Based on the brightness option, if the brightness is light, set a light colored image, if the brightness is dark, set a dark colored image. Sir, I would like to work on this, please assign me this issue.
gharchive/issue
2021-01-13T06:42:33
2025-04-01T06:44:25.476193
{ "authors": [ "KhyatiSaini", "cyber-venom003", "hashirshoaeb", "spiderxm" ], "repo": "hashirshoaeb/star_book", "url": "https://github.com/hashirshoaeb/star_book/issues/107", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
33118726
Add markup support for properties Original reporter: sol@ Some libraries, e.g. filepath, include properties in documentation. It would be useful to have dedicated markup for properties, so that Doctest can extract and test properties with QuickCheck. Anything still happening in this regard? No, please feel free to pick this up.
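A sketch of what such dedicated markup could look like in a Haddock comment, borrowing doctest's `prop>` convention (the property shown is made up for illustration):

```
-- | Reverse a list.
--
-- prop> \xs -> reverse (reverse xs) == (xs :: [Int])
```

A tool like doctest could then pick up each `prop>` line and hand it to QuickCheck as a property to check.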
gharchive/issue
2014-05-08T20:03:49
2025-04-01T06:44:25.518242
{ "authors": [ "alexbiehl", "ghc-mirror", "vimuel" ], "repo": "haskell/haddock", "url": "https://github.com/haskell/haddock/issues/206", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
565793306
Build does not respect job configuration (-j1) I’ve passed the job limit both to the stack executable and to the install.hs script. Both are being ignored, because the build attempts to build Cabal and ghc-lib-parser in parallel. I do not have enough RAM in my VPS to do that simultaneously. ubuntu@ip-10-0-0-24 ~/hs-ide> stack -j1 ./install.hs -j1 hie-8.6.5 # stack (for check) # git (for submodules) # git (for submodules) # stack (for hie-8.6.5) Cabal > configure Cabal > Configuring Cabal-2.4.1.0... Cabal > build ghc-lib-parser > configure Cabal > Preprocessing library for Cabal-2.4.1.0.. Cabal > Building library for Cabal-2.4.1.0.. ghc-lib-parser > Configuring ghc-lib-parser-8.8.2... ghc-lib-parser > build Cabal > [ 1 of 220] Compiling Distribution.Compat.Binary Cabal > [ 2 of 220] Compiling Distribution.Compat.Directory Cabal > [ 3 of 220] Compiling Distribution.Compat.Exception ghc-lib-parser > Preprocessing library for ghc-lib-parser-8.8.2.. ^CCabal > `gcc' failed in phase `Assembler'. 
(Exit code: -2) Progress 0/149 ******************************************************************************** Building failed, Try running `stack clean` and restart the build If this does not work, open an issue at https://github.com/haskell/haskell-ide-engine ******************************************************************************** user interrupt
related: #1302
Till now we have managed to not add args to the build script for the sake of simplicity, so the build is determined by the build config files (stack-${ghcVersion}.yaml for stack and cabal.project for cabal). As a workaround you could change stack-8.6.5.yaml adding jobs: 1, see https://docs.haskellstack.org/en/stable/yaml_configuration/#jobs You could add it to the stack global config in $STACK_ROOT/config.yaml
That's weird, because the ./install.hs --help documents a --jobs option.
That manages the number of threads of the script itself and is a feature of shake that comes for free. Mmm, I am afraid that the help shown by --help is from the stack executable (check it running stack --help) and not from the install.hs script. It looks a little bit confusing but stack is the tool we are using to compile and run a script written in haskell (the install.hs part). The command is a shorthand for (stack run ./install.hs) and when running something with stack run <program> you have to pass specific arguments to program after a -- (check stack run --help) to not mix them with stack arguments. So if we would have arguments for our build script the call would look: stack ./install.hs -- --jobs X
@fendor ah, that makes sense. Sorry, I guess I was just confused then. Thanks for explaining.
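The workaround described above (adding jobs: 1) would look like this as a config fragment — assuming jobs is a top-level key, which per the linked stack documentation it is, in either the project file or the global config:

```yaml
# stack-8.6.5.yaml (or $STACK_ROOT/config.yaml)
# Limit stack to one concurrent build job, so Cabal and
# ghc-lib-parser are not compiled in parallel on low-RAM machines.
jobs: 1
```

With this in place, no command-line flag is needed; the build script inherits the limit from stack's own configuration.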
@jneira thanks for the workaround, I’ll close this since it’s already discussed in #1302 FWIW — re: #1302 — I would be fine with binaries so long as they’re distributed from a reasonable source (i.e. Github Releases vs. random S3 bucket).
gharchive/issue
2020-02-15T18:53:09
2025-04-01T06:44:25.525294
{ "authors": [ "fendor", "jneira", "oconnore" ], "repo": "haskell/haskell-ide-engine", "url": "https://github.com/haskell/haskell-ide-engine/issues/1657", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
442103271
Upgrade to java 11 and make java module system (jigsaw) compliant Based upon what I've gathered during the research for #18, this is what I want to do: This library will be java 11 based (not sure yet about further upgrade path). This library will be, at the least, usable in a jigsaw environment without breakage in the future (Automatic-Module-Name). The preferable situation would be to use module-info for all modules. To do that, we will:
1. Update to java 9 so we can set release parameters for the compiler and check for module usage (crashdata-parent 1.22)
2. Set Automatic-Module-Name entries in manifests via maven-jar-plugin
3. Check for module-system-compliancy in dependencies. If every dependency at least declares an automatic module name, that module can go full module and create a module descriptor (module-info). If not, it will stay as is for now.
4. Update to java 11.
5. Optionally, move the nl.crashdata.chartjs.colors package in java-chartjs-data to nl.crashdata.chartjs.data.colors so all packages of the data module have the same nl.crashdata.chartjs.data prefix. This also enables us to make the module name compliant with Stephen Colebourne's recommendations
6. Move the nl.crashdata.chartjs.components.* packages in java-chartjs-wicket to nl.crashdata.chartjs.wicket.components.*
7. Release all this as version 2.0.0 as these are pretty significant changes and point 5 is an API break.
(Non-test) Dependencies and modularity:

| project | dependency | modularity | module |
| --- | --- | --- | --- |
| java-chartjs-data | jackson-annotations | automatic module name in manifest | com.fasterxml.jackson.annotation |
| java-chartjs-serialization | jackson-annotations | automatic module name in manifest | com.fasterxml.jackson.annotation |
| java-chartjs-serialization | jackson-core | automatic module name in manifest | com.fasterxml.jackson.core |
| java-chartjs-serialization | jackson-databind | automatic module name in manifest | com.fasterxml.jackson.databind |
| java-chartjs-serialization | jackson-datatype-jsr310 | automatic module name in manifest | com.fasterxml.jackson.datatype.jsr310 |
| java-chartjs-wicket | jackson-annotations | automatic module name in manifest | com.fasterxml.jackson.annotation |
| java-chartjs-wicket | jackson-core | automatic module name in manifest | com.fasterxml.jackson.core |
| java-chartjs-wicket | jackson-databind | automatic module name in manifest | com.fasterxml.jackson.databind |
| java-chartjs-wicket | jackson-datatype-jsr310 | automatic module name in manifest | com.fasterxml.jackson.datatype.jsr310 |
| java-chartjs-wicket | java-chartjs-data | module-info | nl.crashdata.chartjs.data |
| java-chartjs-wicket | java-chartjs-serialization | module-info | nl.crashdata.chartjs.serialization |
| java-chartjs-wicket | wicket-core | none (see WICKET-6585) | |
| java-chartjs-wicket | wicket-request | none (see WICKET-6585) | |
| java-chartjs-wicket | wicket-util | none (see WICKET-6585) | |
| java-chartjs-wicket | jdk-serializable-functional | module-info | org.danekja.jdk.serializable.functional |
| java-chartjs-wicket | openjson | none (see https://github.com/openjson/openjson/pull/15) | |
| java-chartjs-wicket | commons-fileupload | none | |
| java-chartjs-wicket | commons-io | none | |
| java-chartjs-wicket | commons-collections4 | none | |
| java-chartjs-wicket | slf4j-api | none | |

For data:

| dependency | modularity | module-name |
| --- | --- | --- |
| jackson-annotations | modulename in manifest | com.fasterxml.jackson.annotation |

Fixed via #22
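For the "go full module" case in step 3 of the plan, the module descriptor for java-chartjs-data would look roughly like this — a sketch only, assuming the module name nl.crashdata.chartjs.data from the dependency overview above, and that Jackson's annotations module is its only non-test dependency; the exact set of exported packages is an assumption:

```
// module-info.java for java-chartjs-data (sketch; exports are assumed)
module nl.crashdata.chartjs.data {
    requires com.fasterxml.jackson.annotation;
    exports nl.crashdata.chartjs.data;
}
```

Modules that cannot go full module yet would instead only carry the Automatic-Module-Name manifest entry set via maven-jar-plugin, as described in step 2.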
gharchive/issue
2019-05-09T08:04:09
2025-04-01T06:44:25.577113
{ "authors": [ "haster" ], "repo": "haster/java-chartjs", "url": "https://github.com/haster/java-chartjs/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1863530228
Push to docker hub As part of testing the clickhouse data connector it'd be great to have this published to docker hub. That is all. Thanks! This should be resolved now: https://hub.docker.com/r/hasura/clickhouse-data-connector/ Thanks!
gharchive/issue
2023-08-23T15:08:43
2025-04-01T06:44:25.583260
{ "authors": [ "jbergstroem", "typhonius" ], "repo": "hasura/clickhouse_gdc_v2", "url": "https://github.com/hasura/clickhouse_gdc_v2/issues/4", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
777094057
Search (os, oss, or, orr...) does not seem to work I have set up all the variables properly (hopefully) and every feature of the workflow works as expected, except for searching for md files. If I type any of the search keywords, oss for example, press enter (figure 1) and then start typing, nothing seems to show up when I search for a file that I know exists (figure 2). These are my settings (/Volume/Notes/ is the volume where I store various notes, /Volume/Notes/Notes is the folder where my Obsidian vault is stored). Both Alfred and Spotlight can find the md files I search for and every other feature of the workflow works perfectly.
That's odd. Does od work for you? Does it properly open your daily note in Obsidian? If so, I might need some debugging/logging info. Could you try the following? Open Alfred, go to the workflow, click the bug icon at the top right to pull up the logging/debug pane at the bottom. Then invoke Alfred and run os 2020-12. Then show me the log data (see screenshot below). That might provide some clues...
Yes od works as expected, although it does not apply the date tags ({{date}}) I defined in the daily template. os does not show anything in the debug window. As soon as I start typing a search query Alfred falls back to the default search view. oss on the other hand shows this:
Yea, that's the expected behavior of od. It just makes a copy of your template, so it's not "inserting" the way Obsidian does it. Not sure if there's a way to address this problem. As for your problems with os, could you let me know what's your Alfred version? Try the following solutions, one at a time, to see if it addresses your problems.
It might be simply because the location of your vaults is breaking Script Filter. Click on the os Script Filter, navigate to the Scope pane and check "Show files marked as System File". Then save settings. And make sure your scope is empty (just like the screenshot below). See if that works? If not, move on to step 2.
Navigate to the "Basic Setup" pane and remove the highlighted line under "File Types". Removing this line will make os return all filetypes, not just markdown. If after modifying this os still doesn't list any matching files, try step 3 and let me know what happens...
Try creating a temporary new vault in another directory (maybe your desktop) and then add it with oaddvault. Create a markdown note in it and see if os finds it. If this doesn't work, it might be an Alfred version issue... Have you tried using on to create a new note in your vault? It also uses Script Filter, so if you also cannot get on to work, then it's even more likely it's an Alfred version/Script Filter issue.
It seems to work now after step 2. Thank you! I removed that file type from os, or, and ot and now all commands work properly. on worked fine both before and after step 2.
Great!
As another point of reference, I had the same issue, and it also was resolved after performing step 2.
Thanks for letting me know, @HEmile. I'll probably add a note to the README later.
To add a slightly different point of reference, what fixed it for me was to remove all the file types like in step 1), and then to drag and drop one of the markdown files from my vault to the Alfred "File Type" list. Turns out Markdown files are marked as "com.unknown.md" on my system.
Man, I've been banging my head trying to figure this out. I have no experience with code, programming or any of the things mentioned here, or the other issues referring to my problem. I did the steps you provided and all of a sudden the os command now recognizes everything and works. Mine seems to be dot com type as well. I will further test all the other functions now that this is working and if they don't work, I will check the status in the related problem for the same issue. Thanks so much for posting this. @etiennepellegrini Dan P.S. I will also post this in the thread that I created with the developer. Not sure if that is needed?
@Harvison, glad you've found the solution!
Just to say that I had the same issue, skipped to step 2, which didn't solve the issue, then performed step 1, and that solved the issue. Thanks so much!
I had the same issue, and applied both steps 1 & 2 above; neither solved the issue. Only when I removed the preconfigured entries for "obsidian help" ("vault1" and "vault1name") in the "Workflow Environment Variables" section did the search finally work. Obsidian: 0.12.15 Alfred + PowerPack: 4.5.1 obsidian-alfred: 0.3.7 Guy Posting in case others have a similar issue.
I was seeing the same symptoms, but the root cause for mine was an outdated macOS Spotlight index. (Even Alfred's native file search wasn't finding the files.) I rebuilt the Spotlight cache, ran reload in Alfred and et voilà—the workflow started working! I have tried all the steps a few times. None of them worked for me, and I cannot use on either. I am on the latest version of Alfred (5.0.5). Should I downgrade to an earlier version to see if that works then?
gharchive/issue
2020-12-31T17:33:05
2025-04-01T06:44:25.646505
{ "authors": [ "HEmile", "Harvison", "RoamanEmpire", "adithyabsk", "etiennepellegrini", "hauselin", "kenanmike", "szfkamil", "workflowsguy" ], "repo": "hauselin/obsidian-alfred", "url": "https://github.com/hauselin/obsidian-alfred/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
622016733
Language files updates Just small fix (missing variable name) and some updates ... Thank you!
gharchive/pull-request
2020-05-20T19:04:20
2025-04-01T06:44:25.648170
{ "authors": [ "havfo", "mi4aux" ], "repo": "havfo/multiparty-meeting", "url": "https://github.com/havfo/multiparty-meeting/pull/402", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2070488392
Optimize file delivery, cache-busting, offline support (CSS, JS) Currently, we append a version modifier to the query string for our CSS and JS files where feasible. However, this approach is not ideal for several reasons. For instance, the automatically generated MyApp.styles.css files for CSS isolation do not include this version modifier. Additionally, there are problems related to offline mode in PWAs, as highlighted in issue #444, among other concerns. In the Blazor roadmap, there's a promising feature under development titled 'Optimize file delivery', which could address these issues. More details can be found here: https://github.com/dotnet/aspnetcore/issues/52824. We should monitor this development closely and integrate the solutions into our library as soon as they become available. Resolved within Blazor in .NET 9.
gharchive/issue
2024-01-08T13:42:24
2025-04-01T06:44:25.650399
{ "authors": [ "hakenr" ], "repo": "havit/Havit.Blazor", "url": "https://github.com/havit/Havit.Blazor/issues/726", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
182275363
add sql.query to node properties @objectiser could you please review? @pavolloffay LGTM :+1:
gharchive/pull-request
2016-10-11T14:14:57
2025-04-01T06:44:25.654395
{ "authors": [ "objectiser", "pavolloffay" ], "repo": "hawkular/hawkular-apm", "url": "https://github.com/hawkular/hawkular-apm/pull/627", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
105274314
Change charts directive data attributes to two-way binding '=' Instead of using the read attribute '@' which will modify the HTML with its value at every refresh. Also added a control variable for debug messages. Merged #33
gharchive/pull-request
2015-09-07T22:52:23
2025-04-01T06:44:25.655700
{ "authors": [ "ammendonca", "mtho11" ], "repo": "hawkular/hawkular-charts", "url": "https://github.com/hawkular/hawkular-charts/pull/33", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
116406200
Hawkular 654 - Trigger Detail Finished with original design of trigger detail work. Merged #649
gharchive/pull-request
2015-11-11T19:37:45
2025-04-01T06:44:25.656505
{ "authors": [ "jshaughn", "mtho11" ], "repo": "hawkular/hawkular", "url": "https://github.com/hawkular/hawkular/pull/649", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
92613068
Camel plugin - Allow to delete message from browse endpoints We can move messages, but we should also allow deleting them. This will require that a Camel browseable endpoint has an api that can tell if it supports deleting. E.g. we can delete from seda / mock / file etc. But for JMS it's not possible.
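The capability check described above — an endpoint advertising whether deletion is supported, so the UI can enable or disable the action per endpoint type — could be sketched like this. This is a Python sketch of the design only, not Camel's actual API; the class and method names are hypothetical:

```python
class BrowsableEndpoint:
    """Sketch of a browse endpoint that advertises delete support."""

    def __init__(self, name, deletable):
        self.name = name
        self._messages = []
        self._deletable = deletable  # e.g. True for seda/mock/file, False for JMS

    def add(self, message):
        self._messages.append(message)

    def is_delete_supported(self):
        # The UI queries this before offering a delete action
        return self._deletable

    def delete(self, index):
        if not self.is_delete_supported():
            raise NotImplementedError(f"{self.name} does not support delete")
        return self._messages.pop(index)
```

A client would call is_delete_supported() first and only render the delete action when it returns True, instead of letting a delete attempt fail on endpoints like JMS.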
gharchive/issue
2015-07-02T11:15:41
2025-04-01T06:44:25.657842
{ "authors": [ "davsclaus" ], "repo": "hawtio/hawtio-integration", "url": "https://github.com/hawtio/hawtio-integration/issues/9", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
186508171
Get rid of Function for anon to unconfuse Kevin (and myself, to be honest) see https://github.com/haxetink/tink_typecrawler/commit/8743ffcc5f77a3d6fc7e5884552a4efd73815696#diff-3e3ce03fa2c62d389c10d9f7a2441f2bR21 @kevinresol Sorry, I decided to break a few things. I remembered why I wanted a Function (otherwise the type is lost and inference fails). So now, instead of the args, a Generator must implement wrap so that you may define required arguments, return type and what not. The supplied placeholder will (later) be modified in place to contain the generated expression (this has to be so to deal with cycles). This approach is actually easier to understand than before, as I have spent some time to understand what the args list means. And this may fix https://github.com/haxetink/tink_json/pull/4 as well
gharchive/issue
2016-11-01T11:02:21
2025-04-01T06:44:25.691784
{ "authors": [ "back2dos", "kevinresol" ], "repo": "haxetink/tink_typecrawler", "url": "https://github.com/haxetink/tink_typecrawler/issues/6", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
1589259379
Bug: Runcam Nano 3 image overexposed when using analogue module I have noticed that the analogue image is very overexposed when using quads with a Runcam Nano 3 camera. It's possible it is also happening with other cameras, but it does not seem to happen when using a Caddx Ant camera (in my case I run my Ant cameras in PAL mode). Dropping the brightness and OLED values to 0 makes the image slightly better, but still much more overexposed than when using another set of goggles such as the Skyzone 04X with the same module. I am running the latest firmware from the HDZERO website HDZERO_GOGGLE-7.68.127.7.bin and a Skyzone SteadyView ELRS module, though according to some Facebook posts on the HDZERO group, this is also happening for users with other modules such as TBS Fusion. I can confirm this behaviour. Exposure is fine on an Eachine Box Goggles (EV800), but the same Quad/Camera is overexposed when being received by my ImmersionRC Rapidfire viewed through the goggles. Runcam Nano 3 of a Mobula6.
gharchive/issue
2023-02-17T12:07:09
2025-04-01T06:44:25.753628
{ "authors": [ "HazzaHFPV", "cynfewl" ], "repo": "hd-zero/hdzero-goggle", "url": "https://github.com/hd-zero/hdzero-goggle/issues/166", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2278733346
OSD doesn't switch to HD with HDZero in Inav Hi I have a problem with the osd display: Two planes: FC Speedybeewing Inav 7.1 Vtx Race1 VRX Sharkbyte OSD display is not good, VTX menu jammed, Inav menu is ok (however it's working fine with WS). FC Speedybeewing Inav 6.0 Vtx Race1 VRX Sharkbyte OSD display is good, VTX menu good, Inav menu is ok Both images have been taken from the same VRX. I have flashed the fw of the 7.1 back to 6.0, no improvement. I have tried to swap the VTX's with the same result. I have tried another FC brand MATEK405SE on Inav 7.1; same issue. I have a third speedybeewing FC Inav 6.1 on a plane equipped with WS. I have swapped the VTX to HDZero and the problem occurred. If you have the solution please let me know. IMHO It seems that the VRX received a problematic code from the VTX or that part of the code is missing, which causes the display problem. Solved... the problem was due to overlapping elements!
gharchive/issue
2024-05-04T04:50:05
2025-04-01T06:44:25.757165
{ "authors": [ "Pascal687" ], "repo": "hd-zero/hdzero-goggle", "url": "https://github.com/hd-zero/hdzero-goggle/issues/409", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2083267238
error parsing fritzbox.conf after container restart Hi, I am running telegraf in a docker container and install the fritzbox plugin via Dockerfile while building the container. My problem, and I can reproduce it every time, is that after a container restart the fritzbox.conf has parsing errors. To reproduce I do the following:
1. empty fritzbox.conf in /etc/telegraf/telegraf.d/
2. restart container (obviously with error messages - see below)
3. copy the fritzbox.conf from /usr/local/bin/telegraf/ to /etc/telegraf/telegraf.d/
4. check logs (fritz.box not found - which is OK)
5. restart container
6. check logs (parsing error in fritzbox.conf)
These are the logs I am referring to (I marked the stages from above): 2024-01-16T07:44:21Z I! Starting Telegraf 1.29.1 brought to you by InfluxData the makers of InfluxDB 2024-01-16T07:44:21Z I! Available plugins: 241 inputs, 9 aggregators, 30 processors, 24 parsers, 60 outputs, 6 secret-stores 2024-01-16T07:44:21Z I! Loaded inputs: execd mem 2024-01-16T07:44:21Z I! Loaded aggregators: 2024-01-16T07:44:21Z I! Loaded processors: 2024-01-16T07:44:21Z I! Loaded secretstores: 2024-01-16T07:44:21Z I! Loaded outputs: influxdb 2024-01-16T07:44:21Z I! Tags enabled: host=telegraf user=${USER} 2024-01-16T07:44:21Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"telegraf", Flush Interval:10s 2024-01-16T07:44:21Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s] --> 2. restarted with empty fritzbox.conf - this error is expected 2024-01-16T07:44:21Z E! [inputs.execd] stderr: "Err: nothing to run" 2024-01-16T07:44:21Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1 2024-01-16T07:44:21Z I! [inputs.execd] Restarting in 10s... 2024-01-16T07:44:31Z I!
[inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:44:31Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:44:31Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:44:31Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:44:41Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:44:41Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:44:41Z E! [inputs.execd] Error in plugin: read |0: file already closed
2024-01-16T07:44:41Z E! [inputs.execd] Error in plugin: error reading stderr: read |0: file already closed
2024-01-16T07:44:41Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:44:41Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:44:51Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:44:51Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:44:51Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:44:51Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:01Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:01Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:01Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:01Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:11Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:11Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:11Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:11Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:21Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:21Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:21Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:21Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:31Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:31Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:31Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:31Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:41Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:41Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:41Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:41Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:45:51Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]
2024-01-16T07:45:51Z E! [inputs.execd] stderr: "Err: nothing to run"
2024-01-16T07:45:51Z E! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin exited: exit status 1
2024-01-16T07:45:51Z I! [inputs.execd] Restarting in 10s...
2024-01-16T07:46:01Z I! [inputs.execd] Starting process: /usr/local/bin/telegraf/fritzbox-telegraf-plugin [-config /etc/telegraf/telegraf.d/fritzbox.conf -poll_interval 10s]

--> 4. telegraf reads the fritzbox.conf and tries to reach out for the fritz.box host (the error is also expected due to DNS configuration)

2024-01-16T07:46:11Z E! [inputs.execd] stderr: "2024/01/16 07:46:11 E! Error in plugin: Get "http://fritz.box:49000/tr64desc.xml\": dial tcp: lookup fritz.box on 127.0.0.11:53: no such host"
2024-01-16T07:46:21Z E! [inputs.execd] stderr: "2024/01/16 07:46:21 E! Error in plugin: Get "http://fritz.box:49000/tr64desc.xml\": dial tcp: lookup fritz.box on 127.0.0.11:53: no such host"
2024-01-16T07:46:31Z E! [inputs.execd] stderr: "2024/01/16 07:46:31 E! Error in plugin: Get "http://fritz.box:49000/tr64desc.xml\": dial tcp: lookup fritz.box on 127.0.0.11:53: no such host"
2024-01-16T07:46:39Z E! [inputs.execd] Error in plugin: read |0: file already closed
2024-01-16T07:46:39Z I! [inputs.execd] Process /usr/local/bin/telegraf/fritzbox-telegraf-plugin shut down
2024-01-16T07:46:39Z I! [agent] Hang on, flushing any cached metrics before shutdown
2024-01-16T07:46:39Z I! [agent] Stopping running outputs
2024-01-16T07:46:42Z I! Loading config: /etc/telegraf/telegraf.conf
2024-01-16T07:46:42Z I! Loading config: /etc/telegraf/telegraf.d/fritzbox.conf

--> 6. after restart of the container the fritzbox.conf has parsing errors

2024-01-16T07:46:42Z E! error loading config file /etc/telegraf/telegraf.d/fritzbox.conf: error parsing fritzbox, undefined but requested input: `fritzbox

To be complete with the configuration - this is my Dockerfile:

FROM telegraf:${TELEGRAF_VERSION}
RUN apt -y update && apt -y upgrade
RUN apt install -y unzip wget
RUN wget https://github.com/hdecarne-github/fritzbox-telegraf-plugin/releases/download/v0.4.0/fritzbox-linux-amd64-0.4.0.zip
RUN unzip fritzbox-linux-amd64-0.4.0.zip -d /usr/local/bin/telegraf/

and this is the part of the compose file used for the container:

    #image: "telegraf:latest"
    build:
      context: telegraf/
      args:
        TELEGRAF_VERSION: latest
    hostname: "telegraf"
    container_name: telegraf
    depends_on:
      - influxdb
    volumes:
      - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - ./telegraf/telegraf.d/:/etc/telegraf/telegraf.d/
    environment:
      - USER: "telegraf"
      - INFLUX_PASSWORD: "ChangeMe!!!"
    networks:
      - influx

Hi, I may need some more information. First note, there are actually two configuration parts required: the actual plugin configuration (normally fritzbox.conf) as well as the inputs.execd configuration. How did you configure the inputs.execd part?

The fritzbox.conf file should not be located inside the telegraf.d directory. This directory is sourced by Telegraf to build up its own configuration. While this may not be an issue, I suggest avoiding this to rule out unexpected side effects. You can store the fritzbox.conf in /etc/telegraf for example. The exact path to this file must be given in the -config parameter of the inputs.execd configuration.

At the point the parse error occurs, what is the actual content of fritzbox.conf right at this moment?

--> 6. after restart of the container the fritzbox.conf has parsing errors
2024-01-16T07:46:42Z E! error loading config file /etc/telegraf/telegraf.d/fritzbox.conf: error parsing fritzbox, undefined but requested input: `fritzbox

Hi, thanks for the quick response!
Here is the Dockerfile I use to build my telegraf container and install the plugin:

ARG TELEGRAF_VERSION
FROM telegraf:${TELEGRAF_VERSION}
RUN apt -y update && apt -y upgrade
RUN apt install -y unzip wget
RUN wget https://github.com/hdecarne-github/fritzbox-telegraf-plugin/releases/download/v0.4.0/fritzbox-linux-amd64-0.4.0.zip
RUN unzip fritzbox-linux-amd64-0.4.0.zip -d /usr/local/bin/telegraf/

My telegraf.conf is also straightforward:

[global_tags]
  user = "${USER}"

[[inputs.mem]]

[[inputs.execd]]
  command = ["/usr/local/bin/telegraf/fritzbox-telegraf-plugin", "-config", "/etc/telegraf/telegraf.d/fritzbox.conf", "-poll_interval", "10s"]
  signal = "none"

# For InfluxDB 1.x:
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  password = "${INFLUX_PASSWORD}"

As you suggested, I moved the fritzbox.conf out of the telegraf.d directory and mounted it directly in /etc/telegraf/. Actually - this was the issue! It might be a good idea to mention this in the README explicitly. Thanks for your support!
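For anyone landing here with the same parse error, the fix described above boils down to the following layout. This is only a sketch based on this thread (the /etc/telegraf/fritzbox.conf location is the example the maintainer suggested; the -config argument must match wherever you actually mount the file):

```toml
# docker-compose: mount the plugin config OUTSIDE of telegraf.d, e.g.
#   - ./telegraf/fritzbox.conf:/etc/telegraf/fritzbox.conf:ro
# Telegraf sources everything in /etc/telegraf/telegraf.d/ as its OWN config,
# so the plugin's config must not live there.

# telegraf.conf - point inputs.execd at the new location
[[inputs.execd]]
  command = ["/usr/local/bin/telegraf/fritzbox-telegraf-plugin", "-config", "/etc/telegraf/fritzbox.conf", "-poll_interval", "10s"]
  signal = "none"
```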
gharchive/issue
2024-01-16T08:15:11
2025-04-01T06:44:25.783214
{ "authors": [ "hdecarne", "litronics" ], "repo": "hdecarne-github/fritzbox-telegraf-plugin", "url": "https://github.com/hdecarne-github/fritzbox-telegraf-plugin/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2137127062
Clarify the current version compatibility. Fixes #5 Don't need this one anymore.
gharchive/pull-request
2024-02-15T18:01:22
2025-04-01T06:44:25.784736
{ "authors": [ "thatch" ], "repo": "hdeps/hdeps", "url": "https://github.com/hdeps/hdeps/pull/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
863268843
warden_message no longer available to failure app

Environment
Ruby 2.6.6
Rails 6.0.3.6
Devise 4.7.1

Current behavior
We have a custom failure app where we return a value based on the message passed to fail. We upgraded from version 4.6.2 where this had been working. We didn't notice until recently that it had broken.

Failure App:

module Devise
  class CustomFailure < Devise::FailureApp
    def respond
      if request_format == :json
        json_error_response
      else
        super
      end
    end

    def json_error_response
      self.headers['WWW-Authenticate'] = 'Bearer realm=Application'
      self.status = 401
      self.content_type = request.format.to_s
      self.response_body = [
        {
          error: { general: message, type: warden_message },
          messages: [message]
        }
      ].to_json
    end

    def message
      case warden_message
      when :device_limit
        'This account has been logged out because you are over your device limit for this user account.'
      when :force_logout
        'This account has been logged out because you have been forced to do so.'
      when :password_reset
        'This account has been logged out because your password was reset.'
      else
        'Bad credentials.'
      end
    end
  end
end

We have a custom TokenAuthenticatable strategy. I've simplified it a bit but you get the gist.

# token_authenticatable.rb
# frozen_string_literal: true
module Devise
  module Strategies
    class TokenAuthenticatable < Authenticatable
      attr_accessor :auth_token

      def valid?
        (valid_for_params_auth? || valid_for_http_auth?) && credentials.present?
      end

      def authenticate!
        env['devise.skip_trackable'] = true
        token = AuthToken.find_by(encrypted_token: credentials)
        return fail!(:invalid) unless token.present?

        # An example reason is :force_logout
        return fail!(token.deleted_reason.to_sym) if token.discarded?

        resource = mapping.to.find_by(id: token.user_id)
        if validate(resource)
          success!(resource)
        else
          fail(:invalid)
        end
      end

      def store?
        false
      end

      def clean_up_csrf?
        false
      end

      private

      def valid_for_params_auth?
        params_authenticatable? && valid_params?
      end

      def valid_for_http_auth?
        self.authentication_type = :http_auth
        request.authorization.present? && credentials.present?
      end

      def credentials
        if request.authorization.present?
          token, options = ActionController::HttpAuthentication::Token.token_and_options(request)
          token
        elsif params[:auth_token].present?
          params[:auth_token]
        end
      end

      def valid_params?
        params[:auth_token].present?
      end
    end
  end
end

Warden::Strategies.add(:token_authenticatable, Devise::Strategies::TokenAuthenticatable)
Devise.add_module(:token_authenticatable, strategy: :token_authenticatable)

# devise.rb initializer
require 'devise/custom_failure'
require 'devise/models/token_authenticatable'
require 'devise/strategies/token_authenticatable'

# Use this hook to configure devise mailer, warden hooks and so forth.
# Many of these configuration options can be set straight in your model.
Devise.add_module(:token_authenticatable, strategy: true, no_input: true)

Devise.setup do |config|
  config.mailer.class_eval do
    helper :subdomain
  end

  config.warden do |manager|
    manager.failure_app = Devise::CustomFailure
  end

  ActiveSupport.on_load(:devise_failure_app) do
    include Turbolinks::Controller
  end
end

This all works just fine as long as auth is successful, but if it isn't successful then I get the default value Bad Credentials from the failure app above.

Expected behavior
I have access to the value of warden_message in the failure app so that I can construct a custom error message. Perhaps we're abusing warden_message and there is a supported way of achieving what we are doing.

I'm going to close this. Even though the behavior changed I am able to stash it in the env and retrieve it in the failure app.

Thanks @chadwilken. I don't see anything in the changelog or the code that would explain the behavior you're describing/expecting changed. Might be worth checking if anything else changed there (e.g. warden, another dependency, etc). Hope that helps.

Hi, I am using 4.9.2 and I am experiencing the same issue. warden_message is nil in my custom failure app even if I call fail(:my_error_message) in my custom strategy's authenticate! method. @chadwilken could you please elaborate on your solution? thank you

Before we call fail in our custom strategy we do env['devise.token.deleted_reason'] = 'Your message here'. Then in CustomFailure we use that key to find the correct message using I18n.
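The env-stash workaround described in that last comment can be sketched language-agnostically. This is a Python stand-in (a plain dict playing the role of the Rack env); the key name and messages are just the ones mentioned in this thread - adapt them to your app:

```python
# The strategy stashes a reason in the request env before failing;
# the failure app reads it back instead of relying on warden_message.
MESSAGES = {
    'force_logout': 'This account has been logged out because you have been forced to do so.',
    'password_reset': 'This account has been logged out because your password was reset.',
}

def strategy_fail(env, reason):
    # done right before calling fail!/fail in the custom Warden strategy
    env['devise.token.deleted_reason'] = reason

def failure_message(env):
    # done inside the custom failure app; falls back to the generic message
    return MESSAGES.get(env.get('devise.token.deleted_reason'), 'Bad credentials.')

env = {}
strategy_fail(env, 'password_reset')
```

The same request env is visible to both the strategy and the failure app, which is why the value survives even when warden_message does not.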
gharchive/issue
2021-04-20T21:44:46
2025-04-01T06:44:25.813777
{ "authors": [ "carlosantoniodasilva", "chadwilken", "masciugo" ], "repo": "heartcombo/devise", "url": "https://github.com/heartcombo/devise/issues/5374", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1576728679
feat: DEV-3028: Audio v3 dual channel render

PR fulfills these requirements
[x] Commit message(s) and PR title follows the format [fix|feat|ci|chore|doc]: TICKET-ID: Short description of change made ex. fix: DEV-XXXX: Removed inconsistent code usage causing intermittent errors
[ ] Tests for the changes have been added/updated (for bug fixes/features)
[ ] Docs have been added/updated (for bug fixes/features)
[x] Best efforts were made to ensure docs/code are concise and coherent (checked for spelling/grammatical errors, commented out code, debug logs etc.)
[x] Self-reviewed and ran all changes on a local instance (for bug fixes/features)

Change has impacts in these area(s) (check all that apply)
[x] Product design
[x] Frontend

Describe the reason for change
To support the ability to display multichannel audio in separate waveforms.

What is the new behavior?
Adds the following option on the audio tag: splitchannel="true|false"
When true, this uses the new decoder to retrieve the split channel data and renders the channels one above the other.
When false, continues to behave as it does currently and displays an averaged sample set of multiple channels (if supported and present in the audio format file) as a single waveform.

What is the current behavior?
Only displays a single channel of data even if there is more than one channel available.

What libraries were added/updated?
https://github.com/bmartel/audio-file-decoder 2.3.14 (forked and republished under my npm account until I can get these changes merged back into https://github.com/aeroheim/audio-file-decoder)

Does this change affect performance?
This does take more time and resources to process, as it is now sending back in some cases twice the amount of data as before, since the original only had a single channel's worth of data at any time.

Does this change affect security?
N/A

What alternative approaches were there?
N/A

What feature flags were used to cover this change?
TBD

Does this PR introduce a breaking change?
(check only one)
[ ] Yes, and covered entirely by feature flag(s)
[ ] Yes, and covered partially by feature flag(s)
[X] No
[ ] Not sure (briefly explain the situation below)

What level of testing was included in the change? (check all that apply)
[X] e2e
[ ] integration
[ ] unit

Which logical domain(s) does this change affect?
Audio V3

Codecov Report
Base: 9.45% // Head: 9.45% // No change to project coverage :thumbsup:
Coverage data is based on head (35b95d0) compared to base (46cf260). Patch has no changes to coverable lines.

Additional details and impacted files

@@           Coverage Diff           @@
##           master    #1185   +/-   ##
======================================
  Coverage    9.45%    9.45%
======================================
  Files         108      108
  Lines        7830     7830
  Branches     1963     1963
======================================
  Hits          740      740
  Misses       5947     5947
  Partials     1143     1143

:umbrella: View full report at Codecov.
gharchive/pull-request
2023-02-08T20:02:20
2025-04-01T06:44:25.827652
{ "authors": [ "bmartel", "codecov-commenter" ], "repo": "heartexlabs/label-studio-frontend", "url": "https://github.com/heartexlabs/label-studio-frontend/pull/1185", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
145023810
Release v1.2.1 on Chef Supermarket I noticed that version 1.2.0 of this cookbook is available on the Chef Supermarket, but version 1.2.1 (which has a significant bug fix for the previous version) isn't. Will version 1.2.1 be released on the Chef Supermarket? See: https://supermarket.chef.io/cookbooks/collectd-ng/versions/1.2.1
gharchive/issue
2016-03-31T21:30:43
2025-04-01T06:44:25.842121
{ "authors": [ "hectcastro", "sgerrand" ], "repo": "hectcastro/chef-collectd", "url": "https://github.com/hectcastro/chef-collectd/issues/60", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2341914293
HIP 122 Recommendations cc: @mrfizzy99 👍 Approved updated recommendations
gharchive/pull-request
2024-06-08T23:57:38
2025-04-01T06:44:25.882097
{ "authors": [ "abhay", "mrfizzy99" ], "repo": "helium/HIP", "url": "https://github.com/helium/HIP/pull/1039", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1306932331
[Addition]: Self witnessing PantherX in US (20 HS)

Hotspot b58 Addresses
11k8kJxnMxhJL5TGHGVwQR4R8R4jabLDiEP8x8zy9bvi1EzZDjb
11245sF1MnkFw9qRW32BXRuHuQycoq7MMTYfw8S4B4bUiGTzmUu1
112jt9BiSqxV3U9gUndJZ6YC7d1RUXfTZTSq5feNx2MJNVh4FgfG
11PjMbbeuXpamZk8ystPKmjqEeLQyHXixWfQeNs5bXY1N9WxTEd
112ZA97oS2gPqhzB1HLVsykCpA98XVeZzAJyBK3DS35REwScrPkn
112FK9nU76Cf2Er8BXY6KRnG929mUf9RfFof4QtVkAzrfDX9s8Kj
112nLTFHzJByQHj1xYyzXqkjvnHd6M2Kw6mUj4Wa6rwFgJ2Dti6z
11jqBzenh1Sx6AMAvoG5yZcy9etfm132n4ytAPsHQ3s2YMTk8Fr
112GVXHBgTwB2ApqA2b1cvEkuViPjTSuSu2qQ56zPs7bqypHTkTv
11WMa6Fkoa58Z9Uwtf5rR1AZr4YzJC21agiZuKnKQxye1CcaNz4
11vv6itg8WJduQF5GwnuDuthayN4jLtxCKTfGJHYVGZk7z9ZbZm
11dc5LuiKbJSHG91S2Pg5qejgZu1umif5ZGCJpRwKhpNwnK6sKv
1128AUjnnXMy9k5DiqqKBgH6SVE95NN9K2y6SeYcACteRFkk8Lcf
112tGxAiaVX6yvix3TBeMN3KNk65vXAZwQmgYvqohooJ6HGDviX7
112trZsUe4NXbTXGqCoRCi5CGjQEnUKWC7hJ6b3TkQcXhe5WzZyM
112DRMRhPDDyM8int8HXTTY2kjtgisSNPxVJDfgBfcnXXAHXT4MT
11r1m4QNGfQivsiycUDnXMcupNziDRuWgwBmw9ozQwYn7a3cLJV
1125pGzDNEkcVrLu3uRB9TqQFsEXq7P3Bvgnkcp4eZwwhcDivzhU
11AwkdHQ99BWRHhPKGYCudzZ6EbjsqZm6dHum1KaE5Axvy9j4b3
1125EdPn1kEHZuwFZMUKNCtUeHqswN895UwomqxXAiJQtS3ChJHu

Discord Handle
No response

Reason(s)
Self witnessing cluster of the same brand - PantherX. RSSI and distance chart shows no relationship.

Brain disabled representative. Fuck your paralysis.
gharchive/issue
2022-07-16T23:00:03
2025-04-01T06:44:25.886862
{ "authors": [ "hexPi", "levsky1" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/6808", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1228282279
Lint all features

This temporarily includes the TMP PR solely for testing that it works. Do not merge as is.

Codecov Report
Merging #25 (cfed5b9) into main (31644d6) will not change coverage. The diff coverage is n/a.

@@           Coverage Diff           @@
##             main      #25   +/-   ##
=======================================
  Coverage   74.44%   74.44%
=======================================
  Files           8        8
  Lines         720      720
=======================================
  Hits          536      536
  Misses        184      184

Impacted Files          Coverage Δ
src/public_key.rs       75.79% <0.00%> (ø)

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 31644d6...cfed5b9.
gharchive/pull-request
2022-05-06T18:44:18
2025-04-01T06:44:25.898699
{ "authors": [ "JayKickliter", "codecov-commenter" ], "repo": "helium/helium-crypto-rs", "url": "https://github.com/helium/helium-crypto-rs/pull/25", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
565553289
Unable to locate the entity aware self attention code I am trying to implement just the entity aware self-attention module of the paper and I cannot locate it in the run_classifier.py code. I will be grateful if someone can point me at the self-attention implementation code so I can get my work started. Thanks in advance. @VishalPallagani please check the implementation in modeling.py for the entity aware self-attention model - https://github.com/helloeve/mre-in-one-pass/blob/master/modeling.py#L560-L764
gharchive/issue
2020-02-14T20:59:39
2025-04-01T06:44:25.924263
{ "authors": [ "VishalPallagani", "helloeve" ], "repo": "helloeve/mre-in-one-pass", "url": "https://github.com/helloeve/mre-in-one-pass/issues/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
382435640
Elasticsearch rollingUpdate

Is this a request for help?: I've installed the elasticSearch chart, and I'm trying to understand the updateStrategy, which is OnDelete in the values.yaml file. The README file does not describe anything about this or the hooks implementation. My current understanding is that with the default settings, if I change the image (in our case we would have changed a custom plugin) and "helm upgrade", nothing will happen and I would have to manually delete each pod in turn; but if I set the updateStrategy to RollingUpdate then everything will work automatically, evacuating the data from the node being removed, especially if I increase the number of replicas at the start.

As I try this procedure, I would be willing to improve the README file to describe how to use this feature, but first I would appreciate any feedback on what the expected behavior is, and why RollingUpdate is not the default or at least mentioned in a comment around updateStrategy.

Version of Helm and Kubernetes: Helm v.2.11.0 K8s: Client v1.12.0, Server v1.10.3-eks
Which chart: Elasticsearch 1.14
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:

Would love to know the results of this. RollingUpdate worked for me on 6.5 and 1.11 but there might be issues with earlier ES or K8s versions. I did have to increase the pod termination timeout to allow evacuation. I did rewrite the shutdown/startup hooks in Python because I added a StatefulSet per AZ for data nodes. To handle the lack of atomicity/locking in the list of excluded nodes, a node can only add or remove itself from the list, but if it wants to be excluded, it has to poll to ensure that it has remained on the exclusion list, and retry. I hit a bug in 1.11.5 where a StatefulSet pod that is restarted on the same node may hang on startup, but I could just delete the pod.
With a long termination timeout, there needed to be some tricks applied to the preshutdown hook to get it to exit when it was not going to evacuate anything, but not exit on data nodes if the master nodes were upgrading. For master nodes, I mounted a script and overrode the container entry with this to get master failover not to take forever. If you shut down ES on hardware, the hardware stays up and refuses connections to the port, but pods can exit instantly.

if [[ -z $NODE_MASTER || "$NODE_MASTER" = "true" ]] ; then
  # Run ES as a background task, and forward SIGTERM to it, then wait for it to exit
  trap 'kill $(jobs -p)' SIGTERM
  /usr/local/bin/docker-entrypoint.sh elasticsearch &
  wait
  # now keep the pod alive for 30s after ES dies so that we will refuse connections from
  # the new master rather than needing to time out
  sleep 30
else
  exec /usr/local/bin/docker-entrypoint.sh elasticsearch
fi
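The poll-and-retry trick described above for the non-atomic exclusion list can be sketched roughly as follows. This is only an illustration (the comment says the hooks were rewritten in Python, but this is not that code; the store here is an in-memory stand-in for the cluster-wide setting, not the Elasticsearch API):

```python
import time

def ensure_excluded(read_list, write_list, node, attempts=5, delay=0.0):
    """Add `node` to the exclusion list, then poll to confirm it survived:
    another node's concurrent read-modify-write may have dropped us, so retry."""
    for _ in range(attempts):
        current = read_list()
        if node not in current:
            write_list(current + [node])   # read-modify-write, not atomic
        time.sleep(delay)                  # give racing writers a chance to clobber us
        if node in read_list():            # poll: are we still on the list?
            return True
    return False

# in-memory stand-in for the shared cluster setting
store = {'exclude': []}
ok = ensure_excluded(lambda: list(store['exclude']),
                     lambda v: store.update(exclude=v),
                     'data-node-0')
```

Since a node only ever adds or removes itself, a lost update can always be repaired by repeating the add, which is what the polling loop does.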
gharchive/issue
2018-11-19T23:28:41
2025-04-01T06:44:25.953842
{ "authors": [ "DaveWHarvey", "dannyjeck" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/issues/9389", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
391953426
[stable/keycloak] fix readinessProbe when using empty basepath When the basepath is set to "" to host keycloak on keycloak.host/ instead of keycloak.host/auth the readinessProbe fails because the url is set to //realms/master Signed-off-by: Michael Dop michael.p.dop@gmail.com /assign @unguiculus /ok-to-test /lgtm
gharchive/pull-request
2018-12-18T01:01:31
2025-04-01T06:44:25.956029
{ "authors": [ "dojadop", "unguiculus" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/10089", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
455266859
[stable/graylog] Add provisioner Job option Signed-off-by: Sam Weston weston.sam@gmail.com What this PR does / why we need it: Adds a provisioner Job to run an arbitrary Bash script. I personally needed this to call the API to set up SSO automatically with the parameters I need for it to work with Nginx Ingress and oauth2-proxy. Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.] [x] DCO signed [x] Chart Version bumped [x] Variables are documented in the README.md [x] Title of the PR starts with chart name (e.g. [stable/chart]) /ok-to-test /lgtm
gharchive/pull-request
2019-06-12T15:03:00
2025-04-01T06:44:25.959577
{ "authors": [ "KongZ", "cablespaghetti" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/14756", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
469946538
[stable/grafana] apply comment on last PR

What this PR does / why we need it: applies a good remark made on my last PR https://github.com/helm/charts/pull/15149 by @qrilka and @Kieran-Brown; see also the previous discussion on PR #15544 (closed because of git trouble on the DCO).

Checklist
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
[x] Title of the PR starts with chart name (e.g. [stable/chart])

I still think that #15149 should be reverted and the connection between dashboardProviders/dashboards and datasources should just be better covered in documentation.

/assign @zanhsieh
/ok-to-test
/lgtm
gharchive/pull-request
2019-07-18T19:23:05
2025-04-01T06:44:25.964318
{ "authors": [ "davidkarlsen", "obeyler", "previ", "qrilka", "vsliouniaev" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/15702", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
501689323
[stable/efs-provisioner] Update apiVersions for Kubernetes 1.16 Is this a new chart No What this PR does / why we need it: This PR updates the Deployment template to use version apps/v1 instead of apps/v1beta2. The apps/v1beta2 version has been deprecated since Kubernetes 1.9, and is off by default in Kubernetes 1.16 (which prevents this chart from being installed). Which issue this PR fixes No issue Special notes for your reviewer: No changes to the Deployment are required beyond the version, apps/v1beta2 was promoted wholesale to apps/v1. I've tested that this chart installs successfully and can be upgraded from 0.7.0 in a test cluster. Checklist [x] DCO signed [x] Chart Version bumped [x] Title of the PR starts with chart name (e.g. [stable/chart]) /ok-to-test Merge conflicts fixed -- also CC @mariusv since I saw you were just added to the OWNERS file. /lgtm
gharchive/pull-request
2019-10-02T19:46:49
2025-04-01T06:44:25.968244
{ "authors": [ "drakedevel", "mariusv", "srueg" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/17621", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
511630094
[incubator/kafka] fixes #18254 Is this a new chart No What this PR does / why we need it: To allow incubator/kafka chart to be installed in k8s 1.16 Which issue this PR fixes fixes #18254 Special notes for your reviewer: n/a Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.] [x] DCO signed [x] Chart Version bumped [x] Variables are documented in the README.md [x] Title of the PR starts with chart name (e.g. [stable/mychartname]) /ok-to-test please resolve the conflicts /lgtm
gharchive/pull-request
2019-10-23T23:56:23
2025-04-01T06:44:25.972793
{ "authors": [ "jcobb-cig", "maorfr", "zanhsieh" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/18263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
522766555
[stable/jenkins] fix JAVA_OPTS when config auto-reload is enabled
Signed-off-by: Taehyun Kim kgyoo8232@gmail.com

Is this a new chart
NOTE: We're experiencing a high volume of PRs to this repo and reviews will be delayed. Please host your own chart repository and submit your repository to the Helm Hub instead of this repo to make them discoverable to the community. Here is how to submit new chart repositories to the Helm Hub.
No

What this PR does / why we need it:
This PR fixes a helm upgrade failure (helm 2.16.1):

UPGRADE FAILED
Error: YAML parse error on jenkins/templates/jenkins-master-deployment.yaml: error converting YAML to JSON: yaml: line 95: did not find expected key
Error: UPGRADE FAILED: YAML parse error on jenkins/templates/jenkins-master-deployment.yaml: error converting YAML to JSON: yaml: line 95: did not find expected key

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged)
fixes #

Special notes for your reviewer:
CHANGELOG.md is reformatted by prettier

Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
[x] Title of the PR starts with chart name (e.g. [stable/mychartname])

/assign @torstenwalter
/ok-to-test
/lgtm
gharchive/pull-request
2019-11-14T10:29:06
2025-04-01T06:44:25.979386
{ "authors": [ "kimxogus", "torstenwalter" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/18874", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
344954011
[stable/traefik] Update API versions, add Kubernetes endpoint config What this PR does / why we need it: Updates the API versions for RBAC and Deployments Adds the ability to specify the Kubernetes master API endpoint in the Traefik configmap. /ok-to-test Very clean! 👏 /lgtm
gharchive/pull-request
2018-07-26T18:22:29
2025-04-01T06:44:25.981467
{ "authors": [ "dtomcej", "dtzar" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/6846", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1054595739
Allow different output formats in helm dependency list

The current helm dependency list subcommand only allows configuring the standard output's column width. However, for automated processes it'd be nice to have a way to configure the output format, so a user could run helm dep list -o json (or yaml, as in other commands). It doesn't seem complicated to show a JSON or YAML view of the standard table:

[
  {
    "name": "mariadb",
    "version": "9.x.x",
    "repository": "https://charts.bitnami.com/bitnami",
    "status": "missing"
  }
]
---
- name: mariadb
  version: 9.x.x
  repository: https://charts.bitnami.com/bitnami
  status: missing

I believe this minor proposal doesn't require a HIP.

Output of helm version: version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.17.2"}
Output of kubectl version: Doesn't apply.
Cloud Provider/Platform (AKS, GKE, Minikube etc.): Doesn't apply.

@paleloser Feel free to push a PR to implement this.

Taking a look at the code there are a couple of points that aren't trivial to address:

If I got it correctly, cmd/helm/dependency_list.go contains the CLI layer of helm dep list. This file makes use of functions from cmd/helm/actions/dependency_list.go, where the internal dependency listing logic lives. However, unlike what I thought it'd be, the file that's actually rendering the results is cmd/helm/actions/dependency_list.go instead of cmd/helm/dependency_list.go. This differs from the approach followed in other commands (i.e. list or repo list), which to me is more natural (logic lives in the internal file, and representation lives in the CLI file). Should I move the representation to the CLI file?

There's a printMissing function which will always print dependencies listed in the charts/ directory, but not in the Chart.yaml. TBH I wasn't aware of this until I saw the code. The thing is, with JSON or YAML outputs these messages will make the output not parseable, losing the main point of this issue (making the output readable by machines). Is it OK if we call this function only in the standard output format?

Thanks in advance, please let me know if this kind of conversation should belong elsewhere.

I don't see a cmd/helm/actions directory. Are you referring to pkg/action?

Yes, sorry. I'll wait to get a draft before following up with the discussion. Thanks!
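To make the proposed behavior concrete, the requested output shapes can be illustrated like this. This is only a sketch of the desired formats, not Helm code (Python used for brevity; a real implementation would live in Go next to the existing -o handling):

```python
import json

deps = [{'name': 'mariadb', 'version': '9.x.x',
         'repository': 'https://charts.bitnami.com/bitnami', 'status': 'missing'}]

def render(deps, fmt):
    if fmt == 'json':
        return json.dumps(deps, indent=2)
    if fmt == 'yaml':
        # hand-rolled here to stay dependency-free; a YAML library would do the same
        lines = []
        for d in deps:
            lines.append('- name: ' + d['name'])
            for k in ('version', 'repository', 'status'):
                lines.append('  %s: %s' % (k, d[k]))
        return '\n'.join(lines)
    # default: the existing tabular output, one row per dependency
    return '\n'.join('%s\t%s\t%s\t%s' % (d['name'], d['version'],
                                         d['repository'], d['status'])
                     for d in deps)
```

The key property is that the json/yaml variants contain only the dependency records, so diagnostics like the printMissing output must go elsewhere (e.g. stderr, or only in table mode) to keep the result machine-parseable.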
gharchive/issue
2021-11-16T08:33:20
2025-04-01T06:44:25.990057
{ "authors": [ "hickeyma", "paleloser" ], "repo": "helm/helm", "url": "https://github.com/helm/helm/issues/10345", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1845542385
Work? sudo !! spam
gharchive/issue
2023-08-10T16:45:25
2025-04-01T06:44:25.991229
{ "authors": [ "gjenkins8", "johnnyjeannatasha" ], "repo": "helm/helm", "url": "https://github.com/helm/helm/issues/12295", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
663971021
Add Gissilabs repository

Add Gissilabs repo. Already included in Artifact Hub (https://artifacthub.io/packages/search?page=1&repo=gissilabs).

The Artifact Hub is another place you can list your repos. To do that, you log in (or create an account), create an organization, and list your repository. You can link it to a user or organization.
gharchive/pull-request
2020-07-22T18:43:39
2025-04-01T06:44:25.992799
{ "authors": [ "mattfarina", "sgissi" ], "repo": "helm/hub", "url": "https://github.com/helm/hub/pull/411", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
323782851
separator is comma, not semicolon

In the docs and in the code you use ";" as the separator. It must be ",".

Thanks! This should be fixed in expect-ct@0.1.1 and helmet@3.12.1.
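For reference, the fix amounts to joining the Expect-CT header directives with ", " rather than "; ". A tiny sketch (this is not the module's actual code, just an illustration of the header format):

```python
def expect_ct_header(max_age, enforce=False, report_uri=None):
    """Build an Expect-CT header value with comma-separated directives."""
    directives = []
    if enforce:
        directives.append('enforce')
    directives.append('max-age=%d' % max_age)
    if report_uri:
        directives.append('report-uri="%s"' % report_uri)
    return ', '.join(directives)   # comma, not semicolon
```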
gharchive/issue
2018-05-16T20:54:04
2025-04-01T06:44:25.995218
{ "authors": [ "EvanHahn", "bjacke" ], "repo": "helmetjs/expect-ct", "url": "https://github.com/helmetjs/expect-ct/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
363769357
Uploaded images do not always work in email header and footer

If you are using the local file store it will not include the images in the HTML email header or footer by default. You need to manually add your full URL to the generated image tags. This problem does not exist with cloudinary, s3, etc...

Hey @scott Regarding the above ticket, I don't know how to reproduce this error on my own machine. Can you help explain it a little? Thanks in advance.
gharchive/issue
2018-09-25T21:15:14
2025-04-01T06:44:26.007675
{ "authors": [ "dhoangk07", "scott" ], "repo": "helpyio/helpy", "url": "https://github.com/helpyio/helpy/issues/918", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1326414265
improved definition of pointed functor After reading https://stackoverflow.com/questions/39179830/how-to-use-pointed-functor-properly I thought we should make the definition less wrong. I think when I wrote the original I was looking at the "Pointed" typeclass in Haskell, which doesn't have a functor requirement, so I didn't think that it was necessary. And in a way it's not: a type is pointed if any value can be lifted into it, but intuitively a type is only a pointed functor if it's both pointed and a functor.
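The distinction discussed here can be sketched in Python (the names `Box`, `of`, and `map` are illustrative, not from the jargon repo): `of` alone makes a type pointed, and adding a lawful `map` makes it a pointed functor.

```python
class Box:
    def __init__(self, value):
        self.value = value

    @classmethod
    def of(cls, value):        # "pointed": lift any value into the type
        return cls(value)

    def map(self, f):          # "functor": apply a function inside the type
        return Box.of(f(self.value))

# Pointed + functor together: lift a value, then map over it.
assert Box.of(3).map(lambda x: x + 1).value == 4
```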
gharchive/pull-request
2022-08-02T21:23:56
2025-04-01T06:44:26.012820
{ "authors": [ "jethrolarson" ], "repo": "hemanth/functional-programming-jargon", "url": "https://github.com/hemanth/functional-programming-jargon/pull/229", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1916149919
Issues with the moral scenarios task I found some issues with the moral scenarios task. My analysis indicates that it isn't a good measure of the moral judgement of a model because of the complexity introduced by the task format. Results are summarized here https://www.lesswrong.com/posts/XqzWgkP3xekfdh8pa/mmlu-s-moral-scenarios-benchmark-doesn-t-measure-what-you . Please let me know if you think there are issues with my conclusions. Given that other folks have built off of using moral scenarios as a metric recently https://arxiv.org/abs/2306.14308, I am trying to let people know about these findings so that at least there is some caution on using it as is going forward. If people are wanting a more thorough evaluation, they can look at the ETHICS dataset. The issue is not whether it is thorough or not. Rather, the problem lies in the fact that an increase in the score is unlikely to indicate an improvement in moral judgment.
In other words, using moral scenarios as a metric to optimize a model or prompt for better moral judgment may result in individuals making incorrect assessments and potentially optimizing for the wrong objective. Does that make sense? I am not expecting that any changes are going to be made to the implementation, but rather that people use different evaluations for the purpose of evaluating moral judgement. Feel free to close this issue as you see fit. In reply to "using moral scenarios as a metric to optimize a model": But they're optimizing 57 things simultaneously and not picking models based on one of the MMLU tasks. If they care a very large amount about ethics then they can do a fuller evaluation.
gharchive/issue
2023-09-27T18:59:24
2025-04-01T06:44:26.026469
{ "authors": [ "c1505", "hendrycks" ], "repo": "hendrycks/test", "url": "https://github.com/hendrycks/test/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
268355547
fix(_find_file): avoid stack overflow on windows Stop trying to find files when the root path of the drive has been reached. Fixes #52. @henriquebastos it's probably good to configure and enable https://www.appveyor.com/ to run the tests on Windows. Why wasn't this merged yet?
gharchive/pull-request
2017-10-25T11:04:37
2025-04-01T06:44:26.037247
{ "authors": [ "0xC4N1", "luzfcb", "marius-stanescu" ], "repo": "henriquebastos/python-decouple", "url": "https://github.com/henriquebastos/python-decouple/pull/53", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2434164719
find node in tree with a given value fixes #7. Traverse the tree and compare the value of the node with the value we're searching for, going left or right; if it matches, then we've found the node and we return it.
tests setup:
// index.mjs
import createTree from "./createTree.mjs";

const testArray = [1, 7, 4, 23, 8, 9, 4, 3, 5, 7, 9, 67, 6345, 324];
const tree = createTree(testArray);
tree.buildTree();
tree.prettyPrint();
console.log(`
FINDING...
`);
tree.find(-6969); // expected: not found
tree.find(67); // expected: node object with .data=67
output 1: searching for -6969
output 2: searching for 67
I will keep the console.log statements for demonstration purposes.
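The lookup the PR describes — walk down from the root, compare the target with each node's value, and branch left or right until a match or a dead end — can be sketched in Python (the node shape with `.data`, `.left`, `.right` is an assumption based on the test output):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def find(root, value):
    node = root
    while node is not None:
        if value == node.data:
            return node            # found: return the node itself
        # BST property: smaller values live left, larger values right
        node = node.left if value < node.data else node.right
    return None                    # "not found"

root = Node(8, Node(4, Node(1), Node(7)), Node(23, Node(9)))
assert find(root, 7).data == 7
assert find(root, -6969) is None
```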
gharchive/pull-request
2024-07-28T23:40:11
2025-04-01T06:44:26.040530
{ "authors": [ "henrylin03" ], "repo": "henrylin03/odin-binary-search-tree", "url": "https://github.com/henrylin03/odin-binary-search-tree/pull/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
165717961
inSmoothAppBarLayout
<me.henrytao.smoothappbarlayout.SmoothAppBarLayout
    android:id="@+id/smooth_app_bar_layout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical"
    android:background="@color/text_white"
    android:minHeight="35dp">
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical"
        app:layout_scrollFlags="scroll|enterAlwaysCollapsed">
        <include layout="@layout/item_homenotice"/>
        <android.support.v7.widget.RecyclerView
            android:id="@+id/recyclerview_horizontal"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_gravity="bottom"
            android:background="#FF0000"
            android:scrollbars="none"/>
    </LinearLayout>
    <android.support.design.widget.TabLayout
        android:id="@+id/tabs"
        android:layout_width="match_parent"
        android:layout_height="35dp"
        app:tabSelectedTextColor="@color/text_maingray"
        app:tabTextColor="@color/text_maingray"/>
</me.henrytao.smoothappbarlayout.SmoothAppBarLayout>
A RecyclerView inside SmoothAppBarLayout cannot work. Hi @dawan6756 Of course it won't work. Why do you want to put a RecyclerView inside an AppBarLayout? Please let me know and reopen this issue if you need further assistance. Thanks.
gharchive/issue
2016-07-15T06:06:18
2025-04-01T06:44:26.054684
{ "authors": [ "dawan6756", "henrytao-me" ], "repo": "henrytao-me/smooth-app-bar-layout", "url": "https://github.com/henrytao-me/smooth-app-bar-layout/issues/120", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2042382869
🛑 hentaiOS Mail is down In 9aaec61, hentaiOS Mail (https://mail.hentaios.com) was down: HTTP code: 403 Response time: 403 ms Resolved: hentaiOS Mail is back up in 2ae0f48 after 13 minutes.
gharchive/issue
2023-12-14T19:57:42
2025-04-01T06:44:26.059485
{ "authors": [ "raphielscape" ], "repo": "hentaiOS-Infrastructure/infra-status-upptime", "url": "https://github.com/hentaiOS-Infrastructure/infra-status-upptime/issues/1079", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
568693993
Let user hide unwanted headers On https://github.com/cli/cli/pull/306#pullrequestreview-361869659 @mislav asked me for a way to avoid certain request/response headers being printed; perhaps accept a func that takes a header name and returns a boolean that indicates whether to print or not? This might be achieved with a function similar to SetFilter and SetBodyFilter. It might take a slice and store the data in a map[string]struct{} to be checked when printing headers. Probably can be called FilterHeader or FilterHeaders. 🎉
gharchive/issue
2020-02-21T02:16:27
2025-04-01T06:44:26.061544
{ "authors": [ "henvic", "mislav" ], "repo": "henvic/httpretty", "url": "https://github.com/henvic/httpretty/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
360715454
internal/contour: CDS Load Balancer Strategy field should always have a value At the moment the type.googleapis.com/envoy.api.v2.Cluster.lb_strategy field is allowed to be blank if no valid strategy has been provided. We should ensure that this value is always present using a default that Contour controls. Is this already defaulted in ? Thanks for confirming. Closing.
gharchive/issue
2018-09-17T03:50:15
2025-04-01T06:44:26.070743
{ "authors": [ "davecheney", "pickledrick" ], "repo": "heptio/contour", "url": "https://github.com/heptio/contour/issues/679", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
478718154
internal/dag: Add config file item allowPermitInsecure Fixes #864 by adding a config item allowPermitInsecure which, if set to false, disables the ability to use permitInsecure from an IngressRoute. Signed-off-by: Steve Sloka slokas@vmware.com Good idea on the name, I didn't really like the one that I had originally. I got it all refactored and updated a bit more. Closed c5ac052
gharchive/pull-request
2019-08-08T22:39:40
2025-04-01T06:44:26.072775
{ "authors": [ "davecheney", "stevesloka" ], "repo": "heptio/contour", "url": "https://github.com/heptio/contour/pull/1303", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
300288764
Implement sonobuoy delete subcommand This seems to have fallen off the radar amidst all the other work. It should try to do namespace- or label-based deletion. The weird part here is making sure the other namespaces (e2e etc.) are all cleaned up if there was a botched run. Unfortunately the namespaces for all the other e2e tests are randomly generated per-test, so getting rid of them is tricky. They should be prefixed e2e-. The things that aren't are bugs imo. Done by #295
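The cleanup discussed in this thread — the e2e namespaces are randomly generated but share the e2e- prefix — amounts to a simple prefix selection, sketched here in Python (the namespace names below are invented for illustration):

```python
def namespaces_to_delete(namespaces, prefix="e2e-"):
    # Select the randomly named e2e test namespaces by their shared prefix.
    return [ns for ns in namespaces if ns.startswith(prefix)]

names = ["kube-system", "e2e-tests-svc-7kq2", "sonobuoy", "e2e-tests-dns-x9p1"]
assert namespaces_to_delete(names) == ["e2e-tests-svc-7kq2", "e2e-tests-dns-x9p1"]
```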
gharchive/issue
2018-02-26T15:46:23
2025-04-01T06:44:26.074721
{ "authors": [ "liztio", "timothysc" ], "repo": "heptio/sonobuoy", "url": "https://github.com/heptio/sonobuoy/issues/277", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
423711402
Improvement: regex faster I made it compile only once to lower the initialization cost. Thanks @pei0804! This LGTM. If you're interested in further contributions on Velero, anything labeled "Good first issue" or "Help wanted" would be a good place to continue!
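The pattern behind this PR — compile the regular expression once instead of on every call, so repeated calls skip the recompilation cost — looks like this in Python (the velero change itself is Go; the pattern and function here are illustrative):

```python
import re

_WORD = re.compile(r"[A-Za-z]+")          # compiled once at module load, reused below

def first_word(text):
    # Reuses the precompiled pattern on every call instead of recompiling it.
    m = _WORD.search(text)
    return m.group(0) if m else None

assert first_word("improvement: regex faster") == "improvement"
```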
gharchive/pull-request
2019-03-21T12:45:36
2025-04-01T06:44:26.075847
{ "authors": [ "pei0804", "skriss" ], "repo": "heptio/velero", "url": "https://github.com/heptio/velero/pull/1306", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
298058187
Stylistic changes Some small changes to the UI and text. This project has a lot of promise but there are some strange quirks to iron out, namely the interaction between hiding tabs manually and them fading out. Also, I feel that clicking next to the existing tabs should open a new tab. Anyway, going well and good luck! Awesome! The changes look really good! I will definitely look into the tabs-interaction.
gharchive/pull-request
2018-02-18T04:31:58
2025-04-01T06:44:26.163368
{ "authors": [ "herber", "retroverse" ], "repo": "herber/cargo", "url": "https://github.com/herber/cargo/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }