| added (string; 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us]; 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string; lengths 4 to 10) | metadata (dict) | source (string; 2 classes) | text (string; lengths 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:17.564105
| 2017-01-24T12:49:28
|
202809391
|
{
"authors": [
"kirkbyo",
"ocnur"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7567",
"repo": "kirkbyo/Dropper",
"url": "https://github.com/kirkbyo/Dropper/issues/9"
}
|
gharchive/issue
|
DropperSelectedRow never called
I followed the instructions to the letter and it didn't work.
Then I tried some other variations and they didn't work either.
No need for code, because I'm doing exactly what you wrote.
The framework looks great! I really hope to use it.
Thanks :)
Hey! I created a new project and followed my instructions again, but I wasn't able to replicate the issue you are having.
Here is my View controller:
class ViewController: UIViewController {
    let dropper = Dropper(width: 75, height: 200)
    @IBOutlet weak var dropButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func buttonSelected() {
        if dropper.status == .hidden {
            dropper.items = ["Item 1", "Item 2", "Item 3", "Item 4"] // Items displayed
            dropper.theme = Dropper.Themes.white
            dropper.delegate = self
            dropper.cornerRadius = 3
            dropper.showWithAnimation(0.15, options: Dropper.Alignment.center, button: dropButton)
        } else {
            dropper.hideWithAnimation(0.1)
        }
    }
}

extension ViewController: DropperDelegate {
    func DropperSelectedRow(_ path: IndexPath, contents: String) {
        print(path)
        print(contents)
    }
}
|
2025-04-01T06:39:17.569375
| 2022-02-08T19:38:21
|
1127678924
|
{
"authors": [
"JiveDig",
"contactjavas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7568",
"repo": "kirki-framework/kirki",
"url": "https://github.com/kirki-framework/kirki/issues/2453"
}
|
gharchive/issue
|
Transport auto doesn't run value through sanitize_callback
Using 4.0.20
I have a text field where users can put in a number or a CSS value like 30px, 2rem, 2em, etc. If an integer is added, my sanitize_callback adds px at the end. Using transport => auto just uses the integer value in the CSS. This may be a product of how transport auto works in JS only, but I wanted to confirm.
Hi @JiveDig, thanks for confirming this.
From what I understand, the postMessage CSS output will output the value based on:
- how the control handles the value and then displays it in the markup, via the content_template method in PHP or via the JS part of the control
- what the control prints into the JS object (via the to_json() method in PHP)
- how the JS part of the control handles that object
- and how the JS part of the control handles the value returned from the customizer
In your use-case, let's say you enter 11 into the text input. Your custom sanitize_callback will add px to it before it's saved to the database. But the control still sees the value as 11 instead of 11px. In this case, you would need custom JS to handle this in order to make the CSS output generated by postMessage work as expected. As you might already know, the filter name is kirkiPostMessageStylesOutput.
You might also already have the script for that, so this is just an example of how to do it:
(function () {
    /**
     * Check if the provided value is numeric.
     *
     * @see https://stackoverflow.com/questions/175739/built-in-way-in-javascript-to-check-if-a-string-is-a-valid-number#answer-175787
     *
     * @param {string|number} str The provided value.
     * @return bool
     */
    function isNumeric(str) {
        // A number is numeric.
        if ("number" === typeof str) return true;
        // We only process strings.
        if ("string" !== typeof str) return false;
        // Use type coercion to parse the entirety of the string (`parseFloat` alone does not do this) and ensure strings of whitespace fail.
        return !isNaN(str) && !isNaN(parseFloat(str));
    }

    /**
     * Function to hook into the `kirkiPostMessageStylesOutput` filter.
     *
     * @param {string} styles The styles to be filtered.
     * @param {string|Object|int} value The control's value.
     * @param {Object} output The control's output argument.
     * @param {string} controlType The control type.
     *
     * @return {string} The filtered styles.
     */
    function stylesOutput(styles, value, output, controlType) {
        // These checks are just an example :).
        if ("kirki-generic" !== controlType) return styles;
        if (!isNumeric(value)) return styles;

        styles += output.element + "{" + output.property + ": " + value + "px;}";
        return styles;
    }

    // Hook the function to the `kirkiPostMessageStylesOutput` filter.
    wp.hooks.addFilter("kirkiPostMessageStylesOutput", "kirki", stylesOutput);
})();
We also use custom JS in some controls, such as control-react-colorful, field-typography, and field-dimensions, to make the postMessage CSS output produce the styles in the expected format. There, people can check where to enqueue the scripts and what the dependencies are.
|
2025-04-01T06:39:17.576686
| 2024-04-02T20:22:17
|
2221379046
|
{
"authors": [
"kishiel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7569",
"repo": "kishiel/eks-ipv6",
"url": "https://github.com/kishiel/eks-ipv6/pull/2"
}
|
gharchive/pull-request
|
feat(s3): autoDeleteObjects log group allows retention period and removal policy definitions
Issue #24815
Closes #24815
Reason for this change
S3 bucket autoDeleteObjects leaves behind a log group for each bucket that uses the feature. This results in a lot of cruft, especially in test accounts, and the log group's lifecycle should be configurable by the bucket owner. The account limit for log groups is 10,000 and I've got test accounts that have hit this limit several times.
Description of changes
Creates a log group rather than relying on the underlying custom-resource to create it automatically (a side effect of using CfnResource for AWS::Lambda::Function)
Sets a default retention period of 90 days on the log group (I picked a number)
Sets a default removal policy of delete on the log group (I don't think anyone wants these after they delete a bucket)
Denies the custom-resource Lambda role permission to create a log group (prevents log group recreation on delete)
Adds the log group name as an optional parameter to the interface of the custom-resource. This is plumbed into the loggingConfig and results in an undefined entry if not provided. (A sketch of the new defaults follows below.)
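For context, here is a minimal sketch of those two defaults expressed with the CDK Python bindings; the construct IDs are illustrative and not the ones used inside the custom resource:
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_logs as logs

app = App()
stack = Stack(app, "ExampleStack")

# An explicitly managed log group with the defaults described above:
# 90-day retention, deleted together with the stack.
logs.LogGroup(
    stack,
    "AutoDeleteObjectsLogGroup",  # illustrative ID, not the PR's internal name
    retention=logs.RetentionDays.THREE_MONTHS,
    removal_policy=RemovalPolicy.DESTROY,
)

app.synth()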
Description of how you validated changes
Unit tests in addition to some simple functional tests.
When making a bucket with autoDeleteObjects enabled I wanted to confirm that the log group for the lambda was, in fact, gone after I deleted the stack. This is how I found that I needed to modify the permission of the Lambda role to deny log group creation.
I also confirmed that the custom-resources which do not provide a log group name still produce a log group and logs within.
Also, over 100 snapshot tests (RIP me).
Checklist
[x] My code adheres to the CONTRIBUTING GUIDE and DESIGN GUIDELINES
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license
There are about 10 failing snapshot tests that I'm unable to resolve on my own, and I could use some help in running them. I believe a few of them fail because I'm using an internal AWS account, so hopefully they just need to be run from someone's not-quite-so-special account.
Oh, I definitely did something wrong opening this PR. I have no idea what, though; changing it to a draft.
Opened this backwards. Closing.
Actual PR: https://github.com/aws/aws-cdk/pull/29698
|
2025-04-01T06:39:17.579297
| 2022-05-20T16:47:43
|
1243399711
|
{
"authors": [
"git-bruh",
"nvidiaLinuxUser"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7570",
"repo": "kiss-community/grepo",
"url": "https://github.com/kiss-community/grepo/issues/31"
}
|
gharchive/issue
|
Open source NVIDIA drivers
It would be nice to update the repository and add the nvidia-open package 😃
(sway with native wlroots works for me without artifacts)
The open source kernel drivers only support GPUs from the 3xxx series onwards and I only have a 1660, so I won't be able to test it.
Nvm, it does support the 16xx and 20xx series as well. Fixed, and added an option to use the proprietary module.
Got it, let's hope for better support for the open source driver.
|
2025-04-01T06:39:17.581587
| 2018-10-09T11:36:38
|
368166810
|
{
"authors": [
"avdgrinten",
"eug93"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7571",
"repo": "kit-parco/networkit",
"url": "https://github.com/kit-parco/networkit/pull/243"
}
|
gharchive/pull-request
|
Removed deprecated class PrioQueueForInts and its tests
The PrioQueueForInts class is deprecated and there is no NetworKit class using it.
This is uncontroversial. I will just merge it.
|
2025-04-01T06:39:17.587564
| 2017-09-14T18:29:41
|
257814293
|
{
"authors": [
"10bass",
"adamwathan",
"dakira"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7572",
"repo": "kitetail/zttp",
"url": "https://github.com/kitetail/zttp/issues/46"
}
|
gharchive/issue
|
runbeforeSendingCallbacks errors on undefined each
This may be entirely due to some oddity of the project where I'm trying to switch to Zttp (Lumen v5.1.7, due to requirements back when it was built and a lack of time/resources to rework it in 5.5 or full Laravel), but the changes to runBeforeSendingCallbacks in commit 7b6dddc8c824671a460de8a093acc92a40f4ffe8 throw an error for me on any request, including a simple get with no parameters:
$response = Zttp::get('https://github.com');
PHP error: Undefined property: Illuminate\Support\Collection::$each in [...]/vendor/kitetail/zttp/src/Zttp.php on line 178
Using kitetail/zttp v0.3.0 and illuminate/support v5.1.41.
Illuminate\Support\Collection definitely has the each method, but I'm probably in over my head as to what I'm doing wrong. If I switch back to array_reduce and [] instead of collect(), there are no errors and the call behaves as expected. Any idea what I'm running up against? Again, I fully expect it to be something on my end (and I'm not expecting help with that) with the older Lumen/Illuminate versions in play rather than a Zttp issue, but I figured I'd check.
I bet this is because we use the collect helper function and Lumen already has it defined, so we are getting an old version of the Collection class. Not at my computer right now, but I'll double-check this when I get home 👍🏻
So the issue is unfortunately somewhat complex, I've opened an issue on the package we use with more details:
https://github.com/tightenco/collect/issues/54
I'm going to rename this issue to match the root cause and will think if there's a good interim solution.
I ran into this on a pretty blank API project that had laravel/tinker in its dependencies. Tinker apparently installed illuminate/support v5.2.x when depending on the latest tinker.
|
2025-04-01T06:39:17.663059
| 2015-01-21T23:44:09
|
55094535
|
{
"authors": [
"akshayaurora",
"hey-sancho",
"macropas",
"matham"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7575",
"repo": "kivy/kivy",
"url": "https://github.com/kivy/kivy/issues/2876"
}
|
gharchive/issue
|
Error after inserting non-ASCII char from Clipboard to TextInput widget
Xubuntu 14.10, Python 2.7.8.
Inserting non-ASCII char "Д" by Control+V into the TextInput widget
[INFO ] [Logger ] Record log in /root/.kivy/logs/kivy_15-01-22_0.txt
[INFO ] Kivy v1.9.0-dev
[INFO ] [Python ] v2.7.8 (default, Oct 20 2014, 15:05:19)
[GCC 4.9.1]
[INFO ] [Factory ] 173 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pygame, img_pil (img_ffpyplayer ignored)
[INFO ] [Window ] Provider: pygame(['window_egl_rpi'] ignored)
[INFO ] [GL ] OpenGL version <3.0 Mesa 10.3.2>
[INFO ] [GL ] OpenGL vendor <Intel Open Source Technology Center>
[INFO ] [GL ] OpenGL renderer <Mesa DRI Intel(R) Sandybridge Mobile >
[INFO ] [GL ] OpenGL parsed version: 3, 0
[INFO ] [GL ] Shading version <1.30>
[INFO ] [GL ] Texture max size <8192>
[INFO ] [GL ] Texture max units <16>
[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
[INFO ] [Text ] Provider: pygame
[INFO ] [Video ] Provider: pygst
[INFO ] [OSC ] using <multiprocessing> for socket
[INFO ] [ProbeSysfs ] device match: /dev/input/event5
[INFO ] [MTD ] Read event from </dev/input/event5>
[INFO ] [Base ] Start application main loop
[INFO ] [MTD ] </dev/input/event5> range position X is 1212 - 5756
[INFO ] [MTD ] </dev/input/event5> range position Y is 996 - 4876
[INFO ] [MTD ] </dev/input/event5> range touch major is 0 - 0
[INFO ] [MTD ] </dev/input/event5> range touch minor is 0 - 0
[INFO ] [MTD ] </dev/input/event5> range pressure is 0 - 255
[INFO ] [MTD ] </dev/input/event5> axes invertion: X is 0, Y is 0
[INFO ] [GL ] NPOT texture support is available
[INFO ] [Clipboard ] Provider: pygame(['clipboard_dbusklipper'] ignored)
[INFO ] [Base ] Leaving application in progress...
Traceback (most recent call last):
File "/home/johndoe/kivy-master/examples/RST_Editor/main.py", line 61, in <module>
Editor().run()
File "/usr/lib/python2.7/dist-packages/kivy/app.py", line 824, in run
runTouchApp()
File "/usr/lib/python2.7/dist-packages/kivy/base.py", line 484, in runTouchApp
EventLoop.window.mainloop()
File "/usr/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py", line 381, in mainloop
self._mainloop()
File "/usr/lib/python2.7/dist-packages/kivy/core/window/window_pygame.py", line 348, in _mainloop
self.modifiers):
File "_event.pyx", line 697, in kivy._event.EventDispatcher.dispatch (kivy/_event.c:6788)
File "_event.pyx", line 1159, in kivy._event.EventObservers.dispatch (kivy/_event.c:11470)
File "_event.pyx", line 1083, in kivy._event.EventObservers._dispatch (kivy/_event.c:11066)
File "/usr/lib/python2.7/dist-packages/kivy/core/window/__init__.py", line 149, in _on_window_key_down
return self.dispatch('on_key_down', keycode, text, modifiers)
File "_event.pyx", line 697, in kivy._event.EventDispatcher.dispatch (kivy/_event.c:6788)
File "_event.pyx", line 1159, in kivy._event.EventObservers.dispatch (kivy/_event.c:11470)
File "_event.pyx", line 1083, in kivy._event.EventObservers._dispatch (kivy/_event.c:11066)
File "/usr/lib/python2.7/dist-packages/kivy/uix/textinput.py", line 2017, in keyboard_on_key_down
self.paste()
File "/usr/lib/python2.7/dist-packages/kivy/uix/textinput.py", line 1388, in paste
data = Clipboard.paste()
File "/usr/lib/python2.7/dist-packages/kivy/core/clipboard/__init__.py", line 88, in paste
return self._paste()
File "/usr/lib/python2.7/dist-packages/kivy/core/clipboard/__init__.py", line 106, in _paste
data = self.get(mime_type)
File "/usr/lib/python2.7/dist-packages/kivy/core/clipboard/clipboard_pygame.py", line 35, in get
text = text.encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
I have the same error.
It looks like pygame.scrap.get() returns a byte string, while the following text.encode('utf-8') only works correctly with a unicode string.
The following patch fixed the issue for me:
diff --git a/kivy/core/clipboard/clipboard_pygame.py b/kivy/core/clipboard/clipboard_pygame.py
index 6f053ec..8dfbac3 100644
--- a/kivy/core/clipboard/clipboard_pygame.py
+++ b/kivy/core/clipboard/clipboard_pygame.py
@@ -31,8 +31,8 @@ class ClipboardPygame(ClipboardBase):
def get(self, mimetype='text/plain'):
self.init()
text = pygame.scrap.get(mimetype)
- if PY2:
- text = text.encode('utf-8')
+ # if PY2:
+ # text = text.encode('utf-8')
return text
def put(self, data, mimetype='text/plain'):
@hey-sancho
It's strange, but there is no answer from the Kivy developers...
I cannot seem to reproduce it on windows with master. Perhaps if it's already bytes we should not encode it again.
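A minimal sketch of that suggestion in plain Python (the helper name here is made up; the actual fix is in the commit referenced below): decode when the clipboard hands back bytes instead of unconditionally encoding.
def normalize_clipboard_text(raw):
    # pygame.scrap.get() can return a byte string; calling .encode('utf-8')
    # on bytes makes Python 2 first decode them as ASCII, which is exactly
    # what raised the UnicodeDecodeError above for non-ASCII input.
    if isinstance(raw, bytes):
        return raw.decode('utf-8')
    return raw

# e.g. normalize_clipboard_text('Д'.encode('utf-8')) == 'Д'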
closed via 33bfc526add488571b2dadea7771579a9cbb9042
|
2025-04-01T06:39:17.747161
| 2015-08-13T15:58:48
|
100805627
|
{
"authors": [
"Archanciel",
"Julian-O",
"Kazun3500",
"Kerang",
"KeyWeeUsr",
"Naveenkariyappa",
"Walpa",
"Zen-CODE",
"akshayaurora",
"amitmarathe",
"brentpicasso",
"codypace68",
"dessant",
"dolang",
"encloinc",
"gutcheschiro",
"jeeger",
"jegger",
"jimanvlad",
"lrq3000",
"lupin3rd",
"matham",
"maxcrow",
"mborus",
"schitzN",
"sdementen",
"steinnes",
"sushihuye",
"tito",
"tshirtman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7576",
"repo": "kivy/kivy",
"url": "https://github.com/kivy/kivy/issues/3576"
}
|
gharchive/issue
|
Multisamples causing GL error
I created 2 .exe packages for Windows according to the documentation, one with the stable and one with the dev version of Kivy. And I have a problem: the stable version detects OpenGL version 2.1, and the dev version detects 1.1.
log files
stable
[INFO ] Logger: Record log in C:\Users\maxim_000\.kivy\logs\kivy_15-08-13_6.txt
[WARNING ] [Config ] Upgrading configuration in progress.
[WARNING ] [Config ] Older configuration version detected (14 instead of 13)
[INFO ] Kivy: v1.9.0
[INFO ] Python: v2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)]
[INFO ] Factory: 173 symbols loaded
[INFO ] Image: Providers: img_tex, img_dds, img_gif, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO ] Text: Provider: sdl2
[INFO ] OSC: using <thread> for socket
[INFO ] Window: Provider: sdl2
[INFO ] GL: GLEW initialization succeeded
[INFO ] GL: OpenGL version <2.1.0 - Build <IP_ADDRESS>2>
[INFO ] GL: OpenGL vendor <Intel>
[INFO ] GL: OpenGL renderer <Mobile Intel(R) 4 Series Express Chipset Family>
[INFO ] GL: OpenGL parsed version: 2, 1
[INFO ] GL: Shading version <1.20 - Intel Build <IP_ADDRESS>2>
[INFO ] GL: Texture max size <4096>
[INFO ] GL: Texture max units <16>
[INFO ] Shader: fragment shader: <No errors.>
[INFO ] Shader: vertex shader: <No errors.>
[INFO ] Shader: program: <No errors.>
[INFO ] Window: auto add sdl2 input provider
[INFO ] Window: virtual keyboard not allowed, single mode, not docked
[INFO ] GL: NPOT texture support is available
[INFO ] Base: Start application main loop
[INFO ] Base: Leaving application in progress...
dev
[INFO ] Logger: Record log in C:\Users\maxim_000\.kivy\logs\kivy_15-08-13_5.txt
[INFO ] Kivy: v1.9.1-dev
[INFO ] Python: v2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)]
[INFO ] Factory: 177 symbols loaded
[INFO ] Image: Providers: img_tex, img_dds, img_gif, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO ] Text: Provider: sdl2
[INFO ] OSC: using <thread> for socket
[INFO ] Window: Provider: sdl2
[INFO ] GL: GLEW initialization succeeded
[INFO ] GL: OpenGL version <1.1.0>
[INFO ] GL: OpenGL vendor <Microsoft Corporation>
[INFO ] GL: OpenGL renderer <GDI Generic>
[INFO ] GL: OpenGL parsed version: 1, 1
[CRITICAL ] GL: Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.1
Version: 1.1.0
Vendor: Microsoft Corporation
Renderer: GDI Generic
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
The application will leave now.
I have the same issue with 1.9.1-dev:
[INFO ] [Logger ] Record log in C:\Documents and Settings\gfj138\.kivy\logs\kivy_15-08-25_22.txt
[INFO ] [Kivy ] v1.9.1-dev
[INFO ] [Python ] v2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:02:00) [MSC v.1500 32 bit (Intel)]
[INFO ] [Factory ] 177 symbols loaded
[DEBUG ] [Cache ] register <kv.lang> with limit=None, timeout=None
[DEBUG ] [Cache ] register <kv.image> with limit=None, timeout=60
[DEBUG ] [Cache ] register <kv.atlas> with limit=None, timeout=None
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_sdl2, img_pil (img_ffpyplayer ignored)
[DEBUG ] [Cache ] register <kv.texture> with limit=1000, timeout=60
[DEBUG ] [Cache ] register <kv.shader> with limit=1000, timeout=3600
[INFO ] [Text ] Provider: sdl2
[INFO ] [OSC ] using <thread> for socket
[WARNING ] [Input ] WM_Touch/WM_Pen not supported by your version of Windows
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] GLEW initialization succeeded
GL: glGenFramebuffers is NULL, try to detect an extension
GL: available extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
GL: No framebuffers extension is supported
GL: Any call to Fbo will crash !
[INFO ] [GL ] OpenGL version <1.1.0>
[INFO ] [GL ] OpenGL vendor <Microsoft Corporation>
[INFO ] [GL ] OpenGL renderer <GDI Generic>
[INFO ] [GL ] OpenGL parsed version: 1, 1
[CRITICAL ] [GL ] Minimum required OpenGL version (2.0) NOT found!
and 1.9.0
[INFO ] [Kivy ] v1.9.0
[INFO ] [Python ] v2.7.10 |Continuum Analytics, Inc.| (default, May 28 2015, 17:02:00) [MSC v.1500 32 bit (Intel)]
[INFO ] [Factory ] 173 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO ] [Text ] Provider: sdl2
[INFO ] [OSC ] using <thread> for socket
[WARNING ] [Input ] WM_Touch/WM_Pen not supported by your version of Windows
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] OpenGL version <3.1.0 - Build <IP_ADDRESS>98>
[INFO ] [GL ] OpenGL vendor <Intel>
[INFO ] [GL ] OpenGL renderer <Intel(R) HD Graphics 4000>
[INFO ] [GL ] OpenGL parsed version: 3, 1
[INFO ] [GL ] Shading version <1.40 - Intel Build <IP_ADDRESS>98>
[INFO ] [GL ] Texture max size <8192>
[INFO ] [GL ] Texture max units <16>
[INFO ] [Shader ] fragment shader: <No errors.>
[INFO ] [Shader ] vertex shader: <No errors.>
[INFO ] [Shader ] program: <No errors.>
[INFO ] [Window ] auto add sdl2 input provider
libpng warning: iCCP: known incorrect sRGB profile
[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
As I understand it, kivy.graphics is responsible for detecting the OpenGL version. Going through the commit history here https://github.com/kivy/kivy/commits/master/kivy/graphics?page=1, I found that 3eb5b4844105fe83349a59612f8850e73622c26c works, and the error has existed since f15283ffd99b226e6cfe9c99d18d1010176aea21. So either a commit introduced this error, or the fix belongs in the above-mentioned package.
The error is in this commit: fd54e811f9c9413d22a2486920ed8d89ae84fc11
What precisely is the error? Did you succeed in correcting it?
I didn't succeed :( The error is on these lines - https://github.com/kivy/kivy/commit/fd54e811f9c9413d22a2486920ed8d89ae84fc11#diff-b91d9923ff01305495e47990f1ac7951R85
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, ...)
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, ...)
If you set any values for these attributes, the error exists. Why, I don't know. I'm looking for a solution, but so far without results.
I do not get exactly what you are trying to do and what error you are fighting against... Do you have compilation errors?
Googling the issue, I found a thread about a game using OpenGL where a user had the same problem (other games detected the correct OpenGL driver with OpenGL >= 2.0, but this game only used the default Microsoft driver with OpenGL 1.1.0) => see https://betaguide.wz2100.net/viewtopic.php?f=4&t=11314&sid=a20074f87518a1a6a6d39bb97cabd18b&start=0 . In the end, the game detected the correct driver when the user changed the driver acceleration mode from quality to performance. Can this help us in solving the issue?
OK, maybe it will help; I'll read it and think about it. There is one way that circumvents the problem but does not solve it: just remove lines 124 and 125 in https://github.com/kivy/kivy/blob/master/kivy/core/window/_window_sdl2.pyx.
@sdementen: Do you have compilation errors?
No, I didn't have them.
At the end, the game detected the correct driver when he user changed the driver acceleration mode from quality to performance.
Perhaps a similar thing can be done by working with the registry or by setting an SDL_GL_SetAttribute.
Commenting out lines 124 and 125 did not solve the issue.
However, skipping the new multisamples-related code entirely, i.e. changing line 116 into
if multisamples > 0 and False:
it did work.
Thank you for your help!
This is a show-stopper bug; we need to either find a solution or revert this before 1.9.1.
This version works for me: https://github.com/kivy/kivy/commit/c394fa891d34fbc37224e2c5b713140d6dc216a2 . As for a better fix: supporting SDL_GL_MULTISAMPLEBUFFERS requires the GL_ARB_multisample extension (according to sdl.beuc.net/sdl.wiki/SDL_GLattr), and I don't have it.
We need to add a check for this extension and, in its absence, set multisamples to 0 in the config. I'll try to do it.
Perhaps there is another solution; the information here (link_to_SO_1, link_to_SO_2, link_to_SO_3) may help to find it, or to make sure there isn't one (I couldn't tell, due to my poor knowledge of English and even less knowledge of SDL and OpenGL).
I don't know how to fix it. I know what code needs to be written, but I don't know where it should go.
LaTeX-Python confirmed on IRC that it seems to happen with integrated GPUs; he uses this processor and was running a packaged app. His code runs without problems on machines with a dedicated graphics card.
@Kazun3500 I'm unsure if that commit is the real trigger; LaTeX tried with multisamples 0 (which stops the relevant part of the patch from executing) and the crash happened the same way.
Could you check if running the source code shows the same issue, or does it crash only with a packaged app? What is your processor model and driver version?
@dessant running the source code shows the same issue.
And for me setting multisamples to 0 helps to avoid crashes.
Processor model - Celeron(R) Dual-Core CPU T3500 @ 2.10GHz × 2; the driver version I will give a little bit later. And yes, I have an integrated GPU.
Driver version - 6.3.9600.16384, driver provider - Microsoft. Now I'm trying to find a driver from Intel.
I can't find a driver from Intel. And I checked again: setting multisamples to 0 helps to avoid crashes, and depending on the multisamples count it shows different lists of available GL extensions
with 0
GL: glGenFramebuffers is NULL, try to detect an extension
GL: available extensions: GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_blend_color GL_EXT_abgr GL_EXT_texture3D GL_EXT_clip_volume_hint GL_EXT_compiled_vertex_array GL_SGIS_texture_edge_clamp GL_SGIS_generate_mipmap GL_EXT_draw_range_elements GL_SGIS_texture_lod GL_EXT_rescale_normal GL_EXT_packed_pixels GL_EXT_separate_specular_color GL_ARB_multitexture GL_EXT_texture_env_combine GL_EXT_bgra GL_EXT_blend_func_separate GL_EXT_secondary_color GL_EXT_fog_coord GL_EXT_texture_env_add GL_ARB_texture_cube_map GL_ARB_transpose_matrix GL_ARB_texture_env_add GL_IBM_texture_mirrored_repeat GL_EXT_multi_draw_arrays GL_NV_blend_square GL_ARB_texture_compression GL_3DFX_texture_compression_FXT1 GL_EXT_texture_filter_anisotropic GL_ARB_texture_border_clamp GL_ARB_point_parameters GL_ARB_texture_env_combine GL_ARB_texture_env_dot3 GL_ARB_texture_env_crossbar GL_EXT_texture_compression_s3tc GL_ARB_shadow GL_ARB_window_pos GL_EXT_shadow_funcs GL_EXT_stencil_wrap GL_ARB_vertex_program GL_EXT_texture_rectangle GL_ARB_fragment_program GL_EXT_stencil_two_side GL_ATI_separate_stencil GL_ARB_vertex_buffer_object GL_EXT_texture_lod_bias GL_ARB_occlusion_query GL_ARB_fragment_shader GL_ARB_shader_objects GL_ARB_shading_language_100 GL_ARB_texture_non_power_of_two GL_ARB_vertex_shader GL_NV_texgen_reflection GL_ARB_point_sprite GL_EXT_blend_equation_separate GL_ARB_depth_texture GL_ARB_texture_rectangle GL_ARB_draw_buffers GL_ARB_pixel_buffer_object GL_WIN_swap_hint GL_EXT_framebuffer_object GL_EXT_texture_sRGB GL_ARB_color_buffer_float GL_ARB_half_float_pixel GL_ARB_texture_float GL_NV_conditional_render GL_EXT_texture_swizzle
GL: EXT_framebuffer_object is supported
with 2
GL: glGenFramebuffers is NULL, try to detect an extension
GL: available extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
GL: No framebuffers extension is supported
GL: Any call to Fbo will crash !
I confirm that I have this problem on a Windows 10 machine and on VirtualBox Windows machines.
Config.set('graphics', 'multisamples', '0') solves the problem.
@lupin3rd - can you give more details on where exactly (file, line, kivy version) you made the change, so I can try this on a Win7 Pro 32-bit machine?
Just to share an update about this: work is underway to have ANGLE available on Windows (ANGLE is a project that translates OpenGL instructions to DirectX instructions; it is used by Firefox, Chrome, etc.). My personal wish is to have ANGLE by default on Windows and prevent these kinds of issues. ANGLE requires DirectX 9, which is the default version installed on Windows 10, so no graphics driver installation is required!
I tested my app on lots of different machines, Windows 10 and Windows 7. I have no problems with the latest stable 1.9.1 or the latest 1.9.2-dev on these machines. But none of them works on my Samsung Slate 700T tablet with Windows 10. The workaround with multisamples does not work either.
I think I'm having the same issue as @jegger, works fine on Windows 8.1 on one of my machines, but when testing my packaged app on Windows 10 I get the 1.1 error:
I did some further testing:
On a Windows 10 (64-bit) it does not work (as described in my last comment)
On a Windows 7 (32-bit, in a virtual machine) it does work
Could it be related to the architecture (32/64-bit)? What do you have, @steinnes?
I created via PyInstaller 3.3-dev (and kivy latest master) a:
single-file executable: does work on Win10 and Win7
folder-based executable: does show the OpenGL error on Win10, but not on Win7
@jegger, there have been substantial graphics changes since the last stable release, you might want to test master too.
@dessant Today I tried it with the latest kivy build (windows wheel https://kivy.org/downloads/appveyor/kivy/Kivy-1.9.2.dev0-cp27-cp27m-win32.whl)
I built with pyinstaller (also latest master branch) a folder-based executable on Windows 10 (virtual machine, 64-bit). This runs on the virtual machine itself but runs into this error on the other, physical Windows 10 64-bit machine. When building a pyinstaller single-file executable, it works on both Windows 10 installations (virtual and physical). I don't get this...
Try setting the environment to KIVY_GL_BACKEND=sdl2 and see if that fixes
it. That needs to be set before kivy is imported. You can set it in environ
before the first kivy import.
@matham Thanks for your input. I tried setting the variable by: os.environ["KIVY_GL_BACKEND"] = "sdl2" before any kivy imports in my main file. I can confirm that the flag is recognized when I run the script on the machine on which there is no error. The log shows "Backend used glew" without the variable being set, and "Backend used sdl2" with the env variable set to sdl2.
But this does not seem to change anything:
Here is the full log when the error is happening: http://codepad.org/QnEvh6Gl
I think you'll have to debug it on your machine since none of us can reproduce it. The places to look at are https://github.com/kivy/kivy/blob/master/kivy/core/gl/__init__.py and https://github.com/kivy/kivy/blob/master/kivy/graphics/opengl_utils.pyx#L240.
I'd also try to add the referenced _kivy_opengl_required_func to sys (e.g. sys._kivy_opengl_required_func = lambda *largs: 1) which will make kivy ignore that the version is too low. Maybe kivy will still work or it could crash.
I get this error too. I tried all of the below (separately and together), but I still get the same error.
Config.set('graphics', 'multisamples', '0')
sys._kivy_opengl_required_func = lambda *largs: 1
os.environ["KIVY_GL_BACKEND"] = "sdl2"
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] OpenGL version <b'1.1.0'>
[INFO ] [GL ] OpenGL vendor <b'Microsoft Corporation'>
[INFO ] [GL ] OpenGL renderer <b'GDI Generic'>
[INFO ] [GL ] OpenGL parsed version: 1, 1
[CRITICAL ] [GL ] Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.1
Version: b'1.1.0'
Vendor: b'Microsoft Corporation'
Renderer: b'GDI Generic'
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
The application will leave now.
GL: glGenFramebuffers is NULL, try to detect an extension
GL: available extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
GL: No framebuffers extension is supported
GL: Any call to Fbo will crash !
Process finished with exit code 1
Any other ideas?
Same problem here, guys!! Running Windows 10 64-bit. Confirmed I have OpenGL 3.2, but it still detects OpenGL 1.1.
Just a mention: the same OpenGL 1.1 error occurs on AppVeyor, so if this gets solved, it might be a nice test case.
kivy_17-02-24_3.txt
I am also getting this misdetection error.
I tried running the /share/kivy-examples/demo/showcase/main.py file and got that error.
It worked after I edited the main.py file and added the following two lines just after the rest of the import statements:
from kivy import Config
Config.set('graphics', 'multisamples', '0')
Currently running on:
Manufacturer: Acer
Computer Model: Aspire Z3-715
Intel i5 6400T CPU @ 2.2GHz
8GB RAM
64-bit Windows 10
Integrated Graphics: Intel HD 530 with Shared Memory
Dedicated Graphics: Nvidia Geforce 940M with 2GB RAM
The issue is with Windows 10 support for integrated graphics. I had this problem with a Java library; I changed to the last SDK version before Windows 10 was released and the issue was fixed.
@encloinc Can you explain what you mean? What error did you get in the java library? Also, you changed the sdk version of what? When compiling Angle?
Btw, @KeyWeeUsr: the problem with AppVeyor being 1.1 will likely never be fixed, since they literally don't have a graphics card. So that's the version Windows is limited to, I believe. But I could be wrong, since they do seem to have DirectX support.
@matham I think they really have DirectX, so if we can force kivy to use angle on appveyor, it might test this (or the 4971 part at least).
There seems to be a mishmash of issues in this thread. Let's use this thread for cases where gl detection issues are solved by Config.set('graphics', 'multisamples', '0'), regardless of the gl backend being used.
If that is not the case, please open a new issue, describing the error you get.
I was using JDK v#121 for the library https://libgdx.badlogicgames.com/ ; when I switched the JDK version to an older one that was not built around Windows 10 (I switched to v51), the OpenGL error on Java went away.
@KeyWeeUsr Yes, they do have DirectX installed, but that is different from the graphics driver. That link is just for the software package, it doesn't mean the graphics driver would also work.
I should have my head examined. I already enabled the tests on appveyor using angle with 3.5+ as you can see here: https://ci.appveyor.com/project/KivyOrg/kivy/build/1.0.719/job/j7k7c123ojwt0ba9#L1080. So the tests already work there with angle.
It could be that other people who have problems are lacking some DLL. AppVeyor comes preinstalled with the Direct3D SDK, so it has all the DLLs. I can probably test on AppVeyor to see what other DLLs it depends on.
Reproduced on a PC I don't really have stable access to, but these are the specs from dxdiag. I'll try angle if there is a chance to do so:
Win 10 Education 64bit Build 14393
Intel Core2 Quad Q9400 (6M Cache, 2.66 GHz, 1333 MHz FSB)
Intel GMA 4500M
I think it's this machine
I had no errors such as:
GL: glGenFramebuffers is NULL, try to detect an extension
GL: available extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
GL: No framebuffers extension is supported
GL: Any call to Fbo will crash !
just a simple log without any issues + OpenGL 1.1 error at the end, although the CPU should support the required OpenGL version (or at least he and he say so).
@KeyWeeUsr, is it fixed by Config.set('graphics', 'multisamples', '0')? If not, please open a separate issue.
@dessant I'm not quite sure it is an issue, as I didn't really find anything above OpenGL 1.1 in there, and even the WebGL checker reports no WebGL2 support (so probably just no DLL available). Anyway, I checked again today, even with disabled multisamples. It did nothing, but that was rather expected; using angle fixed the error, therefore I guess it's just that: a missing DLL.
As @dessant mentioned, this thread has become full of too many issues and suggestions. @jeeger's (#5071) new ticket summarizes the remaining issues. Please follow any remaining discussion there.
Note: I have a work laptop which reproduces the issues as discussed there. I can run any requested tests or make this machine available remotely upon request. In the meantime, I will continue investigating and discussing on that thread.
Thanks
Sorry, you're thinking of @jegger
☺️
Comments not related to multisamples have been deleted, let's reopen this.
Kivy 1.10.0 was released!
So I installed it right away to check.
Alas the problem stays the same.
I still need to add
from kivy import Config
Config.set('graphics', 'multisamples', '0')
to the kivy showcase "share\kivy-examples\demo\showcase\main.py" to get it to run.
The showcase itself runs, apart from the scatter demo, which just does nothing.
Tried on Win7 Pro 32bit, Python 3.6.1 32 bit, Intel HD on board graphics.
(Core i5), driver <IP_ADDRESS>1 (not updatable)
Did anyone try it with the sdl2 backend (KIVY_GL_BACKEND=sdl2) or the angle backend for py3.5+ (pip install kivy.deps.angle and KIVY_GL_BACKEND=angle_sdl2)?
@matham can you give beginner friendly instructions on how to activate KIVY_GL_BACKEND=angle_sdl2 assuming I have performed pip install kivy.deps.angle on python 3.6.1 and am now about to start share\kivy-examples\demo\showcase\main.py?
Hmm, that is very strange, I don't see how that error could occur with the last release.
@mborus, dozens of posts have been deleted from this thread to bring it to a usable state, please do not go off-topic, and debug your angle issues on irc or the mailing list.
I also have the same experience with Python 3.6, but not with Python 2.7.
My apps run well with Python 2.7, but on 3.6 the OpenGL version is falsely detected, and I can't find any solution yet.
Every solution I tried, including angle and multisamples, didn't work.
The issue still seems to persist on my Windows 10 machine; I have also tried the above solutions, with no luck...
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
import kivy
Adding this to the top of my script is the workaround I'm using on Windows 10, kivy 1.10.0, Python 3.6.2.
I want to clarify:
from kivy import Config
Config.set('graphics', 'multisamples', '0')
Does not resolve my error.
RESTART: C:/Users/AppData/Local/Programs/Python/Python36-32/kivy_label.py
[INFO ] [Logger ] Record log in C:\Users\Corie\.kivy\logs\kivy_17-09-05_15.txt
[INFO ] [Kivy ] v1.10.0
[INFO ] [Python ] v3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:14:34) [MSC v.1900 32 bit (Intel)]
[INFO ] [Factory ] 194 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_gif (img_pil, img_ffpyplayer ignored)
[INFO ] [Text ] Provider: sdl2
[INFO ] [OSC ] using <thread> for socket
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] Using the "OpenGL" graphics system
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] No framebuffers extension is supported
[INFO ] [GL ] Backend used <glew>
[INFO ] [GL ] OpenGL version <b'1.1.0'>
[INFO ] [GL ] OpenGL vendor <b'Microsoft Corporation'>
[INFO ] [GL ] OpenGL renderer <b'GDI Generic'>
[INFO ] [GL ] OpenGL parsed version: 1, 1
[CRITICAL] [GL ] Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.1
Version: b'1.1.0'
Vendor: b'Microsoft Corporation'
Renderer: b'GDI Generic'
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
The application will leave now.
However,
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
does. I'm not sure if this is the same or related issue.
Should we apply the:
Config.set('graphics', 'multisamples', '0')
On windows machines only? Will there be a performance impact on other platforms if applied globally, such as on Android, iOS or OSX?
It's only a problem on windows AFAIK.
Thanks. What I meant was: by setting this config option regardless of the platform it is running on, will it have any detrimental performance impact?
I don't think so. From the docs: "Sets the MultiSample Anti-Aliasing (MSAA) level. Increasing this value results in smoother graphics but at the cost of processing time." That seems to imply that a lower level will only increase performance. But it may look worse.
I'd just do
import platform
from kivy.config import Config

if platform.system() == 'Windows':
    Config.set('graphics', 'multisamples', '0')
Perfect. Thank you.
This worked for me
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
from kivy import Config
Config.set('graphics', 'multisamples', '0')
What would be ideal is to allow the default multisample value, but pre-emptively set it to zero only if:
- the platform is Windows
- a compatibility issue is detected
Basically, how do we test ahead of time before Kivy crashes out? I'd like to have the optimal graphics for the 99% of our software installations out there.
I don't think that is possible. You need to run the app twice: once to see if there's an issue, at which point you need to terminate Python because GL is already initialized, and then run it again with the "fix".
So we have no way to detect the issue and then adjust the setting. You can make it available as a config option that your users change if they have issues.
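A rough sketch of that two-run probe in Python, assuming (as the tracebacks above show) that a failed GL init makes the interpreter exit non-zero; the probe snippet is hypothetical, not part of Kivy:
import subprocess
import sys

# Child process that only initializes a Kivy window; importing Window
# triggers GL setup, and on failure Kivy calls sys.exit(1).
PROBE = "import kivy; from kivy.core.window import Window"

def gl_init_ok():
    result = subprocess.run([sys.executable, "-c", PROBE],
                            capture_output=True, timeout=30)
    return result.returncode == 0

# In the real app, run the probe before the first kivy import and fall
# back to multisamples = 0 if it failed.
if not gl_init_ok():
    from kivy.config import Config
    Config.set('graphics', 'multisamples', '0')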
Doesn't work for me:
PC config:
Intel GPU with updated driver.
Test source:
import kivy
kivy.require('1.10.0')
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
from kivy import Config
Config.set('graphics', 'multisamples', '0')
from kivy.app import App
from kivy.uix.button import Button

class TestApp(App):
    def build(self):
        return Button(text='Hello World')

TestApp().run()
This occurs:
Python 3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:14:34) [MSC v.1900 32 bit (Intel)] on Walmir-not, Standard
[INFO ] [Logger ] Record log in C:\Users\Walmir\.kivy\logs\kivy_17-09-29_7.txt
[INFO ] [Kivy ] v1.10.0
[INFO ] [Python ] v3.6.2 (v3.6.2:5fd33b5, Jul 8 2017, 04:14:34) [MSC v.1900 32 bit (Intel)]
[INFO ] [Factory ] 194 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_gif (img_pil, img_ffpyplayer ignored)
[INFO ] [Text ] Provider: sdl2
[INFO ] [OSC ] using for socket
[INFO ] [Window ] Provider: sdl2
[INFO ] [Window ] Activate GLES2/ANGLE context
[CRITICAL] [Window ] Unable to find any valuable Window provider.
sdl2 - RuntimeError: b'Could not initialize EGL'
File "C:\Python36\lib\site-packages\kivy\core_init_.py", line 67, in core_select_lib
cls = cls()
File "C:\Python36\lib\site-packages\kivy\core\window\window_sdl2.py", line 140, in init
super(WindowSDL, self).init()
File "C:\Python36\lib\site-packages\kivy\core\window_init_.py", line 899, in init
self.create_window()
File "C:\Python36\lib\site-packages\kivy\core\window\window_sdl2.py", line 269, in create_window
self.fullscreen, resizable, state)
File "kivy\core\window_window_sdl2.pyx", line 142, in kivy.core.window._window_sdl2._WindowSDL2Storage.setup_window (kivy\core/window_window_sdl2.c:2782)
File "kivy\core\window_window_sdl2.pyx", line 57, in kivy.core.window._window_sdl2._WindowSDL2Storage.die (kivy\core/window_window_sdl2.c:1872)
[CRITICAL] [App ] Unable to get a Window, abort.
Exception ignored in: 'kivy.properties.dpi2px'
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\kivy\utils.py", line 496, in get
retval = self.func(inst)
File "C:\Python36\lib\site-packages\kivy\metrics.py", line 174, in dpi
EventLoop.ensure_window()
File "C:\Python36\lib\site-packages\kivy\base.py", line 127, in ensure_window
sys.exit(1)
SystemExit: 1
[CRITICAL] [App ] Unable to get a Window, abort.
Please, any suggestions for a lost beginner?
Tks.
The issue is still happening on Windows 7 x64 (2011 computer, but still...). I have an Intel HD Graphics Arrandale. The way to fix it is to place:
from kivy import Config
Config.set('graphics', 'multisamples', '0')
before any kivy import. Another way to fix it, just on the current machine, is to modify %HOMEPATH%\.kivy\config.ini to change multisamples = 2 into multisamples = 0.
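A small sketch of making that same config.ini edit programmatically (Config.write() persists the in-memory config back to the active config.ini):
from kivy.config import Config

Config.set('graphics', 'multisamples', '0')
Config.write()  # persist the change to config.ini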
I do have compliant OpenGL on my Windows 10 laptop:
But even with these additions in my code
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
from kivy import Config
Config.set('graphics', 'multisamples', '0')
I get the 1.1 version error ...
The full log:
[INFO ] [Logger ] Record log in C:\Users\Jean-Pierre\.kivy\logs\kivy_18-04-21_14.txt
[INFO ] [Kivy ] v1.10.0
[INFO ] [Python ] v3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
[INFO ] [Factory ] 194 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil, img_gif (img_ffpyplayer ignored)
[INFO ] [OSC ] using <thread> for socket
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] Using the "OpenGL" graphics system
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] No framebuffers extension is supported
[INFO ] [GL ] Backend used <glew>
[INFO ] [GL ] OpenGL version <b'1.1.0'>
[INFO ] [GL ] OpenGL vendor <b'Microsoft Corporation'>
[INFO ] [GL ] OpenGL renderer <b'GDI Generic'>
[INFO ] [GL ] OpenGL parsed version: 1, 1
[CRITICAL] [GL ] Minimum required OpenGL version (2.0) NOT found!
OpenGL version detected: 1.1
Version: b'1.1.0'
Vendor: b'Microsoft Corporation'
Renderer: b'GDI Generic'
Try upgrading your graphics drivers and/or your graphics hardware in case of problems.
The application will leave now.
This is really very problematic !
@Archanciel
[INFO ] [Python ] v3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]
Anaconda overrides a lot of libraries with their own versions. Please try again with a CPython from python.org
Thanks for the suggestion, dolang. I did uninstall Anaconda Python 3.6.4 and installed CPython 3.6.5 instead, but this did not solve the problem. However, I finally found the solution thanks to a contribution on an Intel graphics card forum. The idea is to use shims to solve a driver compatibility problem. See the end of the thread here for a description of the solution.
Thank you for contributing your solution back. It might be able to help someone else with the same problem.
@gutcheschiro, thanks!
It worked for me.
import os
os.environ['KIVY_GL_BACKEND'] = 'angle_sdl2'
import kivy
I'm using it on Windows 10, kivy 1.10.0, Python 3.6.4.
Is this still present on 1.10.1? If so, would it make sense to alter the order of
https://github.com/kivy/kivy/blob/b4d3e7d0db67c9ad23dff8c48c4a275bafb9d76b/kivy/graphics/cgl.pyx#L59
so sdl2 is picked first (at least on windows)?
Is this still present on 1.10.1? If so, would it make sense to alter the order of
https://github.com/kivy/kivy/blob/b4d3e7d0db67c9ad23dff8c48c4a275bafb9d76b/kivy/graphics/cgl.pyx#L59
so sdl2 is picked first (at least on windows)?
In Kivy 2.2.1 that code picks sdl2 or gl. Does that mean this can be closed?
|
2025-04-01T06:39:17.797884
| 2016-05-22T19:09:24
|
156167301
|
{
"authors": [
"lolgear",
"modocache"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7577",
"repo": "kiwi-bdd/Kiwi",
"url": "https://github.com/kiwi-bdd/Kiwi/issues/682"
}
|
gharchive/issue
|
Rare should equal failure with dictionary comparison [Remastered].
Well, as I said before,
dictionary equality doesn't work well.
I would like to narrow down the area for inspection, but I can only give you a clue:
git: yourkarma/jwt
branch: master ( latest release, for example, 2.0.2 )
JWTSpec.m
describe encoding -> context claims set -> it decode claims set and verify it correctly.
steps to reproduce:
download latest release.
open JWT.xcworkspace
cleanup everything.
run tests.
do 4 - 3 - 4 - 4 - 4 - 3 - 4 or whatever order you'd like.
catch results as failure of example mentioned above.
Thanks for the remastered version! 😁
So I guess the problem is still occurring? Is it possible to reduce your test case at all? That is, does a dictionary comparison fail stochastically when not using code from the JWT project?
I ask because Kiwi uses -[NSObject isEqual:] for its equality matcher -- there isn't any special code path when comparing NSDictionary. For two dictionaries to only sometimes not be equal, I'd be forced to suspect either:
All Kiwi equality comparisons fail stochastically, or...
...Apple's -[NSDictionary isEqual:] sometimes returns different values.
I think the more likely explanation is that something in the JWTSpec_encoding_claims_set_decode_claims_set_and_verify_it_correctly test has a race condition, which causes the two dictionaries to not be equal sometimes.
Of course, it's possible there's some bug I'm overlooking in Kiwi. Reducing the test case would help find that, too!
@modocache hey, how could I isolate this test?
Is it possible to move it to a single spec file?
I think not, because Kiwi doesn't allow running a single test :(
A single-test project as an option?
Hey @lolgear! Yeah, I think a new Xcode project with just this test, and just the source code it's testing, will probably help isolate the problem better.
@modocache
check the master branch:
https://github.com/lolgear/JWT
Inspection -> SherlockHolmes project. (In the spec you can find the steps to reproduce the issue.)
@modocache any update?
Not yet, sorry! Will try to find a spare minute during a weekend soon.
Or if you have some spare time before I do, you could take a look at the Kiwi internals to try and figure it out!
|
2025-04-01T06:39:17.806059
| 2020-06-18T14:50:19
|
641271742
|
{
"authors": [
"Stranger6667",
"barrett-schonefeld"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7578",
"repo": "kiwicom/schemathesis",
"url": "https://github.com/kiwicom/schemathesis/pull/620"
}
|
gharchive/pull-request
|
feat: support for multiple examples
Implements #589
Updates:
add specs/openapi/examples to hold the logic for retrieving examples from endpoints
add support for OpenAPI examples
Tests:
add unit tests for retrieving examples from "examples"
add tests to make sure the static_parameter objects are created correctly
Note:
The implementation allows the "example" and "examples" keywords to be used at the same time. We get all examples from "examples", and if "example" is also used, we create one more strategy using the "example" values.
One concern I have is that, when a request body has multiple mediaTypes, we retrieve examples from the "first" mediaType. However, because the API def is a dictionary, we don't know that calling next(iter(media_types_dict)) actually gives the first mediaType.
The important thing is that we get examples from the same mediaType that is used in specs/openapi/serialization. But I don't know that we can rely on getting the same mediaType, because different dictionaries may have different hashing functions.
Great job! Re: dictionaries - I assume there might be corner cases on non-CPython implementations of Python 3.6, but I'm not sure how important that is to handle, as we only declare CPython compatibility in pyproject.toml. Also, I don't know if PyPy has the same implementation detail as CPython for dicts on 3.6 (it might, because that dict implementation came originally from PyPy) - otherwise, I am not sure we need to support other implementations that don't guarantee insertion ordering for dicts. I.e., I'd like to know the affected space before taking action in this regard.
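A quick plain-Python illustration of the ordering point being discussed (the media types here are made up): on CPython 3.6 insertion-ordered dicts are an implementation detail, and from Python 3.7 a language guarantee, so next(iter(...)) deterministically yields the first-declared key.
# Media types in declaration order, as parsed from the API definition.
media_types = {
    "application/json": {"schema": {"type": "object"}},
    "text/plain": {"schema": {"type": "string"}},
}

# Insertion order is preserved, so this always picks the first-declared one.
first_media_type = next(iter(media_types))
assert first_media_type == "application/json"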
|
2025-04-01T06:39:17.854381
| 2022-04-30T19:05:21
|
1221899697
|
{
"authors": [
"RustyJoeM",
"gwenn"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7580",
"repo": "kkawakam/rustyline",
"url": "https://github.com/kkawakam/rustyline/issues/628"
}
|
gharchive/issue
|
confusing example on helper - color for prompt only needed - no matching brackets, completions etc.
Hello,
I have seen the example file, but I find it quite confusing.
I do not need bracket matching, file completion or hinting, just a plain colored prompt with some ANSI string (which I got e.g. from the colored crate, or wrote manually).
Can I add highlight_prompt functionality without all the "unrelated" traits (completion, hints, validations, ...)?
There are some references to colored_prompt in the example code, but the helper has extra methods & traits that seem completely unrelated to prompt highlighting, and it's not clear whether they can/should be replaced with some placeholder code to keep other functionality intact.
There is also an example with only a highlighter: https://github.com/kkawakam/rustyline/blob/master/examples/read_password.rs#L30-L32
And all Highlighter trait methods have a default implementation that does nothing (no highlighting), so you just have to override the highlight_prompt impl.
perfect, thank you for direction! :)
|
2025-04-01T06:39:17.858697
| 2020-07-15T19:07:18
|
657585025
|
{
"authors": [
"Julian-Chu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7581",
"repo": "kkdai/youtube",
"url": "https://github.com/kkdai/youtube/issues/73"
}
|
gharchive/issue
|
discussion: improve project layout and etc...
youtube now has more code and functions than before. To improve readability and maintainability, we would like to reorganize the project layout; some plans are under discussion:
project layout: change to cmd/pkg(internal)/etc
reorganize errors
move some functions from youtube into their own packages
Thanks in advance for any suggestions.
closed by v2
|
2025-04-01T06:39:17.873622
| 2024-09-13T06:13:20
|
2523958839
|
{
"authors": [
"kkebaara"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7582",
"repo": "kkebaara/nodejs-goof",
"url": "https://github.com/kkebaara/nodejs-goof/pull/1"
}
|
gharchive/pull-request
|
[Snyk] Security upgrade node from 18.13.0 to 18.20.4
Snyk has created this PR to fix 2 vulnerabilities in the dockerfile dependencies of this project.
Keeping your Docker base image up-to-date means you’ll benefit from security fixes in the latest version of your chosen image.
Snyk changed the following file(s):
Dockerfile
We recommend upgrading to node:18.20.4, as this image has only 184 known vulnerabilities. To do this, merge this pull request, then verify your application still works as expected.
Vulnerabilities that will be fixed with an upgrade:
| Issue | Score |
|---|---|
| Out-of-bounds Write SNYK-DEBIAN11-GLIBC-5927133 | 829 |
| Out-of-bounds Write SNYK-DEBIAN11-LIBWEBP-5893094 | 829 |
| Out-of-bounds Write SNYK-DEBIAN11-LIBWEBP-5893094 | 829 |
| Out-of-bounds Write SNYK-DEBIAN11-LIBWEBP-5893094 | 829 |
| Out-of-bounds Write SNYK-DEBIAN11-LIBWEBP-5893094 | 829 |
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
This PR was automatically created by Snyk using the credentials of a real user.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Learn about vulnerability in an interactive lesson of Snyk Learn.
Opened a fix PR for one issue as per the document
|
2025-04-01T06:39:17.885530
| 2023-07-03T12:50:46
|
1786121552
|
{
"authors": [
"Eikix",
"danilowhk",
"jobez"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7583",
"repo": "kkrt-labs/kakarot",
"url": "https://github.com/kkrt-labs/kakarot/issues/617"
}
|
gharchive/issue
|
bug: Error when get_caller_address() is called within a view call
Bug Report
Current behavior:
In a view call, CALLER and ORIGIN opcodes return 0.
Expected behavior:
They should behave the same in view and execute calls.
Related code:
https://github.com/kkrt-labs/kakarot/blob/e5556e30b7560b8d647899b6dc27bf4b20857f38/src/kakarot/library.cairo#L292
https://github.com/kkrt-labs/kakarot/blob/e5556e30b7560b8d647899b6dc27bf4b20857f38/src/kakarot/instructions/environmental_information.cairo#L129
https://github.com/kkrt-labs/kakarot/blob/e5556e30b7560b8d647899b6dc27bf4b20857f38/src/kakarot/instructions/environmental_information.cairo#L159
Other information:
See JSON-RPC spec and evm.codes
How to reproduce
fork: https://github.com/danilowhk/kakarot-opcode-test
add "@view execute" function on Kakarot
adapt: .env (kakarot contract and class_hash)
View function code:
@view
func execute{
syscall_ptr: felt*, pedersen_ptr: HashBuiltin*, range_check_ptr, bitwise_ptr: BitwiseBuiltin*
}(
starknet_contract_address: felt,
evm_contract_address: felt,
bytecode_len: felt,
bytecode: felt*,
calldata_len: felt,
calldata: felt*,
value: felt,
gas_limit: felt,
gas_price: felt,
) -> (
stack_accesses_len: felt,
stack_accesses: felt*,
stack_len: felt,
memory_accesses_len: felt,
memory_accesses: felt*,
memory_bytes_len: felt,
starknet_contract_address: felt,
evm_contract_address: felt,
return_data_len: felt,
return_data: felt*,
gas_used: felt,
) {
return Kakarot.execute(
starknet_contract_address,
evm_contract_address,
bytecode_len,
bytecode,
calldata_len,
calldata,
value,
gas_limit,
gas_price,
);
}
Ok, want to make sure I am understanding the problem space here--
our tests in Python are calling these opcodes in the exact same way, so we are debugging a VM-difference issue and not a logic issue? Really important context here.
This comment is about our integration tests in Python; can anyone give context for it?
https://github.com/kkrt-labs/kakarot/blame/e273a58716f6a9a6f787b3b3817cbf1ad35035df/tests/integration/test_kakarot.py#L73
What is the status of this issue?
This is still an issue:
In this contract:
pragma solidity ^0.8.0;
contract TestContract {
// This function will return the address of the caller
function getCallerAddress() public view returns (address) {
return msg.sender;
}
}
When trying to call getCallerAddress() from Remix, there is an error:
2023-07-26 11:55:36 [2023-07-26T09:55:36Z WARN katana_core::backend] Call error: VirtualMachineExecutionErrorWithTrace { trace: "Error in the called contract (0x07a20c8450211766ecde6cb15882f330381531a29da72648a893940a41728c8b):\nError at pc=0:37:\nGot an exception while executing a hint: Custom Hint Error: Requested contract address ContractAddress(PatriciaKey(StarkFelt(\"0x0000000000000000000000000000000000000000000000000000000000000000\"))) is not deployed.\nCairo traceback (most recent call last):\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:17986)\nUnknown location (pc=0:17364)\nUnknown location (pc=0:7532)\nUnknown location (pc=0:1737)\n", source: CairoRunError(VmException(VmException { pc: 37, inst_location: None, inner_exc: Hint((0, CustomHint("Requested contract address ContractAddress(PatriciaKey(StarkFelt(\"0x0000000000000000000000000000000000000000000000000000000000000000\"))) is not deployed."))), error_attr_value: None, traceback: Some("Cairo traceback (most recent call last):\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:18084)\nUnknown location (pc=0:17986)\nUnknown location (pc=0:17364)\nUnknown location (pc=0:7532)\nUnknown location (pc=0:1737)\n") })) }
It seems this is not an issue when calling directly from forge:
The test for this is the address sender_address = getCaller.getCallerAddress(); call in the following Forge script:
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.13;
import "forge-std/Script.sol";
import "kakarot/PlainOpcodes/GetCaller.sol";
contract GetCallerScript is Script {
GetCaller public getCaller;
function run() external {
uint256 deployerPrivateKey = vm.envUint("EVM_PRIVATE_KEY");
vm.startBroadcast(deployerPrivateKey);
getCaller = new GetCaller();
address sender_address = getCaller.getCallerAddress();
console.logAddress(sender_address);
require(
sender_address ==
address(0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266),
"Address should be 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"
);
vm.stopBroadcast();
}
}
GetCaller Contract:
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;
contract GetCaller {
address private caller;
// This function will return the address of the caller
function getCallerAddress() public view returns (address) {
return msg.sender;
}
}
What is the status of this? Is it linked to the get_caller_address issue?
What's the status of this?
Tried it locally, and it works on the latest Kakarot commit.
Closing this issue
|
2025-04-01T06:39:17.888094
| 2024-09-11T00:38:28
|
2518217021
|
{
"authors": [
"Dale-Muccignat",
"osro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7584",
"repo": "klaasnicolaas/home-assistant-glow",
"url": "https://github.com/klaasnicolaas/home-assistant-glow/issues/592"
}
|
gharchive/issue
|
Usage double actual use
Hiya, love your work on this and appreciate your efforts. This is more of an FYI, unless others have some clarity to add.
I was setting up my own glow and having trouble adjusting the sensitivity on the photodiode. It seemed unless I got it perfect it would either not register pulses or double count them. My meter is outside in a not very sealed box so perhaps that doesn't help.
I just added the following options to my config:
internal_filter: 200ms
internal_filter_mode: PULSE
I have had the 200ms filter in for a while, but adding the filter mode seems to have solved the double counting issue.
I'm not entirely sure why; I think it might be something to do with the daylight, but I don't really understand why it would be.
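For anyone finding this later: those options belong on the ESPHome pulse_meter sensor, roughly like this (a sketch; the name and pin are assumptions - use whatever your photodiode is wired to):

sensor:
  - platform: pulse_meter
    name: "Glow pulses"
    pin: GPIO25            # assumption: the pin your photodiode is wired to
    internal_filter: 200ms
    internal_filter_mode: PULSE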
I have exactly the same issue. All the values are doubled compared to the actual usage. I have checked, and all the settings should be OK.
|
2025-04-01T06:39:17.893228
| 2019-10-28T19:28:21
|
513510356
|
{
"authors": [
"BuonOmo",
"msxavi",
"randy-girard"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7585",
"repo": "klaxit/sidekiq-worker-killer",
"url": "https://github.com/klaxit/sidekiq-worker-killer/issues/11"
}
|
gharchive/issue
|
Kill or restart?
Sorry if this is obvious, but does this kill and not start the worker back up, or will it do a restart? Thanks
Hi @randy-girard
It doesn't restart; you may have to do it manually or have a monitoring tool like upstart or monit restart Sidekiq automatically.
@randy-girard if you are using Heroku, restarts are automated as well :)
I don't think we could handle a restart since the process is kind of killing itself: we are spawning a child thread from the Sidekiq process, and it is this thread that sends SIGTERM and then dies as well...
Thanks yall.
|
2025-04-01T06:39:17.913041
| 2022-03-18T04:48:51
|
1173185569
|
{
"authors": [
"CLAassistant",
"jimni1222"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7586",
"repo": "klaytn/klaytn-docs",
"url": "https://github.com/klaytn/klaytn-docs/pull/400"
}
|
gharchive/pull-request
|
Added compatibility contents with tx constructor
Proposed changes
As of caver-js v1.8.1-rc.4, to support multiple Caver instances, creating transaction instances through the constructor of each transaction type is no longer supported.
I also checked the example repo, and there were no places that use a constructor to create a transaction.
In this PR, I've added content describing the above.
v1.8.1-rc.4 is not released yet, so I want to merge this after the release.
Types of changes
Please put an x in the boxes related to your change.
[x] Minor Issues and Typos
[ ] Major Content Contribution
[ ] Others
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to reach out. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[x] I have read the CONTRIBUTING GUIDELINES
[x] I have signed the CLA
[x] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Related issues
Please leave the issue numbers or links related to this PR here.
Further comments
If this is a relatively large content contribution, kick off the discussion by explaining why you would suggest the content contribution, etc...
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
@terrikwak Please take a look :)
@terrikwak @kjhman21
I think I don't have permission to merge. Can you merge this for me?
@dcground @neoofklaytn Please take a look this :) Thank you
|
2025-04-01T06:39:17.919104
| 2022-07-15T01:47:48
|
1305474624
|
{
"authors": [
"CLAassistant",
"iv0rish"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7587",
"repo": "klaytn/klaytn-integration-tests",
"url": "https://github.com/klaytn/klaytn-integration-tests/pull/12"
}
|
gharchive/pull-request
|
Create CLA.yml
Proposed changes
Create CLA pipeline on Github Actions
Types of changes
Please put an x in the boxes related to your change.
[ ] Bugfix
[ ] New feature or enhancement
[x] Others
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.
[x] I have read the CONTRIBUTING GUIDELINES doc
[x] I have signed the CLA
[ ] Lint and unit tests pass locally with my changes ($ make test)
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Related issues
Further comments
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:39:17.966095
| 2024-05-27T07:27:17
|
2318440952
|
{
"authors": [
"codecov-commenter",
"kwb0523"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7588",
"repo": "kmesh-net/kmesh",
"url": "https://github.com/kmesh-net/kmesh/pull/369"
}
|
gharchive/pull-request
|
update the workload proposal
What type of PR is this?
update the workload proposal
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Flag | Coverage Δ
unittests | 31.67% <ø> (?)
Flags with carried forward coverage won't be shown.
|
2025-04-01T06:39:17.967870
| 2017-12-05T17:35:42
|
279464654
|
{
"authors": [
"KamranMackey",
"kmikiy"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7589",
"repo": "kmikiy/SpotMenu",
"url": "https://github.com/kmikiy/SpotMenu/pull/86"
}
|
gharchive/pull-request
|
Added album name support.
I have added Album Name support to SpotMenu, of course with a preference to configure it. By default, it is set to true. I also updated the Xcode Workspace project format to be Xcode 8 and higher compatible only, since we don't target any macOS versions that still use Xcode 3.2.
I have tested this a bunch, and so far it works really well, aside from the fact that I unfortunately haven't figured out a way to hide the album name if it's the exact same as the song title.
looks good to me!
|
2025-04-01T06:39:17.988278
| 2021-09-19T20:40:07
|
1000414041
|
{
"authors": [
"Artefact2",
"david-janssen",
"humanplayer2",
"slotThe",
"tuxflo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7590",
"repo": "kmonad/kmonad",
"url": "https://github.com/kmonad/kmonad/issues/360"
}
|
gharchive/issue
|
How do I find keycodes for special German / Danish / Nordic keys to use in defsrc?
I'm trying to set up using a laptop with a Danish keyboard. It's a ThinkPad, and I have been using the X220 DE template by @slotThe to get started defining my defsrc block.
The template quickly stops matching, as my keyboard has ½ to the left of 1, so to get started I replace that with ^ as in the X220 template. Alas:
kmonad: Parse error at 24:3:
|
24 | ^ 1 2 3 4 5 6 7 8 9 0 kp+ grv bspc
| ^
unexpected '^'
expecting ')' or keycode
Figuring that maybe I can't just paste, I try replacing ^ with +:
kmonad: Parse error at 24:3:
|
24 | + 1 2 3 4 5 6 7 8 9 0 kp+ grv bspc
| ^
unexpected '+'
expecting ')' or keycode
OK. So I look up the keycodes in Keycode.hs and see that maybe I should use kp+ instead of just +. Success! Now that doesn't error.
Alas, my first Danish character does:
kmonad: Parse error at 25:58:
|
25 | tab q w e r t y u i o p å kp+ ret
| ^
unexpected 'å'
expecting ')' or keycode
OK. I can't find anything relevant in Keycodes.hs, so to move on, I replace it with what is in the X220 template, namely ß -- eszett, the German double s. Alas, to no avail:
unexpected 'ß'
expecting ')' or keycode
Now I'm confused. I can't seem to find anything in Keycodes.hs. Others have seemingly had it working earlier. I'm at a loss.
How do I figure out what keycodes I should use to define my defsrc block?
I apologize if I have missed something obvious. I'm on the Ubuntu-related Pop!_OS 21.04 running Wayland.
The template quickly doesn't match, as my keyboard has ½ left of 1
The defsrc block is not really meant as an accurate representation of the layout that you want, but more of your physical keyboard as a whole. As a rule of thumb, always specify a generic US keyboard layout there.
It is also rather restricted in terms of what it accepts, hence symbols that wouldn't faze the parser in a deflayer block are giving you some trouble here.
@slotThe is spot on. The ink on your keys doesn't match the events that get sent to the kernel. The easiest solution would be to run evtest on your keyboard and inspect the events that get sent to the kernel. Those have to line up with your (defsrc ...) definition, and then the rest should work.
Thank you! Defining a US layout was straightforward, and I have now verified its correctness with evtest.
Mapping the US layout back to Danish, I run into some unforeseen and undesirable behavior.
Following the "Special characters" section in this suggested wiki entry, I've added the following to an otherwise empty ~/.XCompose
include "%L"
<Multi_key> <a> <o> : "å"
<Multi_key> <a> <e> : "æ"
<Multi_key> <o> <e> : "ø"
<Multi_key> <1> <2> : "½"
<Multi_key> <1> <0> : "´"
<Multi_key> <1> <p> : "¨"
<Multi_key> <1> <a> : "'"
and use the following config.kbd:
(defcfg
input (device-file "/dev/input/by-path/platform-i8042-serio-0-event-kbd")
output (uinput-sink
"KMonad: X1C9"
"/usr/bin/sleep 1 && /usr/bin/setxkbmap -option compose:ralt")
cmp-seq ralt
)
(defalias
å #(ralt a o)
æ #(ralt a e)
ø #(ralt o e)
½ #(ralt 1 2)
´ #(ralt 1 0)
¨ #(ralt 1 p)
' #(ralt 1 a)
)
(defsrc
esc f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 home end ins del
grv 1 2 3 4 5 6 7 8 9 0 - = bspc
tab q w e r t y u i o p [ ] ret
caps a s d f g h j k l ; ' \
lsft 102d z x c v b n m , . / rsft
wkup lctl lmet lalt spc ralt cmps rctl back up fwd
left down rght
)
(deflayer default
esc f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 home end ins del
@½ 1 2 3 4 5 6 7 8 9 0 + @´ bspc
tab q w e r t y u i o p @å @¨ ret
caps a s d f g h j k l @æ @ø @'
lsft 102d z x c v b n m , . - rsft
wkup lctl lmet lalt spc ralt cmps rctl back up fwd
left down rght
)
That works, partially, but with a lot of issues. The keys work, but in different ways; each key acts unexpectedly in combination with Shift and AltGr.
Am I missing some step?
No, that is how special characters work in Linux: we use compose sequences to emit rapid macros that tell the OS what to encode. I.e. we emit something like altgr -> shifted-' -> e to emit an e-umlaut. This is an alright method of emitting special characters if you don't need them too much.
The other option, which might work better if you have to use special characters a lot, is to simply emit the raw keycodes that your keyboard would have (i.e. if you run evtest on æ and it says that internally it's coded as ;, you can configure your keyboard to just emit a ; and maybe document it in the comments of your keymap). Then you can let the internationalization settings of your OS deal with translating it into an æ.
Do you understand what I'm describing?
I believe I do, and thank you for it!
As option 1 doesn't seem to work ideally for me (e.g., I do need my Shift + ø to be Ø, not Œ), I tried option 2.
Option 2 works under X, but not under Wayland: I boot with no remappings active and my input source set to Danish, and run kmonad with a config file where deflayer default is identical to the defsrc from my comment above, with nothing in my .XCompose.
The KMonad name did seem to hint at somebody's preferences :)
I'm sorry if I'm being slow here, but what you are suggesting is that I as a first step get my hands on an external, non-Danish keyboard and check whether that acts as desired with my input source set to Danish, under Wayland?
And if not, then troubleshoot that, learn a lesson, and apply the same to KMonad?
I'm sorry if I'm being slow here, but what you are suggesting is that I as a first step get my hands on an external, non-Danish keyboard and check whether that acts as desired with my input source set to Danish, under Wayland?
Oops: didn't explain my suggestion correctly.
What I was suggesting was treating your KMonad remapping entirely as US-English, but keeping in the back of your head that your OS is going to be remapping some US keys to Danish keys. So don't try to get KMonad to emit special characters, like Ø (which it will try to do using compose sequences); instead just get KMonad to emit the US character which your OS will interpret as Ø. That way, all the shifting behavior etc. should just work out of the box.
Oh: and as an addendum, how you could do that under X:
After KMonad is launched, call a setxkbmap dk command, setting the (OS) keyboard map to Danish. There is a post-init setting in the defcfg section that will let you do this automatically, but experiment by hand first.
You can also switch to a vt (with ctrl-alt-f2) and use the evtest utility. It will give you the linux keynames directly, which you can use in your config as-is (just remove the prefix and convert to lowercase).
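Putting option 2 and the setxkbmap tip together, a minimal sketch (untested; the device path and key subset are just examples, and keys left out of defsrc pass through untouched):

;; Pass the US keycodes through and let the OS, set to dk, render them as
;; Danish: with the dk layout, ; ' [ come out as æ ø å, and Shift and AltGr
;; behave natively.
(defcfg
  input  (device-file "/dev/input/by-path/platform-i8042-serio-0-event-kbd")
  output (uinput-sink "KMonad: DK passthrough"
                      "/usr/bin/sleep 1 && /usr/bin/setxkbmap dk"))

(defsrc
  a s d f g h j k l ; ' [
)

(deflayer default
  a s d f g h j k l ; ' [
)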
This seems fixed so I'm closing for now
Sorry for commenting on an old, closed issue, but I wondered how exactly I could achieve the behavior described in:
What I was suggesting was treating your KMonad remapping entirely as US-English, but keeping in the back of your head that your OS is going to be remapping some US keys to Danish keys. So don't try to get KMonad to emit special characters, like Ø (which it will try to do using compose sequences); instead just get KMonad to emit the US character which your OS will interpret as Ø. That way, all the shifting behavior etc. should just work out of the box.
Because this is exactly what I'm looking for. Basically, I want to be able to insert foreign symbols (German umlauts in my case) while a modifier key is pressed. I want to be able to get upper- and lowercase characters, which doesn't seem to work if I use the symbols directly in my kmonad mappings.
|
2025-04-01T06:39:18.008394
| 2021-03-21T04:33:30
|
836982016
|
{
"authors": [
"knadh",
"mr-karan",
"srikanthlogic"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7591",
"repo": "knadh/tg-archive",
"url": "https://github.com/knadh/tg-archive/issues/4"
}
|
gharchive/issue
|
Configurable Privacy Options
While discussing this project, we had a small discussion around chat privacy in general, and it would be useful for this project to have configurable privacy options so that it can serve varied groups with different privacy needs.
Posting some initial thoughts on this
Option to show/hide timestamps - the template can show/hide the timestamp based on this.
Link Mode: option to pick only links/media and scrub off any comments. This lets the group archive only resources while skipping the chatter: the community keeps the freedom to chat, the archives avoid chatty noise, and the value is preserved. This can be True/False.
Include / Exclude from archives based on select hashtags - If LinkMode is True, a set of hashtags can be added as 'Include' hashtags so those important messages get archived, while leaving out the remaining chats. If LinkMode is False, a set of hashtags can be added to 'Exclude' hashtags (like #DontArchive #KeepThisPrivate), so that the group can still have private non-archiving conversations even while in Full Archive mode.
Thoughts?
"This is a privacy risk, lets stay as is"
Just setting some context, this was discussed in Foss United as well before publishing the archive. Like @knadh mentioned, for an already public Telegram group, the chats are available publicly anyway.
About privacy, like @mr-karan pointed out, by definition, there is no privacy in a public Telegram group. Anyone can join and read/export/copy/re-publish messages at any point.
Option to pick only links / media and scrub off any comments
This is too niche an option to include in the global config, but can be easily achieved externally by doing a DELETE from messages WHERE media_id is NULL on the .sqlite file.
Include / Exclude from archives based on select hashtags
Again, too specific to include in the config but can be easily achieved by querying the .sqlite file.
These can be done with the SQLite CLI.
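To illustrate, both suggestions boil down to a few statements against the site's .sqlite file (a sketch; the media_id predicate is taken from the comment above, while the filename and the content column used for hashtag matching are assumptions - check your schema first):

import sqlite3

conn = sqlite3.connect("data.sqlite")

# Link mode: keep only messages that carry media.
conn.execute("DELETE FROM messages WHERE media_id IS NULL")

# Exclude hashtags (assumes hashtags appear verbatim in the message body).
conn.execute("DELETE FROM messages WHERE content LIKE '%#DontArchive%'")

conn.commit()
conn.close()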
|
2025-04-01T06:39:18.009838
| 2023-11-20T17:56:18
|
2002719173
|
{
"authors": [
"davidhadas",
"evankanderson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7592",
"repo": "knative-extensions/kn-plugin-source-kamelet",
"url": "https://github.com/knative-extensions/kn-plugin-source-kamelet/pull/239"
}
|
gharchive/pull-request
|
Make SECURITY.md consistent
We're missing these across a lot of Knative repos, this is copied/improved from the 3 that existed.
/approve
/lgtm
|
2025-04-01T06:39:18.057264
| 2020-11-16T20:50:50
|
744180676
|
{
"authors": [
"dprotaso",
"n3wscott",
"slinkydeveloper"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7593",
"repo": "knative-sandbox/reconciler-test",
"url": "https://github.com/knative-sandbox/reconciler-test/issues/37"
}
|
gharchive/issue
|
e2e test framework poc needs more flexibility
So none of the networking conformance tests currently have levels. Only serving has some.
I think for serving a runtime conformance test will be structured as
Create a Service
Make a get request to get some info about the running environment
Run the levelled assertions against the returned info
Repeat step 3 until you have no more assertions
A problem I see is the info from step 2 needs to go to step 3. With the framework as is this is going to be cumbersome - especially if each levelled assertion has unique logic.
Originally posted by @dprotaso in https://github.com/knative-sandbox/reconciler-test/pull/30#discussion_r520992411
/assign @n3wscott
Another example would be the API operations for different types
https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#service
Do you have an example test with code so I can understand?
I would assume that looks something like:
func StableAPIOpsFeature() *feature.Feature {
    f := new(feature.Feature)
    f.Stable("Service").
        Must("Create", func(ctx context.Context, t *test.T) {
            servingclient.Get(ctx).Create(...)
            ...
        }).
        Must("Update", func(ctx context.Context, t *test.T) {
            servingclient.Get(ctx).Update(...)
            ...
        }).
        Must("Get", func(ctx context.Context, t *test.T) {
            servingclient.Get(ctx).Get(...)
            ...
        }).
        Should("Patch", func(ctx context.Context, t *test.T) {
            servingclient.Get(ctx).Patch(...)
            ...
        }).
        Must("Delete", func(ctx context.Context, t *test.T) {
            servingclient.Get(ctx).Delete(...)
            ...
        })
    return f
}
We need to build some support code, like implementing namespace-scoped clients and injecting them into the context - like the TBD servingclient package.
the caveat is that each test should be independent of the previous. Because if you depend on ordering, it will fail, or it will be filtered out if only Should is run... etc
the caveat is that each test should be independent of the previous. Because if you depend on ordering, it will fail, or it will be filtered out if only Should is run... etc
In the prior example I have control over the test sequence since they naturally tie into golang's test lifecycle.
This is the point I'm trying to make - this separation makes this more cumbersome
i.e. another example, asserting filesystem properties from our runtime contract.
func TestRuntimePaths(gotest *testing.T) {
t := globals.NewT(gotest)
var s v1.Service = someTestService()
t.ServingClient.Create(s)
// once ready, fetch its runtime environment
env := http.Get(s.Status.URL + "/runtime")
for _, path := range runtimev1.MustFilesystemPaths {
t.Must(path, checkPath(env, path))
}
for _, path := range runtimev1.MayFilesystemPaths {
t.May(path, checkPath(env, path))
}
}
bump
I am trading cumbersomeness for composability and decoupled tests.
I would write the TestRuntimePaths like this:
func TestRuntimePaths(t *testing.T) {
ctx, env := global.Environment()
// Use the existing helpers to make a KSVC.
env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))
// once the KSVC is ready, fetch its runtime environment
resp := http.Get(s.Status.URL + "/runtime")
// RuntimePaths is the feature we are trying to test.
env.Test(ctx, t, runtime.RuntimePaths(resp))
env.Finish()
}
// ...in a runtime features package...
func RuntimePaths(resp http.Response) *feature.Feature {
    f := new(feature.Feature)
    s := f.Stable("RuntimePaths")
    for _, path := range runtimev1.MustFilesystemPaths {
        s.Must("have "+path, checkPath(resp, path))
    }
    for _, path := range runtimev1.MayFilesystemPaths {
        s.May("have "+path, checkPath(resp, path))
    }
    return f
}
This might look like more code, but what we can do is separate the thing we are trying to assert - the feature (the payload of the runtime contract) - from how we get that response. So with some edits to the test entry point, we can point the runtime test at another way of getting that data, say, a kn environment result or something like that, or a CloudRun curl on another kind of endpoint.
The example above leads me to more questions/thoughts:
1) Use of Feature in lieu of a list of steps in Prerequisites
env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))
Prerequisite accepting a Feature seems weird since there could be levelled requirements and/or feature state assertions that I wouldn't expect.
2) Generating state from prerequisite steps & using them in assertions
var s v1.Service = someTestService()
env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))
rt := http.Get(s.Status.URL + "/runtime")
someTestService() in my mind just returns a v1.Service go struct. If that's the case, is features.ServiceIsCreatedAndReady(s) meant to mutate s with the intent of setting a valid s.Status.URL?
env.Prerequisite and env.Test both take unique contexts so it seems like there's no hand off of state between the two functions. Meaning Prerequisite doesn't allow for the context to be mutated.
This leads to two ways to consume dependencies for an assertion - closures (in your example above) and the context function argument that's originally passed to Test (which I've seen include clients/informers etc.).
I'm not advocating we let env.Prerequisite modify the context. In order for that to work, the context keys need to be known by the prereq & assertion steps - which I'd argue is coupling them together. Maybe a workaround would be to dynamically create assertions with context keys as input, but :shrug: this becomes fairly complex.
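For completeness, that context-key coupling would look something like this in plain Go (a sketch; the names are hypothetical, and the coupling is exactly that both the prereq and the assertion must import the package defining the key):

// Shared by the prerequisite and the assertion - which is the coupling.
type serviceURLKey struct{}

// StoreServiceURL is what the prerequisite step would call.
func StoreServiceURL(ctx context.Context, url string) context.Context {
    return context.WithValue(ctx, serviceURLKey{}, url)
}

// ServiceURL is what the assertion step would call.
func ServiceURL(ctx context.Context) string {
    url, _ := ctx.Value(serviceURLKey{}).(string)
    return url
}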
3) Coalescing of Features Steps
I was sold on the fact that we could describe features in a fluent style. I like this because having a central place to see all the aspects of a feature set is great. This is a pattern that originated from networking conformance.
i.e. for runtime conformance I'd like to write
func RuntimeV1Conformance() *feature.Feature {
    f := new(feature.Feature)
    f.Stable("filesystem", TestRuntimePaths)
    f.Stable("http", TestRuntimeHTTP)
    f.Stable("http-upgrade", TestRuntimeHTTPUpgrade)
    // etc..
    return f
}
In the example, the test setup/runner TestRuntimePaths has specific logic that's unique to RuntimePaths. Given that, I'm not sure we can still describe a holistic RuntimeV1Conformance feature if I need to create unique services for different aspects of the runtime conformance.
It seems like I have two options:
Figure out how to use something like features.ServiceIsCreatedAndReady(s) N times in my test runner
con:
changing a test requires changes in two places
setting up N services (on a single context?) to be consumed by N different assertions requires coordination
Create the services in each assertion func(ctx.Context, t*testing.T)
con:
things that are written as steps can't be used here - i.e. features.ServiceIsCreatedAndReady - so there's going to be code duplication
4) t.Parallel()
Serving and Networking use t.Parallel() quite extensively, and they're invoked at the feature state boundary, i.e. KIngress/basic is parallel wrt. KIngress/websocket. I'm not sure how to do this with feature.Feature or why by default step assertions are all parallelized (https://github.com/knative-sandbox/reconciler-test/pull/52) cc @slinkydeveloper
I'm beginning to wonder if feature.Feature should be called something else - feature.Set - because the stable/alpha/beta calls are scoped to a feature.
I'm not sure how to do this with feature.Feature or why by default step assertions are all parallelized (#52) cc @slinkydeveloper
@dprotaso since you want to parallelize all tests, why not? I'm not sure I get your question
@dprotaso since you want to parallelize all tests, why not? I'm not sure I get your question
Shouldn't test authors control the flow? What if assertions have side-effects on the remote object being tested?
This seems relevant to #3 - To guarantee test isolation I would have to change my feature definition either by:
Taking what would be in feature.Setup and moving it into every Step - ie. creating N Knative Services instead of one.
Creating duplicate feature.Feature definitions with different Steps on each
Shouldn't test authors control the flow? What if assertions have side-effects on the remote object being tested?
An assertion by definition shouldn't have a side effect right? Are you sure that particular assertion shouldn't live in the setup phase?
An assertion by definition shouldn't have a side effect right?
Yup - I agree.
Are you sure that particular assertion shouldn't live in the setup phase?
Ideally it should - but then I'm forced to split my feature.Feature definition. With the side effects being:
the same issue I mentioned above where some arguments are passed via closures and some via context
env.Test() doesn't run in parallel to other env.Test() invocations - splitting features would actually slow things down
Ideally it should - but then I'm forced to split my feature.Feature definition
Can you provide a use case for that? It might help me reason on the issue
I mentioned it all here see point 2): https://github.com/knative-sandbox/reconciler-test/issues/37#issuecomment-733445763
Another hypothetical
kingress := new(feature.Feature)
//
kingress.Stable("websocket").Must("receive traffic", AssertWebSocketTraffic())
kingress.Stable("http2").Must("receive traffic", AssertHTTP2Traffic())
If each Stable feature requires a different KIngress configuration, where should they be set up?
Inside Assert*Traffic calls?
con: not really using feature.Setup anymore so you can't re-use steps
Setup on the feature?
con: all the assertions are blocked on all the KIngresses being ready; how do you transfer the right ingress endpoint to the assertion?
Split the feature into two?
con: more boilerplate, tests run linearly unless you add more Test funcs
I mentioned it all here see point 2)
This is kinda similar to the discussion happening here: https://github.com/knative-sandbox/reconciler-test/issues/51. My understanding is that to generate state you should use Setup. If you need to generate state, then assert, then generate state and then assert again you need to develop 2 features.
I see this even more clearly in your sample with kingress, where yes there is more boilerplate, but it clearly shows how websocket and http2 testing should be 2 different features.
Maybe what might be useful is something like a "feature template": in your case the setup and teardown are the same except (I guess) one step to trigger the state change to stimulate the kingress. But, assuming you create the websocket and http2 features, you could mostly share the same setup and teardown.
So what we could do is create a sort of "template" that you can use to define common setup and teardown for more features: the websocket and http2 features will eventually share the same kingress data plane feature template. But still, "at runtime", those features are then executed separately, so the setup and teardown steps are in fact repeated.
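A rough sketch of that template idea, in the thread's own pseudo-API (InstallKIngress/RemoveKIngress and the step types are hypothetical):

// DataPlaneFeature stamps out independent features that share setup/teardown.
func DataPlaneFeature(name string, assert feature.StepFn) *feature.Feature {
    f := new(feature.Feature)
    f.Setup("install a KIngress", InstallKIngress(name))    // hypothetical step
    f.Stable(name).Must("receive traffic", assert)
    f.Teardown("remove the KIngress", RemoveKIngress(name)) // hypothetical step
    return f
}

// Each call still gets its own setup and teardown at runtime:
func WebsocketFeature() *feature.Feature { return DataPlaneFeature("websocket", AssertWebSocketTraffic()) }
func HTTP2Feature() *feature.Feature     { return DataPlaneFeature("http2", AssertHTTP2Traffic()) }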
The name feature.Feature could be wrong; I have been more focused on how these things are composable and on the signatures of the steps and feature providers.
Use of Feature in lieu of a list of steps in Prerequisites
Ville wanted a setup step, external to the Test or Feature, to get the env to a state that is required by but orthogonal to the test you would like to perform. In Eventing this would be installing a class of Broker; in Serving this would be configuring the ingress type. To test out the usage I thought it would be handy to use feature.Feature (bikeshed the name) as a shortcut to make an env.Prerequisite with the same signature as env.Test. It is for the author's, reporter's and debugger's convenience, to make it clear that what is being tested or asserted inside the Prerequisites phase is independent of but required for the Test phase.
Generating state from prerequisite steps & using them in assertions
I think var s v1.Service = someTestService() should have been var s *v1.Service = someTestService() and it will work out.
This leads to two ways to consume dependencies for an assertion - closures (in your example above) and the context function argument that's originally passed to Test (which I've seen include clients/informers etc.).
The framework should provide the hooks but make no opinion on which way is best. For some cases having some magic thing in context might be the best, like a set of namespaced clients set up in some Prerequisite phase, which Serving likes to use.
I'm not sure we can still describe a holistic RuntimeV1Conformance feature.
I don't think you can or want to. I would see this written as a series of Test calls on an environment with a list of features that are required to pass conformance:
// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
ctx, env := global.Environment()
env.Test(ctx, t, conformance.Feature1())
env.Test(ctx, t, conformance.Feature2())
env.Test(ctx, t, conformance.Feature3())
env.Test(ctx, t, conformance.Feature4())
<... etc>
env.Finish()
}
The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)
RuntimeV1Conformance feature if I needed to create unique services for different aspects of the runtime conformance
I would see this as something like:
// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
ctx, env := global.Environment()
s := "ksvc-name"
env.Precondition(ctx, t, conformance.GivenService(s))
env.Test(ctx, t, conformance.Feature1(s))
env.Test(ctx, t, conformance.Feature2(s))
env.Test(ctx, t, conformance.Feature3(s))
env.Test(ctx, t, conformance.Feature4(s))
<... etc>
env.Finish()
}
t.Parallel()
Big plus one; we need to support this - the runner code is a total PoC. I am trying to focus on the following things:
Features and Steps are vendorable across projects.
Several Features can be run on a single env.
StepFn has no external dependencies, even to the framework unless you are opting into some base feature.
I have run into some struggles so far with the PoC. One being that the timing and isolation of the Steps make it hard to pass results, so the step and feature isolation force you to think about how to compose the test so it has no or few dependencies. This results in each step having a bit more code than you might expect, but it also results in the steps being composable in ways beyond those you originally wrote them for. As an example:
// TestBrokerAsMiddleware
func TestBrokerAsMiddleware(t *testing.T) {
t.Parallel()
ctx, env := global.Environment(
knative.WithKnativeNamespace(system.Namespace()),
knative.WithLoggingConfig,
knative.WithTracingConfig,
k8s.WithEventListener,
)
// Install and wait for a Ready Broker.
env.Prerequisite(ctx, t, features.BrokerGoesReady("default", "MTChannelBroker"))
// Test that a Broker can act as middleware.
env.Test(ctx, t, features.BrokerAsMiddleware("default"))
env.Finish()
}
Here, I need a Broker to be ready, but the focus of the test is not a ready Broker. I wanted to write a feature that assumes a ready Broker of a given name and asserts that I can pass events through it. This means that I can vendor the features, and other downstream repos can leverage this same test:
// <... In eventing-rabbitmq ...>
// TestBrokerAsMiddleware
func TestBrokerAsMiddleware(t *testing.T) {
t.Parallel()
ctx, env := global.Environment(
knative.WithKnativeNamespace(system.Namespace()),
knative.WithLoggingConfig,
knative.WithTracingConfig,
k8s.WithEventListener,
)
// Create a RabbitmqCluster in the env, the CO that creates the underlying RabbitMQ Broker (not knative).
env.Prerequisite(ctx, t, rabbitfeatures.RabbitMQBrokerIsCreated())
// Install and wait for a Ready Broker.
env.Prerequisite(ctx, t, features.BrokerGoesReady("default", "RabbitMQBroker"))
// Test that a Broker can act as middleware.
env.Test(ctx, t, features.BrokerAsMiddleware("default"))
env.Finish()
}
So the downstream repo has opted to include this test, but they only have to add the test entry point, not the features. The reasoning here is that the downstream likely needs to do some additional setup, like above.
then generate state and then assert again you need to develop 2 features.
Then I'm confused by the feature state methods (Alpha, Beta, Stable etc.) on the Feature struct. Per my third point above and the following observation, I think it's important to be able to go to a single place to see a group of related features together and their level of maturity, i.e. like the KIngress example.
A template approach would work - but I was pointing out in 2) that things are neither clearly defined nor consistent.
I would need to know more about KIngress to be able to answer in full, but I have been assuming a Feature tests a contained set of functionality, and variations would be passed down to it.
I think you can compose the test you are wanting to, and you will have to do a bit more work when trying to special-case a particular feature of an implementation, if that is not generally testable by all implementations.
1) Use of Feature in lieu of a list of steps in Prerequisites
it is for the author, reporter and debugger's convence to make it clear what is being tested or asserted inside the Prerequisites phase is independent but required for the Test phase.
Whatever goes here shouldn't have state (alpha, beta, stable) or levels (must, should, may) decorations since those could be skipped via environment flags etc.
i.e. what if I was only testing alpha features of a broker - the broker-ready feature passed to a prereq wouldn't run
https://github.com/knative/eventing/blob/fe1b34c4c084eaca1f964e09bd791a428c7bb8cf/test/rekt/features/broker_feature.go#L42-L44
2) Generating state from prerequisite steps & using them in assertions
I think var s v1.Service = someTestService() should have been var s *v1.Service = someTestService() and it will work out.
The pointer doesn't change my interpretation - s still needs to be mutated or refetched to get additional info for the subsequent steps.
3) Coalescing of Features Steps
I'm not sure we can still describe a holistic RuntimeV1Conformance feature.
I don't think you can or want to. I would see this written as a series of Test calls on an environment with a list of features that are required to pass conformance:
We do this in networking currently and it's good for discoverability. A side effect is diffs become very clear https://github.com/knative/networking/pull/277/files. Another example
RuntimeV1Conformance feature if I needed to create unique services for different aspects of the runtime conformance
I would see this as something like:
// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
ctx, env := global.Environment()
s := "ksvc-name"
env.Precondition(ctx, t, conformance.GivenSerivce(s))
env.Test(ctx, t, conformance.Feature1(s))
env.Test(ctx, t, conformance.Feature2(s))
env.Test(ctx, t, conformance.Feature3(s))
env.Test(ctx, t, conformance.Feature4(s))
<... etc>
env.Finish()
}
I see only one service in this example. Also, we want to group relevant conformance features to be run together, to make it easier for downstream folks to run the correct set of tests. Right now in networking this is done using an exported RunConformance function.
The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)
Can you elaborate?
I would need to know more about KIngress to be able to answer in full, but I have been assuming a Feature tests a contained set of functionality, and variations would be passed down to it.
I think you can compose the test you are wanting to, and you will have to do a bit more work when trying to special case a particular feature of an implementation, if that is not generally testable by all implementations.
I would need to know more about KIngress to be able to answer in full, but I have been assuming a Feature tests a contained set of functionality, and variations would be passed down to it.
It's conformance, so there are no variations on the features being tested. The only thing that differs is the ingress installation and maybe configuring some global test properties (i.e. does DNS work, which ingress class etc.)
I have made several issues to continue the discussion in the forks we have made:
for 1):
Framework should report Prerequisite features with Asserts #64
for 2):
Framework needs to help test authors understand how to pass state between steps #66
for 3):
Framework needs a way to group Features into Sets #65
The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)
Can you elaborate?
I think we need to define this a bit more. The thinking is the runner can get metadata out of the feature and generate data around the run of the test; conformance is somehow different, but I think we are waiting for the conformance group to ask for data.
Tagging @nak3 @ZhiminXiang @tcnghia - can someone with networking start taking a look at the current framework and the open issues Scott created?
It may be worth doing a small POC with a single KIngress feature.
|
2025-04-01T06:39:18.083161
| 2020-05-06T09:06:12
|
613164128
|
{
"authors": [
"slinkydeveloper"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7594",
"repo": "knative/eventing-contrib",
"url": "https://github.com/knative/eventing-contrib/issues/1200"
}
|
gharchive/issue
|
Port Gitlab source to adapter/v2
Problem
Gitlab source still uses the knative.dev/eventing/pkg/adapter module, which uses cloudevents/sdk-go v1. We should port it to knative.dev/eventing/pkg/adapter/v2, which uses the new sdk-go.
Persona:
Event developer/Event producer
Exit Criteria
Gitlab source should no longer use knative.dev/eventing/pkg/adapter
Time Estimate (optional):
1
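For reference, the port mostly means swapping the entry point over to the v2 API, roughly like this (a sketch from memory of the adapter/v2 package; exact signatures may differ, and the GitLab webhook handling is elided):

package main

import (
	"context"

	cloudevents "github.com/cloudevents/sdk-go/v2"
	"knative.dev/eventing/pkg/adapter/v2"
)

type envConfig struct {
	adapter.EnvConfig
}

type gitlabAdapter struct {
	client cloudevents.Client
}

// Start blocks until the context is cancelled; the real adapter would serve
// the GitLab webhook and forward events through a.client here.
func (a *gitlabAdapter) Start(ctx context.Context) error {
	<-ctx.Done()
	return nil
}

func main() {
	adapter.Main("gitlabsource",
		func() adapter.EnvConfigAccessor { return &envConfig{} },
		func(ctx context.Context, env adapter.EnvConfigAccessor, client cloudevents.Client) adapter.Adapter {
			return &gitlabAdapter{client: client}
		},
	)
}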
/assign
|
2025-04-01T06:39:18.087881
| 2020-01-29T18:33:03
|
557040976
|
{
"authors": [
"aliok",
"houshengbo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7595",
"repo": "knative/eventing-operator",
"url": "https://github.com/knative/eventing-operator/issues/90"
}
|
gharchive/issue
|
Removal of sources-controller after 0.12.0
Based on the change here: https://github.com/knative/eventing/pull/2448
It is time for the operator to shine, in terms of upgrading without manual intervention.
I can implement this
/assign aliok
@aliok PLZ be advised that I am thinking of extending the scope of our tests-on-latest-eventing to verify the upgrade: https://github.com/knative/eventing-operator/pull/93. Hope we do not overlap our work.
Some notes:
Reconciliation will update existing resources and create new resources
It won't be able to delete the old resources (those removed from the eventing manifest)
So, I wrote a small script where I feed the 0.12 manifest (gsutil cp gs://knative-releases/eventing/previous/v0.12.0/eventing.yaml ./) and the latest nightly manifest (gsutil cp gs://knative-nightly/eventing/latest/eventing.yaml ./) to make sure we will cover all resources that were deleted.
This is the output:
Unable to find 1.2.0 resource in nightly: apiVersion:v1 kind:ServiceAccount name:eventing-source-controller
Unable to find 1.2.0 resource in nightly: apiVersion:rbac.authorization.k8s.io/v1 kind:ClusterRole name:knative-eventing-source-controller
Unable to find 1.2.0 resource in nightly: apiVersion:rbac.authorization.k8s.io/v1 kind:ClusterRoleBinding name:eventing-source-controller
Unable to find 1.2.0 resource in nightly: apiVersion:rbac.authorization.k8s.io/v1 kind:ClusterRoleBinding name:eventing-source-controller-resolver
Unable to find 1.2.0 resource in nightly: apiVersion:apps/v1 kind:Deployment name:sources-controller
These are the things we need to delete in the operator
More changes needed after https://github.com/knative/eventing/pull/2519
|
2025-04-01T06:39:18.125054
| 2024-03-13T15:04:54
|
2184242838
|
{
"authors": [
"dprotaso",
"knative-automation"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7596",
"repo": "knative/pkg",
"url": "https://github.com/knative/pkg/pull/2987"
}
|
gharchive/pull-request
|
[main] Upgrade to latest dependencies
GKE fixes in hack -dprotaso
/cc knative/serving-writers knative/eventing-writers
/assign knative/serving-writers knative/eventing-writers
Produced by: knative-extensions/knobots/actions/update-deps
/retest
/lgtm
/approve
|
2025-04-01T06:39:18.192257
| 2016-09-22T16:16:59
|
178655041
|
{
"authors": [
"knrafto",
"pbiggar"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7597",
"repo": "knrafto/language-bash",
"url": "https://github.com/knrafto/language-bash/pull/16"
}
|
gharchive/pull-request
|
Grab bag of minor improvements
This is a grab bag of minor fixes that make things slightly easier on me:
expand range of transformers allowed (I'm trying to get to GHC 8 via stack/lts-7)
unit tests are now much easier to add
add trivial stack.yml
add test cases which are expected and known to fail. A sort of TODO list (I plan on working on these, and it's a nice place to add future ones)
bump version number and copyright year
clean up warnings and linter warnings in tests
bug fix+tests: heredoc logic is wrong
I can split these into multiple PRs or cut out commits that you don't like: let me know what you prefer!
Added one more bug fix: the logic in heredoc was wrong. Also included tests.
The logic was reversed - it's the quoted heredoc that does no expansion. Here's the spec from man bash:
The format of here-documents is:
<<[-]word
here-document
delimiter
No parameter and variable expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word. If any characters in word are quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `.
Thanks a lot for the PR! :D
I think it's pretty clear that I haven't touched this library since Stack was released, but I'll try to build it again and review the PR this weekend (or Monday).
Thanks! I took a look at fixing the arithmetic and it's way beyond my understanding of parsec. If you're interested in fixing that, that would be very awesome 😁. If I had to guess, it seems to look for 4 sets of parens, but I'm unsure.
I'll take a look. It might be related to #15
My guess though is that the parser at https://github.com/knrafto/language-bash/blob/master/src/Language/Bash/Parse/Word.hs#L157 is the culprit
Fixed it! I can't push to your branch, but here's a patch:
From 2b34962d7ed4bb1c5e874b164da6d8cfb7e98d8f Mon Sep 17 00:00:00 2001
From: Kyle Raftogianis <EMAIL_ADDRESS>
Date: Sun, 25 Sep 2016 20:21:29 -0700
Subject: [PATCH] Fix parsing of arithmetic expressions (fixes #15)
---
src/Language/Bash/Parse/Word.hs | 6 +++---
tests/Tests.hs | 16 +++++++++++-----
2 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/src/Language/Bash/Parse/Word.hs b/src/Language/Bash/Parse/Word.hs
index 111e606..9e407b0 100644
--- a/src/Language/Bash/Parse/Word.hs
+++ b/src/Language/Bash/Parse/Word.hs
@@ -154,10 +154,10 @@ backquote = Backquote <$> matchedPair '`' '`' False escape
-- | Parse an arithmetic expression.
arith :: Stream s m Char => ParsecT s u m String
-arith = B.toString <$> parens <?> "arithmetic expression"
+arith = B.toString <$> arithPart <?> "arithmetic expression"
where
- parens = B.many inner
- inner = B.matchedPair '(' ')' parens
+ arithPart = B.many inner
+ inner = B.noneOf "()" <|> B.char '(' <+> arithPart <+> B.char ')'
-- | Parse a parenthesized substitution.
subst :: Stream s m Char => ParsecT s u m String
diff --git a/tests/Tests.hs b/tests/Tests.hs
index d5fcab9..4a2b01c 100644
--- a/tests/Tests.hs
+++ b/tests/Tests.hs
@@ -91,15 +91,21 @@ unittests = testGroup "Unit tests"
heredocDelim = "EOF",
heredocDelimQuoted = True,
hereDocument = expandString "asd\\`\n"}])
-
+ , tp "echo $((2 + 2))"
+ (Command
+ (SimpleCommand [] [expandString "echo", [ArithSubst "2 + 2"]])
+ [])
+ , tp "((2 + 2))"
+ (Command (Arith "2 + 2") [])
+ , tp "echo $(((2 + 2)))"
+ (Command
+ (SimpleCommand [] [expandString "echo", [ArithSubst "(2 + 2)"]])
+ [])
]
failingtests :: TestTree
failingtests = testGroup "Failing tests" (map expectFail
- [
- tp "echo $((2+2))"
- (Command (Arith "2 + 2") [])
- ])
+ [])
tests :: TestTree
tests = testGroup "Tests" [properties, unittests, failingtests]
--
2.7.4 (Apple Git-66)
The rest of the PR looks good. I'll merge and add the arith fix, and then release on Hackage.
Thanks for your help!
Thanks for the review and the fix!!
|
2025-04-01T06:39:18.194275
| 2022-12-13T21:16:12
|
1495172613
|
{
"authors": [
"knudsvik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7598",
"repo": "knudsvik/EnergyScore",
"url": "https://github.com/knudsvik/EnergyScore/issues/36"
}
|
gharchive/issue
|
Async improvements
Check: https://developers.home-assistant.io/docs/asyncio_working_with_async/
E.g. we need to add some await calls when pulling states for price and energy.
no need
|
2025-04-01T06:39:18.196478
| 2017-12-06T03:20:28
|
279612631
|
{
"authors": [
"TechnikEmpire",
"TheQuack45",
"knuppe"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7599",
"repo": "knuppe/SharpNL",
"url": "https://github.com/knuppe/SharpNL/issues/39"
}
|
gharchive/issue
|
How are you so awesome?
This is a bug that needs immediate solving. Are you simply a mountain of raw programming power, or did you do some automated translation and then fill in the gaps? I mean even if the latter, still a mountain of raw programming power.
You're absolutely right. This library is a work of art, and it is way too obscure and hard to find.
Wow!! I am very happy to see the compliments. I worked thousands of hours on this project, something close to 6 months of work, just for fun and curiosity. When I started, I had no knowledge about NLP; I just wanted to learn.
All the code that exists was written by hand, without tools; I was simply solving one problem after another, to the point of being 100% compatible with the original version in Java.
I'm just passionate about programming!
|
2025-04-01T06:39:18.197922
| 2023-05-26T15:24:04
|
1727828890
|
{
"authors": [
"aidy",
"psankar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7600",
"repo": "ko-build/ko",
"url": "https://github.com/ko-build/ko/pull/1057"
}
|
gharchive/pull-request
|
Fix kind image loading for MacOS
See: https://github.com/kubernetes-sigs/kind/pull/2957
Fixes test following #1054
@cpanato @imjasonh Will this PR be merged? I am running into the same issue when trying to use ko on an M-series Mac
|
2025-04-01T06:39:18.249795
| 2016-01-18T01:08:31
|
127137729
|
{
"authors": [
"aheckmann",
"chrsalx",
"coderhaoxin",
"fengmk2",
"fundon",
"i5ting",
"iyuq",
"jonathanong",
"m4nuC",
"niftylettuce",
"nswbmw",
"ruimarinho",
"stiang",
"stojanovic",
"tejasmanohar",
"tj",
"tunnckoCore",
"yanickrochon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7601",
"repo": "koajs/koa",
"url": "https://github.com/koajs/koa/issues/638"
}
|
gharchive/issue
|
Improving the Koa ecosystem by moving all Koa modules to the koajs organization!
With GitHub's new organization administration, I want to try moving all koa middleware/plugins to the org. The goals are:
To have shared maintenance of all code.
Have all middleware/plugins be in one location.
Create teams so everyone knows who to ask for help.
Allow members to create repos within the organization.
I'm not sure how this will work.
The steps will be something like:
Let us know what repository you'd like to transfer
We'll create a team for that repository if there isn't one yet
We'll add that repository to that team
We'll add you to that team
If anything, just transfer the repository to me and I'll add it to the organization. Comment here so I remember to add it to the organization! Let me know if anyone wants to transfer any repositories.
First one i'd like to transfer is koa-convert :)
NOTE: IF YOU'RE IN THIS ORGANIZATION, PLEASE SETUP 2-FACTOR AUTHENTICATION!
Agree! I will transfer all koa-related middlewares/plugins to the koajs org.
I want to transfer coderhaoxin/koa-redis-cache to this org. Maybe cache team is suitable.
@coderhaoxin are you able to transfer? if not, what's the error message?
I'd like that, too. We can manage teams with an organization and this makes things easier for people to find good quality (and actively maintained) middlewares.
I'd transfer https://github.com/m4nuC/async-busboy if that makes sense.
@m4nuC It seems that async-busboy is not a koa middleware >_<
I don't think it makes sense to transfer async-busboy since its application isn't tied to Koa.
@haoxin yes just a module (It's basically co-body for multipart with koa2 support). It could be made a middleware if that makes more sense.
@jonathanong Do we really want all middleware that works in the org though? Doesn't having it in the org signify that the org is going to maintain it? Just because it works doesn't mean it's maintained.
some that would be nice in the org...
koa-graphql @chentsulin
koa-bearer-token @chentsulin
koa-jwt @stiang
koa-joi-schema @simplyianm
We could vote on whether a module is deemed worthy of being in the org or not
not sure if github sends notifications for @mentions on edits/updates so here are some more :P
koa-resourcer @aheckmann
koa-router @alexmingoia
We could vote on whether a module is deemed worthy of being in the org or not
fair enough, this can get messy in threads. maybe we could get a survey platform? not sure what's good out there... if anyone knows, throw around some suggestions :)
@jonathanong
When I try to transfer koa-redis-cache to koajs, cache isn't in the teams checklist.
^ Maybe it's a secret team for some odd reason... :P
Great idea, I’d be happy to transfer koa-jwt. @jonathanong, can you please confirm that you want koa-jwt in the org?
@coderhaoxin but you're in the team and see it, right?
@stiang i'll let others decide that. i don't understand JWT :)
+1 from me :+1:
@jonathanong Got it, but I’m not really sure who is authorized to say whether I should transfer koa-jwt :) @tejasmanohar, you have already requested it, and I see that you are part of the koajs org. Are you in a position to OK the transfer?
Nah, let others vote on it and an Owner (who can actually create teams + transfer) decide :)
:+1: for the idea
:+1: for a voting system. This will also provide insight of what is actually being used (i.e. I didn't even know about koa-jwt!)
what about this: if more than one person wants to maintain it (ex. if someone wants to help @stiang maintain that repository), then it's in.
also, if someone could figure out how to easily transfer repos to the org, that would be great. otherwise, you can transfer it to me (or another owner) and we'll transfer it here
if more than one person wants to maintain it (ex. if someone wants to help @stiang maintain that repository), then it's in.
@jonathanong :+1: I agree.
@stiang I'm all in to help maintain koa-jwt so you can transfer. My projects depend on it anyways :P
sweet. @stiang @tejasmanohar added you both to a new @koajs/jwt team. @stiang if you have trouble transferring it over to the org, transfer it to me and i'll transfer it to this org.
@jonathanong Great! I’ve transferred it now, looks like it’s already available at https://github.com/koajs/koa-jwt
I suppose it should be renamed to just "jwt" to adhere to the naming scheme used by koajs. Also, although I was presented with a list of koajs teams to give permission to, the koa-jwt group was not among them, so I currently don’t have write access to the repo. Could you please add the koa-jwt team manually?
@tejasmanohar Very happy that you are willing to help! I’ve fallen behind on PRs and issues lately, so the project could really benefit from some fresh attention.
interesting. when you transferred it over, it was added to no teams, so i had to add it to a team manually.
Yeah, I could have selected another team, but since the relevant one wasn’t displayed I didn’t add it to any. I tried reloading the page and retrying the transfer again, but I still got just a subset of the koajs teams to select from. Not sure what’s going on there.
I renamed the repo to "jwt".
If I am a member of a team, am I free to add new teams as well? For example, we have a few koa projects and middleware over at gh://pebble but I wouldn't want to move anything if I can't bring several developers along with it. Does it make sense to (1) create a Pebble team and add people and repos or does it make more sense to (2) stay distributed just leave repos where they are? I'm leaning towards option 2.
That said I don't see a huge problem with using the wiki to list projects, that's still a more condensed view than browsing koajs/*
@tj @jonathanong can you guys add me to @koajs and so I can have collab on https://github.com/koajs/ratelimit? I rewrote it the other day and I want to push it up and release a new major version for koa@next :+1: (it even has support for whitelist/blacklist, and all tests pass with flying sparkles :sparkles:)
@jonathanong How does moving popular Koa middleware into this org impact the general guidelines we already follow with repos in this org? For example, is it ok to transfer a middleware that's using Babel?
@niftylettuce I don't think we should merge that PR yet if koa-convert does the trick. That's the general procedure that all the rest of the middleware in this org has been following.
i don't care about code style as long as you have a linter to enforce one.
middleware should be transpiled before publishing anyways, at least for node v4+. i don't see why you need to write middleware that needs to be transpiled though.
A transpiler should not be required when using a middleware; it should only be application-specific. That being said, I personally don't care if one is used, as long as npm install does not pull Babel or anything like that as a dependency.
@jonathanong ah ok. fair enough. @yanickrochon yep, I didn't mean including the full babel-runtime in the package's distribution or anything like that but precompiling instead. that said, I thought we decided not to use Babel in middleware in this org so I thought I'd mention that this may be an issue when/if we transfer external middleware here :)
@tejasmanohar all good :smile:
@jonathanong we can contribute koa-pagination and koa-requestid. Additionally, we are preparing the release of an error mapper which allows registering different mappers to process each error class differently.
@thomseddon might be open to transfer koa-oauth-server, which we currently help maintain too.
I see many people jumping in here, advocating for their projects... and while it's legit, may I suggest to all, no offense intended, that the most downloaded (i.e. popular) gets pulled into the koajs organisation first? I too have koa-* middleware and one of them has more downloads than some proposed here. Yet, I consider that some middlewares should be promoted first. Like koa-gzip, koa-cors, koa-proxy, koa-timeout, koa-passport, etc.
m2c
I’d be happy to transfer:
koa-errorhandler @jonathanong
koa-ip @jonathanong
koa-mongo @jonathanong
koa-scheme @jonathanong
koa-router-validator @jonathanong
koa-router-cache @jonathanong
co-cache @jonathanong
Tip: you can use npm-user-downloads to check your packages' download rankings.
@yanickrochon I think we should take this opportunity not to find which packages have more downloads than the others but to build consensus around them and improve the whole koa ecosystem. I'd be happy to work together on any of the ones I mentioned in case multiple ones from the community are available for the same purpose. Sometimes the multitude of packages on npm comes from the horrible search alone :)
@ruimarinho oh, but I totally agree! My point was that I don't want everybody promoting their own koa-* repositories just for the sake that it has "koa-" as prefix! It's more of a prevention notice, so we don't need to trigger any rejection complex or whatever (LOL)
That being said, usually, the most used (i.e. downloaded) are the ones that should be maintained first (logically). There are cases, however, where you are right and we should take this opportunity to correct, or influence the use of a package over another. For example, there are a few caching middlewares for koa already, some of them perform almost the same thing.
All in all, my point is that community-approved packages should be promoted first, so we don't end up with duplicated packages offering the same features yet again.
@yanickrochon agreed! Main areas where I believe there is quite some overlap are caching, routing and error handling.
i've setup some teams here: https://github.com/orgs/koajs/teams
if you want to help with some teams, let me know
if you think the teams could use better organization (new team, add a module, etc) let me know
i'm sure some modules, even in the org, are not properly organized. looking at the teams might help you guys think about other modules to add to the org!
also i noticed a lot of PRs for Koa v2 support. if you see one and want to help, let me know. i don't plan on touching those PRs myself for a while.
@jonathanong we (Pebble) have a few repos we'd like to transfer. You want me to transfer to you first or just directly to koajs?
https://github.com/pebble/koa-resourcer
https://github.com/pebble/koa-joi-router
https://github.com/pebble/koa-resourcer-docs
https://github.com/pebble/koa-bunyan-logger
@aheckmann you're an owner you should be able to transfer yourself
(to koajs)
cool, will do. thanks
I can transfer https://github.com/tunnckoCore/koa-ip-filter if you want. I have a few more, but they are very outdated and I'm trying to update them soon. Now I'm working on a total refactor of koa-better-body, using koa-body-parsers under the hood.
https://github.com/koa-modules/ ?
@fundon It's better to give a list you want to transfer, not an org link :smile:
@coderhaoxin Ok :smile:
Great idea, I’d be happy to transfer koa-generator. @jonathanong @coderhaoxin
@fundon @coderhaoxin's an owner so he can transfer those modules over
@i5ting :+1: for the generator because we don't have one. anyone else willing to help maintain it?
@i5ting Could you translate Chinese into English in the koa-generator/Readme?
(Or I can do that several days later :smile:)
@coderhaoxin Haha, there isn't much Chinese in there anyway; I'll tidy it up when I have time. The tests are still incomplete and I'm adding more: https://github.com/17koa/koa-generator
@fundon Some modules are duplicate! Such as:
koajs/locales vs koa-modules/i18n for i18n
koajs/static vs koa-modules/serve-static
koajs/override-method vs koa-modules/methodoverride
That will be confusing for the users, IMO :)
@coderhaoxin Should we add more detail for them?
Suggestions?
I want to transfer my middleware webpack-koa2-middleware to the koa community.
I'm going to transfer:
https://github.com/tunnckoCore/koa-better-body
https://github.com/tunnckoCore/koa-better-serve
https://github.com/tunnckoCore/koa-better-ratelimit (v3 in progress, sry there's no readme currently)
https://github.com/tunnckoCore/koa-better-router
https://github.com/tunnckocore/koa-rest-router
https://github.com/tunnckoCore/koa-ip-filter
As soon as possible. :)
It's only a testing framework but I don't know if you guys are interested in having:
https://github.com/chrsalx/koa-test.
I would gladly transfer.
|
2025-04-01T06:39:18.261794
| 2019-09-21T23:21:35
|
496712437
|
{
"authors": [
"M4gicT0",
"g1910",
"thanhmvu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7602",
"repo": "kobiso/Computer-Vision-Leaderboard",
"url": "https://github.com/kobiso/Computer-Vision-Leaderboard/issues/25"
}
|
gharchive/issue
|
Incorrect Efficientnets' FLOPs in ImageNet Classification Leaderboard
Hi, I think in the ImageNet Classification Leaderboard, EfficientNet-B7's FLOPs should be 37G (37B in the paper) instead of 37000G. Same for other versions.
+1
It should be 0.37 because the column is in GFLOPS
|
2025-04-01T06:39:18.283394
| 2023-01-25T12:47:30
|
1556590447
|
{
"authors": [
"ERussel",
"JustLuuuu",
"yangwao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7603",
"repo": "kodadot/nft-gallery",
"url": "https://github.com/kodadot/nft-gallery/issues/4827"
}
|
gharchive/issue
|
kodadot.xyz not loading in nova wallet
What happened?
When trying to open kodadot.xyz in the Nova Wallet DApp browser, it does not load. The error from the dev console is attached.
Please reproduce in steps
Open Nova Wallet app
Go to Browser tab
Put "kodadot" into the search field
Tap on found KodaDot option
Browser is opened but nothing loads
Expected Behavior
DApp browser for KodaDot is loaded and wallet account is requested
What browsers are you seeing the problem on?
Mobile iOS Safari (WebKit)
At which address did you encounter bug?
kodadot.xyz
Are you logged in?
No
Which wallet you are using?
Nova Wallet
At which chain did you encounter bug?
Basilisk, MoonSama, RMRK
Screenshots
https://user-images.githubusercontent.com/570634/214567060-eef8921d-4aa2-4b9e-81c9-15589d97de1d.MP4
Relevant log output
No response
Payment link for reward
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Yeah not working on iOS probably. I tried it too on my iPhone.
Hey @ERussel is this still a thing? We did a few updates recently. I don't have iOS tho.
Hey @yangwao I can still reproduce on iOS. I think it is related to some part of the code that treats Safari together with the polkadot.js extension in the wrong way.
hey @ERussel if you can check now, it should be available on https://beta.kodadot.xyz
|
2025-04-01T06:39:18.293969
| 2022-02-14T07:58:44
|
1136914729
|
{
"authors": [
"prachi00",
"yangwao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7604",
"repo": "kodadot/nft-gallery",
"url": "https://github.com/kodadot/nft-gallery/pull/2348"
}
|
gharchive/pull-request
|
#2344 Rework navbar
Thank you for your contribution to the KodaDot NFT gallery.
👇 _ Let's make a quick check before the contribution.
PR type
[ ] Bugfix
[ ] Feature
[x] Refactoring
What's new?
[x] PR closes #2344
[ ]
Before submitting Pull Request, please make sure:
[x] My contribution builds clean without any errors or warnings
[x] I've merged recent default branch -- main and I've no conflicts
[x] I've tried to respect high code quality standards
[x] I didn't break any original functionality
[x] I've posted a screenshot of demonstrated change in this PR
Optional
[ ] I've tested it at </rmrk/collection/26902bc2f7c20c546a-1FVG7>
[ ] I've tested PR on mobile and everything seems works
[ ] I found edge cases
[ ] I've written some unit tests 🧪
Had issue bounty label?
[x] Fill up your KSM address:
Payout
Community participation
[x] Are you at KodaDot Discord?
Screenshot
[x] My fix has changed something on UI; a screenshot is best to understand changes for others.
@roiLeo doneee
Doesn't take me anywhere :|
https://user-images.githubusercontent.com/5887929/153859039-058ef7cc-8117-4fb0-a390-1173b6dc49ac.mov
Doesn't take me anywhere :|
Screen.Recording.2022-02-14.at.12.49.00.mov
found the issue, fixing it
@yangwao check now
love you! 😘
pay 200 usd
😍 Perfect, I’ve sent the payout
💵 $200 @ 161.6 USD/KSM ~ 1.238 $KSM
🧗 EzGc4s9PgCPx1YnF3fqzhLzVHpHMTL4LWPScwpDrR8JKgSU
🔗 0x0f683cefd64ee30ab53b073df12d9707b4c7ba07fb225984257d00d9c993d093
🪅 Let’s grab another issue and get rewarded!
🪄 github.com/kodadot/nft-gallery/issues
|
2025-04-01T06:39:18.304780
| 2023-09-03T18:14:46
|
1879187004
|
{
"authors": [
"stephenjason89",
"yangwao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7605",
"repo": "kodadot/nft-gallery",
"url": "https://github.com/kodadot/nft-gallery/pull/7118"
}
|
gharchive/pull-request
|
docs: updated style_guide
Thank you for your contribution to the KodaDot - One Stop Shop for Polkadot NFTs.
👇 __ Let's make a quick check before the contribution.
PR Type
[ ] Bugfix
[ ] Feature
[ ] Refactoring
[x] Documentation related to #7106
Context
[ ] Closes #<issue_number>
[ ] Requires deployment <snek/rubick/worker>
Before submitting pull request, please make sure:
[x] My contribution builds clean without any errors or warnings
[x] I've merged recent default branch -- main and I've no conflicts
[x] I've tried to respect high code quality standards
[x] I didn't break any original functionality
Optional
[ ] I've tested it at </ksm/collection>
[ ] I've tested PR on mobile
[ ] I've written unit tests 🧪
[ ] I've found edge cases
Did your issue had any of the "$" label on it?
[x] Fill up your DOT address: Payout
Community participation
[ ] Are you at KodaDot Ecosystem Telegram?
Screenshot 📸
[ ] My fix has changed something on UI; a screenshot is best to understand changes for others.
Copilot Summary
🤖 Generated by Copilot at ec90a29
Updated STYLE_GUIDE.md to fix errors and enhance readability.
🤖 Generated by Copilot at ec90a29
Style guide refined
sentence, example, format
Cut like autumn leaves
pay 10 usd
😍 Perfect, I’ve sent the payout
💵 $10 @ 4.25 USD/DOT ~ 2.353 $DOT
🧗 13rFRPVKjJzQXVC8ZqHZv5YMmwmk4MU7z4HeYk218hEMpQXH
🔗 0x27e11c7b0e13c2027faa9fc9b688a601194ed26fc93bd47c40e66cc790148651
🪅 Let’s grab another issue and get rewarded!
🪄 github.com/kodadot/nft-gallery/issues
|
2025-04-01T06:39:18.307175
| 2024-08-01T10:38:55
|
2442112043
|
{
"authors": [
"lamtrinhdev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7606",
"repo": "kodecocodes/m3-suii-materials",
"url": "https://github.com/kodecocodes/m3-suii-materials/pull/1"
}
|
gharchive/pull-request
|
Correct the link for Getting Started with SwiftUI Course in README.md
Correct the link for Getting Started with SwiftUI Course in README.md
Dear @jellodiil ,
I have a minor link update for "Getting Started with SwiftUI Course" in README.md. Could you please help me review it?
Thanks,
Lam
|
2025-04-01T06:39:18.322829
| 2016-08-23T19:57:10
|
172791582
|
{
"authors": [
"mikesplain",
"rjeczalik"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7607",
"repo": "koding/vagrantutil",
"url": "https://github.com/koding/vagrantutil/pull/9"
}
|
gharchive/pull-request
|
Fix typo in readme
Hope this saves someone else some time.
Thanks @mikesplain! Sorry for slow reaction, I must have missed a notification on this 😇 😇 😇
Np!
|
2025-04-01T06:39:18.325662
| 2017-05-28T17:20:46
|
231881879
|
{
"authors": [
"choco",
"koekeishiya"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7608",
"repo": "koekeishiya/chunkwm",
"url": "https://github.com/koekeishiya/chunkwm/issues/64"
}
|
gharchive/issue
|
[plugin/tiling] ToggleWindow function doesn't check for floating window
Edit: Second time I open an issue without typing a word XD
But basically the ToggleWindow function doesn't check if the window passed is floating, and this can lead to crashes when calling one of the functions on one of these. Obviously one shouldn't call these functions on floating windows, but it can happen by pressing the wrong binding etc., and we should probably not crash. Specifically, in many places it checks for (!Node && !Node->Parent), leading to a crash
Uh, do you have a line of code or steps to reproduce a crash? I don't see a particular problem in the ToggleWindow function. Probably just something I'm overlooking.
chunkc window --toggle float
chunkc window --toggle split
on the same window produces one reliably.
Well that is a major fuckup, wonder how late it was when I wrote that line of code.
It's supposed to be if(!Node || !Node->Parent) so we can short-circuit.
I have tested this and it should now be fixed.
Marking this as a bug even tho it is fixed, for future references.
|
2025-04-01T06:39:18.328782
| 2016-01-20T11:21:56
|
127661544
|
{
"authors": [
"herrbischoff",
"koekeishiya"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7609",
"repo": "koekeishiya/kwm",
"url": "https://github.com/koekeishiya/kwm/issues/93"
}
|
gharchive/issue
|
Tools like PopClip do not work any more
Since #85 and the removal of menu-fix I cannot use PopClip any more. I suspect other tools relying on an interactive overlay may be broken as well. kwm switches to the PopClip application, thereby losing the context to act on.
I really hate the layer system implemented in OSX..
Will see if something can be done about this in the future.
Thanks, because until there is a fix I need to unload kwm.
I have a fix working atm that still allows menubar/dock/context menus to work, but I'm not sure whether it may break focusing of other window types.
Basically it has to do with:
enum _CGCommonWindowLevelKey {
kCGBaseWindowLevelKey = 0,
kCGMinimumWindowLevelKey = 1,
kCGDesktopWindowLevelKey = 2,
kCGBackstopMenuLevelKey = 3,
kCGNormalWindowLevelKey = 4,
kCGFloatingWindowLevelKey = 5,
kCGTornOffMenuWindowLevelKey = 6,
kCGDockWindowLevelKey = 7,
kCGMainMenuWindowLevelKey = 8,
kCGStatusWindowLevelKey = 9,
kCGModalPanelWindowLevelKey = 10,
kCGPopUpMenuWindowLevelKey = 11,
kCGDraggingWindowLevelKey = 12,
kCGScreenSaverWindowLevelKey = 13,
kCGMaximumWindowLevelKey = 14,
kCGOverlayWindowLevelKey = 15,
kCGHelpWindowLevelKey = 16,
kCGUtilityWindowLevelKey = 17,
kCGDesktopIconWindowLevelKey = 18,
kCGNumberOfWindowLevelKeys = 19
};
Cool, thank you very much for the quick turnaround! It appears to work for me.
|
2025-04-01T06:39:18.337515
| 2019-04-06T23:02:55
|
430085394
|
{
"authors": [
"gavinhenderson",
"kognise"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7610",
"repo": "kognise/water.css",
"url": "https://github.com/kognise/water.css/pull/11"
}
|
gharchive/pull-request
|
Added table styles
Table styles added. Closes #5!
I made some minute changes. Thanks so much! Ready to ship?
Changes look great! :+1:
:shipit:
|
2025-04-01T06:39:18.343043
| 2019-10-30T19:57:00
|
514943686
|
{
"authors": [
"MikeyCarter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7611",
"repo": "kohsuke/libpam4j",
"url": "https://github.com/kohsuke/libpam4j/issues/26"
}
|
gharchive/issue
|
"pam_authenticate failed" what drives this message?
2019-10-30 15:51:03.239 ERROR 20613 --- [nio-8443-exec-4] o.o.a.s.CustomAuthenticationProvider : PAM authentication failed: pam_authenticate failed : Authentication failure -- org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure
at org.jvnet.libpam.PAM.check(PAM.java:106)
at org.jvnet.libpam.PAM.authenticate(PAM.java:124)
Getting this in my program. Tried on three different Linux OSes, same result. Can't seem to figure out what this message is telling me. Did I miss a step somewhere?
pamtester -v login **** authenticate
This works... but fails via the java program running as the same user.
Tried on Fedora 27, Fedora 29, even a Oracle Linux 7 I had kicking around.
OK, finally found the problem. Never fails: I stare at a problem, but the minute I have to document it in an issue like this, I find it.
So my problem was this:
Collection<? extends GrantedAuthority> authorities = Collections.singleton(new SimpleGrantedAuthority("ROLE_USER"));
return new UsernamePasswordAuthenticationToken(authentication.getPrincipal(),
        authentication.getCredentials(),
        authorities);

vs

return new UsernamePasswordAuthenticationToken(authentication.getPrincipal(),
        authentication.getCredentials());
Without the ROLES at the end, it authenticates once fine (which I missed in the logs), then blanks out the password and tries a second attempt with a null password, which is what I saw above. When it was a wrong password, the thing would fail on the first try.
So resolution... pay attention to the logs more closely. Posting here in case anyone else falls into the same trap but the issue can be closed.
|
2025-04-01T06:39:18.391566
| 2024-10-27T07:02:22
|
2616376908
|
{
"authors": [
"1223nij",
"MaxRobinsonTheGreat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7612",
"repo": "kolbytn/mindcraft",
"url": "https://github.com/kolbytn/mindcraft/pull/249"
}
|
gharchive/pull-request
|
1.21.1 Support!
https://github.com/PrismarineJS/mineflayer/commit/3f1f0a3fef9aef3113d2de62d70e6e42410b0b44
With the new release 4.23.0, mineflayer dropped 1.21.1 support
In testing I found out that 1.21 somehow doesn't work, but 1.21.1 does
sorry I implemented it myself #255
|
2025-04-01T06:39:18.427543
| 2024-02-03T13:50:32
|
2116490100
|
{
"authors": [
"1414841886",
"kolos26"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7613",
"repo": "kolos26/GEOFS-LiverySelector",
"url": "https://github.com/kolos26/GEOFS-LiverySelector/issues/512"
}
|
gharchive/issue
|
could you help me fix them?
@kolos26 The places I marked were connected at the top of the plane (yellow line), but there were still errors after I changed them, and I couldn't do it precisely because it was difficult to change it on the map. Thank you!
Hainan Airlines and Hong Kong Airlines belong to the same group, so their fuselages are the same
I will try to fix them :) Sorry for not reacting to your messages, I had tons of homework
@kolos26 Maybe I have an idea to fix it. If I can't fix it, I will tell you. And thanks to my promotion, about a thousand people play with your livery plugin on Chinese Bilibili!
Wow, that's cool, thanks a lot! One of the hardest things is to monitor non-contributor users. Thank You For Supporting Us!
Btw this reminded me how huge China is...
Please, you fix them; my idea is wrong :(
|
2025-04-01T06:39:18.436621
| 2023-05-03T21:58:28
|
1694902567
|
{
"authors": [
"Odraxs",
"coveralls"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7614",
"repo": "kommitters/soroban.ex",
"url": "https://github.com/kommitters/soroban.ex/pull/34"
}
|
gharchive/pull-request
|
SendTransaction request
Description
Closes #18
Type of change
[ ] New feature (non-breaking change which adds functionality)
Checklist
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my code
[ ] I have commented on my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing unit tests pass locally with my changes
Pull Request Test Coverage Report for Build 6a728f28715d8899f38398a23266491a59653a37-PR-34
2 of 2 (100.0%) changed or added relevant lines in 2 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 100.0%
Totals
Change from base Build ad932605898b88c4966fadf8167963d66e3ca2c5: 0%
Covered Lines: 133
Relevant Lines: 133
💛 - Coveralls
|
2025-04-01T06:39:18.442563
| 2024-08-12T06:03:42
|
2460110231
|
{
"authors": [
"tkdchen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7615",
"repo": "konflux-ci/build-definitions",
"url": "https://github.com/konflux-ci/build-definitions/pull/1280"
}
|
gharchive/pull-request
|
feat: add renovate customManager to update buildah image
STONEBLD-2661
buildah image is also specified in the .spec.stepTemplate.env. This custom manager updates the buildah image there.
Look for any open pull requests in the repository with the title "e2e-tests update" and
see if there are recent e2e-tests updates that will be applicable to your change.
Will adding this change here work?
Based on my current understanding of renovate, I don't think so. The packageRules defines how renovate handles a package (a dependency) detected by the tekton manager.
This custom manager is an addition to the tekton manager to cover the value fields.
|
2025-04-01T06:39:18.450072
| 2024-08-30T14:35:36
|
2497409652
|
{
"authors": [
"codecov-commenter",
"dirgim",
"jperezdealgaba",
"ralphbean"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7616",
"repo": "konflux-ci/konflux-test",
"url": "https://github.com/konflux-ci/konflux-test/pull/292"
}
|
gharchive/pull-request
|
feat: Added csdiff package
csdiff package is installed in order to be used by SAST tasks to parse the results and generate fingerprinting.
This will be used by future SAST tasks provided by the OpenScanHub team, for example: https://issues.redhat.com/browse/OSH-737
@konflux-team, we created this as a draft PR in order to gather feedback from you. Would this be acceptable? Is something else needed? ...
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Please upload report for BASE (main@09f6efc). Learn more about missing BASE report.
Additional details and impacted files
@@ Coverage Diff @@
## main #292 +/- ##
========================================
Coverage ? 100.00%
========================================
Files ? 18
Lines ? 498
Branches ? 0
========================================
Hits ? 498
Misses ? 0
Partials ? 0
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
@ralphbean Would you mind giving a review on this?
@ralphbean @14rcole @Josh-Everett Could any of you review it and approve it/comment it? Thank you!
/ok-to-test
@ralphbean Would you mind enabling the last CI test? I am not able to trigger it/merge this
@jperezdealgaba if you rebase your commit on the latest main, the Red Hat Konflux job issue should get resolved.
@dirgim I rebased the PR and GitHub is still showing that I need the approval from one maintainer for the workflow:
P.S.: I added the installation of the git packages to this PR as it will be also needed. I hope it is not a problem
All checks passed! I have no merge rights in this repo
|
2025-04-01T06:39:18.462351
| 2015-08-22T18:14:11
|
102556843
|
{
"authors": [
"konradjk",
"monkollek"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7617",
"repo": "konradjk/exac_browser",
"url": "https://github.com/konradjk/exac_browser/issues/206"
}
|
gharchive/issue
|
Variant in CASP9 can't be found by rsID search
Works if you search by the variant
http://exac.broadinstitute.org/variant/1-15832495-T-G
Although the variant is listed correctly as rs146054764, a search by the rsID returns no results
Fixed in most recent commit.
|
2025-04-01T06:39:18.470619
| 2024-10-15T17:41:06
|
2589451290
|
{
"authors": [
"Enngage",
"nkooman-bzs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7618",
"repo": "kontent-ai/kontent-ai-migration-toolkit",
"url": "https://github.com/kontent-ai/kontent-ai-migration-toolkit/issues/14"
}
|
gharchive/issue
|
Analytics service is down and the Migration Toolkit cannot be run
Brief bug description
At the time of writing this issue, the endpoint that is hit to track analytics of this library is down.
While the endpoint is likely to come back online soon, I think the CLI should run regardless and continue even if analytics cannot be tracked.
Repro steps
Run any command from migration toolkit
Expected behavior
Commands run properly regardless of analytics endpoint health.
Test environment
All environments
Additional context
A quick workaround is to manually instantiate a manager then run the command via code.
Screenshots
N/A
Thank you, I've updated the code to handle these exceptions gracefully :)
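The general shape of such a fix, sketched here in Python rather than the toolkit's actual TypeScript (the endpoint name is illustrative), is to treat tracking as best-effort:
import requests

def track_event(endpoint: str, payload: dict) -> None:
    # Best-effort analytics: never let a tracking failure stop the command
    try:
        requests.post(endpoint, json=payload, timeout=5)
    except requests.RequestException:
        pass  # endpoint down or unreachable; carry on with the actual work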
|
2025-04-01T06:39:18.477461
| 2024-07-17T12:38:26
|
2413517709
|
{
"authors": [
"jmle"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7619",
"repo": "konveyor/analyzer-lsp",
"url": "https://github.com/konveyor/analyzer-lsp/issues/662"
}
|
gharchive/issue
|
[BUG] Gradle analysis taking too long to complete
Is there an existing issue for this?
[X] I have searched the existing issues
Konveyor version
0.5-beta2
Priority
Critical
Current Behavior
Currently, the analysis of the tackle-testapp-public in its gradle form takes too long to complete. This might be related to this bug opened in the hub:
https://github.com/konveyor/tackle2-hub/issues/667
Expected Behavior
Gradle analysis shouldn't take too much longer than Maven analysis.
How Reproducible
Always (Default)
Steps To Reproduce
Analyze tackle-testapp-public on its main branch (Maven) and then on its gradle branch (Gradle). Compare the time it takes for both to run.
Environment
No response
Anything else?
No response
Can't reproduce anymore in the latest d/s
|
2025-04-01T06:39:18.478551
| 2022-07-12T11:53:26
|
1301954064
|
{
"authors": [
"kmehant"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7620",
"repo": "konveyor/move2kube-api",
"url": "https://github.com/konveyor/move2kube-api/pull/111"
}
|
gharchive/pull-request
|
feat: support multi architecture image builds
Signed-off-by: Mehant Kammakomati<EMAIL_ADDRESS>
WIP: please do not merge
|
2025-04-01T06:39:18.480136
| 2020-10-06T13:36:00
|
715686236
|
{
"authors": [
"KevinMGranger",
"etsauer",
"mpryc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7621",
"repo": "konveyor/pelorus",
"url": "https://github.com/konveyor/pelorus/issues/213"
}
|
gharchive/issue
|
Failure exporter framework not prescriptive enough
As seen with #211, we need to enhance the abstract class for the failure provider to force the implementer to include app logic
Related to #225
@mpryc @mateusoliveira43 @KevinMGranger #211 describes the steps to create a ServiceNow dev account. We probably want to look at the ServiceNow failure exporter code and align it with the rest of the exporters.
@etsauer could you tell us what do you mean by "include app logic" ?
|
2025-04-01T06:39:18.484847
| 2023-03-19T22:27:38
|
1631141455
|
{
"authors": [
"Marxsal",
"kookma"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7622",
"repo": "kookma/TW-Section",
"url": "https://github.com/kookma/TW-Section/issues/32"
}
|
gharchive/issue
|
Edit Toolbar Configuration button is ignored.
The Edit Toolbar shows up whether or not you toggle the "Turn on editor toolbar" button.
Also, when using simple text area, you have no toolbar (maybe it's always been like that?).
Thank you Mark. I will investigate.
The Edit Toolbar shows up whether or not you toggle the "Turn on editor toolbar" button.
Thank you, I will investigate and get back to you.
Also, when using simple text area, you have no toolbar (maybe it's always been like that?).
Yes, the simple editor has no toolbar! It is a simple text area!
issue fixed in TW-Section 1.1.1
|
2025-04-01T06:39:18.504845
| 2022-06-30T13:56:59
|
1290178304
|
{
"authors": [
"Siar-Akbayin",
"jgraeger",
"theEpsilon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7623",
"repo": "kopfsachen-dev/api",
"url": "https://github.com/kopfsachen-dev/api/pull/19"
}
|
gharchive/pull-request
|
Fixing API Spec
This PR introduces final breaking changes to the specifications and brings the API into a stable state. After that, more changes might be introduced, but not in a breaking way.
The following changes are planned (checked means ready for review)
[x] Authencation service
[x] Mood diary
[x] Wiki Service
[x] User Service
[x] Motivator Service (Starkmacher)
For those services, after merging the PR, @kopfsachen-dev/backend shall commit to not introducing any additional services in a breaking way.
⚠️ Remark:
After the PR gets approved, I will mark all endpoints that are not yet in a stable state in any environment as deprecated. The deprecation notice gets removed as soon as a service is deployed in a stable state. After removing deprecation notices from all services/endpoints, the deprecated tag will be used in the intended way.
@theEpsilon: The authentication documentation is updated and I consider it finished. Please review that part.
A few things are missing:
The request body property "csrf_token": "<token>" is required in the browser flow submissions for registration and login
Each browser-related submit request (Reg, Login, Logout) needs the headers: Accept: application/json, Content-Type: application/json
Login submit flow is missing the request body definition:
{ "identifier":"<accountKey>", "method": "password", "password":"<md5(accountKey)>", "csrf_token": "<token>" }
Apart from that, auth part looks good
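For illustration, a minimal sketch of such a login submit request in Python (the flow URL is hypothetical; only the headers and body shape come from the comments above):
import requests

FLOW_URL = "https://auth.example.org/self-service/login?flow=<flow_id>"  # hypothetical

resp = requests.post(
    FLOW_URL,
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={
        "identifier": "<accountKey>",
        "method": "password",
        "password": "<md5(accountKey)>",
        "csrf_token": "<token>",
    },
    timeout=10,
)
resp.raise_for_status()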
Thank you very much @theEpsilon. All of your feedback should be implemented. Could you check again? :)
Might be worth mentioning the JSON headers in every request. If they are not present, the server will send redirects which are undesirable in our use case.
Cuz it seems like these header declarations are not visible in the spec visualization (?) I can see them in the code though. Everything else is fine!
In OpenAPI v3 you set a content type for each operation's requestBody and responses. The requester has to set the HTTP Accept and Content-Type headers accordingly. Standards-compliant code generation tools should also act that way. Therefore, the x-accepts and x-contentType properties are actually not needed; I've just added them for compatibility with tools not complying fully with the OpenAPI spec. Anyway, they are not shown in the Swagger UI specifically as a requirement.
I agree that this can be misleading and added a few more words on the descriptions of the endpoints you mentioned and hope that it's clear now. Thank you for all the work you put into it :)
As I included the requested changes, this PR closes #20.
I consider the spec done and stable for 1.0 release. Please review.
Some remarks on the (semantic) versioning of the API. A major release (first of the two numbers in the version string) indicates the introduction of breaking changes, while a minor release shall be compatible with all prior versions of the API of the same major release. That's why I bumped the version to 1.0.
@MHajoha Many thanks for your quick review of the changes and the approval. I've implemented your feedback :)
I request a frontend review again for the browser teams. Particularly interesting for you could be, that the browser authentication flow is now described in detail in the spec. In case of struggling with the actual implementation, you can get some inspiration from mindtastic/stagefright (credits to @theEpsilon who implemented the API connection on this demo).
lgtm
|
2025-04-01T06:39:18.907774
| 2015-06-08T12:24:28
|
86155391
|
{
"authors": [
"koyachi",
"yogsototh"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7624",
"repo": "koyachi/elm-sha",
"url": "https://github.com/koyachi/elm-sha/issues/2"
}
|
gharchive/issue
|
Minimal example doesn't seems to work
I've done this minimal example (1.0.1) and I get the following error at runtime:
import Graphics.Element exposing (..)
import Sha
main : Element
main = show (Sha.digest "hex" (Sha.createHash "hello"))
hello is not supported (we accept pull requests)
Sha.createHash accepts one of these hash algorithms: ["sha1", "sha", "sha256", "sha512"]. These are same as sha.js's supported hashes.
https://github.com/crypto-browserify/sha.js#supported-hashes
examples:
https://github.com/koyachi/elm-sha/blob/master/examples/Main.elm
https://github.com/koyachi/elm-sha/blob/master/tests/Test.elm
Oops, really sorry, I didn't quite understand the API correctly. It works like a charm.
You might be able to enforce good usage by creating data structure instead of String:
type HashType = SHA1 | SHA | SHA256 | SHA512
with
createHash : HashType -> Hash
And certainly the same for digest:
type Digest = HEX | B64 | BIN
Best,
Y.
Ah, yes. Specifying the hash algorithm and digest encoding with strings is not the Elm way.
I'll fix this with elm-sha Ver.1.0.2.
Thanks!
|
2025-04-01T06:39:18.919345
| 2023-08-21T13:37:59
|
1859394731
|
{
"authors": [
"kprotty",
"v1gnesh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7625",
"repo": "kprotty/uasync",
"url": "https://github.com/kprotty/uasync/pull/4"
}
|
gharchive/pull-request
|
Update thread.rs
Tiny change to allow compilation
Didn't know people used this crate. Thanks. Won't be able to cargo publish until later however.
Would love to see the blazingly-fastness from your work get adopted into more commonly used runtimes.
Or to see this mature :)
Tempted to share how the benches look on my machine, but I'll stick to your decision of not sharing one isolated case :+1:
|
2025-04-01T06:39:18.925807
| 2024-02-01T19:07:14
|
2113261766
|
{
"authors": [
"piotrgramacki",
"zackAemmer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7626",
"repo": "kraina-ai/srai",
"url": "https://github.com/kraina-ai/srai/pull/427"
}
|
gharchive/pull-request
|
Possible GTFS Loader Bugs
Two possible bugs with loading and embedding GTFS data with GTFS2VEC loader and embedder:
GTFSLoader uses df.pivot_table() to calculate the hourly embedding features from the static feed, which gives NaN values for hours/stops that don't have trips. For _load_trips this is filled with 0, but for _load_directions it should be an empty set, which as far as I can tell is not possible using df.pivot_table(). I handled this in GTFS2VecEmbedder by filtering NaN values as they are reduced. I also added an initial value because in a few cases there were hours with no trips at all.
GTFS2VecEmbedder expects a features GeoDataFrame with an index that matches the joint GeoDataFrame, which is checked in _validate_indexes. GTFSLoader assigns the features GeoDataFrame an index of None. I just changed it to assign the FEATURE_ID constant.
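A minimal pandas sketch of the first point (toy columns, not the loader's actual schema): pivot_table leaves NaN for missing stop/hour pairs, and fillna cannot take an empty set, so the NaNs have to be replaced cell by cell:
import pandas as pd

trips = pd.DataFrame({
    "stop_id": ["A", "A", "B"],
    "hour": [7, 8, 7],
    "direction": ["north", "south", "north"],
})
directions = trips.pivot_table(
    index="stop_id", columns="hour", values="direction", aggfunc=set
)
# directions.fillna(set()) raises TypeError (fill value must be a scalar),
# so missing combinations are filled with empty sets one cell at a time:
directions = directions.applymap(lambda v: v if isinstance(v, set) else set())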
from pathlib import Path
from srai.embedders import GTFS2VecEmbedder
from srai.joiners import IntersectionJoiner
from srai.loaders import GTFSLoader, download_file
from srai.neighbourhoods.h3_neighbourhood import H3Neighbourhood
from srai.regionalizers import H3Regionalizer
import geopandas as gpd
from shapely.geometry import Polygon
from srai.constants import WGS84_CRS
# Load GTFS from example notebook
wroclaw_gtfs = Path().resolve() / "files" / "example.zip"
gtfs_url = "https://transitfeeds.com/p/mpk-wroc-aw/663/20221221/download"
download_file(gtfs_url, wroclaw_gtfs.as_posix())
gtfs_loader = GTFSLoader()
features = gtfs_loader.load(wroclaw_gtfs)
print(features.index.name) # None
# Get H3 embedding regions covering the GTFS bounding box, join with features
min_x, min_y = features.geometry.bounds[['minx', 'miny']].min()
max_x, max_y = features.geometry.bounds[['maxx', 'maxy']].max()
geo = Polygon((
(min_x, min_y),
(min_x, max_y),
(max_x, max_y),
(max_x, min_y),
(min_x, min_y)
))
area = gpd.GeoDataFrame(
{'region_id': ['Wroclaw_test'],
'geometry': [geo]},
crs=WGS84_CRS
)
area.set_index('region_id', inplace=True)
regionalizer = H3Regionalizer(resolution=8)
joiner = IntersectionJoiner()
regions = regionalizer.transform(area)
neighbourhood = H3Neighbourhood(regions_gdf=regions)
joint = joiner.transform(regions, features)
# Fit embedder
embedder = GTFS2VecEmbedder(hidden_size=2, embedding_size=4)
embedder.fit(regions, features, joint)
embeddings_gtfs = embedder.transform(regions, features, joint)
# ValueError: features_gdf must have a named index.
features.index.name = 'feature_id'
embedder = GTFS2VecEmbedder(hidden_size=2, embedding_size=4)
embedder.fit(regions, features, joint)
embeddings_gtfs = embedder.transform(regions, features, joint)
# TypeError: descriptor 'union' for 'set' objects doesn't apply to a 'float' object
Hi @zackAemmer
Thanks a lot for finding and fixing those bugs. It's a great contribution!
Everything looks good to me. I added the CHANGELOG entry and will merge and ship those changes with the next release.
Awesome!
|
2025-04-01T06:39:18.927324
| 2019-11-29T09:22:14
|
530238927
|
{
"authors": [
"KristofVDB1",
"jurajkrivda"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7627",
"repo": "krakenjs/zoid",
"url": "https://github.com/krakenjs/zoid/issues/281"
}
|
gharchive/issue
|
Setting the height-dimension in percentage does not work
I was trying to set the dimensions of the zoid by using percentages as suggested by the docs. Setting the width to '100%' does work, but whenever I change the height dimension to a percentage, the iframe does not show up. It only works with pixels. I'm currently using Zoid in a React environment.
Code for reference:
dimensions: {
width: '800px',
height: '100%',
},
How did you resolve this?
|
2025-04-01T06:39:18.931305
| 2023-10-06T18:26:22
|
1930749111
|
{
"authors": [
"minemindmedia",
"vmitchell85"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7628",
"repo": "krakero/tailwind-fieldtype",
"url": "https://github.com/krakero/tailwind-fieldtype/issues/2"
}
|
gharchive/issue
|
FR: Open in popup?
Possible feature request?
New to Statamic here and I really like this add-on. What would be really neat, is if it were possible to open the palette in a popup to select a color, then hide the palette but showing the activate color being used.
When using Bard or similar to build page layouts, the picker takes up a lot of real estate, especially if used multiple times in a layout.
Just an idea!
@minemindmedia I like this idea. I'm not sure if I'll have bandwidth for it soon, but I'll try to explore the option.
That's awesome!
@minemindmedia Hows this look? https://d.pr/v/4VvrPx
@minemindmedia Would it be better without the extra button? just show an icon to choose the color?
https://d.pr/v/addpRf
Sorry for the multiple posts... @minemindmedia what about this?
https://d.pr/v/vdnhVd
Hi @vmitchell85
Sorry for the late response! Looks great man!
I think any of those would be awesome.
|
2025-04-01T06:39:18.953384
| 2019-10-26T08:00:56
|
512805509
|
{
"authors": [
"ahqmrf",
"danyjadhav",
"helojianxin",
"jeromegamez"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7629",
"repo": "kreait/firebase-php",
"url": "https://github.com/kreait/firebase-php/issues/346"
}
|
gharchive/issue
|
PHP Fatal error: Uncaught GuzzleHttp\Exception\ConnectException: cURL error 7
Getting this exception in apache2 error.log:
PHP Fatal error: Uncaught GuzzleHttp\\Exception\\ConnectException: cURL error 7: Failed to connect to oauth2.googleapis.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) in /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php:200\nStack trace:\n#0 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(155): GuzzleHttp\\Handler\\CurlFactory::createRejection(Object(GuzzleHttp\\Handler\\EasyHandle), Array)\n#1 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php(105): GuzzleHttp\\Handler\\CurlFactory::finishError(Object(GuzzleHttp\\Handler\\CurlHandler), Object(GuzzleHttp\\Handler\\EasyHandle), Object(GuzzleHttp\\Handler\\CurlFactory))\n#2 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/CurlHandler.php(43): GuzzleHttp\\Handler\\CurlFactory::finish(Object(GuzzleHttp\\Handler\\CurlHandler), Object(GuzzleHttp\\Handler\\EasyHandle), Object(GuzzleHttp\\Handler\\CurlFactory))\n#3 /var/www/html/beta/v1/vendor/guzzlehttp/guzzle/src/Handler/Proxy.ph in /var/www/html/beta/v1/vendor/kreait/firebase-php/src/Firebase/Exception/DatabaseApiExceptionConverter.php on line 49
To Reproduce
I did not find out why it is actually happening. :(
Environment:
OS: Ubuntu 18.04,
PHP version: [e.g. 7.3.8]
Firebase SDK Version: latest
There's unfortunately not much I can help you with - if the request to the Google APIs fails, it could have several reasons: bad internet connection, a firewall, your IP could be blocked from accessing the services, ...
Okay, but is there any way to set custom timeout duration to the firebase requests?
https://firebase-php.readthedocs.io/en/stable/setup.html#http-client-options-and-middlewares
I am also getting the below exception.
{ "File": "\/var\/www\/html\/local\/vendor\/kreait\/firebase-php\/src\/Firebase\/Exception\/DatabaseApiExceptionConverter.php", "Line": 49, "Message": "Unable to connect to the API: cURL error 7: Failed to connect to oauth2.googleapis.com port 443: Connection timed out (see http:\/\/curl.haxx.se\/libcurl\/c\/libcurl-errors.html)" }
kreait/firebase:
php: 7.1
Connection code like
$serviceAccount = ServiceAccount::fromJsonFile($firebase_path);
$firebase = (new Factory)
    ->withServiceAccount($serviceAccount)
    ->withDatabaseUri($firebase_database_path)
    ->create();
$database = $firebase->getDatabase();
It was working fine before, but getting error from last week.
It's the same kind of error as before - on the machine the code is running on, a connection to the Google API was not possible... unfortunately that's nothing that we can fix in code.
Please check your URL. I said this because I got messages like yours and found a space between “ and h....
|
2025-04-01T06:39:18.956986
| 2024-04-10T00:21:32
|
2234518297
|
{
"authors": [
"jobcespedes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7630",
"repo": "krestomatio/moodle-operator",
"url": "https://github.com/krestomatio/moodle-operator/pull/237"
}
|
gharchive/pull-request
|
Develop
Checklist:
[ ] Have you added an explanation of what your changes do and why you'd like them to be included?
[ ] Have you updated or added documentation for the change, as applicable?
[ ] Have you tested your changes on all related environments with successful results, as applicable?
Type of Changes:
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
What is the current behavior? (link to any open issues here)
What is the new behavior (if this is a feature change)?
Other information:
/test image
|
2025-04-01T06:39:18.959397
| 2018-05-14T08:54:59
|
322726396
|
{
"authors": [
"krimpedance",
"springlo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7631",
"repo": "krimpedance/KRProgressHUD",
"url": "https://github.com/krimpedance/KRProgressHUD/issues/36"
}
|
gharchive/issue
|
Problems with continuous operation
KRProgressHUD.dismiss {}
KRProgressHUD.showImage(#imageLiteral(resourceName: "toast_error"), message: message)
the hud can't display.
@springlo
Please use it.
KRProgressHUD.dismiss {
KRProgressHUD.showImage(#imageLiteral(resourceName: "toast_error"), message: message)
}
No response.
|
2025-04-01T06:39:18.969467
| 2021-04-24T07:22:48
|
866681683
|
{
"authors": [
"krishdevdb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7632",
"repo": "krishdevdb/reseter.css",
"url": "https://github.com/krishdevdb/reseter.css/issues/34"
}
|
gharchive/issue
|
Request: Add @lsprr As A Contributor
Add @lsprr As A Contributor For #32
@all-contributors add @Isprr for documentation
@all-contirbutors add @lsprr for doc
@all-contributors add @lsprr for doc
|
2025-04-01T06:39:18.970367
| 2022-06-30T03:31:46
|
1289532276
|
{
"authors": [
"ChristopherMancuso",
"billspat"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7633",
"repo": "krishnanlab/geneplexus_app",
"url": "https://github.com/krishnanlab/geneplexus_app/issues/220"
}
|
gharchive/issue
|
Add gene set enrichment
If we add the ability for the user to supply a network, we should figure out how to add gene set enrichment as a page of results since model similarity won't be a thing we can do
features and issues deferred for rewrite --closing here.
|
2025-04-01T06:39:18.975231
| 2016-06-18T17:13:01
|
161034096
|
{
"authors": [
"juja",
"krispo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7634",
"repo": "krispo/angular-nvd3",
"url": "https://github.com/krispo/angular-nvd3/issues/455"
}
|
gharchive/issue
|
Zoom feature fails on delayed data load
When data is loaded after chart initializations, zoom feature fails:
Uncaught TypeError: Cannot read property 'call' of undefined angular-nvd3.js:587
Plunker:
http://plnkr.co/edit/HmfNd4NzXMFZr36YNXbm?p=preview
Thanks, I will fix it in the near future.
|
2025-04-01T06:39:19.050655
| 2021-12-08T14:45:41
|
1074488629
|
{
"authors": [
"arronwy",
"bacongobbler",
"shanesveller",
"thomastaylor312"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7635",
"repo": "krustlet/oci-distribution",
"url": "https://github.com/krustlet/oci-distribution/pull/9"
}
|
gharchive/pull-request
|
Export pull_layer & auth API
Export pull_blob API
1. Container images share layers; the API should support the scenario where you don't need to pull all the layers.
2. Container image size may vary from megabytes to gigabytes; exporting the pull_layer API allows the user to do the following layer decompress/unpack/store operations in parallel.
Export auth API
For some container image services which support on-demand layer pull, like:
* stargz https://github.com/containerd/stargz-snapshotter
* Nydus Image Service https://github.com/dragonflyoss/image-service
exporting the auth API is a requirement when the token is expired.
I seem to remember there being a reason why these methods were not public. @bacongobbler or @radu-matei was there a reason either of you remember why these were private?
Yes - some of that conversation can be found here: https://github.com/krustlet/krustlet/pull/564
Basically the user should not be given any control over auth - we know what endpoints do and do not need authentication, so the call to auth should be hidden behind methods like pull.
As for exposing pull_layer... I don't see how this is useful unless you're trying to write an abstraction over the existing Client. It doesn't make a ton of sense because you still have no way to push or pull manifests, and push_layer is still hidden. We need to decide whether we allow others to write their own clients on top of oci-distribution, or we are the ones publishing an OCI client, exposing only the high-level concepts like client.pull() and client.push() (which is the current design today).
exporting the pull_layer and auth APIs allows the user to run the subsequent layer decompress/unpack/store operations in parallel.
Is there a compromise we can make here? I think that should be something pull can handle. It's already async anyways.
If it helps contextualize the OP's desire, one of my hopeful use-cases when consuming this crate is a flavor/variant of https://oras.land/ - specifically with the goal of usage for smarter CI caching. This means that I may be running my caching utility in memory-constrained contexts while also dealing with up-to-multi-gigabyte payloads when considering the "image" as a whole. One of the possible optimizations under those constraints would be for me to be able to stream individual layers to decompress and write to disk during the logical pull operation, so that I never need to buffer an entire layer payload in-memory. I have an analogous desire during pushing as well due to the same memory constraints, where I wouldn't wish to hold those entire layers in-memory all at once. I'd have to know the checksum ahead of time for the push per the registry API, but that doesn't directly require me to hold the payload in memory.
Most of these ideas appear incompatible with this crate's implementation details today, which I understand has been mostly informed by krustlet's usecase and contending with much smaller WASM artifacts.
(I recognize that my goals are not inherently this project's goals and may need to find my own way as a result if the examples I've offered are not compelling-enough.)
one of my hopeful use-cases when consuming this crate is a flavor/variant of https://oras.land/ - specifically with the goal of usage for smarter CI caching.
We've discussed the idea with a few of the ORAS maintainers. They were interested in oci-distribution being the basis of a Rust client for ORAS. oras-rs imported krustlet (including oci-distribution) as a subtree project, but hasn't seen any activity since that point. I assume the goal was to copy oci-distribution as a starting point for a Rust client. If you're looking for an oras-go-alike client but for Rust, I'd ask them about their plans with that repository.
As oras-rs matures, I could see much of oci-distribution being ported over to oras-rs. Implementing the entire OCI distribution spec is one of our stated goals.
One of the possible optimizations under those constraints would be for me to be able to stream individual layers to decompress and write to disk during the logical pull operation, so that I never need to buffer an entire layer payload in-memory. I have an analogous desire during pushing as well due to the same memory constraints, where I wouldn't wish to hold those entire layers in-memory all at once. I'd have to know the checksum ahead of time for the push per the registry API, but that doesn't directly require me to hold the payload in memory.
I don't see how exposing methods like pull_image_layer and auth help you in that regard unless you're embedding Client within another Client, calling methods like auth to fetch credentials and pass that back to the exterior client. That just seems wonky. But perhaps we can decouple these methods away from the internal logic of the Client and into its own module. Kinda like how oras-go has its own standalone Copy that isn't tied to a Client struct. That might help you re-use some of oci-distribution's client logic.
We could also abstract some of the Client's methods into separate traits, which would give you the high-level contracts like pull and push; then it'd be up to you to determine the underlying behaviour. That way the existing Client doesn't have to leak implementation details like pull_manifest and auth back to the caller. I'd imagine we would want those as separate traits so users can implement a read-only client.
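A minimal sketch of that trait split, assuming hypothetical trait names and the async_trait and anyhow crates (none of this exists in oci-distribution today):

use async_trait::async_trait;

// Hypothetical read-only trait: a client can implement pulls without pushes.
#[async_trait]
pub trait PullClient {
    async fn pull(&mut self, image: &str) -> anyhow::Result<Vec<u8>>;
}

// Hypothetical write trait, kept separate so read-only clients can skip it.
#[async_trait]
pub trait PushClient {
    async fn push(&mut self, image: &str, data: &[u8]) -> anyhow::Result<()>;
}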
allows the user to run the subsequent layer decompress/unpack/store operations in parallel.
Is there a compromise we can make here? I think that should be something pull can handle. It's already async anyways. Parallelizing the layer pull/unpack/store operations within pull should accomplish the same thing as what's requested here.
Yes, the current pull API is already async, but we need to wait until all the layers are pulled before the next operations. Many container images support encrypted layers; decryption and decompression are time consuming, and these operations depend on other crates, for which different users may have different preferences.
Another reason we want to export the pull_layer API is that many container stacks support on-demand pulls, like stargz-snapshotter: we will not pull all the layers at the beginning, but pull them on demand.
I think we're in agreement here. I want to re-think the design approach though.
I don't see how exposing methods like pull_image_layer and auth could help you unless you're embedding Client within another Client, calling methods like auth to fetch credentials and pass that back to the exterior client. That just seems wonky from a design perspective. But perhaps we can decouple these methods away from the internal logic of the Client and into its own module.
Would you mind weighing in on this? Do you have an example of how you plan to use auth and pull_image_layer in your project? Perhaps that may help clarify your use case.
For parallel image layer data processing, we may not need to use the auth API, and the pull_layer API's self parameter cannot be mutable, as in the current implementation. We would do the work as below:
let mut client = Client::default();
// Authenticate once while pulling the manifest and config
let (manifest, digest, config) = client
    .pull_manifest_and_config(&reference, &RegistryAuth::Anonymous)
    .await?;
let layers = manifest.layers.into_iter().map(|layer| {
    let this = &client;
    async move {
        let mut out = Vec::new();
        this.pull_layer(&reference, &layer.digest, &mut out).await?;
        // per-layer post-processing (placeholders):
        // decrypt_layer(); decompress_layer(); unpack_layer();
        Ok::<_, anyhow::Error>(())
    }
});
// then e.g. futures::future::try_join_all(layers).await?;
For on-demand pulls, we may need to re-auth when the token has expired:
// on-demand pull, e.g. when the token has expired
let op = RegistryOperation::Pull;
client
.auth(&reference, &RegistryAuth::Anonymous, op)
.await?;
client.pull_layer(image, &layer.digest, &mut out).await?;
We can also hide the auth inside the pull_layer API as below, but then self must be mutable since the token may be updated, and such a pull_layer API can no longer be used in the first scenario where we want to pull in parallel. Any suggestions for supporting both? I found that exporting the auth and pull_layer APIs can do the job, but I'm not sure whether it is the right way:
pub async fn pull_layer<T: AsyncWrite + Unpin>(
    &mut self,
    image: &Reference,
    auth: &RegistryAuth,
    digest: &str,
    mut out: T,
) -> anyhow::Result<()> {
    let op = RegistryOperation::Pull;
    if !self.tokens.contains_key(image, op) {
        self.auth(image, auth, op).await?;
    }
    self._pull_layer(image, digest, out).await?;
    Ok(())
}

async fn _pull_layer<T: AsyncWrite + Unpin>(
    &self,
Okay. I've thought about this for a while... I'd be okay with exposing these APIs as it does not appear there's a good alternative other than a huge refactor of the crate to match the design I proposed earlier.
If you can remove all of the additional changes made to this PR and keep it to the bare minimum (marking these functions as pub), that would be appreciated. We can discuss the design decisions behind some of the other changes in another PR if you'd like to still propose those, but they appear orthogonal to the original ask.
Thanks!
For on-demand pulls, we may need to re-auth when the token has expired
Can't we just address that in the calling code by checking the token's expiration date? That would mean you can just call pull without having to embed auth/pull_layer yourself.
For parallel image layer data processing,
I still don't understand why this can't be handled in oci-distribution. Why does this have to be orchestrated from another library? Why can't a pull fetch multiple layers in parallel? Why does this have to be done at a higher level?
For on-demand pulls, we may need to re-auth when the token has expired
Can't we just address that in the calling code by checking the token's expiration date? That would mean you can just call pull without having to embed auth/pull_layer yourself.
Yes, we could do it that way, but the current TokenCache in the client module is not public, and TokenCache itself is only visible within the crate by design:
pub(crate) struct TokenCache {
For parallel image layer data processing,
I still don't understand why this can't be handled in oci-distribution. Why does this have to be orchestrated from another library? Why can't a pull fetch multiple layers in parallel? Why does this have to be done at a higher level?
We could just implement some form of middleware pattern so that pull can call a function on each layer. That way you can still call decrypt/decompress/unpack on each layer, and it'd all be performed in parallel. Would that solve your issue?
https://doc.rust-lang.org/book/ch19-05-advanced-functions-and-closures.html
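A minimal sketch of that per-layer callback idea, assuming a hypothetical pull_with helper plus toy types, and the futures and anyhow crates (this is not the current oci-distribution API):

use std::future::Future;

// Toy stand-in for a fetched layer; the real crate has richer types.
pub struct Layer {
    pub digest: String,
    pub data: Vec<u8>,
}

// Middleware-style pull: hand each fetched layer to a caller-supplied
// callback so decrypt/decompress/unpack can run per layer, concurrently.
pub async fn pull_with<F, Fut>(layers: Vec<Layer>, on_layer: F) -> anyhow::Result<()>
where
    F: Fn(Layer) -> Fut,
    Fut: Future<Output = anyhow::Result<()>>,
{
    let tasks: Vec<_> = layers.into_iter().map(&on_layer).collect();
    // Drive all per-layer callbacks to completion together.
    for result in futures::future::join_all(tasks).await {
        result?;
    }
    Ok(())
}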
Thanks for your suggestions. Yes, we can pass functions to the current pull API, but we have two concerns: first, it changes the interface of a key public API; second, after we process the layer data, the pull API's return value would also need to change based on the user's needs.
@bacongobbler Another concern is that container image layers are shared: after we pull the image manifest, we also need to check whether the host already has shared layers pulled by other containers, and then pull only the missing layers.
Image services and runtimes operate at the image layer level, so an image distribution library may also need to export the layer-related APIs.
Hi @bacongobbler @thomastaylor312 @flavio I rebased the PR and updated the commit message, please review.
Yeah I think this is fine for now. We should just be careful as we approach 1.0 when we decide whether or not the pull_layer function should be exported.
I still disagree with this change in relation to oci-distribution's current API, but I don't really have the time right now to make contributions for further improvements. I'm fine with this going through for now. We can make changes to this API in future iterations since we haven't hit 1.0 yet, so there's plenty of time to refactor if necessary.
I see one code regression that I'd like to see changed. Otherwise this looks good to go.
Thanks Matt, fully agree to keep auth() clean as you requested, just updated the PR.
@thomastaylor312 and @flavio do you have any ideas/concerns about this change?
@bacongobbler This good to go from your end?
@bacongobbler @thomastaylor312 @flavio Thanks, much appreciated!
|
2025-04-01T06:39:19.060135
| 2019-03-03T04:12:42
|
416473546
|
{
"authors": [
"VictorPhilipp",
"aubergine10"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7636",
"repo": "krzychu124/Cities-Skylines-Traffic-Manager-President-Edition",
"url": "https://github.com/krzychu124/Cities-Skylines-Traffic-Manager-President-Edition/issues/161"
}
|
gharchive/issue
|
Cargoload of trains/trucks
Fix cargo trains driving around with really low loads of cargo (1-30%).
Automatically select the appropriate vehicle for the load --> a van for a small load, a truck for a large load. (I've set different capacities in AVO; sadly I get trucks loaded to 50% instead of a van loaded to 100%, while vans would probably mean faster delivery.)
https://github.com/VictorPhilipp/Cities-Skylines-Traffic-Manager-President-Edition/issues/123
Duplicate of #170
Closing this as duplicate of 170
|
2025-04-01T06:39:19.085177
| 2024-11-03T21:21:06
|
2631481464
|
{
"authors": [
"ksk0629"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7637",
"repo": "ksk0629/quantum_machine_learning",
"url": "https://github.com/ksk0629/quantum_machine_learning/issues/7"
}
|
gharchive/issue
|
Create YZEncoder class
Summary
Create a YZEncoder class, which is used in the original QuClassi paper.
Merged to the main branch.
|
2025-04-01T06:39:19.091269
| 2017-03-03T08:19:24
|
211622943
|
{
"authors": [
"ksss",
"pyama86"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7638",
"repo": "ksss/mruby-signal",
"url": "https://github.com/ksss/mruby-signal/pull/1"
}
|
gharchive/pull-request
|
I want to lock global_mrb
Hi!
We implemented it because we want to lock global_mrb when using it with multithreading.
https://github.com/ksss/mruby-signal/blob/master/src/signal.c#L558
↑ code is executed at mrb_open to prevent global_mrb from being rewritten.
sample.rb
def run_with_signal msec_timer, signal
  @timer_thread = Thread.new msec_timer, 1000, signal do |timer, interval, sig|
    loop_time = 0
    # calculate by usec
    while loop_time < timer * 1000
      loop_time += usleep interval
    end
    Process.kill sig, Process.pid
  end
  Signal.mrb_state_unlock
end

Signal.trap(:USR1) do |signo|
  puts "catch signal from timer thread"
  exit
end

Signal.mrb_state_lock
run_with_signal 1000, :USR1
puts "waiting timer"
loop { sleep 1 }
@pyama86 Thank you for using this library.
Agree
This library is expected to work the same as CRuby's Signal.
$ cat t.rb
# mattn/mruby-thread
# matsumotory/mruby-sleep
# iij/mruby-process
def run_with_signal(signal)
  Thread.new(signal) do |sig|
    sleep 1
    Process.kill sig, Process.pid
  end
end

Signal.trap(:USR1) do |signo|
  puts "catch signal from timer thread"
  exit
end

run_with_signal :USR1
puts "waiting timer"
loop { sleep 1 }
$ mruby t.rb
waiting timer
SignalException: SIGUSR1
[2] 65470 abort mruby t.rb
$ ruby t.rb
waiting timer
catch signal from timer thread
So, it's not the expected behavior. I agree with you on this issue.
Negative
This library is expected to work the same as CRuby's Signal.
So, I'm negative about adding methods that CRuby's Signal doesn't have.
Proposal
I admit that mruby-signal and multiple mrb_states are incompatible.
So, I propose this patch.
diff --git a/src/signal.c b/src/signal.c
index 1a3da25..38b3127 100644
--- a/src/signal.c
+++ b/src/signal.c
@@ -165,7 +165,7 @@ static const struct signals {
{NULL, 0}
};
-static mrb_state *global_mrb;
+static mrb_state *initial_mrb = NULL;
static const char*
signo2signm(mrb_int no)
@@ -228,7 +228,7 @@ static sighandler_t mrb_signal(mrb_state *mrb, int signum, sighandler_t handler)
static RETSIGTYPE
sighandler(int sig)
{
- mrb_state *mrb = global_mrb;
+ mrb_state *mrb = initial_mrb;
struct RClass *mrb_mSignal = mrb_module_get(mrb, "Signal");
mrb_value trap_list = mrb_iv_get(mrb, mrb_obj_value(mrb_mSignal), mrb_intern_lit(mrb, "trap_list"));
mrb_value command = mrb_ary_ref(mrb, trap_list, sig);
@@ -373,7 +373,6 @@ mrb_signal(mrb_state *mrb, int signum, sighandler_t handler)
{
struct sigaction sigact, old;
- global_mrb = mrb;
sigemptyset(&sigact.sa_mask);
sigact.sa_handler = handler;
sigact.sa_flags = 0;
@@ -546,6 +545,8 @@ install_sighandler(mrb_state *mrb, int signum, sighandler_t handler)
void
mrb_mruby_signal_gem_init(mrb_state* mrb) {
+ if (initial_mrb == NULL) initial_mrb = mrb;
+
struct RClass *signal = mrb_define_module(mrb, "Signal");
mrb_obj_iv_set(mrb, (struct RObject *)signal, mrb_intern_lit(mrb, "trap_list"), mrb_ary_new_capa(mrb, NSIG));
Does this solve your problem?
Yes! That will solve my problem. In fact, that's exactly what I'd like.
Would you try this? https://github.com/ksss/mruby-signal/commit/be8980cdad5e58c7526698c9c06aa90c0e3bf18d
(The CI failure is probably related to mruby-onig-regexp and may have been fixed recently.)
This worked perfectly!
Thanks.
✨
|
2025-04-01T06:39:19.095687
| 2018-01-12T10:58:15
|
288082031
|
{
"authors": [
"cgwyllie",
"iOkay"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7639",
"repo": "kstenerud/KSCrash",
"url": "https://github.com/kstenerud/KSCrash/issues/270"
}
|
gharchive/issue
|
Pointer deref crash in KSReachableOperationKSCrash
Hi,
We have recently seen a crash coming from inside the KSReachableOperationKSCrash initWithHost method. Not really sure how to reliably reproduce but seems to occasionally happen on app launch. We're using KSCrash 1.15.16 with Carthage.
Here's an example backtrace:
0 libobjc.A.dylib 0x1b290dd6 objc_msgSend
1 KSCrash 0xbd68cf __60-[KSReachableOperationKSCrash initWithHost:allowWWAN:block:]_block_invoke
2 KSCrash 0xbd6503 -[KSReachabilityKSCrash onReachabilityFlagsChanged:]
3 KSCrash 0xbd6279 __49-[KSReachabilityKSCrash initWithReachabilityRef:]_block_invoke_2
4 libdispatch.dylib 0x1b6c9797 _dispatch_call_block_and_release
5 libdispatch.dylib 0x1b6c9783 _dispatch_client_callout
6 libdispatch.dylib 0x1b6cdd05 _dispatch_main_queue_callback_4CF
7 CoreFoundation 0x1bfb7d69 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
8 CoreFoundation 0x1bfb5e19 __CFRunLoopRun
9 CoreFoundation 0x1bf091af CFRunLoopRunSpecific
10 CoreFoundation 0x1bf08fd1 CFRunLoopRunInMode
11 GraphicsServices 0x1d6b3b41 GSEventRunModal
12 UIKit 0x21291a53 UIApplicationMain
Here's a snippet of the report that was generated by KSCrash:
{
"diagnosis": "Attempted to dereference garbage pointer 0x4d.",
"error": {
"address": 77,
"mach": {
"code": 1,
"exception": 1,
"exception_name": "EXC_BAD_ACCESS",
"subcode": 0
},
"signal": {
"code": 0,
"code_name": "BUS_NOOP",
"name": "SIGBUS",
"signal": 10
},
"type": "mach"
}
}
We also managed to catch it on the debugger:
At first guess, it looks like the blockSelf reference has been freed.
If this is the most likely case, would adding a guard be a sufficient solution? Happy to submit a PR to that effect if so.
Thanks,
Chris
I think '__weak' should be used instead of '__unsafe_unretained' on line 320.
|
2025-04-01T06:39:19.098639
| 2017-04-08T17:24:46
|
220414305
|
{
"authors": [
"AmmonHepworth",
"PhilipNelson5",
"johnsonjo4531"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7640",
"repo": "ksundberg/CourseMaterials",
"url": "https://github.com/ksundberg/CourseMaterials/issues/12"
}
|
gharchive/issue
|
dev goes out of bounds on devices
In Simulation.cpp this line appears twice: auto dev = jobs[job].tasks[jobs[job].cur].device; followed by some form of devices[dev]. When stepping through, however, dev is sometimes outside the bounds of devices.
So I can confirm it's happening.
fix (std::uniform_int_distribution produces values on the closed interval [low, high], so the upper bound must be one less than the container size):
Task.cpp : 10
std::uniform_int_distribution<> dist(low, high - 1);
Task.cpp : 37
auto max = type == cs3100::Task::Type::CPU ? maxPage : maxDevice - 1;
In my program your second fix does the trick but implementing the first fix in any combination causes seg faults in all cases. Can anybody else confirm?
I changed it; only the fix on line 37 is needed.
|
2025-04-01T06:39:19.154645
| 2023-04-13T05:34:34
|
1665740731
|
{
"authors": [
"fokcuk",
"kuba2k2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7641",
"repo": "kuba2k2/libretuya",
"url": "https://github.com/kuba2k2/libretuya/issues/108"
}
|
gharchive/issue
|
Cannot update the device post initial install
Hello
For some reason the update has stopped working. I changed the code, but even though it says it compiled and OTA pushed through ESPHome, the device is still not updated - old sensor names.
I tried deleting the device and recreating it in ESPHome, but its still the same thing - no update
What version of libretuya and what chip are you using? What board name did you choose?
turned out to be a device specific issue - if you hold the button after powering it off and then plugging back in with button pressed, it will start to use firmware. So its loading it, but not using is for some reason.
Closing the issue
This is very weird, can you tell me more about this device? Is it Realtek or Beken?
Beken
Then it's not really possible to update depending on the button presses. The bootloader should update the firmware as soon as it receives it.
|
2025-04-01T06:39:19.159580
| 2023-04-18T19:53:58
|
1673743598
|
{
"authors": [
"mysticaltech",
"stubbi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7642",
"repo": "kube-hetzner/terraform-hcloud-kube-hetzner",
"url": "https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/issues/736"
}
|
gharchive/issue
|
[Bug]: autoscaler workers not working?
Description
Hi there,
It seems I cannot create autoscaler nodepools without creating at least one non-autoscaler node? This was working in an older version.
Kube.tf file
module "kube-hetzner" {
source = "kube-hetzner/kube-hetzner/hcloud"
version = "2.1.0"
initial_k3s_channel = "v1.25"
hcloud_token = var.hcloud_token
cluster_name = "${var.stage}-${var.region}"
base_domain = var.domain
network_region = var.region
load_balancer_type = var.lb_type
lb_hostname = local.lb_hostname
load_balancer_location = local.locations[var.region][0]
control_plane_nodepools = [
for cp in range(3) :
{
name = "control-plane-${var.stage}-${var.region}-${cp}",
server_type = var.cp_type,
location = local.locations[var.region][cp % length(local.locations[var.region])],
labels = [],
taints = [],
count = 1
}
]
autoscaler_nodepools = [
for cp in range(3) :
{
name = "autoscaler-${cp}"
server_type = var.agent_type
location = local.locations[var.region][cp % length(local.locations[var.region])],
min_nodes = 0
max_nodes = 5
}
]
agent_nodepools = [
{
name = "agent-small",
server_type = var.agent_type
location = "fsn1",
labels = [],
taints = [],
count = 0 // working when setting to > 0
}
]
ingress_controller = "nginx"
use_control_plane_lb = true
cni_plugin = "cilium"
enable_cert_manager = false
create_kubeconfig = false
create_kustomization = false
ssh_public_key = file("${var.ssh_key}.pub")
ssh_private_key = file(var.ssh_key)
providers = {
hcloud = hcloud
}
}
variable "lb_type" {
default = "lb11"
}
variable "cp_type" {
default = "cpx11"
}
variable "agent_type" {
default = "cpx31"
}
locals {
lb_hostname = "lb-${var.stage}-${var.region}.${var.domain}"
# https://docs.hetzner.com/cloud/general/locations/
locations = {
"eu-central" = ["fsn1", "nbg1", "hel1"],
"us-east" = ["ash"],
"us-west" = ["hil"],
}
}
Screenshots
No response
Platform
Mac
@stubbi You have to check the autoscaler pod logs, see what is happening.
@stubbi Was your previous try with cilium too? Try without cilium, and as stated above, the key is probably in the logs of the autoscaler pod.
|
2025-04-01T06:39:19.162151
| 2021-02-11T16:38:23
|
806556169
|
{
"authors": [
"timflannagan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7643",
"repo": "kube-reporting/helm",
"url": "https://github.com/kube-reporting/helm/pull/56"
}
|
gharchive/pull-request
|
Bug 1927850: Bump the github.com/gogo/protobuf dependency to v1.3.2
Ensure the github.com/gogo/protobuf dependency uses the v1.3.2 version.
This was done by running the following commands locally:
# Get the newer version of the protobuf implicit dependency.
$ go get github.com/gogo/protobuf@v1.3.2
$ go mod vendor && go mod tidy && go mod verify
# Verify the version is correctly pinned.
$ go list -mod=readonly -m all | grep gogo/protobuf
github.com/gogo/protobuf v1.3.2 => github.com/gogo/protobuf v1.3.2
/bugzilla refresh
/cherry-pick release-4.7
|
2025-04-01T06:39:19.174521
| 2022-12-19T09:26:33
|
1502601724
|
{
"authors": [
"spectro30"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7644",
"repo": "kubedb/docs",
"url": "https://github.com/kubedb/docs/pull/518"
}
|
gharchive/pull-request
|
Add Percona XtraDB Docs
Signed-off-by: Md. Alif Biswas<EMAIL_ADDRESS>
Before Merge
Need to update the parent repository
Folder name is updated from percona-xtradb to perconaxtradb (Got rid of the hyphen)
Codespan schema check is failing from ProxySQL end
|
2025-04-01T06:39:19.206377
| 2023-02-16T11:49:54
|
1587536264
|
{
"authors": [
"MooreZheng",
"qxygxt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7645",
"repo": "kubeedge/sedna",
"url": "https://github.com/kubeedge/sedna/issues/396"
}
|
gharchive/issue
|
Inference result is not output in lifelong learning thermal comfort case
What happened:
I correctly installed Sedna and KubeEdge. And tried running Lifelong Learning Thermal Comfort Prediction case.
I found that the pods for training and evaluation were working fine, but the inference pod remained in the Running state, never completed, and did not output results to the designated "/output/deployment" directory.
I looked at the log of the pod responsible for inference and no valuable information was found.
What you expected to happen:
Run this case correctly, both training and inference.
How to reproduce it (as minimally and precisely as possible):
I have written a blog including everything about installation and the running of this lifelong learning case, which can be used for reproduction.
Log of the pod responsible for inference
log of inference pod.txt
Environment:
Sedna Versionv0.5.1
KubeEdge Versionv1.10.0
My issue is similar to https://github.com/kubeedge/sedna/issues/380#issue-1452864350
@jaypume might help to take a look at it
This issue has been solved, and the inference result can be found here.
Thanks all.
|
2025-04-01T06:39:19.210319
| 2021-04-14T09:25:12
|
857705195
|
{
"authors": [
"llhuii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7646",
"repo": "kubeedge/sedna",
"url": "https://github.com/kubeedge/sedna/issues/51"
}
|
gharchive/issue
|
Implementation for injecting storage-initializer
In #18, I proposed adding an init-container to download datasets/models before running workers.
Then we need to inject the storage-initializer into the workers.
the simple way
The obvious way to implement it is to modify the creating-worker logic in each collaboration feature in GM.
I can abstract the common logic into one func/file.
its pros: simple and quick
its cons: needs to modify the GM
the more decoupled way
Another good way I found is to leverage the k8s admission hooks used by kfserving.
its pros: decoupled from each collaboration feature
its cons: adds an extra webhook server; more code work
What I decide to do now
For simplicity, first implement the simple way, then evolve to the admission-hook way when needed, since the injection code can be reused.
For the kfserving storage-initializer implementation, see this PR: https://github.com/kubeflow/kfserving/pull/156.
/close
closed by #52
|
2025-04-01T06:39:19.223662
| 2024-06-09T22:46:41
|
2342526933
|
{
"authors": [
"alechp",
"fharper"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7647",
"repo": "kubefirst/kubefirst",
"url": "https://github.com/kubefirst/kubefirst/issues/2193"
}
|
gharchive/issue
|
Add gitops & metaphor repository names as flags for create
What is your feature idea?
Problem:
Gitops & metaphor repository names are hard-coded
Screenshot:
Solution:
Add --gitopsRepoName flag
Add --metaphorRepoName flag
Keep "gitops" & "metaphor" as sane defaults
Impacted files:
There might be more, just what I found off initial search
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/google/create.go#L66
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/digitalocean/create.go#L67
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/civo/create.go#L61
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/k3s/create.go#L67
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/akamai/create.go#L61
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/vultr/create.go#L68
https://github.com/kubefirst/kubefirst/blob/a47ff7fa7d222555c9cd31ef066b4910368cc796/cmd/aws/create.go#L90
Why is it needed?
Why:
maintaining multiple gitops repos for different clouds/regions
maintaining associated metaphor repos for different clouds/regions
mitigating risk of deletion when working with deletes (eg. spinning up a gitops directory on my local github to test something and then stressing that cleanup commands could nuke something on organization's github if I'm not careful)
Is this missing feature preventing you from using kubefirst?
[ ] Yes
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Started fork: https://github.com/alechp/kubefirst/tree/feat/repository-name-flags-for-digitalocean
Still failing here:
Happy to keep exploring. Pointers to save time would be nice
Inside gitShim/init.go I see Repositories[] being referenced & looped through to check whether the repositories exist:
Updated the newRepositories here:
Noticed that the success message has a hard-coded gitops/metaphor. Which makes sense, but don't think that would impact create
I am not setting the flag with Viper in flags.go, but from what I can tell that's not necessary...? Perhaps I'm wrong
I didn't define it in apiTypes.ClusterDefinition which might be the issue (not sure how this impacts gitShim, but maybe the check that throws is being done elsewhere):
Thanks for this feature suggestion @alechp. Could you split those in two issues please as they will need different work in different places. Mainly, the gitops one will require a lot more work since it's hardcoded in multiples places.
Hey @fharper went ahead and split it into 3 issues (three requirements to enable deploying more than one gitops cluster per github organization):
https://github.com/kubefirst/kubefirst/issues/2210
https://github.com/kubefirst/kubefirst/issues/2211
https://github.com/kubefirst/kubefirst/issues/2212
Thanks a lot, and sorry for the additional work 😅
Will close this one now.
|
2025-04-01T06:39:19.254899
| 2024-10-17T17:02:59
|
2595279283
|
{
"authors": [
"DharmitD",
"VaniHaripriya",
"hbelmiro"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7648",
"repo": "kubeflow/internal-acls",
"url": "https://github.com/kubeflow/internal-acls/pull/720"
}
|
gharchive/pull-request
|
Add VaniHaripriya as a Kubeflow Member
Resolves #719
Contributions
https://github.com/kubeflow/pipelines/pull/11300
https://github.com/kubeflow/pipelines/pull/11295
https://github.com/kubeflow/pipelines/pull/11262
https://github.com/kubeflow/pipelines/pull/11066
pytest Output:
$ pytest test_org_yaml.py
============================================================================================================ test session starts =============================================================================================================
platform linux -- Python 3.11.9, pytest-7.4.3, pluggy-1.3.0
rootdir: /home/vmudadla/OpenshiftAI/internal-acls/github-orgs
collected 1 item
test_org_yaml.py . [100%]
============================================================================================================= 1 passed in 0.15s ==============================================================================================================
Make sure to sign off your commit @VaniHaripriya :)
cc @terrytangyuan
/ok-to-test
|
2025-04-01T06:39:19.301641
| 2024-10-24T21:34:20
|
2612598068
|
{
"authors": [
"akshaychitneni",
"coveralls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7649",
"repo": "kubeflow/training-operator",
"url": "https://github.com/kubeflow/training-operator/pull/2307"
}
|
gharchive/pull-request
|
KEP-2170: Adding validation webhook for v2 trainjob
Adds validation webhook for v2 trainjob
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged):
Fixes #
Checklist:
[ ] Docs included if any changes are user facing
cc @tenzen-y @andreyvelich
Pull Request Test Coverage Report for Build<PHONE_NUMBER>1
Details
6 of 6 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 100.0%
Totals
Change from base Build<PHONE_NUMBER>0:
0.0%
Covered Lines:
78
Relevant Lines:
78
💛 - Coveralls
|
2025-04-01T06:39:19.304553
| 2021-12-14T15:10:13
|
1079875872
|
{
"authors": [
"jbottum",
"kilmarnock",
"shannonbradshaw"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7650",
"repo": "kubeflow/website",
"url": "https://github.com/kubeflow/website/issues/3097"
}
|
gharchive/issue
|
Install Kubeflow on OpenShift
The Link
(https://raw.githubusercontent.com/opendatahub-io/manifests/v1.3-branch/distributions/kfdef/kfctl_openshift_v1.3.0.yaml).
on page
https://www.kubeflow.org/docs/distributions/openshift/install-kubeflow/
is dead. 404. Seems like the closing bracket got included in the link somehow.
@nakfour have you seen this issue on the OpenShift Docs ?
/platform Openshift
/priority p2
/kind bug
@kilmarnock thanks for filing this. We'll get it fixed.
@nakfour I'll fix the broken link, but I wanted to check in to see whether we should go ahead and update this page for Kubeflow 1.4.
|
2025-04-01T06:39:19.309824
| 2019-05-20T15:03:30
|
446167231
|
{
"authors": [
"IronPan",
"sarahmaddox",
"xaoo"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7651",
"repo": "kubeflow/website",
"url": "https://github.com/kubeflow/website/issues/726"
}
|
gharchive/issue
|
mysql Backup&Restore procedure
Hey,
we are trying to restore the kubeflow mysql db. Is a mysql dump enough for a backup?
we are dealing with the current error after restore:
Error: failed to generate Pipeline graph.
An error occurred
Cannot read property 'spec' of undefined
/assign @IronPan
I found that if I add a pipeline, back up the mysql DB, delete the pipeline, then restore, that pipeline will return the error above.
All the other pipelines, which were not deleted, just backed up and restored, are OK.
Hey guys, any suggestions?
I need to know what that delete button does when a pipeline is deleted.
Because it surely deletes more than just something in mysql.
In this way I can prepare some consistent backups.
TY
//George
/cc @paveldournov Do you have any suggestions for this problem? Many thanks, Sarah
Hey Sarah, thank you, that would be really helpful to find out the missing dependency when we delete a pipeline
just to double check did you follow up this instruction or some other ways
https://www.kubeflow.org/docs/pipelines/upgrade/
Aha, so I can get a backup of all pipelines by reinstalling Kubeflow Pipelines?
I just did a mysql dump, deleted a pipeline, and restored the dump, and I saw the deleted pipeline is not consistent after the restore.
|
2025-04-01T06:39:19.313360
| 2021-09-06T18:18:15
|
989369628
|
{
"authors": [
"Jeffwan",
"johnugeorge"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7652",
"repo": "kubeflow/website",
"url": "https://github.com/kubeflow/website/pull/2918"
}
|
gharchive/pull-request
|
Update docs for MXNet Jobs
Fix issues in existing doc. Address part of https://github.com/kubeflow/website/issues/2915
/cc @johnugeorge @andreyvelich
/lgtm
|
2025-04-01T06:39:19.316508
| 2019-06-05T22:21:35
|
452743462
|
{
"authors": [
"joeliedtke",
"sarahmaddox"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7653",
"repo": "kubeflow/website",
"url": "https://github.com/kubeflow/website/pull/774"
}
|
gharchive/pull-request
|
Fixed assumption that KFAPP variable includes path
Fixes https://github.com/kubeflow/website/issues/422
This change is
Preview: https://deploy-preview-774--competent-brattain-de2d6d.netlify.com/docs/gke/cloud-filestore/
/assign @joeliedtke
/lgtm
/approve
/approve cancel
/lgtm cancel
/approve
|
2025-04-01T06:39:19.318669
| 2018-04-24T19:51:24
|
317370608
|
{
"authors": [
"aledbf",
"andresmgot"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7654",
"repo": "kubeless/kubeless",
"url": "https://github.com/kubeless/kubeless/pull/715"
}
|
gharchive/pull-request
|
Update Kong http-trigger configuration
Issue Ref: None
Description:
Update Kong http-trigger documentation to use new CRDs for Consumers and Credentials.
This removes the need to make HTTP requests to the admin API.
TODOs:
[X] Ready to review
awesome, thanks @aledbf
|
2025-04-01T06:39:19.320793
| 2022-07-05T03:48:18
|
1293748916
|
{
"authors": [
"hongzhen-ma"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7655",
"repo": "kubeovn/kube-ovn",
"url": "https://github.com/kubeovn/kube-ovn/pull/1666"
}
|
gharchive/pull-request
|
ignore pod not scheduled when reconcile subnet
What type of this PR
Bug fixes
1. When reconciling a subnet, all pods in the subnet are checked and added to the port group. Pods that have not been scheduled to a node should be ignored.
2. This is caused by https://github.com/kubeovn/kube-ovn/pull/1655
Which issue(s) this PR fixes:
Fixes #(issue-number)
Backported to the release-1.10 and release-1.9 branches.
|
2025-04-01T06:39:19.323539
| 2021-02-21T14:16:49
|
812865497
|
{
"authors": [
"junka",
"oilbeater"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7656",
"repo": "kubeovn/kube-ovn",
"url": "https://github.com/kubeovn/kube-ovn/pull/697"
}
|
gharchive/pull-request
|
fix checkSBBindings error when hostname is not nodeName
In some cases the k8s spec.nodeName is not the same as the hostname, so sbctl cannot get the chassis UUID and the check process fails here.
A workaround is to try to get the chassis name from the encap table first and then do the remaining lookup.
It is not a good solution: when we use a second NIC and change the encap IP to that NIC, it would not work either.
Signed-off-by: Wan Junjie<EMAIL_ADDRESS>
start-ovs.sh uses nodeName to set the hostname in ovn-sb, so they should be the same in theory:
https://github.com/kubeovn/kube-ovn/blob/8c5ae3131711a350852aadc5aaefd6522123bccf/dist/images/start-ovs.sh#L100
@oilbeater you are right, will do that. Closing.
|
2025-04-01T06:39:19.326893
| 2021-08-20T16:57:33
|
975774354
|
{
"authors": [
"integrii",
"jonnydawg",
"pragmaticivan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7657",
"repo": "kuberhealthy/kuberhealthy",
"url": "https://github.com/kuberhealthy/kuberhealthy/issues/1007"
}
|
gharchive/issue
|
Connection refused for k8s api
Currently getting this error. Wondering if there's another setup step to be aware of.
"kuberhealthy/pod-restarts": {
"OK": false,
"Errors": [
"Get https://<IP_ADDRESS>:443/api/v1/events?fieldSelector=type%3DWarning: dial tcp <IP_ADDRESS>:443: connect: connection refused"
],
"RunDuration": "",
"Namespace": "kuberhealthy",
"Node": "",
"LastRun": "2021-08-20T16:31:28Z",
"AuthoritativePod": "kuberhealthy-55d8dc7cff-zp8mj",
"uuid": "d15d798a-385b-4a9c-bcf7-4941dad5e11d"
},
Hello @pragmaticivan, could you please provide the cluster OS, k8s version, and pod-restarts version you are using?
pod-restarts version: kuberhealthy/pod-restarts-check:v2.5.0
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-eks-087e67", GitCommit:"087e67e479962798594218dc6d99923f410c145e", GitTreeState:"clean", BuildDate:"2021-07-31T01:39:55Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
This seems like a generic 'kubernetes client could not talk to the Kubernetes API' error. I am not sure Kuberhealthy code could cause this one... Maybe there is a NetworkPolicy in place somewhere?
|
2025-04-01T06:39:19.348767
| 2021-03-19T13:55:01
|
836021297
|
{
"authors": [
"xmudrii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7658",
"repo": "kubermatic/kubeone",
"url": "https://github.com/kubermatic/kubeone/pull/1282"
}
|
gharchive/pull-request
|
Install cri-tools on Amazon Linux 2
What this PR does / why we need it:
Amazon Linux 2 doesn't install crictl by default. We use crictl to restart the API server if it's affected by #1222. Additionally, kubeadm verifies that crictl is present if containerd is used.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1281
Does this PR introduce a user-facing change?:
Install cri-tools (crictl) on Amazon Linux 2. This fixes the issue with provisioning Kubernetes and Amazon EKS-D clusters on Amazon Linux 2
/assign @kron4eg
/hold
to manually test the changes
/retest
/retest
/hold cancel
/cherrypick release/v1.2
|
2025-04-01T06:39:19.350768
| 2020-07-09T11:04:59
|
653972955
|
{
"authors": [
"xmudrii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7659",
"repo": "kubermatic/kubeone",
"url": "https://github.com/kubermatic/kubeone/pull/964"
}
|
gharchive/pull-request
|
Reconcile DynamicWorkers
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
xref #531
Does this PR introduce a user-facing change?:
NONE
/assign @kron4eg
/close
We'll take another approach to this.
|
2025-04-01T06:39:19.393382
| 2017-01-19T19:56:13
|
201959302
|
{
"authors": [
"hjacobs",
"mbohlool"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7660",
"repo": "kubernetes-incubator/client-python",
"url": "https://github.com/kubernetes-incubator/client-python/issues/102"
}
|
gharchive/issue
|
What's the future of this client and how does it compare to pykube?
I'm currently using this client for my Kubernetes Operational View dashboard, but I will probably switch to pykube as it looks much cleaner (e.g. config loading does not modify a global object), directly uses requests (which I'm using too) and supports insecure-skip-tls-verify (see #99).
Did you consider merging this client with pykube or what are compelling arguments to use client-python instead of pykube?
Most of this client is auto-generated, which distinguishes it from pykube. Generating it makes it easier for us to keep it in sync with API changes in the main repo.
Having said that, I have no problem supporting features of pykube here if I get time or somebody contributes.
About your specific concerns: our config loader has a variant that loads configs into a local config object instead of the global one (and there is an example for that in the examples folder). I plan to support insecure-skip-tls-verify when I get time. I considered using requests instead of what we have right now (urllib3) but didn't see compelling reasons yet to spend time on it.
@mbohlool thanks for the quick answer :smile:
Sure.
|
2025-04-01T06:39:19.399243
| 2016-10-26T22:33:22
|
185528210
|
{
"authors": [
"MrHohn",
"vijaygos"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7661",
"repo": "kubernetes-incubator/cluster-proportional-autoscaler",
"url": "https://github.com/kubernetes-incubator/cluster-proportional-autoscaler/issues/9"
}
|
gharchive/issue
|
Should provide other ways to specify the scaling target beside name.
As the scaling target's name could change, the autoscaler should provide other ways for the user to specify the target. One candidate would be a label selector.
Take kube-dns as an example: we could use the label k8s-app: kube-dns to select the target ReplicationController/Deployment, without needing to restart the autoscaler to change the input argument every time the target name changes.
I think this is a much-needed feature. We have a cluster with multiple nodepools and we would like to scale independently for each of these node pools. Having a label selector to filter the number of nodes allows for scaling in such scenarios.
@vijaygos For your use case, is that https://github.com/kubernetes-incubator/cluster-proportional-autoscaler/pull/55 (filtering nodes based on labels)?
Yes. This would work. I just realized that the change went in recently.
Thanks for adding that. Much appreciated.
@vijaygos In case you don't want to build it from head, I will publish a new image that includes that change very soon.
Would it be possible for you to comment on your timeline for a formal release?
@vijaygos I just did --- images below should be available now:
k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.5.0
k8s.gcr.io/cluster-proportional-autoscaler-arm:1.5.0
k8s.gcr.io/cluster-proportional-autoscaler-arm64:1.5.0
k8s.gcr.io/cluster-proportional-autoscaler-ppc64le:1.5.0
Awesome! Thanks @MrHohn for the quick turnaround.
|
2025-04-01T06:39:19.406368
| 2016-12-19T14:22:58
|
196425220
|
{
"authors": [
"cyphar",
"feiskyer",
"gouyang",
"rhatdan",
"runcom"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7662",
"repo": "kubernetes-incubator/cri-o",
"url": "https://github.com/kubernetes-incubator/cri-o/issues/287"
}
|
gharchive/issue
|
grpc crash when trying to start a pod
This happens on master if you just try to start a pod with one of the testdata configurations:
% sudo ./ocid --debug
E1220 01:21:23.830356 990 ocicni.go:136] error updating cni config: No networks found in /etc/cni/net.d
DEBU[2016-12-20 01:21:23.831441299+11:00] sandboxes: map[]
DEBU[2016-12-20 01:21:23.831471271+11:00] containers: &{map[] {{0 0} 0 0 0 0}}
DEBU[2016-12-20 01:21:27.103103358+11:00] RunPodSandboxRequest config:<metadata:<name:"podsandbox1" uid:"redhat-test-ocid" namespace:"redhat.test.ocid" attempt:1 > hostname:"ocic_host" log_directory:"." port_mappings:<protocol:UDP container_port:80 host_port:4888 host_ip:"<IP_ADDRESS>" > port_mappings:<protocol:2 container_port:81 host_port:4889 host_ip:"<IP_ADDRESS>" > labels:<key:"group" value:"test" > annotations:<key:"owner" value:"hmeng" > annotations:<key:"security.alpha.kubernetes.io/seccomp/pod" value:"unconfined" > annotations:<key:"security.alpha.kubernetes.io/sysctls" value:"kernel.shm_rmid_forced=1,net.ipv4.ip_local_port_range=1024 65000" > annotations:<key:"security.alpha.kubernetes.io/unsafe-sysctls" value:"kernel.msgmax=8192" > linux:<cgroup_parent:"/ocid-podsandbox1" security_context:<namespace_options:<host_network:false host_pid:false host_ipc:false > > > >
DEBU[2016-12-20 01:21:27.105374620+11:00] copying infra rootfs binary: /usr/libexec/ocid/pause -> /var/lib/ocid/graph/vfs/pause/rootfs/pause
2016/12/20 01:21:27 grpc: Server failed to encode response proto: Marshal called with nil
With the key command being:
% sudo ./ocic pod run --config test/testdata/sandbox_config.json
2016/12/20 01:21:27 transport: http2Client.notifyError got notified that the client transport was broken EOF.
2016/12/20 01:21:27 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/ocid.sock: connect: connection refused"; Reconnecting to {"/var/run/ocid.sock" <nil>}
FATA[0000] Creating the pod sandbox failed: rpc error: code = 13 desc = transport is closing
.. looking .. (thanks)
I'm bisecting right now. Gimme a sec. :P
Can't reproduce on master though :confused:
No networks found in /etc/cni/net.d
maybe caused by missing cni networks?
@runcom Okay, it just started working again. I had to fix up ocid.conf to correctly refer to the right binaries (this was a new box).
Looks like it's a configuration issue, but I'm confused why I got that error which then crashed the daemon. Seems like a really nasty failure mode.
Looks like it's a configuration issue, but I'm confused why I got that error which then crashed the daemon. Seems like a really nasty failure mode.
welcome to grpc nil marshaling
@runcom It's also caused when conmon exits with a non-zero exit code. I hit it with #162 (which I've since fixed) when it would crash due to the log path being wrong.
I just met the same issue on the master.
# git log -1
commit 6133465e420d387c977271111a3e1bccc316ac08
Merge: ac7943c 8e1af36
Author: Mrunal Patel<EMAIL_ADDRESS>Date: Wed Dec 21 11:20:08 2016 -0800
Merge pull request #292 from sameo/topic/network-bats
Additional networking tests
The issue can be reproduced by running two ocid processes on the latest branch.
Steps:
Start ocid by systemd, # systemctl start ocid
Run ocid in terminal, # ocid --debug
Run ocic will hit the issue.
@cyphar @gouyang is this still an issue?
I can still hit the issue by the steps I described above.
@gouyang Still an issue?
The issue is gone on the master.
|
2025-04-01T06:39:19.408388
| 2018-06-26T22:56:41
|
336020130
|
{
"authors": [
"TomSweeneyRedHat",
"mrunalp",
"rhatdan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7663",
"repo": "kubernetes-incubator/cri-o",
"url": "https://github.com/kubernetes-incubator/cri-o/pull/1650"
}
|
gharchive/pull-request
|
[1.11] Update ocicni to latest
Signed-off-by: Mrunal Patel<EMAIL_ADDRESS>
/test all
LGTM assuming happy tests
LGTM
|
2025-04-01T06:39:19.410013
| 2018-07-31T07:52:05
|
346074661
|
{
"authors": [
"moonek",
"wongma7"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7664",
"repo": "kubernetes-incubator/external-storage",
"url": "https://github.com/kubernetes-incubator/external-storage/pull/896"
}
|
gharchive/pull-request
|
flex: Add PV name to delete
The PV name is required for deletion just as it is for provisioning.
I added the PV name to the Delete function in the same way as in the provision.go source.
This is the tested output.
2018-07-31 07:29:33 flex[37]: delete() called: {"kubernetes.io/pvOrVolumeName":"pvc-6e9b3727-9493-11e8-afe6-525400d87180"}
2018-07-31 07:29:33 flex[37]: log() called: {"status": "Success"}
/lgtm
|
2025-04-01T06:39:19.413080
| 2017-01-10T11:19:11
|
199795220
|
{
"authors": [
"bogdando",
"mattymo"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7665",
"repo": "kubernetes-incubator/kargo",
"url": "https://github.com/kubernetes-incubator/kargo/issues/879"
}
|
gharchive/issue
|
[draft] Release v2.1.0 proposal
https://github.com/kubernetes-incubator/kargo/releases/tag/v2.1.0
Here's a list of changes to add:
We need to move rkt to the experimental feature list. It's not the default deployment type, and it only works right now with Flannel/Canal.
We upgraded etcd to v3.0.12
Other noteworthy changes:
Added the nginx proxy to provide k8s apiserver HA
Removed the etcd-proxy
Improved docker container download and sync
Improved scale deployment time
Enabled fact caching by default
Added optional SSH bastion configuration
@mattymo note that in Kargo etcd_version: v3.0.6
Oops we should update it soon, but it's not a blocker for release
@mattymo thanks, updates done
|
2025-04-01T06:39:19.414320
| 2017-04-06T14:10:47
|
219911911
|
{
"authors": [
"kadel",
"surajnarwade"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7666",
"repo": "kubernetes-incubator/kompose",
"url": "https://github.com/kubernetes-incubator/kompose/issues/548"
}
|
gharchive/issue
|
update docs/conversion.md
docs/conversion.md is slightly out of date; for example, tmpfs is listed as unsupported.
we can close this now
done
|
2025-04-01T06:39:19.416148
| 2017-07-06T17:43:22
|
241033786
|
{
"authors": [
"cdrage",
"surajnarwade"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7667",
"repo": "kubernetes-incubator/kompose",
"url": "https://github.com/kubernetes-incubator/kompose/pull/684"
}
|
gharchive/pull-request
|
Change menu to left side
This commit changes the menu to the left side rather than on top,
syncing with cdrage/minimal branch / style.
This syncs to the changes I made upstream at https://github.com/cdrage/minimal
It will now look like this:
this looks cool, LGTM :+1:
|
2025-04-01T06:39:19.422636
| 2017-03-23T05:08:54
|
216300791
|
{
"authors": [
"codecov-io",
"mumoshu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7668",
"repo": "kubernetes-incubator/kube-aws",
"url": "https://github.com/kubernetes-incubator/kube-aws/pull/442"
}
|
gharchive/pull-request
|
Retry on 504 errors when fetching Container Linux AMIs
Closes #440
Codecov Report
Merging #442 into master will increase coverage by 0.08%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #442 +/- ##
==========================================
+ Coverage 40.79% 40.88% +0.08%
==========================================
Files 37 37
Lines 2662 2666 +4
==========================================
+ Hits 1086 1090 +4
Misses 1418 1418
Partials 158 158
Impacted Files
Coverage Δ
coreos/amiregistry/reliable_http.go
82.35% <100%> (+5.42%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cc7e1da...c9da13c. Read the comment docs.
|
2025-04-01T06:39:19.425337
| 2018-08-24T13:26:57
|
353783374
|
{
"authors": [
"Atoms",
"gitphill",
"mattymo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7669",
"repo": "kubernetes-incubator/kubespray",
"url": "https://github.com/kubernetes-incubator/kubespray/pull/3178"
}
|
gharchive/pull-request
|
Add azure-container-registry-config for Azure
Separated out the KUBELET_CLOUDPROVIDER env var assignment when cloud_provider equals azure
Appended azure-container-registry-config parameter
Please sign CLA
Signed CLA
/check-cla
Could it be that the CLA was signed with a different e-mail than the one the commit was made with?
/check-cla
ci check this
can you please rebase
/lgtm
/approve
|