| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
1648462068 | 🛑 Rideable.ch is down
In b386273, Rideable.ch (https://www.rideable.ch) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Rideable.ch is back up in af12911.
| gharchive/issue | 2023-03-30T23:41:13 | 2025-04-01T06:44:37.736722 | {
"authors": [
"jonock"
],
"repo": "jonock/rideable_upptime",
"url": "https://github.com/jonock/rideable_upptime/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
100999499 | and again, use simple-get
I've definitely realized that it's better to use simple-get plus concat-stream. It's really small — even smaller than github-request.
We should also return callback(err, buffer, stream) without JSON.parse, because GitHub has endpoints that don't return JSON, like /markdown, and JSON.parse crashes when you try it on those responses. Returning the buffer instead of JSON.parse(res) would free us up. Exposing the readable stream is a good thing too.
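A minimal sketch of that idea (the function name and the Content-Type check are illustrative assumptions, not github-base's actual API): only parse when the response says it is JSON, otherwise hand the raw buffer back untouched.

```javascript
// Hedged sketch: decide whether to parse based on the Content-Type header
// instead of unconditionally calling JSON.parse on every response.
function handleResponse(contentType, buffer, callback) {
  // Endpoints like /markdown return text, so JSON.parse would throw there.
  if (/application\/json/.test(contentType || '')) {
    try {
      return callback(null, JSON.parse(buffer.toString('utf8')));
    } catch (err) {
      return callback(err, buffer); // hand the raw buffer back on bad JSON
    }
  }
  return callback(null, buffer); // non-JSON endpoint: the untouched buffer
}
```

The caller then decides what to do with the buffer, which is exactly the flexibility the comment above asks for.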
Another thing is the duplication between defaulting opts.apiurl and sending everything from data into the body of the request, which isn't okay in my view. Right now it's possible to define placeholders, headers, body, method and other options through the data argument of .post(path, data, cb), but they also all get passed into the body of the request, e.g.:
var data = {
owner: 'jonschlinkert',
repo: 'github-base',
headers: {
foobar: 'baz'
},
title: 'foo bar',
body: 'baz qux @chorks woohoo',
body: {
bar: 'qux'
}
}
github.post('/repos/:owner/:repo/issues', data, console.log)
All that data would be stringified and sent with the request (from index.js's .request() directly to the github-request pkg), even though only two of its fields are needed for the request body.
And here we can see another problem: what does body mean? Some endpoints expect that parameter, like the "Create an issue" endpoint in the example above. But in utils.js, one check tests that data exists, while another tests that opts.body exists — which is actually data.body, because of the extend.
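The separation being suggested could be sketched like this (the set of option keys is an assumption for illustration, not github-base's real behavior): pull request-level options out of data so that only genuine body fields get sent.

```javascript
// Rough sketch: split the user-supplied `data` object into request options
// (headers, method, ...) and the actual request body fields.
var OPTION_KEYS = ['apiurl', 'headers', 'method', 'body']; // assumed names

function splitData(data) {
  var options = {};
  var body = {};
  Object.keys(data).forEach(function (key) {
    if (OPTION_KEYS.indexOf(key) !== -1) {
      options[key] = data[key]; // request-level settings
    } else {
      body[key] = data[key];    // placeholder/body fields: owner, repo, title...
    }
  });
  return { options: options, body: body };
}
```

With a split like this, only body would be stringified into the request, and headers or method settings would never leak into the payload.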
And exposing the token or username+password directly in the server request... I don't think so.
Other than that.. it's awesome, as always!
all good points, sorry you did some work on this that didn't get merged in, I already had most of the refactoring done. I think you're right about simple-get, I'd be happy to merge that in if you want to do a pr.
fwiw, because of the scope and amount of things I need to get accomplished, I try as hard as I can to make libs awesome - but I don't have as much time as I'd like to explore every possible alternative. So I really appreciate the feedback and help.
sorry you did some work on this that didn't get merged in, I already had most of the refactoring done.
Yea, it's normal, no problem.
I'd be happy to merge that in if you want to do a pr.
:+1:
but I don't have as much time as I'd
same here lol
@jonschlinkert okey, it's ready! :tada:
I'll PR when I add tests and reach 100% coverage with a test API server. :)
:+1:
so... closing this, because #4 merged.
/cc @jonschlinkert release and publish
oops meant to click comment. it's on the way
| gharchive/issue | 2015-08-14T12:17:33 | 2025-04-01T06:44:37.743827 | {
"authors": [
"jonschlinkert",
"tunnckoCore"
],
"repo": "jonschlinkert/github-base",
"url": "https://github.com/jonschlinkert/github-base/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1005984636 | Proposal to change the wording used for algorithm state management
Currently, when a specific algorithm is reported in the bamboo forest, its state is changed to DELETED regardless of whether it actually violates the review rules, and as I understand it, a review then decides whether to change it back to ACCEPTED or to delete it.
The state name DELETED can give the impression that the algorithm has already been deleted. To make it clearer that it is a reported algorithm under review,
I propose changing the state wording from DELETED to REPORTED.
Are you referring to the labels shown in the Discord webhook?
If we change DELETED to REPORTED, the states of all existing posts would also have to be changed, so it would likely take a considerable amount of time.
If this is only about what users see, that has already been resolved :D
Thanks for the good report
Are you referring to the labels shown in the Discord webhook?
If we change DELETED to REPORTED, the states of all existing posts would also have to be changed, so it would likely take a considerable amount of time.
If this is only about what users see, that has already been resolved :D
Thanks for the good report
Yes, I mean the labels shown in Discord.
As for the actual state names, I don't know the project structure, so it's hard for me to comment in detail.
Thanks for the great service 😃😃
| gharchive/issue | 2021-09-24T01:18:26 | 2025-04-01T06:44:37.750642 | {
"authors": [
"iseolin76",
"key-del-jeeinho",
"sunrabbit123"
],
"repo": "joog-lim/bamboo-front",
"url": "https://github.com/joog-lim/bamboo-front/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1515457538 | /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found
acme-dns version 1.0 does not appear to run on operating systems that have a libc older than 2.32.
When trying to start acme-dns on an OS that has an older libc version, the following errors are shown, followed by an exit with code 1:
/usr/local/bin/acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/bin/acme-dns)
/usr/local/bin/acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /usr/local/bin/acme-dns)
/usr/local/bin/acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/local/bin/acme-dns)
This issue occurs on both Ubuntu 20.04 and Debian 11, which are both reasonably new operating systems, and both have libc 2.31.
Weird that nobody reported this before today :)
I guess everybody just runs it in Docker... In any case, if you need it on that platform you can easily compile it yourself:
git clone https://github.com/joohoi/acme-dns
cd acme-dns
export GOPATH=/tmp/acme-dns
go build
Similar error on CentOS 7:
/usr/local/bin/acme-dns: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /usr/local/bin/acme-dns)
/usr/local/bin/acme-dns: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/bin/acme-dns)
/usr/local/bin/acme-dns: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /usr/local/bin/acme-dns)
/usr/local/bin/acme-dns: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /usr/local/bin/acme-dns)
Similar error on Debian 11, which was (until yesterday) the most recent stable Debian version available:
$ ./acme-dns
./acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./acme-dns)
./acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by ./acme-dns)
./acme-dns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./acme-dns)
Debian 11 is maintained until at least 07/2024.
Looks to be Go, specifically: https://github.com/golang/go/issues/58550
| gharchive/issue | 2023-01-01T10:25:51 | 2025-04-01T06:44:37.755133 | {
"authors": [
"HansAdema",
"aduzsardi",
"laf0rge",
"mmiller7",
"webprofusion-chrisc"
],
"repo": "joohoi/acme-dns",
"url": "https://github.com/joohoi/acme-dns/issues/328",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
192602659 | "Outside the list" is a bit pointless in most cases
Paying outside the list is a bit pointless in most cases. Almost always you want to do it 'via wbw via X'. It only has to be different when the person picking up the order isn't in wbw: then everything has to be cash.
Proposal: "outside the list" becomes an ordering mode instead of a choice for each individual person, roughly the way eetvoudig works.
Sure, good point. This was the easiest way to support the use case where a group that doesn't (or only partly) overlaps with the Technicie can use eetFestijn without getting tangled up in the wiebetaaltwat integration. But it could indeed be done much more cleanly by introducing that choice earlier on. I've toyed a few times with the idea of merging eetvoudig and eetFestijn further, but it's mostly effort and at the moment I don't see any concrete limitations.
The biggest problem is that at, say, a general members' meeting (ALV) you get a 50/50 list of people who are and aren't on the list, and then someone who isn't on the list does the pickup — with the result that it all gets very confusing.
Right, a problem indeed arises when the person picking up isn't on the list while there are people who'd like to pay via the list. Does that happen? Probably.
As said, that's often when the site is used for a non-eetFestijn scenario, such as ALVs.
| gharchive/issue | 2016-11-30T15:54:11 | 2025-04-01T06:44:37.913527 | {
"authors": [
"joostrijneveld",
"thomwiggers"
],
"repo": "joostrijneveld/eetFestijn",
"url": "https://github.com/joostrijneveld/eetFestijn/issues/22",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
147480581 | Getting "User did not grant permission" on Ubuntu 14.04 no matter what
Hi, the pkexec dialog is correctly being shown, however no matter the password I pass, I get this error.
After some investigation, it turns out that users can't run graphical applications with pkexec without explicitly configuring it to do so.
However, it turns out I can work around this by running pkexec like this: pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY. See http://askubuntu.com/a/332847.
This almost works, since relative arguments to the program passed through pkexec get resolved from /root/, and I have no idea how to solve this (tried everything I could think of).
I'm sending a PR to inherit the environment variables mentioned above, and will use absolute paths for my arguments unless you can think about anything else.
Thanks @jviotti
I should have made it clearer, but the Readme (under Behavior) mentions that sudo-prompt should only be used to run non-graphical commands, for the same reason that the sudo command should only be used to run non-graphical commands. There is also a link in the Readme explaining the reason for this.
I think that's why pkexec requires an explicit option in order to run graphical applications, and we rely on that when using the pkexec binary on Linux.
When we use gksudo, we actually pass the sudo-mode option to actively prevent the command being used to launch graphical applications: https://github.com/jorangreef/sudo-prompt/blob/master/index.js#L160
With sudo-prompt I tried to keep the surface area as small and focused as possible, to do one thing well. The idea is that it should mimic sudo and the uses for sudo as much as possible. It's sudo but with a graphical password and that's all.
I was thinking we could support passing ENV variables, but the reason for not doing so this far is because I am not sure how much this can be supported across the various sudo binaries we use (and the way in which we use them).
For example, on OS X, we currently set the sudo timestamp using an applet which runs an Apple Script using administrator privileges, and only after that do we call the command using sudo (which then does not require a password because there is an existing session). I am planning on adding support for OS X systems which do not support this, by getting the Apple Script to launch a script of the user's command instead, and I am not sure how easy it will be getting ENV variables to work here.
Hi @jorangreef
Thanks for your response. According to the post you link in the "Behaviour" section, using sudo is not recommended to open graphical applications, but using graphical variants, like gksudo, or kdesudo is ok:
Just be consistent in suggesting good practice: gksudo and kdesudo for graphical applications. sudo for command-line applications.
We use sudo-prompt at https://github.com/resin-io/etcher to provide application-wide elevation in OS X, which works fine (OS X is smart enough to open graphical applications running as sudo in the current graphical session), therefore the ENV workaround should only be necessary on Linux.
We did look into only providing elevation when necessary, however we had trouble forking an elevated Electron process in a packaged application, given that when packaged, the Electron executable seems to be locked in to running a specific application (the one that it has been packaged with). Do you know a solution to this problem?
Yes, gksudo or kdesudo were meant to open graphical applications, but sudo-prompt tries to mimic sudo itself (except for the graphical prompt) and I would like for it to provide the same guarantees as sudo as far as possible.
If we were to support graphical applications, there would be a few more things beyond ENV we might need to do, and it would be difficult to handle all the edge cases for different applications running as root, see: http://askubuntu.com/questions/270006/why-should-users-never-use-normal-sudo-to-start-graphical-applications
We did look into only providing elevation when necessary, however we had trouble forking an elevated Electron process in a packaged application, given that when packaged, the Electron executable seems to be locked in to running a specific application (the one that it has been packaged with). Do you know a solution to this problem?
By packaged application, do you mean a Linux package, e.g. something distributed through apt-get?
I would try to avoid elevating the entire application if possible. Rather restrict sudo access to the specific shell commands that need it. That should be much safer.
I see, makes sense.
By packaged application, do you mean a Linux package, e.g. something distributed through apt-get?
Yeah, that's right.
I would try to avoid elevating the entire application if possible. Rather restrict sudo access to the specific shell commands that need it. That should be much safer.
The code that requires elevation is a NodeJS script, which opens a device file for writing purposes. In order to run this script we would call a command with electron or node, e.g. node write.js or electron write.js (with the RUN_AS_NODE setting); however the first one assumes node is installed on the system, and in the second case, the electron binary in the packaged application seems to ignore command line arguments.
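The second option mentioned above uses Electron's documented ELECTRON_RUN_AS_NODE environment variable; a sketch (the helper itself is hypothetical, and — as the follow-up notes — packaged Electron ignored the script argument at the time):

```javascript
// Hypothetical helper: build the invocation for running a script with the
// packaged Electron binary acting as plain Node (ELECTRON_RUN_AS_NODE=1).
function buildNodeInvocation(electronPath, script, baseEnv) {
  var env = Object.assign({}, baseEnv, { ELECTRON_RUN_AS_NODE: '1' });
  return { command: electronPath, args: [script], env: env };
}

// Usage (with child_process):
//   var inv = buildNodeInvocation('/opt/app/electron', 'write.js', process.env);
//   require('child_process').spawn(inv.command, inv.args, { env: inv.env });
```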
Thanks, I think I understand a bit better.
Have you checked with @zcbenz regarding the electron binary ignoring command line arguments when packaged?
It would add an extra 9 MB or so, but what about including a node binary with etcher?
@jorangreef I will. I tested this long ago, so maybe it was an issue that is fixed by now, so I'll give it a go with later versions just in case.
Regarding including node, we thought about it, but we already have lots of complaints regarding the application bundle size, so I guess that's not an option for us.
Yeah, the issue (or feature?) still happens in Electron v0.36.11, in OS X at least.
I think the best thing then would be for you to target pkexec directly when on Linux.
I'll take your suggestion, thanks a lot!
| gharchive/issue | 2016-04-11T16:19:13 | 2025-04-01T06:44:37.928619 | {
"authors": [
"jorangreef",
"jviotti"
],
"repo": "jorangreef/sudo-prompt",
"url": "https://github.com/jorangreef/sudo-prompt/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
530520451 | Auto link building
Links get auto built
Link can't block off the source, can't be on a road, and must be adjacent to a spot where a miner could stand
Possible strategy:
Find possible miner positions, go through miner positions and find potential link spots for each miner spot. If link spot is legal, place it and break from loop.
if there is already a link, the room shouldn't build one. alternatively (don't do this) the planner can destroy the old links
case study - E11 S9 (W = wall, S = source, L = link, C = creep):
W S S W
L C C L
decided not to handle the double source case
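The strategy above could be sketched like this (the helpers and the room object's shape are stand-ins for illustration, not the real Screeps API):

```javascript
// Sketch: walk miner positions, then candidate link spots per miner
// position; place the first legal spot and stop.
function placeSourceLink(room, source) {
  if (room.hasLink) return null; // a link already exists: build nothing
  var minerSpots = findMinerPositions(room, source);
  for (var i = 0; i < minerSpots.length; i++) {
    var linkSpots = findLinkSpots(room, minerSpots[i]);
    for (var j = 0; j < linkSpots.length; j++) {
      if (isLegalLinkSpot(room, linkSpots[j])) {
        return linkSpots[j]; // "place it and break from loop"
      }
    }
  }
  return null;
}

// Placeholder helpers (assumptions, not real Screeps API calls):
function findMinerPositions(room, source) {
  return room.minerPositions[source] || []; // spots adjacent to the source
}
function findLinkSpots(room, minerPos) {
  return room.linkSpots[minerPos] || []; // tiles adjacent to a miner spot
}
function isLegalLinkSpot(room, pos) {
  // legal = not on a road (a real check would also cover source blocking)
  return room.roads.indexOf(pos) === -1;
}
```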
| gharchive/issue | 2019-11-30T01:57:04 | 2025-04-01T06:44:37.944689 | {
"authors": [
"JonathanSafer",
"jordansafer"
],
"repo": "jordansafer/screeps",
"url": "https://github.com/jordansafer/screeps/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1780007331 | Can this be used remotely?
As the title says: the remote server runs Linux. How can I use a local Windows machine to call the remote compute power?
参考 https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples/ChatGPT-Next-Web
There is an API:
https://github.com/josStorer/RWKV-Runner/blob/master/README_ZH.md#小贴士你可以在服务器部署backend-python然后将此程序仅用作客户端在设置的api-url中填入你的服务器地址
| gharchive/issue | 2023-06-29T02:18:46 | 2025-04-01T06:44:38.004076 | {
"authors": [
"josStorer",
"kylincaster"
],
"repo": "josStorer/RWKV-Runner",
"url": "https://github.com/josStorer/RWKV-Runner/issues/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2917196 | Extremely high resources usage with rails 3.2.0rc2
Hi,
I tried to use footnotes with rails 3.2.0rc2 on ruby 1.9.2 and 1.9.3, but it causes extremely high usage of CPU and RAM (mostly RAM). A request to a view with a simple nested form used more than 3.5GB of RAM and caused system hangs.
Same thing here. Well, not quite as bad, but memory usage of well over a GB.
Same here
Removing Assigns from the notes fixed the issue for me:
Footnotes::Filter.notes = [:session, :cookies, :params, :filters, :queries, :log]
| gharchive/issue | 2012-01-20T21:35:44 | 2025-04-01T06:44:38.025420 | {
"authors": [
"AliEzer",
"gkochan",
"styx",
"tylereaves"
],
"repo": "josevalim/rails-footnotes",
"url": "https://github.com/josevalim/rails-footnotes/issues/70",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2282431695 | 🛑 Taxidrivers Cuba is down
In 0fea2db, Taxidrivers Cuba (https://taxidriverscuba.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Taxidrivers Cuba is back up in 6c31975 after 18 minutes.
| gharchive/issue | 2024-05-07T06:56:20 | 2025-04-01T06:44:38.073858 | {
"authors": [
"josmiguel92"
],
"repo": "josmiguel92/upptime",
"url": "https://github.com/josmiguel92/upptime/issues/181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1144930370 | 🛑 Taxidrivers Cuba is down
In bf497f0, Taxidrivers Cuba (https://taxidriverscuba.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Taxidrivers Cuba is back up in 302baeb.
| gharchive/issue | 2022-02-20T06:01:52 | 2025-04-01T06:44:38.076287 | {
"authors": [
"josmiguel92"
],
"repo": "josmiguel92/upptime",
"url": "https://github.com/josmiguel92/upptime/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2443795005 | Why won't my plugin run?
I installed it but keep getting errors:
ThermodeiMac:emqx_plugin rnd$ cd emqx_plugin_mongodb
ThermodeiMac:emqx_plugin_mongodb rnd$ make rel _build/default/emqx_plugrel/emqx_plugin_mongodb-<vsn>.tar.gz
-bash: vsn: No such file or directory
ThermodeiMac:emqx_plugin_mongodb rnd$ make rel _build/default/emqx_plugrel/emqx_plugin_mongodb-v1.0.0.tar.gz
Please paste the complete error log.
| gharchive/issue | 2024-08-02T01:39:40 | 2025-04-01T06:44:38.078077 | {
"authors": [
"ArdWang",
"jostar-y"
],
"repo": "jostar-y/emqx_plugin_mongodb",
"url": "https://github.com/jostar-y/emqx_plugin_mongodb/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
838324734 | Multiline entry summaries
This is a very extreme suggestion, but I thought it's worth mentioning to see what other people think.
I've been putting longer descriptions alongside each of my entries, and it occurred to me it'd be nice to be able to integrate the whole thing with markdown, such that the klog format is a subset of markdown.
Here's one way it could work: an entry is a bulleted list with a time-code, and the entry-description is anything which is attached to that bullet (including sub-bullets, etc.)
# 2021-03-22 Monday
- 07:00-08:30 #writing, wrote to Arash, Jon, Per.
- 08:30-10:30
- Found Hall's marriage theorem, which says saxophone and diagonal representation of inequalities are equivalent, which allowed me to prove the diagonal lemma & so the theorem.
- Was able to cut out a bunch of extra material from the paper.
- Including cutting all the corollaries.
- 13:00-18:00 Read
> "It is in the chemistry of these subtle substances, these curious precipitates and explosive gases which are suddenly formed by the contact of mind with mind, that James is unequalled"
# 2021-03-23 Tuesday
Terrible hangover.
- 14:00 coffee
- 14:00-13:00 read my letters, wrote to my mother.
I have been thinking about that as well already and also find myself using bullet points in the summaries. (Not indented, though.) I agree the summaries could allow for more flexibility, so it’s good that you bring this up so that it can be discussed further.
A few general upfront thoughts: I think there is a natural overlap between the three concerns of time tracking, todo’s and notes keeping. It’s a tricky balancing act to decide where to draw the line, and I’d like to keep klog focussed on time tracking primarily. What I like about the current summary style is that it feels “cheap” (read: simple to memorize and understand) with a quite straightforward (read: basically non-existent) syntax. I also don’t want to force users into a certain usage scheme but rather allow them to make up their own according to individual needs. There might be people who don’t use summaries at all, but who only want to enter the “naked” time data.
However, on the other hand, users shouldn’t feel unnecessarily limited by the summaries, and there are indeed a few shortcomings at the moment. I feel that making klog a subset of markdown probably goes a bit too far, but it should be possible to use something like markdown/todo.txt inside the summaries more easily.
Two things have been floating around in my head:
Multiline entry summaries
As you pointed out, the entry summaries (the text directly behind the time values) can only be one-liners. It could be possible, though, to have multi-line text as well, which could be achieved by expanding on the python-style indentation like so:
2020-01-01
4h Was able to cut out a bunch of extra material from the paper
including cutting all the corollaries
1h
- this
- that
So you would basically be able to continue the text on the next line, with increased indentation level.
Allow indentation in record summaries
Currently there can’t be indentation in the record summaries (the one underneath the date), because indentation is reserved for the entries. There could be some sort of marker though, similar to blockquote syntax in markdown:
2020-01-01
| - Did this and that
| - Includes this other thing
| - And this one
8:00 - ? Just started
You could do this right now already, but of course the | (or whatever character is used) would be part of the copy then. But the spec could reserve a special marker that gets stripped away by the parser. I'd find it important for this to be optional, in order to not lose the simplicity for use-cases where this kind of flexibility is not needed.
Happy to hear that -- and happy to give thoughts on any ideas you might have.
You mentioned in a previous post "I feel that making klog a subset of markdown probably goes a bit too far." Could you expand a bit on what the drawbacks of that would be?
There would be many advantages (I think): you could use existing markdown highlighting, processors, & you'd naturally have all the features of markdown.
It certainly would break backwards-compatibility, and it would probably require a reasonable amount of work to implement, but are there drawbacks specific to the format?
I think building on top of markdown would change the notion of klog. Currently, klog is basically a data format that happens to be human-readable. If it was markdown, I think it would feel more like a journal that allows for time tracking additionally. That’s also what I meant earlier: I think there is a natural overlap between time tracking and journaling, but it’s a philosophical question where to put the focus on.
From a technical point of view it might be tricky to define the file format so that it isn’t ambiguous or hard to memorize, because with markdown you generally have the full freedom to structure the document as you please. So a klog-flavoured markdown would either a) have to be very restrictive and narrow down what is allowed and how it’s supposed to be structured. In this case I think you’d lose a lot of the benefits of using markdown, however. Or b) the rules would be broad and permissive, but that might lead to complications or ambiguities. There could be text in between sections, or other headlines, links, footnotes, etc. Consider the following example: which of the data would be understood? And would the rest be silently skipped, or would klog raise errors or warnings? (Because it might have been done on purpose, but it also could have been by accident).
# 2021-03-22 Monday
- 07:00-08:30 #writing, wrote to Arash, Jon, Per.
- [1h Tennis](https://my-tennis-club.com)
# Tuesday, 2021-03-23
1h Workout
8:00 - 9:00 Tennis again
## 2021-03-24 (Wednesday)
- I did the following things:
- 09:00-10:00 Breakfast
- 1h Workout
I found your initial example quite interesting, because I use klog completely differently. My summaries (if I write them at all) are rarely longer than 5 words, and I treat them more like a reminder than like a journal with detailed descriptions. So I perceive summaries basically as short annotations of the data for future reference. If I want to take more extensive notes about something, I usually do that separately. It’s interesting anyway to see how the habits and preferences are different.
| gharchive/issue | 2021-03-23T03:56:36 | 2025-04-01T06:44:38.095013 | {
"authors": [
"jotaen",
"tecunningham"
],
"repo": "jotaen/klog",
"url": "https://github.com/jotaen/klog/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
46290338 | Makefile fails with dash as /bin/sh
There is some bash parameter expansion in the makefile that fails if /bin/sh isn't bash. Since bash is required it should be declared in the makefile. Setting the following variable should do the trick.
SHELL := /bin/bash
This is fixed in ed76919ae72bcd87efb8206625a197da2c428063.
This commit introduces the following variables and their defaults in the Makefile.
BASH = /usr/bin/bash
SHELL := $(BASH)
Thus you can set the location of Bash that the installed files will refer to (in BASH) and the location of Bash to be used by make (in SHELL) in case it differs.
| gharchive/issue | 2014-10-20T15:51:55 | 2025-04-01T06:44:38.097696 | {
"authors": [
"joukewitteveen",
"kevincox"
],
"repo": "joukewitteveen/xlogin",
"url": "https://github.com/joukewitteveen/xlogin/issues/3",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
681049519 | Updates README
Small update to clean up the docs a bit.
Limited to listing the Publisher and pointing at the underlying property.
Fixes #11
| gharchive/pull-request | 2020-08-18T13:37:56 | 2025-04-01T06:44:38.105055 | {
"authors": [
"piterwilson"
],
"repo": "jozsef-vesza/AVFoundation-Combine",
"url": "https://github.com/jozsef-vesza/AVFoundation-Combine/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
350241320 | User base64 instead of atob for fastboot support
Fixes https://github.com/jpadilla/ember-simple-auth-token/issues/247
window.base64 from https://github.com/simplabs/ember-simple-auth/blob/master/addon/authenticators/oauth2-password-grant.js#L122
@jpadilla Yes, but it is probably not a good idea to depend on that undocumented behavior. I will reproduce internally. Good point!
@jpadilla I think that is better.
@musaffa any chance you could test this out and see if it resolves your original issue?
Yes, buffer must be added to fastbootDependencies. We should add some documentation about this.
Using the FastBoot global should be no different than require:
From your Ember.js app, you can run FastBoot.require() to require a package. This is identical to the CommonJS require except it checks all requests against the whitelist first.
I'm hesitant to use the FastBoot global in this library.
A simple (atob) conditional check doesn't work.
What does it do?
Unable to require module 'buffer' because it was not in the whitelist
That seems like a good error message to me... however, it will not be displayed if the FastBoot global is not used. So a custom error message makes sense.
Update incoming.
A simple (atob) conditional check doesn't work.
What does it do?
(typeof atob !== 'undefined') is the right way to check for the presence of atob. The simple check currently present in the PR doesn't work.
IMHO you should test the changes in a Fastboot environment.
@musaffa I did... and it worked fine. I did add buffer to the fastbootDependencies.
const decode = str => {
if (typeof atob === 'function') {
return atob(str);
} else if (typeof FastBoot === 'object') {
try {
const buffer = FastBoot.require('buffer');
return buffer.Buffer.from(str, 'base64').toString('utf-8');
} catch (err) {
throw new Error('buffer must be available for decoding base64 strings in FastBoot. Make sure to add buffer to your fastbootDependencies.');
}
} else {
throw new Error('Neither atob nor the FastBoot global are available. Unable to decode base64 strings.');
}
};
I think that should be good.
LGTM
@jpadilla Thoughts on this?
| gharchive/pull-request | 2018-08-14T00:38:24 | 2025-04-01T06:44:38.141386 | {
"authors": [
"fenichelar",
"jpadilla",
"musaffa"
],
"repo": "jpadilla/ember-simple-auth-token",
"url": "https://github.com/jpadilla/ember-simple-auth-token/pull/248",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
168307326 | [Feature] Add TLS support
Today, when filling in the Address of the overseer.Config type, we open a plain TCP listener. It would be great to add TLS support.
See https://golang.org/pkg/crypto/tls/#NewListener
tlsListener := tls.NewListener(tcpListener, &tls.Config{ ... })
Thanks! :smile:
| gharchive/issue | 2016-07-29T11:43:18 | 2025-04-01T06:44:38.163819 | {
"authors": [
"jpillora",
"rafaeljusto"
],
"repo": "jpillora/overseer",
"url": "https://github.com/jpillora/overseer/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
614663159 | Project with JPype 0.7.4 hangs, with 0.7.3 it does not - problematic commit is 9b0cfe01
In my project https://github.com/hexhex/hexlite I use JPype, and since changing to 0.7.4 I nearly always get hangs in my automated tests. Reverting to 0.7.3 solved the issue.
Symptoms: the program hangs and even CTRL+C cannot interrupt it. Killing is possible with CTRL+Z followed by jobs -p | xargs kill -9.
I have bisected the problem in the jpype repository and the problematic commit is 9b0cfe01
Is there any way I can help to find out why it hangs?
How you can reproduce it:
Cloning the master of https://github.com/hexhex/hexlite and do in a new conda environment the following:
install jpype with
$ python setup.py sdist
$ pip install dist/*
install prerequisites of hexlite
$ conda install -c potassco clingo
build the java part of hexlite
mvn clean compile package install
install hexlite in develop mode and run the test script that demonstrates some of the hanging tests
$ python setup.py develop
$ ./tests.sh
the last step might succeed in a few seconds
or it hangs
The most likely issue is non-daemon threads. Prior to 0.7.4, JPype was terminating Java using an effective "halt". This means the JVM would be terminated even if user threads were still active and runnable. When we switched to the more correct shutdown sequence, any threads spun up by the user that are not terminated by the user or marked as daemon are going to cause shutdown to hang.
This unfortunately is going to expose a lot of bugs in users' code. You can use jstack to view the active threads, or use a stack walker to evaluate the state of threads prior to the end of your program's execution, which should show the threads that need to be dealt with.
For some reason the attach thread as daemon is missing from core. I am going to need to investigate and get back to you.
For now I built it with --enable-coverage (just because I assumed this activates debug symbols) and I get the following backtraces during the hang:
jstack -F backtrace
Server compiler detected. [0/1895]JVM version is 25.152-b12 Deadlock Detection: No deadlocks found. Thread 20792: (state = BLOCKED) - org.jpype.ref.JPypeReferenceQueue.removeHostReference(long, long, long) @bci=0 (Interpreted frame) - org.jpype.ref.JPypeReferenceSet.flush() @bci=83, line=101 (Interpreted frame) - org.jpype.ref.JPypeReferenceQueue.stop() @bci=63, line=102 (Interpreted frame) - org.jpype.JPypeContext.shutdown() @bci=218, line=228 (Interpreted frame) - org.jpype.JPypeContext.access$100(org.jpype.JPypeContext) @bci=1, line=67 (Interpreted frame) - org.jpype.JPypeContext$1.run() @bci=3, line=115 (Interpreted frame) - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame) Thread 20784: (state = BLOCKED) - java.lang.Thread.exit() @bci=0, line=754 (Interpreted frame) Thread 20783: (state = BLOCKED) - java.lang.Object.wait(long) @bci=0 (Interpreted frame) - java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Interpreted frame) - java.lang.ref.ReferenceQueue.remove() @bci=2, line=164 (Interpreted frame) - java.lang.ref.Finalizer$FinalizerThread.run() @bci=36, line=209 (Interpreted frame) Thread 20782: (state = BLOCKED) - java.lang.Object.wait(long) @bci=0 (Interpreted frame) - java.lang.Object.wait() @bci=2, line=502 (Interpreted frame) - java.lang.ref.Reference.tryHandlePending(boolean) @bci=54, line=191 (Interpreted frame) - java.lang.ref.Reference$ReferenceHandler.run() @bci=1, line=153 (Interpreted frame)
gdb backtrace
(gdb) bt #0 0x00007f316ffedf85 in futex_abstimed_wait_cancelable (private=<optimized out>, abstime=0x7fffe0181310, expected=0, futex_word=0x7f317079d938 <_PyRuntime+1336>) at ../sysdeps/unix/sysv/linux/futex-internal.h:205 #1 __pthread_cond_wait_common (abstime=0x7fffe0181310, mutex=0x7f317079d940 <_PyRuntime+1344>, cond=0x7f317079d910 <_PyRuntime+1296>) at pthread_cond_wait.c:539 #2 __pthread_cond_timedwait (cond=0x7f317079d910 <_PyRuntime+1296>, mutex=0x7f317079d940 <_PyRuntime+1344>, abstime=0x7fffe0181310) at pthread_cond_wait.c:667 #3 0x00007f317056cf35 in PyCOND_TIMEDWAIT (cond=0x7f317079d910 <_PyRuntime+1296>, mut=0x7f317079d940 <_PyRuntime+1344>, us=5000) at /tmp/build/80754af9/python_1585000375785/work/Python/condvar.h:90 #4 take_gil (tstate=0x7fffd86590b0) at /tmp/build/80754af9/python_1585000375785/work/Python/ceval_gil.h:208 #5 PyEval_RestoreThread () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:271 #6 0x00007f316d7bf2f9 in JPContext::shutdownJVM (this=0x7fffd8859460) at native/common/jp_context.cpp:293 #7 0x00007f316d7f79b4 in PyJPModule_shutdown (obj=<optimized out>) at native/python/pyjp_module.cpp:263 #8 0x00007f31705b66c1 in _PyMethodDef_RawFastCallKeywords () at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:633 #9 0x00007f31705b6901 in _PyCFunction_FastCallKeywords (func=0x7f316e3b2d70, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>) at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:734 #10 0x00007f3170621d0c in call_function (kwnames=0x0, oparg=0, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:4568 #11 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:3093 #12 0x00007f31705b5d5b in function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:283 #13 _PyFunction_FastCallKeywords () at 
/tmp/build/80754af9/python_1585000375785/work/Objects/call.c:408 #14 0x00007f3170621979 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>) at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:4616 #15 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:3093 #16 0x00007f317056730b in function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=0x7f316e40c270) at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:283 #17 _PyFunction_FastCallDict () at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:322 #18 0x00007f3170677934 in atexit_callfuncs () at /tmp/build/80754af9/python_1585000375785/work/Modules/atexitmodule.c:87
For now I built it with --enable-coverage (just so because I assumed this activates debug symbols) and I have the following backtrace during the hang:
jstack -F backtrace
Attaching to process ID 20770, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.152-b12
Deadlock Detection:
No deadlocks found.
Thread 20792: (state = BLOCKED)
- org.jpype.ref.JPypeReferenceQueue.removeHostReference(long, long, long) @bci=0 (Interpreted frame)
- org.jpype.ref.JPypeReferenceSet.flush() @bci=83, line=101 (Interpreted frame)
- org.jpype.ref.JPypeReferenceQueue.stop() @bci=63, line=102 (Interpreted frame)
- org.jpype.JPypeContext.shutdown() @bci=218, line=228 (Interpreted frame)
- org.jpype.JPypeContext.access$100(org.jpype.JPypeContext) @bci=1, line=67 (Interpreted frame)
- org.jpype.JPypeContext$1.run() @bci=3, line=115 (Interpreted frame)
- java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
Thread 20784: (state = BLOCKED)
- java.lang.Thread.exit() @bci=0, line=754 (Interpreted frame)
Thread 20783: (state = BLOCKED)
- java.lang.Object.wait(long) @bci=0 (Interpreted frame)
- java.lang.ref.ReferenceQueue.remove(long) @bci=59, line=143 (Interpreted frame)
- java.lang.ref.ReferenceQueue.remove() @bci=2, line=164 (Interpreted frame)
- java.lang.ref.Finalizer$FinalizerThread.run() @bci=36, line=209 (Interpreted frame)
Thread 20782: (state = BLOCKED)
- java.lang.Object.wait(long) @bci=0 (Interpreted frame)
- java.lang.Object.wait() @bci=2, line=502 (Interpreted frame)
- java.lang.ref.Reference.tryHandlePending(boolean) @bci=54, line=191 (Interpreted frame)
- java.lang.ref.Reference$ReferenceHandler.run() @bci=1, line=153 (Interpreted frame)
gdb backtrace
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 20770
[New LWP 20771]
[New LWP 20772]
[New LWP 20773]
[New LWP 20774]
[New LWP 20775]
[New LWP 20776]
[New LWP 20777]
[New LWP 20778]
[New LWP 20779]
[New LWP 20780]
[New LWP 20782]
[New LWP 20783]
[New LWP 20784]
[New LWP 20785]
[New LWP 20786]
[New LWP 20787]
[New LWP 20788]
[New LWP 20789]
[New LWP 20792]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f316ffedf85 in futex_abstimed_wait_cancelable (private=<optimized out>, abstime=0x7fffe0181310, expected=0,
futex_word=0x7f317079d938 <_PyRuntime+1336>) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
205 ../sysdeps/unix/sysv/linux/futex-internal.h: No such file or directory.
(gdb) bt
#0 0x00007f316ffedf85 in futex_abstimed_wait_cancelable (private=<optimized out>, abstime=0x7fffe0181310, expected=0,
futex_word=0x7f317079d938 <_PyRuntime+1336>) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 __pthread_cond_wait_common (abstime=0x7fffe0181310, mutex=0x7f317079d940 <_PyRuntime+1344>, cond=0x7f317079d910 <_PyRuntime+1296>)
at pthread_cond_wait.c:539
#2 __pthread_cond_timedwait (cond=0x7f317079d910 <_PyRuntime+1296>, mutex=0x7f317079d940 <_PyRuntime+1344>, abstime=0x7fffe0181310)
at pthread_cond_wait.c:667
#3 0x00007f317056cf35 in PyCOND_TIMEDWAIT (cond=0x7f317079d910 <_PyRuntime+1296>, mut=0x7f317079d940 <_PyRuntime+1344>, us=5000)
at /tmp/build/80754af9/python_1585000375785/work/Python/condvar.h:90
#4 take_gil (tstate=0x7fffd86590b0) at /tmp/build/80754af9/python_1585000375785/work/Python/ceval_gil.h:208
#5 PyEval_RestoreThread () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:271
#6 0x00007f316d7bf2f9 in JPContext::shutdownJVM (this=0x7fffd8859460) at native/common/jp_context.cpp:293
#7 0x00007f316d7f79b4 in PyJPModule_shutdown (obj=<optimized out>) at native/python/pyjp_module.cpp:263
#8 0x00007f31705b66c1 in _PyMethodDef_RawFastCallKeywords () at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:633
#9 0x00007f31705b6901 in _PyCFunction_FastCallKeywords (func=0x7f316e3b2d70, args=<optimized out>, nargs=<optimized out>, kwnames=<optimized out>)
at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:734
#10 0x00007f3170621d0c in call_function (kwnames=0x0, oparg=0, pp_stack=<synthetic pointer>)
at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:4568
#11 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:3093
#12 0x00007f31705b5d5b in function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=<optimized out>)
at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:283
#13 _PyFunction_FastCallKeywords () at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:408
#14 0x00007f3170621979 in call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>)
at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:4616
#15 _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1585000375785/work/Python/ceval.c:3093
#16 0x00007f317056730b in function_code_fastcall (globals=<optimized out>, nargs=0, args=<optimized out>, co=0x7f316e40c270)
at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:283
#17 _PyFunction_FastCallDict () at /tmp/build/80754af9/python_1585000375785/work/Objects/call.c:322
#18 0x00007f3170677934 in atexit_callfuncs () at /tmp/build/80754af9/python_1585000375785/work/Modules/atexitmodule.c:87
#19 0x00007f3170675da7 in call_py_exitfuncs () at /tmp/build/80754af9/python_1585000375785/work/Python/pylifecycle.c:2235
#20 Py_FinalizeEx () at /tmp/build/80754af9/python_1585000375785/work/Python/pylifecycle.c:1145
#21 0x00007f3170675dc9 in Py_Exit (sts=0) at /tmp/build/80754af9/python_1585000375785/work/Python/pylifecycle.c:2288
#22 0x00007f3170675e87 in handle_system_exit () at /tmp/build/80754af9/python_1585000375785/work/Python/pythonrun.c:636
#23 0x00007f3170675f22 in PyErr_PrintEx () at /tmp/build/80754af9/python_1585000375785/work/Python/pythonrun.c:646
#24 0x00007f3170688347 in PyRun_SimpleFileExFlags () at /tmp/build/80754af9/python_1585000375785/work/Python/pythonrun.c:435
#25 0x00007f31706893f5 in pymain_run_file (p_cf=0x7fffe01819a0, filename=0x7fffd8655890 L"/home/ps/_setup/miniconda3/envs/hexlite-jpype/bin/hexlite",
fp=0x7fffd86b20d0) at /tmp/build/80754af9/python_1585000375785/work/Modules/main.c:462
#26 pymain_run_filename (cf=0x7fffe01819a0, pymain=0x7fffe0181ab0) at /tmp/build/80754af9/python_1585000375785/work/Modules/main.c:1641
#27 pymain_run_python (pymain=0x7fffe0181ab0) at /tmp/build/80754af9/python_1585000375785/work/Modules/main.c:2902
#28 pymain_main () at /tmp/build/80754af9/python_1585000375785/work/Modules/main.c:3442
#29 0x00007f317068951c in _Py_UnixMain () at /tmp/build/80754af9/python_1585000375785/work/Modules/main.c:3477
#30 0x00007f316fc01b97 in __libc_start_main (main=0x7f31705470f0 <main>, argc=8, argv=0x7fffe0181c08, init=<optimized out>, fini=<optimized out>,
rtld_fini=<optimized out>, stack_end=0x7fffe0181bf8) at ../csu/libc-start.c:310
#31 0x00007f317062cac0 in _start () at ../sysdeps/x86_64/elf/start.S:103
Small addition: as far as I know I do not create any Java threads in Hexlite or in the testcases that are used by Hexlite.
Okay. I will investigate further and see if I can identify the issue. Thanks for the bug report.
Thank you very much!
You wouldn't happen to have a way to run hexlite under a vanilla Ubuntu (Windows WSL) install? My development install is vanilla rather than conda. I do not see your clingo dependency on PyPI.
It is possible to build clingo (that's why I use conda) in vanilla. It is available here: https://github.com/potassco/clingo/
Building it requires cmake, re2c, bison and g++, and it is built with
git submodule update --init --recursive
cmake -H./ -B./build -DCMAKE_BUILD_TYPE=Release
cmake --build ./build
Then the Python path must be set so that import clingo does not complain in Python, and hexlite should work without conda.
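For instance, a sketch of that environment setup. The directory here is a guess; adjust CLINGO_PY_DIR to wherever the cmake build above actually put the clingo Python module:

```shell
# Hypothetical location of the clingo Python module after the cmake build;
# adjust CLINGO_PY_DIR to match your actual build output.
CLINGO_PY_DIR="$PWD/build/bin/python"
export PYTHONPATH="$CLINGO_PY_DIR${PYTHONPATH:+:$PYTHONPATH}"
echo "$PYTHONPATH"
# Then verify with:  python -c "import clingo"
```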
I can also do tests on native Ubuntu if it would help you.
Zero stalls or hangs with master on WSL, so I am not able to replicate your issue currently. Did you try with master or just the 0.7.4 release? In 0.7.4 we had to roll back some of the shutdown logic due to some groups getting crashes. So the issue may have been resolved already. Or it may be platform dependent and we need to work to find a way to replicate it.
I tried 0.7.3 and 0.7.4 and master. I have hangs in 90% of runs with 0.7.4 and master.
But I think my bisected commit just shows that the issue is the shutdown, not that the commit is a bug, as you said it might expose other issues.
The problem happens also in Travis (this is setup to use conda 0.7.4) and I think this is not WSL but docker-based Ubuntu. https://travis-ci.org/github/hexhex/hexlite/builds/683011694
Are you sure about that? 9b0cfe0 is not in v0.7.4. So someplace there must be a version 0.7.4 which is not the official tag.
commit 434c2aa0ef9b14694c74825c80a43e1ea28bf4c1 (HEAD -> v0.7.4, tag: v0.7.4)
commit 1cf9600b4f1c87ac3604b2197d86a1087a83d655
commit 1b408f628c29097ee066188c40faa08f2118fa0c (origin/quickfix, quickfix)
commit 27c617634fb8fbe3310b1db394b51355b2f1f78f
commit 953634696a4cfa354d257137e9b62d64984cf71c
commit 7f24bba8b16adee6d8829a80ce37acf633a52622
commit d847511a41382e11fe21ed9de87ecc4fa307fc9f
commit ed2f4ba37398308e76b879a889f19f85fa312990
commit c39f5cc1dc149ea2198471daef2c06f66b612f75
commit 40b8ea7cfaca714287045382f76f2813e7e130de
commit 472fc7b2d651cb4e060f4625378f5dcaafbf2089
commit 4ef503e7f2b47eba5a231434f00d4b0e7df45853 (tag: v0.7.3)
@marscher I do not understand this issue. I checked the release feed on the project page and the 0.7.4 points to the correct release tag. But PyPi 0.7.4 is pointing to a version that was never supposed to be released. No wonder I am getting issue reports that don't make any sense. Is there some script that is making the PyPi releases?
I intended to start with v0.7.4 as "bad" and v0.7.3 as "good" and then I did what git bisect told me to do until the result was 9b0cfe0 but I can see that in the history it does not seem to be on the path.
I checked again, and I had actually started bisecting from master.
v0.7.4 works for me, but the master does not, and the version v0.7.4 in conda is not working.
I think 0.7.4 in conda is different from v0.7.4 in the repo.
I edited the title and the initial post to make it more clear what is not working.
Seems like the confusion is partly on our end. Normally we release from master, but 0.7.4 was a hotfix release on a branch. Something in the distribution chain likely saw the release, checked out master, and pushed it as the 0.7.4 release rather than the official cut, leading to havoc. This way everyone is getting a broken version unless they get it from the GitHub release page. I will have to figure out where the chain got broken and perhaps release a 0.7.5 which actually points to a working release.
But back to your issue. Thus far I haven't been able to get the issue to replicate. The bisect may be tripping in a bad place as there was a series of master commits with bugs in shutdown and then a series of commits intended to fix the issue.
So rather than pointing to the commit that installed shutdown hooks, the best path is to replicate the issue, instrument the shutdown hook on the master branch, and identify what in the shutdown is causing issues. To do that I need to replicate the problem on a version which is rigged for fast development cycles (my Windows install takes 15-30 minutes per cycle, so not so hot for debugging).
Ran 20 runs on Windows 10, Python version 3.7.3, conda 4.8.3 using JPype master; no stalls, no hangs. I have only one other machine which can be configured to run this software, so let's hope it replicates there.
In the meantime, please install JPype from master with python setup.py install --force to make sure that you are getting a clean version, or uninstall the old version first. In some cases setup.py will decide the version is already installed and skip the install. Given we have a bad version out there, we need to make sure we are not chasing ghosts.
I tested with hexlite commit c61e4ff390473ff7e8500e416d40ab7480cc9598 and Jpype commit 39eba3e2da0043b682972f5163142a111109dc36.
You were right about chasing ghosts. When I use python setup.py install --force then I cannot reproduce the hang with v0.7.4 and with master in the repo.
Only with 9b0cfe01 I can still reproduce it, or by installing the conda version, but this is both irrelevant because after 9b0cfe01 bugs have been fixed and because the conda version was not intended to be released in that way.
When I use python setup.py install without --force then I can reproduce the issue if before I had the conda version installed.
I think we can close this as "fixed in master", and for now I can use 0.7.3 from conda and in the future 0.7.5.
Thank you for your support and for the --force option, I did not know that!
@marscher If you can help figure out what happened on 0.7.4 I can release a 0.7.5 release with the correct release contents.
I have deleted the wrong tar.gz file and replaced it with a zip created from releases/0.7.4 via python setup.py sdist. Additionally the conda package will be bumped with a new build number. I can add you as a package maintainer both on PyPI and Conda-Forge if you like @Thrameos
| gharchive/issue | 2020-05-08T10:43:35 | 2025-04-01T06:44:38.281528 | {
"authors": [
"Thrameos",
"marscher",
"peschue"
],
"repo": "jpype-project/jpype",
"url": "https://github.com/jpype-project/jpype/issues/731",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
408430394 | Uri resolution
I am trying to set up a local environment without Docker (currently) and have a Spring Eureka server and a Spring discovery service. Now I try to connect a Node.js server to the ecosystem to process some data through it; however, even though the service registers on Eureka, the URI doesn't get resolved. Here's what the discovery client outputs:
2019-02-09 13:56:59.820 DEBUG 12644 --- [ctor-http-nio-2] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: GET http://localhost:8080/] to Route{id='082ec842-9ece-4591-9fd8-d11dffedef73', uri=lb://nodeFileService, order=0, predicate=org.springframework.cloud.gateway.support.ServerWebExchangeUtils$$Lambda$390/0x000000080043d840@430f66e, gatewayFilters=[OrderedGatewayFilter{delegate=null, order=0}]}
2019-02-09 13:56:59.820 DEBUG 12644 --- [ctor-http-nio-2] o.s.c.g.h.RoutePredicateHandlerMapping : [75de1336] Mapped to org.springframework.cloud.gateway.handler.FilteringWebHandler@10048149
2019-02-09 13:56:59.820 DEBUG 12644 --- [ctor-http-nio-2] o.s.c.g.handler.FilteringWebHandler : Sorted gatewayFilterFactories: [OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter@25f61c2c}, order=-2147482648}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter@6397248c}, order=-1}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter@2d2b6960}, order=0}, OrderedGatewayFilter{delegate=null, order=0}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter@72bd2871}, order=10000}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.LoadBalancerClientFilter@25be445f}, order=10100}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter@38291795}, order=2147483646}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter@5d1b1c2a}, order=2147483647}, OrderedGatewayFilter{delegate=GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter@77429040}, order=2147483647}]
2019-02-09 13:56:59.854 DEBUG 12644 --- [ctor-http-nio-2] .a.w.r.e.DefaultErrorWebExceptionHandler : [75de1336] Resolved [NullPointerException: null] for HTTP GET /
Here's my configuration of eureka-js-client:
const eurekaIP = '127.0.0.1';
const eurekaPort=8761;
const client = new Eureka({
// application instance information
instance: {
app: 'nodeFileService',
instanceId:'localhost:nodeFileService:3000',
hostName: 'localhost',
ipAddr: 'localhost',
port: {
'$': 3000,
'@enabled': true,
},
statusPageUrl: 'http://localhost:3000',
healthCheckUrl: 'http://localhost:3000',
vipAddress: 'localhost',
dataCenterInfo: {
'@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
name: 'MyOwn',
},
},
eureka: {
// eureka server host / port
host: eurekaIP,
port: eurekaPort,
servicePath: '/eureka/apps/'
},
});
My Spring Eureka Server config(application.properties):
spring.application.name=eurekaserver
server.port=8761
eureka.instance.hostname=eureka
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
eureka.client.service-url.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/
logging.level.root=INFO
My Spring discovery service config(application.properties):
spring.application.name=microservicediscovery
server.port=8080
eureka.client.enabled=true
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://127.0.0.1:8761/eureka/
logging.level.root=Debug
spring.cloud.gateway.discovery.locator.enabled=true
eureka.instance.prefer-ip-address=true
and route configuration on the same service:
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes()
.route(r ->
r.path("/")
.filters(v -> {
return v.filter(loggingFilter);
})
.uri("lb://nodeFileService")).build();
}
At last, here's the Eureka Server eureka/apps entry for the Node.js app:
<applications>
  <application>
    <name>NODEFILESERVICE</name>
    <instance>
      <instanceId>nodeFileServ</instanceId>
      <hostName>localhost</hostName>
      <app>NODEFILESERVICE</app>
      <ipAddr>localhost</ipAddr>
      <status>UP</status>
      <overriddenstatus>UNKNOWN</overriddenstatus>
      <port enabled="true">3000</port>
      <securePort enabled="false">7002</securePort>
      <countryId>1</countryId>
      <dataCenterInfo class="com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo">
        <name>MyOwn</name>
      </dataCenterInfo>
      <leaseInfo>
        <renewalIntervalInSecs>30</renewalIntervalInSecs>
        <durationInSecs>90</durationInSecs>
        <registrationTimestamp>1549715269040</registrationTimestamp>
        <lastRenewalTimestamp>1549715929061</lastRenewalTimestamp>
        <evictionTimestamp>0</evictionTimestamp>
        <serviceUpTimestamp>1549715269040</serviceUpTimestamp>
      </leaseInfo>
      <metadata class="java.util.Collections$EmptyMap"/>
      <statusPageUrl>http://localhost:3000</statusPageUrl>
      <healthCheckUrl>http://localhost:3000</healthCheckUrl>
      <vipAddress>localhost</vipAddress>
      <isCoordinatingDiscoveryServer>false</isCoordinatingDiscoveryServer>
      <lastUpdatedTimestamp>1549715269040</lastUpdatedTimestamp>
      <lastDirtyTimestamp>1549715269040</lastDirtyTimestamp>
      <actionType>ADDED</actionType>
    </instance>
  </application>
</applications>
Looks good! Since you successfully registered and the registration on the server looks complete, there's not much else I can do. This module only handles the registration. My guess is that the calling code in the Spring app needs to be modified, as it's getting a null pointer exception.
| gharchive/issue | 2019-02-09T13:13:06 | 2025-04-01T06:44:38.304392 | {
"authors": [
"KPapam",
"jquatier"
],
"repo": "jquatier/eureka-js-client",
"url": "https://github.com/jquatier/eureka-js-client/issues/150",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
121590797 | fs.readdirSync is not a function
Using browserify, gulp, and babelify, I am not able to include CLDR data and load it into Globalize.
const Globalize = require('globalize');
const GlobalizeLocalizer = require('react-widgets/lib/localizers/globalize');
const CLDR = require('cldr-data');
Globalize.load(CLDR.entireSupplemental());
Globalize.load(CLDR.entireMainFor("en"));
Globalize.locale('en-US');
GlobalizeLocalizer(Globalize);
The entireSupplemental() method above eventually uses fs, which is an empty object in the browserified bundle. Is there something I can do to fix this? I tried brfs, but haven't had any luck.
The specific error is _fs.readdirSync is not a function. This is thrown in the jsonFiles method within the CLDR entireSupplemental method.
Thank you.
I think the issue is that fs is a Node.js thing, so it won't be present in your browser. Try manually loading the cldr-data paths you need, instead of relying on the .entireSupplemental() and .entireMainFor() functions. For example, do this instead:
Globalize.load(
require("cldr-data/main/en/ca-gregorian"),
require("cldr-data/main/en/numbers")
);
| gharchive/issue | 2015-12-10T22:24:34 | 2025-04-01T06:44:38.320490 | {
"authors": [
"aclindsa",
"ibrown-gaikai"
],
"repo": "jquery/globalize",
"url": "https://github.com/jquery/globalize/issues/567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
581681406 | Update the world
Kotlin, AGP, Gradle, Dagger, RxJava, Retrofit, OkHttp, AssertJ, Mockito, Fresco, etc.
This PR was released with 0.22.1
| gharchive/pull-request | 2020-03-15T13:46:52 | 2025-04-01T06:44:38.340375 | {
"authors": [
"jraska"
],
"repo": "jraska/github-client",
"url": "https://github.com/jraska/github-client/pull/225",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2254053209 | Offer both ULO and TOU rates as separate entities?
In the interest of doing some long-term retrospective analysis of our power bill, would it be possible for the integration to offer, as two different entities, both the ULO and non-ULO rates simultaneously? Having to pick just one is kind of a bummer.
Thanks!
You can add more than one provider. Once you add one, customize the name of the added entity before adding a new one.
I must be doing something wrong. I have deleted all devices from the integration; added the non-ULO device; then renamed the device, the entity, and the entity unique name... and yet I am still told "Device is already configured" when I try to add the ULO one. :(
| gharchive/issue | 2024-04-19T21:48:07 | 2025-04-01T06:44:38.355410 | {
"authors": [
"jrfernandes",
"nwf"
],
"repo": "jrfernandes/ontario_energy_board",
"url": "https://github.com/jrfernandes/ontario_energy_board/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1661668112 | Add npm config, esm build, types, inlined base64 build
This PR fixes #2 and #3
Changes
Add package.json for publishing to npm, with alternative builds exposed via entrypoints
The asm, wasm and wasm-compat directories and package.json files are required for module resolution
Build ESM bundle (ECMAScript/ES6)
The JavaScript ecosystem is moving towards ESM as the standard for both web and Node environments, so we are starting with ESM-only builds for now.
Remove onload.js script
Side effects like this aren't desirable in npm packages. The behaviour can easily be added back in user-land.
Add an inlined base64 build - jolt.wasm-compat.js
Inlining WASM in JavaScript has some downsides (larger bundle size, some processing overhead), but it is helpful to offer. It can be used without a bundler that supports WASM, and without needing to host the wasm file separately.
Use webidl-dts-gen for typescript type generation
This generates types from the JoltJS.idl idl that emscripten uses
TODO
[x] Verify package.json configuration is correct for different module resolution environments
[x] Settle on NPM package details
[x] Decide on the npm package name
[x] author field in package.json
[x] description, keywords, etc
[x] Consensus on ESM build vs non-ESM vs both
[x] Move builds from dist into Build directory
[x] convert the examples js to modules - will do in a follow-up PR
Hello,
Thanks for working on this!
The javascript ecosystem is moving towards ESM as the standard for both web and node environments
I'm fine with starting with ESM only (as long as most browsers support this then it's ok).
Add an inlined base64 build - jolt.wasm-compat.js
Do we then still need the other wasm target? (I'm fine with killing it, it takes quite a bit of time to build one of these packages)
Decide on the npm package name (jolt-js is a placeholder)
How about jolt-physics? (everything on npm is JavaScript right, so -js is kind of redundant?)
author field in package.json
jrouwe is fine
description, keywords, etc
Take a look at the description/keywords I added on https://github.com/jrouwe/JoltPhysics (although I don't know if multi core friendly means something in JS land - as far as I know browsers only support 1 thread but maybe Node.js has more?)
About the code change: Does the 'dist' folder mean something to npm or could it be moved into the Build folder?
Maybe we can add a script to just build one of them for development use?
Let's just leave it as it is. It's also not that bad, and if you're iterating you can always comment out the stuff you don't need.
Multithreading isn't exactly the same across Node and browser environments, but both have a way of running code in a separate thread, and both have SharedArrayBuffer for sharing memory between threads.
Thanks, I'll try that out one day.
I've moved the builds to the Build folder now.
Thanks for that as well!
I think this PR is ready for review 🙂
I've tested with several environments to make sure the package.json config is correct. If we run into any issues with different environments, I'll create follow-up PRs to address them.
I've also updated README.md with installation instructions for npm + instructions for using the wasm flavour with a bundler.
Thanks for doing all this work! I've merged this and managed to publish a package.
I was playing around with the result and then noticed that the decision to move the dist folder into the Build folder seems to have leaked into the package URL too:
https://www.unpkg.com/jolt-physics/Build/jolt-physics.wasm-compat.js
I'm guessing this is not very standard, so perhaps we should actually undo that change (sorry I'm a total noob here)?
When you do a ./build.sh Debug it will write to the same output files. I think this means that the dependency checking will fail and that it is likely that you don't get a debug build at all if you first built a normal version. Are there standard ways to handle this?
So far I published the examples on jrouwe.nl because the WASM files would not load from htmlpreview.github.io. Now that the WASM files are actually published to npm, I'm guessing that the examples could just point to unpkg.com. If you do that, local development is harder because you'll not be pointing to any files you create during the build. Is there a way to handle this nicely?
I'm also happy to create a follow-up PR to convert the examples to modules.
I'm not entirely sure what this will look like, but I'm assuming this doesn't require a new npm package and also that it still runs in the browser as before? (If so you have my blessing)
Awesome! All good, I'm keen to start using it more 🙂
I'm guessing this is not very standard, so perhaps we should actually undo that change (sorry I'm a total noob here)?
I don't think it's a massive deal, but consistency with other packages can't hurt. I can change it back to one of these in another PR 🙂
For those installing the package in a project via npm (as opposed to using one of the npm CDNs like unpkg), this will be largely transparent.
When you do a ./build.sh Debug it will write to the same output files. I think this means that the dependency checking will fail and that it is likely that you don't get a debug build at all if you first built a normal version. Are there standard ways to handle this?
Ah good catch. I can update build.sh to delete the old bundles before building the new ones.
So far I published the examples on jrouwe.nl because the WASM files would not load from htmlpreview.github.io. Now that the WASM files are actually published to npm, I'm guessing that the examples could just point to unpkg.com. If you do that, local development is harder because you'll not be pointing to any files you create during the build. Is there a way to handle this nicely?
We're now using the wasm-compat version in the examples, which has the WASM file inlined in the javascript bundle. So maybe now you won't have issues hosting on github pages?
me: I'm also happy to create a follow-up PR to convert the examples to modules.
Sorry, I didn't describe this very well 😅 No new packages or anything like that, just a refactor of what's there now. The examples right now have a mix of regular javascript scripts (<script>) and module scripts (<script type="module">).
I don't think it's a massive deal, but consistency with other packages can't hurt. I can change it back to one of these in another PR 🙂
I just changed it back :)
Ah good catch. I can update build.sh to delete the old bundles before building the new ones.
Just did this as well.
We're now using the wasm-compat version in the examples, which has the WASM file inlined in the javascript bundle. So maybe now you won't have issues hosting on github pages?
What I meant is that we don't check in Examples/js/jolt-physics.wasm-compat.js (and I think we shouldn't because it will bloat the github repo) which means that for a local build e.g. falling_shapes.html needs to contain:
import initJolt from './js/jolt-physics.wasm-compat.js';
but when hosting on github it should be:
import initJolt from 'https://www.unpkg.com/jolt-physics/dist/jolt-physics.wasm-compat.js';
meaning you always have to change the file locally to test things when you're working on a change. Is there a better way for this?
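One lightweight way to avoid hand-editing the import would be to pick the module URL at runtime. This is only a sketch: the helper name and the localhost check are assumptions, not anything the repo actually does.

```javascript
// Sketch: choose the local build during development and the published CDN
// bundle otherwise. The helper name and hostname check are hypothetical.
function joltModuleUrl(hostname) {
  const local = './js/jolt-physics.wasm-compat.js';
  const cdn = 'https://www.unpkg.com/jolt-physics/dist/jolt-physics.wasm-compat.js';
  return hostname === 'localhost' || hostname === '127.0.0.1' ? local : cdn;
}

// In an example page this would drive a dynamic import, roughly:
// const initJolt = (await import(joltModuleUrl(location.hostname))).default;
```

The CI-based approach discussed below avoids the runtime branch entirely.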
Sorry, I didn't describe this very well 😅 No new packages or anything like that, just a refactor of what's there now. The examples right now have a mix of regular javascript scripts (<script>) and module scripts (<script type="module">).
What I meant is that we don't check in Examples/js/jolt-physics.wasm-compat.js (and I think we shouldn't because it will bloat the github repo) which means that for a local build e.g. falling_shapes.html needs to contain:
import initJolt from './js/jolt-physics.wasm-compat.js';
but when hosting on github it should be:
import initJolt from 'https://www.unpkg.com/jolt-physics/dist/jolt-physics.wasm-compat.js';
meaning you always have to change the file locally to test things when you're working on a change. Is there a better way for this?
Ah, I'm on the same page now.
I'll usually address this by building the library + examples in CI, and then deploying the built examples from there.
For reference, here's a github actions config I created for another emscripten project:
https://github.com/isaac-mason/recast-navigation-js/blob/main/.github/workflows/release.yml
https://github.com/isaac-mason/recast-navigation-js/blob/main/ci/install-emsdk.sh
That config is set up to deploy to to a different hosting provider, but it should be possible to deploy to github pages from a github action.
If you want to go down that path, I can help with setting that up.
Sure, help is appreciated there!
| gharchive/pull-request | 2023-04-11T02:22:30 | 2025-04-01T06:44:38.409741 | {
"authors": [
"isaac-mason",
"jrouwe"
],
"repo": "jrouwe/JoltPhysics.js",
"url": "https://github.com/jrouwe/JoltPhysics.js/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
679360603 | (ruby) Wrong number of arguments passed for Timers
Currently, the wrong number of arguments are passed to the Timer lambdas from the Timer gem in librb/engine.rb. The Timer gem passes self as a second argument.
Fixed in #342
Hi, thanks for posting the pull request. I have merged the change and published a new version (2.0.28).
| gharchive/issue | 2020-08-14T19:30:44 | 2025-04-01T06:44:38.425747 | {
"authors": [
"jruizgit",
"mrryanjohnston"
],
"repo": "jruizgit/rules",
"url": "https://github.com/jruizgit/rules/issues/343",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
115583244 | Store template(s) with user account
I'm assuming that default templates are stored in local storage because anytime I restart my browser (I clear data on close) my default template has been lost.
I would prefer to have it saved with my user account, so that when I login it's always there.
+1
That's not the reason. It's because templates existed before user accounts.
That's all.
Happy to take a PR that adds this functionality and I can help direct the required UI changes.
| gharchive/issue | 2015-11-06T20:40:42 | 2025-04-01T06:44:38.461013 | {
"authors": [
"remy",
"samhagman",
"viveleroi"
],
"repo": "jsbin/jsbin",
"url": "https://github.com/jsbin/jsbin/issues/2614",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1640179192 | 🛑 Startpage is down
In 5035c3f, Startpage ($SERVER_BASE) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Startpage is back up in 84aafee.
| gharchive/issue | 2023-03-24T23:17:36 | 2025-04-01T06:44:38.469577 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/1092",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2067236588 | 🛑 Linus-Wordpress is down
In 991cc3a, Linus-Wordpress ($SERVER_BASE/wordpress/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Linus-Wordpress is back up in dd1bdc5 after 31 minutes.
| gharchive/issue | 2024-01-05T12:09:27 | 2025-04-01T06:44:38.471795 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/3562",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2068989843 | 🛑 Piwigo is down
In a37d8a9, Piwigo ($SERVER_BASE/piwigo/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Piwigo is back up in 8f22091 after 26 minutes.
| gharchive/issue | 2024-01-07T05:11:22 | 2025-04-01T06:44:38.474300 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/3744",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2759613634 | 🛑 Linus-Wordpress is down
In 5219287, Linus-Wordpress ($SERVER_BASE/wordpress/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Linus-Wordpress is back up in d8eba44 after 1 minute.
| gharchive/issue | 2024-12-26T10:45:09 | 2025-04-01T06:44:38.476482 | {
"authors": [
"jscmidt"
],
"repo": "jscmidt/upptime",
"url": "https://github.com/jscmidt/upptime/issues/9426",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
73500057 | Does "--esnext" support static class properties?
I use static class properties in my code:
class MyComponent extends React.Component {
static propTypes = {
expanded: false
};
...
}
I use --esnext, but still get an error:
parseError: Unexpected token =
Is there a way to ignore this error?
Are there any plans to support static class properties?
Where could I find all the es6 features that --esnext supports?
You can use esprima-fb as the parser if it supports that syntax.
Does it make sense to use both --esnext --esprima=esprima-fb, or I should be using only one of these?
Yes yo
I'm not sure why esprima-fb is relevant here; it seems Esprima does not support static properties in either 2.2 or the harmony branch.
Re-opening for further investigation. @mikesherov @ariya please advise
Ok wasn't sure if it supported it so I gave the suggestion: if it supports that syntax. But yeah it seems that it's not - https://github.com/facebook/esprima/issues/63.
@markelog Still not supported, if you paste the sample code to http://esprima.org/demo/parse.html.
Okay, upon further investigation it turns out that this is not valid ES6; it's not even in the list of proposals for ES7. I guess Babel only supports it because of this discussion, which seems premature even for Babel.
/cc @sebmck
@markelog It's based on this proposal which was proposed at the last TC39.
Yeah, I have a link to this gist above. Isn't it too early? Since this proposal, even if submitted (which it currently is not), will likely change?
@markelog It's behind a flag, you have to explicitly turn on support for stage 0 features. It's in Babel because it's meant to provide feedback to the committee and improve the proposal. ie. to find out what does and doesn't work. If the proposal changes then Babel changes with it. Decorators were implemented in Babel when it was still stage 0 (now stage 1) and have provided critical feedback to their design and use and it's influenced the proposal/spec.
| gharchive/issue | 2015-05-06T03:20:37 | 2025-04-01T06:44:38.483946 | {
"authors": [
"ariya",
"hzoo",
"markelog",
"moroshko",
"sebmck"
],
"repo": "jscs-dev/node-jscs",
"url": "https://github.com/jscs-dev/node-jscs/issues/1344",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
100294678 | Docs: Fix a typo in maximumNumberOfLines
See http://jscs.info/rule/maximumNumberOfLines.html (snapshot: https://github.com/jscs-dev/jscs-dev.github.io/blob/6d3018fd5940236c7ec306ee9e492415078563b2/rule/maximumNumberOfLines.html).
Thank you!
| gharchive/pull-request | 2015-08-11T11:45:10 | 2025-04-01T06:44:38.485567 | {
"authors": [
"markelog",
"yous"
],
"repo": "jscs-dev/node-jscs",
"url": "https://github.com/jscs-dev/node-jscs/pull/1660",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Drop assets from /libraries
Also drops $loki, making for much cleaner responses!
fixes #88; fixes #74
Rebased, this should be good to go if you're in agreement @jimaek & @tombyryer
It was looking good the last time I checked it, feel free to merge and deploy to v2 testing app.
Thank you
| gharchive/pull-request | 2015-06-03T04:09:48 | 2025-04-01T06:44:38.489991 | {
"authors": [
"jimaek",
"megawac"
],
"repo": "jsdelivr/api",
"url": "https://github.com/jsdelivr/api/pull/90",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
71085687 | Fix 2292
the problem was that in the simple implied map
i = 1; // i goes into implied
let i; // this should error
var a = () => { i = 1; /* i goes into implied */ };
let i; // this should not error
there was no way to differentiate between the two situations.
So I took a leaf out of the blockscope manager and created one for implied that manages the different scopes.
But then, as I've been doing this, I've become unsure about the approach of the (blockscope) manager and implied manager: that logic is already present for vars in order to detect vars out of scope
function() {
for(var i = 0; i < 8; i++) {
var j = 1;
}
for(j = 0; j < 8; j++) {
}
}
because even though j is function scope, jshint detects that j was not declared in the outer scope.
and it seems to me the logic should be combined. What's more, I prefer the explicitness of the implied manager and blockscope manager (I think the logic outside of it is difficult to follow, with little overall explanation).
So @caitp @jugglinmike, what do you think? Do you think the approach here is okay? Which way should it be going?
Would you be happy if I continued refactoring and put implieds, blockscope and scope all into one "manager" with better names to test and poke on different scenarios ?
Coverage increased (+0.12%) to 96.49% when pulling 872dfbed6f171326225bcbbe871d565c644b15f0 on lukeapage:fix-2292 into 2b673d92107be93df80e48471dced0432a127f25 on jshint:master.
Coverage increased (+0.03%) to 96.5% when pulling 23bd529b585120ae044b1b895b3c3eacce98b5a4 on lukeapage:fix-2292 into c0edd9ff84a68946c615198a4a6f188f6fbbd54d on jshint:master.
@caitp @jugglinmike, now you've merged the other PR, would be good to get some discussion here. I think the best thing to do would be to discuss what should happen going forward so if I spend further time on this it will be accepted.
I think it's fine to fix this within some limits --- particularly, I don't want to go too crazy trying to enforce TDZ semantics that can't be determined statically. But the examples in the PR description don't seem unreasonable
I don't want to go too crazy trying to enforce TDZ semantics that can't be determined statically
I agree - determining whether or not a function is called before the declaration for instance would be going too far - right?
I'm most interested in your opinion on this:
Would you be happy if I continued refactoring and put implieds, blockscope and scope all into one "manager" with better names to test and poke on different scenarios ?
or do you think that whats ended up here is where things should be at?
Would you be happy if I continued refactoring and put implieds, blockscope and scope all into one "manager" with better names to test and poke on different scenarios ?
or do you think that whats ended up here is where things should be at?
The churn has probably died down for a while, so if we're going to do that, this is a good time to do it
Coverage increased (+0.03%) to 96.5% when pulling a8b55bb7f3dad22ffac5be8f59aa512ea0595f41 on lukeapage:fix-2292 into c0edd9ff84a68946c615198a4a6f188f6fbbd54d on jshint:master.
@caitp based on your last comment, is this ready to go?
Not sure if @lukeapage wanted to work on it a bit more first or not.
Got it!
I am going to work on top of this, but I think its a big job and I don't know how much time I've got. I might have an update in a day or a week, or two. So, upto you.
Coverage increased (+0.03%) to 96.56% when pulling 1cd190239e7a867e9792e44785d3b0e66f8dcd9c on lukeapage:fix-2292 into e22b21a7bfeed9d1e38d0ed578c787aa9276690b on jshint:master.
Coverage increased (+0.03%) to 96.59% when pulling 76f376d1efcdf015db5f42d766bb8d7109e4ef36 on lukeapage:fix-2292 into 2ea9cb0f636cf005d48e53cc69b29175818f4ddf on jshint:master.
Coverage increased (+0.03%) to 96.59% when pulling 1c8294766dcb52c8a56012a1a2c75f2427378e14 on lukeapage:fix-2292 into 2444a0463e1a99d46e4afa50ed934c317265529d on jshint:master.
moved to #2435
| gharchive/pull-request | 2015-04-26T15:56:44 | 2025-04-01T06:44:38.526068 | {
"authors": [
"caitp",
"coveralls",
"lukeapage",
"rwaldron"
],
"repo": "jshint/jshint",
"url": "https://github.com/jshint/jshint/pull/2344",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1699239509 | chore(deps): update dependency ngx-markdown to v16
This PR contains the following updates:
Package
Type
Update
Change
ngx-markdown
dependencies
major
^15.1.1 -> ^16.0.0
Release Notes
jfcere/ngx-markdown
v16.0.0
Compare Source
Update Angular 16
Library has been updated to support Angular 16.
It is recommended to stick with ngx-markdown v15.x.x if you are using Angular 15.
New features and enhancements
Update to Angular 16
Add required version ranges to install instructions on README.md
Commits
Update Angular 16 (#454) (37f67d2) @jfcere
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
⚠ Artifact update problem
Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.
♻ Renovate will retry this branch, including artifacts, only when one of the following happens:
any of the package files in this branch needs updating, or
the branch becomes conflicted, or
you click the rebase/retry checkbox if found above, or
you rename this PR's title to start with "rebase!" to trigger it manually
The artifact failure details are included below:
File name: package-lock.json
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: @angular-material-components/datetime-picker@9.0.0
npm ERR! Found: @angular/cdk@16.0.0
npm ERR! node_modules/@angular/cdk
npm ERR! @angular/cdk@"^16.0.0" from the root project
npm ERR! peer @angular/cdk@"16.0.0" from @angular/material@16.0.0
npm ERR! node_modules/@angular/material
npm ERR! @angular/material@"^16.0.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer @angular/cdk@"^15.0.1" from @angular-material-components/datetime-picker@9.0.0
npm ERR! node_modules/@angular-material-components/datetime-picker
npm ERR! @angular-material-components/datetime-picker@"^9.0.0" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: @angular/cdk@15.2.9
npm ERR! node_modules/@angular/cdk
npm ERR! peer @angular/cdk@"^15.0.1" from @angular-material-components/datetime-picker@9.0.0
npm ERR! node_modules/@angular-material-components/datetime-picker
npm ERR! @angular-material-components/datetime-picker@"^9.0.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR!
npm ERR! For a full report see:
npm ERR! /tmp/renovate/cache/others/npm/_logs/2023-05-07T21_13_57_303Z-eresolve-report.txt
npm ERR! A complete log of this run can be found in: /tmp/renovate/cache/others/npm/_logs/2023-05-07T21_13_57_303Z-debug-0.log
| gharchive/pull-request | 2023-05-07T21:14:03 | 2025-04-01T06:44:38.572632 | {
"authors": [
"json-derulo"
],
"repo": "json-derulo/angular-ecmascript-intl",
"url": "https://github.com/json-derulo/angular-ecmascript-intl/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1413746101 | Add slider keyboard functionality
Hi all, this PR was made from the suggestion in #2570, by adding three fields enable_keys, keys_adjust, and keys_panning in order to allow a participant to use the keyboard to pan through a slider.
This is a draft for now as I'd like to make sure that everything looks good before I can potentially add this functionality to the rest of the slider response plugins.
🎉
what about using pluginAPI.getKeyboardResponse({persist:true}) to handle the keyboard listener? It's not necessary, but makes it more consistent with other keyboard events.
@jodeleeuw all done! One interesting thing to note: I had to use ALL_KEYS for both listeners; if I tried to specify which keys were allowed by concatenating the two key-field arrays, it would throw an error.
Hmm it should be possible to use one listener and register only the set of valid keys. What error was being generated?
It was this kind of error: since keys_adjust and keys_panning can technically be any character or none (indicated by a string), it throws a hissy fit when you try to concatenate a string | string[] with another string | string[]
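A common workaround for that kind of union-type error is to normalize both fields to arrays before concatenating. This is only a sketch of the pattern, not the code used in this PR:

```javascript
// Sketch of a workaround: coerce `string | string[]` values to arrays
// before concatenating, so the combined list has one consistent type.
// (Illustrative only, not the code used in this PR.)
function toKeyArray(keys) {
  return Array.isArray(keys) ? keys : [keys];
}

const validKeys = toKeyArray('a').concat(toKeyArray(['1', '2']));
```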
Just one thing to flag is that it is not entirely clear to me how the 'panning' would work. Using the jsfiddle example from #2570, it's unclear to me why the keys 3 and 5 pressed in succession should lock to 30 and 50, instead of 35. The keys_panning argument suggests (I have not looked at the code itself) that the 1-6 keys would move the slider to the values of the slider equidistantly, i.e. if there are 12 ticks on the slider, the key 1 would move the slider to the value fo 2, the key 3 to the value of 6 and so on - this, it feels to me, is far from an intuitive use of a slider; instead, I would expect participants to be able to type in their value, e.g. press 3 and 5 and set the value to 35.
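The "type the value" behaviour suggested here could be sketched with a small digit buffer. Everything below is hypothetical and not the plugin's actual API:

```javascript
// Hypothetical sketch of "type the value" slider input: successive digit
// keys build up a number, clamped to the slider's range.
// None of this is the plugin's actual API.
function makeDigitBuffer(min, max) {
  let buffer = '';
  return function press(digit) {
    buffer += digit;
    return Math.min(max, Math.max(min, Number(buffer)));
  };
}

const press = makeDigitBuffer(0, 100);
press('3');               // slider at 3
const value = press('5'); // slider at 35, not 30-then-50
```

A real implementation would also need to reset the buffer after a pause or on a delimiter key, which this sketch omits.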
Going to pick this back up - current proposal for functionality I think covers most use cases (incl. key_panning issue): https://jsfiddle.net/fk1h45vp/41/ (repo). Will add to -contrib first, lmk if suggestions on above!
closing this for now, it looks like for anyone that wants this functionality, check out @Max-Lovell's plugin in the contrib repository. a v8 version of it will be out hopefully soon!
| gharchive/pull-request | 2022-10-18T19:37:23 | 2025-04-01T06:44:38.674625 | {
"authors": [
"Max-Lovell",
"jadeddelta",
"jodeleeuw",
"nikbpetrov"
],
"repo": "jspsych/jsPsych",
"url": "https://github.com/jspsych/jsPsych/pull/2827",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
562595478 | Disabled autocompletion of text form (issue #669)
As for the issue #669, I modified two scripts.
thanks. this is a great fix.
Hi @grocio, thanks very much for this. But it just so happened that I merged a different pull request (#1073) that fixes this issue before I saw this one. So I'm closing this PR, but again, thanks for suggesting the fix!
Oh, I totally got it and read the messages on #1073. Thanks for your work!
| gharchive/pull-request | 2020-02-10T14:25:15 | 2025-04-01T06:44:38.676566 | {
"authors": [
"becky-gilbert",
"grocio",
"jodeleeuw"
],
"repo": "jspsych/jsPsych",
"url": "https://github.com/jspsych/jsPsych/pull/671",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
The open graph image does not support cross-origin requests
How about adding Access-Control-Allow-Origin: * to the og:image resource?
This can make the open graph more friendly to web apps.
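For illustration, the suggestion amounts to attaching one extra response header to the image. The helper below is a hypothetical sketch, not jsr's actual server code:

```javascript
// Sketch: return a copy of the response headers with a permissive CORS
// header attached, so browsers let any origin read the og:image bytes.
// (Hypothetical helper, not jsr's actual server code.)
function withOpenGraphCors(headers) {
  return { ...headers, 'Access-Control-Allow-Origin': '*' };
}

const imageHeaders = withOpenGraphCors({ 'Content-Type': 'image/png' });
```

With that header present, a web app can fetch the image directly or draw it to a canvas without tainting it.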
Yes, absolutely.
| gharchive/issue | 2024-10-31T06:54:57 | 2025-04-01T06:44:38.677675 | {
"authors": [
"LitoMore",
"lucacasonato"
],
"repo": "jsr-io/jsr",
"url": "https://github.com/jsr-io/jsr/issues/801",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
304956450 | Chrome warning for 1.0 users upon (automatic?) upgrade to 2.0
As a long-time user of 1.0, I was surprised to see an orange exclamation mark in my Chrome hamburger menu button, stating that Don't Fuck With Paste was inactive; the options were either to remove the extension or enable it, but enabling it would require giving the extension access to my browser history and notifications. I got a little creeped out, as nothing about an upgrade was disclosed. I went to the Chrome store and no comments had been made to this effect. So I just posted this in my review:
Love it! Can't use any browser with out this.
Heads up to users of version 1.x! Version 2.0 was just released, and the upgrade is rocky but worth it. (See https://github.com/jswanner/DontFuckWithPaste/wiki/Version-2.0 for the upgrade announcement.)
If your upgrade experience is like mine, your browser will automatically update you to 2.0, but since 2.0 apparently requires different permissions, you'll get an orange exclamation mark icon, which upon clicking it offers to remove the plugin unless you give the plugin permission to see your browsing history and display notifications. Read the announcement from the developer to see what's changed about 2.0. Looks like a nice upgrade. So I've accepted and have begun taking it for a spin.
I'm posting this in case it's useful feedback to the developer or to other users who happen to come to the repository not knowing what's up.
@jswanner Perhaps it would be wise to update the README.md noting the release of 2.0 and any notes about the upgrade process that you think are worth passing on?
If this could be done without needing those two permissions, all the better.
Yes, please. I'd like to understand what the need is for the new permissions before upgrading.
There's no explanation in the README.md file about why this extension needs access to my browsing history. It's staying disabled until I know.
Also, does adding a site to the blacklist automatically activate the plugin for that site? Or does it require a page reload? Because if it requires a page reload, it's going to be a real problem where you've already filled in part of the form only to realize that the page forbids pasting on one of the later form fields. Reloading may mean losing what you've already entered.
Another vote for an explanation or a roll-back to the previous permissions regime. I really appreciate the extension but it's such a small and simple feature I'm struggling to understand why it would need additional permissions to run unless something nefarious is going on. I'll need an explanation, at minimum, before accepting the upgrade.
I also have some concerns about the big switch to explicit blacklisting--I suspect it's going to reduce the utility of this quite a lot, which was that it basically allowed me to NOT HAVE TO JUMP THROUGH FUCKING HOOPS to just use a site normally--but willing to give it a shot, at least, as long as there's some good explanation for the permissions changes.
Thanks for the feedback.
Version 2.0 is a big change from 1.x, and it's meant to address the problems that many people ran into with the old methodology (see the majority of other issues). Adding the "tabs" permission was really needed in order to make the experience of using the extension as smooth as possible -- otherwise any relevant tab would need to be reloaded in order for changes to take effect.
As for the "notifications" permission: I really wanted to make it an optional permission, but I was not able to do that due to the way Chrome allows an extension to request an optional permission. After the 2.0 upgrade, for many people the extension will never be active, as users need to explicitly declare which sites the extension should be active. Since this was a major change, I wanted to make sure people were made aware that a major change did occur, there's not many options that I have at my disposal, so I went with the notification approach.
As for concerns about the extension accessing your browsing history: the extension needs the "tabs" permission in order to know which tab is active and its URL. Being open source, you can check how the permissions are being used:
Fetch active tab
Look at its URL
Re-run activeness check on tab change
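Those three steps boil down to a small URL-versus-rules check. A simplified sketch (the names are illustrative, not the extension's actual code):

```javascript
// Simplified sketch of the activeness check: the extension only needs the
// active tab's URL to decide whether any user-defined rule matches it.
// Names here are illustrative, not the extension's actual code.
function isActiveFor(url, rules) {
  return rules.some((rule) => new RegExp(rule).test(url));
}

// In the real extension the URL comes from the "tabs" API, roughly:
// chrome.tabs.query({ active: true, currentWindow: true }, ([tab]) => ...)
const active = isActiveFor('https://example.com/form', ['example\\.com']);
```

This is why the "tabs" permission is needed even though no history is stored: reading the active tab's URL is what Chrome labels as access to browsing activity.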
@grantbarrett
Also, does adding a site to the blacklist automatically activate the plugin for that site? Or does it require a page reload?
A page reload is not required, but that's why the extension needs the "tabs" permission, which is the one that claims the extension has access to your browsing history.
@jswanner Will putting * as the rule make it behave like the previous version?
@shaund A rule of "." will make it run everywhere, which is mostly how the default configuration ran before. To exactly match the old behavior add the following as a rule:
https?://(?!.*?\.?facebook\.com|.*?\.?github\.com|.*?\.?google\.com|.*?\.?imgur\.com)
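That rule can be sanity-checked in plain JavaScript (the URLs are just examples):

```javascript
// The 1.x default behaviour: active everywhere except a short whitelist.
// The negative lookahead makes the rule fail to match (extension inactive)
// whenever the URL contains one of the four whitelisted domains.
const oldDefaultRule =
  /https?:\/\/(?!.*?\.?facebook\.com|.*?\.?github\.com|.*?\.?google\.com|.*?\.?imgur\.com)/;

const activeOnFacebook = oldDefaultRule.test('https://www.facebook.com/login'); // false
const activeElsewhere = oldDefaultRule.test('https://example.com/signup');      // true
```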
Thank you for responding so quickly with answers and clarifications. I appreciate it.
Ditto, I also appreciate the quick responses and explanations.
Excellent response, thanks. You should add something explaining this to the chrome extension web store page itself (maybe an added note in the overview?); most users aren't going to come to the github issues and find this one to understand what's going on.
You need to place the explanation for the need for the permissions front and center in the chrome store. I would also identify what part of "browsing history" category is needed. Google's descriptions are for the worst case of exploiting every possible variation of every discreet permission in the "Browsing History" category. That kinda freaks people out. I know where to look, and why I need to look, others not so much.
I don't remember if it was an extension or an Android app. Google groups
the permissions in the same way. What I remember is something like:
"Permission: requires first born child, includes the sub permissions of
blah, blah, blah, teach manners, blah, blah; this app/extension teaches
manners so it is categorized as needing your first born child." Then they
gave a brief non technical explanation of how the app/extension uses the
required permission.
If I can find the reference, I will send it along.
On Wed, Mar 14, 2018 at 5:04 PM, Jacob Swanner notifications@github.com wrote:
@gshollingsworth Do you know of any example extensions that provide similar explanations?
You're getting slammed in the web store comments rn. :(
Thanks for the explanation...I was getting ready to delete the extension. I've had a couple very simple extensions "go rogue" recently (stuff like a stopwatch app) and I sadly thought this was another one.
It's all a bit annoying. I want to just accept the permissions, and I can read the current code being deployed to see that it isn't evil right now, but if I accept the permission and then it uses the permission to do more later, I can't stop that, right?
@trejkaz, sure, but that’s true of all extensions. Being an open-sourced project, you can always download the code locally then do a local install, which will isolate you from future updates that I push to the Chrome Web Store.
| gharchive/issue | 2018-03-13T22:19:37 | 2025-04-01T06:44:38.720445 | {
"authors": [
"Dwedit",
"NelsonRothermel",
"a7rk6s",
"antinetizen",
"grantbarrett",
"gshollingsworth",
"joewiz",
"jswanner",
"shaund",
"tjmcewan",
"trejkaz",
"tuck182"
],
"repo": "jswanner/DontFuckWithPaste",
"url": "https://github.com/jswanner/DontFuckWithPaste/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
137410375 | Get target_url base from new environment variable
A new environment variable has been added, making the need to specify base url optional: https://github.com/concourse/atc/blob/52e8522186cce3c22503027453839ef4588fbda3/engine/step_metadata.go#L30
Also, base_url was being retrieved from params instead of from source as documented.
I wanted to add a test but couldn't find an easy way to do it using puffing-billy.
Good call, rewriting some specs to cover this in tests.
| gharchive/pull-request | 2016-02-29T23:19:29 | 2025-04-01T06:44:38.737046 | {
"authors": [
"aequitas",
"jtarchie"
],
"repo": "jtarchie/pullrequest-resource",
"url": "https://github.com/jtarchie/pullrequest-resource/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2163251110 | Run lint only on ubuntu, add mac os CI test stage
With this PR
the linter is run only in the ubuntu CI stage - pretty sure it's not platform-dependent
we have a Mac OS CI stage running pytest, useful for platform-dependent tests such as https://github.com/jtec/prx/blob/7d4fcd2e977146152a534c2eac2034daa28909ad/src/prx/test/test_helpers.py#L258
Great! This project looks really professional :-)
| gharchive/pull-request | 2024-03-01T12:00:15 | 2025-04-01T06:44:38.738847 | {
"authors": [
"jtec",
"plutonheaven"
],
"repo": "jtec/prx",
"url": "https://github.com/jtec/prx/pull/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
477395637 | Fix parentheses on functions-lambdas-are-nameless-functions
Hi @jtmoulia ,
I noticed two small issues when I was solving functions.el.
We have an empty lambda function without a body on Line 142.
We're invoking the funcall function without the arguments on Line 150.
This PR should solve it. :)
Let me know if you need anything else.
Perfect -- thank you for the fixes @bcfurtado !!
| gharchive/pull-request | 2019-08-06T13:54:07 | 2025-04-01T06:44:38.742584 | {
"authors": [
"bcfurtado",
"jtmoulia"
],
"repo": "jtmoulia/elisp-koans",
"url": "https://github.com/jtmoulia/elisp-koans/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
453371130 | Speedup for sfr obs setup
I've made some changes to the load_sfr_out() method to try and speed up the setup of sfr obs.
I think we can get away without .loc for these assignments, which speeds things up a bit. The biggest gain comes from using a list comprehension instead of a df.apply(lambda) function to construct the segment reach strings.
These little changes seem to make a reasonable difference, as they get hit a lot of times when setting up sfr obs with many output times (the sfr output is read twice and these lines are hit for every output time block). Setting up the sfr obs now seems to be around 30% faster but I am sure there are more savings that can be made with similar changes elsewhere.
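The kind of swap described above can be sketched as follows (toy data and illustrative column names, not pyemu's actual code):

```python
import time
import pandas as pd

# Toy stand-in for an sfr output table
n = 100_000
df = pd.DataFrame({"segment": range(1, n + 1), "reach": [1] * n})

# Slower: row-wise apply, one lambda call per row
t0 = time.perf_counter()
via_apply = df.apply(lambda r: "{0:03d}_{1:03d}".format(r.segment, r.reach), axis=1)
t_apply = time.perf_counter() - t0

# Faster: list comprehension over the raw column values
t0 = time.perf_counter()
via_comp = ["{0:03d}_{1:03d}".format(s, r) for s, r in zip(df.segment, df.reach)]
t_comp = time.perf_counter() - t0

assert list(via_apply) == via_comp  # identical results either way
print("apply: {:.3f}s  comprehension: {:.3f}s".format(t_apply, t_comp))
```

The comprehension is usually markedly faster because it avoids constructing a pandas Series row object for every line of output, which is consistent with the speedup reported here.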
Coverage increased (+4.1%) to 77.373% when pulling 0a68d6c4d5f85dafe5f3486f8a3d3735b24a57bd on briochh:hotfix_sfr_out_load_speedup into 1c4428a5a671a3fcf8362b4263938b402cec959a on jtwhite79:develop.
thanks B!
| gharchive/pull-request | 2019-06-07T07:14:10 | 2025-04-01T06:44:38.813488 | {
"authors": [
"briochh",
"coveralls",
"jwhite-usgs"
],
"repo": "jtwhite79/pyemu",
"url": "https://github.com/jtwhite79/pyemu/pull/79",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1154630224 | Linter Fails
Bug description
I am just getting started with this repo and have identified a "clean up" task. I might just fix it myself, but since it's a cyclomatic complexity issue, is might not be a trivial fix.
To Reproduce
Run make dev; it fails at the following step:
golangci-lint run --fix --timeout 10m
oidc.go:134:1: Function name: OIDCCallback, Cyclomatic Complexity: 22, Halstead Volume: 4757.77, Maintainability Index: 19 (maintidx)
func (h *Headscale) OIDCCallback(ctx *gin.Context) {
^
poll.go:34:1: Function name: PollNetMapHandler, Cyclomatic Complexity: 18, Halstead Volume: 5318.34, Maintainability Index: 19 (maintidx)
func (h *Headscale) PollNetMapHandler(ctx *gin.Context) {
^
poll.go:272:1: Function name: PollNetMapStream, Cyclomatic Complexity: 20, Halstead Volume: 6635.19, Maintainability Index: 16 (maintidx)
func (h *Headscale) PollNetMapStream(
Context info
No context required.
Hi, this is a new linter that was added to golangci-lint, I have disabled it in #366 , so should be gone soon, until we have time to fix the complexity.
great! closing in favor of https://github.com/juanfont/headscale/pull/366
| gharchive/issue | 2022-02-28T22:45:19 | 2025-04-01T06:44:38.821844 | {
"authors": [
"kradalby",
"m-tanner-dev0"
],
"repo": "juanfont/headscale",
"url": "https://github.com/juanfont/headscale/issues/367",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1213593217 | 🛑 Tableau is down
In 488f7f3, Tableau (http://200.10.100.224:8000/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tableau is back up in 6aa38df.
| gharchive/issue | 2022-04-24T09:02:11 | 2025-04-01T06:44:38.824434 | {
"authors": [
"juanguara"
],
"repo": "juanguara/upptime",
"url": "https://github.com/juanguara/upptime/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1246288875 | 🛑 File Terminal is down
In 57e7d58, File Terminal ($FTP_SERVER) was down:
HTTP code: 0
Response time: 0 ms
Resolved: File Terminal is back up in 69a1053.
| gharchive/issue | 2022-05-24T09:49:49 | 2025-04-01T06:44:38.855920 | {
"authors": [
"juanretamales"
],
"repo": "juanretamales/Sunai-Upptime",
"url": "https://github.com/juanretamales/Sunai-Upptime/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
335406352 | Referenced microk8s.config command doesn't exist
As mentioned in the README, microk8s.config > $HOME/.kube/config fails because microk8s.config isn't installed by the snap.
@AdamIsrael the microk8s.config command came in with https://github.com/juju-solutions/microk8s/pull/19. It's currently available in edge:
$ snap info microk8s | grep -B1 -E 'microk8s.config|installed'
commands:
- microk8s.config
--
edge: v1.10.4 (93) 174MB classic
installed: v1.10.4 (93) 174MB classic
If moving to edge isn't an option, you should be able to get the same info with microk8s.kubectl config view --raw.
@AdamIsrael it is strongly recommended to deploy to microk8s in edge. There a lot of fixes and features there. there will be a release to beta soon but I do not know when.
What is stopping a beta release?
@marcoceppi I would like to make the release through our CI and by integrating the test https://github.com/juju-solutions/microk8s/pull/51 . It is a good first step to not breaking the deployments of our users.
Switching to edge is fine for what I'm doing, maybe even preferable.
Thanks all!
| gharchive/issue | 2018-06-25T13:25:59 | 2025-04-01T06:44:38.895859 | {
"authors": [
"AdamIsrael",
"ktsakalozos",
"kwmonroe",
"marcoceppi"
],
"repo": "juju-solutions/microk8s",
"url": "https://github.com/juju-solutions/microk8s/issues/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2009758845 | Race condition creating model and application
Description
There seems to be a race condition when creating a model and an application together.
Urgency
Casually reporting
Terraform Juju Provider version
0.10.0
Terraform version
v1.5.7
Terraform Configuration(s)
terraform {
required_version = ">= 1.5"
required_providers {
juju = {
source = "juju/juju"
version = "0.10.0"
}
}
}
provider "juju" {}
resource "juju_model" "tf_test" {
name = "tf-test"
}
resource "juju_application" "ubuntu" {
name = "ubuntu"
model = "tf-test"
charm {
name = "ubuntu"
}
units = 1
}
Reproduce / Test
When running `terraform apply` of the above against an empty machine controller I got this: https://pastebin.ubuntu.com/p/TVRxHhvqym/
Debug/Panic Output
No response
Notes & References
No response
@mthaddon so the reason this is happening is that in the config for the juju_application resource you wrote the model name as model = "tf-test" instead of model = juju_model.tf_test.name, which would've let the terraform engine create a dependency between the resources and communicate it nicely to the juju provider.
We're not entirely sure what the conventional way to handle this is in the terraform world (i.e. is it reasonable to expect plan writers to always write it that way?), which is why we're keeping this issue open to get more insight.
From my experience, it's expected that you would write references to other resources if your current resource has a dependency.
If a submitted plan works, it's luck; if it fails, that's expected.
(when you can't clearly use a reference to another resource, you can use depends_on, but it's not needed in this case)
| gharchive/issue | 2023-11-24T13:47:46 | 2025-04-01T06:44:38.935271 | {
"authors": [
"cderici",
"gboutry",
"mthaddon"
],
"repo": "juju/terraform-provider-juju",
"url": "https://github.com/juju/terraform-provider-juju/issues/345",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2630895957 | Restoring missing segments broken with compaction
Compacting segments is not deterministic so nodes could have differing segments. The old method of requesting sstables over grpc doesn't work anymore.
One idea is to apply compaction from raft so that every node compacts the same way.
| gharchive/issue | 2024-11-03T01:06:26 | 2025-04-01T06:44:38.936384 | {
"authors": [
"jukeks"
],
"repo": "jukeks/tukki",
"url": "https://github.com/jukeks/tukki/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
240383170 | Opera issue
Check your script in the Opera browser (Mac). After the browser update it doesn't work properly.
I have this same problem on Opera 46 (win 8.1).
I have the same problem; weirdly, Opera is built on Chromium. I use it because it's Chrome with some nice extra features: Always on Top video, a built-in ad blocker, and the sidebar.
Since only 1.1% of the population uses it, I think anime.js is still fine to use. I'm thinking this is more of an Opera issue than an anime.js issue.
FYI: Had the problem with Opera 46, but seems fixed in Opera 47 (Linux).
Check on Opera 47 (Mac) and everything is working perfectly.
| gharchive/issue | 2017-07-04T10:42:53 | 2025-04-01T06:44:38.939106 | {
"authors": [
"AndersSchmidtHansen",
"MichaelRise",
"TomS-",
"juliangarnier",
"mczaja"
],
"repo": "juliangarnier/anime",
"url": "https://github.com/juliangarnier/anime/issues/201",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1446526768 | Get position of scanned barcode
Is there any way to get the position of a scanned barcode from the onDetect callback?
With version 3.0.0-beta.2 the barcode object contains a corners parameter which holds the corner points of the barcode.
| gharchive/issue | 2022-11-12T17:03:37 | 2025-04-01T06:44:38.946390 | {
"authors": [
"WenheLI",
"juliansteenbakker"
],
"repo": "juliansteenbakker/mobile_scanner",
"url": "https://github.com/juliansteenbakker/mobile_scanner/issues/363",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1506707755 | RFC on a changing GuestManager.convert to update in-place
Hi @julianwachholz,
First off, thank you very much for maintaining this package for new versions of Django!
I've opened this ticket to propose a relatively simple change (or addition) of the GuestManager in the package that I think would enable greater functionality for devs wanting to create guest users.
Background
Business Use Case
I started using this package to enable guest users functionality on a site, with the goal that:
ephemeral guest users would be able to modify the db (ie CRUD their own objects like a regular user)
after conversion all of the state in related models would be maintained
After reading the documentation for django-guest-user I didn't see any flags that stated this usage wasn't possible, but after installing the package and trying to use it, to the best of my understanding, the use case above is not supported. I think this has also been raised previously in https://github.com/julianwachholz/django-guest-user/issues/3.
EDIT:
This use case is supported; I had a misconfigured form and had misread the guest creation workflow when debugging.
Thanks very much for your time!
Heya! Thanks for the kind words!
I'm going to have to take a deeper look into your other concerns one by one at a later date but I want to address your first point, the business use case.
What you're describing is exactly how guest users are intended to be used and working. It's possible you may have found a bug where converting will create a new user instead of only deleting the Guest instance?
Thanks very much for your reply! I must have something misconfigured on my end, and I'll take a closer look.
Ah, I see my problems:
I had a misconfigured GUEST_USER_CONVERT_FORM that subclassed UserCreationForm, but was pointing to the wrong post endpoint; so I was creating new users rather than upserting an existing one 🤦
I also misread the guest_user.models code and mistakenly thought that the guest User record was being deleted in GuestManager.convert(), when it's just the Guest record being deleted, and had missed that the User instance was supposed to be bound in the form view (because I had the wrong endpoint, my guest User instances were not being bound)
Thanks very much for your time and helping me debug from afar ...
| gharchive/issue | 2022-12-21T17:57:39 | 2025-04-01T06:44:38.952749 | {
"authors": [
"julianwachholz",
"mbhynes"
],
"repo": "julianwachholz/django-guest-user",
"url": "https://github.com/julianwachholz/django-guest-user/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244922767 | fix incorrect use of PyZMQ
In Supvisors, ZMQ sockets are not used by multiple threads.
However, they are all created in the Supervisor thread, which is not the correct way to do it.
The recommendation is to create, use, and destroy a ZMQ socket in the same thread.
That would explain random crashes like this one:
Bad file descriptor (src/epoll.cpp:119)
Bad file descriptor (src/tcp.cpp)
The ZMQ context is corrupted because of Supervisor,
more particularly because of the call to options.cleanup_fds
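The recommended per-thread lifecycle can be sketched in plain Python with threading.local; a stand-in class replaces the real pyzmq socket so the sketch stays dependency-free, and none of these names come from Supvisors itself:

```python
import threading

class FakeSocket:
    """Stand-in for zmq.Context.socket(): must only be touched by its creating thread."""
    def __init__(self):
        self.owner = threading.get_ident()

    def send(self, msg):
        # A real ZMQ socket is not thread-safe; enforce single-thread use here.
        assert threading.get_ident() == self.owner, "socket used from wrong thread"
        return msg

_local = threading.local()

def get_socket():
    # Lazily create the socket in the calling thread -- never share it across threads.
    if not hasattr(_local, "sock"):
        _local.sock = FakeSocket()
    return _local.sock

def worker(results, idx):
    sock = get_socket()               # created in this thread
    results[idx] = sock.send("ping")  # used in this thread
    del _local.sock                   # destroyed in this thread

results = {}
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(dict(sorted(results.items())))  # {0: 'ping', 1: 'ping', 2: 'ping'}
```

With pyzmq the same shape applies: share only the zmq.Context between threads, and let each thread call context.socket(), use the socket, and close it itself.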
| gharchive/issue | 2017-07-23T17:03:53 | 2025-04-01T06:44:38.973145 | {
"authors": [
"julien6387"
],
"repo": "julien6387/supvisors",
"url": "https://github.com/julien6387/supvisors/issues/67",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
204639758 | Adding limiters to each word and search for complete keyword
Hi,
When "separateWordSearch" is set to true in the options, the mark() function adds the limiters defined in "accuracy" to the end of each word in the keyword and searches for each word separately. Is it possible to do this (adding limiters to each word) when "separateWordSearch" is false, and search for the whole keyword?
For example, when we search for "Lorem ipsum" and "separateWordSearch" is false, I want to highlight the keyword like this:
Lorem ipsum dolor sit amet. Lorem: ipsum dolor sit amet, Lorem, (ipsum) dolor sit amet. Lorem dolor sit amet. ipsum dolor sit amet.
The bold words are highlighted.
I set "separateWordSearch" to false because I don't want to highlight each word of the keyword separately anywhere.
Hmm, I was going to tell you to try setting the accuracy to "complementary", but it doesn't include spaces; even adding a limiter only applies to the beginning and ending of the query, not between words.
@julmot I was making a demo and discovered that while marking "ipsum" it included one instance with the surrounding parentheses. Is this the intended behavior when using "complementary" accuracy? The regular expression uses [^\s()]* to target the complementary characters, so that would include parentheses. It just seems a bit odd to me to include the parentheses.
@Mottie Is there a way to use markRegExp() function to do this? I don't know how I create the regular expression.
You could try this code (demo); but please take note, the options are more limited when using the markRegExp function:
$(function() {
$("input").on("input", function() {
var include = ",:()",
keyword = $("input").val(),
regexString = "[\\b" + include + "]*" +
keyword.split(/\s+/).join("[\\s" + include + "]*") +
"[\\b" + include + "]*";
$(".context").unmark().markRegExp(new RegExp(regexString, "gi"));
}).trigger("input");
});
This example creates a very basic regular expression. It will not match diacritics, word joiners, change accuracy or include synonyms. You would have to modify the regular expression to include these abilities.
Thank you @Mottie , Is it possible to add this ability to mark() function?
As far as I can tell, it isn't possible to use a basic mark() function to accomplish what you want.
Hi @Saeid2016,
Sorry for the late response.
@Mottie already gave you the correct solution for this scenario: .markRegExp(). It's not possible to realize what you need using the .mark() method. We'd have to integrate some kind of a fuzzy mark option or supporting wildcards in queries like ? or *. Follow #112 to see when you can switch to .mark() from .markRegExp().
@Mottie
It just seems a bit odd to me to include the parentheses.
I agree here. As this isn't related to this issue I've created another one: https://github.com/julmot/mark.js/issues/113
Hi @Mottie, in this demo I added the "[" and "]" characters to the include string, but it doesn't highlight anything. Why?
When you include special regular expression characters, you need to escape them - demo - see http://stackoverflow.com/a/9310752/145346
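The escaping rule generalizes beyond JavaScript. Here is the same idea in Python, where re.escape plays the role of the manual escaping in the linked answer (an analogous sketch of the pattern-building approach, not mark.js code):

```python
import re

include = ",:()[]"            # characters allowed around and between words
escaped = re.escape(include)  # escapes ( ) [ ] so the character class stays valid
keyword = "Lorem ipsum"
pattern = ("[" + escaped + "]*"
           + ("[\\s" + escaped + "]*").join(map(re.escape, keyword.split()))
           + "[" + escaped + "]*")

text = "Lorem, (ipsum) dolor"
m = re.search(pattern, text, re.IGNORECASE)
print(m.group(0))  # Lorem, (ipsum)
```

Without the escaping step, the bare "[" and "]" would terminate the character class early and produce an invalid or wrong regular expression, which is exactly why the unescaped demo highlighted nothing.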
Thank you very much.
| gharchive/issue | 2017-02-01T16:52:13 | 2025-04-01T06:44:38.999001 | {
"authors": [
"Mottie",
"Saeid2016",
"julmot"
],
"repo": "julmot/mark.js",
"url": "https://github.com/julmot/mark.js/issues/110",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
392381501 | Full Match not picking up
Most matches work. I am using acrossElements: true for a nested set of DOM nodes.
Seems to be having issues with nodes that contain special HTML chars such as
What is the expected behavior for this?
Hi,
Thanks for submitting.
Please read the contribution guidelines and provide the required information named there.
| gharchive/issue | 2018-12-18T23:39:13 | 2025-04-01T06:44:39.001441 | {
"authors": [
"ddeychak",
"julmot"
],
"repo": "julmot/mark.js",
"url": "https://github.com/julmot/mark.js/issues/347",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2216595824 | AttributeError: 'Tokenizer' object has no attribute 'tokenizer'
I was trying to use this module but I am getting this error:
AttributeError: 'Tokenizer' object has no attribute 'tokenizer'
I also copied the code from the README file, but that doesn't work either.
I've got the same problem
You can solve the problem by installing this version of openai-whisper :
pip install openai-whisper==20230117
| gharchive/issue | 2024-03-30T17:32:50 | 2025-04-01T06:44:39.005000 | {
"authors": [
"Gokulraam2257",
"areffarhadi",
"xkortazar"
],
"repo": "jumon/whisper-punctuator",
"url": "https://github.com/jumon/whisper-punctuator/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1491858427 | [ Feature Request ] Add options that can change --info inline's prompt
[x] I have read through the manual page (man fzf)
[x] I have the latest version of fzf
[x] I have searched through the existing issues
Info
OS
[x] Linux
[ ] Mac OS X
[ ] Windows
[ ] Etc.
Shell
[ ] bash
[ ] zsh
[x] fish
Problem / Steps to reproduce
Does fzf have any options that can change the prompt shown by --info inline?
The man fzf page only gives us these options:
--prompt=STR
Input prompt (default: '> ')
--pointer=STR
Pointer to the current line (default: '>')
--marker=STR
Multi-select marker (default: '>')
Related issue where the user would also benefit from having more control over the < char of the info prompt and removing the extra space: https://github.com/junegunn/fzf/issues/2030
| gharchive/issue | 2022-12-12T13:12:22 | 2025-04-01T06:44:39.040383 | {
"authors": [
"Delapouite",
"NEX-S"
],
"repo": "junegunn/fzf",
"url": "https://github.com/junegunn/fzf/issues/3084",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
496764416 | Model architecture
Hi @junhwanjang. Thank you so much for sharing us your interesting repo!
May I ask you some questions?
How do you know the architecture of each model? And how do you generate the re-implemented models with the Python scripts in this repo, especially palm_detection without custom ops?
I have tried to read your source code but can't figure out how to run it.
Can you please tell me more in detail? I have been searching around but still have no idea...
just used Netron
Hi.
Used Netron. The app can visualize tflite models.
I checked the Google MediaPipe code, especially "conv2dtransposeBias". The op can be replaced with a Conv2dtranspose and an Add op.
@junhwanjang, thank you for your prompt reply!
Can you also show me how to run your code? I mean I don't know which python script need to run to achieve the re-implemented model files. I just want to try generating them by myself.
example code of generating model
from model import palm_detection_model
from utils import set_pretrained_weights, convert_to_pb, convert_to_tflite

OUT_PB_PATH = "./test.pb"
OUT_TFLITE_PATH = "./test.tflite"

def main():
    # Initialize the model, load pretrained weights, then export
    model = palm_detection_model(input_size=(256, 256, 3))
    set_pretrained_weights(model)
    convert_to_pb(model, OUT_PB_PATH)
    convert_to_tflite(OUT_PB_PATH, OUT_TFLITE_PATH)

if __name__ == "__main__":
    main()
@junhwanjang, thank you so much!!!
@junhwanjang Can you let me ask you one more question?
Where did the palm_detection_weights.npy file come from? I tried to extract weights from the palm_detection.tflite file by using the get_pretrained_tflite_weights function from utils.py but encountered a custom op error.
AFAIK we can't read a tflite file with the Python API, but the .npy extension of the pretrained weights file indicates that it was created by some Python script. This confused me.
Yes, that's right. The "palm_detection_weights.npy" file was extracted in another custom-built TensorFlow environment.
The previous "palm_detection.tflite" file can be loaded only in that custom-built environment (including the Mediapipe custom op).
Got it! Thank you so much again, @junhwanjang!!
| gharchive/issue | 2019-09-22T11:03:58 | 2025-04-01T06:44:39.048917 | {
"authors": [
"junhwanjang",
"metalwhale"
],
"repo": "junhwanjang/mediapipe-models",
"url": "https://github.com/junhwanjang/mediapipe-models/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
114678191 | Proxy failed Error
Hi guys,
I have the jupyterhub running in CentOS. Everything looks fine except that jupyterhub reports that it fails to start proxy almost everyday. We are guessing that it is because someone clicks on "Stop Server" in the "jupyterhub" control panel. Do you guys have the same experience? Thanks!
[E 2015-11-02 20:17:35.252 JupyterHub ioloop:612] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f74bc5c1268>, <tornado.concurrent.Future object at 0x7f74bc669e80>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 592, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 598, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
Same here. Also, I tried setting the server IP:
c.JupyterHub.ip='xx.xx.xx.xx'
but it doesn't fix the problem.
Still having the same issue. No luck!
[I 2015-12-01 22:11:38.057 JupyterHub app:788] Starting proxy @ http://*:8000/
[E 2015-12-01 22:11:39.066 JupyterHub ioloop:612] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f33da62e488>, <tornado.concurrent.Future object at 0x7f33da6ebcc0>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 592, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 598, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
What do you get from configurable-http-proxy --version? Is there no other output? Can you tell if an existing configurable-http-proxy process is already running?
Hi @minrk , thank you for looking into this.
[centos@ip-10-0-1-230 ~]$ configurable-http-proxy --version
0.6.0-dev
No other output I could find. My current workaround is to restart jupyterhub every time I see this happens. It is very annoying and hope to see if there is something I did wrong.
Yes, I just checked, there is one already running:
root 27338 27331 0 01:34 ? 00:00:03 node /bin/configurable-http-proxy --ip --port 8000 --api-ip localhost --api-port 8001 --default-target http://ip-10-0-1-230:8081
I wonder if there is any progress on this issue? I have seen many similar issues but am not sure why this one still persists.
@jlamcanopy how are you starting JupyterHub and capturing the logs? It's surprising to me that launching the proxy is failing with no output at all, so I'm guessing the output of the proxy is getting lost somewhere. Can you run the Hub with --debug to get a bit more output?
@minrk I started jupyterhub with:
jupyterhub -f jupyterhub_config.py
I captured the log in /var/log/jupyterhub.log. I will set the flag you recommended. Thanks!
Can you share the output of the jupyterhub process itself? I bet that's where the proxy output is going.
@minrk I got more output from the --debug flag:
[E 2016-01-05 23:17:55.352 JupyterHub ioloop:629] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f8a70368950>, <tornado.concurrent.Future object at 0x7f8a703c9940>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 600, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 615, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 990, in update_last_activity
routes = yield self.proxy.get_routes()
File "/data/git/jupyterhub/jupyterhub/orm.py", line 209, in get_routes
resp = yield self.api_request('', client=client)
ConnectionRefusedError: [Errno 111] Connection refused
[E 2016-01-05 23:17:56.354 JupyterHub ioloop:629] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7f8a703dce18>, <tornado.concurrent.Future object at 0x7f8a703d3400>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 600, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 615, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
And what about the output of the proxy itself? This will probably be in the stderr where JupyterHub was started.
@minrk Sorry about the delay in responding to your message. The interesting thing is that whenever I restart jupyterhub (i.e. killing the process and then starting it again), the proxy works for a few days before the above error occurs.
Is there a better way to debug this? It is getting quite annoying because I need to manually restart jupyterhub every other day :(
Can you provide the output of the process where jupyterhub is running? I'm still waiting for that, which should have the proxy output. How are you starting JupyterHub? Can you also provide more surrounding context in the JupyterHub log file, rather than just the traceback?
@minrk Here is the log at the beginning when I start jupyterhub with
jupyterhub -f jupyterhub_config.py
[root@ip-10-0-0-230 jupyterhub]# jupyterhub -f jupyterhub_config.py
[I 2016-01-15 16:47:06.969 JupyterHub app:518] Loading cookie_secret from /data/git/jupyterhub/jupyterhub_cookie_secret
[W 2016-01-15 16:47:07.000 JupyterHub app:257]
Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.
Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.
[W 2016-01-15 16:47:07.010 JupyterHub app:631] No admin users, admin interface will be unavailable.
[W 2016-01-15 16:47:07.010 JupyterHub app:632] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2016-01-15 16:47:07.010 JupyterHub app:652] Not using whitelist. Any authenticated user will be allowed.
[I 2016-01-15 16:47:07.024 JupyterHub app:1031] Hub API listening on http://ip-10-0-0-230:8081/hub/
[I 2016-01-15 16:47:07.029 JupyterHub app:788] Starting proxy @ http://*:8000/
16:47:07.126 - info: [ConfigProxy] Proxying https://*:8000 to http://ip-10-0-0-230:8081
16:47:07.128 - info: [ConfigProxy] Proxy API at http://ip-10-0-0-230:8001/api/routes
[I 2016-01-15 16:47:07.134 JupyterHub app:1054] JupyterHub is now running at http://localhost:8000/
[I 2016-01-15 16:47:11.353 JupyterHub dockerspawner:277] Container 'jupyter-5f780d0cc4' is gone
[I 2016-01-15 16:47:11.606 JupyterHub dockerspawner:329] Created container 'jupyter-5f780d0cc4' (id: d7abf62)
[I 2016-01-15 16:47:11.606 JupyterHub dockerspawner:339] Starting container 'jupyter-5f780d0cc4' (id: d7abf62)
[I 2016-01-15 16:47:12.622 JupyterHub base:281] User 5f780d0cc4 server took 1.302 seconds to start
[I 2016-01-15 16:47:12.623 JupyterHub orm:170] Adding user 5f780d0cc4 to proxy /user/5f780d0cc4 => http://127.0.0.1:9000
[I 2016-01-15 16:47:12.775 JupyterHub log:100] 200 GET /hub/api/authorizations/cookie/jupyter-hub-token-5f780d0cc4/[secret] (5f780d0cc4@10.0.0.230) 12.44ms
And can you include the context surrounding the error (say, going back 5-10 minutes)?
@minrk I was able to find the log showing this happening when jupyterhub restarted. Then jupyterhub was restarted again, and it worked again for about a day.
[I 2016-01-14 23:39:54.744 JupyterHub app:518] Loading cookie_secret from /data/git/jupyterhub/jupyterhub_cookie_secret
[W 2016-01-14 23:39:54.773 JupyterHub app:257]
Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.
Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.
[W 2016-01-14 23:39:54.777 JupyterHub app:631] No admin users, admin interface will be unavailable.
[W 2016-01-14 23:39:54.777 JupyterHub app:632] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2016-01-14 23:39:54.777 JupyterHub app:652] Not using whitelist. Any authenticated user will be allowed.
[I 2016-01-14 23:39:54.789 JupyterHub app:716] 5f780d0cc4 still running
[I 2016-01-14 23:39:54.795 JupyterHub app:716] f7c3ada4d7 still running
[I 2016-01-14 23:39:54.801 JupyterHub app:716] 664e39971b still running
[I 2016-01-14 23:39:54.806 JupyterHub app:716] 883dd834a4 still running
[I 2016-01-14 23:39:54.811 JupyterHub dockerspawner:277] Container 'jupyter-demo1430431644' is gone
[W 2016-01-14 23:39:54.811 JupyterHub dockerspawner:248] container not found
[I 2016-01-14 23:39:54.815 JupyterHub app:716] demo1448407643 still running
[I 2016-01-14 23:39:54.822 JupyterHub dockerspawner:277] Container 'jupyter-992549896e' is gone
[W 2016-01-14 23:39:54.822 JupyterHub dockerspawner:248] container not found
[I 2016-01-14 23:39:54.826 JupyterHub app:716] demo1430432792 still running
[I 2016-01-14 23:39:54.842 JupyterHub app:1031] Hub API listening on http://ip-10-0-0-230:8081/hub/
[I 2016-01-14 23:39:54.846 JupyterHub app:788] Starting proxy @ http://*:8000/
[I 2016-01-14 23:39:54.951 JupyterHub app:1054] JupyterHub is now running at http://localhost:8000/
[I 2016-01-14 23:39:54.956 JupyterHub orm:170] Adding user 5f780d0cc4 to proxy /user/5f780d0cc4 => http://127.0.0.1:9000
[I 2016-01-14 23:39:54.960 JupyterHub orm:170] Adding user f7c3ada4d7 to proxy /user/f7c3ada4d7 => http://127.0.0.1:9005
[I 2016-01-14 23:39:54.964 JupyterHub orm:170] Adding user 664e39971b to proxy /user/664e39971b => http://127.0.0.1:9004
[I 2016-01-14 23:39:54.966 JupyterHub orm:170] Adding user 883dd834a4 to proxy /user/883dd834a4 => http://127.0.0.1:9001
[I 2016-01-14 23:39:54.968 JupyterHub orm:170] Adding user demo1448407643 to proxy /user/demo1448407643 => http://127.0.0.1:9003
[I 2016-01-14 23:39:54.970 JupyterHub orm:170] Adding user demo1430432792 to proxy /user/demo1430432792 => http://127.0.0.1:9002
[I 2016-01-14 23:40:08.702 JupyterHub log:100] 200 GET /hub/api/authorizations/cookie/jupyter-hub-token-5f780d0cc4/[secret] (5f780d0cc4@10.0.0.230) 11.13ms
[I 2016-01-14 23:43:05.188 JupyterHub orm:185] Removing user 5f780d0cc4 from proxy
[I 2016-01-14 23:43:05.195 JupyterHub dockerspawner:368] Stopping container jupyter-5f780d0cc4 (id: 912c634)
[I 2016-01-14 23:43:06.012 JupyterHub dockerspawner:374] Removing container jupyter-5f780d0cc4 (id: 912c634)
[I 2016-01-14 23:43:07.008 JupyterHub log:100] 302 GET /user/5f780d0cc4/api/sessions?_=1452814808799 (@54.85.218.209) 0.95ms
[I 2016-01-14 23:43:07.012 JupyterHub log:100] 302 GET /user/5f780d0cc4/api/terminals?_=1452814808800 (@54.85.218.209) 0.68ms
[I 2016-01-14 23:43:11.202 JupyterHub dockerspawner:277] Container 'jupyter-5f780d0cc4' is gone
[I 2016-01-14 23:43:11.202 JupyterHub dockerspawner:277] Container 'jupyter-5f780d0cc4' is gone
[I 2016-01-14 23:43:11.204 JupyterHub base:328] User 5f780d0cc4 server took 6.014 seconds to stop
[W 2016-01-14 23:43:11.204 JupyterHub dockerspawner:248] container not found
[W 2016-01-14 23:43:11.205 JupyterHub dockerspawner:248] container not found
[I 2016-01-14 23:43:11.237 JupyterHub log:100] 204 DELETE /hub/api/users/5f780d0cc4/server (5f780d0cc4@54.85.218.209) 6053.31ms
[E 2016-01-14 23:43:11.237 JupyterHub web:1524] Uncaught exception GET /hub/user/5f780d0cc4/api/terminals?_=1452814808800 (54.85.218.209)
HTTPServerRequest(protocol='https', host='notebooks.example.com:8000', method='GET', uri='/hub/user/5f780d0cc4/api/terminals?_=1452814808800', version='HTTP/1.1', remote_ip='54.85.218.209', headers={'X-Forwarded-For': '54.85.218.209', 'Accept': 'application/json, text/javascript, */*; q=0.01', 'Connection': 'close', 'X-Forwarded-Port': '8000', 'X-Forwarded-Proto': 'https', 'Cookie': 'jupyter-hub-token="2|1:0|10:1452526899|17:jupyter-hub-token|44:OTRlY2FkZmNiZTMwNDY0MzgwYjRiZGYxZmEwNGYyOGQ=|be246d6f3943ba9c27531992b7792a2a89852c2c131713f9118a16bd167ee26d"; optimizelyEndUserId=oeu1435684870212r0.9612519931979477; optimizelySegments=%7B%221526203220%22%3A%22referral%22%2C%221532971879%22%3A%22gc%22%2C%221545481075%22%3A%22false%22%7D; optimizelyBuckets=%7B%222608620230%22%3A%222610360195%22%7D; __utma=120812314.675328110.1435684870.1440533120.1443103469.5; __utmz=120812314.1435684870.1.1.utmcsr=example.atlassian.net|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/display/CANOPY/Onboarding; express.sid=s%3A1b8Bhk3I2WHBIwJrVfovqwEQ8iUimHZo.mVDzngdNsp%2Bi%2Fl4gpyoe3uNXhDmQYAK5TgmcDZeb9TM', 'Accept-Language': 'en-US,en;q=0.8', 'Referer': 'https://notebooks.example.com:8000/user/5f780d0cc4/tree', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36', 'Accept-Encoding': 'gzip, deflate, sdch', 'X-Requested-With': 'XMLHttpRequest', 'Host': 'notebooks.example.com:8000'})
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/web.py", line 1445, in _execute
result = yield result
File "/data/git/jupyterhub/jupyterhub/handlers/base.py", line 445, in get
yield self.spawn_single_user(current_user)
File "/data/git/jupyterhub/jupyterhub/handlers/base.py", line 260, in spawn_single_user
raise RuntimeError("Spawn already pending for: %s" % user.name)
RuntimeError: Spawn already pending for: 5f780d0cc4
[E 2016-01-14 23:43:11.250 JupyterHub log:99] {
"X-Forwarded-For": "54.85.218.209",
"Accept": "application/json, text/javascript, */*; q=0.01",
"Connection": "close",
"X-Forwarded-Port": "8000",
"X-Forwarded-Proto": "https",
"Cookie": "jupyter-hub-token=\"2|1:0|10:1452526899|17:jupyter-hub-token|44:OTRlY2FkZmNiZTMwNDY0MzgwYjRiZGYxZmEwNGYyOGQ=|be246d6f3943ba9c27531992b7792a2a89852c2c131713f9118a16bd167ee26d\"; optimizelyEndUserId=oeu1435684870212r0.9612519931979477; optimizelySegments=%7B%221526203220%22%3A%22referral%22%2C%221532971879%22%3A%22gc%22%2C%221545481075%22%3A%22false%22%7D; optimizelyBuckets=%7B%222608620230%22%3A%222610360195%22%7D; __utma=120812314.675328110.1435684870.1440533120.1443103469.5; __utmz=120812314.1435684870.1.1.utmcsr=example.atlassian.net|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/display/CANOPY/Onboarding; express.sid=s%3A1b8Bhk3I2WHBIwJrVfovqwEQ8iUimHZo.mVDzngdNsp%2Bi%2Fl4gpyoe3uNXhDmQYAK5TgmcDZeb9TM",
"Accept-Language": "en-US,en;q=0.8",
"Referer": "https://notebooks.example.com:8000/user/5f780d0cc4/tree",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36",
"Accept-Encoding": "gzip, deflate, sdch",
"X-Requested-With": "XMLHttpRequest",
"Host": "notebooks.example.com:8000"
}
[E 2016-01-14 23:43:11.250 JupyterHub log:100] 500 GET /hub/user/5f780d0cc4/api/terminals?_=1452814808800 (5f780d0cc4@54.85.218.209) 4199.29ms
[I 2016-01-14 23:43:11.250 JupyterHub dockerspawner:277] Container 'jupyter-5f780d0cc4' is gone
[I 2016-01-14 23:43:11.508 JupyterHub dockerspawner:339] Starting container 'jupyter-5f780d0cc4' (id: 9110b48)
[I 2016-01-14 23:43:12.529 JupyterHub base:281] User 5f780d0cc4 server took 1.324 seconds to start
[I 2016-01-14 23:43:12.530 JupyterHub orm:170] Adding user 5f780d0cc4 to proxy /user/5f780d0cc4 => http://127.0.0.1:9000
[E 2016-01-14 23:43:12.533 JupyterHub web:1524] Uncaught exception GET /hub/user/5f780d0cc4/api/sessions?_=1452814808799 (54.85.218.209)
HTTPServerRequest(protocol='https', host='notebooks.example.com:8000', method='GET', uri='/hub/user/5f780d0cc4/api/sessions?_=1452814808799', version='HTTP/1.1', remote_ip='54.85.218.209', headers={'X-Forwarded-For': '54.85.218.209', 'Accept': 'application/json, text/javascript, */*; q=0.01', 'Connection': 'close', 'X-Forwarded-Port': '8000', 'X-Forwarded-Proto': 'https', 'Cookie': 'jupyter-hub-token="2|1:0|10:1452526899|17:jupyter-hub-token|44:OTRlY2FkZmNiZTMwNDY0MzgwYjRiZGYxZmEwNGYyOGQ=|be246d6f3943ba9c27531992b7792a2a89852c2c131713f9118a16bd167ee26d"; optimizelyEndUserId=oeu1435684870212r0.9612519931979477; optimizelySegments=%7B%221526203220%22%3A%22referral%22%2C%221532971879%22%3A%22gc%22%2C%221545481075%22%3A%22false%22%7D; optimizelyBuckets=%7B%222608620230%22%3A%222610360195%22%7D; __utma=120812314.675328110.1435684870.1440533120.1443103469.5; __utmz=120812314.1435684870.1.1.utmcsr=example.atlassian.net|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/display/CANOPY/Onboarding; express.sid=s%3A1b8Bhk3I2WHBIwJrVfovqwEQ8iUimHZo.mVDzngdNsp%2Bi%2Fl4gpyoe3uNXhDmQYAK5TgmcDZeb9TM', 'Accept-Language': 'en-US,en;q=0.8', 'Referer': 'https://notebooks.example.com:8000/user/5f780d0cc4/tree', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36', 'Accept-Encoding': 'gzip, deflate, sdch', 'X-Requested-With': 'XMLHttpRequest', 'Host': 'notebooks.example.com:8000'})
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/web.py", line 1445, in _execute
result = yield result
File "/data/git/jupyterhub/jupyterhub/handlers/base.py", line 445, in get
yield self.spawn_single_user(current_user)
File "/data/git/jupyterhub/jupyterhub/handlers/base.py", line 296, in spawn_single_user
yield finish_user_spawn()
File "/data/git/jupyterhub/jupyterhub/handlers/base.py", line 282, in finish_user_spawn
yield self.proxy.add_user(user)
File "/data/git/jupyterhub/jupyterhub/orm.py", line 179, in add_user
client=client,
ConnectionRefusedError: [Errno 111] Connection refused
[E 2016-01-14 23:43:12.544 JupyterHub log:99] {
"X-Forwarded-For": "54.85.218.209",
"Accept": "application/json, text/javascript, */*; q=0.01",
"Connection": "close",
"X-Forwarded-Port": "8000",
"X-Forwarded-Proto": "https",
"Cookie": "jupyter-hub-token=\"2|1:0|10:1452526899|17:jupyter-hub-token|44:OTRlY2FkZmNiZTMwNDY0MzgwYjRiZGYxZmEwNGYyOGQ=|be246d6f3943ba9c27531992b7792a2a89852c2c131713f9118a16bd167ee26d\"; optimizelyEndUserId=oeu1435684870212r0.9612519931979477; optimizelySegments=%7B%221526203220%22%3A%22referral%22%2C%221532971879%22%3A%22gc%22%2C%221545481075%22%3A%22false%22%7D; optimizelyBuckets=%7B%222608620230%22%3A%222610360195%22%7D; __utma=120812314.675328110.1435684870.1440533120.1443103469.5; __utmz=120812314.1435684870.1.1.utmcsr=example.atlassian.net|utmccn=(referral)|utmcmd=referral|utmcct=/wiki/display/CANOPY/Onboarding; express.sid=s%3A1b8Bhk3I2WHBIwJrVfovqwEQ8iUimHZo.mVDzngdNsp%2Bi%2Fl4gpyoe3uNXhDmQYAK5TgmcDZeb9TM",
"Accept-Language": "en-US,en;q=0.8",
"Referer": "https://notebooks.example.com:8000/user/5f780d0cc4/tree",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36",
"Accept-Encoding": "gzip, deflate, sdch",
"X-Requested-With": "XMLHttpRequest",
"Host": "notebooks.example.com:8000"
}
[E 2016-01-14 23:43:12.544 JupyterHub log:100] 500 GET /hub/user/5f780d0cc4/api/sessions?_=1452814808799 (5f780d0cc4@54.85.218.209) 5498.67ms
[E 2016-01-14 23:43:24.952 JupyterHub app:824] Proxy stopped with exit code 8
[I 2016-01-14 23:43:24.957 JupyterHub app:788] Starting proxy @ http://*:8000/
[E 2016-01-14 23:43:25.966 JupyterHub ioloop:629] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7fd400140c80>, <tornado.concurrent.Future object at 0x7fd400e3ee10>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 600, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 615, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
[E 2016-01-14 23:43:54.952 JupyterHub app:824] Proxy stopped with exit code 8
[I 2016-01-14 23:43:54.954 JupyterHub app:788] Starting proxy @ http://*:8000/
[E 2016-01-14 23:43:55.964 JupyterHub ioloop:629] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7fd4001408c8>, <tornado.concurrent.Future object at 0x7fd400601128>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 600, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 615, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
[E 2016-01-14 23:44:24.952 JupyterHub app:824] Proxy stopped with exit code 8
[I 2016-01-14 23:44:24.956 JupyterHub app:788] Starting proxy @ http://*:8000/
[E 2016-01-14 23:44:25.965 JupyterHub ioloop:629] Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7fd4000992f0>, <tornado.concurrent.Future object at 0x7fd400e2a080>)
Traceback (most recent call last):
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 600, in _run_callback
ret = callback()
File "/usr/lib64/python3.4/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/lib64/python3.4/site-packages/tornado/ioloop.py", line 615, in <lambda>
self.add_future(ret, lambda f: f.result())
File "/data/git/jupyterhub/jupyterhub/app.py", line 826, in check_proxy
yield self.start_proxy()
File "/data/git/jupyterhub/jupyterhub/app.py", line 809, in start_proxy
_check()
File "/data/git/jupyterhub/jupyterhub/app.py", line 805, in _check
raise e
RuntimeError: Proxy failed to start with exit code 8
Hi @minrk, just wondering if you have any insight into what might cause this?
I have no idea. I'm still very surprised that there's no output from the proxy about the error, which is really the only information I can think of that would help figure it out. Can you run with JupyterHub.debug_proxy=True? That will produce a lot of output, but might show why the proxy is dying.
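For anyone unsure where that flag goes: it is a standard traitlets option, set in `jupyterhub_config.py` (the `c` object is provided by JupyterHub when it loads the file):

```python
# jupyterhub_config.py
# Forward the configurable-http-proxy's own (very verbose) output
# into the Hub's log, so proxy failures are visible.
c.JupyterHub.debug_proxy = True
```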
@minrk I just enabled the debug output for proxy. I'll paste the log here the next time it fails to start.
@minrk I also have the same crash. I just started running JupyterHub last week and it has crashed twice (a few days apart). I got the same crash log as @jlamcanopy, but there was no activity for about an hour before the crash except a few GETs to /hub/api/authorizations/cookie/jupyter-hub-token-xxx.
I just restarted with the debug_proxy set to true and will report back when it crashes again.
@minrk Setting debug_proxy to true makes it crash even more often (twice in a day) but without any log messages. I am setting it back to false. Any other help would be appreciated. Thanks.
@vitapoly I experienced exactly what you said. Setting debug_proxy makes it crash much more often. In the end I didn't keep it turned on because I couldn't keep up with the rate of crashes. What I did instead was write a script to restart JupyterHub every time the log outputs "Proxy failed to start with exit code 8". It is not great, but at least I can sleep better.
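A watchdog like that can be sketched in a few lines. This is an illustrative reimplementation, not the actual script; the log file path and the systemd service name are assumptions:

```python
import re
import subprocess
import time

LOG_PATH = "/var/log/jupyterhub.log"  # assumption: where jupyterhub's output is piped
FAILURE = re.compile(r"Proxy failed to start with exit code 8")

def tail_and_restart(log_path=LOG_PATH):
    """Follow the log like `tail -f`; restart the hub when the failure line appears."""
    with open(log_path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            if FAILURE.search(line):
                # assumption: jupyterhub runs as a systemd service named "jupyterhub"
                subprocess.run(["systemctl", "restart", "jupyterhub"])
```

This papers over the crash rather than fixing it, which matches how it was described above: a stopgap, not a solution.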
Setting debug_proxy produces a lot of output from the proxy (every HTTP request is logged), so if you are seeing none that means you aren't seeing the proxy output at all, which is probably why we are not seeing the traceback from CHP when it fails to restart. That really is the output we need. It will be logged to stdout of the same terminal as jupyterhub. Are you running that with screen, or piping output to a file? How are you capturing the log output?
Hello, I face the same problem.
The problem comes from the SSH session, so I found this trick:
Replace `Defaults requiretty` with `Defaults !requiretty` in your `/etc/sudoers`. This will affect your global sudo configuration.
And currently it is working fine :-)
Unfortunately it did not work for me.
It's been a few months since this has been discussed. I'm tagging this "needs: more user information". If you see this behavior, please provide details. Thanks!
Since it's been 15 days since the last activity on this issue, I'm going to close the issue. Please feel free to ask for it to be reopened if additional information becomes available.
I get a similar error. I am trying to run JupyterHub, with DockerSpawner, using SystemUser. For now, just for tests, I run JupyterHub with `sudo jupyterhub -f jupyterhub_config.py --no-ssl`.
To me it looks like a side effect of the spawner failing to launch the Docker container. I do not know yet why the container does not get spawned. Anyway, below is the full log leading up to the error. What I do after launching is go to the JupyterHub website and log in. Login succeeds, but the Spawner fails to start.
Versions of jupyterhub, docker, and the proxy:
$:~/Downloads$ jupyterhub --version
0.6.1
$:~/Downloads$ configurable-http-proxy --version
1.3.0
$:~/Downloads$ docker --version
Docker version 1.12.1, build 23cf638
and a copy of the log from the console:
[I 2016-10-26 15:59:49.732 JupyterHub app:622] Loading cookie_secret from /home/username/Downloads/jupyterhub_cookie_secret
[D 2016-10-26 15:59:49.732 JupyterHub app:694] Connecting to db: sqlite:///jupyterhub.sqlite
[I 2016-10-26 15:59:49.770 JupyterHub app:785] Not using whitelist. Any authenticated user will be allowed.
[D 2016-10-26 15:59:49.778 JupyterHub app:871] Loading state for username from db
[D 2016-10-26 15:59:49.779 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[D 2016-10-26 15:59:49.786 JupyterHub dockerspawner:310] Container 274687f status: {'Dead': False,
'Error': '',
'ExitCode': 1,
'FinishedAt': '2016-10-26T13:54:00.482096921Z',
'OOMKilled': False,
'Paused': False,
'Pid': 0,
'Restarting': False,
'Running': False,
'StartedAt': '2016-10-26T13:53:59.834615662Z',
'Status': 'exited'}
[D 2016-10-26 15:59:49.786 JupyterHub app:883] username not running.
[D 2016-10-26 15:59:49.787 JupyterHub app:888] Loaded users:
username admin
[I 2016-10-26 15:59:49.792 JupyterHub app:1231] Hub API listening on http://192.168.1.2:8080/hub/
[W 2016-10-26 15:59:49.795 JupyterHub app:959] Running JupyterHub without SSL. There better be SSL termination happening somewhere else...
[I 2016-10-26 15:59:49.795 JupyterHub app:968] Starting proxy @ http://*:8000/
[D 2016-10-26 15:59:49.795 JupyterHub app:969] Proxy cmd: ['configurable-http-proxy', '--ip', '', '--port', '8000', '--api-ip', '127.0.0.1', '--api-port', '8001', '--default-target', 'http://192.168.1.2:8080', '--error-target', 'http://192.168.1.2:8080/hub/error', '--log-level', 'debug']
15:59:49.940 - info: [ConfigProxy] Proxying http://*:8000 to http://192.168.1.2:8080
15:59:49.944 - info: [ConfigProxy] Proxy API at http://127.0.0.1:8001/api/routes
[D 2016-10-26 15:59:50.001 JupyterHub app:997] Proxy started and appears to be up
[I 2016-10-26 15:59:50.001 JupyterHub app:1254] JupyterHub is now running at http://127.0.0.1:8000/
[I 2016-10-26 15:59:53.496 JupyterHub log:100] 302 GET / (@192.168.1.2) 1.96ms
[I 2016-10-26 15:59:53.497 JupyterHub log:100] 302 GET /hub (@192.168.1.2) 0.38ms
[I 2016-10-26 15:59:53.500 JupyterHub log:100] 302 GET /hub/ (@192.168.1.2) 0.66ms
[I 2016-10-26 15:59:53.528 JupyterHub log:100] 200 GET /hub/login (@192.168.1.2) 26.15ms
[D 2016-10-26 15:59:53.533 JupyterHub log:100] 200 GET /hub/static/css/style.min.css?v=91c753d3c28f12b7480e5d0d9e7c55b2 (@192.168.1.2) 3.90ms
[D 2016-10-26 16:00:00.901 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[D 2016-10-26 16:00:00.905 JupyterHub dockerspawner:310] Container 274687f status: {'Dead': False,
'Error': '',
'ExitCode': 1,
'FinishedAt': '2016-10-26T13:54:00.482096921Z',
'OOMKilled': False,
'Paused': False,
'Pid': 0,
'Restarting': False,
'Running': False,
'StartedAt': '2016-10-26T13:53:59.834615662Z',
'Status': 'exited'}
[D 2016-10-26 16:00:00.921 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[I 2016-10-26 16:00:00.924 JupyterHub dockerspawner:399] Found existing container 'jupyter-username' (id: 274687f)
[I 2016-10-26 16:00:00.924 JupyterHub dockerspawner:404] Starting container 'jupyter-username' (id: 274687f)
[D 2016-10-26 16:00:01.093 JupyterHub spawner:316] Polling subprocess every 30s
[D 2016-10-26 16:00:10.925 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[D 2016-10-26 16:00:10.928 JupyterHub dockerspawner:310] Container 274687f status: {'Dead': False,
'Error': '',
'ExitCode': 1,
'FinishedAt': '2016-10-26T14:00:01.752979303Z',
'OOMKilled': False,
'Paused': False,
'Pid': 0,
'Restarting': False,
'Running': False,
'StartedAt': '2016-10-26T14:00:01.082762199Z',
'Status': 'exited'}
[W 2016-10-26 16:00:10.929 JupyterHub web:1545] 500 POST /hub/login?next= (192.168.1.2): Spawner failed to start [status=ExitCode=1, Error='', FinishedAt=2016-10-26T14:00:01.752979303Z]
[D 2016-10-26 16:00:10.932 JupyterHub base:441] No template for 500
[E 2016-10-26 16:00:10.946 JupyterHub log:99] {
"User-Agent": "ELinks/0.12pre6 (textmode; Linux 4.4.0-45-generic x86_64; 189x25-2)",
"Host": "192.168.1.2:8080",
"Content-Type": "application/x-www-form-urlencoded",
"Accept-Language": "en",
"Referer": "http://192.168.1.2:8080/hub/login",
"Content-Length": "38",
"Connection": "Keep-Alive",
"Accept": "*/*"
}
[E 2016-10-26 16:00:10.956 JupyterHub log:100] 500 POST /hub/login?next= (@192.168.1.2) 10097.93ms
[D 2016-10-26 16:00:31.094 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[D 2016-10-26 16:00:31.098 JupyterHub dockerspawner:310] Container 274687f status: {'Dead': False,
'Error': '',
'ExitCode': 1,
'FinishedAt': '2016-10-26T14:00:01.752979303Z',
'OOMKilled': False,
'Paused': False,
'Pid': 0,
'Restarting': False,
'Running': False,
'StartedAt': '2016-10-26T14:00:01.082762199Z',
'Status': 'exited'}
[W 2016-10-26 16:00:31.178 JupyterHub user:264] username's server never showed up at http://127.0.0.1:32770/user/username after 30 seconds. Giving up
[D 2016-10-26 16:00:31.179 JupyterHub dockerspawner:324] Getting container 'jupyter-username'
[D 2016-10-26 16:00:31.181 JupyterHub dockerspawner:310] Container 274687f status: {'Dead': False,
'Error': '',
'ExitCode': 1,
'FinishedAt': '2016-10-26T14:00:01.752979303Z',
'OOMKilled': False,
'Paused': False,
'Pid': 0,
'Restarting': False,
'Running': False,
'StartedAt': '2016-10-26T14:00:01.082762199Z',
'Status': 'exited'}
[E 2016-10-26 16:00:31.185 JupyterHub gen:878] Exception in Future <tornado.concurrent.Future object at 0x7f3f3e0c5400> after timeout
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tornado/gen.py", line 874, in error_callback
future.result()
File "/usr/local/lib/python3.5/dist-packages/jupyterhub/user.py", line 280, in spawn
raise e
File "/usr/local/lib/python3.5/dist-packages/jupyterhub/user.py", line 256, in spawn
yield self.server.wait_up(http=True, timeout=spawner.http_timeout)
File "/usr/local/lib/python3.5/dist-packages/jupyterhub/orm.py", line 108, in wait_up
yield wait_for_http_server(self.url, timeout=timeout)
File "/usr/local/lib/python3.5/dist-packages/jupyterhub/utils.py", line 94, in wait_for_http_server
**locals()
TimeoutError: Server at http://127.0.0.1:32770/user/username didn't respond in 30 seconds
Having the exact same issue when starting JupyterHub as a systemctl service; it works fine with just `sudo jupyterhub`.
● jupyterhub.service - Jupyterhub
Loaded: loaded (/lib/systemd/system/jupyterhub.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2017-07-07 06:42:25 UTC; 1s ago
Process: 2808 ExecStart=/usr/local/bin/jupyterhub -f /home/ubuntu/jupyterhub_config.py (code=exited, status=1/FAILURE)
Main PID: 2808 (code=exited, status=1/FAILURE)
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: File "/usr/local/lib/python3.5/dist-packages/jupyterhub/app.py", line 1193, in _check
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: raise e
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: RuntimeError: Proxy failed to start with exit code 1
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]:
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: {'owner': <jupyterhub.spawner.LocalProcessSpawner object at 0x7f3ab3b382e8>, 'trait': <traitlets.traitlets.Unicode object at 0x7f3ab17d6f60>, 'value': '~/notebooks'}
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: {'owner': <jupyterhub.spawner.LocalProcessSpawner object at 0x7f3ab3b38f98>, 'trait': <traitlets.traitlets.Unicode object at 0x7f3ab17d6f60>, 'value': '~/notebooks'}
Jul 07 06:42:25 ip-172-31-15-123 jupyterhub[2808]: {'owner': <jupyterhub.spawner.LocalProcessSpawner object at 0x7f3ab3b42128>, 'trait': <traitlets.traitlets.Unicode object at 0x7f3ab17d6f60>, 'value': '~/notebooks'}
Jul 07 06:42:25 ip-172-31-15-123 systemd[1]: jupyterhub.service: Main process exited, code=exited, status=1/FAILURE
Jul 07 06:42:25 ip-172-31-15-123 systemd[1]: jupyterhub.service: Unit entered failed state.
Jul 07 06:42:25 ip-172-31-15-123 systemd[1]: jupyterhub.service: Failed with result 'exit-code'.
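One frequent cause of this works-with-sudo-but-not-under-systemd pattern — an assumption here, not confirmed in the thread — is that the unit runs with a minimal `PATH`, so `configurable-http-proxy` (which the Hub spawns) is not found even though it is visible from a login shell. A quick way to check:

```python
import os
import shutil

def find_proxy(path=None):
    """Return the full path of configurable-http-proxy, or None if absent from the given PATH."""
    return shutil.which("configurable-http-proxy", path=path)

# Compare a login shell's PATH with the stripped-down PATH a systemd unit
# typically gets; if the second lookup returns None, add an
# Environment=PATH=... line (or use an absolute proxy path) in the unit file.
print(find_proxy(os.environ.get("PATH")))
print(find_proxy("/usr/bin:/bin"))  # assumption: a minimal unit PATH
```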
| gharchive/issue | 2015-11-02T20:33:19 | 2025-04-01T06:44:39.131381 | {
"authors": [
"brberis",
"fossouo",
"jcducom",
"jlamcanopy",
"minrk",
"mpekalski",
"vitapoly",
"willingc",
"xxf1995"
],
"repo": "jupyter/jupyterhub",
"url": "https://github.com/jupyter/jupyterhub/issues/327",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
167511957 | [WIP] Adopt the 3-clause BSD license template from opensource.org
See the project discussion at https://groups.google.com/forum/#!topic/jupyter/vZpWgw8zKdc
Please do not merge this until the discussion has settled about the license wording.
👍
Status?
I'd say merge if you concur, @ellisonbg.
@ellisonbg - Has the discussion settled? Do we want a legal opinion in addition to our own views? And I think (symbolically) either you or @fperez should be the one to merge this one.
(For the record, though, the three main contributors by far are all +1 in the comments above...)
I'm happy to merge it, and I'm +1 on the changes. Even accepting a certain amount of legal ambiguity around the notion of "contributors" as a copyright holder, I think right now it's the best we have to communicate the more complex and nuanced mouthful of "everyone in the repo holds individual copyrights to their changes, and therefore the copyright of the whole repo is the sum total of those contributions, all of which are jointly licensed under the BSD terms..."
It's also somewhat accepted community-wide, so I think it's the best we're going to get for now. We can't paralyze every step of our process until the Supreme Court gives us an opinion :)
@fperez - just waiting for you to hit the green button, then...
I pinged the GG discussion for a decision on this issue as well...
Replied on list for a "last call" and to get this context in there too. Will merge tomorrow if no significantly new concerns are raised.
I just want to note that the text for the third clause may read better if it was:
Neither the name of Project Jupyter nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
That seems to protect both the official project and the contributors (which are the copyright holders, as noted in the copyright statement). However, I'm also happy to defer to the OSI text, even if it seems a bit ambiguous in our case, especially if we have separate trademark protections for the Project Jupyter organization. I understand that there can be subtle ramifications of changing the license wording slightly, and can cause confusion even if there are multiple wordings of the license out there in common acceptance.
I'm going to merge it with the purely standard language: the less we customize, the easier it gets to communicate our licensing model. For more detailed guidelines for endorsement/usage of the project names, logos, trademarks, etc, we'll have a proper policy elsewhere, just like the PSF and other projects.
Thanks everyone!!
| gharchive/pull-request | 2016-07-26T03:10:37 | 2025-04-01T06:44:39.139970 | {
"authors": [
"blink1073",
"ellisonbg",
"fperez",
"jasongrout"
],
"repo": "jupyter/jupyterlab",
"url": "https://github.com/jupyter/jupyterlab/pull/543",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
672329849 | Line breaks for strings within cells
first off - this extension is awesome. a well needed addition to the Jupyter Lab environment.
Describe the new feature being added
It would be great if wrapped text could be displayed within cells.
For example, pressing, say, Alt+Enter could go onto a new line within the same cell.
How will this feature improve this extension
This would make it much better for supporting text-based tables. I use Jupyter to build reports, and a notebook will often read a CSV and render it as Markdown.
Describe alternatives you've considered
Currently I do this using Excel (Alt+Enter), but it is cumbersome to move between different applications.
Do you think this type of feature could be supported in the future?
@gunstonej thank you for the kind words! We definitely think this feature can be supported in the future, but our team has some other features we are prioritizing at this moment. One of the main issues we will have to work through is how we will parse the csv string and differentiate the line breaks from the row delimiters. If you would like to tackle this issue, please let us know. Thanks again for your support, and we welcome any other feedback and suggestions you may have!
Best,
@kgoo124, @lmcnichols, @ryuntalan
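For what it's worth, the parsing question has a standard answer in the CSV format itself: a newline inside a double-quoted field is cell data, while an unquoted newline is a row delimiter. Python's `csv` module is used here only to illustrate that rule (not the extension's actual TypeScript parser), and it honors it in both directions:

```python
import csv
import io

# The quoted newline belongs to the middle field; the bare newline
# after "b" ends the first record.
raw = 'a,"line one\nline two",b\nc,d,e\n'
rows = list(csv.reader(io.StringIO(raw)))
# rows -> [['a', 'line one\nline two', 'b'], ['c', 'd', 'e']]

# Writing works the same way: a field containing a newline is quoted
# automatically, so a round trip preserves the in-cell line break.
out = io.StringIO()
csv.writer(out).writerow(["x", "first\nsecond"])
# out.getvalue() -> 'x,"first\nsecond"\r\n'
```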
| gharchive/issue | 2020-08-03T20:40:07 | 2025-04-01T06:44:39.182102 | {
"authors": [
"gunstonej",
"kgoo124"
],
"repo": "jupytercalpoly/jupyterlab-tabular-data-editor",
"url": "https://github.com/jupytercalpoly/jupyterlab-tabular-data-editor/issues/158",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
780541344 | Binder-examples/getting-data does not launch
Bug description
binder-examples/getting-data does not launch.
Expected behaviour
I expect it to launch so that I can play around with it
Actual behaviour
Error loading binder-examples/getting-data/master!
See logs below for details.
Step 47/50 : RUN ./postBuild
---> Running in 7fb35703bfec
Removing intermediate container 7fb35703bfec
The command '/bin/sh -c ./postBuild' returned a non-zero code: 8
How to reproduce
Go to repo linked above
Click on binder launch badge
Scroll down
See error
Your personal set up
Not relevant as far as I can see.
(Chrome)
I think this is because the data file we try to download is no longer available at the URL we use. It would be great if you could make a PR to fix this. To fix it, we need to find out the new URL from the source website or switch the example to use a different dataset.
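If the repo's `postBuild` fetches the file with wget, exit code 8 means the server issued an error response (e.g. a 404 for the moved file). Until the dataset's new location is known, the download step can at least fail with a readable message — a sketch only, with a placeholder URL:

```python
import sys
import urllib.error
import urllib.request

# Placeholder — the real fix is pointing this at the dataset's new location.
DATA_URL = "https://example.com/dataset.csv"

def fetch(url):
    """Return the body at url, exiting with an explicit message on an HTTP error."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        sys.exit(f"postBuild: download of {url} failed with HTTP {e.code}")
```

A postBuild could then write `fetch(DATA_URL)` to disk, and the build log would show the failing URL and status instead of a bare non-zero exit code.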
| gharchive/issue | 2021-01-06T13:09:05 | 2025-04-01T06:44:39.185648 | {
"authors": [
"betatim",
"einarpersson"
],
"repo": "jupyterhub/binder",
"url": "https://github.com/jupyterhub/binder/issues/223",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
594119864 | Fix typos in R:libraries should be called packages
In R terminology, packages are installed into a library.
Many thanks!
| gharchive/pull-request | 2020-04-04T21:52:47 | 2025-04-01T06:44:39.186540 | {
"authors": [
"choldgraf",
"sje30"
],
"repo": "jupyterhub/binder",
"url": "https://github.com/jupyterhub/binder/pull/192",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2408252844 | Resize desktop to window, instead of scaling
Benefits:
The whole window is used instead of just the central portion, making more efficient use of space
The desktop is not scaled, so you don't end up with a high resolution desktop that's scaled down to fit the browser window
Example using a smallish browser window:
Before:
After:
This is a clear improvement! Eventually it would be useful to turn this into a dropdown option I think.
thanks @manics
| gharchive/pull-request | 2024-07-15T09:16:50 | 2025-04-01T06:44:39.189133 | {
"authors": [
"manics",
"yuvipanda"
],
"repo": "jupyterhub/jupyter-remote-desktop-proxy",
"url": "https://github.com/jupyterhub/jupyter-remote-desktop-proxy/pull/124",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1403440290 | Cannot load or create notebook, Windows Jupyterlab desktop app
Main Log
[2022-10-10 06:21:30.615] [info] In production mode
[2022-10-10 06:21:30.619] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 06:21:30.664] [error] (node:9876) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
[2022-10-10 06:24:03.006] [info] In production mode
[2022-10-10 06:24:03.011] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 06:24:03.057] [error] (node:19516) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
[2022-10-10 06:27:42.983] [info] In production mode
[2022-10-10 06:27:42.988] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 06:27:43.038] [error] (node:20044) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
[2022-10-10 06:40:00.943] [info] In production mode
[2022-10-10 06:40:00.948] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 06:40:00.994] [error] (node:23704) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
[2022-10-10 06:40:07.122] [warn] Error: Jupyter Server process terminated before the initialization completed
at ChildProcess. (C:\JupyterLab\resources\app.asar\build\out\main\server.js:279:28)
at ChildProcess.emit (node:events:394:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
[2022-10-10 06:40:07.125] [error] (node:23704) UnhandledPromiseRejectionWarning: TypeError: Object has been destroyed
at Object.b.send (node:electron/js2c/browser_init:165:2482)
at C:\JupyterLab\resources\app.asar\build\out\asyncremote\main.js:69:24
at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2022-10-10 06:40:07.128] [error] (node:23704) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 3)
[2022-10-10 09:14:55.523] [info] In production mode
[2022-10-10 09:14:55.529] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 09:14:55.588] [error] (node:13716) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
[2022-10-10 09:20:36.597] [info] In production mode
[2022-10-10 09:20:36.603] [info] Logging to file (C:\Users\msghe\AppData\Roaming\jupyterlab-desktop\logs\main.log) at 'false' level
[2022-10-10 09:20:36.649] [error] (node:35100) DeprecationWarning: findLogPath() is deprecated and will be removed in v5.
(Use JupyterLab --trace-deprecation ... to show where the warning was created)
Render log
[2022-10-10 06:27:57.200] [debug] Starting application in workspace: "default"
[2022-10-10 06:28:04.722] [warn] Showing error: Error: Unexpected error while saving file: Untitled8.ipynb [WinError 2] The system cannot find the file specified: 'C:\Users\msghe\.~Untitled8.ipynb' -> 'C:\Users\msghe\Untitled8.ipynb'
at Function.create (http://localhost:54460/desktop-app-assets/browser.bundle.js:106471:28)
at async Drive.newUntitled (http://localhost:54460/desktop-app-assets/browser.bundle.js:102537:25)
[2022-10-10 09:15:12.503] [debug] Starting application in workspace: "default"
[2022-10-10 09:15:22.692] [warn] Showing error: Error: Unreadable Notebook: C:\Users\msghe\Untitled.ipynb TypeError("__init__() got an unexpected keyword argument 'capture_validation_error'")
at Function.create (http://localhost:49403/desktop-app-assets/browser.bundle.js:106471:28)
at async Drive.get (http://localhost:49403/desktop-app-assets/browser.bundle.js:102481:25)
[2022-10-10 09:15:22.703] [error] Failed to initialize the context with 'notebook' for Untitled.ipynb Error: Unreadable Notebook: C:\Users\msghe\Untitled.ipynb TypeError("__init__() got an unexpected keyword argument 'capture_validation_error'")
at Function.create (http://localhost:49403/desktop-app-assets/browser.bundle.js:106471:28)
at async Drive.get (http://localhost:49403/desktop-app-assets/browser.bundle.js:102481:25)
[2022-10-10 09:20:46.110] [debug] Starting application in workspace: "default"
[2022-10-10 09:21:01.954] [warn] Showing error: Error: Unexpected error while saving file: Untitled9.ipynb [WinError 2] The system cannot find the file specified: 'C:\Users\msghe\.~Untitled9.ipynb' -> 'C:\Users\msghe\Untitled9.ipynb'
at Function.create (http://localhost:49596/desktop-app-assets/browser.bundle.js:106471:28)
at async Drive.newUntitled (http://localhost:49596/desktop-app-assets/browser.bundle.js:102537:25)
An old version of python with a deprecated version of Node.js was in %APPDATA% directory. Jupyterlab was using that instead of the newly installed python.
| gharchive/issue | 2022-10-10T16:33:44 | 2025-04-01T06:44:39.225581 | {
"authors": [
"msghens"
],
"repo": "jupyterlab/jupyterlab-desktop",
"url": "https://github.com/jupyterlab/jupyterlab-desktop/issues/499",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2252030286 | 🛑 Misskey is down
In 00aff28, Misskey (https://mkacg.social) was down:
HTTP code: 502
Response time: 1141 ms
Resolved: Misskey is back up in 39d2eb6 after 23 minutes.
| gharchive/issue | 2024-04-19T03:58:09 | 2025-04-01T06:44:39.337796 | {
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/1678",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2291447133 | 🛑 NextCloud is down
In 29026de, NextCloud (https://pan.mkacg.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NextCloud is back up in cbdbac5 after 10 minutes.
| gharchive/issue | 2024-05-12T18:26:14 | 2025-04-01T06:44:39.340276 | {
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/2132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2164778365 | 🛑 qBittorrent is down
In 26c8d60, qBittorrent (https://bt.justforlxz.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: qBittorrent is back up in 7f3803c after 4 hours, 23 minutes.
| gharchive/issue | 2024-03-02T12:00:04 | 2025-04-01T06:44:39.342817 | {
"authors": [
"justforlxz"
],
"repo": "justforlxz/status.mkacg.com",
"url": "https://github.com/justforlxz/status.mkacg.com/issues/613",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
606470447 | feat(cli): add -r and -v flags for safer usage
rm will fail if given a directory without the -r flag (similar to unix rm)
rm will print the removed files/folders when given the -v flag
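The described unix-rm-like semantics can be sketched in plain Python — an illustrative sketch using pathlib, not the actual pathy implementation:

```python
import shutil
from pathlib import Path

def rm(path: str, recursive: bool = False, verbose: bool = False) -> None:
    """Remove a file, or a directory only when recursive=True (like unix rm -r)."""
    p = Path(path)
    if p.is_dir():
        if not recursive:
            # mirror unix rm: refuse directories unless -r is given
            raise IsADirectoryError(f"cannot remove {p}: is a directory (use -r)")
        shutil.rmtree(p)  # recursively delete the folder
    else:
        p.unlink()  # delete a single file
    if verbose:
        print(f"removed {p}")
```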
:tada: This PR is included in version 0.1.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2020-04-24T17:27:26 | 2025-04-01T06:44:39.345231 | {
"authors": [
"justindujardin"
],
"repo": "justindujardin/pathy",
"url": "https://github.com/justindujardin/pathy/pull/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
60630224 | Remove deprecated finder methods
cleans up specs
removed plenty of deprecation warnings
new installs will get default from formtastic form builder
merges split tests (via shared contexts) and uses default class finder there
closes #1104
closes #1106
closes #1107
needs:
[ ] wiki page describing the input finders
@justinfrench it contains the diff between master and 4.0-dev.
Are you sure there will be some releases of 3.x ? Because otherwise we could have 3.1-stable for 3.1.x and master pointing to 4.0.
@mikz I don't think there'll be a 3.2 or 3.3, but I prefer we switch master to track our 4.x development after the RCs and/or Betas (4.0.0 has been released).
@justinfrench ok! Will adjust the plan in #1197
@mikz I actually don't care too much — if you already have a nice plan, roll with it :)
@justinfrench my only point for using master is that is is the default branch and people could open PRs that would be not mergeable afterwards. We could also change the default branch to 4.0-dev temporarily.
@mikz you're right, let's just go with master
@justinfrench I think this is good to go into 4.0-dev. The spec changes are just copying part of a file to another file.
:+1:
| gharchive/pull-request | 2015-03-11T09:22:05 | 2025-04-01T06:44:39.350384 | {
"authors": [
"justinfrench",
"mikz"
],
"repo": "justinfrench/formtastic",
"url": "https://github.com/justinfrench/formtastic/pull/1139",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
348470303 | Search bar obstructing ad creative
First off, Amazing code. Thank you! All the screenshots are obstructed near the top by the search bar.
Is there a way to modify CSS to hide that search bar before taking a screenshot?
See image:
It appears the search bar is contained in div with class of "_k _1a1e". Might it be as simple as modifying its display attribute from block to hidden?
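One note: there is no `display: hidden` value in CSS — the element would need `display: none` (or `visibility: hidden`). A tiny helper that builds the JavaScript one could run through a Selenium-style `driver.execute_script(...)` before taking the screenshot; the selector and the driver call are assumptions about the scraper's setup, not its actual code:

```python
def hide_element_js(selector: str) -> str:
    """Return JS that hides the first element matching a CSS selector."""
    # Python repr() produces a quoted string that is also a valid JS literal here
    return f"document.querySelector({selector!r}).style.display = 'none';"

# hypothetical usage: driver.execute_script(hide_element_js("._k._1a1e"))
```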
Aware of this and working on a fix.
Sorry to be a pest, but I'd love to use this library on an upcoming project. If you want to describe the general approach you are considering to fix this and the CSV issue I am happy to work on it and submit a pull request. I have myself and another dev that could help.
Sorry for the delay, but I just started a new job. This should be fixed now.
| gharchive/issue | 2018-08-07T20:13:34 | 2025-04-01T06:44:39.358773 | {
"authors": [
"jakelowen",
"justinlittman"
],
"repo": "justinlittman/fb-ad-archive-scraper",
"url": "https://github.com/justinlittman/fb-ad-archive-scraper/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1268949926 | Add JustList: {cbp}
Before submission, delete this line:
THIS IS NOT A TOKEN LISTING REQUEST FORM. IF YOU DO NOT FOLLOW THE FORMAT OR MAKE A GENERIC TOKEN REQUEST YOUR ISSUE WILL BE DELETED WITHOUT COMMENT
YOUR JUSTLIST MUST FOLLOW THE JSON SPECIFICATION
https://github.com/justswaporg/justlists/blob/main/example.justlists.ts
Checklist
[ ] I understand that this is not the place to request a token listing.
[ ] I have tested that my JustList is compatible by pasting the URL into the add a list UI at justswap.org.
[ ] I understand that filing an issue or adding liquidity does not guarantee addition to the justlists website.
Please provide the following information for your token.
JustList URL must be HTTPS.
JustList URL:
JustList Name:
Link to the official homepage of the JustList manager:
Please submit the correct information; invalid information will cause the issue to be closed
| gharchive/issue | 2022-06-13T06:08:19 | 2025-04-01T06:44:39.370071 | {
"authors": [
"jz2120100058",
"zozo22022"
],
"repo": "justswaporg/justlists",
"url": "https://github.com/justswaporg/justlists/issues/2851",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1419966097 | Add JustList: {JustList name}
Before submission, delete this line:
THIS IS NOT A TOKEN LISTING REQUEST FORM. IF YOU DO NOT FOLLOW THE FORMAT OR MAKE A GENERIC TOKEN REQUEST YOUR ISSUE WILL BE DELETED WITHOUT COMMENT
YOUR JUSTLIST MUST FOLLOW THE JSON SPECIFICATION
https://github.com/justswaporg/justlists/blob/main/example.justlists.ts
Checklist
[ ] I understand that this is not the place to request a token listing.
[ ] I have tested that my JustList is compatible by pasting the URL into the add a list UI at justswap.org.
[ ] I understand that filing an issue or adding liquidity does not guarantee addition to the justlists website.
Please provide the following information for your token.
JustList URL must be HTTPS.
JustList URL:
JustList Name:
Link to the official homepage of the JustList manager:
Please submit the correct information; invalid information will cause the issue to be closed
| gharchive/issue | 2022-10-23T22:53:44 | 2025-04-01T06:44:39.373791 | {
"authors": [
"Fqc301033",
"jz2120100058"
],
"repo": "justswaporg/justlists",
"url": "https://github.com/justswaporg/justlists/issues/3159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Add JustList: {JustList name}
Before submission, delete this line:
THIS IS NOT A TOKEN LISTING REQUEST FORM. IF YOU DO NOT FOLLOW THE FORMAT OR MAKE A GENERIC TOKEN REQUEST YOUR ISSUE WILL BE DELETED WITHOUT COMMENT
YOUR JUSTLIST MUST FOLLOW THE JSON SPECIFICATION
https://github.com/justswaporg/justlists/blob/main/example.justlists.ts
Checklist
[ ] I understand that this is not the place to request a token listing.
[ ] I have tested that my JustList is compatible by pasting the URL into the add a list UI at justswap.org.
[ ] I understand that filing an issue or adding liquidity does not guarantee addition to the justlists website.
Please provide the following information for your token.
JustList URL must be HTTPS.
JustList URL:
JustList Name:
Link to the official homepage of the JustList manager:
Please submit the correct information; invalid information will cause the issue to be closed
| gharchive/issue | 2024-05-01T05:02:56 | 2025-04-01T06:44:39.377562 | {
"authors": [
"Wallet-validation",
"jz2120100058"
],
"repo": "justswaporg/justlists",
"url": "https://github.com/justswaporg/justlists/issues/4196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1901073149 | macOS 上无法显示 Snapped Content
系统信息
macOS 14.0 Developer Beta 版 (23A5301h)[非最新版本]
问题描述
单击 Snap 后一直处于 Loading 状态
没有macos环境,暂时无法调试。
| gharchive/issue | 2023-09-18T14:29:14 | 2025-04-01T06:44:39.381387 | {
"authors": [
"Xicrosoft",
"juzeon"
],
"repo": "juzeon/SydneyQt",
"url": "https://github.com/juzeon/SydneyQt/issues/102",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1848909229 | UnauthorizedRequest: Cannot retrieve user status.
I'm running into the problem shown in the title.
Have you set up cookies.json according to the Usage section of the README file?
And also try following the instructions in the FAQ section.
Hello, your message has been received. I will reply to you as soon as possible.
The problem has been solved, thank you.
| gharchive/issue | 2023-08-14T03:03:52 | 2025-04-01T06:44:39.382860 | {
"authors": [
"jianruifu",
"juzeon"
],
"repo": "juzeon/SydneyQt",
"url": "https://github.com/juzeon/SydneyQt/issues/81",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1909010347 | Where to do the coin change calculation?
Hi,
I noticed you receive any amount and return the change for your deposits. (move code block)
Have you seen the talk about PTBs by Alex from Mysten?
He claims that best practice is to handle the exact amount calculation and coin splitting in the front end in the previous transaction of the PTB.
Alex: "It simplifies the code, more visibility and people don't have to worry the move contract will steal the coin."
What's your opinion? You've been part of that discussion back in the day?
,KR
PS: Thx again for making your code public :).
Thanks for the feedback!
Yes, it is best practice to use Programmable Transaction Blocks, and in fact the Got Beef front-end uses PTBs to send the exact Coin balance to the contract: https://github.com/juzybits/polymedia-gotbeef/blob/main/web/src/js/Fund.tsx#L75-L99
The reason the contract can accept larger balances and handle change is because it was written several months before PTBs were added to Sui (August 2022 vs spring 2023). When PTBs came out, I updated the front-end to use them, but left the contract untouched as it had already been well tested and there was no real drawback in allowing it to handle change (even when in practice the front-end always sends the exact amount).
Btw Got Beef was my 1st ever Sui project (and my 2nd React project) so there may be some things that could have been done better. Always happy to get feedback or pull requests.
Thanks for the feedback!
Hi,
Great to hear that you agree with the new best practices. It makes my life easier by not having to think through all the pro and con arguments when people with more experience agree on them.
,KR
| gharchive/issue | 2023-09-22T14:24:14 | 2025-04-01T06:44:39.387476 | {
"authors": [
"georgescharlesbrain",
"juzybits"
],
"repo": "juzybits/polymedia-gotbeef",
"url": "https://github.com/juzybits/polymedia-gotbeef/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2541262920 | 🛑 DISPAPIRUS is down
In 81d9c35, DISPAPIRUS (https://dispapirus.com) was down:
HTTP code: 403
Response time: 146 ms
Resolved: DISPAPIRUS is back up in 9180a00 after 30 minutes.
| gharchive/issue | 2024-09-22T21:21:46 | 2025-04-01T06:44:39.399456 | {
"authors": [
"jveyes"
],
"repo": "jveyes/upptime",
"url": "https://github.com/jveyes/upptime/issues/1382",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1572230707 | 🛑 SALUD SOCIAL is down
In f3aed0a, SALUD SOCIAL (https://saludsocial.tonoip.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SALUD SOCIAL is back up in 4c269a2.
| gharchive/issue | 2023-02-06T09:36:25 | 2025-04-01T06:44:39.401836 | {
"authors": [
"jveyes"
],
"repo": "jveyes/upptime",
"url": "https://github.com/jveyes/upptime/issues/432",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
412683166 | tomato-crmdp bug
There is a bug in tomato-crmdp. While viewing TensorBoard, metrics are not being tracked and evaluation periods are not happening. Unsure if this is exclusive to tomato-crmdp or applies to all envs from ai safety gridworlds, so this may be better off in the safe-grid-gym repo.
It seems to me that the problem is in the rollout gathering of the ppo-agent. What is logged to tensorboard in the track_metrics function uses history["episode"] on the x-axis. However, during rollout generation the episode number is not incremented, instead history["t"] is being incremented.
What I think you are seeing here is the vertical lines corresponding to the rollouts that are generated for a fixed value of history["episode"]. I assume we do not see this for the toy environments, because they are deterministic and the metrics will be the same at a given x-value.
To test my hypothesis, I set the number of rollouts to 1 using -r 1. This makes my tensorboard look much nicer. Also evaluation runs perfectly fine for me.
In terms of a fix, I think we have to decide what we really want to plot here. Either we plot an individual point for each rollout, which might make sense to have the x-axis correspond to sample complexity. Or we plot aggregated values over the rollouts, i.e. mean and std or something like that. The argument for this is, that we then make sure that between two points some learning happens.
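The second option — plotting aggregated values over the rollouts at each `history["episode"]` — could look roughly like this (illustrative names, not the repo's actual API):

```python
import statistics

def aggregate_rollouts(returns):
    """Collapse per-rollout metrics at one episode into mean/std for logging."""
    return {
        "mean": statistics.mean(returns),
        # population std, so a single rollout cleanly yields 0.0 instead of raising
        "std": statistics.pstdev(returns),
    }
```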
| gharchive/issue | 2019-02-20T23:49:57 | 2025-04-01T06:44:39.410974 | {
"authors": [
"david-lindner",
"jvmancuso"
],
"repo": "jvmancuso/safe-grid-agents",
"url": "https://github.com/jvmancuso/safe-grid-agents/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1505567368 | Use palette based encoding and decoding
I was decoding the whole image but that's unnecessary. I just need the palette and the indices into the palette.
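The idea — storing one shared palette plus per-pixel indices instead of a full color per pixel — can be sketched generically (this is an illustration, not the project's actual implementation):

```python
def palettize(pixels):
    """Map a sequence of RGB tuples to (palette, indices-into-palette)."""
    palette, index_of, indices = [], {}, []
    for px in pixels:
        if px not in index_of:
            # first time we see this color: append it to the palette
            index_of[px] = len(palette)
            palette.append(px)
        indices.append(index_of[px])
    return palette, indices

def depalettize(palette, indices):
    """Inverse: rebuild the pixel sequence from palette + indices."""
    return [palette[i] for i in indices]
```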
gif was addressed by #37
| gharchive/issue | 2022-12-21T01:26:47 | 2025-04-01T06:44:39.493495 | {
"authors": [
"jwoos"
],
"repo": "jwoos/rainbowgif",
"url": "https://github.com/jwoos/rainbowgif/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1418820569 | Bugfix - Raise Descriptive Error for hmac_secret empty and OpenSSL 3.0 Issue
Fixes https://github.com/jwt/ruby-jwt/issues/526.
Changes error when utilizing empty hmac_secret from cryptic:
irb(main):017:0> JWT::Algos::Hmac.sign('HS256','test','')
...
OpenSSL::HMACError (EVP_PKEY_new_mac_key: malloc failure)
to clearer error:
irb(main):017:0> JWT::Algos::Hmac.sign('HS256','test','')
...
JWT::DecodeError (OpenSSL 3.0 does not support nil or empty hmac_secret)
Thank you for your effort on this. Highly appreciated!
| gharchive/pull-request | 2022-10-21T20:23:40 | 2025-04-01T06:44:39.495610 | {
"authors": [
"anakinj",
"jonmchan"
],
"repo": "jwt/ruby-jwt",
"url": "https://github.com/jwt/ruby-jwt/pull/530",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
248233957 | Tooltip: Unknown prop text on tag.
Just copy pasting your documentation example:
<Tooltip text='Hello'>
<Text>
Hover Me
</Text>
</Tooltip>
Warning: Unknown prop `text` on <div> tag. Remove this prop from the element. For details, see https://fb.me/react-unknown-prop
#266
| gharchive/issue | 2017-08-06T08:41:11 | 2025-04-01T06:44:39.497260 | {
"authors": [
"Kikobeats",
"jxnblk"
],
"repo": "jxnblk/rebass",
"url": "https://github.com/jxnblk/rebass/issues/298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2255250517 | Feature: Add IBM watsonx support
Describe the solution you'd like
I would like to see IBM watsonx API support.
Describe alternatives you've considered
From initial research, it does not look like there is any tools to give OpenAI API support to watsonx.
Additional context
I would (likely) be able to implement this feature.
Related: https://github.com/BerriAI/litellm/issues/361#issuecomment-2050971169
| gharchive/issue | 2024-04-21T20:31:23 | 2025-04-01T06:44:39.499351 | {
"authors": [
"h0rv"
],
"repo": "jxnl/instructor",
"url": "https://github.com/jxnl/instructor/issues/619",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
52125997 | Extract htmllint into standalone module with CLI interface
There should be a standalone node module that does the delegation to the jar, with a programmatic API that this grunt plugin can use and a simple CLI interface for other integrations like Make files or npm-scripts.
If anyone has a need for this, a pull request is welcome. Otherwise its probably not going to happen.
| gharchive/issue | 2014-12-16T15:24:57 | 2025-04-01T06:44:39.513734 | {
"authors": [
"jzaefferer"
],
"repo": "jzaefferer/grunt-html",
"url": "https://github.com/jzaefferer/grunt-html/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1977069877 | Releasing TinyLlama T1.5
I wanted to know when the TinyLlama 1.5T checkpoint will be released. The README.md says 2023-10-31, and today (November 4, 2023) is 4 days after October 31, 2023.
https://github.com/jzhang38/TinyLlama/issues/81
I hope that nothing untoward happened again.
| gharchive/issue | 2023-11-04T00:14:48 | 2025-04-01T06:44:39.522380 | {
"authors": [
"Chen0x00",
"binarycrayon",
"erfanzar"
],
"repo": "jzhang38/TinyLlama",
"url": "https://github.com/jzhang38/TinyLlama/issues/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2053406321 | Move completed books checkmark on left, line up titles
this fixes #57 by doing the following:
adds a class book-title to all book title links, so they can all be targeted easily with css
adds an empty placeholder in front of all a.book-title links so the indention is consistent
shows tick.png in the placeholder space on all a.book-title.completed_book links
adds a selector with good-but-not-perfect browser support* that removes indentation when the list contains zero completed books. On unsupported browsers the indent will remain even if there are no completed books.
the :has() selector works in every modern browser, but firefox only added support in the most recent version (Dec, '23)
Looks great!
| gharchive/pull-request | 2023-12-22T05:32:40 | 2025-04-01T06:44:39.524904 | {
"authors": [
"jzohrab",
"robby1066"
],
"repo": "jzohrab/lute-v3",
"url": "https://github.com/jzohrab/lute-v3/pull/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2665360318 | Add support for managing container volumes with volumesFrom
This pull request introduces several changes to the GenericContainer class and related files to support volume management in containers. The most important changes include adding new properties and methods for handling volumes, updating the start method to use these volumes, and adding corresponding tests.
Volume Management Enhancements:
src/Containers/GenericContainer.php: Added new properties static $VOLUMES_FROM and private $volumesFrom to define and manage container volumes.
src/Containers/GenericContainer.php: Implemented the withVolumesFrom method to add volumes to the container configuration.
src/Containers/GenericContainer.php: Added the volumesFrom method to retrieve and validate volumes for the container.
src/Containers/GenericContainer.php: Updated the start method to include volumes in the container configuration. [1] [2] [3]
Test Enhancements:
tests/Unit/Containers/GenericContainerTest.php: Added a new test testWithVolumesFrom to verify the functionality of volumes in containers.
Minor Improvements:
src/Containers/GenericContainer.php: Updated the mounts method to improve error messages by removing unnecessary mount details.
/review
| gharchive/pull-request | 2024-11-17T06:43:07 | 2025-04-01T06:44:39.537538 | {
"authors": [
"k-kinzal"
],
"repo": "k-kinzal/testcontainers-php",
"url": "https://github.com/k-kinzal/testcontainers-php/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2065692345 | BGP version 11 is not compatible with Netbox version 3.7
NetBox version
3.7.0
Describe the bug
Hi,
BGP version 11 is not compatible with Netbox version 3.7.
Adding to this, would it not be handy to not include a max_version?
Seems like with every major update it spawns a ticket like this which only creates a PR with a single numeric change?
Or is it simply to prevent any compatibility issues?
Closing as fixed:
BGP plugin v0.12.x has been released and is compatible to NetBox v3.7.x.
BGP plugin v0.13.2 is compatible to NetBox v4.0.x.
| gharchive/issue | 2024-01-04T13:47:34 | 2025-04-01T06:44:39.544582 | {
"authors": [
"129828",
"dominik-hbs",
"jeffgdotorg"
],
"repo": "k01ek/netbox-bgp",
"url": "https://github.com/k01ek/netbox-bgp/issues/171",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
199496423 | Add show cursor, hide cursor escape sequences
| Sequence | Code | Description | Behavior |
| --- | --- | --- | --- |
| ESC[?25h | DECTCEM | Text Cursor Enable Mode Show | Show the cursor |
| ESC[?25l | DECTCEM | Text Cursor Enable Mode Hide | Hide the cursor |
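These DECTCEM sequences are easy to try out from any language; for example, in plain Python (unrelated to the Go implementation):

```python
SHOW_CURSOR = "\x1b[?25h"  # ESC[?25h — DECTCEM, show the cursor
HIDE_CURSOR = "\x1b[?25l"  # ESC[?25l — DECTCEM, hide the cursor

def hidden_cursor(body: str) -> str:
    """Wrap terminal output so the cursor is hidden while it prints."""
    return HIDE_CURSOR + body + SHOW_CURSOR
```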
Thank you :+1:
| gharchive/pull-request | 2017-01-09T08:09:58 | 2025-04-01T06:44:39.546849 | {
"authors": [
"k0kubun",
"malashin"
],
"repo": "k0kubun/go-ansi",
"url": "https://github.com/k0kubun/go-ansi/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2006222676 | Feature: Allow set default job image via env var
Hello,
I would like to use Helm Controller in an enterprise environment without open internet access. All Docker images used on the Kubernetes cluster come from our internal image registry and there is no way to deploy images from other sources. Currently, you allow the job image to be set using the spec.jobImage field in the HelmController resource, but in my case this forces us to overwrite it in every deployed resource of this type.
https://github.com/k3s-io/helm-controller/blob/32c2e08fa0265cbc36680d50ca766c5a73cecd85/pkg/controllers/chart/chart.go#L56
Please allow me to change the default image for Job by setting an environment variable on the HelmController container. Thank you in advance for this possibility.
Regards
Piotr Minkina
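The requested precedence (per-resource spec.jobImage, then an env-var override, then the built-in default) is a common pattern; a minimal sketch, where the env var name and the default tag are hypothetical, not the controller's actual values:

```python
import os

# illustrative default, not the controller's pinned tag
DEFAULT_JOB_IMAGE = "rancher/klipper-helm:v0.8.2"

def job_image(spec_image=None):
    """spec.jobImage wins, then the env-var override, then the compiled-in default."""
    return spec_image or os.environ.get("HELM_JOB_IMAGE") or DEFAULT_JOB_IMAGE
```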
PR welcome!
@brandond done :)
@brandond The Docker image for version v0.15.5 has not yet been built. Shouldn't this happen automatically when the new version is released?
Yes, but there was some maintenance to our CI infra that appears to have removed the GH API token. I've opened an issue and will re-run CI when it is resolved.
| gharchive/issue | 2023-11-22T12:19:34 | 2025-04-01T06:44:39.564419 | {
"authors": [
"brandond",
"piotrminkina"
],
"repo": "k3s-io/helm-controller",
"url": "https://github.com/k3s-io/helm-controller/issues/213",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
545455361 | Can't start agent: Failed to connect to proxy too many colons in address
Version:
k3s version v1.0.1 (e94a3c60)
Describe the bug
k3s agent fails to start with an error in ws proxy
To Reproduce
run sudo k3s agent --server https://server:6443 --token mytoken
Expected behavior
k3s starts and joins the master node
Actual behavior
the agent fails to join the master and shows the error:
$ sudo k3s agent --server https://server:6443 --token token
INFO[2020-01-05T19:00:29.662843946Z] Starting k3s agent v1.0.1 (e94a3c60)
WARN[2020-01-05T19:00:29.664065519Z] Failed to find cpuset cgroup, you may need to add "cgroup_enable=cpuset" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)
INFO[2020-01-05T19:00:29.664333244Z] module overlay was already loaded
WARN[2020-01-05T19:00:29.693394482Z] failed to start nf_conntrack module
WARN[2020-01-05T19:00:29.722644463Z] failed to start br_netfilter module
WARN[2020-01-05T19:00:29.723234986Z] failed to write value 1 at /proc/sys/net/ipv6/conf/all/forwarding: open /proc/sys/net/ipv6/conf/all/forwarding: no such file or directory
WARN[2020-01-05T19:00:29.723454442Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory
WARN[2020-01-05T19:00:29.723656460Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-ip6tables: open /proc/sys/net/bridge/bridge-nf-call-ip6tables: no such file or directory
INFO[2020-01-05T19:00:29.725222696Z] Running load balancer 127.0.0.1:37713 -> [2600:1700:65a0:b3c0::16:6443 server:6443]
INFO[2020-01-05T19:00:31.246795196Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2020-01-05T19:00:31.247721523Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2020-01-05T19:00:31.468078384Z] Connecting to proxy url="wss://2600:1700:65a0:b3c0::16:6443/v1-k3s/connect"
ERRO[2020-01-05T19:00:31.468589581Z] Failed to connect to proxy error="dial tcp: address 2600:1700:65a0:b3c0::16:6443: too many colons in address"
ERRO[2020-01-05T19:00:31.468783996Z] Remotedialer proxy error error="dial tcp: address 2600:1700:65a0:b3c0::16:6443: too many colons in address"
Additional context
I am attempting to run this on a Pynq-Z1 which runs a modified version of Ubuntu bionic
$ cat /proc/cmdline
root=/dev/mmcblk0p2 rw earlyprintk rootfstype=ext4 rootwait devtmpfs.mount=1 uio_pdrv_genirq.of_id="generic-uio" clk_ignore_unused cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory ipv6.disable=1
Probably related to rancher/k3s#1191, but the workaround there doesn't work for me.
I managed to work around this by telling the master to advertise its local IPv4 address.
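For anyone hitting the same thing: the workaround boils down to passing the server's IPv4 address explicitly. Roughly like this — the flag names are from `k3s server --help`, and the address is a placeholder for your node's IPv4:

```sh
# Sketch: make the server advertise an IPv4 address instead of
# resolving to an IPv6 literal. Replace 10.0.0.1 with your node's IP.
k3s server \
  --advertise-address 10.0.0.1 \
  --node-ip 10.0.0.1
```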
Closing this as Out of Date; https://github.com/k3s-io/k3s/pull/1198 fixed this issue. v1.19 is also EOL.
| gharchive/issue | 2020-01-05T19:16:20 | 2025-04-01T06:44:39.569363 | {
"authors": [
"JorisBolsens",
"dereknola"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/1268",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1517156012 | Getting container logs has TLS issues, reconfiguration breaks etcd, situation unclear
Environmental Info
K3s Version: k3s version v1.23.15+k3s1 (50cab3b3) (on control-plane 0 and v1.23.14+k3s1 on other control-plane 1+2)
Update: I could already reproduce the container-logs issue on v1.22 control-plane clusters.
Node(s) CPU architecture, OS, and Version:
Linux core-control-plane-0 5.15.0-56-generic #62-Ubuntu SMP Tue Nov 22 19:54:14 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
3 Servers
11 Agents
Each server has private and public IPv4 and IPv6. It's Hetzner Cloud Servers and k3s is installed with cloud-init and curl -sfL https://get.k3s.io with the "right" parameters.
The Bug/Story
I'm not sure whether it's a single bug, or where exactly it is hiding… but here's the story so far:
I have a weird situation which involves TLS certificates, node IPs, and etcd.
This setup had been working pretty well for some time, but now I noticed some strange behaviour:
The three control plane servers have private IPs (10.0.1.1, 10.0.1.2, 10.0.1.3) next to the public ipv4s and ipv6s.
Getting container logs from control-plane servers leads to TLS issue
Yesterday, while upgrading from v1.22 to v1.23.15, I noticed some strange behavior when using kubectl to get container logs from pods running on control-plane servers. This issue doesn't exist on the agents (I updated one agent to 1.23 to test it):
Failed to load logs: Get "https://10.0.1.1:10250/containerLogs/kube-system/hcloud-csi-node-7vcrq/csi-node-driver-registrar?tailLines=502&timestamps=true": x509: certificate is valid for 127.0.0.1, <public-ipv4>, <public-ipv6>, not 10.0.1.1
So I dug around and found that I only pass --node-ip to the agents on startup, but not to the control-plane servers. Maybe that's why the private IP never ends up in their certificates?
Running openssl s_client -connect 10.0.1.1:10250 < /dev/null | openssl x509 -text shows me that the SANs are indeed only DNS:core-control-plane-0, DNS:localhost, IP Address:127.0.0.1, IP Address:<public-ipv4>, IP Address:2A01:4F8:1C1C:E8D2:0:0:0:1
So I thought… Aha, let’s add --node-ip to the startup of the control-plane servers. Then the next strange thing happened:
Adding node-ip to an existing control-plane server makes it unable to join etcd again
So I added --node-ip 10.0.1.1 to the command-line args (systemctl edit k3s.service --full and restarting it).
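Concretely, the change was just appending the flag to ExecStart in the unit file. Something like the following excerpt — the binary path is the default install location, everything else is elided:

```ini
# /etc/systemd/system/k3s.service (excerpt, after `systemctl edit k3s.service --full`)
[Service]
ExecStart=/usr/local/bin/k3s server \
    --node-ip 10.0.1.1
```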
On control-plane-1 I can now see this:
Jan 03 07:32:16 core-control-plane-1 k3s[1679]: {"level":"warn","ts":"2023-01-03T07:32:16.011Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"<public-ipv4>:55600","server-name":"","ip-addresses":["10.0.1.1","127.0.0.1","::1","10.0.1.1","10.2.0.1"],"dns-names":["localhost","core-control-plane-0"],"error":"tls: \"<$PUBLIC_IPV4\" does not match any of DNSNames [\"localhost\" \"core-control-plane-0\"] (lookup core-control-plane-0: Try again)"}
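To check which SANs the regenerated etcd peer certificate actually contains, it can be inspected directly on the node. Assuming the default k3s data dir (the cert path also shows up in the startup logs):

```sh
openssl x509 -noout -text \
  -in /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt \
  | grep -A1 'Subject Alternative Name'
```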
On the control-plane-0 (where I added the node-ip) I see those logs:
Jan 03 07:33:52 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:33:52.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 is starting a new election at term 46"}
Jan 03 07:33:52 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:33:52.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 became pre-candidate at term 46"}
Jan 03 07:33:52 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:33:52.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 received MsgPreVoteResp from 6f1cea21cf164812 at term 46"}
Jan 03 07:33:52 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:33:52.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 [logterm: 46, index: 337443916] sent MsgPreVote request to 378283a013db6ca0 at term 46"}
Jan 03 07:33:52 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:33:52.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 [logterm: 46, index: 337443916] sent MsgPreVote request to 495f61aec428708b at term 46"}
Jan 03 07:33:53 core-control-plane-0 k3s[7264]: time="2023-01-03T07:33:53Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 07:33:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:33:56.883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:33:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:33:56.883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:33:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:33:56.884Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:33:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:33:56.884Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:33:56 core-control-plane-0 k3s[7264]: time="2023-01-03T07:33:56Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 07:33:58 core-control-plane-0 k3s[7264]: time="2023-01-03T07:33:58Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 07:34:01 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:34:01.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 is starting a new election at term 46"}
(The lines above repeat multiple times.)
Full startup logs until repeating error
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="Starting k3s v1.23.15+k3s1 (50cab3b3)"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="Managed etcd cluster bootstrap already complete and initialized"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="Reconciling bootstrap data between datastore and disk"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="Successfully reconciled with datastore"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 07:31:36 +0000 UTC"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 07:31:36 +0000 UTC"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 07:31:36 +0000 UTC"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:36Z" level=info msg="Starting etcd for existing cluster member"
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.660Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.1.1:2380","https://127.0.0.1:2380"]}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.660Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-k>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.660Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://10.0.1.1:2379","https://127.0.0.1:2379"]}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.660Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.17.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-av>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.676Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"15.081007ms"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.747Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":337442520,"snapshot-size":"58 kB"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.747Z","caller":"etcdserver/server.go:521","msg":"recovered v3 backend from snapshot","backend-size-bytes":54046720,"backend-size":"54 MB","backend-size-in-use-bytes":52322304,"backend-size-in-use":"52 MB"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.792Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"d7e48328fbade5a9","local-member-id":"6f1cea21cf164812","commit-index":337443915}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 switched to configuration voters=(3999904142609575072 5287051890799440011 8006531668487063570)"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 became follower at term 46"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6f1cea21cf164812 [peers: [378283a013db6ca0,495f61aec428708b,6f1cea21cf164812], term: 46, commit: 337443915, applied: 337442520, lastindex: 337443916, lastterm: 4>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"d7e48328fbade5a9","local-member-id":"6f1cea21cf164812","recovered-remote-peer-id":"378283a013db6ca0","recovered-remote-peer-urls":["http>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"d7e48328fbade5a9","local-member-id":"6f1cea21cf164812","recovered-remote-peer-id":"495f61aec428708b","recovered-remote-peer-urls":["http>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"d7e48328fbade5a9","local-member-id":"6f1cea21cf164812","recovered-remote-peer-id":"6f1cea21cf164812","recovered-remote-peer-urls":["http>
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.793Z","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:36.794Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.795Z","caller":"mvcc/kvstore.go:345","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":305453105}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.846Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":305456694}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.848Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.849Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.849Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.850Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"378283a013db6ca0","remote-peer-urls":["https://23.88.123.167:2380"]}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.851Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.852Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.853Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.853Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.854Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.855Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b","remote-peer-urls":["https://167.235.19.84:2380"]}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.855Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"6f1cea21cf164812","timeout":"15s"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.854Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:36 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:36.854Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6f1cea21cf164812","remote-peer-id":"495f61aec428708b"}
Jan 03 07:31:41 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:41.852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"378283a013db6ca0","rtt":"0s"}
Jan 03 07:31:41 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:41.852Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"378283a013db6ca0","rtt":"0s"}
Jan 03 07:31:41 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:41.855Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"495f61aec428708b","rtt":"0s"}
Jan 03 07:31:41 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:41.855Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"495f61aec428708b","rtt":"0s"}
Jan 03 07:31:41 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:41.889Z","caller":"etcdserver/corrupt.go:289","msg":"failed hash kv request","local-member-id":"6f1cea21cf164812","requested-revision":305456694,"remote-peer-endpoint":"https://23.88.123.167:2380","error":"Get \"https://23.88.123.167:2380>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:46.854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:46.854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:46.856Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:46.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:46.904Z","caller":"etcdserver/corrupt.go:289","msg":"failed hash kv request","local-member-id":"6f1cea21cf164812","requested-revision":305456694,"remote-peer-endpoint":"https://167.235.19.84:2380","error":"Get \"https://167.235.19.84:2380>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.904Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"6f1cea21cf164812"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.904Z","caller":"etcdserver/server.go:842","msg":"starting etcd server","local-member-id":"6f1cea21cf164812","local-server-version":"3.5.4","cluster-id":"d7e48328fbade5a9","cluster-version":"3.5"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.906Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.908Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, client-cert=, client-key=, tru>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.909Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6f1cea21cf164812","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://10.0.1.1:2380","https://127.0.0.1:2380>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Running kube-apiserver --advertise-address=10.0.1.1 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=1>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.913Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.1.1:2380"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.914Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.1.1:2380"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.914Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"127.0.0.1:2380"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.914Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"127.0.0.1:2380"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:46.914Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-ad>
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Tunnel server egress proxy mode: agent"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="To join server node to cluster: k3s server -s https://<public-ipv4>:6443 -t ${SERVER_NODE_TOKEN}"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="To join agent node to cluster: k3s agent -s https://<public-ipv4>:6443 -t ${AGENT_NODE_TOKEN}"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="Run: k3s kubectl"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="certificate CN=core-control-plane-0 signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 07:31:46 +0000 UTC"
Jan 03 07:31:46 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:46Z" level=info msg="certificate CN=system:node:core-control-plane-0,O=system:nodes signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 07:31:46 +0000 UTC"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Module overlay was already loaded"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Module nf_conntrack was already loaded"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Module br_netfilter was already loaded"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Module iptable_nat was already loaded"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Module iptable_filter was already loaded"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=warning msg="The wireguard backend is deprecated and will be removed in k3s v1.26, please switch to wireguard-native. Check our docs for information about how to migrate."
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Jan 03 07:31:47 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:47Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Jan 03 07:31:48 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:48Z" level=info msg="Containerd is now running"
Jan 03 07:31:48 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:48Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --c>
Jan 03 07:31:48 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:48Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jan 03 07:31:48 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:48Z" level=info msg="Handling backend connection request [core-control-plane-0]"
Jan 03 07:31:48 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:48Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-storage-0-trusty-piranha]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-control-plane-2]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-storage-2-capital-jackass]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-small-2-fun-lacewing]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-medium-0-united-eft]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-small-0-mint-sawfly]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-storage-1-major-bedbug]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-medium-3-tight-halibut]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-small-1-allowed-treefrog]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-control-plane-1]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-small-3-engaging-katydid]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-medium-2-becoming-hare]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Handling backend connection request [core-medium-1-renewing-hookworm]"
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:51.854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:51.854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:51.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:51.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:51 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:51Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:52 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:52Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
Jan 03 07:31:53 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:53Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 07:31:55 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:55.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 is starting a new election at term 46"}
Jan 03 07:31:55 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:55.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 became pre-candidate at term 46"}
Jan 03 07:31:55 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:55.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 received MsgPreVoteResp from 6f1cea21cf164812 at term 46"}
Jan 03 07:31:55 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:55.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 [logterm: 46, index: 337443916] sent MsgPreVote request to 378283a013db6ca0 at term 46"}
Jan 03 07:31:55 core-control-plane-0 k3s[7264]: {"level":"info","ts":"2023-01-03T07:31:55.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f1cea21cf164812 [logterm: 46, index: 337443916] sent MsgPreVote request to 495f61aec428708b at term 46"}
Jan 03 07:31:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:56.855Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:56.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"378283a013db6ca0","rtt":"0s","error":"EOF"}
Jan 03 07:31:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:56.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:56 core-control-plane-0 k3s[7264]: {"level":"warn","ts":"2023-01-03T07:31:56.857Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"495f61aec428708b","rtt":"0s","error":"EOF"}
Jan 03 07:31:56 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:56Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 07:31:58 core-control-plane-0 k3s[7264]: time="2023-01-03T07:31:58Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Removing --node-ip from the k3s args and restarting makes it join the etcd cluster again.
So that's my first big question: Is there a migration path to using --node-ip on servers?
--tls-san doesn't help
I also tried adding --tls-san (pointing at the private IP) to that server's startup command, but that didn't help; I still get the same TLS rejection logs. Maybe --tls-san is only evaluated on cluster-init?
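For reference, this is roughly the start command I was testing (a sketch; 10.0.1.1 stands in for the node's actual private IPv4, and the real systemd unit carries more flags):

```shell
# Sketch of the server start command with the extra SAN (placeholder values).
# 10.0.1.1 is this node's private IPv4 on the internal network.
k3s server \
  --node-ip 10.0.1.1 \
  --tls-san 10.0.1.1
```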
Recreation instead of Reconfiguration fails differently
I also tried to add control-plane-0 as a fresh member to the cluster: deleting the server and creating it again, this time already with --node-ip and also --advertise-address (both pointing to the private IPv4).
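Roughly what the fresh join looked like (a sketch; the join URL and token are placeholders for my actual values):

```shell
# Recreating control-plane-0 as a fresh member (placeholder values).
k3s server \
  --server "https://<existing-server>:6443" \
  --token "${SERVER_NODE_TOKEN}" \
  --node-ip 10.0.1.1 \
  --advertise-address 10.0.1.1
```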
Full log of control-plane-0
Jan 03 08:15:24 core-control-plane-0 systemd[1]: Starting Lightweight Kubernetes...
Jan 03 08:15:24 core-control-plane-0 sh[1769]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Jan 03 08:15:24 core-control-plane-0 sh[1770]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Starting k3s v1.23.15+k3s1 (50cab3b3)"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Managed etcd cluster not yet initialized"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=warning msg="Cluster CA certificate is not trusted by the host CA bundle, but the token does not include a CA hash. Use the full token from the server's node-token file to enable Cluster CA validation."
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Reconciling bootstrap data between datastore and disk"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Running kube-apiserver --advertise-address=10.0.1.1 --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.2.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Tunnel server egress proxy mode: agent"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --profiling=false --secure-port=10259"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.1.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.2.0.0/16 --use-service-account-credentials=true"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="To join server node to cluster: k3s server -s https://<public-ipv4>:6443 -t ${SERVER_NODE_TOKEN}"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="To join agent node to cluster: k3s agent -s https://<public-ipv4>:6443 -t ${AGENT_NODE_TOKEN}"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Run: k3s kubectl"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initialization or first client request"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.0.1.1:10.0.1.1 listener.cattle.io/cn-10.2.0.1:10.2.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-core-control-plane-0:core-control-plane-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=B55A7BB8500726B769372D48CFE43CA048E90E22]"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Active TLS secret / (ver=) (count 10): map[listener.cattle.io/cn-10.0.1.1:10.0.1.1 listener.cattle.io/cn-10.2.0.1:10.2.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-core-control-plane-0:core-control-plane-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=A90D7F4E8F04285FFF7B319CCB0AB35CDEF699B9]"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=core-control-plane-0 signed by CN=k3s-server-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="certificate CN=system:node:core-control-plane-0,O=system:nodes signed by CN=k3s-client-ca@1633418218: notBefore=2021-10-05 07:16:58 +0000 UTC notAfter=2024-01-03 08:15:25 +0000 UTC"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Module overlay was already loaded"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Module br_netfilter was already loaded"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Set sysctl 'net/ipv4/conf/all/forwarding' to 1"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 131072"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=warning msg="The wireguard backend is deprecated and will be removed in k3s v1.26, please switch to wireguard-native. Check our docs for information about how to migrate."
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Jan 03 08:15:25 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:25Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Containerd is now running"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.2.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/8337c3c448159e502751ad88e10cd3bb2d8572cf28d9351bef216d68aaed3413/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=core-control-plane-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-ip=10.0.1.1 --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Handling backend connection request [core-control-plane-0]"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Adding member core-control-plane-0-898acd78=https://10.0.1.1:2380 to etcd cluster [core-control-plane-1-11456e80=https://23.88.123.167:2380 core-control-plane-2-9d063b64=https://167.235.19.84:2380]"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:26Z" level=info msg="Starting etcd to join cluster with members [core-control-plane-1-11456e80=https://23.88.123.167:2380 core-control-plane-2-9d063b64=https://167.235.19.84:2380 core-control-plane-0-898acd78=https://10.0.1.1:2380]"
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:26.796Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.1.1:2380","https://127.0.0.1:2380"]}
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:26.796Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, client-cert=, client-key=, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:26.797Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://10.0.1.1:2379","https://127.0.0.1:2379"]}
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:26.798Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.17.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"core-control-plane-0-898acd78","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["https://10.0.1.1:2380","https://127.0.0.1:2380"],"advertise-client-urls":["https://10.0.1.1:2379"],"listen-client-urls":["https://10.0.1.1:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"core-control-plane-0-898acd78=https://10.0.1.1:2380,core-control-plane-1-11456e80=https://23.88.123.167:2380,core-control-plane-2-9d063b64=https://167.235.19.84:2380","initial-cluster-state":"existing","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
Jan 03 08:15:26 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:26.803Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"4.819977ms"}
Jan 03 08:15:30 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:30Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:31Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:31.816Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39814","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:31.819Z","caller":"etcdserver/cluster_util.go:79","msg":"failed to get cluster response","address":"https://167.235.19.84:2380/members","error":"Get \"https://167.235.19.84:2380/members\": EOF"}
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:31.832Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39812","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:31.832Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39798","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:31 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:31.951Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39826","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:32 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:32.073Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39828","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:32 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:32.083Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39838","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:35 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:35.020Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39840","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:35 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:35Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Jan 03 08:15:35 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:35.822Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39850","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:35 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:35.834Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39856","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:36Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:36.840Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:39858","server-name":"","ip-addresses":["127.0.0.1","::1","23.88.123.167","2a01:4f8:1c17:c664::1","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: {"level":"warn","ts":"2023-01-03T08:15:36.840Z","caller":"etcdserver/cluster_util.go:79","msg":"failed to get cluster response","address":"https://23.88.123.167:2380/members","error":"Get \"https://23.88.123.167:2380/members\": EOF"}
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:36.842Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"core-control-plane-0-898acd78","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.0.1.1:2379"]}
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: {"level":"info","ts":"2023-01-03T08:15:36.843Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"core-control-plane-0-898acd78","data-dir":"/var/lib/rancher/k3s/server/db/etcd","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://10.0.1.1:2379"]}
Jan 03 08:15:36 core-control-plane-0 k3s[1773]: time="2023-01-03T08:15:36Z" level=fatal msg="ETCD join failed: cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs"
The interesting part of the control-plane-1 logs looks like this:
Jan 03 08:23:39 core-control-plane-1 k3s[1679]: {"level":"warn","ts":"2023-01-03T08:23:39.834Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"<public-ipv4(of-cp0)>:41662","server-name":"","ip-addresses":["10.0.1.1","127.0.0.1","::1","10.0.1.1","10.2.0.1"],"dns-names":["localhost","core-control-plane-0"],"error":"tls: \"<public-ipv4(of-cp0)>\" does not match any of DNSNames [\"localhost\" \"core-control-plane-0\"] (lookup core-control-plane-0: Try again)"}
So I wonder why control-plane-0 registers in etcd with its public IPv4. I found this comment which says it should not be the case: https://github.com/k3s-io/k3s/issues/2533#issuecomment-748387146
So I guess the setup somehow thinks that the public IPs are the private ones?
Adding --node-external-ip
This moves the error to control-plane-0 which now logs:
Jan 03 10:46:19 core-control-plane-0 k3s[20099]: {"level":"warn","ts":"2023-01-03T10:46:19.875Z","caller":"embed/config_logging.go:160","msg":"rejected connection","remote-addr":"10.0.1.2:43660","server-name":"","ip-addresses":["127.0.0.1","::1","<public-ipv4-cp1","<ipv6-cp1>","10.2.0.1"],"dns-names":["localhost","core-control-plane-1"],"error":"tls: \"10.0.1.2\" does not match any of DNSNames [\"localhost\" \"core-control-plane-1\"] (lookup core-control-plane-1: Try again)"}
while control-plane-1 says:
Jan 03 10:44:45 core-control-plane-1 k3s[1679]: {"level":"warn","ts":"2023-01-03T10:44:45.455Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://10.0.1.1:2380/version","remote-member-id":"8aa818d1fa6327ef","error":"Get \"https://10.0.1.1:2380/version\": dial tcp 10.0.1.1:2380: connect: connection refused"}
So it looks like taking all control-plane hosts down and additionally defining the public IP via command-line arguments might be a solution.
What makes this harder is that the public IP is not known during server creation, and therefore cannot be included in the snippet in the cloud-init file.
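Since the addresses are only known once the node exists, one workaround is to resolve them at first boot and render the k3s config from them. A minimal sketch (the file path, default addresses, and the idea of injecting the values via environment variables are assumptions, not the setup actually used here):

```shell
#!/bin/sh
# Sketch: render a k3s config.yaml once the node's addresses are known.
# NODE_IP / EXTERNAL_IP would normally come from the cloud metadata
# service or `ip -4 addr`; here they can be injected via environment.
NODE_IP="${NODE_IP:-10.0.1.1}"               # private interface address (placeholder)
EXTERNAL_IP="${EXTERNAL_IP:-203.0.113.10}"   # public address (placeholder)
CONFIG="${CONFIG:-/tmp/k3s-config.yaml}"     # real path would be /etc/rancher/k3s/config.yaml

cat > "$CONFIG" <<EOF
node-ip: ${NODE_IP}
advertise-address: ${NODE_IP}
node-external-ip: ${EXTERNAL_IP}
EOF
echo "wrote ${CONFIG}"
```

Running this from a cloud-init `runcmd` step (before starting k3s) would avoid having to hardcode addresses into the cloud-init file itself.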
While researching more I think https://github.com/k3s-io/k3s/issues/3551#issuecomment-970122524 is similar. Now the question would be how to actually migrate existing clusters to use all those arguments…
I tried to migrate the current setup to the new setup by stopping 2 of 3 control-plane nodes, then changing the command-line args on the last one to use node-ip, advertise-ip and node-external-ip. It came up but was not happy/healthy.
Sadly control-plane-0 (also with node-ip/advertise-ip/external-ip) could not join the cluster anymore, with the error:
Jan 03 13:42:29 core-control-plane-0 k3s[1747]: time="2023-01-03T13:42:29Z" level=fatal msg="ETCD join failed: error validating peerURLs {ClusterID:d7e48328fbade5a9 Members:[&{ID:378283a013db6ca0 RaftAttributes:{PeerURLs:[https://<ipv4-cp-1>:2380] IsLearner:false} Attributes:{Name:core-control-plane-1-11456e80 ClientURLs:[https://<ipv4-cp-1>:2379]}} &{ID:495f61aec428708b RaftAttributes:{PeerURLs:[https://<ipv4-cp2>:2380] IsLearner:false} Attributes:{Name:core-control-plane-2-9d063b64 ClientURLs:[https://<ipv4-cp2>:2379]}}] RemovedMemberIDs:[]}: PeerURLs: no match found for existing member (378283a013db6ca0, [https://<ipv4-cp-1>:2380]), last resolver error (\"https://<ipv4-cp-1>:2380\"(resolved from \"https://<ipv4-cp-1>:2380\") != \"https://10.0.1.1:2380\"(resolved from \"https://10.0.1.1:2380\"))"
Next thing I'll probably try is to completely kill the control plane and restore a etcd backup and then try to make it HA again.
Yeah well, it somehow all worked with k3s 1.21, apparently because the surrounding setup did not change. Something in 1.22+ changed how IP addresses are used.
But I'll try to migrate the setup to one which explicitly sets the ip addresses via cli params. The only hurdle currently is the migration of the etcd but I'll try out changing it to a single-node one using --cluster-reset next.
Okay, so I tested and played around with this a lot now. Good thing is that k3s is not too complex.
Migration strategy from the "wrong" IP setup with an HA control plane looks roughly like this:
Downgrade from a HA control plane to a single control plane
E.g. turn off/delete cp0 and cp1 and only keep cp2
Edit the k3s startup on cp2 to do --cluster-reset and add the ip flags (--node-ip/--advertise-address /--node-external-ip)
Now is also the time (if you need to) to migrate from wireguard to wireguard-native (update the argument)
Restart the cp2 node, it will then start up with the same data but throw out cp0/cp1 members from the etcd
It will fail with a message to remove the flag and then restart
Now it should come up healthy again
Recreate cp0 and cp1
Restart/recreate all agent servers
In the end etcd should talk via private IPs and the certs will also contain the private IPs.
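The reset step in the strategy above can be sketched as an argument list for the one-off restart. This is an illustration only: the addresses are placeholders, and per the steps, `--cluster-reset` must be removed again after the first start:

```shell
#!/bin/sh
# Sketch: compose the k3s server arguments for the one-off --cluster-reset
# start on the node being kept (cp2). Addresses are placeholders.
NODE_IP="10.0.1.3"
EXTERNAL_IP="203.0.113.12"

build_reset_args() {
    # First start only: --cluster-reset drops the other etcd members and
    # converts the datastore back to a single-node etcd.
    printf 'server --cluster-reset --node-ip=%s --advertise-address=%s --node-external-ip=%s --flannel-backend=wireguard-native' \
        "$1" "$1" "$2"
}

ARGS="$(build_reset_args "$NODE_IP" "$EXTERNAL_IP")"
echo "$ARGS"
# e.g. put these args on ExecStart in the k3s systemd unit,
# then: systemctl daemon-reload && systemctl restart k3s
```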
| gharchive/issue | 2023-01-03T09:53:15 | 2025-04-01T06:44:39.623074 | {
"authors": [
"toabi"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/6679",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1750656286 | Validate Oracle Linux 8.8
K3s Versions to be Validated
1.27
1.26
1.25
Testing Considerations
QA to edit as necessary
Install and run sonobuoy conformance tests on a hardened cluster
Validate SUC upgrade
Additional Information
Jira Ticket: https://jira.suse.com/browse/SURE-6405
This should be validated with selinux enabled:
# /etc/rancher/k3s/config.yaml
selinux: true
Validated Oracle Linux 8.8 with k3s
Versions tested
v1.25.10+k3s1
v1.26.5+k3s1
v1.27.2+k3s1
Performed tests
✅ Sonobuoy (v0.56.16) checks on hardened cluster
✅ SUC upgrade from previous minor versions on non-hardened cluster (selinux: true)
Observations
On SUC, observed apply-k3s-server and apply-k3s-agent pods going into a Warning state. Also observed the following warning from the kubelet upon checking the pods on all the versions. FYI, the cluster went through the upgrade as expected:
MountVolume.SetUp failed for volume "kube-api-access-fdcr7" : object "system-upgrade"/"kube-root-ca.crt" not registered
| gharchive/issue | 2023-06-10T00:10:02 | 2025-04-01T06:44:39.632514 | {
"authors": [
"caroline-suse-rancher",
"mdrahman-suse"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/7730",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1977955632 | coredns never becoming ready on fresh install
Environmental Info: fresh Vultr VPS instance of Debian 12.2
K3s Version: k3s version v1.27.7+k3s1 (b6f23014) / go version go1.20.10
Node(s) CPU architecture, OS, and Version: Linux vultr 6.1.0-10-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.38-1 (2023-07-14) x86_64 GNU/Linux
Cluster Configuration: 1 server, 1 agent
Describe the bug:
The k3s instance never becomes up/healthy.
Steps To Reproduce:
curl -sfL https://get.k3s.io | sh -
Installed K3s:
Expected behavior:
All pods become Ready.
Actual behavior:
Multiple pods stay in CrashLoopBackOff / not Ready status.
Additional context / logs:
I don't think SELinux is enabled and I did see some errors about restorecon: command not found during setup despite policycoreutils being installed.
journalctl -u k3s: https://gist.github.com/brandonros/69802a1677ab358db4071ab7efa8ea94
coredns pod logs:
~$ kubectl logs -n kube-system coredns-77ccd57875-f8g77
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9000e4afd6c1108d
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:38518->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:52443->108.61.10.10:53: i/o timeout
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:37070->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:38136->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:37826->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:47120->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:57901->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:53164->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:60964->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[ERROR] plugin/errors: 2 7959190641718103507.1126462387144451575. HINFO: read udp 10.42.0.6:50477->108.61.10.10:53: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: i/o timeout
...
helm-install-traefik-logs:
$ kubectl logs -n kube-system helm-install-traefik-g8wxf
if [[ ${KUBERNETES_SERVICE_HOST} =~ .*:.* ]]; then
echo "KUBERNETES_SERVICE_HOST is using IPv6"
CHART="${CHART//%\{KUBERNETES_API\}%/[${KUBERNETES_SERVICE_HOST}]:${KUBERNETES_SERVICE_PORT}}"
else
CHART="${CHART//%\{KUBERNETES_API\}%/${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}}"
fi
set +v -x
+ [[ '' != \t\r\u\e ]]
+ export HELM_HOST=127.0.0.1:44134
+ HELM_HOST=127.0.0.1:44134
+ tiller --listen=127.0.0.1:44134 --storage=secret
+ helm_v2 init --skip-refresh --client-only --stable-repo-url https://charts.helm.sh/stable/
[main] 2023/11/05 20:44:01 Starting Tiller v2.17.0 (tls=false)
[main] 2023/11/05 20:44:01 GRPC listening on 127.0.0.1:44134
[main] 2023/11/05 20:44:01 Probes listening on :44135
[main] 2023/11/05 20:44:01 Storage driver is Secret
[main] 2023/11/05 20:44:01 Max history per release is 0
Creating /home/klipper-helm/.helm
Creating /home/klipper-helm/.helm/repository
Creating /home/klipper-helm/.helm/repository/cache
Creating /home/klipper-helm/.helm/repository/local
Creating /home/klipper-helm/.helm/plugins
Creating /home/klipper-helm/.helm/starters
Creating /home/klipper-helm/.helm/cache/archive
Creating /home/klipper-helm/.helm/repository/repositories.yaml
Adding stable repo with URL: https://charts.helm.sh/stable/
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/klipper-helm/.helm.
Not installing Tiller due to 'client-only' flag having been set
++ timeout -s KILL 30 helm_v2 ls --all '^traefik$' --output json
++ jq -r '.Releases | length'
[storage] 2023/11/05 20:44:02 listing all releases with filter
+ V2_CHART_EXISTS=
+ [[ '' == \1 ]]
+ [[ '' == \v\2 ]]
+ shopt -s nullglob
+ [[ -f /config/ca-file.pem ]]
+ [[ -f /tmp/ca-file.pem ]]
+ [[ -n '' ]]
+ helm_content_decode
+ set -e
+ ENC_CHART_PATH=/chart/traefik.tgz.base64
+ CHART_PATH=/tmp/traefik.tgz
+ [[ ! -f /chart/traefik.tgz.base64 ]]
+ return
+ [[ install != \d\e\l\e\t\e ]]
+ helm_repo_init
+ grep -q -e 'https\?://'
chart path is a url, skipping repo update
+ echo 'chart path is a url, skipping repo update'
+ helm_v3 repo remove stable
Error: no repositories configured
+ true
+ return
+ helm_update install --set-string global.systemDefaultRegistry=
+ [[ helm_v3 == \h\e\l\m\_\v\3 ]]
++ helm_v3 ls --all -f '^traefik$' --namespace kube-system --output json
++ tr '[:upper:]' '[:lower:]'
++ jq -r '"\(.[0].app_version),\(.[0].status)"'
[storage/driver] 2023/11/05 20:44:32 list: failed to list: Get "https://10.43.0.1:443/api/v1/namespaces/kube-system/secrets?labelSelector=OWNER%3DTILLER": dial tcp 10.43.0.1:443: i/o timeout
metrics-server logs:
$ kubectl -n kube-system logs metrics-server-648b5df564-89n86
Error: unable to load configmap based request-header-client-ca-file: Get "https://10.43.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.43.0.1:443: i/o timeout
Usage:
[flags]
Metrics server flags:
--kubeconfig string The path to the kubeconfig used to connect to the Kubernetes API server and the Kubelets (defaults to in-cluster config)
--metric-resolution duration The resolution at which metrics-server will retain metrics, must set value at least 10s. (default 1m0s)
--version Show version
Kubelet client flags:
--deprecated-kubelet-completely-insecure DEPRECATED: Do not use any encryption, authorization, or authentication when communicating with the Kubelet. This is rarely the right option, since it leaves kubelet communication completely insecure. If you encounter auth errors, make sure you've enabled token webhook auth on the Kubelet, and if you're in a test cluster with self-signed Kubelet certificates, consider using kubelet-insecure-tls instead.
--kubelet-certificate-authority string Path to the CA to use to validate the Kubelet's serving certificates.
--kubelet-client-certificate string Path to a client cert file for TLS.
--kubelet-client-key string Path to a client key file for TLS.
--kubelet-insecure-tls Do not verify CA of serving certificates presented by Kubelets. For testing purposes only.
--kubelet-port int The port to use to connect to Kubelets. (default 10250)
--kubelet-preferred-address-types strings The priority of node address types to use when determining which address to use to connect to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
--kubelet-use-node-status-port Use the port in the node status. Takes precedence over --kubelet-port flag.
Apiserver secure serving flags:
--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. (default 0.0.0.0)
--cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
--http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
--permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
--permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
--secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
--tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
--tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
--tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
--tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
--tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
Apiserver authentication flags:
--authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
--authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
--authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
--authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
--client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
--requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
--requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
--requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
--requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
--requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
Apiserver authorization flags:
--authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
--authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
--authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
--authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
Apiserver audit log flags:
--audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
--audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
--audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
--audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
--audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
--audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
--audit-log-compress If set, the rotated log files will be compressed using gzip.
--audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
--audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
--audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
--audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
--audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
--audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
--audit-log-truncate-enabled Whether event and batch truncating is enabled.
--audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
--audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
--audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
--audit-policy-file string Path to the file that defines the audit policy configuration.
--audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
--audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
--audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
--audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
--audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
--audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
--audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
--audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
--audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
--audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
--audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
--audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
--audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
Features flags:
--contention-profiling Enable lock contention profiling, if profiling is enabled
--profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
Logging flags:
--add_dir_header If true, adds the file directory to the header of the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--alsologtostderr log to standard error as well as files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) (default :0)
--log_dir string If non-empty, write log files in this directory (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log_file string If non-empty, use this log file (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) (default 1800)
--logtostderr log to standard error instead of files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) (default true)
--one_output If true, only write logs to their native severity level (vs also writing to each lower severity level) (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--skip_headers If true, avoid header prefixes in the log messages (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--skip_log_headers If true, avoid headers when opening log files (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
--stderrthreshold severity logs at or above this threshold go to stderr (DEPRECATED: will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) (default 2)
-v, --v Level number for the log level verbosity
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
panic: unable to load configmap based request-header-client-ca-file: Get "https://10.43.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.43.0.1:443: i/o timeout
goroutine 1 [running]:
main.main()
/go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/metrics-server.go:37 +0xa5
looks similar to https://github.com/k3s-io/k3s/issues/8749 but I'm not running on anything related to AWS as far as I can tell
I'm not sure if this is also caused by https://github.com/k3s-io/k3s/issues/8755. I think it is, and if so, this can be closed as a duplicate.
No, this is not related to traefik. Something is breaking access to the in-cluster Kubernetes service endpoint. This is handled by iptables rules managed by kube-proxy. Do you have any host-based firewall (ufw or firewalld) that might be blocking this traffic?
My issue was that I did apt-get dist-upgrade and didn't reboot after.
@brandond
Nov 07 02:22:53 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth11b9bf43
Nov 07 02:22:53 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth11b9bf43
Nov 07 02:22:54 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth11b9bf43
Nov 07 02:22:54 vultr kernel: [UFW BLOCK] IN=cni0 OUT= PHYSIN=veth11b9bf43
Nov 07 02:22:55 vultr kernel: [UFW BLOCK] IN=cni0 OUT= PHYSIN=veth11b9bf43
Nov 07 02:22:57 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth11b9bf43
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:04 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth088c89ba
Nov 07 02:23:07 vultr kernel: [UFW BLOCK] IN=cni0 OUT= PHYSIN=vethf599bcfd
Nov 07 02:23:13 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth95545286
Nov 07 02:23:36 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=vethf599bcfd
Nov 07 02:24:01 vultr kernel: [UFW BLOCK] IN=cni0 OUT=cni0 PHYSIN=veth11b9bf43
root@vultr:/home/debian# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
22/tcp (v6) ALLOW IN Anywhere (v6)
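Every one of those `[UFW BLOCK]` entries involves the CNI bridge (`cni0`), which lines up with the `deny (routed)` default shown in the `ufw status` output: the host firewall is dropping pod traffic. A hypothetical helper (not part of k3s or ufw) to tally such journal lines by interface pair and confirm the pattern:

```python
import re
from collections import Counter

def count_ufw_blocks(lines):
    """Tally [UFW BLOCK] journal lines by their (IN, OUT) interface pair."""
    pat = re.compile(r"\[UFW BLOCK\] IN=(\S*) OUT=(\S*)")
    return Counter(m.group(1, 2) for line in lines if (m := pat.search(line)))
```

If every blocked pair involves `cni0`, the fix is to allow (or stop routing-denying) traffic on the CNI interfaces rather than to change anything in k3s itself.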
Not sure if these are ignorable. I guess they are since coredns came up...
Nov 05 20:29:14 vultr k3s[18368]: time="2023-11-05T20:29:14Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
This was probably the best clue as well as
Nov 05 20:29:21 vultr k3s[18368]: E1105 20:29:21.179982 18368 proxier.go:897] "Failed to ensure chain jumps" err=<
Nov 05 20:29:21 vultr k3s[18368]: error appending rule: exit status 4: Ignoring deprecated --wait-interval option.
Nov 05 20:29:21 vultr k3s[18368]: iptables v1.8.9 (nf_tables): CHAIN_ADD failed (No such file or directory): chain OUTPUT
Nov 05 20:29:21 vultr k3s[18368]: > table=nat srcChain=OUTPUT dstChain=KUBE-SERVICES
Nov 05 20:29:21 vultr k3s[18368]: I1105 20:29:21.180018 18368 proxier.go:862] "Sync failed" retryingTime="30s"
| gharchive/issue | 2023-11-05T20:45:25 | 2025-04-01T06:44:39.648605 | {
"authors": [
"brandond",
"brandonros"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/issues/8785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Added logic to strip any existing hyphens from extra args before processing them.
Updated the logic to handle extra args that are passed with pre-existing hyphens. The test was updated to cover the additional case of pre-existing hyphens, and the method was renamed based on previous feedback.
https://github.com/k3s-io/k3s/issues/3642
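A minimal sketch of the normalization this PR describes (a hypothetical Python analogue of the Go change; the real function name and location differ):

```python
def normalize_arg(arg: str) -> str:
    """Return the extra arg with exactly two leading hyphens.

    Accepts "foo=bar", "-foo=bar", or "--foo=bar" and strips however
    many hyphens the user supplied before re-adding the "--" prefix.
    """
    key, sep, value = arg.partition("=")
    key = key.lstrip("-")
    return f"--{key}{sep}{value}"
```

This makes the three user spellings equivalent, so downstream flag parsing sees a single canonical form.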
Codecov Report
Merging #3662 (ac177df) into master (580955d) will increase coverage by 0.01%.
The diff coverage is 30.00%.
@@ Coverage Diff @@
## master #3662 +/- ##
==========================================
+ Coverage 11.37% 11.38% +0.01%
==========================================
Files 134 134
Lines 8733 8734 +1
==========================================
+ Hits 993 994 +1
Misses 7525 7525
Partials 215 215
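As a sanity check on the report's numbers, the percentages follow directly from hits divided by total lines, rounded to two decimals:

```python
def coverage_pct(hits: int, total: int) -> float:
    """Coverage percentage as Codecov reports it, to two decimal places."""
    return round(100 * hits / total, 2)

# 993 of 8733 lines hit before the change, 994 of 8734 after.
before = coverage_pct(993, 8733)  # 11.37
after = coverage_pct(994, 8734)   # 11.38
```

The one extra covered line out of one extra total line is what produces the reported +0.01% delta.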
| Flag | Coverage Δ |
|------|------------|
| unittests | 11.38% <30.00%> (+0.01%) :arrow_up: |

Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ |
|----------------|------------|
| pkg/agent/run.go | 0.00% <0.00%> (ø) |
| pkg/daemons/agent/agent.go | 0.00% <0.00%> (ø) |
| pkg/daemons/control/server.go | 0.00% <0.00%> (ø) |
| pkg/daemons/config/types.go | 50.00% <100.00%> (+3.33%) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 580955d...ac177df. Read the comment docs.
It looks like you merged master into your branch, instead of rebasing onto it. Can you switch that around?
| gharchive/pull-request | 2021-07-19T13:44:27 | 2025-04-01T06:44:39.663088 | {
"authors": [
"brandond",
"codecov-commenter",
"phillipsj"
],
"repo": "k3s-io/k3s",
"url": "https://github.com/k3s-io/k3s/pull/3662",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2552583307 | operator mariadb-operator (0.32.0)
Added operator mariadb-operator (0.32.0)
/merge possible
/merge possible
| gharchive/pull-request | 2024-09-27T10:30:21 | 2025-04-01T06:44:39.673566 | {
"authors": [
"framework-automation",
"mmontes11"
],
"repo": "k8s-operatorhub/community-operators",
"url": "https://github.com/k8s-operatorhub/community-operators/pull/5076",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
152815477 | Unread Mail Button inaccessible in the folders list
Expected behaviour
When scrolling the folders list and there is an "unread mails" bullet, i want to touch it and open the list of unread mails in that folder / view the unread mail.
Actual behaviour
I touch it and the previously hidden scrollbar appears and the folder list jumps a bit up/down, but I cannot touch the colored bullet hidden behind the scrollbar.
Steps to reproduce
open folders list on an mail account
find some folder with unread mails
touch button
Environment
K-9 Mail version: v5.010 (and earlier versions)
Android version: 6.0.1
Account type (IMAP, POP3, WebDAV/Exchange): IMAP (probably any type)
Yep, I see your point. Tapping just to the left of the bullet works in the meantime. I suspect we'll fix this as part of the design overhaul.
Hmm, on my phone it seems to be very hard or impossible to hit the right spot.
Are there options like moving the scrollbar to the other side, or an option to disable it? I guess not many people use such scrollbars on touch devices anyway, as the folder list shouldn't be THAT long for most accounts.
I don't think so no, sorry.
Hi, I have exactly the same problem. Does anyone know of a fix or workaround ?
Use the global unread view, or read through the whole mailbox. Neither is quite the same, but both are workarounds to get to the mails.
Obsolete due to design overhaul.
| gharchive/issue | 2016-05-03T16:19:34 | 2025-04-01T06:44:39.709813 | {
"authors": [
"allo-",
"cketti",
"philipwhiuk",
"sploinga"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/1350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
426218398 | Incorporate an inbox zero graphic.
Expected behavior
A nice graphic is displayed when inbox is cleared.
For example Gmail (online),
Actual behavior
Nothing is displayed to indicate that the inbox is empty (other than no more emails)
Steps to reproduce
Empty inbox of emails
Refresh inbox
Environment
K-9 Mail version: 5.600
Android version: 8.1.0
Account type (IMAP, POP3, WebDAV/Exchange): IMAP
@cketti I mocked up an idea for this. If you like it I can go ahead and put together a PR, make sure everything scales, the colors are right and such.
Displaying the image is only part of the work. This feature has to take quite a few edge cases into account. Some that come to mind right now:
Folders that have never been synchronized appear empty when entered, but should not display an image (or it needs to be specific for this state). Side note: they shouldn't show "Load up to X more" either.
Similar to the above. "Clear local messages" should not lead to the "inbox zero" image being displayed.
It is misleading when an "inbox zero" image is displayed but the folder hasn't been synchronized in a while. The state on the server could be different. We'll first have to figure out what to do in cases like that.
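The edge cases above suggest the decision can't be just "message count == 0"; it needs an explicit folder state. A hypothetical sketch (names are illustrative, not actual K-9 code):

```python
from enum import Enum, auto

class FolderDisplayState(Enum):
    NEVER_SYNCED = auto()   # folder opened but never fetched from the server
    LOCAL_CLEARED = auto()  # "Clear local messages" was used
    STALE = auto()          # last sync is too old to trust the local count
    SYNCED_EMPTY = auto()   # recently synchronized and genuinely empty
    HAS_MESSAGES = auto()

def show_inbox_zero(state: FolderDisplayState) -> bool:
    # Only a recently synchronized, genuinely empty folder earns the graphic.
    return state is FolderDisplayState.SYNCED_EMPTY
```

Modeling the states explicitly also gives "Load up to X more" a clean condition: hide it in the same cases where the image would be misleading.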
| gharchive/issue | 2019-03-27T22:54:09 | 2025-04-01T06:44:39.714466 | {
"authors": [
"AlexKDawson",
"cketti",
"rakuna"
],
"repo": "k9mail/k-9",
"url": "https://github.com/k9mail/k-9/issues/3989",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
72773577 | Add proper versioning
It's been on version 0.1.0 since the beginning. We should start versioning it to reflect changes made to the library.
We have decided to go with Semantic Versioning 2.0.0. That means that API might be changed until we reach 1.0.0 version.
Semver in nutshell:
A version has the format MAJOR.MINOR.PATCH. We increment PATCH for backward-compatible bug fixes and MINOR for backward-compatible API additions. If we make breaking changes, we increment MAJOR. Please check the more detailed explanation here.
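The increment rules can be sketched as a small helper (illustrative only, not project code):

```python
def bump(version: str, change: str) -> str:
    """Increment a MAJOR.MINOR.PATCH version string per SemVer 2.0.0."""
    major, minor, patch = map(int, version.split("."))
    if change == "major":   # breaking change: reset minor and patch
        return f"{major + 1}.0.0"
    if change == "minor":   # backward-compatible API addition: reset patch
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # backward-compatible bug fix
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")
```

So the first backward-compatible API change after 0.1.0 would produce 0.2.0, and breaking changes remain fair game until 1.0.0.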
Milestones and issue tags
It is very nice to have milestones with defined scope. Therefore issues should have a milestone tag assigned.
Also declared versioning policy in #57
| gharchive/issue | 2015-05-03T08:42:41 | 2025-04-01T06:44:39.719666 | {
"authors": [
"d-kunin",
"kaaes"
],
"repo": "kaaes/spotify-web-api-android",
"url": "https://github.com/kaaes/spotify-web-api-android/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
226445974 | Support GitLab OAuth as login method
matterpoll cannot be used with Mattermost servers that use GitLab OAuth as the login method.
Currently, matterpoll supports only ID/password authentication.
matterpoll should support GitLab OAuth as a login method.
We are focusing to develop Matterpoll plugin.
Please consider using this plugin.
| gharchive/issue | 2017-05-05T00:41:20 | 2025-04-01T06:44:39.727090 | {
"authors": [
"kaakaa"
],
"repo": "kaakaa/matterpoll-emoji",
"url": "https://github.com/kaakaa/matterpoll-emoji/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.