| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T04:36:01.001746
| 2020-06-28T17:00:51
|
646965293
|
{
"authors": [
"ddland",
"jscoursera"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12243",
"repo": "ytdl-org/youtube-dl",
"url": "https://github.com/ytdl-org/youtube-dl/issues/25821"
}
|
gharchive/issue
|
[rtlnl] changed url
Checklist
[x] I'm reporting a broken site support
[x] I've verified that I'm running youtube-dl version 2<IP_ADDRESS>
[x] I've checked that all provided URLs are alive and playable in a browser
[x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
[x] I've searched the bugtracker for similar issues including closed ones
Verbose log
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.rtlxl.nl/programma/rtl-nieuws/da2963c3-c468-3d89-b764-0edb0450c211']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2<IP_ADDRESS>
[debug] Git HEAD: e942cfd1a
[debug] Python version 3.7.3 (CPython) - Linux-4.19.0-9-amd64-x86_64-with-debian-10.4
[debug] exe versions: ffmpeg 4.1.4-1, ffprobe 4.1.4-1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {}
[generic] da2963c3-c468-3d89-b764-0edb0450c211: Requesting header
WARNING: Falling back on generic information extractor.
[generic] da2963c3-c468-3d89-b764-0edb0450c211: Downloading webpage
[generic] da2963c3-c468-3d89-b764-0edb0450c211: Extracting information
ERROR: Unsupported URL: https://www.rtlxl.nl/programma/rtl-nieuws/da2963c3-c468-3d89-b764-0edb0450c211
Traceback (most recent call last):
File "/home/derek/test/youtube-dl/youtube_dl/YoutubeDL.py", line 797, in extract_info
ie_result = ie.extract(url)
File "/home/derek/test/youtube-dl/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "/home/derek/test/youtube-dl/youtube_dl/extractor/generic.py", line 3382, in _real_extract
raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.rtlxl.nl/programma/rtl-nieuws/da2963c3-c468-3d89-b764-0edb0450c211
Description
rtlxl.nl changed their website: 'programma' is now added to the URL. With an updated regexp everything works again; fixed in #25816.
To test the new regexp, the unit tests are also updated, since the old episodes were no longer available.
A fix for this change would be highly appreciated :-)
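The URL change can be illustrated with a minimal sketch (the patterns below are illustrative, not the actual _VALID_URL expressions from #25816):

```python
import re

# Hypothetical patterns showing the change: the new site layout inserts
# a 'programma' path segment (the real youtube-dl regexps differ).
OLD = r'https?://(?:www\.)?rtlxl\.nl/#!/[^/]+/(?P<id>[0-9a-f-]+)'
NEW = r'https?://(?:www\.)?rtlxl\.nl/programma/[^/]+/(?P<id>[0-9a-f-]+)'

url = 'https://www.rtlxl.nl/programma/rtl-nieuws/da2963c3-c468-3d89-b764-0edb0450c211'
assert re.match(OLD, url) is None  # the old pattern no longer matches
assert re.match(NEW, url).group('id') == 'da2963c3-c468-3d89-b764-0edb0450c211'
```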
|
2025-04-01T04:36:01.003665
| 2014-03-25T18:15:33
|
30149598
|
{
"authors": [
"Rudloff",
"anandps002"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12244",
"repo": "ytdl-org/youtube-dl",
"url": "https://github.com/ytdl-org/youtube-dl/issues/2630"
}
|
gharchive/issue
|
Disable warnings
Is there a way to disable warnings?
I still see WARNING: Falling back on generic information extractor. when I use the -q option.
It is showing an error. Is it because of the proxy? It is showing the below:
ERROR: Unable to download webpage: <urlopen error [Errno 110] Connection timed out> (caused by URLError(TimeoutError(110, 'Connection timed out')))
|
2025-04-01T04:36:01.011235
| 2020-01-01T10:35:42
|
544340896
|
{
"authors": [
"d3fault",
"dstftw",
"mushifali"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12245",
"repo": "ytdl-org/youtube-dl",
"url": "https://github.com/ytdl-org/youtube-dl/pull/23589"
}
|
gharchive/pull-request
|
Append sha1 hash of url to filename in Generic Extractor
Please follow the guide below
You will be asked some questions, please read them carefully and answer honestly
Put an x into all the boxes [ ] relevant to your pull request (like that [x])
Use Preview tab to see what your pull request will actually look like
Before submitting a pull request make sure you have:
[x] At least skimmed through adding new extractor tutorial and youtube-dl coding conventions sections
[x] Searched the bugtracker for similar pull requests
[x] Checked the code with flake8
In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under Unlicense. Check one of the following options:
[x] I am the original author of this code and I am willing to release it under Unlicense
[ ] I am not the original author of this code but it is in public domain or released under Unlicense (provide reliable evidence)
What is the purpose of your pull request?
[ ] Bug fix
[x] Improvement
[ ] New extractor
[ ] New feature
Description of your pull request and other information
Generic Extractor now appends the sha1 hash of the URL to the filename to avoid filename collisions, thus closing #23191.
Id must be stable for identical URLs.
@dstftw sha1 is stable... if you put a single URL through it multiple times, you always get the same result. Am I missing something?
for identical URLs
@dstftw sha1 is stable for identical URLs
No, it's not. sha1, like any other hash, is only stable for exactly the same strings.
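What "stable for identical URLs" means can be sketched in a few lines (append_hash is a hypothetical helper for illustration, not the PR's actual code):

```python
import hashlib

def append_hash(filename, url):
    # sha1 is deterministic: the same byte string always yields the
    # same digest, so identical URLs produce identical filenames.
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()[:8]
    return "%s-%s" % (filename, digest)

url = "https://example.com/video"
assert append_hash("clip", url) == append_hash("clip", url)
# ...but any difference in the string (a trailing slash, query args,
# http vs https) yields a different digest, which is dstftw's point.
assert append_hash("clip", url) != append_hash("clip", url + "/")
```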
|
2025-04-01T04:36:01.021703
| 2020-04-09T14:45:22
|
597334598
|
{
"authors": [
"Oneboy1979",
"dstftw",
"mawi1",
"sehaas"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12246",
"repo": "ytdl-org/youtube-dl",
"url": "https://github.com/ytdl-org/youtube-dl/pull/24710"
}
|
gharchive/pull-request
|
[Playerglobewien]
Please follow the guide below
You will be asked some questions, please read them carefully and answer honestly
Put an x into all the boxes [ ] relevant to your pull request (like that [x])
Use Preview tab to see what your pull request will actually look like
Before submitting a pull request make sure you have:
[X] At least skimmed through adding new extractor tutorial and youtube-dl coding conventions sections
[X] Searched the bugtracker for similar pull requests
[X] Checked the code with flake8
In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under Unlicense. Check one of the following options:
[X] I am the original author of this code and I am willing to release it under Unlicense
[ ] I am not the original author of this code but it is in public domain or released under Unlicense (provide reliable evidence)
What is the purpose of your pull request?
[ ] Bug fix
[ ] Improvement
[X] New extractor
[ ] New feature
Description of your pull request and other information
New extractor for the globewien player.
Download works as expected.
I think your extractor would also work for https://player.hader.at/. Could you add support for it?
Cool. Thank you.
Read coding conventions.
I'll have a look at this!
@dstftw Fixed in accordance with the coding convention.
I also started an extractor (before checking open PRs :/ ) but took a slightly different approach. All required data is already included in the webpage and no additional API request is necessary.
@sehaas I checked your code and think it is more flexible than mine; I also include extracting all thumbnails from the given link. Can I use your code in this PR?
Yes, feel free to integrate my code.
Hi @dstftw,
are there any objections left or can this extractor be merged?
Hi @dstftw,
is something wrong with this PR, or can you merge it?
regards Oneboy1979
Hi @Oneboy1979, could you squash all these commits into a single clean commit? Maybe the PR will get merged then.
|
2025-04-01T04:36:01.063296
| 2022-08-11T14:47:58
|
1336072491
|
{
"authors": [
"JulioSarda",
"StartingProgramming"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12249",
"repo": "yuichiroaoki/poly-flashloan-bot",
"url": "https://github.com/yuichiroaoki/poly-flashloan-bot/issues/100"
}
|
gharchive/issue
|
Connect to smart contract
Hi yuichiroaoki,
can you tell me what the best (fastest) way is to connect to the smart contract? You are using Alchemy in your code. Is it faster to use Metamask or RPC?
Thank you!
For now I use Alchemy and Metamask.
So you connect via Alchemy. Wouldn't it be faster to connect directly, without Alchemy in between?
|
2025-04-01T04:36:01.097049
| 2016-07-01T05:04:16
|
163320044
|
{
"authors": [
"shivrajsa",
"yuku-t"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12251",
"repo": "yuku-t/jquery-textcomplete",
"url": "https://github.com/yuku-t/jquery-textcomplete/issues/270"
}
|
gharchive/issue
|
Problem to show suggestions based on condition
Based on some condition I want to display different suggestions. Please refer to the code below: if condition is 0 I want to use obj1, and if it is 1 I want to use obj2.
But the code below is not working as expected: every time I get suggestions from only the one object that was selected the first time per the condition. Please help me solve the problem.
function autoComplete(condition) {
    var obj1 = {
            A: 'a',
            B: 'b'
        },
        obj2 = {
            A: 'X',
            B: 'Y'
        },
        words = ['A', 'B'];
    $('.handsontableInput').textcomplete([{
        match: /(^|\b)(\w{0,})$/,
        search: function(term, callback) {
            callback($.map(words, function(word) {
                return word.indexOf(term) === 0 ? word : null;
            }));
        },
        template: function(word) {
            var x;
            if (condition == 0)
                x = word + '  ' + '<span class="NotationsAutoComplete">' + obj1[word] + '</span>';
            else if (condition == 1)
                x = word + '  ' + '<span class="NotationsAutoComplete">' + obj2[word] + '</span>';
            return x;
        },
        replace: function(word) {
            return word + ' ';
        }
    }]);
}
I get suggestions from only one object which was selected first time as per condition
It is because the condition argument is captured in a closure. In other words, condition is fixed when you execute the autoComplete function and never changes.
You can fix the problem by moving condition from a function argument to an outer variable.
var condition;
function autoComplete() {
    ...
    $('..').textcomplete([{
        template: function (word) {
            if (condition) {
                return word + '...';
            } else {
                ...
            }
        }
    }])
}
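The same capture behavior can be sketched in Python (names here are illustrative): an argument is fixed when the outer function runs, while shared outer state is read fresh on every call.

```python
def make_template(condition):
    # 'condition' is captured when make_template() is called;
    # nothing that happens afterwards can change it for this closure.
    def template(word):
        return word + "-obj1" if condition == 0 else word + "-obj2"
    return template

tmpl = make_template(0)
assert tmpl("A") == "A-obj1"  # uses condition == 0 forever

# The fix: read shared outer state inside the callback instead.
state = {"condition": 0}
def template(word):
    return word + "-obj1" if state["condition"] == 0 else word + "-obj2"

state["condition"] = 1  # takes effect on the next call
assert template("A") == "A-obj2"
```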
Thank you, it's working now :-)
|
2025-04-01T04:36:01.123026
| 2019-12-24T14:24:35
|
542143895
|
{
"authors": [
"codecov-io",
"ioito",
"swordqiu",
"yousong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12252",
"repo": "yunionio/onecloud",
"url": "https://github.com/yunionio/onecloud/pull/4286"
}
|
gharchive/pull-request
|
fix: add a provider field to list, filter out cloudevents of deleted accounts
What does this PR implement / what problem does it fix:
Adds a provider field to list and filters out cloudevents of deleted accounts.
Does this need to be backported to a previous release branch:
release/2.13
/area cloudevent
/cc @swordqiu @yousong
Codecov Report
Merging #4286 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #4286 +/- ##
======================================
Coverage 6.86% 6.86%
======================================
Files 520 520
Lines 88008 88008
======================================
Hits 6039 6039
Misses 81468 81468
Partials 501 501
| Flag | Coverage Δ |
|---|---|
| #aFlag | 6.86% <ø> (ø) :arrow_up: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f36f006...662c80d. Read the comment docs.
/lgtm
/approve
/lgtm
|
2025-04-01T04:36:01.130906
| 2019-12-25T09:16:10
|
542300890
|
{
"authors": [
"codecov-io",
"ioito",
"swordqiu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12253",
"repo": "yunionio/onecloud",
"url": "https://github.com/yunionio/onecloud/pull/4316"
}
|
gharchive/pull-request
|
Automated cherry pick of #4313: fix: avoid rules being invisible in the domain admin backend
Cherry pick of #4313 on release/2.11.
#4313: fix: avoid rules being invisible in the domain admin backend
Codecov Report
:exclamation: No coverage uploaded for pull request base (release/2.11@cf05505). Click here to learn what that means.
The diff coverage is 0%.
@@ Coverage Diff @@
## release/2.11 #4316 +/- ##
==============================================
Coverage ? 7.11%
==============================================
Files ? 480
Lines ? 78070
Branches ? 0
==============================================
Hits ? 5554
Misses ? 72050
Partials ? 466
| Flag | Coverage Δ |
|---|---|
| #aFlag | 7.11% <0%> (?) |

| Impacted Files | Coverage Δ |
|---|---|
| pkg/compute/models/secgrouprules.go | 4.45% <0%> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cf05505...5974104. Read the comment docs.
/lgtm
/approve
|
2025-04-01T04:36:01.138970
| 2020-01-03T15:58:24
|
545051089
|
{
"authors": [
"codecov-io",
"swordqiu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12254",
"repo": "yunionio/onecloud",
"url": "https://github.com/yunionio/onecloud/pull/4530"
}
|
gharchive/pull-request
|
Automated cherry pick of #4527: fix: httpclient timeout value cleanup, use notimeout client for oss
Cherry pick of #4527 on release/2.14.
#4527: fix: httpclient timeout value cleanup, use notimeout client for oss
Codecov Report
:exclamation: No coverage uploaded for pull request base (release/2.14@b9c605b). Click here to learn what that means.
The diff coverage is 75%.
@@ Coverage Diff @@
## release/2.14 #4530 +/- ##
==============================================
Coverage ? 6.84%
==============================================
Files ? 522
Lines ? 88759
Branches ? 0
==============================================
Hits ? 6077
Misses ? 82175
Partials ? 507
| Flag | Coverage Δ |
|---|---|
| #aFlag | 6.84% <75%> (?) |

| Impacted Files | Coverage Δ |
|---|---|
| pkg/multicloud/ucloud/ufile.go | 0% <0%> (ø) |
| pkg/util/httputils/httputils.go | 39.83% <81.81%> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b9c605b...62b43e6. Read the comment docs.
|
2025-04-01T04:36:01.142185
| 2020-10-10T03:41:46
|
718522083
|
{
"authors": [
"ioito",
"zexi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12255",
"repo": "yunionio/onecloud",
"url": "https://github.com/yunionio/onecloud/pull/8209"
}
|
gharchive/pull-request
|
Automated cherry pick of #8207: fix: qcloud set hostname for instance
Cherry pick of #8207 on release/3.3.
#8207: fix: qcloud set hostname for instance
/lgtm
/approve
|
2025-04-01T04:36:01.161706
| 2023-03-12T08:46:53
|
1620283396
|
{
"authors": [
"Swepilot",
"yurii-khi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12256",
"repo": "yurii-khi/text_scroll",
"url": "https://github.com/yurii-khi/text_scroll/issues/32"
}
|
gharchive/issue
|
Text align does not work
When trying to align a text that is too short to scroll by setting the textAlign property to e.g. TextAlign.right, the text is still left-aligned.
Container(
color: Colors.black,
height: 30,
width: MediaQuery.of(context).size.width,
child: Row(
children: [
Expanded(
child: TextScroll(
"Short text",
style:
const TextStyle(color: Colors.white, fontSize: 20),
textAlign: TextAlign.right,
),
),
],
),
);
If I change the TextScroll in the code above to a normal Text widget the text is right aligned.
Thanks for the feedback @Swepilot, it should be resolved in #29.
|
2025-04-01T04:36:01.181920
| 2022-01-29T10:53:14
|
1118180049
|
{
"authors": [
"fperez",
"yuvipanda"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12257",
"repo": "yuvipanda/github-app-user-auth",
"url": "https://github.com/yuvipanda/github-app-user-auth/issues/7"
}
|
gharchive/issue
|
Works in the notebook/console! Do you want a PR? :)
Try running this version in a notebook:
import argparse
import requests
import sys
import time
import os
from IPython.display import display, Javascript
import ipywidgets as widgets
def do_authenticate_device_flow(client_id):
    """
    Authenticate user with given GitHub app using GitHub OAuth Device flow

    https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps#device-flow
    describes what happens here.

    Returns an access_code and the number of seconds it expires in.
    access_code will have scopes defined in the GitHub app
    """
    verification_resp = requests.post(
        "https://github.com/login/device/code",
        data={"client_id": client_id, "scope": "repo"},
        headers={"Accept": "application/json"},
    ).json()

    url = verification_resp["verification_uri"]
    code = verification_resp["user_code"]

    display(Javascript(f'navigator.clipboard.writeText("{code}");'))
    print(f'The code {code} has been copied to your clipboard.')
    print(f'You have 15 minutes to go to this URL and paste it there:')
    print(f'{url}')

    ans = input("Hit ENTER to open that page in a new tab (type anything to cancel)>")
    if ans:
        print("Automatic opening canceled!")
    else:
        display(Javascript(f'window.open("{url}", "_blank");'))

    print('Waiting...', end='', flush=True)
    while True:
        time.sleep(verification_resp["interval"])
        print('.', end='', flush=True)
        access_resp = requests.post(
            "https://github.com/login/oauth/access_token",
            data={
                "client_id": client_id,
                "device_code": verification_resp["device_code"],
                "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            },
            headers={"Accept": "application/json"},
        ).json()
        if "access_token" in access_resp:
            print()
            return access_resp["access_token"], access_resp["expires_in"]

def main():
    client_id = os.environ.get("GITHUB_APP_CLIENT_ID")
    git_credentials_path = "/tmp/github-app-git-credentials"
    access_token, expires_in = do_authenticate_device_flow(client_id)
    expires_in_hours = expires_in / 60 / 60
    print(f"\nSuccess! Authentication will expire in {expires_in_hours:0.1f} hours.")
    # Create the file with appropriate permissions (0600) so other users can't read it
    with open(os.open(git_credentials_path, os.O_WRONLY | os.O_CREAT, 0o600), "w") as f:
        <EMAIL_ADDRESS>
and then just call main() either in the notebook or a console attached to it. I think it works really nicely!
Do you want a PR for this? Or you can go ahead and do it :)
I'd refactor the code a bit to reuse more at the cmd line and Jupyter, and I'd add a magic, say %ghauth, to shorten this. We can then add the magic to __init__ and pre-import it, or even add it to a button/menu on the UI that shows the current GH connection status and lets you refresh the auth by clicking on it...
But anyway, I think this is now good enough for everyday use, esp. if we add a magic we preload or similar for convenience....
With this workflow, it's just running one command, enter, paste, enter, click on the GH green authorize button, close tab, done. I kind of like it :)
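One detail worth noting in the snippet above is the os.open trick for writing the credentials file: passing mode 0o600 at creation time means the file is never readable by other users, even briefly. A standalone sketch (path is a throwaway temp file here, not the real credentials path):

```python
import os
import stat
import tempfile

old_umask = os.umask(0o022)  # make the expected mode predictable
path = os.path.join(tempfile.mkdtemp(), "credentials")

# os.open() applies the 0o600 permissions atomically at creation,
# unlike open() followed by os.chmod(), which leaves a brief window
# where the file exists with default permissions.
with open(os.open(path, os.O_WRONLY | os.O_CREAT, 0o600), "w") as f:
    f.write("secret-token\n")

os.umask(old_umask)
assert stat.S_IMODE(os.stat(path).st_mode) == 0o600
```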
BTW - I'm going to make a PR against IPython adding this new magic:
from IPython.core.magic import register_line_magic

@register_line_magic
def pym(line):
    "Equivalent to 'python -m'"
    import runpy
    runpy.run_module(line)
which then gives us this clean workflow (my ghauth module is the same as yours, just with the above JS code and a shorter name for convenience):
The idea here is that, once we put %pym into IPython, then any python package/module that offers a python -m entry point can become a cmd line magic as %pym pkg! And we can then expose nice Jupyter-oriented functionality in packages without the need for them to explicitly register magics or have users import anything :)
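A minimal sketch of what runpy.run_module does under the hood of such a magic (hello_mod is a throwaway module created just for the demo):

```python
import os
import runpy
import sys
import tempfile

# Create a tiny module on a temp path so run_module can import it.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "hello_mod.py"), "w") as f:
    f.write("RESULT = 41 + 1\n"
            "if __name__ == '__main__':\n"
            "    print('RESULT is', RESULT)\n")
sys.path.insert(0, moddir)

# run_name='__main__' makes the module's __main__ guard fire,
# mirroring `python -m hello_mod`; the module globals are returned.
mod_globals = runpy.run_module("hello_mod", run_name="__main__")
assert mod_globals["RESULT"] == 42
```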
@fperez wow this is awesome! PR would be lovely - especially if we can make the Javascript calls conditional on us running in a notebook, so this could continue to run in terminals still. HPC users might need that still.
Hey, I found a better solution! It turns out that %run already provides a -m flag! I'll make a quick PR now, and we can discuss there :)
This was fixed up in #8!
|
2025-04-01T04:36:01.207636
| 2023-02-15T19:15:40
|
1586406366
|
{
"authors": [
"domenkozar",
"yvan-sraka"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12258",
"repo": "yvan-sraka/cargo-cabal",
"url": "https://github.com/yvan-sraka/cargo-cabal/pull/5"
}
|
gharchive/pull-request
|
Add Haskell Tool Stack support
Preliminary support for https://docs.haskellstack.org, behind the CLI tool cargo-stack. You can give it a try with just:
cargo install --git https://github.com/yvan-sraka/cargo-stack
cargo stack init
Feedback welcome! (I'm not a stack user) 🙂
I'm seeing issue when stack copies the static rust library, have you tested on a sample project?
I'm seeing issue when stack copies the static rust library, have you tested on a sample project?
TBH, I haven't tested it until now. So, I quickly generated a sample project, and I got an error that I believe is similar to the one you got:
stack build
greetings> configure (lib)
[1 of 3] Compiling Main ( /Users/yvan/greetings/Setup.lhs, /Users/yvan/greetings/.stack-work/dist/aarch64-osx/ghc-9.6.4/setup/Main.o )
[2 of 3] Compiling StackSetupShim ( /Users/yvan/.stack/setup-exe-src/setup-shim-9p6GVs8J.hs, /Users/yvan/greetings/.stack-work/dist/aarch64-osx/ghc-9.6.4/setup/StackSetupShim.o )
[3 of 3] Linking /Users/yvan/greetings/.stack-work/dist/aarch64-osx/ghc-9.6.4/setup/setup
Configuring greetings-0.1.0...
greetings> build (lib)
Preprocessing library for greetings-0.1.0..
Building library for greetings-0.1.0..
[1 of 1] Compiling Greetings
greetings> copy/register
Installing library in /Users/yvan/greetings/.stack-work/install/aarch64-osx/ca9cd2c969f892412e8410cfbf46d9820c96e63d610936f628c719343c2e742a/9.6.4/lib/aarch64-osx-ghc-9.6.4/greetings-0.1.0-Ir5HRrXiwTd2Xu60QFW6S
.stack-work/dist/aarch64-osx/ghc-9.6.4/build/libgreetings.a: copyFile: does not exist (No such file or directory)
Error: [S-7282]
Stack failed to execute the build plan.
While executing the build plan, Stack encountered the error:
[S-7011]
While building package greetings-0.1.0 (scroll up to its section to see the error) using:
/Users/yvan/greetings/.stack-work/dist/aarch64-osx/ghc-9.6.4/setup/setup --verbose=1 --builddir=.stack-work/dist/aarch64-osx/ghc-9.6.4 copy
Process exited with code: ExitFailure 1
This would likely be solved by tweaking the Setup.hs that cargo-stack generated, even though I would indeed prefer to keep the build logic in the stack.yaml or the .cabal ...
We solved this by using #6; hope to open-source an example soon.
|
2025-04-01T04:36:01.448632
| 2015-11-21T04:23:39
|
118169109
|
{
"authors": [
"blueyed",
"yyuu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12259",
"repo": "yyuu/pyenv",
"url": "https://github.com/yyuu/pyenv/pull/487"
}
|
gharchive/pull-request
|
rbenv 20151121
Imported changes from latest rbenv, by git merge rbenv/master -s recursive -X rename-threshold=5%. (cc: @blueyed )
@yyuu
:+1:
|
2025-04-01T04:36:01.475550
| 2021-01-14T23:16:28
|
786410668
|
{
"authors": [
"yzernik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12260",
"repo": "yzernik/squeaknode",
"url": "https://github.com/yzernik/squeaknode/issues/655"
}
|
gharchive/issue
|
Improve display of conversation thread
We can use something like this to show that squeaks are part of the same reply thread:
https://material-ui.com/components/timeline/#basic-timeline
Something like this:
I did some experiment with this in this branch: https://github.com/yzernik/squeaknode/tree/use_react_timeline_for_thread
https://github.com/yzernik/squeaknode/pull/716
|
2025-04-01T04:36:01.497196
| 2019-09-21T03:50:49
|
496609054
|
{
"authors": [
"js201909",
"pcfjojo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12261",
"repo": "z-song/laravel-admin",
"url": "https://github.com/z-song/laravel-admin/issues/3951"
}
|
gharchive/issue
|
Step form: after browsing for a file and clicking Next, an exception occurs. How can this be solved? Symfony\Component\HttpKernel\Exception\MethodNotAllowedHttpException: The GET method is not supported for this route. Supported methods: POST.
Laravel Version: #.#.#
PHP Version:
Laravel-admin: #.#.#
Description:
Step form: after browsing for a file and clicking Next, an exception occurs. How can this be solved? Symfony\Component\HttpKernel\Exception\MethodNotAllowedHttpException: The GET method is not supported for this route. Supported methods: POST.
Steps To Reproduce:
It was a path problem; it has been resolved.
@js201909 in which version it is resolved?
version 1.7.6
|
2025-04-01T04:36:01.503426
| 2017-06-27T07:25:52
|
238759579
|
{
"authors": [
"DoctorCoder-A",
"alexoleynik0",
"azinkey",
"banstola",
"chenqixuan",
"miurabo",
"naingwin",
"z-song"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12262",
"repo": "z-song/laravel-admin",
"url": "https://github.com/z-song/laravel-admin/issues/926"
}
|
gharchive/issue
|
External JS
I load external js using Admin::js('something.js') in bootstrap.php, but those js files are not loaded when I go to the create view from the list view. I need to refresh once for my external js to work. Has anyone else had this issue?
me too :( I've run into the same problem. How can it be solved?
Because of pjax, the js files are loaded only once, when the page opens, so you should define js functions in a js file and use Admin::script() to call the functions you defined.
In you js file:
function message() {
alert('xxxx');
}
and call this function in your action: Admin::script('message()');
Please let me re-open this issue. I am using Vue.js for some custom processing, so my js code will be like this:
new Vue({ el: '#app', });
How can I call that script via Admin::script()? I am just new to JS. Thanks in advance.
Did you actually solve this issue? I have the exact same problem. I have to include a custom js script of about 200 lines, but the JS is not included unless I refresh.
@banstola Isn't it possible to call it with Admin::script() after converting it into a function?
when i am calling Admin::script() in my form(){ action it will produce an error
FatalThrowableError In SpecializationsController.php line 74 :
Class 'App\Admin\Controllers\Admin' not found
when i am calling Admin::script() in my form(){ action it will produce an error
FatalThrowableError In SpecializationsController.php line 74 :
Class 'App\Admin\Controllers\Admin' not found
I have news for you: the Admin class does not exist in that namespace. Are you importing the correct class?
you can use this code to reload the page
if(!localStorage.getItem('reload')){
localStorage.setItem('reload', 1)
window.location.reload()
}
setTimeout(()=>{
localStorage.removeItem('reload')
}, 9000)
Don't know what I'm doing here, but besides the good example that z-song provided (and in most cases I will recommend it too), there's also pjax events you can use.
For example
$(document).on('pjax:complete', function() {
// this will be run on every pjax request complete
});
A more reliable option when js fails to load is to disable pjax; this will reload the page on every open.
protected function form() {
    (new \Encore\Admin\Admin)->disablePjax();
    $form = new Form(new Post());
or
protected function grid() {
    (new \Encore\Admin\Admin)->disablePjax();
    $grid = new Grid(new Post());
|
2025-04-01T04:36:01.591392
| 2017-02-07T20:49:04
|
206010161
|
{
"authors": [
"gman2691",
"jimluke827",
"panzarino",
"rhoffer21",
"zachpanz88"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12265",
"repo": "zachpanz88/mlbgame",
"url": "https://github.com/zachpanz88/mlbgame/issues/24"
}
|
gharchive/issue
|
XML
The examples of your program on GitHub have an XMLSyntaxError
@gman2691 can you please provide a direct example of where this occurs
The teaser code that you posted to find all of the scores in the Mets games.
@gman2691 I'm not getting any error, can you please post the error response you are getting?
I get this error lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
full copy:
Traceback (most recent call last):
File "C:/......../main.py", line 5, in
month = mlbgame.games(2015, 6, home="Mets")
File "C:........._init_.py", line 205, in games
game = day(i, y, x, home=home, away=away)
File "C:.........._init_.py", line 175, in day
data = mlbgame.game.scoreboard(year, month, day, home=home, away=away)
File "C:.................\game.py", line 17, in scoreboard
parsed = etree.parse(data)
File "src\lxml\lxml.etree.pyx", line 3427, in lxml.etree.parse (src\lxml\lxml.etree.c:81101)
File "src\lxml\parser.pxi", line 1811, in lxml.etree._parseDocument (src\lxml\lxml.etree.c:117832)
File "src\lxml\parser.pxi", line 1837, in lxml.etree._parseDocumentFromURL (src\lxml\lxml.etree.c:118179)
File "src\lxml\parser.pxi", line 1741, in lxml.etree._parseDocFromFile (src\lxml\lxml.etree.c:117091)
File "src\lxml\parser.pxi", line 1138, in lxml.etree._BaseParser._parseDocFromFile (src\lxml\lxml.etree.c:111637)
File "src\lxml\parser.pxi", line 595, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:105093)
File "src\lxml\parser.pxi", line 706, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:106801)
File "src\lxml\parser.pxi", line 635, in lxml.etree._raiseParseError (src\lxml\lxml.etree.c:105655)
File "file:/C:/.................../scoreboard.xml.gz", line 1
lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
Process finished with exit code 1
@jimluke827 I'm not getting that error at all, it could be an issue with lxml. Can you try updating both mlbgame and lxml to the latest version?
@zachpanz88 I am at the latest version of each.
@zachpanz88 I was using Python 3. I switched to Python 2 and had a ton of trouble getting libxml2 to install, but once I got that all set, everything is working.
@gman2691 see above if you are using wrong version of python
@jimluke827 that might have been the issue. I've only been testing on python 2, so I'll check out the issue on python 3.
Again, I do not seem to get the error when I am testing with python 3.
File "file:/C:/.................../scoreboard.xml.gz", line 1
@jimluke827 Is there any way you could fill in part of that "......" so I can diagnose what day the information is coming from? I just need the "month_06/day_xx/" part.
Just to be clear, we are talking about this example, correct?
from __future__ import print_function
import mlbgame
month = mlbgame.games(2015, 6, home="Mets")
games = mlbgame.combine_games(month)
for game in games:
print(game)
@zachpanz88 I don't have the error anymore, but I just used the exact examples that were shown in the readme, neither worked and both gave the same error.
@jimluke827 I noticed from your error messages that you are using Windows. I have been testing everything on Linux, which handles .tar.gz much better than Windows. The problem could have arisen from Windows not being able to handle the compressed files, so I'll run some tests on Windows.
Expanding on this, I also had the same issue. I believe it is related to the .gz compressed xml files in a windows environment. I was eventually able to remedy the issue by moving the archived gameday-data to a backup folder which forced the wrapper to make calls to the server instead of using the archived xml data.
I unfortunately don't have a better work around to offer at this time, other than moving the saved data.
@rhoffer21 I will try to find a better compression format for windows (probably zip). I will also build a delete function into the update module that will allow you to remove all downloaded data.
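One way to sidestep the platform differences with compressed files is to decompress explicitly before parsing, instead of handing the .xml.gz straight to the parser. A minimal sketch of the idea (using the stdlib ElementTree rather than lxml, with inline test data instead of a real scoreboard file):

```python
import gzip
import xml.etree.ElementTree as ET

# Stand-in for a downloaded scoreboard.xml.gz payload.
raw = gzip.compress(b"<games><game id='1'/></games>")

# Decompressing in Python first avoids relying on the XML parser
# (or the OS) to transparently handle gzip, which is where the
# Windows-only failures above appear to come from.
root = ET.fromstring(gzip.decompress(raw))
assert root.tag == "games"
assert root[0].get("id") == "1"
```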
Released in 2.3.2.
| gharchive/issue | zaggino/brackets-git | https://github.com/zaggino/brackets-git/issues/1034 | created 2015-05-13 | authors: codeofsumit, zaggino | license: MIT |
changing branches triggers "gulp watch"
I'm using gulp as a build tool. gulp watch watches for file changes. When changing branches, the watcher is triggered and creates "new" min-files although the min-files should be exactly the same as in the branch i changed to. I always discard changes but it's annoying nonetheless.
Any idea what could cause this?
Can you post a snippet of your watch task?
of course:
// Watch
gulp.task('watch', function() {
// Watch .scss files
gulp.watch('src/css/**/*.scss', ['styles']);
// Watch .js files
gulp.watch('src/js/**/*.js', ['scripts']);
});
// Styles
gulp.task('styles', function() {
return gulp.src('src/css/app.scss')
.pipe(sass({
style: 'expanded',
"sourcemap=none": true,
noCache: true
}))
.pipe(autoprefixer('last 2 version', 'safari 5', 'ie 8', 'ie 9', 'opera 12.1', 'ios 6', 'android 4'))
// .pipe(gulp.dest('dist/styles'))
.pipe(rename({ suffix: '.min' }))
.pipe(minifycss())
.pipe(gulp.dest('dist/css'))
.pipe(notify({ message: 'Styles task complete' }));
});
// Scripts
gulp.task('scripts', function() {
return gulp.src([
'src/js/config.js',
'src/js/helper.js',
'src/js/app.js',
'src/js/page.js',
'src/js/views/page.js',
'src/js/**/*.js'
])
.pipe(jshint('.jshintrc'))
.pipe(jshint.reporter('default'))
.pipe(concat('app.js'))
// .pipe(gulp.dest('dist/scripts'))
.pipe(rename({ suffix: '.min' }))
.pipe(uglify())
.pipe(gulp.dest('dist/js'))
.pipe(notify({ message: 'Scripts task complete' }));
});
watch is better for this sort of thing than the default gulp.watch; this is how my gulp is set up:
var path = require('path');
var watch = require('gulp-watch');
gulp.task('watch', function () {
watch('src/js/**/*.js', function (file) {
var filePath = path.relative(__dirname, file.path);
console.log(filePath);
});
});
you'll see which files get picked up by watch in the console, and then we should be able to find out more about the issue
See for example here: https://github.com/zaggino/brackets-npm-registry/blob/master/gulpfile.js#L90-L98
to replace your:
gulp.watch('src/css/**/*.scss', ['styles']);
i'd personally do:
var path = require('path');
var watch = require('gulp-watch');
var runSequence = require('run-sequence');
watch('src/css/**/*.scss', function (file) {
var filePath = path.relative(__dirname, file.path);
console.log(filePath);
runSequence('styles');
});
So you think it's a gulp.watch problem? I was under the impression that brackets-git may make some changes to the files when changing branches. I haven't had this problem with earlier versions of brackets-git.
ouu never mind, I'm terribly sorry. It works like a charm now after updating brackets and brackets-git again. I apparently overlooked some updates.
| gharchive/issue | zakthompson/pokemon-hex-generator | https://github.com/zakthompson/pokemon-hex-generator/issues/12 | created 2020-10-26 | authors: GeneGi, zakthompson | license: MIT |
does it support arduino zero board?
if so, how do I change the code to generate the hex and flash it to my board?
Hey @GeneGi, unfortunately it does not :( The Zero is an ATSAMD21G18 board. The MCUs supported by these bots are ATMEGA16U2, ATMEGA32U4 and AT90USB1286. Thanks for reaching out!
| gharchive/pull-request | zalando-incubator/postgres-operator | https://github.com/zalando-incubator/postgres-operator/pull/304 | created 2018-05-25 | authors: alexeyklyukin, coveralls, zerg-junior | license: MIT |
Introduce a repair scan to fix failing clusters
A repair is a sync scan that acts only on those clusters that indicate
that the last add, update or sync operation on them has failed. It is
supposed to kick in more frequently than the sync scan. The sync scan
still remains useful to fix the consequences of external actions
(i.e. someone deletes a postgres-related service by mistake)
unbeknownst to the operator.
The repair scan is controlled by the new repair_period parameter in the
operator configuration. It has to be at least 2 times more frequent than
a sync scan to have any effect (a normal sync scan will update both last
synced and last repaired attributes of the controller, since repair is
just a sync underneath).
A repair scan could be queued for a cluster that is already being synced
if the sync period exceeds the interval between repairs. In that case a
repair event will be discarded once the corresponding worker finds out
that the cluster is not failing anymore.
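The scheduling relationship described above can be sketched as follows; all names and periods here are illustrative, not the operator's actual code:

```python
# Hypothetical sketch of the scan scheduling described above: a repair is
# just a sync restricted to failing clusters, so a full sync also counts
# as a repair and updates both timestamps.

def due(last, period, now):
    return now - last >= period

def tick(state, now, sync_period=1800.0, repair_period=300.0):
    """Decide what to run at time `now`. Mutates `state` timestamps."""
    if due(state["last_synced"], sync_period, now):
        state["last_synced"] = state["last_repaired"] = now
        return "sync"            # full sync updates both timestamps
    if state["failing"] and due(state["last_repaired"], repair_period, now):
        state["last_repaired"] = now
        return "repair"          # repair only touches failing clusters
    return None

state = {"last_synced": 0.0, "last_repaired": 0.0, "failing": True}
first = tick(state, now=300.0)    # repair kicks in first
second = tick(state, now=1800.0)  # then the regular sync
print(first, second)
```

This also shows why repair_period must be noticeably shorter than sync_period to have any effect: a sync resets the repair timestamp as well.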
:+1:
Coverage decreased (-0.08%) to 11.194% when pulling fdcb8270c6fa8cbdceb5c0d2c693e1c4b2cd7d1f on repairs into 9c86f8bd9694d1af4ea062c6f698a8d3324b865f on master.
👍
👍
:+1:
👍
:+1:
| gharchive/issue | zalando-incubator/zally | https://github.com/zalando-incubator/zally/issues/314 | created 2017-04-28 | authors: maxim-tschumak, netme | license: MIT |
Use Meter metrics instead of Counters
Current behavior
Zally server collects counter metrics and exposes them via REST interface. However, counter metrics don't provide much meaning without a persistence and in the cloud setup (multiple instances, auto-scaling, etc.).
Expected behavior
We should use Meter metrics instead. They measure the rate of the events in specific time frames (e.g. 120 must violations in the last 15 minutes).
Acceptance criteria
Counter metrics should be removed
The Meter metrics should be collected for:
Each violation
Each violation type (must, should, could)
Functional monitoring is set up (related to internal-issue)
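For context, a meter in this sense counts events and reports the rate over a recent time window (e.g. Dropwizard Metrics' Meter on the JVM). A minimal, language-agnostic sketch of the idea, shown here in Python:

```python
import time
from collections import deque

class Meter:
    """Counts events and reports how many occurred in the last `window` seconds."""

    def __init__(self, window=900.0):  # 15 minutes, as in the example above
        self.window = window
        self.events = deque()

    def mark(self, now=None):
        """Record one event (e.g. one 'must' violation)."""
        self.events.append(time.time() if now is None else now)

    def rate(self, now=None):
        """Number of events inside the time window ending at `now`."""
        now = time.time() if now is None else now
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events)

meter = Meter(window=900.0)
for t in (0.0, 10.0, 500.0, 1000.0):
    meter.mark(now=t)
print(meter.rate(now=1000.0))  # only events within [100.0, 1000.0] count
```

Unlike a plain counter, this survives auto-scaling reasonably well because each instance reports a rate rather than an ever-growing absolute count.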
@maxim-tschumak special thanks for the ticket description 👍
| gharchive/issue | zalando/connexion | https://github.com/zalando/connexion/issues/1247 | created 2020-06-09 | authors: Huangkai1008, RobbeSneyders | license: Apache-2.0 |
How to skip validating the parameters and request bodies?
Description
Hello! I use other tools to validate the inputs in my project. How can I make Connexion skip its validation? I see a parameter called validate_responses; can you provide a similar parameter or method for this situation? Thank you very much!
Expected behaviour
Connexion does not validate the inputs
Actual behaviour
Connexion always validates the inputs
Steps to reproduce
N/A
Additional info:
N/A
Output of the commands:
python --version: Python 3.6.5
pip show connexion | grep "^Version\:": Connexion 2.7.0
Hi @Huangkai1008, you can override the validators used for parameter and request body validation. See docs.
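Concretely, the override works by passing a validator map when registering the API. The sketch below uses a hypothetical no-op validator class to show the shape; Connexion's real validator interfaces live in the library and differ in detail, so treat this as an illustration, not project code:

```python
# Sketch of skipping input validation by overriding the validators.
# NoOpValidator is hypothetical -- Connexion's actual validator classes
# have a richer interface (see the Connexion docs).

class NoOpValidator:
    """A validator that accepts any input unchanged."""

    def __init__(self, *args, **kwargs):
        pass  # ignore the schema Connexion would normally hand us

    def __call__(self, function):
        return function  # wrap the view function without checking anything

validator_map = {
    "body": NoOpValidator,
    "parameter": NoOpValidator,
}

# With Connexion installed, this map would be passed when adding the API:
# app = connexion.FlaskApp(__name__)
# app.add_api("openapi.yaml", validator_map=validator_map)

# Demonstration that the no-op validator leaves a handler untouched:
def handler(x):
    return x * 2

wrapped = validator_map["body"]()(handler)
print(wrapped(21))
```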
| gharchive/issue | zalando/connexion | https://github.com/zalando/connexion/issues/883 | created 2019-02-19 | authors: FRNCSCM, Jyhess, Zelaskal | license: Apache-2.0 |
Request in aiohttp endpoint
Description
I am currently trying to use the aiohttp framework inside of the connexion package, which works really well. The one problem I've got is catching the request (from the server side) inside of the endpoint logic, so I can read the headers from the request.
The request is available in connexion.decorators.security, but how can I get at it in the endpoint logic?
I appreciate any help, as I've been working on this issue for hours now.
Expected behaviour
Flask's "request" worked perfectly fine inside of my endpoint logic; I assume it has something to do with me switching to "async def".
Actual behaviour
aiohttp.web.Request.headers just returns an object and not really the content of a request.
Steps to reproduce
This is how my logic is build and everything seems to work fine, except the catching of the "request".
import logging
import connexion
import json
import query_segments
import socket
import tokens
import time
import asyncio
import aiohttp
from aiohttp import web
async def endpoint(user_input):
user_token = request.get['Authorization'] #**(something like that)**
return web.Response(text=f"--> {user_input}")
logging.basicConfig(level=getattr(logging, 'INFO', None))
ip_address = socket.gethostbyname(socket.gethostname())
webapp = connexion.AioHttpApp(__name__,host=ip_address, port=4000, debug=False)
webapp.add_api('swagger.yaml', base_path='/cs')
application = webapp.app
if __name__ == '__main__':
webapp.run()
Additional info:
Output of the commands:
Python version: 3.7
connexion = 2.2.0
You can use pass_context_arg_name in add_api arguments:
webapp.add_api('swagger.yaml', base_path='/cs', pass_context_arg_name='request')
It defines the name of a request argument that will be added in your endpoints:
async def endpoint(user_input, request):
user_token = request.headers.get('Authorization')  # aiohttp exposes headers as a case-insensitive mapping
return web.Response(text=f"--> {user_input}")
Perfect - that's it. Thanks!
Does this also work for x-apikeyInfoFunc and its other auth function equivalents?
Not yet. Pull request #869 is a proposal to allow it.
| gharchive/pull-request | zalando/expan | https://github.com/zalando/expan/pull/115 | created 2017-05-24 | authors: coveralls, gbordyugov, shansfolder | license: MIT |
OCTO-1614 cleanup module structure
What I did in this PR:
Move some root-level functions to class methods
Move other root-level functions to util module
Optimize imports and cleanup code
Restructure test folders
Coverage increased (+3.1%) to 79.173% when pulling f6a191b2695765460aff11f0a07ac3e2cc1de545 on shansfolder:dev into f7816d9b2ebd80e117bb8aa0e920ed501f8fdcbe on zalando:dev.
Have no idea why unit test failed on Python 2.7. Need to investigate further.
Coverage increased (+3.1%) to 79.173% when pulling 10c9a58be9d01096d4a878ea7d9ee9006c34e8fe on shansfolder:dev into f7816d9b2ebd80e117bb8aa0e920ed501f8fdcbe on zalando:dev.
Ok I fixed the build.
The reason is that pytest just released a new version 3.1.0 on May 22nd (see changelog here), which leads to our failed tests with Python 2.7.
As a hot fix, I pinned the pytest version to 3.0.7 when running tox.
Coverage increased (+2.7%) to 78.743% when pulling bc332e6b114cc40448537f472967e270ebf8e0e4 on shansfolder:dev into f7816d9b2ebd80e117bb8aa0e920ed501f8fdcbe on zalando:dev.
Coverage increased (+2.7%) to 78.743% when pulling 4d2de459794eadfa6b346cd6009dc9fb1db3b7bd on shansfolder:dev into f7816d9b2ebd80e117bb8aa0e920ed501f8fdcbe on zalando:dev.
looks ok to me, especially considering the +5,677 −5,874 thing ;-)
Coverage increased (+2.7%) to 78.743% when pulling 512c05733f0092c8c693695a8528f1e2ae8bfb73 on shansfolder:dev into f7816d9b2ebd80e117bb8aa0e920ed501f8fdcbe on zalando:dev.
| gharchive/issue | zalando/grafter | https://github.com/zalando/grafter/issues/9 | created 2016-12-19 | authors: aeons, etorreborre | license: MIT |
Publish artifact for 2.12
It seems like the project is building for 2.12, so can we have a release as well? :)
No problem, I'm not at my laptop now, I'll try to do that later on today or tomorrow.
1.2.6 is now available for Scala 2.12.
| gharchive/pull-request | zalando/lizzy | https://github.com/zalando/lizzy/pull/128 | created 2016-05-12 | authors: coveralls, jmcs, rafaelcaricio | license: Apache-2.0 |
Add health check endpoint
Fixes #125
Coverage increased (+0.07%) to 90.215% when pulling da2d171286a0f327134ba039dfd4f94357732e0c on rafaelcaricio:healthcheck-endpoint into ed0446d9a055c71c92a29a85ba210784193351a0 on zalando:release2.0.
:+1:
@rafaelcaricio can you solve the conflicts?
@jmcs done.
Coverage increased (+0.08%) to 89.563% when pulling 8ab731d90163b3ff0723c47fef18622cdaf2c889 on rafaelcaricio:healthcheck-endpoint into 58652f59f1c75aa4cd328d05fa18ae4d4269565a on zalando:release2.0.
:+1:
| gharchive/issue | zalando/zally | https://github.com/zalando/zally/issues/1175 | created 2020-08-17 | authors: ePaul | license: MIT |
Add links to common data types
Please check if the PR https://github.com/zalando/restful-api-guidelines/pull/596 introduces changes which are relevant to the Zally project.
Not needed, just editorial change.
| gharchive/pull-request | zapnito/ember-cli-deploy-redis | https://github.com/zapnito/ember-cli-deploy-redis/pull/7 | created 2015-06-19 | authors: LevelbossMike, achambers, lukemelia | license: MIT |
User definable context keys
This PR allows:
a plugin to specify which keys it will be accessing from the context object and then and;
a user to override the context keys if they would like the plugin to retrieve data from some other value on the context object.
To do this, a plugin can define a contextKeys hash like so:
return {
name: options.name,
contextKeys: {
revision: 'revision'
}
}
This means that the plugin will be using a revision property at some stage in the lifecycle and it will retrieve it from context.revision.
However, a user might have written their own plugin that puts some sort of revision string under a different property on the context object, say, context.tag. In this case, the user can specify in their deploy.js file that they would like the redis plugin to use the new property for its revision, like so:
module.exports = function(environment) {
var ENV = {
redis: {
revision: '$context:tag'
}
};
return ENV;
}
The redis plugin will inspect the config looking for anything that starts with $context: and use that value to override the contextKeys it will use. Now, when the redis plugin is looking for the revision it will look in context.tag instead of context.revision
Do we really need the contextKeys hash? What do you think about having resolveConfigurationValue(key, config, context) that reads from config and either returns the value or, if the value is of the form $context:____, returns the value from context?
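lukemelia's resolveConfigurationValue idea can be sketched like this (a hypothetical Python rendering of the JS proposal, not project code):

```python
# Hypothetical sketch of resolveConfigurationValue(key, config, context):
# return the configured value, unless it is a "$context:..." reference,
# in which case look the value up on the context instead.

def resolve_configuration_value(key, config, context):
    value = config.get(key)
    if isinstance(value, str) and value.startswith("$context:"):
        return context[value[len("$context:"):]]
    return value

config = {"revision": "$context:tag", "url": "redis://localhost"}
context = {"tag": "v1.2.3"}
first = resolve_configuration_value("revision", config, context)
second = resolve_configuration_value("url", config, context)
print(first, second)
```

The appeal is that plugins no longer need a separate contextKeys hash: any config value can be redirected at the context by the app developer.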
I think that suggestion has a slightly different intention than what I had. So maybe I'm still slightly on a different page to you @lukemelia. So, my understanding of the intention of your suggestions is: "Retrieve config from the config object, but, if the config value begins with $context: then assume that you should be getting the value from the context object instead".
The idea behind this PR, and my take on what we were talking about is: "as a plugin I need to access a value from the context object. For my internal purpose I'm going to refer to that value as revision and access from context.revision. However, if a user has decided that they would like me to retrieve the revision from another property on the context, say, context.tag then I will get it from there instead."
These seem to be two slightly different goals.....both, I think, are valid though.
Thoughts?
Oh, and in answer to your first question, from the perspective of what I was trying to achieve, I used the contextKeys hash as a way of being a bit more intention revealing. This hash is the list of keys that this plugin cares about on the context object. If plugins are going to become dependent on data placed on the context object by other plugins, it could get quite unwieldy trying to understand where certain data is assumed to have come from. This was an attempt at making it a bit more explicit, so that developers, and other plugin authors that are using a plugin or maybe looking for inspiration about how to write a plugin, can quickly see which context keys the plugin relies upon.
That was the intention anyway. It doesn't have to be like that.
I get the idea of being able to customize the revision context the redis plugin will use because plugin authors might write their revision-key to another property than context.revision but wouldn't it be easier for the end-users to not care about that and the plugin authors just behaving in a convention over configuration way?
In essence, by moving this responsibility, every end-user will need to know how their deploy-plugins behave in detail, in contrast to plugin authors knowing the conventions and writing their revision-key to the correctly named property (which would be documented, of course)?
@LevelbossMike @achambers I agree that convention-over-configuration is still essential. For golden path use cases, including the packages and doing unavoidable configuration (e.g. credentials and connection info) should be all it takes. Of course, it should be possible to wire together plugins in ways beside the golden path, too.
I've refined my idea for how we can accomplish this.
Goals:
A great out-of-the-box app developer experience for common deployment scenarios.
An API for app developers to define deployment pipeline configuration synchronously or asynchronously.
An API for plugin developers to provide static default configuration values.
An API for plugin developers to provide default configuration values that are derived at run-time from the data produced by plugin running prior to it in the pipeline.
An API for plugin developers to allow app developers to interact with plugin settings via command line flags.
An API for app developers to specify configuration of a plugin to use data produced by a plugin running prior to it in the pipeline.
Approach:
App developers use config/deploy.js to return a function that receives the build environment as a string and returns either a config object or a Promise that fulfills with a config object. The config object has properties corresponding to the name of the plugin (e.g. for ember-cli-deploy-redis, the property is “redis”). Supports goal #2 above. (This is implemented in the 0.5.0 WIP already.)
Examples:
// deploy.js (sync)
module.exports = function(environment){
var ENV = {
redis: {
url: process.env.REDIS_URL
}
}
return ENV;
};
// deploy.js (async)
module.exports = function(environment){
var ENV = {
redis: {
}
}
return someAsyncDataRetrieval(environment).then(function(data){
ENV.redis.url = data.redisUrl;
return ENV;
});
};
Plugin developers can implement a configure hook that runs at the beginning of pipeline execution (because “configure” is the first step of the pipeline). This hook has read/write access to the config object. It can specify default configuration values, as well as throw an Error in the case that a required configuration property was not provided. Supports goal #3 above. (This is implemented in the 0.5.0 WIP already, although we should provide plugins with a helper to define defaults and enforce required properties more expressively and more consistently with other plugins.)
Example:
// some-ember-cli-deploy-plugin/index.js
module.exports = {
name: 'ember-cli-deploy-myplugin',
createDeployPlugin: function(options) {
return {
name: options.name,
configure: function(context) {
var deployment = context.deployment;
var config = deployment.config[this.name] = deployment.config[this.name] || {};
config.filePattern = config.filePattern || "**/*.html"; // provide default
},
// ...
};
}
};
Plugin developers can also use the configure hook to specify a default configuration property as a function, which will be called at run-time when a plugin wishes to read and use the configuration value. The function will receive the context and must return a value or throw an Error. The context would allow access to data added to the pipeline by previous plugins, as well as flags set on the command line. Supports goals #4 and #5 above. (This is not yet implemented.)
Example:
// some-ember-cli-deploy-plugin/index.js
module.exports = {
name: 'ember-cli-deploy-myplugin',
createDeployPlugin: function(options) {
return {
name: options.name,
configure: function(context) {
var deployment = context.deployment;
var config = deployment.config[this.name] = deployment.config[this.name] || {};
config.revision = config.revision || function(context){
return context.deployment.revision;
};
// we could also provide a helper for this, e.g.
// config.revision = config.revision || fromPipelineData(context, "revision");
config.shouldActivate = config.shouldActivate || function(context){
return !!context.flags.activate; // set via `--activate` on the command line
};
},
// ...
};
}
};
App developers can also use this function-style configuration in config/deploy.js in order to wire together plugins. The function will receive the context and must return a value or throw an Error. Supports goal #6 above. (No additional implementation would be necessary if the above were implemented.)
Example:
// deploy.js
module.exports = function(environment){
var ENV = {
redis: {
revisionKey: function(context) { return context.deployment.tag; },
forceUpdate: function(context) { return context.flags.force; }
}
}
return ENV;
};
These approaches all combine to achieve goal #1 above.
@lukemelia this looks great! :+1:
Going to close this PR now as it's clear that the work on this branch isn't needed. I'm keen not to lose your thoughts, @lukemelia. Not sure what the best place to record them is.
I'm going to update the 0.5.0 RFC doc.
Cool
| gharchive/issue | zaquestion/lab | https://github.com/zaquestion/lab/issues/490 | created 2020-11-11 | authors: bmeneguele, huhuang03 | license: CC0-1.0 |
build failed by undefined: gitlab.ListDescendantGroupsOptions on branch projectgroups
I want to use your create group feature https://github.com/zaquestion/lab/pull/450.
But when I run go build on branch projectgroups, this error is thrown:
xx@xxs-MacBook-Pro lab % go get
go: downloading github.com/inconshreveable/mousetrap v1.0.0
go: downloading google.golang.org/appengine v1.6.6
go: downloading github.com/golang/protobuf v1.4.2
go: downloading google.golang.org/protobuf v1.25.0
# github.com/zaquestion/lab/internal/gitlab
internal/gitlab/gitlab.go:769:27: lab.Groups.ListDescendantGroups undefined (type *gitlab.GroupsService has no field or method ListDescendantGroups)
internal/gitlab/gitlab.go:769:62: undefined: gitlab.ListDescendantGroupsOptions
I can't figure out how to resolve this..
Hi @huhuang03,
This branch/merge request is outdated with respect to the go-gitlab version that contains the Groups.ListDescendantGroups support.
Try to change the following line:
https://github.com/zaquestion/lab/blob/62523a3a712276b9a532e2ce92418a3dc6a7c476/go.mod#L25
to point to version 0.38.2 instead.
This issue should be solved by the current state of the pull request.
| gharchive/issue | zargony/atom-language-rust | https://github.com/zargony/atom-language-rust/issues/26 | created 2015-04-08 | authors: mcanders, zargony | license: MIT |
Add '_' character to 'Non Word Characters' list
I often want to select a part of a variable name. Consider the variable name "line_index" with the cursor right before the 'l' ([cursor]line_index). In order to select "line", I currently need to move to the end of "line" (either by hitting the "right arrow" key four times or clicking it with the mouse shudders) and hit control+shift+left to select to the beginning of the word. Adding '_' to the 'Non Word Characters' list in the package settings allows me to select "line" by hitting ctrl+shift+right. This emulates Sublime Text's alt+shift+right behavior.
I'm not sure if this would be a good idea. Most people probably expect underscores to be part of a word so that you can double-click or ctrl-shift-w on a variable or function name to select it. I'd vote for not adding '_' to the non word characters list.
I'm closing this since nobody else seems to be interested in changing this. I'm taking that as a sign that the current behavior is fine ;)
| gharchive/pull-request | zarr-developers/blog | https://github.com/zarr-developers/blog/pull/61 | created 2024-11-28 | authors: MSanKeys963, joshmoore | license: CC-BY-4.0 |
Steering council membership update (November 2024)
see: https://github.com/zarr-developers/governance/pull/45
cc: @zarr-developers/steering-council
LGTM.
| gharchive/issue | zbirenbaum/copilot-cmp | https://github.com/zbirenbaum/copilot-cmp/issues/106 | created 2024-04-21 | authors: kschweiger, ravibrock | license: MIT |
debounce option in copilot.lua doesn't seem to apply for cmp suggestions
Hi! I'm opening this issue since I was hoping for a way to ask Copilot to wait for some x amount of time before bringing up suggestions again. I've found that I often run into an issue where I'll be typing, say, function() and then I'll hit <CR> to get a newline and continue typing the function body. However, because I have <CR> mapped to cmp confirmation and Copilot typically generates a suggestion by the time I've typed func..., I end up confirming that suggestion rather than just getting a newline. My idea was an option (that ideally debounce would work for) where typing a character cancels the Copilot suggestion for the specified number of milliseconds, so that if I'm typing quickly Copilot suggestions don't appear but if I stop typing then the Copilot suggestion appears. I hope that makes sense. Thank you!
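What's being asked for here is a trailing debounce: each keystroke cancels the pending suggestion and restarts a timer, and the suggestion only fires once the timer survives the delay without another keystroke. A language-agnostic sketch of that logic (the 75 ms value is illustrative):

```python
# Sketch of a trailing debounce: suggestions are suppressed while the
# user is typing and only allowed after `delay` seconds of quiet.

class Debouncer:
    def __init__(self, delay):
        self.delay = delay
        self.last_key = None

    def keystroke(self, t):
        """A keystroke at time `t` restarts the quiet-period timer."""
        self.last_key = t

    def should_suggest(self, t):
        """Suggest only if `delay` seconds have passed since the last key."""
        return self.last_key is not None and t - self.last_key >= self.delay

d = Debouncer(delay=0.075)      # 75 ms quiet period (illustrative)
d.keystroke(0.000)
d.keystroke(0.050)              # typing quickly: timer restarts
early = d.should_suggest(0.100) # only 50 ms since last key
late = d.should_suggest(0.130)  # 80 ms of quiet
print(early, late)
```

With this behaviour, fast typing (e.g. `function()<CR>`) never surfaces a suggestion in time to be accidentally confirmed, while pausing still brings one up.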
Would also really like to have some kind of delay. I think there is already a (quite old) branch adding this. Is there a particular reason why it never made it to master?
| gharchive/issue | zc2638/swag | https://github.com/zc2638/swag/issues/19 | created 2022-11-22 | authors: pobearm, zc2638 | license: Apache-2.0 |
build failed from example
Error: Schema not declared by package endpoint.
Build error with latest code v1.14.0.
There is a cache in goproxy; v1.14.0 is a wrong version, v1.4.1 should be used.
| gharchive/issue | zcoinofficial/zcoin | https://github.com/zcoinofficial/zcoin/issues/273 | created 2018-10-05 | authors: levonpetrosyan93, reubenyap, ultimaweapon | license: MIT |
Upgrade Tor to latest version
See @reubenyap comment at https://github.com/zcoinofficial/zcoin/pull/249#issuecomment-427244780
https://blog.torproject.org/new-release-tor-0348-also-other-stable-updates-02917-03212-and-03310
Latest TOR release is <IP_ADDRESS>
Latest release now is <IP_ADDRESS>. Let's make sure this is reflected in Trello.
Card created.
Latest version <IP_ADDRESS> now is https://blog.torproject.org/new-release-tor-0405. @riordant will be taking it.
@levonpetrosyan93 has done some work on it and will be completing soon.
Yes, it is working, the PR will be created today.
#525
| gharchive/pull-request | zcoinofficial/zcoin | https://github.com/zcoinofficial/zcoin/pull/599 | created 2019-08-10 | authors: levonpetrosyan93, psolstice, riordant | license: MIT |
Sigma index fix
PR intention
It's possible for a block that was removed from the blockchain to be re-inserted later. This PR fixes a rare condition of not re-adding spends and mints into the index after such an operation
Code changes brief
In CSigmaState::RemoveBlock() clearing mints and spends from the block index should not be done
But if we remove a block and connect another block, these two containers are cleared in the ConnectBlockSigma function anyway.
LGTM
| gharchive/issue | zcreativelabs/react-simple-maps | https://github.com/zcreativelabs/react-simple-maps/issues/132 | created 2019-03-19 | authors: AdriaanKuipers, guilleva, whiterook6, zimrick | license: MIT |
Zoom around cursor
I can implement a nice zoom function like this:
public componentDidMount(){
this.node = ReactDOM.findDOMNode(this.refs.ComposableMap);
if (this.node){
this.node.addEventListener('mousewheel', this.handleScroll);
}
}
public componentWillUnmount() {
if (this.node){
this.node.removeEventListener('mousewheel', this.handleScroll);
}
}
public handleScroll = (event: WheelEvent) => {
const { deltaY } = event;
const clampedDeltaY = clamp(-50, 50, deltaY); // clamp(lower, upper, value), e.g. from lodash/fp
const newZoom = this.state.zoom * (1 + (clampedDeltaY / 100));
const clampedNewZoom = clamp(1, 10, newZoom);
this.setState({zoom: clampedNewZoom});
}
// ...
<ComposableMap ref="ComposableMap">
<ZoomableGroup zoom={this.state.zoom}>
And if I'm really clever, I can grab the position of the mouse from the mouse event and even invert it with the zoomable projection, though I'm not sure that takes into account the panning functionality of the zoomable group
But I would like to be able to update the center of the map based on where the cursor when the zoom happens. The zoom should center around the cursor when using a mouse, like in this example: https://bl.ocks.org/mbostock/2206340
Is there a way to update the center while respecting the panning from the map as well? If I hover my cursor above europe and scroll the wheel I don't want to zoom in off the coast of africa.
I'm looking for the exact same thing.
You can keep track of the center like this
handleMoveEnd(newCenter) {
this.setState({
center: newCenter
});
}
And
<ZoomableGroup
onMoveEnd={this.handleMoveEnd}
>
Still missing compensation for the zoom level: demo
based on this
Basic working example here: demo
based on this
I had something close to working as you describe it; move center to cursor on zoom. But it became very difficult to control as the map will jump the whole time. Most online maps seem to make sure the location underneath the cursor stays the same whilst zooming.
This might help you: codesandbox
I'm not quite there yet. I didn't find the exact formula yet to make sure the same location stays under the cursor, and zooming far away from the center gives weird results. But it's a step in the right direction, I think.
If anyone finds how to do this, please let me know.
@AdriaanKuipers did you manage to get working properly? I'm also trying to accomplish the same thing.
@guilleva not really. this codesandbox is a bit smoother. I think the only difference is adding 1 to zoom/subtracting 1 from zoom when zooming, instead of multiplication/division.
Zooming in and out on Europe works pretty well. But if you test Australia, you'll find that it's still bugged... First zoom in is fine, second will move you to Africa.
I think zoom needs to be compensated for somewhere before projection.invert. But I haven't looked at this for over a month now; it's all a bit misty...
thanks @AdriaanKuipers I also tried something similar without any luck! :(
@guilleva the problem is feeding projection.invert with values outside of the box. (below zero, or above width/height of Composeable map)
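For reference, the invariant most online maps maintain is that the world point under the cursor stays fixed across the zoom. The math (computed per axis) can be sketched with plain numbers standing in for projected coordinates — this is not the react-simple-maps API, just the underlying formula:

```javascript
// Sketch of "zoom around cursor" math. Invariant: the world point under
// the cursor must be the same before and after the zoom.
function zoomAroundCursor(center, zoom, newZoom, cursor, screenCenter) {
  // World point currently under the cursor:
  const world = center + (cursor - screenCenter) / zoom;
  // Choose the new center so the same world point stays under the cursor:
  return world - (cursor - screenCenter) / newZoom;
}

// Example: cursor 100px right of the screen center, zooming 1x -> 2x.
const newCenter = zoomAroundCursor(0, 1, 2, 300, 200);
console.log(newCenter); // 50: world point 100 stays under the cursor
```

Feeding the result through the projection (instead of raw pixels) should avoid the out-of-box projection.invert problem mentioned above.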
Closing this. Proper wheel and scroll zoom are now implemented and should be working in a robust way on both desktop and mobile with v2.
|
2025-04-01T04:36:01.940468
| 2023-01-23T16:30:38
|
1553399806
|
{
"authors": [
"TENX-S",
"leslie255"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12285",
"repo": "zed-industries/community",
"url": "https://github.com/zed-industries/community/issues/887"
}
|
gharchive/issue
|
Add a http_proxy setting
Check for existing issues
[X] Completed
Describe the feature
FYI, programmers in China are blocked by the GFW, so we need a VPN to access the internet freely. When I open Zed from the launcher, it downloads the LSP server automatically, which is not going to succeed because of the GFW. So I need to set http_proxy in a terminal session and open Zed via the CLI.
It would be really helpful if there were an http_proxy setting. VS Code has one:
If applicable, add mockups / screenshots to help present your vision of the feature
No response
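For anyone hitting this, the CLI workaround described above looks roughly like this (the proxy address is hypothetical; substitute your own):

```shell
# Hypothetical local proxy address -- replace with your own.
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
# Then launch Zed from this shell so it inherits the proxy:
# zed .
```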
Or at least make it so that it respects the system proxy setting? It doesn't seem to do that in the current version
|
2025-04-01T04:36:02.000599
| 2017-05-03T18:51:06
|
226080393
|
{
"authors": [
"arunoda",
"lukeed"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12286",
"repo": "zeit/next.js",
"url": "https://github.com/zeit/next.js/pull/1864"
}
|
gharchive/pull-request
|
Add React production aliases to Babel config (SSR)
The second half of my previous PR (#1855). I mistakenly thought this wasn't needed, but I was focused on client-side output only. Oopsie
This sets the proper Babel aliases so that the server (next start) will know to use the pre-minified, production files.
This PR should follow #1862.
Check the error on Travis. That's what I am talking about. Let's close this chat on the other thread.
Feel free to send a new one :D
|
2025-04-01T04:36:02.003336
| 2017-01-03T03:38:17
|
198405553
|
{
"authors": [
"aranajhonny",
"rauchg"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12287",
"repo": "zeit/next.js",
"url": "https://github.com/zeit/next.js/pull/634"
}
|
gharchive/pull-request
|
Aphrodite example.
Using aphrodite for styling. (supports SSR)
Live demo --> https://with-aphrodite-umezkaqkzp.now.sh/
The original example from the Aphrodite GitHub page, running on this setup: https://with-aphrodite.now.sh/
Always really impressive @aranajhonny !
|
2025-04-01T04:36:02.006772
| 2019-05-02T15:36:27
|
439655683
|
{
"authors": [
"ijjk"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12288",
"repo": "zeit/next.js",
"url": "https://github.com/zeit/next.js/pull/7220"
}
|
gharchive/pull-request
|
Add experimental routing
This adds support for dynamic routing.
To use it you identify a dynamic route by creating a page prefixed with $ in your pages folder.
Example pages tree:
/$post/index.js
/$post/comments.js
/$post/$comment.js
/index.js
/another.js
Generated routes from above tree:
[
  { path: '/', component: '/index.js' },
  { path: '/another', component: '/another.js' },
  { path: '/:post/comments', component: '/$post/comments.js' },
  { path: '/:post/:comment', component: '/$post/$comment.js' },
  { path: '/:post', component: '/$post/index.js' },
]
For dynamic routes we populate query in ctx for getInitialProps and on next/router
This results in: query: { post: 'post-1' } for /post-1 and query: { post: 'post-1', comment: 'cmnt-1'} for /post-1/cmnt-1
To navigate to a dynamic route using next/link you specify the component with the href and the path with as. This allows us to not have to ship a manifest of all routes to the client.
Example:
import Link from 'next/link'

export default () => (
  <div>
    <p>My blog</p>
    <Link href='/$post' as='/post-1'>
      <a>View post 1</a>
    </Link>
    <Link href='/$post/comments' as='/post-1/comments'>
      <a>View post 1 comments</a>
    </Link>
    <Link href='/$post/$comment' as='/post-1/comment-1'>
      <a>View comment 1 on post 1</a>
    </Link>
  </div>
)
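For illustration, a page component could then read the populated query in getInitialProps; a minimal sketch (the file path follows the $-prefix convention above, and the component body is hypothetical):

```jsx
// pages/$post/index.js (sketch)
const Post = ({ post }) => <h1>Viewing post: {post}</h1>

// For /post-1, ctx.query is { post: 'post-1' }
Post.getInitialProps = ({ query }) => ({ post: query.post })

export default Post
```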
See mark 2 #7432
|
2025-04-01T04:36:02.008245
| 2019-06-19T08:10:56
|
457867761
|
{
"authors": [
"Timer",
"yassh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12289",
"repo": "zeit/next.js",
"url": "https://github.com/zeit/next.js/pull/7612"
}
|
gharchive/pull-request
|
fix: downgrade strip-ansi to v3 for IE 11
fix #7610
Instead of downgrading can you please make this package compiled in our webpack config?
|
2025-04-01T04:36:02.010266
| 2018-04-09T16:08:39
|
312592649
|
{
"authors": [
"antony",
"javivelasco"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12290",
"repo": "zeit/now-cli",
"url": "https://github.com/zeit/now-cli/issues/1278"
}
|
gharchive/issue
|
Now alias tries to set the alias alias.now.sh if you use a token
Depending on whether you pass --token first or last, the command works or tries to do something insane.
I'd expect this to work either way around.
>
$ now alias --token=my-token
> Assigning alias abc.now.sh to deployment aaa-bbb.now.sh
> Success! abc.now.sh now points to aaa-bbb.now.sh! [3s]
>
$ now --token=my-token alias
> Assigning alias alias.now.sh to deployment aaa-bbb.now.sh
> Error! The alias alias.now.sh is in use by a different team.
Possibly related to #1262
$ now -v
11.0.6
Should I be using a canary version?
@antony no need to use canary but errors should be always reproduced on canary before opening the issue. In this case it's a solved issue :) Thanks for reporting though
|
2025-04-01T04:36:02.011074
| 2017-05-05T05:57:41
|
226480945
|
{
"authors": [
"jamo",
"toomu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12291",
"repo": "zeit/now-cli",
"url": "https://github.com/zeit/now-cli/issues/509"
}
|
gharchive/issue
|
Access terminal
Is there a way to ssh into my now instance where I am deploying my application ?
It's not currently supported
|
2025-04-01T04:36:02.015372
| 2018-02-10T14:16:08
|
296103988
|
{
"authors": [
"codeofsumit"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12292",
"repo": "zeit/serve",
"url": "https://github.com/zeit/serve/issues/324"
}
|
gharchive/issue
|
ERR_CONTENT_LENGTH_MISMATCH
If I set chrome dev tools to resemble a "low tier" device, I'm getting ERR_CONTENT_LENGTH_MISMATCH error when my SPA tries to load the main javascript file.
I can't necessarily say what the issue is, but from my research everyone points to the server. I'm using serve -s to serve my single page app in a deployment on now, and the app.js is around 1MB (I know, way too large).
Could this have something to do with serve, or am I completely off?
nevermind, not a problem of serve
|
2025-04-01T04:36:02.022479
| 2019-11-08T00:37:50
|
519592299
|
{
"authors": [
"giuseppeg",
"kkangil"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12293",
"repo": "zeit/styled-jsx",
"url": "https://github.com/zeit/styled-jsx/issues/593"
}
|
gharchive/issue
|
Component(constants) having props with children doesn't working
Do you want to request a feature or report a bug?
bug
What is the current behavior?
https://github.com/zeit/styled-jsx#constants
A component (constant) that has props and receives children doesn't work as I expected, like below:
const App = () => {
  return (
    ...
    <Title color={"red"}>
      <div>
        <h1>TITLE</h1>
        <p>DESCRIPTION</p>
      </div>
    </Title>
    ...
  )
}

const Title = ({ children, color }) => (
  <div className={'test'}>
    {children}
    <style jsx>{`
      .test h1 { color: ${color}; }
      .test p { color: ${color}; }
    `}</style>
  </div>
);
I can't understand why this code is not working.
But the code below works well.
const App = () => {
  return (
    ...
    <Title color={"red"}>
      <h1>TITLE</h1>
    </Title>
    ...
  )
}

const Title = ({ children, color }) => (
  <h1 className={'test'}>
    {children}
    <style jsx>{`
      .test { color: ${color}; }
    `}</style>
  </h1>
);
I wonder why it isn't working. How can I resolve it?
What is the expected behavior?
I think .test h1, .test p styles should be working.
Environment (include versions)
OS: macOS Catalina
Browser: Chrome 77
styled-jsx (version):NextJS 8.0.3 bundled<EMAIL_ADDRESS>
@kkangil styles are scoped to the component that's why they don't apply to children. You can read more about this here https://github.com/zeit/styled-jsx#styling-third-parties--child-components-from-the-parent
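Per the linked docs, one way around the scoping is `:global()`, which opts a selector out of styled-jsx's scoping; a sketch (use with care, since the global part of the selector is no longer isolated):

```jsx
const Title = ({ children, color }) => (
  <div className={'test'}>
    {children}
    <style jsx>{`
      .test :global(h1) { color: ${color}; }
      .test :global(p) { color: ${color}; }
    `}</style>
  </div>
);
```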
|
2025-04-01T04:36:02.035132
| 2017-04-25T18:14:16
|
224227365
|
{
"authors": [
"hanishassim",
"zekunyan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12294",
"repo": "zekunyan/TTGTagCollectionView",
"url": "https://github.com/zekunyan/TTGTagCollectionView/issues/21"
}
|
gharchive/issue
|
Tag background view color on tableview cell highlight and unhighlight
Hi, please have a look at this tag view inside the table view cell:
But when I highlight the cell, before releasing my touch (touch up), the tag view background color disappears, displaying this result:
How can I make the tag view background color retain its original color?
Demo code ?
@hanishassim Can you provide a Demo project ?
|
2025-04-01T04:36:02.063515
| 2015-12-05T16:18:04
|
120569181
|
{
"authors": [
"knownasilya",
"orzarchi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12295",
"repo": "zemirco/json2csv",
"url": "https://github.com/zemirco/json2csv/pull/89"
}
|
gharchive/pull-request
|
Add failing test to illustrate problem with defaultValue of an empty string
The problem only manifests when the JSON object has a field that is null (not undefined), and the option 'defaultValue' is set to an empty string.
Ah, thank you!
|
2025-04-01T04:36:02.090622
| 2019-09-11T21:39:54
|
492480386
|
{
"authors": [
"nogates",
"orien"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12296",
"repo": "zendesk/zendesk_api_client_rb",
"url": "https://github.com/zendesk/zendesk_api_client_rb/pull/394"
}
|
gharchive/pull-request
|
Use SVG badges on readme, instead of PNG
The .png badges look pretty terrible on high resolution monitors:
How about using .svg instead:
Thanks @orien ! that looks much better!
|
2025-04-01T04:36:02.093416
| 2018-03-26T03:41:06
|
308423892
|
{
"authors": [
"svizzari"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12297",
"repo": "zendesk/zendesk_apps_tools",
"url": "https://github.com/zendesk/zendesk_apps_tools/pull/235"
}
|
gharchive/pull-request
|
[ZD3408278][AF-1046] 'zat' errors out on 'clean' with "uninitialized" FileUtils
✌️
Description
When running ZAT commands under certain Ruby versions, the FileUtils class wasn't being auto-loaded, resulting in a NameError.
c:/Ruby23/lib/ruby/gems/2.3.0/gems/zendesk_apps_tools-2.8.3/lib/zendesk_apps_tools/command.rb:128:in `clean': uninitialized constant ZendeskAppsTools::Command::FileUtils (NameError)
Did you mean? FileTest
from c:/Ruby23/lib/ruby/gems/2.3.0/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
...
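The usual fix for this class of NameError is an explicit require instead of relying on the module being auto-loaded by another gem; a minimal sketch (the directory name is just for demonstration):

```ruby
# Explicitly require the stdlib module instead of relying on it being
# auto-loaded elsewhere:
require 'fileutils'

# FileUtils now resolves without a NameError:
FileUtils.mkdir_p('tmp_demo')
```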
References
Ticket: https://support.zendesk.com/agent/tickets/3408278
Risk
N/A
/cc @bryan-flynn-zd
|
2025-04-01T04:36:02.101875
| 2019-03-18T17:05:04
|
422333241
|
{
"authors": [
"guilherme90",
"weierophinney"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12298",
"repo": "zendframework/zend-expressive-swoole",
"url": "https://github.com/zendframework/zend-expressive-swoole/issues/65"
}
|
gharchive/issue
|
The command "start" does not exist.
I can not run the swoole. He says that the command "start" does not exist.
Code to reproduce the issue
$ ./vendor/bin/zend-expressive-swoole start
Expected results
Start the server with Swoole. I am using Cygwin 64-bit.
Actual results
It returns an error:
The command "start" does not exist.
What version have you installed? Can you copy and paste the result of composer show, please?
Sure!
Expressive installed: 3.2.1
More details: https://gist.github.com/guilherme90/27c7e45cd50936bbbd1920042aacbb0b
This looks like it may be a problem with Cygwin and/or your shell. Can you look at the following link and see if it helps you resolve the issue:
https://stackoverflow.com/questions/20307618/symfony2-cygwin-cli-php-commands-error
Well, I've been trying without success. But this is the only command I'm having trouble with. If I execute commands like composer expressive module:create ..., they work normally.
So far, I am unable to reproduce it; I have a bare Expressive skeleton in which I've installed zend-expressive-swoole 2.4.0, and the start command works as expected. The only part that differs that I can tell is the operating system; I'm on Ubuntu Linux. That said, the OS should only be an issue for certain PHP extensions.
The other possibility: are you running the command from within your project root directory? If not, that would be a problem; otherwise, it should work.
I agree, I'm thinking it's a problem with my OS. I'm using Windows 10. And yes, I'm running at the root of the project.
Can you do some testing, virtualizing Win10, installing Cygwin?
Thanks.
Can you do some testing, virtualizing Win10, installing Cygwin?
Honestly, no. Doing so requires a ton of time: download time; initial virtual image initialization; building and installing cygwin once I have a running vm; finding and installing an appropriate PHP package from cygwin; hunting down all dependencies needed to build the Swoole extension once I have; building the swoole extension; and that's all before I can even create a project to test against. This would likely be a 4-8 hour project, to solve an issue for a non-production system. I simply cannot justify it, unfortunately.
You have two other options:
Use the Windows Subsystem for Linux (WSL); versions from last fall at least worked with the Ubuntu 18.04 version used by WSL, though it's possible recent versions do not.
Use virtualbox or vmware to run a linux VM, and use the PHP in the VM. One good option is Laravel Homestead (we've even documented running Expressive on it).
Otherwise, see if you can find somebody else running Cygwin within the Slack, and see if they can help you debug.
Right!
So, you can close this issue. Thanks.
|
2025-04-01T04:36:02.127841
| 2016-03-17T18:07:19
|
141662615
|
{
"authors": [
"Okeanrst",
"ThaDafinser",
"gavinlimely",
"wagnerjsilva",
"webimpress",
"weierophinney",
"xLeonius"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12299",
"repo": "zendframework/zend-mvc",
"url": "https://github.com/zendframework/zend-mvc/issues/103"
}
|
gharchive/issue
|
Deprecated error. Need a quick solution please!
I have been searching for a solution to this issue for 3 days and I couldn't find an appropriate one. Every time I get the same issue; it just says deprecated. Here is what I get on my website:
Deprecated: ServiceManagerAwareInterface is deprecated and will be removed in version 3.0, along with the ServiceManagerAwareInitializer. Please update your class SmallUser\Service\User to remove the implementation, and start injecting your dependencies via factory instead. in C:\Apache24\htdocs\pserverCMSFull\vendor\zendframework\zend-mvc\src\Service\ServiceManagerConfig.php on line 123
Deprecated: You are retrieving the service locator from within the class PServerCore\Controller\IndexController. Please be aware that ServiceLocatorAwareInterface is deprecated and will be removed in version 3.0, along with the ServiceLocatorAwareInitializer. You will need to update your class to accept all dependencies at creation, either via constructor arguments or setters, and use a factory to perform the injections. in C:\Apache24\htdocs\pserverCMSFull\vendor\zendframework\zend-mvc\src\Controller\AbstractController.php on line 258
Deprecated: ServiceManagerAwareInterface is deprecated and will be removed in version 3.0, along with the ServiceManagerAwareInitializer. Please update your class PServerCore\Service\News to remove the implementation, and start injecting your dependencies via factory instead. in C:\Apache24\htdocs\pserverCMSFull\vendor\zendframework\zend-mvc\src\Service\ServiceManagerConfig.php on line 123
Deprecated: ServiceManagerAwareInterface is deprecated and will be removed in version 3.0, along with the ServiceManagerAwareInitializer. Please update your class PServerCore\Service\ConfigRead to remove the implementation, and start injecting your dependencies via factory instead. in C:\Apache24\htdocs\pserverCMSFull\vendor\zendframework\zend-mvc\src\Service\ServiceManagerConfig.php on line 123
What I tried:
Tried to remove the error reporting part from the .php files which cause these errors.
Tried to re-download the whole zend-mvc folder from github, didn't work.
Please keep in mind that I am not an expert in such things, so please give me detailed steps to fix this issue. I'd really be grateful ;)
I've addressed this before in #89, in a comment. Essentially, we're deprecating the usage of getServiceLocator() for the upcoming version 3, and providing you with deprecation notices whenever you use that method from a controller.
There are a few solutions:
In your error_reporting, disable E_USER_DEPRECATED reporting. This just masks the problem.
Pin to an earlier version of zend-mvc (e.g., composer require "zendframework/zend-mvc:~2.6.0" will pin specifically to the 2.6 series, and will not install the 2.7 series). This, again, just masks the problem, and will potentially leave your application insecure if security patches are applied to a later minor release of zend-mvc.
Fix your code to no longer use getServiceLocator(). This is the recommended path.
The way to accomplish this latter point is to ensure that all dependencies for your controller are injected during instantiation. This will mean:
You need to create factories for your controllers.
You will need to update your controllers to accept dependencies in their constructors that were previously pulled from getServiceLocator().
As an example, let's say you had something like this in your controller:
$db = $this->getServiceLocator()->get('Db\ApplicationAdapter');
You would change your code as follows:
Add a $db property to your class.
Update your constructor to accept the database adapter, and assign it to the property.
Change the above line to simply $db = $this->db (or just use the property!)
Add a factory for your controller, if one does not currently exist.
So:
use Zend\Db\Adapter\AdapterInterface;
use Zend\Mvc\Controller\AbstractActionController;

class YourController extends AbstractActionController
{
    private $db;

    public function __construct(AdapterInterface $db)
    {
        $this->db = $db;
    }

    public function someAction()
    {
        $results = $this->db->query(/* ... */);
        /* ... */
    }
}
Your factory would look something like this:
class YourControllerFactory
{
    public function __invoke($container)
    {
        return new YourController($container->get('Db\ApplicationAdapter'));
    }
}
In your application or module configuration, you would map this factory to your controller:
return [
    'controllers' => [
        'factories' => [
            YourController::class => YourControllerFactory::class,
            /* ... */
        ],
        /* ... */
    ],
    /* ... */
];
This may seem like a lot of steps. However, it ensures your code has no hidden dependencies, improves the testability of your code, and allows you to do cool things like substitute alternatives via configuration.
One frequently asked question we receive is: what about dependencies that are only required sometimes, like forms, or a database adapter, etc?
You have two options:
Split your controller into separate responsibilities, and use the more specific controllers. This way you don't need to inject dependencies that are only used in some actions. (I recommend you start doing this regardless, as it helps keep your code more maintainable.)
Use lazy services: when you configure these, zend-servicemanager gives you a proxy instance that, on first access, loads the full service. This allows you to delay the most expensive operations until absolutely needed.
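For the lazy-service option, the configuration shape is roughly the following (a sketch from memory of the zend-servicemanager v3 docs; ExpensiveService and its factory are hypothetical names, so check the official lazy-services documentation for the exact keys):

```php
<?php
use Zend\ServiceManager\Proxy\LazyServiceFactory;

return [
    'service_manager' => [
        'factories' => [
            ExpensiveService::class => ExpensiveServiceFactory::class,
        ],
        // Wrap the service in a lazy-loading proxy:
        'delegators' => [
            ExpensiveService::class => [LazyServiceFactory::class],
        ],
        'lazy_services' => [
            'class_map' => [
                ExpensiveService::class => ExpensiveService::class,
            ],
        ],
    ],
];
```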
Good luck!
What about $this->serviceLocator->get('Error_Logger')?
I seem to be able to do that from the controller without any deprecation messages.
For example:
in module.config.php
'controllers' => array(
    'factories' => array(
        'Collection' => 'OkeanrstBooks\Factory\Controller\CollectionControllerFactory',
    ),
),
In CollectionControllerFactory:
class CollectionControllerFactory implements FactoryInterface
{
    public function __invoke(ContainerInterface $container, $name, array $options = null)
    {
        $parentLocator = $container->getServiceLocator();
        $ErrorLogerService = $parentLocator->get('Error_Logger');
        return new CollectionController($ErrorLogerService);
    }

    public function createService(ServiceLocatorInterface $container)
    {
        return $this($container, CollectionController::class);
    }
}
in CollectionController:
public function __construct($ErrorLogerService)
{
    $this->errorLogerService = $ErrorLogerService;
}

public function indexAction()
{
    $this->errorLogerService->log($error);
}
Or manually inject the serviceLocator into the Controller from the ControllerFactory.
2016-04-28 17:20 GMT+03:00 Wagner Silva<EMAIL_ADDRESS>
What about $this->serviceLocator->get('Error_Logger')?
I seem to be able to do that from the controller without any deprecation
messages.
@wagnerjsilva —
What about $this->serviceLocator->get('Error_Logger')?
I seem to be able to do that from the controller without any deprecation messages.
Don't do that.
Why? Because once you update to zend-servicemanager v3, and zend-mvc v3, the property not only is not injected, but does not exist. Follow the instructions from my previous comment.
Probably good to add a deprecation message there too.
I've always used the service manager to prevent objects being instantiated multiple times.
Will the new approach give us any sort of functionality as Zend 1 Zend_Registry?
class YourControllerFactory
{
    public function __invoke($container)
    {
        return new YourController($container->get('Db\ApplicationAdapter'));
    }
}
I can always check the source code later, but will the get function prevent the object from being instantiated multiple times, or do we need to go to another library to achieve that?
@wagnerjsilva —
Regarding:
Probably good to add a deprecation message there too.
We can't add deprecation notices for property access; PHP doesn't allow intercepting that.
Regarding:
I've always used the service manager to prevent objects being instantiated multiple times.
and
will the get function prevent the object from being instantiated multiple times, or do we need to go to another library to achieve that?
This is and has always been the case; the service manager caches instances after initial creation (unless configuration indicates a particular service is not shared). get() will return the same instance on each request.
Regarding:
Will the new approach give us any sort of functionality as Zend 1 Zend_Registry?
No, because zend-servicemanager never used that approach, and was in fact a rebuke of that approach. Zend_Registry was a global singleton, which is an anti-pattern (makes testing difficult, hides dependencies). zend-servicemanager was intended to address the short-comings of that class. However, by defining ServiceLocatorAwareInterface and recommending usage of getServiceLocator(), we were still falling prey to many of the same problems. The current deprecation for v3 finally addresses those definitively.
Nice! Thanks for clearing that up.
Hi @weierophinney, thanks for clearing this up.. Just to clarify, say I have a UsersController that has 5 actions, but only one of the actions requires a dependency, is there any way for me to load the dependency for just that one action? Or do I need to add it to the constructor so it can potentially be used by all of the actions (even though it's not needed).
Cheers
@gavinlimely there are two approaches described in documentation
thank you for the quick reply @webimpress, is there any documentation you know of that uses this in controller context?
Since ZF is one of the best customizable framework, you can of course "avoid" this deprecation issue.
But please be aware that this workaround MUST be a TEMPORARY solution
It will not prevent you from correcting your code at the end to upgrade to v3
Use the application.config.php to avoid the log messages from ServiceLocator
https://gist.github.com/ThaDafinser/9cb57e0ca70c6fe830bc52f94f14535b#file-application-config-php
And for the controller part, you need all other files
https://gist.github.com/ThaDafinser/9cb57e0ca70c6fe830bc52f94f14535b
|
2025-04-01T04:36:02.139502
| 2022-11-17T10:13:47
|
1453072392
|
{
"authors": [
"bcdurak",
"fa9r"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12300",
"repo": "zenml-io/zenml",
"url": "https://github.com/zenml-io/zenml/pull/1077"
}
|
gharchive/pull-request
|
The ultimate optimization for performance
Describe changes
Small typo fixed. More information will be available soon. Stay tuned!
Pre-requisites
Please ensure you have done the following:
[X] I have read the CONTRIBUTING.md document.
[ ] If my change requires a change to docs, I have updated the documentation accordingly.
[ ] If I have added an integration, I have updated the integrations table and the corresponding website section.
[X] I have added tests to cover my changes.
Types of changes
[X] Bug fix (non-breaking change which fixes an issue)
[X] New feature (non-breaking change which adds functionality)
[X] Breaking change (fix or feature that would cause existing functionality to change)
[X] Other (add details above)
trivial changes, LGTM
|
2025-04-01T04:36:02.159701
| 2020-09-09T20:22:37
|
697132186
|
{
"authors": [
"alexkrolick",
"jbean96"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12301",
"repo": "zenoamaro/react-quill",
"url": "https://github.com/zenoamaro/react-quill/issues/660"
}
|
gharchive/issue
|
Inline Style ordering is inconsistent
My application is porting over to using ReactQuill as our new rich-text-editor. We need to be backwards compatible with our old rich text format which was to apply all styles inline (hence the registering of custom bold/italic style attributors to Quill).
ReactQuill (or perhaps the Quill editor it uses under the hood) seems to be inconsistent with its ordering of these inline style attributes. This is a problem because these re-ordering changes are surfaced as a "change" to the data model currently being displayed to the user, so they need to save changes to the rich text even though no changes were actually made.
Ideally we would change our data storage format to not be the raw HTML and instead be the Quill Delta, however, we are unable to do that as we need to be backwards compatible with existing rich text data.
In the application you can see that our initial HTML displayed by the editor should be: <p><span style="font-style: italic; font-weight: bold;">Text</span></p> however, the instant this is displayed, the onChange handler is called and indicates that the inner HTML is actually: <p><span style="font-weight: bold; font-style: italic;">Text</span></p> where the font-style and font-weight style attributes have been re-ordered.
If you start the editor with a default value of <p><span style="font-weight: bold; font-style: italic; ">Text</span></p> where the ordering of the attributes has been flipped the resulting HTML passed to the onChange handler continues to do re-ordering: <p><span style="font-style: italic; font-weight: bold;">Text</span></p>.
Ideally Quill/ReactQuill would either
A. have a consistent ordering of these attributes (i.e. font-style is always first, font-weight is always second, etc.) or
B. simply append new attributes to the end of the style string and not do any sort of re-ordering.
https://codesandbox.io/s/react-quill-template-forked-ocoge?fontsize=14&hidenavigation=1&theme=dark
Ticket due diligence
[x] I have verified that the issue persists under ReactQuill v2.0.0-beta.2
[ ] I can't use the beta version for other reasons
ReactQuill version
[ ] master
[x] v2.0.0-beta.2
[ ] v2.0.0-beta.1
[ ] 1.3.5
[ ] 1.3.4 or older
[ ] Other (fork)
FAQ
Is this a bug in Quill or ReactQuill?
It's very possible that this bug is a Quill issue, however, it's easiest to demonstrate with ReactQuill's API. Looking at the code I think the culprit may be how Quill converts a Delta to HTML and is surfaced in ReactQuill because of the use of editor.clipboard.convert.
How do I access the wrapped Quill instance?
See the instance methods and API documentation.
I think this is a Quill thing - it's part of how Quill handles the HTML conversion. I'd raise this over there.
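As a stopgap until it's addressed upstream, you could normalize inline style ordering before comparing the editor's HTML to your stored value, so a pure reordering doesn't register as a change; a sketch (the function name is ours):

```javascript
// Sort the declarations inside every style="..." attribute so that two
// HTML strings differing only in declaration order compare equal.
function normalizeInlineStyles(html) {
  return html.replace(/style="([^"]*)"/g, (match, decls) => {
    const sorted = decls
      .split(';')
      .map((d) => d.trim())
      .filter(Boolean)
      .sort()
      .join('; ');
    return `style="${sorted}"`;
  });
}

const a = '<p><span style="font-style: italic; font-weight: bold;">Text</span></p>';
const b = '<p><span style="font-weight: bold; font-style: italic;">Text</span></p>';
console.log(normalizeInlineStyles(a) === normalizeInlineStyles(b)); // true
```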
|
2025-04-01T04:36:02.168923
| 2020-12-21T14:08:05
|
772196572
|
{
"authors": [
"kbond"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12302",
"repo": "zenstruck/browser",
"url": "https://github.com/zenstruck/browser/pull/1"
}
|
gharchive/pull-request
|
[RFC] make assertions test framework agnostic
Currently, assertions have an unenforced dependency on PHPUnit. This PR adds a slim built-in assertion framework instead. When using this library with PHPUnit, the built-in assertions are converted to PHPUnit assertions.
FYI, I investigated using an existing assertion library like webmozart/assert but what I needed was the ability to hook into "assertion passed". I use this, with the PHPUnitHandler, to trigger a successful assertion. Without this, many of your tests would be marked as "risky" by PHPUnit.
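The "hook into assertion passed" design can be sketched as a pluggable handler that is notified on both pass and fail (illustrative Python, not zenstruck's actual API; class and method names are invented):

```python
# Sketch of an assertion layer whose handler sees successful assertions too,
# so a PHPUnit-style handler can record them and avoid "risky" test warnings.

class CountingHandler:
    def __init__(self):
        self.passed = 0
        self.failed = []

    def on_pass(self):
        self.passed += 1          # e.g. forward to PHPUnit's assertion counter

    def on_fail(self, message):
        self.failed.append(message)

class Asserter:
    def __init__(self, handler):
        self.handler = handler

    def true(self, condition, message="failed asserting condition is true"):
        if condition:
            self.handler.on_pass()    # the "assertion passed" hook
        else:
            self.handler.on_fail(message)

handler = CountingHandler()
asserter = Asserter(handler)
asserter.true(1 + 1 == 2)
asserter.true(False, "boom")
assert handler.passed == 1
assert handler.failed == ["boom"]
```

The key point is that a plain assertion library like webmozart/assert only surfaces failures; the pass-side callback is what lets the framework integration count assertions.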
Still on the fence with this one. I do now have 4 libraries that could benefit from such an "assertion" library:
(this one)
https://github.com/zenstruck/foundry
https://github.com/zenstruck/mailer-test
https://github.com/zenstruck/messenger-test
@wouterj, do you have any thoughts on this - I believe you use foundry outside of phpunit tests.
|
2025-04-01T04:36:02.179428
| 2018-10-10T10:31:45
|
368600217
|
{
"authors": [
"jarz-nordic",
"jukkar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12303",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/10480"
}
|
gharchive/issue
|
Creating shell with current SHELL_DEFINE() is not practical
I have been porting multiple network-related shells to use the new shell and noticed some issues with how the shells are defined.
Currently one needs to do something like this in order to define a shell.
SHELL_CMD_REGISTER(net, &net_commands, "Networking commands", NULL);
SHELL_UART_DEFINE(shell_transport_uart);
SHELL_DEFINE(net_shell, "net:~$ ", &shell_transport_uart, 10, SHELL_FLAG_OLF_CRLF);
Currently this code is called by subsys/net/ip/net_shell.c when CONFIG_NET_SHELL=y is set.
There are other network-related shells that can co-exist with this network shell. For example, if one defines CONFIG_NET_L2_IEEE802154_SHELL=y, then an IEEE 802.15.4-specific shell is created. It is also created in a similar way to the generic network shell.
SHELL_CMD_REGISTER(ieee802154, &ieee802154_commands, "IEEE 802.15.4 commands", NULL);
SHELL_UART_DEFINE(ieee802154_shell_transport_uart);
SHELL_DEFINE(ieee802154_shell, "ieee802154:~$ ", &ieee802154_shell_transport_uart, 10, SHELL_FLAG_OLF_CRLF);
It really looks like the backend configuration should not be done like this for a shell. In these examples, the actual shell implementation is in the proper place, but how this shell is used should be placed in some shell-specific directory and controlled by generic config options. The shells in question do not really care about the backend; they are just generic shells that can be used with any backend.
It looks like SHELL_DEFINE() could easily take just two parameters, like this: SHELL_DEFINE(net_shell, SHELL_FLAG_OLF_CRLF);. The other parameters should be defined in some other file.
Also, I noticed that if I enable various shells at the same time, which is quite a common case with networking, the history of the shell stops working.
But do you need SHELL_DEFINE in these files at all?
In my opinion all you need is to register commands which will be available for all created shells - somewhere else.
If I leave SHELL_DEFINE() out, I cannot call shell_init(), and then the shell functionality is missing (= I do not see any shell after this). Regardless of how this is implemented in the shell library and support code, the expectation (at least from the networking-shell point of view) is that everything should work out of the box without any extra coding regarding shell configuration.
Possibly I am missing the full picture here. I thought that CONFIG_NET_SHELL or CONFIG_NET_L2_IEEE802154_SHELL are in fact only responsible for adding commands to the shell. A shell instance needs to be created separately.
Otherwise you would force one transport medium for them (UART).
Shell instance needs to be created separately.
I am fine with this, but which component in Zephyr would do this? We can have multiple shells that are activated by just a config option; the only common element here is the core shell code, so I suppose it should call shell_init() then?
Proposal:
keep SHELL_DEFINE(); the flag variable we just introduced there is useful, and perhaps the shell commands variable could be placed there as one parameter
place the shell defined by SHELL_DEFINE() in a separate linker section
make the core shell code traverse this section and initialize things, e.g. calling shell_init() if it needs to do so
@jukkar : I see that in old implementation shell init was in file: shell_service.c
Can't we reuse this approach? You could also add the SHELL_DEFINE macro there as it is.
I am not sure that we will need any short term solution here. Currently most of the things work ok if I have multiple shells defined, except that history does not work but personally I can live with that. My issue is more like an enhancement request than a bug report anyway.
@jukkar : it works with legacy shell. Plan is to get rid of it asap.
So the current way the new shell works is just an interim step towards something else. Are there concrete plans for this "something else"? I mean, I do not want to investigate this more if you have plans to remove the existing macros/functions.
I do not want to do "something else", to be honest. In my opinion, all you should do is migrate commands to the new shell and #ifdef them with Kconfig parameters like CONFIG_NET_SHELL_COMMANDS or CONFIG_NET_L2_IEEE802154_SHELL_COMMANDS. Please take a look at how the logger is implemented. It has its own command set, but it does not define a shell instance. It is up to the user to create a shell instance where he will have all the commands that are necessary.
For instance, if the user wants to have the Logger commands, he will set LOG_CMDS in the Kconfig file.
Next, it is up to the user to enable and initialize the shell on whatever backend is needed.
Just like in the examples: a shell instance is created once in the main.c file, and all commands activated in Kconfig are available to the user.
I have had totally different use cases in mind, where the user does not need to do anything in main.c to get a shell; this is how the old shell worked. It would be very convenient for a user to just say in the config file that he/she wants to use a shell, and, if the default shell settings are OK, a shell with a UART backend would be created automatically without any extra coding.
I can work on the shell module tomorrow. Once the user activates a particular backend, it will be initialized and started.
The #10511 works for me regarding the issues described here, thanks for supporting this.
|
2025-04-01T04:36:02.181916
| 2018-10-15T06:53:51
|
370022650
|
{
"authors": [
"jhedberg",
"mandarcthorat1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12304",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/10578"
}
|
gharchive/issue
|
[Coverity CID: 188748] Memory - corruptions in /subsys/bluetooth/host/gatt.c
Static code scan issues seen in File: /subsys/bluetooth/host/gatt.c
Category: Memory - corruptions
Function: bt_gatt_notify_cb
Component: Bluetooth
CID: 188748
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996
Marked as false positive in Coverity
|
2025-04-01T04:36:02.186204
| 2018-11-07T02:22:08
|
378115122
|
{
"authors": [
"SebastianBoe",
"ceolin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12305",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/11167"
}
|
gharchive/issue
|
Building failing in arm cortex-m0
Building is failing when CONFIG_NO_OPTIMIZATIONS is enabled. This is the output for ./samples/hello_world/ with bbc-microbit as target.
Memory region Used Size Region Size %age Used
FLASH: 103636 B 256 KB 39.53%
SRAM: 9240 B 16 KB 56.40%
IDT_LIST: 120 B 2 KB 5.86%
zephyr/arch/arm/core/libarch__arm__core.a(isr_wrapper.S.obj): In function `_idle_state_cleared':
./zephyr/arch/arm/core/isr_wrapper.S:132:(.text._isr_wrapper+0x2c): relocation truncated to fit: R_ARM_THM_JUMP11 against symbol `_IntExit' defined in .text._HandlerModeExit section in zephyr/arch/arm/core/libarch__arm__core.a(exc_exit.S.obj)
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
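For context on the error: my reading is that R_ARM_THM_JUMP11 corresponds to the 16-bit Thumb unconditional branch encoding, whose signed 11-bit immediate (shifted left by one, since targets are halfword-aligned) can only reach about ±2 KB. With CONFIG_NO_OPTIMIZATIONS the intervening code grows enough that `_IntExit` falls outside that window. A quick back-of-the-envelope check of the reachable range:

```python
# Reach of the 16-bit Thumb 'B' encoding behind R_ARM_THM_JUMP11:
# an 11-bit signed immediate, left-shifted by 1 (halfword-aligned targets).
imm_bits = 11
max_imm = (1 << (imm_bits - 1)) - 1      # 1023
min_imm = -(1 << (imm_bits - 1))         # -1024
max_forward = max_imm * 2                # +2046 bytes
max_backward = min_imm * 2               # -2048 bytes
assert (max_forward, max_backward) == (2046, -2048)
```

That ±2 KB window is why the link succeeds with optimizations enabled (the code is small enough) and fails without them.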
Reproduced.
diff --git a/samples/hello_world/CMakeLists.txt b/samples/hello_world/CMakeLists.txt
index 9c3c38b669..ad74422ed8 100644
--- a/samples/hello_world/CMakeLists.txt
+++ b/samples/hello_world/CMakeLists.txt
@@ -1,5 +1,7 @@
cmake_minimum_required(VERSION 3.8.2)
+set(BOARD nrf51_pca10028)
+
include($ENV{ZEPHYR_BASE}/cmake/app/boilerplate.cmake NO_POLICY_SCOPE)
project(hello_world)
diff --git a/samples/hello_world/prj.conf b/samples/hello_world/prj.conf
index b2a4ba5910..8c8a1404c5 100644
--- a/samples/hello_world/prj.conf
+++ b/samples/hello_world/prj.conf
@@ -1 +1,2 @@
# nothing here
+CONFIG_NO_OPTIMIZATIONS=y
Fixed.
Not sure what kind of consequences this fix has though, it might be breaking things ...
diff --git a/arch/arm/core/isr_wrapper.S b/arch/arm/core/isr_wrapper.S
index 53c24645d7..d3c24d33f1 100644
--- a/arch/arm/core/isr_wrapper.S
+++ b/arch/arm/core/isr_wrapper.S
@@ -133,4 +133,4 @@ _idle_state_cleared:
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
/* exception return is done in _IntExit() */
- b _IntExit
+ bl _IntExit
diff --git a/samples/hello_world/CMakeLists.txt b/samples/hello_world/CMakeLists.txt
The root cause and solution options are described in https://github.com/zephyrproject-rtos/zephyr/commit/17652760001cec19cd25f48fa18d0c973f5ca345#diff-ef6459503728d8b77ea93b33e8de0117
Personally I think that porting to C is the best long-term solution. Not sure what to do short-term ...
Fixed.
Not sure what kind of consequences this fix has though, it might be breaking things ...
diff --git a/arch/arm/core/isr_wrapper.S b/arch/arm/core/isr_wrapper.S
index 53c24645d7..d3c24d33f1 100644
--- a/arch/arm/core/isr_wrapper.S
+++ b/arch/arm/core/isr_wrapper.S
@@ -133,4 +133,4 @@ _idle_state_cleared:
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
/* exception return is done in _IntExit() */
- b _IntExit
+ bl _IntExit
diff --git a/samples/hello_world/CMakeLists.txt b/samples/hello_world/CMakeLists.txt
EDIT: This fix is no good.
Yeah, it overrides the link register. One change that works is to save the lr register into another register and restore it in _IntExit. But re-organizing these sections makes more sense to me.
|
2025-04-01T04:36:02.190148
| 2017-11-28T20:40:17
|
277530497
|
{
"authors": [
"andrewboie",
"linkmeyer",
"lpereira"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12306",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/5184"
}
|
gharchive/issue
|
kernel system call handlers missing due to -Wl,--no-whole-archive
Discovered by @agross-linaro, who was getting "unimplemented system call" errors on ARM for k_thread_abort().
ARM has a custom implementation of k_thread_abort under arch/arm. This has the effect of compiling out everything in kernel/thread_abort.c except the handler function.
For some strange reason, if --no-whole-archive is enabled, the linker decides to prefer the weak handler for k_thread_abort() in syscall_dispatch.c.
@andrewboie , can you please set the priority?
I set the priority to high since the way things are right now, they won't work on ARM. This fix is required.
I had originally set this to high because this wouldn't work on ARM, but userland isn't working on ARM yet and won't be at least for 1.11. So moved back to medium.
This issue affects x86 and needs to go into 1.10
@AdithyaBaglody discovered that this problem is still happening, my patch didn't completely fix it.
Diving in....
@AdithyaBaglody found a fix
|
2025-04-01T04:36:02.202529
| 2024-12-24T09:03:17
|
2757452874
|
{
"authors": [
"andyross",
"ycsin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12307",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/83354"
}
|
gharchive/issue
|
log/mpsc_pbuf: logging in spinlock held context can cause recurring exception
Describe the bug
For architectures that select USE_SWITCH (with the exception of ARM64, as it is excluded from validating spinlocks in do_swap()), logging in a spinlock-held context can cause a recurring exception:
% west build -b qemu_riscv64 -p auto -t run zephyr/samples/hello_world
-- west build: running target run
[116/117] Linking C executable zephyr/zephyr.elf
Memory region Used Size Region Size %age Used
RAM: 82152 B 256 MB 0.03%
IDT_LIST: 0 GB 2 KB 0.00%
Generating files from /Users/ycsin/zephyrproject/build/zephyr/zephyr.elf for board: qemu_riscv64
[116/117] To exit from QEMU enter: 'CTRL+a, x'[QEMU] CPU: riscv64
*** Booting Zephyr OS build v4.0.0-2696-g31ebd6036aa0 ***
[00:00:00.000,000] <inf> main: Hello World!qemu_riscv64/qemu_virt_riscv64
ASSERTION FAIL [arch_irq_unlocked(key) || arch_current_thread()->base.thread_state & (((1UL << (0))) | ((1UL << (3))))] @ WEST_TOPDIR/zephyr/kernel/include/kswap.h:99
Context switching while holding lock!
[00:00:00.000,000] <err> os:
[00:00:00.000,000] <err> os: mcause: 11, Environment call from M-mode
[00:00:00.000,000] <err> os: mtval: 0
[00:00:00.000,000] <err> os: a0:<PHONE_NUMBER>000004 t0:<PHONE_NUMBER>000000
[00:00:00.000,000] <err> os: a1:<PHONE_NUMBER>000063 t1:<PHONE_NUMBER>000067
[00:00:00.000,000] <err> os: a2:<PHONE_NUMBER>000020 t2:<PHONE_NUMBER>000020
[00:00:00.000,000] <err> os: a3:<PHONE_NUMBER>011b88 t3:<PHONE_NUMBER>00fb2f
[00:00:00.000,000] <err> os: a4:<PHONE_NUMBER>000001 t4:<PHONE_NUMBER>00004c
[00:00:00.000,000] <err> os: a5:<PHONE_NUMBER>000000 t5:<PHONE_NUMBER>00006c
[00:00:00.000,000] <err> os: a6:<PHONE_NUMBER>013e58 t6:<PHONE_NUMBER>000012
[00:00:00.000,000] <err> os: a7:<PHONE_NUMBER>000001
[00:00:00.000,000] <err> os: sp:<PHONE_NUMBER>013e80
[00:00:00.000,000] <err> os: ra:<PHONE_NUMBER>005ae0
[00:00:00.000,000] <err> os: mepc:<PHONE_NUMBER>00141a
[00:00:00.000,000] <err> os: mstatus: 0000000a00001800
[00:00:00.000,000] <err> os:
[00:00:00.000,000] <err> os: s0:<PHONE_NUMBER>013e90 s6:<PHONE_NUMBER>00e888
[00:00:00.000,000] <err> os: s1:<PHONE_NUMBER>011088 s7:<PHONE_NUMBER>000400
[00:00:00.000,000] <err> os: s2:<PHONE_NUMBER>011c90 s8:<PHONE_NUMBER>013f10
[00:00:00.000,000] <err> os: s3:<PHONE_NUMBER>000000 s9:<PHONE_NUMBER>000000
[00:00:00.000,000] <err> os: s4:<PHONE_NUMBER>011b88 s10:<PHONE_NUMBER>000000
[00:00:00.000,000] <err> os: s5:<PHONE_NUMBER>00e6a8 s11:<PHONE_NUMBER>000000
[00:00:00.000,000] <err> os:
[00:00:00.000,000] <err> os: call trace:
[00:00:00.000,000] <err> os: 0: sp:<PHONE_NUMBER>013e80 ra:<PHONE_NUMBER>00141a [assert_post_action+0xe]
[00:00:00.000,000] <err> os: 1: sp:<PHONE_NUMBER>013ec0 ra:<PHONE_NUMBER>006dfc [z_tick_sleep+0x11a]
[00:00:00.000,000] <err> os: 2: sp:<PHONE_NUMBER>013ef0 ra:<PHONE_NUMBER>006e70 [z_impl_k_sleep+0x56]
[00:00:00.000,000] <err> os: 3: sp:<PHONE_NUMBER>013f10 ra:<PHONE_NUMBER>0004cc [main+0xca]
[00:00:00.000,000] <err> os: 4: sp:<PHONE_NUMBER>013f48 ra:<PHONE_NUMBER>00640a [z_thread_timeout+0x0]
[00:00:00.000,000] <err> os: 5: sp:<PHONE_NUMBER>013f70 ra:<PHONE_NUMBER>005032 [bg_thread_main+0x128]
[00:00:00.000,000] <err> os: 6: sp:<PHONE_NUMBER>013f98 ra:<PHONE_NUMBER>004f0a [bg_thread_main+0x0]
[00:00:00.000,000] <err> os: 7: sp:<PHONE_NUMBER>013fb0 ra:<PHONE_NUMBER>001404 [z_thread_entry+0x2e]
[00:00:00.000,000] <err> os:
[00:00:00.000,000] <err> os: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
[00:00:00.000,000] <err> os: Current thread: 0x80011b88 (unknown)
[00:00:00.060,000] <err> os: Halting system
The error happens when:
CONFIG_LOG=y, CONFIG_SPIN_VALIDATE=y, CONFIG_ASSERT=y, and
The software generates a large number of log messages, leading to msg_alloc() -> mpsc_pbuf_alloc() taking this branch.
The error is reproducible on:
sparc (qemu_leon3)
riscv (qemu_riscv64)
x86_64 64BIT-only (qemu_x86_64)
(xtensa should be affected, but qemu_xtensa doesn't launch on my M1 work laptop ¯\_(ツ)_/¯)
To Reproduce
We discovered this after studying an error in our application. To repro on upstream, apply the following diff:
diff --git a/samples/hello_world/prj.conf b/samples/hello_world/prj.conf
index b2a4ba59104..e08318b9a3c 100644
--- a/samples/hello_world/prj.conf
+++ b/samples/hello_world/prj.conf
@@ -1 +1,7 @@
# nothing here
+CONFIG_LOG=y
+CONFIG_SPIN_VALIDATE=y
+CONFIG_ASSERT=y
+CONFIG_FRAME_POINTER=y
+CONFIG_LOG_BUFFER_SIZE=256
+CONFIG_LOG_BLOCK_IN_THREAD=y
diff --git a/samples/hello_world/src/main.c b/samples/hello_world/src/main.c
index c550ab461cb..263cec7e337 100644
--- a/samples/hello_world/src/main.c
+++ b/samples/hello_world/src/main.c
@@ -5,10 +5,40 @@
*/
#include <stdio.h>
+#include <stdbool.h>
+
+#include <zephyr/kernel.h>
+#include <zephyr/logging/log.h>
+
+LOG_MODULE_REGISTER(main, LOG_LEVEL_DBG);
+
+char str[] = " ";
+
+static void print_fn(int count)
+{
+ struct k_spinlock lock = {0};
+
+ while (true) {
+ K_SPINLOCK(&lock) {
+ LOG_INF("%s %d\n", str, count++);
+ }
+ }
+}
+
+K_THREAD_DEFINE(printer_1, 1024, print_fn, 1, NULL, NULL, 1, 0, 1000);
+K_THREAD_DEFINE(printer_2, 1024, print_fn, 2, NULL, NULL, 1, 0, 2000);
+K_THREAD_DEFINE(printer_3, 1024, print_fn, 3, NULL, NULL, 1, 0, 3000);
+K_THREAD_DEFINE(printer_4, 1024, print_fn, 4, NULL, NULL, 1, 0, 4000);
+K_THREAD_DEFINE(printer_5, 1024, print_fn, 5, NULL, NULL, 1, 0, 5000);
+K_THREAD_DEFINE(printer_6, 1024, print_fn, 6, NULL, NULL, 1, 0, 6000);
+K_THREAD_DEFINE(printer_7, 1024, print_fn, 7, NULL, NULL, 1, 0, 7000);
+K_THREAD_DEFINE(printer_8, 1024, print_fn, 8, NULL, NULL, 1, 0, 8000);
+K_THREAD_DEFINE(printer_9, 1024, print_fn, 9, NULL, NULL, 1, 0, 9000);
+K_THREAD_DEFINE(printer_10, 1024, print_fn, 10, NULL, NULL, 1, 0, 10000);
int main(void)
{
- printf("Hello World! %s\n", CONFIG_BOARD_TARGET);
+ print_fn(0);
return 0;
}
Then run one of the following:
west build -b qemu_leon3 -p auto -t run zephyr/samples/hello_world
west build -b qemu_riscv64 -p auto -t run zephyr/samples/hello_world
west build -b qemu_x86_64 -p auto -t run zephyr/samples/hello_world
Expected behavior
The logging subsystem should work properly under stress and not cause a recurring exception.
Impact
Depending on the configuration, devices can hit a recurring exception when there's a flood of log messages.
Logs and console output
Appended above
Environment (please complete the following information):
Occurs in current main branch (v4.0.0-2696-g31ebd6036aa0) built with Zephyr SDK 0.16.8
Occurs in v3.7.0 built with custom GCC toolchain.
cc @luchnikov @akabaev
cc @andyross @peter-mitsis
The point of that assert is to catch a case where something can reach a cooperative context switch while holding a nested lock. The z_swap() API family takes locks that get released atomically on suspend (i.e. it's a condition variable), but the framework checks that the key passed will act to unmask interrupts entirely. If it won't, that means the lock passed is an inner lock inside some other spinlock.
And if we context switch, we'll (by definition) break that lock and allow unlocked code to run and interrupts to be serviced, which the outer context was promised wouldn't happen. It's a bad bug, thus the assertion.
This was a routine goof inside the kernel when doing the original SMP work. I'm at a loss for how logging would trip over it though. Can you work up a call tree that shows the problem?
Also: are all the extra delayed threads and the while loop in print_fn() needed to show the bug? I'm a little confused as to how they're involved.
Oh! I just saw that there's a partial stack trace in the report. It's showing a k_sleep() being invoked from main, presumably from inside LOG_INF() somewhere. Yeah, that's illegal. You can't sleep inside a lock, for obvious reasons. Why does logging need to sleep?
|
2025-04-01T04:36:02.204954
| 2018-09-04T17:36:51
|
356908774
|
{
"authors": [
"ceolin",
"nashif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12308",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/issues/9788"
}
|
gharchive/issue
|
update to mbedTLS 2.12.0
https://tls.mbed.org/download/start/mbedtls-2.12.0-apache.tgz now available
Just by coincidence, I was looking at it yesterday. I was struggling with a bug in crypto and updated Zephyr's mbedtls to this version. It worked with a few changes. Need to check whether something breaks or not. Also, they now have some platform-specific functions that we should probably point to ours, but as I was just testing I used their reference functions.
https://github.com/zephyrproject-rtos/zephyr/pull/9836
|
2025-04-01T04:36:02.212065
| 2019-02-15T13:25:03
|
410768409
|
{
"authors": [
"codecov-io",
"ioannisg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12309",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/13427"
}
|
gharchive/pull-request
|
Arch arm fault fix nonsecure
Fixes some bugs with Secure/Non-Secure firmware and error handling (Reported in the commit messages)
Only affects builds with TrustZone-M enabled.
Codecov Report
Merging #13427 into master will increase coverage by <.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #13427 +/- ##
==========================================
+ Coverage 48.56% 48.56% +<.01%
==========================================
Files 319 319
Lines 46567 46567
Branches 10761 10761
==========================================
+ Hits 22616 22617 +1
Misses 19360 19360
+ Partials 4591 4590 -1
Impacted Files | Coverage Δ
arch/posix/core/posix_core.c | 91.91% <0%> (+1.01%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 95e8a58...bc01f01. Read the comment docs.
Thanks @agross-linaro , commit messages fixed.
|
2025-04-01T04:36:02.215977
| 2019-07-30T23:49:37
|
474871106
|
{
"authors": [
"albertofloyd",
"andrewboie",
"scottwcpg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12310",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/17898"
}
|
gharchive/pull-request
|
Add configuration option to switch back to systick timer instead of 32KHz driver on MCHP MEC15xx board
Add a board configuration option to switch timer drivers; currently, several configuration changes have to be made to switch between the 32KHz timer driver and the systick driver.
The default configuration for the board will have the 32KHz RTOS timer driver disabled until the following accuracy issues and timer test failures are resolved.
https://github.com/zephyrproject-rtos/zephyr/issues/17897
https://github.com/zephyrproject-rtos/zephyr/issues/17778
All,
There are many issues with this driver.
Re-programming the timer while it is running. I worked with the designer of the timer block, and this block was not designed to be re-programmed while running. The designer did provide a sequence of register writes that should work to re-program it. Hopefully this will solve the failures due to the timer not taking the new value.
This driver is losing 0.5% of the requested time. For example, for 1000 ms it produces 995 ms; for 100 ms it produces 99.5 ms. I believe this is an issue in the algorithm in the driver, but I have not been able to find it. I have implemented drivers using other timers in our chip that run on the 48MHz clock and see this issue in all of them. All I can think of is to add lots of debug code to dump intermediate values from the driver and kernel for analysis. Can Zephyr be built to run as an app on x86_64, where I could model the timer driver and use OS debugging to track all the calculations in the kernel and driver?
Can Zephyr be built to run as an app on x86_64 where I could model the timer driver and use OS debug to track all the calculations in the kernel and driver?
Yes, the native_posix_64 target does this.
|
2025-04-01T04:36:02.219146
| 2019-09-13T07:44:35
|
493182359
|
{
"authors": [
"erwango",
"rosterloh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12311",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/19130"
}
|
gharchive/pull-request
|
drivers: gpio: Add open drain output support for STM32
Support configuring output as open drain instead of push pull
for STM32 devices with the GPIO_DS_DISCONNECT_HIGH flag
Signed-off-by: Richard Osterloh<EMAIL_ADDRESS>
@rosterloh, thanks for proposal.
Please note imminent driver rework due to new GPIO API introduction, merged yesterday actually (#16648). So please have a look and check if it fits your need.
Is there a PR already for migrating the gpio_stm32 driver to the new API?
Is there a PR already for migrating the gpio_stm32 driver to the new API?
Only an issue for now, https://github.com/zephyrproject-rtos/zephyr/issues/19270.
Things should move next week
I'm happy with the way this is implemented in #19607 so I'm closing this PR
|
2025-04-01T04:36:02.221521
| 2020-05-17T15:18:07
|
619729671
|
{
"authors": [
"galak",
"henrikbrixandersen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12312",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/25396"
}
|
gharchive/pull-request
|
drivers: pwm: mcux_ftm: allow configuring the clock prescaler
Allow configuring the clock prescaler divider for the NXP Kinetis FlexTimer. Setting the prescaler to a lower value allows for much higher resolution/accuracy for the generated PWM waveforms.
Signed-off-by: Henrik Brix Andersen<EMAIL_ADDRESS>
This is a follow-up to #23141.
Flagging as TSC in the hope this can make it into v2.3.0. This change is needed to reach the required resolution on one our products with mainline Zephyr.
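To see why the prescaler matters for resolution: the number of distinct duty-cycle steps per PWM period is the timer clock divided by the prescaler times the PWM frequency, so a lower prescaler leaves more counter ticks per period. A quick illustration (the clock and frequency values below are invented for the example, not taken from a specific Kinetis part):

```python
# Illustrative: distinct duty-cycle steps available per PWM period.
# steps = counter ticks per period = timer_clock / (prescaler * pwm_freq)
def pwm_steps(timer_clock_hz, prescaler, pwm_freq_hz):
    return timer_clock_hz // (prescaler * pwm_freq_hz)

clock = 60_000_000   # hypothetical 60 MHz FTM clock
freq = 20_000        # 20 kHz PWM output
assert pwm_steps(clock, 128, freq) == 23     # coarse: ~4.3% duty granularity
assert pwm_steps(clock, 1, freq) == 3000     # fine: ~0.03% granularity
```

The trade-off is that a lower prescaler also shortens the maximum achievable period for a given counter width.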
Approved in May 20 TSC meeting
|
2025-04-01T04:36:02.222610
| 2021-01-13T01:18:17
|
784709975
|
{
"authors": [
"nashif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12313",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/31276"
}
|
gharchive/pull-request
|
ci: fix check_compliance workflow
- Use older junitparser, new version is not compatible
- Fetch pull request ref, not master
- add a few debug messages
major breakage in compliance workflow, merging without delay.
|
2025-04-01T04:36:02.224525
| 2021-06-21T10:17:45
|
926062618
|
{
"authors": [
"ycsin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12314",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/36431"
}
|
gharchive/pull-request
|
drivers: timer: st_stm32: add lptimer management to stm32g0 series
Patches the lptim's Kconfig to compile for the STM32G0 series and adds lptim1 node to the dts.
Signed-off-by: Yong Cong Sin<EMAIL_ADDRESS>
I know that tickless idle works with this driver, based on current consumption dropping from 30~40mA to 8mA. But tbh I haven't run tests/kernel/timer/timer_api.
|
2025-04-01T04:36:02.227053
| 2021-09-30T20:51:53
|
1012612366
|
{
"authors": [
"rlubos",
"rmelch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12315",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/39016"
}
|
gharchive/pull-request
|
See Issue https://github.com/zephyrproject-rtos/zephyr/issues/38994
@rmelch It looks good now, thanks, but there's still some issue with the commit message. It seems that you did not use your full name for commit author, you need to set git user.name config to your full name and then git commit --amend --no-edit --reset-author to update the commit.
I would say that it should block the release; I believe that ARP doesn't work if your IP address is XXX.XXX.224.XXX -> XXX.XXX.239.XXX, not sure about IPv6.
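For reference, the IPv4 multicast range is 224.0.0.0/4, i.e. a *first* octet of 224-239, and multicast destinations are never resolved via ARP; my reading of this issue is that the check matched 224-239 in the wrong octet. The correct classification only looks at the first octet, as the standard library shows:

```python
# Correct IPv4 multicast test: only the first octet (224.0.0.0/4) matters.
import ipaddress

def is_ipv4_multicast(addr: str) -> bool:
    return ipaddress.IPv4Address(addr).is_multicast

assert is_ipv4_multicast("239.1.2.3")            # first octet in 224..239
assert not is_ipv4_multicast("192.168.224.1")    # 224 in the third octet is fine
```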
|
2025-04-01T04:36:02.228814
| 2017-09-29T20:05:39
|
261756226
|
{
"authors": [
"andrewboie",
"dbkinder"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12316",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/4123"
}
|
gharchive/pull-request
|
kernel: expose API when userspace not enabled
We want applications to be able to enable and disable userspace without
changing any code. k_thread_user_mode_enter() now just jumps into the
entry point if CONFIG_USERSPACE is disabled.
Signed-off-by: Andrew Boie<EMAIL_ADDRESS>
@dbkinder doxygen doesn't like this one either
working on it...
|
2025-04-01T04:36:02.230796
| 2022-11-15T03:36:32
|
1449107286
|
{
"authors": [
"hakehuang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12317",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/52230"
}
|
gharchive/pull-request
|
test: twister: add more exception protection
The ser.in_wait sometimes hits issues like a serial handler error, because the serial port is in a reset state. Add a final layer of protection to retry later.
Signed-off-by: Hake Huang<EMAIL_ADDRESS>
@nashif , @chen-png , @enjiamai can you help to review? this is more for serial console stabilities which I found in our RT1060_evk board
@dleach02 can you help to merge this. Thanks a lot
|
2025-04-01T04:36:02.244671
| 2023-05-04T16:17:08
|
1696296271
|
{
"authors": [
"PerMac",
"carlescufi",
"cfriedt",
"gmarull",
"nashif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12318",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/57563"
}
|
gharchive/pull-request
|
Twister config names
In some scenarios, like applications, we may want to use twister to
build multiple configurations. The "sample" or "testcase" file names may
not specify the purpose of it, so allow "twister" name as well. It is a
name that clearly identifies the file with Twister, without any
particular purpose.
In some scenarios, like applications, we may want to use twister to
build multiple configurations.
can you please provide some details on those scenarios and how this will work and where?
In some scenarios, like applications, we may want to use twister to
build multiple configurations.
can you please provide some details on those scenarios and how this will work and where?
Main motivation is driven by https://github.com/zephyrproject-rtos/example-application/blob/main/app/sample.yaml
Main motivation is driven by https://github.com/zephyrproject-rtos/example-application/blob/main/app/sample.yaml
ok, why not use sample.yaml? :)
well, it's just a matter of naming. I see the name "sample" as good for samples/ and "testcase" for tests/, but for an actual application I'd prefer something more specific like "app.yaml", or just "twister.yaml". Maybe testcase is also an option instead of sample...
BTW, if changes/a new name are to be added, please also modify this west command https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/west_commands/build.py#L263 and its description
BTW, if changes/new name is to be added, please modify also this west command https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/west_commands/build.py#L263 and it's description
done
@nashif Why are we keeping both testcase/sample.yaml names if there is only one workflow for both in twister? IMO having two names for the same thing is confusing. Can we replace all names with one, which will be more generic and meaningful?
@nashif Why are we keeping both testcase/sample.yaml names if there is only one workflow for both in twister? IMO having two names for the same thing is confusing. Can we replace all names with one, which will be more generic and meaningful?
sample.yaml was introduced at some point to maintain additional metadata about samples. It did not make sense to have another metadata file, so we used the same file and syntax and called it differently to distinguish samples from tests. I am fine moving to only one single name; the sample metadata can then be either dropped, put somewhere else, or just kept in the same file.
This did not really bother anyone until now, so I do not want to go into all of this without knowing exactly what you are trying to do here, what your use case is, and what problem this will solve.
@nashif Why are we keeping both testcase/sample.yaml names if there is only one workflow for both in twister? IMO having two names for the same thing is confusing. Can we replace all names with one, which will be more generic and meaningful?
sample.yaml was introduced at some point to maintain additional metadata about samples. It did not make sense to have another metadata file, so we used the same file and syntax and called it differently to distinguish samples from tests. I am fine moving to only one single name; the sample metadata can then be either dropped, put somewhere else, or just kept in the same file.
This did not really bother anyone until now, so I do not want to go into all of this without knowing exactly what you are trying to do here, what your use case is, and what problem this will solve.
So as I mentioned before, the names testcase.yaml or sample.yaml are implicitly bound to a test or a sample. However, Twister can be used for other stuff like building multiple application configurations in one go, like in example-application. Since these config files are, regardless of their name, Twister configs, I think a more generic name like twister.yaml would be more flexible. Other than that there are no other use cases behind this proposal.
I think a more generic name like twister.yaml would be more flexible. Other than that there's no other use-cases behind this proposal.
I agree, why not just use twister.yaml everywhere and be done with it?
I agree, why not just use twister.yaml everywhere and be done with it?
because this defines test scenarios, and twister is not the only user. The same file can also be used by west to build/run a specific scenario.
Fine getting rid of sample.yaml and just have testsuite.yaml (not testcase.yaml), but twister.yaml I am struggling with :)
And I am not convinced about having "test" in the name 😅 Only samples have tests (found by regex, and not always) defined in those yamls. IMO the major function of those yamls is to describe build scenarios for twister (and west). Maybe then config.yaml? However, I'd be fine with testsuite.yaml
btw I also found that testcase/sample.yaml names are used in https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/ci/test_plan.py#L197
I'd probably avoid the word "test" since these files can describe things that are not strictly a test. Even though west build can parse the files, the schema is maintained by twister, not west build; i.e., these are twister-related files. Maybe twister-config.yaml?
dev-review: @gmarull - will you be moving forward with this?
dev-review: @gmarull - will you be moving forward with this?
I'd like to get this in, just needs approvals.
closing, no signs of interest shown by the maintainers.
I believe we had a consensus that we can go with a single name for both testcase/ and sample/ yamls. We just don't have the name...
|
2025-04-01T04:36:02.254832
| 2023-08-07T13:16:26
|
1839439417
|
{
"authors": [
"gchwier",
"gopiotr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12319",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/61224"
}
|
gharchive/pull-request
|
Pytest plugin - align adapters API
This PR mainly contains changes to the device adapters (classes dedicated to communication with different types of devices - hardware, QEMU, native simulator). Thanks to them, the iter_stdout generator was replaced by a simple readline method, which is more intuitive to use and safer due to the introduction of timeout monitoring during read operations.
The changes introduced in this PR allow creating a helper object Shell which can be used in shell tests.
To show how the new adapters API and the new Shell helper class can be used in practice, the main example sample samples/subsys/testsuite/pytest/shell/pytest/test_shell.py was modified and adapted to the new changes:
https://github.com/gopiotr/zephyr/blob/pr/align_adapters_API/samples/subsys/testsuite/pytest/shell/pytest/test_shell.py
The bug with the timeout provided in the testcase.yaml file not being respected was also fixed - if a user defines their own timeout in testcase.yaml, the connection with the DUT in a pytest test will not be disrupted until that particular timeout is exceeded.
During work on this PR some simplifications of the pytest plugin were introduced - the log.py file was removed and its settings were replaced by extending the pytest run command (called by Twister), which gives much the same effect but reduces the code that requires maintenance and extends the flexibility of the logging settings.
Appropriate unit tests were also modified/added, and thanks to this the code coverage of the pytest plugin was increased from 72% to 90%.
In general, this PR contains following changes:
Unification of used timeouts for executed operations.
Rename DeviceAbstract to DeviceAdapter class and SimulatorAdapterBase to BinaryAdapterBase.
Introduction of a new common API and "method of reading from device" for all adapters. Replacement of the iter_stdout generator by a simple readline method. Introduction of launch and close methods which allow, respectively, preparing a "ready-to-test" device and, at the end of a test, closing all open connections and cleaning up the workspace.
Refactoring of the FifoHandler class.
Adding a readlines_until method to the adapters API.
Dividing the dut fixture into smaller parts, which makes it easier for the user to change the dut fixture scope or to write their own dut fixture.
Creation of the Shell helper class.
Moving the "logging configuration logic" from the dedicated log.py file to pytest CLI options.
Removing log_file.py, constants.py and log.py files - their logic was moved to other places, and they are no longer needed.
Adding/improving description for available options in pytest plugin.
Adding new unit tests which allow to test introduced changes.
Fix of pytest warning about unregistered fixtures.
Adding conditional CONFIG in sample.yaml for native_posix to avoid warning during building on "non-posix" platforms.
@nashif is there any chance to merge this PR?
When we are talking about code duplication between Twister and the pytest-twister-harness plugin (Handlers in Twister and DeviceAdapter in the plugin), this PR does not introduce that code duplication. This PR, by refactoring the DeviceAdapter classes, just unifies the code inside the plugin. And this is a good step if we want to reuse this code in a potential future refactoring of the Handlers in Twister.
Moreover, what is important, this PR fixes the issue with blocking readline operations (by adding separate thread and queue to read output from the device).
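A minimal sketch of the non-blocking read approach described above - a background reader thread feeding a queue, with a timeout enforced on the consumer side. The class and method names here are illustrative, not the plugin's actual API:

```python
import queue
import threading


class LineReader:
    """Read lines from a stream in a background thread; readline() takes a timeout."""

    def __init__(self, stream):
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._pump, args=(stream,), daemon=True)
        self._thread.start()

    def _pump(self, stream):
        # Blocking iteration happens here, off the caller's thread.
        for line in stream:
            self._queue.put(line)

    def readline(self, timeout=1.0):
        """Return the next line, or raise TimeoutError if the device is silent."""
        try:
            return self._queue.get(timeout=timeout)
        except queue.Empty:
            raise TimeoutError("no output from device within timeout")
```

Because the blocking read runs in its own thread, the consumer can always give up after the configured timeout instead of hanging on a silent device.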
I'm waiting also for that PR, to rework / simplify tests of MCUboot ( #58393 )
@hakehuang Not so easy.
Pytest is executed by Twister as a separate program and pytest-twister-harness is added as a plugin.
Sure, it is possible to import the Handler class in pytest-twister-harness (ugly but possible), but with the current implementation of the Handlers (lots of dependencies, large methods) it is not possible to reuse it.
We can plan such unification of that code, but not in this PR (however, this PR can be treated as a step toward that)
|
2025-04-01T04:36:02.256155
| 2023-08-25T12:45:26
|
1866994868
|
{
"authors": [
"MaureenHelm",
"MeisterBob"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12320",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/61893"
}
|
gharchive/pull-request
|
drivers: veml7700: add power domain support
When the power domain switches off the 3V3 rail, the sensor needs to be reinitialized on the PM_DEVICE_ACTION_TURN_ON event.
@JordanYates please revisit
|
2025-04-01T04:36:02.262710
| 2018-02-21T18:48:37
|
299084988
|
{
"authors": [
"codecov-io",
"galak",
"nzmichaelh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12321",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/6306"
}
|
gharchive/pull-request
|
dma: add a DMA driver for the Atmel SAM0 series.
Tested using the memory++ -> memory++ and memory++ -> peripheral modes.
Depending on what happens with #6305 I'll move the fixup changes to the SoC file.
Follow-up patches will switch serial TX to use DMA.
Codecov Report
Merging #6306 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #6306 +/- ##
=======================================
Coverage 53.14% 53.14%
=======================================
Files 412 412
Lines 40148 40148
Branches 7733 7733
=======================================
Hits 21338 21338
Misses 15675 15675
Partials 3135 3135
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a73e8fa...7daca96. Read the comment docs.
Dropped the include/dma and tests/ changes, ready for review.
@nzmichaelh I think you dropped the driver as well :)
@galak heh, yeah.
I've rebased and done a few other fixups. This depends on #6615 landing first before it'll build though.
Still blocked on #6615.
|
2025-04-01T04:36:02.264548
| 2023-11-02T10:30:38
|
1973952026
|
{
"authors": [
"kartben",
"nordicjm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12322",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/64726"
}
|
gharchive/pull-request
|
doc: gsg: Remove misleading mention of Powershell
While we will probably want to properly document how to use Powershell on Windows in the near future, the current Getting Started Guide contained a misleading mention of activating a Python venv using Powershell, which this commit removes.
Addresses comments raised in #64682.
FWIW I am using Powershell all the time and it just works; we just need to properly document the subtle variations there are when e.g. referring to HOMEPATH in legacy cmd.exe vs. Powershell... or just bite the bullet and only recommend Powershell going forward.
From what I remember powershell breaks many things, I vaguely remember some python modules would not work with powershell
|
2025-04-01T04:36:02.269757
| 2024-03-28T13:16:34
|
2213223680
|
{
"authors": [
"MaureenHelm",
"pepe2k"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12323",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/70843"
}
|
gharchive/pull-request
|
west_commands: sign: imgtool: set '--rom-fixed' and use correct slot addr for 'Direct-XIP'
This PR includes two changes focused on signing with imgtool for MCUboot's Direct-XIP mode. With these changes applied, images dedicated for that mode will include the ROM_FIXED flag and the correct slot offset (taken from zephyr,code-partition) set in the header.
west_commands: sign: imgtool: set '--rom-fixed' for 'Direct-XIP'
If MCUboot is configured in Direct-XIP mode, it may verify whether the image selected for load is placed at a suitable flash offset (slot). That requires that the image header include the ROM_FIXED flag and the load_addr field set to the correct value.
This change extends the imgtool sign command with the --rom-fixed option if the assumed MCUboot mode of operation is set to Direct-XIP or Direct-XIP with revert. With the --rom-fixed option, imgtool sign will set the ROM_FIXED flag and the load_addr field in the generated image's header.
west_commands: sign: imgtool: use correct slot offset for 'Direct-XIP'
The previous commit revealed an issue with the slot address used by imgtool when signing an image for use with MCUboot in Direct-XIP mode of operation. Instead of the target slot the image is being built for, the slot0_partition offset is always used:
# always use addr of slot0_partition, which is where slots are run
addr = slots['slot0_partition'].regs[0].addr
Fix this by using the offset of the slot selected by zephyr,code-partition in the chosen node if the assumed MCUboot mode is set to Direct-XIP or Direct-XIP with revert.
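A hedged sketch of the selection logic, reusing the `slots` structure from the snippet above; the parameter names and the helper function are illustrative, not the actual west sign code:

```python
def signing_address(slots, chosen_code_partition, direct_xip):
    """Pick the slot address to sign the image for.

    In Direct-XIP mode the image must be signed for the slot it will run
    from, i.e. the partition selected by zephyr,code-partition; otherwise
    the slot0_partition address (where slots are run) is used, as before.
    """
    if direct_xip:
        return slots[chosen_code_partition].regs[0].addr
    return slots["slot0_partition"].regs[0].addr
```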
@mbolivar-ampere @nordicjm please take a look
|
2025-04-01T04:36:02.282516
| 2018-04-16T21:42:08
|
314828885
|
{
"authors": [
"SebastianBoe",
"codecov-io",
"d3zd3z",
"lpereira",
"nashif"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12324",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/7099"
}
|
gharchive/pull-request
|
kernel: Add option to randomize link order
On systems where virtual memory addressing isn't possible (e.g. all MPU microcontrollers), it's not possible to have something akin to ASLR. This option will randomize the link order of libzephyr.a during build time, making it slightly harder to create reusable exploits for certain kinds of attacks.
That's the idea, at least. Generating random numbers in CMake is kind of weird. One has to generate a random string up to a certain length, and if no random seed is provided, the same sequence is generated over and over again.
I'm looking for feedback, then:
Is there a better way to randomize the link order? A better place to hook this into?
Can this be done in a way that, if the feature is enabled, a no-op "make" actually rebuilds libzephyr.a with a different order?
Codecov Report
Merging #7099 into master will increase coverage by <.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #7099 +/- ##
==========================================
+ Coverage 58.61% 58.62% +<.01%
==========================================
Files 464 464
Lines 47440 47440
Branches 8790 8790
==========================================
+ Hits 27809 27810 +1
+ Misses 15798 15797 -1
Partials 3833 3833
Impacted Files
Coverage Δ
lib/posix/pthread.c
69.84% <0%> (+0.5%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 09a8810...46da462. Read the comment docs.
"Is there a better way to randomize the link order? A better place to hook this into?"
There are two problems with where this is hooked in.
It will not randomize the order when sources are added like this:
zephyr_sources(a.c)
zephyr_sources(b.c)
It will only randomize internally within a single invocation, so:
zephyr_sources(a.c b.c)
It would need to be injected into zephyr_library_* if one wanted to randomize more object files.
An alternative that is not affected by these two issues (but might have more issues I haven't thought of),
would be to do post-processing of the CMake libraries properties. Meaning; after all sources have been added to all libraries, we read out the sources lists and replace them with randomized versions.
Grep for "_property" in zephyr/CMakeLists.txt to see examples of reading and writing Zephyr CMake library properties.
Looks like the property we need to read and modify is 'SOURCES' (https://cmake.org/cmake/help/v3.0/manual/cmake-properties.7.html#properties-on-targets).
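The post-processing idea - read each library's source list after evaluation and replace it with a shuffled version - can be sketched like this (Python standing in for the CMake property manipulation; function and parameter names are illustrative):

```python
import random


def randomize_sources(sources, seed):
    """Deterministically shuffle a library's source list for a given seed.

    Stands in for rewriting the CMake SOURCES target property: a fresh
    seed per build yields a different link order, while the same seed
    reproduces the same order (relevant for reproducible builds).
    """
    rng = random.Random(seed)
    shuffled = list(sources)
    rng.shuffle(shuffled)
    return shuffled
```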
"Can this be done in a way that, if the feature is enabled, a no-op "make" actually rebuilds libzephyr.a with a different order?" I am unable to see the security value in this, AFAICT the point is to obfuscate production builds, we don't want to make the developers life harder than it needs to be. (Also, this is tricky/undesirable from a build-system point of view).
I should also point out that this is in direct conflict with reproducible builds. This is required for getting a CII Gold badge, which is something we, as a project, are desiring to be able to get.
It is probably OK if it is configurable, but it would have to be disabled for most CI-type builds.
Also, I'm curious how helpful this will be, since for deployment, most targets will be running one of a small number of versions of their software, and a given version will have a particular and specific layout.
I'm aware that this conflicts with reproducible builds.
The main idea behind this, though, is to reduce the possibility of reusing exploits from one deployed version of a firmware to another. It's not that strong otherwise.
stale for a while, re-open if anything comes out of that.
|
2025-04-01T04:36:02.283958
| 2024-04-30T07:24:40
|
2270707460
|
{
"authors": [
"maass-hamburg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12325",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/72124"
}
|
gharchive/pull-request
|
mgmt: hawkbit: change the tls certificate tag
Be able to change the tls certificate tag in hawkBit
and be able to deactivate TLS for the hawkBit subsystem independent of CONFIG_NET_SOCKETS_SOCKOPT_TLS
@real-tintin
|
2025-04-01T04:36:02.285068
| 2024-05-10T17:34:44
|
2290141629
|
{
"authors": [
"jukkar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12326",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/72595"
}
|
gharchive/pull-request
|
net: context: Do not check our own ports
There is no need to check our own context when going through the used ports in the system. This prevents errors when binding in some corner cases.
Fixes #72035
@JordanYates please give it a try, seems to work in my test setup which was different from yours.
|
2025-04-01T04:36:02.286978
| 2024-05-17T13:42:30
|
2302792878
|
{
"authors": [
"PerMac"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12327",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/72945"
}
|
gharchive/pull-request
|
twister: Add pipeline level error and its handling
If a pipeline crashes from an error that is not handled anywhere else, a "Pipeline error" is reported. A new log file is added for this level of errors, and it takes priority as the reason of failure when dumped into a report.
One can try this PR by adding e.g. raise BufferError somewhere in the process, e.g. in https://github.com/zephyrproject-rtos/zephyr/blob/main/scripts/pylib/twister/twisterlib/runner.py#L676
another approach was chosen #72893
|
2025-04-01T04:36:02.289761
| 2024-06-06T10:27:53
|
2337910916
|
{
"authors": [
"FRASTM",
"GeorgeCGV",
"MaureenHelm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12328",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/73840"
}
|
gharchive/pull-request
|
drivers: flash: stm32 ospi: extend memory map support for other modes
Extends memory-map support with quad, dual, and spi modes.
Allows a custom write opcode and adapts the lines used based on the read opcode.
That allows memory-map mode to be used with Winbond and other vendors.
Adds more clarity to the error log output to differentiate the error's root cause.
Replaces LOG_INF with LOG_DBG when signaling that memory-map mode is enabled.
Do you confirm that setting memory_mapped is still functional with a b_u585i_iot02a or stm32h7b3i?
That would be hard to do as I don't have the mentioned boards. So far the memory-map mode is used on a custom board in quad mode against Macronix and Winbond NOR flashes. However, no writing is involved. But the memory-map write configuration in this PR is pretty much the same as the standard ospi write function.
@erwango @FRASTM please take a look
Tested on b_u585i_iot02a
|
2025-04-01T04:36:02.291105
| 2024-06-07T06:25:44
|
2339682291
|
{
"authors": [
"evgeniy-paltsev",
"kokas-a"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12329",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/73892"
}
|
gharchive/pull-request
|
The filter is no longer needed as nSIM performance was improved and the test timeout was increased because of other platforms.
@evgeniy-paltsev FYI
@npitre you are marked as assignee here; may I ask you to review?
Thanks!
|
2025-04-01T04:36:02.293656
| 2024-10-10T00:31:34
|
2577244451
|
{
"authors": [
"CZKikin",
"ajf58",
"sylvioalves"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12330",
"repo": "zephyrproject-rtos/zephyr",
"url": "https://github.com/zephyrproject-rtos/zephyr/pull/79631"
}
|
gharchive/pull-request
|
tests: counter: add current counter value into no_alarm test
In the test scenario without alarms, it might
be the case that the counter is already running and
it will not match the expected tick.
This adds a tick offset to the expected value
based on the current counter reading.
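The offset logic can be sketched as follows (illustrative Python, not the test's actual C code; `top` stands for the counter's wrap value):

```python
def expected_tick(current, delay, top):
    """Expected counter value after `delay` ticks when the counter is
    already running at `current`, for a counter that wraps at `top`."""
    return (current + delay) % top
```

Reading `current` first and folding it into the expectation is what makes the test robust when counter_start() does not reset the hardware counter.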
Could fix #79574 considering the scenario where counter_start() does not reset the timer.
@CZKikin, would you confirm whether this approach works for you?
LGTM, tested using a Raspberry Pi Pico 2.
I don't have enough permission to approve, but the latest version looks good to me.
@CZKikin, would you confirm whether this approach works for you? (asking due to #76527).
I think it should be ok. I'll run the test on HW to be sure, but works for us.
|
2025-04-01T04:36:02.295345
| 2018-10-04T17:35:12
|
366895630
|
{
"authors": [
"facuspagnuolo",
"miguelmota",
"spalladino"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12331",
"repo": "zeppelinos/zos",
"url": "https://github.com/zeppelinos/zos/issues/175"
}
|
gharchive/issue
|
Error when running tests using TestHelper without setting NODE_ENV to test
I'm getting the error Error: Provider not set or invalid when doing something like:
it('should not fail', async function () {
this.app = await TestHelper({ ... })
await this.app.createProxy(Contract, contractCreationParams) // failing line
})
I'm running on a project on top of ZeppelinOS using<EMAIL_ADDRESS>
Having the same problem
Hold until #1308 is solved
|
2025-04-01T04:36:02.305638
| 2015-12-16T13:13:02
|
122500482
|
{
"authors": [
"zero2one"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12332",
"repo": "zero2one/drupal-skeleton",
"url": "https://github.com/zero2one/drupal-skeleton/issues/36"
}
|
gharchive/issue
|
Binary wrappers don't support "quoted multivalue" variables
Calling a command that wraps around a binary does not respect values that are quoted.
Commands affected:
bin/composer
bin/drush
bin/coder
Examples:
--test="abc def" will be passed as --test="abc".
bin/drush "command path" will be passed as drush command path.
See PR #38
Fixed.
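The wrapper scripts themselves are not shown here, but the failure mode is the classic shell re-expansion bug: when arguments are re-joined and word-split again (as an unquoted `$*`-style forward does) instead of being passed through verbatim (as a quoted `"$@"` does), quoted multi-word values break apart. A Python illustration of both behaviors:

```python
import shlex


def broken_forward(args):
    # What an unquoted re-expansion in a wrapper effectively does:
    # arguments are re-joined into one string and word-split again.
    return shlex.split(" ".join(args))


def correct_forward(args):
    # The fix: pass the argument vector through unchanged, as "$@" does.
    return list(args)
```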
|
2025-04-01T04:36:02.383138
| 2023-10-20T16:09:28
|
1954611598
|
{
"authors": [
"cylewitruk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12333",
"repo": "zesterer/chumsky",
"url": "https://github.com/zesterer/chumsky/issues/551"
}
|
gharchive/issue
|
Question: how to set the filename for errors?
Hi!
How can I set the filename for errors when I'm parsing from dynamically provided source (i.e. there is no file)? Referring to the <unknown> below - I've looked through examples and I'm probably blind but I haven't found this yet...
╭─[<unknown>:1:7]
│
1 │ (list -1 int)
│ ─┬
│ ╰── max-len indicator for list declarations must be greater than zero.
───╯
Maybe this is more of an Ariadne question?
Yes, it was most definitely an Ariadne question as it's very clear on their readme :) https://github.com/zesterer/ariadne
Closing this.
|
2025-04-01T04:36:02.384185
| 2023-07-15T00:33:33
|
1805758493
|
{
"authors": [
"CraftSpider",
"zesterer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12334",
"repo": "zesterer/chumsky",
"url": "https://github.com/zesterer/chumsky/pull/481"
}
|
gharchive/pull-request
|
Add serde feature and support
Closes #479
Will fix the failing tests tomorrow if I have time
LGTM!
|
2025-04-01T04:36:02.399474
| 2024-06-20T19:00:08
|
2365069648
|
{
"authors": [
"kingpinXD",
"ws4charlie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12335",
"repo": "zeta-chain/node",
"url": "https://github.com/zeta-chain/node/pull/2362"
}
|
gharchive/pull-request
|
fix: set 1000 sats as minimum amount that can be withdrawn
Description
This PR prevents Bitcoin dust-amount withdrawals from being registered. It sets 1000 satoshis as the minimum amount that can be withdrawn from zEVM.
Closes: 2326
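The check can be sketched as follows (the constant and function names are illustrative, not the node's actual Go code, which is not shown here):

```python
# Illustrative constant name; the PR fixes the minimum at 1000 satoshis.
BTC_WITHDRAWAL_DUST_MIN_SATS = 1000


def validate_btc_withdrawal(amount_sats):
    """Reject Bitcoin withdrawals below the dust threshold."""
    if amount_sats < BTC_WITHDRAWAL_DUST_MIN_SATS:
        raise ValueError(
            f"withdrawal of {amount_sats} sats is below the "
            f"{BTC_WITHDRAWAL_DUST_MIN_SATS}-sat minimum"
        )
    return amount_sats
```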
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Include instructions and any relevant details so others can reproduce.
[ ] Tested CCTX in localnet
[ ] Tested in development environment
[x] Go unit tests
[ ] Go integration tests
[ ] Tested via GitHub Actions
Checklist:
[ ] I have added unit tests that prove my fix feature works
LGTM,
Is there a way to double check that the original TX is reverted?
LGTM, Is there a way to double check that the original TX is reverted?
Yeah. It was reverted in my local network. See
LGTM, Is there a way to double check that the original TX is reverted?
Yeah. It was reverted in my local network. See
I mean, would it make sense to add an e2e test and check the balance? The user should not have any loss of funds I guess, since the PostTxProcessing hook reverts the entire tx?
If you think an e2e test is not necessary just checking it would be a good idea
Seems just checking is okay. Also, the 'unsupported address' validation was tested against mainnet on the v17 release; the zEVM 'withdraw' tx just failed.
|
2025-04-01T04:36:02.401085
| 2017-07-10T19:41:24
|
241824245
|
{
"authors": [
"acdenisSK",
"imnotbad"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12336",
"repo": "zeyla/serenity",
"url": "https://github.com/zeyla/serenity/pull/120"
}
|
gharchive/pull-request
|
Fixed clippy warnings
Fixed some clippy warnings, made code more concise.
Clippy also complained about src/client/dispatch.rs:446:36 but I'm not sure if these branches are required for the bot framework so I didn't modify them.
Yikes, if only I were able to run clippy properly on my machine. Oh well, thanks!
|
2025-04-01T04:36:02.405128
| 2016-06-08T20:34:19
|
159264385
|
{
"authors": [
"AGmakonts",
"TomHAnderson"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12337",
"repo": "zfcampus/zf-apigility-doctrine",
"url": "https://github.com/zfcampus/zf-apigility-doctrine/issues/255"
}
|
gharchive/issue
|
ZF MVC 3.0.0 compatibility
@AGmakonts when you get to this repository with your MVC 3.0 changes (thanks, btw!) we can't move this forward because of this problem: https://github.com/doctrine/DoctrineModule/pull/558#issuecomment-207212246
I now think it's filed under the wrong issue but the problem remains. Until DoctrineModule can implement the changes @weierophinney made when he split zend-hydrator this repository can't follow the rest of the herd.
@TomHAnderson thanks for the heads up, I'll be watching this issue.
@AGmakonts Would you be willing to evaluate my PR to fix this?
https://github.com/zfcampus/zf-apigility-doctrine/pull/260
The problem listed above was solved with https://github.com/API-Skeletons/zf-doctrine-module-zend-hydrator
|
2025-04-01T04:36:02.406562
| 2016-07-15T13:22:09
|
165785539
|
{
"authors": [
"stavarengo",
"webimpress"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12338",
"repo": "zfcampus/zf-content-validation",
"url": "https://github.com/zfcampus/zf-content-validation/issues/73"
}
|
gharchive/issue
|
Typo: Is InputFilter at config/module.config.php
Error in the line https://github.com/zfcampus/zf-content-validation/blob/master/config/module.config.php#L9
Duplicated, fixed in #70 and #71
Fixed in 1.3.1 release. It can be closed.
|
2025-04-01T04:36:02.456259
| 2019-03-27T02:04:53
|
425727645
|
{
"authors": [
"ahuczp",
"zhangjun001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12339",
"repo": "zhangjun001/ICNet",
"url": "https://github.com/zhangjun001/ICNet/issues/2"
}
|
gharchive/issue
|
Training problem
When I use your code to train on my data, I found that the loss did not fall. Why? Please help me. Thank you.
did you linearly align your data to a common space?
Yes, my dataset is affinely aligned to a common space. Could you add me on WeChat? My WeChat is "ahuczp".
As indicated ->"For intensity normalization, we first match the intensity histogram of each brain MRI to that of the Colin27 template by using a histogram matching algorithm. Then, we also perform the z-score normalization to make the mean intensity of each image is zero and the standard deviation is one."
Since I employ the MSE loss for similarity, histogram matching should be performed during data pre-processing. The code performs the general intensity normalization, but you may need to perform the histogram matching (to one fixed template of your dataset) in your data preprocessing.
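The z-score step described above can be sketched in a few lines (a minimal illustration, not the repository's actual preprocessing code; real volumes would be processed as arrays rather than flat lists):

```python
import math


def zscore_normalize(values):
    """Z-score normalize intensities: zero mean, unit standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]
```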
I downloaded the ADNI3 dataset, removed the skull with FSL, and used SimpleITK to histogram-match to the Colin27 template. I now don't know how to do the linear alignment - could you help me? Many thanks.
When I train now, the loss never decreases. What could be the reason?
After training, the registration results at test time are very blurry. What could be the cause? Thanks.
You may need to adjust the parameters to fit your dataset. Also, range_flow is related to your task. If you do not know the distribution of the flows in your dataset, you can simply use a relatively large value for the range_flow parameter in the beginning. It is better to analyze the registration performance with the flow rather than looking at the warped images only.
|
2025-04-01T04:36:02.495729
| 2023-07-20T16:58:40
|
1814435335
|
{
"authors": [
"Scandiravian",
"nrdxp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12342",
"repo": "zhaofengli/colmena",
"url": "https://github.com/zhaofengli/colmena/issues/165"
}
|
gharchive/issue
|
Limit access required for SSH
The issue
I was looking into how to run Colmena without requiring root access. I saw that this was also discussed in #27 and from what I could gather the current solution is to add the privilegeEscalationCommand to the config. Unfortunately this still requires a password, so the only solution I could find was to set
{
security.sudo.wheelNeedsPassword = false;
}
That solution makes me worried, as it means there are now multiple users with root access instead of just the root account. I think that lowers security, since, besides the root account, every account in the wheel group can now run whatever sudo command it wants without being asked for a password. What I would like is to only set NOPASSWD on the specific commands that will be run when deploying with Colmena, for the users that I will ssh in as.
I therefore worked out a solution that will make this possible:
Suggestion
Looking at the code, there are two commands that need sudo privileges: nix-env and /nix/store/<profilePath>/bin/switch-to-configuration. The first one is simple to fix:
users.groups.colmena = {};
security.sudo = {
extraRules = [
{
commands = [
{
command = "${pkgs.nix}/bin/nix-env";
options = ["NOPASSWD"];
}
];
groups = ["colmena"];
}
];
};
Then it's as simple as adding the desired user to the group and the first issue is resolved.
The second is however a bit more difficult, since profilePath isn't calculated until everything in the config itself has been evaluated. Using something like config.system.build.toplevel to find the path for switch-to-configuration in the store will cause an infinite recursion error.
The alternative I'd suggest is to have a simple wrapper script that can be called from colmena itself. This could look something like:
switch-colmena = pkgs.writeShellScriptBin "switch-colmena" ''
NEXT_PROFILE=$1
GOAL=$2
"$NEXT_PROFILE"/bin/switch-to-configuration "$GOAL"
'';
The sudo rules can then be changed to:
# This could also be added to the relevant user paths to limit access a bit further.
environment.systemPackages = [
# ...other packages...
switch-colmena
]
security.sudo = {
extraRules = [
{
commands = [
{
command = "${pkgs.nix}/bin/nix-env";
options = ["NOPASSWD"];
}
{
command = "${switch-colmena}/bin/switch-colmena";
options = ["NOPASSWD"];
}
];
groups = ["colmena"];
}
];
};
To make colmena actually call the desired command I made a small change to the activation_command function in profile.rs
pub fn activation_command(&self, goal: Goal) -> Option<Vec<String>> {
if let Some(goal) = goal.as_str() {
let switch_to_configuration = self
.as_path()
.to_str()
.expect("The string should be UTF-8 valid")
.to_string();
Some(vec![
"switch-colmena".to_string(),
switch_to_configuration,
goal.to_string(),
])
} else {
None
}
}
This is a pretty bare-bones proof-of-concept, so it'll probably require a bit more fleshing out, but I think it would be an improvement overall.
It has been a while since I've used nix-env, but can't it run programs similarly to nix run? If so, I don't see how this would be an improvement after all, since you could basically run any program Nix can fetch with escalated privileges (through nix-env).
@nrdxp, let me provide an example to illustrate how I believe this approach could enhance security. I'll begin by discussing why I think security.sudo.wheelNeedsPassword = false should be avoided. Then, I'll address the issue you highlighted regarding a dedicated user and nix-env, and finally, I'll propose a solution.
My Concern with wheelNeedsPassword = false
The wheel group is slightly special since its purpose is to grant its members access to the sudo command. I think using it for another purpose (i.e. to run Colmena) should be avoided, since being a member of this group does not equate to being a user who operates Colmena. Consequently, setting wheelNeedsPassword = false undermines the principle of least privilege by allowing users who do not require it the ability to execute sudo commands without a password. Here's an example to illustrate the issue:
Consider a server with three individuals:
Alice: The system administrator who executes the Colmena command.
Bob: A user who needs sudo access and is a member of the wheel group.
Carol: A malicious entity aiming to gain root access to the server.
If security.sudo.wheelNeedsPassword = true, both Bob and Alice must authenticate with their SSH key and then verify their knowledge of the relevant password to execute a sudo command on the server.
In this scenario, Carol must acquire both the SSH key and the sudo password to gain root access.
Conversely, if security.sudo.wheelNeedsPassword = false, Carol only needs to obtain the SSH key of either Alice or Bob to gain complete server access. This scenario increases both the attack surface and the potential damage from a compromised SSH key.
Implementing a Dedicated User or Group for Colmena
Introducing a specific user or group for Colmena indeed presents an additional vector for Carol to exploit.
With a dedicated group, Carol could:
Acquire the SSH key and password from Bob.
Obtain an SSH key from Alice.
With a dedicated user, she could either:
Acquire the SSH key and password from Alice or Bob.
Obtain an SSH key from the dedicated user.
Despite these risks, I believe both scenarios offer an improvement by limiting the target to one user's SSH key, as opposed to any user in the wheel group. The security benefit diminishes only if the wheel group exclusively contains users who will run Colmena, and it's assured that all future members will do the same.
Mitigation Strategy
Based on your description of the issue, the core problem appears to be that allowing a user to run nix-env via passwordless sudo enables them to execute arbitrary code as root on the system. If I've misunderstood, please correct me.
As I explained above, avoiding wheelNeedsPassword = false still improves security, but it does not follow the principle of least privilege, since it remains possible to run any nix-env command. Access to nix-env could be restricted similarly to how switch-to-configuration is managed, though at the end of this comment I'll go through why I don't think that restriction is much of an improvement. The nix-env command can be restricted as follows:
set-system-path = pkgs.writeShellScriptBin "set-system-path" ''
  NEW_PROFILE="$1"
  ${pkgs.nix}/bin/nix-env --profile /nix/var/nix/profiles/system --set "$NEW_PROFILE"
'';
Then, adjust the extraRules to permit only the execution of the restricted command instead of nix-env directly:
environment.systemPackages = [
# ...other packages...
switch-colmena
set-system-path
];
security.sudo = {
extraRules = [
{
commands = [
{
command = "${lib.getExe set-system-path}";
options = ["NOPASSWD"];
}
{
command = "${lib.getExe switch-colmena}";
options = ["NOPASSWD"];
}
];
groups = ["colmena"];
}
];
};
It's still possible for a malicious actor who has gained access to the user's account to change the system path to whatever they want, since they can simply point the script at a build containing arbitrary code. So it's only an improvement if the attacker can't be bothered to build and push their own closure.
As I mentioned above, however, the main goal of my suggestion is to limit how many users have this privilege without altering the wheel group's intended function.
I agree that having passwordless sudo is far from ideal.
But yeah, my basic concern is that if nix-env has sudo access, that you can basically run any program you want through it, which is basically the same as passwordless sudo in practice.
I know the upstream nixos-rebuild script resolves this by adding a --use-remote-sudo flag which only calls sudo directly in the places where the script actually needs it. I'm curious why exactly nix-env would even need sudo at all? I don't think anything it does would really require it. Perhaps the code could be rewritten to avoid it. If we could just limit the call to sudo to the "switch to" script alone (essentially mimicking the upstream --use-remote-sudo flag), then I think that'd be the way to go.
I am decent with Rust but not too familiar with the codebase here, so maybe I can take a look at it soonish and see if I can figure it out.
@nrdxp thanks for the reply. It sounds like we might be in agreement. My issue description might have lacked clarity and focused too much on a solution without clearly establishing the problem I'd like to see solved.
My main concern is that depending on the wheel group to run Colmena leaves a pretty dangerous foot-gun lying around - especially on multi-user systems. It's primarily a UX concern in terms of how Colmena might be used in practice, since a dependency on wheelNeedsPassword = false in my opinion makes it unsuitable for production environments.
Removing the need for password-less sudo for nix-env would be a better solution than merely restricting the privilege of running it to a dedicated user/group. I still think a good solution allows the user to configure a specific user/group for Colmena, so I hope both can be done.
I'm somewhat familiar with Rust and I did spend some time looking through the codebase when creating the ticket. I'd be happy to contribute to a solution if it'll be helpful. Just let me know if I can help out with anything either here in the ticket or reach out directly through the email in my GitHub profile.
Thanks for taking the time to provide some really good feedback on my proposal. I can now see that I had a blind spot regarding nix-env that would have resulted in a less-than-ideal solution.
|
2025-04-01T04:36:02.525077
| 2021-12-19T05:04:51
|
1084014693
|
{
"authors": [
"DianCh",
"zhechen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12343",
"repo": "zhechen/Deformable-DETR-REGO",
"url": "https://github.com/zhechen/Deformable-DETR-REGO/issues/1"
}
|
gharchive/issue
|
Questions on DETR initialization
Hi! May I ask how the model is initialized for the training? Specifically, is the DETR (or equivalents like deformable DETR, etc.) part trained from scratch or initialized by the trained model?
Only the backbone uses pre-trained weights; the rest is trained from scratch. You can refer to the DETR/Deformable DETR training code.
|
2025-04-01T04:36:02.555658
| 2016-12-12T13:56:56
|
194980390
|
{
"authors": [
"itjhDev",
"zhuhaow"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12344",
"repo": "zhuhaow/NEKit",
"url": "https://github.com/zhuhaow/NEKit/issues/34"
}
|
gharchive/issue
|
carthage update
warning: 'AsyncSocket' is deprecated: The RunLoop versions of CocoaAsyncSocket are deprecated and will be removed in a future release. Please migrate to GCDAsyncSocket.
This has nothing to do with NEKit.
|
2025-04-01T04:36:02.561547
| 2020-10-12T16:46:20
|
719517696
|
{
"authors": [
"andrewBIsP",
"zhw2590582"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12345",
"repo": "zhw2590582/SubPlayer",
"url": "https://github.com/zhw2590582/SubPlayer/issues/28"
}
|
gharchive/issue
|
What does it mean ?
docs/index.html
function() {
if (!isLocalhost) {
var o = document.createElement("script");
o.src = "https://hm.baidu.com/hm.js?9c948099957cd6a524dac835394d4495";
var t = document.getElementsByTagName("script")[0];
t.parentNode.insertBefore(o, t)
}
}()
Similar to Google statistics script, but it comes from Baidu
It's not good practice, I suppose.
I don't understand. It's just used to count website visits
for whom ?
For me and my website(subplayer.js.org)
It's obvious, but why do people need your baidu analytics script ?
This project is not released as a dependency; I initially thought of it as my personal website. People have enough flexibility to modify all the code. I should have added a friendlier hint that these lines of the script can be deleted. That's my fault.
|
2025-04-01T04:36:02.566193
| 2017-10-03T13:35:11
|
262429949
|
{
"authors": [
"PavelVozenilek",
"andrewrk"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12346",
"repo": "zig-lang/zig",
"url": "https://github.com/zig-lang/zig/issues/524"
}
|
gharchive/issue
|
Proposal: compile time decision based on ability to compile a snippet
Right now Zig allows to make compile time decision on a type/literal value, like:
fn max(comptime T: type, a: T, b: T) -> T {
if (T == bool) {
return a or b;
} else if (a > b) {
return a;
} else {
return b;
}
}
This is rather limited (one can check types for equality, question basic types, ask if cast is possible and not much more) and does not cover many real problems.
E.g. there are two variants of allocators:
traditional one with malloc/realloc/free,
other without realloc and with size parameter for free (this has some advantages).
And one would like to make a library which is able to use both allocator variants, easily.
My proposal: allow compile time decision based on ability to compile (or not) a snippet:
fn foo(comptime Allocator : type, a : &Allocator)
{
if-compiles {
a.realloc(null, 10); /// compiles only with first allocator variant
} => {
... /// runtime code using variant 1
} elif-compiles {
a.free(null, 10); /// compiles only with second allocator variant
} => {
... /// runtime code using variant 2
} else {
@compileError("");
}
}
Notes:
If a provided snippet compiles then its associated block is selected.
The snippet itself gets discarded, nothing is executed.
Trivial syntax errors (like unbalanced parenthesis) stop compilation, always.
There could be a "compiles-switch" which verifies that exactly one branch compiles. This would eliminate problems after an alternative is added or removed and one forgets to update some code.
I took inspiration for this from one feature of Nim language.
Compile time switch example:
compiles-switch { /// one and only one arm must work
{ a.realloc(null, 10) } => ...
{ a.free(null, 10) } => ...
}
Even shorter syntax is possible for conditional compilation:
// If this compiles use it, if it doesn't compile ignore it
??? {
x = a.realloc(...);
}
This is not the best way to support multiple interfaces.
Precisely communicate intent.
See #130
|
2025-04-01T04:36:02.574057
| 2024-05-03T16:35:27
|
2278046159
|
{
"authors": [
"Koenkk",
"anti-spy",
"bartbh",
"fschaeck",
"mfalkvidd",
"zezo-git"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12347",
"repo": "zigbee2mqtt/hassio-zigbee2mqtt",
"url": "https://github.com/zigbee2mqtt/hassio-zigbee2mqtt/issues/605"
}
|
gharchive/issue
|
ECONNREFUSED error after 1.37.0-1
Description of the issue
After the update, the add-on won't start with this error: (MQTT failed to connect: connect ECONNREFUSED). I fixed it by going back to 1.36.1-1. I am using EMQX as my MQTT broker. No other fixes worked, but I'm not a professional. No other errors appeared in the log.
Addon version
v1.37.0-1
Platform
Core: 2024.5.1
Supervisor: 2024.04.4
Operating System: 12.2
Frontend: 20240501.0
Logs of the issue (if applicable)
No response
I'm getting exactly the same error. Both in Docker and in Home Assistant.
error: z2m: MQTT error: connect ECONNREFUSED /
Same error. I'm using zigbee2mqtt 1.37.0-1 as a Home Assistant Addon (docker container) with a Home Assistant Supervised install.
error: z2m: MQTT error: connect ECONNREFUSED /
I've tried logging in with the command line in the docker container. I'm able to connect to the IP and port of the MQTT server, but I don't see any failed logins in the logging of the MQTT broker.
Which version of Home Assistant are you using?
https://github.com/Koenkk/zigbee2mqtt/releases/tag/1.37.0 Says at least version 2024.4 is required.
Current HA version:
Core 2024.5.1
Supervisor 2024.04.4
Could you check if this is fixed in the latest-dev branch?
Switching to the dev/edge addon unfortunately did not solve this issue. I'll try switching to an older version if that's possible.
I had the same error for HA Z2M Addon version 1.37.0-1 with Home Assistant Core 2024.4.x and 2024.5.x, using the Mosquitto broker add-on of HA as the MQTT server. Reverting to HA Z2M Addon version 1.36.1-1 resolved the issue. No changes were made to MQTT during this process.
Pushed a fix, does it work now?
Changes will be available in the dev branch in a few hours from now.
Just installed the new edge add-on and for me z2m is working again.
|
2025-04-01T04:36:02.623558
| 2024-11-04T07:48:25
|
2632086528
|
{
"authors": [
"alexanderguzhva",
"foxspy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12348",
"repo": "zilliztech/knowhere",
"url": "https://github.com/zilliztech/knowhere/issues/921"
}
|
gharchive/issue
|
Replace HNSW with FAISS_HNSW_FLAT
Faiss has added flat support for HNSW and supports SQ/PQ/PRQ, which currently results in two HNSW options within Knowhere, both offering identical parameters and capabilities but with different naming (FAISS_HNSW_FLAT vs. HNSW). This dual support increases future maintenance costs (since both HNSW implementations need to be maintained) and adds user complexity (users must rebuild and replace indexes).
If the goal is to use FAISS_HNSW to replace HNSW, a compatibility solution should be provided—allowing FAISS_HNSW to load pre-existing HNSW indexes and support search/range_search/iterator capabilities. This would centralize all HNSW-related logic into FAISS_HNSW, automatically transitioning users to FAISS_HNSW after upgrading.
Two potential approaches to address this:
Code Compatibility Handling
Remove hnsw.cc and associated hnswlib code.
Rename FAISS_HNSW_FLAT to "HNSW" and route HNSW-related requests to FAISS_HNSW.
When deserializing, FAISS_HNSW should recognize HNSW binaries and load them compatibly, handling iterator/search/range_search requests.
Version Isolation
Rename HNSW in hnsw.cc to HNSWXXX.
Upgrade the index version (from 5 to 6).
Rename FAISS_HNSW_FLAT to "HNSW" and route HNSW-related requests to FAISS_HNSW.
FAISS_HNSW should delegate requests to HNSWXXX for versions ≤ 5; for versions > 5, FAISS_HNSW should handle them internally.
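The version-isolation routing above can be sketched like this (illustrative Python, not the actual Knowhere C++ API; the names are made up for the example):

```python
LEGACY_MAX_INDEX_VERSION = 5  # versions <= 5 were written by the hnswlib-based HNSW

def route_hnsw_load(index_version: int) -> str:
    # Decide which implementation should deserialize an "HNSW" binary.
    if index_version <= LEGACY_MAX_INDEX_VERSION:
        return "HNSWXXX"    # delegate to the renamed legacy hnswlib code
    return "FAISS_HNSW"     # the new faiss-based implementation handles it
```

Bumping the index version to 6 makes the dispatch unambiguous without inspecting the binary payload itself.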
@alexanderguzhva faiss_hnsw is part of the 2.5.0 release planned for early next week, it would be ideal to address this soon.
@foxspy already testing
|
2025-04-01T04:36:02.626011
| 2024-05-13T15:08:22
|
2293103878
|
{
"authors": [
"alexanderguzhva",
"chasingegg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:12349",
"repo": "zilliztech/knowhere",
"url": "https://github.com/zilliztech/knowhere/pull/562"
}
|
gharchive/pull-request
|
Cherry-pick commit #3416 from Faiss baseline
https://github.com/facebookresearch/faiss/pull/3416
The code generated by OpenXL for the function fvec_L2sqr does not perform as well as the code generated by gcc on Power. The macros that enable imprecise floating-point operations do not cover Power with OpenXL. This patch adds the OpenXL compiler options to the PowerPC macros to achieve better performance.
rerun ut
/rerun-e2e
/run-e2e
/kind improvement
|