| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:34:51.909869
| 2018-06-07T03:43:05
|
330106016
|
{
"authors": [
"aizaiz",
"aleixmorgadas"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8982",
"repo": "nemtech/nem2-sdk-typescript-javascript",
"url": "https://github.com/nemtech/nem2-sdk-typescript-javascript/issues/8"
}
|
gharchive/issue
|
Verify Signature on nem2-sdk-typescript
Hello @aleixmorgadas, I found this verify signature function in nem2-library-js.
Is there a verify signature function in nem2-sdk-typescript-javascript?
This is related to issue #3.
Hi @aleixmorgadas, I need this verify signature function to verify apostilles in the NEM apostille library. Are you already working on it, or do you have something else to do? If you are busy with something else, do you mind if I do this enhancement?
Sorry for the late reply.
Yes, I'm busy with another important task and cannot do this one now. If you want to do it, we are glad to receive your help ^^
If you have questions about where to do these things, just post them here.
Okay, here are the questions ^^:
Where should I put this function? A new class/file, or is there an existing class for it?
About the minimum scope: do I have to reimplement this verify signature function from JavaScript in TypeScript, or can I just create a hook or wrapper that uses the verify signature function from nem2-library? I would be glad if you shared some of your vision here.
Where should I put this function? A new class/file, or is there an existing class for it?
I would start adding it in the PublicAccount class.
About the minimum scope: do I have to reimplement this verify signature function from JavaScript in TypeScript, or can I just create a hook or wrapper that uses the verify signature function from nem2-library? I would be glad if you shared some of your vision here.
A wrapper around nem2-library is enough since the feature is already there.
Hello @aleixmorgadas, sorry for the delay, I just came back from holiday.
I have finished the code and this is the pull request.
Here is a preview of the code:
import { convert, KeyPair } from 'nem2-library';
/**
 * Verify a signature.
 *
 * @param {string} publicKey - The public key to use for verification.
 * @param {string} data - The data to verify.
 * @param {string} signature - The signature to verify.
 *
 * @return {boolean} - True if the signature is valid, false otherwise.
 */
static verifySignature(publicKey: string, data: string, signature: string): boolean {
    if (!publicKey || !data || !signature) {
        throw new Error('Missing argument !');
    }
    if (publicKey.length !== 64 && publicKey.length !== 66) {
        throw new Error('Not a valid public key');
    }
    if (!convert.isHexString(signature)) {
        throw new Error('Signature must be hexadecimal only !');
    }
    if (signature.length !== 128) {
        throw new Error('Signature length is incorrect !');
    }
    // Convert the signature to a Uint8Array
    const _signature = convert.hexToUint8(signature);
    // Convert the data to hex if it is not hex already
    let _data = data;
    if (!convert.isHexString(data)) {
        _data = convert.utf8ToHex(data);
    }
    // Convert to Uint8Array and verify
    return KeyPair.verify(publicKey, convert.hexToUint8(_data), _signature);
}
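A minimal usage sketch of the wrapper (hypothetical key, data and signature values; assumes the method ends up on the PublicAccount class as suggested above):
// Placeholder values for illustration only; a real call needs the signer's 64-character
// hex public key and the 128-character hex signature produced when the data was signed.
const signerPublicKey = 'C2F93346E27CE6AD1A9F8F5E3066F8326593A406BDF357ACB041E2F9AB402EFE';
const payload = 'NEM is awesome!';
const signatureHex = '0'.repeat(128); // replace with the actual signature
const isValid = PublicAccount.verifySignature(signerPublicKey, payload, signatureHex);
console.log(isValid ? 'signature is valid' : 'signature is invalid');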
Hi @aizaiz,
first of all, thank you for spending some time adding features to nem2-sdk :smile:
No need to add the code here, we do a code review on each PR, so we'll see the code there ^_^
|
2025-04-01T04:34:51.913728
| 2015-12-10T13:43:45
|
121487808
|
{
"authors": [
"con322",
"nenad-zivkovic",
"rodzadra"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8983",
"repo": "nenad-zivkovic/yii2-advanced-template",
"url": "https://github.com/nenad-zivkovic/yii2-advanced-template/issues/27"
}
|
gharchive/issue
|
Would it be possible to use this template with AngularJS?
Any advice/tips for creating a one-page application?
I have found this guide but it's using the default application.
https://www.gitbook.com/book/hscstudio/angular1-yii2/details
Hi, try this video
https://t.co/A5QNRyecyn
I don't see a reason why not. It's a matter of your configuration.
|
2025-04-01T04:34:51.921123
| 2017-08-17T19:41:21
|
251050339
|
{
"authors": [
"blahd",
"erikzhang",
"gernotpokorny",
"markcross"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8984",
"repo": "neo-project/neo-cli",
"url": "https://github.com/neo-project/neo-cli/issues/16"
}
|
gharchive/issue
|
Feature request: Ability to generate signed transactions offline
Someone posted this in the GUI branch but I think it's more relevant here
https://github.com/neo-project/neo-gui/issues/4
The ability to sign a transaction so that the NEON wallet team could give their wallet offline signing functionality.
The current system of sending assets assumes the box is 100% secure.
Yep, please integrate offline transactions, i.e. offline signing of transactions.
It's implemented in #162.
This has been implemented in neo-cli but is not available to the average end-user in the neo-gui as per the original feature request in neo-project/neo-gui#4 as far as I'm aware.
Or am I missing something?
|
2025-04-01T04:34:51.943454
| 2018-03-26T18:14:45
|
308680403
|
{
"authors": [
"conker84",
"jexp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8985",
"repo": "neo4j-contrib/neo4j-kafka",
"url": "https://github.com/neo4j-contrib/neo4j-kafka/pull/1"
}
|
gharchive/pull-request
|
Small improvements
Hi @jexp ,
in this PR you will find the following small improvements:
Added Kafka Embedded for the tests
Externalized the configuration
@jexp ping :)
Oops, sorry, didn't see your PR initially.
|
2025-04-01T04:34:51.950565
| 2023-04-19T14:42:25
|
1675056333
|
{
"authors": [
"vga91"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8986",
"repo": "neo4j/apoc",
"url": "https://github.com/neo4j/apoc/pull/381"
}
|
gharchive/pull-request
|
[ZkSGN6PO] apoc.export.graphml.all doesn't accept absolute Windows paths
Changed export procedures to handle paths consistently with imports.
Added a note for Windows users in case of a c:/path/to/file path.
Tested directly on Windows.
It would be great to add tests for both Windows and Unix paths, but I tried adding tests to emulate a Windows path (e.g. with this one), and they don't work properly.
Maybe we can create a separate card to try to add them?
LGTM. Can you add a card for exploring how to add Windows tests like you suggested? Sounds like a good idea to me :)
Added a Trello card with id o1mFBgUF :)
|
2025-04-01T04:34:52.066218
| 2016-05-13T08:31:11
|
154659633
|
{
"authors": [
"Corebel",
"blueyed",
"edwardsmit",
"junkblocker",
"kbarrette"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8987",
"repo": "neomake/neomake",
"url": "https://github.com/neomake/neomake/issues/408"
}
|
gharchive/issue
|
Neomake no longer shows loclist or warning-signs
As of the new behavior added in https://github.com/neomake/neomake/commit/9d7d3f8608c71f332e69753f8aea8f8450a21541 by @blueyed, my loclist isn't showing anymore, nor do the warning-signs show.
My messages keep getting filled with the feedback message of Neomake that eslint exited with exit code 1.
I use the latest neovim from master branch, and configure neomake as:
" This setting will open the |loclist| or |quickfix| list (depending on whether
" it is operating on a file) when adding entries. A value of 2 will preserve the
" cursor position when the |loclist| or |quickfix| window is opened. Defaults to 0.
let g:neomake_open_list = 2
" Only use eslint
let g:neomake_javascript_enabled_makers = ['eslint']
" Use the fix option of eslint
let g:neomake_javascript_eslint_args = ['-f', 'compact', '--fix']
" Callback for reloading file in buffer when eslint has finished and maybe has
" autofixed some stuff
function! s:Neomake_callback(options)
if (a:options.name ==? 'eslint') && (a:options.has_next == 0)
checktime
endif
endfunction
" Call neomake#Make directly instead of the Neomake provided command
autocmd BufWritePost,BufEnter * call neomake#Make(1, [], function('s:Neomake_callback'))
My current working workaround is pinning neomake at the commit before the changes applied in the above referenced commit
@edwardsmit
Thanks for the report and sorry for the trouble!
Can you check which of your settings causes this, please?
@blueyed : It's not that it's not highlighting, I think it may not be running at all or something. The reason I think it is so is that the :lwindow is empty in the case of HEAD while for 4f2b11f it is not. Tried with a very minimalistic config on MacVim.
set nocompatible
set number " Show line numbers with auto width adjust
set backspace=2 " Allow backspacing over to the previous lines. same as ":set backspace=indent,eol,start" but backwards compatible
set wildmenu " Command line completion shows a list of matches
set wildmode=list:longest,full " Specified how command line completion works
set laststatus=2 " Always show a status line
set whichwrap=b,s,<,>,[,] " Allow cursor keys to move to previous/next line
set showcmd " Let me see what I type
syntax enable " Enable syntax AND do not override my settings
autocmd BufWritePost * Neomake
@junkblocker
It is working for me and others, and your example does not include any neomake config.
Do you mean it happens with the default config already for you, for every file, e.g. with filetype=shell?
Yes, I tested with just the config I listed on a:
- shell script - gets the default shellcheck checker run on it.
- HTML file - gets the default tidy checker run on it.
Here's verbose run output produced using
set verbose=100 verbosefile=/tmp/verbose | Neomake
on /tmp/foo, which is an HTML file.
same in ordinary vim/macvim
Same problem with homebrew vim, vimrc of:
set nocompatible
call plug#begin('~/.vim/plugged')
Plug 'neomake/neomake'
call plug#end()
$ uname -a
Darwin xxxxxx 15.4.0 Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64
$ vim --version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Dec 15 2015 13:21:06)
MacOS X (unix) version
Included patches: 1-963
Compiled by Homebrew
Huge version without GUI. Features included (+) or not (-):
+acl +farsi +mouse_netterm +syntax
+arabic +file_in_path +mouse_sgr +tag_binary
+autocmd +find_in_path -mouse_sysmouse +tag_old_static
-balloon_eval +float +mouse_urxvt -tag_any_white
-browse +folding +mouse_xterm -tcl
++builtin_terms -footer +multi_byte +terminfo
+byte_offset +fork() +multi_lang +termresponse
+cindent -gettext -mzscheme +textobjects
-clientserver -hangul_input +netbeans_intg +title
+clipboard +iconv +path_extra -toolbar
+cmdline_compl +insert_expand +perl +user_commands
+cmdline_hist +jumplist +persistent_undo +vertsplit
+cmdline_info +keymap +postscript +virtualedit
+comments +langmap +printer +visual
+conceal +libcall +profile +visualextra
+cryptv +linebreak +python +viminfo
+cscope +lispindent -python3 +vreplace
+cursorbind +listcmds +quickfix +wildignore
+cursorshape +localmap +reltime +wildmenu
+dialog_con -lua +rightleft +windows
+diff +menu +ruby +writebackup
+digraphs +mksession +scrollbind -X11
-dnd +modify_fname +signs -xfontset
-ebcdic +mouse +smartindent -xim
+emacs_tags -mouseshape -sniff -xsmp
+eval +mouse_dec +startuptime -xterm_clipboard
+ex_extra -mouse_gpm +statusline -xterm_save
+extra_search -mouse_jsbterm -sun_workshop -xpm
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
2nd user vimrc file: "~/.vim/vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/local/share/vim"
Compilation: /usr/bin/clang -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -Os -w -pipe -march=native -mmacosx-version-min=10.11 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1
Linking: /usr/bin/clang -L. -L/usr/local/lib -L/usr/local/lib -Wl,-headerpad_max_install_names -o vim -lm -lncurses -liconv -framework Cocoa -fstack-protector -L/System/Library/Perl/5.18/darwin-thread-multi-2level/CORE -lperl -framework Python -lruby.2.0.0 -lobjc
$
I can reproduce it now using Vim.
Is everybody here using Vim?
@edwardsmit (OP) does not seem to use Vim, but Neovim though?!
@blueyed, you are right. It doesn't seem to happen with Neovim. Since you asked, I used Neovim for a while but decided it wasn't stable/beneficial enough for me yet. And now that Bram has finally started playing catch-up and added async job control to Vim, the reasons to switch have reduced further for me.
Well, Neomake has not adopted the Vim way yet.. :D
I'm in the process of setting up a testing framework for Neomake currently, with the first test being this essential thing.. stay tuned.
Regarding this issue I'd say it's a good time to consider switching to Neovim.. ;)
Regarding this issue I'd say it's a good time to consider switching to Neovim.. ;)
;) I know you say this half in jest but if it comes to that, I'd probably go back to using Syntastic. I have to/can/run/use vim on more diverse systems (OSes/versions) than neovim is currently practical to use at this point. These can be custom appliances without compilers which already have vim on them or I don't have install access on. If I constantly need to keep watching out for such downgraded cases where neovim is not available, it's less mental/habitual/configuration/maintenance overhead for me to just keep using vim. Usually I am on the newest version of everything I run but this is one reason which keeps me using vim over neovim and neobundle over dein.vim etc. So, already considered and had to go back to vim :)
Looking forward to the fix!
@junkblocker
Well, I get your rant, but the point is that you're one step ahead: it works with Neovim, but not Vim.. ;)
And I am certainly surprised about Vim-peeps using Neomake instead of Syntastic. I consider the latter to still be more robust and feature-complete, and only moved over to Neomake (and became a maintainer) because of the async processing (which all Vim users are missing?!).
Why are Vim users using Neomake instead of Syntastic?
Anyway, I'm also looking forward to a fix, of course (and am sorry to have broken this).
And I would really appreciate it if people would dig into the source and come up with a fix and PRs for things they notice - after all, we're developers and should develop instead of +1'ing issues we're affected by.
Anyway, I recommend going back to https://github.com/neomake/neomake/commit/9d7d3f8608c71f332e69753f8aea8f8450a21541 for all Vim users, until this issue is resolved.
I will only start investigating when tests are working, and it's currently blocked by Vim and Neovim not behaving the same with regard to output (hopefully solved by https://github.com/junegunn/vader.vim/pull/72).
It's very frustrating that there are no tests currently (and that is true for most of the Vim plugins).
My current idea is to use Vader, based on the test framework I've developed for https://github.com/blueyed/vim-diminactive/.
And I am certainly surprised about Vim-peeps using Neomake instead of Syntastic. I consider the latter to still be more robust and feature-complete, and only moved over to Neomake (and became a maintainer) because of the async processing (which all Vim users are missing?!).
I'd guess that's the reason for most everybody using neomake.
Why are Vim users using Neomake instead of Syntastic?
Continuing from previous .. it is good enough, even if not equal or better, for most cases and also requires one less config to maintain. And when somebody does have a chance to, or checks back on neovim progress, they have the config ready to use.
I use NVIM v0.1.5-292-gc17f6c5 in combination with Neomake latest, no other plugins.
I've narrowed it down to the combination of an autocmd on BufEnter combined with g:neomake_open_list having the value of 2. When setting the value to 1, I get stuck in the LocList and can't get out of it anymore (new loclists get opened when changing the window). Setting the value to 0 doesn't cause a problem.
let g:neomake_open_list = 2
autocmd BufWritePost,BufEnter * Neomake
Tested this with both javascript (eslint) and bash-files (shellcheck)
Continuing from previous .. it is good enough, even if not equal or better, for most cases and also requires one less config to maintain. And when somebody does have a chance to, or checks back on neovim progress, they have the config ready to use
This is exactly why I'm interested in Neomake. I HATE Syntastic blocking the UI, but I don't always use Neovim, and I don't want to maintain separate configs.
JFI: I have a fix and (more importantly) a test (suite) for this. Need to clean it up / refactor it, but it's expected to be fixed soonish.
@kbarrette
We might all be better off by adding async support to Syntastic.. at least that's what I've thought the last days - given that it's more mature altogether.
#458 is ready for review. It should fix this regression with Vim and adds a test suite to make sure this does not happen again.
|
2025-04-01T04:34:52.121341
| 2023-04-29T09:41:29
|
1689486906
|
{
"authors": [
"kelvich",
"koivunej",
"petuhovskiy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8988",
"repo": "neondatabase/neon",
"url": "https://github.com/neondatabase/neon/pull/4122"
}
|
gharchive/pull-request
|
Override sharded-slab to increase MAX_THREADS
Describe your changes
Add patch directive to Cargo.toml to use patched version of sharded-slab: https://github.com/neondatabase/sharded-slab/commit/98d16753ab01c61f0a028de44167307a00efea00
The patch changes the MAX_THREADS limit from 4096 to 32768. This is a temporary workaround for using tracing from many threads in the safekeeper code, until the async safekeepers patch is merged to main.
Note that the patch can affect other Rust services, not only the safekeeper binary.
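For reference, a plausible sketch of what such a Cargo.toml override could look like (the exact section the PR adds is not shown here; the repository URL and revision are taken from the description above):
[patch.crates-io]
sharded-slab = { git = "https://github.com/neondatabase/sharded-slab", rev = "98d16753ab01c61f0a028de44167307a00efea00" }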
Issue ticket number and link
Checklist before requesting a review
[X] I have performed a self-review of my code.
[ ] If it is a core feature, I have added thorough tests.
[ ] Do we need to implement analytics? if so did you add the relevant metrics to the dashboard?
[ ] If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.
Checklist before merging
[ ] Do not forget to reformat commit message to not include the above checklist
Note that patch can affect other rust services, not only the safekeeper binary.
Been reviewing the sharded-slab. If I understand correctly, there is now a Vec::<*mut T>::with_capacity(32768) instead of Vec::with_capacity(4096). The individual slabs are allocated on-demand and freed ones are kept around, so I would not expect anything interesting to happen.
I was a bit spooked by that comment and #ifdef:
#[cfg(target_pointer_width = "64")]
const MAX_THREADS: usize = 32768;
#[cfg(target_pointer_width = "32")]
// TODO(eliza): can we find enough bits to give 32-bit platforms more threads?
const MAX_THREADS: usize = 128;
I wonder if they store those indices in the unused bits of pointers (which is kind of common in lock-free algorithms). If that is the case, it would break miserably.
Yes, I noticed that as well. cargo hack test passed and I think there are specifically tests for packing the bits, so I think we are good. I can continue checking if I have time.
|
2025-04-01T04:34:52.124875
| 2024-04-25T15:17:10
|
2263879259
|
{
"authors": [
"skyzh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8989",
"repo": "neondatabase/neon",
"url": "https://github.com/neondatabase/neon/pull/7514"
}
|
gharchive/pull-request
|
vm-image: add sqlexporter for autoscaling metrics
Problem
As discussed in https://github.com/neondatabase/autoscaling/pull/895, we want to have a separate sql_exporter for simple metrics to avoid overloading the database, because the autoscaling agent needs to scrape at a higher frequency. The new exporter is exposed at port 9499. I didn't do any testing for this pull request, but given it's just a configuration change I assume this works.
Summary of changes
Checklist before requesting a review
[x] I have performed a self-review of my code.
[ ] If it is a core feature, I have added thorough tests.
[ ] Do we need to implement analytics? if so did you add the relevant metrics to the dashboard?
[ ] If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.
Checklist before merging
[ ] Do not forget to reformat commit message to not include the above checklist
tested in staging:
> curl localhost:9499/metrics
# HELP lfc_approximate_working_set_size Approximate working set size in pages of 8192 bytes
# TYPE lfc_approximate_working_set_size gauge
lfc_approximate_working_set_size 269
# HELP lfc_cache_size_limit LFC cache size limit in bytes
# TYPE lfc_cache_size_limit gauge
lfc_cache_size_limit 6.30194176e+08
# HELP lfc_hits lfc_hits
# TYPE lfc_hits gauge
lfc_hits 0
# HELP lfc_misses lfc_misses
# TYPE lfc_misses gauge
lfc_misses 279
# HELP lfc_used LFC chunks used (chunk = 1MB)
# TYPE lfc_used gauge
lfc_used 66
# HELP lfc_writes lfc_writes
# TYPE lfc_writes gauge
lfc_writes 279
|
2025-04-01T04:34:52.179019
| 2022-12-26T22:40:22
|
1511290996
|
{
"authors": [
"RaWolFoX",
"neoteche"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8990",
"repo": "neoteche/termux-xfce",
"url": "https://github.com/neoteche/termux-xfce/issues/1"
}
|
gharchive/issue
|
Browser
Don't use otter-browser. https://github.com/termux/termux-packages/issues/12813#issue-1433932823
use Firefox instead.
Replaced Otter with Firefox. Check https://github.com/neoteche/termux-xfce/commit/20ae814afe86bb373b4908f35d0855920524b72c.
|
2025-04-01T04:34:52.267050
| 2023-03-27T07:40:32
|
1641563988
|
{
"authors": [
"glepnir",
"sigzegv",
"uga-rosa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8991",
"repo": "neovim/nvim-lspconfig",
"url": "https://github.com/neovim/nvim-lspconfig/issues/2530"
}
|
gharchive/issue
|
sqls have been deprecated.
Language server
sqls
Requested feature
sqls is now publicly archived.
I will leave it to you to decide whether to keep it or delete it, but I am reporting it.
Other clients which have this feature
No response
Currently we can mark it as deprecated. And I wonder, do we have a second server choice?
There is sqlls, which is already in.
Yep, I found it. But I currently can't push code to GitHub.. so I will mark sqls as deprecated later.
@glepnir hi :)
Pong. Sorry, I've been a little busy recently. We need to pop up a notification when a server is deprecated in a version; currently we don't have this. So I am thinking of adding a field in setup like deprecated or something else.
Why has sqls been deprecated? For now I tested sqlls; it doesn't seem to handle parameters in connection settings (for example to configure TLS, I didn't find anything), and the LSP client tells me that sqlls doesn't handle hovering and displaying definitions (sqls does).
sqls is now publicly archived.
|
2025-04-01T04:34:52.284578
| 2024-06-12T11:15:12
|
2348507607
|
{
"authors": [
"Catalin-Stratulat-Ericsson",
"kushnaidu",
"liamfallon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8992",
"repo": "nephio-project/catalog",
"url": "https://github.com/nephio-project/catalog/pull/40"
}
|
gharchive/pull-request
|
Allow configuration of an external webhook and its associated certs in the Porch API server
This change is related to https://github.com/nephio-project/nephio/issues/554 and previous PR https://github.com/nephio-project/porch/pull/53
This PR implements an optional Porch package which allows webhooks to be externally defined in YAML files and has cert-manager automatically create and update the certificates these webhooks will use.
This optional package requires cert-manager to be installed in the system; to avoid making Porch depend on cert-manager, the package has been made optional.
Note: The version used for the porch-server image is set to v3.0.0, which does not exist yet, as it requires the code written in the previously mentioned Porch PR to function; v2.0.0 would not work.
Note: the only changes in this package compared to the default Porch package are in the issuer-cert, validating-webhook and porch-server YAML files. All other files have stayed the same.
To aid reviewers:
% diff -qr nephio/core/porch nephio/optional/porch-cert-manager-webhook
Only in nephio/optional/porch-cert-manager-webhook: 2-2-issuer-cert.yaml
Only in nephio/optional/porch-cert-manager-webhook: 2-3-validating-webhook.yaml
Files nephio/core/porch/3-porch-server.yaml and nephio/optional/porch-cert-manager-webhook/3-porch-server.yaml differ
Files nephio/core/porch/9-porch-controller-packagevariants-clusterrole.yaml and nephio/optional/porch-cert-manager-webhook/9-porch-controller-packagevariants-clusterrole.yaml differ
Files nephio/core/porch/9-porch-controller-packagevariantsets-clusterrole.yaml and nephio/optional/porch-cert-manager-webhook/9-porch-controller-packagevariantsets-clusterrole.yaml differ
% diff -w nephio/core/porch/3-porch-server.yaml nephio/optional/porch-cert-manager-webhook/3-porch-server.yaml
40c40,41
< emptyDir: {}
---
> secret:
> secretName: porch-system-server-tls
46c47
< image: docker.io/nephio/porch-server:v2.0.0
---
> image: docker.io/nephio/porch-server:v3.0.0
68a70,71
> - name: USE_CERT_MAN_FOR_WEBHOOK
> value: "true"
% diff -w nephio/core/porch/9-porch-controller-packagevariants-clusterrole.yaml nephio/optional/porch-cert-manager-webhook/9-porch-controller-packagevariants-clusterrole.yaml
34,41d33
< - config.porch.kpt.dev
< resources:
< - repositories
< verbs:
< - get
< - list
< - watch
< - apiGroups:
% diff -w nephio/core/porch/9-porch-controller-packagevariantsets-clusterrole.yaml nephio/optional/porch-cert-manager-webhook/9-porch-controller-packagevariantsets-clusterrole.yaml
51,58d50
< - apiGroups:
< - config.porch.kpt.dev
< resources:
< - repositories
< verbs:
< - get
< - list
< - watch
Thanks for the diff for clarity. There should be no changes made to the cluster roles for package variants and package variant sets; it must have been due to me making this copy of the nephio/core/porch package before PR #37 was pulled in. This has been addressed in the latest update.
Thanks @Catalin-Stratulat-Ericsson
/approve
/approve
/approve
/lgtm
|
2025-04-01T04:34:52.286186
| 2023-06-22T15:16:56
|
1769910135
|
{
"authors": [
"electrocucaracha",
"johnbelamaric",
"radoslawc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8993",
"repo": "nephio-project/kpt-backstage-plugins",
"url": "https://github.com/nephio-project/kpt-backstage-plugins/pull/12"
}
|
gharchive/pull-request
|
adding license scan
Check for licensing info headers, fossology and scancode tests
/test presubmit-kbp-fossology
/lgtm
/approve
|
2025-04-01T04:34:52.292921
| 2024-07-26T15:26:29
|
2432472202
|
{
"authors": [
"arora-sagar",
"efiacor",
"kispaljr",
"liamfallon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8994",
"repo": "nephio-project/porch",
"url": "https://github.com/nephio-project/porch/pull/89"
}
|
gharchive/pull-request
|
Sandbox debugging
This PR tweaks the launch configuration and Makefile to allow remote debugging of Porch on a Nephio sandbox VM.
This tweak requires explicit inventory parameters to be used, see https://github.com/nephio-project/test-infra/pull/293
The documentation for remote VM debugging is in this PR: https://github.com/nephio-project/docs/pull/158
/assign @efiacor @kispaljr @arora-sagar
I couldn't get these Make targets to work with kpt as @kispaljr had done in the original run-in-kind targets so I used kubectl directly instead. I think this is because the Makefile is not in control of the kpt package used to deploy Porch in the sandbox. I admit that I did not go to extremes in trying to get kpt to work.
Also, this PR works on the assumption that the Porch package installed in the sandbox VM is identical to the package under development (CRDs etc) except for what's in the pod deployments. I think that's a fair assumption.
I've read the VM setup documentation that you linked in the PR, but I still do not understand what "default" and "override" deployments are.
@liamfallon could you elaborate on the new make targets a bit? thx
I've read the VM setup documentation that you linked in the PR, but I still do not understand what "default" and "override" deployments are. @liamfallon could you elaborate on the new make targets a bit, please?
The "default" install is the install that is already on the VM when it is installed by the nephio init script. In the sandbox, it is in /tmp/kpt-pkg/nephio/core/porch so it's the default install for Porch in Nephio.
The "override" one is the development version that we are installing, building, and debugging ont he vm, so it overrides the default install.
Maybe the names I picked are not great.
I've read the VM setup documentation that you linked in the PR, but I still do not understand what "default" and "override" deployments are. @liamfallon could you elaborate on the new make targets a bit, please?
They are removed in the new version of the PR.
/test presubmit-nephio-go-test
/test presubmit-nephio-go-test
/assign @efiacor
I tried it, it works, thank you @liamfallon
/approve
/lgtm
|
2025-04-01T04:34:52.295931
| 2018-02-15T16:05:34
|
297501135
|
{
"authors": [
"ederpf",
"nerdfury"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8995",
"repo": "nerdfury/Slack.Webhooks",
"url": "https://github.com/nerdfury/Slack.Webhooks/issues/41"
}
|
gharchive/issue
|
throwing notifications to the channel
Good afternoon!
I don't know whether to call it an issue or not.
I am trying to throw a notification to the channel when publishing a message that includes "@SomeGroup", but no notification is triggered. I have tried it in different ways using your package but without good results.
Is it possible? Because maybe it is not.
Thanks!
Eder
Hey, looks like it's possible, but you have to follow a specific format (not @groupname). I pulled the following from: https://api.slack.com/docs/message-formatting
"For paid account there is an additional command for User Groups that follows the format <!subteam^ID|handle>. (subteam is literal text. ID and handle are replaced with the details of the group.) These indicate a User Group message, and should cause a notification to be displayed by the client. User Group IDs can be determined from the usergroups.list API endpoint. For example, if you have a User Group named happy-peeps with ID of S012345, then you would use the command <!subteam^S012345|happy-peeps> to mention that user group in a message."
Hope that helps.
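A minimal sketch of sending such a mention (TypeScript with a plain fetch call rather than the Slack.Webhooks package; the webhook URL and User Group ID are hypothetical placeholders):
// Posting a message that mentions a User Group via an incoming webhook.
// <!subteam^ID|handle> is the format described in the Slack docs quoted above.
const webhookUrl = 'https://hooks.slack.com/services/T000/B000/XXXX'; // placeholder
async function notifyGroup(): Promise<void> {
  const payload = { text: 'Deploy finished <!subteam^S012345|happy-peeps>' };
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}
notifyGroup().catch(console.error);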
|
2025-04-01T04:34:52.301142
| 2023-06-09T21:43:20
|
1750527234
|
{
"authors": [
"f-dy",
"tancik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8997",
"repo": "nerfstudio-project/nerfstudio",
"url": "https://github.com/nerfstudio-project/nerfstudio/pull/2058"
}
|
gharchive/pull-request
|
colormaps: add "gray"
Add the trivial "gray" colormap, useful to e.g. extract accumulation.
Don't show "pca" in the viewer menu (it crashes the server).
The PCA was there for typing reasons. This logic would also need to change. Maybe there should be a separate 1D colormaps list. Currently "PCA" and "Default" are weird.
@tancik I updated to re-add "pca" to the list, and fixed the viewer code to not show the "pca" option for single-channel images
|
2025-04-01T04:34:52.306957
| 2021-09-17T01:08:35
|
998813815
|
{
"authors": [
"igorsantos07",
"nergal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8998",
"repo": "nergal/homeassistant-vacuum-viomi",
"url": "https://github.com/nergal/homeassistant-vacuum-viomi/issues/15"
}
|
gharchive/issue
|
Testing viomi.vacuum.v7
I'm testing the integration with my v7 (the v8's Chinese version), and those are the commands I tried that work great:
start
pause
stop
set_fan_speed
return_to_base
Fun fact: the same pause command on miio2 causes the vacuum to say something after pausing, while in your case it just pauses silently.
Commands that don't work:
locate: lol, raises NotImplementedError... but works fine on miio2
start_pause: AttributeError: 'MiroboVacuum2' object has no attribute 'async_start_pause' (same for miio2)
turn_off: doesn't seem to do anything? Didn't try further since I would probably be unable to test other services. Also doesn't do anything (nor any error logs) on miio2
I didn't try:
turn_on: made no sense since it is on all the time and I couldn't turn it off
send_command: no clue what it does
clean_spot: no clue how to use it
Also, since I'm comparing working commands with miio2, an addendum: miio2 seems to update the vacuum's state (as seen via https://github.com/denysdovhan/vacuum-card) faster than this integration :thinking:
Everything here was tried against the entity. Some commands were also tried against the device, which seems to work fine as well. Anything else I should check to say "this integration works fine with v7"? :)
@igorsantos07 I appreciate this feedback - thank you!
locate - I wasn't able to find anything in the docs nor in the API that does this feature for my particular device, however, I can implement it "blindly" since it works for v7
start_pause - this is the issue for sure, will be refactored
turn_off/turn_on - needs to be retested on my end
As for the clean_spot - this is a part of the feature that I'm going to implement someday. It is about cleaning only a predefined part of the territory (in the same way as Xiaomi Home app does for rooms)
|
2025-04-01T04:34:52.501842
| 2021-11-06T22:34:35
|
1046611126
|
{
"authors": [
"PatrickWe",
"netcupClaudiaM"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8999",
"repo": "netcup-community/community-tutorials",
"url": "https://github.com/netcup-community/community-tutorials/pull/41"
}
|
gharchive/pull-request
|
Add tutorial how to install and configure fail2ban on debian
I have read and understood the Contributor's Certificate of Origin at the end of the template and I hereby certify that I meet the contribution criteria described in it.
Signed-off-by: (Patrick Weber<EMAIL_ADDRESS>
Hi PatrickWe, thank you for contributing! It's been a while since you opened this PR and it probably is an open secret we're currently overwhelmed with reviewing all the awesome tutorials we got. Please be patient - we're about to publish tutorials this and next week. Thank you!
|
2025-04-01T04:34:52.576979
| 2018-03-26T12:09:23
|
308545407
|
{
"authors": [
"coveralls",
"shinytang6"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9000",
"repo": "netjson/netjsongraph.js",
"url": "https://github.com/netjson/netjsongraph.js/pull/59"
}
|
gharchive/pull-request
|
[Packaging] updated webpack configuration
Refer to https://github.com/webpack/webpack-dev-server/issues/66
Coverage decreased (-0.8%) to 15.663% when pulling 686c6212297d0ad77ded8c7331cc847aa6e5093e on shinytang6:update-webpack into b2f1ca54b7dc217bb21fda1a31ebed70fee7b03a on netjson:dev.
|
2025-04-01T04:34:52.586491
| 2020-04-27T18:26:10
|
607757038
|
{
"authors": [
"bencao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9001",
"repo": "netlify/cli",
"url": "https://github.com/netlify/cli/pull/865"
}
|
gharchive/pull-request
|
fix the dev server issue when proxy server returns 304 while redirect status is force set to 200
- Summary
Currently, if we have a redirect rule in netlify.toml with force = true and status = 200, the response from the proxy server will always be 200.
This behavior is problematic when the proxy returns 304 with an empty body: the browser will always render a blank page.
The change proposed in this PR is to pass 3xx codes back to the browser and never override them.
- Test plan
Before: page request returns status 200 with an empty body.
After: page request returns status 304.
- Description for the changelog
When the proxy server returns a status code between 300 and 399, the returned status will honor the proxy server status instead of using the one designated in the redirect rule.
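A hypothetical sketch of the proposed rule (illustrative only, not the actual netlify-cli proxy code): only force the configured status when the upstream response is outside the 3xx range.
// "redirectStatus" stands in for the status from the matching redirect rule (e.g. 200 with force = true).
function resolveStatus(upstreamStatus: number, redirectStatus: number, force: boolean): number {
  // Leave 3xx responses untouched so the browser can handle them (e.g. serve a 304 from
  // its cache) instead of rendering an empty body with a forced 200.
  if (upstreamStatus >= 300 && upstreamStatus < 400) {
    return upstreamStatus;
  }
  return force ? redirectStatus : upstreamStatus;
}
// Example: a forced 200 rule with an upstream 304 now yields 304.
console.log(resolveStatus(304, 200, true)); // 304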
Closing and waiting for a better approach.
|
2025-04-01T04:34:52.592991
| 2017-03-18T02:45:25
|
215161202
|
{
"authors": [
"erquhart",
"mittalyashu",
"tech4him1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9002",
"repo": "netlify/netlify-cms",
"url": "https://github.com/netlify/netlify-cms/issues/302"
}
|
gharchive/issue
|
Add a submit button to the search bar
Because it's the right thing to do.
Requirements
Add a "Search" button to submit the search form in the app header
Use the standard button from React Toolbox
We should probably update the color of the button to the primary brand color, as used in the header text
We may as well update all primary buttons in the CMS to use that color, but this can be split off if it's out of the way to do so
@erquhart Can you explain why this was closed? I still think it is unclear that you have to push enter to search.
Busted. Maybe we just shouldn't have an enter button at all, just a 1 second debounce (in case an integration is in use).
I would be cool with an instant search because of the small usable space we have, although I still think that is less clear.
Ideally I think it would look like this for all backends:
A limited number of results appear in a dropdown directly beneath the search input while typing
Pressing enter or clicking submit takes you to a full results view
Why do we need a "Search" button?
Even GitHub doesn't have a search button.
Many interfaces don't, it's a fair point.
I think we should have one of two things here:
Instant search
A search button
Right now, if I go to type in the search box and don't think to press "Enter", I get nothing: no feedback at all on what to do.
I agree with @tech4him1, we should go for instant search instead of a search button. Every user wants the information ASAP without doing much work, and that's why instant search will be more useful.
As we are discussing search, how about we add Algolia, where users can add API keys and the search bar will be replaced with an Algolia search bar? There users can search not only the title, but also the content inside each post.
Instant search is a more intuitive approach. Submission is implicit, which is why I created this issue in the first place, but it's a common enough pattern that it shouldn't be a problem.
There's a custom Algolia integration already in the CMS, it just isn't a public feature yet. That's pending backend/integration API redesign work that's currently underway.
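A minimal sketch of the debounced instant-search idea discussed above (hypothetical names, not the Netlify CMS implementation):
// Debounce the search callback so queries fire roughly one second after the user stops
// typing, which also keeps request volume low when a search integration is in use.
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
// Hypothetical wiring: runSearch stands in for whatever triggers the CMS search query.
declare function runSearch(query: string): void;
const onSearchInput = debounce(runSearch, 1000);
// searchInput.addEventListener('input', e => onSearchInput((e.target as HTMLInputElement).value));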
|
2025-04-01T04:34:52.599011
| 2023-01-23T10:14:00
|
1552829947
|
{
"authors": [
"HsiangNianian"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9003",
"repo": "netlify/netlify-cms",
"url": "https://github.com/netlify/netlify-cms/issues/6663"
}
|
gharchive/issue
|
TypeError: Cannot read properties of undefined (reading 'path')
Describe the bug
I tried to upload or delete pics and nothing happens.
In addition, it seems that there is something wrong with creating a new post.
To Reproduce
none
Expected behavior
none
Screenshots
Applicable Versions:
Netlify CMS version<EMAIL_ADDRESS>Git provider: github
Browser version: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36
CMS configuration
backend:
  name: github
  repo: retrofor/retrofor.github.io
publish_mode: editorial_workflow
media_folder: static/img/post
public_folder: /img/post
collections:
  - name: blog
    label: Blog
    folder: blog
    create: true
    slug: "{{slug}}"
    fields:
      - label: Title
        name: title
        widget: string
      - label: Author
        name: author
        widget: string
        default: WithdewHua
      - label: Type
        name: type
        widget: string
        default: post
      - label: Categories
        name: categories
        widget: list
      - label: Tags
        name: tags
        widget: list
      - label: Series
        name: series
        widget: list
        required: false
      - label: Publish Date
        name: date
        widget: datetime
      - label: Lastmod
        name: lastmod
        widget: date
        default: ""
        required: false
        format: YYYY-MM-DD
      - label: Description
        name: description
        widget: text
        required: false
      - label: Featured Image
        name: featured
        widget: image
        required: false
      - label: Featured Alt
        name: featuredalt
        widget: string
        required: false
      - label: Featured Path
        name: featuredpath
        widget: string
        default: img/post
        required: false
      - label: Slug
        name: slug
        widget: string
        required: false
      - label: Draft
        name: draft
        widget: boolean
        default: false
        required: false
      - label: Show TOC
        name: showtoc
        widget: boolean
        default: true
        required: false
      - label: Post Content
        name: body
        widget: markdown
    publish: true
    type: folder_based_collection
    sortable_fields:
      - commit_date
      - title
      - date
      - author
      - description
    view_filters: []
    view_groups: []
slug:
  encoding: unicode
  clean_accents: false
  sanitize_replacement: "-"
isFetching: false
error: null
Additional context
other errors:
Failed to persist entry: API_ERROR: Although you appear to have the correct authorization credentials, the retrofor organization has enabled OAuth App access restrictions, meaning that data access to third-parties is limited. For more information on these restrictions, including how to enable this app, visit https://docs.github.com/articles/restricting-access-to-your-organization-s-data/
Ohh, I solved it.
|
2025-04-01T04:34:52.600576
| 2017-08-15T09:50:04
|
250270392
|
{
"authors": [
"bdougie",
"javimosch"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9004",
"repo": "netlify/staticgen",
"url": "https://github.com/netlify/staticgen/pull/290"
}
|
gharchive/pull-request
|
staticstuff renaming
I renamed the project.
I do not know if I need to rename the file as well (staticstuff.md).
Please rename the file as well @javimosch
|
2025-04-01T04:34:52.615728
| 2018-07-22T14:58:33
|
343412597
|
{
"authors": [
"Dhull442",
"netson"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9005",
"repo": "netson/ubuntu-unattended",
"url": "https://github.com/netson/ubuntu-unattended/issues/34"
}
|
gharchive/issue
|
Boot hangs after install on KVM
I tried to install the unattended iso file created using the script in a KVM virtual machine.
Everything goes fine during the install, but the machine hangs on the first boot after install. The same problem doesn't happen when I install the standard server iso file. Also, the same unattended iso file works fine with VirtualBox. I first thought the error might be due to some intel_rapl message at the beginning, but it's not.
Any chance you are able to make a screenshot via console or something of what the boot seems to be stuck on?
There was a blank screen opening up in the foreground which I was able to close using Alt+F4, and all the commands and everything were actually working in the background. So the issue is not actually due to preseeding and was probably due to a difference in VM settings set during creation.
|
2025-04-01T04:34:52.639059
| 2019-04-07T09:33:23
|
430124064
|
{
"authors": [
"RiKap",
"f3l1x"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9006",
"repo": "nettrine/migrations",
"url": "https://github.com/nettrine/migrations/pull/17"
}
|
gharchive/pull-request
|
Next: Refactoring to support Nette v3
Require PHP ^7.2
Test against PHP 7.2-8.0
Support the new yummy nette/schema. Refactor configuration to allow passing any key needed.
{
  "require": {
    "nettrine/migrations": "^0.7.0"
  }
}
@f3l1x Hi. Do you need any help? :)
|
2025-04-01T04:34:52.654636
| 2020-05-06T11:45:02
|
613256739
|
{
"authors": [
"amizurov",
"itlidaye",
"normanmaurer",
"praveens4all"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9007",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/10252"
}
|
gharchive/issue
|
not clearing heap memory - threads allocated and not releasing netty latest version
Expected behavior
Heap memory should clear after processing messages
Actual behavior
Memory keeps piling up and is not released; my threads are still serving network messages and checking connections.
Steps to reproduce
Keep checking network connectivity and processing messages.
Minimal yet complete reproducer code (or URL to code)
server :
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast("decoder", new Decoder())
                .addLast("encoder", new Encoder())
                .addLast(group, new Handler());
        }
    }).option(ChannelOption.SO_BACKLOG, 1024)
    .childOption(ChannelOption.TCP_NODELAY, true)
    .childOption(ChannelOption.SO_KEEPALIVE, true);
List<Integer> ports = Arrays.asList(8080, 8081);
channels = new ArrayList<>(ports.size());
for (int port : ports) {
    Channel serverChannel = bootstrap.bind(port).sync().channel();
    channels.add(serverChannel);
}
decoder :
public class Decoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf byteBuf, List<Object> out) throws Exception {
        // ---- business logic to read ----
        // add the decoded object to out
        out.add(msg);
    }
}
handler :
public class Handler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // -- business logic --
        ctx.writeAndFlush(msg);
    }
}
Encoder :
public class Encoder extends MessageToByteEncoder {
    @Override
    protected void encode(ChannelHandlerContext ctx, Object msg, ByteBuf out) {
        // -- business logic --
        out.writeBytes(msg.getBytes());
    }
}
Netty version
4.1.45.Final
JVM version (e.g. java -version)
Adopt OpenJDK 11
OS version (e.g. uname -a)
Linux
@praveens4all Hi, by default Netty uses the PooledByteBufAllocator, which can cache allocated buffers for reuse.
@praveens4all Hi, by default Netty uses the PooledByteBufAllocator, which can cache allocated buffers for reuse.
Okay, thank you for the response. In my application I won't reuse any messages; the volume of processed messages was huge and caused out-of-memory because the old generation filled up my entire RAM. Is there any way to clear it or use another buffer? Please advise.
You can try UnpooledByteBufAllocator
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap
.childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);
...
See if it is suitable for your case, since now Netty will not be able to reuse buffers and will allocate them every time, which can lead to frequent GC.
Also alternative you can configure the PooledByteBufAllocator to only pool direct memory
Also alternative you can configure the PooledByteBufAllocator to only pool direct memory
Sorry for the late reply..... Please give me an example of how to do that; I did not configure anything explicitly.
My GC still fills up..
e.g. via system properties -Dio.netty.allocator.numHeapArenas=0
Thank you for the reply @amizurov.
I have tried
.childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT); still no change in memory.
As I explained, my server and client run 24/7, never disconnect and keep processing messages.
If I use the system property -Dio.netty.allocator.numHeapArenas=0, can I then revert the UnpooledByteBufAllocator change? Do I still need to use UnpooledByteBufAllocator? Please advise.
No, you can remove UnpooledByteBufAllocator and use the default pooled allocator with the heap arenas disabled, like @normanmaurer suggested.
okay, Thank you, will try and update you tomorrow.
No, you can remove UnpooledByteBufAllocator and use the default pooled allocator with the heap arenas disabled, like @normanmaurer suggested.
Still no changes in memory; I have added the VM argument.
How can I verify whether the properties are reflected or not?
I debugged my ByteToMessageDecoder, SimpleChannelInboundHandler and MessageToByteEncoder.. it seems the buffer-releasing code is not triggering, because my connection is still active.
What is this tool?
What is this tool?
dynatrace
Has the problem been solved?
No, we are still facing out-of-memory - the old generation space is filled up with Netty buffers, I think.. the buffers are not being cleared.
I think you will need to share more code and also ensure there are no buffer leaks detected.
|
2025-04-01T04:34:52.656149
| 2011-12-15T12:13:13
|
2565917
|
{
"authors": [
"normanmaurer",
"trustin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9008",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/133"
}
|
gharchive/issue
|
ChannelEvent should include info about direction
We should include information about the direction of a ChannelEvent (upstream vs downstream). Otherwise it's impossible to say whether a ChannelEvent was upstream or downstream.
Related issue: #111 - ChannelEventRunnableFilter could be replaced with ChannelEventFilter if this issue is resolved.
|
2025-04-01T04:34:52.660191
| 2012-03-07T17:15:03
|
3546892
|
{
"authors": [
"normanmaurer",
"trustin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9009",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/222"
}
|
gharchive/issue
|
HttpMessageDecoder fails on Hixie76
Tim Fox wrote:
Something I completely forgot about:
When I was running using Netty 3.4.0.alpha one of my websockets tests started failing, and I remembered that there was one other small change I had made to my hacked version of 3.2.5.final, this is not related to the changes I made for event loops.
Basically... HttpMessageDecoder:
protected boolean isContentAlwaysEmpty(HttpMessage msg) {
    if (msg instanceof HttpResponse) {
        HttpResponse res = (HttpResponse) msg;
        int code = res.getStatus().getCode();
        if (code < 200) {
            return true;
        }
        switch (code) {
        case 204: case 205: case 304:
            return true;
        }
    }
    return false;
}
The problem with this is it always returns true if code < 200. But for earlier versions (Hixie76) of websockets the return code is 101 (Response to a websockets upgrade), but the content is not empty in this case.
This causes Hixie76 handshakes to fail if Netty is used on the client side.
The fix is trivial, I just changed it to:
if (code < 200 && code != 101) {
Which seems to work fine.
Apologies, I should have reported this ages ago when I first found it!
279d859c7ef09f69a64f2a7627660cdfe008fe9c handles this issue more precisely
|
2025-04-01T04:34:52.690788
| 2017-01-18T19:04:43
|
201663328
|
{
"authors": [
"Scottmitch",
"idelpivnitskiy",
"jasobrown"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9010",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/6248"
}
|
gharchive/issue
|
Lz4FrameEncoder compression failure behavior
If Lz4FrameEncoder encounters an error during compression the buffered content is left untouched and an exception is thrown [1]. Since Lz4 allows data to be sent "uncompressed" should we just send the data to the remote peer "uncompressed" so data is not lost?
Outstanding questions:
Is it generally expected if we fail to compress then the data is generally "bad" and won't be useful to the peer?
Is it possible that appending more data will mitigate the condition which caused the exception in the first place. So its possible the user could catch the exception, append more data, and then the compression will succeed.
[1] https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/compression/Lz4FrameEncoder.java#L267
@jasobrown @idelpivnitskiy - FYI
@Scottmitch good point, thanks!
I've checked the source code of the lz4-java library; it looks like the most probable cause of an LZ4Exception is an insufficient length of the destination buffer/array: https://github.com/jpountz/lz4-java/search?q=throw+new+LZ4Exception&type=Code
But we check that before calling the compress method: https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/handler/codec/compression/Lz4FrameEncoder.java#L257
So, no need to worry about it.
The second possible cause is an IndexOutOfBoundsException while copying arrays. The author described this situation as "Malformed input at offset". I didn't understand in which situations it may happen... Could you take a look, please?
Is it generally expected if we fail to compress then the data is generally "bad" and won't be useful to the peer?
I think it's impossible for a library such as Netty to make that determination. All we can know is that the compression failed, for some reason. It might be the data, it might be an obscure edge case in the compression alg itself ("zip bombs", for example) - but we can't know. The best we can do, I think, is to send the uncompressed data along, and if the receiver chokes on the data, then the app will need to be responsible for closing the channel (practically, there's not much else it could do, I suspect).
Is it possible that appending more data will mitigate the condition which caused the exception in the first place. So its possible the user could catch the exception, append more data, and then the compression will succeed.
My suspicion is "no", from a quick reading of the library - would need to dig in more (not sure I have time in the next week for that).
My recommendation would be to try sending the payload uncompressed. It's kind of a roll of the dice as we know there's some kind of error, but maybe we get lucky and hope it's a compression alg problem that is recoverable. Either way, in case of a real data problem, either the sender or receiver channels will need to make the decision to close the connection.
I would also need to dig in to understand in more detail. However on first glance I suspect for our use case exceptions arise due to a bug in our code, compression library, or malicious/unexpected data. My main concern is about forwarding malicious data, but maybe it is OK to assume that is the peer's responsibility to handle.
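As a rough illustration of the "fall back to uncompressed" idea discussed above, here is a minimal sketch in the lz4-java style referenced in the thread. The class and method names are made up for illustration, and the real Lz4FrameEncoder block-framing details are intentionally omitted.
import java.util.Arrays;
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Exception;
import net.jpountz.lz4.LZ4Factory;

final class Lz4FallbackSketch {
    private static final LZ4Compressor COMPRESSOR = LZ4Factory.fastestInstance().fastCompressor();

    // Tries to compress src; on failure returns the original bytes so the caller
    // can emit an uncompressed LZ4 block instead of dropping the buffered data.
    static byte[] compressOrPassThrough(byte[] src) {
        byte[] dest = new byte[COMPRESSOR.maxCompressedLength(src.length)];
        try {
            int written = COMPRESSOR.compress(src, 0, src.length, dest, 0, dest.length);
            return Arrays.copyOf(dest, written);
        } catch (LZ4Exception e) {
            // Compression failed for some reason; the LZ4 frame format allows raw
            // blocks, so forward the data uncompressed rather than losing it.
            return src;
        }
    }
}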
|
2025-04-01T04:34:52.708670
| 2017-03-12T15:44:25
|
213612680
|
{
"authors": [
"CodingFabian",
"Scottmitch",
"denisgaebler",
"normanmaurer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9011",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/6535"
}
|
gharchive/issue
|
HTTP2 Channel between BIGENDIAN and LITTLEENDIAN platform does not work
Expected behavior
Fix the HTTP2 codec to be endianness-safe.
Actual behavior
Some places use endianness-dependent ByteBuf methods like buf.writeInt or buf.writeMedium, while other places in the same class use readUnsignedInt(buf) or other endianness-safe methods (implemented in Http2CodecUtil.java).
Steps to reproduce
Just try to connect a big-endian and a little-endian platform.
Minimal yet complete reproducer code (or URL to code)
Netty version
Probably all, I used netty-all-4.1.8.Final.jar
JVM version (e.g. java -version)
JDK 8 on z/OS for one side and OpenJDK 8 on the other side of the channel:
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu<IP_ADDRESS>-b13)
OpenJDK 64-Bit Zero VM (build 25.121-b13, interpreted mode)
OS version (e.g. uname -a)
Ubuntu 16.04 and z/OS 2.2
This is the exception. It seems there needs to be an agreement that binary fields are treated with the same endianness on both ends of a channel, say always little endian; otherwise the other end is confused.
Caused by: io.netty.handler.codec.http2.Http2Exception: Invalid frame length 1024.
at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.verifyWindowUpdateFrame(DefaultHttp2FrameReader.java:382)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processHeaderState(DefaultHttp2FrameReader.java:223)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:148)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:341)
at io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:220)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:401)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
Maybe the problem is in this line of code of DefaultHttp2FrameReader.java in package io.netty.handler.codec.http2 of netty/codec-http2:
// Read the header and prepare the unmarshaller to read the frame.
payloadLength = in.readUnsignedMedium();
It likely needs to be changed to readUnsignedMediumLE().
I found another piece of code in Http2CodecUtil.java which is not endianness-safe:
static void writeFrameHeaderInternal(ByteBuf out, int payloadLength, byte type,
        Http2Flags flags, int streamId) {
    out.writeMedium(payloadLength);
    out.writeByte(type);
    out.writeByte(flags.value());
    out.writeInt(streamId);
}
The writeMedium and writeInt calls depend on platform endianness but should use the same endianness on both ends of the wire.
From an endianness perspective, the HTTP2 implementation of the netty codec is broken. In DefaultHttp2FrameReader.java, DefaultHttp2FrameWriter.java and Http2CodecUtil.java there is a mix of endianness-dependent (out.write...) and endianness-independent (e.g. writeMediumInt(int, buf)) methods for writing to and reading from the byte buffer.
aha, thanks for digging this up, we were suspecting something like this in #6437 already
@Scottmitch @nmittler @ejona86 could someone of you have a look ?
When I managed to make it work, I replaced the writeInt method with the existing writeUnsignedInt method from Http2CodecUtil, and for readMedium and writeMedium I added two new methods to Http2CodecUtil like:
public static void writeUnsignedIntMedium(long value, ByteBuf out) {
out.writeByte((int) (value >> 16 & 0xFF));
out.writeByte((int) (value >> 8 & 0xFF));
out.writeByte((int) (value & 0xFF));
}
and
public static int readIntMedium(ByteBuf buf) {
return (buf.readByte() & 0xFF) << 16
| (buf.readByte() & 0xFF) << 8 | buf.readByte() & 0xFF;
}
Hope that helps.
The writeMedium and writeInt calls depend on platform endianness but should use the same endianness on both ends of the wire.
The intention is that by default ByteBuf's memory is in Network Byte Order (big endian). Methods on the ByteBuf interface without the LE suffix should be represented in memory as big endian. I found some bugs in the ByteBuf code which handles reading/writing primitives and fixed them in PR https://github.com/netty/netty/pull/6582.
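To illustrate the point about network byte order, a small standalone sketch (assuming a plain heap ByteBuf from Unpooled; this is illustrative and not taken from the codec itself):
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class EndiannessSketch {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer();

        // Default ByteBuf accessors use network byte order (big endian) regardless
        // of the platform, so a 3-byte frame length is laid out as 0x01 0x02 0x03.
        buf.writeMedium(0x010203);
        System.out.println(Integer.toHexString(buf.readUnsignedMedium())); // 10203

        // The *LE variants are explicit little-endian accessors; mixing the two
        // styles across the two ends of a connection is what produces errors like
        // "Invalid frame length".
        buf.writeMediumLE(0x010203);
        System.out.println(Integer.toHexString(buf.readUnsignedMediumLE())); // 10203

        buf.release();
    }
}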
There are also some unnecessary methods in codec-http2, and I submitted https://github.com/netty/netty/pull/6584 to clean them up.
Unfortunately I don't have access to a little endian machine at the moment. Would you mind running with PR https://github.com/netty/netty/pull/6584 and report back if there are still issues?
@denisgaebler - These fixes have been merged. Please try with 4.1 HEAD and report back.
@CodingFabian - fyi. could you also see if behavior has changed?
Hi,
I can do the test; however, I was using Hyperledger when I found the problem. Is there an easier testcase that I can execute?
I would prefer you run whatever scenario led to your discovery of this issue. If you are unable to build the software on your own, I would suggest waiting for the next release.
@denisgaebler any update ?
Hi Norman,
basically I have asked for an easier (http2 netty only) testcase because I have trouble building the hyperledger project, since there are lots of changes and my original testcase does not work anymore.
I will give it another try tomorrow and let you know.
Hi,
unfortunately I cannot test it with Hyperledger, because it uses grpc-netty:
https://github.com/grpc/grpc-java/tree/master/netty/src/main/java/io/grpc/netty
which hard-codes classes that are gone in 4.10:
import io.netty.handler.codec.http2.internal.hpack.Decoder;
So I will open a ticket with netty and continue.
Thanks.
thanks for the update. Let me close this issue for now. Please re-open if 4.10 doesn't solve this issue.
|
2025-04-01T04:34:52.716007
| 2019-10-09T12:11:29
|
504610174
|
{
"authors": [
"colinrgodsey",
"normanmaurer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9012",
"repo": "netty/netty",
"url": "https://github.com/netty/netty/issues/9651"
}
|
gharchive/issue
|
Add proper support for TCP Fast Open in native transport
We currently support setting the TCP_FAST_OPEN ChannelOption in our native epoll transport. While this is generally good enough for servers, it is not good enough for client-side usage. There we will need to use sendto(...) in place of connect(...) when TCP_FAST_OPEN is used.
See https://lwn.net/Articles/508865/
Going to take a crack at this today
I'm guessing we'll have to let the channel connect 'optimistically' succeed until data is first sent. Kind of necessary to allow the '0RTT' transmission there.
Now I'm getting all hung up trying to figure out how to handle TCP_FASTOPEN_CONNECT vs TCP_FASTOPEN... there doesn't seem to be any way to test for TCP_FASTOPEN_CONNECT support. I wonder if it's worth just replacing the TCP_FASTOPEN_CONNECT functionality with the classic TCP_FASTOPEN? The _CONNECT form seems to just be boilerplate to make the workflow with TCP_FASTOPEN easier. Netty users will be blind to the underlying implementation anyway (also, the TCP_FASTOPEN channel option is an Integer type, which makes it kind of odd for the client implementation, as there's no threshold).
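For reference, a minimal client-side sketch of how the option might be enabled on an epoll bootstrap. This assumes the boolean EpollChannelOption.TCP_FASTOPEN_CONNECT discussed here and is purely illustrative; it is not the implementation from the branch linked below, and the host/port are placeholders.
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollSocketChannel;
import io.netty.channel.socket.SocketChannel;

public final class FastOpenClientSketch {
    public static void main(String[] args) throws Exception {
        EpollEventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(EpollSocketChannel.class)
                    // Ask the kernel to defer the SYN until the first write so the
                    // payload can ride along with it (client-side fast open).
                    .option(EpollChannelOption.TCP_FASTOPEN_CONNECT, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Handlers that write data right after connect() go here.
                        }
                    });
            bootstrap.connect("example.com", 443).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}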
I've got this far: https://github.com/netty/netty/compare/4.1...colinrgodsey:4.1
Code format clobbered some random things, so need to fix that... plus some other cleanup, and some new tests.
(sorry, just journaling things here...)
Yea, I really think hijacking TCP_FASTOPEN_CONNECT is the best option. There's not actually a socket flag for client fast open anyway; it's just a special flag on "sendto". While it's kind of a "lie" that we're not actually using the new TCP_FASTOPEN_CONNECT flag, we are effectively doing the same thing (from what I can tell in the kernel code, the end effects are reasonably identical). So this extends existing functionality without introducing any visible API changes.
Works now, getting through tests... there seems to be some weirdness with LEVEL_TRIGGERED epoll, and one of the connect tests (it doesn't like the empty local address).
Alright, so I added client fast open to the epoll permutations and... lots of things are failing. The normal data transmission pattern stuff works, so that's good (the overall write pattern works), but it seems to be failing on some of the epoll trigger mechanics and connection mechanics. Not surprising about the connection mechanics, since this ultimately changes how all of that works. Still working through the tests.
Alright, so... some things to consider:
a) Should TCP_FASTOPEN_CONNECT be replaced with the more-compatible sendto/sendmsg form.
b) TCP_FASTOPEN_CONNECT doesn't seem to have been in any tests, and it likely exhibits the same behavior issues as the other form, but not sure.
c) Should the behavior changes be considered "expected", and a normal caveat of the feature? (it is after all a stateless connection). These caveats can be handled piecemeal in the Epoll test suite to help further cement which behaviors are expected to be violated.
Personally, I think it's best to leave the fastopen behavior in the epoll permutations, and for each test case that is violated by the fastopen behavior, provide a different EpollSocketTestPermutation that leaves out the fastopen client. That way it's searchable and can be used as a definitive source for producing a list of 'caveats' for TCP client fastopen.
|
2025-04-01T04:34:52.718994
| 2019-04-11T19:53:38
|
432220119
|
{
"authors": [
"IPvSean",
"gdykeman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9013",
"repo": "network-automation/linklight",
"url": "https://github.com/network-automation/linklight/pull/238"
}
|
gharchive/pull-request
|
organization for network tower 3.0
SUMMARY
organization for the tower exercises
ISSUE TYPE
Docs Pull Request
COMPONENT NAME
documentation for ansible_network
ADDITIONAL INFORMATION
@IPvSean Don't see any issues! Good to merge.
|
2025-04-01T04:34:52.826939
| 2020-05-10T12:32:18
|
615382372
|
{
"authors": [
"neumanrq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9014",
"repo": "neumanrq/cmd_stan_rb",
"url": "https://github.com/neumanrq/cmd_stan_rb/issues/5"
}
|
gharchive/issue
|
Properly parse error output
When something goes wrong, e.g. one makes a mistake in the model definition,
then output.csv does not contain any rows, but the fitting command prints useful information. Let's wrap the error output and transport it to the user for re-evaluation.
Example situation: when calling fit did not succeed, we need a proper error output.
|
2025-04-01T04:34:52.831244
| 2020-05-31T18:15:04
|
628014730
|
{
"authors": [
"mdinata",
"mhejrati",
"mhsekhavat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9015",
"repo": "neuralet/smart-social-distancing",
"url": "https://github.com/neuralet/smart-social-distancing/issues/13"
}
|
gharchive/issue
|
docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')
Hi, I am new to docker and followed the instructions exactly
$ sudo docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/data neuralet/smart-social-distancing:latest-jetson-nano
here's what I got
docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')
please advise what I may have missed
thank you
Hi @mdinata ,
Please replace HOST_PORT with the port number you would like to serve your application on. You can use 8000, for example, and run:
sudo docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/data
Then, you would open http://localhost:8000 in your browser to see the UI.
Thanks for the prompt feedback this case is solved.
The next error prompted: ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory.
Is it because this demo will run only on Jetpack 4.3, or is there any other way to do it than a fresh Jetpack 4.3 install?
Sorry, this is a different topic than the subject; I'd appreciate your comment before I close this issue.
Hi @mdinata
Can you please provide details on what hardware and software you are running this on? Is it jetson-nano that is booted with jetpack 4.3?
hi @mhejrati , my jetson is running on Jetpack 4.2.2. I guess that's the main problem of the last error. Thanks for the reply, i will close this issue
|
2025-04-01T04:34:52.835106
| 2021-04-27T09:16:00
|
868645349
|
{
"authors": [
"eldarkurtic",
"markurtz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9016",
"repo": "neuralmagic/sparseml",
"url": "https://github.com/neuralmagic/sparseml/issues/197"
}
|
gharchive/issue
|
Checkpoint path for integrations/timm not parsed
Describe the bug
Checkpoint path returned by zoo_model.download_framework_files(extensions=[".pth"]) is of List[str] type.
https://github.com/neuralmagic/sparseml/blob/0c8b1d4533ab43159a1ac34c5b9c250027f6b2e7/integrations/timm/train.py#L392
create_model(...) from the timm repo expects a str type (only one path)
https://github.com/neuralmagic/sparseml/blob/0c8b1d4533ab43159a1ac34c5b9c250027f6b2e7/integrations/timm/train.py#L397
Expected behavior
To have: args.initial_checkpoint = "model.pth" instead of args.initial_checkpoint = ["model.pth"].
To Reproduce
Exact steps to reproduce the behavior: use a local recipe and a SparseZoo checkpoint.
python integrations/timm/train.py \
/PATH/TO/DATASET/imagenet/ \
--sparseml-recipe /PATH/TO/RECIPE/recipe.yaml \
--initial-checkpoint zoo:model/stub/path \
--dataset imagenet \
--batch-size 64 \
--remode pixel --reprob 0.6 --smoothing 0.1 \
--output models/optimized \
--model resnet50 \
--workers 8 \
Errors
stat: path should be string, bytes, os.PathLike or integer, not list
Fixed with the PR that was merged in, not sure why this didn't close out once that went in.
|
2025-04-01T04:34:52.841565
| 2024-04-11T19:22:23
|
2238404022
|
{
"authors": [
"alyssadai",
"coveralls",
"surchs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9017",
"repo": "neurobagel/bagel-cli",
"url": "https://github.com/neurobagel/bagel-cli/pull/291"
}
|
gharchive/pull-request
|
[MNT] Release the CLI
Release Notes
We have updated the Neurobagel data model to allow users to specify phenotypic information at the session level (https://github.com/neurobagel/planning/issues/83). This release updates the CLI so you can create .jsonld files according to the new data model.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 99.715%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
1400
Relevant Lines:
1404
Coveralls
:rocket: PR was released in v0.2.2 :rocket:
NOTE: This appears to have been a test release to trigger a PR with (only) custom release notes, which was then reverted. Adding the skip-release label here to avoid it contaminating future releases.
|
2025-04-01T04:34:53.026764
| 2016-03-23T03:05:19
|
142845399
|
{
"authors": [
"alexbaden",
"jovo",
"willgray"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9018",
"repo": "neurodata/ocp-journal-paper",
"url": "https://github.com/neurodata/ocp-journal-paper/issues/22"
}
|
gharchive/issue
|
annotation table
a table listing all of our annotation datasets, much like https://github.com/neurodata/ocp-journal-paper/blob/gh-pages/Results/Tables/Table1/table1b_v2.md
but slightly different
@MrAE we can discuss tomorrow what i mean.
@willgray @alexbaden @j6k4m8 can you help?
is there an "easy" way to find all the "annotation" channels?
Channels themselves don't have a public/private attribute.
project_id | channel_name | channel_description | public
ac3 | annotation | annotation | 1
ac3ac4 | ac3_neuron_truth | ac3_neuron_truth | 0
ac3ac4 | ac3_synapse_truth | | 0
ac3ac4 | ac4_neuron_truth | | 0
ac3ac4 | ac4_synapse_truth | ac4_synapse_truth | 0
ac3_synTruth_v4 | annotation | annotation | 0
ac4 | annotation | annotation | 1
ac4_raw | neuron | neuron | 0
ac4_raw | synapse | synapse | 0
ac4_synapses | neurons | neurons | 0
ac4_synapses | synapses | synapses | 0
ac4_synTruth_v3 | annotation | annotation | 0
apiUnitTests | apiUnitTestKasthuri | apiUnitTestKasthuri | 0
apiUnitTests | apiUnitTestPropagate | | 0
ara_ccf2 | annotation | | 0
ara_ccf3 | annotation | | 0
atlastest | test1 | test1 | 0
atlastest | test2 | test2 | 0
atlastest | test3 | test3 | 0
bock11_parse | eval1 | | 0
bock7k_synapsesTruth1 | annotation | annotation | 0
bock7k_synapsesTruth2 | annotation | annotation | 0
bock7k_synapsesTruth3 | annotation | annotation | 0
bock7k_synapsesTruth4 | annotation | annotation | 0
bock7k_synapsesTruth5 | annotation | annotation | 0
bockvesicle | test1 | | 0
bockvesicle | test2 | | 0
bockvesicle | test3 | | 0
bockvesicle | test4 | | 0
bockVessel | annotation | annotation | 0
bockVessel1 | annotation | annotation | 0
bock_scalable_synapse | test1 | | 0
cajal_demo | anno | anno | 1
chen15 | chentest | | 0
chen15 | chentest2 | anno | 0
CM07_CellAnno | annotation | annotation | 0
Cocaine174_mask | annotation | annotation | 0
Cocaine178ImageLddmmABA | annotation | | 0
Cocaine178MaskLddmmABA | annotation | | 0
Cocaine178StanfordABA | annotation | | 0
collman14anno | annotation | annotation | 0
collman14anno_anish1 | annotation | annotation | 0
collman14anno_anish2 | annotation | annotation | 0
collman14_syndetect | annotation | annotation | 0
collman15anno | annotation | annotation | 0
collman15axons | annotation | annotation | 0
collman15dendrites | annotation | annotation | 0
collman_silane_mask_anno | annotation | annotation | 0
Control239ImageLddmmABA | annotation | | 0
Control239MaskLddmmABA | annotation | | 0
Control239StanfordABA | annotation | | 0
kasthuri11cc_vesicles_ac3 | annotation | annotation | 1
kasthuri11_inscribed_vesicles | annotation | annotation | 1
Ex10R55_anno1 | anno2 | | 0
Ex10R55_anno1 | psd_anno1 | psd_anno1 | 0
Fear199ImageLddmmABA | annotation | | 0
Fear199MaskLddmmABA | annotation | | 0
Fear199StanfordABA | annotation | | 0
flyemanno | annotation | annotation | 1
gelatin_detection_uploads | gelatin_detect_anno | nov 4 2015 | 0
gelatin_manual_annotations | gelatin_man_anno | nov 3 2015 | 0
gk1 | testanno | testanno | 0
golgi_v1 | gala_raw | | 0
golgi_v1 | gala_raw_erode | gala_raw_erode | 0
golgi_v1 | img2graph_v29_gala_node_paramset_3 | | 0
golgi_v1 | img2graph_v29_gala_node_paramset_3_erode | | 0
kasthuri11cc_vesicles_ac3 | annotation | annotation | 1
kasthuri11cc_vesicles_ac3 | annotation | annotation | 0
kasthuri11_inscribed_vesicles | annotation | annotation | 0
kasthuri14anno | annotation | annotation | 0
kasthuri14s1colANNO | annotation | annotation | 1
kasthuri2015_ramon | mitochondria | | 1
kasthuri2015_ramon | neurons | | 1
kasthuri2015_ramon | synapses | | 1
kasthuri2015_ramon | vesicles | | 1
kasthuri2015_ramon_v2 | neurons | | 1
kasthuri2015_ramon_v2 | neurons_meta | | 1
kasthuri2015_ramon_v2 | synapses | | 1
kasthuri2015_ramon_v2 | vesicles | | 1
kasthuri_ramon_v3 | mitochondria | | 1
kasthuri_ramon_v3 | vesicles | | 1
kasthuri2015_ramon_v4 | mitochondria | | 1
kasthuri2015_ramon_v4 | neurons | | 1
kasthuri2015_ramon_v4 | synapses | | 1
kasthuri2015_ramon_v4 | vesicles | | 1
kat11greencylinder | annotation | annotation | 1
kat11mito | annotation | annotation | 1
kat11mojocylinder | annotation | annotation | 1
kat11redcylinder | annotation | annotation | 1
kat11segments | annotation | annotation | 1
kat11synapses | annotation | annotation | 1
kat11vesicles | annotation | annotation | 1
kat14psd | annotation | annotation | 0
kat15segments | seg1 | seg1 | 0
kat15synapses | syn1 | syn1 | 0
kharris15apical_anno | annotation | annotation | 1
kharris15apical_axondendrite_catmaid | annotation | annotation | 1
kharris15apical_axon_catmaid | annotation | annotation | 1
kharris15apical_dendrite_catmaid | annotation | annotation | 1
kharris15apical_endosomal_catmaid | annotation | annotation | 1
kharris15apical_ersa_catmaid | annotation | annotation | 1
kharris15apical_gliasubcell_catmaid | annotation | annotation | 1
kharris15apical_glia_catmaid | annotation | annotation | 1
kharris15apical_mitomicro_catmaid | annotation | annotation | 1
kharris15apical_polyribo_catmaid | annotation | annotation | 1
kharris15apical_subcell_catmaid | annotation | annotation | 1
kharris15apical_synapse_catmaid | annotation | annotation | 1
kharris15oblique_anno | annotation | annotation | 1
kharris15oblique_axondendrite_catmaid | annotation | annotation | 1
kharris15oblique_axon_catmaid | annotation | annotation | 1
kharris15oblique_dendrite_catmaid | annotation | annotation | 1
kharris15oblique_gliasubcell_catmaid | annotation | annotation | 1
kharris15oblique_glia_catmaid | annotation | annotation | 1
kharris15oblique_subcell_catmaid | annotation | annotation | 1
kharris15oblique_synapse_catmaid | annotation | annotation | 1
kharris15spine_anno | annotation | annotation | 1
kharris15spine_axondendrite_catmaid | annotation | annotation | 1
kharris15spine_axon_catmaid | annotation | annotation | 1
kharris15spine_dendrite_catmaid | annotation | annotation | 1
kharris15spine_gliasubcell_catmaid | annotation | annotation | 1
kharris15spine_glia_catmaid | annotation | annotation | 1
kharris15spine_subcell_catmaid | annotation | annotation | 1
kharris15spine_synapse_catmaid | annotation | annotation | 1
krv2test | mitochondria | | 0
krv2test | neurons | | 0
krv2test | synapses | | 0
krv2test | vesicle | | 0
LP4 | annotation | annotation | 0
LP4merged | annotation | annotation | 0
manno | mito | mito | 0
MP4 | annotation | annotation | 0
ndio_demos | ramontests | | 1
rhoana_kasthuri11 | test1 | test1 | 0
rhoana_kasthuri11 | test2 | test2 | 0
rhoana_kasthuri11 | test3 | test3 | 0
ritaN2 | annotation | Annotation | 1
ritaN2_5 | annotation | Annotation | 1
ritaN2_four | annotation | Annotation | 1
S1_proj_test1 | annotation | annotation | 0
silane_detection_uploads | puncta_negative_detections | oct 28 2015 | 0
silane_detection_uploads | puncta_syn_v1 | puncta_syn_v1 | 0
silane_detection_uploads | puncta_upload_v1 | oct 27 2015 | 0
silane_manual_annotations | silane_man_anno | | 0
testanalyzeAugust3 | jordan | jordan | 0
testnov | neuron | | 0
testnov | synapse | | 0
testproject | test1 | test1 | 0
test_atlas1 | t1 | t1 | 0
test_atlas1 | t2 | t2 | 0
test_cajal | anno1 | anno1 | 0
test_graph_seg | annotation | annotation | 0
test_graph_syn | annotation | annotation | 0
test_ilastik | obj1 | obj1 | 0
test_ilastik | obj2 | obj2 | 0
test_ilastik | obj3 | obj3 | 0
test_ramonify2 | neuron | neuron | 0
test_ramonify2 | synapse | synapse | 0
test_ramonify | neuron | | 1
test_ramonify | synapse | | 1
test_upload2 | test1 | | 0
test_upload_kas | test1 | test1 | 0
test_upload_ramon | anno | anno | 0
test_xbrain_0816 | annotation | | 0
test_xbrain_0816 | test1 | test1 | 0
test_xbrain_0816 | test_xbrain_0816 | test_xbrain_0816 | 0
test_xbrain_jan | test1 | | 0
test_xbrain_jan | test2 | | 0
test_xbrain_jan | test2_delete | | 0
test_xbrain_jan | test3 | | 0
test_xbrain_jan | test3v | | 0
test_xbrain_jan | test4 | | 0
test_xbrain_jan | test4v | | 0
test_xbrain_jan | test5 | | 0
test_xbrain_jan | test5v | | 0
test_xbrain_jan | test6 | | 0
test_xbrain_jan | test6v | | 0
test_xbrain_jan | test9cellxy1 | | 0
test_xbrain_jan | test9cellxy2 | | 0
test_xbrain_jan | test9cellxyz | | 0
test_xbrain_jan | test9cellxyz2 | | 0
test_xbrain_jan | test9threshxyz | | 0
test_xbrain_october | test1 | test1 | 0
test_xbrain_october | uploadChannel | uploadChannel | 0
thatswhat | said | said | 0
thatswhat | she | she | 0
vesiclerf_example | op_point1 | precision: 0.26, recall: 0.92 | 1
vesiclerf_example | op_point2 | precision: 0.89, recall: 0.71 | 1
vesiclerf_example | op_point3 | precision: 1.00, recall: 0.22 | 1
vesicletest | test1 | | 0
vesicletest | test2 | | 0
synRFR12 | annotation | annotation | 0
vTest | test1 | | 0
vTest | test2 | | 0
vTest | test3 | test3 | 0
will_test1 | seg1 | | 0
will_test1 | seg2 | seg2 | 0
will_test1 | seg3 | seg3 | 0
xbrain_rfr1 | cell_seg | | 0
xbrain_rfr1 | vessel_seg | | 0
xbrain_rfr2 | cell_seg2 | | 0
xbrain_rfr2 | cell_seg3 | | 0
xbrain_rfr2 | vessel_seg | | 0
xbrain_rfr3 | cell_seg | | 0
xbrain_rfr3 | vessel_seg | | 0
xbrain_wow | feb2cell | | 0
xbrain_wow | test1 | | 0
xbrain_wow | test2 | | 0
xbrain_wow | test3 | | 0
amazing!
@j6k4m8 ask @willgray if you have time to make this table with me.
if not, @MrAE we will do it.
guys - let's not make this public -
also @jovo - i need some guidance on what should be public. I often make things public in the interest of open science, but many of these are mine and are draft/dev versions of things. So I'm fine claiming them, publishing them, whatever, but they aren't anything anyone should use really...no provenance and out of date, draft etc.
Should we make it so only things that are published are public?
@willgray good point! can somebody move these tables to the internal repo and delete them from here.
regarding which ones should be public, the new nomenclature is
"searchable" and "secret"
or
"public" and "secret"
so, i think searchable is the "right" answer, and only things that we want people to be able to find should be find-able. make sense?
maybe we should "force" users to populate the description field when making a project (I often am an offender here) for anything searchable. Then it would be easy to make a table!!! Might also be nice to have a way to flag things that aren't ready to be searchable but are complete (but maybe that's too complicated). Thinking about trying to distinguish between "test run" and "pre-pub"
I can clean up my stuff once this gets moved.
auto-ingest requires specifying.
too complicated for a 3rd level, you'll have to choose when it is ready to
be searchable.
also, searchable is at the token/project level, so cleaning up your stuff
might mean moving some channels to different projects?
|
2025-04-01T04:34:53.030606
| 2022-10-17T13:08:15
|
1411553658
|
{
"authors": [
"JoeZiminski",
"adamltyson"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9019",
"repo": "neuroinformatics-unit/datashuttle",
"url": "https://github.com/neuroinformatics-unit/datashuttle/issues/5"
}
|
gharchive/issue
|
Add documentation
Even at an early stage, it would be good to start adding docs on how to use the package, even if it's just in the readme.
Now might be a good time to also auto-generate API docs so these can be referenced in the README.
# TODO: not only upload_data and download_data support wildcards
|
2025-04-01T04:34:53.065557
| 2023-12-14T10:49:33
|
2041439363
|
{
"authors": [
"timonmerk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9020",
"repo": "neuromodulation/py_neuromodulation",
"url": "https://github.com/neuromodulation/py_neuromodulation/issues/272"
}
|
gharchive/issue
|
Bug with nm_channels "used" keyword
Currently this keyword only sets the ch_names parameter, but it does not affect the data shape passed to each feature function.
Therefore, it would be optimal to include an nm_channels class that can be passed as the first element in the preprocessors list in nm_run_analysis.
#274
|
2025-04-01T04:34:53.086131
| 2024-08-08T16:14:31
|
2456178385
|
{
"authors": [
"delphinepilon",
"maxradx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9021",
"repo": "neuropoly/slicer-manual-annotation",
"url": "https://github.com/neuropoly/slicer-manual-annotation/issues/53"
}
|
gharchive/issue
|
QIntValidator not working as expected
Describe the bug
The QIntValidator restrictions imposed on QLineEdits are not taken into account in the SlicerCART configuration windows.
To Reproduce
Steps to reproduce the behavior:
Go to the SlicerCART extension.
Select the "New configuration" option.
Click on "Next".
Select the option "CT" in the modality field.
See that it is possible to write letters in the fields "Window Level" and "Window Width".
Expected behaviour
The typed input should be restricted to integers, as requested in the code.
Screenshots
Desktop (please complete the following information):
OS: MacOS
Version : Sonoma 14.1.1
Additional context
The problem also occurs in the configuration window for a single label, in both the fields for the HU range and the 3 fields dedicated to the RGB values of the colour. The QIntValidator for the colours should only accept values between 0 and 255.
To solve issue 59, this was required and addressed in PR 81. Issue 53 is now fixed.
|
2025-04-01T04:34:53.088479
| 2024-07-12T13:22:31
|
2405606221
|
{
"authors": [
"jcohenadad",
"laurentletg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9022",
"repo": "neuropoly/slicer-manual-annotation",
"url": "https://github.com/neuropoly/slicer-manual-annotation/pull/41"
}
|
gharchive/pull-request
|
fixes #38 import missing packages. Removed PyQt5 import, uses qt instead
Updated PR.
Workaround created to import missing python packages when loading 3D Slicer, including a Qt message box prompt. There is probably a better way to install missing python packages through the extension manager. To check later if we decide to go that way.
Removed the PyQt5 import and just kept everything under qt.
Note I closed my previous PR since there were unnecessary lines added to my previous commit.
I'm closing this PR as it has been superseded by https://github.com/neuropoly/slicer-manual-annotation/pull/43 (which has just been merged)
|
2025-04-01T04:34:53.095539
| 2020-12-28T18:23:50
|
775529479
|
{
"authors": [
"joshuacwnewton",
"kousu"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9023",
"repo": "neuropoly/spinalcordtoolbox",
"url": "https://github.com/neuropoly/spinalcordtoolbox/pull/3135"
}
|
gharchive/pull-request
|
Replace troublesome unicode quote characters with more friendly ones
Checklist
GitHub
[X] I've given this PR a concise, self-descriptive, and meaningful title
[X] I've linked relevant issues in the PR body
[X] I've applied the relevant labels to this PR
[X] I've assigned a reviewer
PR contents
[ ] I've consulted SCT's internal developer documentation to ensure my contribution is in line with any relevant design decisions
[ ] I've added relevant tests for my contribution
[ ] I've updated the relevant documentation for my changes, including argparse descriptions, docstrings, and ReadTheDocs tutorial pages
Description
’ causes issues. Replace with '.
Linked issues
Part of fixing #3128.
Good thinking. :sweat_smile:
Oh I just realized this maybe wasn't quite the crash I thought it was: https://github.com/neuropoly/spinalcordtoolbox/issues/3128#issuecomment-751822864
Still, getting these out will make debugging the actual crashes easier.
Oh I just realized this maybe wasn't quite the crash I thought it was: #3128 (comment)
Still, getting these out will make debugging the actual crashes easier.
Thanks for letting me know. I've unlinked the issue for now. :slightly_smiling_face:
|
2025-04-01T04:34:53.113481
| 2021-06-18T17:31:37
|
925088345
|
{
"authors": [
"adelavega",
"jeromedockes",
"tsalo"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9024",
"repo": "neuroquery/neuroquery_data",
"url": "https://github.com/neuroquery/neuroquery_data/issues/3"
}
|
gharchive/issue
|
Apply BIDS-like naming convention for data files and reorganize them slightly
For NiMARE/Neurostore conversion, I think it would be useful to reorganize the files a bit. I have four proposed sets of changes:
First, it would be great to adopt some key-value pairs in the npz filenames, in a pseudo-BIDS format. Perhaps the following entities?
source: The origin of the counts/tfidf values, such as "title", "abstract", or "combined".
vocab: The set of terms. Maybe corpus would work here too?
type: Just "count" vs. "tfidf".
version: Unused at the moment (could be 1 I guess), but this would be particularly useful for Neurosynth.
So the files might be something like source-abstract_vocab-standard_type-count_neuroquery.npz.
Second, the vocabulary txt files could adopt a reduced version: vocab-standard_neuroquery.txt, vocab-expanded_neuroquery.txt, and vocab-combined_neuroquery.txt.
Third, since the PMIDs should be shared across all files in the dataset (unless I'm mistaken), then that could be moved to the top level. I think the same goes for the corpus metadata file.
Finally, I think the data could all go in training_data or a general data folder.
I would then try to mirror the naming convention in neurosynth-data (see https://github.com/neurosynth/neurosynth-data/issues/5), update NiMARE's ingestion function and fetchers (https://github.com/neurostuff/NiMARE/issues/522), and finally update the neurostore ingestion function as well.
thanks! all these changes sound good.
I would only keep the neuroquery_model.zip as it is, as it is a pre-trained model so something different than the training data, and it is downloaded and used by the neuroquery package
this:
https://github.com/neurostuff/NiMARE/issues/522#issuecomment-867169369
also still needs to be done (removing the few studies in training_data that are not used in neuroquery_model)
LGTM. I suppose since there's no actual standard to adhere to there's not much to worry about it, but it will make it easier to know what we're looking at.
Only concern is that it might possibly break some stuff in neurosynth python library but either way that is very deprecated at this point.
Thanks @jeromedockes and @adelavega. I took a crack at it in https://github.com/neurosynth/neurosynth-data/pull/6. One thing to note- once I started actually reorganizing the Neurosynth files, I ended up deciding to change the format slightly. Namely, I moved the database to a new "data" entity at the beginning and using the suffix for the data type (i.e., "database", "features", "vocab", and "ids"). Below is a screenshot of the list of version-7 files for Neurosynth. Please let me know if you see any issues with the updated convention:
LGTM. @jdkent do you want to comment on this?
btw, .npz was chosen just because it was convenient to use with the dependencies neuroquery already had but it has the disadvantage of being specific to scipy -- if you have another suggestion for serializing sparse matrices that can be changed as well
@jeromedockes That's a good point. I never work with sparse matrices, so I don't actually know anything about the different options. I don't have a problem with using scipy though.
For neurosynth/neurosynth-data#6 I used this notebook. I used scipy.sparse.csc_matrix to convert to a sparse matrix, but I realized that I don't know what specific format you used...
I'm thinking a better entity order might be data, version, vocab, source, weight. That way, related files will end up closer.
"features" and "database" are not very informative IMO... do you thing we could name them "textfeatures" or "vectorizedtext" and "coordinates", or something like that?
I didn't change Neurosynth's database files at all, beyond gzipping them, so they contain both coordinates and metadata. My thinking is that those files should have everything for the database outside of annotations.
Not sure what the best replacement for "features" is. Hypothetically, one could store a manually-annotated dataset in this format, in which case the annotations wouldn't come from text, but I don't know if that's a likely enough scenario to make excluding "text" from the name necessary. I prefer "textfeatures" over "vectorizedtext", though, since "vectorizedtext" feels a little specific.
My thinking is that those files should have everything for the database outside of annotations.
so the studies' metadata is duplicated for each coordinate?
Hypothetically, one could store a manually-annotated dataset in this format, in which case the annotations wouldn't come from text
wouldn't that be a different kind of file then? it probably wouldn't be sparse, there wouldn't be a vocab-, ...
I think the same file format is still valid for manual annotations. You're right that it probably wouldn't be sparse, although it definitely could be. In those kinds of annotations, I'd expect binary values, and the amount of zeros would depend on the scope of the ontology. If you pulled BrainMap and converted it to this format, I'd expect a fairly sparse matrix. I'd probably label a manually-annotated features file vocab-cogpo, vocab-cogat, or vocab-[author], depending on what ontology used.
I think the same file format is still valid for manual annotations.
I see, in that case if you prefer I'm ok with keeping "features" even though it's not very informative. the "_database" suffix OTOH contains zero information so we might as well not have it at all
We could separate any study-level information out, but then we're talking about managing, at minimum, four files instead of three.
yes that is how it was in neuroquery so far. it's not only about the size but avoiding the duplication makes it possible to have a meaningful index and makes it clearer that there is only one information. also, although it isn't the case now, text features or annotations of studies that don't contain any coordinates could be useful for example to compute term similarities, topic models etc and we would still want their metadata. this is not a very strong opinion on my part but I find distributing the join of the tables instead of their normal forms a bit surprising
Okay, so we can have coordinates.tsv.gz, with, at minimum, id, table_id, x, y, and z columns. Additional columns that'll be in the Neurosynth data would be peak_id and table_num. For NeuroQuery, the only additional column would be table_name, right?
In metadata.tsv.gz files, Neurosynth will have the columns id, doi, space, title, authors, year, and journal. NeuroQuery will have id, title, and pubmed_url, I think?
Does that work?
@jeromedockes do you have any concerns about the proposed changes? I can incorporate them fairly easily into the Neurosynth and NiMARE PRs, but I want to make sure they're good with you first.
sorry for the late reply! yes that sounds good, thanks. so IIUC on the NeuroQuery side what needs to change is renaming the files, changing from csv to tsv, and gzipping them?
I think so. Some of the NeuroQuery files might have "pmid" instead of "id" though.
Updated file list (just version 7):
Two small notes:
I used na_rep="n/a" to follow BIDS convention.
I used line_terminator="\n" just to be safe. I think that's also BIDS convention.
|
2025-04-01T04:34:53.156598
| 2016-07-26T05:55:22
|
167526702
|
{
"authors": [
"nictrix"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9025",
"repo": "newcontext/kitchen-terraform",
"url": "https://github.com/newcontext/kitchen-terraform/pull/12"
}
|
gharchive/pull-request
|
use proper switch for bundle command
Will fix https://github.com/newcontext/kitchen-terraform/issues/10
closing request, opening new one to another branch
|
2025-04-01T04:34:53.159704
| 2019-11-27T12:45:59
|
529324693
|
{
"authors": [
"newlandsvalley"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9026",
"repo": "newlandsvalley/tunebank",
"url": "https://github.com/newlandsvalley/tunebank/issues/6"
}
|
gharchive/issue
|
Add a test framework
see https://haskell-servant.readthedocs.io/en/stable/cookbook/testing/Testing.html.
Fixed by bcdf25d934a64a6d24bd7252ed851f7c518679e0.
|
2025-04-01T04:34:53.162220
| 2023-10-23T18:02:17
|
1957719198
|
{
"authors": [
"newnoiseanand"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9027",
"repo": "newnoiseworks/omgd",
"url": "https://github.com/newnoiseworks/omgd/issues/110"
}
|
gharchive/issue
|
need solution for paths where terraform apply makes only VPC network and not compute engine w/o saving state (terraform apply hangup, compute engine API not enabled, etc)
two situations where terraform apply runs partially and creates a VPC network but not compute engine, and doesn't make a terraform.tfstate file
Not setting up compute engine API on the project (likely to happen, have done it several times myself setting up GCP projects)
terraform apply seems to hang up in weird ways as well; it may be auto-approve issues, should be able to work around it
Two possible solutions:
Tell the user to manually delete VPC network in this case (temporary, not great)
Run this command to import it: terraform import google_compute_network.vpc_network nakama-instance-network - also kind of a patch but I could program it in
This should be a note in the README, and a bug to figure out down the line for now. removing from the milestone, will add to #50 for the README file though
|
2025-04-01T04:34:53.166979
| 2024-03-04T13:28:59
|
2166853878
|
{
"authors": [
"CLAassistant",
"akin-ozer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9028",
"repo": "newrelic/helm-charts",
"url": "https://github.com/newrelic/helm-charts/pull/1303"
}
|
gharchive/pull-request
|
Update README.md to add description for custom secret usage
Is this a new chart: no
What this PR does / why we need it: Documentation for custom secret usage in README.md
Which issue this PR fixes: none
Special notes for your reviewer: none
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:34:53.179159
| 2021-10-04T11:25:44
|
1015056862
|
{
"authors": [
"paologallinaharbur"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9029",
"repo": "newrelic/nri-kubernetes",
"url": "https://github.com/newrelic/nri-kubernetes/issues/220"
}
|
gharchive/issue
|
Modify integration to support sending data to an HTTP/HTTPS endpoint
This step is required to have a long-running integration sending data to a agent sidecar.
The approach is similar to nri-kube-events
Implemented in https://github.com/newrelic/nri-kubernetes/pull/248
|
2025-04-01T04:34:53.228605
| 2023-05-18T09:18:16
|
1715299732
|
{
"authors": [
"pombredanne",
"tdruez"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9030",
"repo": "nexB/scancode.io",
"url": "https://github.com/nexB/scancode.io/issues/735"
}
|
gharchive/issue
|
Import an XLSX spreadsheet with the "load_inventory" pipelines
The input format of this XLSX should be exactly the same as the SCIO XLSX output format with these additional considerations:
the tab names that are unknown from SCIO should be ignored
the column names that are unknown from SCIO in a tab should be ignored
we only support the current version of SCIO XLSX output
There will be some data loss due to the XLSX format limitation.
Those should be minor, but the ScanCode.io JSON output is preferred to load project data without any loss.
From https://github.com/nexB/scancode.io/blob/main/scanpipe/pipes/output.py#L378
Convert the value to a string and perform these adaptations:
- Keep only unique values in lists, preserving ordering.
- Truncate the "description" field to the first five lines.
- Truncate any field too long to fit in an XLSX cell and report error.
- Create a combined license expression for expressions
- Normalize line endings
- Truncate the value to a ``maximum_length`` supported by XLSX.
|
2025-04-01T04:34:53.231845
| 2023-04-04T21:32:00
|
1654609059
|
{
"authors": [
"chirino",
"nerdalert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9031",
"repo": "nexodus-io/nexodus",
"url": "https://github.com/nexodus-io/nexodus/pull/747"
}
|
gharchive/pull-request
|
feat: generate an openapi client using the generated openapi spec.
fixed lots of the handler annotations so that a better openapi spec is generated.
@nerdalert I think it should be good now.
@chirino nice, LGTM
|
2025-04-01T04:34:53.317771
| 2023-11-12T22:23:31
|
1989610647
|
{
"authors": [
"ivoruetsche",
"wiktor2200"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9033",
"repo": "nextcloud/ansible-collection-nextcloud-admin",
"url": "https://github.com/nextcloud/ansible-collection-nextcloud-admin/issues/319"
}
|
gharchive/issue
|
[NC] - Run occ installation command: Missing sudo password
Hi
We are trying to use the ansible-collection-nextcloud-admin on Ubuntu 22.04.3, but we got this error:
TASK [nextcloud.admin.install_nextcloud : [NC] - Run occ installation command] ************************************************************************************************************************************
fatal: [<IP_ADDRESS>]: FAILED! => {"msg": "Missing sudo password"}
I saw that line 38 has 'become_user: "{{ nextcloud_websrv_user }}"'; well, nextcloud_websrv_user=www-root, but this user shouldn't be in the sudo list...???
Thank a lot for your work
Ivo
Maybe some additional information.
Ansible runs as "ansible" user, password login via ssh is denied, only ssh key is allowed. So the playbook looks like:
- hosts: nextcloud
become: true
remote_user: ansible
become_method: sudo
vars:
ansible_ssh_private_key_file: ~/ansible/sshkey/ansible.key
Solved: I changed the ansible sudoers user from
ansible ALL=NOPASSWD: ALL to
ansible ALL=(ALL) NOPASSWD:ALL
Hello @ivoruetsche, it all depends on your local configuration. You don't configure Ubuntu itself with this role.
In clean VM usually just setting:
- hosts: all
become: true
vars_files:
- your_var_file.yml
would be sufficient.
|
2025-04-01T04:34:54.023992
| 2024-05-01T13:15:10
|
2273485168
|
{
"authors": [
"jrgarciadev",
"wingkwong"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9035",
"repo": "nextui-org/nextui",
"url": "https://github.com/nextui-org/nextui/pull/2924"
}
|
gharchive/pull-request
|
fix(switch): support uncontrolled switch in react-hook-form
Closes #
Description
Applied the same fix done in other form input components
Current behavior (updates)
defaultValues won't work for an uncontrolled switch.
New behavior
Is this a breaking change (Yes/No):
No
Additional Information
Summary by CodeRabbit
Bug Fixes
Fixed an issue with the uncontrolled switch component in forms using react-hook-form.
New Features
Introduced a new form handling component for switches, enhancing integration with react-hook-form.
Refactor
Improved synchronization between the switch state and its display in the DOM.
Updated switch component to use more efficient reference handling and effects for state management.
@wingkwong please add tests
@jrgarciadev I plan to do it another PR along with other affected components.
|
2025-04-01T04:34:54.027690
| 2022-06-27T21:27:16
|
1286417020
|
{
"authors": [
"MWhite-22",
"asheeeshh",
"juliusmarminge",
"nexxeln"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9036",
"repo": "nexxeln/create-t3-app",
"url": "https://github.com/nexxeln/create-t3-app/pull/61"
}
|
gharchive/pull-request
|
Adding full ESM support
Resolves issue #53
BTW, bundled package went down from 10.5kb to 8kb just by switching over
why is import x from "x/index.js" required over import x from "x"?
It actually comes up as a TS error as well. I HATE that I have to reference .js files in the import statement of a .ts file, but that's literally just the way the standard works now.
looks good, i had actually tried implementing this following sindre's gist but got confused midway
This looks good. Has this been tested properly?
This looks good. Has this been tested properly?
tested 👍
Cool, thank you
|
2025-04-01T04:34:54.029710
| 2023-12-13T12:43:53
|
2039637028
|
{
"authors": [
"artlu99",
"vinliao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9037",
"repo": "neynarxyz/farcaster-channels",
"url": "https://github.com/neynarxyz/farcaster-channels/pull/19"
}
|
gharchive/pull-request
|
remove dup (Surveycaster was renamed to Ponder)
Surveycaster channel was renamed to Ponder channel:
name change
parent_url did not change, causing a duplicate entry
image change
channel_id change
Thanks @artlu99!
|
2025-04-01T04:34:54.051127
| 2024-05-17T06:45:03
|
2301948793
|
{
"authors": [
"mashehu",
"scwatts"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9038",
"repo": "nf-core/website",
"url": "https://github.com/nf-core/website/pull/2520"
}
|
gharchive/pull-request
|
Add specific Slack channel to request community review for initial pipeline releases
indicate to readers a suitable channel to make a community review request
this will help prevent requests being made in the wrong channels
e.g. #maintainers hypothetically being mistakenly guessed as an appropriate channel
/cc @jfy133
@netlify /docs/contributing/tutorials/nf_core_contributing_overview
Could you please also add these changes to the following pr instead: https://github.com/nf-core/website/pull/2441
We will hopefully merge the PR soon and changes to the current docs might get lost on the way
No problems. I don't have permissions to push to that branch but I'll rebase changes here on top + force push and then set it as the merge target.
Looks like the Netlify Redirect check is required to merge even into a non-main branch. A few options I suppose:
merge this change into main once docs-restructure is merged
wait for the Netlify Redirect check failure to be fixed in docs-restructure before merging into it
someone with write permissions cherry-pick the commit from my remote across to docs-restructure
someone with write permissions manually make the change in docs-restructure
I'll leave it to the nf-core team to decide.
The docs-restructure branch is now passing checks, rebasing on top and force pushing so that merge is possible.
Automatically closed after deletion of nf-core:docs-restructure, opening a new PR since closing cannot be reversed.
|
2025-04-01T04:34:54.056584
| 2019-07-24T23:47:42
|
472600590
|
{
"authors": [
"afharo",
"dpyy",
"royale56"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9039",
"repo": "nfarina/homebridge",
"url": "https://github.com/nfarina/homebridge/issues/2268"
}
|
gharchive/issue
|
Stuck in updating at random times but if I open home app from mac it unlocks it
I have the most weird issue with homebridge.
I've been running the same config for over a year everything works great. But started last week, the accessories started freezing at "updating" at random times. Then for some reason if I open the home app on the mac, it unfreezes it. Then all the iOS devices can see it again.
But after a while, it would freeze again.
I checked the logs, theres nothing.
How do I troubleshoot this?
Running latest version of macOS and homebridge.
Iβm having a similar issue when my accessories are showing as βupdatingβ in the home app but when I go to control any homebridge accessory via my HomePod it works perfectly.
I notice that some of the accessories will occasionally connect in the home app but as soon as I try to turn it on/off etc it will go back to updating. The HomePod seems to work every time though.
No errors or any issues in terminal.
Is your Mac going to sleep? If so, you can try https://apps.apple.com/us/app/amphetamine/id937984704?mt=12 to keep your Mac from going to sleep.
|
2025-04-01T04:34:54.065405
| 2020-10-27T12:35:46
|
730411673
|
{
"authors": [
"eric6506"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9040",
"repo": "nficano/pytube",
"url": "https://github.com/nficano/pytube/issues/776"
}
|
gharchive/issue
|
KeyError: 'assets'
Here's my log:
Traceback (most recent call last):
  File "C:\Users\user\Desktop\youtube archiver\youtube.py", line 66, in <module>
    yt = YouTube(love)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39-32\lib\site-packages\pytube\__main__.py", line 91, in __init__
    self.prefetch()
  File "C:\Users\user\AppData\Local\Programs\Python\Python39-32\lib\site-packages\pytube\__main__.py", line 183, in prefetch
    self.js_url = extract.js_url(self.watch_html)
  File "C:\Users\user\AppData\Local\Programs\Python\Python39-32\lib\site-packages\pytube\extract.py", line 143, in js_url
    base_js = get_ytplayer_config(html)["assets"]["js"]
KeyError: 'assets'
Here's my script:
from pytube import YouTube
love = "https://www.youtube.com/watch?v=vewjKRorasc"
yt = YouTube(love) <------ ERROR HERE
For now fixed 100% with this:
https://github.com/nficano/pytube/pull/767#issuecomment-716184994
For anyone else getting this error or issue, run this command in a terminal or cmd:
python -m pip install git+https://github.com/nficano/pytube
|
2025-04-01T04:34:54.076737
| 2020-03-06T02:47:30
|
576660734
|
{
"authors": [
"jefflill",
"johncburns1",
"marcusbooyah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9041",
"repo": "nforgeio/neonKUBE",
"url": "https://github.com/nforgeio/neonKUBE/issues/775"
}
|
gharchive/issue
|
WorkflowParallelOperationException
Ok, I've found a way to reliably reproduce this. It seems like the last workflow standing will complete successfully, but all other concurrent workflows will fail until there is just one left.
Here's the workflow:
[WorkflowMethod(WorkflowIdReusePolicy = WorkflowIdReusePolicy.RejectDuplicate, Name = "process-order")]
public async Task<int> ProcessOrderAsync(Order order)
{
// order activity
orderActivity = Workflow.NewLocalActivityStub<IActivityOrder, ActivityOrder>(
new LocalActivityOptions()
{
ScheduleToCloseTimeout = TimeSpan.FromHours(1),
RetryOptions = new RetryOptions()
{
MaximumAttempts = 10,
BackoffCoefficient = 2.0,
InitialInterval = TimeSpan.FromSeconds(1),
MaximumInterval = TimeSpan.FromSeconds(30),
NonRetriableErrors = new List<string>()
}
});
for (int i = 0; i < await Workflow.NextRandomAsync(1000); i++)
{
await orderActivity.SleepAsync(order.Id);
}
return 0;
}
[ActivityMethod(Name = "sleeparoo")]
public async Task SleepAsync(string id)
{
await Task.Delay(10);
}
This is probably the true scenario that was causing #773 too.
This may be a problem with cadence-proxy.
The .NET side is seeing WorkflowInvokeRequest messages from cadence-proxy with the same ContextId for different workflows. I poked around the Go code a bit and see that @johncburns1 is protecting NextContextId() with a mutex, but perhaps you're not using this everywhere.
I've recreated the neonKUBE cadence branch and have added some temporary debug logging. The log output will be appended to C:\temp\cadence-debug.log. I generally open this in Notepad++ and clear and save this file before each run.
Here's what I'm seeing in this log when I run Marcus' workflow multiple times:
2020-03-10T10:18:35.738Z: Workflow Invoke Message: workflowId=wf-stripe-payment:b0c71f58-750c-49c2-babf-587ff4ac8714 contextId=1
2020-03-10T10:18:37.192Z: Workflow Invoke Message: workflowId=wf-customer-app-order:b0c71f58-750c-49c2-babf-587ff4ac8714 contextId=1
2020-03-10T10:18:39.856Z: Workflow Invoke Message: workflowId=wf-stripe-payment:7302e95d-5543-4224-b585-7d9b1b53a9d9 contextId=2
2020-03-10T10:18:41.131Z: Workflow Invoke Message: workflowId=wf-customer-app-order:7302e95d-5543-4224-b585-7d9b1b53a9d9 contextId=2
2020-03-10T10:18:42.092Z: ******* Parallel Execution: workflowId=wf-customer-app-order:7302e95d-5543-4224-b585-7d9b1b53a9d9 contextId=2
Note that the last two Workflow Invoke Messages have different workflow IDs but share the same contextId=2. This results in both workflows sharing the same context which is really bad and is resulting in the parallel operation exception because the two workflows are in fact performing parallel operations on the same workflow context.
BTW: I discovered a cool way to debug packages referenced by other projects without having to publish nuget packages with new version numbers etc. The trick is to update the DLL (and PDB) you're modifying in your machine's NuGet package cache.
In this case, I added the debug logging to the Neon.Cadence code, rebuilt the library and then copied these files:
C:\src\neonKUBE\Lib\Neon.Cadence\bin\Debug\netstandard2.0
Neon.Cadence.dll
Neon.Cadence.pdb
to
C:\Users\jeff\.nuget\packages\neon.cadence\1.2.2\lib\netstandard2.0
Neon.Cadence.dll
Neon.Cadence.pdb
Then all you need to do is rebuild the project referencing the package and it will include the new library bits. The extra cool thing is that you can set breakpoints in the library code. I think this works when the library was rebuilt on the same machine you're debugging on.
When you're done debugging, you should remove this package from the nuget cache and then restore packages to get back to the official published package bits.
Some interesting results from exploration today:
This seems to only happen with local activities (it might be interesting to rewrite the workflow using all normal activities),
The 2 activities that seemed to produce this error are ActivityShipRoom (2x) and ActivityOrder (~8x),
Other running workflows not of type wf-customer-app-order were unaffected by this issue.
There were some other interesting findings when I ran some of these tests. In some cases other errors were thrown in addition to the workflow parallel operations error. The first 3 panics were thrown on the same run and the last error, EntityNotExists, is thrown on almost every run when you continue to submit orders after the parallel operation exception has been thrown.
Illegal Access Outside of Workflow Context
Neon.Cadence.CadenceCustomException: 'Panic: getState: illegal access from outside of workflow context, MessageType: ActivityExecuteLocalRequest, RequestId: 485, ClientId: 6. goroutine 3498 [running]:
runtime/debug.Stack(0xc0004f05a0, 0x6, 0x1b)
c:/go/src/runtime/debug/stack.go:24 +0xa4
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest.func1(0xfba180, 0xc0005fe030, 0xc0007cfe00, 0xc0007cfe40)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:152 +0x141
panic(0xcbac00, 0xf80a60)
c:/go/src/runtime/panic.go:679 +0x1c0
go.uber.org/cadence/internal.getState(0xfa71a0, 0xc0005fa3f0, 0xc0007cfa78)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:529 +0xa4
go.uber.org/cadence/internal.NewChannel(0xfa71a0, 0xc0005fa3f0, 0x10, 0xc0002fee20)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/workflow.go:278 +0x40
go.uber.org/cadence/internal.newDecodeFuture(...)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:1272
go.uber.org/cadence/internal.ExecuteLocalActivity(0xfa71a0, 0xc0005fa3f0, 0xd07ca0, 0xc0005a45e0, 0xc0002fee20, 0x1, 0x1, 0x5, 0x1d3)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/workflow.go:467 +0x54
go.uber.org/cadence/workflow.ExecuteLocalActivity(...)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/workflow/workflow.go:157
github.com/cadence-proxy/internal/endpoints.handleActivityExecuteLocalRequest(0xfa6ae0, 0xc000032170, 0xc0005fe030, 0x4, 0xf7fe60)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/request_handlers.go:2549 +0x774
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest(0xfba180, 0xc0005fe030, 0x0, 0x0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:486 +0x1a4e
github.com/cadence-proxy/internal/endpoints.processIncomingMessage(0xfb2540, 0xc0005fe030, 0xc0005cc120)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:116 +0x100
created by github.com/cadence-proxy/internal/endpoints.MessageHandler
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:69 +0x146
Trying to Block on Coroutine Which is Already Blocked
Neon.Cadence.CadenceCustomException: 'Panic: trying to block on coroutine which is already blocked, most likely a wrong Context is used to do blocking call (like Future.Get() or Channel.Receive(), MessageType: ActivityExecuteLocalRequest, RequestId: 566, ClientId: 6. goroutine 4330 [running]:
runtime/debug.Stack(0xc000693050, 0x6, 0x1b)
c:/go/src/runtime/debug/stack.go:24 +0xa4
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest.func1(0xfba180, 0xc0002c8168, 0xc000727e00, 0xc000727e40)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:152 +0x141
panic(0xcbac00, 0xf80aa0)
c:/go/src/runtime/panic.go:679 +0x1c0
go.uber.org/cadence/internal.(*coroutineState).initialYield(0xc0003ad300, 0x3, 0xc0002cab80, 0x19)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:742 +0xc9
go.uber.org/cadence/internal.(*coroutineState).yield(...)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:757
go.uber.org/cadence/internal.(*channelImpl).Receive(0xc000867c20, 0xfa71a0, 0xc00081ebd0, 0x0, 0x0, 0x0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:572 +0x2a4
go.uber.org/cadence/internal.(*decodeFutureImpl).Get(0xc0004b1600, 0xfa71a0, 0xc00081ebd0, 0xc8d120, 0xc0004b1640, 0x1, 0x1)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_workflow.go:1231 +0x70
github.com/cadence-proxy/internal/endpoints.handleActivityExecuteLocalRequest(0xfa6ae0, 0xc000032170, 0xc0002c8168, 0x4, 0xf7fe60)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/request_handlers.go:2557 +0x7e7
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest(0xfba180, 0xc0002c8168, 0x0, 0x0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:486 +0x1a4e
github.com/cadence-proxy/internal/endpoints.processIncomingMessage(0xfb2540, 0xc0002c8168, 0xc0005d6fc0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:116 +0x100
created by github.com/cadence-proxy/internal/endpoints.MessageHandler
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:69 +0x146
No Cached Result Found for Side Effect
Neon.Cadence.CadenceCustomException: 'Panic: No cached result found for side effectID=1. KnownSideEffects=[0], MessageType: WorkflowMutableRequest, RequestId: 743, ClientId: 6. goroutine 6049 [running]:
runtime/debug.Stack(0xc000647350, 0x6, 0x16)
c:/go/src/runtime/debug/stack.go:24 +0xa4
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest.func1(0xfbbb20, 0xc0004681e8, 0xc000491e00, 0xc000491e40)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:152 +0x141
panic(0xcbac00, 0xc00064be50)
c:/go/src/runtime/panic.go:679 +0x1c0
go.uber.org/cadence/internal.(*workflowEnvironmentImpl).SideEffect(0xc000562e00, 0xc0004ec990, 0xc0004ec9c0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/internal_event_handlers.go:593 +0x86a
go.uber.org/cadence/internal.SideEffect(0xfa6fa0, 0xc0002d1900, 0xc00064be40, 0x9, 0xc0003ab0d0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/internal/workflow.go:1022 +0x24f
go.uber.org/cadence/workflow.SideEffect(...)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/vendor/go.uber.org/cadence/workflow/workflow.go:261
github.com/cadence-proxy/internal/endpoints.handleWorkflowMutableRequest(0xfa6ae0, 0xc000032170, 0xc0004681e8, 0x4, 0xf7fe60)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/request_handlers.go:935 +0x6ea
github.com/cadence-proxy/internal/endpoints.handleIProxyRequest(0xfbbb20, 0xc0004681e8, 0x0, 0x0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:321 +0x108d
github.com/cadence-proxy/internal/endpoints.processIncomingMessage(0xfb52c0, 0xc0004681e8, 0xc0008acae0)
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:116 +0x100
created by github.com/cadence-proxy/internal/endpoints.MessageHandler
C:/Users/johnc/source/repos/neonKUBE/Go/src/github.com/cadence-proxy/internal/endpoints/message.go:69 +0x146
Entity Not Exists
[2020-03-11T01:19:31.569+00:00] [ERROR] [module:Neon.Cadence.WorkflowBase] [index:16] Neon.Cadence.EntityNotExistsException: EntityNotExistsError\n
at Neon.Cadence.Internal.ProxyReply.ThrowOnError()\n
at Neon.Cadence.Workflow.ExecuteLocalActivityAsync(Type activityType, ConstructorInfo activityConstructor, MethodInfo activityMethod, Byte[] args, LocalActivityOptions options)\n
at Neon.Cadence.Stubs.ActivityOrder_Stub_14.___StubHelper.ExecuteLocalActivityAsync(Workflow workflow, Type activityType, ConstructorInfo constructor, MethodInfo method, Byte[] args, LocalActivityOptions options)\n
at Neon.Cadence.Stubs.ActivityOrder_Stub_14.SleepAsync()\n
at WFCustomerAppOrder.WorkflowOrder.ProcessOrderAsync(Order order) in C:\\Users\\johnc\\source\\repos\\loopie\\Workflows\\wf-customer-app-order\\WorkflowOrder.cs:line 101\n
at Neon.Common.NeonHelper.GetTaskResultAsObjectAsync(Task task)\n at Neon.Cadence.WorkflowBase.OnInvokeAsync(CadenceClient client, WorkflowInvokeRequest request)
@johncburns1 figured this out. This ended up being due to the workflow saving activity stubs as static rather than instance properties. When the second workflow started, it would create new stubs and save them to the static properties, which broke those stubs when called from other workflow instances.
|
2025-04-01T04:34:54.093050
| 2022-02-24T13:09:11
|
1149270135
|
{
"authors": [
"nftchef",
"shingsoso"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9042",
"repo": "nftchef/art-engine",
"url": "https://github.com/nftchef/art-engine/issues/66"
}
|
gharchive/issue
|
Issue with shuffleLayerConfigurations
Discussed in https://github.com/nftchef/art-engine/discussions/64
Originally posted by AC1AN February 24, 2022
Hi, I noticed that when I turn on the shuffleLayerConfigurations option the generator always skips a number. For example, if I ask for 5 images I'll get a result like 1 2 4 5 6, with a gap in it.
In this example I asked for 10 and the number 2 is missing; I have number 11 instead.
Thank you
I faced the same problem. Have you figured out what caused that?
Check this out, a quick fix:
https://github.com/nftchef/art-engine/issues/70#issuecomment-1053605090
TY @shingsoso! I will hopefully have it fixed this week.
|
2025-04-01T04:34:54.120800
| 2018-08-19T12:04:09
|
351897588
|
{
"authors": [
"Jamaks",
"irustm",
"shyamal890"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9043",
"repo": "ng-fullcalendar/ng-fullcalendar",
"url": "https://github.com/ng-fullcalendar/ng-fullcalendar/issues/118"
}
|
gharchive/issue
|
Maintain scroll position on events model change
Scroll position is reset to the default one after a new event is added to the events list:
https://stackblitz.com/edit/ng-fullcalendar-demo-zsgsl3?file=app%2Fapp.component.html
Just click on a day to add an event and you will see the scroll move back to the default position.
Is this an issue in jQuery FullCalendar too?
This issue is deprecated.
<EMAIL_ADDRESS> was released without jQuery;
try using <EMAIL_ADDRESS> with fullcalendar plugins
|
2025-04-01T04:34:54.123274
| 2016-02-28T15:22:06
|
137063166
|
{
"authors": [
"Buthrakaur",
"kasperp"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9044",
"repo": "ngReact/ngReact",
"url": "https://github.com/ngReact/ngReact/issues/143"
}
|
gharchive/issue
|
TypeScript and injections
Hello,
I'm trying to get started with ng-react and TypeScript 1.8, but I'm struggling with the component registration when I want to inject some dependencies into the React component. My current component looks like this:
interface Props {
title: string;
someService: any;
}
class HelloReactComponent extends React.Component<Props, {}> {
render() {
return <div>
<span>Hello React: {this.props.title}</span>
<span>{this.props.someService.getValue()}</span>
</div>;
}
}
Now I'm not able to use the trivial registration via angular.module(x).value(...), but need to use angular.module(x).factory(...), and I don't see any way to use the HelloReactComponent class in this case. Do I really need to redesign my component not to be a class, and write the code with React.createClass instead, losing the ability to use the class?
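For example, would something like this factory-based wrapper be the intended approach? A rough sketch of what I have in mind (the 'app' module name and 'HelloReact' factory name are placeholders; the service comes from the DI closure rather than Angular scope props):
// Hypothetical sketch: keep the class, register a factory whose wrapper
// component closes over the injected service and forwards the rest.
angular.module('app').factory('HelloReact', ['someService', (someService: any) => {
    // Stateless wrapper: only `title` needs to come in as a prop now.
    return (props: { title: string }) =>
        <HelloReactComponent title={props.title} someService={someService} />;
}]);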
@Buthrakaur thanks for raising this. So if you use React.createClass there is no problem? Could you post an example of your use of angular.module(x).value(...)?
Closed due to inactivity. Feel free to reopen.
|
2025-04-01T04:34:54.153290
| 2016-07-05T18:30:04
|
163915802
|
{
"authors": [
"dfaller",
"gzehring"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9045",
"repo": "ngageoint/scale",
"url": "https://github.com/ngageoint/scale/issues/370"
}
|
gharchive/issue
|
Pass Docker Credentials
Update Scale to be able to pass Docker credentials to Mesos. This will allow Scale to launch jobs based on images in Docker repositories that require authentication.
Mesos has documentation for passing credentials for a private docker repository, but it appears to not support credentials on a task basis. Here is a link to that documentation /
|
2025-04-01T04:34:54.159939
| 2024-05-02T01:30:25
|
2274434398
|
{
"authors": [
"buchdag",
"prairietree"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9046",
"repo": "nginx-proxy/acme-companion",
"url": "https://github.com/nginx-proxy/acme-companion/issues/1107"
}
|
gharchive/issue
|
The certificate is not trusted because it is self-signed. Error during secondary validation.
Hello
I have NextCloud and Collabora Docker images that are behind the same proxy and acme-companion. I had it working at one time, but it seemed like it was not able to renew, so I changed a few things, and now nginxproxy/acme-companion is generating a self-signed certificate for the office domain. One of the last things I changed was to add a DEFAULT_EMAIL; I also switched from nginxproxy/nginx-proxy:alpine to nginxproxy/nginx-proxy:1.5-alpine.
I found this line in the logs Invalid status, office.[domain].com:Verify error detail:During secondary validation: 173.224.185.[...]: Fetching http://office.[domain].com/.well-known/acme-challenge/VZktBtQyGTGW_x1mL2vGVzV7TFs-eHqCp0t0I67VGAw: Timeout during connect (likely firewall problem).
But I can get to the office subdomain from outside my local network. So I think it might have something to do with the way the proxy is set up. One other thing I added was the proxy-tier aliases, but changing them back did not help.
Logs:
$ docker-compose logs | grep letsencrypt
nextcloud-letsencrypt-companion-1 | Info: running acme-companion version v2.2.10-13-gb22b6ef
nextcloud-letsencrypt-companion-1 | Info: 4096 bits RFC7919 Diffie-Hellman group found, generation skipped.
nextcloud-letsencrypt-companion-1 | Reloading nginx proxy (df9d656c8f843fb2f10e54c30f9a0005f6b84454d61a528ef28e37083a2dbfcc)...
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Generated '/etc/nginx/conf.d/default.conf' from 8 containers
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 [notice] 50#50: signal process started
nextcloud-letsencrypt-companion-1 | Warning: /app/letsencrypt_service_data not found, skipping data from containers.
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Generated '/app/letsencrypt_service_data' from 8 containers
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Running '/app/signal_le_service'
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Watching docker events
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Contents of /app/letsencrypt_service_data did not change. Skipping notification '/app/signal_le_service'
nextcloud-letsencrypt-companion-1 | Reloading nginx proxy (df9d656c8f843fb2f10e54c30f9a0005f6b84454d61a528ef28e37083a2dbfcc)...
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 Generated '/etc/nginx/conf.d/default.conf' from 8 containers
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:16 [notice] 77#77: signal process started
nextcloud-letsencrypt-companion-1 | Creating/renewal nextcloud.[domain].com certificates... (nextcloud.[domain].com)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Domains not changed.
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Skip, Next renewal time is: 2024-05-08T05:14:13Z
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Add '--force' to force to renew.
nextcloud-letsencrypt-companion-1 | Creating/renewal office.[domain].com certificates... (office.[domain].com)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Using CA: https://acme-v02.api.letsencrypt.org/directory
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Using pre generated key <EMAIL_ADDRESS>
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:16 UTC 2024] Generate next pre-generate key.
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:17 UTC 2024] Single domain='office.[domain].com'
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:17 UTC 2024] Getting domain auth token for each domain
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:18 UTC 2024] Getting webroot for domain='office.[domain].com'
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:18 UTC 2024] Verifying: office.[domain].com
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:18 UTC 2024] Pending, The CA is processing your order, please just wait. (1/30)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:20 UTC 2024] Pending, The CA is processing your order, please just wait. (2/30)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:23 UTC 2024] Pending, The CA is processing your order, please just wait. (3/30)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:25 UTC 2024] Pending, The CA is processing your order, please just wait. (4/30)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:27 UTC 2024] Pending, The CA is processing your order, please just wait. (5/30)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:29 UTC 2024] Invalid status, office.[domain].com:Verify error detail:During secondary validation: 173.224.185.[...]: Fetching http://office.[domain].com/.well-known/acme-challenge/VZktBtQyGTGW_x1mL2vGVzV7TFs-eHqCp0t0I67VGAw: Timeout during connect (likely firewall problem)
nextcloud-letsencrypt-companion-1 | [Thu May 2 00:34:29 UTC 2024] Please check log file for more details: /dev/null
nextcloud-letsencrypt-companion-1 | Reloading nginx proxy (df9d656c8f843fb2f10e54c30f9a0005f6b84454d61a528ef28e37083a2dbfcc)...
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:30 Generated '/etc/nginx/conf.d/default.conf' from 8 containers
nextcloud-letsencrypt-companion-1 | 2024/05/02 00:34:30 [notice] 109#109: signal process started
nextcloud-letsencrypt-companion-1 | Sleep for 3600s
nextcloud-proxy-1 | nginx.1 | office.[domain].com <IP_ADDRESS> - - [02/May/2024:00:34:18 +0000] "GET /.well-known/acme-challenge/VZktBtQyGTGW_x1mL2vGVzV7TFs-eHqCp0t0I67VGAw HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nextcloud-proxy-1 | nginx.1 | office.[domain].com <IP_ADDRESS> - - [02/May/2024:00:34:18 +0000] "GET /.well-known/acme-challenge/VZktBtQyGTGW_x1mL2vGVzV7TFs-eHqCp0t0I67VGAw HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
nextcloud-proxy-1 | nginx.1 | office.[domain].com <IP_ADDRESS> - - [02/May/2024:00:34:18 +0000] "GET /.well-known/acme-challenge/VZktBtQyGTGW_x1mL2vGVzV7TFs-eHqCp0t0I67VGAw HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
And part of docker-compose.yml:
version: '3'
services:
db:
[...]
redis:
[...]
app:
#image: nextcloud:apache
build: ./nextcloud
restart: always
volumes:
- nextcloud:/var/www/html
- /data/nextcloud/nextcloud/config:/var/www/html/config
- /data/nextcloud/nextcloud/data:/srv/nextcloud/data
- ./remoteip.conf:/etc/apache2/conf-enabled/remoteip.conf:ro
- ./redis-session.ini:/usr/local/etc/php/conf.d/redis-session.ini
environment:
- VIRTUAL_HOST=nextcloud.[domain].com
- LETSENCRYPT_HOST=nextcloud.[domain].com
- <EMAIL_ADDRESS>
- MYSQL_HOST=db
- REDIS_HOST=redis
- NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.[domain].com [domain].com
- NEXTCLOUD_DATA_DIR=/srv/nextcloud/data
- TRUSTED_PROXIES=nextcloud-proxy-1
- NEXTCLOUD_HOSTNAME=nextcloud.[domain].com
- OVERWRITEPROTOCOL=https
- OVERWRITEHOST=nextcloud.[domain].com
env_file:
- db.env
depends_on:
- db
- redis
networks:
- proxy-tier
- default
cron:
#image: nextcloud:apache
build: ./nextcloud
restart: always
volumes:
- nextcloud:/var/www/html
- /data/nextcloud/nextcloud/config:/var/www/html/config
- /data/nextcloud/nextcloud/data:/srv/nextcloud/data
- ./remoteip.conf:/etc/apache2/conf-enabled/remoteip.conf:ro
- ./redis-session.ini:/usr/local/etc/php/conf.d/redis-session.ini
environment:
- MYSQL_HOST=db
- REDIS_HOST=redis
- NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.[domain].com [domain].com
- NEXTCLOUD_DATA_DIR=/srv/nextcloud/data
env_file:
- db.env
entrypoint: /cron.sh
depends_on:
- db
- redis
networks:
- proxy-tier
- default
collabora:
image: collabora/code:latest
cap_add:
- MKNOD
environment:
- aliasgroup1=https://nextcloud.[domain].com:443
- username=[...]
- password=[...]
- VIRTUAL_HOST=office.[domain].com
- LETSENCRYPT_HOST=office.[domain].com
- <EMAIL_ADDRESS>
- extra_params=--o:ssl.enable=false --o:ssl.termination=true
ports:
- 19980:9980
restart: always
volumes:
- "/etc/localtime:/etc/localtime:ro"
networks:
- proxy-tier
- default
proxy:
# Dockerfile
# FROM nginxproxy/nginx-proxy:1.5-alpine
# COPY uploadsize.conf /etc/nginx/conf.d/uploadsize.conf
build: ./proxy
restart: always
ports:
- 80:80
- 443:443
labels:
com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
volumes:
- certs:/etc/nginx/certs:ro
- vhost.d:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
- <EMAIL_ADDRESS>
container_name: nextcloud-proxy-1
networks:
proxy-tier:
aliases:
- nextcloud.[domain].com
- office.[domain].com
letsencrypt-companion:
image: nginxproxy/acme-companion
restart: always
volumes:
- certs:/etc/nginx/certs
- acme:/etc/acme.sh
- vhost.d:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- proxy-tier
depends_on:
- proxy
volumes:
db:
nextcloud:
certs:
acme:
vhost.d:
html:
networks:
proxy-tier:
At first glance it looks like a genuine network error between the Let's Encrypt secondary validation servers and your host, because you don't appear to have misconfigurations and you see the request from the primary validation being correctly answered at the end of the log.
https://community.letsencrypt.org/t/during-secondary-validation-dns-problem-query-timed-out/188165
https://community.letsencrypt.org/t/renew-certificate-failed-due-to-secondary-validation/178643/2
https://community.letsencrypt.org/t/renew-certificate-failed-due-to-secondary-validation-again/185301
There seem to be plenty of threads related to failing secondary validations.
I believe it is working now that I turned off Country Restrictions on the network. I am trying to figure out what countries I need to allow. Thanks for the above links. That helped.
|
2025-04-01T04:34:54.193465
| 2023-08-30T20:34:08
|
1874328090
|
{
"authors": [
"achawla2012",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9047",
"repo": "nginx/agent",
"url": "https://github.com/nginx/agent/pull/460"
}
|
gharchive/pull-request
|
Get metadata for php-fpm master and worker processes
Proposed changes
This change determines metadata for php-fpm master and worker processes, with focus on parity/functionality with the old agent.
Testing
Unit tests / shell command execution. Future: will leverage the gopsutil package for shell command execution.
Checklist
Before creating a PR, run through this checklist and mark each as complete.
[X] I have read the CONTRIBUTING document
[X] I have run make install-tools and have attached any dependency changes to this pull request
[X] If applicable, I have added tests that prove my fix is effective or that my feature works
[X] If applicable, I have checked that any relevant tests pass after adding my changes
[ ] If applicable, I have updated any relevant documentation (README.md)
[ ] If applicable, I have tested my cross-platform changes on Ubuntu 22, Redhat 8, SUSE 15 and FreeBSD 13
Codecov Report
Patch coverage: 84.10% and project coverage change: -0.11% :warning:
Comparison is base (7c1ada3) 67.19% compared to head (385e398) 67.09%.
Report is 9 commits behind head on main.
:exclamation: Your organization is not using the GitHub App Integration. As a result you may experience degraded service beginning May 15th. Please install the GitHub App Integration for your organization. Read more.
Additional details and impacted files
@@ Coverage Diff @@
## main #460 +/- ##
==========================================
- Coverage 67.19% 67.09% -0.11%
==========================================
Files 113 116 +3
Lines 12849 13077 +228
==========================================
+ Hits 8634 8774 +140
- Misses 3647 3725 +78
- Partials 568 578 +10
Files Changed
Coverage Δ
src/core/environment.go
52.03% <ø> (+0.07%)
:arrow_up:
src/core/nginx.go
44.62% <0.00%> (ø)
...or/github.com/nginx/agent/sdk/v2/config_helpers.go
64.19% <50.00%> (+0.18%)
:arrow_up:
src/extensions/php-fpm-metrics/pool/manager.go
81.81% <81.81%> (ø)
src/extensions/php-fpm-metrics/master/manager.go
84.21% <84.21%> (ø)
src/extensions/php-fpm-metrics/master/master.go
86.17% <86.17%> (ø)
src/core/checksum.go
100.00% <100.00%> (ø)
...r/github.com/nginx/agent/v2/src/plugins/metrics.go
66.92% <100.00%> (-3.49%)
:arrow_down:
... and 7 files with indirect coverage changes
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T04:34:54.220090
| 2023-05-11T16:39:44
|
1706176955
|
{
"authors": [
"pleshakov",
"sjberman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9048",
"repo": "nginxinc/nginx-kubernetes-gateway",
"url": "https://github.com/nginxinc/nginx-kubernetes-gateway/pull/638"
}
|
gharchive/pull-request
|
Set gateway Pod IP as GatewayStatus address
To satisfy conformance and provide the address of the gateway, we'll now set the pod IP address of the gateway on the GatewayStatus resource. Eventually we will also support other addresses.
Closes #370
[x] I have read the CONTRIBUTING doc
[x] I have added tests that prove my fix is effective or that my feature works
[x] I have checked that all unit tests pass after adding my changes
[x] I have updated necessary documentation
[x] I have rebased my branch onto main
[x] I will ensure my PR is targeting the main branch and pulling from my branch from my own fork
@sjberman
@pleshakov I originally was going to mimic the existing validation pattern for arguments, but I changed my mind. For one, I wouldn't expect that we would be supporting many env vars, so it may not be worth it. Also, if we did want to do it, there's likely a way to refactor the existing argument validation to combine args and env vars, but I figured that's probably a bit too much work for right now, and went with the simpler approach. If we feel that it's worth implementing now though, I can do that.
This makes sense to me. However, it made me think - do we need to have an env variable at all?
What if we add an --external-ips CLI arg that accepts a list of IPs (a list, because the Gateway status can include many IPs)? By default, we can initialize it to the POD_IP. For example:
args:
- --gateway-ctlr-name=k8s-gateway.nginx.org/nginx-gateway-controller
- --gatewayclass=nginx
- --external-ips=$(POD_IP)
Could that work better?
it made me think - do we need to have an env variable at all
We do need it; in order to use the k8s downward API to get the pod IP, it has to be an environment variable. I'm also not a huge fan of more CLI args if they can be avoided.
|
2025-04-01T04:34:54.267849
| 2019-09-10T08:44:11
|
491533654
|
{
"authors": [
"aitboudad",
"swillisstudio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9050",
"repo": "ngx-formly/ngx-formly",
"url": "https://github.com/ngx-formly/ngx-formly/issues/1765"
}
|
gharchive/issue
|
Expression property not working when checking against an observable
I'm submitting a ... (check one with "x")
[ ] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[X] support request
Current behavior
I have a select box which populates the options through an API using an observable. If there are fewer than 2 options then I want to disable the box to show there is nothing that can be changed, but it doesn't seem to do anything.
Expected behavior
If the observable returns 1 or 0 options then I want the select box to be disabled.
Minimal reproduction of the problem with instructions
Stackblitz
Pass the options through expressionProperties, which resolves 'templateOptions.options' into any[]:
expressionProperties: {
'templateOptions.options': of([
{ value: 'blue', label: 'Blue' },
{ value: 'red', label: 'Red' },
]),
},
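The same mechanism should also cover the disabled behaviour asked about above, under the assumption that expressionProperties accepts an observable for any property (as it does for options here). A minimal sketch; shareReplay avoids triggering the source twice:

import { of } from 'rxjs';
import { map, shareReplay } from 'rxjs/operators';
import { FormlyFieldConfig } from '@ngx-formly/core';

// Stand-in for the API call feeding the select options.
const options$ = of([{ value: 'blue', label: 'Blue' }]).pipe(shareReplay(1));

const field: FormlyFieldConfig = {
    key: 'color',
    type: 'select',
    expressionProperties: {
        'templateOptions.options': options$,
        // Disable the select whenever fewer than 2 options resolve.
        'templateOptions.disabled': options$.pipe(map((opts) => opts.length < 2)),
    },
};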
|
2025-04-01T04:34:54.271051
| 2024-03-22T02:43:11
|
2201572709
|
{
"authors": [
"aitboudad",
"its-dibo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9051",
"repo": "ngx-formly/ngx-formly",
"url": "https://github.com/ngx-formly/ngx-formly/issues/3884"
}
|
gharchive/issue
|
wrong type for formState
FormlyFormOptions.formState is defined as any, which means it can accept any arbitrary value.
also, the docs say
The formState property is passed to all fields and is a mechanism for communicating between fields (without having to mess with your model).
which also means it holds an arbitrary value that doesn't mean anything to Formly
but when you pass formState.disabled=true it disables the form, which means it is interpreted by Formly; this is implied by neither the type definition nor the documentation.
export interface FormlyFormOptions {
updateInitialValue?: (model?: any) => void;
resetModel?: (model?: any) => void;
formState?: any;
fieldChanges?: Subject<FormlyValueChangeEvent>;
showError?: (field: FieldType) => boolean;
build?: (field?: FormlyFieldConfig) => FormlyFieldConfig;
checkExpressions?: (field: FormlyFieldConfig) => void;
detectChanges?: (field: FormlyFieldConfig) => void;
parentForm?: FormGroupDirective | null;
}
it should be changed to
formState?: {
disabled?: boolean;
// other props that are used by Formly
[key: string]: any;
};
but when you pass formState.disabled=true it disables the form, which means it is interpreted by Formly; this is implied by neither the type definition nor the documentation.
Not true: in order to disable the form you should use formState.disabled inside a field expression.
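For concreteness, a minimal sketch of such a field expression (assuming the Formly v6 expressions/props API; for v5 the equivalent keys are expressionProperties and templateOptions):

import { FormlyFieldConfig } from '@ngx-formly/core';

const field: FormlyFieldConfig = {
    key: 'name',
    type: 'input',
    expressions: {
        // Evaluated against the shared formState; Formly only reads
        // formState.disabled through an expression like this one.
        'props.disabled': 'formState.disabled',
    },
};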
If you think the doc should be adjusted, could you please send a PR?
|
2025-04-01T04:34:54.276929
| 2015-08-05T20:00:53
|
99287038
|
{
"authors": [
"nham",
"rschulman"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9052",
"repo": "nham/rust-base58",
"url": "https://github.com/nham/rust-base58/pull/1"
}
|
gharchive/pull-request
|
Make rust58 compatible with current master.
The current code relies on append, which is not yet in stable. I modified it to use push instead so it can be used with stable rustc. If there was a reason you weren't doing it this way, feel free to disregard.
Hi, sorry for the delay. Thanks for bringing this up! Having this work on stable slipped my mind. I decided to fix this in a slightly different way, by using extend (see d27f2cc08e3c486520f86e7b41726d55485f522f).
|
2025-04-01T04:34:54.281197
| 2023-07-06T06:01:05
|
1790872789
|
{
"authors": [
"dbarrosop"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9053",
"repo": "nhost/nhost",
"url": "https://github.com/nhost/nhost/issues/2095"
}
|
gharchive/issue
|
add banner/warning under settings if there is a connected repo
if users have a connected repo, could we show a warning box or something saying something like:
As you have a connected repository, make sure to synchronize your changes with nhost config pull or they may be reverted with the next push. If there are multiple projects linked to the same repository and you only want these changes to apply to a subset of them, please check out https://docs.nhost.io/cli/overlays for guidance.
feel free to rephrase if needed
|
2025-04-01T04:34:54.282371
| 2023-01-05T07:44:55
|
1520255478
|
{
"authors": [
"elitan",
"szilarddoro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9054",
"repo": "nhost/nhost",
"url": "https://github.com/nhost/nhost/pull/1475"
}
|
gharchive/pull-request
|
fix(dashboard): create new user
fixes: https://github.com/nhost/nhost/issues/1474
Could you please create a patch bump by running pnpm changeset in the root folder?
|
2025-04-01T04:34:54.287047
| 2024-06-26T20:38:08
|
2376240879
|
{
"authors": [
"frankieroberto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9056",
"repo": "nhsuk/nhsuk-frontend",
"url": "https://github.com/nhsuk/nhsuk-frontend/pull/975"
}
|
gharchive/pull-request
|
Make copyright statement settable via params
This de-duplicates 2 areas where the copyright statement is set (depending on the number of columns of links), and also allows the copyright text to be set using params.copyright on the Nunjucks macro.
This was previously documented as an option on the footer component but wasn't implemented in the actual Nunjucks macro.
I've set the default to "NHS England" as the guidance suggests that's what most services should use, but the NHS.UK website, or any local NHS trusts or other organisations, could override this.
Possibly this should follow the pattern of having copyright.text and copyright.html instead, but the labels for the links in the footer don't seem to do that, so I wasn't sure how rigidly we wanted to follow the pattern?
|
2025-04-01T04:34:54.288334
| 2024-10-17T15:50:15
|
2595132711
|
{
"authors": [
"edwardhorsford",
"frankieroberto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9057",
"repo": "nhsuk/nhsuk.service-manual.prototype-kit.docs",
"url": "https://github.com/nhsuk/nhsuk.service-manual.prototype-kit.docs/pull/15"
}
|
gharchive/pull-request
|
Fix link to zip file
Hardcoded for now, in 2 places. Open to better ideas!
I'd undo these changes and add a prototypeKitVersion in package.json - I think we should avoid hardcoded links in two places.
|
2025-04-01T04:34:54.301580
| 2022-12-14T22:56:04
|
1497529901
|
{
"authors": [
"jattasNI",
"rajsite"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9058",
"repo": "ni/nimble",
"url": "https://github.com/ni/nimble/pull/922"
}
|
gharchive/pull-request
|
Fix storybook mdx plugin
Pull Request
🤨 Rationale
Improves #825. Found that the current "@mdx-js/react": "*" version was resolving in package-lock.json to the latest "@mdx-js/react" at v2 which is an actual breaking change for storybook.
👩‍💻 Implementation
Instead matched the version specifier with the one in the storybook package version and verified that the package-lock only shows one version.
🧪 Testing
Built locally
✅ Checklist
[x] I have updated the project documentation to reflect my changes or determined no changes are needed.
This is definitely an improvement. Though the Getting Started page still seems to have styling issues (e.g. uses browser default font and image is too big).
I changed the PR description not to resolve #825 but I'm going to complete the PR since it's a step forward. I'll update the issue to describe what needs validating when we update to the next Storybook major version.
|
2025-04-01T04:34:54.308561
| 2019-06-21T16:20:20
|
459275128
|
{
"authors": [
"coveralls",
"d-bohls",
"epage"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9059",
"repo": "ni/nixnet-python",
"url": "https://github.com/ni/nixnet-python/pull/268"
}
|
gharchive/pull-request
|
Staging for release 0.3.1
Staging for release 0.3.1
[x] This contribution adheres to CONTRIBUTING.md.
[x] New tests have been created for any new features or regression tests for bugfixes.
[x] tox successfully runs, including unit tests and style checks (see CONTRIBUTING.md).
What all went into this release? Anything besides the recent fix?
Just wanting to catch any breaking changes.
Coverage remained the same at 67.742% when pulling c3a81cc87a0096dcf3c61417f8baa79f6e12149f on d-bohls:release_0.3.1 into 4bd5677b9b9092ce3661455ccf7f39c9651b51c5 on ni:master.
The recent bug fix is the only change.
For posterity, I found another commit (fc369ba) that should be in the release notes, so I amended my commit message to include it. It contains two properties and an enum value; no breaking changes.
|
2025-04-01T04:34:54.325918
| 2021-03-20T19:36:15
|
836894656
|
{
"authors": [
"tbugfinder"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9060",
"repo": "nichtraunzer/ods-pre-commit-hooks",
"url": "https://github.com/nichtraunzer/ods-pre-commit-hooks/issues/5"
}
|
gharchive/issue
|
use git-repo name instead of directory-basename
As users might use different directory names while cloning a repository, it should be possible to use the repo name instead.
https://github.com/nichtraunzer/ods-pre-commit-hooks/blob/d3c57b1b8e69d37522ebf94a4616c8af03a92f72/hooks/createstackmoduleoutputs.rb#L54
https://github.com/nichtraunzer/ods-pre-commit-hooks/blob/d3c57b1b8e69d37522ebf94a4616c8af03a92f72/hooks/createbpmoduleoutputs.sh#L15
https://github.com/nichtraunzer/ods-pre-commit-hooks/commit/8fc5e658275259e2d81452238bfdf63143a3605e
|
2025-04-01T04:34:54.327658
| 2020-12-22T13:28:53
|
772947410
|
{
"authors": [
"nick-thompson",
"olidacombe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9061",
"repo": "nick-thompson/blueprint",
"url": "https://github.com/nick-thompson/blueprint/pull/190"
}
|
gharchive/pull-request
|
camelCase to kebab-case
Allow the use of camelCase for CSS props in JS code, and auto-translate to kebab-case where necessary before pushing to the native side.
See https://github.com/nick-thompson/blueprint/issues/13
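The core of the translation is tiny; a sketch of the idea (illustrative only, not necessarily the PR's exact implementation):

// Convert a camelCase CSS prop name to kebab-case, e.g.
// 'backgroundColor' -> 'background-color'. Already-kebab names pass through.
const toKebabCase = (prop: string): string =>
    prop.replace(/[A-Z]/g, (char) => `-${char.toLowerCase()}`);

console.log(toKebabCase('borderTopLeftRadius')); // 'border-top-left-radius'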
Yessss! Love it, this is excellent, will merge right away.
Thanks @olidacombe I think people will appreciate this one quite a bit
Now available in <EMAIL_ADDRESS> on npm
|
2025-04-01T04:34:54.331839
| 2019-08-12T10:15:27
|
479567525
|
{
"authors": [
"asfaltboy",
"nmalacarne"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9062",
"repo": "nickhammond/ansible-logrotate",
"url": "https://github.com/nickhammond/ansible-logrotate/issues/47"
}
|
gharchive/issue
|
Needs become to run
The 'Install logrotate' task requires root. However, it is not a given that the user we use to connect to the target machine will be root. Using become: true on the task should solve this.
:+1:
This currently fails with "Destination /etc/logrotate.d not writable" when connecting via a user other than root.
|
2025-04-01T04:34:54.333858
| 2024-10-27T15:45:32
|
2616687066
|
{
"authors": [
"nickjj",
"sekmo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9063",
"repo": "nickjj/docker-rails-example",
"url": "https://github.com/nickjj/docker-rails-example/pull/85"
}
|
gharchive/pull-request
|
Expose the DB port to the host machine so it's easy to use tools like TablePlus
During development, it's very convenient to connect to the database using tools like TablePlus. I think these tools are very helpful for developers to quickly get feedback on the data they're working with.
I guess it's better to go for a non-default postgres port in order to avoid issues if the developer already has postgres running through brew services.
Hi,
There is an open issue for this at: https://github.com/nickjj/docker-rails-example/issues/42
I'm on the fence about it based on the comments in that thread but thank you for the contribution.
I see, but allowing connections only from the host machine should be a safe choice, no?
|
2025-04-01T04:34:54.344450
| 2015-11-20T19:22:14
|
118108477
|
{
"authors": [
"davidzovko",
"receter"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9064",
"repo": "nickvane/Magento-RestApi",
"url": "https://github.com/nickvane/Magento-RestApi/issues/25"
}
|
gharchive/issue
|
Issue with different URL for OAuth and api
I have a Magento store that uses StoreCodes in the URL, e.g. http://magentostore.com/en/
With this setting my OAuth and API URLs are:
http://magentostore.com/en/oauth/...
http://magentostore.com/api/rest/...
If I call .Initialize("http://magentostore.com", ...), your library assumes that the URLs are:
http://magentostore.com/oauth/...
http://magentostore.com/api/rest/...
If I use the URL with StoreCode (.Initialize("http://magentostore.com/en", ...)), your library assumes that the URLs are:
http://magentostore.com/en/oauth/...
http://magentostore.com/en/api/rest/...
Is it possible to set different URLs for OAuth and API?
PS: Thank you for uploading your code to GitHub, I am really happy that I found your Library!
I solved this Issue by adding this to the local config:
[...]
[...]
[...]
http://stackoverflow.com/questions/14472228/magento-rest-api-oauth-url-returning-404
@receter could you please help me with the same issue as yours?
I do have a local.xml file, but it does not include the <direct_front_name> tags. Is this app/etc/local.xml or some other place?
Thank you a lot, appreciate it.
Yeah, I guess that was the file. Just add all missing nodes to the file and be sure to clear all caches.
|
2025-04-01T04:34:54.350308
| 2017-11-18T14:13:41
|
275082095
|
{
"authors": [
"AuditeMarlow",
"nicodebo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9065",
"repo": "nicodebo/base16-fzf",
"url": "https://github.com/nicodebo/base16-fzf/pull/2"
}
|
gharchive/pull-request
|
Updated themes with builder
Couple of new themes, others have been case-insensitive'd.
thank you very much for the update
|
2025-04-01T04:34:54.353929
| 2015-11-13T09:00:55
|
116727351
|
{
"authors": [
"Railslide",
"nicolaiarocci"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9066",
"repo": "nicolaiarocci/cerberus",
"url": "https://github.com/nicolaiarocci/cerberus/issues/170"
}
|
gharchive/issue
|
Version links in documentation point at Eve docs instead of Cerberus
When trying to switch between versions of the docs (e.g. between latest and stable), it opens the chosen version of the Eve project documentation instead of Cerberus.
To reproduce:
go to http://docs.python-cerberus.org/en/latest/
from the version menu (in the right bottom corner of the page), select any version
Download and Github links work as expected
Hello, thanks for reporting this. It looks like it is a Read the Docs issue though; I opened a ticket there. Let's see if we get any advice/fix before I re-build the whole project there.
In order to mitigate this issue, and also because a lot of people keep emailing me since the documented features are not (yet) available in the released package, I switched the default documentation version to "stable".
Until this issue is solved, those interested can always look at the "latest" (development) docs at https://cerberus.readthedocs.org/en/latest/
This should now be solved. Had to delete and rebuild the whole Read the Docs project.
|
2025-04-01T04:34:54.383730
| 2022-01-13T07:36:48
|
1492354215
|
{
"authors": [
"nicolay-r"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9067",
"repo": "nicolay-r/AREnets",
"url": "https://github.com/nicolay-r/AREnets/issues/5"
}
|
gharchive/issue
|
Tensorflow warnings -- dependency should be a part of network contrib
Reasons: tensorflow 1.14.0 is pretty outdated, and the main contribution of this framework is data preparation rather than model training.
[x] move dependency into network contrib module
[x] make it a particular kernel. (Move all the tf dependencies into the related subfolder)
Now it is a separate project, so this issue is no longer relevant.
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f76d9ad5cc0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f76d9ad5cc0>>: AttributeError: module 'gast' has no attribute 'Str'
WARNING:tensorflow:Entity <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f76d9ad5cc0>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BasicLSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.BasicLSTMCell object at 0x7f76d9ad5cc0>>: AttributeError: module 'gast' has no attribute 'Str'
WARNING:tensorflow:From /home/nicolay/proj/AREnets/arenets/context/architectures/base/fc_single.py:18: The name tf.nn.xw_plus_b is deprecated. Please use tf.compat.v1.nn.xw_plus_b instead.
WARNING:tensorflow:From /home/nicolay/proj/AREnets/arenets/context/architectures/base/fc_single.py:18: The name tf.nn.xw_plus_b is deprecated. Please use tf.compat.v1.nn.xw_plus_b instead.
|
2025-04-01T04:34:54.407129
| 2022-04-26T14:04:51
|
1216012744
|
{
"authors": [
"nidhisinghai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9068",
"repo": "nidhisinghai/observability",
"url": "https://github.com/nidhisinghai/observability/pull/1"
}
|
gharchive/pull-request
|
Feature/gauge chart cypress test case
Description
Renders Gauge chart
Renders Gauge chart, adds value parameters and verifies the Reset button click is working
Renders Gauge chart and saves the visualization
Deletes all the randomly created Gauge chart visualizations
Verify Quick select section in Calendar overlay
Verify Calendar button and time range fields are working
Issues Resolved
[List any issues this PR will resolve]
Check List
[ ] New functionality includes testing.
[ ] All tests pass, including unit test, integration test and doctest
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] New functionality has user manual doc added
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
New separate files are created for charts under the visualizationcharts folder.
|
2025-04-01T04:34:54.447015
| 2024-06-24T15:02:13
|
2370480168
|
{
"authors": [
"MikePlante1",
"Sjoerd-Bo3",
"dnzxy",
"marionbarker",
"mountrcg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9069",
"repo": "nightscout/Trio",
"url": "https://github.com/nightscout/Trio/pull/332"
}
|
gharchive/pull-request
|
add manual glucose value visualization to chart and history
Visualizes manual glucose (finger prick) entries in the chart and history
Chart
History
Thank you for tackling this and bringing this small change to Trio.
If I may ask, why the ZStack to achieve the white outline, and why the white outline at all? Wouldn't this also work? Just asking because I seldom see icons with borders on iOS.
If I may ask, why the ZStack to achieve the white outline, and why the white outline at all? Wouldn't this also work? Just asking because I seldom see icons with borders on iOS.
fixed!
I just merged this into alpha to test this on a simulator.
I noticed this PR breaks the display when using "Glucose Simulator" for cgm, but it seems that's probably not really so much a problem with this PR as it is that readings from Glucose Simulator use nil for type instead of sgv.
Summary
Success using NS as a CGM
Switched to CGM simulator and it continued to work - not sure of the configuration that @MikePlante1 referred to.
Test
Began with SE running Trio, alpha branch, commit 3ec6176d, rPi DASH pod and NS as CGM
added manual glucose reading of 399
Manual glucose shows up on main screen as current glucose
Manual glucose does not appear in plot
Manual glucose appears with -- beside value in history
applied the patch
Manual glucose now shows up in plot
Manual glucose appears with red drop beside value in history
Realized I had not tested deleting the manual glucose entry. Returned and tested that too. Works as expected.
not sure of the configuration that @MikePlante1 referred to.
https://github.com/nightscout/Trio/assets/82073483/075a4bae-6e2c-4c52-bf86-00ecc504e9d6
Glucose dots for Glucose Simulator and Freestyle Simulator are drawn
Test without filter adjustment
Filter adjusted
I just merged this into alpha to test this on a simulator.
I noticed this PR breaks the display when using "Glucose Simulator" for cgm, but it seems that's probably not really so much a problem with this PR as it is that readings from Glucose Simulator use nil for type instead of sgv.
Yes, the culprit was this PR! Fixed now - see above.
Got sniped by @dnzxy there, but also LGTM!
|
2025-04-01T04:34:54.492227
| 2024-07-22T17:41:47
|
2423413100
|
{
"authors": [
"Laura",
"nikgraf"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9070",
"repo": "nikgraf/react-yjs",
"url": "https://github.com/nikgraf/react-yjs/pull/7"
}
|
gharchive/pull-request
|
Use use-sync-external-store shim so that react-yjs is backwards compatible
Uses the shim so that the library works with versions of React before 18 :)
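The change is essentially a one-line import swap; the shim package is the one React publishes for exactly this purpose:

// Resolves to the built-in hook on React 18 and to a userland
// fallback on React 16.8+/17.
import { useSyncExternalStore } from 'use-sync-external-store/shim';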
hey @Laura, thanks for the PR!
I think since the library is quite new and we never supported older React versions, I would not add this kind of support. Better to require React 18 with this one; maybe you could publish a fork to support older versions.
Np. Out of curiosity, are you planning on adding more functionality to the library that might require React 18?
I ended up using y-presence for my presence stuff, but presence would be super helpful! :)
great, will look into it. closing the PR for now
|
2025-04-01T04:34:54.495248
| 2015-10-18T16:53:10
|
112029172
|
{
"authors": [
"MoritzKn",
"zp-j"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9071",
"repo": "nikhilkalige/docblockr",
"url": "https://github.com/nikhilkalige/docblockr/issues/130"
}
|
gharchive/issue
|
Auto-adding for function fails when anonymous function is regarded as a parameter
When an anonymous function is passed as a parameter, the auto-added documentation is messed up. I'm not sure if this is an issue, or whether I shouldn't use auto-added documentation for a function call like this:
/**
* [request description]
* @method request
* @param {[type]} url [description]
* @param {[type]} function(error, response, content [description]
* @return {[type]} [description]
*/
request(url, function(error, response, content) {
if (error || response.statusCode !== 200) {
callback(error || new Error(response.statusMessage));
} else {
callback(null, content);
}
});
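For context: the likely failure mode here is a naive comma split of the parameter list. A depth-aware split keeps the anonymous function as a single parameter; an illustrative sketch, not DocBlockr's actual parser:

// Split a parameter list on commas only at nesting depth 0, so
// `function(error, response, content)` stays one parameter.
function splitParams(list: string): string[] {
    const out: string[] = [];
    let depth = 0;
    let current = '';
    for (const ch of list) {
        if (ch === '(' || ch === '[' || ch === '{') depth++;
        if (ch === ')' || ch === ']' || ch === '}') depth--;
        if (ch === ',' && depth === 0) {
            out.push(current.trim());
            current = '';
        } else {
            current += ch;
        }
    }
    if (current.trim()) out.push(current.trim());
    return out;
}

// splitParams('url, function(error, response, content)')
// -> ['url', 'function(error, response, content)']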
Is this still a thing? I can't reproduce this.
I'm pretty sure this is not reproducible anymore, otherwise please reopen.
|
2025-04-01T04:34:54.513890
| 2023-09-01T19:10:43
|
1877932611
|
{
"authors": [
"aditya-garg-09-01-2002",
"nikohoffren"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9072",
"repo": "nikohoffren/fork-commit-merge",
"url": "https://github.com/nikohoffren/fork-commit-merge/pull/404"
}
|
gharchive/pull-request
|
algorithm for bubble_sort
Optimized bubble sort algorithm without the use of any built-in functions, thus producing a correctly sorted array as output.
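For reference, a minimal sketch of this kind of optimized bubble sort (early exit when a pass makes no swaps; illustrative only, not necessarily the exact submitted code):

function bubbleSort(arr: number[]): number[] {
    const a = [...arr]; // avoid mutating the input
    for (let i = 0; i < a.length - 1; i++) {
        let swapped = false;
        // After pass i, the last i elements are already in place.
        for (let j = 0; j < a.length - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                [a[j], a[j + 1]] = [a[j + 1], a[j]];
                swapped = true;
            }
        }
        if (!swapped) break; // already sorted: the optimization
    }
    return a;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]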
Looks good!
Merged
This is an automated message from Fork, Commit, Merge [BOT].
Thank you for your contribution! Your pull request has been merged. The files have been reset for the next contributor.
What's next?
If you're looking for more ways to contribute, I invite you to check out my other projects. Just click here to find more. These projects contain real issues that you can help resolve. You can also check out the Influences section in the README to find more projects similar to this one.
Also, please leave a star on this project if you feel it helped you, I would really appreciate it.
I look forward to seeing your contributions!
|
2025-04-01T04:34:54.517242
| 2023-10-01T04:52:56
|
1920560520
|
{
"authors": [
"nikohoffren",
"vedantsrivastava42"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9073",
"repo": "nikohoffren/fork-commit-merge",
"url": "https://github.com/nikohoffren/fork-commit-merge/pull/757"
}
|
gharchive/pull-request
|
contact form HTML added
Description
Added a contact form containing name, email, phone number and message.
Additional Context
Merged
This is an automated message from Fork, Commit, Merge [BOT].
Thank you for your contribution! Your pull request has been merged. The files have been reset for the next contributor.
What's next?
If you're looking for more ways to contribute, I invite you to check out my other projects. Just click here to find more. These projects contain real issues that you can help resolve. You can also check out the Influences section in the README to find more projects similar to this one.
Also, please leave a star on this project if you feel it helped you, I would really appreciate it.
I look forward to seeing your contributions!
|
2025-04-01T04:34:54.537019
| 2021-03-02T14:01:15
|
820046989
|
{
"authors": [
"YssDiamond",
"cisko99za",
"nilaoda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9074",
"repo": "nilaoda/N_m3u8DL-CLI",
"url": "https://github.com/nilaoda/N_m3u8DL-CLI/issues/390"
}
|
gharchive/issue
|
merge audio-video
hi, excellent software, but for videos with DRM the download is separate: one audio file and one video file.
You then need to decrypt (with key and ID) and merge with ffmpeg...
If you integrate mp4decrypt (or also place it in the same folder as N_m3u8DL-CLI), and the same for ffmpeg, it would be easier if the program, after downloading, merged the 2 files (audio & video) and decrypted them, creating a merged and decrypted final file... is this possible in future releases?
I have tried the package N_m3u8DL-CLI_v2.9.5_with_ffmpeg_and_SimpleG but it's the same thing: one audio file and one video file...
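For illustration, the manual post-processing described above usually comes down to two external commands; a minimal Python sketch, assuming Bento4's mp4decrypt and ffmpeg are on PATH, and using placeholder file names and a placeholder KID:key pair:

import subprocess

KID_AND_KEY = "<kid>:<key>"  # placeholder, obtained from the DRM licence

# 1. Decrypt the downloaded audio and video tracks (Bento4 mp4decrypt).
subprocess.run(["mp4decrypt", "--key", KID_AND_KEY, "video_encrypted.mp4", "video.mp4"], check=True)
subprocess.run(["mp4decrypt", "--key", KID_AND_KEY, "audio_encrypted.m4a", "audio.m4a"], check=True)

# 2. Merge the decrypted streams without re-encoding (ffmpeg stream copy).
subprocess.run(["ffmpeg", "-i", "video.mp4", "-i", "audio.m4a", "-c", "copy", "merged.mp4"], check=True)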
+1 great suggestion, would love it too
I think you'd better do it yourself
|
2025-04-01T04:34:54.630715
| 2018-05-03T04:21:23
|
319784234
|
{
"authors": [
"Lahphim",
"olivierobert"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9075",
"repo": "nimbl3/rails-templates",
"url": "https://github.com/nimbl3/rails-templates/pull/52"
}
|
gharchive/pull-request
|
Fix: Deprecated code for simplecov
What happened
Got a deprecated code warning in the simplecov file.
/spec/support/simplecov.rb:2:in `<top (required)>': [DEPRECATION] ::[] is deprecated.
Insight
Following this reference.
https://github.com/colszowka/simplecov#using-multiple-formatters
Issue
[#35]
closes #35
|
2025-04-01T04:34:54.669676
| 2021-06-13T21:48:16
|
919887903
|
{
"authors": [
"CALO77103",
"nimiology"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9076",
"repo": "nimiology/spotify_downloader_telegram__bot",
"url": "https://github.com/nimiology/spotify_downloader_telegram__bot/issues/2"
}
|
gharchive/issue
|
the song can't be downloaded
This time the bot works perfectly, but when I give it a Spotify track it says this:
897551417:https://open.spotify.com/track/3rdXXspuZign63VttNZrbF
Blasterjaxx - Make It Out Alive (feat. Jonathan Mendelsohn) [Official Lyric Video]
https://www.youtube.com//watch?v=dpeeXf1bcgk
[generic] watch?v=dpeeXf1bcgk: Requesting header
[redirect] Following redirect to https://consent.youtube.com/m?continue=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdpeeXf1bcgk&gl=IT&m=0&pc=yt&uxe=23983172&hl=en&src=1
[youtube:tab] m: Downloading webpage
[download] Downloading playlist: m - Home
[youtube:tab] playlist m - Home: Downloading 0 videos
[download] Finished downloading playlist: m - Home
It seems to be normal, but the bot sends me "can't download music"
Do you have ffmpeg?
I'm on windows with ubuntu installed
anyway I have it
Did you change the token in main.py and in spotify.py as well?
I think i fixed the bug
check it out
Same problem, try to update or idk
I can still use this program on Heroku, but there is a bug that I can't find.
Here is my bot, use it until I fix the program.
https://t.me/spotdlmp3_bot
Can u give me the code of your bot?
I want to try to fix ur bot :)
of course!
spotify.py
from __future__ import unicode_literals
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
import requests
from youtube_search import YoutubeSearch
import youtube_dl
import eyed3.id3
import eyed3
import lyricsgenius
import telepot

spotifyy = spotipy.Spotify(
    client_credentials_manager=SpotifyClientCredentials(client_id='a145db3dcd564b9592dacf10649e4ed5',
                                                        client_secret='389614e1ec874f17b8c99511c7baa2f6'))
genius = lyricsgenius.Genius('biZZReO7F98mji5oz3cE0FiIG73Hh07qoXSIzYSGNN3GBsnY-eUrPAVSdJk_0_de')
token = 'YOUR TOKEN'
bot = telepot.Bot(token)

def DOWNLOADMP3(link, chat_id):
    results = spotifyy.track(link)
    song = results['name']
    artist = results['artists'][0]['name']
    YTSEARCH = str(song + " " + artist)
    artistfinder = results['artists']
    tracknum = results['track_number']
    album = results['album']['name']
    realese_date = int(results['album']['release_date'][:4])
    if len(artistfinder) > 1:
        fetures = "( Ft."
        for lomi in range(0, len(artistfinder)):
            try:
                if lomi < len(artistfinder) - 2:
                    artistft = artistfinder[lomi + 1]['name'] + ", "
                    fetures += artistft
                else:
                    artistft = artistfinder[lomi + 1]['name'] + ")"
                    fetures += artistft
            except:
                pass
    else:
        fetures = ""
    time_duration = ""
    time_duration1 = ""
    time_duration2 = ""
    time_duration3 = ""
    millis = results['duration_ms']
    millis = int(millis)
    seconds = (millis / 1000) % 60
    minutes = (millis / (1000 * 60)) % 60
    seconds = int(seconds)
    minutes = int(minutes)
    if seconds >= 10:
        if seconds < 59:
            time_duration = "{0}:{1}".format(minutes, seconds)
            time_duration1 = "{0}:{1}".format(minutes, seconds + 1)
            time_duration2 = "{0}:{1}".format(minutes, seconds - 1)
            if seconds == 10:
                time_duration2 = "{0}:0{1}".format(minutes, seconds - 1)
                time_duration3 = "{0}:{1}".format(minutes, seconds + 2)
            elif seconds < 58:
                time_duration3 = "{0}:{1}".format(minutes, seconds + 2)
                time_duration2 = "{0}:{1}".format(minutes, seconds - 1)
            elif seconds == 58:
                time_duration3 = "{0}:0{1}".format(minutes + 1, seconds - 58)
                time_duration2 = "{0}:{1}".format(minutes, seconds - 1)
            else:
                time_duration2 = "{0}:{1}".format(minutes, seconds - 1)
        else:
            time_duration1 = "{0}:0{1}".format(minutes + 1, seconds - 59)
            if seconds == 59:
                time_duration3 = "{0}:0{1}".format(minutes + 1, seconds - 58)
    else:
        time_duration = "{0}:0{1}".format(minutes, seconds)
        time_duration1 = "{0}:0{1}".format(minutes, seconds + 1)
        if seconds < 8:
            time_duration3 = "{0}:0{1}".format(minutes, seconds + 2)
            time_duration2 = "{0}:0{1}".format(minutes, seconds - 1)
        elif seconds == 9 or seconds == 8:
            time_duration3 = "{0}:{1}".format(minutes, seconds + 2)
        elif seconds == 0:
            time_duration2 = "{0}:{1}".format(minutes - 1, seconds + 59)
            time_duration3 = "{0}:0{1}".format(minutes, seconds + 2)
        else:
            time_duration2 = "{0}:0{1}".format(minutes, seconds - 1)
            time_duration3 = "{0}:0{1}".format(minutes, seconds + 2)
    trackname = song + fetures
    response = requests.get(results['album']['images'][0]['url'])
    DIRCOVER = "songpicts//" + trackname + ".png"
    file = open(DIRCOVER, "wb")
    file.write(response.content)
    file.close()
    results = list(YoutubeSearch(str(YTSEARCH)).to_dict())
    LINKASLI = ''
    for URLSSS in results:
        timeyt = URLSSS["duration"]
        print(URLSSS['title'])
        if timeyt == time_duration or timeyt == time_duration1:
            LINKASLI = URLSSS['url_suffix']
            break
        elif timeyt == time_duration2 or timeyt == time_duration3:
            LINKASLI = URLSSS['url_suffix']
            break
    YTLINK = str("https://www.youtube.com/" + LINKASLI)
    print(YTLINK)
    options = {
        # PERMANENT options
        'format': 'bestaudio/best',
        'keepvideo': False,
        'outtmpl': f'song//{trackname}.*',
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '320'
        }],
        # (OPTIONAL options)
        'noplaylist': True
    }
    with youtube_dl.YoutubeDL(options) as mp3:
        mp3.download([YTLINK])
    aud = eyed3.load(f"song//{trackname}.mp3")
    aud.tag.artist = artist
    aud.tag.album = album
    aud.tag.album_artist = artist
    aud.tag.title = trackname
    aud.tag.track_num = tracknum
    aud.tag.year = realese_date
    try:
        songok = genius.search_song(song, artist)
        aud.tag.lyrics.set(songok.lyrics)
    except:
        pass
    aud.tag.images.set(3, open("songpicts//" + trackname + ".png", 'rb').read(), 'image/png')
    aud.tag.save()
    bot.sendAudio(chat_id, open(f'song//{trackname}.mp3', 'rb'), title=trackname)

def album(link):
    results = spotifyy.album_tracks(link)
    albums = results['items']
    while results['next']:
        results = spotifyy.next(results)
        albums.extend(results['items'])
    return albums

def artist(link):
    results = spotifyy.artist_top_tracks(link)
    albums = results['tracks']
    return albums

def searchalbum(track):
    results = spotifyy.search(track)
    return results['tracks']['items'][0]['album']['external_urls']['spotify']

def playlist(link):
    results = spotifyy.playlist_tracks(link)
    return results['items'][:50]

def searchsingle(track):
    results = spotifyy.search(track)
    return results['tracks']['items'][0]['href']

def searchartist(searchstr):
    results = spotifyy.search(searchstr)
    return results['tracks']['items'][0]['artists'][0]["external_urls"]['spotify']
main.py
from spotify import DOWNLOADMP3 as SONGDOWNLOADER
import telepot
import spotify
import requests
import threading

token = 'YOUR TOKEN'
bot = telepot.Bot(token)
nima = <PHONE_NUMBER>
bot.sendMessage(nima, "Now I'm alive")
sort = {}

def txtfinder(txt):
    a = txt.find("https://open.spotify.com")
    txt = txt[a:]
    return txt

def cantfind(chat_id):
    bot.sendSticker(chat_id, 'CAACAgQAAxkBAAIBE2BLNclvKLFHC-grzNdOEXKGl6cLAALzAAMSp2oDSBk1Yo7wCGUeBA')
    bot.sendMessage(chat_id, "can't find it")

def cantfindone(chat_id):
    bot.sendSticker(chat_id, 'CAACAgQAAxkBAAIFSWBF_m3GHUtZJxQzobvD_iWxYVClAAJuAgACh4hSOhXuVi2-7-xQHgQ')
    bot.sendMessage(chat_id, "can't download one of them")

def downloader(link, chat_id, type):
    PLAYLIST = False
    if type == 'AL':
        ITEMS = spotify.album(link)
    elif type == 'AR':
        ITEMS = spotify.artist(link)
    elif type == 'PL':
        ITEMS = spotify.playlist(link)
        PLAYLIST = True
    else:
        ITEMS = []
    MESSAGE = ""
    for song in ITEMS:
        if PLAYLIST:
            song = song['track']
        MESSAGE += song['name'] + " :\n " + song['external_urls']['spotify'] + '\n\n'
    bot.sendMessage(chat_id, MESSAGE)
    for song in ITEMS:
        if PLAYLIST:
            song = song['track']
        try:
            SONGDOWNLOADER(song['href'], chat_id)
        except:
            cantfindone(chat_id)

def START(msg, chat_id):
    print(f"{chat_id}:{msg}")
    msglink = txtfinder(msg)
    if msglink[:30] == ('https://open.spotify.com/album'):
        downloader(msg, chat_id, 'AL')
    elif msglink[:30] == ('https://open.spotify.com/track'):
        try:
            SONGDOWNLOADER(msg, chat_id)
        except:
            bot.sendSticker(chat_id,
                            'CAACAgQAAxkBAAIFSWBF_m3GHUtZJxQzobvD_iWxYVClAAJuAgACh4hSOhXuVi2-7-xQHgQ')
            bot.sendMessage(chat_id, "can't download music")
    elif msg[:33] == 'https://open.spotify.com/playlist':
        downloader(msg, chat_id, 'PL')
    elif msglink[:31] == ('https://open.spotify.com/artist'):
        downloader(msg, chat_id, 'AR')
    elif msg == "/start":
        bot.sendMessage(chat_id,
                        "Hi \nsend me spotify link and I'll give you music\nor use /single or /album or "
                        "/artist")
    elif msg == "/album":
        sort[chat_id] = 'album'
        bot.sendMessage(chat_id, 'send name and name of artist like this: \nName album\nor for better search use this:\nName album - Name artist')
    elif msg == '/single':
        sort[chat_id] = 'single'
        bot.sendMessage(chat_id, 'send name and name of artist like this: \nName song\nor for better search use this:\nName song - Name artist')
    elif msg == '/artist':
        sort[chat_id] = 'artist'
        bot.sendMessage(chat_id, 'send name and name of artist like this: \nName artist')
    else:
        try:
            if sort[chat_id] == 'artist':
                try:
                    downloader(spotify.searchartist(msg), chat_id, 'AR')
                    del sort[chat_id]
                except:
                    cantfind(chat_id)
            elif sort[chat_id] == 'album':
                try:
                    downloader(spotify.searchalbum(msg), chat_id, 'AL')
                    del sort[chat_id]
                except:
                    cantfind(chat_id)
            elif sort[chat_id] == 'single':
                try:
                    SONGDOWNLOADER(spotify.searchsingle(msg), chat_id)
                    del sort[chat_id]
                except:
                    cantfind(chat_id)
        except:
            bot.sendSticker(chat_id, 'CAACAgQAAxkBAAIBFGBLNcpfFcTLxnn5lR20ZbE2EJbrAAJRAQACEqdqA2XZDc7OSUrIHgQ')
            bot.sendMessage(chat_id, 'send me link or use /single or /album or /artist')

print('Listening ...')
tokenurl = f'https://api.telegram.org/bot{token}'
Update = tokenurl + "/getUpdates"

def UPDATE():
    MESSAGES = requests.get(Update).json()
    return MESSAGES['result']

while 1:
    if threading.activeCount() - 1 < 15:
        try:
            for message in UPDATE():
                offset = message['update_id'] + 1
                offset = Update + f"?offset={offset}"
                offset = requests.post(offset)
                msg = message['message']['text']
                chat_id = message['message']['from']['id']
                thread = threading.Thread(target=START, args=(msg, chat_id))
                thread.start()
        except:
            pass
requierments.txt
youtube_search
youtube_dl
spotipy
eyed3
lyricsgenius
requests
telepot
ffmpeg
bs4
About how to deploy to Heroku:
1. Create an account on Heroku.
2. Create an app on Heroku.
3. Create a repo like this one: https://github.com/michaelkryukov/heroku-python-script
4. Put your code in that repo.
5. Go back to Heroku, open your app and go to Deploy.
6. Choose the deployment method 'GitHub' and sync your GitHub with Heroku.
7. Choose the deploy branch.
8. Go to Settings and add this link to your buildpacks: 'https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git'
9. Go to Resources and activate your app.
If you didn't understand, tell me and I will create a video about it.
Can u do a video of how to deploy? Because I think I did something wrong and it says 'no module named spotipy'.
I will....
https://youtu.be/gPoGVPjy8ZI
Now when I give the bot a link it says this:
2021-06-29T21:27:29.506218+00:00 app[worker.1]: 897551417:/single https://open.spotify.com/track/69WpV0U7OMNFGyq8I63dcC?si=yBz53zMAQsC1wqSiiDhu8w
2021-06-29T21:27:30.401594+00:00 app[worker.1]: ENHYPEN (μνμ΄ν) 'Given-Taken' Official MV
2021-06-29T21:27:30.401604+00:00 app[worker.1]: ENHYPEN 'Given-Taken' Lyrics (μνμ΄ν Given-Taken κ°μ¬) (Color Coded Lyrics)
2021-06-29T21:27:30.401608+00:00 app[worker.1]: https://www.youtube.com//watch?v=ZLM9197v8vM
2021-06-29T21:27:30.450211+00:00 app[worker.1]: [generic] watch?v=ZLM9197v8vM: Requesting header
2021-06-29T21:27:30.818884+00:00 app[worker.1]: [redirect] Following redirect to https://www.youtube.com/watch?v=ZLM9197v8vM
2021-06-29T21:27:30.822183+00:00 app[worker.1]: [youtube] ZLM9197v8vM: Downloading webpage
2021-06-29T21:27:31.292501+00:00 app[worker.1]: [youtube] Downloading just video ZLM9197v8vM because of --no-playlist
2021-06-29T21:27:31.293494+00:00 app[worker.1]: [youtube] ZLM9197v8vM: Downloading player 1a0ca43b
2021-06-29T21:27:31.762874+00:00 app[worker.1]: WARNING: Writing cache to '/app/.cache/youtube-dl/youtube-sigfuncs/js_1a0ca43b_106.json' failed: Traceback (most recent call last):
2021-06-29T21:27:31.762883+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/youtube_dl/cache.py", line 49, in store
2021-06-29T21:27:31.762884+00:00 app[worker.1]: os.makedirs(os.path.dirname(fn))
2021-06-29T21:27:31.762885+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 213, in makedirs
2021-06-29T21:27:31.762885+00:00 app[worker.1]: makedirs(head, exist_ok=exist_ok)
2021-06-29T21:27:31.762885+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 223, in makedirs
2021-06-29T21:27:31.762886+00:00 app[worker.1]: mkdir(name, mode)
2021-06-29T21:27:31.762887+00:00 app[worker.1]: NotADirectoryError: [Errno 20] Not a directory: '/app/.cache/youtube-dl'
2021-06-29T21:27:31.762887+00:00 app[worker.1]:
2021-06-29T21:27:32.000990+00:00 app[worker.1]: WARNING: Writing cache to '/app/.cache/youtube-dl/youtube-sigfuncs/js_1a0ca43b_102.json' failed: Traceback (most recent call last):
2021-06-29T21:27:32.000992+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/youtube_dl/cache.py", line 49, in store
2021-06-29T21:27:32.000993+00:00 app[worker.1]: os.makedirs(os.path.dirname(fn))
2021-06-29T21:27:32.000993+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 213, in makedirs
2021-06-29T21:27:32.000994+00:00 app[worker.1]: makedirs(head, exist_ok=exist_ok)
2021-06-29T21:27:32.000994+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 223, in makedirs
2021-06-29T21:27:32.000995+00:00 app[worker.1]: mkdir(name, mode)
2021-06-29T21:27:32.000995+00:00 app[worker.1]: NotADirectoryError: [Errno 20] Not a directory: '/app/.cache/youtube-dl'
2021-06-29T21:27:32.000996+00:00 app[worker.1]:
2021-06-29T21:27:32.299113+00:00 app[worker.1]: [download] Destination: song//Given-Taken.*
2021-06-29T21:27:32.373894+00:00 app[worker.1]:
2021-06-29T21:27:32.390842+00:00 app[worker.1]: [ffmpeg] Destination: song//Given-Taken.mp3
2021-06-29T21:27:39.030635+00:00 app[worker.1]: Deleting original file song//Given-Taken.* (pass -k to keep)
2021-06-29T21:27:39.033686+00:00 app[worker.1]: Searching for "Given-Taken" by ENHYPEN...
2021-06-29T21:27:39.713769+00:00 app[worker.1]: Done.
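The WARNING lines in the log above are only youtube_dl failing to write its signature cache under /app/.cache; they look cosmetic rather than the cause of the failure. If you want to silence them on Heroku, youtube_dl's documented cachedir option can disable the cache entirely (a hedged sketch; only the added key matters, the other options stay as in spotify.py):

options = {
    # ... keep the existing 'format', 'outtmpl', 'postprocessors', etc. from spotify.py ...
    'cachedir': False,  # skip the on-disk signature cache, which the Heroku dyno cannot create here
    'noplaylist': True,
}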
Does the program work?
I have this problem but it works for me.
Quota is exhausted, I have to wait
Same error
2021-06-30T10:35:12.646555+00:00 app[Safone.1]: Fedez, Achille Lauro, Orietta Berti - MILLE (Official Video)
2021-06-30T10:35:12.646569+00:00 app[Safone.1]: Fedez, Achille Lauro & Orietta Berti - MILLE (Official Video)
2021-06-30T10:35:12.646577+00:00 app[Safone.1]: fedez, achille lauro, orietta berti - mille (testo)
2021-06-30T10:35:12.646578+00:00 app[Safone.1]: Mille - Fedez, Achille Lauro - feat. Orietta Berti ENGLISH LYRICS
2021-06-30T10:35:12.646578+00:00 app[Safone.1]: Fedez - MILLE (feat. Orietta Berti, Achille Lauro)
2021-06-30T10:35:12.646579+00:00 app[Safone.1]: https://www.youtube.com//watch?v=b8bNFinHEAg
2021-06-30T10:35:12.686106+00:00 app[Safone.1]: [generic] watch?v=b8bNFinHEAg: Requesting header
2021-06-30T10:35:13.072935+00:00 app[Safone.1]: [redirect] Following redirect to https://www.youtube.com/watch?v=b8bNFinHEAg
2021-06-30T10:35:13.077096+00:00 app[Safone.1]: [youtube] b8bNFinHEAg: Downloading webpage
2021-06-30T10:35:13.418438+00:00 app[Safone.1]: [youtube] Downloading just video b8bNFinHEAg because of --no-playlist
2021-06-30T10:35:13.419855+00:00 app[Safone.1]: [youtube] b8bNFinHEAg: Downloading player 1a0ca43b
2021-06-30T10:35:13.794296+00:00 app[Safone.1]: WARNING: Writing cache to '/app/.cache/youtube-dl/youtube-sigfuncs/js_1a0ca43b_106.json' failed: Traceback (most recent call last):
2021-06-30T10:35:13.794312+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/site-packages/youtube_dl/cache.py", line 49, in store
2021-06-30T10:35:13.794313+00:00 app[Safone.1]: os.makedirs(os.path.dirname(fn))
2021-06-30T10:35:13.794314+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 213, in makedirs
2021-06-30T10:35:13.794314+00:00 app[Safone.1]: makedirs(head, exist_ok=exist_ok)
2021-06-30T10:35:13.794315+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 223, in makedirs
2021-06-30T10:35:13.794316+00:00 app[Safone.1]: mkdir(name, mode)
2021-06-30T10:35:13.794316+00:00 app[Safone.1]: NotADirectoryError: [Errno 20] Not a directory: '/app/.cache/youtube-dl'
2021-06-30T10:35:13.794318+00:00 app[Safone.1]:
2021-06-30T10:35:14.051436+00:00 app[Safone.1]: WARNING: Writing cache to '/app/.cache/youtube-dl/youtube-sigfuncs/js_1a0ca43b_102.json' failed: Traceback (most recent call last):
2021-06-30T10:35:14.051442+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/site-packages/youtube_dl/cache.py", line 49, in store
2021-06-30T10:35:14.051443+00:00 app[Safone.1]: os.makedirs(os.path.dirname(fn))
2021-06-30T10:35:14.051443+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 213, in makedirs
2021-06-30T10:35:14.051444+00:00 app[Safone.1]: makedirs(head, exist_ok=exist_ok)
2021-06-30T10:35:14.051444+00:00 app[Safone.1]: File "/app/.heroku/python/lib/python3.8/os.py", line 223, in makedirs
2021-06-30T10:35:14.051445+00:00 app[Safone.1]: mkdir(name, mode)
2021-06-30T10:35:14.051445+00:00 app[Safone.1]: NotADirectoryError: [Errno 20] Not a directory: '/app/.cache/youtube-dl'
2021-06-30T10:35:14.051445+00:00 app[Safone.1]:
2021-06-30T10:35:14.169592+00:00 app[Safone.1]: [download] Destination: song//MILLE (feat. Orietta Berti)( Ft.Achille Lauro, Orietta Berti).*
2021-06-30T10:35:14.248986+00:00 app[Safone.1]:
2021-06-30T10:35:14.268791+00:00 app[Safone.1]: [ffmpeg] Destination: song//MILLE (feat. Orietta Berti)( Ft.Achille Lauro, Orietta Berti).mp3
2021-06-30T10:35:20.362839+00:00 app[Safone.1]: Deleting original file song//MILLE (feat. Orietta Berti)( Ft.Achille Lauro, Orietta Berti).* (pass -k to keep)
2021-06-30T10:35:20.365790+00:00 app[Safone.1]: Searching for "MILLE (feat. Orietta Berti)" by Fedez...
2021-06-30T10:35:20.879250+00:00 app[Safone.1]: Done.
Do you get the song with the error or just get the error?
No, I get the error and the bot says it can't download the music
I think I know what's wrong with the bot
What?
It say same error but now it work :)
What?
U sent me ur script and that doesn't work, I tried with the GitHub script and now it works
It say same error but now it work :)
same For me :)
Thank u so much :D
Anyway... I sent you a message on Telegram, I'm Nima.
Can I close it now?
you're welcome.
yeah
|
2025-04-01T04:34:54.684869
| 2021-03-15T15:27:45
|
831926026
|
{
"authors": [
"niradler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9077",
"repo": "nimrodkor/checkov",
"url": "https://github.com/nimrodkor/checkov/pull/109"
}
|
gharchive/pull-request
|
CKV K8S checks
CKV_K8S_114
CKV_K8S_149
CKV_K8S_150
CKV_K8S_151
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
@nimrodkor any suggestion on the error?
|
2025-04-01T04:34:54.697674
| 2019-10-14T09:52:17
|
506551050
|
{
"authors": [
"iamvijaydev",
"ninjabachelor"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9078",
"repo": "ninjabachelor/markdown-yaml-metadata-parser",
"url": "https://github.com/ninjabachelor/markdown-yaml-metadata-parser/pull/5"
}
|
gharchive/pull-request
|
support win32 file formats
This PR is intended to add support for the Windows platform. On Windows, the newline is represented as \r\n.
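The underlying idea (splitting on a platform-agnostic newline rather than a literal \n) is simple to sketch; shown here in Python purely for illustration, since the package itself is JavaScript:

import re

def split_lines(text):
    # Treat Windows (\r\n), old Mac (\r) and Unix (\n) line endings the same way.
    return re.split(r"\r\n|\r|\n", text)

print(split_lines("---\r\ntitle: Hello\r\n---\r\n# Heading"))
# ['---', 'title: Hello', '---', '# Heading']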
@ninjabachelor I have a personal project that uses your awesome package to parse and convert the markdown files to markdownjs files. I am currently using a local build of this PR to get my job done. Please review and approve the PR so I can switch to the official package.
Thanks @iamvijaydev !
maybe I'll replace 'platform' with 'os', just to have an explicit dependency... even if it's identical :)
Thanks again,
Alberto
|
2025-04-01T04:34:54.704927
| 2023-05-08T16:09:31
|
1700533885
|
{
"authors": [
"ninpartners"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9079",
"repo": "ninpartners/gu_uptime",
"url": "https://github.com/ninpartners/gu_uptime/issues/593"
}
|
gharchive/issue
|
[RESOLVED] Interface eth0(): High error rate (>2 for 5m)
Problem started at 12:09:29 on 2023.05.08
Problem name: Interface eth0(): High error rate (>2 for 5m)
Host: S100-Primario
Severity: Warning
Operational data: errors in: 4, errors out: 0
Original problem ID: 20425
Event details in Zabbix: http://<IP_ADDRESS>/tr_events.php?triggerid=23483&eventid=20425
Problem has been resolved in 9m 0s at 12:18:29 on 2023.05.08
Problem name: Interface eth0(): High error rate (>2 for 5m)
Host: S100-Primario
Severity: Warning
Original problem ID: 20425
|
2025-04-01T04:34:54.709725
| 2024-11-21T22:58:47
|
2681246405
|
{
"authors": [
"michellewang",
"nikhil153"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9080",
"repo": "nipoppy/nipoppy",
"url": "https://github.com/nipoppy/nipoppy/issues/435"
}
|
gharchive/issue
|
[ENH] Generate list of participants to be submitted for HPC job
Is there an existing issue for this?
[x] I have searched the existing issues
New feature
A back-up / manual option for submitting available (and not yet fully processed) participants to an HPC job.
This will involve changing the BasePipelineWorkflow:
[ ] Check the bagel to identify participants that are yet to be processed, per session and per pipeline
[ ] Write separate lists of these participants to disk
Unclear documentation
No response
Just wanted to clarify that the list is already being generated, we just need to write the file when the user asks for it (--write-participant-list <PATH>? --write-participant-session-list <PATH>?)
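As a rough sketch of what "write the file when the user asks for it" could look like, assuming the pending participant/session pairs are already available in memory (a hypothetical helper, not nipoppy's actual API):

from pathlib import Path

def write_participant_session_list(pairs, out_path):
    # pairs: iterable of (participant_id, session_id) tuples still to be processed
    out_path = Path(out_path)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    with out_path.open("w") as f:
        for participant_id, session_id in pairs:
            f.write(f"{participant_id}\t{session_id}\n")

write_participant_session_list([("sub-01", "ses-1"), ("sub-02", "ses-1")], "lists/to_process.tsv")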
|
2025-04-01T04:34:54.769250
| 2023-05-23T05:16:00
|
1721199662
|
{
"authors": [
"lahma",
"tonyqus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9081",
"repo": "nissl-lab/npoi",
"url": "https://github.com/nissl-lab/npoi/pull/1084"
}
|
gharchive/pull-request
|
NUKE improvements
NUKE 7.0.2
don't print missing documentation warnings
don't build NUKE project as part of solution
also use Ubuntu for building (faster builds)
skip tests, as AppVeyor isn't testing either
separate workflow for PRs (still same for master build)
The NUKE update was done using the latest global tool and running the command nuke :update. New GitHub workflow files were generated by running build.cmd; NUKE updated the definitions automatically. Maybe at some point we could also enable tests when they are green in master.
LGTM
I see the change. Interesting
The Ubuntu runner should generally be the fastest free CI around. Windows is somewhat slower. The Ubuntu runner will also make sure that anyone can compile on macOS and Linux. AppVeyor might not be needed anymore if it's only doing builds. NUKE can be customized for many things, like building and publishing to NuGet on tag creation, etc.
BTW, I read your recent blog post (great writeup!) and now I'm curious if I could make NPOI faster
now I'm curious if I could make NPOI faster
Definitely, you/we can. But it's a huge task, I'm afraid. And again, I don't want to totally replace the commercial products created by these rich companies who actually hire developers.
There are two major improvements we can make.
Refactor the cell management and support virtual cell management based on ranges instead of creating real cell classes for every existing cell, because this takes too much memory (one class instance per cell is too much). This feature was implemented in EPPlus several years ago. This can also increase the speed of creating/modifying cells.
Memory consumption improvement - NPOI still has a few chances of hitting OOM (more than EPPlus does). This has much room to improve.
Although I do know the improvement direction, my major problem is that I don't want to put more than 5 days per month into NPOI since it's not profitable. In other words, it's mainly in maintenance mode. This also keeps my wife from complaining too much like a bee :D
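To illustrate the first point, the core of "virtual" cell management is keeping one sparse store per sheet instead of one object per cell; a toy Python sketch of the idea (conceptual only, not NPOI's or EPPlus's actual design):

class Sheet:
    def __init__(self):
        # Sparse storage: only cells that actually hold a value take up memory.
        self._values = {}  # (row, col) -> value

    def set_value(self, row, col, value):
        self._values[(row, col)] = value

    def get_value(self, row, col, default=None):
        # No per-cell object is allocated for empty cells.
        return self._values.get((row, col), default)

sheet = Sheet()
sheet.set_value(0, 0, "header")
print(sheet.get_value(0, 0))      # header
print(sheet.get_value(100, 100))  # None, without allocating anything for the empty range in between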
|
2025-04-01T04:34:54.791026
| 2015-07-08T18:21:32
|
93857060
|
{
"authors": [
"choptastic",
"fooflare"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9082",
"repo": "nitrogen/nitrogen",
"url": "https://github.com/nitrogen/nitrogen/issues/89"
}
|
gharchive/issue
|
Fail compilation on Erlang 18.0
Hi, due to the fail_on_warning option in:
https://github.com/nitrogen/nitrogen/blob/master/rel/overlay/rebar.config.src#L11
I get this warning and compilation fails:
simple_bridge/src/sb_file_upload_handler.erl:194: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
ERROR: compile failed while processing /home/fooflare/workspace/Nitrogen/nitrogen/rel/nitrogen/lib/simple_bridge: rebar_abort
Thanks.
Thanks for the reminder!
If you pull the latest version of simple_bridge, it should compile properly now.
-Jesse
Hi again :) now it compiles fine. Because of the now() function there are 3 more warnings:
https://github.com/nitrogen/nprocreg/blob/master/src/nprocreg.erl#L154
https://github.com/nitrogen/nprocreg/blob/master/src/nprocreg.erl#L162
https://github.com/nitrogen/nitrogen_core/blob/master/src/lib/wf_render_elements.erl#L132
With these 3 warnings the compilation doesn't fail, but I'm letting you know in case you want to do something about them.
I definitely know of those errors and I'm in the middle of refactoring all of them out. For now the warnings are fine, and don't break anything.
Thanks for the reports!
I'm going to close this issue only because it will for sure be fixed in the next releases of Nitrogen, SimpleBridge, and nprocreg.
|
2025-04-01T04:34:54.795149
| 2021-02-07T16:52:22
|
802997901
|
{
"authors": [
"nitros12",
"spdegabrielle",
"williewillus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9083",
"repo": "nitros12/racket-cord",
"url": "https://github.com/nitros12/racket-cord/pull/4"
}
|
gharchive/pull-request
|
Fix update-channel failing
wrong arity call to update-channel
@williewillus Do you have a test script? I'd like to try this out on the Racket Discord!
@spdegabrielle I'm using this library in a project called R16, which is a trick bot for Discord. Basically, it allows users to register and execute short code snippets in a sandbox from Discord commands.
https://git.sr.ht/~williewillus/r16/
Thanks for fixing this, I don't have much interest in working on this lib (I'm using my haskell library nowadays), so would you like to be added as a repo collaborator?
That'd be great, I had plans to add support for v8 of the gateway API (so gateway intents would be usable)
@williewillus sorry for contacting you via GitHub - I couldn't find another mechanism to message you
Hi Vincent, I recently asked you about racket-cord, and you kindly pointed me to your r16 bot repo. I'm thinking about lightly modifying it and including it in the racket-templates repository as a bot example that new racketeers could work from. Would you be willing to provide a suitable licence? Since the goal is reuse I'd prefer MIT or Apache 2.0, but it is up to you.
Kind regards,
Stephen
@spdegabrielle sure, we updated it to MIT. Here's the project page on sourcehut for posting https://sr.ht/~williewillus/r16/
|
2025-04-01T04:34:54.796598
| 2015-04-18T16:23:27
|
69323909
|
{
"authors": [
"donalwilson"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9084",
"repo": "niuzhiheng/caffe",
"url": "https://github.com/niuzhiheng/caffe/issues/62"
}
|
gharchive/issue
|
MATLAB crashed when running the matcaffe demo
I built the MainCaller and matcaffe successfully, but when I run the demo files there aren't any errors, yet MATLAB crashes. The same problem occurred in both MATLAB 2012b and 2013b. Did anyone else encounter this too? Asking for help!
I tracked the process in MATLAB and found the crash always occurred while the demo file called caffe.mex64 to initialize the Caffe model. Also, the size of my caffe.mex64 is 10.5 MB rather than 2.4 MB, as provided under MSVCmex/bin.
|
2025-04-01T04:34:54.833116
| 2024-12-18T04:44:20
|
2746757028
|
{
"authors": [
"Mic92",
"sedlund"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9085",
"repo": "nix-community/nixos-anywhere",
"url": "https://github.com/nix-community/nixos-anywhere/pull/439"
}
|
gharchive/pull-request
|
fix(nixos-generate-config): dont output carriage returns in output files
ssh -t is being used by runSsh, which causes carriage returns in output intended for a terminal. It could also output escape sequences.
https://unix.stackexchange.com/a/420438
This change fixes the problem (#438) by only outputting plain newlines, but it might be worth removing ssh password entry as an option along with the tty allocation, and forcing the use of sshpass for passwords.
There is a FIXME note regarding the output of nixos-facter containing strange characters. This may resolve that as well.
Also updated the message about nixos-facter not being found, to hopefully clarify and resolve #423 around the --generate-hardware-config option.
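As an alternative to suppressing the tty, the carriage returns could also be normalised after capturing the output; a tiny illustrative Python sketch of that fallback (not what this PR does, which avoids producing them in the first place):

def normalize_newlines(text: str) -> str:
    # Convert CRLF (and any stray CR) from a tty-allocated ssh session into plain LF.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(normalize_newlines("line one\r\nline two\r\n")))  # 'line one\nline two\n'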
What version did you use for nixos-facter. We switched logging libraries at some point, which resolved an issue with ANSI escapes.
should have quoted:
https://github.com/nix-community/nixos-anywhere/blob/bf4c0c50d65b4f1cf13972da7feaa4108879abc9/src/nixos-anywhere.sh#L457-L459
I was unsure what 'weird characters' referred to.
So maybe what you refer to fixed that, but CRLF in output persists with tty allocation with ssh.
I tested with:
ssh -t localhost nixos-generate-config --show-hardware-config --no-filesystems > ngc-old.out
ssh -t localhost "stty nl; nixos-generate-config --show-hardware-config --no-filesystems" > ngc-new.out
ssh -t localhost sudo nixos-facter > facter-old.out
ssh -t localhost "stty nl; sudo nixos-facter" > facter-new.out
fd -e out -x nix shell nixpkgs#tinyxxd -c xxd {} {.}.xxd
file *.out
facter-new.out: JSON text data
facter-old.out: JSON text data
ngc-new.out: Unicode text, UTF-8 text
ngc-old.out: Unicode text, UTF-8 text, with CRLF line terminators
delta --side-by-side ngc-old.xxd ngc-new.xxd
delta --side-by-side facter-old.xxd facter-new.xxd
this is qpqn0b91qbh879snbqpr04z8rdi7zkq8-nixos-facter-0.3.0.
diffs show CRLF in the old output
We can remove the comment. It was caused by charm bracelet's logging library: https://github.com/numtide/nixos-facter/pull/119
It wrote some ansi escape at the very beginning, which broke the json output. However this was removed.
Are you running on macOS on the host by chance?
But I think you are right, we shouldn't use -t for these commands. It's only needed for the installation because we need support for sudo and other password prompts.
Could you try to remove it and see if this fixes the issue?
Are you running on macOS on the host by chance?
my client is NixOS-WSL. don't know what the reporter in #438 is running.
created a new runSshNoTty function that omits the tty request.
tested outputs of both nixos-facter and nixos-generate-config on a hetzner vm - no CRLF in this patched version.
@Enzime this is also what we saw the other day.
@mergify queue
|
2025-04-01T04:34:54.843273
| 2024-08-09T15:15:44
|
2458150533
|
{
"authors": [
"GaetanLepage",
"MattSturgeon",
"khaneliman",
"nixos-discourse"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9086",
"repo": "nix-community/nixvim",
"url": "https://github.com/nix-community/nixvim/pull/1999"
}
|
gharchive/pull-request
|
plugins/yazi: init
Adding module for https://github.com/mikavilpas/yazi.nvim
NOTE: It looks like it has a dependency on telescope in the built-in functions for searching. Should that be a warning, or should telescope be added as an extra plugin automatically? To make this more complex, technically a person can set up the integrations property to change the grep behavior so it doesn't rely on it.
The developer doesn't list telescope as a dependency. In that sense, I don't think that we should do anything.
Can the plugin be operated normally when simply enabling plugins.yazi.enable ?
The developer doesn't list telescope as a dependency. In that sense, I don't think that we should do anything. Can the plugin be operated normally when simply enabling plugins.yazi.enable ?
Yeah, it's fine to use most of the functionality. Here's what I'm talking about.
Added a description to make note about it.
The warning would basically have to be checking if the user hasn't overridden the settings.integrations.grep_in_directory then check if telescope is enabled or something and even that doesn't feel foolproof.
TBH this seems like something that upstream plugin should be dealing with. If the user configures the plugin with a mapping that needs telescope (even if it's a default mapping) the plugin should really warn that telescope is missing.
Perhaps we should report an issue upstream?
This pull request has been mentioned on NixOS Discourse. There might be relevant details there:
https://discourse.nixos.org/t/satisfaction-survey-from-the-new-rfc-166-formatting/49758/22
The warning would basically have to be checking if the user hasn't overridden the settings.integrations.grep_in_directory then check if telescope is enabled or something and even that doesn't feel foolproof.
TBH this seems like something that upstream plugin should be dealing with. If the user configures the plugin with a mapping that needs telescope (even if it's a default mapping) the plugin should really warn that telescope is missing.
Perhaps we should report an issue upstream?
Looking at the documentation in their README, it is noted there that it uses telescope when it's available, otherwise the binding does nothing.
Looking at the documentation in their README, it is noted there that it uses telescope when it's available, otherwise the binding does nothing. mikavilpas/yazi.nvim#%EF%B8%8F-keybindings. I think we should be fine with the description text, a link to their documentation, and their documentation calling it out, too.
Well spotted. Following that link, it seems <c-g> has a similar dependency on grug-far.
If both of these fail somewhat gracefully, as the documentation would imply, and on the basis that you've been using the plugin a while without even noticing the optional deps, I think we can reword or remove our note in the description?
Looking at the documentation on their README, it is noted in their that it uses telescope when it's available otherwise the binding does nothing. mikavilpas/yazi.nvim#%EF%B8%8F-keybindings. I think we should be fine with the description text, link to their documentation, and their documentation calling it out, too.
Well spotted. Following that link, it seems <c-g> has a similar dependency on grug-far.
If both of these fail somewhat gracefully, as the documentation would imply, and on the basis that you've been using the plugin a while without even noticing the optional deps, I think we can reword or remove our note in the description?
Updated description to be more clear about the external dependencies. Let me know if it's clear enough.
@mergifyio queue
|